WorldWideScience

Sample records for schwinger variational method

  1. Electromagnetic Radiation: Variational Methods, Waveguides and Accelerators Including seminal papers of Julian Schwinger

    CERN Document Server

    Milton, Kimball A

    2006-01-01

    This is a graduate level textbook on the theory of electromagnetic radiation and its application to waveguides, transmission lines, accelerator physics and synchrotron radiation. It has grown out of lectures and manuscripts by Julian Schwinger prepared during the war at MIT's Radiation Laboratory, updated with material developed by Schwinger at UCLA in the 1970s and 1980s, and by Milton at the University of Oklahoma since 1994. The book includes a great number of straightforward and challenging exercises and problems. It is addressed to students in physics, electrical engineering, and applied mathematics seeking a thorough introduction to electromagnetism with emphasis on radiation theory and its applications.

  2. Quantum statistical field theory an introduction to Schwinger's variational method with Green's function nanoapplications, graphene and superconductivity

    CERN Document Server

    Morgenstern Horing, Norman J

    2017-01-01

    This book provides an introduction to the methods of coupled quantum statistical field theory and Green's functions. The methods of coupled quantum field theory have played a major role in the extensive development of nonrelativistic quantum many-particle theory and condensed matter physics. This introduction to the subject is intended to facilitate delivery of the material in an easily digestible form to advanced undergraduate physics majors at a relatively early stage of their scientific development. The main mechanism to accomplish this is the early introduction of variational calculus and the Schwinger Action Principle, accompanied by Green's functions. Important achievements of the theory in condensed matter and quantum statistical physics are reviewed in detail to help develop research capability. These include the derivation of coupled field Green's function equations-of-motion for a model electron-hole-phonon system, extensive discussions of retarded, thermodynamic and nonequilibrium Green's functions...

  3. Schwinger variational calculation of ionization of hydrogen atoms for ...

    Indian Academy of Sciences (India)

    Schwinger variational calculation of ionization of hydrogen atoms for large momentum transfers. K CHAKRABARTI. Department of Mathematics, Scottish Church College, 1 & 3 Urquhart Square, Kolkata 700 006, India. MS received 7 July 2001; revised 10 October 2001. Abstract. The Schwinger variational principle is used here ...

  4. The sources of Schwinger's Green's functions.

    Science.gov (United States)

    Schweber, Silvan S

    2005-05-31

    Julian Schwinger's development of his Green's functions methods in quantum field theory is placed in historical context. The relation of Schwinger's quantum action principle to Richard Feynman's path-integral formulation of quantum mechanics is reviewed. The nonperturbative character of Schwinger's approach is stressed as well as the ease with which it can be extended to finite temperature situations.

  5. First-principles calculation method for electron transport based on the grid Lippmann-Schwinger equation

    Science.gov (United States)

    Egami, Yoshiyuki; Iwase, Shigeru; Tsukamoto, Shigeru; Ono, Tomoya; Hirose, Kikuji

    2015-09-01

    We develop a first-principles electron-transport simulator based on the Lippmann-Schwinger (LS) equation within the framework of the real-space finite-difference scheme. In our fully real-space-based LS (grid LS) method, the ratio expression technique for the scattering wave functions and the Green's function elements of the reference system is employed to avoid numerical collapse. Furthermore, we present analytical expressions and/or prominent calculation procedures for the retarded Green's function, which are utilized in the grid LS approach. In order to demonstrate the performance of the grid LS method, we simulate the electron-transport properties of the semiconductor-oxide interfaces sandwiched between semi-infinite jellium electrodes. The results confirm that the leakage current through the (001)Si-SiO2 model becomes much larger when the dangling-bond state is induced by a defect in the oxygen layer, while that through the (001)Ge-GeO2 model is insensitive to the dangling-bond state.

  6. Annihilation probability density and other applications of the Schwinger multichannel method to the positron and electron scattering; Densidade de probabilidade de aniquilacao e outras aplicacoes do metodo multicanal de Schwinger ao espalhamento de positrons e eletrons

    Energy Technology Data Exchange (ETDEWEB)

    Varella, Marcio Teixeira do Nascimento

    2001-12-15

    We have calculated annihilation probability densities (APD) for positron collisions against the He atom and the H2 molecule. It was found that direct annihilation prevails at low energies, while annihilation following virtual positronium (Ps) formation is the dominant mechanism at higher energies. In room-temperature collisions (10^-2 eV) the APD spread over a considerable extension, being quite similar to the electronic densities of the targets. The capture of the positron in an electronic Feshbach resonance strongly enhanced the annihilation rate in e+-H2 collisions. We also discuss strategies to improve the calculation of the annihilation parameter (Z_eff), after debugging the computational codes of the Schwinger Multichannel Method (SMC). Finally, we consider the inclusion of the Ps formation channel in the SMC and show that effective configurations (pseudo eigenstates of the collision Hamiltonian) are able to significantly reduce the computational effort in positron scattering calculations. Cross sections for electron scattering by polyatomic molecules were obtained in three different approximations: static-exchange (SE); static-exchange-plus-polarization (SEP); and multichannel coupling. The calculations for polar targets were improved through the rotational resolution of scattering amplitudes in which the SMC was combined with the first Born approximation (FBA). In general, elastic cross sections (SE and SEP approximations) showed good agreement with available experimental data for several targets. Multichannel calculations for e−-H2O scattering, on the other hand, presented spurious structures at the electronic excitation thresholds. (author)

  7. The inverse problem for Schwinger pair production

    Directory of Open Access Journals (Sweden)

    F. Hebenstreit

    2016-02-01

    The production of electron–positron pairs in time-dependent electric fields (Schwinger mechanism) depends non-linearly on the applied field profile. Accordingly, the resulting momentum spectrum is extremely sensitive to small variations of the field parameters. Owing to this non-linear dependence it is so far unpredictable how to choose a field configuration such that a predetermined momentum distribution is generated. We show that quantum kinetic theory along with optimal control theory can be used to approximately solve this inverse problem for Schwinger pair production. We exemplify this by studying the superposition of a small number of harmonic components resulting in predetermined signatures in the asymptotic momentum spectrum. In the long run, our results could facilitate the observation of this yet unobserved pair production mechanism in quantum electrodynamics by providing suggestions for tailored field configurations.

  8. Schwinger's Approach to Einstein's Gravity

    Science.gov (United States)

    Milton, Kim

    2012-05-01

    Albert Einstein was one of Julian Schwinger's heroes, and Schwinger was greatly honored when he received the first Einstein Prize (together with Kurt Godel) for his work on quantum electrodynamics. Schwinger contributed greatly to the development of a quantum version of gravitational theory, and his work led directly to the important work of (his students) Arnowitt, Deser, and DeWitt on the subject. Later in the 1960's and 1970's Schwinger developed a new formulation of quantum field theory, which he dubbed Source Theory, in an attempt to get closer contact to phenomena. In this formulation, he revisited gravity, and in books and papers showed how Einstein's theory of General Relativity emerged naturally from one physical assumption: that the carrier of the gravitational force is a massless, helicity-2 particle, the graviton. (There has been a minor dispute whether gravitational theory can be considered as the massless limit of a massive spin-2 theory; Schwinger believed that was the case, while Van Dam and Veltman concluded the opposite.) In the process, he showed how all of the tests of General Relativity could be explained simply, without using the full machinery of the theory and without the extraneous concept of curved space, including such effects as geodetic precession and the Lense-Thirring effect. (These effects have now been verified by the Gravity Probe B experiment.) This did not mean that he did not accept Einstein's equations, and in his book and full article on the subject, he showed how those emerge essentially uniquely from the assumption of the graviton. So to speak of Schwinger versus Einstein is misleading, although it is true that Schwinger saw no necessity to talk of curved spacetime. In this talk I will lay out Schwinger's approach, and the connection to Einstein's theory.

  9. Conformable variational iteration method

    Directory of Open Access Journals (Sweden)

    Omer Acan

    2017-02-01

    In this study, we introduce the conformable variational iteration method based on the newly defined fractional derivative called the conformable fractional derivative. This new method is applied to two fractional-order ordinary differential equations. To test the method, linear homogeneous and nonlinear nonhomogeneous fractional ordinary differential equations are selected. The results obtained are compared with the exact solutions, and their graphs are plotted to demonstrate the efficiency and accuracy of the method.
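
    For context (this definition is standard background, not quoted from the record), the conformable fractional derivative of order α ∈ (0, 1] on which the method rests is

      T_\alpha f(t) \;=\; \lim_{\epsilon \to 0} \frac{f\!\left(t + \epsilon\, t^{\,1-\alpha}\right) - f(t)}{\epsilon}, \qquad t > 0,

    which reduces to the ordinary derivative for α = 1 and obeys the standard product and quotient rules, which is what makes the variational iteration correction functional tractable in this setting.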

  10. Splines and variational methods

    CERN Document Server

    Prenter, P M

    2008-01-01

    One of the clearest available introductions to variational methods, this text requires only a minimal background in calculus and linear algebra. Its self-contained treatment explains the application of theoretic notions to the kinds of physical problems that engineers regularly encounter. The text's first half concerns approximation theoretic notions, exploring the theory and computation of one- and two-dimensional polynomial and other spline functions. Later chapters examine variational methods in the solution of operator equations, focusing on boundary value problems in one and two dimensions.

  11. Are Crab nanoshots Schwinger sparks?

    Energy Technology Data Exchange (ETDEWEB)

    Stebbins, Albert [Fermi National Accelerator Lab. (FNAL), Batavia, IL (United States); Yoo, Hojin [Univ. of Wisconsin, Madison, WI (United States); Fermi National Accelerator Lab. (FNAL), Batavia, IL (United States); Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Univ. of California, Berkeley, CA (United States)

    2015-05-21

    The highest brightness temperatures ever observed are from "nanoshots" from the Crab pulsar, which we argue could be the signature of bursts of vacuum e± pair production. If so, this would be the first time the astronomical Schwinger effect has been observed. These "Schwinger sparks" would be an intermittent but extremely powerful, ~10^3 L_⊙, 10 PeV e± accelerator in the heart of the Crab. These nanosecond-duration sparks are generated in a volume less than 1 m^3, and the existence of such sparks has implications for the small-scale structure of the magnetic field of young pulsars such as the Crab. As a result, this mechanism may also play a role in producing other enigmatic bright short radio transients such as fast radio bursts.
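
    For orientation (textbook background, not part of the record), the Schwinger vacuum-decay rate per unit volume in a constant electric field E is, with ħ = c = 1 in the first expression,

      2\,\mathrm{Im}\,\mathcal{L} \;=\; \frac{(eE)^2}{4\pi^3} \sum_{n=1}^{\infty} \frac{1}{n^2} \exp\!\left(-\frac{n\pi m_e^2}{eE}\right),
      \qquad E_c \;=\; \frac{m_e^2 c^3}{e\hbar} \;\simeq\; 1.3 \times 10^{18}\ \mathrm{V/m}.

    The exponential suppression below the critical field E_c is why the effect has never been seen in the laboratory and why an astrophysical detection would be significant.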

  12. Schwinger-Keldysh diagrammatics for primordial perturbations

    Science.gov (United States)

    Chen, Xingang; Wang, Yi; Xianyu, Zhong-Zhi

    2017-12-01

    We present a systematic introduction to the diagrammatic method for practical calculations in inflationary cosmology, based on the Schwinger-Keldysh path integral formalism. We show in particular that the diagrammatic rules can be derived directly from a classical Lagrangian even in the presence of derivative couplings. Furthermore, we use a quasi-single-field inflation model as an example to show how this formalism, combined with the trick of the mixed propagator, can significantly simplify the calculation of some in-in correlation functions. The resulting bispectrum includes the lighter scalar case (m < 3H/2) that has not been explicitly computed for this model. The latter provides a concrete example of quantum primordial standard clocks, in which the clock signals can be observably large.
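
    As background (the standard in-in formalism, not taken from the record), the correlators computed with such diagrammatic rules are expectation values evaluated on the closed-time (Schwinger-Keldysh) contour,

      \langle Q(t) \rangle \;=\; \left\langle \left[\bar{T} \exp\!\left(i\!\int_{t_0}^{t}\! dt'\, H_I(t')\right)\right] Q(t) \left[T \exp\!\left(-i\!\int_{t_0}^{t}\! dt'\, H_I(t')\right)\right] \right\rangle,

    where T and T̄ denote time and anti-time ordering; the diagrammatic rules discussed in the record organize the expansion of the two exponentials.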

  13. Julian Schwinger — Personal Recollections

    Science.gov (United States)

    Martin, Paul C.

    We're gathered here today to salute Julian Schwinger, a towering figure of the golden age of physics — and a kind and gentle human being. Even at our best universities, people with Julian's talent and his passion for discovery and perfection are rare — so rare that neither they nor the rest of us know how to take best advantage of their genius. The failure to find a happier solution to this dilemma in recent years has concerned many of us. It should not becloud the fact that over their lifetimes, few physicists, if any, have surmounted this impedance mismatch more effectively than Julian, conveying not only knowledge but lofty values and aspirations directly and indirectly to thousands of physicists…

  14. SU(N) irreducible Schwinger bosons

    Science.gov (United States)

    Mathur, Manu; Raychowdhury, Indrakshi; Anishetty, Ramesh

    2010-09-01

    We construct SU(N) irreducible Schwinger bosons satisfying certain U(N-1) constraints which implement the symmetries of SU(N) Young tableaux. As a result, all SU(N) irreducible representations are simple monomials of (N-1) types of SU(N) irreducible Schwinger bosons. Further, we show that these representations are free of multiplicity problems. Thus, all SU(N) representations are made as simple as SU(2).
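
    For the simplest case (the standard SU(2) construction, included here only as background), the Schwinger-boson representation uses two oscillators a and b:

      J_+ = a^\dagger b, \qquad J_- = b^\dagger a, \qquad J_z = \tfrac{1}{2}\left(a^\dagger a - b^\dagger b\right),

    so the spin-j multiplet is spanned by the monomials (a^\dagger)^{j+m} (b^\dagger)^{j-m} |0\rangle; the record's construction generalizes this monomial structure to SU(N) with (N-1) types of irreducible Schwinger bosons.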

  15. Variational methods in molecular modeling

    CERN Document Server

    2017-01-01

    This book presents tutorial overviews for many applications of variational methods to molecular modeling. Topics discussed include the Gibbs-Bogoliubov-Feynman variational principle, square-gradient models, classical density functional theories, self-consistent-field theories, phase-field methods, Ginzburg-Landau and Helfrich-type phenomenological models, dynamical density functional theory, and variational Monte Carlo methods. Illustrative examples are given to facilitate understanding of the basic concepts and quantitative prediction of the properties and rich behavior of diverse many-body systems ranging from inhomogeneous fluids, electrolytes and ionic liquids in micropores, colloidal dispersions, liquid crystals, polymer blends, lipid membranes, microemulsions, magnetic materials and high-temperature superconductors. All chapters are written by leading experts in the field and illustrated with tutorial examples for their practical applications to specific subjects. With emphasis placed on physical unders...

  16. Schwinger mechanism in electromagnetic field in de Sitter spacetime

    Science.gov (United States)

    Bavarsad, Ehsan; Pyo Kim, Sang; Stahl, Clément; Xue, She-Sheng

    2018-01-01

    We investigate Schwinger scalar pair production in a constant electromagnetic field in de Sitter (dS) spacetime. We obtain the pair production rate, which agrees with the Hawking radiation in the limit of zero electric field in dS. The result describes how a cosmic magnetic field affects the pair production rate. In addition, using a numerical method we study the effect of the magnetic field on the induced current. We find that in the strong electromagnetic field the current has a linear response to the electric and magnetic fields, while in the infrared regime it is inversely proportional to the electric field, leading to infrared hyperconductivity.

  17. Resurgent transseries & Dyson–Schwinger equations

    Energy Technology Data Exchange (ETDEWEB)

    Klaczynski, Lutz, E-mail: klacz@mathematik.hu-berlin.de

    2016-09-15

    We employ resurgent transseries as algebraic tools to investigate two self-consistent Dyson–Schwinger equations, one in Yukawa theory and one in quantum electrodynamics. After a brief but pedagogical review, we derive fixed point equations for the associated anomalous dimensions and insert a moderately generic log-free transseries ansatz to study the possible strictures imposed. While proceeding in various stages, we develop an algebraic method to keep track of the transseries’ coefficients. We explore what conditions must be violated in order to stay clear of fixed point theorems to eschew a unique solution, if so desired, as we explain. An interesting finding is that the flow of data between the different sectors of the transseries shows a pattern typical of resurgence, i.e. the phenomenon that the perturbative sector of the transseries talks to the nonperturbative ones in a one-way fashion. However, our ansatz is not exotic enough as it leads to trivial solutions with vanishing nonperturbative sectors, even when logarithmic monomials are included. We see our result as a harbinger of what future work might reveal about the transseries representations of observables in fully renormalised four-dimensional quantum field theories and adduce a tentative yet to our mind weighty argument as to why one should not expect otherwise. This paper is considerably self-contained. Readers with little prior knowledge are let in on the basic reasons why perturbative series in quantum field theory eventually require an upgrade to transseries. Furthermore, in order to acquaint the reader with the language utilised extensively in this work, we also provide a concise mathematical introduction to grid-based transseries.

  18. Resurgent transseries & Dyson-Schwinger equations

    Science.gov (United States)

    Klaczynski, Lutz

    2016-09-01

    We employ resurgent transseries as algebraic tools to investigate two self-consistent Dyson-Schwinger equations, one in Yukawa theory and one in quantum electrodynamics. After a brief but pedagogical review, we derive fixed point equations for the associated anomalous dimensions and insert a moderately generic log-free transseries ansatz to study the possible strictures imposed. While proceeding in various stages, we develop an algebraic method to keep track of the transseries' coefficients. We explore what conditions must be violated in order to stay clear of fixed point theorems to eschew a unique solution, if so desired, as we explain. An interesting finding is that the flow of data between the different sectors of the transseries shows a pattern typical of resurgence, i.e. the phenomenon that the perturbative sector of the transseries talks to the nonperturbative ones in a one-way fashion. However, our ansatz is not exotic enough as it leads to trivial solutions with vanishing nonperturbative sectors, even when logarithmic monomials are included. We see our result as a harbinger of what future work might reveal about the transseries representations of observables in fully renormalised four-dimensional quantum field theories and adduce a tentative yet to our mind weighty argument as to why one should not expect otherwise. This paper is considerably self-contained. Readers with little prior knowledge are let in on the basic reasons why perturbative series in quantum field theory eventually require an upgrade to transseries. Furthermore, in order to acquaint the reader with the language utilised extensively in this work, we also provide a concise mathematical introduction to grid-based transseries.

  19. Massive Schwinger model at finite θ

    Science.gov (United States)

    Azcoiti, Vicente; Follana, Eduardo; Royo-Amondarain, Eduardo; Di Carlo, Giuseppe; Vaquero Avilés-Casco, Alejandro

    2018-01-01

    Using the approach developed by V. Azcoiti et al. [Phys. Lett. B 563, 117 (2003), 10.1016/S0370-2693(03)00601-4], we are able to reconstruct the behavior of the massive one-flavor Schwinger model with a θ term and a quantized topological charge. We calculate the full dependence of the order parameter on θ. Our results at θ = π are compatible with Coleman's conjecture on the phase diagram of this model.

  20. Gravity Before Einstein and Schwinger Before Gravity

    Science.gov (United States)

    Trimble, Virginia L.

    2012-05-01

    Julian Schwinger was a child prodigy, and Albert Einstein distinctly not; Schwinger had something like 73 graduate students, and Einstein very few. But both thought gravity was important. They were not, of course, the first, nor is the disagreement on how one should think about gravity that is being highlighted here the first such dispute. The talk will explore, first, several of the earlier dichotomies: was gravity capable of action at a distance (Newton), or was a transmitting ether required (many others)? Did it act on everything or only on solids (an odd idea of the Herschels that fed into their ideas of solar structure and sunspots)? Did gravitational information require time for its transmission? Is the exponent of r precisely 2, or 2 plus a smidgeon (a suggestion by Simon Newcomb among others)? And so forth. Second, I will try to say something about Schwinger's lesser known early work and how it might have prefigured his "source theory," beginning with "On the Interaction of Several Electrons" (the unpublished 1934 "zeroth paper"), whose title somewhat reminds one of "On the Dynamics of an Asteroid," through his days at Berkeley with Oppenheimer, Gerjuoy, and others, to his application of ideas from nuclear physics to radar and of radar engineering techniques to problems in nuclear physics. And folks who think good jobs are difficult to come by now might want to contemplate the couple of years Schwinger spent teaching elementary physics at Purdue before moving on to the MIT Rad Lab for war work.

  1. Holographic Schwinger effect with a moving D3-brane

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Zi-qiang; Chen, Gang [China University of Geosciences (Wuhan), School of Mathematics and Physics, Wuhan (China); Hou, De-fu [Central China Normal University, Institute of Particle Physics and Key Laboratory of Quark and Lepton Physics (MOS), Wuhan (China)

    2017-03-15

    We study the Schwinger effect with a moving D3-brane in an N = 4 SYM plasma with the aid of the AdS/CFT correspondence. We discuss the test particle pair moving transverse and parallel to the plasma wind, respectively. It is found that for both cases the presence of velocity tends to increase the Schwinger effect. In addition, the velocity has a stronger influence on the Schwinger effect when the pair moves transverse to the plasma wind rather than parallel. (orig.)

  2. Variational methods for field theories

    Energy Technology Data Exchange (ETDEWEB)

    Ben-Menahem, S.

    1986-09-01

    Four field theory models are studied: Periodic Quantum Electrodynamics (PQED) in (2 + 1) dimensions, free scalar field theory in (1 + 1) dimensions, the Quantum XY model in (1 + 1) dimensions, and the (1 + 1) dimensional Ising model in a transverse magnetic field. The last three parts deal exclusively with variational methods; the PQED part involves mainly the path-integral approach. The PQED calculation results in a better understanding of the connection between electric confinement through monopole screening, and confinement through tunneling between degenerate vacua. This includes a better quantitative agreement for the string tensions in the two approaches. Free field theory is used as a laboratory for a new variational blocking-truncation approximation, in which the high-frequency modes in a block are truncated to wave functions that depend on the slower background modes (Born-Oppenheimer approximation). This "adiabatic truncation" method gives very accurate results for ground-state energy density and correlation functions. Various adiabatic schemes, with one variable kept per site and then two variables per site, are used. For the XY model, several trial wave functions for the ground state are explored, with an emphasis on the periodic Gaussian. A connection is established with the vortex Coulomb gas of the Euclidean path integral approach. The approximations used are taken from the realms of statistical mechanics (mean field approximation, transfer-matrix methods) and of quantum mechanics (iterative blocking schemes). In developing blocking schemes based on continuous variables, problems due to the periodicity of the model were solved. Our results exhibit an order-disorder phase transition. The transfer-matrix method is used to find a good (non-blocking) trial ground state for the Ising model in a transverse magnetic field in (1 + 1) dimensions.

  3. Canonical field anticommutators in the extended gauged Rarita-Schwinger theory

    Science.gov (United States)

    Adler, Stephen L.; Henneaux, Marc; Pais, Pablo

    2017-10-01

    We reexamine canonical quantization of the gauged Rarita-Schwinger theory using the extended theory, incorporating a dimension 1/2 auxiliary spin-1/2 field Λ , in which there is an exact off-shell gauge invariance. In Λ =0 gauge, which reduces to the original unextended theory, our results agree with those found by Johnson and Sudarshan, and later verified by Velo and Zwanziger, which give a canonical Rarita-Schwinger field Dirac bracket that is singular for small gauge fields. In gauge covariant radiation gauge, the Dirac bracket of the Rarita-Schwinger fields is nonsingular, but does not correspond to a positive semidefinite anticommutator, and the Dirac bracket of the auxiliary fields has a singularity of the same form as found in the unextended theory. These results indicate that gauged Rarita-Schwinger theory is somewhat pathological, and cannot be canonically quantized within a conventional positive semidefinite metric Hilbert space. We leave open the questions of whether consistent quantizations can be achieved by using an indefinite metric Hilbert space, by path integral methods, or by appropriate couplings to conventional dimension 3/2 spin-1/2 fields.

  4. The Feynman-Schwinger representation in QCD

    Energy Technology Data Exchange (ETDEWEB)

    Yu. A. Simonov; J.A. Tjon

    2002-05-01

    The proper time path integral representation is derived explicitly for Green's functions in QCD. After an introductory analysis of perturbative properties, the total gluonic field is separated in a rigorous way into a nonperturbative background and valence gluon part. For nonperturbative contributions the background perturbation theory is used systematically, yielding two types of expansions, illustrated by direct physical applications. As an application, we discuss the collinear singularities in the Feynman-Schwinger representation formalism. Moreover, the generalization to nonzero temperature is made and expressions for partition functions in perturbation theory and nonperturbative background are explicitly written down.

  5. Dynamically assisted Sauter-Schwinger effect in inhomogeneous electric fields

    Energy Technology Data Exchange (ETDEWEB)

    Schneider, Christian; Schützhold, Ralf [Fakultät für Physik, Universität Duisburg-Essen,Lotharstrasse 1, 47057 Duisburg (Germany)

    2016-02-24

    Via the world-line instanton method, we study electron-positron pair creation by a strong (but sub-critical) electric field of the profile E/cosh^2(kx) superimposed by a weaker pulse E′/cosh^2(ωt). If the temporal Keldysh parameter γ_ω = mω/(qE) exceeds a threshold value γ_ω^crit which depends on the spatial Keldysh parameter γ_k = mk/(qE), we find a drastic enhancement of the pair creation probability — reporting on what we believe to be the first analytic non-perturbative result for the interplay between temporal and spatial field dependences E(t,x) in the Sauter-Schwinger effect. Finally, we speculate whether an analogous effect (drastic enhancement of tunneling probability) could occur in other scenarios such as stimulated nuclear decay, for example.

  6. Multifaceted Schwinger effect in de Sitter space

    Science.gov (United States)

    Sharma, Ramkishor; Singh, Suprit

    2017-07-01

    We investigate particle production à la the Schwinger mechanism in an expanding, flat de Sitter patch as is relevant for the inflationary epoch of our Universe. Defining states and particle content in curved spacetime is certainly not a unique process. There being different prescriptions on how that can be done, we have used the Schrödinger formalism to define instantaneous particle content of the state, etc. This allows us to go past the adiabatic regime to which the effect has been restricted in the previous studies and bring out its multifaceted nature in different settings. Each of these settings gives rise to contrasting features and behavior as per the effect of the electric field and expansion rate on the instantaneous mean particle number. We also quantify the degree of classicality of the process during its evolution using a "classicality parameter" constructed out of parameters of the Wigner function to obtain information about the quantum to classical transition in this case.

  7. Comparing Erlang Distribution and Schwinger Mechanism on Transverse Momentum Spectra in High Energy Collisions

    Directory of Open Access Journals (Sweden)

    Li-Na Gao

    2016-01-01

    We study the transverse momentum spectra of J/ψ and Υ mesons by using two methods: the two-component Erlang distribution and the two-component Schwinger mechanism. The results obtained by the two methods are compared and found to be in agreement with the experimental data of proton-proton (pp), proton-lead (p-Pb), and lead-lead (Pb-Pb) collisions measured by the LHCb and ALICE Collaborations at the Large Hadron Collider (LHC). The related parameters, such as the mean transverse momentum contributed by each parton in the first (second) component of the two-component Erlang distribution and the string tension between two partons in the first (second) component of the two-component Schwinger mechanism, are extracted.
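
    For reference, a schematic single-component form of the Schwinger-mechanism transverse-momentum distribution used in such fits is (the paper's actual two-component parametrization is a weighted sum of such terms; κ denotes the string tension):

      \frac{dN}{d p_T^2} \;\propto\; \exp\!\left(-\frac{\pi\, p_T^2}{\kappa}\right),

    so that, in this picture, the string tension extracted in the record sets the Gaussian width of the spectrum.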

  8. Quark Propagator with electroweak interactions in the Dyson-Schwinger approach

    Directory of Open Access Journals (Sweden)

    Mian Walid Ahmed

    2017-01-01

    To assess the impact, we study the influence of especially parity violation on the propagator for various masses. For this purpose the functional methods in the form of Dyson-Schwinger equations are employed. We find that explicit isospin breaking leads to a qualitative change of behavior even for a slight explicit breaking, which is in contrast to the expectations from perturbation theory. Our results thus suggest that non-perturbative backcoupling effects could be larger than expected.

  9. CrasyDSE: A framework for solving Dyson-Schwinger equations

    Science.gov (United States)

    Huber, Markus Q.; Mitter, Mario

    2012-11-01

    Dyson-Schwinger equations are important tools for non-perturbative analyses of quantum field theories. For example, they are very useful for investigations in quantum chromodynamics and related theories. However, sometimes progress is impeded by the complexity of the equations. Thus automating parts of the calculations will certainly be helpful in future investigations. In this article we present a framework for such an automation based on a C++ code that can deal with a large number of Green functions. Since also the creation of the expressions for the integrals of the Dyson-Schwinger equations needs to be automated, we defer this task to a Mathematica notebook. We illustrate the complete workflow with an example from Yang-Mills theory coupled to a fundamental scalar field that has been investigated recently. As a second example we calculate the propagators of pure Yang-Mills theory. Our code can serve as a basis for many further investigations where the equations are too complicated to tackle by hand. It also can easily be combined with DoFun, a program for the derivation of Dyson-Schwinger equations. Programming language: Mathematica 8 and higher, C++. Computer: All on which Mathematica and C++ are available. Operating system: All on which Mathematica and C++ are available (Windows, Unix, Mac OS). Classification: 11.1, 11.4, 11.5, 11.6. Nature of problem: Solve (large) systems of Dyson-Schwinger equations numerically. Solution method: Create C++ functions in Mathematica to be used for the numeric code in C++. This code uses structures to handle large numbers of Green functions. Unusual features: Provides a tool to convert Mathematica expressions into C++ expressions including conversion of function names. Running time: Depending on the complexity of the investigated system solving the equations numerically can take seconds on a desktop PC to hours on a cluster.

  10. Color-superconductivity from a Dyson-Schwinger perspective

    Energy Technology Data Exchange (ETDEWEB)

    Nickel, M.D.J.

    2007-12-20

    Color-superconducting phases of quantum chromodynamics at vanishing temperatures and high densities are investigated. The central object is the one-particle Green's function of the fermions, the so-called quark propagator. It is determined by its equation of motion, the Dyson-Schwinger equation. To handle Dyson-Schwinger equations, a truncation scheme successfully applied in the vacuum is extended to finite densities and gradually improved. It is thereby guaranteed that analytical results at asymptotically large densities are reproduced. This way an approach that is capable of describing known results in the vacuum as well as at high densities is applied to densities of astrophysical relevance for the first time. In the first part of the thesis the framework of the investigations with focus on the extension to finite densities is outlined. Physical observables are introduced which can be extracted from the propagator. In the following a minimal truncation scheme is presented. To point out the complexity of our approach in comparison to phenomenological models of quantum chromodynamics the chirally unbroken phase is discussed first. Subsequently color-superconducting phases for massless quarks are investigated. Furthermore the role of finite quark masses and neutrality constraints at moderate densities is studied. In contrast to phenomenological models the so-called CFL phase is found to be the ground state for all relevant densities. In the following part the applicability of the maximum entropy method for the extraction of spectral functions from numerical results in Euclidean space-time is demonstrated. As an example the spectral functions of quarks in the chirally unbroken and color-superconducting phases are determined. Hereby the results of our approach are presented in a new light. For instance the finite width of the quasiparticles in the color-superconducting phase becomes apparent. In the final chapter of this work extensions of our truncation scheme in

  11. Chiral condensate in the Schwinger model with matrix product operators

    Energy Technology Data Exchange (ETDEWEB)

    Banuls, Mari Carmen [Max-Planck-Institut fuer Quantenoptik (MPQ), Garching (Germany); Cichy, Krzysztof [Frankfurt Univ. (Germany). Inst. fuer Theoretische Physik; Poznan Univ. (Poland). Faculty of Physics; Jansen, Karl [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany). John von Neumann-Inst. fuer Computing NIC; Saito, Hana [Tsukuba Univ. (Japan). Center for Computational Sciences

    2016-03-15

    Tensor network (TN) methods, in particular the Matrix Product States (MPS) ansatz, have proven to be a useful tool in analyzing the properties of lattice gauge theories. They allow for very good precision, much better than standard Monte Carlo (MC) techniques for the models that have been studied so far, due to the possibility of reaching much smaller lattice spacings. The real reason for the interest in the TN approach, however, is its ability, shown so far in several condensed matter models, to deal with theories which exhibit the notorious sign problem in MC simulations. This makes it a promising approach for dealing with the non-zero chemical potential in QCD and other lattice gauge theories, as well as with real-time simulations. In this paper, using matrix product operators, we extend our analysis of the Schwinger model at zero temperature to show the feasibility of this approach also at finite temperature. This is an important step on the way to deal with the sign problem of QCD. We analyze in detail the chiral symmetry breaking in the massless and massive cases and show that the method works very well and gives good control over a broad range of temperatures, essentially from zero to infinite temperature.

  12. Schwinger effect for non-Abelian gauge bosons

    Science.gov (United States)

    Ragsdale, Michael; Singleton, Douglas

    2017-08-01

    We investigate the Schwinger effect for the gauge bosons in an unbroken non-Abelian gauge theory (e.g. the gluons of QCD). We consider both constant “color electric” fields and “color magnetic” fields as backgrounds. As in the Abelian Schwinger effect we find there is production of “gluons” for the color electric field, but no particle production for the color magnetic field case. Since the non-Abelian gauge bosons are massless there is no exponential suppression of particle production due to the mass of the electron/positron that one finds in the Abelian Schwinger effect. Despite the lack of an exponential suppression of the gluon production rate due to the masslessness of the gluons, we find that the critical field strength is even larger in the non-Abelian case as compared to the Abelian case. This is the result of the confinement phenomenon in QCD.

  13. Julian Schwinger the physicist, the teacher, and the man

    CERN Document Server

    1996-01-01

    In the post-quantum-mechanics era, few physicists, if any, have matched Julian Schwinger in contributions to and influence on the development of physics. A deep and provocative thinker, Schwinger left his indelible mark on all areas of theoretical physics; an eloquent lecturer and immensely successful mentor, he was gentle, intensely private, and known for being "modest about everything except his physics". This book is a collection of talks in memory of him by some of his contemporaries and his former students: A Klein, F Dyson, B DeWitt, W Kohn, D Saxon, P C Martin, K Johnson, S Deser, R Fin

  14. Short-distance Schwinger-mechanism and chiral symmetry

    DEFF Research Database (Denmark)

    McGady, David A.; Brogård, Jon

    2017-01-01

    In this paper, we study Schwinger pair production of charged massless particles in constant electric fields of finite extent. Exploiting a map from the Dirac and Klein-Gordon equations to the harmonic oscillator, we find exact pair production rates for massless fermions and scalars. Pair production...

  15. Proximal extrapolated gradient methods for variational inequalities.

    Science.gov (United States)

    Malitsky, Yu

    2018-01-01

    The paper is concerned with novel first-order methods for monotone variational inequalities. They use a very simple linesearch procedure that takes into account local information about the operator. Also, the methods do not require Lipschitz continuity of the operator, and the linesearch procedure uses only values of the operator. Moreover, when the operator is affine our linesearch becomes very simple, namely, it needs only simple vector-vector operations. For all our methods, we establish the ergodic convergence rate. In addition, we modify one of the proposed methods for the case of composite minimization. Preliminary results from numerical experiments are quite promising.
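
    As a point of reference, the following is a minimal sketch of the classical extragradient iteration for a monotone variational inequality over a box (a fixed-step baseline, not the adaptive linesearch method of the record; the operator F and the box bounds are illustrative assumptions):

      import numpy as np

      def extragradient(F, x0, lo, hi, step=0.1, iters=500):
          """Classical (Korpelevich) extragradient method for the VI:
          find x* in C with <F(x*), x - x*> >= 0 for all x in C,
          where C = [lo, hi]^n and projection is a simple clip."""
          x = np.asarray(x0, dtype=float).copy()
          for _ in range(iters):
              y = np.clip(x - step * F(x), lo, hi)   # prediction step
              x = np.clip(x - step * F(y), lo, hi)   # correction step
          return x

      # Toy affine monotone operator F(x) = M x + q (illustrative data)
      M = np.array([[2.0, 1.0], [-1.0, 2.0]])
      q = np.array([-1.0, -1.0])
      sol = extragradient(lambda x: M @ x + q, np.zeros(2), lo=0.0, hi=10.0)
      print(sol)

    The record's contribution is to replace the fixed step above with a linesearch that needs only operator values and no Lipschitz constant.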

  16. Pair production by three fields dynamically assisted Schwinger process

    Science.gov (United States)

    Sitiwaldi, Ibrahim; Xie, Bai-Song

    2018-02-01

    The dynamically assisted Schwinger mechanism for vacuum pair production is extended from two fields to three fields and examined. Numerical results for enhanced electron-positron pair production in the combination of three fields with different time scales are obtained using the quantum Vlasov equation. The significance of the combination of three fields in the regime of super-low field strength is verified. Although the strengths of each of the three fields are far below the critical field strength, we obtain a significant enhancement of the production rate and considerable yields in this combination, where the nonperturbative field is dynamically assisted by two oscillating fields. The dependence of the number density on the field parameters is also investigated. It is shown that the field threshold to detect the Schwinger effect can be lowered significantly if the configuration of three fields with different time scales is chosen carefully.

  17. Schwinger-Keldysh formalism. Part II: thermal equivariant cohomology

    Science.gov (United States)

    Haehl, Felix M.; Loganayagam, R.; Rangamani, Mukund

    2017-06-01

    Causally ordered correlation functions of local operators in near-thermal quantum systems computed using the Schwinger-Keldysh formalism obey a set of Ward identities. These can be understood rather simply as the consequence of a topological (BRST) algebra, called the universal Schwinger-Keldysh superalgebra, as explained in our companion paper [1]. In the present paper we provide a mathematical discussion of this topological algebra. In particular, we argue that the structures can be understood in the language of extended equivariant cohomology. To keep the discussion self-contained, we provide a basic review of the algebraic construction of equivariant cohomology and explain how it can be understood in familiar terms as a superspace gauge algebra. We demonstrate how the Schwinger-Keldysh construction can be succinctly encoded in terms of a thermal equivariant cohomology algebra which naturally acts on the operator (super)-algebra of the quantum system. The main rationale behind this exploration is to extract symmetry statements which are robust under renormalization group flow and can hence be used to understand the low-energy effective field theory of near-thermal physics. To illustrate the general principles, we focus on Langevin dynamics of a Brownian particle, rephrasing some known results in terms of thermal equivariant cohomology. As described elsewhere, the general framework enables construction of effective actions for dissipative hydrodynamics and could potentially illumine our understanding of black holes.

  18. From the Dyson-Schwinger to the transport equation in the background field gauge of QCD

    CERN Document Server

    Wang Qun; Stöcker, H; Greiner, W

    2003-01-01

    The non-equilibrium quantum field dynamics is usually described in the closed-time-path formalism. The initial state correlations are introduced into the generating functional by non-local source terms. We propose a functional approach to the Dyson-Schwinger equation, which treats the non-local and local source terms in the same way. In this approach, the generating functional is formulated for the connected Green functions and one-particle-irreducible vertices. The great advantages of our approach over the widely used two-particle-irreducible method are that it is much simpler and that it is easy to implement the procedure in a computer program to automatically generate the Feynman diagrams for a given process. The method is then applied to a pure gluon plasma to derive the gauge-covariant transport equation from the Dyson-Schwinger equation in the background-covariant gauge. We discuss the structure of the kinetic equation and show its relationship with the classical one. We derive the gauge-covariant colli...

  19. Inverse scattering theory: renormalization of the Lippmann-Schwinger equation for acoustic scattering in one dimension.

    Science.gov (United States)

    Kouri, Donald J; Vijay, Amrendra

    2003-04-01

    The most robust treatment of the inverse acoustic scattering problem is based on the reversion of the Born-Neumann series solution of the Lippmann-Schwinger equation. An important issue for this approach to inversion is the radius of convergence of the Born-Neumann series for Fredholm integral kernels, and especially for acoustic scattering for which the interaction depends on the square of the frequency. By contrast, it is well known that the Born-Neumann series for the Volterra integral equations in quantum scattering are absolutely convergent, independent of the strength of the coupling characterizing the interaction. The transformation of the Lippmann-Schwinger equation from a Fredholm to a Volterra structure by renormalization has been considered previously for quantum scattering calculations and electromagnetic scattering. In this paper, we employ the renormalization technique to obtain a Volterra equation framework for the inverse acoustic scattering series, proving that this series also converges absolutely in the entire complex plane of coupling constant and frequency values. The present results are for acoustic scattering in one dimension, but the method is general. The approach is illustrated by applications to two simple one-dimensional models for acoustic scattering.
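
    In schematic operator form (standard scattering-theory background, not quoted from the record), the Lippmann-Schwinger equation and its Born-Neumann series are

      \psi \;=\; \phi + G_0 V \psi
      \quad\Longrightarrow\quad
      \psi \;=\; \sum_{n=0}^{\infty} (G_0 V)^n \phi,

    where G_0 is the free Green's function. The Fredholm kernel G_0 V yields a convergent series only for weak enough coupling; the renormalization discussed in the record recasts the kernel in Volterra (triangular) form, whose Neumann series converges absolutely.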

  20. Rainfall variation by geostatistical interpolation method

    Directory of Open Access Journals (Sweden)

    Glauber Epifanio Loureiro

    2013-08-01

    This article analyses the variation of rainfall in the Tocantins-Araguaia hydrographic region in the last two decades, based upon the rain gauge stations of the ANA (Brazilian National Water Agency) HidroWeb database for the years 1983, 1993 and 2003. The information was systemized and treated with hydrologic methods such as contour mapping and ordinary kriging interpolation. The treatment considered the consistency of the data, the density of the spatial distribution of the stations and the periods of study. The results demonstrated that the total volume of water precipitated annually did not change significantly in the 20 years analyzed. However, a significant variation occurred in its spatial distribution. By analyzing the isohyets it was shown that there is a displacement of the precipitation at Tocantins Baixo (TOB) of approximately 10% of the total precipitated volume. This displacement can be caused by global change, by anthropogenic activities or by regional natural phenomena. However, this paper does not explore possible causes of the displacement.
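
    As an illustration of the interpolation step (a minimal ordinary-kriging sketch with a made-up variogram and made-up rainfall values; the study itself relied on standard geostatistical tooling):

      import numpy as np

      def ordinary_kriging(xy, z, x0, gamma):
          """Ordinary-kriging estimate at x0 from samples (xy, z), given a
          semivariogram function gamma(h). Solves the standard OK system
          with a Lagrange multiplier enforcing weights that sum to one."""
          n = len(z)
          h = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
          A = np.ones((n + 1, n + 1))
          A[:n, :n] = gamma(h)
          A[n, n] = 0.0
          b = np.ones(n + 1)
          b[:n] = gamma(np.linalg.norm(xy - x0, axis=-1))
          w = np.linalg.solve(A, b)[:n]
          return float(w @ z)

      # Hypothetical annual rainfall totals (mm) at four stations
      gamma = lambda h: 1.0 - np.exp(-h / 50.0)   # assumed exponential variogram
      pts = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0], [100.0, 100.0]])
      vals = np.array([1800.0, 1650.0, 2100.0, 1900.0])
      print(ordinary_kriging(pts, vals, np.array([50.0, 50.0]), gamma))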

  1. Rotating hybrid stars with the Dyson-Schwinger quark model

    Science.gov (United States)

    Wei, J.-B.; Chen, H.; Burgio, G. F.; Schulze, H.-J.

    2017-08-01

    We study rapidly rotating hybrid stars with the Dyson-Schwinger model for quark matter and the Brueckner-Hartree-Fock many-body theory with realistic two-body and three-body forces for nuclear matter. We determine the maximum gravitational mass, equatorial radius, and rotation frequency of stable stellar configurations by considering the constraints of the Keplerian limit and the secular axisymmetric instability, and compare with observational data. We also discuss the rotational evolution for constant baryonic mass and find a spin-up phenomenon for supramassive stars before they collapse to black holes.

  2. Pion transition form factor through Dyson-Schwinger equations

    Science.gov (United States)

    Raya, Khépani

    2016-10-01

    In the framework of Dyson-Schwinger equations (DSE), we compute the γ*γ→π^0 transition form factor, G(Q^2). For the first time, in a continuum approach to quantum chromodynamics (QCD), it was possible to compute G(Q^2) on the whole domain of space-like momenta. Our result agrees with data from the CELLO, CLEO and Belle collaborations and with the well-known asymptotic QCD limit, 2f_π. Our analysis unifies this prediction with that of the pion's valence-quark parton distribution amplitude (PDA) and elastic electromagnetic form factor.

  3. A Dyson-Schwinger approach to finite temperature QCD

    Energy Technology Data Exchange (ETDEWEB)

    Mueller, Jens Andreas

    2011-10-26

    The different phases of quantum chromodynamics at finite temperature are studied. To this end the nonperturbative quark propagator in Matsubara formalism is determined from its equation of motion, the Dyson-Schwinger equation. A novel truncation scheme is introduced including the nonperturbative, temperature dependent gluon propagator as extracted from lattice gauge theory. In the first part of the thesis a deconfinement order parameter, the dual condensate, and the critical temperature are determined from the dependence of the quark propagator on the temporal boundary conditions. The chiral transition is investigated by means of the quark condensate as order parameter. In addition differences in the chiral and deconfinement transition between gauge groups SU(2) and SU(3) are explored. In the following the quenched quark propagator is studied with respect to a possible spectral representation at finite temperature. In doing so, the quark propagator turns out to possess different analytic properties below and above the deconfinement transition. This result motivates the consideration of an alternative deconfinement order parameter signaling positivity violations of the spectral function. A criterion for positivity violations of the spectral function based on the curvature of the Schwinger function is derived. Using a variety of ansaetze for the spectral function, the possible quasi-particle spectrum is analyzed, in particular its quark mass and momentum dependence. The results motivate a more direct determination of the spectral function in the framework of Dyson-Schwinger equations. In the two subsequent chapters extensions of the truncation scheme are considered. The influence of dynamical quark degrees of freedom on the chiral and deconfinement transition is investigated. This serves as a first step towards a complete self-consistent consideration of dynamical quarks and the extension to finite chemical potential. The goodness of the truncation is verified first

  4. On Total Variation Method For Digital Image Enhancement ...

    African Journals Online (AJOL)

    In this paper, we define the necessity for recourse to the Total Variation method in digital image filtering and we establish the existence of a refined image in the space of Bounded Variation Functions BV(Ω), for a given “screen” Ω. Keywords: Image (signal), Vibration (Noise), Total Variation, Functions with Bounded Variation
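
    For concreteness (the classical Rudin-Osher-Fatemi formulation that this line of work builds on; f denotes the observed noisy image):

      \min_{u \in BV(\Omega)} \; \int_\Omega |\nabla u| \, dx \;+\; \frac{\lambda}{2} \int_\Omega \big(u - f\big)^2 \, dx,

    where the first term is the total variation of u, which penalizes oscillatory noise while still admitting sharp edges, and λ balances regularization against fidelity to f.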

  5. Thermal evolution of the one-flavour Schwinger model using matrix product states

    Energy Technology Data Exchange (ETDEWEB)

    Saito, H.; Jansen, K. [DESY Zeuthen (Germany). John von Neumann Institute for Computing; Banuls, M.C.; Cirac, J.I. [Max-Planck Institute of Quantum Optics, Garching (Germany); Cichy, K. [Frankfurt am Main Univ. (Germany). Inst. fuer Theoretische Physik; Poznan Univ. (Poland). Faculty of Physics

    2015-11-15

    The Schwinger model, or 1+1 dimensional QED, offers an interesting object of study, both at zero and non-zero temperature, because of its similarities to QCD. In this proceeding, we present a full calculation of the temperature-dependent chiral condensate of this model in the continuum limit using Matrix Product States (MPS). MPS methods, and tensor networks in general, constitute a very promising technique for the non-perturbative study of Hamiltonian quantum systems. In the last few years, they have shown their suitability as ansatzes for ground states and low-lying excitations of lattice gauge theories. We show the feasibility of the approach also for finite temperature, both in the massless and in the massive case.
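
    For reference, a commonly used staggered-fermion (Kogut-Susskind) form of the lattice Schwinger Hamiltonian in such MPS studies is (conventions vary between papers):

      H \;=\; -\frac{i}{2a} \sum_n \left( \phi_n^\dagger\, e^{i\theta_n}\, \phi_{n+1} - \mathrm{h.c.} \right)
      \;+\; m \sum_n (-1)^n\, \phi_n^\dagger \phi_n
      \;+\; \frac{g^2 a}{2} \sum_n L_n^2,

    with [θ_n, L_m] = i δ_{nm}; after eliminating the gauge field via the Gauss law, the model becomes a long-range spin chain that is well suited to MPS techniques.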

  6. Quark Propagator with electroweak interactions in the Dyson-Schwinger approach

    Science.gov (United States)

    Mian, Walid Ahmed; Maas, Axel

    2017-03-01

    Motivated by the non-negligible dynamical backcoupling of the electroweak interactions with the strong interaction during neutron star mergers, we study the effects of the explicit breaking of C, P and flavor symmetry on the strong sector. The quark propagator is the simplest object which encodes the consequences of these breakings. To assess the impact, we study the influence of especially parity violation on the propagator for various masses. For this purpose the functional methods in the form of Dyson-Schwinger equations are employed. We find that explicit isospin breaking leads to a qualitative change of behavior even for a slight explicit breaking, which is in contrast to the expectations from perturbation theory. Our results thus suggest that non-perturbative backcoupling effects could be larger than expected.

  7. Determining partial differential cross sections for low-energy electron photodetachment involving conical intersections using the solution of a Lippmann-Schwinger equation constructed with standard electronic structure techniques.

    Science.gov (United States)

    Han, Seungsuk; Yarkony, David R

    2011-05-07

    A method for obtaining partial differential cross sections for low-energy electron photodetachment in which the electronic states of the residual molecule are strongly coupled by conical intersections is reported. The method is based on the iterative solution to a Lippmann-Schwinger equation, using a zeroth-order Hamiltonian consisting of the bound nonadiabatically coupled residual molecule and a free electron. The solution to the Lippmann-Schwinger equation involves only standard electronic structure techniques and a standard three-dimensional free-particle Green's function quadrature for which fast techniques exist. The transition dipole moment for electron photodetachment is a sum of matrix elements, each involving one nonorthogonal orbital obtained from the solution to the Lippmann-Schwinger equation. An expression for the electron photodetachment transition dipole matrix element in terms of Dyson orbitals, which does not make the usual orthogonality assumptions, is derived.

  8. Application of New Variational Homotopy Perturbation Method For ...

    African Journals Online (AJOL)

    ... proposed method is very efficient, simple and is more user friendly. Keywords: Variational Iteration Method, Homotopy Perturbation Method, New Variational Homotopy Perturbation Method, Integro-Differential Equations Journal of the Nigerian Association of Mathematical Physics, Volume 20 (March, 2012), pp 497 – 504 ...

  9. Variation and Commonality in Phenomenographic Research Methods

    Science.gov (United States)

    Akerlind, Gerlese S.

    2012-01-01

    This paper focuses on the data analysis stage of phenomenographic research, elucidating what is involved in terms of both commonality and variation in accepted practice. The analysis stage of phenomenographic research is often not well understood. This paper helps to clarify the process, initially by collecting together in one location the more…

  10. Thermal evolution of the Schwinger model with matrix product operators

    Energy Technology Data Exchange (ETDEWEB)

    Banuls, M.C.; Cirac, J.I. [Max-Planck-Institut fuer Quantenoptik, Garching (Germany); Cichy, K. [Frankfurt am Main Univ. (Germany). Inst. fuer Theoretische Physik; Poznan Univ. (Poland). Faculty of Physics; DESY Zeuthen (Germany). John von Neumann-Institut fuer Computing (NIC); Jansen, K.; Saito, H. [DESY Zeuthen (Germany). John von Neumann-Institut fuer Computing (NIC)

    2015-10-15

    We demonstrate the suitability of tensor network techniques for describing the thermal evolution of lattice gauge theories. As a benchmark case, we have studied the temperature dependence of the chiral condensate in the Schwinger model, using matrix product operators to approximate the thermal equilibrium states for finite system sizes with non-zero lattice spacings. We show how these techniques allow for reliable extrapolations in bond dimension, step width, system size and lattice spacing, and for a systematic estimation and control of all error sources involved in the calculation. The reached values of the lattice spacing are small enough to capture the most challenging region of high temperatures and the final results are consistent with the analytical prediction by Sachs and Wipf over a broad temperature range.

  11. Schwinger-Fronsdal Theory of Abelian Tensor Gauge Fields

    Directory of Open Access Journals (Sweden)

    Sebastian Guttenberg

    2008-09-01

    This review is devoted to the Schwinger and Fronsdal theory of Abelian tensor gauge fields. The theory describes the propagation of free massless gauge bosons of integer helicities and their interaction with external currents. Self-consistency of its equations requires only the traceless part of the current divergence to vanish. The essence of the theory is given by the fact that this weaker current conservation is enough to guarantee the unitarity of the theory. Physically this means that only waves with transverse polarizations are propagating very far from the sources. The question whether such currents exist should be answered by a fully interacting theory. We also suggest an equivalent representation of the corresponding action.

  12. A multigrid method for variational inequalities

    Energy Technology Data Exchange (ETDEWEB)

    Oliveira, S.; Stewart, D.E.; Wu, W.

    1996-12-31

    Multigrid methods have been used with great success for solving elliptic partial differential equations. Penalty methods have been successful in solving finite-dimensional quadratic programs. In this paper these two techniques are combined to give a fast method for solving obstacle problems. A nonlinear penalized problem is solved using Newton's method for large values of a penalty parameter. Multigrid methods are used to solve the linear systems in Newton's method. The overall numerical method developed is based on an exterior penalty function, and numerical results showing the performance of the method have been obtained.
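
    The record above combines an exterior penalty with Newton iterations, reserving multigrid for the inner linear solves. As a rough illustration of that outer loop only, the sketch below applies penalty-plus-Newton to a 1-D obstacle problem; the grid size, load, obstacle and the direct inner solve are illustrative stand-ins (the paper itself would solve those systems by multigrid).

    ```python
    import numpy as np

    def obstacle_penalty_newton(n=100, rho=1e6, tol=1e-10, max_iter=50):
        """Exterior-penalty + Newton sketch for a 1-D obstacle problem:
        minimize 1/2 u'Au - f'u subject to u >= psi, where A is the
        finite-difference Laplacian on (0, 1) with u(0) = u(1) = 0.
        All problem data below are hypothetical test values."""
        h = 1.0 / (n + 1)
        x = np.linspace(h, 1 - h, n)
        # Tridiagonal stiffness matrix of -u'' (scaled by 1/h^2)
        A = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
             - np.diag(np.ones(n - 1), -1)) / h**2
        f = -10.0 * np.ones(n)          # constant load pushing u onto the obstacle
        psi = -0.05 * np.ones(n)        # flat obstacle from below
        u = np.zeros(n)
        for _ in range(max_iter):
            viol = np.minimum(u - psi, 0.0)              # active where u < psi
            grad = A @ u - f + rho * viol                # gradient of penalized energy
            hess = A + rho * np.diag((u - psi < 0).astype(float))
            d = np.linalg.solve(hess, -grad)             # the paper uses multigrid here
            u += d
            if np.linalg.norm(d) < tol:
                break
        return x, u
    ```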

  13. Application of New Variational Homotopy Perturbation Method I ...

    African Journals Online (AJOL)

    Numerical comparisons are made between VIM/HPM and NVHPM results. Keywords: Painlevé Equations, Variational Iteration Method, Homotopy Perturbation Method, New Variational Homotopy Perturbation Method, Ordinary Differential Equations. Journal of the Nigerian Association of Mathematical Physics, Volume 19 ...

  14. Variation Iteration Method for The Approximate Solution of Nonlinear ...

    African Journals Online (AJOL)

    Results obtained with the Variational Iteration Method (VIM) on the Burgers equation were compared with the exact solution found in the literature. All computations in this research were performed with the aid of Maple 18 software. Keywords: Variational Iteration Method, Burgers Equation, Partial Differential Equations, ...

  15. On Self-Adaptive Method for General Mixed Variational Inequalities

    Directory of Open Access Journals (Sweden)

    Abdellah Bnouhachem

    2008-01-01

    We suggest and analyze a new self-adaptive method for solving general mixed variational inequalities, which can be viewed as an improvement of the method of Noor (2003). Global convergence of the new method is proved under the same assumptions as Noor's method. Some preliminary computational results are given to illustrate the efficiency of the proposed method. Since the general mixed variational inequalities include general variational inequalities, quasivariational inequalities, and nonlinear (implicit) complementarity problems as special cases, results proved in this paper continue to hold for these problems.

  16. Statistical methods for handling unwanted variation in metabolomics data.

    Science.gov (United States)

    De Livera, Alysha M; Sysi-Aho, Marko; Jacob, Laurent; Gagnon-Bartsch, Johann A; Castillo, Sandra; Simpson, Julie A; Speed, Terence P

    2015-04-07

    Metabolomics experiments are inevitably subject to a component of unwanted variation, due to factors such as batch effects, long runs of samples, and confounding biological variation. Although the removal of this unwanted variation is a vital step in the analysis of metabolomics data, it is considered a gray area in which there is a recognized need to develop a better understanding of the procedures and statistical methods required to achieve statistically relevant optimal biological outcomes. In this paper, we discuss the causes of unwanted variation in metabolomics experiments, review commonly used metabolomics approaches for handling this unwanted variation, and present a statistical approach for the removal of unwanted variation to obtain normalized metabolomics data. The advantages and performance of the approach relative to several widely used metabolomics normalization approaches are illustrated through two metabolomics studies, and recommendations are provided for choosing and assessing the most suitable normalization method for a given metabolomics experiment. Software for the approach is made freely available.
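
    As a rough, generic illustration of "removing unwanted variation" (not the authors' estimator), the sketch below estimates a few unwanted-variation factors from metabolites assumed to be biologically constant (for example, internal standards or quality-control features) and regresses them out of the whole data matrix; the factor count k and the control index set are user choices.

    ```python
    import numpy as np

    def remove_unwanted_variation(Y, ctrl_idx, k=2):
        """Y: samples x metabolites matrix (log scale); ctrl_idx: columns of
        control metabolites assumed free of biological signal. Estimate k
        unwanted-variation factors from the controls and project them out
        of every metabolite. Illustrative RUV-style sketch only."""
        Yc = Y[:, ctrl_idx] - Y[:, ctrl_idx].mean(axis=0)
        U, s, _ = np.linalg.svd(Yc, full_matrices=False)
        W = U[:, :k]                                   # samples x k factor scores
        beta = np.linalg.lstsq(W, Y, rcond=None)[0]    # k x metabolites loadings
        return Y - W @ beta                            # adjusted data
    ```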

  17. Statistical methods for handling unwanted variation in metabolomics data

    OpenAIRE

    De Livera, Alysha M.; Sysi-Aho, Marko; Jacob, Laurent; Gagnon-Bartsch, Johann A.; Castillo, Sandra; Simpson, Julie A; Speed, Terence P.

    2015-01-01

    Metabolomics experiments are inevitably subject to a component of unwanted variation, due to factors such as batch effects, long runs of samples, and confounding biological variation. Although the removal of this unwanted variation is a vital step in the analysis of metabolomics data, it is considered a gray area in which there is a recognised need to develop a better understanding of the procedures and statistical methods required to achieve statistically relevant optimal biological outcomes...

  18. Schwinger-Keldysh formalism. Part I: BRST symmetries and superspace

    Science.gov (United States)

    Haehl, Felix M.; Loganayagam, R.; Rangamani, Mukund

    2017-06-01

    We review the Schwinger-Keldysh, or in-in, formalism for studying quantum dynamics of systems out-of-equilibrium. The main motivation is to rephrase well known facts in the subject in a mathematically elegant setting, by exhibiting a set of BRST symmetries inherent in the construction. We show how these fundamental symmetries can be made manifest by working in a superspace formalism. We argue that this rephrasing is extremely efficacious in understanding low energy dynamics following the usual renormalization group approach, for the BRST symmetries are robust under integrating out degrees of freedom. In addition we discuss potential generalizations of the formalism that allow us to compute out-of-time-order correlation functions that have been the focus of recent attention in the context of chaos and scrambling. We also outline a set of problems ranging from stochastic dynamics, hydrodynamics, dynamics of entanglement in QFTs, and the physics of black holes and cosmology, where we believe this framework could play a crucial role in demystifying various confusions. Our companion paper [1] describes in greater detail the mathematical framework embodying the topological symmetries we uncover here.

  19. Variational method for objective analysis of scalar variable and its ...

    Indian Academy of Sciences (India)

    It has been found that the objectively analysed scalar field obtained using the standard method is superior to the scalar field derived by the triangle method, whereas the derivative fields produced by the triangle method are superior to the derivative fields produced using the standard method. A variational objective analysis scheme has ...

  20. T-Stability Approach to Variational Iteration Method for Solving Integral Equations

    Directory of Open Access Journals (Sweden)

    Rhoades BE

    2009-01-01

    We consider the T-stability definition according to Y. Qing and B. E. Rhoades (2008) and we show that the variational iteration method for solving integral equations is T-stable. Finally, we present some test examples to illustrate our result.

  1. Implicit particle methods and their connection with variational data assimilation

    CERN Document Server

    Atkins, Ethan; Chorin, Alexandre J

    2012-01-01

    The implicit particle filter is a sequential Monte Carlo method for data assimilation that guides the particles to the high-probability regions via a sequence of steps that includes minimizations. We present a new and more general derivation of this approach and extend the method to particle smoothing as well as to data assimilation for perfect models. We show that the minimizations required by implicit particle methods are similar to the ones one encounters in variational data assimilation and explore the connection of implicit particle methods with variational data assimilation. In particular, we argue that existing variational codes can be converted into implicit particle methods at a low cost, often yielding better estimates that are also equipped with quantitative measures of the uncertainty. A detailed example is presented.

  2. Realization of Massive Relativistic Spin-3/2 Rarita-Schwinger Quasiparticle in Condensed Matter Systems

    Science.gov (United States)

    Tang, Feng; Luo, Xi; Du, Yongping; Yu, Yue; Wan, Xiangang

    Very recently, there has been significant progress in realizing high-energy particles in condensed matter systems (CMS), such as the Dirac, Weyl and Majorana fermions. Besides the spin-1/2 particles, the spin-3/2 elementary particle, known as the Rarita-Schwinger (RS) fermion, has not been observed or simulated in the laboratory. The main obstacle to realizing the RS fermion in CMS lies in the nontrivial constraints that eliminate the redundant degrees of freedom in its representation of the Poincaré group. In this Letter, we propose a generic method that automatically contains the constraints in the Hamiltonian and prove that the RS modes always exist and can be separated from the other non-RS bands. Through symmetry considerations, we show that the two-dimensional (2D) massive RS (M-RS) quasiparticle can emerge in several trigonal and hexagonal lattices. Based on ab initio calculations, we predict that thin films of CaLiX (X=Ge and Si) may host 2D M-RS excitations near the Fermi level.

  3. Implementation of the parametric variation method in an EMTP program

    DEFF Research Database (Denmark)

    Holdyk, Andrzej; Holbøll, Joachim

    2013-01-01

    The paper presents an algorithm for, and shows the implementation of, a method to perform parametric variation studies using electromagnetic transients programs applied to an offshore wind farm. Such studies are used to investigate the sensitivity of a given phenomenon to variation of parameters in an electric system. The proposed method allows varying any parameter of a circuit, including the simulation settings, and exploits the specific structure of the ATP-EMTP software. In the implementation of the method, Matlab software is used to control the execution of the ATP solver. Two ...

  4. Discrete Direct Methods in the Fractional Calculus of Variations

    OpenAIRE

    Pooseh, Shakoor; Almeida, Ricardo; Torres, Delfim F. M.

    2012-01-01

    Finite differences, as a subclass of direct methods in the calculus of variations, consist in discretizing the objective functional using appropriate approximations for derivatives that appear in the problem. This article generalizes the same idea for fractional variational problems. We consider a minimization problem with a Lagrangian that depends on the left Riemann–Liouville fractional derivative. Using the Grünwald–Letnikov definition, we approximate the objective functional in...

  5. Variational Principles and Methods in Theoretical Physics and Chemistry

    Science.gov (United States)

    Nesbet, Robert K.

    2005-07-01

    Preface; Part I. Classical Mathematics and Physics: 1. History of variational theory; 2. Classical mechanics; 3. Applied mathematics; Part II. Bound States in Quantum Mechanics: 4. Time-independent quantum mechanics; 5. Independent-electron models; 6. Time-dependent theory and linear response; Part III. Continuum States and Scattering Theory: 7. Multiple scattering theory for molecules and solids; 8. Variational methods for continuum states; 9. Electron-impact rovibrational excitation of molecules; Part IV. Field Theories: 10. Relativistic Lagrangian theories.

  6. Variational Problems with Moving Boundaries Using Decomposition Method

    Directory of Open Access Journals (Sweden)

    Reza Memarbashi

    2007-10-01

    The aim of this paper is to present a numerical method for solving variational problems with moving boundaries. We apply the Adomian decomposition method to the Euler-Lagrange equation with boundary conditions that follow from the transversality conditions and natural boundary conditions.

  7. Some Implicit Methods for Solving Harmonic Variational Inequalities

    Directory of Open Access Journals (Sweden)

    Muhammad Aslam Noor

    2016-08-01

    In this paper, we use the auxiliary principle technique to suggest an implicit method for solving the harmonic variational inequalities. It is shown that the convergence of the proposed method only needs pseudo monotonicity of the operator, which is a weaker condition than monotonicity.

  8. Dissecting the hadronic contributions to $(g-2)_\\mu $ by Schwinger's sum rule

    OpenAIRE

    Hagelstein, Franziska; Pascalutsa, Vladimir

    2017-01-01

    The theoretical uncertainty of $(g-2)_\\mu $ is currently dominated by hadronic contributions. In order to express those in terms of directly measurable quantities, we consider a sum rule relating $g-2$ to an integral of a photo-absorption cross section. The sum rule, attributed to Schwinger, can be viewed as a combination of two older sum rules: Gerasimov-Drell-Hearn (GDH) and Burkhardt-Cottingham (BC). The Schwinger sum rule has an important feature, distinguishing it from the other two: the...

  9. Towards a model of pion generalized parton distributions from Dyson-Schwinger equations

    Energy Technology Data Exchange (ETDEWEB)

    Moutarde, H. [CEA, Centre de Saclay, IRFU/Service de Physique Nucléaire, F-91191 Gif-sur-Yvette (France)

    2015-04-10

    We compute the pion quark Generalized Parton Distribution H^q and Double Distributions F^q and G^q in a coupled Bethe-Salpeter and Dyson-Schwinger approach. We use simple algebraic expressions inspired by the numerical resolution of Dyson-Schwinger and Bethe-Salpeter equations. We explicitly check the support and polynomiality properties, and the behavior under charge conjugation or time invariance of our model. We derive analytic expressions for the pion Double Distributions and Generalized Parton Distribution at vanishing pion momentum transfer at a low scale. Our model compares very well to experimental pion form factor or parton distribution function data.

  10. The operator method for angular momentum and SU3

    NARCIS (Netherlands)

    Eekelen, H.A.M. van; Ruijgrok, Th.W.

    It is well known how Schwinger's operator method [1] can be used to construct all representations of the angular momentum operators. We give a brief account of this method and show that it is very convenient for a short derivation of the general Clebsch-Gordan coefficients. The method is then applied

  11. The multi-flavor Schwinger model with chemical potential. Overcoming the sign problem with matrix product states

    Energy Technology Data Exchange (ETDEWEB)

    Banuls, Mari Carmen; Cirac, J. Ignacio; Kuehn, Stefan [Max-Planck-Institut fuer Quantenoptik (MPQ), Garching (Germany); Cichy, Krzysztof [Frankfurt Univ. (Germany). Inst. fuer Theoretische Physik; Adam Mickiewicz Univ., Poznan (Poland). Faculty of Physics; Jansen, Karl [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany). John von Neumann-Inst. fuer Computing NIC; Saito, Hana [AISIN AW Co., Ltd., Aichi (Japan)

    2016-11-15

    During recent years there has been an increasing interest in the application of matrix product states, and more generally tensor networks, to lattice gauge theories. This non-perturbative method is sign problem free and has already been successfully used to compute mass spectra, thermal states and phase diagrams, as well as real-time dynamics for Abelian and non-Abelian gauge models. In previous work we showed the suitability of the method to explore the zero-temperature phase structure of the multi-flavor Schwinger model at non-zero chemical potential, a regime where the conventional Monte Carlo approach suffers from the sign problem. Here we extend our numerical study by looking at the spatially resolved chiral condensate in the massless case. We recover spatial oscillations, similar to the theoretical predictions for the single-flavor case, with a chemical potential dependent frequency and an amplitude approximately given by the homogeneous zero density condensate value.

  12. Solution of the Dyson-Schwinger equations of the Hamiltonian approach to Yang-Mills theory in Coulomb gauge

    Energy Technology Data Exchange (ETDEWEB)

    Epple, Mark Dominik

    2008-12-03

    In this work we examine the Yang-Mills Schroedinger equation, which results from minimizing the vacuum energy density in Coulomb gauge. We use an ansatz for the vacuum wave functional which is motivated by the exact wave functional of quantum electrodynamics. The wave functional is by construction singular on the Gribov horizon and has a variational kernel in the exponent which represents the gluon energy. We derive the so-called Dyson-Schwinger equations from the variational principle that the vacuum energy density is stationary under variation with respect to the variational kernel. These Dyson-Schwinger equations form a set of coupled integral equations for the gluon and ghost propagators and for the curvature in gauge orbit space. These equations have been derived in the last few years, have been examined analytically in certain approximations, and first numerical results have been obtained. The case of the so-called horizon condition, which means that the ghost form factor is divergent in the infrared, has always been of special interest. However, it has been found, analytically in certain approximations as well as numerically, that the fully coupled system has no self-consistent solution within the employed truncation at two-loop level in the energy. One can, however, obtain a solvable system by inserting the bare ghost propagator into the Coulomb equation. This system possesses two different kinds of infrared-divergent solutions, which differ in the exponents of the power laws of the form factors in the infrared. The weaker divergent solution had previously been found, but not the stronger divergent solution. The subject of this work is to develop a deeper understanding of the presented system. We present a new renormalization scheme which enables us to reduce the number of renormalization parameters by one. This new system of integral equations is solved numerically with greatly increased precision. Doing so, we found the stronger divergent solution for the first time.

  13. A First Step towards Variational Methods in Engineering

    Science.gov (United States)

    Periago, Francisco

    2003-01-01

    In this paper, a didactical proposal is presented to introduce the variational methods for solving boundary value problems to engineering students. Starting from a couple of simple models arising in linear elasticity and heat diffusion, the concept of weak solution for these models is motivated and the existence, uniqueness and continuous…

  14. Application of New Variational Homotopy Perturbation Method For ...

    African Journals Online (AJOL)

    This paper discusses the application of the New Variational Homotopy Perturbation Method (NVHPM) for solving integro-differential equations. The advantage of the new scheme is that it does not require discretization, linearization or any restrictive assumption of any form before it is applied. Several test problems are ...

  15. Application of the cluster variation method to interstitial solid solutions

    NARCIS (Netherlands)

    Pekelharing, M.I.

    2008-01-01

    A thermodynamic model for interstitial alloys, based on the Cluster Variation Method (CVM), has been developed, capable of incorporating short range ordering (SRO), long range ordering (LRO), and the mutual interaction between the host and the interstitial sublattices. The obtained cluster-based

  16. Schwinger representation for the symmetric group: Two explicit constructions for the carrier space

    Science.gov (United States)

    Chaturvedi, S.; Marmo, G.; Mukunda, N.; Simon, R.

    2008-05-01

    We give two explicit constructions for the carrier space for the Schwinger representation of the symmetric group S_n. While the first relies on a class of functions consisting of monomials in antisymmetric variables, the second is based on the Fock space associated with the Greenberg algebra.

  17. Schwinger α-Parametric Representation of Finite Temperature Field Theories: Renormalization

    Science.gov (United States)

    Benhamou, M.; Kassou-Ou-Ali, A.

    We present the extension of the zero temperature Schwinger α-representation to the finite temperature scalar field theories. We give, in a compact form, the α-integrand of Feynman amplitudes of these theories. Using this representation, we analyze short-range divergences, and recover in a simple way the known result that the counterterms are temperature-independent.

  18. Schwinger Model and String Percolation in Hadron-Hadron and Heavy Ion Collisions

    OpenAIRE

    Dias De Deus, J; Ferreiro, E. G.; Pajares, C.; Ugoccioni, R.

    2003-01-01

    In the framework of the Schwinger Model for percolating strings we establish a general relation between multiplicity and transverse momentum square distributions in hadron-hadron and heavy ion collisions. Some of our results agree with the Colour Glass Condensate model.

  19. Optimal Variational Asymptotic Method for Nonlinear Fractional Partial Differential Equations.

    Science.gov (United States)

    Baranwal, Vipul K; Pandey, Ram K; Singh, Om P

    2014-01-01

    We propose an optimal variational asymptotic method to solve time fractional nonlinear partial differential equations. In the proposed method, an arbitrary number of auxiliary parameters γ_0, γ_1, γ_2, … and auxiliary functions H_0(x), H_1(x), H_2(x), … are introduced in the correction functional of the standard variational iteration method. The optimal values of these parameters are obtained by minimizing the square residual error. To test the method, we apply it to solve two important classes of nonlinear partial differential equations: (1) the fractional advection-diffusion equation with nonlinear source term and (2) the fractional Swift-Hohenberg equation. Only a few iterations are required to achieve fairly accurate solutions of both the first and second problems.
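
    Schematically, and in our own notation rather than the paper's, the construction augments the standard correction functional with auxiliary parameters and functions and then fixes them by a least-squares condition on the residual:

    $$u_{n+1}(x,t) = u_n(x,t) + \gamma_n \int_0^t H_n(x)\,\lambda(s)\,\bigl[\mathcal{L}u_n(x,s) + \mathcal{N}\tilde{u}_n(x,s) - g(x,s)\bigr]\,ds, \qquad \gamma_n = \arg\min_{\gamma} \iint R_n^2(x,t;\gamma)\,dx\,dt,$$

    where $\lambda$ is the usual Lagrange multiplier of the variational iteration method, $\tilde{u}_n$ denotes a restricted variation, and $R_n$ is the residual of the governing equation evaluated at $u_{n+1}$.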

  20. Optimal Variational Asymptotic Method for Nonlinear Fractional Partial Differential Equations

    Science.gov (United States)

    Baranwal, Vipul K.; Pandey, Ram K.

    2014-01-01

    We propose an optimal variational asymptotic method to solve time fractional nonlinear partial differential equations. In the proposed method, an arbitrary number of auxiliary parameters γ_0, γ_1, γ_2, … and auxiliary functions H_0(x), H_1(x), H_2(x), … are introduced in the correction functional of the standard variational iteration method. The optimal values of these parameters are obtained by minimizing the square residual error. To test the method, we apply it to solve two important classes of nonlinear partial differential equations: (1) the fractional advection-diffusion equation with nonlinear source term and (2) the fractional Swift-Hohenberg equation. Only a few iterations are required to achieve fairly accurate solutions of both the first and second problems. PMID:27437484

  1. A Coordinate Descent Method for Total Variation Minimization

    Directory of Open Access Journals (Sweden)

    Hong Deng

    2017-01-01

    Total variation (TV) is a well-known image model with extensive applications in various image and vision tasks, for example, denoising, deblurring, superresolution, inpainting, and compressed sensing. In this paper, we systematically study the coordinate descent (CoD) method for solving general total variation (TV) minimization problems. Based on multidirectional gradients representation, the proposed CoD method provides a unified solution for both anisotropic and isotropic TV-based denoising (CoDenoise). With sequential sweeping and small random perturbations, CoDenoise is efficient in denoising and empirically converges to the optimal solution. Moreover, CoDenoise also delivers a new perspective on understanding recursive weighted median filtering. By incorporating the Augmented Lagrangian Method (ALM), CoD was further extended to TV-based image deblurring (ALMCD). The results on denoising and deblurring validate the efficiency and effectiveness of the CoD-based methods.
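
    For intuition, the coordinate update can be written exactly in the 1-D anisotropic case, since each scalar subproblem is piecewise quadratic and its minimizer lies at a stationary point of one of the smooth pieces or at a kink. The sketch below is an illustrative 1-D reduction of the idea (the paper works with multidirectional gradients in 2-D); the regularization weight lam and the number of sweeps are free choices.

    ```python
    import numpy as np

    def tv_denoise_cod(y, lam, sweeps=50):
        """Coordinate-descent sketch for 1-D anisotropic TV denoising:
            min_x 0.5*||x - y||^2 + lam * sum_i |x_{i+1} - x_i|.
        Each coordinate is minimized exactly by checking the few candidate
        minimizers of its piecewise-quadratic objective."""
        x = y.astype(float).copy()
        n = len(x)
        for _ in range(sweeps):
            for i in np.random.permutation(n):        # randomized sweep order
                a = y[i]
                nbrs = [x[j] for j in (i - 1, i + 1) if 0 <= j < n]
                # candidates: stationary points of each smooth piece + the kinks
                cands = nbrs + [a, a + lam, a - lam, a + 2 * lam, a - 2 * lam]
                obj = lambda t: 0.5 * (t - a) ** 2 + lam * sum(abs(t - b) for b in nbrs)
                x[i] = min(cands, key=obj)
        return x
    ```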

  2. Discrete gradient methods for solving variational image regularisation models

    Science.gov (United States)

    Grimm, V.; McLachlan, Robert I.; McLaren, David I.; Quispel, G. R. W.; Schönlieb, C.-B.

    2017-07-01

    Discrete gradient methods are well-known methods of geometric numerical integration, which preserve the dissipation of gradient systems. In this paper we show that this property of discrete gradient methods can be interesting in the context of variational models for image processing, that is where the processed image is computed as a minimiser of an energy functional. Numerical schemes for computing minimisers of such energies are desired to inherit the dissipative property of the gradient system associated to the energy and consequently guarantee a monotonic decrease of the energy along iterations, avoiding situations in which more computational work might lead to less optimal solutions. Under appropriate smoothness assumptions on the energy functional we prove that discrete gradient methods guarantee a monotonic decrease of the energy towards stationary states, and we promote their use in image processing by exhibiting experiments with convex and non-convex variational models for image deblurring, denoising, and inpainting.
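
    The dissipation property mentioned above follows directly from the defining identity of a discrete gradient; in generic notation (not tied to the particular schemes of the paper), a discrete gradient of an energy $E$ satisfies

    $$\overline{\nabla}E(x,y)\cdot(y-x) = E(y) - E(x), \qquad \overline{\nabla}E(x,x) = \nabla E(x),$$

    so the implicit update

    $$\frac{x_{k+1}-x_k}{\tau} = -\,\overline{\nabla}E(x_k,x_{k+1})$$

    gives $E(x_{k+1}) - E(x_k) = -\tau^{-1}\|x_{k+1}-x_k\|^2 \le 0$, i.e. a monotonic energy decrease for any step size $\tau > 0$.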

  3. Elastic scattering of positronium: Application of the confined variational method

    KAUST Repository

    Zhang, Junyi

    2012-08-01

    We demonstrate for the first time that the phase shift in elastic positronium-atom scattering can be precisely determined by the confined variational method, in spite of the fact that the Hamiltonian includes an unphysical confining potential acting on the center of mass of the positron and one of the atomic electrons. As an example, we study the S-wave elastic scattering for the positronium-hydrogen scattering system, where the existing 4% discrepancy between the Kohn variational calculation and the R-matrix calculation is resolved. © Copyright EPLA, 2012.

  4. Variational Iterative Methods for Nonsymmetric Systems of Linear Equations.

    Science.gov (United States)

    1981-08-01

    [OCR-damaged record] Yale University, New Haven, CT, Department of Computer Science technical report, August 1981: Variational Iterative Methods for Nonsymmetric Systems of Linear Equations. Only fragments of the report cover and of its reference list are legible, e.g. entries citing Linear Algebra and Its Applications 29:1-16 (1980) and 34:159-194 (1980) and a conjugate gradient monograph for partial differential equations.

  5. THE CONTROL VARIATIONAL METHOD FOR ELASTIC CONTACT PROBLEMS

    Directory of Open Access Journals (Sweden)

    Mircea Sofonea

    2010-07-01

    We consider a multivalued equation of the form Ay + F(y) = f in a real Hilbert space, where A is a linear operator and F represents the (Clarke) subdifferential of some function. We prove existence and uniqueness results for the solution by using the control variational method. The main idea in this method is to minimize the energy functional associated to the nonlinear equation by arguments of optimal control theory. Then we consider a general mathematical model describing the contact between a linearly elastic body and an obstacle which leads to a variational formulation as above for the displacement field. We apply the abstract existence and uniqueness results to prove the unique weak solvability of the corresponding contact problem. Finally, we present examples of contact and friction laws for which our results work.

  6. Variation Iteration Method for The Approximate Solution of Nonlinear ...

    African Journals Online (AJOL)

    Tonistar

    Nigerian Journal of Basic and Applied Science, June 2016, 24(1): 70-75. DOI: http://dx.doi.org/10.4314/njbas.v24i1.11. ISSN 0794-5698. Variation Iteration Method for The ... rocket motor, acoustics, number theory, heat conduction, shock waves, etc. (Burger, 1948). Hence, obtaining the exact resolution of this equation for a ...

  7. The variational method for density states a geometrical approach

    Science.gov (United States)

    Figueroa, Armando; Castaños, Octavio; López-Peña, Ramón; Marmo, Giuseppe

    2017-09-01

    The conventional variational method is reformulated within a geometrical approach to quantum mechanics, both for finite and infinite dimensional Hilbert spaces. This geometrical setting allows us to deal not only with approximated eigenvalues and eigenstates, but also with approximated Schrödinger equations by means of Hamiltonian evolution equations associated with expectation value functions. We shall consider some specific examples where we extend our approach also to density matrices.

  8. A variational sinc collocation method for strong-coupling problems

    Energy Technology Data Exchange (ETDEWEB)

    Amore, Paolo [Facultad de Ciencias, Universidad de Colima, Bernal Diaz del Castillo 340, Colima (Mexico)

    2006-06-02

    We have devised a variational sinc collocation method (VSCM) which can be used to obtain accurate numerical solutions to many strong-coupling problems. Sinc functions with an optimal grid spacing are used to solve the linear and nonlinear Schroedinger equations and a lattice φ⁴ model in (1+1) dimensions. Our results indicate that errors decrease exponentially with the number of grid points and that a limited numerical effort is needed to reach high precision. (letter to the editor)
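
    In outline (our notation; the letter's details may differ), the unknown function is expanded in sinc functions on a uniform grid and the spacing $h$ itself is treated as a variational parameter:

    $$\psi(x) \approx \sum_{k=-N}^{N} c_k\, S_k(h)(x), \qquad S_k(h)(x) = \frac{\sin\bigl[\pi(x-kh)/h\bigr]}{\pi(x-kh)/h},$$

    with $h$ fixed, for instance, by requiring the estimated energy (or another error functional) to be stationary, $\partial E/\partial h = 0$, which is consistent with the exponential decrease of the error with the number of grid points reported above.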

  9. Rayleigh-Ritz variation method and connected-moments expansions

    Energy Technology Data Exchange (ETDEWEB)

    Amore, Paolo [Facultad de Ciencias, Universidad de Colima, Bernal Diaz del Castillo 340, Colima (Mexico); Fernandez, Francisco M [INIFTA (UNLP, CCT La Plata-CONICET), Division Quimica Teorica, Blvd 113 S/N, Sucursal 4, Casilla de Correo 16, 1900 La Plata (Argentina)], E-mail: fernande@quimica.unlp.edu.ar

    2009-11-15

    We compare the connected-moments expansion (CMX) with the Rayleigh-Ritz variational method in the Krylov space (RRK). As a benchmark model we choose the same two-dimensional anharmonic oscillator already treated earlier by means of the CMX. Our results show that the RRK converges more smoothly than the CMX. We also discuss the fact that the CMX is size consistent while the RRK is not.

  10. Discrete Variational Derivative Methods for the EPDiff equation

    OpenAIRE

    LARSSON, STIG; Matsuo, Takayasu; Modin, Klas; Molteni, Matteo

    2016-01-01

    The aim of this paper is the derivation of structure preserving schemes for the solution of the EPDiff equation, with particular emphasis on the two dimensional case. We develop three different schemes based on the Discrete Variational Derivative Method (DVDM) on a rectangular domain discretized with a regular, structured, orthogonal grid. We present numerical experiments to support our claims: we investigate the preservation of energy and linear momenta, the reversibility, and the empirical ...

  11. Solutions of fractional diffusion equations by variation of parameters method

    Directory of Open Access Journals (Sweden)

    Mohyud-Din Syed Tauseef

    2015-01-01

    This article is devoted to establishing a novel analytical solution scheme for the fractional diffusion equations. Caputo's formulation followed by the variation of parameters method has been employed to obtain the analytical solutions. Following the derived analytical scheme, solutions of the fractional diffusion equation for several initial functions have been obtained. Graphs are plotted to see the physical behavior of the obtained solutions.

  12. Canonical Variate Analysis and Related Methods with Longitudinal Data

    OpenAIRE

    Beaghen, Michael Jr.

    1997-01-01

    Canonical variate analysis (CVA) is a widely used method for analyzing group structure in multivariate data. It is mathematically equivalent to a one-way multivariate analysis of variance and often goes by the name of canonical discriminant analysis. Change over time is a central feature of many phenomena of interest to researchers. This dissertation extends CVA to longitudinal data. It develops models whose purpose is to determi...

  13. A modified subgradient extragradient method for solving monotone variational inequalities.

    Science.gov (United States)

    He, Songnian; Wu, Tao

    2017-01-01

    In the setting of Hilbert space, a modified subgradient extragradient method is proposed for solving Lipschitz-continuous and monotone variational inequalities defined on a level set of a convex function. Our iterative process is relaxed and self-adaptive; that is, each iteration involves only the calculation of two metric projections onto some half-spaces containing the domain, and the step size can be selected in some adaptive ways. A weak convergence theorem for our algorithm is proved. We also prove that our method has an O(1/n) convergence rate.

  14. A modified subgradient extragradient method for solving monotone variational inequalities

    Directory of Open Access Journals (Sweden)

    Songnian He

    2017-04-01

    In the setting of Hilbert space, a modified subgradient extragradient method is proposed for solving Lipschitz-continuous and monotone variational inequalities defined on a level set of a convex function. Our iterative process is relaxed and self-adaptive, that is, in each iteration, calculating two metric projections onto some half-spaces containing the domain is involved only and the step size can be selected in some adaptive ways. A weak convergence theorem for our algorithm is proved. We also prove that our method has an $O(\frac{1}{n})$ convergence rate.
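
    For orientation, the sketch below implements the basic subgradient extragradient template (one metric projection onto the feasible set, one onto a separating half-space) that methods of this type build on; it is not the authors' modified algorithm, and the fixed step size tau, the operator F and the projector proj_C are illustrative inputs.

    ```python
    import numpy as np

    def halfspace_proj(z, a, y):
        """Project z onto the half-space {w : <a, w - y> <= 0}; whole space if a = 0."""
        aa = float(a @ a)
        if aa == 0.0:
            return z
        viol = float(a @ (z - y))
        return z if viol <= 0.0 else z - (viol / aa) * a

    def subgradient_extragradient(F, proj_C, x0, tau=0.1, n_iter=500):
        """Basic subgradient extragradient iteration for a monotone, Lipschitz
        operator F on a closed convex set C (illustrative template only)."""
        x = np.asarray(x0, dtype=float)
        for _ in range(n_iter):
            z = x - tau * F(x)
            y = proj_C(z)                     # first projection: onto C
            # T_k = {w : <z - y, w - y> <= 0} is a half-space containing C
            x = halfspace_proj(x - tau * F(y), z - y, y)
        return x

    # Toy usage: F(x) = A x + b with A positive definite, C a box.
    A = np.array([[2.0, 1.0], [1.0, 2.0]])
    b = np.array([-1.0, 1.0])
    print(subgradient_extragradient(lambda x: A @ x + b,
                                    lambda z: np.clip(z, -1.0, 1.0),
                                    np.zeros(2)))
    ```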

  15. Variational-moment method for computing magnetohydrodynamic equilibria

    Energy Technology Data Exchange (ETDEWEB)

    Lao, L.L.

    1983-08-01

    A fast yet accurate method to compute magnetohydrodynamic equilibria is provided by the variational-moment method, which is similar to the classical Rayleigh-Ritz-Galerkin approximation. The equilibrium solution sought is decomposed into a spectral representation. The partial differential equations describing the equilibrium are then recast into their equivalent variational form and systematically reduced to an optimum finite set of coupled ordinary differential equations. An appropriate spectral decomposition can make the series representing the solution converge rapidly and hence substantially reduces the amount of computational time involved. The moment method was developed first to compute fixed-boundary inverse equilibria in axisymmetric toroidal geometry, and was demonstrated to be both efficient and accurate. The method since has been generalized to calculate free-boundary axisymmetric equilibria, to include toroidal plasma rotation and pressure anisotropy, and to treat three-dimensional toroidal geometry. In all these formulations, the flux surfaces are assumed to be smooth and nested so that the solutions can be decomposed in Fourier series in inverse coordinates. These recent developments and the advantages and limitations of the moment method are reviewed. The use of alternate coordinates for decomposition is discussed.

  16. A convergent overlapping domain decomposition method for total variation minimization

    KAUST Repository

    Fornasier, Massimo

    2010-06-22

    In this paper we are concerned with the analysis of convergent sequential and parallel overlapping domain decomposition methods for the minimization of functionals formed by a discrepancy term with respect to the data and a total variation constraint. To our knowledge, this is the first successful attempt of addressing such a strategy for the nonlinear, nonadditive, and nonsmooth problem of total variation minimization. We provide several numerical experiments, showing the successful application of the algorithm for the restoration of 1D signals and 2D images in interpolation/inpainting problems, respectively, and in a compressed sensing problem, for recovering piecewise constant medical-type images from partial Fourier ensembles. © 2010 Springer-Verlag.

  17. Storm surge model based on variational data assimilation method

    Directory of Open Access Journals (Sweden)

    Shi-li Huang

    2010-06-01

    By combining computation and observation information, the variational data assimilation method has the ability to eliminate errors caused by the uncertainty of parameters in practical forecasting. It was applied to a storm surge model based on unstructured grids with high spatial resolution meant for improving the forecasting accuracy of the storm surge. By controlling the wind stress drag coefficient, the variation-based model was developed and validated through data assimilation tests in an actual storm surge induced by a typhoon. In the data assimilation tests, the model accurately identified the wind stress drag coefficient and obtained results close to the true state. Then, the actual storm surge induced by Typhoon 0515 was forecast by the developed model, and the results demonstrate its efficiency in practical application.
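
    Schematically, the assimilation amounts to choosing the wind stress drag coefficient so that the modelled surge best fits the observations; in generic notation (not the paper's exact functional),

    $$J(C_d) = \frac{1}{2}\sum_{i}\bigl[\zeta(C_d; x_i, t_i) - \zeta_i^{\mathrm{obs}}\bigr]^2,$$

    where $\zeta(C_d;\cdot)$ is the modelled surge elevation at the observation points; the gradient $\partial J/\partial C_d$, typically supplied by an adjoint model, drives the iterative update of $C_d$.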

  18. ERG and Schwinger-Dyson Equations - Comparison in Formulations and Applications -

    Science.gov (United States)

    Terao, Haruhiko

    The advantageous points of ERG in applications to non-perturbative analyses of quantum field theories are discussed in comparison with the Schwinger-Dyson equations. First we consider the relation between these two formulations, especially by examining the large N field theories. In the second part we study the phase structure of dynamical symmetry breaking in three dimensional QED as a typical example of the practical application.

  19. Lattice Hamiltonian approach to the massless Schwinger model. Precise extraction of the mass gap

    Energy Technology Data Exchange (ETDEWEB)

    Cichy, Krzysztof [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany). John von Neumann-Inst. fuer Computing NIC; Poznan Univ. (Poland). Faculty of Physics; Kujawa-Cichy, Agnieszka [Poznan Univ. (Poland). Faculty of Physics; Szyniszewski, Marcin [Poznan Univ. (Poland). Faculty of Physics; Manchester Univ. (United Kingdom). NOWNano DTC

    2012-12-15

    We present results of applying the Hamiltonian approach to the massless Schwinger model. A finite basis is constructed using the strong coupling expansion to a very high order. Using exact diagonalization, the continuum limit can be reliably approached. This allows us to reproduce the analytical results for the ground state energy, as well as the vector and scalar mass gaps, to an outstanding precision better than 10⁻⁶ %.
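
    For reference (a widely quoted benchmark, not taken from the record itself), the continuum values being reproduced in the massless case are, in units of the coupling g,

    $$\frac{M_V}{g} = \frac{1}{\sqrt{\pi}} \approx 0.56419, \qquad \frac{M_S}{g} = \frac{2}{\sqrt{\pi}} \approx 1.12838,$$

    the scalar value being the two-boson threshold of the free massive boson into which the massless model bosonizes.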

  20. A variational Bayesian method to inverse problems with impulsive noise

    KAUST Repository

    Jin, Bangti

    2012-01-01

    We propose a novel numerical method for solving inverse problems subject to impulsive noises which possibly contain a large number of outliers. The approach is of Bayesian type, and it exploits a heavy-tailed t distribution for data noise to achieve robustness with respect to outliers. A hierarchical model with all hyper-parameters automatically determined from the given data is described. An algorithm of variational type by minimizing the Kullback-Leibler divergence between the true posterior distribution and a separable approximation is developed. The numerical method is illustrated on several one- and two-dimensional linear and nonlinear inverse problems arising from heat conduction, including estimating boundary temperature, heat flux and heat transfer coefficient. The results show its robustness to outliers and the fast and steady convergence of the algorithm. © 2011 Elsevier Inc.
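
    As a generic illustration of the robustness mechanism (not the authors' full hierarchical algorithm, which also learns the hyper-parameters), the sketch below exploits the Gaussian scale-mixture form of the t distribution: each residual gets a latent precision whose update automatically down-weights outliers, alternating with a weighted, ridge-regularised least-squares update of the unknown. The degrees of freedom nu, noise scale sigma2 and prior weight alpha are illustrative inputs.

    ```python
    import numpy as np

    def vb_t_noise_solve(A, y, nu=4.0, sigma2=1.0, alpha=1.0, n_iter=50):
        """Sketch: solve y = A x + e with Student-t noise written as
        e_i ~ N(0, sigma2 / lam_i), lam_i ~ Gamma(nu/2, nu/2).  Alternate the
        closed-form expectation E[lam_i] = (nu + 1)/(nu + r_i^2/sigma2) with a
        weighted, ridge-regularised least-squares update of x."""
        m, n = A.shape
        x = np.zeros(n)
        for _ in range(n_iter):
            r = A @ x - y
            w = (nu + 1.0) / (nu + r**2 / sigma2)   # small weights for outliers
            AtW = A.T * w                           # equals A^T diag(w)
            x = np.linalg.solve(AtW @ A / sigma2 + alpha * np.eye(n),
                                AtW @ y / sigma2)
        return x
    ```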

  1. A Total Variation-Based Reconstruction Method for Dynamic MRI

    Directory of Open Access Journals (Sweden)

    Germana Landi

    2008-01-01

    In recent years, total variation (TV) regularization has become a popular and powerful tool for image restoration and enhancement. In this work, we apply TV minimization to improve the quality of dynamic magnetic resonance images. Dynamic magnetic resonance imaging is an increasingly popular clinical technique used to monitor spatio-temporal changes in tissue structure. Fast data acquisition is necessary in order to capture the dynamic process. Most commonly, the requirement of high temporal resolution is fulfilled by sacrificing spatial resolution. Therefore, the numerical methods have to address the issue of image reconstruction from limited Fourier data. One of the most successful techniques for dynamic imaging applications is the reduced-encoded imaging by generalized-series reconstruction method of Liang and Lauterbur. However, even if this method utilizes a priori data for optimal image reconstruction, the produced dynamic images are degraded by truncation artifacts, most notably Gibbs ringing, due to the low spatial resolution of the data. We use a TV regularization strategy in order to reduce these truncation artifacts in the dynamic images. The resulting TV minimization problem is solved by the fixed point iteration method of Vogel and Oman. The results of test problems with simulated and real data are presented to illustrate the effectiveness of the proposed approach in reducing the truncation artifacts of the reconstructed images.
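
    In schematic form (our notation), the reconstruction behind this record is a TV-regularised least-squares fit to the sampled k-space data, and the fixed point (lagged diffusivity) iteration of Vogel and Oman linearises the smoothed TV term at the current iterate:

    $$\min_{u}\ \tfrac{1}{2}\|\mathcal{A}u - d\|_2^2 + \alpha \int \sqrt{|\nabla u|^2 + \beta}\,dx, \qquad \bigl(\mathcal{A}^{*}\mathcal{A} + \alpha L(u^{k})\bigr)\,u^{k+1} = \mathcal{A}^{*}d,$$

    where $\mathcal{A}$ is the (partial) Fourier sampling operator, $\beta > 0$ a small smoothing parameter, and $L(u^{k})v = -\nabla\cdot\bigl(\nabla v / \sqrt{|\nabla u^{k}|^{2} + \beta}\bigr)$.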

  2. The Cluster Variation Method: A Primer for Neuroscientists

    Directory of Open Access Journals (Sweden)

    Alianna J. Maren

    2016-09-01

    Effective Brain–Computer Interfaces (BCIs) require that the time-varying activation patterns of 2-D neural ensembles be modelled. The cluster variation method (CVM) offers a means for the characterization of 2-D local pattern distributions. This paper provides neuroscientists and BCI researchers with a CVM tutorial that will help them to understand how the CVM statistical thermodynamics formulation can model 2-D pattern distributions expressing structural and functional dynamics in the brain. The premise is that local-in-time free energy minimization works alongside neural connectivity adaptation, supporting the development and stabilization of consistent stimulus-specific responsive activation patterns. The equilibrium distribution of local patterns, or configuration variables, is defined in terms of a single interaction enthalpy parameter (h) for the case of an equiprobable distribution of bistate (neural/neural ensemble) units. Thus, either one enthalpy parameter (or two, for the case of a non-equiprobable distribution) yields equilibrium configuration variable values. Modeling 2-D neural activation distribution patterns with the representational layer of a computational engine, we can thus correlate variational free energy minimization with specific configuration variable distributions. The CVM triplet configuration variables also map well to the notion of an M = 3 functional motif. This paper addresses the special case of an equiprobable unit distribution, for which an analytic solution can be found.

  3. A Variational Method in Out of Equilibrium Physical Systems

    CERN Document Server

    Pinheiro, Mario J

    2012-01-01

    A variational principle is further developed for out-of-equilibrium dynamical systems by using the concept of maximum entropy. With this new formulation, a set of two first-order differential equations is obtained, revealing the same formal symplectic structure shared by classical mechanics, fluid mechanics and thermodynamics. In particular, an extended equation of motion for a rotating dynamical system is obtained, from which there emerges a kind of topological torsion current of the form $\epsilon_{ijk} A_j \omega_k$, with $A_j$ and $\omega_k$ denoting components of the vector potential (gravitational and/or electromagnetic) and $\omega$ the angular velocity of the accelerated frame. In addition, a special form of Umov-Poynting's theorem for rotating gravito-electromagnetic systems is derived, and a general condition of equilibrium for a rotating plasma is obtained. The variational method is then applied to clarify the working mechanism of some particular devices, such as the Bennett pinch and vacuum a...

  4. Subspace Correction Methods for Total Variation and $\ell_1$-Minimization

    KAUST Repository

    Fornasier, Massimo

    2009-01-01

    This paper is concerned with the numerical minimization of energy functionals in Hilbert spaces involving convex constraints coinciding with a seminorm for a subspace. The optimization is realized by alternating minimizations of the functional on a sequence of orthogonal subspaces. On each subspace an iterative proximity-map algorithm is implemented via oblique thresholding, which is the main new tool introduced in this work. We provide convergence conditions for the algorithm in order to compute minimizers of the target energy. Analogous results are derived for a parallel variant of the algorithm. Applications are presented in domain decomposition methods for degenerate elliptic PDEs arising in total variation minimization and in accelerated sparse recovery algorithms based on $\ell_1$-minimization. We include numerical examples which show efficient solutions to classical problems in signal and image processing. © 2009 Society for Industrial and Applied Mathematics.

  5. Solving the Fractional Rosenau-Hyman Equation via Variational Iteration Method and Homotopy Perturbation Method

    Directory of Open Access Journals (Sweden)

    R. Yulita Molliq

    2012-01-01

    In this study, the fractional Rosenau-Hyman equation is considered. We implement relatively new analytical techniques, the variational iteration method and the homotopy perturbation method, for solving this equation. The fractional derivatives are described in the Caputo sense. The two methods in applied mathematics can be used as alternative methods for obtaining analytic and approximate solutions for fractional Rosenau-Hyman equations. In these schemes, the solution takes the form of a convergent series with easily computable components. The present methods perform extremely well in terms of efficiency and simplicity.

  6. Comment on “Variational Iteration Method for Fractional Calculus Using He’s Polynomials”

    Directory of Open Access Journals (Sweden)

    Ji-Huan He

    2012-01-01

    boundary value problems. This note concludes that the method is a modified variational iteration method using He’s polynomials. A standard variational iteration algorithm for fractional differential equations is suggested.

  7. Variational methods in electron-atom scattering theory

    CERN Document Server

    Nesbet, Robert K

    1980-01-01

    The investigation of scattering phenomena is a major theme of modern physics. A scattered particle provides a dynamical probe of the target system. The practical problem of interest here is the scattering of a low­ energy electron by an N-electron atom. It has been difficult in this area of study to achieve theoretical results that are even qualitatively correct, yet quantitative accuracy is often needed as an adjunct to experiment. The present book describes a quantitative theoretical method, or class of methods, that has been applied effectively to this problem. Quantum mechanical theory relevant to the scattering of an electron by an N-electron atom, which may gain or lose energy in the process, is summarized in Chapter 1. The variational theory itself is presented in Chapter 2, both as currently used and in forms that may facilitate future applications. The theory of multichannel resonance and threshold effects, which provide a rich structure to observed electron-atom scattering data, is presented in Cha...

  8. Local Fractional Laplace Variational Iteration Method for Solving Linear Partial Differential Equations with Local Fractional Derivative

    Directory of Open Access Journals (Sweden)

    Ai-Min Yang

    2014-01-01

    The local fractional Laplace variational iteration method was applied to solve linear local fractional partial differential equations. The local fractional Laplace variational iteration method couples the local fractional variational iteration method with the Laplace transform. The nondifferentiable approximate solutions are obtained and their graphs are also shown.

  9. The Semianalytical Solutions for Stiff Systems of Ordinary Differential Equations by Using Variational Iteration Method and Modified Variational Iteration Method with Comparison to Exact Solutions

    Directory of Open Access Journals (Sweden)

    Mehmet Tarik Atay

    2013-01-01

    The Variational Iteration Method (VIM) and Modified Variational Iteration Method (MVIM) are used to find solutions of systems of stiff ordinary differential equations for both linear and nonlinear problems. Some examples are given to illustrate the accuracy and effectiveness of these methods. We compare our results with exact results. In some studies related to stiff ordinary differential equations, problems were solved by the Adomian Decomposition Method, VIM and the Homotopy Perturbation Method. Comparisons with exact solutions reveal that the Variational Iteration Method (VIM) and the Modified Variational Iteration Method (MVIM) are easier to implement. In fact, these methods are promising methods for various systems of linear and nonlinear stiff ordinary differential equations. Furthermore, VIM, or in some cases MVIM, gives exact solutions in linear cases and very satisfactory solutions when compared to exact solutions for nonlinear cases, depending on the stiffness ratio of the stiff system to be solved.
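
    To make the iteration concrete, the sketch below runs the variational iteration method symbolically on the toy initial value problem y' + y = 0, y(0) = 1 (an illustrative example, not one of the stiff systems treated in the paper); for this first-order linear operator the Lagrange multiplier is λ(s) = −1, so successive iterates rebuild the Taylor series of exp(−t).

    ```python
    import sympy as sp

    t, s = sp.symbols('t s')

    def vim_iterations(n_iter=5):
        """Variational iteration method for y' + y = 0, y(0) = 1:
            u_{k+1}(t) = u_k(t) - int_0^t [u_k'(s) + u_k(s)] ds."""
        u = sp.Integer(1)                  # u_0 from the initial condition
        for _ in range(n_iter):
            correction = sp.integrate(-(sp.diff(u, t) + u).subs(t, s), (s, 0, t))
            u = sp.expand(u + correction)
        return u

    print(vim_iterations())   # 1 - t + t**2/2 - t**3/6 + ...
    ```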

  10. The bosonized version of the Schwinger model in four dimensions: A blueprint for confinement?

    Science.gov (United States)

    Aurilia, Antonio; Gaete, Patricio; Helayël-Neto, José A.; Spallucci, Euro

    2017-03-01

    For a (3 + 1)-dimensional generalization of the Schwinger model, we compute the interaction energy between two test charges. The result shows that the static potential profile contains a linear term leading to the confinement of probe charges, exactly as in the original model in two dimensions. We further show that the same 4-dimensional model also appears as one version of the B ∧ F models in (3 + 1) dimensions under dualization of Stueckelberg-like massive gauge theories. Interestingly, this particular model is characterized by the mixing between a U(1) potential and an Abelian 3-form field of the type that appears in the topological sector of QCD.

  11. Strange quark matter and quark stars with the Dyson-Schwinger quark model

    Energy Technology Data Exchange (ETDEWEB)

    Chen, H.; Wei, J.B. [China University of Geosciences, School of Mathematics and Physics, Wuhan (China); Schulze, H.J. [Universita di Catania, Dipartimento di Fisica, Catania (Italy); INFN, Sezione di Catania (Italy)

    2016-09-15

    We calculate the equation of state of strange quark matter and the interior structure of strange quark stars in a Dyson-Schwinger quark model within rainbow or Ball-Chiu vertex approximation. We emphasize constraints on the parameter space of the model due to stability conditions of ordinary nuclear matter. Respecting these constraints, we find that the maximum mass of strange quark stars is about 1.9 solar masses, and typical radii are 9-11 km. We obtain an energy release as large as 3.6 × 10⁵³ erg from conversion of neutron stars into strange quark stars. (orig.)

  12. Density induced phase transitions in the Schwinger model. A study with matrix product states

    Energy Technology Data Exchange (ETDEWEB)

    Banuls, Mari Carmen; Cirac, J. Ignacio; Kuehn, Stefan [Max-Planck-Institut fuer Quantenoptik (MPQ), Garching (Germany); Cichy, Krzysztof [Frankfurt Univ. (Germany). Inst. fuer Theoretische Physik; Adam Mickiewicz Univ., Poznan (Poland). Faculty of Physics; Jansen, Karl [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany). John von Neumann-Inst. fuer Computing NIC

    2017-02-15

    We numerically study the zero temperature phase structure of the multiflavor Schwinger model at nonzero chemical potential. Using matrix product states, we reproduce analytical results for the phase structure for two flavors in the massless case and extend the computation to the massive case, where no analytical predictions are available. Our calculations allow us to locate phase transitions in the mass-chemical potential plane with great precision and provide a concrete example of tensor networks overcoming the sign problem in a lattice gauge theory calculation.

  13. Delta and Omega electromagnetic form factors in a Dyson-Schwinger/Bethe-Salpeter approach

    Energy Technology Data Exchange (ETDEWEB)

    Diana Nicmorus, Gernot Eichmann, Reinhard Alkofer

    2010-12-01

    We investigate the electromagnetic form factors of the Delta and the Omega baryons within the Poincare-covariant framework of Dyson-Schwinger and Bethe-Salpeter equations. The three-quark core contributions of the form factors are evaluated by employing a quark-diquark approximation. We use a consistent setup for the quark-gluon dressing, the quark-quark bound-state kernel and the quark-photon interaction. Our predictions for the multipole form factors are compatible with available experimental data and quark-model estimates. The current-quark mass evolution of the static electromagnetic properties agrees with results provided by lattice calculations.

  14. Holographic description of the Schwinger effect in electric and magnetic field

    Science.gov (United States)

    Sato, Yoshiki; Yoshida, Kentaroh

    2013-04-01

    We consider a generalization of the holographic Schwinger effect proposed by Semenoff and Zarembo to the case with constant electric and magnetic fields. There are two ways to turn on magnetic fields, i) the probe D3-brane picture and ii) the string world-sheet picture. In the former picture, magnetic fields both perpendicular and parallel to the electric field are activated by a Lorentz transformation and a spatial rotation. In the latter one, the classical solutions of the string world-sheet corresponding to circular Wilson loops are generalized to contain two additional parameters encoding the presence of magnetic fields.

  15. The mass spectrum of the Schwinger model with matrix product states

    Energy Technology Data Exchange (ETDEWEB)

    Banuls, M.C.; Cirac, J.I. [Max-Planck-Institut fuer Quantenoptik (MPQ), Garching (Germany); Cichy, K. [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany); Poznan Univ. (Poland). Faculty of Physics; Jansen, K. [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany); Cyprus Univ., Nicosia (Cyprus). Dept. of Physics

    2013-07-15

    We show the feasibility of tensor network solutions for lattice gauge theories in Hamiltonian formulation by applying matrix product states algorithms to the Schwinger model with zero and non-vanishing fermion mass. We introduce new techniques to compute excitations in a system with open boundary conditions, and to identify the states corresponding to low momentum and different quantum numbers in the continuum. For the ground state and both the vector and scalar mass gaps in the massive case, the MPS technique attains precisions comparable to the best results available from other techniques.

  16. Variational methods applied to problems of diffusion and reaction

    CERN Document Server

    Strieder, William

    1973-01-01

    This monograph is an account of some problems involving diffusion or diffusion with simultaneous reaction that can be illuminated by the use of variational principles. It was written during a period that included sabbatical leaves of one of us (W. S.) at the University of Minnesota and the other (R. A.) at the University of Cambridge and we are grateful to the Petroleum Research Fund for helping to support the former and the Guggenheim Foundation for making possible the latter. We would also like to thank Stephen Prager for getting us together in the first place and for showing how interesting and useful these methods can be. We have also benefitted from correspondence with Dr. A. M. Arthurs of the University of York and from the counsel of Dr. B. D. Coleman, the general editor of this series. Table of Contents: Chapter 1. Introduction and Preliminaries; 1.1. General Survey; 1.2. Phenomenological Descriptions of Diffusion and Reaction; 1.3. Correlation Functions for Random Suspensions; 1.4. Mean Free ...

  17. Variational method for objective analysis of scalar variable and its ...

    Indian Academy of Sciences (India)

    In regard to variational optimization of a meteorological parameter, a given measure of the 'distance' between the variational scalar analysis and the standard scalar analysis is minimized. The variational analysed field must at the same time satisfy some constraint. The constraint is that the difference between derivative of ...

  18. A numerical method for variational problems with convexity constraints

    OpenAIRE

    Oberman, Adam M.

    2011-01-01

    We consider the problem of approximating the solution of variational problems subject to the constraint that the admissible functions must be convex. This problem is at the interface between convex analysis, convex optimization, variational problems, and partial differential equation techniques. The approach is to approximate the (non-polyhedral) cone of convex functions by a polyhedral cone which can be represented by linear inequalities. This approach leads to an optimization problem with l...

  19. Perturbative vs. variational methods in the study of carbon nanotubes

    DEFF Research Database (Denmark)

    Cornean, Horia; Pedersen, Thomas Garm; Ricaud, Benjamin

    2007-01-01

    Recent two-photon photo-luminescence experiments give accurate data for the ground and first excited excitonic energies at different nanotube radii. In this paper we compare the analytic approximations proved in [CDR] with a standard variational approach. We show an excellent agreement at suffic...

  20. A generalized relative total variation method for image smoothing

    NARCIS (Netherlands)

    Liu, Qiegen; Xiong, Biao; Yang, Dingcheng; Zhang, Minghui

    2016-01-01

    Recently, two piecewise smooth models, L0 smoothing and relative total variation (RTV), have been proposed for feature/structure-preserving filtering. One is very efficient for tackling images with few texture patterns, and the other performs well on images with abundant uniform textural ...

  1. The variational iteration method and the variational homotopy perturbation method for solving the KdV-Burgers equation and the Sharma-Tasso-Olver equation

    Energy Technology Data Exchange (ETDEWEB)

    Zayed, Elsayed M.E. [Dept. of Mathematics, Zagazig Univ. (Egypt); Abdel Rahman, Hanan M. [Dept. of Basic Sciences, Higher Technological Inst., Tenth of Ramadan City (Egypt)

    2010-01-15

    In this article, two powerful analytical methods, the variational iteration method (VIM) and the variational homotopy perturbation method (VHPM), are introduced to obtain the exact and the numerical solutions of the (2+1)-dimensional Korteweg-de Vries-Burgers (KdVB) equation and the (1+1)-dimensional Sharma-Tasso-Olver equation. The main objective of the present article is to propose alternative methods of solution which avoid linearization and physically unrealistic assumptions. The results show that these methods are very efficient, convenient and can be applied to a large class of nonlinear problems. (orig.)
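
    To make the iteration scheme mentioned in the record above concrete, the following is a minimal sympy sketch of a VIM correction functional for a toy first-order problem u' + u = 0, u(0) = 1 (not one of the equations treated in the paper); the simplified Lagrange multiplier λ(s) = -1 and the number of iterations are illustrative choices.

```python
import sympy as sp

t, s = sp.symbols("t s")

# Toy problem: u'(t) + u(t) = 0 with u(0) = 1; the exact solution is exp(-t).
# VIM correction functional with the simplified multiplier lambda(s) = -1:
#   u_{n+1}(t) = u_n(t) - Integral_0^t [u_n'(s) + u_n(s)] ds
def vim_step(u):
    integrand = (sp.diff(u, t) + u).subs(t, s)
    return sp.expand(u - sp.integrate(integrand, (s, 0, t)))

u = sp.Integer(1)            # initial approximation u_0 = u(0)
for _ in range(5):
    u = vim_step(u)

print(u)                     # partial sum of the Taylor series of exp(-t)
```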

  2. Interpretation of biological and mechanical variations between the Lowry versus Bradford method for protein quantification

    National Research Council Canada - National Science Library

    Lu, Tzong-Shi; Yiao, Szu-Yu; Lim, Kenneth; Jensen, Roderick V; Hsiao, Li-Li

    2010-01-01

    ... We used the Lowry and Bradford methods, the two most commonly used methods for protein quantification, to assess whether differential protein expressions are a result of true biological or methodical variations. MATERIAL & ...

  3. Colour based fire detection method with temporal intensity variation filtration

    Science.gov (United States)

    Trambitckii, K.; Anding, K.; Musalimov, V.; Linß, G.

    2015-02-01

    The development of video and computing technologies and of computer vision makes automatic fire detection from video information possible. Within that project, different algorithms were implemented to find a more efficient way of detecting fire. In this article a colour-based fire detection algorithm is described. However, colour information alone is not enough to detect fire reliably, the main reason being that the scene may contain many objects whose colour is similar to that of fire. A temporal intensity variation of pixels is therefore used to separate such objects from the fire; these variations are averaged over a series of several frames. The algorithm works robustly and was implemented as a computer program using the OpenCV library.

  4. Implicitness and experimental methods in language variation research

    OpenAIRE

    Rosseel, Laura; Grondelaers, Stefan

    2017-01-01

    Implicitness, whether it is used in the context of language attitude research (Garrett 2010), work on language regard (Preston 2010) or studies focussing on the social meaning of language variation (Campbell-Kibler 2007), is a problematic concept in linguistics. Few researchers have taken up the challenge of reflecting on and defining its nature, let alone pinpointing its theoretical significance or how exactly we can measure it. Firstly, from a conceptual point of vie...

  5. Some new mathematical methods for variational objective analysis

    Science.gov (United States)

    Wahba, Grace; Johnson, Donald R.

    1994-01-01

    Numerous results were obtained relevant to remote sensing, variational objective analysis, and data assimilation. A list of publications relevant in whole or in part is attached. The principal investigator gave many invited lectures, disseminating the results to the meteorological community as well as the statistical community. A list of invited lectures at meetings is attached, as well as a list of departmental colloquia at various universities and institutes.

  6. Transcendental equations in the Schwinger-Keldysh nonequilibrium theory and nonvanishing correlations

    Energy Technology Data Exchange (ETDEWEB)

    Giraldi, Filippo [School of Chemistry and Physics, University of KwaZulu-Natal and National Institute for Theoretical Physics (NITheP), KwaZulu-Natal, Westville Campus, Durban 4000, South Africa and Gruppo Nazionale per la Fisica Matematica (GNFM-INdAM), c/o Istituto Nazionale di Alta Matematica Francesco Severi, Città Universitaria, Piazza Aldo Moro 5, 00185 Roma (Italy)

    2015-09-15

    The Schwinger-Keldysh nonequilibrium theory allows the description of various transport phenomena involving bosons (fermions) embedded in bosonic (fermionic) environments. The retarded Green's function obeys the Dyson equation and determines, via its non-vanishing asymptotic behavior, the dissipationless open dynamics. The appearance of this regime is conditioned by the existence of solutions of a general class of transcendental equations in the complex domain that we study. Particular cases consist of transcendental equations containing exponential, hyperbolic, power law, logarithmic, and special functions. The present analysis provides an analytical description of the thermal and temporal correlation function of two general observables of a quantum system in terms of the corresponding spectral function. Special integral properties of the spectral function guarantee non-vanishing asymptotic behavior of the correlation function.

  7. Schwinger's quantum action principle [Princípio de ação quântica de Schwinger]

    OpenAIRE

    Melo,C.A.M. de; Pimentel,B.M.; Ramirez,J.A.

    2013-01-01

    Schwinger's quantum action principle is a dynamical characterization of the transformation functions and is grounded in the algebraic structure derived from the kinematic analysis of measurement processes at the quantum level. As such, this variational principle allows the canonical commutation relations to be derived in a fully consistent way. Moreover, it provides the Schrödinger and Heisenberg dynamical descriptions and a Hamilton-Jacobi equation at the quantum level. We will implement this formalism...

  8. How to solve the Schwinger-Dyson equations once and for all gauges

    Energy Technology Data Exchange (ETDEWEB)

    Bashir, A [Instituto de Fisica y Matematicas, Universidad Michoacana de San Nicolas de Hidalgo, Apartado Postal 2-82, Morelia, Michoacan 58040 (Mexico); Raya, A [Facultad de Ciencias, Universidad de Colima. Bernal Díaz del Castillo #340, Col. Villa San Sebastian, Colima, Colima 28045 (Mexico)]

    2006-05-15

    Studying the Schwinger-Dyson equation (SDE) for the fermion propagator to obtain a dynamically generated chirally asymmetric solution in any covariant gauge is a complicated numerical exercise, especially if one employs a sophisticated form of the fermion-boson interaction complying with the key features of a gauge field theory. However, the constraints of gauge invariance can help construct such a solution without needing to solve the SDE for every value of the gauge parameter. Starting from the Landau gauge, where the computational complications are still manageable, we apply the Landau-Khalatnikov-Fradkin transformation (LKFT) to the dynamically generated solution and find approximate analytical results for arbitrary values of the covariant gauge parameter. We also compare our results with exact numerical solutions.

  9. Variational methods for crystalline microstructure analysis and computation

    CERN Document Server

    Dolzmann, Georg

    2003-01-01

    Phase transformations in solids typically lead to surprising mechanical behaviour with far reaching technological applications. The mathematical modeling of these transformations in the late 80s initiated a new field of research in applied mathematics, often referred to as mathematical materials science, with deep connections to the calculus of variations and the theory of partial differential equations. This volume gives a brief introduction to the essential physical background, in particular for shape memory alloys and a special class of polymers (nematic elastomers). Then the underlying mathematical concepts are presented with a strong emphasis on the importance of quasiconvex hulls of sets for experiments, analytical approaches, and numerical simulations.

  10. Quantum Monte Carlo diagonalization method as a variational calculation

    Energy Technology Data Exchange (ETDEWEB)

    Mizusaki, Takahiro; Otsuka, Takaharu [Tokyo Univ. (Japan). Dept. of Physics; Honma, Michio

    1997-05-01

    A stochastic method for performing large-scale shell model calculations is presented, which utilizes the auxiliary field Monte Carlo technique and a diagonalization method. This method overcomes the limitations of conventional shell model diagonalization and greatly widens the feasibility of shell model calculations with realistic interactions for spectroscopic studies of nuclear structure. (author)

  11. Interpretation of biological and mechanical variations between the Lowry versus Bradford method for protein quantification

    OpenAIRE

    Tzong-Shi Lu; Szu-Yu Yiao; Kenneth Lim; Jensen, Roderick V.; Li-Li Hsiao

    2010-01-01

    Background: The identification of differences in protein expression resulting from methodical variations is an essential component of the interpretation of true, biologically significant results. Aims: We used the Lowry and Bradford methods, the two most commonly used methods for protein quantification, to assess whether differential protein expressions are a result of true biological or methodical variations. Material & Methods: Differential protein expression patterns were assessed by western bl...

  12. On Two Iterative Methods for Mixed Monotone Variational Inequalities

    Directory of Open Access Journals (Sweden)

    Xiwen Lu

    2010-01-01

    A mixed monotone variational inequality (MMVI) problem in a Hilbert space H is formulated to find a point u∗ ∈ H such that 〈Tu∗, v − u∗〉 + φ(v) − φ(u∗) ≥ 0 for all v ∈ H, where T is a monotone operator and φ is a proper, convex, and lower semicontinuous function on H. Iterative algorithms are usually applied to find a solution of an MMVI problem. We show that the iterative algorithm introduced in the work of Wang et al. (2001) has, in general, only weak convergence in an infinite-dimensional space, and that the algorithm introduced in the paper of Noor (2001) fails in general to converge to a solution.
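
    For orientation, the inclusion problem behind an MMVI, 0 ∈ T(u) + ∂φ(u), is often attacked with a forward-backward (proximal gradient) iteration. The Python sketch below illustrates that generic scheme on an assumed toy operator T(u) = Au + b and φ(u) = μ‖u‖₁; it is not the Wang et al. or Noor algorithm analyzed in the record above, and all parameters are illustrative.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal map of t * ||.||_1 (the phi chosen for this illustration)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def forward_backward(T, prox, u0, lam, iters=2000):
    """u_{k+1} = prox_{lam*phi}(u_k - lam*T(u_k)), targeting 0 in T(u) + d(phi)(u)."""
    u = u0.copy()
    for _ in range(iters):
        u = prox(u - lam * T(u), lam)
    return u

rng = np.random.default_rng(1)
M = rng.standard_normal((8, 8))
A = M.T @ M + 0.1 * np.eye(8)          # symmetric positive definite, so T is monotone
b = rng.standard_normal(8)
T = lambda u: A @ u + b
mu = 0.5                                # weight of phi(u) = mu * ||u||_1
lam = 1.0 / np.linalg.norm(A, 2)        # step size within the stable range
u_star = forward_backward(T, lambda v, s: soft_threshold(v, s * mu), np.zeros(8), lam)
print(u_star)
```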

  13. Advanced methods in the fractional calculus of variations

    CERN Document Server

    Malinowska, Agnieszka B; Torres, Delfim F M

    2015-01-01

    This brief presents a general unifying perspective on the fractional calculus. It brings together results of several recent approaches in generalizing the least action principle and the Euler–Lagrange equations to include fractional derivatives. The dependence of Lagrangians on generalized fractional operators as well as on classical derivatives is considered along with still more general problems in which integer-order integrals are replaced by fractional integrals. General theorems are obtained for several types of variational problems for which recent results developed in the literature can be obtained as special cases. In particular, the authors offer necessary optimality conditions of Euler–Lagrange type for the fundamental and isoperimetric problems, transversality conditions, and Noether symmetry theorems. The existence of solutions is demonstrated under Tonelli type conditions. The results are used to prove the existence of eigenvalues and corresponding orthogonal eigenfunctions of fractional Stur...

  14. A Local Fractional Variational Iteration Method for Laplace Equation within Local Fractional Operators

    Directory of Open Access Journals (Sweden)

    Yong-Ju Yang

    2013-01-01

    The local fractional variational iteration method for the local fractional Laplace equation is investigated in this paper. The operators are described in the sense of local fractional operators. The obtained results reveal that the method is very effective.

  15. Mathematical methods in physics distributions, Hilbert space operators, variational methods, and applications in quantum physics

    CERN Document Server

    Blanchard, Philippe

    2015-01-01

    The second edition of this textbook presents the basic mathematical knowledge and skills that are needed for courses on modern theoretical physics, such as those on quantum mechanics, classical and quantum field theory, and related areas.  The authors stress that learning mathematical physics is not a passive process and include numerous detailed proofs, examples, and over 200 exercises, as well as hints linking mathematical concepts and results to the relevant physical concepts and theories.  All of the material from the first edition has been updated, and five new chapters have been added on such topics as distributions, Hilbert space operators, and variational methods.   The text is divided into three main parts. Part I is a brief introduction to distribution theory, in which elements from the theories of ultradistributions and hyperfunctions are considered in addition to some deeper results for Schwartz distributions, thus providing a comprehensive introduction to the theory of generalized functions. P...

  16. Revised Variational Iteration Method for Solving Systems of Nonlinear Fractional-Order Differential Equations

    Directory of Open Access Journals (Sweden)

    C. Ünlü

    2013-01-01

    A modification of the variational iteration method (VIM) for solving systems of nonlinear fractional-order differential equations is proposed. The fractional derivatives are described in the Caputo sense. The solutions of fractional differential equations (FDEs) obtained using the traditional variational iteration method give good approximations in the neighborhood of the initial position. The main advantage of the present method is that it can accelerate the convergence of the iterative approximate solutions relative to the approximate solutions obtained using the traditional variational iteration method. Illustrative examples are presented to show the validity of this modification.

  17. Generalized Jastrow variational method for dense Fermi systems

    Science.gov (United States)

    Inguva, Ramarao; Smith, C. Ray

    1983-04-01

    In this paper, we outline a simple method whereby the antisymmetry of the wave function can be incorporated exactly in the Jastrow many-body theory. Applications of this method to the "homework problem" for neutron matter using the hypernetted-chain approximation give results in very good agreement with the Fermi-hypernetted-chain approximation calculations of Fantoni and Rosati. The calculations for liquid ³He at a Fermi wave number k_F = 0.75 Å⁻¹ give results close to the Monte Carlo calculations of Ceperley, Chester, and Kalos.

  18. An Interior Projected-Like Subgradient Method for Mixed Variational Inequalities

    Directory of Open Access Journals (Sweden)

    Guo-ji Tang

    2014-01-01

    An interior projected-like subgradient method for mixed variational inequalities is proposed in finite-dimensional spaces, based on a non-Euclidean projection-like operator. Under suitable assumptions, we prove that the sequence generated by the proposed method converges to a solution of the mixed variational inequality. Moreover, we give a convergence estimate for the method. The results presented in this paper generalize some recent results given in the literature.

  19. Schwinger-Dyson operators as invariant vector fields on a matrix model analog of the group of loops

    Science.gov (United States)

    Krishnaswami, Govind S.

    2008-06-01

    For a class of large-N multimatrix models, we identify a group G that plays the same role as the group of loops on space-time does for Yang-Mills theory. G is the spectrum of a commutative shuffle-deconcatenation Hopf algebra that we associate with correlations. G is the exponential of the free Lie algebra. The generating series of correlations is a function on G and satisfies quadratic equations in convolution. These factorized Schwinger-Dyson or loop equations involve a collection of Schwinger-Dyson operators, which are shown to be right-invariant vector fields on G, one for each linearly independent primitive of the Hopf algebra. A large class of formal matrix models satisfying these properties are identified, including as special cases, the zero momentum limits of the Gaussian, Chern-Simons, and Yang-Mills field theories. Moreover, the Schwinger-Dyson operators of the continuum Yang-Mills action are shown to be right-invariant derivations of the shuffle-deconcatenation Hopf algebra generated by sources labeled by position and polarization.

  20. Projection Method for Flows with Large Density Variations

    Science.gov (United States)

    Heinrich, Juan C.; Westra, Douglas G.

    2007-01-01

    Numerical models of solidification including a mushy zone are notoriously inefficient; most of them are based on formulations that require the coupled solution of the velocity components in the momentum equation, greatly restricting the range of applicability of the models. There are only two models known to the authors that have used a projection or fractional-step formulation, but neither of these was used to model problems of any significant size. A third model was only applied to a partial mushy zone with no all-fluid region. Our initial attempts at modeling directional solidification in the presence of a developing mushy zone using a projection formulation encountered very serious difficulties once solidification started. These difficulties were traced to the inability of the method to deal with large local density differences in the vicinity of the fluid-mush interface. As a result, a modified formulation of the projection method has been developed that maintains the coupling between the body force and the pressure gradient, and it is presented in this work. The new formulation is shown to be robust and efficient and can be applied to problems involving very large meshes. This is illustrated in this work through its application to simulations involving Pb-Sb and Pb-Sn alloys.

  1. The Subgradient Extragradient Method for Solving Variational Inequalities in Hilbert Space.

    Science.gov (United States)

    Censor, Y; Gibali, A; Reich, S

    2011-02-01

    We present a subgradient extragradient method for solving variational inequalities in Hilbert space. In addition, we propose a modified version of our algorithm that finds a solution of a variational inequality which is also a fixed point of a given nonexpansive mapping. We establish weak convergence theorems for both algorithms.
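
    The update rule behind the subgradient extragradient method replaces the second projection onto the feasible set C by a cheap projection onto a half-space containing C. Below is a minimal Python sketch of that scheme under assumed ingredients: a box feasible set, an affine monotone operator F(x) = Ax + b, and a step size below 1/L; none of these choices come from the paper.

```python
import numpy as np

def project_box(x, lo=-1.0, hi=1.0):
    """Euclidean projection onto the feasible set C = [lo, hi]^n (assumed here)."""
    return np.clip(x, lo, hi)

def project_halfspace(z, a, y):
    """Project z onto the half-space {w : <a, w - y> <= 0}."""
    viol = a @ (z - y)
    if viol <= 0.0 or not np.any(a):
        return z
    return z - (viol / (a @ a)) * a

def subgradient_extragradient(F, proj_C, x0, tau, iters=5000):
    x = x0.copy()
    for _ in range(iters):
        Fx = F(x)
        y = proj_C(x - tau * Fx)                      # first (extragradient) step onto C
        a = (x - tau * Fx) - y                        # normal of the half-space T_k containing C
        x = project_halfspace(x - tau * F(y), a, y)   # cheap second projection onto T_k
    return x

rng = np.random.default_rng(0)
M = rng.standard_normal((5, 5))
A = M.T @ M                                           # positive semidefinite, so F is monotone
b = rng.standard_normal(5)
F = lambda x: A @ x + b
L = np.linalg.norm(A, 2)                              # Lipschitz constant of F
print(subgradient_extragradient(F, project_box, np.zeros(5), tau=0.9 / L))
```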

  2. Evaluation of multiple variate selection methods from a biological perspective: A nutrigenomics case study

    NARCIS (Netherlands)

    Tapp, H.S.; Radonjic, M.; Kemsley, E.K.; Thissen, U.

    2012-01-01

    Genomics-based technologies produce large amounts of data. To interpret the results and identify the most important variates related to phenotypes of interest, various multivariate regression and variate selection methods are used. Although inspected for statistical performance, the relevance of

  3. Interpretation of biological and mechanical variations between the Lowry versus Bradford method for protein quantification.

    Science.gov (United States)

    Lu, Tzong-Shi; Yiao, Szu-Yu; Lim, Kenneth; Jensen, Roderick V; Hsiao, Li-Li

    2010-07-01

    The identification of differences in protein expression resulting from methodical variations is an essential component of the interpretation of true, biologically significant results. We used the Lowry and Bradford methods, the two most commonly used methods for protein quantification, to assess whether differential protein expressions are a result of true biological or methodical variations. MATERIAL & METHODS: Differential protein expression patterns were assessed by western blot following protein quantification by the Lowry and Bradford methods. We have observed significant variations in protein concentrations following assessment with the Lowry versus Bradford methods, using identical samples. Greater variations in protein concentration readings were observed over time and in samples with higher concentrations with the Bradford method. Identical samples quantified using both methods yielded significantly different expression patterns on western blot. We show for the first time that the methodical variations observed in these protein assay techniques can potentially translate into differential protein expression patterns that can be falsely taken to be biologically significant. Our study therefore highlights the pivotal need to carefully consider methodical approaches to protein quantification in techniques that report quantitative differences.

  4. Comparison of variations detection between whole-genome amplification methods used in single-cell resequencing

    DEFF Research Database (Denmark)

    Hou, Yong; Wu, Kui; Shi, Xulian

    2015-01-01

    BACKGROUND: Single-cell resequencing (SCRS) provides many biomedical advances in variation detection at the single-cell level, but it currently relies on whole-genome amplification (WGA). Three methods are commonly used for WGA: multiple displacement amplification (MDA), degenerate-oligonucleoti...

  5. Partial differential equations with variable exponents variational methods and qualitative analysis

    CERN Document Server

    Radulescu, Vicentiu D

    2015-01-01

    Partial Differential Equations with Variable Exponents: Variational Methods and Qualitative Analysis provides researchers and graduate students with a thorough introduction to the theory of nonlinear partial differential equations (PDEs) with a variable exponent, particularly those of elliptic type. The book presents the most important variational methods for elliptic PDEs described by nonhomogeneous differential operators and containing one or more power-type nonlinearities with a variable exponent. The authors give a systematic treatment of the basic mathematical theory and constructive meth

  6. Challenges in inflationary magnetogenesis: Constraints from strong coupling, backreaction, and the Schwinger effect

    Science.gov (United States)

    Sharma, Ramkishor; Jagannathan, Sandhya; Seshadri, T. R.; Subramanian, Kandaswamy

    2017-10-01

    Models of inflationary magnetogenesis with a coupling to the electromagnetic action of the form f²FμνFμν are known to suffer from several problems. These include the strong coupling problem, the backreaction problem and also strong constraints due to the Schwinger effect. We propose a model which resolves all these issues. In our model, the coupling function, f, grows during inflation and transits to a decaying phase post-inflation. This evolutionary behavior is chosen so as to avoid the problem of strong coupling. By assuming a suitable power-law form of the coupling function, we can also neglect backreaction effects during inflation. To avoid backreaction post-inflation, we find that the reheating temperature is restricted to be below ≈1.7×10⁴ GeV. The magnetic energy spectrum is predicted to be nonhelical and generically blue. The estimated present-day magnetic field strength and the corresponding coherence length, taking reheating at the QCD epoch (150 MeV), are 1.4×10⁻¹² G and 6.1×10⁻⁴ Mpc, respectively. This is obtained after taking account of nonlinear processing over and above the flux-freezing evolution after reheating. If we consider also the possibility of a nonhelical inverse transfer, as indicated in direct numerical simulations, the coherence length and the magnetic field strength are even larger. In all cases mentioned above, the magnetic fields generated in our models satisfy the γ-ray bound below a certain reheating temperature.

  7. Multiloop Euler-Heisenberg Lagrangians, Schwinger Pair Creation, and the Photon S-Matrix

    Science.gov (United States)

    Huet, I.; de Traubenberg, M. R.; Schubert, C.

    2017-03-01

    Although the perturbation series in quantum electrodynamics has been studied for eighty years concerning its high-order behavior, our present understanding is still poorer than for many other field theories. An interesting case is Schwinger pair creation in a constant electric field, which may possibly provide a window to high loop orders; simple non-perturbative closed-form expressions have been conjectured for the pair creation rate in the weak field limit, for scalar QED in 1982 by Affleck, Alvarez, and Manton, and for spinor QED by Lebedev and Ritus in 1984. Using Borel analysis, these can be used to obtain non-perturbative information on the on-shell renormalized N-photon amplitudes at large N and low energy. This line of reasoning also leads to a number of nontrivial predictions for the effective QED Lagrangian in either four or two dimensions at any loop order, and preliminary results of a calculation of the three-loop Euler-Heisenberg Lagrangian in two dimensions are presented.

  8. Numerical Modeling of Stokes Flow in a Circular Cavity by Variational Multiscale Element Free Galerkin Method

    Directory of Open Access Journals (Sweden)

    Ping Zhang

    2014-01-01

    The variational multiscale element-free Galerkin method is extended to simulate Stokes flow problems in a circular cavity, an irregular geometry. The method combines Hughes's variational multiscale formulation with the element-free Galerkin method and thus inherits the advantages of both variational multiscale and meshless methods. Meanwhile, a simple technique is adopted to impose the essential boundary conditions, which makes it easy to solve problems with complex domains. Finally, two examples are solved and good results are obtained as compared with analytical and numerical solutions, which demonstrates that the proposed method is an attractive approach for solving incompressible fluid flow problems in terms of accuracy and stability, even for complex irregular boundaries.

  9. Couple of the Variational Iteration Method and Fractional-Order Legendre Functions Method for Fractional Differential Equations

    Science.gov (United States)

    Song, Junqiang; Leng, Hongze; Lu, Fengshun

    2014-01-01

    We present a new numerical method to get the approximate solutions of fractional differential equations. A new operational matrix of integration for fractional-order Legendre functions (FLFs) is first derived. Then a modified variational iteration formula which can avoid “noise terms” is constructed. Finally a numerical method based on variational iteration method (VIM) and FLFs is developed for fractional differential equations (FDEs). Block-pulse functions (BPFs) are used to calculate the FLFs coefficient matrices of the nonlinear terms. Five examples are discussed to demonstrate the validity and applicability of the technique. PMID:24511303

  10. The Deep Ritz method: A deep learning-based numerical algorithm for solving variational problems

    OpenAIRE

    E, Weinan; Yu, Bing

    2017-01-01

    We propose a deep learning based method, the Deep Ritz Method, for numerically solving variational problems, particularly the ones that arise from partial differential equations. The Deep Ritz method is naturally nonlinear, naturally adaptive and has the potential to work in rather high dimensions. The framework is quite simple and fits well with the stochastic gradient descent method used in deep learning. We illustrate the method on several problems including some eigenvalue problems.
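
    As a concrete illustration of the Ritz idea described above, here is a minimal PyTorch sketch for the 1D Poisson problem -u'' = f on (0,1) with zero boundary values, where the energy ∫(½u'² - fu)dx is minimized over a small neural network and the boundary condition is imposed by a penalty; the network size, penalty weight, and sampling scheme are illustrative assumptions, not the paper's setup.

```python
import math
import torch

torch.manual_seed(0)

# Small trial network u_theta(x); the architecture is an illustrative choice.
net = torch.nn.Sequential(
    torch.nn.Linear(1, 20), torch.nn.Tanh(),
    torch.nn.Linear(20, 20), torch.nn.Tanh(),
    torch.nn.Linear(20, 1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

def f(x):                        # source term; the exact solution is sin(pi*x)
    return (math.pi ** 2) * torch.sin(math.pi * x)

beta = 500.0                     # boundary penalty weight (assumed value)
for step in range(5000):
    x = torch.rand(256, 1, requires_grad=True)           # Monte Carlo points in (0,1)
    u = net(x)
    du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    energy = (0.5 * du ** 2 - f(x) * u).mean()            # Ritz energy density, averaged
    xb = torch.tensor([[0.0], [1.0]])
    loss = energy + beta * (net(xb) ** 2).mean()          # soft Dirichlet condition
    opt.zero_grad()
    loss.backward()
    opt.step()

print(net(torch.tensor([[0.5]])).item())                  # should approach sin(pi/2) = 1
```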

  11. New theoretical methods for studies on electrons and positrons scattering involving multichannel processes

    CERN Document Server

    Lara, O

    1995-01-01

    ... continued fractions are now in progress. It is well known that multichannel effects strongly influence low-energy electron scattering by atoms and molecules. Nevertheless, the inclusion of such effects in the calculation of scattering cross sections remains a considerable task for researchers in the area due to the complexity of the problem. In the present study we aim to develop new theoretical methods which can be efficiently applied to multichannel scattering studies. Two new theoretical formalisms, namely the Multichannel sup - C-Functional Method, have been proposed. Both methods were developed on the basis of the well-known distorted-wave method combined with the Schwinger variational principle. In addition, an iterative method proposed by Horacek and Sasakawa in 1983, the method of continued fractions, is adapted for the first time to multichannel scattering. Numerical tests of these three methods were carried out through applications to solve multichannel scattering problems involving the interaction o...

  12. An alternative approach to differential-difference equations using the variational iteration method

    Energy Technology Data Exchange (ETDEWEB)

    Faraz, Naeem; Khan, Yasir [Donghua Univ., Shanghai (China). Modern Textile Inst.; Austin, Francis [Hong Kong Polytechnic Univ., Kowloon (China). Dept. of Applied Mathematics

    2010-12-15

    Although a variational iteration algorithm was proposed by Yildirim (Math. Prob. Eng. 2008 (2008), Article ID 869614) that successfully solves differential-difference equations, the method involves some repeated and unnecessary iterations in each step. An alternative iteration algorithm (variational iteration algorithm-II) is constructed in this paper that overcomes this shortcoming and promises to provide a universal mathematical tool for many differential-difference equations. (orig.)

  13. Non-local total variation method for despeckling of ultrasound images

    Science.gov (United States)

    Feng, Jianbin; Ding, Mingyue; Zhang, Xuming

    2014-03-01

    Despeckling of ultrasound images, a very active research topic in medical image processing, plays an important or even indispensable role in subsequent ultrasound image processing. The non-local total variation (NLTV) method has been widely applied to denoising images corrupted by Gaussian noise, but it cannot provide satisfactory restoration results for ultrasound images corrupted by speckle noise. To address this problem, a novel non-local total variation despeckling method is proposed for speckle reduction. In the proposed method, the non-local gradient is computed on the images restored by the optimized Bayesian non-local means (OBNLM) method and is introduced into the total variation method to suppress speckle in ultrasound images. Comparisons of the restoration performance are made among the proposed method and such state-of-the-art despeckling methods as the squeeze box filter (SBF), the non-local means (NLM) method and the OBNLM method. The quantitative comparisons based on synthetic speckled images show that the proposed method can provide a higher peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) than the compared despeckling methods. The subjective visual comparisons based on synthetic and real ultrasound images demonstrate that the proposed method outperforms the other compared algorithms in that it can achieve better noise reduction, artifact avoidance, and edge and texture preservation.
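
    For readers unfamiliar with the total variation building block of such schemes, the following Python sketch performs plain gradient descent on the smoothed ROF energy for a toy speckled image; it illustrates only the classical TV step, not the paper's NLTV/OBNLM pipeline, and the regularization weight, smoothing parameter, and step size are illustrative assumptions.

```python
import numpy as np

def grad(u):
    """Forward-difference gradient with Neumann boundary conditions."""
    gx = np.zeros_like(u); gy = np.zeros_like(u)
    gx[:-1, :] = u[1:, :] - u[:-1, :]
    gy[:, :-1] = u[:, 1:] - u[:, :-1]
    return gx, gy

def div(px, py):
    """Backward-difference divergence, the negative adjoint of grad."""
    dx = np.zeros_like(px); dy = np.zeros_like(py)
    dx[0, :] = px[0, :]; dx[1:-1, :] = px[1:-1, :] - px[:-2, :]; dx[-1, :] = -px[-2, :]
    dy[:, 0] = py[:, 0]; dy[:, 1:-1] = py[:, 1:-1] - py[:, :-2]; dy[:, -1] = -py[:, -2]
    return dx + dy

def tv_denoise(f, lam=5.0, eps=0.1, dt=0.01, iters=500):
    """Gradient descent on E(u) = sum sqrt(|grad u|^2 + eps^2) + lam/2 * sum (u - f)^2.
    The step dt must stay well below 2/(8/eps + lam) for stability."""
    u = f.copy()
    for _ in range(iters):
        gx, gy = grad(u)
        norm = np.sqrt(gx ** 2 + gy ** 2 + eps ** 2)
        u -= dt * (-div(gx / norm, gy / norm) + lam * (u - f))
    return u

# toy "speckled" image: a piecewise-constant square with multiplicative noise
rng = np.random.default_rng(0)
clean = np.zeros((128, 128)); clean[32:96, 32:96] = 1.0
noisy = clean * (1.0 + 0.4 * rng.standard_normal(clean.shape))
restored = tv_denoise(noisy)
print("noisy MSE:", float(np.mean((noisy - clean) ** 2)),
      "restored MSE:", float(np.mean((restored - clean) ** 2)))
```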

  14. Variational Effective Index Method for 3D Vectorial Scattering Problems in Photonics: TE Polarization

    NARCIS (Netherlands)

    Ivanova, Alyona; Stoffer, Remco; Kauppinen, L.J.; Hammer, Manfred

    2009-01-01

    In order to reduce the computational effort we develop a method for 3D-to-2D dimensionality reduction of scattering problems in photonics. Contrary to the 'standard' Effective Index Method, the effective parameters of the reduced problem are always rigorously defined using the variational technique,

  15. Introduction to the Special Issue on Advancing Methods for Analyzing Dialect Variation.

    Science.gov (United States)

    Clopper, Cynthia G

    2017-07-01

    Documenting and analyzing dialect variation is traditionally the domain of dialectology and sociolinguistics. However, modern approaches to acoustic analysis of dialect variation have their roots in Peterson and Barney's [(1952). J. Acoust. Soc. Am. 24, 175-184] foundational work on the acoustic analysis of vowels that was published in the Journal of the Acoustical Society of America (JASA) over 6 decades ago. Although Peterson and Barney (1952) were not primarily concerned with dialect variation, their methods laid the groundwork for the acoustic methods that are still used by scholars today to analyze vowel variation within and across languages. In more recent decades, a number of methodological advances in the study of vowel variation have been published in JASA, including work on acoustic vowel overlap and vowel normalization. The goal of this special issue was to honor that tradition by bringing together a set of papers describing the application of emerging acoustic, articulatory, and computational methods to the analysis of dialect variation in vowels and beyond.

  16. Experimental Research of the Dithering Ring Laser Gyro Triad by Allan Variations Method

    Directory of Open Access Journals (Sweden)

    A. A. Aviev

    2016-01-01

    The paper studies the error characteristics of a dithering ring laser gyro triad mounted on a common SINS base over long measurement intervals in laboratory conditions, using the Allan variations method. It analyses the practical possibilities of determining the Allan variation parameters from arrays of the output data of the laser gyro triad sampled in real laboratory conditions, with some instability of air temperature and with vibrations, magnetic and electric fields, etc., present. Plotting the Allan variation curves revealed a high noisiness of the variations for a small plotting step and a large spread between the maximum and minimum values of the curves when the reference point is shifted by one or more sampling cycles of the data array elements. The high noisiness of the Allan variation curves is a significant problem for the identification of error components in the laser gyro output signal. To reduce this noise, a method called "calculating the average line of variations" is proposed, which allows the error components to be identified with high accuracy, including for data sampled in real laboratory conditions. To reduce the computational cost of the proposed method, an algorithm called "random sampling of time intervals" is developed. The results can be applied in practice to determine the error components not only of laser gyros but also of other devices with noisy output signals that measure stationary values.
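
    For context, the Allan variance underlying such an analysis is computed from cluster averages of the gyro rate output. Below is a minimal Python sketch of the standard (non-overlapping) Allan deviation on synthetic data; it does not implement the paper's averaging or random-sampling refinements, and the sample rate, noise level, and cluster sizes are illustrative assumptions.

```python
import numpy as np

def allan_deviation(rate, tau0, m_list):
    """Non-overlapping Allan deviation of a rate signal sampled every tau0 seconds.

    rate   : 1-D array of gyro rate samples
    tau0   : sample interval [s]
    m_list : cluster sizes (averaging factors) to evaluate
    """
    rate = np.asarray(rate, dtype=float)
    taus, adevs = [], []
    for m in m_list:
        k = rate.size // m                                 # number of full clusters
        if k < 2:
            break
        means = rate[:k * m].reshape(k, m).mean(axis=1)    # cluster averages
        avar = 0.5 * np.mean(np.diff(means) ** 2)          # Allan variance
        taus.append(m * tau0)
        adevs.append(np.sqrt(avar))
    return np.array(taus), np.array(adevs)

# synthetic example: white rate noise plus a constant bias
rng = np.random.default_rng(0)
omega = 0.01 + 0.05 * rng.standard_normal(200_000)
m_list = np.unique(np.logspace(0, 4, 40, dtype=int))
taus, adev = allan_deviation(omega, tau0=0.01, m_list=m_list)
print(taus[:3], adev[:3])
```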

  17. Phase diagram of two-color QCD in a Dyson-Schwinger approach

    Energy Technology Data Exchange (ETDEWEB)

    Buescher, Pascal Joachim

    2014-04-28

    We investigate two-color QCD with N_f=2 at finite temperatures and chemical potentials using a Dyson-Schwinger approach. We employ two different truncations for the quark loop in the gluon DSE: one based on the Hard-Dense/Hard-Thermal Loop (HDTL) approximation of the quark loop and one based on the back-coupling of the full, self-consistent quark propagator (SCQL). We compare results for the different truncations with each other as well as with other approaches. As expected, we find a phase dominated by the condensation of quark-quark pairs. This diquark condensation phase overshadows the critical end point and the first-order phase transition which one finds if diquark condensation is neglected. The phase transition from the phase without diquark condensation to the diquark-condensation phase is of second order. We observe that the dressing with massless quarks in the HDTL approximation leads to a significant violation of the Silver Blaze property and to a too small diquark condensate. The SCQL truncation, on the other hand, is found to reproduce all expected features of the μ-dependent quark condensates. Moreover, with parameters adapted to the situation in other approaches, we also find good to very good agreement with model and lattice calculations in all quark quantities. We find indications that the physics in recent lattice calculations is likely to be driven solely by the explicit chiral symmetry breaking. Discrepancies with respect to the lattice are, however, observed in two quantities that are very sensitive to the screening of the gluon propagator: the dressed gluon propagator itself and the phase-transition line at high temperatures.

  18. Exploration of methods to identify polymorphisms associated with variation in DNA repair capacity phenotypes

    Energy Technology Data Exchange (ETDEWEB)

    Jones, I M; Thomas, C B; Xi, T; Mohrenweiser, H W; Nelson, D O

    2006-07-03

    Elucidating the relationship between polymorphic sequences and the risk of common disease is a challenge. For example, although it is clear that variation in DNA repair genes is associated with familial cancer, aging and neurological disease, progress toward identifying polymorphisms associated with elevated risk of sporadic disease has been slow. This is partly due to the complexity of the genetic variation, the existence of large numbers of mostly low-frequency variants and the contribution of many genes to variation in susceptibility. There has been limited development of methods to find associations between genotypes having many polymorphisms and pathway function or health outcome. We have explored several statistical methods for identifying polymorphisms associated with variation in DNA repair phenotypes. The model system used was 80 cell lines that had been resequenced to identify variation; 191 single nucleotide substitution polymorphisms (SNPs) are included, of which 172 are in 31 base excision repair pathway genes and 19 in 5 anti-oxidation genes, and the DNA repair phenotypes are based on single-strand breaks measured by the alkaline Comet assay. Univariate analyses were of limited value in identifying SNPs associated with phenotype variation. Of the multivariable model selection methods tested, the easiest that provided reduced error of prediction of phenotype was simple counting of the variant alleles predicted to encode proteins with reduced activity, which led to a genotype including 52 SNPs; the best and most parsimonious model was achieved using a two-step analysis without regard to potential functional relevance: first, SNPs were ranked by importance determined by Random Forests Regression (RFR), followed by cross-validation in a second round of RFR modeling that included ever more SNPs in declining order of importance. With this approach 6 SNPs were found to minimize prediction error. The results should encourage research into the utilization of multivariate
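
    The two-step ranking-plus-cross-validation idea described in this record can be sketched with scikit-learn as follows; the synthetic genotype matrix, phenotype, forest sizes, and SNP counts below are illustrative stand-ins, not the study's data or exact protocol.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

# Synthetic stand-in: 80 "cell lines" x 191 SNP genotypes (0/1/2 minor-allele counts)
# and a continuous repair phenotype driven by a few of the SNPs.
rng = np.random.default_rng(0)
X = rng.integers(0, 3, size=(80, 191)).astype(float)
y = X[:, :6] @ rng.normal(size=6) + rng.normal(scale=1.0, size=80)

# Step 1: rank SNPs by Random Forest importance.
rf = RandomForestRegressor(n_estimators=500, random_state=0)
rf.fit(X, y)
order = np.argsort(rf.feature_importances_)[::-1]

# Step 2: cross-validated error as ever more SNPs are included in importance order.
for k in (1, 2, 4, 8, 16, 32, 64):
    score = cross_val_score(
        RandomForestRegressor(n_estimators=200, random_state=0),
        X[:, order[:k]], y, cv=5, scoring="neg_mean_squared_error",
    ).mean()
    print(f"top {k:2d} SNPs: CV mean squared error = {-score:.3f}")
```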

  19. Variational Iteration Method for the Magnetohydrodynamic Flow over a Nonlinear Stretching Sheet

    Directory of Open Access Journals (Sweden)

    Lan Xu

    2013-01-01

    The variational iteration method (VIM) is applied to solve the boundary layer problem of magnetohydrodynamic flow over a nonlinear stretching sheet. The combination of the VIM and the Padé approximants is shown to be a powerful method for solving two-point boundary value problems consisting of systems of nonlinear differential equations. The comparison of the obtained results with other available results shows that the method is very effective and convenient for solving boundary layer problems.

  20. A new method for decreasing cell-load variation in dynamic cellular manufacturing systems

    Directory of Open Access Journals (Sweden)

    Aidin Delgoshaei

    2016-01-01

    Cell-load variation is considered a significant shortcoming in the scheduling of cellular manufacturing systems. In this article, a new method is proposed for scheduling dynamic cellular manufacturing systems in the presence of bottleneck and parallel machines. The aim of this method is to control cell-load variation while determining the best trade-off between in-house manufacturing and outsourcing. A genetic algorithm (GA) is developed because of the high potential for trapping in local optima, and the results are compared with the results of LINGO® 12.0 software. The Taguchi method (an L_9 orthogonal optimization) is used to estimate the parameters of the GA in order to solve experiments derived from the literature. An in-depth analysis is conducted on the results in consideration of various factors, and control charts are used on machine-load variation. Our findings indicate that the dynamic condition of product demands affects the routing of product parts and may induce machine-load variations that lead to cell-load diversity. An increase in the product uncertainty level causes the loading level of each cell to vary, which in turn results in the development of "complex dummy sub-cells". The effect of the complex sub-cells is measured using another mathematical index. The results showed that the proposed GA can provide solutions with limited cell-load variations.

  1. AN IMPROVED VARIATIONAL METHOD FOR HYPERSPECTRAL IMAGE PANSHARPENING WITH THE CONSTRAINT OF SPECTRAL DIFFERENCE MINIMIZATION

    Directory of Open Access Journals (Sweden)

    Z. Huang

    2017-09-01

    Variational pansharpening can enhance the spatial resolution of a hyperspectral (HS) image using a high-resolution panchromatic (PAN) image. However, this technology may lead to spectral distortion that obviously affects the accuracy of data analysis. In this article, we propose an improved variational method for HS image pansharpening with the constraint of spectral difference minimization. We extend the energy function of the classic variational pansharpening method by adding a new spectral fidelity term. This fidelity term is designed following the definition of the spectral angle mapper, which means that, for every pixel, the spectral difference value of any two bands in the HS image is in equal proportion to that of the two corresponding bands in the pansharpened image. A gradient descent method is adopted to find the optimal solution of the modified energy function, and the pansharpened image can be reconstructed. Experimental results demonstrate that the constraint of spectral difference minimization is able to preserve the original spectral information well in HS images and reduce the spectral distortion effectively. Compared to the original variational method, our method performs better in both visual and quantitative evaluation and achieves a good trade-off between spatial and spectral information.

  2. An Improved Variational Method for Hyperspectral Image Pansharpening with the Constraint of Spectral Difference Minimization

    Science.gov (United States)

    Huang, Z.; Chen, Q.; Shen, Y.; Chen, Q.; Liu, X.

    2017-09-01

    Variational pansharpening can enhance the spatial resolution of a hyperspectral (HS) image using a high-resolution panchromatic (PAN) image. However, this technology may lead to spectral distortion that obviously affects the accuracy of data analysis. In this article, we propose an improved variational method for HS image pansharpening with the constraint of spectral difference minimization. We extend the energy function of the classic variational pansharpening method by adding a new spectral fidelity term. This fidelity term is designed following the definition of the spectral angle mapper, which means that, for every pixel, the spectral difference value of any two bands in the HS image is in equal proportion to that of the two corresponding bands in the pansharpened image. A gradient descent method is adopted to find the optimal solution of the modified energy function, and the pansharpened image can be reconstructed. Experimental results demonstrate that the constraint of spectral difference minimization is able to preserve the original spectral information well in HS images and reduce the spectral distortion effectively. Compared to the original variational method, our method performs better in both visual and quantitative evaluation and achieves a good trade-off between spatial and spectral information.

  3. Variational Iteration Method for Nonlinear Singular Two-Point Boundary Value Problems Arising in Human Physiology

    Directory of Open Access Journals (Sweden)

    Marwan Abukhaled

    2013-01-01

    The variational iteration method is applied to solve a class of nonlinear singular boundary value problems that arise in physiology. The process of the method, which produces solutions in terms of convergent series, is explained. The Lagrange multipliers needed to construct the correction functional are found in terms of the exponential integral and Whittaker functions. The method easily overcomes the obstacle of singularities. Examples are presented to test the method and compare it to other existing methods in order to confirm fast convergence and significant accuracy.

  4. Virtual standardized patients: an interactive method to examine variation in depression care among primary care physicians

    Science.gov (United States)

    Hooper, Lisa M.; Weinfurt, Kevin P.; Cooper, Lisa A.; Mensh, Julie; Harless, William; Kuhajda, Melissa C.; Epstein, Steven A.

    2009-01-01

    Background: Some primary care physicians provide less than optimal care for depression (Kessler et al., Journal of the American Medical Association 291, 2581–90, 2004). However, the literature is not unanimous on the best method to use in order to investigate this variation in care. To capture variations in physician behaviour and decision making in primary care settings, 32 interactive CD-ROM vignettes were constructed and tested. Aim and method: The primary aim of this methods-focused paper was to review the extent to which our study method – an interactive CD-ROM patient vignette methodology – was effective in capturing variation in physician behaviour. Specifically, we examined the following questions: (a) Did the interactive CD-ROM technology work? (b) Did we create believable virtual patients? (c) Did the research protocol enable interviews (data collection) to be completed as planned? (d) To what extent was the targeted study sample size achieved? and (e) Did the study interview protocol generate valid and reliable quantitative data and rich, credible qualitative data? Findings: Among a sample of 404 randomly selected primary care physicians, our voice-activated interactive methodology appeared to be effective. Specifically, our methodology – combining interactive virtual patient vignette technology, experimental design, and an expansive open-ended interview protocol – generated valid explanations for variations in primary care physician practice patterns related to depression care. PMID:20463864

  5. 27 CFR 479.26 - Alternate methods or procedures; emergency variations from requirements.

    Science.gov (United States)

    2010-04-01

    ... AMMUNITION MACHINE GUNS, DESTRUCTIVE DEVICES, AND CERTAIN OTHER FIREARMS Administrative and Miscellaneous...) The alternate method or procedure will not be contrary to any provision of law and will not result in... provisions of law. Variations from requirements granted under this paragraph are conditioned on compliance...

  6. Spectral approximation of variationally-posed eigenvalue problems by nonconforming methods

    Science.gov (United States)

    Alonso, Ana; Dello Russo, Anahí

    2009-01-01

    This paper deals with the nonconforming spectral approximation of variationally posed eigenvalue problems. It is an extension to more general situations of known previous results about nonconforming methods. As an application of the present theory, convergence and optimal-order error estimates are proved for the lowest-order Crouzeix-Raviart approximation of the eigenpairs of two representative second-order elliptic operators.

  7. Application of the cluster variation method to ordering in an interstitital solid solution

    DEFF Research Database (Denmark)

    Pekelharing, Marjon I.; Böttger, Amarante; Somers, Marcel A. J.

    1999-01-01

    The tetrahedron approximation of the cluster variation method (CVM) was applied to describe the ordering on the fcc interstitial sublattice of gamma-Fe[N] and gamma'-Fe4N1-x. A Lennard-Jones potential was used to describe the dominantly strain-induced interactions, caused by misfitting of the N a...

  8. Interactively Applying the Variational Method to the Dihydrogen Molecule: Exploring Bonding and Antibonding

    Science.gov (United States)

    Cruzeiro, Vinícius Wilian D.; Roitberg, Adrian; Polfer, Nicolas C.

    2016-01-01

    In this work we present how an interactive platform can be used as a powerful tool to allow students to better explore a foundational problem in quantum chemistry: the application of the variational method to the dihydrogen molecule using simple Gaussian trial functions. The theoretical approach for the hydrogen atom is quite…
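
    To show the same variational principle in the simplest setting, the sketch below minimizes the energy of a single normalized Gaussian trial function exp(-αr²) for the hydrogen atom (not the dihydrogen molecule of the record), using the standard closed-form expectation value E(α) = 3α/2 − 2√(2α/π) in atomic units; the optimizer bounds are an arbitrary choice.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def energy(alpha):
    """Energy expectation (hartree) of the normalized Gaussian trial function exp(-alpha*r^2)
    for the hydrogen atom: <T> = 3*alpha/2, <V> = -2*sqrt(2*alpha/pi)."""
    return 1.5 * alpha - 2.0 * np.sqrt(2.0 * alpha / np.pi)

res = minimize_scalar(energy, bounds=(1e-3, 5.0), method="bounded")
print("optimal alpha :", res.x)      # analytic optimum 8/(9*pi) ~ 0.283
print("variational E :", res.fun)    # -4/(3*pi) ~ -0.424 hartree, above the exact -0.5
```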

  9. Algorithms and software for total variation image reconstruction via first-order methods

    DEFF Research Database (Denmark)

    Dahl, Joahim; Hansen, Per Christian; Jensen, Søren Holdt

    2010-01-01

    This paper describes new algorithms and related software for total variation (TV) image reconstruction, more specifically: denoising, inpainting, and deblurring. The algorithms are based on one of Nesterov's first-order methods, tailored to the image processing applications in such a way that...

  10. Evaluation of methods to determine the spectral variations of aerosol optical thickness

    Digital Repository Service at National Institute of Oceanography (India)

    Suresh, T.; Talaulikar, M.; Rodrigues, A.; Desa, E.; Chauhan, P.

    The methods used to derive the spectral variations of aerosol optical thickness (AOT) are evaluated. For our analysis we have used the AOT measured using a hand-held sunphotometer at the coastal station on the west coast of India, Dona-Paula, Goa...

  11. Solution of one-dimensional moving boundary problem with periodic boundary conditions by variational iteration method

    Directory of Open Access Journals (Sweden)

    Rai Nath Kabindra Rajeev

    2009-01-01

    In this paper, the solution of the one-dimensional moving boundary problem with periodic boundary conditions is obtained with the help of the variational iteration method. By using initial and boundary values, the explicit solutions of the equations have been derived, which accelerates the rapid convergence of the series solution. The method performs extremely well in terms of efficiency and simplicity. The temperature distribution and the position of the moving boundary are evaluated and numerical results are presented graphically.

  12. Variational Multi-Scale method with spectral approximation of the sub-scales.

    KAUST Repository

    Dia, Ben Mansour

    2015-01-07

    A variational multi-scale method where the sub-grid scales are computed by spectral approximations is presented. It is based upon an extension of the spectral theorem to not necessarily self-adjoint elliptic operators that have an associated basis of eigenfunctions which are orthonormal in weighted L2 spaces. We propose a feasible VMS-spectral method by truncation of this spectral expansion to a finite number of modes.

  13. Solving Ratio-Dependent Predatorprey System with Constant Effort Harvesting Using Variational Iteration Method

    DEFF Research Database (Denmark)

    Ghotbi, Abdoul R; Barari, Amin

    2009-01-01

    Due to the wide range of interest in the use of bio-economic models to gain insight into the scientific management of renewable resources like fisheries and forestry, the variational iteration method (VIM) is employed to approximate the solution of the ratio-dependent predator-prey system with constant effort prey harvesting. The results are compared with the results obtained by the Adomian decomposition method and reveal that VIM is very effective and convenient for solving nonlinear differential equations.

  14. Highly Nonlinear Temperature-Dependent Fin Analysis by Variational Iteration Method

    DEFF Research Database (Denmark)

    Fouladi, F.; Hosseinzadeh, E.; Barari, Amin

    2010-01-01

    In this research, the variational iteration method, as an approximate analytical method, is utilized to overcome some inherent limitations, such as the lack of control over nonzero endpoint boundary conditions, and is used to solve some examples in the field of heat transfer. The available exact solutions for the linear equations and the numerical solutions for the nonlinear ones are good bases to demonstrate the accuracy and efficiency of the proposed method. With the help of the method one can simply analyze the thermal characteristics of a straight rectangular fin for all possible types of heat...

  15. A Method of Flow-Shop Re-Scheduling Dealing with Variation of Productive Capacity

    Directory of Open Access Journals (Sweden)

    Kenzo KURIHARA

    2005-02-01

    We can produce optimum scheduling results using various methods that have been proposed by many researchers. However, it is very difficult to process the jobs on time without delaying the schedule. There are two major causes that disturb the planned optimum schedules: (1) the variation of productive capacity, and (2) the variation of the product quantities themselves. In this paper, we deal with the former variation, productive capacity, in flow-shop works. When production machines in a shop go out of order at flow-shops, we cannot continue the production and have to stop the production line. By contrast, we can continue to operate the shops even if some workers are absent. Of course, in this case, the production capacity becomes lower, because workers need to move from one machine to another to overcome the shortage of workers, and some shops cannot be operated because of the worker shortage. We developed a new re-scheduling method based on the Branch-and-Bound method, and we proposed an equation for calculating the lower bound for our Branch-and-Bound method in a practical time. Some evaluation experiments were done using practical data from real flow-shop works. We compared our results with those of another simple scheduling method and confirmed that the total production time of our result is shorter than that of the other method by 4%.
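
    To illustrate the kind of branch-and-bound search with a lower bound that the record describes, here is a minimal Python sketch for the permutation flow-shop makespan problem; the bound used (current last-machine time plus remaining work on the last machine) and the random processing times are illustrative simplifications, not the paper's formulation.

```python
import numpy as np

def extend(c, p_job):
    """Append one job to a partial schedule; c[m] is the completion time on machine m."""
    c = c.copy()
    c[0] += p_job[0]
    for m in range(1, c.size):
        c[m] = max(c[m], c[m - 1]) + p_job[m]
    return c

def branch_and_bound(p):
    """Minimize the makespan over all job permutations, pruning with a simple lower bound."""
    n_jobs, n_machines = p.shape
    best_seq, best_val = None, float("inf")

    def recurse(seq, c, remaining):
        nonlocal best_seq, best_val
        if not remaining:
            if c[-1] < best_val:
                best_seq, best_val = list(seq), c[-1]
            return
        # lower bound: time already used on the last machine plus all remaining
        # processing that still has to go through the last machine
        if c[-1] + sum(p[j, -1] for j in remaining) >= best_val:
            return
        for j in sorted(remaining):
            recurse(seq + [j], extend(c, p[j]), remaining - {j})

    recurse([], np.zeros(n_machines), set(range(n_jobs)))
    return best_seq, best_val

rng = np.random.default_rng(0)
p = rng.integers(1, 10, size=(7, 3)).astype(float)   # 7 jobs x 3 machines (toy data)
seq, cmax = branch_and_bound(p)
print("best sequence:", seq, "makespan:", cmax)
```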

  16. Analytical formulations to the method of variation of parameters in terms of universal Y's functions

    Directory of Open Access Journals (Sweden)

    Sharaf M.A.

    2015-01-01

    Full Text Available The method of variation of parameters is still of great interest and has wide applications in mathematics, physics and astrodynamics. In this paper, universal functions (the Y's functions) based on Goodyear's time transformation formula are used to establish a variation of parameters method which is useful for the slightly perturbed two-body initial value problem. Moreover, due to its universality, the method avoids the switching among different conic orbits which commonly occurs in space missions. The position and velocity vectors are written in terms of the f and g series. The method is developed analytically and computationally. For the analytical developments, exact literal formulations for the differential system of variation of the epoch state vector are established. A symbolic series solution of the universal Kepler's equation is also established, and the literal analytical expressions of the coefficients of the series are listed in Horner form for efficient and stable evaluation. For the computational developments of the method, an efficient algorithm is given using continued fraction theory. Finally, a short note on the method of solution is given for the reader's guidance.
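
    Since the computational development above relies on continued fraction evaluation, a generic sketch of evaluating a continued fraction a0 + b1/(a1 + b2/(a2 + ...)) by backward recurrence is given below; the coefficient functions and the tan(x) test expansion are illustrative and are not taken from the paper.

        # Generic backward-recurrence evaluation of a continued fraction (illustrative)
        import math

        def continued_fraction(a, b, depth):
            # a(k), b(k): partial denominators and numerators, evaluated bottom-up.
            value = a(depth)
            for k in range(depth, 0, -1):
                value = a(k - 1) + b(k) / value
            return value

        # Classical check: tan(x) = x / (1 - x^2 / (3 - x^2 / (5 - ...))).
        x = 0.7
        a = lambda k: 0.0 if k == 0 else 2.0 * k - 1.0
        b = lambda k: x if k == 1 else -x * x
        print(continued_fraction(a, b, depth=20), math.tan(x))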

  17. Uniqueness theorems for variational problems by the method of transformation groups

    CERN Document Server

    Reichel, Wolfgang

    2004-01-01

    A classical problem in the calculus of variations is the investigation of critical points of functionals {\\cal L} on normed spaces V. The present work addresses the question: Under what conditions on the functional {\\cal L} and the underlying space V does {\\cal L} have at most one critical point? A sufficient condition for uniqueness is given: the presence of a "variational sub-symmetry", i.e., a one-parameter group G of transformations of V, which strictly reduces the values of {\\cal L}. The "method of transformation groups" is applied to second-order elliptic boundary value problems on Riemannian manifolds. Further applications include problems of geometric analysis and elasticity.

  18. Variational method of determining effective moduli of polycrystals with tetragonal symmetry

    Science.gov (United States)

    Meister, R.; Peselnick, L.

    1966-01-01

    Variational principles have been applied to aggregates of randomly oriented pure-phase polycrystals having tetragonal symmetry. The bounds of the effective elastic moduli obtained in this way show a substantial improvement over the bounds obtained by means of the Voigt and Reuss assumptions. The Hill average is found to be a good approximation in most cases when compared to the bounds found from the variational method. The new bounds reduce in their limits to the Voigt and Reuss values. © 1966 The American Institute of Physics.
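
    For orientation, the sketch below computes the classical Voigt (uniform strain), Reuss (uniform stress) and Hill estimates of the effective bulk modulus from a single-crystal stiffness matrix; these are the simple bounds that the variational bounds of the record improve upon. The numerical stiffness values are placeholders, not data from the paper.

        # Voigt, Reuss and Hill estimates of the effective bulk modulus (illustrative values)
        import numpy as np

        def bulk_modulus_bounds(C):
            S = np.linalg.inv(C)                  # compliance matrix
            K_voigt = C[:3, :3].sum() / 9.0       # upper bound (uniform strain)
            K_reuss = 1.0 / S[:3, :3].sum()       # lower bound (uniform stress)
            K_hill = 0.5 * (K_voigt + K_reuss)    # Hill average
            return K_voigt, K_reuss, K_hill

        # Placeholder tetragonal-like stiffness matrix in Voigt notation (GPa).
        C = np.array([[250., 110., 100.,  0.,  0.,  0.],
                      [110., 250., 100.,  0.,  0.,  0.],
                      [100., 100., 200.,  0.,  0.,  0.],
                      [  0.,   0.,   0., 70.,  0.,  0.],
                      [  0.,   0.,   0.,  0., 70.,  0.],
                      [  0.,   0.,   0.,  0.,  0., 90.]])
        print(bulk_modulus_bounds(C))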

  19. Variational method of determining effective moduli of polycrystals: (A) hexagonal symmetry, (B) trigonal symmetry

    Science.gov (United States)

    Peselnick, L.; Meister, R.

    1965-01-01

    Variational principles of anisotropic elasticity have been applied to aggregates of randomly oriented pure-phase polycrystals having hexagonal symmetry and trigonal symmetry. The bounds of the effective elastic moduli obtained in this way show a considerable improvement over the bounds obtained by means of the Voigt and Reuss assumptions. The Hill average is found to be in most cases a good approximation when compared to the bounds found from the variational method. The new bounds reduce in their limits to the Voigt and Reuss values. © 1965 The American Institute of Physics.

  20. Digital Image Stabilization Method Based on Variational Mode Decomposition and Relative Entropy

    Directory of Open Access Journals (Sweden)

    Duo Hao

    2017-11-01

    Full Text Available Cameras mounted on vehicles frequently suffer from image shake due to the vehicles' motions. To remove jitter motions and preserve intentional motions, a hybrid digital image stabilization method is proposed that uses variational mode decomposition (VMD) and relative entropy (RE). In this paper, the global motion vector (GMV) is initially decomposed into several narrow-banded modes by VMD. REs, which quantify the difference in probability distribution between two modes, are then calculated to identify the intentional and jitter motion modes. Finally, the summation of the jitter motion modes constitutes the jitter motions, whereas subtracting this sum from the GMV yields the intentional motions. The proposed stabilization method is compared with several known methods, namely, the median filter (MF), Kalman filter (KF), wavelet decomposition (WD) method, empirical mode decomposition (EMD)-based method, and enhanced EMD-based method, to evaluate stabilization performance. Experimental results show that the proposed method outperforms the other stabilization methods.
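
    A minimal sketch of the mode-selection step described above, assuming the narrow-band modes of the global motion vector are already available from some VMD implementation (not provided here); the histogram-based relative entropy, the choice of reference mode and the threshold are illustrative assumptions.

        # Separate jitter from intentional motion given precomputed VMD modes (illustrative)
        import numpy as np
        from scipy.stats import entropy

        def histogram_pdf(x, bins=32):
            p, _ = np.histogram(x, bins=bins, density=True)
            return p + 1e-12                     # avoid zero bins in the divergence

        def split_motion(gmv, modes, reference_index=0, threshold=1.0):
            # Modes whose relative entropy to the reference exceeds the threshold are
            # summed into the jitter estimate; the rest of the GMV is intentional motion.
            ref = histogram_pdf(modes[reference_index])
            jitter = np.zeros_like(gmv)
            for m in modes:
                if entropy(histogram_pdf(m), ref) > threshold:
                    jitter = jitter + m
            return gmv - jitter, jitter          # (intentional, jitter)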

  1. Reactive power control methods for improved reliability of wind power inverters under wind speed variations

    DEFF Research Database (Denmark)

    Ma, Ke; Liserre, Marco; Blaabjerg, Frede

    2012-01-01

    The thermal cycling of power switching devices may lead to failures that compromise the reliability of power converters. Wind Turbine Systems (WTS) are especially subject to severe thermal cycling, which may be caused by wind speed variations or power grid faults. This paper proposes a control method to relieve the thermal cycling of power switching devices under severe wind speed variations, by circulating reactive power among the parallel power converters in a WTS or among the WTS's in a wind park. The amount of reactive power is adjusted to limit the junction temperature fluctuation...... temperature fluctuation in the most stressed devices of a 3L-NPC wind power inverter under severe wind speed variations can be significantly stabilized, and the reliability of the power converter can thereby be improved while the increased stress of the other devices in the same power converter...

  2. Uncertainty Representation Method for Open Pit Optimization Results Due to Variation in Mineral Prices

    Directory of Open Access Journals (Sweden)

    Jieun Baek

    2016-02-01

    Full Text Available This study proposes a new method to quantitatively represent the uncertainty existing in open pit optimization results due to variations in mineral prices. After generating multiple mineral prices using Monte Carlo simulation with data on past mineral prices, a probability model that represents the uncertainty was developed by integrating the multiple open pit optimization results derived from those prices. The results of applying the proposed method to copper-zinc deposits showed that significant uncertainty exists in open pit optimization results due to the variation in copper prices. It was also found that the method has potential as a tool for classifying ore reserve estimation results by confidence level.
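
    The overall workflow can be sketched as follows, with optimize_pit standing in for whatever pit-optimization routine is used (assumed available, not implemented here); the normal price model, the block array and the simulation count are purely illustrative.

        # Per-block probability of being inside the optimal pit under price uncertainty
        import numpy as np

        def pit_probability(block_values, optimize_pit, price_mean, price_std,
                            n_sim=500, seed=0):
            rng = np.random.default_rng(seed)
            prices = rng.normal(price_mean, price_std, size=n_sim)  # simulated prices
            counts = np.zeros(block_values.shape)
            for p in prices:
                inside = optimize_pit(block_values, p)  # boolean flag per block (assumed)
                counts += inside.astype(float)
            return counts / n_sim                       # probability each block is mined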

  3. Solution of the solidification problem by using the variational iteration method

    Directory of Open Access Journals (Sweden)

    E. Hetmaniok

    2009-10-01

    Full Text Available The paper presents an approximate solution of the solidification problem, modelled with the aid of the one-phase Stefan problem with a boundary condition of the second kind, obtained using the variational iteration method. Solving this problem requires determining the distribution of temperature in the given domain and the position of the moving interface. The proposed method of solution consists of describing the considered problem with a system of differential equations in a domain with a known boundary, and solving the resulting system with the aid of the VIM. The accuracy of the obtained approximate solution is verified by comparing it with the analytical solution.

  4. An Adaptive Total Generalized Variation Model with Augmented Lagrangian Method for Image Denoising

    Directory of Open Access Journals (Sweden)

    Chuan He

    2014-01-01

    Full Text Available We propose an adaptive total generalized variation (TGV) based model, aiming at achieving a balance between edge preservation and region smoothness for image denoising. The variable splitting (VS) technique and the classical augmented Lagrangian method (ALM) are used to solve the proposed model. With the proposed adaptive model and ALM, the regularization parameter, which balances the data fidelity and the regularizer, is refreshed with a closed form at each iteration, and the image denoising can be accomplished without manual intervention. Numerical results indicate that our method is effective in suppressing the staircasing effect and holds superiority over some other state-of-the-art methods in both quantitative and qualitative assessment.

  5. THREE-DIMENSIONAL RECONSTRUCTION BY SART METHOD WITH MINIMIZATION OF THE TOTAL VARIATION

    Directory of Open Access Journals (Sweden)

    S. A. Zolotarev

    2015-01-01

    Full Text Available Computed tomography is still being intensively studied and is widely used to solve a number of industrial and medical problems. The algebraic reconstruction technique with simultaneous iterations (SART) is considered in this work as one of the most promising iterative methods for tomographic problems. A graphics processor is used to accelerate the reconstruction. Minimization of the total variation (TV) is used as a priori support to regularize the iterative process and to overcome the incompleteness of the information.
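
    A compact sketch of one way to interleave a SART update with a few descent steps on a smoothed total variation, for a square image stored as a flat vector; the relaxation factor, step sizes and boundary handling are illustrative simplifications, not the implementation of the record.

        # SART iteration followed by gradient descent on a smoothed 2-D total variation
        import numpy as np

        def sart_tv(A, b, x, n_iter=20, relax=0.5, tv_steps=5, tv_step=0.01, eps=1e-8):
            row_sums = np.asarray(A.sum(axis=1)).ravel() + eps
            col_sums = np.asarray(A.sum(axis=0)).ravel() + eps
            n = int(np.sqrt(x.size))                 # image assumed n-by-n, flattened
            for _ in range(n_iter):
                # SART update (simultaneous iterations)
                x = x + relax * (A.T @ ((b - A @ x) / row_sums)) / col_sums
                img = x.reshape(n, n)
                for _ in range(tv_steps):            # TV minimization steps
                    gx = np.diff(img, axis=0, append=img[-1:, :])
                    gy = np.diff(img, axis=1, append=img[:, -1:])
                    norm = np.sqrt(gx**2 + gy**2 + eps)
                    div = (gx / norm - np.roll(gx / norm, 1, axis=0)
                           + gy / norm - np.roll(gy / norm, 1, axis=1))
                    img = img + tv_step * div        # descent on TV: grad TV = -div(grad/|grad|)
                x = img.ravel()
            return x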

  6. Variational methods for eigenvalue problems an introduction to the weinstein method of intermediate problems

    CERN Document Server

    Gould, S H

    1966-01-01

    The first edition of this book gave a systematic exposition of the Weinstein method of calculating lower bounds of eigenvalues by means of intermediate problems. This second edition presents new developments in the framework of the material contained in the first edition, which is retained in somewhat modified form.

  7. L1 -norm low-rank matrix factorization by variational Bayesian method.

    Science.gov (United States)

    Zhao, Qian; Meng, Deyu; Xu, Zongben; Zuo, Wangmeng; Yan, Yan

    2015-04-01

    The L1-norm low-rank matrix factorization (LRMF) has been attracting much attention due to its wide applications to computer vision and pattern recognition. In this paper, we construct a new hierarchical Bayesian generative model for the L1-norm LRMF problem and design a mean-field variational method to automatically infer all the parameters involved in the model by closed-form equations. The variational Bayesian inference in the proposed method can be understood as solving a weighted LRMF problem with different weights on matrix elements based on their significance and with L2-regularization penalties on parameters. Throughout the inference process of our method, the weights imposed on the matrix elements can be adaptively fitted so that the adverse influence of noises and outliers embedded in data can be largely suppressed, and the parameters can be appropriately regularized so that the generalization capability of the problem can be statistically guaranteed. The robustness and the efficiency of the proposed method are substantiated by a series of synthetic and real data experiments, as compared with the state-of-the-art L1-norm LRMF methods. Especially, attributed to the intrinsic generalization capability of the Bayesian methodology, our method can always predict better on the unobserved ground truth data than existing methods.
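
    To make the weighted-LRMF interpretation concrete, the sketch below solves a weighted low-rank factorization with L2 regularization by alternating least squares; in the full method the element-wise weights W would be adapted by the variational Bayes inference, whereas here they are held fixed, and the rank, regularization and initialization are illustrative.

        # Weighted low-rank matrix factorization via alternating regularized least squares
        import numpy as np

        def weighted_lrmf(Y, W, rank, lam=0.1, n_iter=50, seed=0):
            rng = np.random.default_rng(seed)
            m, n = Y.shape
            U = rng.normal(size=(m, rank))
            V = rng.normal(size=(n, rank))
            I = lam * np.eye(rank)
            for _ in range(n_iter):
                for i in range(m):                   # update row factors
                    Wi = np.diag(W[i])
                    U[i] = np.linalg.solve(V.T @ Wi @ V + I, V.T @ Wi @ Y[i])
                for j in range(n):                   # update column factors
                    Wj = np.diag(W[:, j])
                    V[j] = np.linalg.solve(U.T @ Wj @ U + I, U.T @ Wj @ Y[:, j])
            return U, V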

  8. Analytical and variational numerical methods for unstable miscible displacement flows in porous media

    Science.gov (United States)

    Scovazzi, Guglielmo; Wheeler, Mary F.; Mikelić, Andro; Lee, Sanghyun

    2017-04-01

    The miscible displacement of one fluid by another in a porous medium has received considerable attention in subsurface, environmental and petroleum engineering applications. When a fluid of higher mobility displaces another of lower mobility, unstable patterns - referred to as viscous fingering - may arise. Their physical and mathematical study has been the object of numerous investigations over the past century. The objective of this paper is to present a review of these contributions with particular emphasis on variational methods. These algorithms are tailored to real field applications thanks to their advanced features: handling of general complex geometries, robustness in the presence of rough tensor coefficients, low sensitivity to mesh orientation in advection dominated scenarios, and provable convergence with fully unstructured grids. This paper is dedicated to the memory of Dr. Jim Douglas Jr., for his seminal contributions to miscible displacement and variational numerical methods.

  9. Performance analysis of pin fins with temperature dependent thermal parameters using the variation of parameters method

    Directory of Open Access Journals (Sweden)

    Cihat Arslantürk

    2016-08-01

    Full Text Available The performance of pin fins transferring heat by convection and radiation and having variable thermal conductivity, variable emissivity and a variable heat transfer coefficient was investigated in the present paper. Nondimensionalizing the fin equation yields the problem parameters which affect the fin performance. The dimensionless nonlinear fin equation was solved with the variation of parameters method, which is quite new in the solution of nonlinear heat transfer problems. The solution obtained with the variation of parameters method was compared with known analytical solutions and a numerical solution. The comparisons showed that the solutions are in excellent agreement. The effects of the problem parameters on the heat transfer rate and fin efficiency were investigated and the results were presented graphically.

  10. Convergence of Variational Iteration Method for Solving Singular Partial Differential Equations of Fractional Order

    Directory of Open Access Journals (Sweden)

    Asma Ali Elbeleze

    2014-01-01

    Full Text Available We are concerned here with singular partial differential equations of fractional order (FSPDEs). The variational iteration method (VIM) is applied to obtain approximate solutions of this type of equation. Convergence analysis of the VIM is discussed. This analysis is used to estimate the maximum absolute truncated error of the series solution. A comparison between the VIM solutions and the exact solution is given. The fractional derivatives are described in the Caputo sense.

  11. A New Technique of Laplace Variational Iteration Method for Solving Space-Time Fractional Telegraph Equations

    Directory of Open Access Journals (Sweden)

    Fatima A. Alawad

    2013-01-01

    Full Text Available In this paper, the exact solutions of space-time fractional telegraph equations are given in terms of Mittag-Leffler functions via a combination of the Laplace transform and the variational iteration method. New techniques are used to overcome the difficulties arising in identifying the general Lagrange multiplier. As a special case, the obtained solutions reduce to the solutions of standard telegraph equations of integer order.

  12. Approximate solution of time-fractional advection-dispersion equation via fractional variational iteration method.

    Science.gov (United States)

    Ibiş, Birol; Bayram, Mustafa

    2014-01-01

    This paper aims to obtain the approximate solution of time-fractional advection-dispersion equation (FADE) involving Jumarie's modification of Riemann-Liouville derivative by the fractional variational iteration method (FVIM). FVIM provides an analytical approximate solution in the form of a convergent series. Some examples are given and the results indicate that the FVIM is of high accuracy, more efficient, and more convenient for solving time FADEs.

  13. Quantizing the Homogeneous Linear Perturbations about Taub using the Jacobi Method of Second Variation

    OpenAIRE

    Bae, Joseph H.

    2014-01-01

    Applying the Jacobi method of second variation to the Bianchi IX system in Misner variables $(\alpha, \beta_+, \beta_-)$, we specialize to the Taub space background $(\beta_- = 0)$ and obtain the governing equations for linearized homogeneous perturbations $(\alpha', \beta_+', \beta_-')$ thereabout. Employing a canonical transformation, we isolate two decoupled gauge-invariant linearized variables ($\beta_-'$ and $Q_+' = p_+ \alpha' + p_\alpha \beta_+'$), together with their conjugate momenta...

  14. EVolution: an edge-based variational method for non-rigid multi-modal image registration

    Science.gov (United States)

    de Senneville, B. Denis; Zachiu, C.; Ries, M.; Moonen, C.

    2016-10-01

    Image registration is part of a large variety of medical applications including diagnosis, monitoring disease progression and/or treatment effectiveness and, more recently, therapy guidance. Such applications usually involve several imaging modalities such as ultrasound, computed tomography, positron emission tomography, x-ray or magnetic resonance imaging, either separately or combined. In the current work, we propose a non-rigid multi-modal registration method (namely EVolution: an edge-based variational method for non-rigid multi-modal image registration) that aims at maximizing edge alignment between the images being registered. The proposed algorithm requires only contrasts between physiological tissues, preferably present in both image modalities, and assumes deformable/elastic tissues. The approach is shown to be well suited for non-rigid co-registration across different image types/contrasts (T1/T2) as well as different modalities (CT/MRI). This is achieved using a variational scheme that provides a fast algorithm with a low number of control parameters. Results obtained on an annotated CT data set were comparable to those provided by state-of-the-art multi-modal image registration algorithms under all tested experimental conditions (image pre-filtering, image intensity variation, noise perturbation). Moreover, we demonstrate that, compared to existing approaches, our method possesses increased robustness to transient structures (i.e. those only present in some of the images).

  15. New Nuclear Equation of State for Core-Collapse Supernovae with the Variational Method

    Directory of Open Access Journals (Sweden)

    Togashi H.

    2014-03-01

    Full Text Available We report the current status of our project to construct a new nuclear equation of state (EOS) with the variational method for core-collapse supernova (SN) simulations. Starting from a realistic nuclear Hamiltonian, the EOS for uniform nuclear matter is constructed with the cluster variational method; for non-uniform nuclear matter, the EOS is calculated with the Thomas-Fermi method. The obtained thermodynamic quantities of uniform matter are in good agreement with those from more sophisticated Fermi hypernetted chain variational calculations, and the phase diagrams constructed so far are close to those of the Shen-EOS. The structure of neutron stars calculated with this EOS at zero temperature is consistent with recent observational data, and the maximum neutron star mass is slightly larger than that with the Shen-EOS. Using the present EOS of uniform nuclear matter, we also perform 1D simulations of core-collapse supernovae with a simplified prescription of adiabatic hydrodynamics. The stellar core with the present EOS is more compact than that with the Shen-EOS, and correspondingly, the explosion energy in this simulation with the present EOS is larger than that with the Shen-EOS.

  16. Predictive Distribution of the Dirichlet Mixture Model by the Local Variational Inference Method

    DEFF Research Database (Denmark)

    Ma, Zhanyu; Leijon, Arne; Tan, Zheng-Hua

    2014-01-01

    In Bayesian analysis of a statistical model, the predictive distribution is obtained by marginalizing over the parameters with their posterior distributions. Compared to the frequently used point estimate plug-in method, the predictive distribution leads to a more reliable result in calculating the predictive likelihood of the new upcoming data, especially when the amount of training data is small. The Bayesian estimation of a Dirichlet mixture model (DMM) is, in general, not analytically tractable. In our previous work, we have proposed a global variational inference-based method for approximately......

  17. Variational and PDE-Based Methods for Big Data Analysis, Classification and Image Processing Using Graphs

    Science.gov (United States)

    2015-01-01

    UCLA dissertation: Variational and PDE-based methods for big data analysis, classification and image processing using graphs. Reported classification accuracies on the benchmark data set: normalized-sum baseline 82.66%, naive Bayes 83.52%, SVM (linear kernel) 85.82%, multiclass GL 87.3%, multiclass MBO 88.48%.

  18. Alien calculus and a Schwinger-Dyson equation: two-point function with a nonperturbative mass scale

    Science.gov (United States)

    Bellon, Marc P.; Clavier, Pierre J.

    2017-10-01

    Starting from the Schwinger-Dyson equation and the renormalization group equation for the massless Wess-Zumino model, we compute the dominant nonperturbative contributions to the anomalous dimension of the theory, which are related by alien calculus to singularities of the Borel transform at integer points. The sum of these dominant contributions has an analytic expression. When applied to the two-point function, this analysis gives a tame evolution in the deep Euclidean domain at this level of approximation, casting doubt on the arguments about the triviality of the quantum field theory with positive β-function. On the other hand, we have a singularity of the propagator for timelike momenta of the order of the renormalization group invariant scale of the theory, which has a nonperturbative relationship with the renormalization point of the theory. None of these results seem to have an interpretation in terms of a semiclassical analysis of a Feynman path integral.

  19. Revisiting the equation of state of hybrid stars in the Dyson-Schwinger equation approach to QCD

    Science.gov (United States)

    Bai, Zhan; Chen, Huan; Liu, Yu-xin

    2018-01-01

    We investigate the equation of state (EoS) and the effect of the hadron-quark phase transition of strong-interaction matter in compact stars. The hadron matter is described with the relativistic mean field theory, and the quark matter is described with the Dyson-Schwinger equation approach of QCD. The complete EoS of the hybrid star matter is constructed with not only the Gibbs construction but also the 3-window interpolation. The mass-radius relation of hybrid stars is also investigated. We find that, although the EoSs of both the hadron matter with hyperons and Δ-baryons and the quark matter are generally softer than that of nucleon matter, the 3-window interpolation construction may provide an EoS stiff enough for a hybrid star with mass exceeding 2 M⊙ and, in turn, solve the so-called "hyperon puzzle."

  20. Alien calculus and a Schwinger-Dyson equation: two-point function with a nonperturbative mass scale

    Science.gov (United States)

    Bellon, Marc P.; Clavier, Pierre J.

    2018-02-01

    Starting from the Schwinger-Dyson equation and the renormalization group equation for the massless Wess-Zumino model, we compute the dominant nonperturbative contributions to the anomalous dimension of the theory, which are related by alien calculus to singularities of the Borel transform at integer points. The sum of these dominant contributions has an analytic expression. When applied to the two-point function, this analysis gives a tame evolution in the deep Euclidean domain at this level of approximation, casting doubt on the arguments about the triviality of the quantum field theory with positive β-function. On the other hand, we have a singularity of the propagator for timelike momenta of the order of the renormalization group invariant scale of the theory, which has a nonperturbative relationship with the renormalization point of the theory. None of these results seem to have an interpretation in terms of a semiclassical analysis of a Feynman path integral.

  1. Leading-order calculation of hadronic contributions to the Muon g-2 using the Dyson-Schwinger approach

    Energy Technology Data Exchange (ETDEWEB)

    Goecke, Tobias [Institut fuer Theoretische Physik, Universitaet Giessen, 35392 Giessen (Germany); Fischer, Christian S., E-mail: christian.fischer@theo.physik.uni-giessen.de [Institut fuer Theoretische Physik, Universitaet Giessen, 35392 Giessen (Germany); Gesellschaft fuer Schwerionenforschung mbH, Planckstr. 1, D-64291 Darmstadt (Germany); Williams, Richard [Dept. Fisica Teorica I, Universidad Complutense, 28040 Madrid (Spain)

    2011-10-13

    We present a calculation of the hadronic vacuum polarisation (HVP) tensor within the framework of Dyson-Schwinger equations. To this end we use a well-established phenomenological model for the quark-gluon interaction with parameters fixed to reproduce hadronic observables. From the HVP tensor we compute both the Adler function and the HVP contribution to the anomalous magnetic moment of the muon, a_μ. We find a_μ^HVP = 6760×10^-11, which deviates by about two percent from the value extracted from experiment. Additionally, we make a comparison with a recent lattice determination of a_μ^HVP and find good agreement within our approach. We also discuss the implications of our result for a corresponding calculation of the hadronic light-by-light scattering contribution to a_μ.

  2. Cluster Variation Method as a Theoretical Tool for the Study of Phase Transformation

    Science.gov (United States)

    Mohri, Tetsuo

    2017-06-01

    The cluster variation method (CVM) has been widely employed to calculate alloy phase diagrams. The atomistic feature of the CVM is consistent with first-principles electronic structure calculations, and the combination of the CVM with electronic structure calculation enables one to formulate the free energy from first principles. The CVM free energy conveys a wealth of information about a given system, and its second-order derivative traces the stability locus against configurational fluctuations. The kinetic extension of the CVM is the path probability method (PPM), which is utilized to calculate transformation and relaxation kinetics associated with temperature changes. Hence, the CVM and PPM are coherent methods for performing a synthetic study from initial non-equilibrium to final equilibrium states. By utilizing the CVM free energy as a homogeneous free energy density term, one can calculate the time evolution of ordered domains within the phase field method. Finally, the continuous displacement cluster variation method (CDCVM) is discussed as a recent development of the CVM. The CDCVM is capable of introducing local lattice displacements into the free energy. Moreover, it is shown that the CDCVM can be extended to study collective atomic displacements leading to displacive phase transformations.

  3. Variational Mode Extraction: a New Efficient Method to Derive Respiratory Signals from ECG.

    Science.gov (United States)

    Nazari, Mojtaba; Sakhaei, Sayed Mahmoud

    2017-07-31

    The ECG-derived respiratory (EDR) signal provides an effective and inexpensive way to monitor respiration. Previous studies have shown that empirical mode decomposition (EMD) techniques can satisfactorily extract the EDR signal; however, their performance degrades in the presence of noise. On the other hand, variational mode decomposition (VMD) exhibits good robustness to noise. In applications such as EDR extraction, where only a specific mode is of interest, VMD imposes unnecessary computational cost. In this paper, we consider the extraction of the EDR as the problem of obtaining a specific mode of a signal and suggest a new method named Variational Mode Extraction (VME). The method is established on a similar basis as VMD, with a new criterion: the residual signal after extracting the specific mode should have little or no energy at the center frequency of that mode. In this regard, VME is capable of solving the EDR problem by considering the EDR signal as a mode with an approximate center frequency of zero. For verification, the respiratory rate signal is detected from the EDR signal extracted by VME and compared with those obtained by VMD, EMD-based methods and band-pass filtering. The results confirm that the new method can extract the EDR signal with better accuracy, while incurring a much lower computational cost and achieving a higher convergence rate.
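
    The stated criterion, that the residual left after extracting the desired mode should carry little or no energy at that mode's centre frequency, can be checked with a simple spectral measure such as the sketch below; the sampling rate, centre frequency and bandwidth are illustrative parameters, and the mode itself is assumed to come from an extraction step not shown here.

        # Fraction of residual energy near a centre frequency f_c (illustrative check)
        import numpy as np

        def residual_energy_at(signal, mode, fs, f_c, bandwidth=0.05):
            residual = signal - mode
            freqs = np.fft.rfftfreq(residual.size, d=1.0 / fs)
            spectrum = np.abs(np.fft.rfft(residual)) ** 2
            band = np.abs(freqs - f_c) <= bandwidth
            return spectrum[band].sum() / spectrum.sum()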

  4. Combined study of Schwinger-boson mean-field theory and linearized tensor renormalization group on Heisenberg ferromagnetic mixed spin (S, σ chains

    Directory of Open Access Journals (Sweden)

    Xin Yan

    2015-07-01

    Full Text Available The Schwinger-boson mean-field theory (SBMFT) and the linearized tensor renormalization group (LTRG) methods are complementarily applied to explore the thermodynamics of quantum ferromagnetic mixed spin (S, σ) chains. It is found that the system has double excitations, i.e., a gapless and a gapped excitation; the low-lying spectrum can be approximated by ω_k ∼ Sσ/[2(S + σ)] J k², with J the ferromagnetic coupling; and the gap between the two branches is estimated to be Δ ∼ J. The Bose-Einstein condensation indicates a ferromagnetic ground state with magnetization m_tot^z = N(S + σ). At low temperature, the spin correlation length is inversely proportional to temperature (T), the susceptibility behaves as χ = a_1 (1/T²) + a_2 (1/T), and the specific heat has the form C = c_1 √T − c_2 T + c_3 T^{3/2}, with a_i (i = 1, 2) and c_i (i = 1, 2, 3) temperature-independent constants. The SBMFT results are shown to be in qualitative agreement with those obtained by the LTRG numerical calculations for S = 1 and σ = 1/2. A comparison of the LTRG results with the experimental data for the model material MnIINiII(NO2)4(en)2 (en = ethylenediamine) is made, from which the coupling parameters of the compound are obtained. This study provides useful information for a deep understanding of the physical properties of quantum ferromagnetic mixed spin chain materials.

  5. Numerical Methods for the Optimization of Nonlinear Residual-Based Subgrid-Scale Models Using the Variational Germano Identity

    NARCIS (Netherlands)

    Maher, G.D.; Hulshoff, S.J.

    2014-01-01

    The Variational Germano Identity [1, 2] is used to optimize the coefficients of residual-based subgrid-scale models that arise from the application of a Variational Multiscale Method [3, 4]. It is demonstrated that numerical iterative methods can be used to solve the Germano relations to obtain

  6. A Dictionary Learning Method with Total Generalized Variation for MRI Reconstruction.

    Science.gov (United States)

    Lu, Hongyang; Wei, Jingbo; Liu, Qiegen; Wang, Yuhao; Deng, Xiaohua

    2016-01-01

    Reconstructing images from their noisy and incomplete measurements is always a challenge especially for medical MR image with important details and features. This work proposes a novel dictionary learning model that integrates two sparse regularization methods: the total generalized variation (TGV) approach and adaptive dictionary learning (DL). In the proposed method, the TGV selectively regularizes different image regions at different levels to avoid oil painting artifacts largely. At the same time, the dictionary learning adaptively represents the image features sparsely and effectively recovers details of images. The proposed model is solved by variable splitting technique and the alternating direction method of multiplier. Extensive simulation experimental results demonstrate that the proposed method consistently recovers MR images efficiently and outperforms the current state-of-the-art approaches in terms of higher PSNR and lower HFEN values.

  7. A Dictionary Learning Method with Total Generalized Variation for MRI Reconstruction

    Directory of Open Access Journals (Sweden)

    Hongyang Lu

    2016-01-01

    Full Text Available Reconstructing images from their noisy and incomplete measurements is always a challenge especially for medical MR image with important details and features. This work proposes a novel dictionary learning model that integrates two sparse regularization methods: the total generalized variation (TGV) approach and adaptive dictionary learning (DL). In the proposed method, the TGV selectively regularizes different image regions at different levels to avoid oil painting artifacts largely. At the same time, the dictionary learning adaptively represents the image features sparsely and effectively recovers details of images. The proposed model is solved by variable splitting technique and the alternating direction method of multiplier. Extensive simulation experimental results demonstrate that the proposed method consistently recovers MR images efficiently and outperforms the current state-of-the-art approaches in terms of higher PSNR and lower HFEN values.

  8. Modified Fractional Variational Iteration Method for Solving the Generalized Time-Space Fractional Schrödinger Equation

    Directory of Open Access Journals (Sweden)

    Baojian Hong

    2014-01-01

    Full Text Available Based on He's variational iteration method idea, we modified the fractional variational iteration method and applied it to construct some approximate solutions of the generalized time-space fractional Schrödinger equation (GFNLS). The fractional derivatives are described in the sense of Caputo. With the help of symbolic computation, some approximate solutions and their iterative structure of the GFNLS are investigated. Furthermore, the approximate iterative series and numerical results show that the modified fractional variational iteration method is powerful, reliable, and effective when compared with some classic traditional methods such as the homotopy analysis method, homotopy perturbation method, Adomian decomposition method, and variational iteration method in searching for approximate solutions of Schrödinger equations.

  9. Modified fractional variational iteration method for solving the generalized time-space fractional Schrödinger equation.

    Science.gov (United States)

    Hong, Baojian; Lu, Dianchen

    2014-01-01

    Based on He's variational iteration method idea, we modified the fractional variational iteration method and applied it to construct some approximate solutions of the generalized time-space fractional Schrödinger equation (GFNLS). The fractional derivatives are described in the sense of Caputo. With the help of symbolic computation, some approximate solutions and their iterative structure of the GFNLS are investigated. Furthermore, the approximate iterative series and numerical results show that the modified fractional variational iteration method is powerful, reliable, and effective when compared with some classic traditional methods such as the homotopy analysis method, homotopy perturbation method, Adomian decomposition method, and variational iteration method in searching for approximate solutions of Schrödinger equations.

  10. Variational Methods for Discontinuous Structures : Applications to Image Segmentation, Continuum Mechanics

    CERN Document Server

    Tomarelli, Franco

    1996-01-01

    In recent years many researchers in material science have focused their attention on the study of composite materials, equilibrium of crystals and crack distribution in continua subject to loads. At the same time several new issues in computer vision and image processing have been studied in depth. The understanding of many of these problems has made significant progress thanks to new methods developed in calculus of variations, geometric measure theory and partial differential equations. In particular, new technical tools have been introduced and successfully applied. For example, in order to describe the geometrical complexity of unknown patterns, a new class of problems in calculus of variations has been introduced together with a suitable functional setting: the free-discontinuity problems and the special BV and BH functions. The conference held at Villa Olmo on Lake Como in September 1994 spawned successful discussion of these topics among mathematicians, experts in computer science and material scientis...

  11. Exact solutions for some of the fractional differential equations by using modification of He's variational iteration method

    Directory of Open Access Journals (Sweden)

    S. Irandoust-pakchin

    2011-03-01

    Full Text Available In this paper, a modification of He's variational iteration method (MVIM) is developed to solve fractional ordinary differential equations and fractional partial differential equations. The free choice of the initial approximation is used to propose a reliable modification of He's variational iteration method. Some fractional differential equations are examined to illustrate the effectiveness and convenience of the method. The results show that the proposed method has accelerated convergence.

  12. A Control Variate Method for Probabilistic Performance Assessment. Improved Estimates for Mean Performance Quantities of Interest

    Energy Technology Data Exchange (ETDEWEB)

    MacKinnon, Robert J.; Kuhlman, Kristopher L

    2016-05-01

    We present a method of control variates for calculating improved estimates of mean performance quantities of interest, E(PQI), computed from Monte Carlo probabilistic simulations. An example of a PQI is the concentration of a contaminant at a particular location in a problem domain computed from simulations of transport in porous media. To simplify the presentation, the method is described in the setting of a one-dimensional elliptic model problem involving a single uncertain parameter represented by a probability distribution. The approach can be easily implemented for more complex problems involving multiple uncertain parameters and in particular for application to probabilistic performance assessment of deep geologic nuclear waste repository systems. Numerical results indicate the method can produce estimates of E(PQI) having superior accuracy on coarser meshes and reduce the required number of simulations needed to achieve an acceptable estimate.
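
    A toy sketch of the control-variate estimator itself: the Monte Carlo mean of a quantity Y is corrected with a correlated variable X of known mean, using the variance-optimal coefficient c = cov(Y, X)/var(X); the integrand and sample size are illustrative, not the performance-assessment model of the report.

        # Control variates: reduce the variance of a Monte Carlo mean estimate
        import numpy as np

        rng = np.random.default_rng(1)
        u = rng.uniform(size=10_000)      # uncertain input parameter
        y = np.exp(u)                     # performance quantity of interest (illustrative)
        x = u                             # control variate with known mean 0.5

        c = np.cov(y, x)[0, 1] / np.var(x, ddof=1)
        y_cv = y - c * (x - 0.5)

        print("plain MC estimate :", y.mean(), "+/-", y.std(ddof=1) / np.sqrt(y.size))
        print("control variates  :", y_cv.mean(), "+/-", y_cv.std(ddof=1) / np.sqrt(y.size))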

  13. Implementation of an optimal first-order method for strongly convex total variation regularization

    DEFF Research Database (Denmark)

    Jensen, Tobias Lindstrøm; Jørgensen, Jakob Heide; Hansen, Per Christian

    2012-01-01

    We present a practical implementation of an optimal first-order method, due to Nesterov, for large-scale total variation regularization in tomographic reconstruction, image deblurring, etc. The algorithm applies to μ-strongly convex objective functions with L-Lipschitz continuous gradient. In the framework of Nesterov, both μ and L are assumed known, an assumption that is seldom satisfied in practice. We propose to incorporate mechanisms to estimate locally sufficient μ and L during the iterations. The mechanisms also allow for the application to non-strongly convex functions. We discuss...... parameter μ for solving ill-conditioned problems to high accuracy, in comparison with an optimal method for non-strongly convex problems and a first-order method with Barzilai-Borwein step size selection.
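
    For reference, a bare-bones sketch of the optimal first-order method for a μ-strongly convex objective with L-Lipschitz gradient, using gradient steps of size 1/L and the constant momentum (√L − √μ)/(√L + √μ); the quadratic test problem is illustrative, and the paper's local estimation of μ and L is not reproduced here.

        # Nesterov-type accelerated gradient method for a strongly convex quadratic
        import numpy as np

        def nesterov_strongly_convex(grad, x0, mu, L, n_iter=200):
            beta = (np.sqrt(L) - np.sqrt(mu)) / (np.sqrt(L) + np.sqrt(mu))
            x, y = x0.copy(), x0.copy()
            for _ in range(n_iter):
                x_new = y - grad(y) / L           # gradient step with step size 1/L
                y = x_new + beta * (x_new - x)    # momentum step
                x = x_new
            return x

        # Example: minimize 0.5 x^T Q x - b^T x, with mu and L the extreme eigenvalues of Q.
        Q = np.diag([1.0, 10.0, 100.0])
        b = np.ones(3)
        sol = nesterov_strongly_convex(lambda z: Q @ z - b, np.zeros(3), mu=1.0, L=100.0)
        print(sol, np.linalg.solve(Q, b))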

  14. Reconstruction for 3D PET Based on Total Variation Constrained Direct Fourier Method.

    Directory of Open Access Journals (Sweden)

    Haiqing Yu

    Full Text Available This paper presents a total variation (TV) regularized reconstruction algorithm for 3D positron emission tomography (PET). The proposed method first employs the Fourier rebinning algorithm (FORE), rebinning the 3D data into a stack of ordinary 2D data sets as sinogram data. The resulting 2D sinograms are then ready to be reconstructed by conventional 2D reconstruction algorithms. Given the locally piecewise-constant nature of PET images, we introduce total variation (TV) based reconstruction schemes. More specifically, we formulate the 2D PET reconstruction problem as an optimization problem whose objective function consists of the TV norm of the reconstructed image and a data fidelity term measuring the consistency between the reconstructed image and the sinogram. To solve the resulting minimization problem, we apply an efficient method called the Bregman operator splitting algorithm with variable step size (BOSVS). Experiments based on Monte Carlo simulated data and real data are conducted as validation. The experimental results show that the proposed method produces higher accuracy than the conventional direct Fourier (DF) method (the bias of BOSVS is 70% of that of DF, and the variance of BOSVS is 80% of that of DF).

  15. Fractional Variational Iteration Method for Solving Fractional Partial Differential Equations with Proportional Delay

    Directory of Open Access Journals (Sweden)

    Brajesh Kumar Singh

    2017-01-01

    Full Text Available This paper deals with an alternative approximate analytic solution to time-fractional partial differential equations (TFPDEs) with proportional delay, obtained by using the fractional variational iteration method, where the fractional derivative is taken in the Caputo sense. The proposed series solutions are found to converge to the exact solution rapidly. To confirm the efficiency and validity of the method, the computation of three test problems of TFPDEs with proportional delay is presented. The scheme appears to be a very reliable, effective, and efficient technique for solving various types of physical models arising in science and engineering.

  16. Local convergence of exact and inexact newton’s methods for subanalytic variational inclusions

    Directory of Open Access Journals (Sweden)

    Catherine Cabuzel

    2015-01-01

    Full Text Available This paper deals with the study of an iterative method for solving a variational inclusion of the form 0 ∈ f(x) + F(x), where f is a locally Lipschitz subanalytic function and F is a set-valued map from Rn to the closed subsets of Rn. To this inclusion we first associate a Newton-type sequence and then an inexact Newton-type sequence, and under some semistability and hemistability properties of the solution x∗ of the inclusion, we prove the existence of a sequence which is locally superlinearly convergent.
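
    A scalar sketch of a Josephy-Newton step for the inclusion 0 ∈ f(x) + F(x) in the special case where F is the normal cone of an interval C = [a, b]: linearizing f and solving the linearized inclusion amounts to clipping the Newton point onto C when the linearization is increasing. The test function and interval are illustrative and not taken from the paper.

        # Newton-type iteration for 0 in f(x) + N_[a,b](x) (scalar, illustrative)
        import numpy as np

        def newton_inclusion(f, fprime, x0, a, b, n_iter=20):
            x = x0
            for _ in range(n_iter):
                # Solve 0 in f(x_k) + f'(x_k)(x - x_k) + N_[a,b](x): for an increasing
                # linearization this is the Newton point projected onto [a, b].
                x = np.clip(x - f(x) / fprime(x), a, b)
            return x

        # Illustrative run: f(x) = exp(x) - 3 on C = [0, 1]; the unconstrained root
        # ln(3) lies outside C, so the solution is the boundary point x = 1.
        print(newton_inclusion(lambda x: np.exp(x) - 3.0, np.exp, 0.5, 0.0, 1.0))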

  17. Variational Iteration Method for Volterra Functional Integrodifferential Equations with Vanishing Linear Delays

    Directory of Open Access Journals (Sweden)

    Ali Konuralp

    2014-01-01

    Full Text Available The application of the variational iteration method is presented in order to solve Volterra functional integrodifferential equations which have multiple terms and vanishing delays, where the delay function θ(t) vanishes inside the integral limits such that θ(t) = qt for 0

  18. A variation iteration method for isotropic velocity-dependent potentials: Scattering case

    Energy Technology Data Exchange (ETDEWEB)

    Eed, H. [Applied Science Private University, Basic Science Department, Amman (Jordan)

    2014-12-01

    We propose a new approximation scheme to obtain analytic expressions for the Schroedinger equation with isotropic velocity-dependent potential to determine the scattering phase shift. In order to test the validity of our approach, we applied it to an exactly solvable model for nucleon-nucleon scattering. The results of the variation iteration method (VIM) formalism compare quite well with those of the exactly solvable model. The developed formalism can be applied in problems concerning pion-nucleon, nucleon-nucleon, and electron-atom scattering. (orig.)

  19. Total variation-based method for generation of intravoxel incoherent motion parametric images in MRI.

    Science.gov (United States)

    Lin, Chieh; Shih, Yi-Yu; Huang, Siao-Lan; Huang, Hsuan-Ming

    2017-10-01

    The total variation (TV) method has been widely used for image restoration and reconstruction. In this work, we propose a TV-based algorithm for parametric image generation in intravoxel incoherent motion (IVIM) diffusion-weighted magnetic resonance imaging (DW-MRI). We used simulated and real data to investigate whether the proposed TV-based method can provide reliable parametric images. Parametric images of IVIM parameters including perfusion fraction (PF), diffusion coefficient (D), and pseudo-diffusion coefficient (D*) were estimated using DW-MRI data and TV through fitting the IVIM model. The Levenberg-Marquardt (LM) method, which has often been used in the context of IVIM analysis, was employed as the standard method for comparison of the resulting parametric images. The simulation results show that the proposed method outperforms the LM algorithm in terms of precision, providing a 40-81%, 90-93%, and 68-84% improvement for PF, D and D*, respectively, at a signal-to-noise ratio (SNR) of 30. For real data, the proposed method showed an average five-fold, three-fold, and four-fold improvement in the SNR for PF, D and D*, respectively. We introduced the use of TV to produce parametric images, and demonstrated that the proposed TV-based method is effective in improving parametric image quality. Magn Reson Med 78:1383-1391, 2017. © 2016 International Society for Magnetic Resonance in Medicine.
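
    For context, the voxel-wise baseline fit can be sketched as a least-squares fit of the bi-exponential IVIM model S(b) = S0 (PF e^{-b D*} + (1 - PF) e^{-b D}); the b-values, noise level and starting values below are illustrative, and the TV-regularized parametric-map generation of the record is not shown.

        # Least-squares fit of the IVIM model to one voxel's signal decay (illustrative)
        import numpy as np
        from scipy.optimize import curve_fit

        def ivim(b, S0, PF, D, Dstar):
            return S0 * (PF * np.exp(-b * Dstar) + (1.0 - PF) * np.exp(-b * D))

        b_values = np.array([0, 10, 20, 40, 80, 150, 300, 500, 800], dtype=float)
        true = dict(S0=1.0, PF=0.12, D=1.0e-3, Dstar=2.0e-2)
        signal = ivim(b_values, **true) + 0.01 * np.random.default_rng(0).normal(size=b_values.size)

        popt, _ = curve_fit(ivim, b_values, signal, p0=[1.0, 0.1, 1e-3, 1e-2])
        print(dict(zip(["S0", "PF", "D", "Dstar"], popt)))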

  20. A non-invasive geometric morphometrics method for exploring variation in dorsal head shape in urodeles: sexual dimorphism and geographic variation in Salamandra salamandra.

    Science.gov (United States)

    Alarcón-Ríos, Lucía; Velo-Antón, Guillermo; Kaliontzopoulou, Antigoni

    2017-04-01

    The study of morphological variation among and within taxa can shed light on the evolution of phenotypic diversification. In the case of urodeles, the dorso-ventral view of the head captures most of the ontogenetic and evolutionary variation of the entire head, which is a structure with a high potential for being a target of selection due to its relevance in ecological and social functions. Here, we describe a non-invasive procedure of geometric morphometrics for exploring morphological variation in the external dorso-ventral view of the urodele head. To explore the accuracy of the method and its potential for describing morphological patterns we applied it to two populations of Salamandra salamandra gallaica from NW Iberia. Using landmark-based geometric morphometrics, we detected differences in head shape between populations and sexes, and an allometric relationship between shape and size. We also determined that not all differences in head shape are due to size variation, suggesting intrinsic shape differences across sexes and populations. These morphological patterns had not been previously explored in S. salamandra, despite the high levels of intraspecific diversity within this species. The methodological procedure presented here allows one to detect shape variation at a very fine scale, and solves the drawbacks of using cranial samples, thus increasing the possibilities of using collection specimens and live animals for exploring dorsal head shape variation and its evolutionary and ecological implications in urodeles. J. Morphol. 278:475-485, 2017. © 2017 Wiley Periodicals, Inc.

  1. A robust image registration method based on total variation regularization under complex illumination changes.

    Science.gov (United States)

    Aghajani, Khadijeh; Manzuri, Mohammad T; Yousefpour, Rohollah

    2016-10-01

    Image registration is one of the fundamental and essential tasks for medical imaging and remote sensing applications. One of the most common challenges in this area is the presence of complex, spatially varying intensity distortion in the images. The widely used similarity metrics, such as MI (mutual information), CC (correlation coefficient), SSD (sum of squared differences), SAD (sum of absolute differences) and CR (correlation ratio), are not robust against this kind of distortion, because the stationarity and pixel-wise independence assumptions underlying these metrics no longer hold. In this paper, we propose a new intensity-based method for simultaneous image registration and intensity correction. We assume that the registered moving image can be reconstructed from the reference image through a linear function consisting of multiplicative and additive coefficients. We also assume that the illumination changes in the images are spatially smooth in each region, so we use a weighted total variation as a regularization term to estimate the aforesaid multiplicative and additive coefficients. Using weighted total variation reduces the smoothing effect on the coefficients across edges and induces a low-level segmentation of the coefficients. For minimizing the reconstruction error, we use the l1 norm as the dissimilarity term, which is more robust to illumination changes and non-Gaussian noise than the l2 norm. A primal-dual method is used for solving the optimization problem. The proposed method is applied to simulated as well as real-world data consisting of clinical 4-D computed tomography, retina, digital subtraction angiography (DSA), and iris image pairs. Comparisons are then made with MI, CC, SSD, SAD and CR, qualitatively and in some cases quantitatively. The experimental results demonstrate that the proposed method produces more accurate registration results than conventional methods. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  2. Analysis of the effect of waste's particle size variations on biodrying method

    Science.gov (United States)

    Kristanto, Gabriel Andari; Zikrina, Masayu Nadiya

    2017-11-01

    The use of municipal solid waste as an energy source can be a solution for Indonesia's increasing energy demand. However, its high moisture content limits the use of solid waste for energy. Biodrying is a method of lowering waste moisture content using a biological process. This study investigated the effect of waste particle size variations on the biodrying method. The experiment was performed in 3 lab-scale reactors with the same specifications. Organic wastes with a composition of 50% vegetable waste and 50% garden waste were used as substrates. The feedstock was manually shredded into 3 size variations: 10-40 mm, 50-80 mm, and 100-300 mm. The experiment lasted 21 days. After 21 days, it was shown that the waste with the size of 100-300 mm had the lowest moisture content, 50.99%, while the volatile solids content was still 74.3% TS. This may be caused by the higher free air space in the reactor with the larger-sized substrate.

  3. Hardness improvement on low carbon steel using pack carbonitriding method with holding time variation

    Directory of Open Access Journals (Sweden)

    Puspitasari Poppy

    2017-01-01

    Full Text Available Carbonitriding is a surface hardening process for steel that involves heating to the critical temperature, quenching, and a subsequent tempering process. In this research, pack media were used for carbonitriding as a new method; usually this process is carried out with gas or liquid carbonitriding. The specimen used in this research is low carbon steel (St. 41) consisting of 0.1517% carbon, 0.1994% silicon, 0.5631% manganese, 0.0224% phosphorus, and 0.047% sulfur. The pack carbonitriding temperatures were 700 °C, 750 °C and 800 °C, with holding time variations of 60 minutes and 120 minutes. The results showed that the carbonitriding temperature affects the mechanical properties of St. 41 steel. Steel hardness increased more at the shorter holding time (60 minutes) than at 120 minutes. At 700 °C and 750 °C with the 60-minute holding time, the hardness increased from 85.7 HRB to 95.7 HRB, while at 800 °C it decreased to 93.1 HRB. Meanwhile, at a holding time of 120 minutes, the hardness decreased from 94.1 HRB to 92.7 HRB. This result is caused by the austenite phase produced at the longer holding time.

  4. Strong convergence with a modified iterative projection method for hierarchical fixed point problems and variational inequalities

    Directory of Open Access Journals (Sweden)

    Ibrahim Karahan

    2016-04-01

    Full Text Available Let C be a nonempty closed convex subset of a real Hilbert space H. Let {T_n}: C → H be a sequence of nearly nonexpansive mappings such that F := ∩_{i=1}^{∞} F(T_i) ≠ ∅. Let V: C → H be a Lipschitzian mapping and F: C → H an L-Lipschitzian and strongly monotone operator. This paper deals with a modified iterative projection method for approximating a solution of the hierarchical fixed point problem. It is shown that, under certain assumptions on the operators and parameters, the modified iterative sequence {x_n} converges strongly to x* ∈ F, which is also the unique solution of an associated variational inequality over F. As a special case, this projection method can be used to find the minimum norm solution of the above variational inequality; namely, the unique solution x* of the quadratic minimization problem x* = argmin_{x∈F} ‖x‖². The results here improve and extend some recent corresponding results of other authors.

  5. A Position Sensorless Control Method for SRM Based on Variation of Phase Inductance

    Science.gov (United States)

    Komatsuzaki, Akitomo; Miki, Ichiro

    Switched reluctance motor (SRM) drives are suitable for variable speed industrial applications because of the simple structure and high-speed capability. However, it is necessary to detect the rotor position with a position sensor attached to the motor shaft. The use of the sensor increases the cost of the drive system and machine size, and furthermore the reliability of the system is reduced. Therefore, several approaches to eliminate the position sensor have already been reported. In this paper, a position sensorless control method based on the variation of the phase inductance is described. The phase inductance regularly varies with the rotor position. The SRM is controlled without the position sensor using the de-fluxing period and the phase inductance. The turn-off timing is determined by computing the difference of angle between the sampling point and the aligned point and the variation of angle during the de-fluxing period. In the magnetic saturation region, the phase inductance at the current when the effect of the saturation starts is computed and the sensorless control can be carried out using this inductance. Experimental results show that the SRM is well controlled without the position sensor using the proposed method.

  6. An Optimal DEM Reconstruction Method for Linear Array Synthetic Aperture Radar Based on Variational Model

    Directory of Open Access Journals (Sweden)

    Shi Jun

    2015-02-01

    Full Text Available Downward-looking Linear Array Synthetic Aperture Radar (LASAR) has many potential applications in topographic mapping, disaster monitoring and reconnaissance, especially in mountainous areas. However, limited by platform size, its resolution in the linear array direction is always far lower than those in the range and azimuth directions. This disadvantage leads to blurring of the three-dimensional (3D) images in the linear array direction and restricts the application of LASAR. To date, research on 3D SAR image enhancement has focused on sparse recovery techniques. In this case, the one-to-one mapping of the Digital Elevation Model (DEM) breaks down. To overcome this, an optimal DEM reconstruction method for LASAR based on a variational model is discussed in an effort to optimize the DEM and the associated scattering coefficient map, and to minimize the Mean Square Error (MSE). Using simulation experiments, it is found that the variational model is more suitable for DEM enhancement applications for all kinds of terrain, compared with the Orthogonal Matching Pursuit (OMP) and Least Absolute Shrinkage and Selection Operator (LASSO) methods.

  7. Dyson equations, Ward identities, and the infrared behavior of Yang-Mills theories. [Schwinger-Dyson equations, Slavnov-Taylor identities]

    Energy Technology Data Exchange (ETDEWEB)

    Baker, M.

    1979-01-01

    It was shown using the Schwinger-Dyson equations and the Slavnov-Taylor identities of Yang-Mills theory that no inconsistency arises if the gluon propagator behaves like (1/p²)² for small p². To see whether the theory actually contains such singular long-range behavior, a nonperturbative closed set of equations was formulated by neglecting the transverse parts of Γ and Γ₄ in the Schwinger-Dyson equations. This simplification preserves all the symmetries of the theory and allows the possibility of a singular low-momentum behavior of the gluon propagator. The justification for neglecting Γ^(T) and Γ₄^(T) is not evident, but it is expected that the present study of the resulting equations will elucidate this simplification, which leads to a closed set of equations.

  8. Equilibrium properties of quantum water clusters by the variational Gaussian wavepacket method.

    Science.gov (United States)

    Frantsuzov, Pavel A; Mandelshtam, Vladimir A

    2008-03-07

    The variational Gaussian wavepacket (VGW) method in combination with the replica-exchange Monte Carlo is applied to calculations of the heat capacities of quantum water clusters, (H₂O)₈ and (H₂O)₁₀. The VGW method is most conveniently formulated in Cartesian coordinates. These in turn require the use of a flexible (i.e., unconstrained) water potential. When the latter is fitted as a linear combination of Gaussians, all the terms involved in the numerical solution of the VGW equations of motion are analytic. When a flexible water model is used, a large difference in the timescales of the inter- and intramolecular degrees of freedom generally makes the system very difficult to simulate numerically. Yet, given this difficulty, we demonstrate that our methodology is still practical. We compare the computed heat capacities to those for the corresponding classical systems. As expected, the quantum effects shift the melting temperatures toward the lower values.

  9. Studying the properties of Variational Data Assimilation Methods by Applying a Set of Test-Examples

    DEFF Research Database (Denmark)

    Thomsen, Per Grove; Zlatev, Zahari

    2007-01-01

    The variational data assimilation methods can successfully be used in different fields of science and engineering. An attempt to utilize available sets of observations in the efforts to improve (i) the models used to study different phenomena and (ii) the model results is systematically carried out wh... Several components of the data assimilation method (numerical algorithms for solving differential equations, splitting procedures and optimization algorithms) have been studied by using these tests. The presentation will include results from the testing carried out in the study... Forward and backward computations are carried out by using the model under consideration and its adjoint equations (both the model and its adjoint are defined by systems of differential equations). The major difficulty is caused by the huge increase of the computational load (normally by a factor of more than 100...)

  10. Hybrid variational principles and synthesis method for finite element neutron transport calculations

    Energy Technology Data Exchange (ETDEWEB)

    Ackroyd, R.T. (Queen Mary Coll., London (UK). Dept. of Nuclear Engineering); Nanneh, M.M. (Imperial Coll. of Science and Technology, London (UK). Dept. of Mechanical Engineering)

    1990-01-01

    A family of hybrid variational principles is derived using a generalised least-squares method. Neutron conservation is automatically satisfied for the hybrid principles employing two trial functions. No interface or reflection conditions need to be imposed on the independent even-parity trial function. For some hybrid principles a single trial function can be employed by relating one parity trial function to the other, using one of the parity transport equations in relaxed form. For other hybrid principles the trial functions can be employed sequentially. Synthesis of transport solutions, starting with the diffusion theory approximation, has been used as a way of reducing the scale of the computation that arises with established finite element methods for neutron transport. (author).

  11. Towards noninvasive method for the detection of pathological tissue variations by mapping different blood parameters

    Science.gov (United States)

    Abdallah, Omar; Qananwah, Qasem; Abo Alam, Kawther; Bolz, Armin

    2010-04-01

    This paper describes the development of an early detection method for probing pathological tissue variations. The method could be used for classifying various tissue alterations, namely tumor tissue or skin disorders. The approach is based on light scattering and absorption spectroscopy; the spectral content of the scattered light provides diagnostic information about the tissue contents. The importance of this method is that it uses safe light of lower power than that used in imaging methods, which enables frequent examination of the tissue, whereas existing modalities have drawbacks such as ionizing radiation, high cost, long examination times and the use of contrast agents. A modality for noninvasively mapping the oxygen saturation distribution in tissues is new in this area of research, since this study focuses on the oxygen in the tissue, which is assumed to be homogeneously distributed through the tissues. Cancers may cause greater vascularization and greater oxygen consumption than normal tissue; therefore, oxygen levels and their homogeneity will be altered depending on the tissue state. In the proposed system, the signal was extracted after illuminating the tissue with light emitting diodes (LEDs) that emit light at two wavelengths, red (660 nm) and infrared (880 nm). The absorption at these wavelengths is mainly due to oxyhemoglobin (HbO2) and deoxyhemoglobin (Hb), while other blood and tissue contents have only a small effect on the signal. The backscattered signal, received by a photodiode array (128 PDs), was measured and processed using LabVIEW. Photoplethysmogram (PPG) signals were measured at different locations. These signals will be used to differentiate between normal and pathological tissues. Variations in hemoglobin concentration and blood perfusion will also be used as an important indicator for this purpose.
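    The record measures red (660 nm) and infrared (880 nm) PPG signals to assess tissue oxygenation. A common way to turn such two-wavelength signals into an oxygen-saturation index (not necessarily the processing used by the authors) is the "ratio of ratios" with an empirical linear calibration; the sketch below uses made-up calibration coefficients and synthetic signals.

```python
import numpy as np

def ratio_of_ratios(red, infrared):
    """Compute R = (AC_red/DC_red) / (AC_ir/DC_ir) for one PPG window."""
    red, infrared = np.asarray(red, float), np.asarray(infrared, float)
    ac_dc = lambda x: (x.max() - x.min()) / x.mean()   # simple AC/DC estimate
    return ac_dc(red) / ac_dc(infrared)

def spo2_estimate(red, infrared, a=110.0, b=25.0):
    """Empirical linear calibration SpO2 ~ a - b*R; a and b are illustrative only."""
    return a - b * ratio_of_ratios(red, infrared)

# Synthetic one-second window sampled at 100 Hz (hypothetical pulsatile signals).
t = np.linspace(0.0, 1.0, 100)
red = 1.00 + 0.02 * np.sin(2 * np.pi * 1.2 * t)       # 660 nm channel
ir  = 1.00 + 0.04 * np.sin(2 * np.pi * 1.2 * t)       # 880 nm channel
print(f"R = {ratio_of_ratios(red, ir):.2f}, SpO2 estimate = {spo2_estimate(red, ir):.1f} %")
```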

  12. Comparison of GPS TEC variations with Holt-Winter method and IRI-2012 over Langkawi, Malaysia

    Science.gov (United States)

    Elmunim, N. A.; Abdullah, M.; Hasbi, A. M.; Bahari, S. A.

    2017-07-01

    The Total Electron Content (TEC) is the ionospheric parameter that has the main effect on radio wave propagation. Therefore, it is crucial to evaluate the performance of TEC models for the further improvement of ionospheric modelling in equatorial regions. This work presents an analysis of the TEC derived from the GPS Ionospheric Scintillation and TEC Monitor (GISTM) receiver at the Langkawi station, Malaysia, located at the geographic coordinates of 6.19°N, 99.51°E and the geomagnetic coordinates of 3.39°S, 172.42°E. The diurnal, monthly and seasonal variations in 2014 of the observed GPS-TEC were compared with the statistical Holt-Winter method and a recent version of the International Reference Ionosphere model (IRI-2012), using three different topside electron-density options: IRI-2001, IRI01-corr and NeQuick. The maximum peaks of the GPS-TEC were observed during the post-noon hours and the minimum during the early morning hours. In the monthly variations, the Holt-Winter method and the IRI-2012 topside options showed an underestimation that was in agreement with the GPS-TEC, except for the IRI-2001 model, which showed an overestimation in June, July and August. Regarding the seasonal variation of the GPS-TEC, the lowest values were observed during summer and the maximum during the equinox season. The IRI-2001 showed the highest percentage deviation compared to the IRI01-corr, NeQuick and Holt-Winter method. The accuracy of the models was found to be approximately 95% for the Holt-Winter method, 75% for the IRI01-corr, 73% for the NeQuick and 66% for the IRI-2001 model. Hence, it can be inferred that the Holt-Winter method showed a higher performance and better estimates of the TEC compared to the IRI01-corr and NeQuick, while the IRI-2001 showed a poor predictive performance in the equatorial region over Malaysia.
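    The record compares GPS-TEC with forecasts from the Holt-Winter method. The sketch below implements a minimal additive Holt-Winters recursion for a generic seasonal series; the smoothing constants, the 12-step period and the synthetic series are illustrative and are not the settings used in the study.

```python
import numpy as np

def holt_winters_additive(y, period, alpha=0.3, beta=0.05, gamma=0.2, horizon=12):
    """Minimal additive Holt-Winters: returns the in-sample fit and a short forecast."""
    y = np.asarray(y, float)
    level = y[:period].mean()
    trend = (y[period:2 * period].mean() - y[:period].mean()) / period
    season = list(y[:period] - level)
    fitted = []
    for i, obs in enumerate(y):
        s = season[i % period]
        fitted.append(level + trend + s)
        last_level = level
        level = alpha * (obs - s) + (1 - alpha) * (level + trend)
        trend = beta * (level - last_level) + (1 - beta) * trend
        season[i % period] = gamma * (obs - level) + (1 - gamma) * s
    forecast = [level + (h + 1) * trend + season[(len(y) + h) % period]
                for h in range(horizon)]
    return np.array(fitted), np.array(forecast)

# Synthetic monthly series with an annual cycle (a stand-in for a TEC time series).
months = np.arange(48)
y = (20 + 0.05 * months + 8 * np.sin(2 * np.pi * months / 12)
     + np.random.default_rng(0).normal(0, 1, 48))
fit, fc = holt_winters_additive(y, period=12)
print("next 12-step forecast:", np.round(fc, 1))
```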

  13. Sensitivity analysis and parameter estimation for distributed hydrological modeling: potential of variational methods

    Science.gov (United States)

    Castaings, W.; Dartus, D.; Le Dimet, F.-X.; Saulnier, G.-M.

    2009-04-01

    Variational methods are widely used for the analysis and control of computationally intensive spatially distributed systems. In particular, the adjoint state method enables a very efficient calculation of the derivatives of an objective function (response function to be analysed or cost function to be optimised) with respect to model inputs. In this contribution, it is shown that the potential of variational methods for distributed catchment scale hydrology should be considered. A distributed flash flood model, coupling kinematic wave overland flow and Green Ampt infiltration, is applied to a small catchment of the Thoré basin and used as a relatively simple (synthetic observations) but didactic application case. It is shown that forward and adjoint sensitivity analysis provide a local but extensive insight on the relation between the assigned model parameters and the simulated hydrological response. Spatially distributed parameter sensitivities can be obtained for a very modest calculation effort (~6 times the computing time of a single model run) and the singular value decomposition (SVD) of the Jacobian matrix provides an interesting perspective for the analysis of the rainfall-runoff relation. For the estimation of model parameters, adjoint-based derivatives were found exceedingly efficient in driving a bound-constrained quasi-Newton algorithm. The reference parameter set is retrieved independently from the optimization initial condition when the very common dimension reduction strategy (i.e. scalar multipliers) is adopted. Furthermore, the sensitivity analysis results suggest that most of the variability in this high-dimensional parameter space can be captured with a few orthogonal directions. A parametrization based on the SVD leading singular vectors was found very promising but should be combined with another regularization strategy in order to prevent overfitting.
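    The record analyses the rainfall-runoff relation through the singular value decomposition (SVD) of the Jacobian of the simulated response with respect to the parameters. The sketch below illustrates that dimension-reduction idea on a toy exponential model with a forward-difference Jacobian; the real study obtains derivatives far more cheaply from the adjoint model, and the toy model is not the flash flood model used there.

```python
import numpy as np

def toy_model(params, t):
    """Stand-in response function; not the flash-flood model of the study."""
    a, b, c = params
    return a * np.exp(-b * t) + c * t

def finite_difference_jacobian(f, params, t, eps=1e-6):
    """Forward-difference Jacobian dy/dparams; adjoint codes give this much more cheaply."""
    y0 = f(params, t)
    J = np.empty((y0.size, params.size))
    for j in range(params.size):
        p = params.copy()
        p[j] += eps
        J[:, j] = (f(p, t) - y0) / eps
    return J

t = np.linspace(0.0, 10.0, 50)
params = np.array([2.0, 0.5, 0.1])
J = finite_difference_jacobian(toy_model, params, t)
U, s, Vt = np.linalg.svd(J, full_matrices=False)
print("singular values:", np.round(s, 3))
print("fraction of sensitivity captured by the first direction:",
      round(s[0] ** 2 / np.sum(s ** 2), 3))
```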

  14. Sensitivity analysis and parameter estimation for distributed hydrological modeling: potential of variational methods

    Directory of Open Access Journals (Sweden)

    W. Castaings

    2009-04-01

    Full Text Available Variational methods are widely used for the analysis and control of computationally intensive spatially distributed systems. In particular, the adjoint state method enables a very efficient calculation of the derivatives of an objective function (response function to be analysed or cost function to be optimised) with respect to model inputs.

    In this contribution, it is shown that the potential of variational methods for distributed catchment scale hydrology should be considered. A distributed flash flood model, coupling kinematic wave overland flow and Green Ampt infiltration, is applied to a small catchment of the Thoré basin and used as a relatively simple (synthetic observations) but didactic application case.

    It is shown that forward and adjoint sensitivity analysis provide a local but extensive insight on the relation between the assigned model parameters and the simulated hydrological response. Spatially distributed parameter sensitivities can be obtained for a very modest calculation effort (~6 times the computing time of a single model run) and the singular value decomposition (SVD) of the Jacobian matrix provides an interesting perspective for the analysis of the rainfall-runoff relation.

    For the estimation of model parameters, adjoint-based derivatives were found exceedingly efficient in driving a bound-constrained quasi-Newton algorithm. The reference parameter set is retrieved independently from the optimization initial condition when the very common dimension reduction strategy (i.e. scalar multipliers) is adopted.

    Furthermore, the sensitivity analysis results suggest that most of the variability in this high-dimensional parameter space can be captured with a few orthogonal directions. A parametrization based on the SVD leading singular vectors was found very promising but should be combined with another regularization strategy in order to prevent overfitting.

  15. Evaluating variation in human gut microbiota profiles due to DNA extraction method and inter-subject differences

    Directory of Open Access Journals (Sweden)

    Brett eWagner Mackenzie

    2015-02-01

    Full Text Available The human gut contains dense and diverse microbial communities which have profound influences on human health. Gaining meaningful insights into these communities requires provision of high quality microbial nucleic acids from human fecal samples, as well as an understanding of the sources of variation and their impacts on the experimental model. We present here a systematic analysis of commonly used microbial DNA extraction methods, and identify significant sources of variation. Five extraction methods (Human Microbiome Project protocol, MoBio PowerSoil DNA Isolation Kit, QIAamp DNA Stool Mini Kit, ZR Fecal DNA MiniPrep, phenol:chloroform-based DNA isolation) were evaluated based on the following criteria: DNA yield, quality and integrity, and microbial community structure based on Illumina amplicon sequencing of the V4 region of bacterial and archaeal 16S rRNA genes. Our results indicate that the largest portion of variation within the model was attributed to differences between subjects (biological variation), with a smaller proportion of variation associated with DNA extraction method (technical variation) and intra-subject variation. A comprehensive understanding of the potential impact of technical variation on the human gut microbiota will help limit preventable bias, enabling more accurate diversity estimates.

  16. Evaluating variation in human gut microbiota profiles due to DNA extraction method and inter-subject differences.

    Science.gov (United States)

    Wagner Mackenzie, Brett; Waite, David W; Taylor, Michael W

    2015-01-01

    The human gut contains dense and diverse microbial communities which have profound influences on human health. Gaining meaningful insights into these communities requires provision of high quality microbial nucleic acids from human fecal samples, as well as an understanding of the sources of variation and their impacts on the experimental model. We present here a systematic analysis of commonly used microbial DNA extraction methods, and identify significant sources of variation. Five extraction methods (Human Microbiome Project protocol, MoBio PowerSoil DNA Isolation Kit, QIAamp DNA Stool Mini Kit, ZR Fecal DNA MiniPrep, phenol:chloroform-based DNA isolation) were evaluated based on the following criteria: DNA yield, quality and integrity, and microbial community structure based on Illumina amplicon sequencing of the V4 region of bacterial and archaeal 16S rRNA genes. Our results indicate that the largest portion of variation within the model was attributed to differences between subjects (biological variation), with a smaller proportion of variation associated with DNA extraction method (technical variation) and intra-subject variation. A comprehensive understanding of the potential impact of technical variation on the human gut microbiota will help limit preventable bias, enabling more accurate diversity estimates.

  17. Automatic Registration Method for Optical Remote Sensing Images with Large Background Variations Using Line Segments

    Directory of Open Access Journals (Sweden)

    Xiaolong Shi

    2016-05-01

    Full Text Available Image registration is an essential step in the process of image fusion, environment surveillance and change detection. Finding correct feature matches during the registration process proves to be difficult, especially for remote sensing images with large background variations (e.g., images taken before and after an earthquake or flood). Traditional registration methods based on local intensity probably cannot maintain steady performance, as differences are significant in the same area of the corresponding images, and ground control points are not always available in many disaster images. In this paper, an automatic image registration method based on the line segments on the main shape contours (e.g., coastal lines, long roads and mountain ridges) is proposed for remote sensing images with large background variations, because the main shape contours hold relatively more invariant information. First, a line segment detector called EDLines (Edge Drawing Lines), proposed by Akinlar et al. in 2011, is used to extract line segments from the two corresponding images, and a line validation step is performed to remove meaningless and fragmented line segments. Then, a novel line segment descriptor with a new histogram binning strategy, which is robust to global geometrical distortions, is generated for each line segment based on the geometrical relationships, including both the locations and orientations of the remaining line segments relative to it. As a result of the invariance of the main shape contours, correct line segment matches will have similar descriptors and can be obtained by cross-matching among the descriptors. Finally, a spatial consistency measure is used to remove incorrect matches, and the transformation parameters between the reference and sensed images can be determined. Experiments with images from different types of satellite datasets, such as Landsat7, QuickBird and WorldView, demonstrate the effectiveness of the proposed algorithm.

  18. 3D computation of non-linear eddy currents: Variational method and superconducting cubic bulk

    Science.gov (United States)

    Pardo, Enric; Kapolka, Milan

    2017-09-01

    Computing the electric eddy currents in non-linear materials, such as superconductors, is not straightforward. The design of superconducting magnets and power applications needs electromagnetic computer modeling, which is in many cases a three-dimensional (3D) problem. Since 3D problems require long computing times, novel time-efficient modeling tools are highly desirable. This article presents a novel computational modeling method based on a variational principle. The self-programmed implementation uses an original minimization method, which divides the sample into sectors. This speeds up the computations with no loss of accuracy, while enabling efficient parallelization. The method could also be applied to model transients in linear materials or networks of non-linear electrical elements. As an example, we analyze the magnetization currents of a cubic superconductor. This 3D situation remains unknown, in spite of the fact that it is often met in material characterization and bulk applications. We found that below the penetration field, and in part of the sample, current flux lines are not rectangular and significantly bend in the direction parallel to the applied field. In conclusion, the presented numerical method is able to time-efficiently solve fully 3D situations without loss of accuracy.

  19. Probabilistic method for detecting copy number variation in a fetal genome using maternal plasma sequencing.

    Science.gov (United States)

    Rampášek, Ladislav; Arbabi, Aryan; Brudno, Michael

    2014-06-15

    The past several years have seen the development of methodologies to identify genomic variation within a fetus through the non-invasive sequencing of maternal blood plasma. These methods are based on the observation that maternal plasma contains a fraction of DNA (typically 5-15%) originating from the fetus, and such methodologies have already been used for the detection of whole-chromosome events (aneuploidies), and to a more limited extent for smaller (typically several megabases long) copy number variants (CNVs). Here we present a probabilistic method for non-invasive analysis of de novo CNVs in fetal genome based on maternal plasma sequencing. Our novel method combines three types of information within a unified Hidden Markov Model: the imbalance of allelic ratios at SNP positions, the use of parental genotypes to phase nearby SNPs and depth of coverage to better differentiate between various types of CNVs and improve precision. Our simulation results, based on in silico introduction of novel CNVs into plasma samples with 13% fetal DNA concentration, demonstrate a sensitivity of 90% for CNVs >400 kb (with 13 calls in an unaffected genome), and 40% for 50-400 kb CNVs (with 108 calls in an unaffected genome). Implementation of our model and data simulation method is available at http://github.com/compbio-UofT/fCNV. © The Author 2014. Published by Oxford University Press.

  20. Variational Level Set Method for Two-Stage Image Segmentation Based on Morphological Gradients

    Directory of Open Access Journals (Sweden)

    Zemin Ren

    2014-01-01

    Full Text Available We use the variational level set method and transition region extraction techniques to achieve the image segmentation task. The proposed scheme is carried out in two steps. We first develop a novel algorithm to extract the transition region based on the morphological gradient. After this, we integrate the transition region into a variational level set framework and develop a novel geometric active contour model, which includes an external energy based on the transition region and a fractional-order edge indicator function. The external energy is used to drive the zero level set toward the desired image features, such as object boundaries. Due to this external energy, the proposed model allows for more flexible initialization. The fractional-order edge indicator function is incorporated into the length regularization term to diminish the influence of noise. Moreover, an internal energy is added into the proposed model to penalize the deviation of the level set function from a signed distance function. The resulting evolution of the level set function is the gradient flow that minimizes the overall energy functional. The proposed model has been applied to both synthetic and real images with promising results.
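    The first stage of the record's scheme extracts a transition region from the morphological gradient. The sketch below computes a grey-scale morphological gradient (dilation minus erosion) with scipy.ndimage and thresholds it with a simple mean-plus-k-sigma rule; that threshold rule and the synthetic test image are stand-ins, not the paper's procedure.

```python
import numpy as np
from scipy import ndimage

def morphological_gradient(image, size=3):
    """Grey-scale morphological gradient: dilation minus erosion."""
    return (ndimage.grey_dilation(image, size=(size, size))
            - ndimage.grey_erosion(image, size=(size, size)))

def transition_region(image, size=3, k=1.0):
    """Pixels whose gradient exceeds mean + k*std (illustrative threshold rule)."""
    g = morphological_gradient(image.astype(float), size)
    return g > (g.mean() + k * g.std())

# Synthetic image: a bright disc on a dark background.
yy, xx = np.mgrid[0:128, 0:128]
img = ((xx - 64) ** 2 + (yy - 64) ** 2 < 30 ** 2).astype(float)
mask = transition_region(img)
print("transition-region pixels:", int(mask.sum()))
```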

  1. Variational treatment of electron-polyatomic-molecule scattering calculations using adaptive overset grids

    Science.gov (United States)

    Greenman, Loren; Lucchese, Robert R.; McCurdy, C. William

    2017-11-01

    The complex Kohn variational method for electron-polyatomic-molecule scattering is formulated using an overset-grid representation of the scattering wave function. The overset grid consists of a central grid and multiple dense atom-centered subgrids that allow the simultaneous spherical expansions of the wave function about multiple centers. Scattering boundary conditions are enforced by using a basis formed by the repeated application of the free-particle Green's function and potential, Ĝ₀⁺V̂, on the overset grid in a Born-Arnoldi solution of the working equations. The theory is shown to be equivalent to a specific Padé approximant to the T matrix and has rapid convergence properties, in both the number of numerical basis functions employed and the number of partial waves employed in the spherical expansions. The method is demonstrated in calculations on methane and CF4 in the static-exchange approximation and compared in detail with calculations performed with the numerical Schwinger variational approach based on single-center expansions. An efficient procedure for operating with the free-particle Green's function and exchange operators (to which no approximation is made) is also described.

  2. Approximate method in estimation sensitivity responses to variations in delayed neutron energy spectra

    Energy Technology Data Exchange (ETDEWEB)

    Yoo, J.; Shin, H. S.; Song, T. Y.; Park, W. S. [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    1997-12-31

    Our previous numerical results on computing the point kinetics equations show a possibility of developing approximations to estimate the sensitivity responses of a nuclear reactor. We recalculate the sensitivity responses by retaining the corrections to first order in the sensitivity parameter. We present a method for computing sensitivity responses of a nuclear reactor based on an approximation derived from the point kinetics equations. Exploiting this approximation, we found that the first order approximation works for estimating variations in the time to reach peak power, because of their linear dependence on the sensitivity parameter, and that there are errors in estimating the peak power in the first order approximation for larger sensitivity parameters. To confirm the legitimacy of our approximation, these approximate results are compared with exact results obtained from our previous numerical study. 4 refs., 2 figs., 3 tabs. (Author)

  3. A coastal zone segmentation variational model and its accelerated ADMM method

    Science.gov (United States)

    Huang, Baoxiang; Chen, Ge; Zhang, Xiaolei; Yang, Huan

    2017-12-01

    Effective and efficient SAR image segmentation plays a significant role in coastal zone interpretation. In this paper, a coastal zone segmentation model is proposed based on the Potts model. By introducing an edge self-adaption parameter and modifying the noisy data term, the proposed variational model provides a good solution for coastal zone SAR images, whose common characteristics are inherent speckle noise and complicated geometrical details. However, the proposed model is difficult to solve due to its nonlinear, non-convex and non-smooth characteristics. Following curve evolution theory and the operator splitting method, the minimization problem is reformulated as a constrained minimization problem. A fast alternating minimization iterative scheme is designed to implement the coastal zone segmentation. Finally, various two-stage and multiphase experimental results illustrate the advantage of the proposed segmentation model and indicate the high computational efficiency of the designed numerical approximation algorithm.

  4. Application of Stochastic variational method with correlated Ground States to coulombic systems

    Energy Technology Data Exchange (ETDEWEB)

    Usukura, Junko; Suzuki, Yasuyuki [Niigata Univ. (Japan); Varga, K.

    1998-07-01

    The positronium molecule Ps₂ has not been found experimentally yet, and it has been believed theoretically that Ps₂ has only one bound state, with L = 0. Using the stochastic variational method, we predicted the existence of a new bound state of Ps₂: an excited state with L = 1 whose binding originates from the Pauli principle. There are two decay modes of Ps₂(P): one is pair annihilation and the other is an electric dipole (E1) transition to the ground state. While it is difficult to distinguish the γ-ray caused by annihilation of Ps₂ from that of Ps, since both have the same energy, the energy (4.94 eV) of the photon emitted in the E1 transition is specific enough to be distinguished from other spectra. The excited state is therefore one of the clues for observing Ps₂. (author)

  5. Application of stochastic variational method to 3-4 body systems with realistic nuclear force

    Energy Technology Data Exchange (ETDEWEB)

    Ohbayashi, Yoshihide [Niigata Univ. (Japan); Varga, K.; Suzuki, Yoshiyuki

    1997-05-01

    The stochastic variational method (SVM) was applied to simulate the triton and the alpha particle with realistic nuclear forces such as Reid V8 (RV8) and Argonne V6 and V8 (AV6 and AV8). The 3-4 body systems were solved with about 300-400 basis dimensions, and the results agreed with the most accurate solutions. The convergence of the energy of the 3-4 body systems was rapid with the AV6 and AV8 potentials, but slow with RV8, which has a strong repulsive core. The energy values obtained with SVM and GFMC were almost the same. The number of dimensions needed to converge the triton energy was decreased by refinement. (S.Y.)

  6. A spatially-variant deconvolution method based on total variation for optical coherence tomography images

    Science.gov (United States)

    Almasganj, Mohammad; Adabi, Saba; Fatemizadeh, Emad; Xu, Qiuyun; Sadeghi, Hamid; Daveluy, Steven; Nasiriavanaki, Mohammadreza

    2017-03-01

    Optical Coherence Tomography (OCT) has great potential to elicit clinically useful information from tissues due to its high axial and transversal resolution. In practice, an OCT setup cannot reach its theoretical resolution due to imperfections of its components, which make its images blurry. The blurring differs across regions of the image; thus, it cannot be modeled by a single point spread function (PSF). In this paper, we investigate the use of solid phantoms to estimate the PSF of each sub-region of the imaging system. We then utilize Lucy-Richardson, Hybr and total variation (TV) based iterative deconvolution methods for mitigating the resulting spatially variant blur. It is shown that the TV-based method suppresses the so-called speckle noise in OCT images better than the two other approaches. The performance of the proposed algorithm is tested on various samples, including several skin tissues as well as a test image blurred with a synthetic PSF map, demonstrating qualitatively and quantitatively the advantage of the TV-based deconvolution method using a spatially variant PSF for enhancing image quality.
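    The record estimates a PSF per sub-region and deconvolves each region separately. As a hedged sketch of that spatially variant idea, the code below applies a plain Richardson-Lucy loop tile by tile with a depth-dependent Gaussian PSF; the TV-regularized variant described in the record is not reproduced, and the PSF widths and test image are synthetic.

```python
import numpy as np
from scipy.signal import fftconvolve

def gaussian_psf(sigma, size=15):
    """Normalized 2-D Gaussian PSF (an assumed PSF shape, for illustration)."""
    ax = np.arange(size) - size // 2
    g = np.exp(-(ax[:, None] ** 2 + ax[None, :] ** 2) / (2 * sigma ** 2))
    return g / g.sum()

def richardson_lucy(image, psf, n_iter=20, eps=1e-12):
    """Plain Richardson-Lucy iteration; the paper's TV-regularized scheme is not reproduced."""
    estimate = np.full_like(image, image.mean())
    psf_mirror = psf[::-1, ::-1]
    for _ in range(n_iter):
        blurred = fftconvolve(estimate, psf, mode="same")
        ratio = image / (blurred + eps)
        estimate *= fftconvolve(ratio, psf_mirror, mode="same")
    return estimate

def deconvolve_by_tiles(image, sigmas):
    """Apply RL tile by tile with a depth-dependent PSF width (spatially variant)."""
    out = np.empty_like(image)
    bounds = np.linspace(0, image.shape[0], len(sigmas) + 1, dtype=int)
    for k, sigma in enumerate(sigmas):
        sl = slice(bounds[k], bounds[k + 1])
        out[sl] = richardson_lucy(image[sl], gaussian_psf(sigma))
    return out

rng = np.random.default_rng(1)
bscan = rng.random((128, 128))                      # stand-in for an OCT B-scan
restored = deconvolve_by_tiles(bscan, sigmas=[1.0, 1.5, 2.0, 2.5])
print(restored.shape)
```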

  7. Quality evaluation of energy consumed in flow regulation method by speed variation in centrifugal pumps

    Science.gov (United States)

    Morales, S.; Culman, M.; Acevedo, C.; Rey, C.

    2014-06-01

    Nowadays, energy efficiency and electric power quality are two inseparable issues in the evaluation of three-phase induction motors, framed within the program of Rational and Efficient Use of Energy (RUE). The use of efficient energy-saving devices has been increasing significantly in RUE programs, for example the use of variable frequency drives (VFDs) in pumping systems. The overall objective of the project was to evaluate the impact on power quality and energy efficiency in a centrifugal pump driven by a three-phase induction motor, using the flow control method of speed variation by VFD. The fundamental purpose was to examine the opinions commonly expressed about the use of flow control methods in centrifugal pumps, analyzing their advantages and disadvantages in order to support the industry in making correct decisions. The VFD changes the speed of the motor-pump system, increasing efficiency compared to the classical methods of regulation. However, the VFD creates conditions that degrade the quality of the electric power supplied to the system, and therefore its efficiency, due to the nonlinearity and the presence of harmonic currents. The analysis of power quality showed that the information that reaches the industry is often biased.

  8. Quark number fluctuations at finite temperature and finite chemical potential via the Dyson-Schwinger equation approach

    Science.gov (United States)

    Xin, Xian-yin; Qin, Si-xue; Liu, Yu-xin

    2014-10-01

    We investigate the quark number fluctuations up to fourth order in matter composed of two light quark flavors with isospin symmetry, at finite temperature and finite chemical potential, using the Dyson-Schwinger equation approach to QCD. In order to solve the quark gap equation, we approximate the dressed quark-gluon vertex with the bare one and adopt both the Maris-Tandy model and the infrared-constant (Qin-Chang) model for the dressed gluon propagator. Our results indicate that the second, third, and fourth order fluctuations of the net quark number all diverge at the critical endpoint (CEP). Around the CEP, the second order fluctuation exhibits an obvious bump, while the third and fourth order ones exhibit distinct wiggles between positive and negative values. For the Maris-Tandy model and the Qin-Chang model, we give the pseudocritical temperature at zero quark chemical potential as Tc = 146 MeV and 150 MeV, and locate the CEP at (μ_{E}^{q}, T_{E}) = (120, 124) MeV and (124, 129) MeV, respectively. In addition, our results show that the fluctuations are insensitive to the details of the model, but the location of the CEP shifts to lower chemical potential and higher temperature as the confinement length scale increases.

  9. Low Dose PET Image Reconstruction with Total Variation Using Alternating Direction Method.

    Directory of Open Access Journals (Sweden)

    Xingjian Yu

    Full Text Available In this paper, a total variation (TV) minimization strategy is proposed to overcome the problem of sparse spatial resolution and large amounts of noise in low dose positron emission tomography (PET) imaging reconstruction. Two types of objective function were established based on two statistical models of measured PET data, least-square (LS) TV for the Gaussian distribution and Poisson-TV for the Poisson distribution. To efficiently obtain high quality reconstructed images, the alternating direction method (ADM) is used to solve these objective functions. As compared with the iterative shrinkage/thresholding (IST) based algorithms, the proposed ADM can make full use of the TV constraint and its convergence rate is faster. The performance of the proposed approach is validated through comparisons with the expectation-maximization (EM) method using synthetic and experimental biological data. In the comparisons, the results of both LS-TV and Poisson-TV are taken into consideration to find which models are more suitable for PET imaging, in particular low-dose PET. To evaluate the results quantitatively, we computed bias, variance, and the contrast recovery coefficient (CRC) and drew profiles of the reconstructed images produced by the different methods. The results show that both Poisson-TV and LS-TV can provide a high visual quality at a low dose level. The bias and variance of the proposed LS-TV and Poisson-TV methods are 20% to 74% less at all counting levels than those of the EM method. Poisson-TV gives the best performance in terms of high-accuracy reconstruction with the lowest bias and variance as compared to the ground truth (14.3% less bias and 21.9% less variance). In contrast, LS-TV gives the best performance in terms of the high contrast of the reconstruction with the highest CRC.

  10. Stiffeners in variational-difference method for calculating shells with complex geometry

    Directory of Open Access Journals (Sweden)

    Ivanov Vyacheslav Nikolaevich

    2014-05-01

    Full Text Available We have already considered the introduction of reinforcements in the variational-difference method (VDM) of analysis of shells with complex shape. At the moment, only ribbed shells of revolution and shallow shells can be calculated with the help of the developed analytical and finite-difference methods. Ribbed shells of arbitrary shape can be calculated only using the finite element method (FEM). However, there are problems when using FEM which are absent in the finite- and variational-difference methods: rigid body motion; conforming trial functions; parameterization of a surface; independent stress-strain state. In this regard, stiffeners are introduced in VDM. VDM is based on the Lagrange principle - the principle of minimum total potential energy. The stress-strain state of the ribs is described by the Kirchhoff-Clebsch theory of curvilinear bars: tension, bending and torsion of the ribs are taken into account. The stress-strain state of the shells is described by the Kirchhoff-Love theory of thin elastic shells. A position of points of the middle surface is defined by curvilinear orthogonal coordinates α, β. Curved ribs are situated along coordinate lines. The strain energy of the ribs is added into the total strain energy to account for the ribs. A matrix form of the strain energy of a rib is formed similar to the matrix form of the strain energy of the shell. A matrix of geometrical characteristics of a rib is formed from components of the matrices of geometric characteristics of a shell. A matrix of mechanical characteristics of a rib contains the rib's eccentricity and the geometrical characteristics of the rib's section. Derivatives of displacements in the strain vector are replaced with finite-difference relations after the middle surface of the shell is covered with a grid (grid lines coincide with the coordinate lines of principal curvatures). In this case the total potential energy functional becomes a function of the nodal displacements. Partial derivatives with respect to the unknown nodal displacements are ...

  11. METHOD OF SOFTWARE-BASED COMPENSATION OF TECHNOLOGICAL VARIATION IN CHROMATICITY COORDINATES OF LCD PANELS

    Directory of Open Access Journals (Sweden)

    I. O. Zharinov

    2015-05-01

    Full Text Available Subject of research. The problem of software-based compensation of technological variation in the chromaticity coordinates of liquid crystal panels is considered. A method of software-based compensation of technological variation in chromaticity coordinates is proposed. The method makes the color reproduction characteristics of series-produced samples of on-board indication equipment correspond to the sample of equipment taken as the standard. Method. Mathematical calculation of the profile is performed for the given model of the liquid crystal panel. The coefficients that correspond to the typical values of the chromaticity coordinates for the vertices of the color gamut triangle constitute a reference mathematical model of the LCD panel from a specific manufacturer. At the incoming inspection stage, the sample of the liquid crystal panel that is to be used within the indication equipment is mounted on the lighting test unit, where Nokia-Test control provides the formation of the RGB codes for displaying the image of a homogeneous field in red, green, blue and white. The measurement of the (x, y) chromaticity coordinates in red, green, blue and white is performed using a colorimeter with a known value of absolute error. Instead of using the lighting equipment, such measurements may be carried out directly on the sample indication equipment during the customizing procedure. The measured values are used to calculate the individual LCD-panel profile coefficients through the use of the Grassmann transformation, establishing mutual relations between the XYZ color coordinates and the RGB codes to be used for displaying the image on the liquid crystal panel. The obtained coefficients are stored in the memory of the graphics controller together with the functional software and then used for image display. Main results. The efficiency of the proposed method of software-based compensation for technological variation of ...
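    The record builds an individual panel profile from measured (x, y) chromaticities via a Grassmann-type transformation. The sketch below shows the standard colorimetric computation of a 3x3 RGB-to-XYZ matrix from the chromaticities of the primaries and the white point; the chromaticity values are generic sRGB-like numbers rather than a measured panel, and the paper's full profile format is not reproduced.

```python
import numpy as np

def rgb_to_xyz_matrix(xy_r, xy_g, xy_b, xy_w, Yw=1.0):
    """3x3 RGB->XYZ matrix from the primaries' and white point's (x, y) chromaticities."""
    def xyz_from_xy(x, y):
        return np.array([x / y, 1.0, (1.0 - x - y) / y])
    P = np.column_stack([xyz_from_xy(*xy_r), xyz_from_xy(*xy_g), xyz_from_xy(*xy_b)])
    white = xyz_from_xy(*xy_w) * Yw
    scale = np.linalg.solve(P, white)      # per-primary scaling so that R=G=B=1 maps to white
    return P * scale                       # scale each column by its primary's coefficient

# Generic sRGB-like chromaticities (illustrative, not a measured panel profile).
M = rgb_to_xyz_matrix((0.64, 0.33), (0.30, 0.60), (0.15, 0.06), (0.3127, 0.3290))
print(np.round(M, 4))
print("white check:", np.round(M @ np.ones(3), 4))   # should equal the white-point XYZ
```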

  12. Application of Method of Variation to Analyze and Predict Human Induced Modifications of Water Resource Systems

    Science.gov (United States)

    Dessu, S. B.; Melesse, A. M.; Mahadev, B.; McClain, M.

    2010-12-01

    Water resource systems have often been built around gravitational surface and subsurface flows because of their practicality in hydrological modeling and prediction. Activities such as inter/intra-basin water transfer, the use of small pumps and the construction of micro-ponds challenge the tradition of natural rivers as the water resource management unit. On the contrary, precipitation is barely affected by topography, and plot-scale harvesting in wet regions can be more manageable than diverting from rivers. Therefore, it is instructive to attend to systems where precipitation drives the dynamics while the internal mechanics constitute a spectrum of human activity and decision in a network of plots. The traded volume and path of harvested precipitation depend on the water balance, the energy balance and the kinematics of supply and demand. The method of variation can be used to understand and predict the implications of local excess precipitation harvest and exchange on the natural water system. A system model was developed using the variational form of the Euler-Bernoulli equation for the Kenyan Mara River basin. Satellite-derived digital elevation models, precipitation estimates, and surface properties such as fractional impervious surface area are used to estimate the available water resource. Four management conditions are imposed in the model: gravitational flow, open water extraction, and high water-use investment upstream and downstream, respectively. According to the model, the first management maintains the basin status quo while the open-source management could induce externalities. The high water market upstream in the third management offers more than 50% of the basin-wide total revenue to the upper third section of the basin and thus may promote more harvesting. The open-source and upstream exploitation suggest a potential drop of water availability downstream. The model exposed the latent potential of the economic gradient to reconfigure the flow network along the direction where the ...

  13. Investigating different filter and rescaling methods on simulated GRACE-like TWS variations for hydrological applications

    Science.gov (United States)

    Zhang, Liangjing; Dobslaw, Henryk; Dahle, Christoph; Thomas, Maik; Neumayer, Karl-Hans; Flechtner, Frank

    2017-04-01

    By operating for more than one decade now, the GRACE satellite mission provides valuable information on the total water storage (TWS) for hydrological and hydro-meteorological applications. The increasing interest in the use of GRACE-based TWS requires an in-depth assessment of the reliability of the outputs and also of their uncertainties. Through years of development, different post-processing methods have been suggested for TWS estimation. However, since GRACE offers a unique way to provide TWS at high spatial and temporal scales, there is no global ground truth data available to fully validate the results. In this contribution, we re-assess a number of commonly used post-processing methods using a simulated GRACE-type gravity field time series based on realistic orbits and instrument error assumptions as well as background error assumptions out of the updated ESA Earth System Model. Three non-isotropic filter methods from Kusche (2007) and a combined filter from DDK1 and DDK3 based on the ground tracks are tested. Rescaling factors estimated from five different hydrological models and the ensemble median are applied to the post-processed simulated GRACE-type TWS estimates to correct for bias and leakage. Time-variant rescaling factors, such as monthly scaling factors and separate scaling factors for seasonal and long-term variations, are investigated as well. Since TWS anomalies out of the post-processed simulation results can be readily compared to the time-variable Earth System Model initially used as "truth" during the forward simulation step, we are able to thoroughly check the plausibility of our error estimation assessment (Zhang et al., 2016) and will subsequently recommend a processing strategy that shall also be applied for planned GRACE and GRACE-FO Level-3 products for terrestrial applications provided by GFZ. Kusche, J., 2007: Approximate decorrelation and non-isotropic smoothing of time-variable GRACE-type gravity field models. J. Geodesy, 81 (11), 733-749, doi:10...

  14. Prevalence of antibiotic residues in commercial milk and its variation by season and thermal processing methods

    Directory of Open Access Journals (Sweden)

    Fathollah Aalipour

    2013-01-01

    Full Text Available Aims: In this study, the prevalence of antibiotic residues in pasteurized and sterilized commercial milk available in Shahre-kourd, Iran, was investigated. In addition, the influence of seasonal temperature changes on the prevalence of contamination was studied. Materials and Methods: A total of 187 commercial milk samples, comprising 154 pasteurized and 33 sterilized milk samples, were collected from the market between early January 2012 and late July of the same year. The presence of antibiotic residues was detected using the microbiological detection test kit Eclipse 100 as a semi-quantitative method. Results: The results showed that 37 of the samples (19.8%) contained antibiotic residues above the European Union Maximum Residue Limits (EU-MRLs), while 28 samples (14.97%) were found to be contaminated at concentrations below the EU-MRLs. There was no significant difference between the contamination rates of the pasteurized and Ultra High Temperature (UHT) sterilized samples. Similarly, the variation of weather temperature with the seasons had no effect on the contamination prevalence of the milk samples (P > 0.05). Conclusion: Based on the results of this study, antibiotic residues were present in a considerable proportion of the milk samples. Neither the season nor the type of thermal processing of the commercial milk had a noticeable impact on the prevalence level in the milk samples. However, an increasing trend in the prevalence of antibiotic residues was observed with increasing temperature through the warm season.

  15. Optimal experimental designs for estimating Henry's law constants via the method of phase ratio variation.

    Science.gov (United States)

    Kapelner, Adam; Krieger, Abba; Blanford, William J

    2016-10-14

    When measuring Henry's law constants (kH) using the phase ratio variation (PRV) method via headspace gas chromatography (GC), the value of kH of the compound under investigation is calculated from the ratio of the slope to the intercept of a linear regression of the inverse GC response versus the ratio of gas to liquid volumes of a series of vials drawn from the same parent solution. Thus, an experimenter collects measurements consisting of the independent variable (the gas/liquid volume ratio) and dependent variable (the inverse GC peak area, GC⁻¹). A review of the literature found that the common design is a simple uniform spacing of liquid volumes. We present an optimal experimental design which estimates kH with minimum error and provides multiple means for building confidence intervals for such estimates. We illustrate performance improvements of our design with an example measuring the kH for Naphthalene in aqueous solution as well as simulations on previous studies. Our designs are most applicable after a trial run defines the linear GC response and the linear phase-ratio-to-GC⁻¹ region (where the PRV method is suitable), after which a practitioner can collect measurements in bulk. The designs can be easily computed using our open source software optDesignSlopeInt, an R package on CRAN. Copyright © 2016 Elsevier B.V. All rights reserved.
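    Following the record's description, kH is obtained as the ratio of the slope to the intercept of a regression of the inverse GC response on the gas/liquid volume ratio. The sketch below performs that fit with numpy.polyfit on synthetic vial data; the numbers are for illustration only, and the optimal-design aspect of the paper is not reproduced.

```python
import numpy as np

def kh_from_prv(volume_ratios, inverse_response):
    """Fit 1/A = intercept + slope * (Vg/Vl); per the record, kH = slope / intercept."""
    slope, intercept = np.polyfit(volume_ratios, inverse_response, 1)
    return slope / intercept

# Synthetic vial series (illustrative numbers): gas/liquid volume ratios and 1/peak-area.
beta = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
inv_area = 1.0e-4 * (1.0 + 0.03 * beta)      # noise-free linear response, made up
print(f"kH estimate: {kh_from_prv(beta, inv_area):.4f} (dimensionless, illustrative)")
```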

  16. A variable-order laminated plate theory based on the variational-asymptotical method

    Science.gov (United States)

    Lee, Bok W.; Sutyrin, Vladislav G.; Hodges, Dewey H.

    1993-01-01

    The variational-asymptotical method is a mathematical technique by which the three-dimensional analysis of laminated plate deformation can be split into a linear, one-dimensional, through-the-thickness analysis and a nonlinear, two-dimensional, plate analysis. The elastic constants used in the plate analysis are obtained from the through-the-thickness analysis, along with approximate, closed-form three-dimensional distributions of displacement, strain, and stress. In this paper, a theory based on this technique is developed which is capable of approximating three-dimensional elasticity to any accuracy desired. The asymptotical method allows for the approximation of the through-the-thickness behavior in terms of the eigenfunctions of a certain Sturm-Liouville problem associated with the thickness coordinate. These eigenfunctions contain all the necessary information about the nonhomogeneities along the thickness coordinate of the plate and thus possess the appropriate discontinuities in the derivatives of displacement. The theory is presented in this paper along with numerical results for the eigenfunctions of various laminated plates.

  17. Solving variational problems and partial differential equations that map between manifolds via the closest point method

    Science.gov (United States)

    King, Nathan D.; Ruuth, Steven J.

    2017-05-01

    Maps from a source manifold M to a target manifold N appear in liquid crystals, color image enhancement, texture mapping, brain mapping, and many other areas. A numerical framework to solve variational problems and partial differential equations (PDEs) that map between manifolds is introduced within this paper. Our approach, the closest point method for manifold mapping, reduces the problem of solving a constrained PDE between manifolds M and N to the simpler problems of solving a PDE on M and projecting to the closest points on N. In our approach, an embedding PDE is formulated in the embedding space using closest point representations of M and N. This enables the use of standard Cartesian numerics for general manifolds that are open or closed, with or without orientation, and of any codimension. An algorithm is presented for the important example of harmonic maps and generalized to a broader class of PDEs, which includes p-harmonic maps. Improved efficiency and robustness are observed in convergence studies relative to the level set embedding methods. Harmonic and p-harmonic maps are computed for a variety of numerical examples. In these examples, we denoise texture maps, diffuse random maps between general manifolds, and enhance color images.
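    The record's algorithm alternates a PDE step in the embedding space with re-extension of values from the closest points on the manifold. The sketch below applies that loop to heat flow of a scalar function on the unit circle embedded in a 2-D grid; the grid size, time step and the crude nearest-grid-point extension are simplifications and not the interpolation used by the authors.

```python
import numpy as np

# Heat flow on the unit circle via the closest point method (simplified sketch).
n, h = 101, 4.0 / 100                      # grid covering [-2, 2]^2
x = np.linspace(-2, 2, n)
X, Y = np.meshgrid(x, x, indexing="ij")
R = np.sqrt(X ** 2 + Y ** 2) + 1e-12
CPx, CPy = X / R, Y / R                    # closest point on the unit circle

def extend(u):
    """Closest point extension by nearest-grid-point lookup (crude interpolation)."""
    i = np.clip(np.round((CPx + 2) / h).astype(int), 0, n - 1)
    j = np.clip(np.round((CPy + 2) / h).astype(int), 0, n - 1)
    return u[i, j]

u = extend(np.cos(3 * np.arctan2(Y, X)))   # initial data: cos(3*theta) on the circle
dt = 0.1 * h ** 2
for _ in range(200):
    lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
           np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4 * u) / h ** 2
    u = extend(u + dt * lap)               # PDE step in the embedding, then re-extend
print("max |u| after smoothing:", round(float(np.abs(u).max()), 3))
```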

  18. Biological variation: Evaluation of methods for constructing confidence intervals for estimates of within-person biological variation for different distributions of the within-person effect.

    Science.gov (United States)

    Røraas, Thomas; Støve, Bård; Petersen, Per Hyltoft; Sandberg, Sverre

    2017-05-01

    Precise estimates of the within-person biological variation, CVI, can be essential both for monitoring patients and for setting analytical performance specifications. The confidence interval, CI, may be used to evaluate the reliability of an estimate, as it is a good measure of the uncertainty of the estimated CVI. The aim of the present study is to evaluate and establish methods for constructing a CI with the correct coverage probability and non-coverage probability when estimating CVI. Data based on 3 models for the distribution of the within-person effect were simulated to assess the performance of 3 methods for constructing confidence intervals: the formula-based method for the nested ANOVA, the percentile bootstrap and the bootstrap-t methods. The performance of the evaluated methods for constructing a CI varied, depending both on the size of the CVI and on the type of distribution. The bootstrap-t CI has good and stable performance for the models evaluated, while the formula-based method is more distribution dependent. The percentile bootstrap performs poorly. The CI is an essential part of the estimation of within-person biological variation. Good coverage and non-coverage probabilities for the CI are achievable by using the bootstrap-t combined with CV-ANOVA. Supplemental R-code is provided online. Copyright © 2017 Elsevier B.V. All rights reserved.
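    The record recommends bootstrap-t intervals combined with CV-ANOVA. As a simplified illustration (a plain CV of one simulated data set rather than the nested CV-ANOVA estimate of within-person variation), the sketch below builds a bootstrap-t confidence interval by studentizing each resampled statistic with an inner-bootstrap standard error; the sample size and resampling counts are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(42)

def cv(x):
    """Coefficient of variation of a sample."""
    return x.std(ddof=1) / x.mean()

def bootstrap_t_ci(x, stat=cv, n_boot=1000, n_inner=100, alpha=0.05):
    """Bootstrap-t CI: studentize each resampled statistic with an inner-bootstrap SE."""
    theta = stat(x)
    se = np.std([stat(rng.choice(x, x.size)) for _ in range(n_inner)], ddof=1)
    t_stats = []
    for _ in range(n_boot):
        xb = rng.choice(x, x.size)
        se_b = np.std([stat(rng.choice(xb, xb.size)) for _ in range(n_inner)], ddof=1)
        t_stats.append((stat(xb) - theta) / se_b)
    q_hi, q_lo = np.percentile(t_stats, [100 * (1 - alpha / 2), 100 * alpha / 2])
    return theta - q_hi * se, theta - q_lo * se

# Simulated replicate results for one subject (arbitrary units).
x = rng.normal(100.0, 8.0, size=20)
lo, hi = bootstrap_t_ci(x)
print("CV = %.3f, 95%% bootstrap-t CI = (%.3f, %.3f)" % (cv(x), lo, hi))
```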

  19. Methods for quantifying the influences of pressure and temperature variation on metal hydride reaction rates measured under isochoric conditions.

    Science.gov (United States)

    Voskuilen, Tyler G; Pourpoint, Timothée L

    2013-11-01

    Analysis techniques for determining gas-solid reaction rates from gas sorption measurements obtained under non-constant pressure and temperature conditions often neglect temporal variations in these quantities. Depending on the materials in question, this can lead to significant variations in the measured reaction rates. In this work, we present two new analysis techniques for comparison between various kinetic models and isochoric gas measurement data obtained under varying temperature and pressure conditions in a high pressure Sievert system. We introduce the integral pressure dependence method and the temperature dependence factor as means of correcting for experimental variations, improving model-measurement fidelity, and quantifying the effect that such variations can have on measured reaction rates. We use measurements of hydrogen absorption in LaNi5 and TiCrMn to demonstrate the effect of each of these methods and show that their use can provide quantitative improvements in interpretation of kinetics measurements.

  20. Effect of culture methods on individual variation in the growth of sea cucumber Apostichopus japonicus within a cohort and family

    Science.gov (United States)

    Qiu, Tianlong; Zhang, Libin; Zhang, Tao; Bai, Yucen; Yang, Hongsheng

    2014-07-01

    There is substantial individual variation in the growth rates of sea cucumber Apostichopus japonicus individuals. This necessitates additional work to grade the seed stock and lengthens the production period. We evaluated the influence of three culture methods (free-mixed, isolated-mixed, isolated-alone) on individual variation in growth and assessed the relationship between feeding, energy conversion efficiency, and individual growth variation in individually cultured sea cucumbers. Of the different culture methods, animals grew best when reared in the isolated-mixed treatment (i.e., size classes were held separately), though there was no difference in individual variation in growth between rearing treatment groups. The individual variation in growth was primarily attributed to genetic factors. The difference in food conversion efficiency caused by genetic differences among individuals was thought to be the origin of the variance. The level of individual growth variation may be altered by interactions among individuals and environmental heterogeneity. Our results suggest that, in addition to traditional seed grading, design of a new kind of substrate that changes the spatial distribution of sea cucumbers would effectively enhance growth and reduce individual variation in growth of sea cucumbers in culture.

  1. Size-consistent variational approaches to nonlocal pseudopotentials: Standard and lattice regularized diffusion Monte Carlo methods revisited

    Science.gov (United States)

    Casula, Michele; Moroni, Saverio; Sorella, Sandro; Filippi, Claudia

    2010-04-01

    We propose improved versions of the standard diffusion Monte Carlo (DMC) and the lattice regularized diffusion Monte Carlo (LRDMC) algorithms. For the DMC method, we refine a scheme recently devised to treat nonlocal pseudopotential in a variational way. We show that such scheme—when applied to large enough systems—maintains its effectiveness only at correspondingly small enough time-steps, and we present two simple upgrades of the method which guarantee the variational property in a size-consistent manner. For the LRDMC method, which is size-consistent and variational by construction, we enhance the computational efficiency by introducing: (i) an improved definition of the effective lattice Hamiltonian which remains size-consistent and entails a small lattice-space error with a known leading term and (ii) a new randomization method for the positions of the lattice knots which requires a single lattice-space.

  2. Evaluating variation in human gut microbiota profiles due to DNA extraction method and inter-subject differences

    OpenAIRE

    Brett eWagner Mackenzie; David William Waite; Michael W Taylor

    2015-01-01

    The human gut contains dense and diverse microbial communities which have profound influences on human health. Gaining meaningful insights into these communities requires provision of high quality microbial nucleic acids from human fecal samples, as well as an understanding of the sources of variation and their impacts on the experimental model. We present here a systematic analysis of commonly used microbial DNA extraction methods, and identify significant sources of variation. Five extracti...

  3. A Bayesian method and its variational approximation for prediction of genomic breeding values in multiple traits

    Directory of Open Access Journals (Sweden)

    Hayashi Takeshi

    2013-01-01

    Full Text Available Abstract Background Genomic selection is an effective tool for animal and plant breeding, allowing effective individual selection without phenotypic records through the prediction of genomic breeding value (GBV). To date, genomic selection has focused on a single trait. However, actual breeding often targets multiple correlated traits, and, therefore, joint analysis taking into consideration the correlation between traits, which might result in more accurate GBV prediction than analyzing each trait separately, is suitable for multi-trait genomic selection. This would require an extension of the prediction model for single-trait GBV to the multi-trait case. As the computational burden of multi-trait analysis is even higher than that of single-trait analysis, an effective computational method for constructing a multi-trait prediction model is also needed. Results We described a Bayesian regression model incorporating variable selection for jointly predicting GBVs of multiple traits and devised both an MCMC iteration and a variational approximation for Bayesian estimation of parameters in this multi-trait model. The proposed Bayesian procedures with MCMC iteration and variational approximation were referred to as MCBayes and varBayes, respectively. Using simulated datasets of SNP genotypes and phenotypes for three traits with high and low heritabilities, we compared the accuracy in predicting GBVs between multi-trait and single-trait analyses as well as between MCBayes and varBayes. The results showed that, compared to single-trait analysis, multi-trait analysis enabled much more accurate GBV prediction for low-heritability traits correlated with high-heritability traits, by utilizing the correlation structure between traits, while the prediction accuracy for uncorrelated low-heritability traits was comparable or less with multi-trait analysis in comparison with single-trait analysis, depending on the setting for the prior probability that a SNP has zero effect.

  4. A new EEG measure using the 1D cluster variation method

    Science.gov (United States)

    Maren, Alianna J.; Szu, Harold H.

    2015-05-01

    A new information measure, drawing on the 1-D Cluster Variation Method (CVM), describes local pattern distributions (nearest-neighbor and next-nearest-neighbor) in a binary 1-D vector in terms of a single interaction enthalpy parameter h for the specific case where the fractions of elements in each of the two states are the same (x1=x2=0.5). An example application of this method would be EEG interpretation in Brain-Computer Interfaces (BCIs), especially in the frontier of invariant biometrics based on distinctive and invariant individual responses to stimuli containing an image of a person with whom there is a strong affiliative response (e.g., a person's grandmother). This measure is obtained by mapping EEG observed configuration variables (z1, z2, z3 for next-nearest-neighbor triplets) to h using the analytic function giving h in terms of these variables at equilibrium. This mapping yields a small phase-space region of h values, which characterizes local pattern distributions in the source data. The 1-D vector with equal fractions of units in each of the two states can be obtained using the method for transforming natural images into a binarized equi-probability ensemble (Saremi & Sejnowski, 2014; Stephens et al., 2013). An intrinsically 2-D data configuration can be mapped to 1-D using the 1-D Peano-Hilbert space-filling curve, which has demonstrated a baseline about 20 dB lower than other approaches (cf. Hsu & Szu, SPIE ICA, 2014). This CVM-based method has multiple potential applications; a near-term one is optimizing classification of the EEG signals from a COTS 1-D BCI baseball hat. This can result in a convenient 3-D lab-tethered EEG, configured as a 1-D CVM equiprobable binary vector, and potentially useful for Smartphone wireless display. Longer-range applications include interpreting neural assembly activations via high-density implanted soft, cellular-scale electrodes.

  5. Micro-CT image reconstruction based on alternating direction augmented Lagrangian method and total variation.

    Science.gov (United States)

    Gopi, Varun P; Palanisamy, P; Wahid, Khan A; Babyn, Paul; Cooper, David

    2013-01-01

    Micro-computed tomography (micro-CT) plays an important role in pre-clinical imaging. The radiation from micro-CT can result in excess radiation exposure to the specimen under test, hence the reduction of radiation from micro-CT is essential. The proposed research focused on analyzing and testing an alternating direction augmented Lagrangian (ADAL) algorithm to recover images from random projections using total variation (TV) regularization. The use of TV regularization in compressed sensing problems makes the recovered image sharper by preserving the edges or boundaries more accurately. In this work, the TV regularization problem is addressed by ADAL, which is a variant of the classic augmented Lagrangian method for structured optimization. The per-iteration computational complexity of the algorithm is two fast Fourier transforms, two matrix-vector multiplications and a linear-time shrinkage operation. Comparison of experimental results indicates that the proposed algorithm is stable, efficient and competitive with the existing algorithms for solving TV regularization problems. Copyright © 2013 Elsevier Ltd. All rights reserved.
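
    The "linear-time shrinkage operation" mentioned above is, in ADAL/ADMM-type TV solvers, a vector soft-thresholding of the gradient variables. The following sketch shows a generic form of that step; it is illustrative only (the function name, the finite-difference scheme and the threshold value are assumptions, not the authors' implementation).

```python
import numpy as np

def isotropic_tv_shrinkage(dx, dy, threshold):
    """Vector soft-thresholding used in ADMM/ADAL-type TV solvers.

    dx, dy    : horizontal/vertical finite-difference fields of the image
    threshold : shrinkage parameter (regularization weight / penalty parameter)
    Returns the shrunk gradient fields; runs in linear time in the pixel count.
    """
    mag = np.sqrt(dx**2 + dy**2)
    # Avoid division by zero; pixels with zero gradient stay zero.
    scale = np.maximum(mag - threshold, 0.0) / np.maximum(mag, 1e-12)
    return dx * scale, dy * scale

# Example: shrink the finite differences of a random "image".
u = np.random.rand(64, 64)
dx = np.diff(u, axis=1, append=u[:, -1:])
dy = np.diff(u, axis=0, append=u[-1:, :])
zx, zy = isotropic_tv_shrinkage(dx, dy, threshold=0.05)
```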

  6. An adaptive total variation image reconstruction method for speckles through disordered media

    Science.gov (United States)

    Gong, Changmei; Shao, Xiaopeng; Wu, Tengfei

    2013-09-01

    Multiple scattering of light in a highly disordered medium can break the diffraction limit of a conventional optical system when combined with an image reconstruction method. Once the transmission matrix of the imaging system is obtained, the target image can be reconstructed from its speckle pattern by an image reconstruction algorithm. Nevertheless, the restored image attained by common image reconstruction algorithms such as Tikhonov regularization has a relatively low signal-to-noise ratio (SNR) due to the experimental noise and reconstruction noise, greatly reducing the quality of the resulting image. In this paper, the speckle pattern of the test image is simulated by combining light propagation theory and statistical optics. Subsequently, an adaptive total variation (ATV) algorithm, TV minimization by augmented Lagrangian and alternating direction algorithms (TVAL3), is utilized to reconstruct the target image. Numerical simulation results show that the TVAL3 algorithm can effectively suppress the noise of the restored image and preserve more image details, thus greatly boosting the SNR of the restored image. It also indicates that, compared with the image directly formed by the `clean' system, the reconstructed results can overcome the diffraction limit of the `clean' system, which is conducive to the observation of cells and protein molecules in biological tissues and other structures at the micro/nano scale.

  7. Study on the Seismic Active Earth Pressure by Variational Limit Equilibrium Method

    Directory of Open Access Journals (Sweden)

    Jiangong Chen

    2016-01-01

    Full Text Available In the framework of limit equilibrium theory, the isoperimetric model of the functional extremum problem for the seismic active earth pressure is deduced according to the variational method. On this basis, Lagrange multipliers are introduced to convert the problem of seismic active earth pressure into a functional extremum problem of two undetermined function arguments. Based on the necessary conditions required for the existence of a functional extremum, the functions describing the slip surface and the normal stress distribution on the slip surface are obtained, and the functional extremum problem is further converted into a function optimization problem with two undetermined Lagrange multipliers. The calculated results show that the slip surface is a plane and the seismic active earth pressure is minimal when the action point is at the lower limit position. As the action point moves upward, the slip surface becomes a logarithmic spiral and the corresponding value of the seismic active earth pressure increases in a nonlinear manner. The seismic active earth pressure is maximal at the upper limit position. The interval estimate constructed from the minimum and maximum values of the seismic active earth pressure can provide a reference for the aseismic design of gravity retaining walls.

  8. A variational method for automatic localization of the most pathological ROI in the knee cartilage

    Science.gov (United States)

    Qazi, Arish A.; Dam, Erik B.; Loog, Marco; Nielsen, Mads; Lauze, Francois; Christiansen, Claus

    2008-03-01

    Osteoarthritis (OA) is a degenerative joint disease characterized by degradation of the articular cartilage, and is a major cause of disability. At present, there is no cure for OA and currently available treatments are directed towards relief of symptoms. Recently it was shown that cartilage homogeneity, visualized by MRI and representing the biochemical changes occurring in the cartilage, is a potential marker for early detection of knee OA. In this paper, based on homogeneity, we present an automatic technique, embedded in a variational framework, for localization of a region of interest in the knee cartilage that best indicates where the pathology of the disease is dominant. The technique is evaluated on 283 knee MR scans. We show that OA affects certain areas of the cartilage more distinctly, and that these lie towards the peripheral region of the cartilage. We propose that this region of the cartilage corresponds anatomically to the area covered by the meniscus in healthy subjects. This finding may provide valuable clues about the pathology and the etiology of OA and thereby may improve treatment efficacy. Moreover, our method is generic and may be applied to other organs as well.

  9. Inverse problem and variation method to optimize cascade heat exchange network in central heating system

    Science.gov (United States)

    Zhang, Yin; Wei, Zhiyuan; Zhang, Yinping; Wang, Xin

    2017-12-01

    Urban heating in northern China accounts for 40% of total building energy usage. In central heating systems, heat is often transferred from the heat source to users by the heat network, where several heat exchangers are installed at the heat source, substations and terminals, respectively. For a given overall heating capacity and heat source temperature, increasing the terminal fluid temperature is an effective way to improve the thermal performance of such a cascade heat exchange network for energy saving. In this paper, the mathematical optimization model of the cascade heat exchange network with three-stage heat exchangers in series is established. Aiming at maximizing the cold fluid temperature for a given hot fluid temperature and overall heating capacity, the optimal heat exchange area distribution and the medium fluids' flow rates are determined through the inverse problem and variation method. The preliminary results show that the heat exchange areas should be distributed equally among the heat exchangers. It also indicates that, in order to improve the thermal performance of the whole system, more heat exchange area should be allocated to the heat exchanger where the flow rate difference between the two fluids is relatively small. This work is important for guiding the optimization design of practical cascade heating systems.

  10. A photoacoustic imaging reconstruction method based on directional total variation with adaptive directivity.

    Science.gov (United States)

    Wang, Jin; Zhang, Chen; Wang, Yuanyuan

    2017-05-30

    In photoacoustic tomography (PAT), total variation (TV) based iterative algorithms are reported to have a good performance in PAT image reconstruction. However, the classical TV based algorithm fails to preserve the edges and texture details of the image because it is not sensitive to the direction of the image. Therefore, it is of great significance to develop a new PAT reconstruction algorithm that effectively overcomes this drawback of TV. In this paper, a directional total variation with adaptive directivity (DDTV) model-based PAT image reconstruction algorithm, which weightedly sums the image gradients based on the spatially varying directivity pattern of the image, is proposed to overcome the shortcomings of TV. The orientation field of the image is adaptively estimated through a gradient-based approach. The image gradients are weighted at every pixel based on both its anisotropic direction and another parameter, which evaluates the reliability of the estimated orientation field. An efficient algorithm is derived to solve the iteration problem associated with DDTV, with the directivity of the image adaptively updated at each iteration step. Several texture images with various directivity patterns are chosen as the phantoms for the numerical simulations. The 180-, 90- and 30-view circular scans are conducted. The results show that the DDTV-based PAT reconstruction algorithm outperforms the filtered back-projection (FBP) and TV algorithms in the quality of the reconstructed images, with the peak signal-to-noise ratios (PSNR) exceeding those of TV and FBP by about 10 and 18 dB, respectively, for all cases. The Shepp-Logan phantom is studied with further discussion of multimode scanning, convergence speed, robustness and universality aspects. In-vitro experiments are performed for both the sparse-view circular scanning and linear scanning. The results further prove the effectiveness of the DDTV, which shows better results than that of the TV with sharper image edges and
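
    The following sketch illustrates the general idea of a directional TV penalty with a spatially varying orientation field, in the spirit of the DDTV model described above. It is a hedged illustration only: the specific weighting (anisotropy factor alpha, use of np.gradient) is a common directional-TV form and not necessarily the exact DDTV functional or reliability weighting of the paper.

```python
import numpy as np

def directional_tv(u, theta, alpha=0.3):
    """Generic directional total variation with a spatially varying direction.

    u     : 2-D image
    theta : per-pixel orientation field (radians), e.g. estimated from gradients
    alpha : anisotropy factor in (0, 1]; alpha = 1 recovers ordinary isotropic TV
    """
    gy, gx = np.gradient(u)                             # image gradients
    g_par = np.cos(theta) * gx + np.sin(theta) * gy     # component along theta
    g_perp = -np.sin(theta) * gx + np.cos(theta) * gy   # component across theta
    # Penalize the across-structure component fully and the along-structure
    # component only by a factor alpha, so edges aligned with theta are preserved.
    return np.sum(np.sqrt(alpha**2 * g_par**2 + g_perp**2))
```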

  11. Analytical Investigation of Beam Deformation Equation using Perturbation, Homotopy Perturbation, Variational Iteration and Optimal Homotopy Asymptotic Methods

    DEFF Research Database (Denmark)

    Farrokhzad, F.; Mowlaee, P.; Barari, Amin

    2011-01-01

    The beam deformation equation has very wide applications in structural engineering. As a differential equation, it has its own problems concerning existence, uniqueness and methods of solution. Often, the original forms of governing differential equations used in engineering problems are simplified..., and this process produces noise in the obtained answers. This paper deals with the solution of the second-order differential equation governing beam deformation using four analytical approximate methods, namely the Homotopy Perturbation Method (HPM), Variational Iteration Method (VIM) and Optimal Homotopy Asymptotic... Method (OHAM). The comparisons of the results reveal that these methods are very effective, convenient and quite accurate for systems of non-linear differential equations....

  12. On Schwinger mechanics of electron-positron pair production from vacuum by the field of optical and x-ray laser

    CERN Document Server

    Popov, V S

    2001-01-01

    The probability W of e⁺e⁻ pair production from vacuum under the action of an intense alternating field generated by optical or X-ray lasers is calculated. Two characteristic regimes are studied, γ ≪ 1 and γ ≫ 1, where γ is the adiabaticity parameter. It is shown that with increasing γ, as well as at the transition from monochromatic radiation to a laser pulse of finite duration, the probability W increases sharply (at a similar value of the field intensity). The dependence of the probability W and of the momentum spectrum of electrons and positrons on the shape of the laser pulse (the dynamical Schwinger effect) is discussed.

  13. Multifractal subgrid-scale modeling within a variational multiscale method for large-eddy simulation of turbulent flow

    Science.gov (United States)

    Rasthofer, U.; Gravemeier, V.

    2013-02-01

    Multifractal subgrid-scale modeling within a variational multiscale method is proposed for large-eddy simulation of turbulent flow. In the multifractal subgrid-scale modeling approach, the subgrid-scale velocity is evaluated from a multifractal description of the subgrid-scale vorticity, which is based on the multifractal scale similarity of gradient fields in turbulent flow. The multifractal subgrid-scale modeling approach is integrated into a variational multiscale formulation, which constitutes a new application of the variational multiscale concept. A focus of this study is on the application of the multifractal subgrid-scale modeling approach to wall-bounded turbulent flow. Therefore, a near-wall limit of the multifractal subgrid-scale modeling approach is derived in this work. The novel computational approach of multifractal subgrid-scale modeling within a variational multiscale formulation is applied to turbulent channel flow at various Reynolds numbers, turbulent flow over a backward-facing step and turbulent flow past a square-section cylinder, which are three of the most important and widely-used benchmark examples for wall-bounded turbulent flow. All results presented in this study confirm a very good performance of the proposed method. Compared to a dynamic Smagorinsky model and a residual-based variational multiscale method, improved results are obtained. Moreover, it is demonstrated that the subgrid-scale energy transfer incorporated by the proposed method very well approximates the expected energy transfer as obtained from appropriately filtered direct numerical simulation data. The computational cost is notably reduced compared to a dynamic Smagorinsky model and only marginally increased compared to a residual-based variational multiscale method.

  14. Extended Eckart Theorem and New Variation Method for Excited States of Atoms

    CERN Document Server

    Xiong, Zhuang; Bacalis, N C; Zhou, Qin

    2016-01-01

    We extend the Eckart theorem from the ground state to excited states, which introduces an energy augmentation to the variation criterion for excited states. It is shown that the energy of a very good excited-state trial function can be slightly lower than the exact eigenvalue. Further, the energy calculated with the trial excited-state wave function that is closest to the exact eigenstate through Gram-Schmidt orthonormalization to a ground-state approximant is lower than the exact eigenvalue as well. In order to avoid the variational restrictions inherent in the upper-bound variation theory based on the Hylleraas, Undheim, and McDonald [HUM] and Eckart theorems, we propose a new variation functional Omega-n and prove that it has a local minimum at the eigenstates, which allows approaching the eigenstates arbitrarily closely by variation of the trial wave function. As an example, we calculated the energy and the radial expectation values of the triplet-S (even) helium atom by the new variation functional, and by HUM a...

  15. A variational EM method for pole-zero modeling of speech with mixed block sparse and Gaussian excitation

    DEFF Research Database (Denmark)

    Shi, Liming; Nielsen, Jesper Kjær; Jensen, Jesper Rindom

    2017-01-01

    be expected. Moreover, motivated by the block sparse glottal flow excitation during voiced speech and the white noise excitation for unvoiced speech, we model the excitation sequence as a combination of block sparse signals and white noise. A variational EM (VEM) method is proposed for estimating...

  16. Application of He’s Variational Iteration Method to Nonlinear Helmholtz Equation and Fifth-Order KDV Equation

    DEFF Research Database (Denmark)

    Miansari, Mo; Miansari, Me; Barari, Amin

    2009-01-01

    In this article, He’s variational iteration method (VIM), is implemented to solve the linear Helmholtz partial differential equation and some nonlinear fifth-order Korteweg-de Vries (FKdV) partial differential equations with specified initial conditions. The initial approximations can be freely...

  17. Numerical Computation of Dynamical Schwinger-like Pair Production in Graphene

    Science.gov (United States)

    Fillion-Gourdeau, F.; Blain, P.; Gagnon, D.; Lefebvre, C.; Maclean, S.

    2017-03-01

    The density of electron-hole pairs produced in a graphene sample immersed in a homogeneous time-dependent electric field is evaluated. Because low energy charge carriers in graphene are described by relativistic quantum mechanics, the calculation is performed within the strong field quantum electrodynamics formalism, requiring a solution of the Dirac equation in momentum space. The equation is solved using a split-operator numerical scheme on parallel computers, allowing for the investigation of several field configurations. The strength of the method is illustrated by computing the electron momentum density generated from a realistic laser pulse model. We observe quantum interference patterns reminiscent of Landau-Zener-Stückelberg interferometry.
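
    A minimal sketch of a split-operator (Strang splitting) step for a 2x2 momentum-space Dirac-like Hamiltonian is given below, using exact Pauli-matrix exponentials. It only illustrates the type of scheme mentioned in the abstract; the Hamiltonian normalization, the toy vector potential and the initial spinor are assumptions, not the authors' laser-pulse model or code.

```python
import numpy as np

# Pauli matrices
SX = np.array([[0, 1], [1, 0]], dtype=complex)
SY = np.array([[0, -1j], [1j, 0]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def pauli_exp(a, sigma):
    """exp(-i * a * sigma) for a single Pauli matrix (uses sigma^2 = I)."""
    return np.cos(a) * I2 - 1j * np.sin(a) * sigma

def strang_step(c, px, py, ax, ay, dt, vf=1.0):
    """One Strang-splitting step for the 2x2 momentum-space Hamiltonian
    H(t) = vf * [SX*(px + ax(t)) + SY*(py + ay(t))], acting on the spinor c."""
    hx = vf * (px + ax)      # coefficient of SX
    hy = vf * (py + ay)      # coefficient of SY
    c = pauli_exp(0.5 * dt * hx, SX) @ c
    c = pauli_exp(dt * hy, SY) @ c
    c = pauli_exp(0.5 * dt * hx, SX) @ c
    return c

# Evolve an illustrative initial spinor under a toy time-dependent field.
c = np.array([1.0, 0.0], dtype=complex)
px, py, dt = 0.2, 0.1, 0.01
for n in range(1000):
    t = n * dt
    ax = 0.5 * np.sin(2 * np.pi * t / 5.0)   # toy vector potential, not the paper's pulse
    c = strang_step(c, px, py, ax, 0.0, dt)
```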

  18. Analytical approximate solution of time-fractional Fornberg–Whitham equation by the fractional variational iteration method

    Directory of Open Access Journals (Sweden)

    Birol İbiş

    2014-12-01

    Full Text Available The purpose of this paper was to obtain the analytical approximate solution of the time-fractional Fornberg–Whitham equation, involving Jumarie's modified Riemann–Liouville derivative, by the fractional variational iteration method (FVIM). FVIM provides the solution in the form of a convergent series with easily calculable terms. The obtained approximate solutions are compared with the exact or existing numerical results in the literature to verify the applicability, efficiency and accuracy of the method.

  19. Imaging Seismic Source Variations Using Back-Projection Methods at El Tatio Geyser Field, Northern Chile

    Science.gov (United States)

    Kelly, C. L.; Lawrence, J. F.

    2014-12-01

    During October 2012, 51 geophones and 6 broadband seismometers were deployed in an ~50x50m region surrounding a periodically erupting columnar geyser in the El Tatio Geyser Field, Chile. The dense array served as the seismic framework for a collaborative project to study the mechanics of complex hydrothermal systems. Contemporaneously, complementary geophysical measurements (including down-hole temperature and pressure, discharge rates, thermal imaging, water chemistry, and video) were also collected. Located on the western flanks of the Andes Mountains at an elevation of 4200m, El Tatio is the third largest geyser field in the world. Its non-pristine condition makes it an ideal location to perform minimally invasive geophysical studies. The El Jefe Geyser was chosen for its easily accessible conduit and extremely periodic eruption cycle (~120s). During approximately 2 weeks of continuous recording, we recorded ~2500 nighttime eruptions which lack cultural noise from tourism. With ample data, we aim to study how the source varies spatially and temporally during each phase of the geyser's eruption cycle. We are developing a new back-projection processing technique to improve source imaging for diffuse signals. Our method was previously applied to the Sierra Negra Volcano system, which also exhibits repeating harmonic and diffuse seismic sources. We back-project correlated seismic signals from the receivers back to their sources, assuming linear source-to-receiver paths and a known velocity model (obtained from ambient noise tomography). We apply polarization filters to isolate individual and concurrent geyser energy associated with P and S phases. We generate 4D, time-lapsed images of the geyser source field that illustrate how the source distribution changes through the eruption cycle. We compare images for pre-eruption, co-eruption, post-eruption and quiescent periods. We use our images to assess eruption mechanics in the system (i.e. top-down vs. bottom-up) and
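
    As an illustration of the delay-and-stack idea behind back-projection, the following sketch stacks station waveforms over a grid of candidate sources assuming straight ray paths and a constant velocity. It is schematic only; the study's correlation-based weighting, polarization filtering and ambient-noise velocity model are not reproduced, and all names and parameters are placeholders.

```python
import numpy as np

def backproject(waveforms, dt, stations, grid, velocity):
    """Schematic delay-and-stack back-projection.

    waveforms : (n_sta, n_samples) array of pre-processed seismograms
    dt        : sample interval in seconds
    stations  : (n_sta, 3) station coordinates in meters
    grid      : (n_pts, 3) candidate source locations in meters
    velocity  : assumed constant wave speed in m/s (straight ray paths)
    Returns an (n_pts,) stack amplitude; the brightest point images the source.
    """
    n_sta, n_samp = waveforms.shape
    image = np.zeros(len(grid))
    for ip, xyz in enumerate(grid):
        stack = np.zeros(n_samp)
        for ista in range(n_sta):
            tt = np.linalg.norm(stations[ista] - xyz) / velocity  # travel time
            shift = int(round(tt / dt))
            if shift >= n_samp:
                continue
            stack[:n_samp - shift] += waveforms[ista, shift:]      # align and stack
        image[ip] = np.max(np.abs(stack)) / n_sta
    return image
```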

  20. Rigorous perturbation theory versus variational methods in the spectral study of carbon nanotubes

    DEFF Research Database (Denmark)

    Cornean, Horia; Pedersen, Thomas Garm; Ricaud, Benjamin

    Recent two-photon photo-luminescence experiments give accurate data for the ground and first excited excitonic energies at different nanotube radii. In this paper we compare the analytic approximations proved in [CDR] with a standard variational approach. We show an excellent agreement...

  1. Hourly Variation in the Flow Measurements in the Jesus Maria Watershed with the Cup-type Current Meter Method

    Directory of Open Access Journals (Sweden)

    José Pablo Bonilla Valverde

    2017-12-01

    Full Text Available Conducting spot (punctual) gauging measurements in Costa Rica is a common practice for the evaluation of water resources for drinking water supply. The country has a database composed of spot measurements made in most of the rivers of Costa Rica, with almost forty years of information. Within this database, a single data point (one spot gauging) is used to characterize the whole month in which it was measured. In order to corroborate the validity of this characterization, spot gauging was performed every hour to confirm that the hourly variation is minimal. The hourly gauging was carried out during the flow measurement campaign in the Jesus Maria watershed conducted on April 9th and 10th, 2013. The flow measurements were performed using the cup-type current meter method according to the ISO 2537:2007 standard. One third of the measurements showed less than ±1% variation and more than three quarters were in the range of ±5% variation. In all cases, excluding the lower basin of the Jesus Maria River, variations in the measurements are less than 10% relative to the median. It is concluded that the hourly variation is relatively small and, therefore, the database is validated for the months at the end of the dry season. This experience should be repeated in the same basin at other times of the year and in other basins to ensure that the temporal variability does not represent large differences in flow.

  2. MRT letter: a total variation based method for 3D shape recovery of microscopic objects through image focus.

    Science.gov (United States)

    Mahmood, Muhammad Tariq

    2013-09-01

    Generally, shape-from-focus methods use a single focus measure to compute focus quality and to obtain an initial depth map of an object. However, different focus measures perform differently under diverse conditions. Therefore, it is hard to obtain an accurate 3D shape based on a single focus measure. In this article, we propose a total variation based method for recovering the 3D shape of an object by combining multiple depth hypotheses obtained through different focus measures. The improved performance of the proposed method is evaluated by conducting several experiments using images of synthetic and real microscopic objects. Comparative analysis demonstrates the effectiveness of the proposed approach. Copyright © 2013 Wiley Periodicals, Inc.

  3. The Von Kármán constant retrieved from CASES-97 dataset using a variational method

    Directory of Open Access Journals (Sweden)

    Y. Zhang

    2008-12-01

    Full Text Available A variational method is developed to retrieve the von Kármán constant κ from the CASES-97 dataset, collected near Wichita, Kansas, United States, from 6 April to 24 May 1997. In the variational method, a cost function is defined to measure the difference between observed and computed gradients of wind speed, air temperature and specific humidity. An optimal estimate of the von Kármán constant is obtained by minimizing the cost function with respect to κ. Under neutral stratification, the variational analysis confirms the conventional value κ = 0.40. For non-neutral stratification, however, κ varies with stability. The computational results show that κ decreases monotonically from stable to unstable stratification. The variationally calculated mean value of the von Kármán constant is 0.383-0.390 when the atmospheric stratification is taken into consideration. Relations between κ and the surface momentum and heat fluxes are also examined.
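
    The variational retrieval described above can be pictured as a one-parameter minimization: build a cost function from the misfit between observed and similarity-theory gradients and minimize it over κ. The sketch below uses synthetic wind-gradient data and a simple Businger-Dyer stability function as placeholders; it is not the CASES-97 processing.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def model_wind_gradient(z, u_star, L_obukhov, kappa):
    """Similarity-theory wind gradient du/dz = (u*/(kappa*z)) * phi_m(z/L),
    with an illustrative Businger-Dyer phi_m for stable/unstable conditions."""
    zeta = z / L_obukhov
    unstable = (1.0 - 16.0 * np.minimum(zeta, 0.0)) ** -0.25
    phi_m = np.where(zeta >= 0, 1.0 + 5.0 * zeta, unstable)
    return u_star / (kappa * z) * phi_m

def cost(kappa, z, dudz_obs, u_star, L_obukhov):
    """Squared misfit between observed and modeled wind gradients."""
    return np.sum((dudz_obs - model_wind_gradient(z, u_star, L_obukhov, kappa)) ** 2)

# Synthetic example: recover kappa from gradients generated with kappa = 0.40.
z = np.array([2.0, 4.0, 8.0, 16.0])
u_star, L_obukhov = 0.3, 150.0
dudz_obs = model_wind_gradient(z, u_star, L_obukhov, 0.40)
res = minimize_scalar(cost, bounds=(0.2, 0.6), method="bounded",
                      args=(z, dudz_obs, u_star, L_obukhov))
print(res.x)   # approximately 0.40
```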

  4. Hybrid methods for accretive variational inequalities involving pseudocontractions in Banach spaces

    Directory of Open Access Journals (Sweden)

    Chen Rudong

    2011-01-01

    Full Text Available Abstract We use strong pseudocontractions to regularize a class of accretive variational inequalities in Banach spaces, where the accretive operators are complements of pseudocontractions and the solutions are sought in the set of fixed points of another pseudocontraction. In this paper, we consider an implicit scheme that can be used to find a solution of a class of accretive variational inequalities. Our results improve and generalize some recent results of Yao et al. (Fixed Point Theory Appl, doi:10.1155/2011/180534, 2011) and Lu et al. (Nonlinear Anal, 71(3-4), 1032-1041, 2009). 2000 Mathematics Subject Classification: 47H05; 47H09; 65J15

  5. Estimation of the Coefficient of Variation with Minimum Risk: A Sequential Method for Minimizing Sampling Error and Study Cost.

    Science.gov (United States)

    Chattopadhyay, Bhargab; Kelley, Ken

    2016-01-01

    The coefficient of variation is an effect size measure with many potential uses in psychology and related disciplines. We propose a general theory for the sequential estimation of the population coefficient of variation that considers both the sampling error and the study cost, importantly without specific distributional assumptions. Fixed sample size planning methods, commonly used in psychology and related fields, cannot simultaneously minimize both the sampling error and the study cost. The sequential procedure we develop is the first sequential sampling procedure developed for estimating the coefficient of variation. We first present a method of planning a pilot sample size after the research goals are specified by the researcher. Then, after collecting a sample as large as the estimated pilot sample size, a check is performed to assess whether the conditions necessary to stop the data collection have been satisfied. If not, an additional observation is collected and the check is performed again. This process continues, sequentially, until a stopping rule involving a risk function is satisfied. Our method ensures that the sampling error and the study cost are considered simultaneously so that the cost is not higher than necessary for the tolerable sampling error. We also demonstrate a variety of properties of the distribution of the final sample size for five different distributions under a variety of conditions with a Monte Carlo simulation study. In addition, we provide freely available functions via the MBESS package in R to implement the methods discussed.
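
    The sequential logic described above (pilot sample, stopping check, one additional observation at a time) can be sketched generically as below. The stopping criterion shown, an approximate standard error of the sample CV falling below a tolerance, is only an illustrative stand-in for the authors' risk-function rule; their actual procedure is available in the MBESS package in R.

```python
import numpy as np

def sample_cv(x):
    """Sample coefficient of variation (SD / mean)."""
    return np.std(x, ddof=1) / np.mean(x)

def sequential_cv(draw, pilot_n=30, tol=0.02, max_n=10_000):
    """Generic sequential estimation loop for the coefficient of variation.

    draw(n) returns n new observations. Sampling stops when an approximate
    standard error of the sample CV drops below `tol` (an illustrative
    criterion, not the risk function of the paper).
    """
    x = np.asarray(draw(pilot_n), dtype=float)
    while len(x) < max_n:
        cv = sample_cv(x)
        # Large-sample approximation to the SE of the CV for roughly normal data.
        se_cv = cv * np.sqrt((0.5 + cv**2) / len(x))
        if se_cv < tol:               # stopping check satisfied
            break
        x = np.append(x, draw(1))     # otherwise collect one more observation
    return sample_cv(x), len(x)

rng = np.random.default_rng(0)
cv_hat, n_used = sequential_cv(lambda n: rng.normal(50.0, 10.0, n))
```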

  6. A primal-dual method for total-variation-based wavelet domain inpainting.

    Science.gov (United States)

    Wen, You-Wei; Chan, Raymond H; Yip, Andy M

    2012-01-01

    Loss of information in a wavelet domain can occur during storage or transmission when the images are formatted and stored in terms of wavelet coefficients. This calls for image inpainting in wavelet domains. In this paper, a variational approach is used to formulate the reconstruction problem. We propose a simple but very efficient iterative scheme to calculate an optimal solution and prove its convergence. Numerical results are presented to show the performance of the proposed algorithm.

  7. Accelerated gradient methods for total-variation-based CT image reconstruction

    DEFF Research Database (Denmark)

    Jørgensen, Jakob Heide; Jensen, Tobias Lindstrøm; Hansen, Per Christian

    2011-01-01

    reconstruction can in principle be found by any optimization method, but in practice the large scale of the systems arising in CT image reconstruction precludes the use of memory-demanding methods such as Newton’s method. The simple gradient method has much lower memory requirements, but exhibits slow convergence...
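
    To make the contrast concrete, the sketch below compares a plain gradient step with a Nesterov-type accelerated step on a smooth least-squares surrogate; both have the same low memory footprint, but the accelerated version converges at rate O(1/k^2) instead of O(1/k) on smooth convex problems. The nonsmooth TV term of the actual CT objective would additionally require smoothing or a proximal step, which is omitted here.

```python
import numpy as np

def gradient_descent(A, b, x0, step, iters):
    """Plain gradient method for the smooth least-squares part 0.5*||Ax - b||^2."""
    x = x0.copy()
    for _ in range(iters):
        x -= step * A.T @ (A @ x - b)
    return x

def nesterov(A, b, x0, step, iters):
    """Nesterov-accelerated gradient method: same memory footprint,
    but O(1/k^2) convergence on smooth convex problems instead of O(1/k)."""
    x, y, t = x0.copy(), x0.copy(), 1.0
    for _ in range(iters):
        x_new = y - step * A.T @ (A @ y - b)
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t**2))
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)
        x, t = x_new, t_new
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((200, 100))
b = rng.standard_normal(200)
step = 1.0 / np.linalg.norm(A, 2) ** 2      # 1 / Lipschitz constant of the gradient
x_gd = gradient_descent(A, b, np.zeros(100), step, 200)
x_acc = nesterov(A, b, np.zeros(100), step, 200)
```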

  8. Repeatability and variation of region-of-interest methods using quantitative diffusion tensor MR imaging of the brain

    Directory of Open Access Journals (Sweden)

    Hakulinen Ullamari

    2012-10-01

    Full Text Available Abstract Background Diffusion tensor imaging (DTI) is increasingly used in various diseases as a clinical tool for assessing the integrity of the brain’s white matter. Reduced fractional anisotropy (FA) and an increased apparent diffusion coefficient (ADC) are nonspecific findings in most pathological processes affecting the brain’s parenchyma. At present, there is no gold standard for validating diffusion measures, which depend on the scanning protocol, the software methods and the observers. Therefore, the normal variation and repeatability effects on commonly derived measures should be carefully examined. Methods Thirty healthy volunteers (mean age 37.8 years, SD 11.4) underwent DTI of the brain with 3T MRI. Region-of-interest (ROI) based measurements were calculated at eleven anatomical locations in the pyramidal tracts, corpus callosum and frontobasal area. Two ROI-based methods, the circular method (CM) and the freehand method (FM), were compared. Both methods were also compared by performing measurements on a DTI phantom. The intra- and inter-observer variability (coefficient of variation, or CV%) and repeatability (intra-class correlation coefficient, or ICC) were assessed for FA and ADC values obtained using both ROI methods. Results The mean FA values for all of the regions were 0.663 with the CM and 0.621 with the FM. For both methods, the FA was highest in the splenium of the corpus callosum. The mean ADC value was 0.727 × 10⁻³ mm²/s with the CM and 0.747 × 10⁻³ mm²/s with the FM, and both methods found the ADC to be lowest in the corona radiata. The CV percentages of the derived measures were [...] Conclusions With both ROI-based methods variability was low and repeatability was moderate. The circular method gave higher repeatability, but variation was slightly lower using the freehand method. The circular method can be recommended for the posterior limb of the internal capsule and splenium of the corpus callosum, and the freehand

  9. Solution of linear ordinary differential equations by means of the method of variation of arbitrary constants

    DEFF Research Database (Denmark)

    Mejlbro, Leif

    1997-01-01

    An alternative formula for the solution of linear differential equations of order n is suggested. When applicable, the suggested method requires fewer and simpler computations than the well-known method using Wronskians....
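
    The record does not reproduce the alternative formula, but for reference the classical variation-of-arbitrary-constants solution it is compared against can be written, for the second-order case, as:

```latex
% Classical variation-of-parameters (Wronskian) formula, second-order case:
% y'' + p(x) y' + q(x) y = f(x), with homogeneous solutions y_1, y_2.
\[
  W(x) = y_1 y_2' - y_1' y_2, \qquad
  y_p(x) = -\,y_1(x)\!\int \frac{y_2(x)\, f(x)}{W(x)}\,dx
           + y_2(x)\!\int \frac{y_1(x)\, f(x)}{W(x)}\,dx .
\]
```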

  10. Simulation of laminar and turbulent concentric pipe flows with the isogeometric variational multiscale method

    KAUST Repository

    Ghaffari Motlagh, Yousef

    2013-01-01

    We present an application of the residual-based variational multiscale modeling methodology to the computation of laminar and turbulent concentric annular pipe flows. Isogeometric analysis is utilized for higher-order approximation of the solution using Non-Uniform Rational B-Splines (NURBS). The ability of NURBS to exactly represent curved geometries makes NURBS-based isogeometric analysis attractive for the application to the flow through annular channels. We demonstrate the applicability of the methodology to both laminar and turbulent flow regimes. © 2012 Elsevier Ltd.

  11. Superconvergence Analysis of Finite Element Method for a Second-Type Variational Inequality

    Directory of Open Access Journals (Sweden)

    Dongyang Shi

    2012-01-01

    Full Text Available This paper studies the finite element (FE) approximation to a second-type variational inequality. The superclose and superconvergence results are obtained for the conforming bilinear FE and nonconforming EQrot FE schemes under a reasonable regularity of the exact solution u ∈ H5/2(Ω), results which seem not to have appeared in the previous literature. The optimal L2-norm error estimate is also derived for the EQrot FE. At last, some numerical results are provided to verify the theoretical analysis.

  12. Comparison of methods for classification of the coefficient of variation in papaya

    Directory of Open Access Journals (Sweden)

    Jeferson Pereira Ferreira

    2016-04-01

    Full Text Available ABSTRACT The objective of this work was to study the distribution of values of the coefficient of variation (CV) in experiments on the papaya crop (Carica papaya L.), proposing ranges to guide researchers in their evaluation of different characters in the field. The data used in this study were obtained by a bibliographical review of Brazilian journals, dissertations and theses. This study considered the following characters: diameter of the stalk, insertion height of the first fruit, plant height, number of fruits per plant, fruit biomass, fruit length, equatorial diameter of the fruit, pulp thickness, fruit firmness, soluble solids and internal cavity diameter, from which ranges of CV values were obtained for each character, based on the methodologies proposed by Garcia and by Costa and on the standard classification of Pimentel-Gomes. The results obtained in this study indicated that the ranges of CV values differed among the various characters, presenting a large variation, which justifies the necessity of using a specific evaluation range for each character. In addition, the use of classification ranges obtained from the methodology of Costa is recommended.

  13. Solving the non-isothermal reaction-diffusion model equations in a spherical catalyst by the variational iteration method

    Science.gov (United States)

    Wazwaz, Abdul-Majid

    2017-07-01

    In this work we address Lane-Emden boundary value problems, which appear in chemical and biochemical applications and other scientific disciplines. We apply the variational iteration method to solve two specific models. The first problem models the reaction-diffusion equation in a spherical catalyst, while the second problem models the reaction-diffusion process in a spherical biocatalyst. We obtain reliable analytical expressions for the concentrations and the effectiveness factors. Graphs are used to illustrate the obtained results. The proposed analysis demonstrates the reliability, efficiency and applicability of the employed method.
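
    For reference, a standard variational iteration correction functional for a Lane-Emden-type operator (the generic textbook form, not necessarily the exact functional employed in the paper) reads:

```latex
% Generic VIM correction functional for a Lane--Emden-type equation
%   u''(x) + (2/x) u'(x) + f(u, x) = 0,
% with the Lagrange multiplier obtained from the stationarity conditions:
\[
  u_{n+1}(x) = u_n(x)
  + \int_0^x \lambda(t)\left[ u_n''(t) + \tfrac{2}{t}\,u_n'(t)
  + f\!\left(\tilde u_n(t), t\right) \right] dt,
  \qquad \lambda(t) = \frac{t\,(t - x)}{x},
\]
% where \tilde u_n denotes a restricted variation of the nonlinear term.
```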

  14. A Viscosity of Cesàro Mean Approximation Methods for a Mixed Equilibrium, Variational Inequalities, and Fixed Point Problems

    Directory of Open Access Journals (Sweden)

    Jitpeera Thanyarat

    2011-01-01

    Full Text Available We introduce a new iterative method for finding a common element of the set of solutions of a mixed equilibrium problem, the set of solutions of the variational inequality for an α-inverse-strongly monotone mapping, and the set of fixed points of a finite family of nonexpansive mappings in a real Hilbert space by using the viscosity and Cesàro mean approximation method. We prove that the sequence converges strongly to a common element of the above three sets under some mild conditions. Our results improve and extend the corresponding results of Kumam and Katchang (2009), Peng and Yao (2009), Shimizu and Takahashi (1997), and some other authors.

  15. SU-C-9A-03: Simultaneous Deconvolution and Segmentation for PET Tumor Delineation Using a Variational Method

    Energy Technology Data Exchange (ETDEWEB)

    Li, L; Tan, S [Huazhong University of Science and Technology, Wuhan, Hubei (China); Lu, W; D' Souza, W [University of Maryland School of Medicine, Baltimore, MD (United States)

    2014-06-01

    Purpose: To implement a new method that integrates deconvolution with segmentation under a variational framework for PET tumor delineation. Methods: Deconvolution and segmentation are both challenging problems in image processing. The partial volume effect (PVE) blurs tumor boundaries in PET images, which affects the accuracy of tumor segmentation. Deconvolution aims to obtain a PVE-free image, which can help to improve the segmentation accuracy. Conversely, a correct localization of the object boundaries is helpful to estimate the blur kernel, and thus assists in the deconvolution. In this study, we proposed to solve the two problems simultaneously using a variational method so that they can benefit each other. The energy functional consists of a fidelity term and a regularization term, and the blur kernel was restricted to an isotropic Gaussian kernel. We minimized the energy functional by solving the associated Euler-Lagrange equations and taking the derivative with respect to the parameters of the kernel function. An alternate minimization method was used to iterate between segmentation, deconvolution and blur-kernel recovery. The performance of the proposed method was tested on clinical PET images of patients with non-Hodgkin's lymphoma, and compared with seven other segmentation methods using the Dice similarity index (DSI) and volume error (VE). Results: Among all segmentation methods, the proposed one (DSI=0.81, VE=0.05) has the highest accuracy, followed by the active contours without edges (DSI=0.81, VE=0.25), while other methods including the Graph Cut and the Mumford-Shah (MS) method have lower accuracy. A visual inspection shows that the proposed method localizes the real tumor contour very well. Conclusion: The results showed that deconvolution and segmentation can contribute to each other. The proposed variational method solves the two problems simultaneously and leads to a high performance for tumor segmentation in PET. This work was

  16. A model-based clustering method to detect infectious disease transmission outbreaks from sequence variation.

    Science.gov (United States)

    McCloskey, Rosemary M; Poon, Art F Y

    2017-11-01

    Clustering infections by genetic similarity is a popular technique for identifying potential outbreaks of infectious disease, in part because sequences are now routinely collected for clinical management of many infections. A diverse number of nonparametric clustering methods have been developed for this purpose. These methods are generally intuitive, rapid to compute, and readily scale with large data sets. However, we have found that nonparametric clustering methods can be biased towards identifying clusters of diagnosis (where individuals are sampled sooner post-infection) rather than the clusters of rapid transmission that are meant to be potential foci for public health efforts. We develop a fundamentally new approach to genetic clustering based on fitting a Markov-modulated Poisson process (MMPP), which represents the evolution of transmission rates along the tree relating different infections. We evaluated this model-based method alongside five nonparametric clustering methods using both simulated and actual HIV sequence data sets. For simulated clusters of rapid transmission, the MMPP clustering method obtained higher mean sensitivity (85%) and specificity (91%) than the nonparametric methods. When we applied these clustering methods to published sequences from a study of HIV-1 genetic clusters in Seattle, USA, we found that the MMPP method assigned about half (46%) as many individuals to clusters as the other methods. Furthermore, the mean internal branch lengths that approximate transmission rates were significantly shorter in clusters extracted using MMPP, but not by other methods. We determined that the computing time for the MMPP method scaled linearly with the size of the trees, requiring about 30 seconds for a tree of 1,000 tips and about 20 minutes for 50,000 tips on a single computer. This new approach to genetic clustering has significant implications for the application of pathogen sequence analysis to public health, where it is critical to

  17. Variational method for second-order properties in atoms and molecules

    Energy Technology Data Exchange (ETDEWEB)

    Bendazzoli, G.L. (Bologna Univ. (Italy)); Evangelisti, S. (Bologna Univ. (Italy). Ist. di Fisica); Fano, G.; Ortolani, F. (Bologna Univ. (Italy). Ist. di Fisica; Istituto Nazionale di Fisica Nucleare, Bologna (Italy))

    1980-02-11

    Second-order properties are computed by diagonalizing a perturbed Hamiltonian in a three-dimensional linear space L. The choice of the basis vectors generating L is suggested by the coupled perturbed Hartree-Fock (CPHF) method. A test numerical computation on the dipole polarizability of the H₂ molecule is presented. The method can be considered as an improvement of the CPHF method.

  18. Determining individual variation in growth and its implication for life-history and population processes using the empirical Bayes method.

    Directory of Open Access Journals (Sweden)

    Simone Vincenzi

    2014-09-01

    Full Text Available The differences in demographic and life-history processes between organisms living in the same population have important consequences for ecological and evolutionary dynamics. Modern statistical and computational methods allow the investigation of individual and shared (among homogeneous groups) determinants of the observed variation in growth. We use an Empirical Bayes approach to estimate individual and shared variation in somatic growth using a von Bertalanffy growth model with random effects. To illustrate the power and generality of the method, we consider two populations of marble trout Salmo marmoratus living in Slovenian streams, where individually tagged fish have been sampled for more than 15 years. We use year-of-birth cohort, population density during the first year of life, and individual random effects as potential predictors of the von Bertalanffy growth function's parameters k (rate of growth) and L∞ (asymptotic size). Our results showed that size ranks were largely maintained throughout marble trout lifetime in both populations. According to the Akaike Information Criterion (AIC), the best models showed different growth patterns for year-of-birth cohorts as well as the existence of substantial individual variation in growth trajectories after accounting for the cohort effect. For both populations, models including density during the first year of life showed that growth tended to decrease with increasing population density early in life. Model validation showed that predictions of individual growth trajectories using the random-effects model were more accurate than predictions based on mean size-at-age of fish.
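
    A minimal sketch of the growth model structure is given below: the von Bertalanffy growth function with lognormal individual random effects on k and L∞. The parameter values are purely illustrative and the snippet only simulates trajectories; it does not implement the Empirical Bayes estimation used in the study.

```python
import numpy as np

def von_bertalanffy(t, L_inf, k, t0=0.0):
    """Von Bertalanffy growth function L(t) = L_inf * (1 - exp(-k (t - t0)))."""
    return L_inf * (1.0 - np.exp(-k * (t - t0)))

def simulate_individuals(n_fish, ages, rng):
    """Simulate growth trajectories with shared means and lognormal
    individual random effects on k and L_inf (illustrative parameter values)."""
    mean_k, mean_Linf = 0.35, 300.0          # shared (population-level) parameters
    sd_log_k, sd_log_Linf = 0.2, 0.1         # individual-level variability
    k_i = mean_k * np.exp(rng.normal(0.0, sd_log_k, n_fish))
    Linf_i = mean_Linf * np.exp(rng.normal(0.0, sd_log_Linf, n_fish))
    return np.array([von_bertalanffy(ages, L, k) for L, k in zip(Linf_i, k_i)])

rng = np.random.default_rng(42)
ages = np.arange(0, 8)                           # years
lengths = simulate_individuals(100, ages, rng)   # (100, 8) size-at-age matrix
```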

  19. Determining Individual Variation in Growth and Its Implication for Life-History and Population Processes Using the Empirical Bayes Method

    Science.gov (United States)

    Vincenzi, Simone; Mangel, Marc; Crivelli, Alain J.; Munch, Stephan; Skaug, Hans J.

    2014-01-01

    The differences in demographic and life-history processes between organisms living in the same population have important consequences for ecological and evolutionary dynamics. Modern statistical and computational methods allow the investigation of individual and shared (among homogeneous groups) determinants of the observed variation in growth. We use an Empirical Bayes approach to estimate individual and shared variation in somatic growth using a von Bertalanffy growth model with random effects. To illustrate the power and generality of the method, we consider two populations of marble trout Salmo marmoratus living in Slovenian streams, where individually tagged fish have been sampled for more than 15 years. We use year-of-birth cohort, population density during the first year of life, and individual random effects as potential predictors of the von Bertalanffy growth function's parameters k (rate of growth) and L∞ (asymptotic size). Our results showed that size ranks were largely maintained throughout marble trout lifetime in both populations. According to the Akaike Information Criterion (AIC), the best models showed different growth patterns for year-of-birth cohorts as well as the existence of substantial individual variation in growth trajectories after accounting for the cohort effect. For both populations, models including density during the first year of life showed that growth tended to decrease with increasing population density early in life. Model validation showed that predictions of individual growth trajectories using the random-effects model were more accurate than predictions based on mean size-at-age of fish. PMID:25211603

  20. Determining individual variation in growth and its implication for life-history and population processes using the empirical Bayes method.

    Science.gov (United States)

    Vincenzi, Simone; Mangel, Marc; Crivelli, Alain J; Munch, Stephan; Skaug, Hans J

    2014-09-01

    The differences in demographic and life-history processes between organisms living in the same population have important consequences for ecological and evolutionary dynamics. Modern statistical and computational methods allow the investigation of individual and shared (among homogeneous groups) determinants of the observed variation in growth. We use an Empirical Bayes approach to estimate individual and shared variation in somatic growth using a von Bertalanffy growth model with random effects. To illustrate the power and generality of the method, we consider two populations of marble trout Salmo marmoratus living in Slovenian streams, where individually tagged fish have been sampled for more than 15 years. We use year-of-birth cohort, population density during the first year of life, and individual random effects as potential predictors of the von Bertalanffy growth function's parameters k (rate of growth) and L∞ (asymptotic size). Our results showed that size ranks were largely maintained throughout marble trout lifetime in both populations. According to the Akaike Information Criterion (AIC), the best models showed different growth patterns for year-of-birth cohorts as well as the existence of substantial individual variation in growth trajectories after accounting for the cohort effect. For both populations, models including density during the first year of life showed that growth tended to decrease with increasing population density early in life. Model validation showed that predictions of individual growth trajectories using the random-effects model were more accurate than predictions based on mean size-at-age of fish.

  1. Special Semester titled Geometric mechanics : variational and stochastic methods : CIB, Lausanne, Switzerland, January-June 2015

    CERN Document Server

    Cruzeiro, Ana; Holm, Darryl

    2017-01-01

    Collecting together contributed lectures and mini-courses, this book details the research presented in a special semester titled “Geometric mechanics – variational and stochastic methods” run in the first half of 2015 at the Centre Interfacultaire Bernoulli (CIB) of the Ecole Polytechnique Fédérale de Lausanne. The aim of the semester was to develop a common language needed to handle the wide variety of problems and phenomena occurring in stochastic geometric mechanics. It gathered mathematicians and scientists from several different areas of mathematics (from analysis, probability, numerical analysis and statistics, to algebra, geometry, topology, representation theory, and dynamical systems theory) and also areas of mathematical physics, control theory, robotics, and the life sciences, with the aim of developing the new research area in a concentrated joint effort, both from the theoretical and applied points of view. The lectures were given by leading specialists in different areas of mathematics a...

  2. Functional-analytic and numerical issues in splitting methods for total variation-based image reconstruction

    Science.gov (United States)

    Hintermüller, Michael; Rautenberg, Carlos N.; Hahn, Jooyoung

    2014-05-01

    Variable splitting schemes for the function space version of the image reconstruction problem with total variation regularization (TV-problem) in its primal and pre-dual formulations are considered. For the primal splitting formulation, while existence of a solution cannot be guaranteed, it is shown that quasi-minimizers of the penalized problem are asymptotically related to the solution of the original TV-problem. On the other hand, for the pre-dual formulation, a family of parametrized problems is introduced and a parameter dependent contraction of an associated fixed point iteration is established. Moreover, the theory is validated by numerical tests. Additionally, the augmented Lagrangian approach is studied, details on an implementation on a staggered grid are provided and numerical tests are shown.

  3. The mathematical theory of time-harmonic Maxwell's equations expansion-, integral-, and variational methods

    CERN Document Server

    Kirsch, Andreas

    2015-01-01

    This book gives a concise introduction to the basic techniques needed for the theoretical analysis of the Maxwell Equations, and filters in an elegant way the essential parts, e.g., concerning the various function spaces needed to rigorously investigate the boundary integral equations and variational equations. The book arose from lectures taught by the authors over many years and can be helpful in designing graduate courses for mathematically orientated students on electromagnetic wave propagation problems. The students should have some knowledge on vector analysis (curves, surfaces, divergence theorem) and functional analysis (normed spaces, Hilbert spaces, linear and bounded operators, dual space). Written in an accessible manner, topics are first approached with simpler scale Helmholtz Equations before turning to Maxwell Equations. There are examples and exercises throughout the book. It will be useful for graduate students and researchers in applied mathematics and engineers working in the theoretical ap...

  4. A General Iterative Method of Fixed Points for Mixed Equilibrium Problems and Variational Inclusion Problems

    Directory of Open Access Journals (Sweden)

    Phayap Katchang

    2010-01-01

    Full Text Available The purpose of this paper is to investigate the problem of finding a common element of the set of solutions of mixed equilibrium problems, the set of solutions of variational inclusions with set-valued maximal monotone mappings and inverse-strongly monotone mappings, and the set of fixed points of a finite family of nonexpansive mappings in the setting of Hilbert spaces. We propose a new iterative scheme for finding the common element of the above three sets. Our results improve and extend the corresponding results of the works by Zhang et al. (2008), Peng et al. (2008), Peng and Yao (2009), as well as Plubtieng and Sriprad (2009), and some well-known results in the literature.

  5. Seasonal variation of gravity wave parameters using different filter methods with daylight lidar measurements at midlatitudes

    Science.gov (United States)

    Baumgarten, K.; Gerding, M.; Lübken, F.-J.

    2017-03-01

    The daylight-capable Rayleigh-Mie-Raman (RMR) lidar at the midlatitude station in Kühlungsborn (54°N, 12°E) has been in operation since 2010. The RMR lidar system is used to investigate different fractions of atmospheric waves, such as gravity waves (GW) and thermal tides (with diurnal, semidiurnal, and terdiurnal components), by day and night. About 6150 h of data had been acquired by 2015. The general challenge for GW observations is the separation of different wave contributions from the observed superposition of GW, tides, or even longer-period waves. Unfiltered lidar data always include such a superposition. We applied a Butterworth filter to separate GW and tides by vertical wavelength, with a cutoff wavelength of 15 km, and by observed period, with a cutoff period of 8 h. GW activity and characteristics are derived in an altitude range between 30 and 70 km. The retrieved vertically filtered temperature deviations contain GW with small vertical wavelengths over a broad range of periods, while only a small range of periods is included in the temporally filtered temperature deviations. We observe an annual variation of the wave activity for unfiltered and vertically filtered data, which is caused by tides and inertia-gravity waves. In contrast, filtering in time leads to a weak semiannual variation for gravity waves with periods of 4-8 h, especially at higher altitudes. During summer, these waves account for about half of the total potential energy budget obtained from the unfiltered data.
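
    The band separation by observed period can be sketched with a zero-phase Butterworth filter as below. The 8 h cutoff follows the abstract; the sampling interval, filter order and synthetic series are illustrative assumptions, not the RMR lidar processing chain.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def split_by_period(series, dt_hours, cutoff_hours=8.0, order=5):
    """Split a time series into components with periods longer/shorter than
    the cutoff using a zero-phase Butterworth filter.

    series   : 1-D temperature (or temperature-deviation) time series
    dt_hours : sampling interval in hours
    """
    nyquist = 0.5 / dt_hours                       # cycles per hour
    wn = (1.0 / cutoff_hours) / nyquist            # normalized cutoff frequency
    b, a = butter(order, wn, btype="low")
    slow = filtfilt(b, a, series)                  # tides and longer periods
    fast = series - slow                           # gravity-wave band (< 8 h)
    return slow, fast

# Illustrative data: a 12 h "tide" plus a 3 h "gravity wave" sampled every 0.25 h.
t = np.arange(0, 72, 0.25)
series = 2.0 * np.sin(2 * np.pi * t / 12.0) + 0.5 * np.sin(2 * np.pi * t / 3.0)
slow, fast = split_by_period(series, dt_hours=0.25)
```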

  6. Variational finite element method to study the absorption rate of drug ...

    African Journals Online (AJOL)

    Methods: The finite element method has been used to obtain the solution of the mass diffusion equation with appropriate boundary conditions. The tissue absorption rate of the drug has been taken as a decreasing function of drug concentration from the skin surface towards the target site. The concentration at nodal points ...

  7. Solving total-variation image super-resolution problems via proximal symmetric alternating direction methods

    Directory of Open Access Journals (Sweden)

    Bin Gao

    2016-08-01

    Full Text Available Abstract The single image super-resolution (SISR) problem represents a class of efficient models appealing in many computer vision applications. In this paper, we focus on designing a proximal symmetric alternating direction method of multipliers (SADMM) for the SISR problem. By fully exploiting the special structure, the method enjoys the advantage of being easily implementable through linearizing the quadratic term of the subproblems in the SISR problem. With this linearization, the resulting subproblems admit closed-form solutions. A global convergence result is established for the proposed method. Preliminary numerical results demonstrate that the proposed method is efficient and that its computing time is reduced by nearly 40% compared with several state-of-the-art methods.

  8. PSCC: sensitive and reliable population-scale copy number variation detection method based on low coverage sequencing.

    Directory of Open Access Journals (Sweden)

    Xuchao Li

    Full Text Available BACKGROUND: Copy number variations (CNVs) represent an important type of genetic variation that deeply impacts phenotypic polymorphisms and human diseases. The advent of high-throughput sequencing technologies provides an opportunity to revolutionize the discovery of CNVs and to explore their relationship with diseases. However, most of the existing methods depend on sequencing depth and show instability with low sequence coverage. In this study, using low coverage whole-genome sequencing (LCS), we have developed an effective population-scale CNV calling (PSCC) method. METHODOLOGY/PRINCIPAL FINDINGS: In our novel method, a two-step correction was used to remove biases caused by local GC content and complex genomic characteristics. We chose a binary segmentation method to locate CNV segments and designed combined statistical tests to ensure stable control of false positives. The simulation data showed that our PSCC method could achieve 99.7%/100% and 98.6%/100% sensitivity and specificity for over 300 kb CNV calling under LCS (∼2×) and ultra-low coverage sequencing (∼0.2×), respectively. Finally, we applied this novel method to analyze 34 clinical samples with an average of 2× LCS. In the final results, all 31 pathogenic CNVs identified by aCGH were successfully detected. In addition, the performance comparison revealed that our method had significant advantages over existing methods using ultra-low coverage sequencing. CONCLUSIONS/SIGNIFICANCE: Our study showed that PSCC can sensitively and reliably detect CNVs using low coverage or even ultra-low coverage data through population-scale sequencing.
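
    One common way to realize a GC-content correction of the kind mentioned above is to rescale each window's read count by the ratio of the genome-wide median depth to the median depth of windows with similar GC fraction. The sketch below shows that generic step only; the bin count and function name are assumptions, and the second correction step of PSCC is not reproduced.

```python
import numpy as np

def gc_correct(read_counts, gc_fraction, n_bins=40):
    """Median-based GC-content correction of per-window read counts.

    read_counts : raw read counts per genomic window
    gc_fraction : GC fraction of each window (0..1)
    Windows are grouped into GC bins; each count is rescaled so that every
    GC bin has the same median depth as the genome-wide median.
    """
    counts = np.asarray(read_counts, dtype=float)
    corrected = counts.copy()
    global_median = np.median(counts)
    bins = np.minimum((np.asarray(gc_fraction) * n_bins).astype(int), n_bins - 1)
    for b in range(n_bins):
        mask = bins == b
        if mask.any():
            bin_median = np.median(counts[mask])
            if bin_median > 0:
                corrected[mask] = counts[mask] * global_median / bin_median
    return corrected
```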

  9. Nonlinearly preconditioned semismooth Newton methods for variational inequality solution of two-phase flow in porous media

    KAUST Repository

    Yang, Haijian

    2016-12-10

    Most existing methods for solving two-phase flow problems in porous media do not take the physically feasible saturation fractions between 0 and 1 into account, which often destroys the numerical accuracy and physical interpretability of the simulation. To calculate the solution without the loss of this basic requirement, we introduce a variational inequality formulation of the saturation equilibrium with a box inequality constraint, and use a conservative finite element method for the spatial discretization and a backward differentiation formula with adaptive time stepping for the temporal integration. The resulting variational inequality system at each time step is solved by using a semismooth Newton algorithm. To accelerate the Newton convergence and improve the robustness, we employ a family of adaptive nonlinear elimination methods as a nonlinear preconditioner. Some numerical results are presented to demonstrate the robustness and efficiency of the proposed algorithm. A comparison is also included to show the superiority of the proposed fully implicit approach over the classical IMplicit Pressure-Explicit Saturation (IMPES) method in terms of the time step size and the total execution time measured on a parallel computer.
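
    As a much simpler stand-in for the semismooth Newton solver described above, the sketch below illustrates the underlying reformulation: a box-constrained variational inequality written as the fixed point x = P_[0,1](x - gamma*F(x)) and solved by projection iteration for a synthetic monotone operator. All data are synthetic and the solver is not the one used in the paper.

    # Sketch: projection iteration for a box-constrained variational inequality with F(x) = A x - b.
    import numpy as np

    rng = np.random.default_rng(1)
    n = 20
    M = rng.standard_normal((n, n))
    A = M @ M.T + n * np.eye(n)            # symmetric positive definite -> strongly monotone F
    b = rng.standard_normal(n)
    F = lambda x: A @ x - b

    x = np.full(n, 0.5)                    # start inside the physically feasible box [0, 1]
    gamma = 1.0 / np.linalg.norm(A, 2)     # conservative step size
    for _ in range(500):
        x_new = np.clip(x - gamma * F(x), 0.0, 1.0)   # project back onto the box
        if np.linalg.norm(x_new - x) < 1e-10:
            x = x_new
            break
        x = x_new
    # x now satisfies the box-constrained VI up to the iteration tolerance.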

  10. TU-CD-BRA-12: Coupling PET Image Restoration and Segmentation Using Variational Method with Multiple Regularizations

    Energy Technology Data Exchange (ETDEWEB)

    Li, L; Tan, S [Huazhong University of Science and Technology, Wuhan, Hubei (China); Lu, W [University of Maryland School of Medicine, Baltimore, MD (United States)

    2015-06-15

    Purpose: To propose a new variational method which couples image restoration with tumor segmentation for PET images using multiple regularizations. Methods: The partial volume effect (PVE) is a major degrading factor impacting tumor segmentation accuracy in PET imaging. Existing segmentation methods usually require prior calibrations to compensate for PVE and are highly system-dependent. Taking into account that image restoration and segmentation can promote each other and are tightly coupled, we proposed a variational method to solve the two problems together. Our method integrated total variation (TV) semi-blind deconvolution and Mumford-Shah (MS) segmentation. The TV norm was used on edges to protect the edge information, and the L{sub 2} norm was used to avoid the staircase effect in no-edge areas. The blur kernel was constrained to a Gaussian model parameterized by its variance, and we assumed that the variances in the X-Y and Z directions are different. The energy functional was iteratively optimized by an alternate minimization algorithm. Segmentation performance was tested on eleven patients with non-Hodgkin’s lymphoma, and evaluated by the Dice similarity index (DSI) and classification error (CE). For comparison, seven other widely used methods were also tested and evaluated. Results: The combination of TV and L{sub 2} regularizations effectively improved the segmentation accuracy. The average DSI increased by around 0.1 compared with using either the TV or the L{sub 2} norm alone. The proposed method was clearly superior to the other tested methods, with an average DSI and CE of 0.80 and 0.41, while the FCM method (the second best) had an average DSI and CE of only 0.66 and 0.64. Conclusion: Coupling image restoration and segmentation can handle PVE and thus improves tumor segmentation accuracy in PET. Alternate use of TV and L2 regularizations can further improve the performance of the algorithm. This work was supported in part by National Natural

  11. Luminescence variations in europium-doped silicon-substituted hydroxyapatite nanobiophosphor via three different methods

    Energy Technology Data Exchange (ETDEWEB)

    Thang, Cao Xuan; Pham, Vuong-Hung, E-mail: vuong.phamhung@hust.edu.vn

    2015-07-15

    Highlights: • Europium-doped silicon-substituted hydroxyapatite was synthesized by wet chemical methods. • The morphology of the nanoparticles depended on the synthesis method. • The photoluminescence intensity of the samples increases with increasing Si substitution, Eu doping and thermal annealing. - Abstract: This paper reports the first attempt at the synthesis of europium-doped Si-substituted hydroxyapatite (HA) nanostructures to achieve strong and stable luminescence of a nanobiophosphor, in particular by addition of different Eu dopants and Si substitutions and by application of optimum annealing temperatures of up to 1000 °C. The nanobiophosphor was synthesized by the coprecipitation, microwave, and hydrothermal methods. The nanoparticles demonstrated a nanowire to a spindle-like morphology, which was dependent on the method of synthesis. The photoluminescence (PL) intensity of the sample increases with the increase in Si substitutions and Eu dopants. The luminescent nanoparticles also showed the typical luminescence of Eu{sup 3+} centered at 610 nm, which was more efficient for the annealed Eu-doped Si-HA nanoparticles than for the as-synthesized nanoparticles. Among the different synthesis methods, the hydrothermal method reveals the best light emission, represented by high PL intensity and narrow PL spectra. These results suggest the potential application of Eu-doped Si-HA as a stable and biocompatible nanophosphor for light emission and nanomedicine.

  12. Variational method for the nonlinear dynamics of an elliptic magnetic stagnation line

    Energy Technology Data Exchange (ETDEWEB)

    Khater, A.H.; Seadawy, A.R. [Beni-Suef Univ., Mathematics Dept., Faculty of Science (Egypt); Khater, A.H.; Callebaut, D.K. [Antwerp Univ., Dept. Natuurkunde (Belgium); Helal, M.A. [Cairo Univ., Mathematics Dept., Faculty of Science, Giza (Egypt)

    2006-08-15

    The nonlinear evolution of the kink instability of a plasma with an elliptic magnetic stagnation line is studied by means of an amplitude expansion of the ideal magnetohydrodynamic equations. Wahlberg et al. have shown that, near marginal stability, the nonlinear evolution of the instability can be described in terms of a two-dimensional potential U(X,Y), where X and Y represent the amplitudes of the perturbations with positive and negative helical polarization. The potential U(X,Y) is found to be nonlinearly stabilizing for all values of the polarization. In our paper a Lagrangian and an invariant variational principle for two coupled nonlinear ordinary differential equations describing the nonlinear evolution of the stagnation line instability with arbitrary polarization are given. Using a trial function in a rectangular box we find the functional integral. The general case for the two box potential can be obtained on the basis of a different Ansatz, where we approximate the Jost function by polynomials of order n instead of a piecewise linear function. An example for the second order is given to illustrate the general case. Some considerations concerning solar filaments and filament bands (circular or straight) are indicated as possible applications, besides laboratory experiments with cusp geometry corresponding to quadrupolar cusp geometries for some clouds and thunderstorms. (authors)

  13. An integration of minimum local feature representation methods to recognize large variation of foods

    Science.gov (United States)

    Razali, Mohd Norhisham bin; Manshor, Noridayu; Halin, Alfian Abdul; Mustapha, Norwati; Yaakob, Razali

    2017-10-01

    Local invariant features have been shown to be successful in describing object appearances for image classification tasks. Such features are robust towards occlusion and clutter and are also invariant against scale and orientation changes. This makes them suitable for classification tasks with little inter-class similarity and large intra-class difference. In this paper, we propose an integrated representation of the Speeded-Up Robust Feature (SURF) and Scale Invariant Feature Transform (SIFT) descriptors, using a late fusion strategy. The proposed representation is used for food recognition from a dataset of food images with complex appearance variations. The Bag of Features (BOF) approach is employed to enhance the discriminative ability of the local features. Firstly, the individual local features are extracted to construct two kinds of visual vocabularies, representing SURF and SIFT. The visual vocabularies are then concatenated and fed into a Linear Support Vector Machine (SVM) to classify the respective food categories. Experimental results demonstrate impressive overall recognition at 82.38% classification accuracy on the challenging UEC-Food100 dataset.

  14. Effects of seasonal variations and collection methods on the mineral composition of propolis from Apis mellifera Linnaeus Beehives.

    Science.gov (United States)

    Souza, E A; Zaluski, R; Veiga, N; Orsi, R O

    2016-06-01

    The effects of seasonal variations and the methods of collection of propolis produced by Africanized honey bees Apis mellifera Linnaeus, 1758, on the composition of constituent minerals such as magnesium (Mg), zinc (Zn), iron (Fe), sodium (Na), calcium (Ca), copper (Cu), and potassium (K) were evaluated. Propolis was harvested from 25 beehives by scraping or by means of propolis collectors (screen, "intelligent" collector propolis [ICP], lateral opening of the super [LOS], and underlay method). During the one-year study, the propolis produced was harvested each month, ground, homogenized, and stored in a freezer at -10 ºC. Seasonal analyses of the mineral composition were carried out by atomic absorption spectrophotometry and the results were evaluated by analysis of variance (ANOVA), followed by Tukey-Kramer's test to compare the mean values (p<0.05). The results showed that seasonal variations influence the contents of 5 minerals (Mg, Fe, Na, Ca, and Cu), and the propolis harvesting method affects the contents of 4 minerals (Mg, Zn, Fe, and Ca).

  15. Effects of seasonal variations and collection methods on the mineral composition of propolis from Apis mellifera Linnaeus Beehives

    Directory of Open Access Journals (Sweden)

    E. A. Souza

    Full Text Available Abstract The effects of seasonal variations and the methods of collection of propolis produced by Africanized honey bees Apis mellifera Linnaeus, 1758, on the composition of constituent minerals such as magnesium (Mg), zinc (Zn), iron (Fe), sodium (Na), calcium (Ca), copper (Cu), and potassium (K) were evaluated. Propolis was harvested from 25 beehives by scraping or by means of propolis collectors (screen, “intelligent” collector propolis [ICP], lateral opening of the super [LOS], and underlay method). During the one-year study, the propolis produced was harvested each month, ground, homogenized, and stored in a freezer at -10 ºC. Seasonal analyses of the mineral composition were carried out by atomic absorption spectrophotometry and the results were evaluated by analysis of variance (ANOVA), followed by Tukey-Kramer’s test to compare the mean values (p<0.05). The results showed that seasonal variations influence the contents of 5 minerals (Mg, Fe, Na, Ca, and Cu), and the propolis harvesting method affects the contents of 4 minerals (Mg, Zn, Fe, and Ca).

  16. Development of a method to compensate for signal quality variations in repeated auditory event-related potential recordings

    Directory of Open Access Journals (Sweden)

    Antti K O Paukkunen

    2010-03-01

    Full Text Available Reliable measurements are mandatory in clinically relevant auditory event-related potential (AERP)-based tools and applications. The comparability of the results gets worse as a result of variations in the remaining measurement error. A potential method is studied that allows optimization of the length of the recording session according to the concurrent quality of the recorded data. In this way, the sufficiency of the trials can be better guaranteed, which enables control of the remaining measurement error. The suggested method is based on monitoring the signal-to-noise ratio (SNR) and remaining measurement error, which are compared to predefined threshold values. The SNR test is well defined, but the criterion for the measurement error test still requires further empirical testing in practice. According to the results, the reproducibility of average AERPs in repeated experiments is improved in comparison to a case where the number of recorded trials is constant. The test-retest reliability is not significantly changed on average but the between-subject variation in the value is reduced by 33-35%. The optimization of the number of trials also prevents excessive recordings which might be of practical interest especially in the clinical context. The efficiency of the method may be further increased by implementing online tools that improve data consistency.

  17. Development of a Method to Compensate for Signal Quality Variations in Repeated Auditory Event-Related Potential Recordings

    Science.gov (United States)

    Paukkunen, Antti K. O.; Leminen, Miika M.; Sepponen, Raimo

    2010-01-01

    Reliable measurements are mandatory in clinically relevant auditory event-related potential (AERP)-based tools and applications. The comparability of the results gets worse as a result of variations in the remaining measurement error. A potential method is studied that allows optimization of the length of the recording session according to the concurrent quality of the recorded data. In this way, the sufficiency of the trials can be better guaranteed, which enables control of the remaining measurement error. The suggested method is based on monitoring the signal-to-noise ratio (SNR) and remaining measurement error which are compared to predefined threshold values. The SNR test is well defined, but the criterion for the measurement error test still requires further empirical testing in practice. According to the results, the reproducibility of average AERPs in repeated experiments is improved in comparison to a case where the number of recorded trials is constant. The test-retest reliability is not significantly changed on average but the between-subject variation in the value is reduced by 33–35%. The optimization of the number of trials also prevents excessive recordings which might be of practical interest especially in the clinical context. The efficiency of the method may be further increased by implementing online tools that improve data consistency. PMID:20407635
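
    A rough illustration of the stopping idea on synthetic single-trial data is sketched below. The component shape, noise level, latency window, minimum trial count and SNR threshold are all placeholder assumptions, not the criteria used in the study.

    # Sketch: stop recording once the SNR of the running average ERP exceeds a threshold.
    import numpy as np

    rng = np.random.default_rng(42)
    fs = 500                                           # assumed sampling rate (Hz)
    t = np.arange(0, 0.6, 1 / fs)                      # 600 ms epoch
    erp = 5e-6 * np.exp(-((t - 0.1) / 0.02) ** 2)      # synthetic auditory component at 100 ms
    noise_sd = 20e-6
    snr_threshold, min_trials, max_trials = 5.0, 20, 1000

    trials = []
    for k in range(1, max_trials + 1):
        trials.append(erp + noise_sd * rng.standard_normal(t.size))
        if k < min_trials:
            continue
        data = np.asarray(trials)
        avg = data.mean(axis=0)
        signs = np.where(np.arange(k) % 2 == 0, 1.0, -1.0)[:, None]
        noise_rms = np.std((data * signs).mean(axis=0))    # +/- average cancels the ERP
        window = (t >= 0.08) & (t <= 0.12)                 # latency window of interest
        snr = np.max(np.abs(avg[window])) / noise_rms
        if snr >= snr_threshold:
            print(f"recording could stop after {k} trials (SNR = {snr:.1f})")
            break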

  18. Regeneration Methods Affect Genetic Variation and Structure in Shortleaf Pine (Pinus Echinata Mill.)

    Science.gov (United States)

    Rajiv G. Raja; Charles G. Tauer; Robert F. Wittwer; Yinghua Huang

    1998-01-01

    The effects of regeneration methods on genetic diversity and structure in shortleaf pine (Pinus echinata Mill.) were examined by quantifying the changes in genetic composition of shortleaf pine stands following harvest by monitoring changes in allele number and frequency at heterozygous loci over time. The results were also compared to the genetic...

  19. A Simple PV Inverter Power Factor Control Method Based on Solar Irradiance Variation

    DEFF Research Database (Denmark)

    Gökmen, Nuri; Hu, Weihao; Chen, Zhe

    2017-01-01

    of these impacts mostly depends on PV penetration level, grid and weather characteristics as well as the interaction of load and generation. In this study, a reactive power control method is proposed benefitting from solar irradiance measurements in weather stations. Accordingly, power factors of PV inverters...

  20. A Combined Post-Filtering Method to Improve Accuracy of Variational Optical Flow Estimation

    NARCIS (Netherlands)

    Tu, Z.; Veltkamp, R.C.; van der Aa, N.P.; van Gemeren, C.J.

    We present a novel combined post-filtering (CPF) method to improve the accuracy of optical flow estimation. Its attractive advantages are that outlier reduction is attained while discontinuities are well preserved, and occlusions are partially handled. Major contributions are the following: First,

  1. Variational Multiscale Finite Element Method for Flows in Highly Porous Media

    KAUST Repository

    Iliev, O.

    2011-10-01

    We present a two-scale finite element method (FEM) for solving Brinkman's and Darcy's equations. These systems of equations model fluid flows in highly porous and porous media, respectively. The method uses a recently proposed discontinuous Galerkin FEM for Stokes' equations by Wang and Ye and the concept of subgrid approximation developed by Arbogast for Darcy's equations. In order to reduce the "resonance error" and to ensure convergence to the global fine solution, the algorithm is put in the framework of alternating Schwarz iterations using subdomains around the coarse-grid boundaries. The discussed algorithms are implemented using the Deal.II finite element library and are tested on a number of model problems. © 2011 Society for Industrial and Applied Mathematics.

  2. An Efficient Augmented Lagrangian Method with Applications to Total Variation Minimization

    Science.gov (United States)

    2012-08-17

    The direction d_k = −∂_1 φ(x_k, y(x_k)) can be regarded as a subgradient direction for ψ(x). However, since we do not require that y(x) be sub-differentiable, ... where we do not assume any differentiability of y(x). As such, NADA is no longer a standard gradient or subgradient method previously studied in

  3. A four-way coupled Euler—Lagrange approach using a variational multiscale method for simulating cavitation

    Science.gov (United States)

    Hammerl, Georg; Wall, Wolfgang A.

    2015-12-01

    An Euler-Lagrange model is developed to simulate bubbly flow around an obstacle with the aim to resolve large and meso-scales of cavitation phenomena. The volume averaged Navier-Stokes equations are discretized using finite elements on an unstructured grid with a variational multiscale method. The trajectory of each bubble is tracked using Newton's second law. Furthermore, bubble interaction is modeled with a soft sphere contact model to obtain a four-way coupled approach. The new feature presented in this work, besides using a variational multiscale method in an Euler-Lagrange framework, is an improved computation of the void fraction. A second order polynomial is used as the filtering function and the volume integral is transformed by applying the divergence theorem twice, leading to line integrals which can be integrated analytically. Therefore, the accuracy of the void fraction computation is increased and the discontinuities which otherwise arise when the kernel touches a Gauss point across time steps are avoided. This integration technique is not limited to the chosen spatial discretization. The numerical test case considers flow in a channel with a cylindrical obstacle. Bubbles are released close to the inflow boundary and void fractions up to 30% occur at the stagnation point of the obstacle.

  4. Vitamin D status assessed by a validated HPLC method: within and between variation in subjects supplemented with vitamin D3

    DEFF Research Database (Denmark)

    Jakobsen, Jette; Bysted, Anette; Andersen, Rikke

    2009-01-01

    -subject variation of vitamin D status in serum samples from four different dietary intervention studies in which subjects (n=92) were supplemented with different doses of vitamin D3 (5-12 μg/day) and for different durations (4-20 months). Results. The HPLC method was applicable for 4.0-200 nmol S-25OHD/L, while...... estimated in each of the four human intervention studies did not differ significantly (p=0.55). Hence, the pooled standard deviation was 15.3 nmol 25OHD3/L. In the studies with 6-8 samplings during 7-20 months of supplementation, the within-subject variation was 3.9-7.2 nmol 25OHD3/L, while vitamin D status...... was in the range 47-120 nmol/L. Conclusions. The validated HPLC method was applied in samples from human intervention studies in which subjects were supplemented with vitamin D3. The estimated standard deviation between and within subjects is useful in the forthcoming decision on setting limits for optimal vitamin...

  5. LV wall segmentation using the variational level set method (LSM) with additional shape constraint for oedema quantification

    Science.gov (United States)

    Kadir, K.; Gao, H.; Payne, A.; Soraghan, J.; Berry, C.

    2012-10-01

    In this paper an automatic algorithm for left ventricle (LV) wall segmentation and oedema quantification from T2-weighted cardiac magnetic resonance (CMR) images is presented. The extent of myocardial oedema delineates the ischaemic area-at-risk (AAR) after myocardial infarction (MI). Since the AAR can be used to estimate the amount of salvageable myocardium post-MI, oedema imaging has potential clinical utility in the management of acute MI patients. This paper presents a new scheme based on the variational level set method (LSM) with an additional shape constraint for the segmentation of T2-weighted CMR images. In our approach, shape information of the myocardial wall is utilized to introduce a shape feature of the myocardial wall into the variational level set formulation. The performance of the method is tested using real CMR images (12 patients) and the results of the automatic system are compared to manual segmentation. The mean perpendicular distances between the automatic and manual LV wall boundaries are in the range of 1-2 mm. Bland-Altman analysis on LV wall area indicates that there is no consistent bias as a function of LV wall area, with a mean bias of -121 mm2 between individual investigator one (IV1) and the LSM, and -122 mm2 between individual investigator two (IV2) and the LSM. Furthermore, the oedema quantification demonstrates good correlation when compared to an expert, with an average error of 9.3% for 69 slices of short axis CMR images from 12 patients.

  6. A note on variational multiscale methods for high-contrast heterogeneous porous media flows with rough source terms

    KAUST Repository

    Calo, Victor M.

    2011-09-01

    In this short note, we discuss variational multiscale methods for solving porous media flows in high-contrast heterogeneous media with rough source terms. Our objective is to separate, as much as possible, subgrid effects induced by the media properties from those due to heterogeneous source terms. For this reason, enriched coarse spaces designed for high-contrast multiscale problems are used to represent the effects of heterogeneities of the media. Furthermore, rough source terms are captured via auxiliary correction equations that appear in the formulation of variational multiscale methods [23]. These auxiliary equations are localized and one can use additive or multiplicative constructions for the subgrid corrections as discussed in the current paper. Our preliminary numerical results show that one can capture the effects due to both spatial heterogeneities in the coefficients (such as permeability field) and source terms (e.g., due to singular well terms) in one iteration. We test the cases for both smooth source terms and rough source terms and show that with the multiplicative correction, the numerical approximations are more accurate compared to the additive correction. © 2010 Elsevier Ltd.

  7. Variation of Routine Soil Analysis When Compared with Hyperspectral Narrow Band Sensing Method

    Directory of Open Access Journals (Sweden)

    José A. M. Demattê

    2010-08-01

    Full Text Available The objectives of this research were to: (i) develop hyperspectral narrow-band models to determine soil variables such as organic matter content (OM), sum of cations (SC = Ca + Mg + K), aluminum saturation (m%), cation saturation (V%), cation exchange capacity (CEC), and silt, sand and clay content using visible-near infrared (Vis-NIR) diffuse reflectance spectra; (ii) compare the variations of the chemical and the spectroradiometric soil analyses (Vis-NIR). The study area is located in São Paulo State, Brazil. The soils were sampled over an area of 473 ha divided into grids (100 × 100 m), with a total of 948 georeferenced soil samples. The laboratory RS data were obtained using an IRIS (Infrared Intelligent Spectroradiometer) sensor (400–2,500 nm) with a 2-nm spectral resolution between 450 and 1,000 nm and 4-nm between 1,000 and 2,500 nm. Satellite reflectance values were sampled from corrected Landsat Thematic Mapper (TM) images. Each pixel in the image was evaluated in terms of its vegetation index, color compositions, and soil line concepts for specific locations of the field in the image. Chemical and physical analyses (organic matter content, sand, silt, clay, sum of cations, cation saturation, aluminum saturation and cation exchange capacity) were performed in the laboratory. Statistical analyses and multiple regression equations for soil attribute prediction using radiometric data were developed. Laboratory data used 22 bands and 13 "Reflectance Inflexion Differences" (RID) from different wavelength intervals of the optical spectrum, whereas six TM-Landsat bands (1, 2, 3, 4, 5, and 7) were used in the analysis. Estimation of some tropical soil attributes was possible using laboratory spectral analysis. Laboratory spectral reflectance (SR) presented high correlations with traditional laboratory analyses for soil attributes such as clay (R2 = 0.84, RMSE = 3.75) and sand (R2 = 0.85, RMSE = 3.74). The most sensitive narrow-bands in modeling
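
    A generic sketch of the regression step on synthetic data: predicting clay content from a set of reflectance bands by ordinary multiple linear regression and reporting R2 and RMSE. The band count and coefficients are illustrative, not those of the study.

    # Sketch: multiple linear regression of a soil attribute on reflectance bands (synthetic data).
    import numpy as np

    rng = np.random.default_rng(7)
    n_samples, n_bands = 948, 22
    reflectance = rng.uniform(0.05, 0.6, size=(n_samples, n_bands))
    true_coef = rng.normal(0, 10, size=n_bands)
    clay = 30 + reflectance @ true_coef + rng.normal(0, 3, size=n_samples)

    X = np.column_stack([np.ones(n_samples), reflectance])   # add intercept column
    beta, *_ = np.linalg.lstsq(X, clay, rcond=None)
    pred = X @ beta
    ss_res = np.sum((clay - pred) ** 2)
    ss_tot = np.sum((clay - clay.mean()) ** 2)
    print(f"R2 = {1 - ss_res / ss_tot:.2f}, RMSE = {np.sqrt(ss_res / n_samples):.2f}")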

  8. A signal detection method for temporal variation of adverse effect with vaccine adverse event reporting system data.

    Science.gov (United States)

    Cai, Yi; Du, Jingcheng; Huang, Jing; Ellenberg, Susan S; Hennessy, Sean; Tao, Cui; Chen, Yong

    2017-07-05

    To identify safety signals by manual review of individual reports in large surveillance databases is time consuming; such an approach is very unlikely to reveal complex relationships between medications and adverse events. Since the late 1990s, efforts have been made to develop data mining tools to systematically and automatically search for safety signals in surveillance databases. Influenza vaccines present special challenges to safety surveillance because the vaccine changes every year in response to the influenza strains predicted to be prevalent that year. Therefore, it may be expected that reporting rates of adverse events following flu vaccines (number of reports for a specific vaccine-event combination/number of reports for all vaccine-event combinations) may vary substantially across reporting years. Current surveillance methods seldom consider these variations in signal detection, and reports from different years are typically collapsed together to conduct safety analyses. However, merging reports from different years ignores the potential heterogeneity of reporting rates across years and may miss important safety signals. Reports of adverse events between 1990 and 2013 were extracted from the Vaccine Adverse Event Reporting System (VAERS) database and formatted into a three-dimensional data array with type of vaccine, group of adverse events and reporting time as the three dimensions. We propose a random effects model to test the heterogeneity of reporting rates for a given vaccine-event combination across reporting years. The proposed method provides a rigorous statistical procedure to detect differences in reporting rates among years. We also introduce a new visualization tool to summarize the results of the proposed method when applied to multiple vaccine-adverse event combinations. We applied the proposed method to detect safety signals of FLU3, an influenza vaccine containing three flu strains, in the VAERS database. We showed that it had high
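
    The paper's random effects model is not reproduced here. As a simple stand-in, the sketch below applies a classical chi-square test of homogeneity to synthetic yearly reporting proportions for one vaccine-event combination, which conveys the idea of testing across-year variation; all counts are made up.

    # Sketch: chi-square homogeneity test of yearly reporting proportions (synthetic counts).
    import numpy as np
    from scipy.stats import chi2

    years = np.arange(2004, 2014)
    event_reports = np.array([12, 15, 30, 28, 14, 90, 35, 22, 18, 25])
    total_reports = np.array([800, 900, 1000, 950, 870, 1500, 1100, 980, 940, 1010])

    pooled_rate = event_reports.sum() / total_reports.sum()
    expected = pooled_rate * total_reports
    # Pearson chi-square over the 2-by-k table, df = number of years - 1
    stat = np.sum((event_reports - expected) ** 2 / (expected * (1 - pooled_rate)))
    p_value = chi2.sf(stat, df=len(years) - 1)
    print(f"chi2 = {stat:.1f}, p = {p_value:.3g}")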

  9. An Integral Equation Method Coupling with Variational Approach for Studying Coarse-Grained Lipid Dynamics

    Science.gov (United States)

    Fu, Szu-Pei; Jiang, Shidong; Klöckner, Andreas; Ryham, Rolf; Wala, Matt; Young, Yuan-Nan

    2017-11-01

    At the macroscopic level, the well-known Helfrich membrane model has been extensively utilized, as it captures some macroscopic physical properties of a lipid bilayer membrane. However, some phenomena such as membrane fusion and micelle formation cannot be described in this macroscopic framework. Yet the immense molecular detail of a lipid bilayer membrane is impossible to include in a plausible physical model. Therefore, in order to include the salient molecular details, we study the dynamics of a coarse-grained lipid bilayer membrane using Janus particle configurations to represent collections of lipids. These coarse-grained lipid molecules interact with each other through an action field that describes their hydrophobic tail-tail interactions. For this action potential, we adopt an integral equation method to solve for the energy minimizer with a specific boundary condition on each Janus particle. Both QBX (quadrature by expansion) and the fast multipole method (FMM) are used to efficiently solve the integral equation. We also examine the numerical accuracy and qualitative observations from large system simulations.

  10. A New Calculation Method of Dynamic Kill Fluid Density Variation during Deep Water Drilling

    Directory of Open Access Journals (Sweden)

    Honghai Fan

    2017-01-01

    Full Text Available There are plenty of uncertainties and enormous challenges in deep water drilling due to complicated shallow flows and deep strata of high temperature and pressure. This paper investigates the density of dynamic kill fluid and the optimum density during the kill operation, in which the dynamic kill process can be divided into two stages, namely a dynamic stable stage and a static stable stage. The dynamic kill fluid consists of a single liquid phase and different solid phases, and the liquid phase is a mixture of water and oil. Therefore, a new method for calculating the temperature and pressure field of a deep water wellbore is proposed. The paper calculates the changing trend of kill fluid density under different temperatures and pressures by means of the superposition method, nonlinear regression, and a segment processing technique. By employing the improved model of kill fluid density, a deep water kill operation in a well is investigated. By comparison, the calculated density results are in line with the field data. The model proposed in this paper proves to be satisfactory in optimizing dynamic kill operations to ensure safety in deep water drilling.

  11. Variation of strain rate sensitivity index of a superplastic aluminum alloy in different testing methods

    Science.gov (United States)

    Majidi, Omid; Jahazi, Mohammad; Bombardier, Nicolas; Samuel, Ehab

    2017-10-01

    The strain rate sensitivity index, or m-value, is commonly applied as a tool to evaluate the impact of the strain rate on the viscoplastic behaviour of materials. The m-value has frequently been treated as a constant in modeling material behaviour in numerical simulations of superplastic forming processes. However, the impact of the testing variables on the measured m-value has not been investigated comprehensively. In this study, the m-value for a superplastic grade of an aluminum alloy (i.e., AA5083) has been investigated. The conditions and parameters that influence the strain rate sensitivity of the material are compared across three different testing methods, i.e., the monotonic uniaxial tension test, the strain rate jump test and the stress relaxation test. All tests were conducted at elevated temperature (470°C) and at strain rates up to 0.1 s-1. The results show that the m-value is not constant and is highly dependent on the applied strain rate, strain level and testing method.
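
    For reference, the strain rate jump test evaluates the index as m = ln(sigma2/sigma1) / ln(rate2/rate1); the sketch below uses illustrative numbers only, not data from the study.

    # Sketch: m-value from a strain rate jump test.
    import numpy as np

    rate1, rate2 = 1e-3, 1e-2          # applied strain rates (1/s), assumed
    sigma1, sigma2 = 10.0, 17.0        # flow stresses (MPa) before/after the jump, assumed
    m = np.log(sigma2 / sigma1) / np.log(rate2 / rate1)
    print(f"m = {m:.2f}")              # ~0.23 here; superplastic behaviour is usually associated with larger m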

  12. The optimal modified variational iteration method for the Lane-Emden equations with Neumann and Robin boundary conditions

    Science.gov (United States)

    Singh, Randhir; Das, Nilima; Kumar, Jitendra

    2017-06-01

    An effective analytical technique is proposed for the solution of the Lane-Emden equations. The proposed technique is based on the variational iteration method (VIM) and the convergence control parameter h. In order to avoid solving a sequence of nonlinear algebraic equations or complicated integrals for the derivation of the unknown constant, the boundary conditions are used before designing the recursive scheme for the solution. Series solutions are found which converge rapidly to the exact solution. Convergence analysis and error bounds are discussed. The accuracy and applicability of the method are examined by solving three singular problems: i) the nonlinear Poisson-Boltzmann equation, ii) the distribution of heat sources in the human head, iii) a second-kind Lane-Emden equation.

  13. Method of continuous variations: applications of job plots to the study of molecular associations in organometallic chemistry.

    Science.gov (United States)

    Renny, Joseph S; Tomasevich, Laura L; Tallmadge, Evan H; Collum, David B

    2013-11-11

    Applications of the method of continuous variations (MCV or the Method of Job) to problems of interest to organometallic chemists are described. MCV provides qualitative and quantitative insights into the stoichiometries underlying association of m molecules of A and n molecules of B to form A(m)B(n) . Applications to complex ensembles probe associations that form metal clusters and aggregates. Job plots in which reaction rates are monitored provide relative stoichiometries in rate-limiting transition structures. In a specialized variant, ligand- or solvent-dependent reaction rates are dissected into contributions in both the ground states and transition states, which affords insights into the full reaction coordinate from a single Job plot. Gaps in the literature are identified and critiqued. Copyright © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
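
    A toy numerical Job plot under a fast, strong-binding assumption: the property proportional to the complex A_mB_n peaks at mole fraction x = m/(m+n). All values are synthetic and only illustrate the shape of the plot.

    # Sketch: location of the Job plot maximum for an assumed 1:2 complex (m=1, n=2).
    import numpy as np

    x = np.linspace(0.0, 1.0, 101)      # mole fraction of A at constant total concentration
    m, n = 1, 2
    # For a strongly bound complex, the amount formed is limited by the scarcer component:
    complex_amount = np.minimum(x / m, (1.0 - x) / n)
    x_max = x[np.argmax(complex_amount)]
    print(f"Job plot maximum at x = {x_max:.2f} (theory: m/(m+n) = {m/(m+n):.2f})")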

  14. Rapid and Inexpensive Screening of Genomic Copy Number Variations Using a Novel Quantitative Fluorescent PCR Method

    Directory of Open Access Journals (Sweden)

    Martin Stofanko

    2013-01-01

    Full Text Available Detection of human microdeletion and microduplication syndromes poses a significant burden on public healthcare systems in developing countries. With genome-wide diagnostic assays frequently inaccessible, targeted low-cost PCR-based approaches are preferred. However, their reproducibility depends on equally efficient amplification using a number of target and control primers. To address this, the recently described technique called Microdeletion/Microduplication Quantitative Fluorescent PCR (MQF-PCR) was shown to reliably detect four human syndromes by quantifying DNA amplification in an internally controlled PCR reaction. Here, we confirm its utility in the detection of eight human microdeletion syndromes, including the more common WAGR, Smith-Magenis, and Potocki-Lupski syndromes, with 100% sensitivity and 100% specificity. We present the selection, design, and performance evaluation of detection primers using a variety of approaches. We conclude that MQF-PCR is an easily adaptable method for the detection of human pathological chromosomal aberrations.

  15. Cycle-Based Cluster Variational Method for Direct and Inverse Inference

    Science.gov (United States)

    Furtlehner, Cyril; Decelle, Aurélien

    2016-08-01

    Large scale inference problems of practical interest can often be addressed with help of Markov random fields. This requires to solve in principle two related problems: the first one is to find offline the parameters of the MRF from empirical data (inverse problem); the second one (direct problem) is to set up the inference algorithm to make it as precise, robust and efficient as possible. In this work we address both the direct and inverse problem with mean-field methods of statistical physics, going beyond the Bethe approximation and associated belief propagation algorithm. We elaborate on the idea that loop corrections to belief propagation can be dealt with in a systematic way on pairwise Markov random fields, by using the elements of a cycle basis to define regions in a generalized belief propagation setting. For the direct problem, the region graph is specified in such a way as to avoid feed-back loops as much as possible by selecting a minimal cycle basis. Following this line we are led to propose a two-level algorithm, where a belief propagation algorithm is run alternatively at the level of each cycle and at the inter-region level. Next we observe that the inverse problem can be addressed region by region independently, with one small inverse problem per region to be solved. It turns out that each elementary inverse problem on the loop geometry can be solved efficiently. In particular in the random Ising context we propose two complementary methods based respectively on fixed point equations and on a one-parameter log likelihood function minimization. Numerical experiments confirm the effectiveness of this approach both for the direct and inverse MRF inference. Heterogeneous problems of size up to 10^5 are addressed in a reasonable computational time, notably with better convergence properties than ordinary belief propagation.

  16. A center-median filtering method for detection of temporal variation in coronal images

    Directory of Open Access Journals (Sweden)

    Plowman Joseph

    2016-01-01

    Full Text Available Events in the solar corona are often widely separated in their timescales, which can allow them to be identified when they would otherwise be confused with emission from other sources in the corona. Methods for cleanly separating such events based on their timescales are thus desirable for research in the field. This paper develops a technique for identifying time-varying signals in solar coronal image sequences which is based on a per-pixel running median filter and an understanding of photon-counting statistics. Example applications to “EIT waves” (named after EIT, the EUV Imaging Telescope on the Solar and Heliospheric Observatory) and small-scale dynamics are shown, both using 193 Å data from the Atmospheric Imaging Assembly (AIA) on the Solar Dynamics Observatory. The technique is found to discriminate EIT waves more cleanly than the running and base difference techniques most commonly used. It is also demonstrated that there is more signal in the data than is commonly appreciated, finding that the waves can be traced to the edge of the AIA field of view when the data are rebinned to increase the signal-to-noise ratio.
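
    A minimal sketch of the per-pixel running median idea on a synthetic image cube. The 11-frame window is a placeholder, and the significance scaling simply follows Poisson photon-counting statistics; this is an illustration, not the paper's processing chain.

    # Sketch: per-pixel running median in time to isolate short-timescale variations.
    import numpy as np
    from scipy.ndimage import median_filter

    rng = np.random.default_rng(3)
    cube = rng.poisson(100, size=(60, 64, 64)).astype(float)    # (time, y, x) counts
    # Running median along the time axis only (window of 11 frames here).
    background = median_filter(cube, size=(11, 1, 1), mode="nearest")
    running_difference = cube - background
    # Photon-counting statistics: significance of each deviation in sigma units.
    significance = running_difference / np.sqrt(np.maximum(background, 1.0))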

  17. Determining mineralogical variations of aeolian deposits using thermal infrared emissivity and linear deconvolution methods

    Science.gov (United States)

    Hubbard, Bernard E.; Hooper, Donald M.; Solano, Federico; Mars, John C.

    2018-01-01

    We apply linear deconvolution methods to derive mineral and glass proportions for eight field sample training sites at seven dune fields: (1) Algodones, California; (2) Big Dune, Nevada; (3) Bruneau, Idaho; (4) Great Kobuk Sand Dunes, Alaska; (5) Great Sand Dunes National Park and Preserve, Colorado; (6) Sunset Crater, Arizona; and (7) White Sands National Monument, New Mexico. These dune fields were chosen because they represent a wide range of mineral grain mixtures and allow us to gain a better understanding of both compositional and sorting effects within terrestrial and extraterrestrial dune systems. We also use actual ASTER TIR emissivity imagery to map the spatial distribution of these minerals throughout the seven dune fields and evaluate the effects of degraded spectral resolution on the accuracy of the mineral abundances retrieved. Our results show that hyperspectral data convolutions of our laboratory emissivity spectra outperformed multispectral data convolutions of the same data with respect to the mineral, glass and lithic abundances derived. Both the number and wavelength position of spectral bands greatly impact the accuracy of linear deconvolution retrieval, especially of feldspar proportions (e.g. K-feldspar vs. plagioclase), as well as the detection of certain mafic and carbonate minerals. In particular, the ASTER mapping results show that several of the dune sites display patterns such that less dense minerals typically have higher abundances near the center of the active and most evolved dunes in the field, while denser minerals and glasses appear to be more abundant along the margins of the active dune fields.
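
    Linear deconvolution of an emissivity spectrum is commonly posed as a non-negative least squares problem. The sketch below unmixes a synthetic spectrum against a random endmember library (not the study's spectral library) and renormalizes the retrieved fractions to unit sum.

    # Sketch: spectral unmixing by non-negative least squares (synthetic endmembers).
    import numpy as np
    from scipy.optimize import nnls

    rng = np.random.default_rng(11)
    n_channels, n_endmembers = 80, 5
    library = rng.uniform(0.7, 1.0, size=(n_channels, n_endmembers))     # endmember emissivities
    true_fractions = np.array([0.4, 0.3, 0.2, 0.1, 0.0])
    mixed = library @ true_fractions + rng.normal(0, 0.002, n_channels)  # measured spectrum

    fractions, residual = nnls(library, mixed)
    fractions /= fractions.sum()                    # report as proportions summing to 1
    print(np.round(fractions, 2), f"RMS residual = {residual / np.sqrt(n_channels):.4f}")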

  18. Probabilistic methods for verbal autopsy interpretation: InterVA robustness in relation to variations in a priori probabilities.

    Directory of Open Access Journals (Sweden)

    Edward Fottrell

    Full Text Available InterVA is a probabilistic method for interpreting verbal autopsy (VA) data. It uses a priori approximations of probabilities relating to diseases and symptoms to calculate the probability of specific causes of death given the symptoms recorded in a VA interview. The extent to which InterVA's ability to characterise a population's mortality composition might be sensitive to variations in these a priori probabilities was investigated. A priori InterVA probabilities were changed by 1, 2 or 3 steps on the logarithmic scale on which the original probabilities were based. These changes were made to a random selection of 25% and 50% of the original probabilities, giving six model variants. A random sample of 1,000 VAs from South Africa was used as a basis for experimentation and processed using the original InterVA model and 20 random instances of each of the six InterVA model variants. The rank order of causes of death and the cause-specific mortality fractions (CSMFs) from the original InterVA model were compared with the mean, maximum and minimum results from the 20 randomly modified InterVA models for each of the six variants. CSMFs were functionally similar between the original InterVA model and the models with modified a priori probabilities, such that even the CSMFs based on the InterVA model with the greatest degree of variation in the a priori probabilities would not lead to substantially different public health conclusions. The rank order of causes was also similar between all versions of InterVA. InterVA is a robust model for interpreting VA data, and even relatively large variations in a priori probabilities do not affect InterVA-derived results to a great degree. The original physician-derived a priori probabilities are likely to be sufficient for the global application of InterVA in settings without routine death certification.

  19. A variational numerical method based on finite elements for the nonlinear solution characteristics of the periodically forced Chen system

    Science.gov (United States)

    Khan, Sabeel M.; Sunny, D. A.; Aqeel, M.

    2017-09-01

    Nonlinear dynamical systems and their solutions are very sensitive to initial conditions and therefore need to be approximated carefully. In this article, we present and analyze nonlinear solution characteristics of the periodically forced Chen system with the application of a variational method based on the concept of finite time-elements. Our approach is based on the discretization of physical time space into finite elements where each time-element is mapped to a natural time space. The solution of the system is then determined in natural time space using a set of suitable basis functions. The numerical algorithm is presented and implemented to compute and analyze nonlinear behavior at different time-step sizes. The obtained results show an excellent agreement with the classical RK-4 and RK-5 methods. The accuracy and convergence of the method is shown by comparing numerically computed results with the exact solution for a test problem. The presented method has shown a great potential in dealing with the solutions of nonlinear dynamical systems and thus can be utilized in delineating different features and characteristics of their solutions.
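
    For the RK-4 reference solution mentioned above, a self-contained sketch is given below. The standard Chen parameters (a=35, b=3, c=28) and the placement of the forcing term f*sin(omega*t) in the second equation are assumptions for illustration, not necessarily the paper's exact system.

    # Sketch: classical RK4 integration of a periodically forced Chen system.
    import numpy as np

    a, b, c = 35.0, 3.0, 28.0
    f, omega = 2.0, 1.0

    def chen_forced(t, s):
        x, y, z = s
        return np.array([
            a * (y - x),
            (c - a) * x - x * z + c * y + f * np.sin(omega * t),
            x * y - b * z,
        ])

    def rk4_step(fun, t, s, h):
        k1 = fun(t, s)
        k2 = fun(t + h / 2, s + h / 2 * k1)
        k3 = fun(t + h / 2, s + h / 2 * k2)
        k4 = fun(t + h, s + h * k3)
        return s + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

    h, steps = 1e-3, 20_000
    state = np.array([1.0, 1.0, 1.0])
    trajectory = np.empty((steps, 3))
    for i in range(steps):
        trajectory[i] = state
        state = rk4_step(chen_forced, i * h, state, h)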

  20. Temporal and individual variation in offspring provisioning by tree swallows: a new method of automated nest attendance monitoring.

    Science.gov (United States)

    Rose, Alexandra P

    2009-01-01

    Studies of the ecology and evolution of avian nesting behavior have been limited by the difficulty and expense of sampling nest attendance behavior across entire days or throughout a substantial portion of the nestling period. Direct observation of nesting birds using human observers and most automated devices requires sub-sampling of the nestling period, which does not allow for the quantification of the duration of chick-feeding by parents within a day, and may also inadequately capture temporal variation in the rate at which chicks are fed. Here I describe an inexpensive device, the Automated Perch Recorder (APR) system, which collects accurate, long-term data on hourly rates of nest visitation, the duration of a pair's workday, and the total number of visits the pair makes to their nest across the entire period for which it is deployed. I also describe methods for verifying the accuracy of the system in the field, and several examples of how these data can be used to explore the causes of variation in and tradeoffs between the rate at which birds feed their chicks and the total length of time birds spend feeding chicks in a day.

  1. Temporal and individual variation in offspring provisioning by tree swallows: a new method of automated nest attendance monitoring.

    Directory of Open Access Journals (Sweden)

    Alexandra P Rose

    Full Text Available Studies of the ecology and evolution of avian nesting behavior have been limited by the difficulty and expense of sampling nest attendance behavior across entire days or throughout a substantial portion of the nestling period. Direct observation of nesting birds using human observers and most automated devices requires sub-sampling of the nestling period, which does not allow for the quantification of the duration of chick-feeding by parents within a day, and may also inadequately capture temporal variation in the rate at which chicks are fed. Here I describe an inexpensive device, the Automated Perch Recorder (APR) system, which collects accurate, long-term data on hourly rates of nest visitation, the duration of a pair's workday, and the total number of visits the pair makes to their nest across the entire period for which it is deployed. I also describe methods for verifying the accuracy of the system in the field, and several examples of how these data can be used to explore the causes of variation in and tradeoffs between the rate at which birds feed their chicks and the total length of time birds spend feeding chicks in a day.

  2. New successive variational method of tensor-optimized antisymmetrized molecular dynamics for nuclear many-body systems

    Science.gov (United States)

    Myo, Takayuki; Toki, Hiroshi; Ikeda, Kiyomi; Horiuchi, Hisashi; Suhara, Tadahiro

    2017-07-01

    We recently proposed a new variational theory of “tensor-optimized antisymmetrized molecular dynamics” (TOAMD), which treats the strong interaction explicitly for finite nuclei [T. Myo et al., Prog. Theor. Exp. Phys. 2015, 073D02 (2015)]. In TOAMD, the correlation functions for the tensor force and the short-range repulsion and their multiple products are successively operated on the AMD state. The correlated Hamiltonian is expanded into many-body operators by using the cluster expansion, and all the resulting operators are taken into account in the calculation without any truncation. We show detailed results of TOAMD with the nucleon-nucleon interaction AV8′ for s-shell nuclei. The binding energy and the Hamiltonian components converge successively to the exact values of the few-body calculations. We also apply TOAMD to the Malfliet-Tjon central potential, which has a strong short-range repulsion. TOAMD can treat the short-range correlation and provides accurate energies of s-shell nuclei, reproducing the results of few-body calculations. It turns out that the numerical accuracy of TOAMD with double products of the correlation functions is beyond the variational Monte Carlo method with Jastrow's product-type correlation functions.

  3. Efficient and robust 3D CT image reconstruction based on total generalized variation regularization using the alternating direction method.

    Science.gov (United States)

    Chen, Jianlin; Wang, Linyuan; Yan, Bin; Zhang, Hanming; Cheng, Genyang

    2015-01-01

    Iterative reconstruction algorithms for computed tomography (CT) through total variation regularization based on piecewise constant assumption can produce accurate, robust, and stable results. Nonetheless, this approach is often subject to staircase artefacts and the loss of fine details. To overcome these shortcomings, we introduce a family of novel image regularization penalties called total generalized variation (TGV) for the effective production of high-quality images from incomplete or noisy projection data for 3D reconstruction. We propose a new, fast alternating direction minimization algorithm to solve CT image reconstruction problems through TGV regularization. Based on the theory of sparse-view image reconstruction and the framework of augmented Lagrange function method, the TGV regularization term has been introduced in the computed tomography and is transformed into three independent variables of the optimization problem by introducing auxiliary variables. This new algorithm applies a local linearization and proximity technique to make the FFT-based calculation of the analytical solutions in the frequency domain feasible, thereby significantly reducing the complexity of the algorithm. Experiments with various 3D datasets corresponding to incomplete projection data demonstrate the advantage of our proposed algorithm in terms of preserving fine details and overcoming the staircase effect. The computation cost also suggests that the proposed algorithm is applicable to and is effective for CBCT imaging. Theoretical and technical optimization should be investigated carefully in terms of both computation efficiency and high resolution of this algorithm in application-oriented research.

  4. Theoretical study of annealed proton-exchanged Nd $LiNbO_{3}$ channel waveguide lasers with variational method

    CERN Document Server

    De Long Zhang; Yuan Guo Xie; Guilan, Ding; Yuming, Cui; Cai He Chen

    2001-01-01

    The dependences of the TM/sub 00/ mode size, the corresponding effective refractive index, the effective pump area, and the coupling efficiency between pump and laser modes on the controllable fabrication parameters, including anneal time, initial exchange time, and channel width, in z-cut annealed proton-exchanged (APE) Nd:LiNbO/sub 3/ channel waveguide lasers were studied by using the variational method. The effect of channel width on the surface index increment and the waveguide depth was taken into account. The features of the mode size and effective refractive index were summarized, discussed, and compared with previously published experimental results. The effective pump area, which is directly proportional to the threshold pump power, increases strongly, slightly, and very slightly with the increase of anneal time, channel width, and initial exchange time, respectively. However, the coupling efficiency, which is directly proportional to the slope efficiency, remains constant (around 0.82) no matter what changes are made to these parameters. The var...

  5. Variation in otolith macrostructure of Japanese flounder ( Paralichthys olivaceus): A method to discriminate between wild and released fish

    Science.gov (United States)

    Katayama, S.; Isshiki, T.

    2007-02-01

    The main objective of this study was to develop a method to discriminate between wild and hatchery-produced Japanese flounder, Paralichthys olivaceus, based on variations in otolith macrostructure. Otoliths of wild flounder were more elliptical than those of hatchery-produced fish, whereas otolith area and marginal coarseness showed no clear differences. Otolith morphometry did not vary significantly with water temperature or feeding conditions in rearing experiments. Reduced ellipticity in the otoliths of hatchery-produced fish could be caused by biotic and abiotic conditions after release. Throughout the study, it was found that otoliths of Japanese flounder reared under 15 and 20 °C regimes showed opaque zones regardless of feeding condition, while otoliths of fish reared at 25 °C had translucent zones. The potential of thermal marks and secondary zones as a new mass-marking system is presented.

  6. Application of Earned Value Method for evaluation the Time/Cost Consequences of Variation Orders in a Construction Project

    Science.gov (United States)

    Czemplik, Andrzej

    2017-10-01

    The decision whether to accept a variation order (VO) for a running construction project is always subject to the risk of consequences that are difficult to define. Even when all the technical and organizational consequences have been identified, their impact on the project completion date and the final project cost is not easy to state. A practical methodology for using the Earned Value Method (EVM) as a tool supporting the decision on acceptance of a VO being considered during construction works is presented in the paper. The main strength of the presented concept is a quick prognosis of the final project completion date and the final project cost in case the VO is accepted.
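
    A minimal numerical sketch of the EVM indicators that such a prognosis rests on; all figures, including the VO cost and planned duration, are illustrative assumptions.

    # Sketch: standard EVM indicators and simple completion forecasts with and without a VO.
    bac = 1_000_000.0        # budget at completion
    pv, ev, ac = 400_000.0, 360_000.0, 420_000.0   # planned value, earned value, actual cost
    vo_cost = 50_000.0       # estimated direct cost of the variation order (assumption)

    cpi = ev / ac            # cost performance index
    spi = ev / pv            # schedule performance index
    eac_without_vo = bac / cpi                 # estimate at completion, current cost trend
    eac_with_vo = (bac + vo_cost) / cpi        # assuming the VO work performs at the current CPI
    planned_duration = 24.0                    # months (assumption)
    forecast_duration = planned_duration / spi
    print(f"CPI={cpi:.2f}, SPI={spi:.2f}")
    print(f"EAC without VO = {eac_without_vo:,.0f}, with VO = {eac_with_vo:,.0f}")
    print(f"forecast duration = {forecast_duration:.1f} months")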

  7. Electronic structure simulation of chromium aluminum oxynitride by discrete variational-X alpha method and X-ray photoelectron spectroscopy

    CERN Document Server

    Choi, Y; Lee, J D; Kim, E; No, K

    2002-01-01

    We use a first-principles discrete variational (DV)-X alpha method to investigate the electronic structure of chromium aluminum oxynitride. When nitrogen is substituted for oxygen in the Cr-Al-O system, the N2p level appears in the energy range between O2p and Cr3d levels. Consequently, the valence band of chromium aluminum oxynitride becomes broader and the band gap becomes smaller than that of chromium aluminum oxide, which is consistent with the photoelectron spectra for the valence band using X-ray photoelectron spectroscopy (XPS) and ultraviolet photoelectron spectroscopy (UPS). We expect that this valence band structure of chromium aluminum oxynitride will modify the transmittance slope which is a requirement for photomask application.

  8. Variation in instrument-based color coordinates of esthetic restorative materials by measurement method-A review.

    Science.gov (United States)

    Lee, Yong-Keun; Yu, Bin; Lee, Seung-Hun; Cho, Moon-Sang; Lee, Chi-Youn; Lim, Ho-Nam

    2010-11-01

    Optical properties of an object are determined visually or instrumentally. Although instrumental measurement provides objective and quantitative color coordinates, these values vary by the measurement method such as specimen and background conditions, instrument settings and illuminant. The objective of this study was to review the influence of the measurement method on the instrumental color coordinates of esthetic dental restorative materials. Published reports on the measurement method dependent color variations of esthetic restorative materials were reviewed. Surface roughness influences the color coordinates differently by the surface roughness range and the measurement geometry. Specimen thickness and the kind of illuminant influence the color coordinates, and the influence of background varied by specimen thickness. Therefore, the specular component excluded (SCE) geometry that reflects the surface condition of specimens is suggested as the correct measurement geometry. Surface roughness, thickness and layering of specimens, and the kind of illuminant should be stipulated in each measurement. There should be a standard for the color and gloss of the background. Variables in instrumental color measurements should be stipulated to obtain consistent and comparable color coordinates, and a general guideline for instrumental color measurement of dental materials should be established. Copyright © 2010 Academy of Dental Materials. Published by Elsevier Ltd. All rights reserved.

  9. Image reconstruction based on total-variation minimization and alternating direction method in linear scan computed tomography

    Science.gov (United States)

    Zhang, Han-Ming; Wang, Lin-Yuan; Yan, Bin; Li, Lei; Xi, Xiao-Qi; Lu, Li-Zhong

    2013-07-01

    Linear scan computed tomography (LCT) is of great benefit to online industrial scanning and security inspection due to its characteristics of straight-line source trajectory and high scanning speed. However, in practical applications of LCT, there are challenges to image reconstruction due to limited-angle and insufficient data. In this paper, a new reconstruction algorithm based on total-variation (TV) minimization is developed to reconstruct images from limited-angle and insufficient data in LCT. The main idea of our approach is to reformulate a TV problem as a linear equality constrained problem where the objective function is separable, and then minimize its augmented Lagrangian function by using alternating direction method (ADM) to solve subproblems. The proposed method is robust and efficient in the task of reconstruction by showing the convergence of ADM. The numerical simulations and real data reconstructions show that the proposed reconstruction method brings reasonable performance and outperforms some previous ones when applied to an LCT imaging problem.

  10. Reform-based science teaching: A mixed-methods approach to explaining variation in secondary science teacher practice

    Science.gov (United States)

    Jetty, Lauren E.

    The purpose of this two-phase, sequential explanatory mixed-methods study was to understand and explain the variation seen in secondary science teachers' enactment of reform-based instructional practices. Utilizing teacher socialization theory, this mixed-methods analysis was conducted to determine the relative influence of secondary science teachers' characteristics, backgrounds and experiences across their teacher development to explain the range of teaching practices exhibited by graduates from three reform-oriented teacher preparation programs. Data for this study were obtained from the Investigating the Meaningfulness of Preservice Programs Across the Continuum of Teaching (IMPPACT) Project, a multi-university, longitudinal study funded by NSF. In the first, quantitative phase of the study, data for the sample (N=120) were collected from three surveys from the IMPPACT Project database. Hierarchical multiple regression analysis was used to examine the separate as well as the combined influence of factors such as teachers' personal and professional background characteristics, beliefs about reform-based science teaching, feelings of preparedness to teach science, school context, school culture and climate of professional learning, and influences of the policy environment on the teachers' use of reform-based instructional practices. Findings indicate that three blocks of variables (professional background, beliefs/efficacy, and local school context) added significant contributions to explaining nearly 38% of the variation in secondary science teachers' use of reform-based instructional practices. The five variables that significantly contributed to explaining variation in teachers' use of reform-based instructional practices in the full model were the university of teacher preparation, sense of preparation for teaching science, the quality of professional development, science content-focused professional development, and the perceived level of professional autonomy. Using the results

  11. Universality in the relaxation dynamics of the composed black-hole-charged-massive-scalar-field system: The role of quantum Schwinger discharge

    Directory of Open Access Journals (Sweden)

    Shahar Hod

    2015-07-01

    The quasinormal resonance spectrum {ω_n(μ,q,M,Q)}_{n=0}^{∞} of charged massive scalar fields in the charged Reissner–Nordström black-hole spacetime is studied analytically in the large-coupling regime qQ ≫ Mμ (here {μ,q} are respectively the mass and charge coupling constant of the field, and {M,Q} are respectively the mass and electric charge of the black hole). This physical system provides a striking illustration of the validity of the universal relaxation bound τ×T ≥ ħ/π in black-hole physics (here τ ≡ 1/ℑω_0 is the characteristic relaxation time of the composed black-hole-scalar-field system, and T is the Bekenstein–Hawking temperature of the black hole). In particular, it is shown that the relaxation dynamics of charged massive scalar fields in the charged Reissner–Nordström black-hole spacetime may saturate this quantum time-times-temperature inequality. Interestingly, we prove that potential violations of the bound by light scalar fields are excluded by the Schwinger-type pair-production mechanism (a vacuum polarization effect), a quantum phenomenon which restricts the physical parameters of the composed black-hole-charged-field system to the regime qQ ≪ M²μ²/ħ.

  12. AluScan: a method for genome-wide scanning of sequence and structure variations in the human genome

    Directory of Open Access Journals (Sweden)

    Mei Lingling

    2011-11-01

    Background: To complement next-generation sequencing technologies, there is a pressing need for efficient pre-sequencing capture methods with reduced costs and DNA requirement. The Alu family of short interspersed nucleotide elements is the most abundant type of transposable elements in the human genome and a recognized source of genome instability. With over one million Alu elements distributed throughout the genome, they are well positioned to facilitate genome-wide sequence amplification and capture of regions likely to harbor genetic variation hotspots of biological relevance. Results: Here we report on the use of inter-Alu PCR with an enhanced range of amplicons in conjunction with next-generation sequencing to generate an Alu-anchored scan, or 'AluScan', of DNA sequences between Alu transposons, where Alu consensus sequence-based 'H-type' PCR primers that elongate outward from the head of an Alu element are combined with 'T-type' primers elongating from the poly-A containing tail to achieve a wide amplicon range. To illustrate the method, glioma DNA was compared with white blood cell control DNA of the same patient by means of AluScan. The more than 10 Mb of sequence obtained, derived from more than 8,000 genes spread over all the chromosomes, revealed a highly reproducible capture of genomic sequences enriched in genic sequences and cancer candidate gene regions. Requiring only sub-micrograms of sample DNA, the power of AluScan as a discovery tool for genetic variations was demonstrated by the identification of 357 instances of loss of heterozygosity, 341 somatic indels, 274 somatic SNVs, and seven potential somatic SNV hotspots between control and glioma DNA. Conclusions: AluScan, implemented with just a small number of H-type and T-type inter-Alu PCR primers, provides an effective capture of a diversity of genome-wide sequences for analysis. The method, by enabling an examination of gene-enriched regions containing exons, introns, and

  13. Gravity-dependent signal path variation in a large VLBI telescope modelled with a combination of surveying methods

    Science.gov (United States)

    Sarti, Pierguido; Abbondanza, C.; Vittuari, L.

    2009-11-01

    The very long baseline interferometry (VLBI) antenna in Medicina (Italy) is a 32-m AZ-EL mount that was surveyed several times, adopting an indirect method, for the purpose of estimating the eccentricity vector between the co-located VLBI and Global Positioning System instruments. In order to fulfill this task, targets were located on different parts of the telescope's structure. Triangulation and trilateration on the targets highlight a consistent amount of deformation that biases the estimate of the instrument's reference point by up to 1 cm, depending on the targets' locations. Therefore, whenever the estimation of accurate local ties is needed, it is critical to take into consideration the action of gravity on the structure. Furthermore, deformations induced by gravity on VLBI telescopes may modify the length of the path travelled by the incoming radio signal to a non-negligible extent. As a consequence, differently from what is usually assumed, the relative distance of the feed horn's phase centre with respect to the elevation axis may vary, depending on the telescope's pointing elevation. The Medicina telescope's signal path variation ΔL increases by approximately 2 cm as the pointing elevation changes from horizon to zenith; it is described by an elevation-dependent second-order polynomial function computed, according to Clark and Thomsen (Technical report, 100696, NASA, Greenbelt, 1988), as a linear combination of three terms: receiver displacement ΔR, primary reflector's vertex displacement ΔV and focal length variation ΔF. ΔL was investigated with a combination of terrestrial triangulation and trilateration, laser scanning and a finite element model of the antenna. The antenna gain (or auto-focus curve) ΔG is routinely determined through astronomical observations. A surprisingly accurate reproduction of ΔG can be obtained with a combination of ΔV, ΔF and ΔR.
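
    As a rough illustration of the kind of fit described above, the sketch below fits an elevation-dependent second-order polynomial to a combined signal-path variation built from receiver, vertex and focal-length terms; the displacement curves and the unit combination weights are made-up placeholders, not the Medicina survey results or the Clark and Thomsen coefficients.

```python
# Sketch: fit Delta_L(e) = c0 + c1*e + c2*e^2 to a combined path-length variation
# built from receiver (dR), vertex (dV) and focal-length (dF) terms.
# The numbers below are illustrative placeholders, not measured Medicina values.
import numpy as np

elev = np.linspace(0.0, 90.0, 10)                    # pointing elevation, degrees
dR = 0.004 * np.sin(np.radians(elev))                # toy receiver displacement [m]
dV = 0.010 * np.sin(np.radians(elev)) ** 2           # toy vertex displacement [m]
dF = 0.006 * np.cos(np.radians(elev))                # toy focal-length variation [m]

# Assume a simple linear combination with unit coefficients for illustration.
dL = dR + dV - dF

c2, c1, c0 = np.polyfit(elev, dL, deg=2)             # second-order polynomial in elevation
print(f"Delta_L(e) ~ {c0:.4f} + {c1:.6f}*e + {c2:.8f}*e^2  [m, e in degrees]")
```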

  14. Successive variational method of the tensor-optimized antisymmetrized molecular dynamics for central interaction in finite nuclei

    Science.gov (United States)

    Myo, Takayuki; Toki, Hiroshi; Ikeda, Kiyomi; Horiuchi, Hisashi; Suhara, Tadahiro

    2017-04-01

    Tensor-optimized antisymmetrized molecular dynamics (TOAMD) is the basis of the successive variational method for the nuclear many-body problem. We apply TOAMD to finite nuclei described by a central interaction with strong short-range repulsion, and compare the results with those from the unitary correlation operator method (UCOM). In TOAMD, pair-type correlation functions and their multiple products are applied to the antisymmetrized molecular dynamics (AMD) wave function. We show the results of TOAMD using the Malfliet-Tjon central potential containing strong short-range repulsion. By adding the double products of the correlation functions in TOAMD, the binding energies converge quickly to the exact values of the few-body calculations for s-shell nuclei. This indicates the high efficiency of TOAMD for treating the short-range repulsion in nuclei. We also employ the s-wave configurations of nuclei with the central part of UCOM, which reduces the short-range relative amplitudes of nucleon pairs in nuclei to avoid the short-range repulsion. In UCOM, we further perform the superposition of s-wave configurations with various size parameters, which provides a satisfactory solution with energies close to the exact and TOAMD values.

  15. Full calculation of μpd and μdt muonic bound levels: Combination of Nikiforov-Uvarov method and variational approach

    Science.gov (United States)

    Fatehizadeh, H.; Gheisari, R.; Falinejad, H.

    2017-10-01

    This paper develops a mathematical approach to calculate the ground state energies of the μpd and μdt muonic ions. The calculations have been performed by a combination of the Nikiforov-Uvarov method and a linear variational approach. Basis kets have been introduced to expand the wave function, using a well-defined vector space in hyper-spherical coordinates. The full calculations have been performed and compared with other variational calculations in the hyper-spherical harmonic formalism. The variational energies show a fast convergence with a relatively small number of candidate basis functions.
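
    The linear variational step amounts to a generalized eigenvalue problem H c = E S c. The sketch below illustrates it for a one-electron hydrogen-like toy system with an s-type Gaussian basis, not for the three-body muonic ions of the paper; the basis exponents are illustrative.

```python
# Linear variational method as a generalized eigenvalue problem H c = E S c,
# illustrated for the hydrogen ground state with four s-type Gaussians exp(-a*r^2).
# This is a toy stand-in for the muonic three-body problem treated in the paper.
import numpy as np
from scipy.linalg import eigh

alphas = np.array([13.00773, 1.962079, 0.444529, 0.1219492])   # illustrative exponents
n = len(alphas)
S = np.zeros((n, n)); H = np.zeros((n, n))
for i, a in enumerate(alphas):
    for j, b in enumerate(alphas):
        p = a + b
        S[i, j] = (np.pi / p) ** 1.5                 # overlap
        T = 3.0 * a * b * np.pi ** 1.5 / p ** 2.5    # kinetic energy
        V = -2.0 * np.pi / p                         # nuclear attraction (Z = 1)
        H[i, j] = T + V

E, C = eigh(H, S)                                    # generalized eigenproblem
print("variational ground-state energy [hartree]:", E[0])   # close to -0.5
```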

  16. An Efficient Power Regeneration and Drive Method of an Induction Motor by Means of an Optimal Torque Derived by Variational Method

    Science.gov (United States)

    Inoue, Kaoru; Ogata, Kenji; Kato, Toshiji

    When the motor speed is reduced by using a regenerative brake, the mechanical energy of rotation is converted to electrical energy. When the regenerative torque is large, the corresponding current increases so that the copper loss also becomes large. On the other hand, when the regenerative torque is small, the energy lost to rotational damping increases with the elapsed time. In order to use the limited energy effectively, an optimal regenerative torque should be determined so as to regenerate as much electrical energy as possible. This paper proposes a design methodology for the regenerative torque of an induction motor that maximizes the regenerative electric energy by means of the variational method. Similarly, an optimal torque for acceleration is derived in order to minimize the energy required to drive. Finally, an efficient motor drive system with the proposed optimal torque and a power storage system stabilizing the DC link voltage is proposed. The effectiveness of the proposed methods is illustrated by both simulations and experiments.

  17. Finite element method for neutron transport. Part IV. A variational principle giving an upper bound for the lowest eigenvalue of the Boltzmann Equation

    Energy Technology Data Exchange (ETDEWEB)

    Ackroyd, R.T.

    1978-02-01

    A maximum principle for neutron transport in systems with extraneous sources is used with the method of source iteration to suggest a functional for a variational principle for self-sustaining systems. By using the general properties of the leakage and removal operators of the even-parity transport equation, the variational principle is shown to give an upper bound to the lowest eigenvalue of the one-speed Boltzmann equation. Thus, by making use of the method of Part III for a lower bound, the lowest eigenvalue can be bracketed. The variational principle leads to finite element equations identical to those arising in the Williams/Galliara finite element formulation of the source-iteration method, thus showing that the latter method always gives an upper bound to the lowest eigenvalue. These upper bounds are very close to the exact values in benchmark calculations.

  18. Novel Method for Estimating Variations in Salinity and River Discharge in the Hudson Estuary Using Stable Isotopes of Leaf Waxes

    Science.gov (United States)

    Tabanpour, B.; Nichols, J. E.; Isles, P. D.; Peteet, D. M.

    2010-12-01

    Understanding variations in the hydrological cycle of the Hudson Valley has important implications for water resources management, affecting millions of New Yorkers. Paleoclimatological records of hydrological variability from this region, however, are sparse, as the typical environments used for paleohydrological reconstruction do not exist. However, salt marshes are common features of the Hudson River, where the influence of tides is felt far upstream. To take advantage of these environments as recorders of paleohydrology, we present a new method for estimating annual river discharge using salt marsh sediments. We will be examining hydrogen isotopes of leaf waxes in vascular plants to estimate salinity, which will be calibrated to Hudson River discharge using United States Geological Survey (USGS) streamflow data. Freshwater flux from the Hudson Valley is proportional to the salinity at a particular location in the estuary. We estimate the relationship between salinity and δD using a two-part mixing model where the salinity and δD of ocean water are 35 ppt and 0‰ VSMOW respectively, and the salinity and δD of continental water are 0 ppt and -55‰ (approximately the annual average precipitation in the region). It has been shown that the δD of the leaf waxes of aquatic vegetation accurately reflects the δD of growth water. For our experiment, we collected common members of the genera Typha, Spartina, Phragmites, and Scirpus from salt marshes along the Hudson River, and the north and south shores of Long Island, to calibrate the specific relationship between marsh plant leaf wax δD and marsh water δD. We compare the measured δD of these plant waxes to the δD of marsh water estimated from salinity measurements made at USGS gage stations near each collection location. We then used the new calibration to estimate late Holocene variations in marsh salinity and thus Hudson River discharge using fossil leaf waxes. This novel method will help us better understand
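
    A minimal sketch of the two-endmember mixing model described above, using the endmember values quoted in the abstract (35 ppt and 0‰ for ocean water, 0 ppt and -55‰ for continental water); the example salinity is illustrative.

```python
# Two-endmember mixing model linking marsh-water deltaD to salinity,
# using the endmember values quoted in the abstract.
D_OCEAN, S_OCEAN = 0.0, 35.0       # per mil VSMOW, ppt
D_CONT,  S_CONT  = -55.0, 0.0

def deltaD_from_salinity(salinity):
    f_ocean = (salinity - S_CONT) / (S_OCEAN - S_CONT)   # fraction of ocean water
    return f_ocean * D_OCEAN + (1.0 - f_ocean) * D_CONT

def salinity_from_deltaD(deltaD):
    f_ocean = (deltaD - D_CONT) / (D_OCEAN - D_CONT)
    return S_CONT + f_ocean * (S_OCEAN - S_CONT)

print(deltaD_from_salinity(17.5))      # brackish water -> about -27.5 per mil
print(salinity_from_deltaD(-27.5))     # back to about 17.5 ppt
```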

  19. Research on choices of methods of internet of things pricing based on variation of perceived value of service

    Directory of Open Access Journals (Sweden)

    Wei Li

    2013-03-01

    Purpose: With the rapid progress of Internet of Things (IoT) technology, the information services of the IoT have undergone unprecedented development and play an increasingly important role in real life. With the increasing demand for information services, the pricing of information services becomes more important. This paper aims to analyze the strategic options and payoff functions of the information provider and intermediaries based on a Stackelberg game. Firstly, we describe the information service delivery method based on the specific functions of the Internet of Things. Secondly, we calculate the consumer demand for the information service. Finally, we explain two kinds of strategic options using game theory, and then discuss the optimal pricing method for information services based on profit maximization. Design/methodology/approach: To achieve this objective, considering that the consumer perceived value of Internet of Things services changes, we establish a Stackelberg model in which the supplier is the leader followed by the middleman. Then, we compare the advantages of individual pricing with those of bundling pricing. Findings: The results show that whether information providers adopt a bundling pricing strategy or an individual pricing strategy depends on the cost of the perception equipment; if information providers want to adopt an individual pricing strategy, the variation of consumers' perceived value of information services must meet certain conditions. Research limitations/implications: When pricing the information service, the provider must not only continuously improve the quality of the information service but also devote resources to gathering and understanding market information, such as sensor device prices and the variation of the perceived value of information services, so as to create a competitive advantage. This paper presents only a preliminary model; it does not take into account the effect of mixed bundling. Originality/value: In this research, a new model for

  20. Developing a 3D Constrained Variational Analysis Method to Calculate Large Scale Forcing Data and the Applications

    Science.gov (United States)

    Tang, S.; Zhang, M. H.

    2014-12-01

    Large-scale forcing data (vertical velocities and advective tendencies) are important atmospheric fields to drive single-column models (SCM), cloud-resolving models (CRM) and large-eddy simulations (LES), but they are difficult to calculate accurately. The current 1-dimensional constrained variational analysis (1D CVA) method (Zhang and Lin, 1997) used by the Atmospheric Radiation Measurement (ARM) program is limited to representing the average over a sounding network domain. We extended the original 1D CVA algorithm to three dimensions, along with other improvements, calculated gridded large-scale forcing data, apparent heating sources (Q1) and moisture sinks (Q2), and compared them with 5 reanalyses: ERA-Interim, NCEP CFSR, MERRA, JRA55 and NARR for a mid-latitude spring cyclone case. The results of the case study for 3 March 2000 at the Southern Great Plains (SGP) site show that the reanalyses generally capture the structure of the mid-latitude cyclone, but they have serious biases in the second-order derivative terms (divergences and horizontal derivatives) at regional scales of less than a few hundred kilometers. Our algorithm provides a set of atmospheric fields consistent with the observed constraint variables at the surface and top of the atmosphere better than the reanalyses. The analyzed atmospheric fields can be used in SCM, CRM and LES to provide 3-dimensional dynamical forcing, or be used to evaluate reanalyses or model simulations.
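
    The core of a constrained variational analysis is a weighted least-squares adjustment of the analysed fields subject to budget constraints. A minimal sketch of that step is given below, with a toy state vector, weights and a single linear constraint standing in for the ARM column budgets.

```python
# Toy constrained variational adjustment: find x minimizing
#   (x - x_obs)^T W (x - x_obs)   subject to   A x = b,
# which is the least-squares-with-Lagrange-multiplier step used in CVA.
# The state vector, weights and constraint here are illustrative only.
import numpy as np

def constrained_adjust(x_obs, W, A, b):
    Winv = np.linalg.inv(W)
    lam = np.linalg.solve(A @ Winv @ A.T, b - A @ x_obs)   # Lagrange multipliers
    return x_obs + Winv @ A.T @ lam

x_obs = np.array([1.2, 0.8, 2.1])          # "analysed" values, e.g. layer fluxes
W = np.diag([1.0, 2.0, 0.5])               # confidence weights (inverse error variance)
A = np.array([[1.0, 1.0, 1.0]])            # toy budget: the fluxes must sum to b
b = np.array([3.5])

x_adj = constrained_adjust(x_obs, W, A, b)
print(x_adj, "constraint residual:", A @ x_adj - b)
```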

  1. Testing Four Dimensional Variational Data Assimilation Method Using an Improved Intermediate Coupled Model for ENSO Analysis and Prediction

    Science.gov (United States)

    Gao, C.; Zhang, R. H.

    2016-12-01

    A four dimensional variational (4D-Var) data assimilation method is implemented in an improved intermediate coupled model (ICM) of the tropical Pacific. The ICM has ten baroclinic modes in the vertical, with horizontally varying stratification taken into account; two empirical submodels are constructed from historical data, one for the subsurface entrainment temperature in the surface mixed layer (Te) in terms of sea level (SL) anomalies and another for the wind stress (τ) in terms of sea surface temperature (SST) anomalies. A twin experiment is established to evaluate the impact of the 4D-Var data assimilation algorithm on El Niño and Southern Oscillation (ENSO) analysis and prediction. The model error is assumed to arise only from parameter uncertainty. The "observed" SST anomalies are sampled from a "truth" run that uses default parameter values, perturbed by Gaussian noise, and are directly assimilated into the assimilation model, whose parameters are set erroneously. Results show that the 4D-Var effectively reduces the error of the ENSO analysis and therefore improves the prediction skill of ENSO events at 12-month lead time compared with the non-assimilation case. These results provide a promising way for the ICM to improve its real-time ENSO prediction.
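
    For readers unfamiliar with 4D-Var, the sketch below minimizes a toy 4D-Var cost function (background term plus observation misfit along a model trajectory) over the initial condition of a scalar model; the dynamics, error variances and noise are assumptions unrelated to the ICM.

```python
# Toy 4D-Var: minimize J(x0) = (x0-xb)^2/B + sum_k (M^k(x0) - y_k)^2/R
# for a scalar "model"; illustrative only, not the tropical-Pacific ICM.
import numpy as np
from scipy.optimize import minimize_scalar

def model_step(x, r=0.8):
    return x + r * x * (1.0 - x)           # toy nonlinear dynamics

def forecast(x0, nsteps):
    xs = [x0]
    for _ in range(nsteps):
        xs.append(model_step(xs[-1]))
    return np.array(xs)

rng = np.random.default_rng(1)
truth = forecast(0.2, 10)
obs = truth + 0.02 * rng.standard_normal(truth.size)   # noisy "observations"
xb, B, R = 0.35, 0.05 ** 2, 0.02 ** 2                   # background and error variances

def cost(x0):
    xs = forecast(x0, 10)
    return (x0 - xb) ** 2 / B + np.sum((xs - obs) ** 2) / R

res = minimize_scalar(cost, bounds=(0.0, 1.0), method="bounded")
print("analysed x0:", res.x, "truth: 0.2")
```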

  2. Imaging spatial and temporal seismic source variations at Sierra Negra Volcano, Galapagos Islands using back-projection methods

    Science.gov (United States)

    Kelly, C. L.; Lawrence, J. F.; Ebinger, C. J.

    2013-12-01

    Low-magnitude seismic signals generated by processes that characterize volcanic and hydrothermal systems and their plumbing networks are difficult to observe remotely. Seismic records from these systems tend to be extremely 'noisy', making it difficult to resolve 3D subsurface structures using traditional seismic methods. Easily identifiable high-amplitude bursts within the noise that might be suitable for use with traditional seismic methods (i.e. eruptions) tend to occur relatively infrequently compared to the length of an entire eruptive cycle. Furthermore, while these impulsive events might help constrain the dynamics of a particular eruption, they shed little insight into the mechanisms that occur throughout an entire eruption sequence. It has been shown, however, that the much more abundant low-amplitude seismic 'noise' in these records (i.e. volcanic or geyser 'tremor') actually represents a series of overlapping low-magnitude displacements that can be directly linked to magma, fluid, and volatile movement at depth. This 'noisy' data therefore likely contains valuable information about the processes occurring in the volcanic or hydrothermal system before, during and after eruption events. In this study, we present a new method to comprehensively study how the seismic source distribution of all events - including micro-events - evolves during different phases of the eruption sequence of Sierra Negra Volcano in the Galapagos Islands. We apply a back-projection search algorithm to image sources of seismic 'noise' at Sierra Negra Volcano during a proposed intrusion event. By analyzing
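
    A minimal sketch of the back-projection (shift-and-stack) idea is given below for a synthetic 2D example: each trace is delayed by the travel time from a candidate grid node and the traces are stacked, so the node that best aligns the energy images the source. The geometry, velocity and noise level are made-up assumptions, not the Sierra Negra network.

```python
# Toy 2D back-projection (shift-and-stack) of a single noisy source.
# Geometry, velocity and noise are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)
v, dt, nt = 3.0, 0.01, 800                         # km/s, s, samples
stations = np.array([[0, 0], [10, 0], [0, 10], [10, 10], [5, -3]], float)  # km
src, t0 = np.array([6.0, 4.0]), 1.0                # true source and origin time

def pulse(t, tc, width=0.05):
    return np.exp(-0.5 * ((t - tc) / width) ** 2)

t = np.arange(nt) * dt
traces = np.array([pulse(t, t0 + np.linalg.norm(s - src) / v) for s in stations])
traces += 0.2 * rng.standard_normal(traces.shape)   # add "volcanic" noise

# Back-project: for each grid node, delay each trace by its travel time and stack.
xs = np.arange(0.0, 10.5, 0.5)
ys = np.arange(0.0, 10.5, 0.5)
image = np.zeros((len(xs), len(ys)))
for i, x in enumerate(xs):
    for j, y in enumerate(ys):
        tt = np.linalg.norm(stations - np.array([x, y]), axis=1) / v
        shifts = np.round(tt / dt).astype(int)
        stack = sum(np.roll(np.abs(tr), -s)[: nt - shifts.max()]
                    for tr, s in zip(traces, shifts))
        image[i, j] = stack.max()

best = np.unravel_index(np.argmax(image), image.shape)
print("estimated source:", xs[best[0]], ys[best[1]], "true:", src)
```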

  3. Lippmann-Schwinger integral equation approach to the emission of radiation by sources located inside finite-sized dielectric structures

    DEFF Research Database (Denmark)

    Søndergaard, T.; Tromborg, Bjarne

    2002-01-01

    A full-vectorial integral equation method is presented for calculating near fields and far fields generated by a given distribution of sources located inside finite-sized dielectric structures. Special attention is given to the treatment of the singularity of the dipole source field. A method ... e.g., an excited atom located inside a dielectric structure ...
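
    Since the record above is truncated, the following self-contained sketch may help illustrate the Lippmann-Schwinger integral-equation idea in its simplest scalar 1D form, psi = psi0 + G0 V psi, discretized into a dense linear system; the potential, wavenumber and grid are assumptions, and the paper itself treats the much richer full-vectorial 3D problem.

```python
# 1D Lippmann-Schwinger equation psi(x) = psi0(x) + int G0(x,x') V(x') psi(x') dx',
# discretized on a grid and solved as a dense linear system.
# G0(x,x') = exp(i k |x-x'|) / (2 i k) is the 1D free-space Green's function.
# Toy scalar example; the paper treats the full-vectorial 3D case.
import numpy as np

k = 2.0 * np.pi                                  # free-space wavenumber
x = np.linspace(-1.0, 1.0, 401)
dx = x[1] - x[0]
V = np.where(np.abs(x) < 0.25, 30.0, 0.0)        # toy dielectric "slab" potential

G0 = np.exp(1j * k * np.abs(x[:, None] - x[None, :])) / (2j * k)
psi0 = np.exp(1j * k * x)                        # incident plane wave

A = np.eye(x.size) - G0 * V[None, :] * dx        # (I - G0 V) psi = psi0
psi = np.linalg.solve(A, psi0)

print("max field enhancement |psi|/|psi0|:", np.abs(psi).max())
```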

  4. Analysis of Convective Straight and Radial Fins with Temperature-Dependent Thermal Conductivity Using Variational Iteration Method with Comparison with Respect to Finite Element Analysis

    Directory of Open Access Journals (Sweden)

    Safa Bozkurt Coşkun

    2007-01-01

    In order to enhance heat transfer between a primary surface and the environment, radiating extended surfaces are commonly utilized. Especially in the case of large temperature differences, variable thermal conductivity has a strong effect on the performance of such a surface. In this paper, the variational iteration method is used to analyze convective straight and radial fins with temperature-dependent thermal conductivity. In order to show the efficiency of the variational iteration method (VIM), the results obtained from the VIM analysis are compared with previously obtained results using the Adomian decomposition method (ADM) and with results from finite element analysis. VIM produces analytical expressions for the solution of nonlinear differential equations. However, these expressions must be tested against results obtained from a reliable numerical method or an analytical solution. This work shows that VIM is a promising method for the analysis of convective straight and radial fin problems.
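
    For context, the VIM correction functional for a general nonlinear equation L u + N u = g(x) is conventionally written as follows (a sketch of the standard form, with λ the general Lagrange multiplier and ũ_n the restricted variation):

```latex
u_{n+1}(x) = u_n(x) + \int_0^x \lambda(s)\,\bigl[L u_n(s) + N\tilde{u}_n(s) - g(s)\bigr]\,\mathrm{d}s
```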

  5. The method of separation for evolutionary spectral density estimation of multi-variate and multi-dimensional non-stationary stochastic processes

    KAUST Repository

    Schillinger, Dominik

    2013-07-01

    The method of separation can be used as a non-parametric estimation technique, especially suitable for evolutionary spectral density functions of uniformly modulated and strongly narrow-band stochastic processes. The paper at hand provides a consistent derivation of method of separation based spectrum estimation for the general multi-variate and multi-dimensional case. The validity of the method is demonstrated by benchmark tests with uniformly modulated spectra, for which convergence to the analytical solution is demonstrated. The key advantage of the method of separation is the minimization of spectral dispersion due to optimum time- or space-frequency localization. This is illustrated by the calibration of multi-dimensional and multi-variate geometric imperfection models from strongly narrow-band measurements in I-beams and cylindrical shells. Finally, the application of the method of separation based estimates for the stochastic buckling analysis of the example structures is briefly discussed.

  6. Assessment of a smartphone app (Capstesia) for measuring pulse pressure variation: agreement between two methods: A Cross-sectional study.

    Science.gov (United States)

    Barrachina, Borja; Cobos, Raquel; Mardones, Noemi; Castañeda, Angel; Vinuesa, Cristina

    2017-02-01

    Less invasive and noninvasive methods are emerging for haemodynamic monitoring. Among them is Capstesia, a smartphone app that, from photographs of a patient monitor showing invasive arterial pressure, estimates advanced haemodynamic variables after digitising and analysing the pressure curves. The aim of this study was to compare the level of agreement between the analysis of the signals obtained from the patient monitor and a photograph of the same images using the Capstesia app. Cross-sectional study. Araba University hospital (Txagorritxu), Vitoria-Gasteiz, Alava, Spain, from January to February 2015. Twenty patients (229 images) who had an arterial catheter (radial or femoral artery) inserted for haemodynamic monitoring. Snapshots obtained from the patient monitor and a photograph of these same snapshots using the Capstesia application were assessed with the same software (MATLAB, Mathworks, Natick, Massachusetts, USA) to evaluate the level of concordance of the following variables: pulse pressure variation (PPV), cardiac output (CO) and maximum slope of the pressure curve (dP/dt). Comparison was made using intraclass correlation coefficients with corresponding 95% confidence intervals, and Bland-Altman plots with the corresponding percentages of error. The primary outcome was PPV; secondary outcomes were CO and the maximum slope of the pressure curve (dP/dt). The intraclass correlation coefficients for PPV, CO and max dP/dt were 0.991 (95% confidence interval 0.988 to 0.993), 0.966 (95% confidence interval 0.956 to 0.974) and 0.962 (95% confidence interval 0.950 to 0.970), respectively. In the Bland-Altman analysis, the bias and limits of agreement of PPV were 0.50% ± 1.42, resulting in a percentage of error of 20% for PPV. For CO they were 0.19 ± 0.341, with an error of 13.8%. Finally, the bias and limits of agreement for max dP/dt were 1.33 ± 77.71, resulting in an error of 14.20%. CONCLUSIONS: Photographs of the screenshots analysed with the Capstesia app show a good concordance
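
    A minimal sketch of the agreement statistics quoted above (bias, 95% limits of agreement and percentage of error from a Bland-Altman analysis); the paired PPV readings are hypothetical values, not the study data.

```python
# Bland-Altman bias, 95% limits of agreement and percentage of error
# for two paired methods; the sample PPV values below are hypothetical.
import numpy as np

def bland_altman(a, b):
    a, b = np.asarray(a, float), np.asarray(b, float)
    diff = a - b
    bias = diff.mean()
    sd = diff.std(ddof=1)
    loa = (bias - 1.96 * sd, bias + 1.96 * sd)
    pct_error = 100.0 * 1.96 * sd / np.mean((a + b) / 2.0)
    return bias, loa, pct_error

monitor_ppv   = [8.1, 12.3, 5.0, 15.2, 9.8, 11.1, 7.4]    # % (hypothetical)
capstesia_ppv = [8.5, 11.8, 5.4, 14.6, 10.3, 10.9, 7.9]   # % (hypothetical)
print(bland_altman(monitor_ppv, capstesia_ppv))
```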

  7. Short-term variations in core surface flow resolved from an improved method of calculating observatory monthly means

    Science.gov (United States)

    Olsen, Nils; Whaler, Kathryn A.; Finlay, Christopher C.

    2014-05-01

    Monthly means of the magnetic field measurements taken by ground observatories are a useful data source for studying temporal changes of the core magnetic field and the underlying core flow. However, the usual way of calculating monthly means as the arithmetic mean of all days (geomagnetic quiet as well as disturbed) and all local times (day and night) may leave contributions of external (magnetospheric and ionospheric) origin in the ordinary monthly means (omm). Such contamination makes monthly means less favourable for core studies. We calculated revised monthly means (rmm), and their uncertainties, from observatory hourly means using robust means and after removal of external field predictions, using an improved method for characterising the magnetospheric ring current. The utility of the new method for calculating observatory monthly means is demonstrated by inverting their first differences for core surface advective flows. The flow is assumed steady over three consecutive months to ensure uniqueness; the effects of more rapid changes should be attenuated by the weakly conducting mantle. Observatory data are inverted directly for a regularised core flow, rather than deriving it from a secular variation spherical harmonic model. The main field is specified by the CHAOS-4 model. Data from up to 128 observatories between 1997 and 2013 were used to calculate 185 flow models from the omm and rmm, for each possible set of three consecutive months. The full 3x3 (non-diagonal) data covariance matrix was used, and two-norm (least squares) minimisation performed. We are able to fit the data to the target (weighted) misfit of 1, for both omm and rmm inversions, provided we incorporate the full data covariance matrix, and produce consistent, plausible flows. Fits are better for rmm flows. The flows exhibit noticeable changes over timescales of a few months. However, they follow rapid excursions in the omm that we suspect result from external field contamination

  8. The study of evolution in the Crow - Kimura molecular genetics model using methods of calculus of variations

    Science.gov (United States)

    Subbotina, Nina N.; Shagalova, Lyubov G.

    2017-11-01

    The Cauchy problem for a nonlinear noncoercive Hamilton-Jacobi equation with state constraints is under consideration. Such a problem originates in molecular biology. It describes the process of evolution in molecular genetics according to the Crow-Kimura model. A generalized solution of prescribed structure is constructed and justified via the calculus of variations. The results of computer simulation are presented.

  9. Variation in the Profile of Anxiety Disorders in Boys with an ASD According to Method and Source of Assessment

    Science.gov (United States)

    Bitsika, Vicki; Sharpley, Christopher F.

    2015-01-01

    To determine any variation that might occur due to the type of assessment and source used to assess them, the prevalence of 7 anxiety disorders was investigated in a sample of 140 boys with an Autism spectrum disorder (ASD) and 50 non-ASD (NASD) boys via the Child and Adolescent Symptom Inventory and the KIDSCID Clinical Interview. Boys with an…

  10. A simple method of correction for profile-length water-column height variations in high-resolution, shallow-water seismic data

    Science.gov (United States)

    Kim, Hyeonju; Lee, Gwang Hoon; Yi, Bo Yeon; Yoon, Youngho; Kim, Kyong-O.; Kim, Han-Joon; Lee, Sang Hoon

    2017-06-01

    In high-resolution, shallow-water seismic surveys, correction for water-column height variations caused by tides, weather, and currents is an important part of data processing. In this study, we present a very simple method of correcting for profile-length (i.e., long-wavelength) water-column height variations in high-resolution seismic data using a reference bathymetric grid. First, the difference between the depth of the seafloor picked from the seismic data and the bathymetry from the bathymetric grid is computed at the locations where the shot points of the seismic profiles and the bathymetric grid points are collocated or closest. Then, the results are gridded and smoothed to obtain the profile-length water-column height variations for the survey area. Next, the water-column height variations for each seismic profile are extracted from the smoothed grid and converted to two-way traveltimes. The corrections for the remaining mis-ties at the intersections, computed within a circular region around each tie shot point, are added to the corrections for the water-column height variations. The final, mis-tie-corrected water-column height corrections are loaded into the SEGY trace header of the seismic data as a total static. We applied this method to sparker data acquired from the shallow-water area off the western-central part of Korea, where the tidal range is over 7 m. The corrections for water-column height variations range from -10 to 4 m, with a median value of about -2 m. Large corrections occur locally between and near the islands, probably due to the amplification and shortening of the tidal wavelength caused by rapid shoaling toward the islands.
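
    A minimal sketch of the per-shot correction chain described above (depth difference against the reference bathymetry, smoothing to keep the long-wavelength part, conversion to a two-way-traveltime static); the water velocity, smoothing window and toy depth profiles are assumptions.

```python
# Sketch of the profile-length water-column correction: depth difference between
# the picked seafloor and a reference bathymetric grid, smoothed, then converted
# to a two-way-traveltime static. Velocity and toy depth profiles are assumptions.
import numpy as np

V_WATER = 1500.0                      # m/s, assumed water velocity

def water_column_static(picked_depth_m, reference_depth_m, smooth_win=51):
    diff = np.asarray(picked_depth_m) - np.asarray(reference_depth_m)   # tide/current effect
    kernel = np.ones(smooth_win) / smooth_win
    diff_smooth = np.convolve(diff, kernel, mode="same")                # long-wavelength part
    return -2.0 * diff_smooth / V_WATER * 1000.0                        # static in ms

shots = np.arange(2000)
reference = 20.0 + 0.002 * shots                        # toy reference bathymetry [m]
picked = reference + 3.0 * np.sin(shots / 400.0)        # toy tide-affected picks [m]
static_ms = water_column_static(picked, reference)
print(static_ms[:5])
```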

  11. Evaluation of method performance for oxidative stress biomarkers in urine and biological variations in urine of patients with type 2 diabetes mellitus and diabetic nephropathy.

    Science.gov (United States)

    Kurutas, Ergul Belge; Gumusalan, Yakup; Cetinkaya, Ali; Dogan, Ekrem

    2015-01-01

    Oxidative stress biomarkers such as superoxide dismutase (CuZnSOD), catalase (CAT) and malondialdehyde (MDA) play an important role in the pathogenesis or progression of numerous diseases. Data regarding the biological variation and analytical quality specifications (imprecision, bias and total error) for judging the acceptability of method performance for oxidative stress biomarkers in urine are conspicuously lacking in the literature. Such data are important in setting analytical quality specifications, assessing the utility of population reference intervals (index of individuality) and assessing the significance of changes in serial results from an individual (reference change value; RCV). 20 patients with type 2 diabetes mellitus (T2DM), 20 patients with diabetic nephropathy (DN) and 14 healthy individuals as controls were involved in this study. Timed first morning urine samples were taken from the patients and healthy groups on days 0, 1, 3, 5, 7, 15 and 30. The index of individuality and the reference change value were calculated from within-subject and between-subject variations. Methods for oxidative stress biomarkers in human blood were adapted to human urine, and the markers were measured spectrophotometrically. Also, analytical quality specifications for evaluation of method performance were established for oxidative stress biomarkers in urine. Within-subject variations of oxidative stress biomarkers were significantly higher in patients with DN and T2DM compared to healthy subjects. MDA showed low individuality, and within-subject variances of MDA were larger than between-subject variances in all groups. However, CAT and CuZnSOD showed strong individuality, with within-subject variances smaller than between-subject variances in all groups. RCVs of all analytes were relatively higher in diabetic patients because of their high within-subject variation. Also, the described methodology achieves these goals, with
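
    For reference, the reference change value used above is conventionally computed from the analytical (CV_A) and within-subject (CV_I) coefficients of variation; a standard form (with Z = 1.96 for 95% significance) is:

```latex
\mathrm{RCV} = \sqrt{2}\; Z \,\sqrt{CV_A^{2} + CV_I^{2}}
```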

  12. A quasi-experimental design based on regional variations: discussion of a method for evaluating outcomes of medical practice

    DEFF Research Database (Denmark)

    Loft, A; Andersen, T F; Madsen, Mette

    1989-01-01

    A large proportion of common medical practices are subject to substantial regional variation resulting in numerous natural experiments. Opportunities are thereby provided for outcome evaluation through quasi-experimental design. If patients treated in different regions were comparable, a natural experiment involving alternative treatments could be regarded as 'pseudo randomised', but empirical investigations are needed to verify this prerequisite. This paper discusses the role of quasi-experimental designs in assessment of medical care with evaluation of outcomes after hysterectomy in Denmark...

  13. Free vibration analysis of pre-stressed FGM Timoshenko beams under large transverse deflection by a variational method

    OpenAIRE

    Paul, Amlan; Das, Debabrata

    2017-01-01

    A theoretical study on free vibration behavior of pre-stressed functionally graded material (FGM) beam is carried out. Power law variation of volume fraction along the thickness direction is considered. Geometric non-linearity is incorporated through von Kármán non-linear strain–displacement relationship. The governing equation for the static problem is obtained using minimum potential energy principle. The dynamic problem for the pre-stressed beam is formulated as an eigenvalue problem using...

  14. Seasonal variation, method of determination of bovine milk stability, and its relation with physical, chemical, and sanitary characteristics of raw milk

    OpenAIRE

    Sandro Charopen Machado; Vivian Fischer; Marcelo Tempel Stumpf; Sheila Cristina Bosco Stivanin

    2017-01-01

    ABSTRACT The objective of this research was to determine the variation of milk stability evaluated with ethanol, boiling, and coagulation time tests (CTT) to identify milk components related with stability and verify the correlation between the three methods. Bulk raw milk was collected monthly at 50 dairy farms from January 2007 to October 2009 and physicochemical attributes, somatic cell (SCC), and total bacterial counts (TBC) were determined. Milk samples were classified into low, medium, ...

  15. Development and validation of a specific and sensitive LC-MS/MS method for quantification of urinary catecholamines and application in biological variation studies.

    Science.gov (United States)

    Li, Xiaoguang Sunny; Li, Shu; Wynveen, Paul; Mork, Kathy; Kellermann, Gottfried

    2014-11-01

    Catecholamines are a class of biogenic amines that play an important role as neurotransmitters and hormones. We developed and validated a rapid, specific and sensitive LC-MS/MS method for the quantitative determination of catecholamines in human urine. Linearity, specificity, sensitivity, precision, accuracy, matrix effect, carryover, analyte stability, method comparison and reference range were evaluated. The catecholamine measurements were not affected by 35 structurally-related drugs and metabolites. The outstanding specificity was achieved by use of a specific diphenylborate-based solid phase extraction and subsequent selective LC-MS/MS analysis. Excellent sensitivity, accuracy and precision (low average intra-assay variation) were obtained. Catecholamine excretions for second morning sampling had the least day-to-day within-subject variation and excellent reproducibility. This work is one of the rare studies on these topics and represents the first utilization of advanced LC-MS/MS technology. Additionally, we found significant correlations between spot and conventional 24 h collections of human urine (n = 22, r > 0.853), indicating that measuring catecholamine concentrations in the second morning urine sample presents an accurate, convenient and reliable measurement of catecholamine excretion. In addition, consistent and significant diurnal variations for norepinephrine and epinephrine excretions were observed during the three-day period, while dopamine did not exhibit a diurnal rhythm. The LC-MS/MS method presented here is rapid, sensitive and specific, which could be an advantage in clinical laboratories.

  16. Hybrid Proximal-Point Methods for Zeros of Maximal Monotone Operators, Variational Inequalities and Mixed Equilibrium Problems

    Directory of Open Access Journals (Sweden)

    Kriengsak Wattanawitoon

    2011-01-01

    We prove strong and weak convergence theorems of modified hybrid proximal-point algorithms for finding a common element of the zero point set of a maximal monotone operator, the set of solutions of equilibrium problems, and the set of solutions of the variational inequality for an inverse-strongly-monotone operator in a Banach space, under different conditions. Moreover, applications to complementarity problems are given. Our results modify and improve the recently announced ones by Li and Song (2008) and many other authors.

  17. Individual variation and repeatability of methane production from dairy cows estimated by the CO₂ method in automatic milking system.

    Science.gov (United States)

    Haque, M N; Cornou, C; Madsen, J

    2015-09-01

    The objectives of this study were to investigate the individual variation, repeatability and correlation of methane (CH4) production from dairy cows measured during 2 different years. A total of 21 dairy cows with an average BW of 619 ± 14.2 kg and average milk production of 29.1 ± 6.5 kg/day (mean ± s.d.) were used in the 1st year. During the 2nd year, the same cows were used, with an average BW of 640 ± 8.0 kg and average milk production of 33.4 ± 6.0 kg/day (mean ± s.d.). The cows were housed in a loose housing system fitted with an automatic milking system (AMS). A total mixed ration was fed to the cows ad libitum in both years. In addition, they were offered concentrate in the AMS based on their daily milk yield. The CH4 and CO2 production levels of the cows were analysed using a Gasmet DX-4030. The estimated dry matter intake (EDMI) was 19.8 ± 0.96 and 23.1 ± 0.78 (mean ± s.d.), and the energy-corrected milk (ECM) production was 30.8 ± 8.03 and 33.7 ± 5.25 kg/day (mean ± s.d.) during the 1st and 2nd year, respectively. The EDMI and ECM had a significant influence on CH4 production, which was significantly higher in the 2nd year, while no difference in CH4 production relative to ECM was observed between the years. The CH4 (l/day) production was strongly correlated (r=0.70) between the 2 years with an adjusted ECM production (30 kg/day). The diurnal variation of CH4 (l/h) production was significant, and the repeatability of CH4 production (l/day) between the 2 years was 0.51. In conclusion, a higher EDMI (kg/day) followed by a higher ECM (kg/day) resulted in a higher CH4 production (l/day) in the 2nd year. The variations of CH4 (l/day) among the cows were lower than the within-cow variations. The CH4 (l/day) production was highly repeatable and, with an adjusted ECM production, was correlated between the years.

  18. Early bone healing around implant surfaces treated with variations in the resorbable blasting media method. A study in rabbits.

    Science.gov (United States)

    Jeong, Ryan; Marin, Charles; Granato, Rodrigo; Suzuki, Marcelo; Gil, Jose N; Granjeiro, Jose M; Coelho, Paulo G

    2010-01-01

    This study aimed to histomorphologically and histomorphometrically evaluate the in vivo response to three variations in resorbable blasting media (RBM) surface processing in a rabbit femur model. Screw root form implants, 3.75 mm in diameter by 8 mm in length, presenting four surfaces (n=8 each): alumina-blasted/acid-etched (AB/AE), bioresorbable ceramic blasted (TCP), TCP + acid etching, and AB/AE + TCP, were characterized by scanning electron microscopy (SEM) and atomic force microscopy (AFM). The implants were placed at the distal femur of 8 New Zealand rabbits, remaining for 2 weeks in vivo. After sacrifice, the implants were nondecalcified processed to 30 μm thick slides for histomorphology and bone-to-implant contact (BIC) determination. Statistical analysis was performed by one-way ANOVA at the 95% level of significance, considering implant surface as the independent variable and BIC as the dependent variable. SEM and AFM showed that all surfaces presented rough textures and that calcium-phosphate particles were observed on the TCP group surface. Histologic evaluation showed intimate interaction between newly formed woven bone and all implant surfaces, demonstrating that all surfaces were biocompatible and osseoconductive. Significant differences in BIC were observed between the AB/AE and the AB/AE + TCP surfaces, with intermediate values observed for the TCP and TCP + acid surfaces. Irrespective of RBM processing variation, all surfaces were osseoconductive and biocompatible. The differences in BIC between groups warrant further biomechanical characterization of the bone-implant interface.

  19. A composite index to explain variations in poverty, health, nutritional status and standard of living: use of multivariate statistical methods.

    Science.gov (United States)

    Antony, G M; Rao, K Visweswara

    2007-08-01

    To calculate the Human Development Index (HDI) and Human Poverty Index (HPI) of Indian states; to trace the indicators useful for finding variations in poverty; and to develop a composite index that may explain variations in poverty, health, nutritional status and standard of living. Cross-sectional study. The HDI and HPI were calculated for different Indian states. A set of possible indicators varying between rich and poor states of India was identified with the use of discriminant function analysis. A composite index has been developed for measuring the standard of living of Indian states with the help of factor analysis. Demographic, socio-economic, health and dietary indicators play a major role in determining the real standard of living. Poverty, standard of living and human development depend on multiple factors. The existing indices, such as HDI and HPI, use income indicators to measure the standard of living, and do not take into account diet and nutritional status indicators. The proposed index was found to be more suitable for measuring the real standard of living and human development, as it is a comprehensive index of income and non-income indicators. Further validation may be carried out for different populations. Discriminant function analysis and factor analysis were used to assess health inequality and standard of living among Indian states. The proposed multi-dimensional index may provide a better picture of human development. Further work is of interest for other populations.

  20. Computational methods for detecting copy number variations in cancer genome using next generation sequencing: principles and challenges

    Science.gov (United States)

    Liu, Biao; Morrison, Carl D.; Johnson, Candace S.; Trump, Donald L.; Qin, Maochun; Conroy, Jeffrey C.; Wang, Jianmin; Liu, Song

    2013-01-01

    Accurate detection of somatic copy number variations (CNVs) is an essential part of cancer genome analysis and plays an important role in oncotarget identification. Next generation sequencing (NGS) holds the promise to revolutionize somatic CNV detection. In this review, we provide an overview of current analytic tools used for CNV detection in NGS-based cancer studies. We summarize the NGS data types used for CNV detection, decipher the principles for data preprocessing, segmentation, and interpretation, and discuss the challenges in somatic CNV detection. This review aims to provide a guide to the analytic tools used in NGS-based cancer CNV studies, and to discuss the important factors that researchers need to consider when analyzing NGS data for somatic CNV detection. PMID:24240121

  1. Understanding Variation in Treatment Effects in Education Impact Evaluations: An Overview of Quantitative Methods. NCEE 2014-4017

    Science.gov (United States)

    Schochet, Peter Z.; Puma, Mike; Deke, John

    2014-01-01

    This report summarizes the complex research literature on quantitative methods for assessing how impacts of educational interventions on instructional practices and student learning differ across students, educators, and schools. It also provides technical guidance about the use and interpretation of these methods. The research topics addressed…

  2. A Variational EM Method for Pole-Zero Modeling of Speech with Mixed Block Sparse and Gaussian Excitation

    DEFF Research Database (Denmark)

    Shi, Liming; Nielsen, Jesper Kjær; Jensen, Jesper Rindom

    2017-01-01

    The modeling of speech can be used for speech synthesis and speech recognition. We present a speech analysis method based on pole-zero modeling of speech with mixed block sparse and Gaussian excitation. By using a pole-zero model, instead of the all-pole model, a better spectral fitting can be achieved. The method estimates the posterior PDFs of the block sparse residuals and point estimates of the modelling parameters within a sparse Bayesian learning framework. Compared to conventional pole-zero and all-pole based methods, experimental results show that the proposed method has lower spectral distortion and good performance...

  3. A colony multiplex quantitative PCR-Based 3S3DBC method and variations of it for screening DNA libraries.

    Directory of Open Access Journals (Sweden)

    Yang An

    A DNA library is a collection of DNA fragments cloned into vectors and stored individually in host cells, and is a valuable resource for molecular cloning, gene physical mapping, and genome sequencing projects. To take the best advantage of a DNA library, a good screening method is needed. After describing pooling strategies and issues that should be considered in DNA library screening, here we report an efficient colony multiplex quantitative PCR-based 3-step, 3-dimension, and binary-code (3S3DBC) method we used to screen genes from a planarian genomic DNA fosmid library. This method requires only 3 rounds of PCR reactions and only around 6 hours to distinguish one or more desired clones from a large DNA library. According to the particular situations in different research labs, this method can be further modified and simplified to suit their requirements.
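
    A minimal sketch of the binary-code component of such a pooling scheme: each clone index is written in binary, one pool is formed per bit, and the pattern of PCR-positive pools reconstructs the index. The pool layout below is a simplification for illustration, not the published 3-step, 3-dimension design.

```python
# Binary-code pooling sketch: with ceil(log2(N)) pools, the set of PCR-positive
# pools encodes the index of the positive clone. Simplified illustration of the
# binary-code component of a 3S3DBC-style screen.
import math

def make_pools(n_clones):
    n_bits = math.ceil(math.log2(n_clones))
    return [[c for c in range(n_clones) if (c >> bit) & 1] for bit in range(n_bits)]

def decode(positive_pools, n_bits):
    return sum(1 << bit for bit in range(n_bits) if bit in positive_pools)

n_clones, target = 384, 215                  # hypothetical library size and hit
pools = make_pools(n_clones)
positive = {bit for bit, pool in enumerate(pools) if target in pool}  # simulated qPCR result
print("decoded clone index:", decode(positive, len(pools)))           # -> 215
```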

  4. Investigating the Variation of Volatile Compound Composition in Maotai-Flavoured Liquor During Its Multiple Fermentation Steps Using Statistical Methods

    Directory of Open Access Journals (Sweden)

    Zheng-Yun Wu

    2016-01-01

    The use of multiple fermentations is one of the most distinctive characteristics of Maotai-flavoured liquor production. In this research, the variation of the volatile composition of Maotai-flavoured liquor during its multiple fermentations is investigated using statistical approaches. Cluster analysis shows that the obtained samples are grouped mainly according to the fermentation step rather than the distillery they originate from, and the samples from the first two fermentation steps show the greatest difference, suggesting that the multiple fermentation and distillation steps ultimately result in a similar volatile composition of the liquor. Back-propagation neural network (BNN) models were developed that satisfactorily predict the number of fermentation steps and the organoleptic evaluation scores of liquor samples from their volatile compositions. Mean impact value (MIV) analysis shows that ethyl lactate, furfural and some high-boiling-point acids play important roles, while pyrazine contributes much less to the improvement of the flavour and taste of Maotai-flavoured liquor during its production. This study contributes to further understanding of the mechanisms of Maotai-flavoured liquor production.

  5. Free vibration analysis of pre-stressed FGM Timoshenko beams under large transverse deflection by a variational method

    Directory of Open Access Journals (Sweden)

    Amlan Paul

    2016-06-01

    A theoretical study on the free vibration behavior of a pre-stressed functionally graded material (FGM) beam is carried out. Power law variation of the volume fraction along the thickness direction is considered. Geometric non-linearity is incorporated through the von Kármán non-linear strain–displacement relationship. The governing equation for the static problem is obtained using the minimum potential energy principle. The dynamic problem for the pre-stressed beam is formulated as an eigenvalue problem using Hamilton's principle. Three classical boundary conditions with immovable ends are considered for the present work, namely clamped–clamped, simply supported–simply supported and clamped–simply supported. Four different FGM beams, namely Stainless Steel–Silicon Nitride, Stainless Steel–Zirconia, Stainless Steel–Alumina and Titanium alloy–Zirconia, are considered for the generation of results. Numerical results for the non-dimensional frequency parameters of the undeformed beam are presented. The results are presented in the non-dimensional pressure-displacement plane for the static problem and in the non-dimensional frequency-displacement plane for the dynamic problem. Comparative frequency-displacement plots are presented for different FGMs and also for different volume fraction indices.
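
    For context, the power-law variation referred to above is commonly written as follows (a sketch of the usual convention, with h the beam thickness, z the thickness coordinate, k the volume fraction index, and P any effective property of the ceramic (c) and metal (m) constituents):

```latex
V_c(z) = \left(\frac{1}{2} + \frac{z}{h}\right)^{k}, \qquad
P(z) = P_m + \left(P_c - P_m\right) V_c(z), \qquad -\frac{h}{2} \le z \le \frac{h}{2}
```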

  6. Discretization of three-dimensional free surface flows and moving boundary problems via elliptic grid methods based on variational principles

    Science.gov (United States)

    Fraggedakis, D.; Papaioannou, J.; Dimakopoulos, Y.; Tsamopoulos, J.

    2017-09-01

    A new boundary-fitted technique to describe free surface and moving boundary problems is presented. We have extended the 2D elliptic grid generator developed by Dimakopoulos and Tsamopoulos (2003) [19] and further advanced by Chatzidai et al. (2009) [18] to 3D geometries. The set of equations arises from the fulfillment of the variational principles established by Brackbill and Saltzman (1982) [21], and refined by Christodoulou and Scriven (1992) [22]. These account for both smoothness and orthogonality of the grid lines of tessellated physical domains. The elliptic-grid equations are accompanied by new boundary constraints and conditions which are based either on the equidistribution of the nodes on boundary surfaces or on the existing 2D quasi-elliptic grid methodologies. The capabilities of the proposed algorithm are first demonstrated in tests with analytically described complex surfaces. The sequence in which these tests are presented is chosen to help the reader build up experience on the best choice of the elliptic grid parameters. Subsequently, the mesh equations are coupled with the Navier-Stokes equations, in order to reveal the full potential of the proposed methodology in free surface flows. More specifically, the problem of gas assisted injection in ducts of circular and square cross-sections is examined, where the fluid domain experiences extreme deformations. Finally, the flow-mesh solver is used to calculate the equilibrium shapes of static menisci in capillary tubes.

  7. Configuring calendar variation based on time series regression method for forecasting of monthly currency inflow and outflow in Central Java

    Science.gov (United States)

    Setiawan, Suhartono, Ahmad, Imam Safawi; Rahmawati, Noorgam Ika

    2015-12-01

    Bank Indonesia (BI), as the central bank of the Republic of Indonesia, has a single overarching objective: to establish and maintain rupiah stability. This objective can be achieved by monitoring the traffic of currency inflow and outflow. Inflow and outflow are related to the stock and distribution of currency around Indonesian territory and affect economic activity. Economic activity in Indonesia, a predominantly Muslim country, is closely tied to the Islamic (lunar) calendar, which differs from the Gregorian calendar. This research aims to forecast the currency inflow and outflow of the Representative Office (RO) of BI Semarang, Central Java region. The results of the analysis show that the characteristics of currency inflow and outflow are influenced by calendar variation effects, that is, the day of Eid al-Fitr (a Muslim holiday), as well as seasonal patterns. In addition, the particular week within which Eid al-Fitr falls also affects the increase in currency inflow and outflow. The best model for the inflow data, based on the smallest Root Mean Square Error (RMSE), is an ARIMA model. The best model for predicting the outflow data at the RO of BI Semarang is an ARIMAX model, or Time Series Regression, because both approaches yield the same model. The forecasts for 2015 show that the increase in currency inflow occurs in August, while the increase in currency outflow occurs in July.
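
    A minimal sketch of a time series regression with calendar-variation terms (trend, month-of-year dummies and an Eid al-Fitr indicator) fitted by ordinary least squares; the synthetic monthly inflow series and the placement of the Eid months are assumptions, not Bank Indonesia data.

```python
# Time series regression with calendar-variation dummies: trend, seasonal
# month dummies and an Eid al-Fitr indicator, fitted by OLS. The synthetic
# monthly "inflow" series below is an assumption, not Bank Indonesia data.
import numpy as np

rng = np.random.default_rng(3)
n_months = 96
t = np.arange(n_months)
month = t % 12
eid = np.zeros(n_months)
eid[[7, 19, 30, 42, 54, 65, 77, 89]] = 1.0       # hypothetical Eid months (lunar drift)

inflow = 100 + 0.5 * t + 10 * (month == 11) + 25 * eid + rng.normal(0, 3, n_months)

# Design matrix: intercept, trend, 11 month dummies (January as base), Eid dummy.
X = np.column_stack([np.ones(n_months), t] +
                    [(month == m).astype(float) for m in range(1, 12)] + [eid])
beta, *_ = np.linalg.lstsq(X, inflow, rcond=None)
print("estimated Eid effect:", beta[-1])          # close to 25
print("estimated trend     :", beta[1])           # close to 0.5
```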

  8. A variational multi-scale method with spectral approximation of the sub-scales: Application to the 1D advection-diffusion equations

    KAUST Repository

    Chacón Rebollo, Tomás

    2015-03-01

    This paper introduces a variational multi-scale method where the sub-grid scales are computed by spectral approximations. It is based upon an extension of the spectral theorem to not necessarily self-adjoint elliptic operators that have an associated base of eigenfunctions which are orthonormal in weighted L2 spaces. This allows the sub-grid scales to be calculated element-wise by means of the associated spectral expansion. We propose a feasible VMS-spectral method by truncation of this spectral expansion to a finite number of modes. We apply this general framework to the convection-diffusion equation, by analytically computing the family of eigenfunctions. We perform a convergence and error analysis. We also present some numerical tests that show the stability of the method for an odd number of spectral modes, and an improvement of accuracy in the large resolved scales, due to the adding of the sub-grid spectral scales.

  9. Inter-tester reproducibility and inter-method agreement of two variations of the Beighton test for determining Generalised Joint Hypermobility in primary school children.

    Science.gov (United States)

    Junge, Tina; Jespersen, Eva; Wedderkopp, Niels; Juul-Kristensen, Birgit

    2013-12-21

    The assessment of Generalised Joint Hypermobility (GJH) is usually based on the Beighton tests, which consist of a series of nine tests. Possible methodological shortcomings can arise, as the tests do not include detailed descriptions of performance, interpretation, or classification of GJH. The purpose of this study was, among children aged 7-8 and 10-12 years, to evaluate: 1) the inter-tester reproducibility of the tests and criteria for classification of GJH for 2 variations of the Beighton test battery (Methods A and B), with a variation in starting positions and benchmarks between methods, and 2) the inter-method agreement for the two batteries. A standardised three-phase protocol for clinical reproducibility studies was followed, including a training phase, an overall agreement phase and a study phase. The number of participants in the three phases was 10, 70 and 39, respectively. For the inter-method study a total of 103 children participated. Two testers judged each test battery. A score of ≥ 5 was set as the cut-off level for GJH. Cohen's kappa statistics and McNemar's test were used to test for agreement and significant differences. Kappa values for GJH (≥ 5) were 0.64 (Method A, prevalence 0.42) and 0.59 (Method B, prevalence 0.46), with no difference between testers in Method A (p = 0.45) and B (p = 0.29). Prevalence of GJH in the inter-method study was 31% (A) and 35% (B), with no difference between methods (p = 0.54). Inter-tester reproducibility of Methods A and B was moderate to substantial when following a standardised study protocol. Both test batteries can be used in the same child population, as there was no difference in the prevalence of GJH at the cut-off of 5 when applying Methods A and B. However, both methods need to be tested for their predictive validity at higher cut-off levels, e.g. ≥ 6 and ≥ 7.
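    For readers unfamiliar with the agreement statistic used here, the following minimal sketch computes Cohen's kappa for two raters' dichotomous GJH classifications; the example ratings are invented and are not the study data.

      import numpy as np

      def cohens_kappa(r1, r2):
          """Cohen's kappa for two raters with categorical labels."""
          r1, r2 = np.asarray(r1), np.asarray(r2)
          labels = np.union1d(r1, r2)
          po = np.mean(r1 == r2)                                          # observed agreement
          pe = sum(np.mean(r1 == c) * np.mean(r2 == c) for c in labels)   # chance agreement
          return (po - pe) / (1.0 - pe)

      # Hypothetical GJH classifications (1 = hypermobile at cut-off >= 5, 0 = not).
      tester_a = [1, 0, 0, 1, 1, 0, 0, 0, 1, 0]
      tester_b = [1, 0, 1, 1, 0, 0, 0, 0, 1, 0]
      print(round(cohens_kappa(tester_a, tester_b), 2))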

  10. Incorporating spatial variation in housing attribute prices: a comparison of geographically weighted regression and the spatial expansion method

    Science.gov (United States)

    Bitter, Christopher; Mulligan, Gordon F.; Dall'Erba, Sandy

    2007-04-01

    Hedonic house price models typically impose a constant price structure on housing characteristics throughout an entire market area. However, there is increasing evidence that the marginal prices of many important attributes vary over space, especially within large markets. In this paper, we compare two approaches to examine spatial heterogeneity in housing attribute prices within the Tucson, Arizona housing market: the spatial expansion method and geographically weighted regression (GWR). Our results provide strong evidence that the marginal price of key housing characteristics varies over space. GWR outperforms the spatial expansion method in terms of explanatory power and predictive accuracy.
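    As a hedged illustration of the core GWR computation, the sketch below performs one weighted least-squares fit per location with a Gaussian distance kernel; the bandwidth, coordinates and the single "floor area" attribute are invented and do not reflect the Tucson data or the authors' calibration.

      import numpy as np

      def gwr_coefficients(coords, X, y, bandwidth):
          """Geographically weighted regression: one WLS fit per observation location."""
          betas = []
          for ci in coords:
              d = np.linalg.norm(coords - ci, axis=1)
              w = np.exp(-0.5 * (d / bandwidth) ** 2)          # Gaussian kernel weights
              W = np.diag(w)
              betas.append(np.linalg.solve(X.T @ W @ X, X.T @ W @ y))
          return np.array(betas)

      rng = np.random.default_rng(1)
      n = 200
      coords = rng.uniform(0, 10, size=(n, 2))
      sqft = rng.uniform(80, 300, size=n)
      # Spatially varying marginal price of floor area (purely illustrative).
      price = (50 + 2.0 * coords[:, 0]) * sqft + rng.normal(0, 500, n)
      X = np.column_stack([np.ones(n), sqft])
      local_betas = gwr_coefficients(coords, X, price, bandwidth=2.0)   # one (intercept, slope) per site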

  11. The use of parabolic variations and the direct determination of stress intensity factors using the BIE method. [Boundary Integral Equation

    Science.gov (United States)

    Mendelson, A.

    1977-01-01

    Two advances in the numerical techniques of utilizing the BIE method are presented. The boundary unknowns are represented by parabolas over each interval, which are integrated in closed form. These integrals are listed for easy use. For problems involving crack tip singularities, these singularities are included in the boundary integrals so that the stress intensity factor becomes just one more unknown in the set of boundary unknowns, thus avoiding the uncertainties of plotting and extrapolating techniques. The method is applied to the problems of a notched beam in tension and bending, with excellent results.

  12. Anisotropic diffusion filter based edge enhancement for the segmentation of carotid intima-media layer in ultrasound images using variational level set method without re-initialisation.

    Science.gov (United States)

    Sumathi, K; Anandh, K R; Mahesh, V; Ramakrishnan, S

    2014-01-01

    In this work an attempt has been made to enhance the edges and segment the boundary of the intima-media layer of the Common Carotid Artery (CCA) using an anisotropic diffusion filter and a level set method. Normal and abnormal ultrasound B-mode longitudinal images of common carotid arteries are used in this study. The images are subjected to an anisotropic diffusion filter to generate an edge map. This edge map is used as a stopping boundary in a variational level set method without re-initialisation to segment the intima-media layer. Geometric features are extracted from this layer and analyzed statistically. Results show that anisotropic diffusion filtering is able to extract the edges in both normal and abnormal images. The obtained edge maps are found to have high contrast and sharp edges. The edge-based variational level set method is able to segment the intima-media layer precisely from the common carotid artery. The extracted geometrical features, such as major axis and extent, are found to be statistically significant in differentiating normal and abnormal images. Thus this study seems to be clinically useful in the diagnosis of cardiovascular disease.
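    As a minimal sketch of the pre-processing step described (with invented parameter values, not the filter settings used in the study), Perona-Malik anisotropic diffusion smooths homogeneous regions while preserving edges, and a gradient-based edge map can then act as the stopping term of an edge-based level set.

      import numpy as np

      def anisotropic_diffusion(img, n_iter=20, kappa=30.0, lam=0.2):
          """Perona-Malik anisotropic diffusion with exponential conductance."""
          u = img.astype(float).copy()
          for _ in range(n_iter):
              # Differences to the four neighbours (np.roll wraps at the border,
              # which is adequate for this illustration).
              dn = np.roll(u, 1, axis=0) - u
              ds = np.roll(u, -1, axis=0) - u
              de = np.roll(u, -1, axis=1) - u
              dw = np.roll(u, 1, axis=1) - u
              u += lam * (np.exp(-(dn / kappa) ** 2) * dn + np.exp(-(ds / kappa) ** 2) * ds
                          + np.exp(-(de / kappa) ** 2) * de + np.exp(-(dw / kappa) ** 2) * dw)
          return u

      def edge_map(u):
          """g = 1 / (1 + |grad u|^2): small near strong edges, used to stop the level set."""
          gy, gx = np.gradient(u)
          return 1.0 / (1.0 + gx ** 2 + gy ** 2)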

  13. Variational Monte Carlo method for fermionic models combined with tensor networks and applications to the hole-doped two-dimensional Hubbard model

    Science.gov (United States)

    Zhao, Hui-Hai; Ido, Kota; Morita, Satoshi; Imada, Masatoshi

    2017-08-01

    The conventional tensor-network states employ real-space product states as reference wave functions. Here, we propose a many-variable variational Monte Carlo (mVMC) method combined with tensor networks, taking advantage of both to study fermionic models. The variational wave function is composed of a pair product wave function operated on by real-space correlation factors and tensor networks. Moreover, we can apply quantum number projections, such as spin, momentum, and lattice symmetry projections, to recover the symmetry of the wave function and further improve the accuracy. We benchmark our method for one- and two-dimensional Hubbard models, which show significant improvement over the results obtained individually either by mVMC or by tensor networks. We have applied the present method to a hole-doped Hubbard model on the square lattice, which indicates stripe charge/spin order coexisting with a weak d-wave superconducting order in the ground state for doping concentrations below 0.3, with the stripe oscillation period getting longer with increasing hole concentration. A charge-homogeneous and highly superconducting state also exists as a metastable excited state for doping concentrations below 0.25.

  14. Assimilation of total lightning data using the three-dimensional variational method at convection-allowing resolution

    Science.gov (United States)

    Zhang, Rong; Zhang, Yijun; Xu, Liangtao; Zheng, Dong; Yao, Wen

    2017-08-01

    A large number of observational analyses have shown that lightning data can be used to indicate areas of deep convection. It is important to assimilate observed lightning data into numerical models, so that more small-scale information can be incorporated to improve the quality of the initial condition and the subsequent forecasts. In this study, the empirical relationship between flash rate, water vapor mixing ratio, and graupel mixing ratio was used to adjust the model relative humidity, which was then assimilated by using the three-dimensional variational data assimilation system of the Weather Research and Forecasting model in cycling mode at 10-min intervals. To find the appropriate assimilation time-window length that yielded significant improvement in both the initial conditions and subsequent forecasts, four experiments with different assimilation time-window lengths were conducted for a squall line case that occurred on 10 July 2007 in North China. It was found that 60 min was the appropriate assimilation time-window length for this case, and longer assimilation window length was unnecessary since no further improvement was present. Forecasts of 1-h accumulated precipitation during the assimilation period and the subsequent 3-h accumulated precipitation were significantly improved compared with the control experiment without lightning data assimilation. The simulated reflectivity was optimal after 30 min of the forecast, it remained optimal during the following 42 min, and the positive effect from lightning data assimilation began to diminish after 72 min of the forecast. Overall, the improvement from lightning data assimilation can be maintained for about 3 h.
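    For context, a 3D-Var system of this kind obtains its analysis by minimizing the standard variational cost function, written here in generic form rather than with the specific lightning-derived pseudo-observations of relative humidity used in this study:

      J(\mathbf{x}) = \tfrac{1}{2}(\mathbf{x}-\mathbf{x}_b)^{\mathrm{T}}\mathbf{B}^{-1}(\mathbf{x}-\mathbf{x}_b) + \tfrac{1}{2}\big(H(\mathbf{x})-\mathbf{y}\big)^{\mathrm{T}}\mathbf{R}^{-1}\big(H(\mathbf{x})-\mathbf{y}\big),

    where \mathbf{x}_b is the background state, \mathbf{y} collects the (pseudo-)observations, H is the observation operator, and \mathbf{B} and \mathbf{R} are the background- and observation-error covariance matrices; cycling at 10-min intervals repeats this minimization with the previous forecast as the new background.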

  15. Variations in semen quality parameters of Ovchepolian pramenka rams according to the method of collection and the meteorological season

    Directory of Open Access Journals (Sweden)

    Nikolovski Martin

    2012-01-01

    The off-breeding season for rams is a time-limiting factor for their use in scientific studies. This research was set upon two aims: (1) to assess the differences in the quality of semen collected throughout the year, and (2) to investigate which of the two commonly used methods for semen collection (artificial vagina, A.V., and electroejaculation, E.E.) could prove to be more favorable in the off-breeding period. Five Ovchepolian Pramenka rams were used for this investigation. They were divided into two groups: group 1 (two rams), which was subjected to the A.V. method, and group 2 (three rams), which was subjected to the E.E. method for semen collection. Semen evaluation included volume, spermatozoa concentration, live spermatozoa, ejaculate density and motility. According to season, the results show high statistical significance for the volume (P<0.01) and motility (P<0.001) parameters. Comparison of group 1 and group 2 results showed high statistical significance for the motility score (P<0.001), ejaculate volume (P<0.01) and percentage of live spermatozoa (P<0.01). In conclusion, the A.V. method is more favorable for semen collection in late autumn, winter and spring, when rams are out of the breeding season.

  16. Method of Continuous Variations: Applications of Job Plots to the Study of Molecular Associations in Organometallic Chemistry

    Science.gov (United States)

    Renny, Joseph S.; Tomasevich, Laura L.; Tallmadge, Evan H.; Collum, David B.

    2014-01-01

    Applications of the method of continuous variations—MCV or the Method of Job—to problems of interest to organometallic chemists are described. MCV provides qualitative and quantitative insights into the stoichiometries underlying association of m molecules of A and n molecules of B to form AmBn. Applications to complex ensembles probe associations that form metal clusters and aggregates. Job plots in which reaction rates are monitored provide relative stoichiometries in rate-limiting transition structures. In a specialized variant, ligand- or solvent-dependent reaction rates are dissected into contributions in both the ground states and transition states, which affords insights into the full reaction coordinate from a single Job plot. Gaps in the literature are identified and critiqued. PMID:24166797
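    A minimal sketch of a Job plot for an assumed 1:1 association (the equilibrium constant, total concentration and the exact-solution helper below are invented for illustration): the monitored observable, proportional to the complex concentration, peaks at mole fraction x_A = m/(m+n), i.e. at 0.5 for AB.

      import numpy as np

      def complex_conc_1to1(a0, b0, K):
          """Exact [AB] for A + B <-> AB with association constant K (smaller quadratic root)."""
          b = K * (a0 + b0) + 1.0
          disc = b ** 2 - 4.0 * K ** 2 * a0 * b0
          return (b - np.sqrt(disc)) / (2.0 * K)

      C_total = 1e-3                       # total concentration held constant (illustrative)
      K = 1e4                              # assumed association constant
      x = np.linspace(0.01, 0.99, 99)      # mole fraction of A
      ab = complex_conc_1to1(x * C_total, (1 - x) * C_total, K)

      # The Job plot of [AB] (or any observable proportional to it) versus x peaks
      # at x = m/(m+n); for the 1:1 complex simulated here the maximum is at x = 0.5.
      print(x[np.argmax(ab)])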

  17. Computational Modelling of Pisum Sativum L. Superoxide Dismutase and Prediction of Mutational Variations through in silico Methods

    Directory of Open Access Journals (Sweden)

    Nathan Vinod Kumar

    2014-06-01

    Superoxide dismutase (SOD) is one of the major enzymes expressed in the oxidative stress pathway in plants. Its expression in oxidative reactions is also evident in other taxonomic groups. In the present work, SOD of Pisum sativum, a common plant, is characterized using computational tools. The SOD sequence of P. sativum [CAA42737.1], an Ala- and Leu-rich protein with an alkaline pI value, was used as the query sequence to obtain nine similar sequences through BLASTp. A phylogenetic tree was constructed using MEGA 5.0 based on the neighbour-joining method. Physicochemical parameters and amino acid composition were studied and compared between the query sequence and the other similar sequences. Secondary structures were predicted to understand the dominant components. Homology modeling of P. sativum SOD was done using SWISS-MODEL and the quality was evaluated using standard methods. Twenty-seven active sites, which were Lys rich, were detected in the predicted SOD model.

  18. Apparent annual survival estimates of tropical songbirds better reflect life history variation when based on intensive field methods

    Science.gov (United States)

    Martin, Thomas E.; Riordan, Margaret M.; Repin, Rimi; Mouton, James C.; Blake, William M.

    2017-01-01

    Aim: Adult survival is central to theories explaining latitudinal gradients in life history strategies. Life history theory predicts higher adult survival in tropical than north temperate regions given lower fecundity and parental effort. Early studies were consistent with this prediction, but standard-effort netting studies in recent decades suggested that apparent survival rates in temperate and tropical regions strongly overlap. Such results do not fit with life history theory. Targeted marking and resighting of breeding adults yielded higher survival estimates in the tropics, but this approach is thought to overestimate survival because it does not sample social and age classes with lower survival. We compared the effect of field methods on tropical survival estimates and their relationships with life history traits. Location: Sabah, Malaysian Borneo. Time period: 2008–2016. Major taxon: Passeriformes. Methods: We used standard-effort netting and resighted individuals of all social and age classes of 18 tropical songbird species over 8 years. We compared apparent survival estimates between these two field methods with differing analytical approaches. Results: Estimated detection and apparent survival probabilities from standard-effort netting were similar to those from other tropical studies that used standard-effort netting. Resighting data verified that a high proportion of individuals that were never recaptured in standard-effort netting remained in the study area, and many were observed breeding. Across all analytical approaches, addition of resighting yielded substantially higher survival estimates than did standard-effort netting alone. These apparent survival estimates were higher than for temperate zone species, consistent with latitudinal differences in life histories. Moreover, apparent survival estimates from addition of resighting, but not from standard-effort netting alone, were correlated with parental effort as measured by egg temperature across species

  19. The effects of variations in dose and method of administration on glucagon like peptide-2 activity in the rat

    DEFF Research Database (Denmark)

    Kaji, Tatsuru; Tanaka, Hiroaki; Holst, Jens Juul

    2008-01-01

    intestinal trophic activity. A rodent model of total parenteral nutrition (TPN) mucosal atrophy was used, examining intestinal morphology in the adult male rat after 5 days. Groups were: controls, maintained with TPN alone and GLP-2 treated groups (high dose; 240 microg/kg/day, low dose; 24 microg....../kg/day) given by continuous or intermittent (over 1 h, twice daily) intravenous infusion. Body weight and total small bowel length were significantly increased in the high dose, continuous infusion group. Both high dose infusion methods increased total small bowel weight, villus height, crypt depth, and total...

  20. A Variational Method to Retrieve the Extinction Profile in Liquid Clouds Using Multiple Field-of-View Lidar

    Science.gov (United States)

    Pounder, Nicola L.; Hogan, Robin J.; Varnai, Tamas; Battaglia, Alessandro; Cahalan, Robert F.

    2011-01-01

    While liquid clouds play a very important role in the global radiation budget, it has been very difficult to remotely determine their internal structure. Ordinary lidar instruments (similar to radars but using visible light pulses) receive strong signals from such clouds, but the information is limited to a thin layer near the cloud boundary. Multiple field-of-view (FOV) lidars offer some new hope as they are able to isolate photons that were scattered many times by cloud droplets and penetrated deep into a cloud before returning to the instrument. Their data contain new information on cloud structure, although the lack of fast simulation methods has made it challenging to interpret the observations. This paper describes a fast new technique that can simulate multiple-FOV lidar signals and can even estimate the way the signals would change in response to changes in cloud properties, an ability that allows quick refinements in our initial guesses of cloud structure. Results for a hypothetical airborne three-FOV lidar suggest that this approach can help determine cloud structure for a deeper layer in clouds, and can reliably determine the optical thickness of even fairly thick liquid clouds. The algorithm is also applied to stratocumulus observations by the 8-FOV airborne "THOR" lidar. These tests demonstrate that the new method can determine the depth to which a lidar provides useful information on vertical cloud structure. This work opens the way to exploit data from spaceborne lidar and radar more rigorously than has been possible up to now.

  1. On the Use of Biomineral Oxygen Isotope Data to Identify Human Migrants in the Archaeological Record: Intra-Sample Variation, Statistical Methods and Geographical Considerations.

    Directory of Open Access Journals (Sweden)

    Emma Lightfoot

    Oxygen isotope analysis of archaeological skeletal remains is an increasingly popular tool to study past human migrations. It is based on the assumption that human body chemistry preserves the δ18O of precipitation in such a way as to be a useful technique for identifying migrants and, potentially, their homelands. In this study, the first such global survey, we draw on published human tooth enamel and bone bioapatite data to explore the validity of using oxygen isotope analyses to identify migrants in the archaeological record. We use human δ18O results to show that there are large variations in human oxygen isotope values within a population sample. This may relate to physiological factors influencing the preservation of the primary isotope signal, or to human activities (such as brewing, boiling, stewing, differential access to water sources and so on) causing variation in ingested water and food isotope values. We compare the number of outliers identified using various statistical methods. We determine that the most appropriate method for identifying migrants is dependent on the data but is likely to be the IQR or median absolute deviation from the median under most archaeological circumstances. Finally, through a spatial assessment of the dataset, we show that the degree of overlap in human isotope values from different locations across Europe is such that identifying individuals' homelands on the basis of oxygen isotope analysis alone is not possible for the regions analysed to date. Oxygen isotope analysis is a valid method for identifying first-generation migrants from an archaeological site when used appropriately; however, it is difficult to identify migrants using statistical methods for a sample size of less than c. 25 individuals. In the absence of local previous analyses, each sample should be treated as an individual dataset and statistical techniques can be used to identify migrants, but in most cases pinpointing a specific
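    As a minimal sketch of the outlier criteria discussed (interquartile-range fences and the median absolute deviation from the median), with invented δ18O values rather than the published data:

      import numpy as np

      def iqr_outliers(x, k=1.5):
          """Flag values outside the Tukey fences Q1 - k*IQR and Q3 + k*IQR."""
          q1, q3 = np.percentile(x, [25, 75])
          iqr = q3 - q1
          return (x < q1 - k * iqr) | (x > q3 + k * iqr)

      def mad_outliers(x, threshold=3.0):
          """Flag values farther than `threshold` scaled MADs from the median."""
          med = np.median(x)
          mad = np.median(np.abs(x - med))
          # 1.4826 makes the MAD consistent with the standard deviation under normality.
          return np.abs(x - med) > threshold * 1.4826 * mad

      d18o = np.array([26.1, 26.4, 25.9, 26.0, 26.3, 25.8, 26.2, 28.9, 26.1, 25.7])
      print(iqr_outliers(d18o))
      print(mad_outliers(d18o))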

  2. Absence of the superior mesenteric artery in an adult and a new classification method for superior-inferior mesenteric arterial variations.

    Science.gov (United States)

    Wu, Yongyou; Peng, Wei; Wu, Hao; Chen, Guangqiang; Zhu, Jianbin; Xing, Chungen

    2014-07-01

    This paper aims to report the complete absence of the superior mesenteric artery (SMA) in an adult and to propose a new classification method for the superior-inferior mesenteric arterial variations (SIMAV). A 69-year-old female was referred for abdominal pain and change of stool habits and characteristics. Multi-detector computed tomography (MDCT) was performed. Based on the CT findings of the patient and previous reports on the abnormalities of the superior-inferior mesenteric arteries, an attempt was made to propose a new classification method for SIMAV. MDCT with enhancement revealed complete absence of the SMA and compensatory dilation of the inferior mesenteric artery (IMA). An aneurysm of the splenic artery and both inferior phrenic arteries aberrantly arising from the aorta at the same level as the celiac trunk were also noted. Based on our case and literature reports, we were able to propose a new classification method for SIMAV. Without considering the relationship with the celiac arteries, SIMAV can be divided into 4 types. Type I is the normal type or "textbook" type. In type II, the SMA is defective and in type III, the IMA is defective. In type IV, there is an aberrant middle mesenteric artery (MMA). Complete absence of the SMA is extremely rare. However, awareness of such a variation is of great importance during operations for rectal and sigmoid cancer. In such patients, ligation of the trunk of the IMA, which is the only artery for the entire intestine, will lead to disastrous consequences. The new classification method may be helpful in the scientific and systematic description of SIMAV.

  3. The power of gene-based rare variant methods to detect disease-associated variation and test hypotheses about complex disease.

    Directory of Open Access Journals (Sweden)

    Loukas Moutsianas

    2015-04-01

    Genome and exome sequencing in large cohorts enables characterization of the role of rare variation in complex diseases. Success in this endeavor, however, requires investigators to test a diverse array of genetic hypotheses which differ in the number, frequency and effect sizes of underlying causal variants. In this study, we evaluated the power of gene-based association methods to interrogate such hypotheses, and examined the implications for study design. We developed a flexible simulation approach, using 1000 Genomes data, to (a) generate sequence variation at human genes in up to 10K case-control samples, and (b) quantify the statistical power of a panel of widely used gene-based association tests under a variety of allelic architectures, locus effect sizes, and significance thresholds. For loci explaining ~1% of phenotypic variance underlying a common dichotomous trait, we find that all methods have low absolute power to achieve exome-wide significance (~5-20% power at α = 2.5 × 10(-6)) in 3K individuals; even in 10K samples, power is modest (~60%). The combined application of multiple methods increases sensitivity, but does so at the expense of a higher false positive rate. MiST, SKAT-O, and KBAC have the highest individual mean power across simulated datasets, but we observe wide architecture-dependent variability in the individual loci detected by each test, suggesting that inferences about disease architecture from analysis of sequencing studies can differ depending on which methods are used. Our results imply that tens of thousands of individuals, extensive functional annotation, or highly targeted hypothesis testing will be required to confidently detect or exclude rare variant signals at complex disease loci.
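    A toy sketch of the kind of power calculation described, not the paper's simulation framework (which resamples 1000 Genomes haplotypes and applies MiST, SKAT-O, KBAC and other tests): simulate rare-allele carrier counts in cases and controls and estimate the power of a simple burden-style two-proportion test at an exome-wide threshold. Sample sizes, carrier frequency and relative risk are placeholders.

      import math
      import numpy as np

      def burden_power(n_cases, n_controls, p_ctrl, rel_risk, alpha, n_sim=2000, seed=0):
          """Fraction of simulated studies in which a two-proportion carrier test reaches alpha."""
          rng = np.random.default_rng(seed)
          p_case = min(1.0, p_ctrl * rel_risk)
          hits = 0
          for _ in range(n_sim):
              a = rng.binomial(n_cases, p_case)        # rare-allele carriers among cases
              b = rng.binomial(n_controls, p_ctrl)     # rare-allele carriers among controls
              if a + b == 0:
                  continue
              p_pool = (a + b) / (n_cases + n_controls)
              se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_cases + 1 / n_controls))
              z = abs(a / n_cases - b / n_controls) / se
              if math.erfc(z / math.sqrt(2.0)) < alpha:   # two-sided normal approximation
                  hits += 1
          return hits / n_sim

      # Illustrative numbers only: 1% aggregate carrier frequency in controls, relative risk 2.
      print(burden_power(1500, 1500, 0.01, 2.0, alpha=2.5e-6))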

  4. Apparatus and Method for Compensating for Process, Voltage, and Temperature Variation of the Time Delay of a Digital Delay Line

    Science.gov (United States)

    Seefeldt, James (Inventor); Feng, Xiaoxin (Inventor); Roper, Weston (Inventor)

    2013-01-01

    A process, voltage, and temperature (PVT) compensation circuit and a method of continuously generating a delay measure are provided. The compensation circuit includes two delay lines, each delay line providing a delay output. The two delay lines may each include a number of delay elements, which in turn may include one or more current-starved inverters. The number of delay elements may differ between the two delay lines. The delay outputs are provided to a combining circuit that determines an offset pulse based on the two delay outputs and then averages the voltage of the offset pulse to determine a delay measure. The delay measure may be one or more currents or voltages indicating an amount of PVT compensation to apply to input or output signals of an application circuit, such as a memory-bus driver, dynamic random access memory (DRAM), a synchronous DRAM, a processor or other clocked circuit.

  5. Variation of the Pseudomonas community structure on oak leaf lettuce during storage detected by culture-dependent and -independent methods.

    Science.gov (United States)

    Nübling, Simone; Schmidt, Herbert; Weiss, Agnes

    2016-01-04

    The genus Pseudomonas plays an important role in the lettuce leaf microbiota and certain species can induce spoilage. The aim of this study was to investigate the occurrence and diversity of Pseudomonas spp. on oak leaf lettuce and to follow their community shift during six days of cold storage with culture-dependent and culture-independent methods. In total, 21 analysed partial Pseudomonas 16S rRNA gene sequences matched closely (> 98.3%) to the different reference strain sequences, which were distributed among 13 different phylogenetic groups or subgroups within the genus Pseudomonas. It could be shown that all detected Pseudomonas species belonged to the P. fluorescens lineage. In the culture-dependent analysis, 73% of the isolates at day 0 and 79% of the isolates at day 6 belonged to the P. fluorescens subgroup. The second most frequent group, with 12% of the isolates, was the P. koreensis subgroup. This subgroup was only detected at day 0. In the culture-independent analysis the P. fluorescens subgroup and P. extremaustralis could not be differentiated by RFLP. Both groups were most abundant and amounted to approximately 46% at day 0 and 79% at day 6. The phytopathogenic species P. salmonii, P. viridiflava and P. marginalis increased during storage. Both approaches identified the P. fluorescens group as the main phylogenetic group. The results of the present study suggest that pseudomonads found by plating methods indeed represent the most abundant part of the Pseudomonas community on oak leaf lettuce.

  6. Mixed-Methods Research in a Complex Multisite VA Health Services Study: Variations in the Implementation and Characteristics of Chiropractic Services in VA

    Directory of Open Access Journals (Sweden)

    Raheleh Khorsan

    2013-01-01

    Maximizing the quality and benefits of newly established chiropractic services represents an important policy and practice goal for the US Department of Veterans Affairs’ healthcare system. Understanding the implementation process and characteristics of new chiropractic clinics and the determinants and consequences of these processes and characteristics is a critical first step in guiding quality improvement. This paper reports insights and lessons learned regarding the successful application of mixed methods research approaches—insights derived from a study of chiropractic clinic implementation and characteristics, Variations in the Implementation and Characteristics of Chiropractic Services in VA (VICCS). Challenges and solutions are presented in areas ranging from selection and recruitment of sites and participants to the collection and analysis of varied data sources. The VICCS study illustrates the importance of several factors in successful mixed-methods approaches, including (1) the importance of a formal, fully developed logic model to identify and link data sources, variables, and outcomes of interest to the study’s analysis plan and its data collection instruments and codebook and (2) ensuring that data collection methods, including mixed-methods, match study aims. Overall, successful application of a mixed-methods approach requires careful planning, frequent trade-offs, and complex coding and analysis.

  7. Measurement of Internal Friction for Tungsten by the Curve Vibrating Method with Variation of Voltage and Temperature

    Directory of Open Access Journals (Sweden)

    Elin Yusibani

    2013-12-01

    The curved vibrating wire method (CVM) has been widely used to measure gas viscosity. A fine tungsten wire of 50 mm diameter is bent into a semi-circular shape and arranged symmetrically in a magnetic field of about 0.2 T. The frequency domain is used for calculating the viscosity from the response of the wire to forced oscillation. Internal friction is one of the parameters in the CVM that has to be measured beforehand. The internal friction coefficient of the wire material, which is the inverse of the quality factor, has to be measured under vacuum. The term involving internal friction actually represents the effective resistance to motion due to all non-viscous damping phenomena, including internal friction and magnetic damping. The measurements show that, at different induced voltages and elevated temperatures under vacuum, the internal friction of tungsten is around 1 to 4 × 10-4.

  8. Methods applied in determining the variations of strength and srtucture of plutonic rock material exposed to artificial weathering treatment

    Directory of Open Access Journals (Sweden)

    Ihalainen, P.

    1993-12-01

    In this study the most significant factors determining the weathering of natural rock material proved to be the water saturation of the samples and the chemical composition of the pore water. The action of hydrolysis caused by the acidity of the pore water, combined with repeated freezing and thawing in 100% relative humidity, proved to be the most significant factor in the alteration of the strength and structure of the studied material, the Inari anorthosite. The action of these methods disintegrated the rock material more than any other weathering treatment or any other combination of the treatments used in this study. The changes in the strength of the rock material were most reliably illustrated by the changes in tensile strength, measured by the changes in the modulus of rupture and the point load index. In several cases the standard deviations of the results exceeded the absolute changes of the corresponding parameter value. As weathering progressed, the porosity of the Inari anorthosite changed in such a way that frost and salt weathering primarily increased the proportion of large pores, while hydrolysis increased the proportion of small pores in the total porosity. It is rather difficult to simulate in the laboratory the changes in strength and structure of building stone caused by natural weathering, since the effectiveness of the climatic and environmental factors affecting the rock surface in real conditions varies from case to case and according to the duration of the weathering action. An unweathered firm silicate rock with low porosity, such as the Inari anorthosite, has such a resistance against weathering that the necessary series of laboratory experiments to determine the changes in strength inevitably take several months.

  9. Screening mitochondrial DNA sequence variation as an alternative method for tracking established and outbreak populations of Queensland fruit fly at the species southern range limit.

    Science.gov (United States)

    Blacket, Mark J; Malipatil, Mali B; Semeraro, Linda; Gillespie, Peter S; Dominiak, Bernie C

    2017-04-01

    Understanding the relationship between incursions of insect pests and established populations is critical to implementing effective control. Studies of genetic variation can provide powerful tools to examine potential invasion pathways and longevity of individual pest outbreaks. The major fruit fly pest in eastern Australia, Queensland fruit fly Bactrocera tryoni (Froggatt), has been subject to significant long-term quarantine and population reduction control measures in the major horticulture production areas of southeastern Australia, at the species southern range limit. Previous studies have employed microsatellite markers to estimate gene flow between populations across this region. In this study, we used an independent genetic marker, mitochondrial DNA (mtDNA) sequences, to screen genetic variation in established and adjacent outbreak populations in southeastern Australia. During the study period, favorable environmental conditions resulted in multiple outbreaks, which appeared genetically distinctive and relatively geographically localized, implying minimal dispersal between simultaneous outbreaks. Populations in established regions were found to occur over much larger areas. Screening mtDNA (female) lineages proved to be an effective alternative genetic tool to assist in understanding fruit fly population dynamics and provide another possible molecular method that could now be employed for better understanding of the ecology and evolution of this and other pest species.

  10. Construction and investigation of 3D vessels net of the brain according to MRI data using the method of variation of scanning plane

    Science.gov (United States)

    Cherevko, A. A.; Yankova, G. S.; Maltseva, S. V.; Parshin, D. V.; Akulov, A. E.; Khe, A. K.; Chupakhin, A. P.

    2016-06-01

    Blood transports substances necessary for life throughout the body. It is natural to assume a relationship between genotype and the structure of the vasculature (in particular, of the brain). In this paper we consider vessel network models for two genetic lines of laboratory mice. The vascular networks were obtained by preprocessing MRI data. MRI scanning was performed using the method of variation of the scanning plane, i.e. with several sets of parallel planes specified by different normal vectors. Subsequent special processing allowed vessel network models to be constructed without fragmentation. The purpose of the work is to compare the vascular network models of the two genetic lines of laboratory mice.

  11. Toward practical gas sensing with highly reduced graphene oxide: a new signal processing method to circumvent run-to-run and device-to-device variations.

    Science.gov (United States)

    Lu, Ganhua; Park, Sungjin; Yu, Kehan; Ruoff, Rodney S; Ocola, Leonidas E; Rosenmann, Daniel; Chen, Junhong

    2011-02-22

    Graphene is worth evaluating for chemical sensing and biosensing due to its outstanding physical and chemical properties. We first report on the fabrication and characterization of gas sensors using a back-gated field-effect transistor platform with chemically reduced graphene oxide (R-GO) as the conducting channel. These sensors exhibited a 360% increase in response when exposed to 100 ppm NO(2) in air, compared with thermally reduced graphene oxide sensors we reported earlier. We then present a new method of signal processing/data interpretation that addresses (i) sensing devices with long recovery periods (such as required for sensing gases with these R-GO sensors) as well as (ii) device-to-device variations. A theoretical analysis is used to illuminate the importance of using the new signal processing method when the sensing device suffers from slow recovery and non-negligible contact resistance. We suggest that the work reported here (including the sensor signal processing method and the inherent simplicity of device fabrication) is a significant step toward the real-world application of graphene-based chemical sensors.

  12. Toward practical gas sensing with highly reduced graphene oxide: a new signal processing method to circumvent run-to-run and device-to-device variations

    Energy Technology Data Exchange (ETDEWEB)

    Lu, G.; Park, S.; Ruoff, R. S.; Ocola, L. E.; Chen, J. (Center for Nanoscale Materials); (Univ. of Wisconsin); (Univ. of Texas)

    2011-01-01

    Graphene is worth evaluating for chemical sensing and biosensing due to its outstanding physical and chemical properties. We first report on the fabrication and characterization of gas sensors using a back-gated field-effect transistor platform with chemically reduced graphene oxide (R-GO) as the conducting channel. These sensors exhibited a 360% increase in response when exposed to 100 ppm NO{sub 2} in air, compared with thermally reduced graphene oxide sensors we reported earlier. We then present a new method of signal processing/data interpretation that addresses (i) sensing devices with long recovery periods (such as required for sensing gases with these R-GO sensors) as well as (ii) device-to-device variations. A theoretical analysis is used to illuminate the importance of using the new signal processing method when the sensing device suffers from slow recovery and non-negligible contact resistance. We suggest that the work reported here (including the sensor signal processing method and the inherent simplicity of device fabrication) is a significant step toward the real-world application of graphene-based chemical sensors.

  13. Toward practical gas sensing with highly reduced graphene oxide : a new signal processing method to circumvent run-to-run and device-to-device variations.

    Energy Technology Data Exchange (ETDEWEB)

    Ocola, L. E.; Park, S.; Yu, K.; Ruoff, R. S.; Ocola, L. E.; Rosenmann, D.; Chen, J.; Univ. of Wisconsin at Milwaukee; Univ. of Texas at Austin

    2011-01-04

    Graphene is worth evaluating for chemical sensing and biosensing due to its outstanding physical and chemical properties. We first report on the fabrication and characterization of gas sensors using a back-gated field-effect transistor platform with chemically reduced graphene oxide (R-GO) as the conducting channel. These sensors exhibited a 360% increase in response when exposed to 100 ppm NO{sub 2} in air, compared with thermally reduced graphene oxide sensors we reported earlier. We then present a new method of signal processing/data interpretation that addresses (i) sensing devices with long recovery periods (such as required for sensing gases with these R-GO sensors) as well as (ii) device-to-device variations. A theoretical analysis is used to illuminate the importance of using the new signal processing method when the sensing device suffers from slow recovery and non-negligible contact resistance. We suggest that the work reported here (including the sensor signal processing method and the inherent simplicity of device fabrication) is a significant step toward the real-world application of graphene-based chemical sensors.

  14. Comparison of Dissolution Similarity Assessment Methods for Products with Large Variations: f2 Statistics and Model-Independent Multivariate Confidence Region Procedure for Dissolution Profiles of Multiple Oral Products.

    Science.gov (United States)

    Yoshida, Hiroyuki; Shibata, Hiroko; Izutsu, Ken-Ichi; Goda, Yukihiro

    2017-01-01

    The current Japanese Ministry of Health, Labour and Welfare (MHLW) Guideline for Bioequivalence Studies of Generic Products uses averaged dissolution rates for the assessment of dissolution similarity between test and reference formulations. This study clarifies how the application of the model-independent multivariate confidence region procedure (Method B), described in the European Medicines Agency and U.S. Food and Drug Administration guidelines, affects similarity outcomes obtained empirically from dissolution profiles with large variations in individual dissolution rates. Sixty-one datasets of dissolution profiles for immediate release, oral generic, and corresponding innovator products that showed large variation in individual dissolution rates in generic products were assessed for similarity using the f2 statistics defined in the MHLW guidelines (MHLW f2 method) and two different Method B procedures, including a bootstrap method applied with f2 statistics (BS method) and a multivariate analysis method using the Mahalanobis distance (MV method). The MHLW f2 and BS methods provided similar dissolution similarities between reference and generic products. Although a small difference in the similarity assessment may be due to the decrease in the lower confidence limit of the expected f2 values caused by the large variation in individual dissolution rates, the MV method provided results different from those obtained through the MHLW f2 and BS methods. Analysis of actual dissolution data for products with large individual variations would provide valuable information towards an enhanced understanding of these methods and their possible incorporation in the MHLW guidelines.
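    For reference, the f2 similarity factor and a bootstrap variant of the kind referred to here as the BS method can be sketched as follows; the 12-unit dissolution profiles are invented and this is not the guideline or the authors' implementation.

      import numpy as np

      def f2(ref, test):
          """Similarity factor f2 from mean dissolution profiles at common time points."""
          ref, test = np.asarray(ref, float), np.asarray(test, float)
          msd = np.mean((ref - test) ** 2)
          return 50.0 * np.log10(100.0 / np.sqrt(1.0 + msd))

      def bootstrap_f2_lower(ref_units, test_units, n_boot=2000, seed=0):
          """Lower 5th percentile of f2 over resampled individual dissolution units."""
          rng = np.random.default_rng(seed)
          ref_units, test_units = np.asarray(ref_units, float), np.asarray(test_units, float)
          stats = []
          for _ in range(n_boot):
              r = ref_units[rng.integers(0, len(ref_units), len(ref_units))]
              t = test_units[rng.integers(0, len(test_units), len(test_units))]
              stats.append(f2(r.mean(axis=0), t.mean(axis=0)))
          return np.percentile(stats, 5)

      # Invented 12-unit profiles (% dissolved at four sampling times).
      rng = np.random.default_rng(1)
      ref = np.array([35.0, 55.0, 75.0, 90.0]) + rng.normal(0, 6, size=(12, 4))
      test = np.array([38.0, 58.0, 72.0, 88.0]) + rng.normal(0, 6, size=(12, 4))
      print(f2(ref.mean(axis=0), test.mean(axis=0)), bootstrap_f2_lower(ref, test))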

  15. Application of the variational iteration method to nonlinear vibrations of nanobeams induced by the van der Waals force under different boundary conditions

    Science.gov (United States)

    Mohammadian, Mostafa

    2017-04-01

    The pull-in instability is one of the most important phenomena which is usually associated with nanobeams when they are used in nanoelectromechanical systems (NEMS). This phenomenon may occur without electrical excitation and depends on different parameters. The aim of this paper is to investigate the nonlinear vibrations and pull-in instability of nanobeams in the presence of the van der Waals (vdW) force without electrical excitation. Utilizing Galerkin's method, the partial differential equation of motion is reduced to a nonlinear ordinary differential equation. Afterwards, the variational iteration method (VIM) is employed to obtain the nonlinear frequency and deflection of the nanobeam. The study is performed for doubly clamped, doubly simply supported and clamped-simply supported boundary conditions. The effects of boundary conditions, axial load, aspect ratio and the vdW force on nonlinear frequency and deflection as well as pull-in instability are discussed in detail. In addition, three simple and useful equations are developed for predicting the critical values of the vdW force parameter in terms of axial load and aspect ratio parameters. These equations can be employed to estimate the dimensions of nanobeams before their fabrication and use in NEMS devices.
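    In general form (a standard statement of the variational iteration method rather than the paper's specific nanobeam equations), VIM constructs successive approximations through the correction functional

      u_{n+1}(t) = u_n(t) + \int_0^{t} \lambda(\tau)\,\big[ L u_n(\tau) + N \tilde{u}_n(\tau) - g(\tau) \big]\, d\tau,

    where L and N are the linear and nonlinear parts of the ordinary differential equation obtained after the Galerkin reduction, g is the forcing term, \tilde{u}_n denotes a restricted variation, and the Lagrange multiplier \lambda(\tau) is identified from variational theory; for the linear oscillator \ddot{u} + \omega^{2} u = 0 one obtains \lambda(\tau) = \sin\big(\omega(\tau - t)\big)/\omega.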

  16. The effect of annealing temperature variation on the optical properties test of LiTaO3 thin films based on Tauc Plot method for satellite technology

    Science.gov (United States)

    Djohan, N.; Estrada, R.; Sari, D.; Dahrul, M.; Kurniawan, A.; Iskandar, J.; Hardhienata, H.; Irzaman

    2017-01-01

    The purpose of the present research is to observe the energy gap of LiTaO3 thin films, prepared from a 1 M solution and deposited on n-type Si (111) substrates, as a function of annealing temperature. The thin films were fabricated by the Chemical Solution Deposition (CSD) method using a spin coater at 3000 rpm for 30 seconds, and the annealing process was carried out in a furnace (Nabertherm type B180) at temperatures of 750°C, 800°C and 850°C for 15 hours. The absorbance of the thin films is measured using an Ocean Optics USB2000 device and processed into the energy gap curve using the Tauc Plot method. The results show that the energy gap of the thin films associated with indirect transitions increases from 2.78 eV to 2.93 eV with rising annealing temperature. The research shows that LiTaO3 thin films on n-type Si (111) substrates are sensitive to the violet light spectrum and have the potential to be developed as sensors for satellite technology.
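    A minimal sketch of the Tauc-plot extraction for an indirect allowed transition, for which (αhν)^{1/2} is proportional to (hν - Eg): fit a straight line over a user-chosen linear region and extrapolate to zero. The wavelength grid, absorbance values and fitting window below are placeholders, not the measured USB2000 spectra.

      import numpy as np

      def tauc_gap_indirect(wavelength_nm, absorbance, fit_window_eV):
          """Estimate an indirect band gap by linear extrapolation of (alpha*h*nu)^(1/2).

          Absorbance is used as a quantity proportional to the absorption coefficient;
          fit_window_eV is the (low, high) photon-energy range assumed to be linear.
          """
          h_nu = 1239.84 / np.asarray(wavelength_nm, float)      # photon energy in eV
          y = (np.asarray(absorbance, float) * h_nu) ** 0.5      # Tauc quantity for n = 1/2
          lo, hi = fit_window_eV
          mask = (h_nu >= lo) & (h_nu <= hi)
          slope, intercept = np.polyfit(h_nu[mask], y[mask], 1)
          return -intercept / slope                              # x-intercept gives Eg

      # Placeholder spectrum built to have an absorption edge near 2.8 eV.
      wl = np.linspace(350, 600, 200)
      e = 1239.84 / wl
      absorb = np.where(e > 2.8, (2.0 * (e - 2.8)) ** 2 / e, 0.0) + 0.001
      print(tauc_gap_indirect(wl, absorb, fit_window_eV=(3.0, 3.4)))   # close to 2.8 eV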

  17. Explanation of observable secular variations of gravity and alternative methods of determination of drift of the center of mass of the Earth

    Science.gov (United States)

    Barkin, Yury

    2010-05-01

    On the basis of a geodynamic model of the forced relative displacement of the centers of mass of the core and the mantle of the Earth, the secular variations of gravity and of the heights of some gravimetry stations on the Earth's surface have been studied. Taking into account the secular drift of the center of mass of the Earth, which in our geodynamic model is caused by the unidirectional drift of the Earth's core relative to the mantle, a full explanation is given for the observed secular variations of gravity at the stations Ny-Alesund (Norway), Churchill (Canada), Medicina (Italy), Syowa (Antarctica), Strasbourg (France), Membach (Belgium), Wuhan (China) and Metsahovi (Finland). Two new methods for determining the secular drift of the center of mass of the Earth, alternative to the classical method of space geodesy, are offered: 1) on the basis of gravimetry data on secular trends of gravity at stations located in all basic regions of the Earth; 2) on the basis of a comparative analysis of altimetry and coastal data on secular changes of sea level, also in the basic regions of the ocean. 1. Secular drift of the center of mass of the core and the center of mass of the Earth. A secular drift of the center of mass of the Earth to the north, relative to a special center O on the rotation axis of the Earth for which the coefficient of the third zonal harmonic J3' = 0, was predicted in the author's work [1]. A drift toward the geographical point (pole P) 70.0° N, 104.3° E was established for the first time theoretically, as a result of an analysis of the globally directed redistribution of the Earth's masses that explains the observed secular drift of the pole of the Earth's rotation axis and the non-tidal acceleration of its axial rotation [2]. In [1] the drift velocity was estimated at 1-2 cm/yr. For the specified center O, the figure of the planet is, in a sense, deprived of its pear-shaped form (J3' = 0). And in this sense the point O can be

  18. Size variation and collapse of emphysema holes at inspiration and expiration CT scan: evaluation with modified length scale method and image co-registration.

    Science.gov (United States)

    Oh, Sang Young; Lee, Minho; Seo, Joon Beom; Kim, Namkug; Lee, Sang Min; Lee, Jae Seung; Oh, Yeon Mok

    2017-01-01

    A novel approach of size-based emphysema clustering has been developed, and the size variation and collapse of holes in emphysema clusters are evaluated at inspiratory and expiratory computed tomography (CT). Thirty patients were visually evaluated for the size-based emphysema clustering technique and a total of 72 patients were evaluated for analyzing collapse of the emphysema hole in this study. A new approach for the size differentiation of emphysema holes was developed using the length scale, Gaussian low-pass filtering, and iteration approach. Then, the volumetric CT results of the emphysema patients were analyzed using the new method, and deformable registration was carried out between inspiratory and expiratory CT. Blind visual evaluations of EI by two readers had significant correlations with the classification using the size-based emphysema clustering method (r-values of reader 1: 0.186, 0.890, 0.915, and 0.941; reader 2: 0.540, 0.667, 0.919, and 0.942). The results of collapse of emphysema holes using deformable registration were compared with the pulmonary function test (PFT) parameters using Pearson's correlation test. The mean extents of low-attenuation area (LAA), E1 (<1.5 mm), E2 (<7 mm), E3 (<15 mm), and E4 (≥15 mm) were 25.9%, 3.0%, 11.4%, 7.6%, and 3.9%, respectively, at inspiratory CT, and 15.3%, 1.4%, 6.9%, 4.3%, and 2.6%, respectively, at expiratory CT. The extents of LAA, E2, E3, and E4 were significantly correlated with the PFT parameters (r=-0.53, -0.43, -0.48, and -0.25 with forced expiratory volume in 1 second (FEV1); -0.81, -0.62, -0.75, and -0.40 with diffusing capacity of the lungs for carbon monoxide (cDLco), respectively). The fraction of emphysema that shifted to the smaller subgroup showed a significant correlation with FEV1, cDLco, forced expiratory flow at 25%-75% of forced vital capacity, and residual volume (RV)/total lung capacity (r=0.56, 0.73, 0.40, and -0.58). A detailed assessment of the size variation and collapse of emphysema holes may be useful for understanding the dynamic collapse of emphysema and its functional relation.
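    A rough sketch of the kind of size-based low-attenuation clustering described (Gaussian low-pass filtering of a low-attenuation mask followed by thresholding at increasing length scales); the HU threshold, the scales and the synthetic slice are placeholders, not the authors' algorithm, cut-offs or CT data.

      import numpy as np
      from scipy.ndimage import gaussian_filter

      def size_class_masks(hu_slice, laa_threshold=-950.0, sigmas=(1.0, 3.0, 6.0)):
          """Label low-attenuation voxels by the largest Gaussian scale they survive."""
          laa = (hu_slice < laa_threshold).astype(float)
          classes = np.zeros(hu_slice.shape, dtype=int)
          for k, sigma in enumerate(sigmas, start=1):
              smoothed = gaussian_filter(laa, sigma)
              # Low-attenuation voxels still above 0.5 after heavier smoothing belong
              # to progressively larger emphysema holes.
              classes[(smoothed > 0.5) & (laa > 0)] = k
          return classes

      # Synthetic CT-like slice: tissue at -700 HU with a small and a large "hole".
      img = np.full((128, 128), -700.0)
      yy, xx = np.mgrid[0:128, 0:128]
      img[(yy - 40) ** 2 + (xx - 40) ** 2 < 4 ** 2] = -980.0
      img[(yy - 90) ** 2 + (xx - 90) ** 2 < 20 ** 2] = -980.0
      size_classes = size_class_masks(img)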

  19. Experiencing variation

    DEFF Research Database (Denmark)

    Kobayashi, Sofie; Berge, Maria; Grout, Brian William Wilson

    2017-01-01

    This study contributes towards a better understanding of learning dynamics in doctoral supervision by analysing how learning opportunities are created in the interaction between supervisors and PhD students, using the notion of experiencing variation as a key to learning. Empirically, we have bas...... were discussed, created more complex patterns of variation. Both PhD students and supervisors can learn from this. Understanding of this mechanism that creates learning opportunities can help supervisors develop their competences in supervisory pedagogy....

  20. Elemental carbon, organic carbon, and dust concentrations in snow measured with thermal optical and gravimetric methods: Variations during the 2007-2013 winters at Sapporo, Japan

    Science.gov (United States)

    Kuchiki, Katsuyuki; Aoki, Teruo; Niwano, Masashi; Matoba, Sumito; Kodama, Yuji; Adachi, Kouji

    2015-01-01

    mass concentrations of light-absorbing snow impurities at Sapporo, Japan, were measured during six winters from 2007 to 2013. Elemental carbon (EC) and organic carbon (OC) concentrations were measured with the thermal optical method, and dust concentration was determined by filter gravimetric measurement. The measurement results using the different filters were compared to assess the filtration efficiency. Adding NH4H2PO4 coagulant to melted snow samples improved the collection efficiency for EC particles by a factor of 1.45. The mass concentrations of EC, OC, and dust in the top 2 cm layer ranged in 0.007-2.8, 0.01-13, and 0.14-260 ppmw, respectively, during the six winters. The mass concentrations and their short-term variations were larger in the surface than in the subsurface. The snow impurity concentrations varied seasonally; that is, they remained relatively low during the accumulation season and gradually increased during the melting season. Although the surface snow impurities showed no discernible trend over the six winters, they varied from year to year, with a negative correlation between the snow impurity concentrations and the amount of snowfall. The surface snow impurities generally increased with the number of days elapsed since snowfall and showed a different rate for EC (1.44), OC (9.96), and dust (6.81). The possible processes causing an increase in surface snow impurities were dry deposition of atmospheric aerosols, melting of surface snow, and sublimation/evaporation of surface snow.

  1. Comparison of one-dimensional and quasi-one-dimensional Hubbard models from the variational two-electron reduced-density-matrix method

    CERN Document Server

    Rubin, Nicholas C

    2014-01-01

    Minimizing the energy of an $N$-electron system as a functional of a two-electron reduced density matrix (2-RDM), constrained by necessary $N$-representability conditions (conditions for the 2-RDM to represent an ensemble $N$-electron quantum system), yields a rigorous lower bound to the ground-state energy in contrast to variational wavefunction methods. We characterize the performance of two sets of approximate constraints, (2,2)-positivity (DQG) and approximate (2,3)-positivity (DQGT) conditions, at capturing correlation in one-dimensional and quasi-one-dimensional (ladder) Hubbard models. We find that, while both the DQG and DQGT conditions capture both the weak and strong correlation limits, the more stringent DQGT conditions improve the ground-state energies, the natural occupation numbers, the pair correlation function, the effective hopping, and the connected (cumulant) part of the 2-RDM. We observe that the DQGT conditions are effective at capturing strong electron correlation effects in both one- an...

  2. PacBio-LITS: a large-insert targeted sequencing method for characterization of human disease-associated chromosomal structural variations.

    Science.gov (United States)

    Wang, Min; Beck, Christine R; English, Adam C; Meng, Qingchang; Buhay, Christian; Han, Yi; Doddapaneni, Harsha V; Yu, Fuli; Boerwinkle, Eric; Lupski, James R; Muzny, Donna M; Gibbs, Richard A

    2015-03-19

    Generation of long (>5 Kb) DNA sequencing reads provides an approach for interrogation of complex regions in the human genome. Currently, large-insert whole genome sequencing (WGS) technologies from Pacific Biosciences (PacBio) enable analysis of chromosomal structural variations (SVs), but the cost to achieve the required sequence coverage across the entire human genome is high. We developed a method (termed PacBio-LITS) that combines oligonucleotide-based DNA target-capture enrichment technologies with PacBio large-insert library preparation to facilitate SV studies at specific chromosomal regions. PacBio-LITS provides deep sequence coverage at the specified sites at substantially reduced cost compared with PacBio WGS. The efficacy of PacBio-LITS is illustrated by delineating the breakpoint junctions of low copy repeat (LCR)-associated complex structural rearrangements on chr17p11.2 in patients diagnosed with Potocki-Lupski syndrome (PTLS; MIM#610883). We successfully identified previously determined breakpoint junctions in three PTLS cases, and also were able to discover novel junctions in repetitive sequences, including LCR-mediated breakpoints. The new information has enabled us to propose mechanisms for formation of these structural variants. The new method leverages the cost efficiency of targeted capture-sequencing as well as the mappability and scaffolding capabilities of long sequencing reads generated by the PacBio platform. It is therefore suitable for studying complex SVs, especially those involving LCRs, inversions, and the generation of chimeric Alu elements at the breakpoints. Other genomic research applications, such as haplotype phasing and small insertion and deletion validation could also benefit from this technology.

  3. Temporal and spatial variation in the status of acid rivers and potential prevention methods of AS soil-related leaching in peatland forestry

    Energy Technology Data Exchange (ETDEWEB)

    Saarinen, T.

    2013-06-01

    This thesis examines temporal and spatial variations in the status of different rivers and streams of western Finland in terms of acidity and sources of acid load derived from the catchment area. It also examines the monitoring of acid runoff water derived from maintenance drainage in peatland forestry and suggests potential mitigation methods. A total of 17 river basins of different sizes in western Finland were selected for study, including rivers affected by both drainage of agricultural AS soils and forested peatlands. Old data from 1911-1931 were available, but most data were from the 1960s onwards and were taken from the HERTTA database. During 2009-2011, pH and conductivity measurements and water sampling were conducted. Biological monitoring for ecological classification was conducted in the Sanginjoki river system during 2008 and 2009. Three peatland forestry sites were selected to study acid leaching via pH and EC measurements and water sampling. Fluctuations in groundwater level in different drainage conditions were simulated and acid leaching was investigated in laboratory experiments in order to replicate a situation where the groundwater level drops and allows oxidation of sulphidic materials. It was found that river pH decreased and metal concentrations increased with runoff. The highest acidity observed coincided with periods of intense drainage in the 1970s and after dry summers in the past decade. Together with pH, electric conductivity and sulphate in river water were identified as suitable indicators of AS soils in a catchment, because they directly respond to acid leaching derived from AS soils. Acidity derived from organic acids was clearly observed in catchments dominated by forested peatlands and wetlands. Temporal and spatial variations in ecological status were observed, but monitoring at whole-catchment scale and during consecutive years is needed to increase the reliability of the results. Simulations on the potential effects of

  4. Variational analysis

    CERN Document Server

    Rockafellar, R Tyrrell

    1998-01-01

    From its origins in the minimization of integral functionals, the notion of 'variations' has evolved greatly in connection with applications in optimization, equilibrium, and control. It refers not only to constrained movement away from a point, but also to modes of perturbation and approximation that are best describable by 'set convergence', variational convergence of functions and the like. This book develops a unified framework and, in finite dimension, provides a detailed exposition of variational geometry and subdifferential calculus in their current forms beyond classical and convex analysis. Also covered are set-convergence, set-valued mappings, epi-convergence, duality, maximal monotone mappings, second-order subderivatives, measurable selections and normal integrands. The changes in this 3rd printing mainly concern various typographical corrections, and reference omissions that came to light in the previous printings. Many of these reached the authors' notice through their own re-reading, that of th...

  5. Variational principles

    CERN Document Server

    Moiseiwitsch, B L

    2004-01-01

    This graduate-level text's primary objective is to demonstrate the expression of the equations of the various branches of mathematical physics in the succinct and elegant form of variational principles (and thereby illuminate their interrelationship). Its related intentions are to show how variational principles may be employed to determine the discrete eigenvalues for stationary state problems and to illustrate how to find the values of quantities (such as the phase shifts) that arise in the theory of scattering. Chapter-by-chapter treatment consists of analytical dynamics; optics, wave mecha

  6. Size variation and collapse of emphysema holes at inspiration and expiration CT scan: evaluation with modified length scale method and image co-registration

    Directory of Open Access Journals (Sweden)

    Oh SY

    2017-07-01

    Sang Young Oh,1,* Minho Lee,1,* Joon Beom Seo,1,* Namkug Kim,1,2,* Sang Min Lee,1 Jae Seung Lee,3 Yeon Mok Oh3 1Department of Radiology, 2Department of Convergence Medicine, 3Department of Pulmonology, University of Ulsan College of Medicine, Asan Medical Center, Seoul, Republic of Korea *These authors contributed equally to this work Abstract: A novel approach of size-based emphysema clustering has been developed, and the size variation and collapse of holes in emphysema clusters are evaluated at inspiratory and expiratory computed tomography (CT). Thirty patients were visually evaluated for the size-based emphysema clustering technique and a total of 72 patients were evaluated for analyzing collapse of the emphysema hole in this study. A new approach for the size differentiation of emphysema holes was developed using the length scale, Gaussian low-pass filtering, and iteration approach. Then, the volumetric CT results of the emphysema patients were analyzed using the new method, and deformable registration was carried out between inspiratory and expiratory CT. Blind visual evaluations of EI by two readers had significant correlations with the classification using the size-based emphysema clustering method (r-values of reader 1: 0.186, 0.890, 0.915, and 0.941; reader 2: 0.540, 0.667, 0.919, and 0.942). The results of collapse of emphysema holes using deformable registration were compared with the pulmonary function test (PFT) parameters using the Pearson's correlation test. The mean extents of low-attenuation area (LAA), E1 (<1.5 mm), E2 (<7 mm), E3 (<15 mm), and E4 (≥15 mm) were 25.9%, 3.0%, 11.4%, 7.6%, and 3.9%, respectively, at the inspiratory CT, and 15.3%, 1.4%, 6.9%, 4.3%, and 2.6%, respectively, at the expiratory CT. The extents of LAA, E2, E3, and E4 were found to be significantly correlated with the PFT parameters (r=−0.53, −0.43, −0.48, and −0.25 with forced expiratory volume in 1 second (FEV1); −0.81, −0.62, −0.75, and
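
    A rough sketch of how low-attenuation area (LAA) extent and a size classification of emphysema holes could be approximated from a CT volume is given below. This is not the authors' length-scale/Gaussian-filtering/iteration algorithm; the -950 HU threshold, the voxel spacing, the connected-component size binning, and the toy data are all illustrative assumptions.

      import numpy as np
      from scipy import ndimage

      def laa_size_classes(ct_hu, spacing_mm, threshold_hu=-950.0,
                           cutoffs_mm=(1.5, 7.0, 15.0)):
          """Label low-attenuation voxels and bin connected holes by equivalent diameter."""
          laa_mask = ct_hu < threshold_hu                     # low-attenuation voxels
          labels, n = ndimage.label(laa_mask)                 # connected emphysema "holes"
          voxel_vol = float(np.prod(spacing_mm))              # mm^3 per voxel
          sizes = ndimage.sum(laa_mask, labels, range(1, n + 1)) * voxel_vol
          eq_diam = (6.0 * sizes / np.pi) ** (1.0 / 3.0)      # equivalent sphere diameter (mm)
          total = ct_hu.size
          extents = {"LAA%": 100.0 * laa_mask.sum() / total}
          bins = [0.0, *cutoffs_mm, np.inf]
          for k in range(4):                                  # E1..E4 size classes
              in_bin = (eq_diam >= bins[k]) & (eq_diam < bins[k + 1])
              extents[f"E{k + 1}%"] = 100.0 * (sizes[in_bin].sum() / voxel_vol) / total
          return extents

      # toy volume: noise around -850 HU, so roughly 5% of voxels fall below -950 HU
      ct = np.random.default_rng(0).normal(-850, 60, size=(40, 128, 128))
      print(laa_size_classes(ct, spacing_mm=(2.5, 0.7, 0.7)))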

  7. SU-G-201-13: Investigation of Dose Variation Induced by HDR Ir-192 Source Global Shift Within the Varian Ring Applicator Using Monte Carlo Methods

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Y; Cai, J; Meltsner, S; Chang, Z; Craciunescu, O [Duke University Medical Center, Durham, NC (United States)

    2016-06-15

    Purpose: The Varian tandem and ring applicators are used to deliver HDR Ir-192 brachytherapy for cervical cancer. The source path within the ring is hard to predict due to the larger interior ring lumen. Some studies showed the source could be several millimeters different from planned positions, while other studies demonstrated minimal dosimetric impact. A global shift can be applied to limit the effect of positioning offsets. The purpose of this study was to assess the necessity of implementing a global source shift using Monte Carlo (MC) simulations. Methods: The MCNP5 radiation transport code was used for all MC simulations. To accommodate TG-186 guidelines and eliminate inter-source attenuation, a BrachyVision plan with 10 dwell positions (0.5 cm step sizes) was simulated as the summation of 10 individual sources with equal dwell times for simplification. To simplify the study, the tandem was also excluded from the MC model. Global shifts of ±0.1, ±0.3, ±0.5 cm were then simulated as distal and proximal from the reference positions. Dose was scored in water for all MC simulations and was normalized to 100% at the normalization point 0.5 cm from the cap in the ring plane. For dose comparison, Point A was 2 cm caudal from the buildup cap and 2 cm lateral on either side of the ring axis. With seventy simulations, 10^8 photon histories gave statistical uncertainties (k=1) of <2% for (0.1 cm)^3 voxels. Results: Compared to no global shift, average Point A doses were 0.0%, 0.4%, and 2.2% higher for distal global shifts, and 0.4%, 2.8%, and 5.1% higher for proximal global shifts, respectively. The MC Point A doses differed by <1% when compared to BrachyVision. Conclusion: Dose variations were not substantial for ±0.3 cm global shifts, which are common in clinical practice.
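
    The MCNP5 transport calculation cannot be reproduced in a few lines, but the purely geometric effect of a global dwell-position shift can be illustrated with a toy inverse-square, point-source estimate (no attenuation, scatter, or anisotropy; the dwell coordinates and reference point below are invented for illustration):

      import numpy as np

      def toy_point_dose(dwells, point):
          """Relative dose at `point` from equal-weight point sources, inverse-square only."""
          r2 = np.sum((dwells - point) ** 2, axis=1)
          return np.sum(1.0 / r2)

      # 10 dwell positions with 0.5 cm steps along a line (cm), as in the simplified plan
      dwells = np.stack([np.arange(10) * 0.5, np.zeros(10), np.zeros(10)], axis=1)
      point_a = np.array([2.25, 2.0, 0.0])             # an arbitrary "Point A"-like reference

      ref = toy_point_dose(dwells, point_a)
      for shift in (-0.5, -0.3, -0.1, 0.1, 0.3, 0.5):  # global shifts along the dwell axis (cm)
          d = toy_point_dose(dwells + np.array([shift, 0.0, 0.0]), point_a)
          print(f"shift {shift:+.1f} cm: dose change {(d / ref - 1) * 100:+.1f}%")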

  8. Seasonal variation, method of determination of bovine milk stability, and its relation with physical, chemical, and sanitary characteristics of raw milk

    Directory of Open Access Journals (Sweden)

    Sandro Charopen Machado

    The objective of this research was to determine the variation of milk stability evaluated with ethanol, boiling, and coagulation time tests (CTT), to identify milk components related with stability, and to verify the correlation between the three methods. Bulk raw milk was collected monthly at 50 dairy farms from January 2007 to October 2009 and physicochemical attributes, somatic cell (SCC), and total bacterial counts (TBC) were determined. Milk samples were classified into low, medium, and high stability to the ethanol test when coagulation occurred at 72 °GL, between 74 and 78 °GL, and above 78 °GL, respectively. Univariate analysis was performed considering the effects of year, months, and interaction in a completely randomized design. Principal factor analysis and logistic regression were done. There was an interaction between months and years for stability to the ethanol test and coagulation time. All samples were stable at the boiling test. The boiling test was not related to the ethanol and coagulation time tests. Coagulation time was weakly but positively correlated with the ethanol test. Broken-line analysis revealed that milk stability measured with the CTT and ethanol tests decreased sharply when SCC attained 790,000 or 10^6 cells/mL of milk, respectively. Milk stability measured with the ethanol test decreased when TBC was higher than 250,000 cfu/mL, while there was no inflexion point between TBC and stability measured with CTT. Milk with high stability presented lower values for acidity, TBC, and SCC but higher values for pH, lactose, protein, and CTT compared with low-stability milk. Due to its ease of execution, single-point cut-off result, and low cost, we do not recommend replacing the ethanol test with the boiling or coagulation time test.

  9. Comparison of GPS-TEC variation during quiet and disturbed period using the Holt-Winter method and IRI-2012 model over Malaysia

    Science.gov (United States)

    Ahmed Ismail, Nouf Abd Emunim; Abdullah, Mardina; Hasbi, Alina Marie

    2016-07-01

    Total Electron Content (TEC) is the main parameter in the ionosphere that has significant effects on radio waves: it changes the speed and direction of signal propagation, causing the delay of Global Positioning System (GPS) signals. Therefore, it is crucial to validate the performance of ionospheric models to reveal the variety of ionospheric behaviour during quiet and disturbed periods. This research presents a performance evaluation of the statistical Holt-Winter method and the IRI-2012 model using three topside electron density options (IRI-2001, IRI01-corr and NeQuick) against the observed GPS-TEC during quiet and disturbed periods. The GPS-TEC data were derived from the dual-frequency GPS receivers of JUPEM (Department of Survey and Mapping Malaysia) at the UUMK station (north Peninsular Malaysia; geographic coordinates 6.46°N-100.50°E, geomagnetic coordinates 3.32°S-172.99°E) and the TGPG station (south Peninsular Malaysia; geographic coordinates 1.36°N-104.10°E, geomagnetic coordinates 8.43°S-176.53°E) during March 2013. The maximum value of the GPS-TEC occurred in the post-noon period at 17:00 LT and the minimum in the early morning from 6:00-7:00 LT. During the quiet period, the maximum GPS-TEC at the UUMK station was 52 TECU while at the TGPG station it was 60 TECU. During the disturbed period, when an intense geomagnetic storm occurred on 17 March 2013, the maximum GPS-TEC recorded was 58 TECU and 65 TECU at the UUMK and TGPG stations, respectively. The diurnal hourly variation during the quiet period indicated that IRI-2001, IRI01-corr, and NeQuick overestimated the observed GPS-TEC during the day hours, except between 11:00-19:00 LT when IRI01-corr and NeQuick showed underestimation and during 13:00-20:00 LT when IRI-2001 showed slight underestimation, whereas the Holt-Winter method showed good agreement with GPS-TEC. During the disturbed period, IRI-2001 showed overestimation for all hours, while the IRI01-corr
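
    A minimal additive Holt-Winters (triple exponential smoothing) sketch of the kind of statistical model named above, written for hourly values with a 24-hour season; the smoothing constants and the synthetic TEC series are illustrative, not those used in the study:

      import numpy as np

      def holt_winters_additive(y, m, alpha=0.3, beta=0.05, gamma=0.2):
          """One-step-ahead additive Holt-Winters forecasts for a series y with season length m."""
          y = np.asarray(y, dtype=float)
          level = y[:m].mean()
          trend = (y[m:2 * m].mean() - y[:m].mean()) / m
          season = y[:m] - level
          fitted = np.empty_like(y)
          for t in range(len(y)):
              s = season[t % m]
              fitted[t] = level + trend + s              # forecast made before seeing y[t]
              last_level = level
              level = alpha * (y[t] - s) + (1 - alpha) * (level + trend)
              trend = beta * (level - last_level) + (1 - beta) * trend
              season[t % m] = gamma * (y[t] - level) + (1 - gamma) * s
          return fitted

      # synthetic "diurnal TEC": a 24-hour cycle plus noise (TECU)
      hours = np.arange(24 * 14)
      tec = 30 + 20 * np.sin(2 * np.pi * (hours - 8) / 24)
      tec += np.random.default_rng(1).normal(0, 2, hours.size)
      fit = holt_winters_additive(tec, m=24)
      print("RMSE of one-step forecasts:", np.sqrt(np.mean((fit[24:] - tec[24:]) ** 2)))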

  10. Variational principles in physics

    CERN Document Server

    Basdevant, Jean-Louis

    2007-01-01

    Optimization under constraints is an essential part of everyday life. Indeed, we routinely solve problems by striking a balance between contradictory interests, individual desires and material contingencies. This notion of equilibrium was dear to thinkers of the enlightenment, as illustrated by Montesquieu’s famous formulation: "In all magistracies, the greatness of the power must be compensated by the brevity of the duration." Astonishingly, natural laws are guided by a similar principle. Variational principles have proven to be surprisingly fertile. For example, Fermat used variational methods to demonstrate that light follows the fastest route from one point to another, an idea which came to be known as Fermat’s principle, a cornerstone of geometrical optics. Variational Principles in Physics explains variational principles and charts their use throughout modern physics. The heart of the book is devoted to the analytical mechanics of Lagrange and Hamilton, the basic tools of any physicist. Prof. Basdev...

  11. Ensembl variation resources

    Directory of Open Access Journals (Sweden)

    Marin-Garcia Pablo

    2010-05-01

    Background: The maturing field of genomics is rapidly increasing the number of sequenced genomes and producing more information from those previously sequenced. Much of this additional information is variation data derived from sampling multiple individuals of a given species with the goal of discovering new variants and characterising the population frequencies of the variants that are already known. These data have immense value for many studies, including those designed to understand evolution and connect genotype to phenotype. Maximising the utility of the data requires that it be stored in an accessible manner that facilitates the integration of variation data with other genome resources such as gene annotation and comparative genomics. Description: The Ensembl project provides comprehensive and integrated variation resources for a wide variety of chordate genomes. This paper provides a detailed description of the sources of data and the methods for creating the Ensembl variation databases. It also explores the utility of the information by explaining the range of query options available, from using interactive web displays, to online data mining tools and connecting directly to the data servers programmatically. It gives a good overview of the variation resources and future plans for expanding the variation data within Ensembl. Conclusions: Variation data is an important key to understanding the functional and phenotypic differences between individuals. The development of new sequencing and genotyping technologies is greatly increasing the amount of variation data known for almost all genomes. The Ensembl variation resources are integrated into the Ensembl genome browser and provide a comprehensive way to access this data in the context of a widely used genome bioinformatics system. All Ensembl data is freely available at http://www.ensembl.org and from the public MySQL database server at ensembldb.ensembl.org.

  12. Specific food group combinations explaining the variation in intakes of nutrients and other important food components in the European Prospective Investigation into Cancer and Nutrition : an application of the reduced rank regression method

    NARCIS (Netherlands)

    Kroeger, J.; Ferrari, P.; Jenab, M.; Bamia, C.; Touvier, M.; Bueno-de-Mesquita, H. B.; Fahey, M. T.; Benetou, V.; Schulz, M.; Wirfalt, E.; Boeing, H.; Hoffmann, K.; Schulze, M. B.; Orfanos, P.; Oikonomou, E.; Huybrechts, I.; Rohrmann, S.; Pischon, T.; Manjer, J.; Agren, A.; Navarro, C.; Jakszyn, P.; Boutron-Ruault, M. C.; Niravong, M.; Khaw, K. T.; Crowe, F.; Ocke, M. C.; van der Schouw, Y. T.; Mattiello, A.; Bellegotti, M.; Engeset, D.; Hjartaker, A.; Egeberg, R.; Overvad, K.; Riboli, E.; Bingham, S.; Slimani, N.

    2009-01-01

    Objective: To identify combinations of food groups that explain as much variation in absolute intakes of 23 key nutrients and food components as possible within the country-specific populations of the European Prospective Investigation into Cancer and Nutrition (EPIC). Subjects/Methods: The analysis

  13. An incremental-iterative method for modeling damage evolution in voxel-based microstructure models

    Science.gov (United States)

    Zhu, Qi-Zhi; Yvonnet, Julien

    2015-02-01

    Numerical methods motivated by rapid advances in image processing techniques have been intensively developed during recent years and increasingly applied to simulate heterogeneous materials with complex microstructure. The present work aims at elaborating an incremental-iterative numerical method for voxel-based modeling of damage evolution in quasi-brittle microstructures. The iterative scheme based on the Lippmann-Schwinger equation in the real space domain (Yvonnet, in Int J Numer Methods Eng 92:178-205, 2012) is first cast into an incremental form so as to implement nonlinear material models efficiently. In the proposed scheme, local strain increments at material grid points are computed iteratively by a mapping operation through a transformation array, while local stresses are determined using a constitutive model that accounts for material degradation by damage. For validation, benchmark studies and numerical simulations using microtomographic data of concrete are performed. For each test, numerical predictions by the incremental-iterative scheme and the finite element method, respectively, are presented and compared for both global responses and local damage distributions. It is emphasized that the proposed incremental-iterative formulation can be straightforwardly applied in the framework of other Lippmann-Schwinger equation-based schemes, like the fast Fourier transform method.
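
    For orientation, the fixed-point ("basic") scheme built on the Lippmann-Schwinger equation can be sketched in a few lines of NumPy for a scalar, linear conductivity analogue on a periodic 2-D voxel grid. This is not the incremental elastic-damage formulation of the paper, and the grid size, phase contrast, and reference medium below are illustrative choices:

      import numpy as np

      def basic_scheme_conductivity(kmap, G, tol=1e-6, max_iter=500):
          """FFT-based Lippmann-Schwinger fixed-point iteration for scalar conductivity.

          kmap : (N, N) local conductivities on a periodic voxel grid
          G    : prescribed macroscopic gradient, shape (2,)
          """
          n = kmap.shape[0]
          k0 = 0.5 * (kmap.min() + kmap.max())        # reference medium
          freqs = np.fft.fftfreq(n)
          xi = np.stack(np.meshgrid(freqs, freqs, indexing="ij"))   # (2, N, N)
          xi2 = (xi ** 2).sum(axis=0)
          xi2[0, 0] = 1.0                             # avoid dividing by zero at xi = 0

          g = np.zeros((2, n, n)) + G[:, None, None]  # start from the uniform gradient
          for it in range(max_iter):
              tau = (kmap - k0) * g                   # polarization field
              tau_hat = np.fft.fft2(tau, axes=(1, 2))
              gamma_tau_hat = xi * (xi * tau_hat).sum(axis=0) / (k0 * xi2)
              gamma_tau_hat[:, 0, 0] = 0.0            # keep the prescribed mean gradient
              g_new = G[:, None, None] - np.real(np.fft.ifft2(gamma_tau_hat, axes=(1, 2)))
              if np.max(np.abs(g_new - g)) < tol:
                  return g_new, it
              g = g_new
          return g, it

      # two-phase voxel microstructure: a square inclusion 10x more conductive than the matrix
      n = 64
      kmap = np.ones((n, n))
      kmap[16:48, 16:48] = 10.0
      g, iters = basic_scheme_conductivity(kmap, G=np.array([1.0, 0.0]))
      k_eff = np.mean(kmap * g[0]) / np.mean(g[0])    # effective conductivity along x
      print(f"converged in {iters} iterations, k_eff = {k_eff:.3f}")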

  14. Enabling Predictive Simulation and UQ of Complex Multiphysics PDE Systems by the Development of Goal-Oriented Variational Sensitivity Analysis and a-Posteriori Error Estimation Methods

    Energy Technology Data Exchange (ETDEWEB)

    Estep, Donald [Colorado State Univ., Fort Collins, CO (United States)

    2015-11-30

    This project addressed the challenge of predictive computational analysis of strongly coupled, highly nonlinear multiphysics systems characterized by multiple physical phenomena that span a large range of length- and time-scales. Specifically, the project was focused on computational estimation of numerical error and sensitivity analysis of computational solutions with respect to variations in parameters and data. In addition, the project investigated the use of accurate computational estimates to guide efficient adaptive discretization. The project developed, analyzed and evaluated new variational adjoint-based techniques for integration, model, and data error estimation/control and sensitivity analysis, in evolutionary multiphysics multiscale simulations.
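
    A toy illustration of the adjoint-based, goal-oriented error estimation idea, reduced here to a linear algebraic system rather than a coupled multiphysics PDE (the matrix, data, and functional are made up):

      import numpy as np

      rng = np.random.default_rng(0)
      n = 50
      A = np.eye(n) + 0.1 * rng.standard_normal((n, n))   # "forward" operator
      b = rng.standard_normal(n)
      c = rng.standard_normal(n)                          # quantity of interest J(u) = c^T u

      u = np.linalg.solve(A, b)                           # exact solution, for checking only
      u_h = u + 1e-3 * rng.standard_normal(n)             # a perturbed "numerical" solution

      # adjoint problem A^T phi = c: phi weights the residual by its influence on J
      phi = np.linalg.solve(A.T, c)
      eta = phi @ (b - A @ u_h)                           # adjoint-weighted residual estimate

      # for a linear problem the estimate is exact; for nonlinear PDEs it is first-order accurate
      print(f"estimated error in J: {eta:.3e}, true error: {c @ (u - u_h):.3e}")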

  15. Seasonal Variation in Epidemiology

    Science.gov (United States)

    Marrero, Osvaldo

    2013-01-01

    Seasonality analyses are important in medical research. If the incidence of a disease shows a seasonal pattern, then an environmental factor must be considered in its etiology. We discuss a method for the simultaneous analysis of seasonal variation in multiple groups. The nuts and bolts are explained using simple trigonometry, an elementary…
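
    A minimal sketch of the kind of trigonometric (cosinor-style) fit used to test for seasonal variation, assuming monthly case counts; the data are synthetic:

      import numpy as np

      months = np.arange(36)                           # three years of monthly counts
      rng = np.random.default_rng(2)
      cases = 100 + 25 * np.cos(2 * np.pi * (months - 1) / 12) + rng.normal(0, 5, months.size)

      # fit y = m + a*cos(2*pi*t/12) + b*sin(2*pi*t/12) by ordinary least squares
      t = 2 * np.pi * months / 12
      X = np.column_stack([np.ones_like(t), np.cos(t), np.sin(t)])
      m, a, b = np.linalg.lstsq(X, cases, rcond=None)[0]

      amplitude = np.hypot(a, b)                       # size of the seasonal swing
      peak_month = (np.arctan2(b, a) * 12 / (2 * np.pi)) % 12   # 0 = January
      print(f"mesor {m:.1f}, amplitude {amplitude:.1f}, peak near month {peak_month:.1f}")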

  16. Variational transition state theory

    Energy Technology Data Exchange (ETDEWEB)

    Truhlar, D.G. [Univ. of Minnesota, Minneapolis (United States)

    1993-12-01

    This research program involves the development of variational transition state theory (VTST) and semiclassical tunneling methods for the calculation of gas-phase reaction rates and selected applications. The applications are selected for their fundamental interest and/or their relevance to combustion.
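
    For reference, the canonical variational transition state theory rate constant is obtained by minimizing the generalized transition state rate along the reaction coordinate s (standard textbook form, stated here only for orientation; semiclassical tunneling then enters as a multiplicative transmission coefficient kappa(T)):

      k^{\mathrm{CVT}}(T) = \min_{s} k^{\mathrm{GT}}(T,s), \qquad
      k^{\mathrm{GT}}(T,s) = \frac{\sigma k_B T}{h}\,
          \frac{Q^{\mathrm{GT}}(T,s)}{\Phi^{\mathrm{R}}(T)}\,
          e^{-V_{\mathrm{MEP}}(s)/k_B T}

    where Q^GT is the generalized transition state partition function, Phi^R the reactant partition function per unit volume, V_MEP the potential along the minimum energy path, and sigma the symmetry number.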

  17. FROG - Fingerprinting Genomic Variation Ontology.

    Directory of Open Access Journals (Sweden)

    E Abinaya

    Genetic variations play a crucial role in differential phenotypic outcomes. Given the complexity in establishing this correlation and the enormous data available today, it is imperative to design machine-readable, efficient methods to store, label, search and analyze this data. A semantic approach, FROG: "FingeRprinting Ontology of Genomic variations" is implemented to label variation data, based on its location, function and interactions. FROG has six levels to describe the variation annotation, namely, chromosome, DNA, RNA, protein, variations and interactions. Each level is a conceptual aggregation of logically connected attributes, each of which comprises various properties for the variant. For example, at the chromosome level, one of the attributes is the location of the variation, which has two properties, allosomes or autosomes. Another attribute is variation kind, which has four properties, namely, indel, deletion, insertion, and substitution. Likewise, there are 48 attributes and 278 properties to capture the variation annotation across six levels. Each property is then assigned a bit score which in turn leads to the generation of a binary fingerprint based on the combination of these properties (mostly taken from existing variation ontologies). FROG is a novel and unique method designed for the purpose of labeling the entire variation data generated to date for efficient storage, search and analysis. A web-based platform is designed as a test case for users to navigate sample datasets and generate fingerprints. The platform is available at http://ab-openlab.csir.res.in/frog.
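
    A minimal sketch of how a binary fingerprint could be assembled from attribute/property flags and compared between variants; the slots below are hypothetical and are not taken from the FROG ontology itself:

      # Hypothetical ordered list of (attribute, property) slots; FROG defines 48 attributes
      # and 278 properties across six levels, this toy example uses only a few.
      SLOTS = [
          ("chromosome.location", "autosome"),
          ("chromosome.location", "allosome"),
          ("variation.kind", "substitution"),
          ("variation.kind", "insertion"),
          ("variation.kind", "deletion"),
          ("variation.kind", "indel"),
          ("protein.effect", "missense"),
          ("protein.effect", "synonymous"),
      ]

      def fingerprint(annotations):
          """Return a bit string with a 1 for every (attribute, property) the variant has."""
          present = set(annotations)
          return "".join("1" if slot in present else "0" for slot in SLOTS)

      def tanimoto(fp_a, fp_b):
          """Bitwise similarity between two fingerprints of equal length."""
          a, b = int(fp_a, 2), int(fp_b, 2)
          inter = bin(a & b).count("1")
          union = bin(a | b).count("1")
          return inter / union if union else 1.0

      v1 = fingerprint([("chromosome.location", "autosome"), ("variation.kind", "substitution"),
                        ("protein.effect", "missense")])
      v2 = fingerprint([("chromosome.location", "autosome"), ("variation.kind", "deletion")])
      print(v1, v2, f"similarity = {tanimoto(v1, v2):.2f}")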

  18. SU-F-T-70: A High Dose Rate Total Skin Electron Irradiation Technique with A Specific Inter-Film Variation Correction Method for Very Large Electron Beam Fields

    Energy Technology Data Exchange (ETDEWEB)

    Yang, X; Rosenfield, J; Dong, X; Elder, E; Dhabaan, A [Emory University, Atlanta, GA (United States)

    2016-06-15

    Purpose: Rotational total skin electron irradiation (RTSEI) is used in the treatment of cutaneous T-cell lymphoma. Due to inter-film uniformity variations, the dosimetric measurement of a very large, low-energy electron beam is challenging. This work provides a method to improve the accuracy of flatness and symmetry for a very large treatment field of low electron energy used in dual-beam RTSEI. Methods: RTSEI is delivered by dual fields at gantry angles of 270 ± 20 degrees to cover the upper and lower halves of the patient body with acceptable beam uniformity. The field size is on the order of 230 cm in vertical height and 120 cm in horizontal width, and the beam energy is a degraded 6 MeV (6 mm of PMMA spoiler). We utilized parallel plate chambers, Gafchromic films, and OSLDs as measuring devices for absolute dose, B-Factor, stationary and rotational percent depth dose, and beam uniformity. To reduce inter-film dosimetric variation, we introduced a new specific correction method to analyze beam uniformity. This correction method uses image processing techniques that combine film values before and after irradiation to compensate for the dose-response differences among films. Results: Stationary and rotational depth dose measurements demonstrated that the Rp is 2 cm for rotational delivery and that the maximum dose is shifted toward the surface (3 mm). The dosimetry for the phantom showed that dose uniformity was reduced to 3.01% for the vertical flatness and 2.35% for the horizontal flatness after correction, thus achieving better flatness and uniformity. The absolute dose readings of calibrated films after our correction matched the readings from the OSLDs. Conclusion: The proposed correction method for Gafchromic films will be a useful tool to correct inter-film dosimetric variation in future clinical film dosimetry verification of very large fields, allowing the optimization of other parameters.

  19. Screening mitochondrial DNA sequence variation as an alternative method for tracking established and outbreak populations of Queensland fruit fly at the species southern range limit

    OpenAIRE

    Blacket, Mark J.; Malipatil, Mali B.; Semeraro, Linda; Gillespie, Peter S.; Dominiak, Bernie C.

    2017-01-01

    Abstract Understanding the relationship between incursions of insect pests and established populations is critical to implementing effective control. Studies of genetic variation can provide powerful tools to examine potential invasion pathways and longevity of individual pest outbreaks. The major fruit fly pest in eastern Australia, Queensland fruit fly Bactrocera tryoni (Froggatt), has been subject to significant long-term quarantine and population reduction control measures in the major ho...

  20. A Map Spectrum-Based Spatiotemporal Clustering Method for GDP Variation Pattern Analysis Using Nighttime Light Images of the Wuhan Urban Agglomeration

    Directory of Open Access Journals (Sweden)

    Penglin Zhang

    2017-05-01

    Estimates of gross domestic product (GDP) play a significant role in evaluating the economic performance of a country or region. Understanding the spatiotemporal process of GDP growth is important for estimating or monitoring the economic state of a region. Various GDP studies have been reported, and several studies have focused on spatiotemporal GDP variations. This study presents a map spectrum-based clustering approach to analyze the spatiotemporal variation patterns of GDP growth. First, a sequence of nighttime light images from the Defense Meteorological Satellite Program-Operational Linescan System (DMSP-OLS) is used to support the spatial distribution of statistical GDP data. Subsequently, the time spectrum of each spatial unit is generated using a time series of dasymetric GDP maps, and then the spatial units with similar time spectra are clustered into one class. Each category has a similar spatiotemporal GDP variation pattern. Finally, the proposed approach is applied to analyze the spatiotemporal patterns of GDP growth in the Wuhan urban agglomeration. The experimental results illustrated that regional discrepancies in GDP growth existed in the study area.

  1. Exploring language variation across Europe

    DEFF Research Database (Denmark)

    Hovy, Dirk; Johannsen, Anders Trærup

    2016-01-01

    training in both variational linguistics and computational methods, a combination that is still not common. We take a first step here to alleviate the problem by providing an interface to explore large-scale language variation along several socio-demographic factors without programming knowledge. It makes...

  2. Method

    Directory of Open Access Journals (Sweden)

    Ling Fiona W.M.

    2017-01-01

    Rapid prototyping of microchannels has gained a lot of attention from researchers along with the rapid development of microfluidic technology. The conventional methods carry several disadvantages, such as high cost, long fabrication times, the need for high operating pressure and temperature, and the expertise required to operate the equipment. In this work, a new method adapting the xurography technique is introduced to replace the conventional method of fabricating microchannels. The novelty of this study is replacing the adhesion film with a clear plastic film, which was used to cut the design of the microchannel, as this material is more suitable for fabricating more complex microchannel designs. The microchannel was then molded using polydimethylsiloxane (PDMS) and bonded with a clean glass to produce a closed microchannel. The microchannel produced had a clean edge, indicating that a good master mold was produced using the cutting plotter, and the bonding between the PDMS and glass was good, with no leakage observed. The materials used in this method are cheap and the total time consumed is less than 5 hours, making this method suitable for rapid prototyping of microchannels.

  3. A topology-based method to mitigate the dosimetric uncertainty caused by the positional variation of the boost volume in breast conservative radiotherapy.

    Science.gov (United States)

    Lee, Peng-Yi; Lin, Chih-Yuan; Chen, Shang-Wen; Chien, Chun-Ru; Chu, Chun-Nan; Hsu, Hsiu-Ting; Liang, Ji-An; Lin, Ying-Jun; Shiau, An-Cheng

    2017-03-20

    To improve the local control rate in patients with breast cancer receiving adjuvant radiotherapy after breast conservative surgery, an additional boost dose to the tumor bed could be delivered simultaneously via the simultaneous integrated boost (SIB) modulated technique. However, the position of the tumor bed kept changing during the treatment course as the treatment position was aligned to bony anatomy. This study aimed to analyze the positional uncertainties between bony anatomy and tumor bed, and a topology-based approach was derived to stratify patients with high variation in tumor bed localization. Sixty patients with early-stage breast cancer or ductal carcinoma in situ were enrolled. All received adjuvant whole breast radiotherapy with or without local boost via the SIB technique. The delineation of the tumor bed was defined by incorporating the anatomy of the seroma, adjacent surgical clips, and any architectural distortion on computed tomography simulation. A total of 1740 on-board images were retrospectively analyzed. Positional uncertainty of the tumor bed was assessed by its systematic error (SE) and random error (RE) in four components: the anterior-posterior (AP), cranial-caudal (CC), and left-right (LR) directions and couch rotation (CR). Age, tumor location, and body-mass factors including volume of breast, volume of tumor bed, breast thickness, and body mass index (BMI) were analyzed for their predictive role. The appropriate margin to accommodate the positional uncertainty of the boost volume was assessed, and new plans in which the tumor bed expanded by this margin was designated as the high-risk planning target volume (PTV-H) were created retrospectively to evaluate the impact on organs at risk. In univariate analysis, a larger breast thickness, larger breast volume, higher BMI, and different tumor locations correlated with a greater positional uncertainty of the tumor bed. However, BMI was the only factor associated with displacements of surgical clips in the multivariate analysis

  4. method

    Directory of Open Access Journals (Sweden)

    L. M. Kimball

    2002-01-01

    This paper presents an interior point algorithm to solve the multiperiod hydrothermal economic dispatch (HTED). The multiperiod HTED is a large scale nonlinear programming problem. Various optimization methods have been applied to the multiperiod HTED, but most neglect important network characteristics or require decomposition into thermal and hydro subproblems. The algorithm described here exploits the special bordered block diagonal structure and sparsity of the Newton system for the first order necessary conditions to result in a fast efficient algorithm that can account for all network aspects. Applying this new algorithm challenges a conventional method for the use of available hydro resources known as the peak shaving heuristic.

  5. The uncertainty of estimating the thickness of soft sediments with the HVSR method: A computational point of view on weak lateral variations

    Science.gov (United States)

    Bignardi, Samuel

    2017-10-01

    The use of the ratio of microtremor spectra, as computed by Nakamura's technique, was recently proved successful for evaluating the thickness of sedimentary covers lying over both shallow and deep rocky bedrock, thus enabling bedrock mapping. The experimental success of this application and its experimental uncertainties are today reported in many publications. To map bedrock, two approaches exist. The first is to assume a constant shear wave velocity profile for the sediments. The second, and most preferable, is Ibs-von Seht and Wohlenberg's, based on correlating the main peak of Nakamura's curves with well information. In the latter approach, the main sources of uncertainty addressed by authors, despite the lack of formal proof, comprise local deviations of the subsurface from the assumed model. I first discuss the reliability of the simplified constant-velocity approach, showing its limitations. As a second task, I evaluate the uncertainty of the Ibs-von Seht and Wohlenberg approach with a focus on local subsurface variations. Since the experimental basis is well established, I focus my investigation entirely on numerical simulations to evaluate to what extent local subsurface deviations from the assumed model may affect the outcome of a bedrock mapping survey. Further, the present investigation strategy suggests that modeling and inversion, through exploration of the parameter space around the reference model, may prove a very convenient tool when lateral variations are suspected to exist or when the number of available wells is not sufficient to obtain an accurate frequency-depth regression.
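
    A minimal sketch of the frequency-depth regression in the style of Ibs-von Seht and Wohlenberg (a power law h = a*f0^b fitted in log-log space), alongside the constant shear-wave-velocity quarter-wavelength estimate it is compared with; all numbers are synthetic:

      import numpy as np

      # synthetic calibration set: HVSR resonance frequency f0 (Hz) at wells of known
      # sediment thickness h (m); values are illustrative only
      f0 = np.array([0.5, 0.8, 1.2, 2.0, 3.5, 6.0, 9.0])
      h = np.array([210., 140., 90., 55., 30., 17., 11.])

      # power-law regression h = a * f0**b, fitted as a straight line in log-log space
      b, log_a = np.polyfit(np.log(f0), np.log(h), 1)
      a = np.exp(log_a)
      print(f"h ~ {a:.1f} * f0^{b:.2f}")

      # constant shear-wave-velocity alternative: quarter-wavelength relation h = Vs / (4 f0)
      vs = 400.0                                  # assumed average Vs of the cover (m/s)
      f0_new = 1.5                                # measured HVSR peak at a new site (Hz)
      print("regression estimate :", a * f0_new ** b)
      print("quarter-wavelength  :", vs / (4 * f0_new))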

  6. Discrimination Method of the Volatiles from Fresh Mushrooms by an Electronic Nose Using a Trapping System and Statistical Standardization to Reduce Sensor Value Variation

    Science.gov (United States)

    Fujioka, Kouki; Shimizu, Nobuo; Manome, Yoshinobu; Ikeda, Keiichi; Yamamoto, Kenji; Tomizawa, Yasuko

    2013-01-01

    Electronic noses have the benefit of obtaining smell information in a simple and objective manner; therefore, many applications have been developed for broad analysis areas such as food, drinks, cosmetics, medicine, and agriculture. However, measurement values from electronic noses have a tendency to vary under humidity or alcohol exposure conditions, since several types of sensors in the devices are affected by such variables. Consequently, we show three techniques for reducing the variation of sensor values: (1) using a trapping system to reduce the interfering components; (2) performing statistical standardization (calculation of z-score); and (3) selecting suitable sensors. With these techniques, we discriminated the volatiles of four types of fresh mushrooms: golden needle (Flammulina velutipes), white mushroom (Agaricus bisporus), shiitake (Lentinus edodes), and eryngii (Pleurotus eryngii) among six fresh mushrooms (hen of the woods (Grifola frondosa), shimeji (Hypsizygus marmoreus) plus the above mushrooms). Additionally, we succeeded in discrimination of white mushroom, only comparing with artificial mushroom flavors, such as champignon flavor and truffle flavor. In conclusion, our techniques will expand the options to reduce variations in sensor values. PMID:24233028
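
    A minimal sketch of the per-sensor statistical standardization (z-score) step described above; the sensor matrix is synthetic:

      import numpy as np

      # rows = measurements (e.g., mushroom headspace samples), columns = sensors
      rng = np.random.default_rng(3)
      raw = rng.normal(loc=[100, 5, 0.2, 40], scale=[20, 1, 0.05, 8], size=(12, 4))

      # z-score each sensor column (subtract its mean, divide by its standard deviation)
      # so sensors with different baselines and drift contribute on a comparable scale
      z = (raw - raw.mean(axis=0)) / raw.std(axis=0, ddof=1)
      print(z.mean(axis=0).round(3))            # ~0 for every sensor
      print(z.std(axis=0, ddof=1).round(3))     # ~1 for every sensor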

  7. Discrimination Method of the Volatiles from Fresh Mushrooms by an Electronic Nose Using a Trapping System and Statistical Standardization to Reduce Sensor Value Variation

    Directory of Open Access Journals (Sweden)

    Kouki Fujioka

    2013-11-01

    Electronic noses have the benefit of obtaining smell information in a simple and objective manner; therefore, many applications have been developed for broad analysis areas such as food, drinks, cosmetics, medicine, and agriculture. However, measurement values from electronic noses have a tendency to vary under humidity or alcohol exposure conditions, since several types of sensors in the devices are affected by such variables. Consequently, we show three techniques for reducing the variation of sensor values: (1) using a trapping system to reduce the interfering components; (2) performing statistical standardization (calculation of z-score); and (3) selecting suitable sensors. With these techniques, we discriminated the volatiles of four types of fresh mushrooms: golden needle (Flammulina velutipes), white mushroom (Agaricus bisporus), shiitake (Lentinus edodes), and eryngii (Pleurotus eryngii) among six fresh mushrooms (hen of the woods (Grifola frondosa), shimeji (Hypsizygus marmoreus) plus the above mushrooms). Additionally, we succeeded in discrimination of white mushroom, only comparing with artificial mushroom flavors, such as champignon flavor and truffle flavor. In conclusion, our techniques will expand the options to reduce variations in sensor values.

  8. Genome structural variation discovery and genotyping

    OpenAIRE

    Alkan, Can; Coe, Bradley P.; Eichler, Evan E.

    2011-01-01

    Comparisons of human genomes show that more base pairs are altered as a result of structural variation — including copy number variation — than as a result of point mutations. Here we review advances and challenges in the discovery and genotyping of structural variation. The recent application of massively parallel sequencing methods has complemented microarray-based methods and has led to an exponential increase in the discovery of smaller structural-variation events. Some glo...

  9. Statistics, Uncertainty, and Transmitted Variation

    Energy Technology Data Exchange (ETDEWEB)

    Wendelberger, Joanne Roth [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2014-11-05

    The field of Statistics provides methods for modeling and understanding data and making decisions in the presence of uncertainty. When examining response functions, variation present in the input variables will be transmitted via the response function to the output variables. This phenomenon can potentially have significant impacts on the uncertainty associated with results from subsequent analysis. This presentation will examine the concept of transmitted variation, its impact on designed experiments, and a method for identifying and estimating sources of transmitted variation in certain settings.
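
    A minimal sketch of transmitted variation for a response function: the first-order (delta-method) approximation Var[f(X)] ~ f'(mu)^2 Var[X], checked against Monte Carlo propagation; the response function and input distribution are illustrative:

      import numpy as np

      def f(x):
          """An example response function whose output inherits variation from the input."""
          return np.exp(0.5 * x) + x ** 2

      mu, sigma = 1.0, 0.1                      # mean and std of the input variable

      # first-order transmitted variation: Var[f(X)] ~ (f'(mu))^2 * Var[X]
      eps = 1e-6
      fprime = (f(mu + eps) - f(mu - eps)) / (2 * eps)     # numerical derivative at the mean
      var_linear = fprime ** 2 * sigma ** 2

      # Monte Carlo check
      x = np.random.default_rng(4).normal(mu, sigma, 200_000)
      var_mc = f(x).var()

      print(f"delta-method variance: {var_linear:.5f}, Monte Carlo variance: {var_mc:.5f}")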

  10. Separating common from distinctive variation.

    Science.gov (United States)

    van der Kloet, Frans M; Sebastián-León, Patricia; Conesa, Ana; Smilde, Age K; Westerhuis, Johan A

    2016-06-06

    Joint and individual variation explained (JIVE), distinct and common simultaneous component analysis (DISCO) and O2-PLS, a two-block (X-Y) latent variable regression method with an integral OSC filter can all be used for the integrated analysis of multiple data sets and decompose them in three terms: a low(er)-rank approximation capturing common variation across data sets, low(er)-rank approximations for structured variation distinctive for each data set, and residual noise. In this paper these three methods are compared with respect to their mathematical properties and their respective ways of defining common and distinctive variation. The methods are all applied on simulated data and mRNA and miRNA data-sets from GlioBlastoma Multiform (GBM) brain tumors to examine their overlap and differences. When the common variation is abundant, all methods are able to find the correct solution. With real data however, complexities in the data are treated differently by the three methods. All three methods have their own approach to estimate common and distinctive variation with their specific strength and weaknesses. Due to their orthogonality properties and their used algorithms their view on the data is slightly different. By assuming orthogonality between common and distinctive, true natural or biological phenomena that may not be orthogonal at all might be misinterpreted.

  11. Variation in fish community structure, richness, and diversity in 56 Danish lakes with contrasting depth, size, and trophic state: does the method matter?

    DEFF Research Database (Denmark)

    Menezes, Rosemberg; Borchsenius, Finn; Svenning, J.-C.

    2013-01-01

    The distribution of freshwater fish is influenced by food availability, habitat heterogeneity, competition, predation, trophic state, and presence/absence of macrophytes. This poses a challenge to monitoring, and researchers have been struggling to develop accurate sampling methods for obtaining a better understanding of fish communities. We compare fish community composition, richness, and diversity in 56 Danish lakes using data obtained by gillnetting in different lake zones and near-shore electrofishing, respectively. On average, electrofishing captured more species than offshore gillnets ... community, as all methods miss some important species that other methods capture. However, electrofishing seems to be a fast alternative to gillnets for monitoring fish species richness and composition in littoral habitats of Danish lakes.

  12. Variation of the Phytochemical Constituents and Antioxidant Activities of Zingiber officinale var. rubrum Theilade Associated with Different Drying Methods and Polyphenol Oxidase Activity.

    Science.gov (United States)

    Ghasemzadeh, Ali; Jaafar, Hawa Z E; Rahmat, Asmah

    2016-06-17

    The effects of different drying methods (freeze drying, vacuum oven drying, and shade drying) on the phytochemical constituents associated with the antioxidant activities of Z. officinale var. rubrum Theilade were evaluated to determine the optimal drying process for these rhizomes. Total flavonoid content (TFC), total phenolic content (TPC), and polyphenol oxidase (PPO) activity were measured using the spectrophotometric method. Individual phenolic acids and flavonoids, 6- and 8-gingerol and shogaol were identified by ultra-high performance liquid chromatography method. Ferric reducing antioxidant potential (FRAP) and 1,1-diphenyl-2-picrylhydrazyl (DPPH) assays were used for the evaluation of antioxidant activities. The highest reduction in moisture content was observed after freeze drying (82.97%), followed by vacuum oven drying (80.43%) and shade drying (72.65%). The highest TPC, TFC, and 6- and 8-shogaol contents were observed in samples dried by the vacuum oven drying method compared to other drying methods. The highest content of 6- and 8-gingerol was observed after freeze drying, followed by vacuum oven drying and shade drying methods. Fresh samples had the highest PPO activity and lowest content of flavonoid and phenolic acid compounds compared to dried samples. Rhizomes dried by the vacuum oven drying method represent the highest DPPH (52.9%) and FRAP activities (566.5 μM of Fe (II)/g DM), followed by freeze drying (48.3% and 527.1 μM of Fe (II)/g DM, respectively) and shade drying methods (37.64% and 471.8 μM of Fe (II)/g DM, respectively) with IC50 values of 27.2, 29.1, and 34.8 μg/mL, respectively. Negative and significant correlations were observed between PPO and antioxidant activity of rhizomes. Vacuum oven dried rhizomes can be utilized as an ingredient for the development of value-added food products as they contain high contents of phytochemicals with valuable antioxidant potential.

  13. Method

    Directory of Open Access Journals (Sweden)

    Andrey Gnatov

    2015-01-01

    Recently, vehicle body repair and recovery operations have become more and more popular. A special place here is taken by the equipment that enables the required repair operations. The most interesting are methods for the recovery of car body panels that allow straightening without disassembling the panels or damaging the existing protective coating. Now, there are several technologies for the repair and recovery of car body panels without their disassembly and dismantling. The most promising is the magnetic-pulse technology of external non-contact straightening. The basics of magnetic-pulse attraction of both ferromagnetic and non-ferromagnetic thin-walled sheet metal are explored. Inductor system calculation models of magnetic-pulse straightening tools are presented. Final analytical expressions for calculating the excited forces in the tools under consideration are introduced. According to the obtained analytical expressions, numerical evaluations of the excited forces were executed. Volumetric plots of the radial distributions of the attractive force for different types of inductors were built. The practical testing of magnetic-pulse straightening with research tools is given. Using the results of the calculations, we can create effective tools for external magnetic-pulse straightening of car body panels.

  14. Optimization of water treatment methods for the purification of peat extraction derived runoff: Evaluation of chemical treatment response to variations in incoming water quality using a 2k factorial test design

    Science.gov (United States)

    Heiderscheidt, Elisangela; Ronkanen, Anna-Kaisa; Klöve, Björn

    2013-04-01

    The sustainable use of peatland areas requires measures to minimize and when possible eradicate the identified environmental impacts. The drainage of peatlands and other peat extraction, agriculture and forestry activities are known to increase the leaching of pollutant substances resulting in the eutrophication and siltation of receiving water bodies, causing water quality deterioration. Due to the geochemistry characteristics of peat soils the quality of peatland derived runoff water is known to oscillate with location and also with variations in runoff and peak discharge occurrences. Affordable, simple and reliable purification methods that can purify waters rich in particulates, nutrients and dissolved organic carbon while capable of coping with incoming water quality variations are therefore required. Chemical treatment is considered one of the best available technologies for the purification of peat extraction runoff water in Finland; however, until recently little research had been applied on the development of this treatment method for the purification of non-point source pollution. Chemical purification, using metal salts as coagulant agents, is currently applied in several treatment facilities in Finnish peat extraction sites. Nevertheless, variations in runoff water quality and the lack of development of field process parameters has led to the application of high chemical dosages, significant and undesirable fluctuations in purification efficiency and high metal concentration in the discharging waters. This work aims to develop and optimize the chemical purification method by using high level scientific methods to evaluate the response of the purification process to variations in water quality which are typical of peatland derived runoff. The evaluation of how the purification process responds to these variations is a critical step which will enable the development of preventive measures and optimization of relevant process parameters and thus reduce the
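
    A minimal sketch of a 2^k factorial test layout and main-effect estimation of the kind named in the title, for k = 3 hypothetical treatment factors; the factor names and response values are invented for illustration:

      import itertools
      import numpy as np

      factors = ["coagulant_dose", "pH_adjustment", "mixing_speed"]   # hypothetical factors
      levels = (-1, +1)                                               # low / high coded levels

      # full 2^3 design matrix (8 runs)
      design = np.array(list(itertools.product(levels, repeat=len(factors))))

      # made-up purification responses (e.g., % removal of dissolved organic carbon) per run
      response = np.array([52., 61., 55., 70., 58., 72., 60., 83.])

      # main effect of each factor = mean response at +1 minus mean response at -1
      for j, name in enumerate(factors):
          effect = response[design[:, j] == 1].mean() - response[design[:, j] == -1].mean()
          print(f"main effect of {name}: {effect:+.1f}")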

  15. Laser-ablation-combustion-GC-IRMS -- a new method for online analysis of intra-annual variation of δ13C in tree rings

    Energy Technology Data Exchange (ETDEWEB)

    Schulze, B.; Wirth, C.; Linke, P.; Brand, W. A.; Kuhlmann, I.; Horna, V.; Schulze, E-D. [Max Planck Institut fuer Biogeochemie, Jena (Germany)

    2004-11-01

    A method for high resolution on-line determination of δ13C in tree rings is described. The proposed method combines laser ablation, combustion, gas chromatography and isotope ratio mass spectrometry. Sample material was extracted from two Scotch pine tree cores at six minute intervals using ultra-violet laser. The wood dust was combusted to carbon dioxide at 700 degrees C, separated from other gases on a gas chromatography column and injected into an isotope ratio mass spectrometer after removal of water vapor. Results showed patterns of δ13C along three parallel ablation lines on the same core to be highly congruent. The isotopic patterns of the two Scotch pine trees were broadly similar, suggesting a signal related to the forest stand's climate history. There was a sharp decline in δ13C during latewood formation and a rapid increase with early growth. Overall, the proposed method showed high accuracy when compared to conventional methods involving microtome slicing, elemental analysis and isotope ratio mass spectrometry. Major application of the new ablation method is expected to be in high-resolution dendroclimatology and plant physiology. 62 refs., 9 figs.

  16. Variation of the Phytochemical Constituents and Antioxidant Activities of Zingiber officinale var. rubrum Theilade Associated with Different Drying Methods and Polyphenol Oxidase Activity

    Directory of Open Access Journals (Sweden)

    Ali Ghasemzadeh

    2016-06-01

    The effects of different drying methods (freeze drying, vacuum oven drying, and shade drying) on the phytochemical constituents associated with the antioxidant activities of Z. officinale var. rubrum Theilade were evaluated to determine the optimal drying process for these rhizomes. Total flavonoid content (TFC), total phenolic content (TPC), and polyphenol oxidase (PPO) activity were measured using the spectrophotometric method. Individual phenolic acids and flavonoids, 6- and 8-gingerol and shogaol were identified by ultra-high performance liquid chromatography method. Ferric reducing antioxidant potential (FRAP) and 1,1-diphenyl-2-picrylhydrazyl (DPPH) assays were used for the evaluation of antioxidant activities. The highest reduction in moisture content was observed after freeze drying (82.97%), followed by vacuum oven drying (80.43%) and shade drying (72.65%). The highest TPC, TFC, and 6- and 8-shogaol contents were observed in samples dried by the vacuum oven drying method compared to other drying methods. The highest content of 6- and 8-gingerol was observed after freeze drying, followed by vacuum oven drying and shade drying methods. Fresh samples had the highest PPO activity and lowest content of flavonoid and phenolic acid compounds compared to dried samples. Rhizomes dried by the vacuum oven drying method represent the highest DPPH (52.9%) and FRAP activities (566.5 μM of Fe (II)/g DM), followed by freeze drying (48.3% and 527.1 μM of Fe (II)/g DM, respectively) and shade drying methods (37.64% and 471.8 μM of Fe (II)/g DM, respectively) with IC50 values of 27.2, 29.1, and 34.8 μg/mL, respectively. Negative and significant correlations were observed between PPO and antioxidant activity of rhizomes. Vacuum oven dried rhizomes can be utilized as an ingredient for the development of value-added food products as they contain high contents of phytochemicals with valuable antioxidant potential.

  17. Enabling Predictive Simulation and UQ of Complex Multiphysics PDE Systems by the Development of Goal-Oriented Variational Sensitivity Analysis and A Posteriori Error Estimation Methods

    Energy Technology Data Exchange (ETDEWEB)

    Ginting, Victor

    2014-03-15

    It was demonstrated that a posteriori analyses in general, and in particular one that uses adjoint methods, can accurately and efficiently compute numerical error estimates and sensitivity for critical Quantities of Interest (QoIs) that depend on a large number of parameters. Activities include: analysis and implementation of several time integration techniques for solving systems of ODEs as typically obtained from spatial discretization of PDE systems; multirate integration methods for ordinary differential equations; formulation and analysis of an iterative multi-discretization Galerkin finite element method for multi-scale reaction-diffusion equations; investigation of an inexpensive postprocessing technique to estimate the error of finite element solutions of second-order quasi-linear elliptic problems measured in some global metrics; investigation of an application of the residual-based a posteriori error estimates to the symmetric interior penalty discontinuous Galerkin method for solving a class of second order quasi-linear elliptic problems; a posteriori analysis of explicit time integrations for systems of linear ordinary differential equations; derivation of accurate a posteriori goal oriented error estimates for a user-defined quantity of interest for two classes of first and second order IMEX schemes for advection-diffusion-reaction problems; postprocessing of finite element solutions; and a Bayesian framework for uncertainty quantification of porous media flows.

  18. Assessing the Influence of Seasonal and Spatial Variations on the Estimation of Secondary Organic Carbon in Urban Particulate Matter by Applying the EC-Tracer Method

    Directory of Open Access Journals (Sweden)

    Sandra Wagener

    2014-04-01

    The elemental carbon (EC) tracer method was applied to PM10 and PM1 data of three sampling sites in the City of Berlin from February to October 2010. The sites were characterized by differing exposure to traffic and vegetation. The aim was to determine the secondary organic carbon (SOC) concentration and to describe the parameters influencing the application of the EC-tracer method. The evaluation was based on comparisons with results obtained from positive matrix factorization (PMF) applied to the same samples. To obtain site- and season-representative primary OC/EC ratios ([OC/EC]p), the EC-tracer method was performed separately for each station, and additionally separately for samples with high and low contributions of biomass burning. Estimated SOC concentrations for all stations were between 11% and 33% of total OC. SOC concentrations obtained with PMF exceeded EC-tracer results by more than 100% at the park in the period with low biomass burning emissions in PM10. The deviations were attributed, among other factors, to the high ratio of biogenic to combustion emissions and to direct exposure to vegetation. The occurrence of biomass burning emissions, in contrast, led to increased SOC concentrations compared to PMF in PM10. The obtained results indicate that the EC-tracer method provides results that compare well with PMF if sites are strongly influenced by one characteristic primary combustion source, but it was found to be adversely influenced by direct and relatively high biogenic emissions.
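
    The core relation of the EC-tracer method is SOC = OC_total - EC * (OC/EC)_p, with the primary ratio (OC/EC)_p estimated from periods dominated by primary emissions. A minimal sketch with made-up concentrations (the rule used to pick the primary-dominated samples is an illustrative choice):

      import numpy as np

      # measured organic and elemental carbon (ug C/m^3); values are illustrative
      oc = np.array([4.2, 5.1, 6.8, 3.9, 7.4, 5.6])
      ec = np.array([1.1, 1.4, 1.5, 1.0, 1.6, 1.3])

      # estimate the primary OC/EC ratio from the samples with the lowest OC/EC,
      # i.e., periods assumed to be dominated by primary combustion emissions
      ratio = oc / ec
      oc_ec_primary = np.mean(np.sort(ratio)[:2])       # here: mean of the two lowest ratios

      poc = ec * oc_ec_primary                          # primary organic carbon
      soc = np.clip(oc - poc, 0.0, None)                # secondary organic carbon, not below zero
      print("(OC/EC)_p =", round(oc_ec_primary, 2))
      print("SOC fraction of OC:", np.round(soc / oc, 2))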

  19. An Ensemble Three-Dimensional Constrained Variational Analysis Method to Derive Large-Scale Forcing Data for Single-Column Models

    Science.gov (United States)

    Tang, Shuaiqi

    Atmospheric vertical velocities and advective tendencies are essential as large-scale forcing data to drive single-column models (SCM), cloud-resolving models (CRM) and large-eddy simulations (LES). They cannot be directly measured or easily calculated with great accuracy from field measurements. In the Atmospheric Radiation Measurement (ARM) program, a constrained variational algorithm (1DCVA) has been used to derive large-scale forcing data over a sounding network domain with the aid of flux measurements at the surface and top of the atmosphere (TOA). We extend the 1DCVA algorithm into three dimensions (3DCVA) along with other improvements to calculate gridded large-scale forcing data. We also introduce an ensemble framework using different background data, error covariance matrices and constraint variables to quantify the uncertainties of the large-scale forcing data. The results of the sensitivity study show that the derived forcing data and SCM simulated clouds are more sensitive to the background data than to the error covariance matrices and constraint variables, while horizontal moisture advection has relatively large sensitivities to the precipitation, the dominant constraint variable. Using a mid-latitude cyclone case study on March 3rd, 2000 at the ARM Southern Great Plains (SGP) site, we investigate the spatial distribution of diabatic heating sources (Q1) and moisture sinks (Q2), and show that they are consistent with the satellite clouds and intuitive structure of the mid-latitude cyclone. We also evaluate the Q1 and Q2 in analysis/reanalysis, finding that the regional analysis/reanalysis all tend to underestimate the sub-grid scale upward transport of moist static energy in the lower troposphere. With the uncertainties from large-scale forcing data and observation specified, we compare SCM results and observations and find that models have large biases in cloud properties which could not be fully explained by the uncertainty from the large-scale forcing

  20. Using an Explicit Emission Tagging Method in Global Modeling of Source-Receptor Relationships for Black Carbon in the Arctic: Variations, Sources and Transport Pathways

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Hailong; Rasch, Philip J.; Easter, Richard C.; Singh, Balwinder; Zhang, Rudong; Ma, Po-Lun; Qian, Yun; Ghan, Steven J.; Beagley, Nathaniel

    2014-11-27

    We introduce an explicit emission tagging technique in the Community Atmosphere Model to quantify source-region-resolved characteristics of black carbon (BC), focusing on the Arctic. Explicit tagging of BC source regions without perturbing the emissions makes it straightforward to establish source-receptor relationships and transport pathways, providing a physically consistent and computationally efficient approach to produce a detailed characterization of the destiny of regional BC emissions and the potential for mitigation actions. Our analysis shows that the contributions of major source regions to the global BC burden are not proportional to the respective emissions due to strong region-dependent removal rates and lifetimes, while the contributions to BC direct radiative forcing show a near-linear dependence on their respective contributions to the burden. Distant sources contribute to BC in remote regions mostly in the mid- and upper troposphere, having much less impact on lower-level concentrations (and deposition) than on burden. Arctic BC concentrations, deposition and source contributions all have strong seasonal variations. Eastern Asia contributes the most to the wintertime Arctic burden. Northern Europe emissions are more important to both surface concentration and deposition in winter than in summer. The largest contribution to Arctic BC in the summer is from Northern Asia. Although local emissions contribute less than 10% to the annual mean BC burden and deposition within the Arctic, the per-emission efficiency is much higher than for major non-Arctic sources. The interannual variability (1996-2005) due to meteorology is small in annual mean BC burden and radiative forcing but is significant in yearly seasonal means over the Arctic. When a slow aging treatment of BC is introduced, the increase of BC lifetime and burden is source-dependent. Global BC forcing-per-burden efficiency also increases primarily due to changes in BC vertical distributions. The

  1. Variations in the ability of general medical practitioners to apply two methods of clinical audit: A five-year study of assessment by peer review.

    Science.gov (United States)

    McKay, John; Bowie, Paul; Lough, Murray

    2006-12-01

    Clinical audit has a central role in the NHS clinical governance agenda and the professional appraisal of medical practitioners in the UK. However, concerns have been raised about the poor design and impact of clinical audit studies and the ability of practitioners to apply audit methods. One method of making informed judgements on audit performance is by peer review. In the west of Scotland a voluntary peer review model has been open to general practitioners since 1999, while general practice trainees are compelled to participate as part of summative assessment. The study aimed to compare the outcomes of peer review for two methods of audit undertaken by different professional and academic groups of doctors. Participants submitted a criterion audit or significant event analysis in standard formats for review by two informed general practitioners (GPs) using appropriate instruments. Peer review outcome data and the professional status of doctors participating were generated by computer search. Differences in proportions of those gaining a satisfactory peer review for each group were calculated. Of 1002 criterion audit submissions, 552 (55%) were judged to be satisfactory. GP registrars were significantly more likely than GP trainers (P groups (P peer review. GPs in non-training practices were less likely to achieve a satisfactory review than registrars (P groups gaining a similar proportion of satisfactory assessments, although GP registrars may have outperformed non-training practice GPs (P = 0.05). A significant proportion of GPs may be unable to adequately apply audit methods, potentially raising serious questions about the effectiveness of clinical audit as a health care improvement policy in general medical practice.

  2. Identifying indicators of the spatial variations of agricultural practices by a tree partitioning methods: the case of weed control practices in vine growing catchment

    OpenAIRE

    2009-01-01

    Environmental impact assessments of agricultural practices on a regional scale may be computed by running spatially distributed biophysical models using mapped input data on agricultural practices. In cases of hydrological impact assessments, such as herbicide pollution through run-off, methods for generating these data over the entire water resource catchment and at the plot resolution are needed. In this study, we aimed to identify indicators for simulating the spatial distribution of weed ...

  3. Identifying indicators of the spatial variation of agricultural practices by a tree partitioning method : the case of weed control practices in a vine growing catchment

    OpenAIRE

    Biarnès, Anne; J. S. Bailly; Boissieux, Yannick

    2009-01-01

    Environmental impact assessments of agricultural practices on a regional scale may be computed by running spatially distributed biophysical models using mapped input data on agricultural practices. In cases of hydrological impact assessments, such as herbicide pollution through runoff, methods for generating these data over the entire water resource catchment and at the plot resolution are needed. In this study, we aimed to identify indicators for simulating the spatial distribution of weed c...

  4. Quantifying the Spatial Variations of Hyporheic Water Exchange at Catchment Scale Using the Thermal Method: A Case Study in the Weihe River, China

    OpenAIRE

    Junlong Zhang; Jinxi Song; Yongqing Long; Yan Zhang; Bo Zhang; Yuqi Wang; Yuanyuan Wang

    2017-01-01

    Understanding the dynamics of hyporheic water exchange (HWE) has been limited by the hydrological heterogeneity at large catchment scale. The thermal method has been widely used to understand water exchange patterns in a hyporheic zone. This study was conducted in the Weihe River catchment in Shaanxi Province, China. A conceptual model was developed to determine water transfer patterns, and a one-dimensional heat diffusion-advection equation was employed to estimate vertical fluxes of ten dif...

  5. Observer variation factor on advanced method for accurate, robust, and efficient spectral fitting of java based magnetic resonance user interface for MRS data analysis

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Suk Jun [Dept. of Biomedical Laboratory Science, College of Health Science, Cheongju University, Cheongju (Korea, Republic of); Yu, Seung Man [Dept. of Radiological Science, College of Health Science, Gimcheon University, Gimcheon (Korea, Republic of)

    2016-06-15

    The purpose of this study was to examine the measurement error factors in AMARES of the jMRUI method for quantitative magnetic resonance spectroscopy (MRS) analysis by skilled and unskilled observers, and to identify the sources of inter-observer variation. The point-resolved spectroscopy sequence was used to acquire magnetic resonance spectroscopy data from the liver of a 10-week-old male Sprague-Dawley rat. The ratio of the methylene protons ((-CH2-)n) at 1.3 ppm to the water protons (H2O) at 4.7 ppm was calculated with the LCModel software to serve as the reference value. Seven unskilled observers calculated total lipid (methylene/water) using the jMRUI AMARES technique twice, at a one-week interval, and intraclass correlation coefficient (ICC) statistical analysis was conducted with SPSS software. The inter-observer reliability (ICC, Cronbach's alpha) was less than 0.1. The average total lipid value of the seven observers (0.096±0.038) was 50% higher than the LCModel reference value. To obtain the same results as LCModel, the jMRUI AMARES analysis method needs to minimize the presence of residual metabolites by identifying the metabolite MRS profile.
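    The inter-observer reliability figure quoted above was obtained with SPSS; purely as an illustration of how a Cronbach's alpha of that kind can be computed, here is a minimal Python sketch. The array shape, observer count and all numerical values are hypothetical, not data from the study.

        # Minimal sketch: Cronbach's alpha for inter-observer reliability.
        # Rows = samples, columns = observers; all values below are invented.
        import numpy as np

        def cronbach_alpha(scores):
            """scores: (n_samples, k_observers) matrix of measurements."""
            k = scores.shape[1]
            item_variances = scores.var(axis=0, ddof=1)      # variance of each observer's ratings
            total_variance = scores.sum(axis=1).var(ddof=1)  # variance of per-sample totals
            return (k / (k - 1)) * (1.0 - item_variances.sum() / total_variance)

        # Seven observers rating total lipid (methylene/water) on ten samples (made-up values);
        # large observer noise relative to the signal yields a low alpha.
        rng = np.random.default_rng(0)
        signal = rng.normal(0.06, 0.01, size=(10, 1))
        ratings = signal + rng.normal(0.0, 0.03, size=(10, 7))
        print(f"Cronbach's alpha: {cronbach_alpha(ratings):.3f}")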

  6. Abnormal variation of magnetic properties with Ce content in (PrNdCe)2Fe14B sintered magnets prepared by dual alloy method

    Science.gov (United States)

    Xue-Feng, Zhang; Jian-Ting, Lan; Zhu-Bai, Li; Yan-Li, Liu; Le-Le, Zhang; Yong-Feng, Li; Qian, Zhao

    2016-05-01

    Resource-saving (PrNdCe)2Fe14B sintered magnets with nominal composition (PrNd)15-x Ce x Fe77B8 (x = 0-10) were prepared using a dual alloy method by mixing (PrNd)5Ce10Fe77B8 with (PrNd)15Fe77B8 powders. For Ce atomic percent of 1% and 2%, coercivity decreases dramatically. With further increase of Ce atomic percent, the coercivity increases, peaks at 6.38 kOe in (PrNd)11Ce4Fe77B8, and then declines gradually. The abnormal dependence of coercivity is likely related to the inhomogeneity of rare earth chemical composition in the intergranular phase, where the PrNd concentration depends strongly on the amount of (PrNd)5Ce10Fe77B8 powder added. In addition, for Ce atomic percent of 8%, 7%, and 6% the coercivity is higher than that of magnets prepared by the conventional method, which shows the advantage of the dual alloy method in preparing high-abundance rare earth magnets. Project supported by the National Natural Science Foundation of China (Grant Nos. 51461033, 51571126, 51541105, and 11547032), the Natural Science Foundation of Inner Mongolia, China (Grant No. 2013MS0110), and the Inner Mongolia University of Science and Technology Innovation Fund, China.

  7. Evaluation and Modeling of the Variation of Electromagnetic Field on the Cross Section of a Transmission Line Using Finite Difference Method

    Directory of Open Access Journals (Sweden)

    Jorge I. Silva O.

    2015-06-01

    Full Text Available This paper presents an approach to characterizing power lines in order to identify their level of operation from the power grid planning stage onward. Modeling a power line required computational tools to generate a mathematical model in MATLAB, based on the finite difference method, representing the electromagnetic field (EMF) contribution. The results were contrasted with real measured values taken from a cross section of a previously modeled power line. Statistical analysis showed an accurate estimation of the electric and magnetic fields emitted by the line, with the plotted curve reproducing the measured shape and values within an acceptable range.
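    The authors' MATLAB model is not included in the record, so the following is only a minimal Python sketch of the kind of finite-difference calculation involved: Jacobi relaxation of Laplace's equation for the electrostatic potential on a 2-D cross section, with one energized conductor and a grounded lower boundary standing in for the earth. The geometry, voltage and grid size are invented for the example.

        # Minimal finite-difference sketch (not the paper's MATLAB model): solve Laplace's
        # equation for the potential V on a 2-D cross section, then take E = -grad V.
        # Conductor position, voltage and grid are illustrative only.
        import numpy as np

        nx, ny, h = 81, 61, 0.5            # grid points and spacing (m), assumed values
        V = np.zeros((ny, nx))
        conductor = (40, nx // 2)          # (row, col) of the energized conductor, hypothetical
        V_line = 1.0                       # normalized line voltage

        for _ in range(5000):              # Jacobi iterations
            V_new = V.copy()
            V_new[1:-1, 1:-1] = 0.25 * (V[1:-1, :-2] + V[1:-1, 2:] + V[:-2, 1:-1] + V[2:, 1:-1])
            V_new[0, :] = 0.0              # ground plane (earth surface) held at 0 V
            V_new[conductor] = V_line      # conductor held at the line voltage
            V = V_new

        Ey, Ex = np.gradient(-V, h)        # field components from E = -grad V
        print("max |E| just above the ground plane:", np.hypot(Ex[1, :], Ey[1, :]).max())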

  8. Replication methods and tools in high-throughput cultivation processes - recognizing potential variations of growth and product formation by on-line monitoring

    Directory of Open Access Journals (Sweden)

    Luft Karina

    2010-03-01

    Full Text Available Abstract Background High-throughput cultivations in microtiter plates are the method of choice to express proteins from recombinant clone libraries. Such processes typically include several steps, some of which are linked by replication steps: transformation, plating, colony picking, preculture, main culture and induction. In this study, the effects of conventional replication methods and replication tools (8-channel pipette, 96-pin replicators: steel replicator with fixed or spring-loaded pins, plastic replicator with fixed pins) on the growth kinetics of Escherichia coli SCS1 pQE-30 pSE111 were observed. Growth was monitored with the BioLector, an on-line monitoring technique for microtiter plates. Furthermore, the influence of these effects on product formation of Escherichia coli pRhotHi-2-EcFbFP was investigated. Finally, a high-throughput cultivation process was simulated with Corynebacterium glutamicum pEKEx2-phoD-GFP, beginning at the colony picking step. Results Applying different replication tools and methods for one single strain resulted in large time differences between the slowest and the fastest growing culture. The shortest time difference (0.3 h) was found for the 96 cultures that were transferred with an 8-channel pipette from a thawed and mixed cryoculture, and the longest time difference (6.9 h) for cultures that were transferred with a steel replicator with fixed pins from a frozen cryoculture. The on-line monitoring of a simulated high-throughput cultivation process revealed strong variances in growth kinetics and a twofold difference in product formation. Another experiment showed that varying growth kinetics, caused by varying initial biomass concentrations (OD600 of 0.0125 to 0.2), led to strongly varying product formation upon induction at a defined point of time. Conclusions To improve the reproducibility of high-throughput cultivation processes and the comparability between different applied cultures, it is strongly

  9. The Construction of Regulatory Network for Insulin-Mediated Genes by Integrating Methods Based on Transcription Factor Binding Motifs and Gene Expression Variations

    Directory of Open Access Journals (Sweden)

    Hyeim Jung

    2015-09-01

    Full Text Available Type 2 diabetes mellitus is a complex metabolic disorder associated with multiple genetic, developmental and environmental factors. The recent advances in gene expression microarray technologies as well as network-based analysis methodologies provide groundbreaking opportunities to study type 2 diabetes mellitus. In the present study, we used previously published gene expression microarray datasets of human skeletal muscle samples collected from 20 insulin sensitive individuals before and after insulin treatment in order to construct insulin-mediated regulatory network. Based on a motif discovery method implemented by iRegulon, a Cytoscape app, we identified 25 candidate regulons, motifs of which were enriched among the promoters of 478 up-regulated genes and 82 down-regulated genes. We then looked for a hierarchical network of the candidate regulators, in such a way that the conditional combination of their expression changes may explain those of their target genes. Using Genomica, a software tool for regulatory network construction, we obtained a hierarchical network of eight regulons that were used to map insulin downstream signaling network. Taken together, the results illustrate the benefits of combining completely different methods such as motif-based regulatory factor discovery and expression level-based construction of regulatory network of their target genes in understanding insulin induced biological processes and signaling pathways.

  10. The Construction of Regulatory Network for Insulin-Mediated Genes by Integrating Methods Based on Transcription Factor Binding Motifs and Gene Expression Variations.

    Science.gov (United States)

    Jung, Hyeim; Han, Seonggyun; Kim, Sangsoo

    2015-09-01

    Type 2 diabetes mellitus is a complex metabolic disorder associated with multiple genetic, developmental and environmental factors. The recent advances in gene expression microarray technologies as well as network-based analysis methodologies provide groundbreaking opportunities to study type 2 diabetes mellitus. In the present study, we used previously published gene expression microarray datasets of human skeletal muscle samples collected from 20 insulin sensitive individuals before and after insulin treatment in order to construct insulin-mediated regulatory network. Based on a motif discovery method implemented by iRegulon, a Cytoscape app, we identified 25 candidate regulons, motifs of which were enriched among the promoters of 478 up-regulated genes and 82 down-regulated genes. We then looked for a hierarchical network of the candidate regulators, in such a way that the conditional combination of their expression changes may explain those of their target genes. Using Genomica, a software tool for regulatory network construction, we obtained a hierarchical network of eight regulons that were used to map insulin downstream signaling network. Taken together, the results illustrate the benefits of combining completely different methods such as motif-based regulatory factor discovery and expression level-based construction of regulatory network of their target genes in understanding insulin induced biological processes and signaling pathways.

  11. Photographic Study of Combustion in a Rocket Engine I : Variation in Combustion of Liquid Oxygen and Gasoline with Seven Methods of Propellant Injection

    Science.gov (United States)

    Bellman, Donald R; Humphrey, Jack C

    1948-01-01

    Motion pictures at camera speeds up to 3000 frames per second were taken of the combustion of liquid oxygen and gasoline in a 100-pound-thrust rocket engine. The engine consisted of thin contour and injection plates clamped between two clear plastic sheets forming a two-dimensional engine with a view of the entire combustion chamber and nozzle. A photographic investigation was made of the effect of seven methods of propellant injection on the uniformity of combustion. From the photographs, it was found that the flame front extended almost to the faces of the injectors with most of the injection methods, all the injection systems resulted in a considerable nonuniformity of combustion, and luminosity rapidly decreased in the divergent part of the nozzle. Pressure vibration records indicated combustion vibrations that approximately corresponded to the resonant frequencies of the length and the thickness of the chamber. The combustion temperature divided by the molecular weight of the combustion gases as determined from the combustion photographs was about 50 to 70 percent of the theoretical value.

  12. Effect of pH variation on the stability and structural properties of In(OH){sub 3} nanoparticles synthesized by co-precipitation method

    Energy Technology Data Exchange (ETDEWEB)

    Goh, Kian Wei; Wong, Yew Hoong [University of Malaya, Department of Mechanical Engineering, Faculty of Engineering, Kuala Lumpur (Malaysia); Johan, Mohd Rafie [University of Malaya, Department of Mechanical Engineering, Faculty of Engineering, Kuala Lumpur (Malaysia); University of Malaya, Nanotechnology and Catalysis Research Centre, Kuala Lumpur (Malaysia)

    2016-10-15

    Indium hydroxide (In(OH){sub 3}) nanoparticles were synthesized at various pH values (8-11) by the co-precipitation method. Their properties were characterized by X-ray diffractometry, Fourier transform infrared spectroscopy, Raman spectroscopy and transmission electron microscopy. The electrostatic stability of the nanoparticles was assessed through zeta potential measurements. The crystallite size calculated by the Scherrer equation follows a similar trend to the values obtained from the Williamson-Hall plot. TEM images show that the particle size is within the range of 11.76-20.76 nm. The maximum zeta potential of 3.68 mV, associated with the smallest particle size distribution of 92.6 nm, occurred at pH 10. Our work clearly confirms that the crystallite size, stability and morphology of In(OH){sub 3} NPs strongly depend on the pH of the precursor solution. (orig.)
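    For readers unfamiliar with the two size estimates compared above, the sketch below shows how a crystallite size is commonly extracted from XRD peak widths with the Scherrer equation and a Williamson-Hall fit. The peak positions and widths are invented numbers, not the In(OH){sub 3} data of the study.

        # Illustrative Scherrer and Williamson-Hall size estimates from XRD peak data.
        # The 2-theta positions and FWHM values below are made up, not measured data.
        import numpy as np

        wavelength = 0.15406   # Cu K-alpha wavelength (nm)
        K = 0.9                # Scherrer shape factor (assumed)
        two_theta = np.array([22.3, 31.7, 45.5, 51.2])   # degrees, hypothetical peaks
        fwhm_deg  = np.array([0.45, 0.50, 0.62, 0.68])   # degrees, hypothetical widths

        theta = np.radians(two_theta / 2)
        beta = np.radians(fwhm_deg)                      # FWHM in radians

        # Scherrer: D = K * lambda / (beta * cos(theta)), one estimate per peak.
        D_scherrer = K * wavelength / (beta * np.cos(theta))
        print("Scherrer sizes (nm):", np.round(D_scherrer, 1))

        # Williamson-Hall: beta*cos(theta) = K*lambda/D + 4*eps*sin(theta);
        # the intercept of a linear fit gives the size, the slope gives the strain.
        slope, intercept = np.polyfit(4 * np.sin(theta), beta * np.cos(theta), 1)
        print("W-H size (nm):", round(K * wavelength / intercept, 1), " strain:", round(slope, 5))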

  13. Variation in Breast Cancer-Risk Factor Associations by Method of Detection: Results From a Series of Case-Control Studies.

    Science.gov (United States)

    Sprague, Brian L; Gangnon, Ronald E; Hampton, John M; Egan, Kathleen M; Titus, Linda J; Kerlikowske, Karla; Remington, Patrick L; Newcomb, Polly A; Trentham-Dietz, Amy

    2015-06-15

    Concerns about breast cancer overdiagnosis have increased the need to understand how cancers detected through screening mammography differ from those first detected by a woman or her clinician. We investigated risk factor associations for invasive breast cancer by method of detection within a series of case-control studies (1992-2007) carried out in Wisconsin, Massachusetts, and New Hampshire (n=15,648 invasive breast cancer patients and 17,602 controls aged 40-79 years). Approximately half of case women reported that their cancer had been detected by mammographic screening and half that they or their clinician had detected it. In polytomous logistic regression models, parity and age at first birth were more strongly associated with risk of mammography-detected breast cancer than with risk of woman/clinician-detected breast cancer (P≤0.01; adjusted for mammography utilization). Among postmenopausal women, estrogen-progestin hormone use was predominantly associated with risk of woman/clinician-detected breast cancer (odds ratio (OR)=1.49, 95% confidence interval (CI): 1.29, 1.72), whereas obesity was predominantly associated with risk of mammography-detected breast cancer (OR=1.72, 95% CI: 1.54, 1.92). Among regularly screened premenopausal women, obesity was not associated with increased risk of mammography-detected breast cancer (OR=0.99, 95% CI: 0.83, 1.18), but it was associated with reduced risk of woman/clinician-detected breast cancer (OR=0.53, 95% CI: 0.43, 0.64). These findings indicate important differences in breast cancer risk factors according to method of detection.
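    The odds ratios above come from adjusted polytomous logistic regression models that cannot be reproduced from the abstract; purely as a reminder of how an odds ratio and its Wald 95% confidence interval are formed, here is a small sketch for a single unadjusted 2x2 table with invented counts.

        # Unadjusted odds ratio with a Wald 95% CI from a 2x2 table.
        # Counts are invented, not the case-control data summarized above.
        import math

        a, b = 320, 480   # exposed cases, unexposed cases (hypothetical)
        c, d = 250, 550   # exposed controls, unexposed controls (hypothetical)

        odds_ratio = (a * d) / (b * c)
        se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)
        lo = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
        hi = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
        print(f"OR = {odds_ratio:.2f} (95% CI: {lo:.2f}, {hi:.2f})")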

  14. Variation of Parameters in Differential Equations (A Variation in Making Sense of Variation of Parameters)

    Science.gov (United States)

    Quinn, Terry; Rai, Sanjay

    2012-01-01

    The method of variation of parameters can be found in most undergraduate textbooks on differential equations. The method leads to solutions of the non-homogeneous equation of the form y = u_1 y_1 + u_2 y_2, a sum of function products using solutions to the homogeneous equation y_1 and…
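    For reference, the standard second-order statement of the method (textbook material, not specific to the article above) is: for y'' + p(x)y' + q(x)y = g(x) with homogeneous solutions y_1, y_2 and Wronskian W = y_1 y_2' - y_2 y_1',

        \[
        y_p = u_1 y_1 + u_2 y_2, \qquad
        u_1' = -\frac{y_2\, g}{W}, \qquad
        u_2' = \frac{y_1\, g}{W},
        \]

    so that u_1 and u_2 follow by quadrature and the general solution is y = c_1 y_1 + c_2 y_2 + y_p.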

  15. The variation of the unitary stresses occurring in the working part in relation to the type of soil, using the finite element method

    Science.gov (United States)

    Chiorescu, E.; Chiorescu, D.

    2017-08-01

    Agriculture makes a major contribution to the sustainable development of the economy by providing food for people. Because of continuous population growth, there is an ever increasing need for food worldwide. For this reason, it is necessary to study the contact between the soil and the active tool of cultivators, in relation to the type of soil and its parameters. The physical-mechanical characteristics of soils are influenced by the travel velocity of the working part as well as by the humidity of the soil. Humidity changes the friction coefficient at the soil-steel contact, which is of significant importance for reducing the working resistance of the working tools and is responsible for increasing operating costs. The soil was modeled with a non-linear plastic constitutive law of the Drucker-Prager type, which differs from the von Mises model. The Ansys software was used for the finite element simulation, allowing the behavior of the active working part to be studied and the normal stress to be analyzed under realistic conditions, at various depths and velocities, for a soil with a clay-sandy texture.

  16. Comprehensive Quality Assessment Based Specific Chemical Profiles for Geographic and Tissue Variation in Gentiana rigescens Using HPLC and FTIR Method Combined with Principal Component Analysis

    Directory of Open Access Journals (Sweden)

    Jie Li

    2017-12-01

    Full Text Available Roots, stems, leaves, and flowers of Longdan (Gentiana rigescens Franch. ex Hemsl) were collected from six geographic origins of Yunnan Province (n = 240) to implement a quality assessment based on the contents of gentiopicroside, loganic acid, sweroside and swertiamarin and on the chemical profile, using HPLC-DAD and FTIR methods combined with principal component analysis (PCA). The content of gentiopicroside (the major iridoid glycoside) was the highest in G. rigescens, regardless of tissue and geographic origin. The level of swertiamarin was the lowest, and it could not even be detected in samples from Kunming and Qujing. Significant correlations (p < 0.05) between gentiopicroside, loganic acid, sweroside, and swertiamarin were found at the inter- or intra-tissue level, and these correlations depended strongly on geographic origin, indicating the influence of environmental conditions on the conversion and transport of secondary metabolites in G. rigescens. Furthermore, samples were reasonably classified into three clusters corresponding to large producing areas with similar climate conditions, characterized by carbohydrates, phenols, benzoates, terpenoids, aliphatic alcohols, aromatic hydrocarbons, and so forth. The present work provided global information on the chemical profile and contents of major iridoid glycosides in G. rigescens originating from six different origins, which is helpful for controlling the quality of herbal medicines systematically.

  17. Quantifying the Spatial Variations of Hyporheic Water Exchange at Catchment Scale Using the Thermal Method: A Case Study in the Weihe River, China

    Directory of Open Access Journals (Sweden)

    Junlong Zhang

    2017-01-01

    Full Text Available Understanding the dynamics of hyporheic water exchange (HWE) has been limited by hydrological heterogeneity at the large catchment scale. The thermal method has been widely used to understand water exchange patterns in a hyporheic zone. This study was conducted in the Weihe River catchment in Shaanxi Province, China. A conceptual model was developed to determine water transfer patterns, and a one-dimensional heat diffusion-advection equation was employed to estimate vertical fluxes in the hyporheic zone for ten different segments of the catchment. The water exchange flux varied from 78.47 mm/d to 23.66 mm/d, and a decreasing trend from the upstream to the downstream of the catchment was observed. The spatial correlation between the water exchange variability and distance is 0.62. The results indicate that the mountain topography is the primary driver influencing the distribution of river tributaries, and that the water exchange amount decreases from upstream to downstream along the main river channel.
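    The one-dimensional heat diffusion-advection equation invoked in both records of this study is commonly written as below (stated here for orientation; the exact notation and any simplifications used by the authors are not recoverable from the abstract):

        \[
        \frac{\partial T}{\partial t}
        = \kappa_e \frac{\partial^2 T}{\partial z^2}
        - q\,\frac{C_w}{C}\,\frac{\partial T}{\partial z},
        \]

    where T is the sediment temperature at depth z, \kappa_e the effective thermal diffusivity, q the vertical Darcy flux being estimated, and C_w/C the ratio of the volumetric heat capacity of water to that of the saturated sediment.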

  18. Evaluation of the Genetic Variation of Non Coding Control Region of BK Virus Using Nested-PCR Sequencing Method in Renal Graft Patients

    Directory of Open Access Journals (Sweden)

    A Emami

    2015-05-01

    Full Text Available Background & aim: BK polyomavirus infection is widespread, with a prevalence of more than 80% worldwide. BK virus nephropathy is one of the most important causes of graft dysfunction and rejection in renal transplant recipients. The non-coding control region of this virus plays a regulatory role in replication and amplification of the virus. The aim of this study was to evaluate the genetic patterns of this region in renal graft recipients at the Namazi Transplantation Center, Shiraz, Iran. Methods: In the present experimental study, 380 renal allograft serum samples were collected. DNA from 129 eligible samples was extracted, and the presence of the viral genome was determined by quantitative and qualitative genomic amplification and sequencing. Results: Among the patients showing symptoms of nephropathy, 76 (58.9%) were male and 46 (35.7%) were female, with a mean age of 38.0±.089 years. In total, 46 patients (35.7%) were positive for BK polyomavirus. After comparison of the genomic sequences with molecular analysis software, they were categorized into three groups and recorded in GenBank. Conclusion: About 35% of renal transplant recipients with high creatinine levels were positive for the presence of BK virus. Examination of the non-coding control region sequences of the positive samples revealed that rearranged forms were the most common genotypes among the transplant patients at this transplant center, and that these rearrangements followed a specific pattern different from the standard archetype strain.

  19. Variational integrators for electric circuits

    Energy Technology Data Exchange (ETDEWEB)

    Ober-Blöbaum, Sina, E-mail: sinaob@math.upb.de [Computational Dynamics and Optimal Control, University of Paderborn (Germany); Tao, Molei [Courant Institute of Mathematical Sciences, New York University (United States); Cheng, Mulin [Applied and Computational Mathematics, California Institute of Technology (United States); Owhadi, Houman; Marsden, Jerrold E. [Control and Dynamical Systems, California Institute of Technology (United States); Applied and Computational Mathematics, California Institute of Technology (United States)

    2013-06-01

    In this contribution, we develop a variational integrator for the simulation of (stochastic and multiscale) electric circuits. When considering the dynamics of an electric circuit, one is faced with three special situations: 1. The system involves external (control) forcing through external (controlled) voltage sources and resistors. 2. The system is constrained via the Kirchhoff current (KCL) and voltage laws (KVL). 3. The Lagrangian is degenerate. Based on a geometric setting, an appropriate variational formulation is presented to model the circuit from which the equations of motion are derived. A time-discrete variational formulation provides an iteration scheme for the simulation of the electric circuit. Depending on the discretization, the intrinsic degeneracy of the system can be canceled for the discrete variational scheme. In this way, a variational integrator is constructed that gains several advantages compared to standard integration tools for circuits; in particular, a comparison to BDF methods (which are usually the method of choice for the simulation of electric circuits) shows that even for simple LCR circuits, a better energy behavior and frequency spectrum preservation can be observed using the developed variational integrator.
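    As a toy illustration of what a variational integrator looks like in practice (a plain LC oscillator with a midpoint discretization, not the stochastic, multiscale, constrained circuits treated in the paper), the discrete Euler-Lagrange equation for the Lagrangian (L/2)(dq/dt)^2 - q^2/(2C) can be solved explicitly for the next charge value:

        # Toy variational (midpoint) integrator for an LC circuit with charge q as coordinate.
        # Discrete Lagrangian: Ld(q0, q1) = h*[ (L/2)*((q1-q0)/h)**2 - ((q0+q1)/2)**2/(2*C) ].
        # The discrete Euler-Lagrange equation D2 Ld(q_{k-1}, q_k) + D1 Ld(q_k, q_{k+1}) = 0
        # is linear in q_{k+1}; parameter values are illustrative only.
        import math

        L, C = 1.0, 1.0        # inductance and capacitance (arbitrary units)
        h = 0.05               # time step
        q_prev = 1.0                           # q(0) on the exact solution cos(t/sqrt(LC))
        q = math.cos(h / math.sqrt(L * C))     # q(h)

        for _ in range(2000):
            q_next = (L * (2 * q - q_prev) / h - (h / (4 * C)) * (q_prev + 2 * q)) \
                     / (L / h + h / (4 * C))
            q_prev, q = q, q_next

        # The energy (1/2)*L*qdot**2 + q**2/(2*C) stays bounded instead of drifting,
        # which is the hallmark of variational/symplectic schemes.
        qdot = (q - q_prev) / h
        print("energy after 2000 steps:", 0.5 * L * qdot**2 + q**2 / (2 * C))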

  20. Comparing variation across European countries

    DEFF Research Database (Denmark)

    Thygesen, Lau C; Baixauli-Pérez, Cristobal; Librero-López, Julián

    2015-01-01

    BACKGROUND: In geographical studies, population distribution is a key issue. An unequal distribution across units of analysis might entail extra-variation and produce misleading conclusions on healthcare performance variations. This article aims at assessing the impact of building more homogeneous...... units of analysis in the estimation of systematic variation in three countries. METHODS: Hospital discharges for six conditions (congestive heart failure, short-term complications of diabetes, hip fracture, knee replacement, prostatectomy in prostate cancer and percutaneous coronary intervention...... inhabitants vs. 7% in Denmark and 5% in England. Point estimates for Extremal Quotient and Interquartile Interval Ratio were lower in the three countries, particularly in less prevalent conditions. In turn, the Systematic Component of Variation and Empirical Bayes statistic were slightly lower in more...

  1. Storm surge variational assimilation model

    Directory of Open Access Journals (Sweden)

    Shi-li HUANG

    2010-06-01

    Full Text Available To eliminate errors caused by uncertainty in parameters and to further improve storm surge forecasting capability, the variational data assimilation method is applied to a storm surge model based on an unstructured grid with high spatial resolution. The method can effectively improve the forecasting accuracy of typhoon-induced storm surge by controlling the wind drag coefficient parameter. The model is first theoretically validated with synthetic data. Then, the real storm surge process induced by typhoon TC 0515 is forecast by the variational data assimilation model, and the results show the feasibility of practical application.

  2. Deriving relativistic Bohmian quantum potential using variational ...

    Indian Academy of Sciences (India)

    Deriving relativistic Bohmian quantum potential using variational method and conformal transformations ... We obtain this potential by using variational method. Then ... Department of Physics, Ferdowsi University of Mashhad, Azadi Sq., Mashhad, Iran; School of Physics, Institute for Research in Fundamental Science (IPM), ...

  3. Estimating the variation, autocorrelation, and environmental sensitivity of phenotypic selection

    NARCIS (Netherlands)

    Chevin, Luis-Miguel; Visser, Marcel E.; Tufto, Jarle

    Despite considerable interest in temporal and spatial variation of phenotypic selection, very few methods allow quantifying this variation while correctly accounting for the error variance of each individual estimate. Furthermore, the available methods do not estimate the autocorrelation of

  4. Perturbation methods

    CERN Document Server

    Nayfeh, Ali H

    2008-01-01

    1. Introduction 1 2. Straightforward Expansions and Sources of Nonuniformity 23 3. The Method of Strained Coordinates 56 4. The Methods of Matched and Composite Asymptotic Expansions 110 5. Variation of Parameters and Methods of Averaging 159 6. The Method of Multiple Scales 228 7. Asymptotic Solutions of Linear Equations 308 References and Author Index 387 Subject Index 417

  5. Gauging Variational Inference

    Energy Technology Data Exchange (ETDEWEB)

    Chertkov, Michael [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Ahn, Sungsoo [Korea Advanced Inst. Science and Technology (KAIST), Daejeon (Korea, Republic of); Shin, Jinwoo [Korea Advanced Inst. Science and Technology (KAIST), Daejeon (Korea, Republic of)

    2017-05-25

    Computing the partition function is the most important statistical inference task arising in applications of Graphical Models (GM). Since it is computationally intractable, approximate methods have been used to resolve the issue in practice, where mean-field (MF) and belief propagation (BP) are arguably the most popular and successful approaches of a variational type. In this paper, we propose two new variational schemes, coined Gauged-MF (G-MF) and Gauged-BP (G-BP), improving MF and BP, respectively. Both provide lower bounds for the partition function by utilizing the so-called gauge transformation which modifies factors of GM while keeping the partition function invariant. Moreover, we prove that both G-MF and G-BP are exact for GMs with a single loop of a special structure, even though the bare MF and BP perform badly in this case. Our extensive experiments, on complete GMs of relatively small size and on large GMs (up to 300 variables), confirm that the newly proposed algorithms outperform and generalize MF and BP.
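    The gauge-transformed schemes themselves cannot be reconstructed from the abstract, but the baseline they improve upon is easy to state. The sketch below computes the naive mean-field lower bound on log Z for a tiny pairwise binary model and compares it to the exact value by enumeration; the couplings and fields are arbitrary example numbers.

        # Naive mean-field lower bound on log Z for a small pairwise binary (spin) model,
        # checked against exact enumeration.  This is the plain MF baseline, not the
        # gauge-transformed G-MF/G-BP schemes of the paper; J and theta are arbitrary.
        import itertools
        import numpy as np

        rng = np.random.default_rng(1)
        n = 5
        J = np.triu(rng.normal(0, 0.3, size=(n, n)), 1)   # couplings on pairs i < j
        theta = rng.normal(0, 0.5, size=n)                # local fields

        def score(x):                                     # x in {-1, +1}^n
            return x @ J @ x + theta @ x

        # Exact log Z by brute force over the 2^n spin configurations.
        log_Z = np.log(sum(np.exp(score(np.array(x)))
                           for x in itertools.product([-1, 1], repeat=n)))

        # Mean-field coordinate ascent on the means m_i = E_q[x_i].
        m = np.zeros(n)
        Jsym = J + J.T
        for _ in range(200):
            for i in range(n):
                m[i] = np.tanh(theta[i] + Jsym[i] @ m)

        p = (1 + m) / 2
        entropy = -np.sum(p * np.log(p + 1e-12) + (1 - p) * np.log(1 - p + 1e-12))
        bound = m @ J @ m + theta @ m + entropy           # E_q[score] + H(q) <= log Z
        print(f"exact log Z = {log_Z:.4f}, mean-field lower bound = {bound:.4f}")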

  6. Temporal super resolution using variational methods

    DEFF Research Database (Denmark)

    Keller, Sune Høgild; Lauze, Francois Bernard; Nielsen, Mads

    2010-01-01

    Temporal super resolution (TSR) is the ability to convert video from one frame rate to another and is as such a key functionality in modern video processing systems. A higher frame rate than what is recorded is desired for high frame rate displays, for super slow-motion, and for video/film format...... and intensities are calculated simultaneously in a multiresolution setting. A frame doubling version of our algorithm is implemented and in testing it, we focus on making the motion of high contrast edges to seem smooth and thus reestablish the illusion of motion pictures....

  7. Statistical Physics Methods Provide the Exact Solution to a Long-Standing Problem of Genetics

    Science.gov (United States)

    Samal, Areejit; Martin, Olivier C.

    2015-06-01

    Analytic and computational methods developed within statistical physics have found applications in numerous disciplines. In this Letter, we use such methods to solve a long-standing problem in statistical genetics. The problem, posed by Haldane and Waddington [Genetics 16, 357 (1931)], concerns so-called recombinant inbred lines (RILs) produced by repeated inbreeding. Haldane and Waddington derived the probabilities of RILs when considering two and three genes but the case of four or more genes has remained elusive. Our solution uses two probabilistic frameworks relatively unknown outside of physics: Glauber's formula and self-consistent equations of the Schwinger-Dyson type. Surprisingly, this combination of statistical formalisms unveils the exact probabilities of RILs for any number of genes. Extensions of the framework may have applications in population genetics and beyond.

  8. Separating common from distinctive variation

    NARCIS (Netherlands)

    van der Kloet, F.M.; Sebastián-León, P.; Conesa, A.; Smilde, A.K.; Westerhuis, J.A.

    2016-01-01

    BACKGROUND: Joint and individual variation explained (JIVE), distinct and common simultaneous component analysis (DISCO) and O2-PLS, a two-block (X-Y) latent variable regression method with an integral OSC filter can all be used for the integrated analysis of multiple data sets and decompose them in

  9. Fiscal 1997 report on the survey of verification of geothermal exploration technology, etc. 1. Development of the reservoir variation exploration method (development of the fracture hydraulic exploration method); 1997 nendo chinetsu tansa gijutsu nado kensho chosa. Choryuso hendo tansaho kaihatsu (danretsu suiri tansaho kaihatsu) hokokusho

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1998-03-01

    The paper describes the fiscal 1997 results for the fracture hydraulic exploration method, developed as a method for exploring variations in geothermal reservoirs. By elucidating the hydraulic characteristics of the fracture system that forms a reservoir, technologies are established that are effective for reservoir evaluation in the early stages of development, for maintaining stable power output after the start of operation, and for tapping peripheral reservoirs. For the pressure transient test method, a test support system was given a basic design in order to obtain high-accuracy hydraulic parameters. For the tiltmeter fracture monitoring method, a simulation was made of the distribution of active fractures and of the evaluation of hydraulic constants without drilling wells. For the two-phase flow measuring method, aimed at stable steam production, the use of the orifice plate and other existing flow measuring devices was envisioned as a simple way of measuring the two-phase state of the reservoir. For the hydrophone VSP method, a feasibility study was carried out, with results, on a practical high-temperature VSP that can simultaneously analyze the hydraulic characteristics and geological structures around the well, which existing methods were unable to grasp. Moreover, to make high-accuracy reservoir modeling possible, Doppler borehole televiewer measurements were made in each reservoir. 80 refs., 147 figs., 22 tabs.

  10. Variational approach in transport theory

    Energy Technology Data Exchange (ETDEWEB)

    Panta Pazos, R. [Nucler Engineering Department, UFRGS, Porto-Alegre (Brazil); Tullio de Vilhena, M. [Institute of Mathematics, UFRGS, Porto-Alegre (Brazil)

    2004-07-01

    In this work we present a variational approach to some methods to solve transport problems of neutral particles. We consider a convex domain X (for example the geometry of a slab, or a convex set in the plane, or a convex bounded set in the space) and we use discrete ordinates quadrature to get a system of differential equations derived from the neutron transport equation. The boundary conditions are vacuum for a subset of the boundary, and of specular reflection for the complementary subset of the boundary. Recently some different approximation methods have been presented to solve these transport problems. We introduce in this work the adjoint equations and the conjugate functions obtained by means of the variational approach. First we consider the general formulation, and then some numerical methods such as spherical harmonics and spectral collocation method. (authors)

  11. Bilateral renal artery variation

    OpenAIRE

    Üçerler, Hülya; Üzüm, Yusuf; İkiz, Z. Aslı Aktan

    2015-01-01

    Each kidney is supplied by a single renal artery, although renal artery variations are common. Variations of the renal artery have become important with the increasing number of renal transplantations. Numerous studies describe variations in renal artery anatomy. The left renal artery in particular is among the most critical arterial variations, because it is the preferred side for resecting the donor kidney. During routine dissection in a formalin-fixed male cadaver, we have found a bilateral renal...

  12. Variation in the Biochemical Composition of the Edible Seaweed Grateloupia turuturu Yamada Harvested from Two Sampling Sites on the Brittany Coast (France: The Influence of Storage Method on the Extraction of the Seaweed Pigment R-Phycoerythrin

    Directory of Open Access Journals (Sweden)

    Mathilde Munier

    2013-01-01

    Full Text Available Numerous studies have demonstrated that the biochemical content of seaweeds varies according to seasonality in a restricted area. In this study, the influence of sampling site on the biochemical composition of the edible red seaweed Grateloupia turuturu Yamada was investigated, but not its variation over time. Some differences in water-soluble protein, water-soluble carbohydrate and lipid contents (expressed per dry weight) were recorded between the two sites chosen on the Brittany coast (France). The yield of R-phycoerythrin (R-PE) contained in the seaweed also varied according to the sampling site. In addition, the effect of storage conditions on the preservation of R-PE was studied. The results demonstrated that freezing is the best preservation method in terms of R-PE extraction yield and purity index. In conclusion, this study shows that the sampling site influences the biochemical content of the red seaweed Grateloupia turuturu. Moreover, the extraction yield of R-phycoerythrin and its purity index depend on both the sampling site and the sample storage method.

  13. Studying Variation in Tunes

    NARCIS (Netherlands)

    Janssen, B.; van Kranenburg, P.

    2014-01-01

    Variation in music can be caused by different phenomena: conscious, creative manipulation of musical ideas; but also unconscious variation during music recall. It is the latter phenomenon that we wish to study: variation which occurs in oral transmission, in which a melody is taught without the help

  14. Is there much variation in variation? Revisiting statistics of small area variation in health services research

    Directory of Open Access Journals (Sweden)

    Ibáñez Berta

    2009-04-01

    Full Text Available Abstract Background The importance of Small Area Variation Analysis for policy-making contrasts with the scarcity of work on the validity of the statistics used in these studies. Our study aims at 1) determining whether variation in utilization rates between health areas is higher than would be expected by chance, 2) estimating the statistical power of the variation statistics; and 3) evaluating the ability of different statistics to compare the variability among different procedures regardless of their rates. Methods Parametric bootstrap techniques were used to derive the empirical distribution for each statistic under the hypothesis of homogeneity across areas. Non-parametric procedures were used to analyze the empirical distribution for the observed statistics and compare the results in six situations (low/medium/high utilization rates and low/high variability). A small scale simulation study was conducted to assess the capacity of each statistic to discriminate between different scenarios with different degrees of variation. Results Bootstrap techniques proved to be good at quantifying the difference between the null hypothesis and the variation observed in each situation, and at constructing reliable tests and confidence intervals for each of the variation statistics analyzed. Although the Systematic Component of Variation (SCV) also performed well, the Empirical Bayes (EB) statistic shows better behaviour under the null hypothesis: it is able to detect variability if present, it is not influenced by the procedure rate and it is best able to discriminate between different degrees of heterogeneity. Conclusion The EB statistic seems to be a good alternative to more conventional statistics used in small-area variation analysis in health service research because of its robustness.
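    For concreteness, the Systematic Component of Variation discussed above is conventionally computed from observed and expected events per area as in the small sketch below (the usual McPherson-type formula; the counts are invented, and neither the bootstrap machinery nor the EB statistic of the study is reproduced here).

        # Systematic Component of Variation (SCV) from observed and expected events per area,
        # following the usual McPherson-type formula.  Counts are illustrative, not study data.
        import numpy as np

        observed = np.array([120, 95, 140, 80, 110, 105], dtype=float)   # events per area (made up)
        expected = np.array([100, 100, 130, 95, 100, 110], dtype=float)  # standardized expectations

        ratio_dev = (observed - expected) / expected
        scv = np.mean(ratio_dev**2 - 1.0 / expected)   # subtract the Poisson (random) component
        print(f"SCV = {scv:.4f}  (often reported as SCV x 100 = {100 * scv:.2f})")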

  15. Stochastic variational approach to minimum uncertainty states

    Energy Technology Data Exchange (ETDEWEB)

    Illuminati, F.; Viola, L. [Dipartimento di Fisica, Padova Univ. (Italy)

    1995-05-21

    We introduce a new variational characterization of Gaussian diffusion processes as minimum uncertainty states. We then define a variational method constrained by kinematics of diffusions and Schroedinger dynamics to seek states of local minimum uncertainty for general non-harmonic potentials. (author)

  16. Construction of A Trial Function In The Variational Procedure of ...

    African Journals Online (AJOL)

    A form of variational method for calculating the ground state energy of a quantum mechanical system is considered. The method is based on a systematic construction of a trial variational function at each step of the calculation of the ground state energy. The construction involves introducing more variational parameters to ...
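    As a standard textbook illustration of the variational procedure being refined here (not the article's own construction), a one-parameter trial function gives an upper bound on the ground-state energy; for the one-dimensional harmonic oscillator with a Gaussian trial function,

        \[
        E(\alpha) = \frac{\langle \psi_\alpha | H | \psi_\alpha \rangle}{\langle \psi_\alpha | \psi_\alpha \rangle} \ge E_0,
        \qquad
        \psi_\alpha(x) = e^{-\alpha x^2},
        \qquad
        E(\alpha) = \frac{\hbar^2 \alpha}{2m} + \frac{m\omega^2}{8\alpha},
        \]

    which is minimized at \alpha = m\omega/2\hbar and recovers E_0 = \hbar\omega/2 exactly; introducing additional variational parameters, as described above, can only lower (or leave unchanged) the estimate.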

  17. Novo método de classificação de coeficientes de variação para a cultura do arroz de terras altas A new method of variation coefficient classification for upland rice crop

    Directory of Open Access Journals (Sweden)

    Nilce Helena de Araújo Diniz Costa

    2002-03-01

    have normal distribution, which is not always true. This study aimed to propose a methodology for defining CV classification bands independently of their distribution; the new methodology is based on the use of the median and the pseudo-sigma. Data from 110 experiments on upland rice, designed in randomized complete blocks, generalized incomplete blocks and square lattices, were analyzed, selecting the characters related to diseases, lodging, yield and yield components. CV classification bands were constructed for each character studied, both in general and considering the experimental design used, for experiments with 30 to 100 treatments. The proposed method is efficient for defining CV bands independently of their distribution, and the variation in CV values shows the importance of considering not only the variable under study but also the experimental design. Experiments with the smallest CVs were those that used an incomplete block design. The variables related to diseases and lodging were those that presented the largest coefficients of variation.
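    A minimal sketch of the kind of classification rule described above follows: CV bands built from the median and the pseudo-sigma (IQR/1.349) of a collection of CVs. The one- and two-pseudo-sigma cut-offs are an assumption made for illustration; the paper's exact band definitions are not given in the abstract.

        # Hypothetical sketch of CV classification bands based on the median and pseudo-sigma.
        # The one- and two-pseudo-sigma cut-offs are assumed for illustration only.
        import numpy as np

        def classify_cv(cv_values):
            cv = np.asarray(cv_values, dtype=float)
            med = np.median(cv)
            q1, q3 = np.percentile(cv, [25, 75])
            ps = (q3 - q1) / 1.349                 # pseudo-sigma: IQR of a normal is ~1.349 sigma
            cuts = {"low": med - ps, "high": med + ps, "very_high": med + 2 * ps}

            def label(v):
                if v <= cuts["low"]:
                    return "low"
                if v <= cuts["high"]:
                    return "medium"
                if v <= cuts["very_high"]:
                    return "high"
                return "very high"

            return cuts, [label(v) for v in cv]

        # CVs (%) from a set of trials -- invented numbers for the example.
        cvs = [8.2, 11.5, 14.3, 9.9, 22.7, 13.1, 35.4, 12.8, 10.6, 18.0]
        cuts, labels = classify_cv(cvs)
        print(cuts)
        print(list(zip(cvs, labels)))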

  18. Study of a method of detection for natural carbon-14 using a liquid scintillator, recent variations in the natural radio-activity due to artificial carbon-14 (1963); Etude d'une methode de detection du carrons 14 naturel, utilisant un scintillateur liquide - variations recentes de l'activite naturelle dues au carbone 14 artificiel (1963)

    Energy Technology Data Exchange (ETDEWEB)

    Leger, C. [Commissariat a l' Energie Atomique, Saclay (France). Centre d' Etudes Nucleaires

    1963-06-15

    Among the various natural isotopes of carbon, a radioactive isotope, carbon-14, is formed by the action of secondary neutrons from cosmic rays on nitrogen in the air. Until 1950, the concentration of this isotope in ordinary carbon underwent weak fluctuations of about 2-3 per cent. The exact measurement of this concentration, 6 x 10{sup -12} Ci/g of carbon, and of its fluctuations, is difficult, and in the first part of this report a highly sensitive method is given using a liquid scintillator. Since 1950 this natural activity has shown large fluctuations because of the carbon-14 formed during nuclear explosions, and in the second part the evolution in France of this specific activity of carbon in the atmosphere and biosphere is examined. In the last part is studied the local increase in carbon activity in the atmosphere around the Saclay site, an increase caused by the carbon-14 given off as C{sup 14}O{sub 2} by the reactors cooled partially with exterior air. (author)

  19. Genome structural variation discovery and genotyping.

    Science.gov (United States)

    Alkan, Can; Coe, Bradley P; Eichler, Evan E

    2011-05-01

    Comparisons of human genomes show that more base pairs are altered as a result of structural variation - including copy number variation - than as a result of point mutations. Here we review advances and challenges in the discovery and genotyping of structural variation. The recent application of massively parallel sequencing methods has complemented microarray-based methods and has led to an exponential increase in the discovery of smaller structural-variation events. Some global discovery biases remain, but the integration of experimental and computational approaches is proving fruitful for accurate characterization of the copy, content and structure of variable regions. We argue that the long-term goal should be routine, cost-effective and high quality de novo assembly of human genomes to comprehensively assess all classes of structural variation.

  20. Spectrophotometry mole ratio and continuous variation experiments ...

    African Journals Online (AJOL)

    The mole-ratio method yields a ratio of 1M : 1L for the silver dithizonate complex and 1M : 3L for cobalt. Employing the continuous variation method gives M : L ratios of 1 : 3 for both nickel and cobalt. Formation constants are readily calculated from absorbance data. Complete methods, data, calculations and outcomes are ...
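    As a rough sketch of how a metal-to-ligand ratio is read off a continuous-variation (Job) plot, the code below locates the mole fraction of ligand at maximum absorbance and converts it to an M : L ratio. The absorbance values are invented and shaped to peak near x_L = 0.75 (a 1 : 3 complex, as quoted for cobalt); the parabolic interpolation is a choice made for the example, not part of the original experiment.

        # Job's method (continuous variation) sketch: the mole fraction of ligand at the
        # absorbance maximum gives the M:L stoichiometry.  Absorbances are invented values.
        import numpy as np

        x_ligand = np.linspace(0.1, 0.9, 9)   # mole fraction of ligand in equimolar mixtures
        absorbance = np.array([0.12, 0.25, 0.38, 0.50, 0.62, 0.65, 0.80, 0.80, 0.45])

        i = int(np.argmax(absorbance))
        # Parabolic interpolation through the maximum and its neighbours locates the peak
        # more precisely than the coarse sampling grid allows.
        a, b, _ = np.polyfit(x_ligand[i - 1:i + 2], absorbance[i - 1:i + 2], 2)
        x_peak = -b / (2 * a)
        print(f"peak at x_L ~ {x_peak:.2f}  ->  M : L = 1 : {x_peak / (1 - x_peak):.1f}")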