WorldWideScience

Sample records for quantification theory type

  1. Densitometric quantification of ether-type phospholipids.

    Science.gov (United States)

    Oe, Shinji; Tanaka, Takahiro; Ohga, Mami; Koga, Yosuke; Morii, Hiroyuki

    2008-09-01

    A quantification method for individual ether-type phospholipids is important in studies of the regulation of membrane lipid biosynthesis in Archaea. For the ester-type lipids of Bacteria and Eucarya, a densitometric method has been established for the simultaneous quantification of individual phospholipids visualized with molybdenum blue reagent on a TLC plate. In this study, we developed a TLC densitometric method for the rapid quantitative determination of the six main ether-type phospholipids in a methanogenic archaeon and an extremely halophilic archaeon. It had been reported previously that, in densitometric quantification, the molar absorptivities of most ester-type phospholipids are approximately the same. In contrast, we found significant disparities in molar absorptivity among archaeal ether-type lipids and the serine-containing ester-type lipid, so each lipid should be quantified against its own standard mixture. Compared with the earlier preparative TLC method, in which inorganic phosphate is measured in silica gel powder scraped from phospholipid spots on a TLC plate, TLC densitometry requires one tenth the sample size and far less time. PMID:18783009
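
    Since the molar absorptivities differ, each lipid is read off its own standard curve. A minimal sketch of that per-lipid calibration step, with invented lipid names, amounts, and densitometer readings:

        import numpy as np

        def fit_calibration(amounts_nmol, densities):
            # Least-squares line: density = a * amount + b, for one lipid standard.
            a, b = np.polyfit(amounts_nmol, densities, 1)
            return a, b

        def quantify(density, a, b):
            # Invert the calibration to recover the amount applied to the plate.
            return (density - b) / a

        # One standard mixture per lipid, because molar absorptivities differ.
        standards = {  # hypothetical (amount in nmol, spot density) pairs
            "lipid_A": ([5, 10, 20, 40], [0.11, 0.21, 0.43, 0.84]),
            "lipid_B": ([5, 10, 20, 40], [0.08, 0.16, 0.31, 0.63]),
        }
        curves = {name: fit_calibration(np.array(x), np.array(y))
                  for name, (x, y) in standards.items()}
        print(quantify(0.30, *curves["lipid_A"]))  # nmol in an unknown spot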

  2. Utilizing general information theories for uncertainty quantification

    Energy Technology Data Exchange (ETDEWEB)

    Booker, J. M. (Jane M.)

    2002-01-01

    Uncertainties enter into a complex problem from many sources: variability, errors, and lack of knowledge. A fundamental question is how to characterize the various kinds of uncertainty and then combine them within a problem such as the verification and validation of a structural dynamics computer model, the reliability of a dynamic system, or a complex decision problem. Because uncertainties are of different types (e.g., random noise, numerical error, vagueness of classification), it is difficult to quantify all of them within the constructs of a single mathematical theory, such as probability theory. Because different kinds of uncertainty occur within a complex modeling problem, linkages between these mathematical theories are necessary. A brief overview of some of these theories and their constituents under the label of Generalized Information Theory (GIT) is presented, and a brief decision example illustrates the importance of linking at least two such theories.

  3. Recurrence quantification analysis: theory and best practices

    CERN Document Server

    Webber, Charles L., Jr.; Marwan, Norbert

    2015-01-01

    The analysis of recurrences in dynamical systems by using recurrence plots and their quantification is still an emerging field. Over the past decades, recurrence plots have proven to be valuable data visualization and analysis tools in the theoretical study of complex, time-varying dynamical systems as well as in various applications in biology, neuroscience, kinesiology, psychology, physiology, engineering, physics, geosciences, linguistics, finance, economics, and other disciplines. This multi-authored book comprehensively introduces and showcases recent advances as well as established best practices concerning both theoretical and practical aspects of recurrence plot based analysis. Edited and authored by leading researchers in the field, the various chapters address an interdisciplinary readership, ranging from theoretical physicists to application-oriented scientists in all data-providing disciplines.

  4. An approximation approach for uncertainty quantification using evidence theory

    International Nuclear Information System (INIS)

    Over the last two decades, uncertainty quantification (UQ) in engineering systems has been performed within the popular framework of probability theory. However, many scientific and engineering communities have realized the limitations of using only one framework for quantifying the uncertainty experienced in engineering applications. Recently, evidence theory, also called Dempster-Shafer theory, was proposed as an alternative to classical probability theory for handling situations with limited and imprecise data. Adapting this theory to large-scale engineering structures is a challenge due to the implicit nature of simulations and excessive computational costs. In this work, an approximation approach is developed to improve the practical utility of evidence theory in UQ analysis. The techniques are demonstrated on composite material structures and an airframe wing aeroelastic design problem

  5. Quantification of image quality using information theory

    International Nuclear Information System (INIS)

    Full text: The aim of the present study was to examine the usefulness of information theory in the visual assessment of image quality. We applied a first-order approximation of Shannon's information theory to compute information losses (IL). Images of a contrast-detail mammography (CDMAM) phantom were acquired with computed radiography systems at various radiation doses. Information content was defined as the entropy H = Σi pi log(1/pi), in which the detection probabilities pi were calculated from the distribution of detection rates of the CDMAM. IL was defined as the difference between the information content and the information obtained. IL decreased with increases in disk diameter (P < 0.0001, ANOVA) and radiation dose (P < 0.002, F-test). Sums of IL, which we call total information losses (TIL), were closely correlated with the image quality figures (r = 0.985). TIL depended on the distribution of image-reading ability among the examinees, even when the average reading ratio of the group was the same. TIL was shown to be sensitive to the observers' distribution of image readings and is expected to improve the evaluation of image quality. (author)
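
    One way to instantiate the first-order computation sketched above: treat each disk reading as a binary symmetric channel whose error rate is one minus the detection rate, so the information obtained is the channel's mutual information and the loss is the remainder. The channel model and the detection rates below are our assumptions, not the paper's:

        import numpy as np

        def h2(p):
            # Binary entropy in bits.
            p = np.clip(p, 1e-12, 1 - 1e-12)
            return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

        # Hypothetical detection rates for four disk diameters at one dose.
        detect = np.array([0.95, 0.80, 0.65, 0.55])

        content = 1.0                    # bits at stake per disk (uniform yes/no prior)
        obtained = content - h2(detect)  # mutual information of the reading channel
        il = content - obtained          # information loss per disk = h2(detect)
        til = il.sum()                   # total information loss (TIL)
        print(til)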

  6. Invariant types in NIP theories

    OpenAIRE

    Simon, Pierre

    2014-01-01

    We study invariant types in NIP theories. Amongst other things: we prove a definable version of the (p,q)-theorem in theories of small or medium directionality; we construct a canonical retraction from the space of M-invariant types to that of M-finitely satisfiable types; we show some amalgamation results for invariant types and list a number of open questions.

  7. Linear contextual modal type theory

    DEFF Research Database (Denmark)

    Schack-Nielsen, Anders; Schürmann, Carsten

    2011-01-01

    When one implements a logical framework based on linear type theory, for example the Celf system [?], one is immediately confronted with questions about its equational theory and how to deal with logic variables. In this paper, we propose linear contextual modal type theory, which gives a mathematical account of the nature of logic variables. Our type theory is conservative over the intuitionistic contextual modal type theory proposed by Nanevski, Pfenning, and Pientka. Our main contributions include a mechanically checked proof of soundness and a working implementation.

  8. An overview of type theories

    OpenAIRE

    Guallart, Nino

    2014-01-01

    Pure type systems arise as a generalisation of simply typed lambda calculus. The contemporary development of mathematics has renewed the interest in type theories, as they are not just the object of mere historical research, but have an active role in the development of computational science and core mathematics. It is worth exploring some of them in depth, particularly Martin-Löf's predicative intuitionistic type theory and Coquand's impredicative calculus of constructions.

  9. Treatise on Intuitionistic Type Theory

    CERN Document Server

    Granstrom, Johan Georg

    2011-01-01

    Intuitionistic type theory can be described, somewhat boldly, as a partial fulfillment of the dream of a universal language for science. This book expounds several aspects of intuitionistic type theory, such as the notion of set, reference vs. computation, assumption, and substitution. Moreover, the book includes philosophically relevant sections on the principle of compositionality, lingua characteristica, epistemology, propositional logic, intuitionism, and the law of excluded middle. Ample historical references are given throughout the book.

  10. Causality in Time Series: Its Detection and Quantification by Means of Information Theory.

    Czech Academy of Sciences Publication Activity Database

    Hlaváčková-Schindler, Kateřina

    New York : Springer, 2008 - (Emmert-Streib, F.; Dehmer, M.), s. 183-207 ISBN 978-0-387-84815-0. - (Computer Science) R&D Projects: GA MŠk 2C06001 Institutional research plan: CEZ:AV0Z10750506 Keywords : causality * time series * information theory Subject RIV: BD - Theory of Information http://library.utia.cas.cz/separaty/2009/AS/schindler-causality in time series its detection and quantification by means of information theory.pdf

  11. Standard Error Computations for Uncertainty Quantification in Inverse Problems: Asymptotic Theory vs. Bootstrapping.

    Science.gov (United States)

    Banks, H T; Holm, Kathleen; Robbins, Danielle

    2010-11-01

    We computationally investigate two approaches for uncertainty quantification in inverse problems for nonlinear parameter-dependent dynamical systems. We compare the bootstrapping and asymptotic theory approaches for problems involving data with several noise forms and levels. In our parameter estimation formulations we consider both absolute error data with constant variance and relative error data, which produces non-constant variance. We compare and contrast parameter estimates, standard errors, confidence intervals, and computational times for both bootstrapping and asymptotic theory methods. PMID:20835347
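
    A minimal sketch of the comparison, under assumptions not taken from the paper (an exponential-decay model, constant-variance noise, residual resampling for the bootstrap): asymptotic standard errors come from the estimated covariance of the fit, bootstrap standard errors from the spread of refitted estimates.

        import numpy as np
        from scipy.optimize import curve_fit

        rng = np.random.default_rng(0)
        model = lambda t, a, b: a * np.exp(b * t)

        t = np.linspace(0, 2, 50)
        y = model(t, 2.0, -1.0) + rng.normal(0, 0.05, t.size)  # constant-variance data

        # Asymptotic theory: the estimated covariance of the fit gives the
        # squared standard errors on its diagonal.
        theta, cov = curve_fit(model, t, y, p0=[1.0, -0.5])
        se_asymptotic = np.sqrt(np.diag(cov))

        # Bootstrap: resample residuals, refit, and take the spread of estimates.
        resid = y - model(t, *theta)
        boot = []
        for _ in range(500):
            y_star = model(t, *theta) + rng.choice(resid, size=resid.size, replace=True)
            th_star, _ = curve_fit(model, t, y_star, p0=theta)
            boot.append(th_star)
        se_bootstrap = np.std(np.array(boot), axis=0, ddof=1)
        print(se_asymptotic, se_bootstrap)  # the two should roughly agree here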

  12. Quantification of uncertainty of performance measures using graph theory

    OpenAIRE

    Lopes, Isabel Da Silva; Sousa, Sérgio; Nunes, Eusébio P.

    2013-01-01

    In this paper, graph theory is used to quantify the uncertainty generated in performance measures during the process of performance measurement. A graph is developed that captures all the sources of uncertainty present in this process and their relationships. The permanent function of the matrix associated with the graph is used as the basis for determining an uncertainty index.
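
    The computational core is the permanent of the graph's matrix, which, unlike the determinant, keeps all terms with positive sign. A sketch using Ryser's formula on a small, invented uncertainty-source matrix (diagonal entries for source magnitudes, off-diagonal entries for their interdependencies):

        from itertools import combinations
        import numpy as np

        def permanent(M):
            # Ryser's formula, O(2^n * n); fine for the small matrices used here.
            n = M.shape[0]
            total = 0.0
            for r in range(1, n + 1):
                for cols in combinations(range(n), r):
                    total += (-1) ** (n - r) * np.prod(M[:, list(cols)].sum(axis=1))
            return total

        U = np.array([[0.8, 0.2, 0.1],
                      [0.3, 0.6, 0.2],
                      [0.1, 0.4, 0.7]])  # hypothetical source/interdependency matrix
        print(permanent(U))               # scalar uncertainty index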

  13. Uncertainty Quantification and Propagation in Nuclear Density Functional Theory

    CERN Document Server

    Schunck, N; Higdon, D; Sarich, J; Wild, S M

    2015-01-01

    Nuclear density functional theory (DFT) is one of the main theoretical tools used to study the properties of heavy and superheavy elements, or to describe the structure of nuclei far from stability. While ongoing efforts seek to better root nuclear DFT in the theory of nuclear forces [see Duguet et al., this issue], energy functionals remain semi-phenomenological constructions that depend on a set of parameters adjusted to experimental data in finite nuclei. In this paper, we review recent efforts to quantify the related uncertainties and propagate them to model predictions. In particular, we cover the topics of parameter estimation for inverse problems, statistical analysis of model uncertainties, and Bayesian inference methods. Illustrative examples are taken from the literature.

  14. Minimal types in super-dependent theories

    CERN Document Server

    Hasson, Assaf

    2007-01-01

    We give necessary and sufficient geometric conditions for a theory definable in an o-minimal structure to interpret a real closed field. The proof goes through an analysis of thorn-minimal types in super-rosy dependent theories of finite rank. We prove that such theories are coordinatised by thorn-minimal types and that such a type is unstable if and only if every non-algebraic extension thereof is. We conclude that a type is stable if and only if it admits a coordinatisation in thorn-minimal stable types. We also show that non-trivial thorn-minimal stable types extend stable sets.

  15. Quantification of Uncertainties in Nuclear Density Functional Theory

    CERN Document Server

    Schunck, N; Higdon, D; Sarich, J; Wild, S

    2014-01-01

    Reliable predictions of nuclear properties are needed as much to answer fundamental science questions as for applications such as reactor physics or data evaluation. Nuclear density functional theory is currently the only microscopic, global approach to nuclear structure that is applicable throughout the nuclear chart. In the past few years, much effort has been devoted to setting up a general methodology for assessing theoretical uncertainties in nuclear DFT calculations. In this paper, we summarize some of the recent progress in this direction. Most of the new material discussed here will be published in separate articles.

  16. Uncertainty quantification using evidence theory in multidisciplinary design optimization

    International Nuclear Information System (INIS)

    Advances in computational performance have led to the development of large-scale simulation tools for design. Systems generated using such simulation tools can fail in service if the uncertainty of the simulation tool's performance predictions is not accounted for. This research investigates how uncertainty can be quantified in multidisciplinary systems analysis subject to epistemic uncertainty associated with the disciplinary design tools and input parameters. Evidence theory is used to quantify uncertainty in terms of the uncertain measures of belief and plausibility. To illustrate the methodology, multidisciplinary analysis problems are introduced as an extension to the epistemic uncertainty challenge problems identified by Sandia National Laboratories. After uncertainty has been characterized mathematically, the designer seeks the optimum design under uncertainty. The measures of uncertainty provided by evidence theory are discontinuous functions; such non-smooth functions cannot be used in traditional gradient-based optimizers because the sensitivities of the uncertain measures are not properly defined. In this research, surrogate models are used to represent the uncertain measures as continuous functions, and a sequential approximate optimization approach is used to drive the optimization process. The methodology is illustrated in application to multidisciplinary example problems

  17. Schwarz Type Topological Quantum Field Theories

    CERN Document Server

    Kaul, R K; Ramadevi, P

    2005-01-01

    Topological quantum field theories can be used to probe topological properties of low-dimensional manifolds. A class of these theories, known as Schwarz-type theories, comprises Chern-Simons theories and BF theories. In three dimensions, both capture the properties of knots and links, leading to invariants characterising them. These can also be used to construct three-manifold invariants. Three-dimensional gravity is described by these field theories. BF theories exist also in higher dimensions; in four dimensions, they describe a two-dimensional generalization of knots as well as Donaldson invariants.

  18. Completeness in Hybrid Type Theory

    DEFF Research Database (Denmark)

    Areces, Carlos; Blackburn, Patrick Rowan

    2014-01-01

    We show that basic hybridization (adding nominals and @ operators) makes it possible to give straightforward Henkin-style completeness proofs even when the modal logic being hybridized is higher-order. The key ideas are to add nominals as expressions of type t, and to extend to arbitrary types the way we interpret @i in propositional and first-order hybrid logic. This means: interpret @i φa, where φa is an expression of any type a, as an expression of type a that rigidly returns the value that φa receives at the i-world. The axiomatization and completeness proofs are generalizations of those found in propositional and first-order hybrid logic, and (as is usual in hybrid logic) we automatically obtain a wide range of completeness results for stronger logics and languages. Our approach is deliberately low-tech. We don't, for example, make use of Montague's intensional type s, or Fitting-style intensional models; we build, as simply as we can, hybrid logic over Henkin's logic

  19. ANALYSIS OF THE INFLUENCE OF THE LECTURER PERFORMANCE INDEX ON COURSE GRADE ACHIEVEMENT USING FUZZY QUANTIFICATION THEORY I

    Directory of Open Access Journals (Sweden)

    Shofwatul ‘Uyun

    2012-05-01

    The implementation of academic quality assurance is closely tied to the main actors in the academic process at a university: the lecturers. Lecturer performance therefore needs to be evaluated. The lecturer performance index (IKD) at UIN comprises three assessment components: classroom teaching attendance (K1, weighted 30%), timeliness of grade submission (K2, 30%), and student evaluations (K3, 40%). Besides the qualitative student evaluations, the IKD also depends on clearly measurable variables, namely teaching attendance and timeliness of grade submission. Fuzzy quantification theory I can be used to link the qualitative and quantitative factors. Data were collected by multistage random sampling and analyzed with fuzzy quantification theory to determine how strongly the qualitative factors (student evaluations and lecturer attendance) influence students' course grades at UIN Sunan Kalijaga. The results show that the lecturer performance index (student evaluations) and the number of teaching sessions attended account for only 68.58% of the influence on course grade achievement at Universitas Islam Negeri (UIN) Sunan Kalijaga. Punctuality in holding lectures and the lecturer's ability to raise students' interest in learning have the strongest influence on course grades; this influence is strongest when the lecturer teaches more than 10 sessions.

  20. Applicability of Information Theory to the Quantification of Responses to Anthropogenic Noise by Southeast Alaskan Humpback Whales

    Science.gov (United States)

    Doyle, Laurance R.; McCowan, Brenda; Hanser, Sean F.; Chyba, Christopher; Bucci, Taylor; Blue, J. E.

    2008-06-01

    We assess the effectiveness of applying information theory to the characterization and quantification of the effects of anthropogenic vessel noise on humpback whale (Megaptera novaeangliae) vocal behavior in and around Glacier Bay, Alaska. Vessel noise has the potential to interfere with the complex vocal behavior of these humpback whales, which could have direct consequences for their feeding behavior and thus ultimately for their health and reproduction. Humpback whale feeding calls recorded during conditions of high vessel-generated noise and lower levels of background noise are compared for differences in acoustic structure, use, and organization using information theoretic measures. We apply information theory in a self-referential manner (i.e., orders of entropy) to quantify the changes in signaling behavior. We then compare this with the reduction in channel capacity due to noise in Glacier Bay itself, treating it as a (Gaussian) noisy channel. We find that high vessel noise is associated with an increase in the rate and repetitiveness of sequential use of feeding call types in our averaged sample of humpback whale vocalizations, indicating that vessel noise may be modifying the patterns of use of feeding calls by the endangered humpback whales in Southeast Alaska. The information theoretic approach suggested herein can provide a reliable quantitative measure of such relationships and may also be adapted for wider application to many species where environmental noise is thought to be a problem.
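
    The "orders of entropy" idea can be made concrete on a coded call sequence: compare the first-order entropy of call types with the conditional entropy of each call given its predecessor; more repetitive sequential structure pulls the latter down. A small sketch on an invented sequence:

        import math
        from collections import Counter

        def entropy(counts):
            total = sum(counts.values())
            return -sum(c / total * math.log2(c / total) for c in counts.values())

        calls = list("ABABBCABABACABAB")      # coded feeding-call sequence (invented)
        h1 = entropy(Counter(calls))          # first-order entropy (bits/call)
        digrams = Counter(zip(calls, calls[1:]))
        h2 = entropy(digrams) - h1            # entropy of a call given its predecessor
        print(h1, h2)  # h2 well below h1 signals repetitive sequential structure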

  1. Applicability of Information Theory to the Quantification of Responses to Anthropogenic Noise by Southeast Alaskan Humpback Whales

    Directory of Open Access Journals (Sweden)

    J. Ellen Blue

    2008-05-01

    We assess the effectiveness of applying information theory to the characterization and quantification of the effects of anthropogenic vessel noise on humpback whale (Megaptera novaeangliae) vocal behavior in and around Glacier Bay, Alaska. Vessel noise has the potential to interfere with the complex vocal behavior of these humpback whales, which could have direct consequences for their feeding behavior and thus ultimately for their health and reproduction. Humpback whale feeding calls recorded during conditions of high vessel-generated noise and lower levels of background noise are compared for differences in acoustic structure, use, and organization using information theoretic measures. We apply information theory in a self-referential manner (i.e., orders of entropy) to quantify the changes in signaling behavior. We then compare this with the reduction in channel capacity due to noise in Glacier Bay itself, treating it as a (Gaussian) noisy channel. We find that high vessel noise is associated with an increase in the rate and repetitiveness of sequential use of feeding call types in our averaged sample of humpback whale vocalizations, indicating that vessel noise may be modifying the patterns of use of feeding calls by the endangered humpback whales in Southeast Alaska. The information theoretic approach suggested herein can provide a reliable quantitative measure of such relationships and may also be adapted for wider application to many species where environmental noise is thought to be a problem.

  2. Uncertainty Quantification for Nuclear Density Functional Theory and Information Content of New Measurements

    CERN Document Server

    McDonnell, J D; Higdon, D; Sarich, J; Wild, S M; Nazarewicz, W

    2015-01-01

    Statistical tools of uncertainty quantification can be used to assess the information content of measured observables with respect to present-day theoretical models; to estimate model errors and thereby improve predictive capability; to extrapolate beyond the regions reached by experiment; and to provide meaningful input to applications and planned measurements. To showcase new opportunities offered by such tools, we make a rigorous analysis of theoretical statistical uncertainties in nuclear density functional theory using Bayesian inference methods. By considering the recent mass measurements from the Canadian Penning Trap at Argonne National Laboratory, we demonstrate how the Bayesian analysis and a direct least-squares optimization, combined with high-performance computing, can be used to assess the information content of the new data with respect to a model based on the Skyrme energy density functional approach. Employing the posterior probability distribution computed with a Gaussian process emulator, w...

  3. Development and Assessment of a Multiplex Real-Time PCR Assay for Quantification of Human Immunodeficiency Virus Type 1 DNA?

    OpenAIRE

    Beloukas, A.; Paraskevis, D.; Haida, C.; Sypsa, V.; Hatzakis, A.

    2009-01-01

    Previous studies showed that high levels of human immunodeficiency virus type 1 (HIV-1) DNA are associated with a faster progression to AIDS, an increased risk of death, and a higher risk of HIV RNA rebound in patients on highly active antiretroviral therapy. Our objective was to develop and assess a highly sensitive real-time multiplex PCR assay for the quantification of HIV-1 DNA (RTMP-HIV) based on molecular beacons. HIV-1 DNA quantification was carried out by RTMP in a LightCycler 2.0 app...

  4. Uncertainty Quantification for Nuclear Density Functional Theory and Information Content of New Measurements

    Science.gov (United States)

    McDonnell, J. D.; Schunck, N.; Higdon, D.; Sarich, J.; Wild, S. M.; Nazarewicz, W.

    2015-03-01

    Statistical tools of uncertainty quantification can be used to assess the information content of measured observables with respect to present-day theoretical models, to estimate model errors and thereby improve predictive capability, to extrapolate beyond the regions reached by experiment, and to provide meaningful input to applications and planned measurements. To showcase new opportunities offered by such tools, we make a rigorous analysis of theoretical statistical uncertainties in nuclear density functional theory using Bayesian inference methods. By considering the recent mass measurements from the Canadian Penning Trap at Argonne National Laboratory, we demonstrate how the Bayesian analysis and a direct least-squares optimization, combined with high-performance computing, can be used to assess the information content of the new data with respect to a model based on the Skyrme energy density functional approach. Employing the posterior probability distribution computed with a Gaussian process emulator, we apply the Bayesian framework to propagate theoretical statistical uncertainties in predictions of nuclear masses, two-neutron dripline, and fission barriers. Overall, we find that the new mass measurements do not impose a constraint that is strong enough to lead to significant changes in the model parameters. The example discussed in this study sets the stage for quantifying and maximizing the impact of new measurements with respect to current modeling and guiding future experimental efforts, thus enhancing the experiment-theory cycle in the scientific method.
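
    A schematic of the propagation step described above: draw parameter sets from the posterior and push them through an emulator to get a predictive spread. The posterior moments and the surrogate below are stand-ins, not the paper's Skyrme-functional posterior or trained Gaussian-process emulator:

        import numpy as np

        rng = np.random.default_rng(3)
        post_mean = np.array([0.16, -15.8])   # hypothetical functional parameters
        post_cov = np.array([[1e-4, 0.0],
                             [0.0, 4e-2]])
        samples = rng.multivariate_normal(post_mean, post_cov, size=5000)

        def emulator(theta):
            # Stand-in for a trained emulator's mean prediction of an observable.
            rho, e = theta.T
            return -8.0 + 12.0 * (rho - 0.16) + 0.05 * (e + 16.0)

        pred = emulator(samples)
        print(pred.mean(), pred.std())        # propagated statistical uncertainty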

  5. Uncertainty quantification of few group diffusion theory constants generated by the B1 theory-augmented Monte Carlo method

    Energy Technology Data Exchange (ETDEWEB)

    Park, H. J. [Korea Atomic Energy Research Inst., Daedeokdaero 989-111, Yuseong-gu, Daejeon (Korea, Republic of); Shim, H. J.; Joo, H. G.; Kim, C. H. [Dept. of Nuclear Engineering, Seoul National Univ., 1 Gwanak-ro, Gwanak-gu, Seoul (Korea, Republic of)

    2012-07-01

    The purpose of this paper is to quantify uncertainties of fuel pin cell or fuel assembly (FA) homogenized few-group diffusion theory constants generated from the B1 theory-augmented Monte Carlo (MC) method. A mathematical formulation of the first kind is presented to quantify uncertainties of the few-group constants in terms of the two major uncertainty sources of the MC method: statistical uncertainties, and nuclear cross section and nuclide number density input data uncertainties. The formulation is incorporated into the Seoul National Univ. MC code McCARD. It is then used to compute the uncertainties of the burnup-dependent homogenized two-group constants of a low-enriched UO2 fuel pin cell and a PWR FA, on the condition that the nuclear cross section input data of U-235 and U-238 from the JENDL 3.3 library and the nuclide number densities from the solution to the fuel depletion equations have uncertainties. The contribution of the MC input data uncertainties to the uncertainties of the two-group constants of the two fuel systems is separated from that of the statistical uncertainties. The utility of these uncertainty quantifications is then discussed from the standpoints of safety analysis of existing power reactors, development of new fuel or reactor system designs, and improvement of the covariance files of the evaluated nuclear data libraries. (authors)

  6. Uncertainty quantification of few group diffusion theory constants generated by the B1 theory-augmented Monte Carlo method

    International Nuclear Information System (INIS)

    The purpose of this paper is to quantify uncertainties of fuel pin cell or fuel assembly (FA) homogenized few-group diffusion theory constants generated from the B1 theory-augmented Monte Carlo (MC) method. A mathematical formulation of the first kind is presented to quantify uncertainties of the few-group constants in terms of the two major uncertainty sources of the MC method: statistical uncertainties, and nuclear cross section and nuclide number density input data uncertainties. The formulation is incorporated into the Seoul National Univ. MC code McCARD. It is then used to compute the uncertainties of the burnup-dependent homogenized two-group constants of a low-enriched UO2 fuel pin cell and a PWR FA, on the condition that the nuclear cross section input data of U-235 and U-238 from the JENDL 3.3 library and the nuclide number densities from the solution to the fuel depletion equations have uncertainties. The contribution of the MC input data uncertainties to the uncertainties of the two-group constants of the two fuel systems is separated from that of the statistical uncertainties. The utility of these uncertainty quantifications is then discussed from the standpoints of safety analysis of existing power reactors, development of new fuel or reactor system designs, and improvement of the covariance files of the evaluated nuclear data libraries. (authors)

  7. Applicability of Information Theory to the Quantification of Responses to Anthropogenic Noise by Southeast Alaskan Humpback Whales

    OpenAIRE

    Ellen Blue, J.; Taylor Bucci; Christopher Chyba; Hanser, Sean F.; Brenda McCowan; Doyle, Laurance R.

    2008-01-01

    We assess the effectiveness of applying information theory to the characterization and quantification of the effects of anthropogenic vessel noise on humpback whale (Megaptera novaeangliae) vocal behavior in and around Glacier Bay, Alaska. Vessel noise has the potential to interfere with the complex vocal behavior of these humpback whales, which could have direct consequences for their feeding behavior and thus ultimately for their health and reproduction. Humpback whale feeding calls recorded d...

  8. Quantification of spatial parameters in 3D cellular constructs using graph theory.

    Science.gov (United States)

    Lund, A W; Bilgin, C C; Hasan, M A; McKeen, L M; Stegemann, J P; Yener, B; Zaki, M J; Plopper, G E

    2009-01-01

    Multispectral three-dimensional (3D) imaging provides spatial information for biological structures that cannot be measured by traditional methods. This work presents a method of tracking 3D biological structures to quantify changes over time using graph theory. Cell-graphs were generated based on the pairwise distances, in 3D-Euclidean space, between nuclei during collagen I gel compaction. From these graphs quantitative features are extracted that measure both the global topography and the frequently occurring local structures of the "tissue constructs." The feature trends can be controlled by manipulating compaction through cell density and are significant when compared to random graphs. This work presents a novel methodology to track a simple 3D biological event and quantitatively analyze the underlying structural change. Further application of this method will allow for the study of complex biological problems that require the quantification of temporal-spatial information in 3D and establish a new paradigm in understanding structure-function relationships. PMID:19920859
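
    A minimal sketch of the cell-graph construction described above: nuclei centroids are nodes, and pairs closer than a chosen 3D Euclidean distance are joined by edges, from which simple local and global features follow. The coordinates and threshold are invented, and the paper's actual edge rule may differ:

        import numpy as np

        rng = np.random.default_rng(1)
        nuclei = rng.uniform(0, 100, size=(50, 3))  # 3D nuclei centroids (invented)
        threshold = 25.0                            # edge rule: distance < threshold

        d = np.linalg.norm(nuclei[:, None, :] - nuclei[None, :, :], axis=-1)
        adj = (d < threshold) & ~np.eye(len(nuclei), dtype=bool)

        degree = adj.sum(axis=1)                    # a local feature
        n_edges = int(adj.sum()) // 2
        density = n_edges / (len(nuclei) * (len(nuclei) - 1) / 2)  # a global feature
        print(degree.mean(), density)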

  9. Quantification of Spatial Parameters in 3D Cellular Constructs Using Graph Theory

    Directory of Open Access Journals (Sweden)

    G. E. Plopper

    2009-01-01

    Multispectral three-dimensional (3D) imaging provides spatial information for biological structures that cannot be measured by traditional methods. This work presents a method of tracking 3D biological structures to quantify changes over time using graph theory. Cell-graphs were generated based on the pairwise distances, in 3D-Euclidean space, between nuclei during collagen I gel compaction. From these graphs quantitative features are extracted that measure both the global topography and the frequently occurring local structures of the “tissue constructs.” The feature trends can be controlled by manipulating compaction through cell density and are significant when compared to random graphs. This work presents a novel methodology to track a simple 3D biological event and quantitatively analyze the underlying structural change. Further application of this method will allow for the study of complex biological problems that require the quantification of temporal-spatial information in 3D and establish a new paradigm in understanding structure-function relationships.

  10. Multi-level Contextual Type Theory

    Directory of Open Access Journals (Sweden)

    Mathieu Boespflug

    2011-10-01

    Contextual type theory distinguishes between bound variables and meta-variables to write potentially incomplete terms in the presence of binders. It has found good use as a framework for concisely explaining higher-order unification, characterizing holes in proofs, and developing a foundation for programming with higher-order abstract syntax, as embodied by the programming and reasoning environment Beluga. However, to reason about these applications, we need to introduce meta^2-variables to characterize the dependency on meta-variables and bound variables. In other words, we must go beyond a two-level system granting only bound variables and meta-variables. In this paper we generalize contextual type theory to n levels for arbitrary n, so as to obtain a formal system offering bound variables, meta-variables and so on all the way to meta^n-variables. We obtain a uniform account by collapsing all these different kinds of variables into a single notion of variable indexed by some level k. We give a decidable bi-directional type system which characterizes beta-eta-normal forms together with a generalized substitution operation.

  11. General relativity is a gauge type theory

    International Nuclear Information System (INIS)

    It is shown that the Einstein-Maxwell theory of interacting electromagnetism and gravitation, can be derived from a first-order Lagrangian, depending on the electromagnetic field and on the curvature of a symmetric affine connection GAMMA on the space-time M. The variation is taken with respect to the electromagnetic potential (a connection on a U(1) principal fiber bundle on M) and the gravitational potential GAMMA (a connection on the GL(4,R) principal fiber bundle of frames on M). The metric tensor g does not appear in the Lagrangian, but it arises as a momentum canonically conjugated to GAMMA. The Lagrangians of this type are calculated also for the Proca field, for a charged scalar field interacting with electromagnetism and gravitation, and for a few other interesting physical theories. (orig.)

  12. Multi-level Contextual Type Theory

    CERN Document Server

    Boespflug, Mathieu; 10.4204/EPTCS.71.3

    2011-01-01

    Contextual type theory distinguishes between bound variables and meta-variables to write potentially incomplete terms in the presence of binders. It has found good use as a framework for concisely explaining higher-order unification, characterizing holes in proofs, and developing a foundation for programming with higher-order abstract syntax, as embodied by the programming and reasoning environment Beluga. However, to reason about these applications, we need to introduce meta^2-variables to characterize the dependency on meta-variables and bound variables. In other words, we must go beyond a two-level system granting only bound variables and meta-variables. In this paper we generalize contextual type theory to n levels for arbitrary n, so as to obtain a formal system offering bound variables, meta-variables and so on all the way to meta^n-variables. We obtain a uniform account by collapsing all these different kinds of variables into a single notion of variable indexed by some level k. We give a decidable ...

  13. Uncertainty Quantification of the Pion-Nucleon Low-Energy Coupling Constants up to Fourth Order in Chiral Perturbation Theory

    CERN Document Server

    Wendt, K A; Ekström, A

    2014-01-01

    We extract the statistical uncertainties of the pion-nucleon ($\pi N$) low-energy constants (LECs) up to fourth order $\mathcal{O}(Q^4)$ in the chiral expansion of the nuclear effective Lagrangian. The LECs are optimized with respect to experimental scattering data. For comparison, we also present an uncertainty quantification based solely on $\pi N$ scattering phase shifts. Statistical errors on the LECs are critical for estimating the subsequent uncertainties in ab initio modeling of light- and medium-mass nuclei that exploits chiral effective field theory. As an example of this, we present the first complete predictions, with uncertainty quantification, of peripheral phase shifts of elastic proton-neutron scattering.

  14. Quantification analysis of CT for aphasic patients

    International Nuclear Information System (INIS)

    Using a microcomputer, the loci and extents of lesions demonstrated by computed tomography in 44 aphasic patients with various types of aphasia were superimposed onto standardized matrices composed of 10 slices with 3000 points (50 by 60). The relationships between lesion foci and types of aphasia were investigated on slices 3, 4, 5, and 6 using quantification theory type III (pattern analysis). Some regularities were observed on these slices. The group of patients with Broca's aphasia and the group with Wernicke's aphasia were generally separated along the first and second components of quantification theory type III, while the group with global aphasia lay between them. The group of patients with amnestic aphasia showed no specific findings, and the group with conduction aphasia lay near the Wernicke group. These results provide a basis for applying quantification theory type II (discriminant analysis) and quantification theory type I (regression analysis). (author)

  15. Stereomicroscopic imaging technique for the quantification of cold flow in drug-in-adhesive type of transdermal drug delivery systems.

    Science.gov (United States)

    Krishnaiah, Yellela S R; Katragadda, Usha; Khan, Mansoor A

    2014-05-01

    Cold flow is a phenomenon occurring in drug-in-adhesive type transdermal drug delivery systems (DIA-TDDS) caused by migration of the DIA coat beyond the edge of the patch. Excessive cold flow can affect their therapeutic effectiveness, make removal of the DIA-TDDS from the pouch difficult, and potentially decrease the available dose if any drug remains adhered to the pouch. No compendial or noncompendial methods are available for quantifying this critical quality attribute. The objective was to develop a method for quantification of cold flow using a stereomicroscopic imaging technique. Cold flow was induced by applying 1 kg force to punched-out samples of a marketed estradiol DIA-TDDS (model product) stored at 25°C, 32°C, and 40°C/60% relative humidity (RH) for 1, 2, or 3 days. At the end of the testing period, the dimensional change in the area of the DIA-TDDS samples was measured using image analysis software and expressed as percent cold flow. The percent cold flow decreased significantly (p < 0.001) with increasing size of the punched-out DIA-TDDS samples and increased (p < 0.001) with increasing cold flow induction temperature and time. This first report of its kind suggests that the dimensional change in the area of punched-out samples stored at 32°C/60%RH for 2 days under 1 kg force could be used for quantification of cold flow in DIA-TDDS. PMID:24585397
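
    The readout reduces to a relative change in area. A tiny sketch, assuming percent cold flow is the percent growth of the punched-out sample's area between the start and end of storage (the abstract does not spell out the formula):

        def percent_cold_flow(area_initial_mm2, area_final_mm2):
            # Relative growth of the punched-out sample's area under load.
            return 100.0 * (area_final_mm2 - area_initial_mm2) / area_initial_mm2

        print(percent_cold_flow(314.2, 330.9))  # about 5.3 percent cold flow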

  16. Matrix theory of type IIB plane wave from membranes

    International Nuclear Information System (INIS)

    We write down a maximally supersymmetric one-parameter deformation of the field theory action of Bagger and Lambert. We show that this theory on R x T^2 is invariant under the superalgebra of the maximally supersymmetric Type IIB plane wave. It is argued that this theory holographically describes the Type IIB plane wave in the discrete light-cone quantization (DLCQ).

  17. Three-dimensional topological quantum field theory of Witten type

    CERN Document Server

    Bakalarska, M; Bakalarska, Malgorzata; Broda, Boguslaw

    1998-01-01

    Description of two three-dimensional topological quantum field theories of Witten type as twisted supersymmetric theories is presented. Low-energy effective action and a corresponding topological invariant of three-dimensional manifolds are considered.

  18. A bi-metric theory of Klein-Kaluza type

    International Nuclear Information System (INIS)

    A unified theory of Klein-Kaluza type with a de Sitter structure is considered. A condition binding the external and internal spaces reduces the Lagrangian of the theory to the Einstein Lagrangian plus a small quadratic term

  19. Type IIB Flux Vacua from M-theory via F-theory

    OpenAIRE

    Valandro, Roberto

    2008-01-01

    We study in detail some aspects of duality between type IIB and M-theory. We focus on the duality between type IIB string theory on K3 x T^2/Z_2 orientifold and M-theory on K3 x K3, in the F-theory limit. We give the explicit map between the fields and in particular between the moduli of compactification, studying their behavior under the F-theory limit. Turning on fluxes generates a potential for the moduli both in type IIB and in M-theory. We verify that the type IIB analy...

  20. Verifying Process Algebra Proofs in Type Theory

    OpenAIRE

    Sellink, M. P. A.

    1993-01-01

    In this paper we study automatic verification of proofs in process algebra. Formulas of process algebra are represented by types in typed λ-calculus. Inhabitants (terms) of these types represent proofs. The specific typed λ-calculus we use is the Calculus of Inductive Constructions as implemented in the interactive proof construction program COQ.

  1. Maxwell Chern-Simons Solitons from Type IIB String Theory

    OpenAIRE

    Lee, Bum-hoon; Lee, Hyuk-jae; Ohta, Nobuyoshi; Yang, Hyun Seok

    1999-01-01

    We study various three-dimensional supersymmetric Maxwell Chern-Simons solitons by using type IIB brane configurations. We give a systematic classification of soliton spectra such as topological BPS vortices and nontopological vortices in N=2,3 supersymmetric Maxwell Chern-Simons systems via the branes of type IIB string theory. We identify the brane configurations with the soliton spectra of the field theory and obtain a nice agreement with field theory aspects. We also ...

  2. Intersection spaces, perverse sheaves and type IIB string theory

    OpenAIRE

    Banagl, Markus; Budur, Nero; Maxim, Laurentiu

    2012-01-01

    The method of intersection spaces associates rational Poincaré complexes to singular stratified spaces. For a conifold transition, the resulting cohomology theory yields the correct count of all present massless 3-branes in type IIB string theory, while intersection cohomology yields the correct count of massless 2-branes in type IIA theory. For complex projective hypersurfaces with an isolated singularity, we show that the cohomology of intersection spaces is the hypercoh...

  3. Quantification of local water and biomass in wild type PA01 biofilms by Confocal Raman Microspectroscopy.

    Science.gov (United States)

    Sandt, C; Smith Palmer, T; Pink, J; Pink, D

    2008-09-01

    Confocal Raman Microspectroscopy (CRM) can be used as a tool for the in situ evaluation of the chemical composition of living, fully submerged, unstained biofilms. In this study, the estimation of the local water content in Pseudomonas aeruginosa PA01 biofilms is given as an example. The ratio of the area of the O-H stretching vibration band at 3450 cm⁻¹ (water) to that of the C-H stretching bands at 2950 cm⁻¹ (biomass) was used to estimate the relative biofilm water content. The quantification of biofilm water and biomass was based on calibration curves generated from protein solutions. Water/biomass ratios (W:BR) equivalent to that of a 30% (w/v) protein solution were observed within some biofilm colonies. PMID:18571260
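
    A sketch of the band-area ratio behind the W:BR measure: integrate the O-H and C-H stretching regions and divide. The synthetic spectrum, band shapes, and integration windows are illustrative only:

        import numpy as np

        def band_area(wavenumber, intensity, lo, hi):
            m = (wavenumber >= lo) & (wavenumber <= hi)
            return np.trapz(intensity[m], wavenumber[m])

        wn = np.linspace(2600, 3800, 1200)                  # wavenumber axis (cm^-1)
        spectrum = (np.exp(-((wn - 2950) / 60) ** 2)        # C-H band (biomass)
                    + 3.0 * np.exp(-((wn - 3450) / 150) ** 2))  # O-H band (water)

        wbr = band_area(wn, spectrum, 3100, 3700) / band_area(wn, spectrum, 2800, 3050)
        print(wbr)  # map to water content via a protein-solution calibration curve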

  4. Type IIB flux vacua from M-theory via F-theory

    International Nuclear Information System (INIS)

    We study in detail some aspects of the duality between type IIB and M-theory. We focus on the duality between type IIB string theory on the K3 x T2/Z2 orientifold and M-theory on K3 x K3, in the F-theory limit. We give the explicit map between the fields, and in particular between the moduli of compactification, studying their behavior under the F-theory limit. Turning on fluxes generates a potential for the moduli both in type IIB and in M-theory. We verify that the type IIB analysis gives the same results as the F-theory analysis; in particular, we check that the two potentials match.

  5. Basics of F-theory from the type IIB perspective

    International Nuclear Information System (INIS)

    These short lecture notes provide an introduction to some basic notions of F-theory with some special emphasis on its relation to Type IIB orientifolds with O7/O3-planes. (Abstract Copyright [2010], Wiley Periodicals, Inc.)

  6. A Co-Operative Phenomena Type Local Realistic Theory

    OpenAIRE

    Buonomano, Vincent

    1999-01-01

    We analyze a conceivable type of local realistic theory, which we call a co-operative phenomena type local realistic theory. In an experimental apparatus to measure second- or fourth-order interference effects, it imagines that there exists a stable global pattern or mode in a hypothesized medium that is at least the size of the coherence volume of all the involved beams. If you change the position of a mirror, beam splitter, polarizer, state preparation, or block a beam then a...

  7. Rapid and reliable quantification of reovirus type 3 by high performance liquid chromatography during manufacturing of Reolysin.

    Science.gov (United States)

    Transfiguracion, Julia; Bernier, Alice; Voyer, Robert; Coelho, Helene; Coffey, Matt; Kamen, Amine

    2008-11-01

    Reolysin, a human reovirus type 3, is being evaluated in the clinic as an oncolytic therapy for various types of cancer. To facilitate the optimization and scale-up of the current process, a high performance liquid chromatography (HPLC) method has been developed that is rapid, specific and reliable for the quantification of reovirus type 3 particles. Using an anion-exchange column, the intact virus eluted away from the contaminants at 9.78 min at 350 mM NaCl in 50 mM HEPES, pH 7.10, within a total analysis time of 25 min. The virus gave a homogeneous peak with no co-elution of other compounds, as shown by photodiode array analysis. The HPLC method facilitated the optimization of the purification process, which improved both total and infectious particle recovery and contributed to the successful scale-up of the process at the 20 L, 40 L and 100 L production scales. The method is suitable for the analysis of crude virus supernatants, crude lysates, and semi-purified and purified preparations, and is therefore an ideal monitoring tool during process development and scale-up. PMID:18632239

  8. Quantification of age-related changes in the structure model type and trabecular thickness of human tibial cancellous bone

    DEFF Research Database (Denmark)

    Ding, Ming; Hvid, I

    2000-01-01

    Structure model type and trabecular thickness are important characteristics in describing cancellous bone architecture. It has been qualitatively observed that a radical change of trabeculae from plate-like to rod-like occurs in aging, bone remodeling, and osteoporosis. The thickness of trabeculae has traditionally been measured using model-based histomorphometric methods on two-dimensional (2-D) sections. However, no quantitative study based on three-dimensional (3-D) methods has been published on the age-related changes in structure model type and trabecular thickness for human peripheral (tibial) cancellous bone. In this study, 160 human proximal tibial cancellous bone specimens from 40 normal donors, aged 16 to 85 years, were collected. These specimens were micro-computed tomography (micro-CT) scanned, and the micro-CT images were segmented using optimal thresholds. From these accurate 3-D data sets, structure model type and trabecular thickness were quantified by means of novel 3-D methods. Structure model type was assessed by calculating the structure model index (SMI), which is quantified from a differential analysis of the triangulated bone surface of a structure. This technique allows quantification of structure model type, such as plate-like or rod-like objects, or mixtures of plates and rods. Trabecular thickness was calculated directly from the 3-D images, which is especially important for an a priori unknown or changing structure. Furthermore, 2-D trabecular thickness was also calculated based on the plate model. Our results showed that structure model type changed towards more rod-like in the elderly, and that trabecular thickness declined significantly with age. These changes become significant after 80 years of age for human tibial cancellous bone, whereas both properties seem to remain relatively unchanged between 20 and 80 years. Although a fairly close relationship was seen between 3-D and 2-D trabecular thickness, the real 3-D trabecular thickness was significantly underestimated by the 2-D method.

  9. Flux vacua in type II supergravity theories

    OpenAIRE

    Solard, Gautier

    2013-01-01

    We first give a review of the geometrical techniques (in particular G-structures and Generalized Complex Geometry) that are currently used in the study of supersymmetric N=1 compactifications. Then we focus on the study of type IIB compactifications to four dimensional Anti de Sitter vacua with N=1 supersymmetry. We give the general conditions that supersymmetry imposes on the solutions in particular, the internal manifold must have SU(2)-structure. Then we perform an exhaustive search of suc...

  10. Axion Inflation in Type II String Theory

    OpenAIRE

    Grimm, Thomas W.

    2007-01-01

    Inflationary models driven by a large number of axion fields are discussed in the context of type IIB compactifications with N=1 supersymmetry. The inflatons arise as the scalar modes of the R-R two-forms evaluated on vanishing two-cycles in the compact geometry. The vanishing cycles are resolved by small two-volumes or NS-NS B-fields which sit together with the inflatons in the same supermultiplets. String world-sheets wrapping the vanishing cycles correct the metric of the...

  11. Radiochemical Separation and Quantification of Tritium in Metallic Radwastes Generated from CANDU Type NPP - 13279

    International Nuclear Information System (INIS)

    As destructive methods for quantifying 3H in low- and intermediate-level radwastes, bomb oxidation, sample oxidation, and wet oxidation have been introduced. These methods have merits and demerits for the radiochemical separation of 3H. Because bomb oxidation and sample oxidation rely on heating at high temperature, the separation is relatively simple; however, since 3H diffuses deep into the interior of metals, these methods extract only the 3H distributed on the metal surface. In the wet oxidation method, 3H is oxidized in an acidic solution and extracted completely as HTO. However, incompletely oxidized 3H compounds, produced by reactions of the acidic solutions with the metallic radwastes, can be released into the air. Thus, in this study, a wet oxidation method to extract and quantify 3H from metallic radwastes was established; in particular, complete extraction, and complete oxidation of the incompletely oxidized 3H compounds over a Pt catalyst, were studied. The radioactivity of 3H in metallic radwastes was extracted and measured using the wet oxidation method and a liquid scintillation counter. Considering the surface dose rate of the sample, an appropriately sized sample was weighed, a mixture of oxidants was added to a 200 ml three-necked round flask, and the flask was quickly connected to the distilling apparatus. 20 mL of 16 wt% H2SO4 was added to the flask through a dropping funnel while stirring and refluxing. After the addition, the temperature of the mixture was raised to 96 deg. C and the sample was leached and oxidized by refluxing for 3 hours; during this time the incompletely oxidized 3H compounds were completely oxidized over the Pt catalyst to stable HTO. About 20 ml of solution was then distilled in the separation apparatus, and the distillate was mixed with Ultima Gold LLT cocktail solution. The vial was left standing for at least 24 hours, and the radioactivity of 3H was counted directly using a liquid scintillation analyzer (Packard 2500 TR/AB alpha and beta liquid scintillation analyzer). (authors)

  12. One type of classical solution in theories with vacuum periodicity

    International Nuclear Information System (INIS)

    We discuss the properties of a new type of classical solution in theories with vacuum periodicity. This type of solution is in real time and satisfies special boundary conditions. The analytic expression of such a solution in the simplest (0+1)-dimensional model is found. (orig.)

  13. Intensional Type Theory with Guarded Recursive Types qua Fixed Points on Universes

    DEFF Research Database (Denmark)

    Birkedal, Lars; Møgelberg, R.E.

    2013-01-01

    Guarded recursive functions and types are useful for giving semantics to advanced programming languages and for higher-order programming with infinite data types, such as streams, e.g., for modeling reactive systems. We propose an extension of intensional type theory with rules for forming fixed points of guarded recursive functions. Guarded recursive types can be formed simply by taking fixed points of guarded recursive functions on the universe of types. Moreover, we present a general model construction for constructing models of the intensional type theory with guarded recursive functions and types. When applied to the groupoid model of intensional type theory with the universe of small discrete groupoids, the construction gives a model of guarded recursion for which there is a one-to-one correspondence between fixed points of functions on the universe of types and fixed points of (suitable) operators on types. In particular, we find that the functor category Grpd^{ω^op} from the preordered set of natural numbers to the category of groupoids is a model of intensional type theory with guarded recursive types.

  14. Intensional type theory with guarded recursive types qua fixed points on universes

    DEFF Research Database (Denmark)

    Møgelberg, Rasmus Ejlers; Birkedal, Lars

    2013-01-01

    Guarded recursive functions and types are useful for giving semantics to advanced programming languages and for higher-order programming with infinite data types, such as streams, e.g., for modeling reactive systems. We propose an extension of intensional type theory with rules for forming fixed points of guarded recursive functions. Guarded recursive types can be formed simply by taking fixed points of guarded recursive functions on the universe of types. Moreover, we present a general model construction for constructing models of the intensional type theory with guarded recursive functions and types. When applied to the groupoid model of intensional type theory with the universe of small discrete groupoids, the construction gives a model of guarded recursion for which there is a one-to-one correspondence between fixed points of functions on the universe of types and fixed points of (suitable) operators on types. In particular, we find that the functor category from the preordered set of natural numbers to the category of groupoids is a model of intensional type theory with guarded recursive types.

  15. Introduction to type-2 fuzzy logic control theory and applications

    CERN Document Server

    Mendel, Jerry M; Tan, Woei-Wan; Melek, William W; Ying, Hao

    2014-01-01

    Written by world-class leaders in type-2 fuzzy logic control, this book offers a self-contained reference for both researchers and students. The coverage provides both background and an extensive literature survey on fuzzy logic and related type-2 fuzzy control. It also includes research questions, experiment and simulation results, and downloadable computer programs on an associated website. This key resource will prove useful to students and engineers wanting to learn type-2 fuzzy control theory and its applications.

  16. On the classification of Floer-type theories

    OpenAIRE

    Shirokova, Nadya

    2007-01-01

    In this paper we outline a program for the classification of Floer-type theories, (or defining invariants of finite type for families). We consider Khovanov complexes as a local system on the space of knots introduced by V. Vassiliev and construct the wall-crossing morphism. We extend this system to the singular locus by the cone of this morphism and introduce the definition of the local system of finite type. This program can be further generalized to the manifolds of dimen...

  17. Calculating the Fundamental Group of the Circle in Homotopy Type Theory

    OpenAIRE

    Licata, Daniel R.; Shulman, Michael

    2013-01-01

    Recent work on homotopy type theory exploits an exciting new correspondence between Martin-Lof's dependent type theory and the mathematical disciplines of category theory and homotopy theory. The category theory and homotopy theory suggest new principles to add to type theory, and type theory can be used in novel ways to formalize these areas of mathematics. In this paper, we formalize a basic result in algebraic topology, that the fundamental group of the circle is the inte...
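
    For context, the theorem formalized is the classical computation of the fundamental group of the circle; in encode-decode style proofs it is established via a universal-cover family over the circle (a sketch of the statement only, with notation assumed rather than taken verbatim from the paper):

        \pi_1(S^1) \simeq \mathbb{Z}, \qquad
        \mathsf{code} : S^1 \to \mathcal{U}, \quad
        \mathsf{code}(\mathsf{base}) :\equiv \mathbb{Z}, \quad
        \mathsf{ap}_{\mathsf{code}}(\mathsf{loop}) = \mathsf{ua}(\mathsf{succ}).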

  18. Quantification of Poly(I:C)-Mediated Protection against Genital Herpes Simplex Virus Type 2 Infection

    OpenAIRE

    Herbst-Kralovetz, Melissa M.; Pyles, Richard B.

    2006-01-01

    Alternative strategies for controlling the growing herpes simplex virus type 2 (HSV-2) epidemic are needed. A novel class of immunomodulatory microbicides has shown promise as antiherpetics, including intravaginally applied CpG-containing oligodeoxynucleotides that stimulate toll-like receptor 9 (TLR9). In the current study, we quantified protection against experimental genital HSV-2 infection provided by an alternative nucleic acid-based TLR agonist, polyinosine-poly(C) (PIC) (TLR3 agonist)....

  19. An Information Theory Perspective on Uncertainty Quantification and Bayes Law (Invited)

    Science.gov (United States)

    Gupta, H. V.; Nearing, G. S.; Gong, W.; Weijs, S. V.; Ehret, U.

    2013-12-01

    This talk will provide an accessible introduction to how concepts of Information Theory can be used to characterize and quantify uncertainty in the context of learning, model building and prediction. Conversely, it seeks to illuminate the concepts and usefulness of Information Theory by viewing it through a Bayesian perspective. In particular, we examine the question of 'What is Information', discuss how Data, Models, Conjectures and Assumptions are different kinds of information, and examine how these kinds of information are linked through Bayes Law. As a secondary goal, and time permitting, we will discuss (a) how the Information Theoretic metrics Entropy and Mutual Information can be used to inform the process of detecting and diagnosing model structural errors (epistemic uncertainty), and (b) the need for a practical, robust and communally accepted approach to computing such metrics in the presence of random data error (aleatory uncertainty).
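
    For reference, the two Information Theoretic metrics mentioned have the standard definitions (for discrete random variables):

        H(X) = -\sum_{x} p(x)\,\log p(x), \qquad
        I(X;Y) = \sum_{x,y} p(x,y)\,\log \frac{p(x,y)}{p(x)\,p(y)} = H(X) - H(X \mid Y).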

  20. Theory confronts experiment in the Casimir force measurements: quantification of errors and precision

    OpenAIRE

    Chen, F.; Klimchitskaya, G. L.; Mohideen, U.; Mostepanenko, V. M.

    2004-01-01

    We compare theory and experiment in the Casimir force measurement between gold surfaces performed with the atomic force microscope. Both random and systematic experimental errors are found leading to a total absolute error equal to 8.5 pN at 95% confidence. In terms of the relative errors, experimental precision of 1.75% is obtained at the shortest separation of 62 nm at 95% confidence level (at 60% confidence the experimental precision of 1% is confirmed at the shortest sep...

  1. The superconformal index of class S theories of type D

    Science.gov (United States)

    Lemos, Madalena; Peelaers, Wolfger; Rastelli, Leonardo

    2014-05-01

    We consider the superconformal index of class S theories of type D, which arise by compactification of the (2,0) D_n theories on a punctured Riemann surface C. We also allow for the presence of twist lines on C associated to the outer automorphism of D_n. For the two-parameter slice (p = 0, q, t) in the space of superconformal fugacities, we determine the 2d TQFT that computes the index.

  2. Real-time quantification of wild-type contaminants in glyphosate tolerant soybean

    OpenAIRE

    Noli Enrico; Battistini Elena

    2009-01-01

    Background: Trait purity is a key factor for the successful utilization of biotech varieties and is currently assessed by analysis of individual seeds or plants. Here we propose a novel PCR-based approach to test trait purity that can be applied to bulk samples. To this aim, the insertion site of a transgene is characterized and the corresponding sequence of the wild-type (wt) allele is used as the diagnostic target for amplification. As a demonstration, we developed a real-time quantitati...

  3. Uncertainty Propagation and Quantification using Constrained Coupled Adaptive Forward-Inverse Schemes: Theory and Applications

    Science.gov (United States)

    Ryerson, F. J.; Ezzedine, S. M.; Antoun, T.

    2013-12-01

    The success of implementation and execution of numerous subsurface energy technologies such as shale gas extraction, geothermal energy and underground coal gasification relies on detailed characterization of the geology and the subsurface properties. For example, spatial variability of subsurface permeability controls multi-phase flow, and hence impacts the prediction of reservoir performance. Subsurface properties can vary significantly over several length scales, making detailed subsurface characterization infeasible if not impossible. Therefore, in common practice, only sparse measurements of data are available to image or characterize the entire reservoir. For example, pressure, P, permeability, k, and production rate, Q, measurements are only available at the monitoring and operational wells. Elsewhere, the spatial distribution of k is determined by various deterministic or stochastic interpolation techniques, and P and Q are calculated from the governing forward mass balance equation assuming k is given at all locations. Uncertainty-quantification drivers, such as PSUADE, are then used to propagate and quantify the uncertainty (UQ) of quantities of interest using forward solvers. Unfortunately, forward-solver techniques and other interpolation schemes are rarely constrained by the inverse problem itself: given P and Q at observation points, determine the spatially variable map of k. The approach presented here, motivated by fluid imaging for subsurface characterization and monitoring, was developed by progressively solving increasingly complex realistic problems. The essence of this novel approach is that the forward and inverse partial differential equations are themselves the interpolators for P, k and Q, rather than extraneous and sometimes ad hoc schemes. Three cases with different sparsity of data are investigated. In the simplest case, a sufficient number of passive pressure data (pre-production pressure gradients) are given. Here, only the inverse hyperbolic equation for the distribution of k is solved, provided that Cauchy data are appropriately assigned. In the next stage, only a limited number of passive measurements are provided. In this case, the forward and inverse PDEs are solved simultaneously. This is accomplished by adding regularization terms and filtering the pressure gradients in the inverse problem. The forward and inverse problems are coupled either simultaneously or sequentially and solved using implicit schemes, adaptive mesh refinement and Galerkin finite elements. The final case arises when P, k and Q data only exist at producing wells. This exceedingly ill-posed problem calls for additional constraints on the forward-inverse coupling to ensure that the production rates are satisfied at the desired locations. Results from all three cases are presented, demonstrating stability and accuracy of the proposed approach and, more importantly, providing some insights into the consequences of data undersampling, uncertainty propagation and quantification. We illustrate the advantages of this novel approach over common UQ forward drivers on several subsurface energy problems in porous, fractured and/or faulted reservoirs. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
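
    A drastically simplified 1D illustration of the central idea, namely that the governing (Darcy-type) equation itself acts as the interpolator for permeability, is sketched below; the pressure data and flux are invented, and the authors' coupled adaptive forward-inverse machinery is not reproduced:

        import numpy as np

        x = np.linspace(0.0, 1.0, 11)   # observation points
        P = 2.0 - 1.5 * x**2            # synthetic pressure measurements
        q = 0.3                         # known (constant) volumetric flux

        # Steady 1D mass balance with Darcy's law: q = -k(x) dP/dx.
        # Inverting it cell by cell recovers the permeability map directly
        # from the data, with no extraneous interpolation scheme.
        dPdx = np.diff(P) / np.diff(x)
        k = -q / dPdx

        print(np.round(k, 3))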

  4. Type III hyperlipoproteinemia: Quantification, distribution, and nature of atherosclerotic coronary arterial narrowing in five necropsy patients.

    Science.gov (United States)

    Cabin, H S; Schwartz, D E; Virmani, R; Brewer, H B; Roberts, W C

    1981-11-01

    The amount of cross-sectional area (XSA) narrowing in each 5 mm long segment of each of the four major epicardial coronary arteries was determined in each of five patients with type III hyperlipoproteinemia (HLP) and symptomatic, fatal atherosclerotic coronary disease (CAD). Four had angina pectoris; two had acute myocardial infarcts which healed, and two died suddenly. Of the four major epicardial coronary arteries, all four were narrowed 76% to 100% in XSA by atherosclerotic plaques in two patients, three were narrowed to this degree in two patients, and two were so narrowed in one patient. Three patients had severe narrowing of the left main coronary artery. The percent of 5 mm long segments of coronary artery narrowed to various degrees was as follows: 96% to 100%, 0 to 37 (mean 14); 76% to 95%, 14 to 61 (mean 35); 51% to 75%, 9 to 41 (mean 24); 26% to 50%, 0 to 42 (mean 16), and 0% to 25%, 0 to 27 (mean 11). Utilizing a scoring system of 1 to 4 for the four categories of narrowing (1 = 0% to 25%, 2 = 26% to 50%, 3 = 51% to 75% and 4 = 76% to 100% XSA narrowing), scores per 5 mm segment for each patient ranged from 2.5 to 3.9 (mean 3.1). Thus these five type III HLP patients had severe diffuse coronary narrowing by atherosclerotic plaques. PMID:7304393
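
    The scoring arithmetic is straightforward to reproduce; a small Python illustration with hypothetical per-segment measurements (not the patients' data):

        def narrowing_score(percent_xsa: float) -> int:
            """Map % cross-sectional-area narrowing to the study's 1-4 score."""
            if percent_xsa <= 25:
                return 1
            if percent_xsa <= 50:
                return 2
            if percent_xsa <= 75:
                return 3
            return 4

        segments = [80, 95, 60, 30, 10, 77, 55]   # % XSA per 5 mm segment
        scores = [narrowing_score(p) for p in segments]
        print(scores, sum(scores) / len(scores))  # mean score per segment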

  5. Quantification of heat balance during work in three types of asbestos-protective clothing.

    Science.gov (United States)

    Holmér, I; Nilsson, H; Rissanen, S; Hirata, K; Smolander, J

    1992-01-01

    Three types of protective suits for asbestos removal work were tested in a climatic chamber at two ambient temperatures, 25 °C and 36 °C. Four subjects performed 50 min of bicycle exercise at 90 W dressed in shorts, socks and sneakers (NoPS). The same test was carried out with three different types of asbestos-protective suits worn on top of NoPS. Suits were made of GoreTex (GT), polypropylene (PP) and Tyvek (TYV). At 25 °C, responses differed very little between suits and thermal strain was small. At 36 °C, strain was least with NoPS. TYV resulted in significantly higher physiological and thermal strain than did PP and GT. Evaporative heat loss was maintained at a similar level with less permeable ensembles, but at the expense of increased skin wetness and sweat rate. Measured values compared favourably with calculated values for skin wetness and sweat rate according to ISO 7933, when resultant, rather than standard, basic data for insulation and evaporative resistance of ensembles were used. Results indicate that differences between suits that may be of little importance at normal room temperature become significant at higher stress levels (increased activity and/or air temperature). PMID:1468792

  6. Type II Actions from 11-Dimensional Chern-Simons Theories

    CERN Document Server

    Belov, D M; Belov, Dmitriy M.; Moore, Gregory W.

    2006-01-01

    This paper continues the discussion of hep-th/0605038, applying the holographic formulation of self-dual theory to the Ramond-Ramond fields of type II supergravity. We formulate the RR partition function, in the presence of nontrivial H-fields, in terms of the wavefunction of an 11-dimensional Chern-Simons theory. Using the methods of hep-th/0605038 we show how to formulate an action principle for the RR fields of both type IIA and type IIB supergravity, in the presence of RR current. We find a new topological restriction on consistent backgrounds of type IIA supergravity, namely the fourth Wu class must have a lift to the H-twisted cohomology.

  7. Multivariate Bonferroni-type inequalities theory and applications

    CERN Document Server

    Chen, John

    2014-01-01

    Multivariate Bonferroni-Type Inequalities: Theory and Applications presents a systematic account of research discoveries on multivariate Bonferroni-type inequalities published in the past decade. The emergence of new bounding approaches pushes the conventional definitions of optimal inequalities and demands new insights into linear and Fréchet optimality. The book explores these advances in bounding techniques with corresponding innovative applications. It presents the method of linear programming for multivariate bounds, multivariate hybrid bounds, sub-Markovian bounds, and bounds using Hamil
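
    For orientation, the classical univariate bounds that "Bonferroni-type" inequalities refine and generalize are, with S_k denoting the sum of the k-wise intersection probabilities,

        S_1 - S_2 \;\le\; P\Big(\bigcup_{i=1}^{n} A_i\Big) \;\le\; S_1,
        \qquad
        S_k = \sum_{1 \le i_1 < \cdots < i_k \le n} P(A_{i_1} \cap \cdots \cap A_{i_k}).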

  8. Quantification of stress history in type 304L stainless steel using positron annihilation spectroscopy

    Energy Technology Data Exchange (ETDEWEB)

    Walters, Thomas W., E-mail: Thomas.Walters@inl.gov [Idaho National Laboratory, P.O. Box 1625, Idaho Falls, ID, 83415-6188 (United States); Walters, Leon C. [Argonne National Laboratory, 9700 Cass Ave., Argonne, IL, 60439 (United States); Schoen, Marco P.; Naidu, D. Subbaram [Idaho State University, 921 S. 8th Avenue, Pocatello, ID, 83201 (United States); Dickerson, Charles [Positron Systems, Inc., 1500 Alvin Ricken Dr., Pocatello, ID, 83201-2783 (United States); Perrenoud, Ben C. [Idaho National Laboratory, P.O. Box 1625, Idaho Falls, ID, 83415-6188 (United States)

    2011-04-15

    Five Type 304L stainless steel specimens were subjected to incrementally increasing values of plastic strain. At each value of strain, the associated static stress was recorded and the specimen was subjected to positron annihilation spectroscopy (PAS) using the Doppler Broadening method. A calibration curve for the 'S' parameter as a function of stress was developed based on the five specimens. Seven different specimens (blind specimens labeled B1-B7) of 304L stainless steel were subjected to values of stress inducing plastic deformation. The values of stress ranged from 310 to 517 MPa. The seven specimens were subjected to PAS post-loading using the Doppler Broadening method, and the results were compared against the developed curve from the previous five specimens. It was found that a strong correlation exists between the 'S' parameter, stress, and strain up to a strain value of 15%, corresponding to a stress value of 500 MPa, beyond which saturation of the 'S' parameter occurs. Research Highlights: ► Specimens were initially in an annealed/recrystallized condition. ► Calibration results indicate positron annihilation measurements yield correlation. ► Deformation produced by cold work was likely larger than the maximum strain.
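
    A minimal sketch of how such a calibration curve can be inverted to estimate stress history from a measured 'S' parameter (the numbers below are invented for illustration; the paper's calibration data are not reproduced):

        import numpy as np

        stress = np.array([310.0, 350.0, 400.0, 450.0, 500.0])   # MPa
        s_param = np.array([0.510, 0.521, 0.534, 0.545, 0.552])  # measured 'S'

        # Linear fit of 'S' against stress; only meaningful below the
        # ~500 MPa saturation reported in the study.
        slope, intercept = np.polyfit(stress, s_param, 1)

        def stress_from_s(s: float) -> float:
            """Invert the calibration to estimate prior stress from 'S'."""
            return (s - intercept) / slope

        print(round(stress_from_s(0.530)), "MPa")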

  9. Quantification of Stress History in Type 304L Stainless Steel Using Positron Annihilation Spectroscopy

    Energy Technology Data Exchange (ETDEWEB)

    Thomas W. Walters

    2011-04-01

    Five type 304L stainless steel specimens were subjected to incrementally increasing values of plastic strain. At each value of strain, the associated static stress was recorded and the specimen was subjected to Positron Annihilation Spectroscopy (PAS) using the Doppler Broadening method. A calibration curve for the ‘S’ parameter as a function of stress was developed based on the five specimens. Seven different specimens (blind specimens labeled B1-B7) of 304L stainless steel were subjected to values of stress inducing plastic deformation. The values of stress ranged from 310-517 MPa. The seven specimens were subjected to PAS post-loading using the Doppler Broadening method, and the results were compared against the developed curve from the previous five specimens, to determine the feasibility of applying the curve to non-destructively quantify stress history in materials based only on the ‘S’ parameter. Results for the calibration set of specimens indicated that calibration development is possible.

  10. Classical instanton and wormhole solutions of Type IIB string theory

    OpenAIRE

    Kim, Jin Young; Lee, H. W.; Myung, Y. S.

    1996-01-01

    We study $p=-1$ D-brane in type IIB superstring theory. In addition to RR instanton, we obtain the RR charged wormhole solution in the Einstein frame. This corresponds to the ten-dimensional singular wormhole solution with infinite euclidean action.

  11. Calabi-Yau compactifications of type IIB superstring theory

    International Nuclear Information System (INIS)

    Starting from a non-self-dual action for ten-dimensional type IIB supergravity, this theory is compactified on a Calabi-Yau 3-fold and 4-fold. The compactifications are performed in the limit in which the volumes of the manifolds are large compared to the string scale.

  12. The theory of winds in early type stars

    Science.gov (United States)

    Hearn, A. G.

    This review of the theory of winds from early-type stars takes into account developments since the last discussion of the subject in 1978. At that discussion it had appeared that studies conducted with the aid of the Einstein Observatory would resolve the arguments about the nature of the winds from hot stars. The Einstein Observatory did measure X rays from OB stars; however, the issues could not be resolved. A description is provided of the impact of Einstein Observatory data on the theories considered, taking into account the theory of Castor et al. (1975), the warm wind coronal model proposed by Lamers and Rogerson (1978), the coronal model for early-type stars suggested by Hearn (1975), arguments regarding mass loss from stars advanced by Cannon and Thomas (1977), and a new numerical method for calculating stellar coronal models developed by Hearn and Vardavas (1980).

  13. Models of Particle Physics from Type IIB String Theory and F-theory: A Review

    CERN Document Server

    Maharana, Anshuman

    2012-01-01

    We review particle physics model building in type IIB string theory and F-theory. This is a region in the landscape where in principle many of the key ingredients required for a realistic model of particle physics can be combined successfully. We begin by reviewing moduli stabilisation within this framework and its implications for supersymmetry breaking. We then review model building tools and developments in the weakly coupled type IIB limit, for both local D3-branes at singularities and global models of intersecting D7-branes. Much of recent model building work has been in the strongly coupled regime of F-theory due to the presence of exceptional symmetries which allow for the construction of phenomenologically appealing Grand Unified Theories. We review both local and global F-theory model building starting from the fundamental concepts and tools regarding how the gauge group, matter sector and operators arise, and ranging to detailed phenomenological properties explored in the literature.

  14. Development and validation of an enzyme-linked immunosorbent assay for the quantification of a specific MMP-9 mediated degradation fragment of type III collagen--A novel biomarker of atherosclerotic plaque remodeling

    DEFF Research Database (Denmark)

    Barascuk, Natasha; Vassiliadis, Efstathios

    2011-01-01

    Degradation of collagen in the arterial wall by matrix metalloproteinases is the hallmark of atherosclerosis. We have developed an ELISA for the quantification of type III collagen degradation mediated by MMP-9 in urine.

  15. A ground many-valued type theory and its extensions.

    Czech Academy of Sciences Publication Activity Database

    Běhounek, Libor

    Linz : Johannes Kepler Universität, 2014 - (Flaminio, T.; Godo, L.; Gottwald, S.; Klement, E.). s. 15-18 [Linz Seminar on Fuzzy Set Theory /35./. 18.02.2014-22.02.2014, Linz] R&D Projects: GA MŠk ED1.1.00/02.0070; GA MŠk EE2.3.30.0010 Institutional support: RVO:67985807 Keywords : type theory * many-valued logics * higher-order logic * teorie typů * vícehodnotové logiky * logika vyššího řádu Subject RIV: BA - General Mathematics

  16. Constructive Type Theory and the Dialogical Approach to Meaning

    Directory of Open Access Journals (Sweden)

    Shahid Rahman

    2013-12-01

    In its origins Dialogical logic constituted one part of a new movement called the Erlangen School or Erlangen Constructivism. Its goal was to provide a new start to a general theory of language and of science. According to the Erlangen-School, language is not just a fact that we discover, but a human cultural accomplishment whose construction reason can and should control. The resulting project of intentionally constructing a scientific language was called the Orthosprache-project. Unfortunately, the Orthosprache-project was not further developed and seemed to fade away. It is possible that one of the reasons for this fading away is that the link between dialogical logic and Orthosprache was not sufficiently developed - in particular, the new theory of meaning to be found in dialogical logic seemed to be cut off from both the project of establishing the basis for scientific language and also from a general theory of meaning. We would like to contribute to clarifying one possible way in which a general dialogical theory of meaning could be linked to dialogical logic. The idea behind the proposal is to make use of constructive type theory in which logical inferences are preceded by the description of a fully interpreted language. The latter, we think, provides the means for a new start not only for the project of Orthosprache, but also for a general dialogical theory of meaning.

  17. Type IIB flux vacua from G-theory I

    OpenAIRE

    Candelas, Philip; Constantin, Andrei; Damian, Cesar; Larfors, Magdalena; Morales, Jose Francisco

    2014-01-01

    We construct non-perturbatively exact four-dimensional Minkowski vacua of type IIB string theory with non-trivial fluxes. These solutions are found by gluing together, consistently with U-duality, local solutions of type IIB supergravity on $T^4 \times \mathbb{C}$ with the metric, dilaton and flux potentials varying along $\mathbb{C}$ and the flux potentials oriented along $T^4$. We focus on solutions locally related via U-duality to non-compact Ricci-flat geometries. More g...

  18. Dilaton-driven brane inflation in type IIB string theory

    OpenAIRE

    Kim, Jin Young

    2000-01-01

    We consider the cosmological evolution of the three-brane in the background of type IIB string theory. For two different backgrounds which give a nontrivial dilaton profile we have derived Friedmann-like equations. These give a cosmological evolution similar to the one driven by matter density on the brane universe. The effective density blows up as we move towards the singularity, showing the initial singularity problem. The analysis shows that when there is axion field...

  19. Type II Superstring Field Theory: Geometric Approach and Operadic Description

    OpenAIRE

    Jurco, Branislav; Muenster, Korbinian

    2013-01-01

    We outline the construction of type II superstring field theory leading to a geometric and algebraic BV master equation, analogous to Zwiebach's construction for the bosonic string. The construction uses the small Hilbert space. Elementary vertices of the non-polynomial action are described with the help of a properly formulated minimal area problem. They give rise to an infinite tower of superstring field products defining a $\mathcal{N}=1$ generalization of a loop homotopy...

  20. On global anomalies in type IIB string theory

    OpenAIRE

    Sati, Hisham

    2011-01-01

    We study global gravitational anomalies in type IIB string theory with nontrivial middle cohomology. This requires the study of the action of diffeomorphisms on this group. Several results and constructions, including some recent vanishing results via elliptic genera, make it possible to consider this problem. Along the way, we describe in detail the intersection pairing and the action of diffeomorphisms, and highlight the appearance of various structures, including the Roch...

  1. Enhanced gauge symmetry in type II string theory

    International Nuclear Information System (INIS)

    We show how enhanced gauge symmetry in type II string theory compactified on a Calabi-Yau threefold arises from singularities in the geometry of the target space. When the target space of the type IIA string acquires a genus g curve C of A_{N-1} singularities, we find that an SU(N) gauge theory with g adjoint hypermultiplets appears at the singularity. The new massless states correspond to solitons wrapped about the collapsing cycles, and their dynamics is described by a twisted supersymmetric gauge theory on C x R^4. We reproduce this result from an analysis of the S-dual D-manifold. We check that the predictions made by this model about the nature of the Higgs branch, the monodromy of period integrals, and the asymptotics of the one-loop topological amplitude are in agreement with geometrical computations. In one of our examples we find that the singularity occurs at strong coupling in the heterotic dual proposed by Kachru and Vafa. (orig.)

  2. Irregular singularities in Liouville theory and Argyres-Douglas type gauge theories, I

    Energy Technology Data Exchange (ETDEWEB)

    Gaiotto, D. [Institute for Advanced Study (IAS), Princeton, NJ (United States); Teschner, J. [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany)

    2012-03-15

    Motivated by problems arising in the study of N=2 supersymmetric gauge theories we introduce and study irregular singularities in two-dimensional conformal field theory, here Liouville theory. Irregular singularities are associated to representations of the Virasoro algebra in which a subset of the annihilation part of the algebra act diagonally. In this paper we define natural bases for the space of conformal blocks in the presence of irregular singularities, describe how to calculate their series expansions, and how such conformal blocks can be constructed by some delicate limiting procedure from ordinary conformal blocks. This leads us to a proposal for the structure functions appearing in the decomposition of physical correlation functions with irregular singularities into conformal blocks. Taken together, we get a precise prediction for the partition functions of some Argyres-Douglas type theories on S^4. (orig.)

  3. Smooth double critical state theory for type-II superconductors

    CERN Document Server

    Ruiz, H S

    2010-01-01

    Several aspects of the general theory for the critical states of a vortex lattice and the magnetic flux dynamics in type-II superconductors are examined by a direct variational optimisation method and widespread physical principles. Our method allows us to unify a number of conventional models describing the complex vortex configurations in the critical state regime. Special attention is given to the discussion of the relation between the flux-line cutting mechanism and the depinning threshold limitation. This is done by using a smooth double critical state concept which incorporates the so-called isotropic, elliptical, T and CT models as well-defined limits of our general treatment. Starting from different initial configurations for a superconducting slab in a 3D magnetic field, we show that the predictions of the theory range from the collapse to zero of transverse magnetic moments in the isotropic model, to nearly force free configurations in which paramagnetic values can arbitrarily increase with the applied...

  4. D-brane Instantons in Type II String Theory

    Energy Technology Data Exchange (ETDEWEB)

    Blumenhagen, Ralph; /Munich, Max Planck Inst.; Cvetic, Mirjam; /Pennsylvania U.; Kachru, Shamit; /Stanford U., Phys. Dept. /SLAC; Weigand, Timo; /SLAC

    2009-06-19

    We review recent progress in determining the effects of D-brane instantons in N=1 supersymmetric compactifications of Type II string theory to four dimensions. We describe the abstract D-brane instanton calculus for holomorphic couplings such as the superpotential, the gauge kinetic function and higher fermionic F-terms. This includes a discussion of multi-instanton effects and the implications of background fluxes for the instanton sector. Our presentation also highlights, but is not restricted to the computation of D-brane instanton effects in quiver gauge theories on D-branes at singularities. We then summarize the concrete consequences of stringy D-brane instantons for the construction of semi-realistic models of particle physics or SUSY-breaking in compact and non-compact geometries.

  5. Church-style type theories over finitary weakly implicative logics.

    Czech Academy of Sciences Publication Activity Database

    Běhounek, Libor

    Vienna : Vienna University of Technology, 2014 - (Baaz, M.; Ciabattoni, A.; Hetzl, S.). s. 131-133 [LATD 2014. Logic, Algebra and Truth Degrees. 16.07.2014-19.07.2014, Vienna] R&D Projects: GA MŠk ED1.1.00/02.0070; GA MŠk EE2.3.30.0010 Institutional support: RVO:67985807 Keywords : type theory * higher-order logic * weakly implicative logics * teorie typů * logika vyššího řádu * slabě implikační logiky Subject RIV: BA - General Mathematics

  6. Type II Superstring Field Theory: Geometric Approach and Operadic Description

    CERN Document Server

    Jurco, Branislav

    2013-01-01

    We outline the construction of type II superstring field theory leading to a geometric and algebraic BV master equation, analogous to Zwiebach's construction for the bosonic string. The construction uses the small Hilbert space. Elementary vertices of the non-polynomial action are described with the help of a properly formulated minimal area problem. They give rise to an infinite tower of superstring field products defining a $\mathcal{N}=1$ generalization of a loop homotopy Lie algebra, the genus zero part generalizing a homotopy Lie algebra. Finally, we give an operadic interpretation of the construction.

  7. The hexagon gauge anomaly in Type I superstring theory

    International Nuclear Information System (INIS)

    Hexagon diagrams with external on-mass-shell Yang-Mills gauge particles are investigated in Type I superstring theory. Both the annulus and the Mobius-strip diagrams are shown to give anomalies, implying that spurious longitudinal modes cannot be consistently decoupled. However, the anomalies cancel when the two diagrams are added together if the gauge group is chosen to be SO(32). In carrying out the analysis, two different regulators are considered, but the same conclusions emerge in both cases. The authors point out where various terms in the low-energy effective action originate in superstring diagrams

  8. Deformations and descent type theory for Hopf algebras

    CERN Document Server

    Agore, A L

    2012-01-01

    Let $A \subset E$ be a given extension of Hopf algebras. A factorization $A$-form of $E$ is a Hopf algebra $H$ such that $E$ factorizes through $A$ and $H$. The bicrossed descent theory asks for the description and classification of all factorization $A$-forms of $E$. The factorization index $[E: A]^f$ is introduced as a numerical measure of the bicrossed descent theory: the extensions of factorization index 1 are those for which a Krull-Schmidt-Azumaya type theorem for bicrossed products holds. The Hopf algebra $H$ is deformed to a new Hopf algebra $H_r$, using a certain type of unitary cocentral map $r: H \to A$ called a descent map of the matched pair $(A, H, \triangleright, \triangleleft)$. This is a general deformation of a given Hopf algebra and it is of interest in its own right. Let $H$ be a given factorization $A$-form of $E$. The description of forms proves that ${\mathbb H}$ is a factorization $A$-form of $E$ if and only if ${\mathbb H}$ is isomorphic to $H_{r}$, for some descent map $r: H \to A$. T...

  9. Type IIB flux vacua from G-theory I

    CERN Document Server

    Candelas, Philip; Damian, Cesar; Larfors, Magdalena; Morales, Jose Francisco

    2014-01-01

    We construct non-perturbatively exact four-dimensional Minkowski vacua of type IIB string theory with non-trivial fluxes. These solutions are found by gluing together, consistently with U-duality, local solutions of type IIB supergravity on $T^4 \times \mathbb{C}$ with the metric, dilaton and flux potentials varying along $\mathbb{C}$ and the flux potentials oriented along $T^4$. We focus on solutions locally related via U-duality to non-compact Ricci-flat geometries. More general solutions and a complete analysis of the supersymmetry equations are presented in the companion paper [1]. We build a precise dictionary between fluxes in the global solutions and the geometry of an auxiliary $K3$ surface fibered over $\mathbb{CP}^1$. In the spirit of F-theory, the flux potentials are expressed in terms of locally holomorphic functions that parametrize the complex structure moduli space of the $K3$ fiber in the auxiliary geometry. The brane content is inferred from the monodromy data around the degeneration points o...

  10. Compactifications of type IIB string theory and F-theory models using toric geometry

    International Nuclear Information System (INIS)

    In this work we focus on the toric construction of type IIB and F-theory models. After introducing the main concepts of type IIB orientifold and F-theory compactifications as well as their connection via the Sen limit, we provide the toric tools to explicitly construct and describe the manifolds involved in our setups. On the type IIB side, we study the 'Large Volume Scenario' on four-modulus, 'Swiss cheese' Calabi-Yau manifolds obtained from four-dimensional simplicial lattice polytopes. We thoroughly analyze the possibility of generating neutral, non-perturbative superpotentials from Euclidean D3-branes in the presence of chirally intersecting D7-branes. We find that taking proper account of the Freed-Witten anomaly on non-spin cycles and the Kaehler cone conditions imposes severe constraints on the models. Nevertheless, we are able to create setups where the constraints are solved, and up to three moduli are stabilized. In the case of F-theory compactifications, we make use of toric geometry to construct a class of grand unified theory (GUT) models in F-theory. The base manifolds are hypersurfaces of the four-dimensional projective space with toric point and curve blowups. The associated Calabi-Yau fourfolds are complete intersections of two hypersurfaces in the P[231] fibered toric sixfolds. We construct SO(10) GUT models on suitable divisors of the base manifolds using the spectral cover construction. By means of abelian fluxes we break the SO(10) gauge group to SU(5)xU(1), which is interpreted as a flipped SU(5) model. With the GUT Higgses in this model it is possible to further break the gauge symmetry to the Standard Model. We present several phenomenologically attractive examples in detail. (author)

  11. Classical Bianchi type I cosmology in K-essence theory

    CERN Document Server

    Socorro, J; Espinoza-García, Abraham

    2014-01-01

    We use one of the simplest forms of the K-essence theory and apply it to the classical anisotropic Bianchi type I cosmological model, with a barotropic perfect fluid modeling the usual matter content and with a cosmological constant. The classical solutions for any but the stiff fluid and without cosmological constant are found in closed form, using a time transformation. We also present the solution with cosmological constant for some particular values of the barotropic parameter. We discuss the possible isotropization of the cosmological model, using the ratio between the anisotropic parameters and the volume of the universe, and show that this ratio tends to a constant or to zero in the different cases. We also include a qualitative analysis of the analog of the Friedmann equation.

  12. The Biequivalence of Locally Cartesian Closed Categories and Martin-Löf Type Theories

    CERN Document Server

    Clairambault, Pierre

    2011-01-01

    Seely's paper "Locally cartesian closed categories and type theory" contains a well-known result in categorical type theory: that the category of locally cartesian closed categories is equivalent to the category of Martin-Löf type theories with Pi-types, Sigma-types and extensional identity types. However, Seely's proof relies on the problematic assumption that substitution in types can be interpreted by pullbacks. Here we prove a corrected version of Seely's theorem: that the Bénabou-Hofmann interpretation of Martin-Löf type theory in locally cartesian closed categories yields a biequivalence of 2-categories. To facilitate the technical development we employ categories with families as a substitute for syntactic Martin-Löf type theories. As a second result we prove that if we remove Pi-types the resulting categories with families are biequivalent to left exact categories.

  13. Development of Primer-Probe Energy Transfer real-time PCR for the detection and quantification of porcine circovirus type 2

    DEFF Research Database (Denmark)

    Balint, Adam; Tenk, M

    2009-01-01

    A real-time PCR assay, based on Primer-Probe Energy Transfer (PriProET), was developed to improve the detection and quantification of porcine circovirus type 2 (PCV2). PCV2 is recognised as the essential infectious agent in post-weaning multisystemic wasting syndrome (PMWS) and has been associated with other disease syndromes such as porcine dermatitis and nephropathy syndrome (PDNS) and porcine respiratory disease complex (PRDC). Since circoviruses commonly occur in pig populations and there is a correlation between the severity of the disease and the viral load in the organs and blood, it is important not only to detect PCV2 but also to determine the quantitative aspects of viral load. The PriProET real-time PCR assay described in this study was tested on various virus strains and clinical forms of PMWS in order to investigate any correlation between the clinical signs and viral loads in different organs. The data obtained in this study correlate with those described earlier; namely, the viral load in 1 ml plasma and in 500 ng tissue DNA exceeds 10^7 copies in the case of PMWS. The results indicate that the new assay provides a specific, sensitive and robust tool for the improved detection and quantification of PCV2.
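
    Absolute quantification of viral load by real-time PCR is conventionally done against a standard curve of known copy numbers; a generic Python sketch with assumed example data (not the PriProET assay's actual calibration):

        import numpy as np

        # Ct values measured for 10-fold serial dilutions of a standard
        # with known copy numbers (invented example data).
        std_copies = np.array([1e7, 1e6, 1e5, 1e4, 1e3])
        std_ct = np.array([14.2, 17.6, 21.0, 24.5, 27.9])

        # Linear fit: Ct = slope * log10(copies) + intercept.
        slope, intercept = np.polyfit(np.log10(std_copies), std_ct, 1)

        # PCR efficiency follows from the slope: E = 10^(-1/slope) - 1.
        efficiency = 10 ** (-1.0 / slope) - 1.0

        def copies_from_ct(ct: float) -> float:
            """Interpolate an unknown sample's copy number from its Ct."""
            return 10 ** ((ct - intercept) / slope)

        print(f"efficiency ~ {efficiency:.1%}")
        print(f"Ct 19.3 ~ {copies_from_ct(19.3):.3g} copies")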

  14. Geometry of model building in type IIB superstring theory and F-theory compactifications

    International Nuclear Information System (INIS)

    The present thesis is devoted to the study and geometrical description of type IIB superstring theory and F-theory model building. After a concise exposition of the basic concepts of type IIB flux compactifications, we explain their relation to F-theory. Moreover, we give a brief introduction to toric geometry focusing on the construction and the analysis of compact Calabi-Yau (CY) manifolds, which play a prominent role in the compactification of extra spatial dimensions. We study the 'Large Volume Scenario' on explicit new compact four-modulus CY manifolds. We thoroughly analyze the possibility of generating neutral non-perturbative superpotentials from Euclidean D3-branes in the presence of chirally intersecting D7-branes. We find that taking proper account of the Freed-Witten anomaly on non-spin cycles and of the Kaehler cone conditions imposes severe constraints on the models. Furthermore, we systematically construct a large number of compact CY fourfolds that are suitable for F-theory model building. These elliptically fibered CYs are complete intersections of two hypersurfaces in a six-dimensional ambient space. We first construct three-dimensional base manifolds that are hypersurfaces in a toric ambient space. We find that elementary conditions, which are motivated by F-theory GUTs (Grand Unified Theory), lead to strong constraints on the geometry, which significantly reduce the number of suitable models. We work out several examples in more detail. At the end, we focus on the complex moduli space of CY threefolds. It is a known result that infinite sequences of type IIB flux vacua with imaginary self-dual flux can only occur in so-called D-limits, corresponding to singular points in complex structure moduli space. We refine this no-go theorem by demonstrating that there are no infinite sequences accumulating to the large complex structure point of a certain class of one-parameter CY manifolds. We perform a similar analysis for conifold points and for the decoupling limit, obtaining identical results. Furthermore, we establish the absence of infinite sequences in a D-limit corresponding to the large complex structure limit of a two-parameter CY. We corroborate our results with a numerical study of the sequences. (author)

  15. Design and Evaluation of a Real-Time PCR Assay for Quantification of JAK2 V617F and Wild-Type JAK2 Transcript Levels in the Clinical Laboratory

    OpenAIRE

    Merker, Jason D.; Jones, Carol D.; Oh, Stephen T.; Schrijver, Iris; Gotlib, Jason; Zehnder, James L.

    2010-01-01

    The somatic mutation JAK2 V617F is associated with BCR-ABL1-negative myeloproliferative neoplasms. Detection of this mutation aids diagnosis of these neoplasms, and quantification of JAK2 V617F may provide a method to monitor response to therapy. For these reasons, we designed a clinical assay that uses allele-specific PCR and real-time detection with hydrolysis probes for the quantification of JAK2 V617F, wild-type JAK2, and GAPDH transcripts. Mutant and wild-type JAK2 were quantified by usi...

  16. Development of a sandwich ELISA-type system for the detection and quantification of hazelnut in model chocolates.

    Science.gov (United States)

    Costa, Joana; Ansari, Parisa; Mafra, Isabel; Oliveira, M Beatriz P P; Baumgartner, Sabine

    2015-04-15

    Hazelnut is one of the most appreciated nuts, found in a wide range of processed foods. Even trace amounts of hazelnut in foods can represent a potential risk for eliciting allergic reactions in sensitised individuals. Correct labelling of processed foods is mandatory to avoid adverse reactions, so adequate methodology for evaluating the presence of offending foods is of great importance. The aim of this study was therefore to develop a highly specific and sensitive sandwich enzyme-linked immunosorbent assay (ELISA) for the detection and quantification of hazelnut in complex food matrices. Using in-house produced antibodies, an ELISA system was developed capable of detecting hazelnut down to 1 mg kg^-1 and quantifying it down to 50 mg kg^-1 in chocolates spiked with known amounts of hazelnut. These results highlight and reinforce the value of ELISA as a rapid and reliable tool for the detection of allergens in foods. PMID:25466021
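
    Sandwich ELISA readings are commonly converted to concentrations through a four-parameter logistic (4PL) calibration curve; a generic sketch with invented data (the paper's antibodies and calibrators are not modelled):

        import numpy as np
        from scipy.optimize import curve_fit

        def four_pl(x, a, b, c, d):
            """4PL model: a = response at zero dose, d = upper asymptote,
            c = inflection point (EC50), b = slope factor."""
            return d + (a - d) / (1.0 + (x / c) ** b)

        conc = np.array([1.0, 5, 10, 50, 100, 500, 1000])          # mg/kg
        od = np.array([0.05, 0.09, 0.15, 0.48, 0.80, 1.65, 1.95])  # absorbance

        popt, _ = curve_fit(four_pl, conc, od, p0=[0.05, 1.0, 100.0, 2.0],
                            maxfev=10000)

        def conc_from_od(y, a, b, c, d):
            """Invert the 4PL curve: estimate concentration from absorbance."""
            return c * ((a - d) / (y - d) - 1.0) ** (1.0 / b)

        print(f"OD 0.60 ~ {conc_from_od(0.60, *popt):.0f} mg/kg")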

  17. Type IIB flux vacua from G-theory II

    CERN Document Server

    Candelas, Philip; Damian, Cesar; Larfors, Magdalena; Morales, Jose Francisco

    2014-01-01

    We find analytic solutions of type IIB supergravity on geometries that locally take the form $\text{Mink}\times M_4\times \mathbb{C}$ with $M_4$ a generalised complex manifold. The solutions involve the metric, the dilaton, NSNS and RR flux potentials (oriented along the $M_4$) parametrised by functions varying only over $\mathbb{C}$. Under this assumption, the supersymmetry equations are solved using the formalism of pure spinors in terms of a finite number of holomorphic functions. Alternatively, the solutions can be viewed as vacua of maximally supersymmetric supergravity in six dimensions with a set of scalar fields varying holomorphically over $\mathbb{C}$. For a class of solutions characterised by up to five holomorphic functions, we outline how the local solutions can be completed to four-dimensional flux vacua of type IIB theory. A detailed study of this global completion for solutions with two holomorphic functions has been carried out in the companion paper [1]. The fluxes of the global solutions ar...

  18. Type IIA flux compactifications. Vacua, effective theories and cosmological challenges

    Energy Technology Data Exchange (ETDEWEB)

    Koers, Simon

    2009-07-30

    In this thesis, we studied a number of type IIA SU(3)-structure compactifications with O6-planes on nilmanifolds and cosets, which are tractable enough to allow for an explicit derivation of the low energy effective theory. In particular we calculated the mass spectrum of the light scalar modes, using N = 1 supergravity techniques. For the torus and the Iwasawa solution, we have also performed an explicit Kaluza-Klein reduction, which led to the same result. For the nilmanifold examples we have found that there are always three unstabilized moduli corresponding to axions in the RR sector. On the other hand, in the coset models, except for SU(2) x SU(2), all moduli are stabilized. We discussed Kaluza-Klein decoupling for the supersymmetric AdS vacua and found that it requires going to the nearly-Calabi-Yau limit. We searched for non-trivial de Sitter minima in the original flux potential away from the AdS vacuum. Finally, in chapter 7, we focused on a family of three coset spaces and constructed non-supersymmetric vacua on them. (orig.)

  20. Preclinical evaluation and quantification of [18F]MK-9470 as a radioligand for PET imaging of the type 1 cannabinoid receptor in rat brain

    International Nuclear Information System (INIS)

    [18F]MK-9470 is an inverse agonist for the type 1 cannabinoid (CB1) receptor allowing its use in PET imaging. We characterized the kinetics of [18F]MK-9470 and evaluated its ability to quantify CB1 receptor availability in the rat brain. Dynamic small-animal PET scans with [18F]MK-9470 were performed in Wistar rats on a FOCUS-220 system for up to 10 h. Both plasma and perfused brain homogenates were analysed using HPLC to quantify radiometabolites. Displacement and blocking experiments were done using cold MK-9470 and another inverse agonist, SR141716A. The distribution volume (VT) of [18F]MK-9470 was used as a quantitative measure and compared to the use of brain uptake, expressed as SUV, a simplified method of quantification. The percentage of intact [18F]MK-9470 in arterial plasma samples was 80 ± 23 % at 10 min, 38 ± 30 % at 40 min and 13 ± 14 % at 210 min. A polar radiometabolite fraction was detected in plasma and brain tissue. The brain radiometabolite concentration was uniform across the whole brain. Displacement and pretreatment studies showed that 56 % of the tracer binding was specific and reversible. VT values obtained with a one-tissue compartment model plus constrained radiometabolite input had good identifiability (≤10 %). Ignoring the radiometabolite contribution using a one-tissue compartment model alone, i.e. without constrained radiometabolite input, overestimated the [18F]MK-9470 VT, but the two were correlated. A correlation between [18F]MK-9470 VT and SUV in the brain was also found (R^2 = 0.26-0.33; p ≤ 0.03). While the presence of a brain-penetrating radiometabolite fraction complicates the quantification of [18F]MK-9470 in the rat brain, its tracer kinetics can be modelled using a one-tissue compartment model with and without constrained radiometabolite input. (orig.)
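
    For reference, the one-tissue compartment model referred to here is the standard kinetic model (stated without the paper's constrained radiometabolite input), with C_p the arterial plasma input function and C_t the tissue concentration:

        \frac{dC_t(t)}{dt} = K_1\, C_p(t) - k_2\, C_t(t), \qquad V_T = \frac{K_1}{k_2}.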

  1. Cuantificación del virus de hepatitis B por la técnica PCR tiempo real / Quantification of viral hepatitis type B using the real-time PCR technique

    Scientific Electronic Library Online (English)

    Rojas-Cordero, Elizabeth.

    2008-11-01

    The polymerase chain reaction (PCR) technique has been a particular advance in virus quantification for the management of chronic viral infections, especially for HIV and the hepatitis B and C viruses. Real-time quantification is performed on the ABI PRISM Sequence Detection System, which specifically amplifies the pb fragment of the hepatitis B virus genome. The HBV PCR kit is recommended for sample collection, and a protocol for sample storage was established. It is important to know that freezing the samples or prolonged storage decreases the sensitivity of the method; samples may be stored for years at -70 °C. Tubes with heparin, or samples from heparinized patients, alter the determination. The lower limit of detection of the B virus is 3.78 IU/ml and the upper limit is 1.4 x 10^11 IU/ml; all genotypes from A to H are determined. HBeAg-positive patients usually have values above 1 x 10^6 IU/ml, while HBeAg-negative inactive carriers usually have values below 1 x 10^4 IU/ml.

  2. Deep defects in n-type high-purity germanium: quantification of optical variants of deep level transient spectroscopy

    Science.gov (United States)

    Blondeel, A.; Clauws, P.

    1999-12-01

    The characterization of high-purity (HP) Ge for the fabrication of γ-ray detectors poses very specific demands due to the high degree of purity of the material (shallow concentration of the order of 10^9-10^10 cm^-3). Deep level transient spectroscopy (DLTS) may still be applied to this kind of material since the sensitivity is relative to the shallow doping concentration. In contrast with p-type HP Ge, which was characterized extensively in the 1980s, very little is known about deep defects in n-type HP Ge. Two optical variants of DLTS have been applied to n-type HP Ge and quantified for the first time. Several deep minority carrier traps are detected and identified as mainly Cu-related traps with concentrations in the 10^6-10^8 cm^-3 range. These Cu-related traps, which are well known as the majority carrier traps appearing in typical p-type HP Ge, are thus present as minority carrier traps in typical n-type HP Ge. The conclusion that deep-level defects in n- and p-type HP Ge are very similar could be expected from the similarity in growing conditions for the two types of material. In the first DLTS variant, known as optical DLTS or ODLTS, the deep levels are filled by optical injection (with light of above-bandgap energy) at the back ohmic contact of a reverse-biased diode. The spectrum is generated by the capacitance transients following the optical excitation. In the second variant, known as photo-induced (current) transient spectroscopy or PI(C)TS, the deep levels are also filled optically with intrinsic light, but here a neutral structure is used with two ohmic contacts in sandwich configuration. The spectrum is generated by current transients instead of capacitance transients. This method is especially suited for high-resistivity or semi-insulating materials which cannot be measured with capacitance-based DLTS. PICTS was applied to n-type Ge with a shallow concentration as low as 10^9 cm^-3.

  3. New type IIB backgrounds and aspects of their field theory duals

    Science.gov (United States)

    Caceres, Elena; Macpherson, Niall T.; Núñez, Carlos

    2014-08-01

    In this paper we study aspects of geometries in Type IIA and Type IIB String theory and elaborate on their field theory dual pairs. The backgrounds are associated with reductions to Type IIA of solutions with G_2 holonomy in eleven dimensions. We classify these backgrounds according to their G-structure, perform a non-Abelian T-duality on them and find new Type IIB configurations presenting dynamical SU(2)-structure. We study some aspects of the associated field theories defined by these new backgrounds. Various technical details are clearly spelled out.

  4. Quiver Gauge Theories on A-type ALE Spaces

    Science.gov (United States)

    Bruzzo, Ugo; Sala, Francesco; Szabo, Richard J.

    2015-03-01

    We survey and compare recent approaches to the computation of the partition functions and correlators of chiral BPS observables in gauge theories on ALE spaces based on quiver varieties and the minimal resolution X_k of the A_{k-1} toric singularity C^2/Z_k, in light of their recently conjectured duality with two-dimensional coset conformal field theories. We review and elucidate the rigorous constructions of gauge theories for a particular family of ALE spaces, using their relation to the cohomology of moduli spaces of framed torsion-free sheaves on a suitable orbifold compactification of X_k. We extend these computations to generic superconformal quiver gauge theories, obtaining in these instances new constraints on fractional instanton charges, a rigorous proof of the Nekrasov master formula, and new quantizations of Hitchin systems based on the underlying Seiberg-Witten geometry.

  5. Optimal Uncertainty Quantification

    CERN Document Server

    Owhadi, Houman; Sullivan, Timothy John; McKerns, Mike; Ortiz, Michael

    2010-01-01

    We propose a rigorous framework for Uncertainty Quantification (UQ) in which the UQ objectives and the assumptions/information set are brought to the forefront. This framework, which we call Optimal Uncertainty Quantification (OUQ), is based on the observation that, given a set of assumptions and information about the problem, there exist optimal bounds on uncertainties: these are obtained as extreme values of well-defined optimization problems corresponding to extremizing probabilities of failure, or of deviations, subject to the constraints imposed by the scenarios compatible with the assumptions and information. In particular, this framework does not implicitly impose inappropriate assumptions, nor does it repudiate relevant information. Although OUQ optimization problems are extremely large, we show that under general conditions, they have finite-dimensional reductions. As an application, we develop Optimal Concentration Inequalities (OCI) of Hoeffding and McDiarmid type. Surprisingly, contr...
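
    Schematically, the optimal bounds described here are extreme values of optimization problems over all scenarios compatible with the assumptions/information set (notation ours, abstracted from the abstract's description):

        \overline{\mathcal{U}}(\mathcal{A}) \;=\; \sup_{(f,\mu)\in\mathcal{A}} \mu\big[f(X) \ge a\big],

    where the pairs (f, μ) range over the admissible response functions and probability measures, and f(X) ≥ a is the failure event.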

  6. Quantification of the types of water in Eudragit RLPO polymer and the kinetics of water loss using FTIR

    DEFF Research Database (Denmark)

    Pirayavaraporn, Chompak; Rades, Thomas

    2013-01-01

    Coalescence of polymer particles in polymer matrix tablets influences drug release. The literature has emphasized that coalescence occurs above the glass transition temperature (Tg) of the polymer and that water may plasticize (lower Tg) the polymer. However, we have shown previously that nonplasticizing water also influences coalescence of Eudragit RLPO; so there is a need to quantify the different types of water in Eudragit RLPO. The purpose of this study was to distinguish the types of water present in Eudragit RLPO polymer and to investigate the water loss kinetics for these different types of water. Eudragit RLPO was stored in tightly closed chambers at various relative humidities (0, 33, 56, 75, and 94%) until equilibrium was reached. Fourier transform infrared spectroscopy (FTIR)-DRIFTS was used to investigate molecular interactions between water and polymer, and water loss over time. Using a curve-fitting procedure, the water region (3100-3700 cm(-1)) of the spectra was analyzed and used to identify water present in differing environments in the polymer and to determine the water loss kinetics upon purging the sample with dry compressed air. It was found that four environments can be differentiated (dipole interaction of water with quaternary ammonium groups, water clusters, and water indirectly and directly binding to the carbonyl groups of the polymer), but it was not possible to distinguish whether the different types of water were lost at different rates. It is suggested that water is trapped in the polymer in different forms, and this should be considered when investigating coalescence of polymer matrices.
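
    In practice, curve fitting the water region means decomposing the broad O-H stretching envelope into a few overlapping bands, one per water environment, and tracking their areas. A minimal sketch with Gaussian components and scipy follows; the band centres, widths, and the two-band choice are illustrative assumptions, not the four environments identified in the study.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussians(x, *p):
    """Sum of Gaussian bands; p = (amp1, cen1, wid1, amp2, cen2, wid2, ...)."""
    y = np.zeros_like(x)
    for amp, cen, wid in zip(p[0::3], p[1::3], p[2::3]):
        y += amp * np.exp(-0.5 * ((x - cen) / wid) ** 2)
    return y

# Synthetic spectrum over the water region (cm^-1) with two bands + noise.
x = np.linspace(3100, 3700, 600)
true = (1.0, 3250.0, 60.0, 0.6, 3500.0, 80.0)
rng = np.random.default_rng(0)
y = gaussians(x, *true) + 0.01 * rng.standard_normal(x.size)

p0 = (0.8, 3300.0, 50.0, 0.5, 3550.0, 50.0)    # initial guesses per band
popt, _ = curve_fit(gaussians, x, y, p0=p0)
areas = [amp * wid * np.sqrt(2 * np.pi)
         for amp, wid in zip(popt[0::3], popt[2::3])]
print("fitted band areas:", [f"{a:.1f}" for a in areas])
```

    Repeating the fit on spectra recorded during the dry-air purge and plotting each band area against time is what would, in principle, give per-environment loss kinetics.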

  7. In-vivo segmentation and quantification of coronary lesions by optical coherence tomography images for a lesion type definition and stenosis grading.

    Science.gov (United States)

    Celi, Simona; Berti, Sergio

    2014-10-01

    Optical coherence tomography (OCT) is a catheter-based medical imaging technique that produces cross-sectional images of blood vessels. This technique is particularly useful for studying coronary atherosclerosis. In this paper, we present a new framework that allows segmentation and quantification of OCT images of coronary arteries to define the plaque type and stenosis grading. These analyses are usually carried out on-line on the OCT workstation, where measurement is mainly operator-dependent and mouse-based. The aim of this program is to simplify and improve the processing of OCT images for morphometric investigations and to present a fast procedure to obtain 3D geometrical models that can also be used for external purposes such as finite element simulations. The main phases of our toolbox are the lumen segmentation and the identification of the main tissues in the artery wall. We validated the proposed method against identification and segmentation performed manually by expert OCT readers. The method was evaluated on ten datasets from clinical routine, and the validation was performed on 210 images randomly extracted from the pullbacks. Our results show that automated segmentation of the vessel and of the tissue components is possible off-line with a precision that is comparable to manual segmentation for the tissue components and to the proprietary OCT console for the lumen segmentation. Several OCT sections have been processed to provide clinical outcomes. PMID:25077844
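
    The first stage of such a pipeline, lumen segmentation, can be caricatured as thresholding followed by selection of the largest connected region and an area measurement. Real OCT processing works on polar A-scans with catheter-artifact removal, so the following is only a toy sketch on synthetic data.

```python
import numpy as np
from scipy import ndimage

def segment_lumen(image, threshold):
    """Toy lumen segmentation: threshold the (dark) lumen, keep the largest
    connected component, and return its mask and pixel area."""
    mask = image < threshold
    labels, n = ndimage.label(mask)
    if n == 0:
        return np.zeros_like(mask), 0
    sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
    largest = 1 + int(np.argmax(sizes))
    lumen = labels == largest
    return lumen, int(lumen.sum())

# Synthetic cross-section: bright wall, dark circular lumen.
yy, xx = np.mgrid[0:200, 0:200]
image = 0.8 + 0.05 * np.random.default_rng(1).standard_normal((200, 200))
image[(yy - 100) ** 2 + (xx - 100) ** 2 < 40 ** 2] = 0.1
lumen, area = segment_lumen(image, threshold=0.4)
print(f"lumen area: {area} px (true: ~{int(np.pi * 40**2)} px)")
```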

  8. Quantification of the N-terminal propeptide of human procollagen type I (PINP): comparison of ELISA and RIA with respect to different molecular forms.

    DEFF Research Database (Denmark)

    Jensen, Charlotte Harken; Hansen, M

    1998-01-01

    This paper compares the results of procollagen type I N-terminal propeptide (PINP) quantification by radioimmunoassay (RIA) and enzyme-linked immunosorbent assay (ELISA). PINP in serum from a patient with uremic hyperparathyroidism was measured by RIA and ELISA at 20 micrograms/l and 116 micrograms/l, respectively, and the corresponding concentrations in dialysis fluid were 94.5 micrograms/l and 140 micrograms/l. PINP antigen appears in two distinct peaks following size chromatography, and the two peak fractions display immunological identity and identical M(r)'s (27 kDa; SDS-PAGE). Analysis of fractions from size-separated amniotic fluid, serum and dialysis fluid demonstrated that the RIA failed to measure the low molecular weight form of PINP. However, the anti-PINP supplied with the RIA kit and the anti-PINP applied in the ELISA reacted equally well with both molecular forms of PINP when analysed in a direct ELISA. It is concluded that the major difference between the ELISA and RIA results is due to assay efficacy with respect to the low molecular weight form of PINP. Publication date: 1998-Jan-12

  9. Dystrophin quantification

    Science.gov (United States)

    Anthony, Karen; Arechavala-Gomeza, Virginia; Taylor, Laura E.; Vulin, Adeline; Kaminoh, Yuuki; Torelli, Silvia; Feng, Lucy; Janghra, Narinder; Bonne, Gisèle; Beuvin, Maud; Barresi, Rita; Henderson, Matt; Laval, Steven; Lourbakos, Afrodite; Campion, Giles; Straub, Volker; Voit, Thomas; Sewry, Caroline A.; Morgan, Jennifer E.; Flanigan, Kevin M.

    2014-01-01

    Objective: We formed a multi-institution collaboration in order to compare dystrophin quantification methods, reach a consensus on the most reliable method, and report its biological significance in the context of clinical trials. Methods: Five laboratories with expertise in dystrophin quantification performed a data-driven comparative analysis of a single reference set of normal and dystrophinopathy muscle biopsies using quantitative immunohistochemistry and Western blotting. We developed standardized protocols and assessed inter- and intralaboratory variability over a wide range of dystrophin expression levels. Results: Results from the different laboratories were highly concordant with minimal inter- and intralaboratory variability, particularly with quantitative immunohistochemistry. There was a good level of agreement between data generated by immunohistochemistry and Western blotting, although immunohistochemistry was more sensitive. Furthermore, mean dystrophin levels determined by alternative quantitative immunohistochemistry methods were highly comparable. Conclusions: Considering the biological function of dystrophin at the sarcolemma, our data indicate that the combined use of quantitative immunohistochemistry and Western blotting is a reliable biochemical outcome measure for Duchenne muscular dystrophy clinical trials, and that standardized protocols can be comparable between competent laboratories. The methodology validated in our study will facilitate the development of experimental therapies focused on dystrophin production and their regulatory approval. PMID:25355828

  10. Preclinical evaluation and quantification of [{sup 18}F]MK-9470 as a radioligand for PET imaging of the type 1 cannabinoid receptor in rat brain

    Energy Technology Data Exchange (ETDEWEB)

    Casteels, Cindy [K.U. Leuven, University Hospital Leuven, Division of Nuclear Medicine, Leuven (Belgium); K.U. Leuven, MoSAIC, Molecular Small Animal Imaging Center, Leuven (Belgium); University Hospital Gasthuisberg, Division of Nuclear Medicine, Leuven (Belgium); Koole, Michel; Laere, Koen van [K.U. Leuven, University Hospital Leuven, Division of Nuclear Medicine, Leuven (Belgium); K.U. Leuven, MoSAIC, Molecular Small Animal Imaging Center, Leuven (Belgium); Celen, Sofie; Bormans, Guy [K.U. Leuven, MoSAIC, Molecular Small Animal Imaging Center, Leuven (Belgium); K.U. Leuven, Laboratory for Radiopharmacy, Leuven (Belgium)

    2012-09-15

    [{sup 18}F]MK-9470 is an inverse agonist for the type 1 cannabinoid (CB1) receptor allowing its use in PET imaging. We characterized the kinetics of [{sup 18}F]MK-9470 and evaluated its ability to quantify CB1 receptor availability in the rat brain. Dynamic small-animal PET scans with [{sup 18}F]MK-9470 were performed in Wistar rats on a FOCUS-220 system for up to 10 h. Both plasma and perfused brain homogenates were analysed using HPLC to quantify radiometabolites. Displacement and blocking experiments were done using cold MK-9470 and another inverse agonist, SR141716A. The distribution volume (V{sub T}) of [{sup 18}F]MK-9470 was used as a quantitative measure and compared to the use of brain uptake, expressed as SUV, a simplified method of quantification. The percentage of intact [{sup 18}F]MK-9470 in arterial plasma samples was 80 {+-} 23 % at 10 min, 38 {+-} 30 % at 40 min and 13 {+-} 14 % at 210 min. A polar radiometabolite fraction was detected in plasma and brain tissue. The brain radiometabolite concentration was uniform across the whole brain. Displacement and pretreatment studies showed that 56 % of the tracer binding was specific and reversible. V{sub T} values obtained with a one-tissue compartment model plus constrained radiometabolite input had good identifiability ({<=}10 %). Ignoring the radiometabolite contribution using a one-tissue compartment model alone, i.e. without constrained radiometabolite input, overestimated the [{sup 18}F]MK-9470 V{sub T}, but was correlated. A correlation between [{sup 18}F]MK-9470 V{sub T} and SUV in the brain was also found (R {sup 2} = 0.26-0.33; p {<=} 0.03). While the presence of a brain-penetrating radiometabolite fraction complicates the quantification of [{sup 18}F]MK-9470 in the rat brain, its tracer kinetics can be modelled using a one-tissue compartment model with and without constrained radiometabolite input. (orig.)
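
    The quantification rests on the one-tissue compartment model, in which tracer exchanges between arterial plasma C_p and a single tissue compartment C_t with rate constants K1 and k2, and the distribution volume is V_T = K1/k2. Below is a minimal sketch of that model without the constrained radiometabolite input used by the authors; the rate constants and plasma input function are illustrative assumptions.

```python
import numpy as np
from scipy.integrate import odeint

def one_tissue_model(K1, k2, t, plasma):
    """One-tissue compartment model: dC_t/dt = K1*C_p(t) - k2*C_t(t)."""
    def dCdt(C, ti):
        return K1 * np.interp(ti, t, plasma) - k2 * C
    return odeint(dCdt, 0.0, t)[:, 0]

t = np.linspace(0, 210, 500)                  # minutes
plasma = 10.0 * t * np.exp(-t / 15.0)         # synthetic input function
K1, k2 = 0.15, 0.05                           # illustrative rate constants
tissue = one_tissue_model(K1, k2, t, plasma)
print(f"V_T = K1/k2 = {K1 / k2:.1f}")
print(f"peak tissue activity at t = {t[np.argmax(tissue)]:.0f} min")
```

    In an actual analysis K1 and k2 are fitted to the measured time-activity curves given the metabolite-corrected plasma input, and V_T is reported as the quantitative endpoint, as in the record above.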

  11. Identification of enzymes and quantification of metabolic fluxes in the wild type and in a recombinant Aspergillus oryzae strain

    DEFF Research Database (Denmark)

    Pedersen, Henrik; Carlsen, Morten

    1999-01-01

    Two alpha-amylase-producing strains of Aspergillus oryzae, a wild-type strain and a recombinant containing additional copies of the alpha-amylase gene, were characterized with respect to enzyme activities, localization of enzymes to the mitochondria or cytosol, macromolecular composition, and metabolic fluxes through the central metabolism during glucose-limited chemostat cultivations. Citrate synthase and isocitrate dehydrogenase (NAD) activities were found only in the mitochondria; glucose-6-phosphate dehydrogenase and glutamate dehydrogenase (NADP) activities were found only in the cytosol; and isocitrate dehydrogenase (NADP), glutamate oxaloacetate transaminase, malate dehydrogenase, and glutamate dehydrogenase (NAD) activities were found in both the mitochondria and the cytosol. The measured biomass components and ash could account for 95% (wt/wt) of the biomass. The protein and RNA contents increased linearly with increasing specific growth rate, but the carbohydrate and chitin contents decreased. A metabolic model consisting of 69 fluxes and 59 intracellular metabolites was used to calculate the metabolic fluxes through the central metabolism at several specific growth rates, with ammonia or nitrate as the nitrogen source. The flux through the pentose phosphate pathway increased with increasing specific growth rate. The fluxes through the pentose phosphate pathway were 15 to 26% higher for the recombinant strain than for the wild-type strain.
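
    Flux quantification of this kind reduces to linear algebra: at (pseudo-)steady state the stoichiometric balance S·v = 0 holds for each internal metabolite, and measured exchange rates pin down enough fluxes for the remainder to follow by least-squares solution. A minimal sketch on a toy three-flux network; the stoichiometry and measured rates are invented for illustration and bear no relation to the 69-flux model of the record.

```python
import numpy as np

# Toy network: A -(v1)-> B -(v2)-> C, plus a branch B -(v3)-> D.
# Steady-state balance for the internal metabolite B: v1 - v2 - v3 = 0.
S = np.array([[1.0, -1.0, -1.0]])   # rows: internal metabolites, cols: fluxes

# Measured exchange fluxes (e.g. substrate uptake and one product rate),
# appended to the balance as extra equations:
A = np.vstack([S,
               [1.0, 0.0, 0.0],     # v1 = 10.0 (measured)
               [0.0, 1.0, 0.0]])    # v2 = 6.5  (measured)
b = np.array([0.0, 10.0, 6.5])

v, *_ = np.linalg.lstsq(A, b, rcond=None)
print("estimated fluxes v1, v2, v3:", np.round(v, 2))   # -> [10.  6.5  3.5]
```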

  12. Quantification of zinc atoms in a surface alloy on copper in an industrial-type methanol synthesis catalyst

    DEFF Research Database (Denmark)

    Kuld, Sebastian; Moses, Poul Georg

    2014-01-01

    Methanol has recently attracted renewed interest because of its potential importance as a solar fuel. Methanol is also an important bulk chemical that is most efficiently formed over the industrial Cu/ZnO/Al2O3 catalyst. The identity of the active site and, in particular, the role of ZnO as a promoter for this type of catalyst is still under intense debate. Structural changes that are strongly dependent on the pretreatment method have now been observed for an industrial-type methanol synthesis catalyst. A combination of chemisorption, reaction, and spectroscopic techniques provides a consistent picture of surface alloying between copper and zinc. This analysis enables a reinterpretation of the methods that have been used for the determination of the Cu surface area and provides an opportunity to independently quantify the specific Cu and Zn areas. This method may also be applied to other systems where metal–support interactions are important, and this work generally addresses the role of the carrier and the nature of the interactions between carrier and metal in heterogeneous catalysts.

  13. Quantification of zinc atoms in a surface alloy on copper in an industrial-type methanol synthesis catalyst.

    Science.gov (United States)

    Kuld, Sebastian; Conradsen, Christian; Moses, Poul Georg; Chorkendorff, Ib; Sehested, Jens

    2014-06-01

    Methanol has recently attracted renewed interest because of its potential importance as a solar fuel. Methanol is also an important bulk chemical that is most efficiently formed over the industrial Cu/ZnO/Al2O3 catalyst. The identity of the active site and, in particular, the role of ZnO as a promoter for this type of catalyst is still under intense debate. Structural changes that are strongly dependent on the pretreatment method have now been observed for an industrial-type methanol synthesis catalyst. A combination of chemisorption, reaction, and spectroscopic techniques provides a consistent picture of surface alloying between copper and zinc. This analysis enables a reinterpretation of the methods that have been used for the determination of the Cu surface area and provides an opportunity to independently quantify the specific Cu and Zn areas. This method may also be applied to other systems where metal-support interactions are important, and this work generally addresses the role of the carrier and the nature of the interactions between carrier and metal in heterogeneous catalysts. PMID:24764288

  14. Krichever-Novikov type algebras theory and applications

    CERN Document Server

    Schlichenmaier, Martin

    2014-01-01

    Krichever and Novikov introduced certain classes of infinite dimensional Lie algebras to extend the Virasoro algebra and its related algebras to Riemann surfaces of higher genus. The author of this book generalized and extended them to a more general setting needed by the applications. Examples of applications are Conformal Field Theory, Wess-Zumino-Novikov-Witten models, moduli space problems, integrable systems, Lax operator algebras, and deformation theory of Lie algebras. Furthermore they constitute an important class of infinite dimensional Lie algebras which due to their geometric origin are...

  15. A syntactic theory of type generativity and sharing

    OpenAIRE

    Leroy, Xavier

    1995-01-01

    This paper presents a purely syntactic account of type generativity and sharing --- two key mechanisms in the Standard ML module system --- and shows its equivalence with the traditional stamp-based description of these mechanisms. This syntactic description recasts the Standard ML module system in a more abstract, type-theoretic framework.

  16. Initial layer theory and model equations of Volterra type

    International Nuclear Information System (INIS)

    It is demonstrated here that there exist initial layers to singularly perturbed Volterra equations whose thicknesses are not of order of magnitude O(ε), ε → 0. It is also shown that the initial layer theory is extremely useful because it allows one to construct an approximate solution to an equation that is almost identical to the exact solution. (author)

  17. Small instantons, del Pezzo surfaces and type I' theory

    International Nuclear Information System (INIS)

    Small instantons of exceptional groups arise geometrically by a collapsing del Pezzo surface in a CY. We use this to explain the physics of a 4-brane probe in type I' compactification to nine dimensions. (orig.)

  18. Algebraic theory of type-and-effect systems

    OpenAIRE

    Kammar, Ohad

    2014-01-01

    We present a general semantic account of Gifford-style type-and-effect systems. These type systems provide lightweight static analyses annotating program phrases with the sets of possible computational effects they may cause, such as memory access and modification, exception raising, and non-deterministic choice. The analyses are used, for example, to justify the program transformations typically used in optimising compilers, such as code reordering and inlining. Despite their ...

  19. Experimental quantification of dynamic forces and shaft motion in two different types of backup bearings under several contact conditions

    DEFF Research Database (Denmark)

    Lahriri, Said; Santos, Ilmar

    2013-01-01

    This paper presents an experimental study of a shaft impacting its stator for different cases. The paper focuses mainly on the measured contact forces and the shaft motion in two different types of backup bearings; as such, the measured contact forces are thoroughly studied. These measured contact forces enable the hysteresis loops to be computed and analyzed. Consequently, the contact forces are plotted against the local deformation in order to assess the contact force loss during the impacts. The shaft motion during contact with the backup bearing is verified with two-sided spectrum analyses. The analyses show that with a conventional annular guide, the shaft undergoes a direct transition from normal operation to a full annular backward whirling state for the case of external excitation. However, in a self-excited vibration case, where the speed is gradually increased and decreased through the first critical speed, the investigation revealed that different paths initiated the onset of backward whip and whirling motion. In order to improve the whirling and full annular contact behavior, an unconventional pinned backup bearing is realized. The idea is to utilize pin connections that center the rotor during impacts and prevent the shaft from entering a full annular contact state. The experimental results show that the shaft escapes the pins and returns to normal operation during an impact event.

  20. Constructive Type Theory and the Dialogical Approach to Meaning

    OpenAIRE

    Shahid Rahman; Nicolas Clerbout

    2013-01-01

    In its origins Dialogical logic constituted one part of a new movement called the Erlangen School or Erlangen Constructivism. Its goal was to provide a new start to a general theory of language and of science. According to the Erlangen-School, language is not just a fact that we discover, but a human cultural accomplishment whose construction reason can and should control. The resulting project of intentionally constructing a scientific language was called the Orthosprache-project. Unfortunat...

  1. Applications of Reflection Amplitudes in Toda-type Theories

    CERN Document Server

    Ahn, C; Rim, C; Ahn, Changrim; Kim, Chanju; Rim, Chaiho

    2001-01-01

    Reflection amplitudes are defined as two-point functions of a certain class of conformal field theories where primary fields are given by vertex operators with real couplings. Among these, we consider (super-)Liouville theory and simply and non-simply laced Toda theories. In this paper we show how to compute the scaling functions of the effective central charge for the models perturbed by some primary fields which maintain integrability. This new derivation of the scaling functions is compared with the results from the conventional TBA approach and confirms our approach, along with other non-perturbative results such as exact expressions for the on-shell masses in terms of the parameters in the action and exact free energies. Another important application of the reflection amplitudes is the computation of one-point functions for the integrable models. Introducing functional relations between the one-point functions in terms of the reflection amplitudes, we obtain explicit expressions for simply-laced and non-simply-laced af...

  2. Surveying problem solution with theory and objective type questions

    CERN Document Server

    Chandra, AM

    2005-01-01

    The book provides a lucid and step-by-step treatment of the various principles and methods for solving problems in land surveying. Each chapter starts with basic concepts and definitions, then solution of typical field problems and ends with objective type questions. The book explains errors in survey measurements and their propagation. Survey measurements are detailed next. These include horizontal and vertical distance, slope, elevation, angle, and direction. Measurement using stadia tacheometry and EDM are then highlighted, followed by various types of levelling problems. Traversing is then explained, followed by a detailed discussion on adjustment of survey observations and then triangulation and trilateration.

  3. The Classification of Gun’s Type Using Image Recognition Theory

    OpenAIRE

    Kulthon Kasemsan, M. L.

    2014-01-01

    The research aims to develop the Gun’s Type and Models Classification (GTMC) system using image recognition theory. It is expected that this study can serve as a guide for law enforcement agencies or at least serve as the catalyst for a similar type of research. Master image storage and image recognition are the two main processes. The procedures involved original images, scaling, gray scale, canny edge detector, SUSAN corner detector, block matching template, and finally gun type’s recog...

  4. A Calculus of Substitutions for Incomplete-Proof Representation in Type Theory

    OpenAIRE

    Muñoz, César

    1997-01-01

    In the framework of intuitionistic logic and type theory, the concepts of «propositions» and «types» are identified. This principle is known as the Curry-Howard isomorphism, and it is the basis of mathematical formalisms where proofs are represented as typed lambda-terms. In order to see the process of proof construction as an incremental process of term construction, it is necessary to extend the lambda-calculus with new operators. First, we consider typed meta-variables to represent ...

  5. Brans-Dicke-type theories and avoidance of the cosmological singularity

    CERN Document Server

    Quirós, I; Cardenas, R; Quiros, Israel; Bonal, Rolando; Cardenas, Rolando

    2000-01-01

    A point of view based on a postulate about the physical equivalence of conformal representations of a given physical situation in Brans-Dicke-type theories of gravitation is presented; it automatically resolves the debate about the physical equivalence of the Jordan-frame and Einstein-frame formulations of scalar-tensor theory. The cosmological consequences of this viewpoint for general relativity are studied, and its implications for the low-energy limit of string theory are outlined.

  6. On the Nonlinear Theory of Viscoelasticity of Differential Type

    CERN Document Server

    Pucci, Edvige

    2011-01-01

    We consider nonlinear viscoelastic materials of differential type, and for some special models we derive exact solutions of initial boundary value problems. These exact solutions are used to investigate the reasons for the non-existence of global solutions of such equations.

  7. Relative quantification and detection of different types of infectious bursal disease virus in bursa of Fabricius and cloacal swabs using real time RT-PCR SYBR green technology

    DEFF Research Database (Denmark)

    Li, Yiping; Handberg, K.J.

    2007-01-01

    In the present study, different types of infectious bursal disease virus (IBDV), virulent strain DK01, classic strain F52/70 and vaccine strain D78, were quantified and detected in infected bursa of Fabricius (BF) and cloacal swabs using quantitative real time RT-PCR with SYBR green dye. For selection of a suitable internal control gene, real time PCR parameters were evaluated for three candidate genes: glyceraldehyde-3-phosphate dehydrogenase (GAPDH), 28S rRNA and beta-actin. Based on this, beta-actin was selected as an internal control for quantification of IBDVs in BF. All BF samples with D78, DK01 or F52/70 inoculation were detected as virus positive at day 1 post inoculation (p.i.). The D78 viral load peaked at day 4 and day 8 p.i., while the DK01 and F52/70 viral loads showed relatively high levels at day 2 p.i. In cloacal swabs, viruses were detectable at day 2 p.i. for DK01 and F52/70, and at day 8 p.i. for D78. Importantly, the primer sets were specific: the D78 primer set gave no amplification of F52/70 and DK01, and the DK01 primer set gave no amplification of D78; thus DK01 and D78 could be quantified simultaneously in dually infected chickens by use of these two sets of primers. The method described here is robust and may serve as a useful high-capacity tool for diagnostics as well as for viral pathogenesis studies.
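
    Relative quantification against an internal control gene such as beta-actin is typically done with the comparative-Ct method (2^-ΔΔCt, Livak); the record does not spell out its exact calculation, so the following is a generic sketch with invented Ct values.

```python
def ddct(ct_target, ct_ref, ct_target_cal, ct_ref_cal):
    """Comparative-Ct (Livak) relative quantification.

    ct_target / ct_ref:        Ct of viral target and internal control (sample)
    ct_target_cal / ct_ref_cal: same for the calibrator (e.g. a day-1 sample)
    Returns fold change relative to the calibrator, assuming ~100% efficiency.
    """
    d_ct_sample = ct_target - ct_ref
    d_ct_cal = ct_target_cal - ct_ref_cal
    return 2.0 ** -(d_ct_sample - d_ct_cal)

# Illustrative Ct values: viral load at day 4 vs. a day-1 calibrator.
fold = ddct(ct_target=21.0, ct_ref=17.0, ct_target_cal=26.0, ct_ref_cal=17.2)
print(f"viral RNA fold change vs. calibrator: {fold:.1f}x")
```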

  8. Eady Solitary Waves: A Theory of Type B Cyclogenesis.

    Science.gov (United States)

    Mitsudera, Humio

    1994-11-01

    Localized baroclinic instability in a weakly nonlinear, long-wave limit using an Eady model is studied. The resulting evolution equations have a form of the KdV type, including extra terms representing linear coupling. Baroclinic instability is triggered locally by the collision between two neutral solitary waves (one trapped at the upper boundary and the other at the lower boundary) if their incident amplitudes are sufficiently large. This characteristic is explained from the viewpoint of resonance when the relative phase speed, which depends on the amplitudes, is less than a critical value. The upper and lower disturbances grow in a coupled manner (resembling a normal-mode structure) initially, but they reverse direction slowly as the amplitudes increase, and eventually separate from each other. The motivation for this study is to investigate a type of extratropical cyclogenesis that involves a preexisting upper trough (termed Type B development) from the viewpoint of resonant solitary waves. Two cases are of particular interest. First, the author examines a case where an upper disturbance preexists over an undisturbed low-level waveguide. The solitary waves exhibit behavior similar to that conceived by Hoskins et al. for Type B development; the lower disturbance is forced one-sidedly by a preexisting upper disturbance initially, but in turn forces the latter once the former attains a sufficient amplitude, thus resulting in mutual reinforcement. Second, if a weak perturbation exists at the surface ahead of the preexisting strong upper disturbance, baroclinic instability is triggered when the two waves interact. Even though the amplitude of the lower disturbance is initially much weaker, it is intensified quickly and catches up with the amplitude of the upper disturbance, so that the coupled vertical structure eventually resembles that of an unstable normal mode. These results describe the observed behavior in Type B atmospheric cyclogenesis quite well.
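
    The uncoupled building block of such evolution equations is the KdV equation itself, u_t + 6u u_x + u_xxx = 0, whose solitary waves travel at an amplitude-dependent speed. A minimal pseudo-spectral integrator for the standard equation follows; the linear coupling terms and Eady-specific coefficients of the paper are omitted, so this is a generic sketch only.

```python
import numpy as np

def kdv_step(u, k, dt):
    """One split step for u_t + 6 u u_x + u_xxx = 0 on a periodic domain:
    exact linear (dispersive) step in Fourier space, Euler nonlinear step."""
    u_hat = np.fft.fft(u) * np.exp(1j * k**3 * dt)   # u_t = -u_xxx, exactly
    u = np.real(np.fft.ifft(u_hat))
    ux = np.real(np.fft.ifft(1j * k * np.fft.fft(u)))
    return u - 6.0 * u * ux * dt                      # u_t = -6 u u_x

L, n = 50.0, 256
x = np.linspace(0, L, n, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)

c = 1.0                                               # soliton speed
u = 0.5 * c / np.cosh(0.5 * np.sqrt(c) * (x - 10.0))**2
dt = 1e-3
for _ in range(2000):                                 # evolve to t = 2
    u = kdv_step(u, k, dt)
print(f"peak now near x = {x[np.argmax(u)]:.1f} (started at 10.0, speed {c})")
```

    The soliton u = (c/2) sech^2(sqrt(c) (x - ct - x0)/2) should simply translate at speed c, which gives a quick sanity check of the integrator before coupling terms are added.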

  9. Cosmic web-type classification using decision theory

    CERN Document Server

    Leclercq, Florent; Wandelt, Benjamin

    2015-01-01

    We propose a decision criterion for segmenting the cosmic web into different structure types (voids, sheets, filaments and clusters) on the basis of their respective probabilities and the strength of data constraints. Our approach is inspired by an analysis of games of chance where the gambler only plays if a positive expected net gain can be achieved based on some degree of privileged information. The result is a general solution for classification problems in the face of uncertainty, including the option of not committing to a class for a candidate object. As an illustration, we produce high-resolution maps of web-type constituents in the nearby Universe as probed by the Sloan Digital Sky Survey main galaxy sample. Other possible applications include the selection and labeling of objects in catalogs derived from astronomical survey data.
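
    The gambling analogy translates into a simple rule: commit to the most probable structure type only when the expected net gain of that bet is positive; otherwise abstain. A minimal sketch of such a rule follows; the gain/loss values and posterior probabilities are illustrative assumptions, not those derived in the paper.

```python
import numpy as np

TYPES = ["void", "sheet", "filament", "cluster"]

def classify(posteriors, gain=1.0, loss=1.0):
    """Bet on the most probable web type only if the expected net gain
    p*gain - (1 - p)*loss is positive; otherwise remain undecided."""
    p = float(np.max(posteriors))
    expected_gain = p * gain - (1.0 - p) * loss
    return TYPES[int(np.argmax(posteriors))] if expected_gain > 0 else "undecided"

print(classify([0.10, 0.15, 0.70, 0.05]))   # -> filament
print(classify([0.30, 0.30, 0.25, 0.15]))   # -> undecided (weak constraints)
```

    Tuning the gain/loss ratio shifts how strong the data constraints must be before an object is labeled, which is exactly the option of not committing to a class described in the abstract.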

  10. Applications of differential sensitivity theory for extremum-type responses

    International Nuclear Information System (INIS)

    A recently developed sensitivity theory for nonlinear systems with responses defined at critical points, e.g. maxima, minima, or saddle points, of a function of the system's state variables and parameters is applied to a protected transient with scram on high power level in the Fast Flux Test Facility. The single-phase segment of the fast reactor safety code MELT-III B is used to model this transient. Two responses of practical importance, viz. the maximum fuel temperature in the hot channel and the maximum normalized reactor power level, are considered. For the purposes of sensitivity analysis, a complete characterization of such responses requires consideration of both the numerical value of the response at the maximum and the location in phase-space where the maximum occurs. This is because variations in the system parameters alter not only the value at this maximum but also the location of the maximum in phase-space.

  11. On the Conformal Field Theory Duals of type IIA AdS_4 Flux Compactifications

    CERN Document Server

    Aharony, Ofer; Berkooz, Micha

    2008-01-01

    We study the conformal field theory dual of the type IIA flux compactification model of DeWolfe, Giryavets, Kachru and Taylor, with all moduli stabilized. We find its central charge and properties of its operator spectrum. We concentrate on the moduli space of the conformal field theory, which we investigate through domain walls in the type IIA string theory. The moduli space turns out to consist of many different branches. We use Bezout's theorem and Bernstein's theorem to enumerate the different branches of the moduli space and estimate their dimension.

  12. Quantum black holes in Type-IIA String Theory

    CERN Document Server

    Bueno, Pablo; Shahbazi, C S

    2012-01-01

    We study black hole solutions of Type-IIA Calabi-Yau compactifications in the presence of perturbative quantum corrections. We define a class of black holes that only exist in the presence of quantum corrections and that, consequently, can be considered as purely quantum black holes. The regularity conditions of the solutions impose the topological constraint h^{1,1}>h^{2,1} on the Calabi-Yau manifold, defining a class of admissible compactifications, which we prove to be non-empty for h^{1,1}=3 by explicitly constructing the corresponding Calabi-Yau manifolds, new in the literature.

  13. Towards a theory for type III solar radio bursts. III

    International Nuclear Information System (INIS)

    The procedure developed in Smith (1974) to model the radiation source for type III bursts is modified to include scattering of radiation in the source itself. Since the inhomogeneities in the source must have the same statistical properties as the inhomogeneities used in tracing radiation from the source to the observer, these two parts of the type III problem are no longer uncoupled. Thus, inhomogeneities consistent with the scattering inhomogeneities of Steinberg et al. (1971) and Riddle (1974) are used, and the procedure is applied to an archetype 'fundamental-harmonic' pair observed at Culgoora on 28 September 1973 at 0319 UT. It is found that it is impossible to model this burst with a source which is homogeneous in the sense that every part of the source has the same energy density in plasma waves. The density inhomogeneities in the source severely hamper amplification of the supposed fundamental. Possible ways out of this dilemma are discussed, including second harmonic pairs and a source with an inhomogeneous distribution of plasma waves. It is concluded that none of the possibilities is completely satisfactory to explain present observations, and it is suggested that critical observations are missing. (Auth.)

  14. Ginzburg-Landau-type theory of spin superconductivity.

    Science.gov (United States)

    Bao, Zhi-qiang; Xie, X C; Sun, Qing-feng

    2013-01-01

    Spin superconductivity is a recently proposed analogue of conventional charge superconductivity, in which spin currents flow without dissipation but charge currents do not. Here we derive a universal framework for describing the properties of a spin superconductor along similar lines to the Ginzburg-Landau equations that describe conventional superconductors, and show that the second of these Ginzburg-Landau-type equations is equivalent to a generalized London equation. Just as the GL equations enabled researchers to explore the behaviour of charge superconductors, our Ginzburg-Landau-type equations enable us to make a number of non-trivial predictions about the potential behaviour of putative spin superconductor. They enable us to calculate the super spin current in a spin superconductor under a uniform electric field or that induced by a thin conducting wire. Moreover, they allow us to predict the emergence of new phenomena, including the spin-current Josephson effect in which a time-independent magnetic field induces a time-dependent spin current. PMID:24335888
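
    For orientation, the conventional Ginzburg-Landau equations for a charge superconductor, which the spin-superconductor equations of this work parallel, read in standard notation (order parameter ψ, vector potential A, effective charge e* and mass m*); this is the textbook charge case, not the generalization derived in the paper:

```latex
% Textbook Ginzburg-Landau equations for a charge superconductor
\begin{align}
  \alpha\psi + \beta|\psi|^{2}\psi
    + \frac{1}{2m^{*}}\left(-i\hbar\nabla - \frac{e^{*}}{c}\mathbf{A}\right)^{2}\psi &= 0, \\
  \mathbf{j} &= \frac{e^{*}\hbar}{2m^{*}i}\left(\psi^{*}\nabla\psi - \psi\nabla\psi^{*}\right)
    - \frac{e^{*2}}{m^{*}c}\,|\psi|^{2}\mathbf{A}.
\end{align}
```

    Taking |ψ| constant in the second (supercurrent) equation reduces it to the London equation, which mirrors the paper's statement that the second spin-superconductor equation is equivalent to a generalized London equation.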

  15. Theory of zeolite supralattices: Se in zeolite Linde type A

    International Nuclear Information System (INIS)

    We study theoretically the properties of Se clusters in zeolites, choosing zeolite Linde type A (LTA) as a prototype system. The geometries of free-space Se clusters are first determined, and we report the energetics and electronic and vibrational properties of these clusters. The work on clusters includes an investigation of the energetics of C3-C1 defect formation in Se rings and chains. The electronic properties of two Se crystalline polymorphs, trigonal Se and α-monoclinic Se, are also determined. Electronic and vibrational properties of the zeolite LTA are investigated. Next we investigate the electronic and optical properties of ring-like Se clusters inside the large α-cages of LTA. We find that Se clusters inside cages of silaceous LTA have very little interaction with the zeolite, and that the HOMO-LUMO gaps (HOMO standing for highest occupied molecular orbital and LUMO for lowest unoccupied molecular orbital) are nearly those of the isolated cluster. The HOMO-LUMO gaps of Se6, Se8, and Se12 are found to be similar, which makes it difficult to identify them experimentally by absorption spectroscopy. We find that the zeolite/Se8 nanocomposite is lower in energy than the two separated systems. We also investigate two types of infinite chain encapsulated in LTA. Finally, we carry out finite-temperature molecular dynamics simulations for an encapsulated Se12 cluster, which show cluster melting and formation of nanoscale Se droplets in the α-cages of LTA. (author)

  16. Theory of Zonal Flow Generation by Flute Type Turbulence

    Energy Technology Data Exchange (ETDEWEB)

    Andrushchenko, Zh.N. [National Academy of Sciences of Ukraine, Kiev (Ukraine). Inst. for Nuclear Research; Pavlenko, V.P. [Uppsala Univ. (Sweden). Dept. of Astronomy and Space Physics; Schoepf, K. [Univ. of Innsbruck (Austria). Inst. for Theoretical Physics

    2002-10-01

    The theory of zonal flow generation by the Reynolds stress is extended to flute (interchange) mode turbulence. The specific role of density fluctuations and finite Larmor radius effects is clarified. To describe the dynamics of a large-scale plasma flow that varies on a longer time scale compared to the small-scale fluctuations, a multiple scale expansion is employed, assuming that there is a sufficient spectral gap separating large-scale and small-scale motions. The evolution equations for mean flow generation, with sources consisting of the standard and diamagnetic (due to the density fluctuations) Reynolds stresses, are obtained by averaging the model equations over fast small scales. Analysis of these equations shows that diamagnetic effects may significantly enhance the total Reynolds force. The possibility of mean flow generation by the ensemble-averaged external J x B force is shown to yield results which agree with the calculation of the flow generation by Reynolds stresses. The physics of the acceleration mechanism is elucidated.

  17. Abelian gauge symmetries and fluxed instantons in compactifications of type IIB and F-theory

    International Nuclear Information System (INIS)

    We discuss the role of Abelian gauge symmetries in type IIB orientifold compactifications and their F-theory uplift. Particular emphasis is placed on U(1)s which become massive through the geometric Stueckelberg mechanism in type IIB. We present a proposal on how to take such geometrically massive U(1)s and the associated fluxes into account in the Kaluza-Klein reduction of F-theory with the help of non-harmonic forms. Evidence for this proposal is obtained by working out the F-theory effective action including such non-harmonic forms and matching the results with the known type IIB expressions. We furthermore discuss how world-volume fluxes on D3-brane instantons affect the instanton charge with respect to U(1) gauge symmetries and the chiral zero mode spectrum. The classical partition function of M5-instantons in F-theory is discussed and compared with the type IIB results for D3-brane instantons. The type IIB match allows us to determine the correct M5 partition function. Selection rules for the absence of chiral charged zero modes on M5-instantons in backgrounds with G4 flux are discussed and compared with the type IIB results. The dimensional reduction of the democratic formulation of M-theory is presented in the appendix.

  18. Abelian gauge symmetries and fluxed instantons in compactifications of type IIB and F-theory

    Energy Technology Data Exchange (ETDEWEB)

    Buenaventura Kerstan, Max Bromo

    2013-11-13

    We discuss the role of Abelian gauge symmetries in type IIB orientifold compactifications and their F-theory uplift. Particular emphasis is placed on U(1)s which become massive through the geometric Stueckelberg mechanism in type IIB. We present a proposal on how to take such geometrically massive U(1)s and the associated fluxes into account in the Kaluza-Klein reduction of F-theory with the help of non-harmonic forms. Evidence for this proposal is obtained by working out the F-theory effective action including such non-harmonic forms and matching the results with the known type IIB expressions. We furthermore discuss how world-volume fluxes on D3-brane instantons affect the instanton charge with respect to U(1) gauge symmetries and the chiral zero mode spectrum. The classical partition function of M5-instantons in F-theory is discussed and compared with the type IIB results for D3-brane instantons. The type IIB match allows us to determine the correct M5 partition function. Selection rules for the absence of chiral charged zero modes on M5-instantons in backgrounds with G{sub 4} flux are discussed and compared with the type IIB results. The dimensional reduction of the democratic formulation of M-theory is presented in the appendix.

  19. Vortex-type half-BPS solitons in Aharony-Bergman-Jafferis-Maldacena theory

    International Nuclear Information System (INIS)

    We study the Aharony-Bergman-Jafferis-Maldacena (ABJM) theory without and with mass deformation. It is shown that the maximally supersymmetry preserving, D-term, and F-term mass deformations with a single mass parameter are equivalent. We obtain vortex-type half-BPS equations and the corresponding energy bound. For the undeformed ABJM theory, the resulting half-BPS equation is the same as that in supersymmetric Yang-Mills theory and no finite energy regular BPS solution is found. For the mass-deformed ABJM theory, the half-BPS equations for the U(2)xU(2) case reduce to the vortex equation in Maxwell-Higgs theory, which supports static regular multivortex solutions. In the U(N)xU(N) case with N>2, the non-Abelian vortex equation of Yang-Mills-Higgs theory is obtained.

  20. Quantification analysis of CT of ovarian tumors

    International Nuclear Information System (INIS)

    Early symptoms in patients with ovarian tumors are usually few and nonspecific. CT is often very helpful in the diagnosis of ovarian tumors. Although it is difficult to identify normal ovaries, it is usually possible to diagnose ovarian lesions on CT, because with few exceptions they show tumorous enlargement. We can even estimate the histology in typical cases such as dermoid cysts or some types of cystadenomas. However, estimation of histology is difficult in many cases. Tumors other than those of ovarian origin can occur in the pelvis and require differentiation. Ovarian tumors have a close relationship with the uterus and broad ligaments, and make contact with at least one side of the pelvic wall. Enhanced CT with contrast media may facilitate differentiation between pedunculated subserosal leiomyoma uteri and ovarian tumor, because the former shows intense enhancement as a uterine body, whereas the latter is less intense. Thus, we have little difficulty in differentiating between tumors of ovarian origin and those of other origins. Our problem is differentiating between malignant and benign ovarian tumors, and clarifying their histology. In this study, we devised a decision flow chart to attain an accurate diagnosis. In part, we have utilized Hayashi's quantification theory II, a multiple regression analysis where the predictive variables are categorical and the outside criteria are classificatory. Hayashi stated that the aim of multi-dimensional quantification is to synthetically form a numerical representation of intercorrelated patterns to maximize the efficiency of classification, i.e. the success rate of prediction. Thus, quantification of patterns is thought to be effective in facilitating image diagnosis such as CT and minimizing errors. (author)
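
    Hayashi's quantification theory II is, in modern terms, discriminant analysis with categorical predictors: each category is assigned a numerical score (equivalently, the data are one-hot encoded) so that separation of the outside criterion, here benign versus malignant, is maximized. A minimal sketch using scikit-learn; the CT findings, categories, and labels below are invented for illustration.

```python
import numpy as np
from sklearn.preprocessing import OneHotEncoder
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Invented categorical CT findings: [internal structure, wall, septa].
X_cat = np.array([
    ["cystic", "thin", "none"],
    ["cystic", "thin", "few"],
    ["solid", "thick", "many"],
    ["mixed", "thick", "many"],
    ["solid", "thick", "few"],
    ["cystic", "thin", "none"],
])
y = np.array(["benign", "benign", "malignant", "malignant", "malignant",
              "benign"])

# "Quantification": one-hot encode the categories, then discriminate classes.
enc = OneHotEncoder()
X = enc.fit_transform(X_cat).toarray()
lda = LinearDiscriminantAnalysis().fit(X, y)

new_case = enc.transform([["mixed", "thin", "few"]]).toarray()
print(lda.predict(new_case))          # predicted class for a new case
```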

  1. M theory, type IIA string and 4D N=1 SUSY SU(NL) x SU(NR) gauge theory

    International Nuclear Information System (INIS)

    SU(NL) x SU(NR) gauge theories are investigated as effective field theories on D4-branes in type IIA string theory. The classical gauge configuration is shown to match quantitatively with a corresponding classical U(NL) x U(NR) gauge theory. Quantum effects freeze the U(1) gauge factors and turn some parameters into moduli. The SU(NL) x SU(NR) quantum model is realized in M theory. Starting with an N=2 configuration (parallel NS 5-branes), the rotation of a single NS 5-brane is considered. Generically, this leads to a complete lifting of the Coulomb moduli space. The implications of this result to field theory and the dynamics of branes are discussed. When the initial M 5-brane is reducible, part of the Coulomb branch may survive. Some such situations are considered, leading to curves describing the effective gauge couplings for N=1 models. The generalization to models with more gauge group factors is also discussed. (orig.)

  2. Digital games for type 1 and type 2 diabetes: underpinning theory with three illustrative examples.

    Science.gov (United States)

    Kamel Boulos, Maged N; Gammon, Shauna; Dixon, Mavis C; MacRury, Sandra M; Fergusson, Michael J; Miranda Rodrigues, Francisco; Mourinho Baptista, Telmo; Yang, Stephen P

    2015-01-01

    Digital games are an important class of eHealth interventions in diabetes, made possible by the Internet and a good range of affordable mobile devices (eg, mobile phones and tablets) available to consumers these days. Gamifying disease management can help children, adolescents, and adults with diabetes to better cope with their lifelong condition. Gamification and social in-game components are used to motivate players/patients and positively change their behavior and lifestyle. In this paper, we start by presenting the main challenges facing people with diabetes-children/adolescents and adults-from a clinical perspective, followed by three short illustrative examples of mobile and desktop game apps and platforms designed by Ayogo Health, Inc. (Vancouver, BC, Canada) for type 1 diabetes (one example) and type 2 diabetes (two examples). The games target different age groups with different needs-children with type 1 diabetes versus adults with type 2 diabetes. The paper is not meant to be an exhaustive review of all digital game offerings available for people with type 1 and type 2 diabetes, but rather to serve as a taster of a few of the game genres on offer today for both types of diabetes, with a brief discussion of (1) some of the underpinning psychological mechanisms of gamified digital interventions and platforms as self-management adherence tools, and more, in diabetes, and (2) some of the hypothesized potential benefits that might be gained from their routine use by people with diabetes. More research evidence from full-scale evaluation studies is needed and expected in the near future that will quantify, qualify, and establish the evidence base concerning this gamification potential, such as what works in each age group/patient type, what does not, and under which settings and criteria. PMID:25791276

  3. Digital Games for Type 1 and Type 2 Diabetes: Underpinning Theory With Three Illustrative Examples

    Science.gov (United States)

    Gammon, Shauna; Dixon, Mavis C; MacRury, Sandra M; Fergusson, Michael J; Miranda Rodrigues, Francisco; Mourinho Baptista, Telmo; Yang, Stephen P

    2015-01-01

    Digital games are an important class of eHealth interventions in diabetes, made possible by the Internet and a good range of affordable mobile devices (eg, mobile phones and tablets) available to consumers these days. Gamifying disease management can help children, adolescents, and adults with diabetes to better cope with their lifelong condition. Gamification and social in-game components are used to motivate players/patients and positively change their behavior and lifestyle. In this paper, we start by presenting the main challenges facing people with diabetes—children/adolescents and adults—from a clinical perspective, followed by three short illustrative examples of mobile and desktop game apps and platforms designed by Ayogo Health, Inc. (Vancouver, BC, Canada) for type 1 diabetes (one example) and type 2 diabetes (two examples). The games target different age groups with different needs—children with type 1 diabetes versus adults with type 2 diabetes. The paper is not meant to be an exhaustive review of all digital game offerings available for people with type 1 and type 2 diabetes, but rather to serve as a taster of a few of the game genres on offer today for both types of diabetes, with a brief discussion of (1) some of the underpinning psychological mechanisms of gamified digital interventions and platforms as self-management adherence tools, and more, in diabetes, and (2) some of the hypothesized potential benefits that might be gained from their routine use by people with diabetes. More research evidence from full-scale evaluation studies is needed and expected in the near future that will quantify, qualify, and establish the evidence base concerning this gamification potential, such as what works in each age group/patient type, what does not, and under which settings and criteria. PMID:25791276

  4. Delimited continuations in natural language: quantification and polarity sensitivity

    CERN Document Server

    Shan, C

    2004-01-01

    Making a linguistic theory is like making a programming language: one typically devises a type system to delineate the acceptable utterances and a denotational semantics to explain observations on their behavior. Via this connection, the programming language concept of delimited continuations can help analyze natural language phenomena such as quantification and polarity sensitivity. Using a logical metalanguage whose syntax includes control operators and whose semantics involves evaluation order, these analyses can be expressed in direct style rather than continuation-passing style, and these phenomena can be thought of as computational side effects.
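
    The continuation treatment of quantification can be sketched directly: a quantified noun phrase denotes a function over its own continuation, i.e. over the meaning of the rest of the sentence. The sketch below uses plain Python closures rather than genuine delimited control operators (shift/reset), and the toy domain and predicate are assumptions for illustration.

```python
# Montague/continuation-style semantics: a quantified NP takes as argument
# its continuation k, the meaning of "the rest of the sentence".
domain = ["alice", "bob", "carol"]

def everyone(k): return all(k(x) for x in domain)
def someone(k):  return any(k(x) for x in domain)

likes = {("alice", "bob"), ("bob", "carol"), ("carol", "alice")}

# "Everyone likes someone": subject takes scope over object (surface order).
wide_subject = everyone(lambda x: someone(lambda y: (x, y) in likes))
# Inverse scope: "there is someone whom everyone likes".
wide_object = someone(lambda y: everyone(lambda x: (x, y) in likes))

print(wide_subject, wide_object)   # True False: the two scopings differ
```

    The two results differ precisely because the two orders of continuation application encode the two quantifier scopings, which is the kind of evaluation-order phenomenon the paper analyzes with delimited continuations.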

  5. Krull dimension of types in a class of first-order theories

    OpenAIRE

    Zambella, Domenico

    2011-01-01

    We study a class of first-order theories whose complete quantifier-free types with one free variable either have a trivial positive part or are isolated by a positive quantifier-free formula, plus a few other technical requirements. The theory of vector spaces and the theory of fields are examples. We prove the amalgamation property and the existence of a model-companion. We show that the model-companion is strongly minimal. We also prove that the length of any increasing seque...

  6. Nonperturbative type IIB model building in the F-theory framework

    Energy Technology Data Exchange (ETDEWEB)

    Jurke, Benjamin Helmut Friedrich

    2011-02-28

    This dissertation is concerned with the topic of non-perturbative string theory, which is generally considered to be the most promising approach to a consistent description of quantum gravity. The five known 10-dimensional perturbative string theories are all interconnected by numerous dualities, such that an underlying non-perturbative 11-dimensional theory, called M-theory, is postulated. Due to several technical obstacles, little is known about the fundamental objects in this theory. There exists an alternative non-perturbative description to type IIB string theory, namely F-theory. Here the SL(2;Z) self-duality of IIB theory is geometrized in the form of an elliptic fibration over the space-time. Moreover, higher-dimensional objects like 7-branes are included via singularities into the geometric picture. This formally elegant description, however, requires significant technical effort for the construction of suitable compactification geometries, as many different aspects necessarily have to be dealt with at the same time. On the other hand, the generation of essential GUT building blocks like certain Yukawa couplings or spinor representations is easier compared to perturbative string theory. The goal of this study is therefore to formulate a unified theory within the framework of F-theory, that satisfies basic phenomenological constraints. Within this thesis, at first E3-brane instantons in type IIB string theory - 4-dimensional objects that are entirely wrapped around the invisible dimensions of space-time - are matched with M5-branes in F-theory. Such objects are of great importance in the generation of critical Yukawa couplings or the stabilization of the free parameters of a theory. Certain properties of M5-branes then allow to derive a new criterion for E3-branes to contribute to the superpotential. In the aftermath of this analysis, several compactification geometries are constructed and checked for basic properties that are relevant for semi-realistic unified model building. An important aspect is the proper handling of the gauge flux on the 7-branes. Via the spectral cover description - which at first requires further refinements - chiral matter can be generated and the unified gauge group can be broken to the Standard Model. Ultimately, in this thesis an explicit unified model based on the gauge group SU(5) is constructed within the F-theory framework, such that an acceptable phenomenology and the observed three chiral matter generations are obtained. (orig.)

  7. Nonperturbative type IIB model building in the F-theory framework

    International Nuclear Information System (INIS)

    This dissertation is concerned with the topic of non-perturbative string theory, which is generally considered to be the most promising approach to a consistent description of quantum gravity. The five known 10-dimensional perturbative string theories are all interconnected by numerous dualities, such that an underlying non-perturbative 11-dimensional theory, called M-theory, is postulated. Due to several technical obstacles, little is known about the fundamental objects in this theory. There exists an alternative non-perturbative description to type IIB string theory, namely F-theory. Here the SL(2;Z) self-duality of IIB theory is geometrized in the form of an elliptic fibration over the space-time. Moreover, higher-dimensional objects like 7-branes are included via singularities into the geometric picture. This formally elegant description, however, requires significant technical effort for the construction of suitable compactification geometries, as many different aspects necessarily have to be dealt with at the same time. On the other hand, the generation of essential GUT building blocks like certain Yukawa couplings or spinor representations is easier compared to perturbative string theory. The goal of this study is therefore to formulate a unified theory within the framework of F-theory, that satisfies basic phenomenological constraints. Within this thesis, at first E3-brane instantons in type IIB string theory - 4-dimensional objects that are entirely wrapped around the invisible dimensions of space-time - are matched with M5-branes in F-theory. Such objects are of great importance in the generation of critical Yukawa couplings or the stabilization of the free parameters of a theory. Certain properties of M5-branes then allow to derive a new criterion for E3-branes to contribute to the superpotential. In the aftermath of this analysis, several compactification geometries are constructed and checked for basic properties that are relevant for semi-realistic unified model building. An important aspect is the proper handling of the gauge flux on the 7-branes. Via the spectral cover description - which at first requires further refinements - chiral matter can be generated and the unified gauge group can be broken to the Standard Model. Ultimately, in this thesis an explicit unified model based on the gauge group SU(5) is constructed within the F-theory framework, such that an acceptable phenomenology and the observed three chiral matter generations are obtained. (orig.)

  8. Abelian gauge symmetries and fluxed instantons in compactifications of type IIB and F-theory

    OpenAIRE

    Kerstan, Max Bromo Buenaventura

    2014-01-01

    We discuss the role of Abelian gauge symmetries in type IIB orientifold compactifications and their F-theory uplift. Particular emphasis is placed on U(1)s which become massive through the geometric Stückelberg mechanism in type IIB. We present a proposal on how to take such geometrically massive U(1)s and the associated fluxes into account in the Kaluza-Klein reduction of F-theory with the help of non-harmonic forms. Evidence for this proposal is obtained by working out t...

  9. The D^{10} R^4 term in type IIB string theory

    OpenAIRE

    Basu, Anirban

    2006-01-01

    The modular invariant coefficient of the D^{2k} R^4 term in the effective action of type IIB superstring theory is expected to satisfy a Poisson equation on the fundamental domain of SL(2,Z). Under certain assumptions, we obtain the equation satisfied by D^{10} R^4 using the tree level and one loop results for four graviton scattering in type II string theory. This leads to the conclusion that the perturbative contributions to D^{10} R^4 vanish above th...

  10. Type II theories compactified on Calabi-Yau threefolds in the presence of background fluxes

    Energy Technology Data Exchange (ETDEWEB)

    Louis, Jan E-mail: j.louis@physik.uni-halle.de; Micu, Andrei E-mail: micu@physik.uni-halle.de

    2002-07-22

    Compactifications of type II theories on Calabi-Yau threefolds including electric and magnetic background fluxes are discussed. We derive the bosonic part of the four-dimensional low energy effective action and show that it is a non-canonical N=2 supergravity which includes a massive two-form. The symplectic invariance of the theory is maintained as long as the flux parameters transform as a symplectic vector and a massive two-form which couples to both electric and magnetic field strengths is present. The mirror symmetry between type IIA and type IIB compactified on mirror manifolds is shown to hold for RR fluxes at the level of the effective action. We also compactify type IIA in the presence of NS three-form flux but the mirror symmetry in this case remains unclear.

  11. The iteration formula of the Maslov-type index theory with applications to nonlinear Hamiltonian systems

    International Nuclear Information System (INIS)

    In this paper, the iteration formula of the Maslov-type index theory for linear Hamiltonian systems with continuous periodic and symmetric coefficients is established. This formula yields a new method to determine the minimality of the period for solutions of nonlinear autonomous Hamiltonian systems via their Maslov-type indices. Applications of this formula give new results on the existence of periodic solutions with prescribed minimal period for such systems. (author). 40 refs

  12. Extension of anisotropic effective medium theory to account for an arbitrary number of inclusion types

    Science.gov (United States)

    Myles, Timothy D.; Peracchio, Aldo A.; Chiu, Wilson K. S.

    2015-01-01

    The purpose of this work is to extend, to multi-component mixtures, a previously reported theory for calculating the effective conductivity of a two component mixture. The previously reported theory involved preferentially oriented spheroidal inclusions contained in a continuous matrix, with inclusions oriented relative to a principal axis. This approach was based on Bruggeman's unsymmetrical theory, and is extended to account for an arbitrary number of different inclusion types. The development begins from two well-known starting points: the Maxwell approach and the Maxwell-Garnett approach for dilute mixtures. It is shown that despite these two different starting points, the final Bruggeman-type equation is the same. As a means of validating the developed expression, comparisons are made to several existing effective medium theories. It is shown that these existing theories coincide with the developed equations for the appropriate parameter set. Finally, a few example mixtures are considered to demonstrate the effect of multiple inclusions on the calculated effective property. Inclusion types of different conductivities, shapes, and orientations are considered and each of the aforementioned properties is shown to have a potentially significant impact on the calculated mixture property.
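
    As a worked illustration of the self-consistency idea described above, the sketch below solves a multi-component Bruggeman equation numerically. It is a minimal sketch assuming the isotropic special case of spherical inclusions (the paper treats preferentially oriented spheroids, which this fragment does not capture); the volume fractions and conductivities are made-up example values.

        # Multi-component Bruggeman self-consistency equation, spherical isotropic case:
        #   sum_i f_i * (s_i - s_eff) / (s_i + 2*s_eff) = 0
        from scipy.optimize import brentq

        def bruggeman_conductivity(fractions, conductivities):
            def residual(s_eff):
                return sum(f * (s - s_eff) / (s + 2.0 * s_eff)
                           for f, s in zip(fractions, conductivities))
            # The root lies between the smallest and largest component conductivities.
            return brentq(residual, min(conductivities), max(conductivities))

        # Matrix plus two inclusion types (hypothetical volume fractions, conductivities in S/m):
        print(bruggeman_conductivity([0.6, 0.3, 0.1], [1.0, 10.0, 0.01]))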

  13. Diffusion, Uptake and Release of Hydrogen in p-type Gallium Nitride: Theory and Experiment

    Energy Technology Data Exchange (ETDEWEB)

    MYERS JR.,SAMUEL M.; WRIGHT,ALAN F.; PETERSEN,GARY A.; WAMPLER,WILLIAM R.; SEAGER,CARLETON H.; CRAWFORD,MARY H.; HAN,JUNG

    2000-06-27

    The diffusion, uptake, and release of H in p-type GaN are modeled employing state energies from density-functional theory and compared with measurements of deuterium uptake and release using nuclear-reaction analysis. Good semiquantitative agreement is found when account is taken of a surface permeation barrier.

  14. K'-Theory of a Local Ring of Finite Cohen-Macaulay Type

    CERN Document Server

    Navkal, Viraj

    2011-01-01

    We study the $K'$-theory of a CM Henselian local ring $R$ of finite Cohen-Macaulay type. In our main theorem we produce a long exact sequence involving the groups $K_i'(R)$ and the $K$-groups of the endomorphism rings of the indecomposable maximal Cohen-Macaulay $R$-modules, along with their residue rings.

  15. de Sitter vacua in type IIB string theory: classical solutions and quantum corrections

    Science.gov (United States)

    Dasgupta, Keshav; Gwyn, Rhiannon; McDonough, Evan; Mia, Mohammed; Tatar, Radu

    2014-07-01

    We revisit the classical theory of ten-dimensional two-derivative gravity coupled to fluxes, scalar fields, D-branes, anti D-branes and Orientifold-planes. We show that such set-ups do not give rise to a four-dimensional positive curvature spacetime with the isometries of de Sitter spacetime. We further argue that a de Sitter solution in type IIB theory may still be achieved if the higher-order curvature corrections are carefully controlled. Our analysis relies on the derivation of the de Sitter condition from an explicit background solution by going beyond the supergravity limit of type IIB theory. As such this also tells us how the background supersymmetry should be broken and under what conditions D-term uplifting can be realized with non self-dual fluxes.

  16. The effective theory of type IIA AdS4 compactifications on nilmanifolds and cosets

    International Nuclear Information System (INIS)

    We consider string theory compactifications of the form AdS4xM6 with orientifold six-planes, where M6 is a six-dimensional compact space that is either a nilmanifold or a coset. For all known solutions of this type we obtain the four-dimensional N=1 low-energy effective theory by computing the superpotential, the Kaehler potential and the mass spectrum for the light moduli. For the nilmanifold examples we perform a cross-check on the result for the mass spectrum by calculating it alternatively from a direct Kaluza-Klein reduction and find perfect agreement. We show that in all but one of the coset models all moduli are stabilized at the classical level. As an application we show that all but one of the coset models can potentially be used to bypass a recent no-go theorem against inflation in type IIA theory.

  17. Generalized N=1 and N=2 structures in M-theory and type II orientifolds

    CERN Document Server

    Graña, Mariana

    2012-01-01

    We consider M-theory and type IIA reductions to four dimensions with N=2 and N=1 supersymmetry and discuss their interconnection. Our work is based on the framework of Exceptional Generalized Geometry (EGG), which extends the tangent bundle to include all symmetries in M-theory and type II string theory, covariantizing the local U-duality group E7. We describe general N=1 and N=2 reductions in terms of SU(7) and SU(6) structures on this bundle and thereby derive the effective four-dimensional N=1 and N=2 couplings, in particular we compute the Kahler and hyper-Kahler potentials as well as the triplet of Killing prepotentials (or the superpotential in the N=1 case). These structures and couplings can be described in terms of forms on an eight-dimensional tangent space where SL(8) contained in E7 acts, which might indicate a description in terms of an eight-dimensional internal space, similar to F-theory. We finally discuss an orbifold action in M-theory and its reduction to O6 orientifolds, and show how the pr...

  18. Search of unified theory of basic types of elementary particle interactions

    International Nuclear Information System (INIS)

    Four types of forces (strong, weak, electromagnetic and gravitational) mediating the basic interactions of quarks and leptons are described, and attempts at forming a unified theory of all basic interactions are reported. The concepts discussed include theory symmetry (e.g., invariance under Lorentz transformations) and isotopic symmetry (based on the interchangeability of particles in a given isotopic multiplet). Also described are the gauge character of electromagnetic and gravitational interactions, the violation of gauge symmetry, and the mechanism of particle confinement. (H.S.)

  19. Screening for and validated quantification of phenethylamine-type designer drugs and mescaline in human blood plasma by gas chromatography/mass spectrometry.

    Science.gov (United States)

    Habrdova, Vilma; Peters, Frank T; Theobald, Denis S; Maurer, Hans H

    2005-06-01

    In recent years, several newer designer drugs of the so-called 2C series such as 2C-D, 2C-E, 2C-P, 2C-B, 2C-I, 2C-T-2, and 2C-T-7 have entered the illicit drug market as recreational drugs. Some fatal intoxications involving 2C-T-7 have been reported. Only scarce data have been published about analyses of these substances in human blood and/or plasma. This paper describes a method for screening and simultaneous quantification of the above-mentioned compounds and their analog mescaline in human blood plasma. The analytes were analyzed by gas chromatography/mass spectrometry in the selected-ion monitoring mode, after mixed-mode solid-phase extraction (HCX) and derivatization with heptafluorobutyric anhydride. The method was fully validated according to international guidelines. Validation data for 2C-T-2 and 2C-T-7 were unacceptable. For all other analytes, the method was linear from 5 to 500 microg/L and the data for accuracy (bias) and precision (coefficient of variation) were within the acceptance limits of +/-15% and <15%, respectively (within +/-20% and <20% near the limit of quantification of 5 microg/L). PMID:15827969
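
    The acceptance rules quoted above (bias within +/-15% and CV below 15%, relaxed to 20% near the limit of quantification) reduce to a simple check on replicate measurements. A minimal sketch with hypothetical quality-control values; only the thresholds are taken from the abstract.

        # Check accuracy (bias) and precision (CV) against the stated validation limits.
        import statistics

        def passes_validation(nominal, measured, lloq=5.0):
            mean = statistics.mean(measured)
            bias_pct = 100.0 * (mean - nominal) / nominal
            cv_pct = 100.0 * statistics.stdev(measured) / mean
            limit = 20.0 if nominal <= lloq else 15.0   # relaxed limits near the LLOQ
            return abs(bias_pct) <= limit and cv_pct < limit

        # Hypothetical replicates at a 50 microg/L nominal concentration:
        print(passes_validation(50.0, [47.1, 52.3, 49.8, 51.0, 48.5]))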

  20. The structure of the R^8 term in type IIB string theory

    International Nuclear Information System (INIS)

    Based on the structure of the on-shell linearized superspace of type IIB supergravity, we argue that there is a non-BPS 16-derivative interaction in the effective action of type IIB string theory of the form (t_8 t_8 R^4)^2, which we call the R^8 interaction. It lies in the same supermultiplet as the G^8 R^4 interaction. Using the Kawai-Lewellen-Tye relation, we analyze the structure of the tree-level eight-graviton scattering amplitude in the type IIB theory, which leads to the R^8 interaction at the linearized level. This involves an analysis of color-ordered multi-gluon disc amplitudes in the type I theory, which shows an intricate pole structure and transcendentality consistent with various other interactions. Considerations of S-duality show that the R^8 interaction receives non-analytic contributions in the string coupling at one and two loops. Apart from receiving perturbative contributions, we show that the R^8 interaction receives a non-vanishing contribution in the one D-instanton-anti-instanton background at leading order in the weak coupling expansion. (paper)

  1. Inflation of Bianchi type-VII_0 Universe with Dirac Field in Einstein-Cartan theory

    CERN Document Server

    Fang, Wei; Lu, Hui-Qing

    2010-01-01

    We discuss the Bianchi type-VII_0 cosmology with a Dirac field in Einstein-Cartan theory. We obtain the equations of the Dirac field and the gravitational field in Einstein-Cartan theory, and we find a Bianchi type-VII_0 inflationary solution.

  2. Theory of flux-flow voltage noise in type-II superconductors

    International Nuclear Information System (INIS)

    A theory for the flux-flow voltage noise in a type-II superconductor in the mixed state is developed. The interactions between the vortices are taken into account via a wavevector-dependent interaction matrix. Applied to Johnson noise in a film without flux flow for a special measuring circuit, the theory predicts a suppression of the power spectrum at low frequencies. For flux-flow noise produced when a perfect vortex lattice moves in a superconducting foil or film containing a random array of independent pinning centers, the theory gives two contributions to the power spectrum: one from transverse velocity fluctuations, which has a characteristic frequency determined by the characteristic frequency of the power spectrum of the elementary pinning force; the other from longitudinal density fluctuations, which may have a characteristic frequency determined by the elastic constants and the geometry of the measuring circuit. Both of these frequencies are much higher than experimentally observed values

  3. Theory and Observations of Type I X-Ray Bursts from Neutron Stars

    CERN Document Server

    Bildsten, L

    2000-01-01

    I review our understanding of the thermonuclear instabilities on accreting neutron stars that produce Type I X-Ray bursts. I emphasize those observational and theoretical aspects that should interest the broad audience of this meeting. The easily accessible timescales of the bursts (durations of tens of seconds and recurrence times of hours to days) allow for a very stringent comparison to theory. The largest discrepancy (which was found with EXOSAT observations) is the accretion rate dependence of the Type I burst properties. Bursts become less frequent and energetic as the global accretion rate increases, just the opposite of what the spherical theory predicts. I present a resolution of this issue by taking seriously the observed dependence of the burning area on the global accretion rate, which implies that as the accretion rate increases, the accretion rate per unit area decreases. This resurrects the unsolved problem of knowing where the freshly accreted material accumulates on the star, equally relevant...

  4. A sufficient condition for de Sitter vacua in type IIB string theory

    International Nuclear Information System (INIS)

    We derive a sufficient condition for realizing meta-stable de Sitter vacua with small positive cosmological constant within type IIB string theory flux compactifications with spontaneously broken supersymmetry. There are a number of 'lamp post' constructions of de Sitter vacua in type IIB string theory and supergravity. We show that one of them - the method of 'Kaehler uplifting' by F-terms from an interplay between non-perturbative effects and the leading α'-correction - allows for a more general parametric understanding of the existence of de Sitter vacua. The result is a condition on the values of the flux induced superpotential and the topological data of the Calabi-Yau compactification, which guarantees the existence of a meta-stable de Sitter vacuum if met. Our analysis explicitly includes the stabilization of all moduli, i.e. the Kaehler, dilaton and complex structure moduli, by the interplay of the leading perturbative and non-perturbative effects at parametrically large volume. (orig.)

  5. Spatially Homogeneous Bianchi Type V Cosmological Model in the Scale-Covariant Theory of Gravitation

    International Nuclear Information System (INIS)

    We discuss spatially homogeneous and anisotropic Bianchi type-V spacetime filled with a perfect fluid in the framework of the scale-covariant theory of gravitation proposed by Canuto et al. By applying the law of variation for Hubble's parameter, exact solutions of the field equations are obtained, which correspond to the model of the universe having a big-bang type singularity at the initial time t = 0. The cosmological model, evolving from the initial singularity, expands with power-law expansion and gives essentially an empty space for a large time. The physical and dynamical properties of the model are also discussed. (geophysics, astronomy, and astrophysics)
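
    The "law of variation for Hubble's parameter" invoked here is usually taken in Berman's form of a constant deceleration parameter. A minimal sketch in common notation, with D, n and c constants (the paper's own conventions may differ):

        % Berman's ansatz: H proportional to a^{-n} fixes the deceleration parameter
        \[
          H = D\,a^{-n} \quad\Longrightarrow\quad q \equiv -\frac{a\,\ddot{a}}{\dot{a}^{2}} = n-1 ,
        \]
        \[
          a(t) = (nDt + c)^{1/n} \ \ (n \neq 0,\ \text{power law}), \qquad
          a(t) = c\,e^{Dt} \ \ (n = 0,\ \text{exponential}).
        \]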

  6. Social Cognition in Later Life: Effects of Aging and Task Type on Theory of Mind Performance

    OpenAIRE

    Doyle, Claire L.

    2009-01-01

    Abstract Recent studies assessing the effects of age and task type on theory of mind (ToM) have found mixed results. However, these studies have not considered the possibility that by using a series of distinct and unrelated tasks, other confounding factors are likely to affect performance, such as the type of ToM reasoning required, the length of the social interactions, the characters involved etc. Moreover, most have relied on traditional ToM tests which lack resemblance to real-world s...

  7. The Ginzburg-Landau Theory of Type II superconductors in magnetic field

    OpenAIRE

    Rosenstein, Baruch; Li, Dingping

    2009-01-01

    Thermodynamics of type II superconductors in electromagnetic field based on the Ginzburg-Landau theory is presented. The Abrikosov flux lattice solution is derived using an expansion in a parameter characterizing the "distance" to the superconductor-normal phase transition line. The expansion allows a systematic improvement of the solution. The phase diagram of the vortex matter in magnetic field is determined in detail. In the presence of significant thermal fluctuation...

  8. Enhanced Gauge Symmetry in Type II and F-Theory Compactifications: Dynkin Diagrams from Polyhedra

    OpenAIRE

    Perevalov, Eugene; Skarke, Harald

    1997-01-01

    We explain the observation by Candelas and Font that the Dynkin diagrams of nonabelian gauge groups occurring in type IIA and F-theory can be read off from the polyhedron $\Delta^*$ that provides the toric description of the Calabi-Yau manifold used for compactification. We show how the intersection pattern of toric divisors corresponding to the degeneration of elliptic fibers follows the ADE classification of singularities and the Kodaira classification of degenerations. We ...

  9. On Energy and Entropy Influxes in the Green-Naghdi Type III Theory of Heat Conduction

    OpenAIRE

    Bargmann, Swantje; Favata, Antonino; Podio-guidugli, Paolo

    2012-01-01

    The energy-influx/entropy-influx relation in the Green-Naghdi Type III theory of heat conduction is examined within a thermodynamical framework à la Mueller-Liu, where that relation is not specified a priori irrespectively of the constitutive class under attention. It is shown that the classical assumption, i.e., that the entropy influx and the energy influx are proportional via the absolute temperature, holds true if heat conduction is, in a sense that is made precise, is...

  10. Bianchi-type I and V cosmologies in Einstein-Cartan theory

    International Nuclear Information System (INIS)

    Within the framework of the Einstein-Cartan theory a Weyssenhoff-spinning-fluid-filled homogeneous anisotropic Bianchi-Type I and V space-time is studied. The effects of a cosmological constant upon the dynamics of the early universe are also considered. For an appropriate choice of the parameters non-singular behaviour occurs and evolution of the cosmological fluid near the bounce is considered in detail

  11. Psychosocial Correlates of Dietary Behaviour in Type 2 Diabetic Women, Using a Behaviour Change Theory

    OpenAIRE

    Didarloo, A.; Shojaeizadeh, D.; Asl, R. Gharaaghaji; Niknami, S.; Khorami, A.

    2014-01-01

    The study evaluated the efficacy of the Theory of Reasoned Action (TRA), along with self-efficacy to predict dietary behaviour in a group of Iranian women with type 2 diabetes. A sample of 352 diabetic women referred to Khoy Diabetes Clinic, Iran, were selected and given a self-administered survey to assess eating behaviour, using the extended TRA constructs. Bivariate correlations and Enter regression analyses of the extended TRA model were performed with SPSS software. Overall, the proposed...

  12. Stringy unification of type IIA and IIB supergravities under N=2 D=10 supersymmetric double field theory

    International Nuclear Information System (INIS)

    To the full order in fermions, we construct D=10 type II supersymmetric double field theory. We spell out the precise N=2 supersymmetry transformation rules for the 32 supercharges. The constructed action unifies type IIA and IIB supergravities in a manifestly covariant manner with respect to O(10,10) T-duality and a pair of local Lorentz groups, Spin(1,9)×Spin(9,1), besides the usual general covariance of supergravities or the generalized diffeomorphism. While the theory is unique, the solutions are twofold: type IIA and IIB supergravities are identified as two different types of solutions rather than two different theories.

  13. Creep design of type 316LN stainless steel by K-R damage theory

    International Nuclear Information System (INIS)

    Kachanov-Rabotnov (K-R) creep damage theory was reviewed and applied to design a creep curve for type 316LN stainless steel. Seven coefficients used in the theory, i.e., A, B, λ, m, χ, r, and q, were determined, and their physical meanings were analyzed. In order to quantify the damage parameter (ω), the cavity amount was measured in crept specimens taken from interrupted creep tests at various times, and this amount was then reflected in the K-R damage equations. Coefficient λ, which is regarded as a creep tolerance feature of a material, increased with creep strain. The master curve with λ=2.8 coincided well with the experimental one over the full lifetime. The relationship between the damage parameter and the life fraction matched the theory at the exponent value r=24. It is concluded that the K-R damage equation is reliable as a modelling equation for type 316LN stainless steel. The coefficient data obtained from type 316LN stainless steel can be utilized for life prediction of operating material. (author)
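
    For orientation, one frequently used form of the K-R constitutive pair, consistent with the coefficient set listed above, is sketched below together with the damage evolution it implies at constant stress; the paper's exact formulation may differ in detail.

        % Coupled creep-rate and damage-rate equations (sigma = stress, omega = damage)
        \[
          \dot{\varepsilon} = \frac{A\,\sigma^{m}}{(1-\omega)^{q}}, \qquad
          \dot{\omega} = \frac{B\,\sigma^{\chi}}{(1-\omega)^{r}},
        \]
        % Integrating the damage law from omega(0) = 0 up to rupture at omega(t_r) = 1:
        \[
          \omega(t) = 1-\Bigl(1-\frac{t}{t_{r}}\Bigr)^{1/(r+1)}, \qquad
          t_{r} = \frac{1}{(r+1)\,B\,\sigma^{\chi}} .
        \]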

  14. Semi-quantification of endolymphatic size on MR imaging after intravenous injection of single-dose gadodiamide. Comparison between two types of processing strategies

    International Nuclear Information System (INIS)

    Many inner ear disorders, including Meniere's disease, are believed to be based on endolymphatic hydrops. We evaluated a newly proposed method for semi-quantification of endolymphatic size in patients with suspected endolymphatic hydrops that uses 2 kinds of processed magnetic resonance (MR) images. Twenty-four consecutive patients underwent heavily T2-weighted (hT2W) MR cisternography (MRC), hT2W 3-dimensional (3D) fluid-attenuated inversion recovery (FLAIR) with inversion time of 2250 ms (positive perilymph image, PPI), and hT2W-3D-IR with inversion time of 2050 ms (positive endolymph image, PEI) 4 hours after intravenous administration of single-dose gadolinium-based contrast material (IV-SD-GBCM). Two images were generated using 2 new methods to process PPI, PEI, and MRC. Three radiologists contoured the cochlea and vestibule on MRC, copied regions of interest (ROIs) onto the 2 kinds of generated images, and semi-quantitatively measured the size of the endolymph for the cochlea and vestibule by setting a threshold pixel value. Each observer noted a strong linear correlation between endolymphatic size of both the cochlea and vestibule of the 2 kinds of generated images. The Pearson correlation coefficients (r) were 0.783, 0.734, and 0.800 in the cochlea and 0.924, 0.930, and 0.933 in the vestibule (P<0.001, for all). In both the cochlea and vestibule, repeated-measures analysis of variance showed no statistically significant difference between observers. Use of the 2 kinds of generated images generated from MR images obtained 4 hours after IV-SD-GBCM might enable semi-quantification of endolymphatic size with little observer dependency. (author)

  15. A Novel Framework for Quantification of Supply Chain Risks

    OpenAIRE

    Qazi, Abroon; Quigley, John; Dickson, Alex

    2014-01-01

    Supply chain risk management is an active area of research, and there is a research gap in exploring established risk quantification techniques from other fields for application in the context of supply chain management. We have developed a novel framework for quantification of supply chain risks that integrates two techniques: Bayesian belief networks and game theory. Bayesian belief networks can capture interdependency between risk factors, and game theory can assess risks associated with confl...
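
    To make the interdependency point concrete, the fragment below propagates probabilities through a two-node Bayesian belief network in plain Python. All probability values are hypothetical, and the game-theoretic half of the framework is not attempted here.

        # Minimal two-node network: supplier failure (A) -> delivery delay (B).
        p_a = 0.10                              # P(A): supplier fails
        p_b_given = {True: 0.70, False: 0.05}   # P(B | A) and P(B | not A)

        # Predictive reasoning: marginal probability of a delay.
        p_b = p_a * p_b_given[True] + (1.0 - p_a) * p_b_given[False]

        # Diagnostic reasoning via Bayes' rule: P(supplier failure | observed delay).
        p_a_given_b = p_a * p_b_given[True] / p_b

        print(f"P(delay) = {p_b:.3f}")                             # 0.115
        print(f"P(supplier failure | delay) = {p_a_given_b:.3f}")  # about 0.609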

  16. Localized Modes in Type II and Heterotic Singular Calabi-Yau Conformal Field Theories

    CERN Document Server

    Mizoguchi, Shun'ya

    2008-01-01

    We consider type II and heterotic string compactifications on an isolated singularity in the noncompact Gepner model approach. The conifold-type ADE noncompact Calabi-Yau threefolds, as well as the ALE twofolds, are modeled by a tensor product of the SL(2,R)/U(1) Kazama-Suzuki model and an N=2 minimal model. Based on the string partition functions on these internal Calabi-Yaus previously obtained by Eguchi and Sugawara, we construct new modular invariant, space-time supersymmetric partition functions for both type II and heterotic string theories, where the GSO projection is performed before the continuous and discrete state contributions are separated. We investigate in detail the massless spectra of the localized modes. In particular, we propose an interesting three generation model, in which each flavor is in the 27+1 representation of E6 and localized on a four-dimensional space-time residing at the tip of the cigar.

  17. Relation between anomaly in type-I superstring and anomaly in its effective theory

    International Nuclear Information System (INIS)

    We explicitly calculate the hexagon and heptagon (covariant) gauge anomaly including the precise numerical coefficients in type-I open superstring. The calculation is performed by using the Pauli-Villars regularization on the basis of the stringy Ward identity, which is an assumption weaker than the cancelled propagator argument. We show that the anomalies in the effective theory are realized as the zero slope limit (α' → 0) of the string anomalies in a very non-trivial way in the present calculational scheme. This non-trivial realization in higher point (n ≥ 8) anomalies is also discussed. (author)

  18. Worldline approach to Sudakov-type form factors in non-Abelian gauge theories

    CERN Document Server

    Gellas, G C; Ktorides, C N; Stefanis, N G

    1997-01-01

    We calculate Sudakov-type form factors for isolated spin-1/2 particles (fermions) entering non-Abelian gauge-field systems. We consider both the on- and the off-mass-shell case using a methodology which rests on a worldline casting of field theories. The simplicity and utility of our approach derives from the fact that we are in a position to make, a priori, a more transparent separation (factorization), with respect to a given scale, between short- and long-distance physics than diagrammatic methods.

  19. Bianchi Type-IX Magnetized Dark Energy Model in Saez-Ballester Theory of Gravitation

    Directory of Open Access Journals (Sweden)

    H. R. Ghate

    2014-03-01

    Full Text Available The Bianchi type-IX cosmological model with variable ω has been studied in the scalar-tensor theory of gravitation proposed by Saez and Ballester [Phys. Lett. A 113: 467, 1985] in the presence and absence of a magnetic field of energy density ρ_b. A special law of variation of Hubble's parameter proposed by Berman [Nuovo Cimento 74 B, 182, 1983] has been used to solve the field equations. The physical and kinematical properties of the model are also discussed.

  20. A numerical approach related to defect-type theories for some weakly random problems in homogenization

    CERN Document Server

    Anantharaman, Arnaud

    2010-01-01

    We present in this paper an approach for computing the homogenized behavior of a medium that is a small random perturbation of a periodic reference material. The random perturbation we consider is, in a sense made precise in our work, a rare event at the microscopic level. It however affects the macroscopic properties of the material, and we indeed provide a method to compute the first and second-order corrections. To this end, we formally establish an asymptotic expansion of the macroscopic properties. Our perturbative approach shares common features with a defect-type theory of solid state physics. The computational efficiency of the approach is demonstrated.

  1. Bianchi type I cosmology in generalized Saez-Ballester theory via Noether gauge symmetry

    International Nuclear Information System (INIS)

    In this paper, we investigate the generalized Saez-Ballester scalar-tensor theory of gravity via Noether gauge symmetry (NGS) in the background of Bianchi type I cosmological spacetime. We start with the Lagrangian of our model and calculate its gauge symmetries and corresponding invariant quantities. We obtain the potential function for the scalar field in the exponential form. For all the symmetries obtained, we determine the gauge functions corresponding to each gauge symmetry which include constant and dynamic gauge. We discuss cosmological implications of our model and show that it is compatible with the observational data. (orig.)

  2. Bianchi type I cosmology in generalized Saez-Ballester theory via Noether gauge symmetry

    Energy Technology Data Exchange (ETDEWEB)

    Jamil, Mubasher [National University of Sciences and Technology (NUST), Center for Advanced Mathematics and Physics (CAMP), Islamabad (Pakistan); Eurasian National University, Eurasian International Center for Theoretical Physics, Astana (Kazakhstan); Ali, Sajid [National University of Sciences and Technology (NUST), School of Electrical Engineering and Computer Sciences (SEECS), Islamabad (Pakistan); Momeni, D.; Myrzakulov, R. [Eurasian National University, Eurasian International Center for Theoretical Physics, Astana (Kazakhstan)

    2012-04-15

    In this paper, we investigate the generalized Saez-Ballester scalar-tensor theory of gravity via Noether gauge symmetry (NGS) in the background of Bianchi type I cosmological spacetime. We start with the Lagrangian of our model and calculate its gauge symmetries and corresponding invariant quantities. We obtain the potential function for the scalar field in the exponential form. For all the symmetries obtained, we determine the gauge functions corresponding to each gauge symmetry which include constant and dynamic gauge. We discuss cosmological implications of our model and show that it is compatible with the observational data. (orig.)

  3. S-matrix elements and covariant tachyon action in type 0 theory

    International Nuclear Information System (INIS)

    We evaluate the sphere-level S-matrix element of two tachyons and two massless NS states, the S-matrix element of four tachyons, and the S-matrix element of two tachyons and two Ramond-Ramond vertex operators, in type 0 theory. We then find an expansion for these amplitudes whose leading-order terms correspond to a covariant tachyon action. To the order considered, there are no T^4, T^2(\bar{T})^2, T^2H^2, nor T^2R tachyon couplings, whereas the tachyon couplings F\bar{F}T and T^2F^2 are non-zero.

  4. TaqMan RT-PCR assay coupled with capillary electrophoresis for quantification and identification of bcr-abl transcript type.

    Science.gov (United States)

    Luthra, Rajyalakshmi; Sanchez-Vega, Beatriz; Medeiros, L Jeffrey

    2004-01-01

    Chronic myelogenous leukemia is characterized by the presence of the reciprocal t(9;22)(q34;q11) in which c-abl located on chromosome 9, and the bcr locus located on chromosome 22, are disrupted and translocated creating a novel bcr-abl fusion gene residing on the derivative chromosome 22. In most cases, the breakpoint in abl occurs within intron 1. Depending on the breakpoint in bcr, exon 2 of abl (a2) joins with exons 1 (e1), 13 (b2), or 14 (b3), or rarely to exon 19 (e19) of bcr resulting in chimeric proteins of p190, p210 and p230, respectively. Currently, several multiplex real-time reverse transcriptase-polymerase chain reaction (RT-PCR)-based assays for detecting bcr-abl are available to assess the levels of the three common fusion transcripts, b2a2, b3a2 and e1a2. Although these assays circumvent the requirement for individual fusion sequence quantitative polymerase chain reaction-based assays, they do not identify the specific fusion transcript. Knowledge of the latter is useful to rule out false-positive results and to compare clones before and after therapy. We designed a novel multiplex real-time RT-PCR assay to detect bcr-abl that allows accurate quantification and determination of the specific fusion transcript. In this assay, abl primer labeled at its 5' end with the fluorescent dye NED (Applied Biosystems) is incorporated into the bcr-abl fusion product during amplification. The NED fluorescent dye in abl primer, without interfering with fluorescent TaqMan probe signal, allows subsequent identification of the fusion transcript by semiautomated high-resolution capillary electrophoresis and GeneScan analysis. PMID:14657955

  5. Optimal Uncertainty Quantification

    OpenAIRE

    Owhadi, Houman; Scovel, Clint; Sullivan, Timothy John; Mckerns, Mike; Ortiz, Michael

    2010-01-01

    We propose a rigorous framework for Uncertainty Quantification (UQ) in which the UQ objectives and the assumptions/information set are brought to the forefront. This framework, which we call \\emph{Optimal Uncertainty Quantification} (OUQ), is based on the observation that, given a set of assumptions and information about the problem, there exist optimal bounds on uncertainties: these are obtained as values of well-defined optimization problems corresponding to extremizing pr...

  6. On the effective theory of type II string compactifications on nilmanifolds and coset spaces

    Energy Technology Data Exchange (ETDEWEB)

    Caviezel, Claudio

    2009-07-30

    In this thesis we analyzed a large number of type IIA strict SU(3)-structure compactifications with fluxes and O6/D6-sources, as well as type IIB static SU(2)-structure compactifications with fluxes and O5/O7-sources. Restricting to structures and fluxes that are constant in the basis of left-invariant one-forms, these models are tractable enough to allow for an explicit derivation of the four-dimensional low-energy effective theory. The six-dimensional compact manifolds we studied in this thesis are nilmanifolds based on nilpotent Lie-algebras, and, on the other hand, coset spaces based on semisimple and U(1)-groups, which admit a left-invariant strict SU(3)- or static SU(2)-structure. In particular, from the set of 34 distinct nilmanifolds we identified two nilmanifolds, the torus and the Iwasawa manifold, that allow for an AdS4, N = 1 type IIA strict SU(3)-structure solution and one nilmanifold allowing for an AdS4, N = 1 type IIB static SU(2)-structure solution. From the set of all the possible six-dimensional coset spaces, we identified seven coset spaces suitable for strict SU(3)-structure compactifications, four of which also allow for a static SU(2)-structure compactification. For all these models, we calculated the four-dimensional low-energy effective theory using N = 1 supergravity techniques. In order to write down the most general four-dimensional effective action, we also studied how to classify the different disconnected "bubbles" in moduli space. (orig.)

  7. On the effective theory of type II string compactifications on nilmanifolds and coset spaces

    International Nuclear Information System (INIS)

    In this thesis we analyzed a large number of type IIA strict SU(3)-structure compactifications with fluxes and O6/D6-sources, as well as type IIB static SU(2)-structure compactifications with fluxes and O5/O7-sources. Restricting to structures and fluxes that are constant in the basis of left-invariant one-forms, these models are tractable enough to allow for an explicit derivation of the four-dimensional low-energy effective theory. The six-dimensional compact manifolds we studied in this thesis are nilmanifolds based on nilpotent Lie-algebras, and, on the other hand, coset spaces based on semisimple and U(1)-groups, which admit a left-invariant strict SU(3)- or static SU(2)-structure. In particular, from the set of 34 distinct nilmanifolds we identified two nilmanifolds, the torus and the Iwasawa manifold, that allow for an AdS4, N = 1 type IIA strict SU(3)-structure solution and one nilmanifold allowing for an AdS4, N = 1 type IIB static SU(2)-structure solution. From the set of all the possible six-dimensional coset spaces, we identified seven coset spaces suitable for strict SU(3)-structure compactifications, four of which also allow for a static SU(2)-structure compactification. For all these models, we calculated the four-dimensional low-energy effective theory using N = 1 supergravity techniques. In order to write down the most general four-dimensional effective action, we also studied how to classify the different disconnected "bubbles" in moduli space. (orig.)

  8. The early life origin theory in the development of cardiovascular disease and type 2 diabetes.

    Science.gov (United States)

    Lindblom, Runa; Ververis, Katherine; Tortorella, Stephanie M; Karagiannis, Tom C

    2015-04-01

    Life expectancy has been examined from a variety of perspectives in recent history. Epidemiology is one perspective which examines causes of morbidity and mortality at the population level. Over the past few hundred years there have been dramatic shifts in the major causes of death and expected life length. This shift has been inconsistent across time and space, with vast inequalities observed between population groups. Currently in focus is the challenge of rising non-communicable diseases (NCDs), such as cardiovascular disease and type 2 diabetes mellitus. In the search for methods to combat the rising incidence of these diseases, a number of new theories on the development of morbidity have arisen. A pertinent example is the hypothesis published by David Barker in 1995, which postulates a prenatal and early developmental origin of adult-onset disease and highlights the importance of the maternal environment. This theory has been subject to criticism; however, it has gradually gained acceptance. In addition, the relatively new field of epigenetics is contributing evidence in support of the theory. This review aims to explore the implications and limitations of the developmental origin hypothesis, via an historical perspective, in order to enhance understanding of the increasing incidence of NCDs and facilitate an improvement in planning public health policy. PMID:25270249

  9. Hints for Off-Shell Mirror Symmetry in type II/F-theory Compactifications

    CERN Document Server

    Alim, Murad; Jockers, Hans; Mayr, Peter; Mertens, Adrian; Soroush, Masoud

    2009-01-01

    We perform a Hodge theoretic study of parameter dependent families of D-branes on compact Calabi-Yau manifolds in type II and F-theory compactifications. Starting from a geometric Gauss-Manin connection for B type branes we study the integrability and flatness conditions. The B model geometry defines an interesting ring structure of operators. For the mirror A model this indicates the existence of an open-string extension of the so-called A model connection, whereas the discovered ring structure should be part of the open-string A model quantum cohomology. We obtain predictions for genuine Ooguri-Vafa invariants for Lagrangian branes on the quintic in P4 that pass some non-trivial consistency checks. We discuss the lift of the brane compactifications to F-theory on Calabi-Yau 4-folds and the effective couplings in the effective supergravity action as determined by the N = 1 special geometry of the open-closed deformation space.

  10. Engineering kinematic theory of ground contact pressure as applied to calculation of certain types of foundations

    Directory of Open Access Journals (Sweden)

    V.S. Korovkin

    2014-10-01

    Full Text Available A brief analysis of the ground models examined shows that, given the large variety of soil types and their properties, it is impossible to create a universal ground model. A variant of the engineering kinematic theory of ground contact pressure, as applied to the calculation of certain types of foundations, has been suggested. To resolve the static indeterminacy of soil behavior under load when interacting with foundations or fencing, a dimensionless diagram of soil deformation, presented as a nonlinear function, was used. An equation for the contact soil pressure on foundations is given, using the proposed coefficient of vertical pressure associated with the coefficients of lateral pressure on the walls of a conventional seal wedge. An engineering solution has been obtained for the settlement of rigid foundations over a complete cycle of vertical load. A way to determine the stiffness coefficient of the foundation soil is presented. The application of the engineering kinematic theory of ground contact pressure is illustrated with several practical examples.

  11. A New Type of Coupled Wave Theory Capable of Analytically Describing Diffraction in Polychromatic Gratings and Holograms

    International Nuclear Information System (INIS)

    A new type of coupled wave theory is described which is capable, in a very natural way, of analytically describing polychromatic gratings. In contrast to the well known and extremely successful coupled wave theory of Kogelnik, the new theory is based on a differential formulation of the process of Fresnel reflection within the grating. The fundamental coupled wave equations, which are an exact solution of Maxwell's equations for the case of the un-slanted reflection grating, can be analytically solved with minimal approximation. The equations may also be solved in a rotated frame of reference to provide useful formulae for the diffractive efficiency of the general polychromatic slanted grating in three dimensions. The new theory is compared with Kogelnik's theory where extremely good agreement is found for most cases. The theory has also been compared to a rigorous computational chain matrix simulation of the un-slanted grating with excellent agreement for cases typical to display holography. In contrast, Kogelnik's theory shows small discrepancies away from Bragg resonance. The new coupled wave theory may easily be extended to an N-coupled wave theory for the case of the multiplexed polychromatic grating and indeed for the purposes of analytically describing diffraction in the colour hologram. In the simple case of a monochromatic spatially-multiplexed grating at Bragg resonance the theory is in exact agreement with the predictions of conventional N-coupled wave theory.
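
    For reference, the Kogelnik benchmark against which the new theory is compared has a closed form in the simplest configuration. A sketch assuming a lossless unslanted reflection grating read out at Bragg incidence, with n_1 the index-modulation amplitude, d the grating thickness, lambda the free-space wavelength and theta the internal angle of incidence:

        % Kogelnik diffraction efficiency, lossless unslanted reflection grating at Bragg incidence
        \[
          \eta = \tanh^{2}\!\left(\frac{\pi\,n_{1}\,d}{\lambda\,\cos\theta}\right).
        \]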

  12. Method of Moments for the Continuous Transition Between the Brillouin-Wigner-Type and Rayleigh-Schroedinger-Type Multireference Coupled Cluster Theories

    OpenAIRE

    Pittner, Jiri; Piecuch, Piotr

    2009-01-01

    We apply the method of moments to the multireference (MR) coupled cluster (CC) formalism representing the continuous transition between the Brillouin-Wigner-type and Rayleigh-Schrödinger-type theories based on the Jeziorski-Monkhorst wave function ansatz and derive the formula for the noniterative energy corrections to the corresponding MRCC energies that recover the exact, full configuration interaction energies in the general model space case, inclu...

  13. Understanding physical activity intentions among French Canadians with type 2 diabetes: an extension of Ajzen's theory of planned behaviour

    OpenAIRE

    Godin, Gaston; Boudreau, François

    2009-01-01

    Background: Regular physical activity is considered a cornerstone for managing type 2 diabetes. However, in Canada, most individuals with type 2 diabetes do not meet national physical activity recommendations. When designing a theory-based intervention, one should first determine the key determinants of physical activity for this population. Unfortunately, there is a lack of information on this aspect among adults with type 2 diabetes. The purpose of this cross-sectional study is to f...

  14. Type II/F-theory Superpotentials with Several Deformations and N=1 Mirror Symmetry

    CERN Document Server

    Alim, Murad; Jockers, Hans; Mayr, Peter; Mertens, Adrian; Soroush, Masoud

    2010-01-01

    We present a detailed study of D-brane superpotentials depending on several open and closed-string deformations. The relative cohomology group associated with the brane defines a generalized hypergeometric GKZ system which determines the off-shell superpotential and its analytic properties under deformation. Explicit expressions for the N=1 superpotential for families of type II/F-theory compactifications are obtained for a list of multi-parameter examples. Using the Hodge theoretic approach to open-string mirror symmetry, we obtain new predictions for integral disc invariants in the A model instanton expansion. We study the behavior of the brane vacua under extremal transitions between different Calabi-Yau spaces and observe that the web of Calabi-Yau vacua remains connected for a particular class of branes.

  15. Enhanced Gauge Symmetry in Type II and F-Theory Compactifications: Dynkin Diagrams from Polyhedra

    CERN Document Server

    Perevalov, E V; Perevalov, Eugene; Skarke, Harald

    1997-01-01

    We explain the observation by Candelas and Font that the Dynkin diagrams of nonabelian gauge groups occurring in type IIA and F-theory can be read off from the polyhedron $\Delta^*$ that provides the toric description of the Calabi-Yau manifold used for compactification. We show how the intersection pattern of toric divisors corresponding to the degeneration of elliptic fibers follows the ADE classification of singularities and the Kodaira classification of degenerations. We treat in detail the cases of elliptic K3 surfaces and K3 fibered threefolds where the fiber is again elliptic. We also explain how even the occurrence of monodromy and non-simply laced groups in the latter case is visible in the toric picture. These methods also work in the fourfold case.

  16. Enhanced gauge symmetry in type II and F-theory compactifications: Dynkin diagrams from polyhedra

    International Nuclear Information System (INIS)

    We explain the observation by Candelas and Font that the Dynkin diagrams of non-abelian gauge groups occurring in type IIA and F-theory can be read off from the polyhedron Δ* that provides the toric description of the Calabi-Yau manifold used for compactification. We show how the intersection pattern of toric divisors corresponding to the degeneration of elliptic fibers follows the ADE classification of singularities and the Kodaira classification of degenerations. We treat in detail the cases of elliptic K3 surfaces and K3 fibered threefolds where the fiber is again elliptic. We also explain how even the occurrence of monodromy and non-simply laced groups in the latter case is visible in the toric picture. These methods also work in the fourfold case. (orig.)

  17. Enhanced gauge symmetry in type II and F-theory compactifications: Dynkin diagrams from polyhedra

    Energy Technology Data Exchange (ETDEWEB)

    Perevalov, E.; Skarke, H. [Texas Univ., Austin, TX (United States). Dept. of Physics

    1997-11-17

    We explain the observation by Candelas and Font that the Dynkin diagrams of non-abelian gauge groups occurring in type IIA and F-theory can be read off from the polyhedron Δ* that provides the toric description of the Calabi-Yau manifold used for compactification. We show how the intersection pattern of toric divisors corresponding to the degeneration of elliptic fibers follows the ADE classification of singularities and the Kodaira classification of degenerations. We treat in detail the cases of elliptic K3 surfaces and K3 fibered threefolds where the fiber is again elliptic. We also explain how even the occurrence of monodromy and non-simply laced groups in the latter case is visible in the toric picture. These methods also work in the fourfold case. (orig.). 29 refs.

  18. Natural inflation with and without modulations in type IIB string theory

    CERN Document Server

    Abe, Hiroyuki; Otsuka, Hajime

    2014-01-01

    We propose a mechanism for natural inflation with and without modulations in the framework of type IIB string theory on a toroidal orientifold or orbifold. We explicitly construct the stabilization potential of the complex structure, dilaton and Kähler moduli, where one of the imaginary components of the complex structure moduli becomes light and is identified as the inflaton. The inflaton potential is generated by the gaugino-condensation term, which receives one-loop threshold corrections determined by the field values of the complex structure moduli, and the axion decay constant of the inflaton is enhanced by the inverse of the one-loop factor. We also find that the threshold corrections can induce modulations of the original scalar potential for natural inflation. Depending on these modulations, we can predict several values of the tensor-to-scalar ratio as well as the other cosmological observables reported by the WMAP, Planck and/or BICEP2 collaborations.

  19. Fluxed instantons and moduli stabilization in type IIB orientifolds and F theory

    International Nuclear Information System (INIS)

    We study the superpotential induced by Euclidean D3-brane instantons carrying instanton flux, with special emphasis on its significance for the stabilization of Kaehler moduli and Neveu-Schwarz axions in type IIB orientifolds. Quite generally, once a chiral observable sector is included in the compactification, arising on intersecting D7-branes with world-volume flux, resulting charged instanton zero modes prevent a class of instantons from participating in moduli stabilization. We show that instanton flux on Euclidean D3-branes can remove these extra zero modes and help in reinstating full moduli stabilization within a geometric regime. We comment also on the F-theoretic description of this effect of alleviating the general tension between moduli stabilization and chirality. In addition, we propose an alternative solution to this problem based on dressing the instantons with charged matter fields, which is unique to F theory and cannot be realized in the weak coupling limit.

  20. Theory of Type-II Superconductors with Finite London Penetration Depth

    CERN Document Server

    Brandt, E H

    2001-01-01

    Previous continuum theory of type-II superconductors of various shapes, with and without vortex pinning, in an applied magnetic field and with transport current is generalized to account for a finite London penetration depth lambda. This extension is particularly important at low inductions B, where the transition to the Meissner state is now described correctly, and for films with thickness comparable to or smaller than lambda. The finite width of the surface layer with screening currents and the correct dc and ac responses in various geometries follow naturally from an equation of motion for the current density in which the integral kernel now accounts for finite lambda. New geometries considered here are thick and thin strips with applied current, and `washers', i.e. thin film squares with a slot and central hole as used for SQUIDs.

  1. The Origin of Quantification

    Directory of Open Access Journals (Sweden)

    Edward MacKinnon

    2013-11-01

    Full Text Available Neither the Greek nor the Alexandrian nor the early Arabic philosopher-scientists ever developed a mathematical representation of qualities, a prerequisite for a mathematical physics. By the early seventeenth century the quantification of qualities was a common practice. This article traces the way this practice developed. It originated with a medieval theological problem and was developed by philosophical logicians who did not have mathematical physics as a goal. The verbal algebra they developed was given a mathematical formulation in the late fifteenth century. This was subsequently assimilated into a neo-Platonic revival that stressed mathematical forms. The quantification of qualities developed in physics supplied the paradigm for quantification in other fields.

  2. On the worldsheet theory of the type IIA AdS(4) x CP(3) superstring

    CERN Document Server

    Sundin, Per

    2009-01-01

    We perform a detailed study of the type IIA superstring in AdS(4)xCP(3). After introducing suitable bosonic light-cone and fermionic kappa worldsheet gauges, we present the full SU(2|2)xU(1) covariant string Lagrangian. We then expand the theory in a strong coupling limit and derive the light-cone Hamiltonian up to quartic order in the number of fields. To maintain a canonical Poisson structure we have to implement a shift of the fermionic coordinates, with the result that the phase space Hamiltonian is rather involved. As a first application of our derivation we calculate energy shifts for string configurations in a closed fermionic subsector and successfully match these with a set of light-cone Bethe equations. We then turn to investigate the mismatch between the degrees of freedom of scattering states and oscillatory string modes. Since only light string modes appear as fundamental Bethe roots in the scattering theory, the physical role of the remaining 4f+4b massive oscillators is rather unclear. By continuing ...

  3. Mirage models confront the LHC. II. Flux-stabilized type IIB string theory

    Science.gov (United States)

    Kaufman, Bryan L.; Nelson, Brent D.

    2014-04-01

    We continue the study of a class of string-motivated effective supergravity theories in light of current data from the CERN Large Hadron Collider (LHC). In this installment we consider type IIB string theory compactified on a Calabi-Yau orientifold in the presence of fluxes, in the manner originally formulated by Kachru et al. We allow for a variety of potential uplift mechanisms and embeddings of the Standard Model field content into D3- and D7-brane configurations. We find that an uplift sector independent of the Kähler moduli, as is the case with anti-D3-branes, is inconsistent with data unless the matter and Higgs sectors are localized on D7 branes exclusively, or are confined to twisted sectors between D3- and D7-branes. We identify regions of parameter space for all possible D-brane configurations that remain consistent with Planck observations on the dark matter relic density and measurements of the CP-even Higgs mass at the LHC. Constraints arising from LHC searches at √s = 8 TeV and the LUX dark matter detection experiment are discussed. The discovery prospects for the remaining parameter space at dark matter direct-detection experiments are described, and signatures for detection of superpartners at the LHC with √s = 14 TeV are analyzed.

  4. Quantification of Spatial Errors of Precipitation Rates and Types from the TRMM Precipitation Radar (the latest successive V6 and V7) over the United States

    Science.gov (United States)

    Chen, S.; Kirstetter, P.; Hong, Y.; Gourley, J. J.; Zhang, J.; Howard, K.; Hu, J.

    2012-12-01

    The spatial error structure of surface rain rates and types from NASA's Tropical Rainfall Measuring Mission (TRMM) Precipitation Radar (PR) was systematically studied by comparing them with NOAA/National Severe Storms Laboratory's (NSSL) next-generation, high-resolution (1 km/5 min) National Mosaic QPE (Q2) over the TRMM-covered Continental United States (CONUS). Data pairs are first matched at the PR footprint scale (5 km/instantaneous) and then grouped into 0.25-degree grid cells to yield spatially distributed error maps and statistics using data from Dec. 2009 through Nov. 2010. Performance metrics include bias, relative bias (RB), root-mean-square error (RMSE), correlation coefficient (CC), and contingency table statistics for different rain rate thresholds and rain types. The differences in bias, RB, RMSE and CC between the latest successive version 6 (V6) PR (PRV6) and version 7 (V7) PR (PRV7) will show where and how PRV7 improves over PRV6.
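
    The performance metrics listed above are straightforward to compute on matched data pairs. A minimal sketch assuming two equally shaped arrays of matched PR and Q2 rain rates; the array names and the 0.1 mm/h threshold are illustrative choices, not values from the study.

        import numpy as np

        def verification_metrics(pr, q2):
            """Bias, relative bias (%), RMSE and correlation of PR against the Q2 reference."""
            bias = np.mean(pr - q2)
            rb = 100.0 * np.sum(pr - q2) / np.sum(q2)
            rmse = np.sqrt(np.mean((pr - q2) ** 2))
            cc = np.corrcoef(pr, q2)[0, 1]
            return bias, rb, rmse, cc

        def pod_far(pr, q2, thr=0.1):
            """Contingency-table scores for a given rain/no-rain threshold (mm/h)."""
            hits = np.sum((pr >= thr) & (q2 >= thr))
            misses = np.sum((pr < thr) & (q2 >= thr))
            false_alarms = np.sum((pr >= thr) & (q2 < thr))
            return hits / (hits + misses), false_alarms / (hits + false_alarms)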

  5. Quantification of the physiochemical constraints on the export of spider silk proteins by Salmonella type III secretion

    OpenAIRE

    Voigt, Christopher A; Widmaier, Daniel M

    2010-01-01

    Background: The type III secretion system (T3SS) is a molecular machine in gram-negative bacteria that exports proteins through both membranes to the extracellular environment. It has been previously demonstrated that the T3SS encoded in Salmonella Pathogenicity Island 1 (SPI-1) can be harnessed to export recombinant proteins. Here, we demonstrate the secretion of a variety of unfolded spider silk proteins and use these data to quantify the constraints of this system with respect to t...

  6. Quantification of beta-cell function during IVGTT in Type II and non-diabetic subjects: assessment of insulin secretion by mathematical methods.

    DEFF Research Database (Denmark)

    Kjems, L L; Vølund, A

    2001-01-01

    AIMS/HYPOTHESIS: We compared four methods to assess their accuracy in measuring insulin secretion during an intravenous glucose tolerance test in patients with Type II (non-insulin-dependent) diabetes mellitus and with varying beta-cell function and matched control subjects. METHODS: Eight control subjects and eight Type II diabetic patients underwent an intravenous glucose tolerance test with tolbutamide and an intravenous bolus injection of C-peptide to assess C-peptide kinetics. Insulin secretion rates were determined by the Eaton deconvolution (reference method), the Insulin SECretion method (ISEC) based on population kinetic parameters as well as one-compartment and two-compartment versions of the combined model of insulin and C-peptide kinetics. To allow a comparison of the accuracy of the four methods, fasting rates and amounts of insulin secreted during the first phase (0-10 min) and the second phase (10-180 min) were calculated. RESULTS: All secretion responses from the ISEC method were strongly correlated to those obtained by the Eaton deconvolution method (r = 0.83-0.92). The one-compartment combined model, however, showed a high correlation to the reference method only for the first-phase insulin response (r = 0.78). The two-compartment combined model failed to provide reliable estimates of insulin secretion in three of the control subjects and in two patients with Type II diabetes. The four methods were accurate with respect to mean basal and first-phase secretion response. The one-compartment and two-compartment combined models were less accurate in measuring the second-phase response. CONCLUSION/INTERPRETATION: The ISEC method can be applied to normal, obese or Type II diabetic patients. In patients with deviating kinetics of C-peptide the Eaton deconvolution method is the method of choice while the one-compartment combined model is suitable for measuring only the first-phase insulin secretion.
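
    The deconvolution step shared by these methods can be illustrated compactly: given plasma C-peptide concentrations and a known impulse response, recover the secretion rates under a non-negativity constraint. The sketch below uses a hypothetical two-exponential kernel and simulated data; it stands in for neither the Eaton deconvolution nor the ISEC implementation.

        import numpy as np
        from scipy.optimize import nnls

        # Hypothetical two-exponential C-peptide impulse response h(t); real analyses use
        # individually measured or population-based kinetic parameters.
        dt = 2.0                                    # sampling interval in minutes
        t = np.arange(0.0, 60.0, dt)
        h = 0.7 * np.exp(-t / 5.0) + 0.3 * np.exp(-t / 35.0)

        # Discrete convolution matrix H, so that concentrations obey c = H @ s.
        n = len(t)
        H = np.zeros((n, n))
        for i in range(n):
            for j in range(i + 1):
                H[i, j] = h[i - j] * dt

        # Simulated concentrations from a known secretion profile plus measurement noise.
        rng = np.random.default_rng(0)
        s_true = np.exp(-((t - 10.0) ** 2) / 50.0)  # a first-phase-like secretion burst
        c = H @ s_true + rng.normal(0.0, 0.01, n)

        # Deconvolution with non-negative least squares.
        s_est, _ = nnls(H, c)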

  7. A new simple and rapid LC-ESI-MS/MS method for quantification of plasma oxysterols as dimethylaminobutyrate esters. Its successful use for the diagnosis of Niemann-Pick type C disease.

    Science.gov (United States)

    Boenzi, Sara; Deodato, Federica; Taurisano, Roberta; Martinelli, Diego; Verrigni, Daniela; Carrozzo, Rosalba; Bertini, Enrico; Pastore, Anna; Dionisi-Vici, Carlo; Johnson, David W

    2014-11-01

    Two oxysterols, cholestane-3β,5α,6β-triol (C-triol) and 7-ketocholesterol (7-KC), have recently been proposed as diagnostic markers of Niemann-Pick type C (NP-C) disease, representing a potential alternative diagnostic tool to the more invasive and time-consuming filipin test in cultured fibroblasts. Usually, the oxysterols are detected and quantified by a liquid chromatography-tandem mass spectrometry (LC-MS/MS) method using atmospheric pressure chemical ionization (APCI) or electrospray ionization (ESI) sources, after a variety of derivatization procedures to enhance sensitivity. We developed a sensitive LC-MS/MS method to quantify the oxysterols in plasma as dimethylaminobutyrate esters, suitable for ESI analysis. This method, with an easy liquid-phase extraction and a short derivatization procedure, has been validated to demonstrate specificity, linearity, recovery, lowest limit of quantification, accuracy and precision. The assay was linear over a concentration range of 0.5-200 ng/mL for C-triol and 1.0-200 ng/mL for 7-KC. Intra-day and inter-day coefficients of variation (CV%) were within the validation acceptance limits, and the method proved suitable for the diagnosis of NP-C disease. PMID:25038260

  8. LRS Bianchi type-V cosmology with heat flow in scalar-tensor theory

    Scientific Electronic Library Online (English)

    Singh, C.P.

    2009-12-01

    Full Text Available In this paper we present a spatially homogeneous, locally rotationally symmetric (LRS) Bianchi type-V perfect fluid model with heat conduction in the scalar-tensor theory proposed by Saez and Ballester. The field equations are solved with and without heat conduction by using a law of variation for the mean Hubble parameter, which is related to the average scale factor of the metric and yields a constant value for the deceleration parameter. The law of variation for the mean Hubble parameter generates two types of cosmologies, one of power-law form and the other exponential. Using these two forms, singular and non-singular solutions are obtained with and without heat conduction. We observe that a constant value of the deceleration parameter gives a reasonable description of the different phases of the universe: the universe decelerates for positive values of the deceleration parameter, whereas it accelerates for negative ones. The physical constraints on the solutions of the field equations, in particular the thermodynamical laws and energy conditions that govern such solutions, are discussed in some detail. The behavior of observationally important parameters such as the expansion scalar, anisotropy parameter and shear scalar is considered in detail.

  9. Three-dimensional theory of quantum memories based on Λ-type atomic ensembles

    International Nuclear Information System (INIS)

    We develop a three-dimensional theory for quantum memories based on light storage in ensembles of Λ-type atoms, where two long-lived atomic ground states are employed. We consider light storage in an ensemble of finite spatial extent and we show that within the paraxial approximation the Fresnel number of the atomic ensemble and the optical depth are the only important physical parameters determining the quality of the quantum memory. We analyze the influence of these parameters on the storage of light followed by either forward or backward read-out from the quantum memory. We show that for small Fresnel numbers the forward memory provides higher efficiencies, whereas for large Fresnel numbers the backward memory is advantageous. The optimal light modes to store in the memory are presented together with the corresponding spin waves and outcoming light modes. We show that for high optical depths such Λ-type atomic ensembles allow for highly efficient backward and forward memories even for small Fresnel numbers F ≳ 0.1.

  10. Analytical validation of a second-generation immunoassay for the quantification of N-terminal pro-B-type natriuretic peptide in canine blood.

    Science.gov (United States)

    Cahill, Roberta J; Pigeon, Kathleen; Strong-Townsend, Marilyn I; Drexel, Jan P; Clark, Genevieve H; Buch, Jesse S

    2015-01-01

    N-terminal pro-B-type natriuretic peptide (NT-proBNP) has been shown to have clinical utility as a biomarker in dogs with heart disease. There were several limitations associated with early diagnostic assay formats including a limited dynamic range and the need for protease inhibitors to maintain sample stability. A second-generation Cardiopet® proBNP enzyme-linked immunosorbent assay (IDEXX Laboratories Inc., Westbrook, Maine) was developed to address these limitations, and the present study reports the results of the analytical method validation for the second-generation assay. Coefficients of variation for intra-assay, interassay, and total precision based on 8 samples ranged from 3.9% to 8.9%, 2.0% to 5.0%, and 5.5% to 10.6%, respectively. Analytical sensitivity was established at 102 pmol/l. Accuracy averaged 102.0% based on the serial dilutions of 5 high-dose canine samples. Bilirubin, lipids, and hemoglobin had no effect on results. Reproducibility across 3 unique assay lots was excellent with an average coefficient of determination (r²) of 0.99 and slope of 1.03. Both ethylenediamine tetra-acetic acid plasma and serum gave equivalent results at time of blood draw (slope = 1.02, r² = 0.89; n = 51) but NT-proBNP was more stable in plasma at 25°C with median half-life measured at 244 hr and 136 hr for plasma and serum, respectively. Plasma is the preferred sample type and is considered stable up to 48 hr at room temperature whereas serum should be frozen or refrigerated when submitted for testing. Results of this study validate the second-generation canine Cardiopet proBNP assay for accurate and precise measurement of NT-proBNP in routine sample types from canine patients. PMID:25525139
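    As a rough illustration of how such precision figures are obtained, the sketch below computes the intra-assay CV (replicates within a run) and inter-assay CV (run means across runs); the replicate values are invented for illustration and are not data from this validation study.

```python
import numpy as np

# Hypothetical replicate NT-proBNP measurements (pmol/l): rows = assay runs,
# columns = within-run replicates of the same sample.
runs = np.array([
    [812, 798, 825],
    [790, 805, 779],
    [830, 818, 841],
])

# Intra-assay CV: variability of replicates within each run, averaged.
intra_cv = np.mean(runs.std(axis=1, ddof=1) / runs.mean(axis=1)) * 100

# Inter-assay CV: variability of run means across runs.
run_means = runs.mean(axis=1)
inter_cv = run_means.std(ddof=1) / run_means.mean() * 100

print(f"intra-assay CV ~{intra_cv:.1f}%, inter-assay CV ~{inter_cv:.1f}%")
```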

  11. Quantification of the DNA Cleavage and Packaging Proteins UL15 and UL28 in A and B Capsids of Herpes Simplex Virus Type 1

    OpenAIRE

    Beard, Philippa M.; Duffy, Carol; Baines, Joel D.

    2004-01-01

    The proteins produced by the herpes simplex virus type 1 (HSV-1) genes UL15 and UL28 are believed to form part of the terminase enzyme, a protein complex essential for the cleavage of newly synthesized, concatameric herpesvirus DNA and the packaging of the resultant genome lengths into preformed capsids. This work describes the purification of recombinant forms of pUL15 and pUL28, which allowed the calculation of the average number of copies of each protein in A and B capsids and in capsids l...

  12. Evaluation of the VACUTAINER PPT Plasma Preparation Tube for Use with the Bayer VERSANT Assay for Quantification of Human Immunodeficiency Virus Type 1 RNA

    OpenAIRE

    Elbeik, Tarek; Nassos, Patricia; Kipnis, Patricia; Haller, Barbara; Ng, Valerie L.

    2005-01-01

    Separation and storage of plasma within 2 h of phlebotomy is required for the VACUTAINER PPT Plasma Preparation Tube (PPT) versus 4 h for the predecessor VACUTAINER EDTA tube for human immunodeficiency virus type 1 (HIV-1) viral load (HIVL) testing by the VERSANT HIV-1 RNA 3.0 assay (branched DNA). The 2-h limit for PPT imposes time constraints for handling and transporting to the testing laboratory. This study compares HIVL reproducibility from matched blood in EDTA tubes and PPTs and betwee...

  13. Evaluation of the performance of 57 Japanese participating laboratories by two types of z-scores in proficiency test for the quantification of pesticide residues in brown rice.

    Science.gov (United States)

    Otake, Takamitsu; Yarita, Takashi; Aoyagi, Yoshie; Numata, Masahiko; Takatsu, Akiko

    2014-11-01

    A proficiency test for the analysis of pesticide residues in brown rice was carried out to support improvement of the analytical skills of the participating laboratories. Brown rice containing three target pesticides (etofenprox, fenitrothion, and isoprothiolane) was used as the test sample. The test samples were distributed to the 57 participants and analyzed by appropriate analytical methods chosen by each participant. There was no significant difference among the reported values obtained by different types of analytical method. The analytical results obtained by the National Metrology Institute of Japan (NMIJ) were 3% to 10% greater than those obtained by the participants. The results reported by the participants were evaluated using two types of z-scores: one based on the consensus values calculated from the analytical results of the participants, and the other based on the reference values obtained by NMIJ with high reliability. Acceptable z-scores based on the consensus values and the NMIJ reference values were achieved by 87% to 89% and 79% to 94% of the participants, respectively. PMID:25258285
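    The two scoring variants are easy to make concrete. A hedged sketch with invented reported concentrations and an assumed standard deviation for proficiency assessment (sigma_pt); only the z = (x - X)/sigma_pt convention itself is taken from standard proficiency-testing practice.

```python
import numpy as np

def z_scores(reported, assigned, sigma_pt):
    """z = (x - X) / sigma_pt; |z| <= 2 is conventionally 'satisfactory'."""
    return (np.asarray(reported, float) - assigned) / sigma_pt

# Hypothetical reported concentrations (mg/kg) for one pesticide.
reported = np.array([0.95, 1.02, 1.10, 0.88, 1.30, 0.99])

# Variant 1: consensus value derived from the participants themselves.
consensus = np.median(reported)
# Variant 2: metrologically traceable reference value (e.g. from NMIJ).
reference = 1.08

sigma_pt = 0.12  # standard deviation for proficiency assessment (assumed)
print("z (consensus):", np.round(z_scores(reported, consensus, sigma_pt), 2))
print("z (reference):", np.round(z_scores(reported, reference, sigma_pt), 2))
```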

  14. Non-perturbative black holes in Type-IIA String Theory versus the No-Hair conjecture

    International Nuclear Information System (INIS)

    We obtain the first black hole solution to Type-IIA String Theory compactified on an arbitrary self-mirror Calabi–Yau manifold in the presence of non-perturbative quantum corrections. Remarkably enough, the solution involves multivalued functions, which could lead to a violation of the No-Hair conjecture. We discuss how String Theory forbids such scenario. However, the possibility still remains open in the context of four-dimensional ungauged Supergravity. (paper)

  15. Theory of flux cutting and flux transport at the critical current of a type-II superconducting cylindrical wire

    OpenAIRE

    Clem, John R.

    2011-01-01

    I introduce a critical-state theory incorporating both flux cutting and flux transport to calculate the magnetic-field and current-density distributions inside a type-II superconducting cylinder at its critical current in a longitudinal applied magnetic field. The theory is an extension of the elliptic critical-state model introduced by Romero-Salazar and Perez-Rodriguez. The vortex dynamics depend in detail upon two nonlinear effective resistivities for flux cutting (\\rho_\\...

  16. D^6R^4 term in type IIB string theory on T^2 and U-duality

    International Nuclear Information System (INIS)

    We propose a manifestly U-duality invariant modular form for the D^6R^4 interaction in the effective action of type IIB string theory compactified on T^2. It receives perturbative contributions up to genus three, as well as nonperturbative contributions from D-instantons and (p,q) string instantons wrapping T^2. Our construction is based on constraints coming from string perturbation theory, U-duality, the decompactification limit to ten dimensions, and the equality of the perturbative part of the amplitude in type IIA and type IIB string theories. Using duality, parts of the perturbative amplitude are also shown to match exactly the results obtained from 11-dimensional supergravity compactified on T^3 at one loop. We also obtain parts of the genus-one and genus-k amplitudes for the D^{2k}R^4 interaction for arbitrary k ≥ 4. We enhance a part of this amplitude to a U-duality invariant modular form

  17. Convex Optimal Uncertainty Quantification

    OpenAIRE

    Han, Shuo; Tao, Molei; Topcu, Ufuk; Owhadi, Houman; Murray, Richard M.

    2013-01-01

    Optimal uncertainty quantification (OUQ) is a framework for numerical extreme-case analysis of stochastic systems with imperfect knowledge of the underlying probability distribution and functions/events. This paper presents sufficient conditions (when underlying functions are known) under which an OUQ problem can be reformulated as a finite-dimensional convex optimization problem.
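    A toy instance of this reduction, assuming only a known mean and a discretized support: the worst-case tail probability sup P[X >= a] over all admissible distributions becomes a finite-dimensional linear program, whose optimum reproduces the sharp Markov bound m/a. The grid and parameter values are invented for illustration.

```python
import numpy as np
from scipy.optimize import linprog

# Worst-case P[X >= a] over all distributions on a grid support with a
# known mean m: a tiny OUQ problem that is exactly a linear program.
a, m = 3.0, 1.0
x = np.linspace(0.0, 5.0, 501)            # discretized support
c = -(x >= a).astype(float)               # maximize P[X >= a] -> minimize -P
A_eq = np.vstack([np.ones_like(x), x])    # constraints: sum p = 1, sum x*p = m
b_eq = np.array([1.0, m])
res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
print(f"worst-case P[X >= {a}] = {-res.fun:.4f} (Markov bound m/a = {m/a:.4f})")
```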

  18. All order $\\alpha'$ higher derivative corrections to non-BPS branes of type IIB Super string theory

    CERN Document Server

    Hatefi, Ehsan

    2013-01-01

    By evaluating the relevant string theory correlators, the complete and closed form of the amplitude of two fermion fields, one tachyon and one closed string Ramond-Ramond field in type IIB superstring theory is found. Specifically, by comparing the infinite tachyon poles of the field theory amplitude with the infinite tachyon poles of the string S-matrix (for the $p+1=n$ case), all the infinite higher derivative corrections of two tachyons and two fermions (in type IIB) to all orders of $\alpha'$ have been discovered. Using these new couplings, we are able to produce the infinite $t'+s'+u$-channel tachyon poles of string theory in field theory. Due to the internal degrees of freedom of the fermions and the tachyon (Chan-Paton factors), we argue that there should be neither single $s$- or $t$-channel fermion or tachyon poles nor infinite towers of such poles. Due to the internal CP factor we also find that there is no coupling between two closed string Ramond-Ramond fields and one tachyon in type II superstring theory. Taking into...

  19. Mobility of radiogenic lead in the Shea Creek unconformity-type uranium deposit (Saskatchewan, Canada): migration paths and quantification of Pb losses

    Science.gov (United States)

    Kister, Philippe; Cuney, Michel; Golubev, Viacheslav N.; Royer, Jean-Jacques; Le Carlier De Veslud, Christian; Rippert, Jean-Claude

    2004-03-01

    The average Pb/U ratio of the Shea Creek unconformity-type uranium deposit has been estimated at 0.071±0.015. The calculation was performed on a volume enclosing the orebody to take into account the possible radiogenic lead migration within the ore zone. Despite this precaution, this ratio is significantly lower than the expected ratio (0.211) assuming a main U deposition around 1315 Ma, as suggested by previous U-Pb isotopic dating. Although part of the radiogenic lead can be trapped as galena within the orebody, about 60% of Pb have migrated more than 700 m away from the orebody, preferentially along the unconformity. To cite this article: P. Kister et al., C. R. Geoscience 336 (2004).
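    The quoted loss fraction can be checked by simple arithmetic on the two ratios given in the abstract; the residual difference from the reported ~60% reflects the radiogenic lead retained as galena within the orebody.

```python
# Back-of-envelope check of the lead-loss estimate quoted in the abstract:
# measured average Pb/U = 0.071 +/- 0.015 versus 0.211 expected for a
# ~1315 Ma deposition age (both values taken from the abstract).
measured, expected = 0.071, 0.211
fraction_lost = 1.0 - measured / expected
print(f"apparent radiogenic Pb deficit: {fraction_lost:.0%}")
# ~66% apparent deficit; after allowing for radiogenic Pb trapped as galena
# inside the orebody, the authors attribute ~60% to migration beyond 700 m,
# preferentially along the unconformity.
```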

  20. Search of unified theory of basic types of elementary particle interactions. Part 2

    International Nuclear Information System (INIS)

    An attempt is made at evolving a renormalizable theory unifying the electromagnetic and weak interaction theories. The theory is based on the principle of gauge symmetry and the idea of its spontaneous breaking. Feynman and Gell-Mann suggested a universal four-fermion theory, completed with an intermediate-boson theory. For the theory to be renormalizable it should be based on a suitable group of local gauge symmetries. All weak interactions comprised in the universal four-fermion theory are correctly described by a gauge theory based on weak isotopic symmetry. When the phase symmetry is also introduced, the photon can be included in the suggested gauge theory. In addition to vector bosons and fermions, the Higgs boson with zero spin is present in the theory. The theory unifying weak and electromagnetic interactions was proposed by Weinberg and Salam and is confirmed by experiments and new discoveries. Another attempt is the establishment of a theory unifying these interactions with the strong interactions, the so-called grand unification. The task consists in finding a symmetry group which includes, as special cases of its transformations, the symmetries corresponding to the strong, weak and electromagnetic interactions. The group SU(5) seems suitable for this unification. (M.D.)

  1. Investigation of the association of growth rate in grower-finishing pigs with the quantification of Lawsonia intracellularis and porcine circovirus type 2

    DEFF Research Database (Denmark)

    Johansen, Markku; Nielsen, MaiBritt

    2013-01-01

    As a part of a prospective cohort study in four herds, a nested case control study was carried out. Five slow growing pigs (cases) and five fast growing pigs (controls) out of 60 pigs were selected for euthanasia and laboratory examination at the end of the study in each herd. A total of 238 pigs, all approximately 12 weeks old, were included in the study during the first week in the grower–finisher barn. In each herd, approximately 60 pigs from four pens were individually ear tagged. The pigs were weighed at the beginning of the study and at the end of the 6–8 weeks observation period. Clinical data, blood and faecal samples were serially collected from the 60 selected piglets every second week in the observation period. In the killed pigs serum was examined for antibodies against Lawsonia intracellularis (LI) and porcine circovirus type 2 (PCV2) and in addition PCV2 viral DNA content was quantified. In faeces the quantity of LI cells/g faeces and number of PCV2 copies/g faeces was measured by qPCR. The objective of the study was to examine if growth rate in grower-finishing pigs is associated with the detection of LI and PCV2 infection or clinical data. This study has shown that diarrhoea is a significant risk factor for low growth rate and that one log10 unit increase in LI load increases the odds ratio for a pig to have a low growth rate by 2.0 times. Gross lesions in the small intestine and LI load > log10 6/g were significant risk factors for low growth. No association between PCV2 virus and low growth was found.

  2. Investigation of the association of growth rate in grower-finishing pigs with the quantification of Lawsonia intracellularis and porcine circovirus type 2.

    Science.gov (United States)

    Johansen, Markku; Nielsen, Maibritt; Dahl, Jan; Svensmark, Birgitta; Bækbo, Poul; Kristensen, Charlotte Sonne; Hjulsager, Charlotte Kristiane; Jensen, Tim K; Ståhl, Marie; Larsen, Lars E; Angen, Oystein

    2013-01-01

    As a part of a prospective cohort study in four herds, a nested case control study was carried out. Five slow growing pigs (cases) and five fast growing pigs (controls) out of 60 pigs were selected for euthanasia and laboratory examination at the end of the study in each herd. A total of 238 pigs, all approximately 12 weeks old, were included in the study during the first week in the grower-finisher barn. In each herd, approximately 60 pigs from four pens were individually ear tagged. The pigs were weighed at the beginning of the study and at the end of the 6-8 weeks observation period. Clinical data, blood and faecal samples were serially collected from the 60 selected piglets every second week in the observation period. In the killed pigs serum was examined for antibodies against Lawsonia intracellularis (LI) and porcine circovirus type 2 (PCV2) and in addition PCV2 viral DNA content was quantified. In faeces the quantity of LI cells/g faeces and number of PCV2 copies/g faeces was measured by qPCR. The objective of the study was to examine if growth rate in grower-finishing pigs is associated with the detection of LI and PCV2 infection or clinical data. This study has shown that diarrhoea is a significant risk factor for low growth rate and that one log10 unit increase in LI load increases the odds ratio for a pig to have a low growth rate by 2.0 times. Gross lesions in the small intestine and LI load > log10 6/g were significant risk factors for low growth. No association between PCV2 virus and low growth was found. PMID:22854321
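    The reported effect size is multiplicative on the odds scale, so it compounds across log units. A small sketch, treating OR = 2.0 per log10 unit (the figure quoted in the abstract) as the implied logistic-regression coefficient:

```python
import math

# Interpreting the reported association: the odds of low growth rate
# double for each log10 increase in faecal L. intracellularis load.
beta = math.log(2.0)  # logistic-regression coefficient implied by OR = 2.0

def odds_ratio(delta_log10_li):
    """OR for a given change in log10(LI cells/g faeces)."""
    return math.exp(beta * delta_log10_li)

for k in (1, 2, 3):
    print(f"+{k} log10 unit(s) of LI load -> OR = {odds_ratio(k):.1f}")
```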

  3. The use of quantitative PCR for identification and quantification of Brachyspira pilosicoli, Lawsonia intracellularis and Escherichia coli fimbrial types F4 and F18 in pig feces.

    Science.gov (United States)

    Ståhl, M; Kokotovic, B; Hjulsager, C K; Breum, S Ø; Angen, Ø

    2011-08-01

    Four quantitative PCR (qPCR) assays were evaluated for quantitative detection of Brachyspira pilosicoli, Lawsonia intracellularis, and E. coli fimbrial types F4 and F18 in pig feces. Standard curves were based on feces spiked with the respective reference strains. The detection limits from the spiking experiments were 10² bacteria/g feces for Bpilo-qPCR and Laws-qPCR, and 10³ CFU/g feces for F4-qPCR and F18-qPCR. The PCR efficiency for all four qPCR assays was between 0.91 and 1.01 with R² above 0.993. Standard curves, slopes and elevation, varied between assays and between measurements from pure DNA from reference strains and feces spiked with the respective strains. The linear ranges found for spiked fecal samples differed both from the linear ranges from pure culture of the reference strains and between the qPCR tests. The linear ranges were five log units for F4-qPCR and Laws-qPCR, six log units for F18-qPCR and three log units for Bpilo-qPCR in spiked feces. When measured on pure DNA from the reference strains used in spiking experiments, the respective log ranges were: seven units for Bpilo-qPCR, Laws-qPCR and F18-qPCR and six log units for F4-qPCR. This shows the importance of using specific standard curves, where each pathogen is analysed in the same matrix as sample DNA. The qPCRs were compared to traditional bacteriological diagnostic methods and found to be more sensitive than cultivation for E. coli and B. pilosicoli. The qPCR assay for Lawsonia was also more sensitive than the earlier used method due to improvements in DNA extraction. In addition, as samples were not analysed for all four pathogen agents by traditional diagnostic methods, many samples were found positive for agents that were not expected on the basis of age and case history. The use of quantitative PCR tests for diagnosis of enteric diseases provides new possibilities for veterinary diagnostics. The parallel simultaneous analysis for several bacteria in multi-qPCR and the determination of the quantities of the infectious agents increases the information obtained from the samples and the chance for obtaining a relevant diagnosis. PMID:21530108
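    A sketch of how amplification efficiencies like those above are typically derived from a spiked dilution series: fit Ct against log10(quantity) and convert the slope via E = 10^(-1/slope) - 1. The Ct values below are invented for illustration, not data from this study.

```python
import numpy as np
from scipy.stats import linregress

# Hypothetical dilution series of feces spiked with a reference strain.
log10_qty = np.array([2, 3, 4, 5, 6, 7], dtype=float)   # bacteria/g feces
ct = np.array([33.1, 29.8, 26.4, 23.1, 19.9, 16.5])     # measured Ct values

fit = linregress(log10_qty, ct)
efficiency = 10 ** (-1.0 / fit.slope) - 1.0   # E = 1.0 means perfect doubling
print(f"slope = {fit.slope:.2f}, R^2 = {fit.rvalue**2:.3f}, "
      f"efficiency = {efficiency:.2f}")
```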

  4. The use of quantitative PCR for identification and quantification of Brachyspira pilosicoli, Lawsonia intracellularis and Escherichia coli fimbrial types F4 and F18 in pig feces

    DEFF Research Database (Denmark)

    Ståhl, Marie; Kokotovic, Branko

    2011-01-01

    Four quantitative PCR (qPCR) assays were evaluated for quantitative detection of Brachyspira pilosicoli, Lawsonia intracellularis, and E. coli fimbrial types F4 and F18 in pig feces. Standard curves were based on feces spiked with the respective reference strains. The detection limits from the spiking experiments were 10² bacteria/g feces for Bpilo-qPCR and Laws-qPCR, and 10³ CFU/g feces for F4-qPCR and F18-qPCR. The PCR efficiency for all four qPCR assays was between 0.91 and 1.01 with R² above 0.993. Standard curves, slopes and elevation, varied between assays and between measurements from pure DNA from reference strains and feces spiked with the respective strains. The linear ranges found for spiked fecal samples differed both from the linear ranges from pure culture of the reference strains and between the qPCR tests. The linear ranges were five log units for F4-qPCR and Laws-qPCR, six log units for F18-qPCR and three log units for Bpilo-qPCR in spiked feces. When measured on pure DNA from the reference strains used in spiking experiments, the respective log ranges were: seven units for Bpilo-qPCR, Laws-qPCR and F18-qPCR and six log units for F4-qPCR. This shows the importance of using specific standard curves, where each pathogen is analysed in the same matrix as sample DNA. The qPCRs were compared to traditional bacteriological diagnostic methods and found to be more sensitive than cultivation for E. coli and B. pilosicoli. The qPCR assay for Lawsonia was also more sensitive than the earlier used method due to improvements in DNA extraction. In addition, as samples were not analysed for all four pathogen agents by traditional diagnostic methods, many samples were found positive for agents that were not expected on the basis of age and case history. The use of quantitative PCR tests for diagnosis of enteric diseases provides new possibilities for veterinary diagnostics. The parallel simultaneous analysis for several bacteria in multi-qPCR and the determination of the quantities of the infectious agents increases the information obtained from the samples and the chance for obtaining a relevant diagnosis.

  5. Liquid aspersion efficiency quantification experiment: application in ladder-type distributors

    Directory of Open Access Journals (Sweden)

    Marlene Silva de Moraes

    2008-03-01

    Full Text Available This paper describes a pilot-scale device and a simple method for comparing the efficiency of liquid distributors. The technique consists basically of analyzing the mass of liquid collected in 21 vertical pipes of 52 mm internal diameter and 800 mm length, placed in a quadratic arrangement below the distributor. A 50 mm thick acrylic blanket that does not disperse the liquid was fixed between the distributor and the pipe bank to avoid splashes. As an example of the application, assays were carried out with nine ladder-type distributors, each equipped with 4 parallel pipes, for a column of 400 mm diameter. The number (n) of orifices (95, 127, and 159 orifices/m²), the orifice diameter (d) (2, 3, and 4 mm) and the flow rate (q) (1.2, 1.4, and 1.6 m³/h) were varied. The best spreading efficiency, judged by the lowest standard deviation, was achieved with n = 159 orifices/m², d = 2 mm and q = 1.4 m³/h, indicating the limitations of practical design rules. The pressure (p) at the distributor inlet for this condition was only 51,000 Pa (0.51 kgf/cm²), while the average velocity (v) in each orifice was 6.3 m/s.

  6. From Peierls brackets to a generalized Moyal bracket for type-I gauge theories

    CERN Document Server

    Esposito, Giampiero; Stornaiolo, Cosimo

    2006-01-01

    In the space-of-histories approach to gauge fields and their quantization, the Maxwell, Yang--Mills and gravitational field are well known to share the property of being type-I theories, i.e. Lie brackets of the vector fields which leave the action functional invariant are linear combinations of such vector fields, with coefficients of linear combination given by structure constants. The corresponding gauge-field operator in the functional integral for the in-out amplitude is an invertible second-order differential operator. For such an operator, we consider advanced and retarded Green functions giving rise to a Peierls bracket among group-invariant functionals. Our Peierls bracket is a Poisson bracket on the space of all group-invariant functionals in two cases only: either the gauge-fixing is arbitrary but the gauge fields lie on the dynamical sub-space; or the gauge-fixing is a linear functional of gauge fields, which are generic points of the space of histories. In both cases, the resulting Peierls bracke...

  7. Theory of quenching quantum fluctuations of a laser system with a ladder-type configuration

    International Nuclear Information System (INIS)

    The theory of a laser system with a ladder-type configuration is studied in detail based on the quantum Langevin approach. By using an external field to link the lower lasing level with another atomic level whose decay rate is much larger, the laser intensity increases significantly and the quantum-limited linewidth can be quenched. We also discuss the spectrum of fluctuations of the output field, and the result shows that the fluctuations at low frequencies can be much suppressed too. Moreover, this quenching approach can realize laser output between two atomic levels whose decay rates do not satisfy the usual lasing condition that the decay rate of the lower lasing level be larger than that of the upper lasing level, which is very useful for realizing laser output at a desired wavelength. This quenching approach has been widely used in the absorption spectroscopy of the ytterbium optical lattice clock and in laser cooling of calcium atoms. Here we apply it to the stimulated emission of lasers.

  8. f(T) theories from holographic dark energy models within Bianchi type I universe

    Science.gov (United States)

    Fayaz, V.; Hossienkhani, H.; Pasqua, A.; Amirabadi, M.; Ganji, M.

    2015-02-01

    Recently, the teleparallel Lagrangian density described by the torsion scalar T has been extended to a function of T. The f(T) modified teleparallel gravity has been proposed as a natural gravitational alternative to dark energy to explain the late-time acceleration of the universe. We consider a spatially homogeneous and anisotropic Bianchi type I universe in the context of f(T) gravity. The purpose of this work is to develop a reconstruction of the f(T) gravity model according to the holographic dark energy model. We have considered an action of the form T + g(T) + L_m, describing Einstein's gravity plus a function of the torsion scalar. In the framework of this modified gravity theory, we have considered the equation of state of the holographic dark energy density. Subsequently, we have developed a reconstruction scheme for modified gravity with f(T) action. Finally, we have also studied the de Sitter and power-law solutions when the universe enters a phantom phase and shown that such solutions may exist for some f(T) solutions with the holographic and new agegraphic dark energy scenarios.

  9. Supersymmetry constraints on the R^4 multiplet in type IIB string theory on T^2

    International Nuclear Information System (INIS)

    We consider a class of eight-derivative interactions in the effective action of type IIB string theory compactified on T^2. These 1/2-BPS interactions have moduli-dependent couplings. We impose the constraints of supersymmetry to show that each of these couplings satisfies a first-order differential equation on moduli space which relates it to other couplings in the same supermultiplet. These equations can be iterated to give second-order differential equations for the various couplings. The couplings which only depend on the SL(2,R) moduli satisfy the Laplace equation on moduli space and are given by modular forms of SL(2,Z). On the other hand, the ones that only depend on the SL(3,R)/SO(3) moduli satisfy the Poisson equation on moduli space, where the source terms are given by other couplings in the same supermultiplet. The couplings of the interactions which are charged under SU(2) are not automorphic forms of SL(3,Z). Among the interactions that we consider, the R^4 coupling depends on all the moduli. (paper)

  10. Chern class identities from tadpole matching in type IIB and F-theory

    International Nuclear Information System (INIS)

    In light of Sen's weak coupling limit of F-theory as a type IIB orientifold, the compatibility of the tadpole conditions leads to a non-trivial identity relating the Euler characteristics of an elliptically fibered Calabi-Yau fourfold and of certain related surfaces. We present the physical argument leading to the identity, and a mathematical derivation of a Chern class identity which confirms it, after taking into account singularities of the relevant loci. This identity of Chern classes holds in arbitrary dimension, and for varieties that are not necessarily Calabi-Yau. Singularities are essential in both the physics and the mathematics arguments: the tadpole relation may be interpreted as an identity involving stringy invariants of a singular hypersurface, and corrections for the presence of pinch-points. The mathematical discussion is streamlined by the use of Chern-Schwartz-MacPherson classes of singular varieties. We also show how the main identity may be obtained by applying 'Verdier specialization' to suitable constructible functions.

  11. Algebraic Signal Processing Theory: Cooley-Tukey Type Algorithms for Polynomial Transforms Based on Induction

    CERN Document Server

    Sandryhaila, Aliaksei; Pueschel, Markus

    2010-01-01

    A polynomial transform is the multiplication of an input vector $x\in\mathbb{C}^n$ by a matrix $P_{b,\alpha}\in\mathbb{C}^{n\times n}$, whose $(k,\ell)$-th element is defined as $p_\ell(\alpha_k)$ for polynomials $p_\ell(x)\in\mathbb{C}[x]$ from a list $b=\{p_0(x),\dots,p_{n-1}(x)\}$ and sample points $\alpha_k\in\mathbb{C}$ from a list $\alpha=\{\alpha_0,\dots,\alpha_{n-1}\}$. Such transforms find applications in the areas of signal processing, data compression, and function interpolation. Important examples include the discrete Fourier and cosine transforms. In this paper we introduce a novel technique to derive fast algorithms for polynomial transforms. The technique uses the relationship between polynomial transforms and the representation theory of polynomial algebras. Specifically, we derive algorithms by decomposing the regular modules of these algebras as a stepwise induction. As an application, we derive novel $O(n\log{n})$ general-radix algorithms for the discrete Fourier transform and the discrete cosine transform of type 4.
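    A quick concrete check of the definition, assuming the monomial basis p_l(x) = x^l and roots of unity as sample points: the resulting polynomial transform matrix is exactly the DFT matrix.

```python
import numpy as np

# The DFT as a polynomial transform: p_l(x) = x**l evaluated at the
# sample points alpha_k = exp(-2*pi*i*k/n) reproduces the DFT matrix.
n = 8
k = np.arange(n)
alpha = np.exp(-2j * np.pi * k / n)            # sample points
P = alpha[:, None] ** np.arange(n)[None, :]    # P[k, l] = p_l(alpha_k)

x = np.random.default_rng(0).standard_normal(n)
assert np.allclose(P @ x, np.fft.fft(x))       # matches the DFT exactly
print("polynomial transform with p_l(x) = x^l at roots of unity == DFT")
```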

  12. Psychosocial correlates of dietary behaviour in type 2 diabetic women, using a behaviour change theory.

    Science.gov (United States)

    Didarloo, A; Shojaeizadeh, D; Gharaaghaji Asl, R; Niknami, S; Khorami, A

    2014-06-01

    The study evaluated the efficacy of the Theory of Reasoned Action (TRA), extended with self-efficacy, to predict dietary behaviour in a group of Iranian women with type 2 diabetes. A sample of 352 diabetic women referred to Khoy Diabetes Clinic, Iran, were selected and given a self-administered survey to assess eating behaviour, using the extended TRA constructs. Bivariate correlations and regression analyses (enter method) of the extended TRA model were performed with SPSS software. Overall, the proposed model explained 31.6% of the variance in behavioural intention and 21.5% of the variance in dietary behaviour. Among the model constructs, self-efficacy was the strongest predictor of intentions and dietary practice. In addition to the model variables, patients' visit intervals and their source of information about diabetes, among sociodemographic factors, were also associated with the dietary behaviours of the diabetics. This research has highlighted the relative importance of the extended TRA constructs for behavioural intention and subsequent behaviour. Therefore, use of the present research model in designing educational interventions to increase adherence to dietary behaviours among diabetic patients is recommended and emphasized. PMID:25076670

  13. Calabi-Yau compactifications of type IIB superstring theory

    Energy Technology Data Exchange (ETDEWEB)

    Boehm, R.

    2001-07-01

    Starting from a non-self-dual action for ten-dimensional type IIB supergravity, this theory is compactified on a Calabi-Yau 3-fold and a Calabi-Yau 4-fold. The compactifications are performed in the limit in which the volumes of the manifolds are large compared to the string scale.

  14. Two-loop Yang-Mills theory in the world-line formalism and an Euler-Heisenberg type action

    OpenAIRE

    Sato, Haru-tada; Schmidt, Michael G.; Zahlten, Claus

    2000-01-01

    Within the framework of the world-line formalism we write down in detail a two-loop Euler-Heisenberg type action for gluon loops in Yang-Mills theory and discuss its divergence structure. We exactly perform all the world-line moduli integrals at two loops by inserting a mass parameter, and then extract divergent coefficients to be renormalized.

  15. Non-Abelian dual superconductivity in SU(3) Yang-Mills theory: dual Meissner effect and type of the vacuum

    CERN Document Server

    Shibata, Akihiro; Kato, Seikou; Shinohara, Toru

    2013-01-01

    We have proposed the non-Abelian dual superconductivity picture for quark confinement in SU(3) Yang-Mills (YM) theory, and have given numerical evidence for the restricted-field dominance and the non-Abelian magnetic monopole dominance in the string tension by applying a new formulation of the YM theory on a lattice. To establish the non-Abelian dual superconductivity picture for quark confinement, we have observed the non-Abelian dual Meissner effect in SU(3) Yang-Mills theory by measuring the chromoelectric flux created by a quark-antiquark source, and the non-Abelian magnetic monopole currents induced around the flux. We conclude that the dual superconductivity of SU(3) Yang-Mills theory is strictly of type I and that this type of dual superconductivity is reproduced by the restricted field and the non-Abelian magnetic monopole part, in sharp contrast to the SU(2) case, which lies on the border between type I and type II.

  16. Validated method for phytohormone quantification in plants

    OpenAIRE

    Almeida Trapp, Marília; Souza, Gezimar D.; Rodrigues-Filho, Edson; Boland, William; Mithöfer, Axel

    2014-01-01

    Phytohormones have long been known as important components of signaling cascades in plant development and in plant responses to various abiotic and biotic challenges. Quantification of phytohormone levels in plants is typically carried out using GC- or LC-MS/MS systems, due to their high sensitivity and specificity and the fact that little sample preparation is needed. However, mass spectrometer-based analyses are often affected by the particular sample type (different matrices), extraction proc...

  17. D^4R^4 term in type IIB string theory on T^2 and U-duality

    International Nuclear Information System (INIS)

    We propose a manifestly U-duality invariant modular form for the D^4R^4 interaction in type IIB string theory compactified on T^2. It receives perturbative contributions up to two loops, and nonperturbative contributions from D-instantons and (p,q) string instantons wrapping T^2. We provide evidence for this modular form by showing that the coefficients at tree level and at one loop precisely match those obtained using string perturbation theory. Using duality, parts of the perturbative amplitude are also shown to match exactly the results obtained from 11-dimensional supergravity compactified on T^3 at one loop. Decompactifying the theory to nine dimensions, we obtain a U-duality invariant modular form, whose coefficients at tree level and at one loop agree with string perturbation theory

  18. Multimodal Defect Quantification

    OpenAIRE

    Hübner, S.; Von Stackelberg, B.; Fuchs, T.

    2010-01-01

    In the last few years thermographic testing methods gained much importance in the testing of composite materials such as carbon-fiber-reinforced polymers (CFRP). In many cases the detectability of defects like voids, cracks or debonding has been proven. Up to now the quantitative analysis of these defects is limited. First approaches exist, based on profilometric calculation of defect depth, but lateral quantification is still difficult due to thermal diffusion. Herein we present a method, which all...

  19. XPS quantification of the hetero-junction interface energy

    International Nuclear Information System (INIS)

    Highlights: • Quantum entrapment or polarization dictates the performance of dopants, impurities, interfaces, alloys and compounds. • Interface bond energy, energy density, and atomic cohesive energy can be determined using XPS and our BOLS theory. • Presents a new and reliable method for catalyst design and identification. • Entrapment makes CuPd a p-type catalyst and polarization makes AgPd an n-type catalyst. - Abstract: We present an approach for quantifying the heterogeneous interface bond energy using X-ray photoelectron spectroscopy (XPS). Firstly, from analyzing the XPS core-level shifts of the elemental surfaces we obtained the energy levels of an isolated atom and their bulk shifts for the constituent elements as references; then we measured the energy shifts of the specific energy levels upon interface alloy formation. Subtracting the referential spectrum from that collected from the alloy, we can distil the interface effect on the binding energy. Calibrated against the energy levels and their bulk shifts derived from the elemental surfaces, we can derive the bond energy, energy density, atomic cohesive energy, and free energy in the interface region. This approach has enabled us to clarify the dominance of quantum entrapment at the CuPd interface and the dominance of polarization at the AgPd and BeW interfaces as the origin of the interface energy change. The developed approach not only enhances the power of XPS but also enables the quantification of the interface energy at the atomic scale, which has long been a challenge.

  20. Advances in type-2 fuzzy sets and systems theory and applications

    CERN Document Server

    Mendel, Jerry; Tahayori, Hooman

    2013-01-01

    This book explores recent developments in the theoretical foundations and novel applications of general and interval type-2 fuzzy sets and systems, including: algebraic properties of type-2 fuzzy sets, geometric-based definition of type-2 fuzzy set operators, generalizations of the continuous KM algorithm, adaptiveness and novelty of interval type-2 fuzzy logic controllers, relations between conceptual spaces and type-2 fuzzy sets, type-2 fuzzy logic systems versus perceptual computers; modeling human perception of real world concepts with type-2 fuzzy sets, different methods for generating membership functions of interval and general type-2 fuzzy sets, and applications of interval type-2 fuzzy sets to control, machine tooling, image processing and diet.  The applications demonstrate the appropriateness of using type-2 fuzzy sets and systems in real world problems that are characterized by different degrees of uncertainty.

  1. Bianchi Type-II String Cosmological Model with Magnetic Field in Scalar-tensor Theory of Gravitation

    Science.gov (United States)

    Sharma, N. K.; Singh, J. K.

    2015-03-01

    The spatially homogeneous and totally anisotropic Bianchi type-II cosmological solutions of massive strings have been investigated in the presence of a magnetic field in the framework of the scalar-tensor theory of gravitation formulated by Saez and Ballester (Phys. Lett. A 113:467, 1986). With the help of the special law of variation for Hubble's parameter proposed by Berman (Nuovo Cimento B 74:182, 1983), a string cosmological model is obtained in this theory. Some physical and kinematical properties of the model are also discussed.

  2. Theory of flux cutting and flux transport at the critical current of a type-II superconducting cylindrical wire

    International Nuclear Information System (INIS)

    I introduce a critical-state theory incorporating both flux cutting and flux transport to calculate the magnetic-field and current-density distributions inside a type-II superconducting cylinder at its critical current in a longitudinal applied magnetic field. The theory is an extension of the elliptic critical-state model introduced by Romero-Salazar and Perez-Rodriguez. The vortex dynamics depend in detail on two nonlinear effective resistivities for flux cutting (ρ∥) and flux flow (ρ⊥), and their ratio r = ρ∥/ρ⊥. When r < 1, there exists a critical angle αc that makes the vortex arc unstable.

  3. Quantification of Human T-lymphotropic virus type I (HTLV-I) provirus load in a rural West African population: no enhancement of human immunodeficiency virus type 2 pathogenesis, but HTLV-I provirus load relates to mortality

    DEFF Research Database (Denmark)

    Ariyoshi, K; Berry, N

    2003-01-01

    Human T-lymphotropic virus type I (HTLV-I) provirus load was examined in a cohort of a population in Guinea-Bissau among whom human immunodeficiency virus (HIV) type 2 is endemic. Geometric mean of HIV-2 RNA load among HTLV-I-coinfected subjects was significantly lower than that in subjects infected with HIV-2 alone (212 vs. 724 copies/mL; P=.02). Adjusted for age, sex, and HIV status, the risk of death increased with HTLV-I provirus load; mortality hazard ratio was 1.59 for each log10 increase in HTLV-I provirus copies (P=.038). There is no enhancing effect of HTLV-I coinfection on HIV-2 disease, but high HTLV-I provirus loads may contribute to mortality.

  4. Inflation and Singularity of a Bianchi Type-VII0 Universe with a Dirac Field in the Einstein—Cartan Theory

    International Nuclear Information System (INIS)

    We discuss Bianchi type-VII0 cosmology with a Dirac field in the Einstein-Cartan (E-C) theory and obtain the equations of the Dirac and gravitational fields in the E-C theory. A Bianchi type-VII0 inflationary solution is found. When (3/16)S² − σ² > 0, the Universe may avoid singularity. (geophysics, astronomy, and astrophysics)

  5. Inflation and Singularity of a Bianchi Type-VII0 Universe with a Dirac Field in the Einstein—Cartan Theory

    Science.gov (United States)

    Huang, Zeng-Guang; Fang, Wei; Lu, Hui-Qing

    2011-08-01

    We discuss Bianchi type-VII0 cosmology with a Dirac field in the Einstein-Cartan (E-C) theory and obtain the equations of the Dirac and gravitational fields in the E-C theory. A Bianchi type-VII0 inflationary solution is found. When (3/16)S² − σ² > 0, the Universe may avoid singularity.

  6. In situ wave phenomena in the upstream and downstream regions of interplanetary shocks: Implications for type 2 burst theories

    Science.gov (United States)

    Thejappa, G.; MacDowall, R. J.; Vinas, A. F.

    1997-01-01

    The results are presented of in situ waves observed by the Ulysses unified radio and plasma wave experiment (URAP) in the upstream and downstream regions of a large number of interplanetary shocks. The Langmuir waves, which are the most essential ingredient for type 2 radio emission, are observed only in the upstream regions of a limited number of shocks. On the other hand, ion-acoustic-like waves (0.5 to 5 kHz) are observed near most of the interplanetary shocks. Implications of these observations for electron acceleration mechanisms at collisionless shocks and for type 2 burst theories are presented.

  7. Stringy Unification of Type IIA and IIB Supergravities under N=2 D=10 Supersymmetric Double Field Theory

    OpenAIRE

    Jeon, Imtak; Lee, Kanghoon; Park, Jeong-hyuck; Suh, Yoonji

    2012-01-01

    To the full order in fermions, we construct D=10 type II supersymmetric double field theory. We spell out the precise N=2 supersymmetry transformation rules for the 32 supercharges. The constructed action unifies type IIA and IIB supergravities in a manifestly covariant manner with respect to O(10,10) T-duality and a pair of local Lorentz groups, Spin(1,9) × Spin(9,1), besides the usual general covariance of supergravities or the generalized diffeomorphism. While the theo...

  8. Bianchi Type II, VIII & IX Cosmological Model with Magnetized Anisotropic Dark Energy in Brans-Dicke Theory of Gravitation

    Science.gov (United States)

    Wankhade, K. S.; Sancheti, M. M.

    2014-08-01

    In the present paper we study Bianchi type II, VIII & IX space-times in the presence of magnetized anisotropic dark energy in the Brans-Dicke theory of gravitation. Exact solutions of the field equations, under an assumption on the anisotropy of the fluid, are obtained for exponential and power-law expansions. The obtained models approach isotropy asymptotically at large values of t. Some physical properties of the models are discussed.

  9. Generalized coorbit space theory and inhomogeneous function spaces of Besov-Lizorkin-Triebel type

    CERN Document Server

    Rauhut, Holger

    2010-01-01

    Coorbit space theory is an abstract approach to function spaces and their atomic decompositions. The original theory, developed by Feichtinger and Gröchenig in the late 1980s, heavily uses integrable representations of locally compact groups. Their theory covers, in particular, homogeneous Besov-Lizorkin-Triebel spaces, modulation spaces, Bergman spaces, and the recent shearlet spaces. However, inhomogeneous Besov-Lizorkin-Triebel spaces cannot be covered by their group-theoretical approach. Later it was recognized by Fornasier and the first named author that one may replace coherent states related to the group representation by more general abstract continuous frames. In the first part of the present paper we significantly extend this abstract generalized coorbit space theory to treat a wider variety of coorbit spaces. A unified approach towards atomic decompositions and Banach frames with new results for general coorbit spaces is presented. In the second part we apply the abstract setting to a specific ...

  10. Generalized canonical formalism and S matrix of theories with constraints of general type

    International Nuclear Information System (INIS)

    The method of canonical quantization of systems with first- and second-class constraints of arbitrary rank is discussed. The effectiveness of the method is demonstrated using Yang-Mills and gravitational fields as examples. A correct expression for the S-matrix of theories quadratic in momenta, within the framework of canonical gauges including ghost fields, is derived. General quantization is performed and the S-matrix in configuration space is obtained for relativistic membrane theories, which generalize string theories to extended spatial objects. It is shown that the membrane theory in (n+1)-dimensional space is a system with rank-n constraints

  11. Generalized canonical formalism and the S-matrix of theories with constraints of the general type

    International Nuclear Information System (INIS)

    A canonical quantization method is given for systems with first- and second-class constraints of arbitrary rank. The effectiveness of the method is demonstrated using sample Yang-Mills and gravitational fields. A correct expression is derived for the S-matrix of theories quadratic in momenta within the scope of canonical gauges, including ghost fields. Generalized quantization is performed and the S-matrix is derived in configuration space for theories of relativistic membranes, which generalize theories of strings to the case of an extended spatial implementation. It is demonstrated that the theory of membranes in (n+1)-dimensional space is a system with rank-n constraints

  12. Quantification and Negation in Event Semantics

    Directory of Open Access Journals (Sweden)

    Lucas Champollion

    2010-12-01

    Full Text Available Recently, it has been claimed that event semantics does not go well together with quantification, especially if one rejects syntactic, LF-based approaches to quantifier scope. This paper shows that such fears are unfounded, by presenting a simple, variable-free framework which combines a Neo-Davidsonian event semantics with a type-shifting based account of quantifier scope. The main innovation is that the event variable is bound inside the verbal denotation, rather than at sentence level by existential closure. Quantifiers can then be interpreted in situ. The resulting framework combines the strengths of event semantics and type-shifting accounts of quantifiers and thus does not force the semanticist to posit either a default underlying word order or a syntactic LF-style level. It is therefore well suited for applications to languages where word order is free and quantifier scope is determined by surface order. As an additional benefit, the system leads to a straightforward account of negation, which has also been claimed to be problematic for event-based frameworks.
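    A toy model-theoretic sketch of the paper's central move, with an invented three-individual domain: the event variable is existentially closed inside the verb's denotation, so generalized quantifiers (and negation) apply to the verb in situ, with no LF movement and no sentence-level existential closure.

```python
# Toy finite model: events carry a kind and an agent (all data invented).
events = [
    {"kind": "run", "agent": "rex"},
    {"kind": "run", "agent": "fido"},
    {"kind": "bark", "agent": "rex"},
]
dogs = {"rex", "fido", "spot"}

# The event variable is bound *inside* the verb denotation (lexical
# existential closure), so a quantified subject can apply in situ.
run = lambda x: any(e["kind"] == "run" and e["agent"] == x for e in events)

# Generalized quantifiers as functions from predicates to truth values.
every_dog = lambda p: all(p(d) for d in dogs)
some_dog = lambda p: any(p(d) for d in dogs)
neg = lambda p: (lambda x: not p(x))   # predicate negation, also in situ

print(some_dog(run))        # True: rex and fido run
print(every_dog(run))       # False: spot does not run
print(every_dog(neg(run)))  # False: some dog runs
```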

  13. Fitting the luminosity data from type Ia supernovae in the frame of the Cosmic Defect theory

    CERN Document Server

    Tartaglia, A; Cardone, V; Radicella, N

    2008-01-01

    The Cosmic Defect (CD) theory is reviewed and used to fit the data for the accelerated expansion of the universe, obtained from the apparent luminosities of 192 type Ia supernovae (SNe Ia). The fit from CD is compared with the one obtained by means of $\Lambda$CDM. The results from both theories are in good agreement and the fits are satisfactory. The correspondence between the two approaches is discussed and interpreted.

  14. Identification of a novel V1-type AVP receptor based on the molecular recognition theory.

    OpenAIRE

    Herrera, V. L.; Ruiz-Opazo, N.

    2001-01-01

    BACKGROUND: The molecular recognition theory predicts that binding domains of peptide hormones and their corresponding receptor binding domains evolved from complementary strands of genomic DNA, and that a process of selective evolutionary mutational events within these primordial domains gave rise to the high affinity and high specificity of peptide hormone-receptor interactions observed today in different peptide hormone-receptor systems. Moreover, this theory has been broadened as a genera...

  15. Path and Path Deviation Equations in Kaluza-Klein Type Theories

    OpenAIRE

    Kahil, M. E.

    2005-01-01

    Path and path deviation equations for charged, spinning and spinning charged objects in different versions of Kaluza-Klein (KK) theory using a modified Bazanski Lagrangian have been derived. The significance of motion in five dimensions, especially for a charged spinning object, has been examined. We have also extended the modified Bazanski approach to derive the path and path deviation equations of a test particle in a version of non-symmetric KK theory.

  16. Holographic-Type Gravitation via Non-Differentiability in Weyl-Dirac Theory

    Directory of Open Access Journals (Sweden)

    Mihai Pricop

    2013-08-01

    Full Text Available In the Weyl-Dirac non-relativistic hydrodynamics approach, the non-linear interaction between the sub-quantum level and the particle gives non-differentiable properties to the space. Therefore, the movement trajectories are fractal curves, the dynamics are described by a complex speed field and the equation of motion is identified with the geodesics of a fractal space, which corresponds to a non-linear Schrödinger equation. The real part of the complex speed field assures, through a quantification condition, the compatibility between the Weyl-Dirac non-relativistic hydrodynamic model and wave mechanics. The mean value of the fractal speed potential, identified with the Shannon informational energy, specifies, by a maximization principle, that the sub-quantum level “stores” and “transfers” the informational energy in the form of force. The wave-particle duality is achieved by means of cnoidal oscillation modes of the state density, the dominance of one of the characters, wave or particle, being put into correspondence with two flow regimes (non-quasi-autonomous and quasi-autonomous) of the Weyl-Dirac fluid. All these show a direct connection between the fractal structure of space and the holographic principle.

  17. Accident sequence quantification with KIRAP

    International Nuclear Information System (INIS)

    The tasks of probabilistic safety assessment (PSA) consist of the identification of initiating events, the construction of an event tree for each initiating event, the construction of fault trees for the event tree logic, the analysis of reliability data and, finally, accident sequence quantification. In a PSA, accident sequence quantification calculates the core damage frequency and supports importance and uncertainty analyses. Accident sequence quantification requires an understanding of the whole PSA model, because it combines all event tree and fault tree models, and it requires an efficient computer code, because the computation takes a long time. The Advanced Research Group of the Korea Atomic Energy Research Institute (KAERI) has developed the PSA workstation KIRAP (Korea Integrated Reliability Analysis Code Package) for PSA work. This report describes the procedures for performing accident sequence quantification, the use of KIRAP's cut set generator, and the method for performing accident sequence quantification with KIRAP. (author). 6 refs
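    KIRAP's actual interfaces are not described in this record; as a generic illustration of the quantification step such codes automate, the sketch below applies the first-order (rare-event) approximation to a set of invented minimal cut sets. All event names and probabilities are hypothetical:

      # Core damage frequency via the rare-event approximation:
      # CDF ~ sum over minimal cut sets of the product of event probabilities.
      basic_events = {
          "LOSP": 1.0e-2,  # initiating event frequency (/yr), illustrative
          "DG_A": 3.0e-2,  # diesel generator A fails to start
          "DG_B": 3.0e-2,  # diesel generator B fails to start
          "HPI":  1.0e-3,  # high-pressure injection fails
      }
      minimal_cut_sets = [["LOSP", "DG_A", "DG_B"], ["LOSP", "HPI"]]

      def cut_set_frequency(cut_set):
          f = 1.0
          for ev in cut_set:
              f *= basic_events[ev]
          return f

      cdf = sum(cut_set_frequency(cs) for cs in minimal_cut_sets)

      def fussell_vesely(event):
          # Importance: fraction of the CDF from cut sets containing the event.
          return sum(cut_set_frequency(cs)
                     for cs in minimal_cut_sets if event in cs) / cdf

      print(f"CDF ~ {cdf:.2e} /yr, FV(HPI) = {fussell_vesely('HPI'):.2f}")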

  18. Introduction to string theory

    International Nuclear Information System (INIS)

    Open and closed bosonic string theories are discussed in a classical framework, highlighting the physical interpretation of conformal symmetry and the Virasoro (1970) algebra. The quantification of bosonic strings is carried out within the old covariant operator formalism. This method is much less elegant and powerful than BRST quantification, but it quickly reveals the physical content of the quantum theory. The generalization to theories with fermionic degrees of freedom is introduced: the Neveu-Schwarz (1971) and Ramond (1971) models, their reduced (two-dimensional) supersymmetry, and the Gliozzi, Scherk and Olive (1977) projection, which leads to a supersymmetric theory in the usual sense of the term.
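    For orientation, the Virasoro algebra mentioned above satisfies, in the quantum theory, the standard commutation relations with central charge c (a textbook relation, not a result specific to this report):

      [L_m, L_n] = (m - n) L_{m+n} + \frac{c}{12}\, m\,(m^2 - 1)\, \delta_{m+n,0}

    The central term is absent at the classical level; it arises as an anomaly upon quantification, and the requirement that it cancel against the ghost contribution fixes the critical dimension D = 26 of the bosonic string.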

  19. Toward a Two-Factor Theory of One Type of Mathematics Disabilities.

    Science.gov (United States)

    Robinson, Carol S.; Menchetti, Bruce M.; Torgesen, Joseph K.

    2002-01-01

    This article proposes a two-factor theory of mathematics disabilities based on the premise that weak cognitive representations lead to poorer retrieval of information from long-term memory. Comparison of children with math disabilities alone (MD) and those with both math and reading disabilities (MD/RD) suggests that weak phonological processing…

  20. Assessment of an improved multiaxial strength theory based on creep-rupture data for type 316 stainless steel

    Science.gov (United States)

    Huddleston, R. L.

    1992-03-01

    A new multiaxial strength theory incorporating three independent stress parameters was developed and reported by the author in 1984. It was formally incorporated into ASME Code Case N47-29 in 1990. In the earlier paper, the new model was shown to provide significantly more accurate stress-rupture life predictions than the classical theories of von Mises, Tresca, and Rankine for type 304 stainless steel tested at 593°C under different biaxial stress states. Further assessments for other alloys are showing similar results. The current paper provides additional results for type 316 stainless steel specimens tested at 600°C under tension-tension and tension-compression stress states and shows a 2 to 3 order of magnitude reduction in the scatter of predicted versus observed lives. A key feature of the new theory, which incorporates the maximum deviatoric stress, the first invariant of the stress tensor, and the second invariant of the deviatoric stress tensor, is its ability to distinguish between life under tensile versus compressive stress states.

  1. Algebraic geometry approach in gravity theory and new relations between the parameters in type I low-energy string theory action in theories with extra dimensions

    Energy Technology Data Exchange (ETDEWEB)

    Dimitrov, B.G. [Bogoliubov Laboratory of Theoretical Physics, Joint Institute for Nuclear Research, Dubna (Russian Federation)

    2010-04-15

    On the basis of the distinction between covariant and contravariant metric tensor components, a new (multivariable) cubic algebraic equation for reparametrization invariance of the gravitational Lagrangian has been derived and parametrized with complicated non-elliptic functions, depending on the (elliptic) Weierstrass function and its derivative. This is different from standard algebraic geometry, where only two-dimensional cubic equations are parametrized with elliptic functions, and not multivariable ones. Physical applications of the approach have been considered in reference to theories with extra dimensions. The so-called "length function" l(x) has been introduced and found as a solution of quasilinear partial differential equations for the two different cases of "compactification + rescaling" and "rescaling + compactification". New physically important relations (inequalities) between the parameters in the action are established, which cannot be derived in the case l = 1 of the standard gravitational theory, but should be fulfilled also for that case. (Abstract Copyright [2010], Wiley Periodicals, Inc.)

  2. Search for different links with the same Jones' type polynomials: Ideas from graph theory and statistical mechanics

    CERN Document Server

    Przytycki, J H

    1995-01-01

    We describe in this talk three methods of constructing different links with the same Jones type invariant. All three can be thought of as generalizations of mutation. The first combines the satellite construction with mutation. The second uses the notion of a rotant, taken from graph theory; the third, invented by Jones, transplants into knot theory the idea of the Yang-Baxter equation with a spectral parameter (an idea employed by Baxter in the theory of solvable models in statistical mechanics). We extend the Jones result and relate it to Traczyk's work on rotors of links. We also show further applications of the Jones idea, e.g. to 3-string links in the solid torus. We stress the fact that ideas coming from various areas of mathematics (and theoretical physics) have been fruitfully used in knot theory, and vice versa. (This is the detailed version of the talk given at the Banach Center Colloquium, Warsaw, Poland, March 24, 1994: ``W poszukiwaniu nietrywialnego wezla z trywialnym wielomianem Jonesa: grafy i me...

  3. Theory of the Carrier Fermi Energy and Density of States of n- and p-TYPE SnTe

    Science.gov (United States)

    Das, R. K.; Mohapatro, S.

    In the present work we theoretically develop a k·p model to calculate the carrier electronic structure for both n- and p-type SnTe. Here p is the momentum operator in the presence of the spin-orbit interaction. The work is an extension of the theory developed earlier for n- and p-PbTe by one of the authors to evaluate the Fermi energy and the density of states (DOS). We consider a six-level energy basis for SnTe, as proposed by Bernick and Kleinman. One set of calculations was done by diagonalizing the k·p Hamiltonian matrix for the band-edge states and treating the far bands using perturbation theory. In the second set we have rediagonalized the k·p Hamiltonian matrix for the band-edge states, treating the first diagonalization as the basis. The far bands are, as usual, included through perturbation. We have compared the results of both sets. Results obtained for n- and p-type SnTe are also compared with those of n- and p-type PbTe. The similarities and contrasts are discussed. An indirect comparison with the DOS of metallic tin suggests that the calculations are fairly reasonable. The results are also compared with some recent results for SnTe.

  4. Cyclic uniaxial and biaxial hardening of type 304 stainless steel modeled by the viscoplasticity theory based on overstress

    Science.gov (United States)

    Yao, David; Krempl, Erhard

    1988-01-01

    The isotropic theory of viscoplasticity based on overstress does not use a yield surface or a loading and unloading criterion. The inelastic strain rate depends on the overstress, the difference between the stress and the equilibrium stress, and is assumed to be rate dependent. Special attention is paid to the modeling of elastic regions. For the modeling of cyclic hardening, such as observed in annealed Type 304 stainless steel, an additional growth law for a scalar quantity, which represents the rate-independent asymptotic value of the equilibrium stress, is added. It is made to increase with inelastic deformation using a new scalar measure which differentiates between nonproportional and proportional loading. The theory is applied to correlate uniaxial data under two-step amplitude loading, including the effect of further hardening at the high amplitude, and under proportional and nonproportional cyclic loadings. Results are compared with corresponding experiments.

  5. KK-monopoles and G-structures in M-theory/type IIA reductions

    Science.gov (United States)

    Danielsson, Ulf; Dibitetto, Giuseppe; Guarino, Adolfo

    2015-02-01

    We argue that M-theory/massive IIA backgrounds including KK-monopoles are suitably described in the language of G-structures and their intrinsic torsion. To this end, we study classes of minimal supergravity models that admit an interpretation as twisted reductions in which the twist parameters are not restricted to satisfy the Jacobi constraints ω ω = 0 required by an ordinary Scherk-Schwarz reduction. We first derive the correspondence between four-dimensional data and torsion classes of the internal space and, then, check the one-to-one correspondence between higher-dimensional and four-dimensional equations of motion. Remarkably, the whole construction holds regardless of the Jacobi constraints, thus shedding light upon the string/M-theory interpretation of (smeared) KK-monopoles.

  6. KK-monopoles and G-structures in M-theory/type IIA reductions

    CERN Document Server

    Danielsson, Ulf; Guarino, Adolfo

    2014-01-01

    We argue that M-theory/massive IIA backgrounds including KK-monopoles are suitably described in the language of G-structures and their intrinsic torsion. To this end, we study classes of minimal supergravity models that admit an interpretation as twisted reductions in which the twist parameters are not restricted to satisfy the Jacobi constraints $\omega\,\omega=0$ required by an ordinary Scherk-Schwarz reduction. We first derive the correspondence between four-dimensional data and torsion classes of the internal space and, then, check the one-to-one correspondence between higher-dimensional and four-dimensional equations of motion. Remarkably, the whole construction holds regardless of the Jacobi constraints, thus shedding light upon the string/M-theory interpretation of (smeared) KK-monopoles.

  7. BPS-type equations in the non-anticommutative N=2 supersymmetric U(1) gauge theory

    International Nuclear Information System (INIS)

    We investigate the equations of motion in the four-dimensional non-anticommutative N=2 supersymmetric U(1) gauge field theory, in the search for BPS configurations. BPS-like equations, generalizing the Abelian (anti)self-duality conditions, are proposed. We prove the full solvability of our BPS-like equations, as well as their consistency with the equations of motion. Certain restrictions on the allowed scalar field values are also found. Surviving supersymmetry is briefly discussed as well.

  8. The double Mellin-Barnes type integrals and their applications to convolution theory

    CERN Document Server

    Hai, Nguyen Thanh

    1992-01-01

    This book presents new results in the theory of the double Mellin-Barnes integrals, popularly known as the general H-function of two variables. A general integral convolution is constructed by the authors; it contains the Laplace convolution as a particular case and possesses a factorization property for the one-dimensional H-transform. Many examples of convolutions for classical integral transforms are obtained, and they can be applied to the evaluation of series and integrals.

  9. Maier-Saupe-type theory of ferroelectric nanoparticles in nematic liquid crystals

    OpenAIRE

    Lopatina, Lena M.; Selinger, Jonathan V.

    2011-01-01

    Several experiments have reported that ferroelectric nanoparticles have drastic effects on nematic liquid crystals--increasing the isotropic-nematic transition temperature by about 5 K, and greatly increasing the sensitivity to applied electric fields. In a recent paper [L. M. Lopatina and J. V. Selinger, Phys. Rev. Lett. 102, 197802 (2009)], we modeled these effects through a Landau theory, based on coupled orientational order parameters for the liquid crystal and the nanop...

  10. KK-monopoles and G-structures in M-theory/type IIA reductions

    OpenAIRE

    Danielsson, Ulf; Dibitetto, Giuseppe; Guarino, Adolfo

    2014-01-01

    We argue that M-theory/massive IIA backgrounds including KK-monopoles are suitably described in the language of G-structures and their intrinsic torsion. To this end, we study classes of minimal supergravity models that admit an interpretation as twisted reductions in which the twist parameters are not restricted to satisfy the Jacobi constraints $\omega\,\omega=0$ required by an ordinary Scherk-Schwarz reduction. We first derive the correspondence between four-dimensional ...

  11. Quantum mechanical analysis on faujasite-type molecular sieves by using fermi dirac statistics and quantum theory of dielectricity

    International Nuclear Information System (INIS)

    We studied Faujasite type molecular sieves by using Fermi-Dirac statistics and the quantum theory of dielectricity. We developed an empirical relationship for the quantum capacitance, which follows an inverse Gaussian profile in the frequency range of 66 Hz - 3 MHz. We calculated the quantum capacitance, sample crystal momentum, charge quantization and quantized energy of Faujasite type molecular sieves in the frequency range of 0.1 Hz - 10^4 MHz. Our calculations for the diameter of the sodalite and super-cages of Faujasite type molecular sieves are in agreement with the experimental results reported in this manuscript. We also calculated the quantum polarizability, quantized molecular field, orientational polarizability and deformation polarizability by using the experimental results of Ligia Frunza et al. The phonons are overdamped in the frequency range 0.1 Hz - 10 kHz and become a source for producing cages in the Faujasite type molecular sieves. Ion exchange recovery processes occur due to overdamped phonon excitations in Faujasite type molecular sieves and with increasing temperature. (author)

  12. Tritium quantification in metallic samples

    International Nuclear Information System (INIS)

    ITER radwastes are generated from various facilities, namely the Tokamak, the hot cell building, the RWB, and the tritium plant building, during the operation and maintenance of ITER. The treatment systems cover long half-life intermediate-level radwastes (Type B radwastes, generated in the Tokamak), radwastes containing pure tritium (from the tritium plant and fuel), and low-level solid and liquid radwastes (Type A radwastes). The radwastes are required to be analyzed for radionuclide inventory before they are stored at the ITER hot cell facilities for 20 years, according to ITER policy. In particular, tritium must be analyzed, because its concentration in the waste is potentially higher than the Type B tritium criterion. There are several destructive and non-destructive methods for the assay of tritium in metallic samples. A nondestructive method generally adopts a high-sensitivity photographic film along with a β-particle detection technique. Other non-destructive ones are the radiography (RG) technique, which applies a magnetic microscope, and radioluminography (RLG), based on photostimulated luminescence (PSL). Among destructive analysis methods, electrochemical layer-by-layer etching (ELLE) and chemical acid dissolution or chemical acid leaching (CAD or CAL) are known as the most common techniques. The CAD or CAL technique, as a destructive method, has the merit of accurate analytical results and a convenient test procedure compared to the above nondestructive methods. Accordingly, the CAL method is considered the most suitable for tritium quantification. The CAL method has already been developed in the Nuclear Chemistry Research Division (NCRD). However, some metallic samples need to be analyzed to improve the analytical reliability of the CAL method. Considering the tritium concentration in ITER radwastes, radwastes from a CANDU type NPP were selected as a proper sample, since an ITER sample has a radioactivity of the order of 10^-9 Bq/g. Compared to samples from PWR and BWR NPPs, the tritium concentration in the coolant of the Korean CANDU NPP has been reported to be 0.1 MBq/m^3 during 2000-2006, whereas that of the Korean PWR NPP was about 0.003 MBq/m^3 in the same time frame. The samples targeted in this project for tritium measurement are pressure tubes of the CANDU type Korean NPP, irradiated by nuclear fuel, which had been used to support the nuclear fuel bundles. Although the CANDU samples are difficult to compare directly to ITER metallic radwastes, they were prepared to evaluate the applicability of the CAL method.

  13. Tritium quantification in metallic samples

    Energy Technology Data Exchange (ETDEWEB)

    Ahn, Hong Joo; Yun, Myung Hee; Park, Jong Ho; Yeon, Jei Won; Song, Kyu Seok [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2011-10-15

    ITER radwastes are generated from various facilities, namely the Tokamak, the hot cell building, the RWB, and the tritium plant building, during the operation and maintenance of ITER. The treatment systems cover long half-life intermediate-level radwastes (Type B radwastes, generated in the Tokamak), radwastes containing pure tritium (from the tritium plant and fuel), and low-level solid and liquid radwastes (Type A radwastes). The radwastes are required to be analyzed for radionuclide inventory before they are stored at the ITER hot cell facilities for 20 years, according to ITER policy. In particular, tritium must be analyzed, because its concentration in the waste is potentially higher than the Type B tritium criterion. There are several destructive and non-destructive methods for the assay of tritium in metallic samples. A nondestructive method generally adopts a high-sensitivity photographic film along with a β-particle detection technique. Other non-destructive ones are the radiography (RG) technique, which applies a magnetic microscope, and radioluminography (RLG), based on photostimulated luminescence (PSL). Among destructive analysis methods, electrochemical layer-by-layer etching (ELLE) and chemical acid dissolution or chemical acid leaching (CAD or CAL) are known as the most common techniques. The CAD or CAL technique, as a destructive method, has the merit of accurate analytical results and a convenient test procedure compared to the above nondestructive methods. Accordingly, the CAL method is considered the most suitable for tritium quantification. The CAL method has already been developed in the Nuclear Chemistry Research Division (NCRD). However, some metallic samples need to be analyzed to improve the analytical reliability of the CAL method. Considering the tritium concentration in ITER radwastes, radwastes from a CANDU type NPP were selected as a proper sample, since an ITER sample has a radioactivity of the order of 10^-9 Bq/g. Compared to samples from PWR and BWR NPPs, the tritium concentration in the coolant of the Korean CANDU NPP has been reported to be 0.1 MBq/m^3 during 2000-2006, whereas that of the Korean PWR NPP was about 0.003 MBq/m^3 in the same time frame. The samples targeted in this project for tritium measurement are pressure tubes of the CANDU type Korean NPP, irradiated by nuclear fuel, which had been used to support the nuclear fuel bundles. Although the CANDU samples are difficult to compare directly to ITER metallic radwastes, they were prepared to evaluate the applicability of the CAL method.

  14. Wrappers, Aspects, Quantification and Events

    Science.gov (United States)

    Filman, Robert E.

    2005-01-01

    Talk overview: Object Infrastructure Framework (OIF). A system development to simplify building distributed applications by allowing independent implementation of multiple concerns. Essence and state of AOP. Trinity. Quantification over events. Current work on a generalized AOP technology.

  15. Calculation of Fayet–Iliopoulos D-term in type I string theory revisited: T^6/Z_3 orbifold case

    International Nuclear Information System (INIS)

    The string one-loop computation of the Fayet–Iliopoulos D-term in type I string theory in the case of T^6/Z_3 orbifold compactification associated with annulus (planar) and the Möbius strip string worldsheet diagrams is reexamined. The mass extracted from the sum of these amplitudes through a limiting procedure is found to be non-vanishing, which is contrary to the earlier computation. The sum can be made finite by a rescaling of the modular parameter in the closed string channel

  16. The classical Yang-Baxter equation and the associated Yangian symmetry of gauged WZW-type theories

    CERN Document Server

    Itsios, Georgios; Siampos, Konstantinos; Torrielli, Alessandro

    2014-01-01

    We construct the Lax-pair, the classical monodromy matrix and the corresponding solution of the Yang-Baxter equation, for a class of integrable gauged WZW-type theories interpolating between the WZW model and the non-Abelian T-dual of the principal chiral model for a simple group. We derive in full detail the Yangian algebra using two independent methods: by computing the algebra of the non-local charges and alternatively through an expansion of the Maillet brackets for the monodromy matrix. As a byproduct, we also provide a detailed general proof of the Serre relations for the Yangian symmetry.

  17. Conformally reduced WZNW theory, new extended chiral algebras and their associated Toda type integrable systems: Pt. 1

    International Nuclear Information System (INIS)

    The authors propose and analyse a large class of conformal reductions Cons[g(H,d)] of WZNW theory based on the integral gradations of the underlying Lie algebra g. The W-bases of the associated W-algebras W[g(H,d)] are constructed under the generalized Drinfeld-Sokolov gauge, which the authors call the O'Raifeartaigh gauge, of the constrained Kac-Moody currents, and the equations of motion of the extended Toda type integrable systems corresponding to these W-algebras are derived

  18. Calculation of Fayet-Iliopoulos D-term in type I string theory revisited: T^6/Z_3 orbifold case

    CERN Document Server

    Itoyama, H

    2013-01-01

    The string one-loop computation of the Fayet-Iliopoulos D-term in type I string theory in the case of T^6/Z_3 orbifold compactification associated with annulus (planar) and Möbius strip string worldsheet diagrams is reexamined. The mass extracted from the sum of these amplitudes through a limiting procedure is found to be non-vanishing, which is contrary to the earlier computation. The sum can be made finite by a rescaling of the modular parameter in the closed string channel.

  19. Training load quantification in triathlon

    OpenAIRE

    ROBERTO CEJUELA ANTA; JONATHAN ESTEVE-LANAO

    2011-01-01

    There are different indices of training stress, of varying complexity, for quantifying training load. Examples include the training impulse (TRIMP), the session RPE, Lucia's TRIMP and the summated zone score. But triathlon, a combined sport in which there are interactions between the different segments, complicates the quantification of training. The aim of this paper is to review current methods of quantification and to propose a scale to quantify the training load in triat...
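    As background to the indices named above, a sketch of two of the standard loads (Banister's TRIMP and Foster's session RPE) in Python; the heart-rate numbers in the example are invented, and simply summing the three segments is exactly the kind of interaction-blind quantification the paper argues triathlon outgrows:

      import math

      def banister_trimp(duration_min, hr_ex, hr_rest, hr_max, female=False):
          # Duration weighted by the heart-rate reserve fraction and an
          # exponential intensity factor (gender-specific constant).
          hrr = (hr_ex - hr_rest) / (hr_max - hr_rest)
          b = 1.67 if female else 1.92
          return duration_min * hrr * 0.64 * math.exp(b * hrr)

      def session_rpe(duration_min, rpe):
          # Foster's session-RPE load: perceived exertion (0-10) x minutes.
          return duration_min * rpe

      swim = banister_trimp(30, 150, 50, 190)
      bike = banister_trimp(90, 140, 50, 190)
      run = banister_trimp(40, 160, 50, 190)
      print(f"naive summed TRIMP = {swim + bike + run:.0f}")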

  20. Chern-Simons and Born-Infeld gravity theories and Maxwell algebras type

    International Nuclear Information System (INIS)

    Recently it was shown that standard odd- and even-dimensional general relativity can be obtained from a (2n + 1)-dimensional Chern-Simons Lagrangian invariant under the B_{2n+1} algebra and from a (2n)-dimensional Born-Infeld Lagrangian invariant under a subalgebra L^{B_{2n+1}}, respectively. Very recently, it was shown that the generalized Inönü-Wigner contraction of the generalized AdS-Maxwell algebras provides Maxwell algebras of types M_m, which correspond to the so-called B_m Lie algebras. In this article we report on a simple model that suggests a mechanism by which standard odd-dimensional general relativity may emerge as the weak coupling constant limit of a (2p + 1)-dimensional Chern-Simons Lagrangian invariant under the Maxwell algebra type M_{2m+1}, if and only if m ≥ p. Similarly, we show that standard even-dimensional general relativity emerges as the weak coupling constant limit of a (2p)-dimensional Born-Infeld type Lagrangian invariant under a subalgebra L^{M_{2m}} of the Maxwell algebra type, if and only if m ≥ p. It is shown that when m < p this is not possible for a (2p + 1)-dimensional Chern-Simons Lagrangian invariant under M_{2m+1} and for a (2p)-dimensional Born-Infeld type Lagrangian invariant under the L^{M_{2m}} algebra. (orig.)

  1. Uniformization method in the theory of the nonlinear Hamiltonian systems of the Vlasov and Hartree types

    International Nuclear Information System (INIS)

    Possibilities of obtaining approximate solutions to equations of the Hartree type from the known solutions of uniformized linear equations are investigated. The nonlinear Hamiltonian systems described by abstract equations of the Vlasov and Hartree type are considered by means of algebraic methods. A new notion of 'uniformization' is introduced, which represents a generalization of the second quantization method for arbitrary Hamiltonian (Lie-Jordan) algebras, in particular for operator algebras in indefinite spaces. A functional calculus of uniformized observables is developed, extending and unifying the calculus of generating functionals of commuting and anticommuting variables for even operators

  2. Non-abelian black holes and catastrophe theory: II. Charged type

    CERN Document Server

    Tachizawa, T; Maeda, K; Torii, T

    1995-01-01

    We reanalyze the gravitating monopole and its black hole solutions in the Einstein-Yang-Mills-Higgs system and we discuss their stabilities from the point of view of catastrophe theory. Although these non-trivial solutions exhibit fine and complicated structures, we find that stability is systematically understood via a swallowtail catastrophe. The Reissner-Nordström trivial solution becomes unstable from the point where the non-trivial monopole black hole appears. We also find that, within a very small parameter range, the specific heat of a monopole black hole changes its sign.

  3. Stability theory of drift-type flute modes in finite-β plasmas

    International Nuclear Information System (INIS)

    The linear theory of flute modes in a finite-β, inhomogeneous, magnetized plasma is developed. The collisionless Vlasov equation is used in a slab geometry, including the effects of a constant gravity field. The magnetic drift mode and the g × B mode are examined, and their stability properties are described. These two 'modes' are shown to be different limits of the general finite-β interchange mode. The wave-particle resonances, between the flute modes and the particles that ∇B-drift perpendicular to the unperturbed magnetic field, are included in a self-consistent manner

  4. The Vilkovisky-De Witt effective action in BF-type topological field theories

    International Nuclear Information System (INIS)

    The one-loop off-shell effective action is reexamined for the case of BF theories in three dimensions. Within the context of the Vilkovisky-DeWitt reparametrization invariant framework, it is shown how the choice of an acceptable field space metric is crucial for obtaining gauge invariant results, even when this metric is field independent. It is found that the phase contribution to the one-loop effective action is proportional to the pure Chern-Simons term. The possible dependence of these results on the choice of field metric is briefly discussed. (orig.)

  5. Weyl Group Multiple Dirichlet Series Type A Combinatorial Theory (AM-175)

    CERN Document Server

    Brubaker, Ben; Friedberg, Solomon

    2011-01-01

    Weyl group multiple Dirichlet series are generalizations of the Riemann zeta function. Like the Riemann zeta function, they are Dirichlet series with analytic continuation and functional equations, having applications to analytic number theory. By contrast, these Weyl group multiple Dirichlet series may be functions of several complex variables and their groups of functional equations may be arbitrary finite Weyl groups. Furthermore, their coefficients are multiplicative up to roots of unity, generalizing the notion of Euler products. This book proves foundational results about these series an

  6. Algebraic Signal Processing Theory: Cooley-Tukey Type Algorithms for DCTs and DSTs

    CERN Document Server

    Pueschel, Markus; Moura, Jose M. F.

    2007-01-01

    This paper presents a systematic methodology based on the algebraic theory of signal processing to classify and derive fast algorithms for linear transforms. Instead of manipulating the entries of transform matrices, our approach derives the algorithms by stepwise decomposition of the associated signal models, or polynomial algebras. This decomposition is based on two generic methods or algebraic principles that generalize the well-known Cooley-Tukey FFT and make the algorithms' derivations concise and transparent. Application to the 16 discrete cosine and sine transforms yields a large class of fast algorithms, many of which have not been found before.
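    As background, the classical radix-2 decimation-in-time recursion that the paper's algebraic derivations generalize (a standard sketch, not the paper's own derivation, which works with polynomial algebras rather than matrix entries):

      import cmath

      def fft(x):
          # Cooley-Tukey radix-2 FFT; len(x) must be a power of two.
          n = len(x)
          if n == 1:
              return list(x)
          even = fft(x[0::2])   # transform of even-indexed samples
          odd = fft(x[1::2])    # transform of odd-indexed samples
          out = [0j] * n
          for k in range(n // 2):
              t = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
              out[k] = even[k] + t
              out[k + n // 2] = even[k] - t
          return out

      print(fft([1, 2, 3, 4, 0, 0, 0, 0]))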

  7. A calculation methodology applied for fuel management in PWR type reactors using first order perturbation theory

    International Nuclear Information System (INIS)

    An attempt has been made to obtain a strategy coherent with the available instruments and one that could be implemented with future developments. A calculation methodology was developed for fuel reload in PWR reactors, which involves cell calculation with the HAMMER-TECHNION code and neutronics calculation with the CITATION code. The management strategy adopted consists of changing fuel element positions at the beginning of each reactor cycle in order to decrease the radial peak factor. Two-dimensional, two-group first-order perturbation theory was used for the mathematical modeling. (L.C.J.A.)
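    In standard notation (a generic statement of the method, not equations reproduced from the report): writing the unperturbed two-group problem as $M\phi_0 = \frac{1}{k_0} F\phi_0$, with $M$ the destruction operator, $F$ the fission production operator and $\phi_0^{\dagger}$ the adjoint flux, the first-order estimate of the reactivity change produced by a fuel shuffle perturbing $M \to M + \delta M$ and $F \to F + \delta F$ is

      \Delta\rho \approx \frac{\left\langle \phi_0^{\dagger},\, \left( \tfrac{1}{k_0}\,\delta F - \delta M \right) \phi_0 \right\rangle}{\left\langle \phi_0^{\dagger},\, F\, \phi_0 \right\rangle}

    so only the flux and adjoint flux of the reference configuration are needed to rank candidate loading patterns without re-solving the eigenvalue problem.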

  8. Investigating Strength and Frequency Effects in Recognition Memory Using Type-2 Signal Detection Theory

    Science.gov (United States)

    Higham, Philip A.; Perfect, Timothy J.; Bruno, Davide

    2009-01-01

    Criterion- versus distribution-shift accounts of frequency and strength effects in recognition memory were investigated with Type-2 signal detection receiver operating characteristic (ROC) analysis, which provides a measure of metacognitive monitoring. Experiment 1 demonstrated a frequency-based mirror effect, with a higher hit rate and lower…
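    A toy numerical sketch of the Type-2 ROC construction used above: hits are high-confidence correct trials, false alarms are high-confidence incorrect trials, and sweeping the confidence criterion traces the curve. The data are invented:

      # Type-2 SDT: does confidence discriminate correct from incorrect trials?
      trials = [  # (correct?, confidence on a 1-4 scale) -- invented data
          (True, 4), (True, 3), (True, 2), (True, 4), (False, 1),
          (False, 2), (True, 1), (False, 3), (True, 4), (False, 1),
      ]

      def type2_roc_point(trials, criterion):
          correct = [c for ok, c in trials if ok]
          wrong = [c for ok, c in trials if not ok]
          hit = sum(c >= criterion for c in correct) / len(correct)
          fa = sum(c >= criterion for c in wrong) / len(wrong)
          return fa, hit

      # One (false alarm, hit) pair per criterion placement.
      print([type2_roc_point(trials, k) for k in (2, 3, 4)])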

  9. Chern-Simons and Born-Infeld gravity theories and Maxwell algebras type

    Energy Technology Data Exchange (ETDEWEB)

    Concha, P.K.; Penafiel, D.M.; Rodriguez, E.K.; Salgado, P. [Universidad de Concepcion, Departamento de Fisica, Concepcion (Chile)

    2014-02-15

    Recently it was shown that standard odd- and even-dimensional general relativity can be obtained from a (2n + 1)-dimensional Chern-Simons Lagrangian invariant under the B_{2n+1} algebra and from a (2n)-dimensional Born-Infeld Lagrangian invariant under a subalgebra L^{B_{2n+1}}, respectively. Very recently, it was shown that the generalized Inönü-Wigner contraction of the generalized AdS-Maxwell algebras provides Maxwell algebras of types M_m, which correspond to the so-called B_m Lie algebras. In this article we report on a simple model that suggests a mechanism by which standard odd-dimensional general relativity may emerge as the weak coupling constant limit of a (2p + 1)-dimensional Chern-Simons Lagrangian invariant under the Maxwell algebra type M_{2m+1}, if and only if m ≥ p. Similarly, we show that standard even-dimensional general relativity emerges as the weak coupling constant limit of a (2p)-dimensional Born-Infeld type Lagrangian invariant under a subalgebra L^{M_{2m}} of the Maxwell algebra type, if and only if m ≥ p. It is shown that when m < p this is not possible for a (2p + 1)-dimensional Chern-Simons Lagrangian invariant under M_{2m+1} and for a (2p)-dimensional Born-Infeld type Lagrangian invariant under the L^{M_{2m}} algebra. (orig.)

  10. Massless particles, orthosymplectic symmetry and another type of Kaluza-Klein theory

    International Nuclear Information System (INIS)

    The superalgebra osp(8/1) is intimately related to the twistor program. Its most singular representation has the following property: restricted to the conformal subalgebra it contains each and every massless representation exactly once. In other words, one irreducible representation of osp(8/1) describes all massless particles with maximal efficiency. It is believed that such unification is required if massless fields of high spins are to have self-consistent interactions. There are other reasons for studying massless particles of all spins simultaneously. There is a very appealing model in which massless particles are viewed as states of two so(3,2) singletons. The astounding fact is that all free two-singleton states are precisely massless. The most singular representation of osp(8/2) is irreducible on osp(8/1) and completely determined by the latter representation. It finds direct application in supergravity theories. The most interesting Sp(8/R) homogeneous space is 10-dimensional. The action of the conformal subgroup leaves invariant a unique 4-dimensional submanifold that can be identified with space time. Kaluza-Klein expansion of the scalar field on 10-space, around this 4-dimensional manifold, leads to a field theory of massless particles with all integer spins on space time. A supersymmetric extension is also possible. (Auth.)

  11. Probabilistic bounding analysis in the Quantification of Margins and Uncertainties

    International Nuclear Information System (INIS)

    The current challenge of nuclear weapon stockpile certification is to assess the reliability of complex, high-consequence, and aging systems without the benefit of full-system test data. In the absence of full-system testing, disparate kinds of information are used to inform certification assessments, such as archival data, experimental data on partial systems, data on related or similar systems, computer models and simulations, and expert knowledge. In some instances, data can be scarce and information incomplete. The challenge of Quantification of Margins and Uncertainties (QMU) is to develop a methodology to support decision-making in this informational context. Given the difficulty presented by mixed and incomplete information, we contend that the uncertainty representation for the QMU methodology should be expanded to include more general characterizations that reflect imperfect information. One type of generalized uncertainty representation, known as probability bounds analysis, constitutes the union of probability theory and interval analysis, where a class of distributions is defined by two bounding distributions. This has the advantage of rigorously bounding the uncertainty when inputs are imperfectly known. We argue for the inclusion of probability bounds analysis as one of many tools that are relevant for QMU and demonstrate its usefulness as compared to other methods in a reliability example with imperfect input information.
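    A minimal numerical sketch of the p-box idea described above: when only an interval is known for a distribution parameter, the quantity is represented by the envelope of the corresponding family of CDFs. The normal family and all numbers are illustrative:

      import math

      def normal_cdf(x, mu, sigma):
          return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

      def pbox_bounds(x, mu_lo, mu_hi, sigma):
          # The normal CDF decreases in mu, so the family's envelope at x is
          # attained at the two endpoint means.
          return normal_cdf(x, mu_hi, sigma), normal_cdf(x, mu_lo, sigma)

      lo, hi = pbox_bounds(x=1.0, mu_lo=-0.5, mu_hi=0.5, sigma=1.0)
      print(f"P(X <= 1) is rigorously bounded by [{lo:.3f}, {hi:.3f}]")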

  12. M-theory on 'toric' G2 cones and its type II reduction

    International Nuclear Information System (INIS)

    We analyze a class of conical G2 metrics admitting two commuting isometries, together with a certain one-parameter family of G2 deformations which preserves these symmetries. Upon using recent results of Calderbank and Pedersen, we extract the IIA reduction of eleven-dimensional supergravity on such backgrounds, as well as its type IIB dual. The associated type II solutions are expected to contain 6-branes and 5-branes respectively. By studying the general asymptotics of the IIA and IIB solutions around the relevant loci, we confirm the interpretation of such solutions in terms of localized and delocalized branes. In particular, we find explicit, general expressions for the string coupling and R-R/NS-NS fields in the vicinity of these objects. Our solutions contain and generalize the field configurations relevant for certain models considered in recent work of Acharya and Witten. (author)

  13. Session Types = Intersection Types + Union Types

    CERN Document Server

    Padovani, Luca

    2011-01-01

    We propose a semantically grounded theory of session types which relies on intersection and union types. We argue that intersection and union types are natural candidates for modeling branching points in session types and we show that the resulting theory overcomes some important defects of related behavioral theories. In particular, intersections and unions provide a native solution to the problem of computing joins and meets of session types. Also, the subtyping relation turns out to be a pre-congruence, while this is not always the case in related behavioral theories.

  14. On importance of higher non-linear interactions in the theory of type II incommensurate systems

    International Nuclear Information System (INIS)

    We reveal that the role of the higher non-linear local interactions in the conventional theoretical models developed to describe phase transitions in type II incommensurate systems is underestimated. Their consistent consideration in the thermodynamic potential expansion allows one to remove key contradictions in the explanation of the experimental data for ferroelectric Sn2P2Se6 in the vicinity of the modulated-commensurate phase transition point

  15. Theory of the normal modes of vibrations in the lanthanide type crystals

    International Nuclear Information System (INIS)

    For the lanthanide type crystals, a vast and rich, though incomplete, amount of experimental data has been accumulated from linear and non-linear optics during the last decades. The main goal of the current research work is to report a new methodology and strategy to put forward a more representative approach to account for the normal modes of vibration of a complex N-body system. For illustrative purposes, the chloride lanthanide type crystals Cs2NaLnCl6 have been chosen, and we develop new convergence tests as well as a criterion to deal with the details of the F-matrix (potential energy matrix). A novel and useful concept of natural potential energy distributions (NPED) is introduced and examined throughout the course of this work. The diagonal and non-diagonal contributions to these NPED values are evaluated explicitly for a series of these crystals. Our model is based upon a total of seventy-two internal coordinates and ninety-eight internal Hooke type force constants. An optimization mathematical procedure is applied with reference to the series of chloride lanthanide crystals, and it is shown that the strategy and model adopted are sound from both a chemical and a physical viewpoint. We can argue that the current model is able to accommodate a number of interactions and to provide us with very useful physical insight. The limitations and advantages of the current model and the most likely sources for improvement are discussed in detail.

  16. Isobaric labeling-based relative quantification in shotgun proteomics.

    Science.gov (United States)

    Rauniyar, Navin; Yates, John R

    2014-12-01

    Mass spectrometry plays a key role in relative quantitative comparisons of proteins in order to understand their functional role in biological systems upon perturbation. Here, we examine studies of different aspects of isobaric labeling-based relative quantification for shotgun proteomic analysis. In particular, we focus on the different types of isobaric reagents and their reaction chemistry (e.g., amine-, carbonyl-, and sulfhydryl-reactive). Various factors, such as ratio compression, reporter ion dynamic range, and others, cause an underestimation of changes in the relative abundance of proteins across samples, undermining the ability of the isobaric labeling approach to be truly quantitative. These factors, and the suggested combinations of experimental design and optimal data acquisition methods to increase the precision and accuracy of the measurements, will be discussed. Finally, the extended application of the isobaric labeling-based approach in hyperplexing strategies, targeted quantification, and phosphopeptide analysis is also examined. PMID:25337643
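    A toy sketch of the basic reporter-ion arithmetic behind such comparisons (channel labels, intensities and the two-condition design are invented; a real workflow normalizes over all spectra and corrects for isotopic impurities):

      # Reporter-ion intensities for one peptide-spectrum match, four channels.
      intensities = {"126": 1.2e5, "127": 1.1e5, "128": 2.6e5, "129": 2.4e5}

      # Channel normalization (here trivially over a single spectrum).
      total = sum(intensities.values())
      norm = {ch: v / total for ch, v in intensities.items()}

      # Relative abundance of condition B (128, 129) vs condition A (126, 127).
      cond_a = (norm["126"] + norm["127"]) / 2
      cond_b = (norm["128"] + norm["129"]) / 2
      ratio = cond_b / cond_a
      # Co-isolated precursors would compress this ratio toward 1, the
      # underestimation effect discussed above.
      print(f"B/A = {ratio:.2f}")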

  17. Uncertainty Quantification in Climate Modeling

    Science.gov (United States)

    Sargsyan, K.; Safta, C.; Berry, R.; Debusschere, B.; Najm, H.

    2011-12-01

    We address challenges that sensitivity analysis and uncertainty quantification methods face when dealing with complex computational models. In particular, climate models are computationally expensive and typically depend on a large number of input parameters. We consider the Community Land Model (CLM), which consists of a nested computational grid hierarchy designed to represent the spatial heterogeneity of the land surface. Each computational cell can be composed of multiple land types, and each land type can incorporate one or more sub-models describing the spatial and depth variability. Even for simulations at a regional scale, the computational cost of a single run is quite high and the number of parameters that control the model behavior is very large. Therefore, the parameter sensitivity analysis and uncertainty propagation face significant difficulties for climate models. This work employs several algorithmic avenues to address some of the challenges encountered by classical uncertainty quantification methodologies when dealing with expensive computational models, specifically focusing on the CLM as a primary application. First of all, since the available climate model predictions are extremely sparse due to the high computational cost of model runs, we adopt a Bayesian framework that effectively incorporates this lack-of-knowledge as a source of uncertainty, and produces robust predictions with quantified uncertainty even if the model runs are extremely sparse. In particular, we infer Polynomial Chaos spectral expansions that effectively encode the uncertain input-output relationship and allow efficient propagation of all sources of input uncertainties to outputs of interest. Secondly, the predictability analysis of climate models strongly suffers from the curse of dimensionality, i.e. the large number of input parameters. While single-parameter perturbation studies can be efficiently performed in a parallel fashion, the multivariate uncertainty analysis requires a large number of training runs, as well as an output parameterization with respect to a fast-growing spectral basis set. To alleviate this issue, we adopt the Bayesian view of compressive sensing, well-known in the image recognition community. The technique efficiently finds a sparse representation of the model output with respect to a large number of input variables, effectively obtaining a reduced order surrogate model for the input-output relationship. The methodology is preceded by a sampling strategy that takes into account input parameter constraints by an initial mapping of the constrained domain to a hypercube via the Rosenblatt transformation, which preserves probabilities. Furthermore, a sparse quadrature sampling, specifically tailored for the reduced basis, is employed in the unconstrained domain to obtain accurate representations. The work is supported by the U.S. Department of Energy's CSSEF (Climate Science for a Sustainable Energy Future) program. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
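    A bare-bones, one-dimensional sketch of the surrogate idea described above -- fitting Polynomial Chaos coefficients to a handful of model runs by regression. The "model" is a stand-in function; an actual CLM study is high-dimensional and would use the sparse Bayesian machinery described in the abstract:

      import numpy as np

      rng = np.random.default_rng(0)

      def expensive_model(x):
          # Stand-in for a costly simulation with one standard-normal input.
          return np.sin(x) + 0.1 * x**2

      def hermite_basis(x):
          # Probabilists' Hermite polynomials He_0..He_3, orthogonal under
          # N(0,1) with <He_k, He_k> = k!.
          return np.column_stack([np.ones_like(x), x, x**2 - 1.0, x**3 - 3.0 * x])

      xs = rng.standard_normal(20)  # deliberately sparse set of training runs
      coeffs, *_ = np.linalg.lstsq(hermite_basis(xs), expensive_model(xs), rcond=None)

      # Moments come for free from orthogonality: mean = c_0, Var = sum k! c_k^2.
      variance = sum(f * c**2 for f, c in zip((1.0, 2.0, 6.0), coeffs[1:]))
      print(f"surrogate mean ~ {coeffs[0]:.3f}, variance ~ {variance:.3f}")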

  18. Theory of superfluid states with singlet and triplet types of pairing in nuclear matter

    International Nuclear Information System (INIS)

    The paper presents the results of an investigation of superfluid states in a two-component Fermi liquid in the framework of the Fermi liquid approach. Particular attention is paid to superfluid states in nuclear matter which are characterized by the superposition of singlet and triplet types of pairing in spin and isospin spaces. The authors have formulated the basic points of the Fermi liquid approach which are used in the study of superfluidity in nuclear matter with the superposition of singlet and triplet types of pairing. The derivation of the system of self-consistency equations and their solution are presented. For concrete calculations the interaction is taken in the Skyrme model. Using this model the conditions for the existence of the considered states are determined. These conditions impose certain constraints on the potential of interaction and on the density of particles in the system. It is shown that the states with a complete set of nonzero order parameters are realized only in a narrow density range, whose width and position on the density scale depend on the choice of a particular Skyrme force. Eighteen different parameterizations are considered, and it is indicated for which of them the studied types of superfluid states may appear. The problem of the stability of the states with superposition of singlet and triplet types of pairing is studied. It is shown that the lowest value of the thermodynamic potential corresponds to purely triplet states; then, in order of increasing thermodynamic potential, come purely singlet states and mixed singlet-triplet states. The case of unitary states is considered separately. For these states the solutions of the self-consistency equations are analyzed too. The density range for these states is defined, and it is shown that this range differs from that which corresponds to the nonunitary states. In addition, the problem of the existence of unitary superfluid states with the superposition of singlet and triplet superfluidity in the case of asymmetric nuclear matter is studied. It is shown that the appearance of asymmetry causes the unitarity of superfluid states in nuclear matter to be broken.

  19. A new rosane-type diterpenoid from Stachys parviflora and its density functional theory studies.

    Science.gov (United States)

    Farooq, Umar; Ayub, Khurshid; Hashmi, Muhammad Ali; Sarwar, Rizwana; Khan, Afsar; Ali, Mumtaz; Ahmad, Manzoor; Khan, Ajmal

    2015-05-01

    A new rosane-type diterpenoid (1) has been isolated from the chloroform fraction of Stachys parviflora. The structure of 1 was proposed on the basis of 1D and 2D NMR techniques, including correlation spectroscopy, heteronuclear multiple quantum coherence, heteronuclear multiple bond correlation and nuclear Overhauser effect spectroscopy. A theoretical model for the electronic and spectroscopic properties of compound 1 is also developed. The geometries and electronic properties were modelled at the B3LYP/6-31G* level, and the theoretical scaled spectroscopic data correlate nicely with the experimental data. PMID:25482043

  20. Towards an understanding of Type Ia supernovae from a synthesis of theory and observations

    CERN Document Server

    Hillebrandt, W; Röpke, F K; Ruiter, A J

    2013-01-01

    Motivated by the fact that calibrated light curves of Type Ia supernovae (SNe Ia) have become a major tool to determine the expansion history of the Universe, considerable attention has been given to both observations and models of these events over the past 15 years. Here, we summarize new observational constraints, address recent progress in modeling Type Ia supernovae by means of three-dimensional hydrodynamic simulations, and discuss several of the still open questions. It will be shown that the new models have considerable predictive power, which allows us to study observable properties such as light curves and spectra without adjustable non-physical parameters. This is a necessary requisite to improve our understanding of the explosion mechanism and to settle the question of the applicability of SNe Ia as distance indicators for cosmology. We explore the capabilities of the models by comparing them with observations and we show how such models can be applied to study the origin of the diversity of S...

  1. Combining perturbation theory and transformation electromagnetics for finite element solution of Helmholtz-type scattering problems

    Science.gov (United States)

    Kuzuoglu, Mustafa; Ozgun, Ozlem

    2014-10-01

    A numerical method is proposed for efficient solution of scattering from objects with weakly perturbed surfaces by combining the perturbation theory, transformation electromagnetics and the finite element method. A transformation medium layer is designed over the smooth surface, and the material parameters of the medium are determined by means of a coordinate transformation that maps the smooth surface to the perturbed surface. The perturbed fields within the domain are computed by employing the material parameters and the fields of the smooth surface as source terms in the Helmholtz equation. The main advantage of the proposed approach is that if repeated solutions are needed (such as in Monte Carlo technique or in optimization problems requiring multiple solutions for a set of perturbed surfaces), computational resources considerably decrease because a single mesh is used and the global matrix is formed only once. Only the right hand side vector is changed with respect to the perturbed material parameters corresponding to each of the perturbed surfaces. The technique is validated via finite element simulations.
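    The computational point above -- one mesh, one factorized global matrix, many right-hand sides -- is the standard factorize-once pattern; a sketch with a generic sparse system standing in for the assembled FEM matrix:

      import numpy as np
      from scipy.sparse import identity, random as sparse_random
      from scipy.sparse.linalg import splu

      rng = np.random.default_rng(1)
      n = 200
      # Stand-in for the global matrix of the smooth-surface problem,
      # shifted to be comfortably nonsingular.
      A = (sparse_random(n, n, density=0.02, random_state=1)
           + 10.0 * identity(n)).tocsc()

      lu = splu(A)  # factor once

      # Each perturbed surface changes only the source term, so a Monte
      # Carlo loop over surfaces reuses the single factorization.
      for trial in range(5):
          b = rng.standard_normal(n)  # stand-in for the perturbation-dependent RHS
          u = lu.solve(b)
      print("5 perturbed problems solved with one factorization")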

  2. Lovelock type gravity and small black holes in heterotic string theory

    International Nuclear Information System (INIS)

    We analyze the near horizon behavior of small D-dimensional 2-charge black holes by modifying the tree level effective action of the heterotic string with all extended Gauss-Bonnet densities. We show that there is a nontrivial and unique choice of parameters, independent of D, for which the black hole entropy in any dimension is given by 4π(nw)^{1/2}, which is exactly the statistical entropy of 1/2-BPS states of the heterotic string compactified on T^{9-D} × S^1 with momentum n and winding w. This, in a sense, extends the results of Sen [JHEP07(2005)073] to all dimensions. We also show that our Lovelock type action belongs to the more general class of actions sharing similar behaviour on the AdS_2 × S^{D-2} near horizon geometry

  3. Critical state theory for nonparallel flux line lattices in type-II superconductors

    CERN Document Server

    Badía, A

    2001-01-01

    Coarse-grained flux density profiles in type-II superconductors with non-parallel vortex configurations are obtained by a proposed phenomenological least action principle. We introduce a functional $C[H(x)]$, which is minimized under a constraint of the kind $J \in \Delta$ for the current density vector, where $\Delta$ is a bounded set. This generalizes the concept of critical current density introduced by C. P. Bean for parallel vortex configurations. In particular, we choose the isotropic case ($\Delta$ is a circle), for which the field penetration profiles $H(x,t)$ are derived when a changing external excitation is applied. Faraday's law, and the principle of minimum entropy production rate for stationary thermodynamic processes, dictate the evolution of the system. Calculations based on the model can reproduce the physical phenomena of flux transport and consumption, and the striking effect of magnetization collapse in crossed field measurements.

  4. Secret symmetries of type IIB superstring theory on AdS_3 × S^3 × M^4

    Science.gov (United States)

    Pittelli, Antonio; Torrielli, Alessandro; Wolf, Martin

    2014-11-01

    We establish features of so-called Yangian secret symmetries for AdS3 type IIB superstring backgrounds, thus verifying the persistence of such symmetries to this new instance of the AdS/CFT correspondence. Specifically, we find two a priori different classes of secret symmetry generators. One class of generators, anticipated from the previous literature, is more naturally embedded in the algebra governing the integrable scattering problem. The other class of generators is more elusive and somewhat closer in its form to its higher-dimensional AdS5 counterpart. All of these symmetries respect left-right crossing. In addition, by considering the interplay between left and right representations, we gain a new perspective on the AdS5 case. We also study the RTT-realisation of the Yangian in AdS3 backgrounds, thus establishing a new incarnation of the Beisert–de Leeuw construction.

  5. Extension Theory and Krein-type Resolvent Formulas for Nonsmooth Boundary Value Problems

    CERN Document Server

    Abels, Helmut; Wood, Ian Geoffrey

    2010-01-01

    For a strongly elliptic second-order operator $A$ on a bounded domain $\Omega\subset \mathbb{R}^n$ it has been known for many years how to interpret the general closed $L_2(\Omega)$-realizations of $A$ as representing boundary conditions (generally nonlocal), when the domain and coefficients are smooth. The purpose of the present paper is to extend this representation to nonsmooth domains and coefficients, including the case of Hölder $C^{\frac32+\varepsilon}$-smoothness, in such a way that pseudodifferential methods are still available for resolvent constructions and ellipticity considerations. We show how it can be done for domains with $B^\frac32_{2,p}$-smoothness and operators with $H^1_q$-coefficients, for suitable $p>2(n-1)$ and $q>n$. In particular, Krein-type resolvent formulas are established in such nonsmooth cases. Some unbounded domains are allowed.

  6. A recipe for EFT uncertainty quantification in nuclear physics

    OpenAIRE

    Furnstahl, R. J.; Phillips, D. R.; Wesolowski, S.

    2014-01-01

    The application of effective field theory (EFT) methods to nuclear systems provides the opportunity to rigorously estimate the uncertainties originating in the nuclear Hamiltonian. Yet this is just one source of uncertainty in the observables predicted by calculations based on nuclear EFTs. We discuss the goals of uncertainty quantification in such calculations and outline a recipe to obtain statistically meaningful error bars for their predictions. We argue that the different sources of theory error can be accounted for within a Bayesian framework, as we illustrate using a toy model.

  7. The classical Yang-Baxter equation and the associated Yangian symmetry of gauged WZW-type theories

    Science.gov (United States)

    Itsios, Georgios; Sfetsos, Konstantinos; Siampos, Konstantinos; Torrielli, Alessandro

    2014-12-01

    We construct the Lax-pair, the classical monodromy matrix and the corresponding solution of the Yang-Baxter equation, for a two-parameter deformation of the Principal chiral model for a simple group. This deformation includes as a one-parameter subset, a class of integrable gauged WZW-type theories interpolating between the WZW model and the non-Abelian T-dual of the principal chiral model. We derive in full detail the Yangian algebra using two independent methods: by computing the algebra of the non-local charges and alternatively through an expansion of the Maillet brackets for the monodromy matrix. As a byproduct, we also provide a detailed general proof of the Serre relations for the Yangian symmetry.

  8. Conformally de Sitter space from anisotropic space-like D3-brane of type IIB string theory

    Science.gov (United States)

    Roy, Shibaji

    2014-05-01

    We construct a four-dimensional de Sitter space up to a conformal transformation by compactifying the anisotropic SD3-brane solution of type IIB string theory on a six-dimensional product space of the form H^5 × S^1, where H^5 is a five-dimensional hyperbolic space and S^1 is a circle. The radius of the hyperbolic space is chosen to be constant. The radius of the circle and the dilaton in four dimensions are time dependent and not constant in general. By different choices of parameters characterizing the SD3-brane solution, either the dilaton or the radius of the circle can be made constant but not both. The form field is also nonvanishing in general, but it can be made to vanish without affecting the solution. This construction might be useful for a better understanding of dS/CFT correspondence as well as for cosmology.

  9. Uncertainty Quantification in Hybrid Dynamical Systems

    CERN Document Server

    Sahai, Tuhin

    2011-01-01

    Uncertainty quantification (UQ) techniques are frequently used to ascertain output variability in systems with parametric uncertainty. Traditional algorithms for UQ are either system-agnostic and slow (such as Monte Carlo) or fast with stringent assumptions on smoothness (such as polynomial chaos and Quasi-Monte Carlo). In this work, we develop a fast UQ approach for hybrid dynamical systems by extending the polynomial chaos methodology to these systems. To capture discontinuities, we use a wavelet-based Wiener-Haar expansion. We develop a boundary layer approach to propagate uncertainty through separable reset conditions. We also introduce a transport theory based approach for propagating uncertainty through hybrid dynamical systems. Here the expansion yields a set of hyperbolic equations that are solved by integrating along characteristics. The solution of the partial differential equation along the characteristics allows one to quantify uncertainty in hybrid or switching dynamical systems. The above method...
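
    The Wiener-Haar idea can be illustrated in a few lines. The sketch below is our own toy example, not the authors' hybrid-system solver: a discontinuous response of a uniform random input is projected onto a Haar wavelet basis, whose localized support captures the jump that a global polynomial basis would smear into Gibbs oscillations.

      import numpy as np

      def haar(j, k, x):
          # Haar wavelet psi_{j,k} on [0, 1]
          t = (2.0 ** j) * x - k
          return (2.0 ** (j / 2)) * (np.where((t >= 0) & (t < 0.5), 1.0, 0.0)
                                     - np.where((t >= 0.5) & (t < 1.0), 1.0, 0.0))

      f = lambda xi: (xi > 0.37).astype(float)   # switching (reset-like) response

      n = 2 ** 14
      xi = (np.arange(n) + 0.5) / n              # midpoint grid on [0, 1]
      dx = 1.0 / n

      approx = np.full_like(xi, f(xi).mean())    # mean (scaling) term
      for j in range(6):                         # resolution levels
          for k in range(2 ** j):
              psi = haar(j, k, xi)
              approx += (f(xi) * psi).sum() * dx * psi   # Galerkin coefficient

      l2_err = np.sqrt(((f(xi) - approx) ** 2).sum() * dx)
      print(l2_err)   # shrinks with each added level; error stays local to the jump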

  10. Training load quantification in triathlon

    Directory of Open Access Journals (Sweden)

    ROBERTO CEJUELA ANTA

    2011-06-01

    There are different indices of training stress, of varying complexity, used to quantify training load. Examples include the training impulse (TRIMP), the session RPE, Lucia's TRIMP and the Summated Zone Score. But triathlon, a combined sport in which there are interactions between the different segments, complicates the quantification of training. The aim of this paper is to review the current methods of quantification and to propose a simple-to-apply scale for quantifying training load in triathlon.
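
    For reference, two of the indices named above can be written down directly. The sketch below uses the standard published formulas (Banister's TRIMP and Foster's session-RPE), not the triathlon-specific scale the paper proposes, and all heart-rate and RPE numbers are invented for illustration.

      import math

      def banister_trimp(duration_min, hr_avg, hr_rest, hr_max, male=True):
          # training impulse: duration weighted by fractional heart-rate reserve
          hrr = (hr_avg - hr_rest) / (hr_max - hr_rest)
          k, b = (0.64, 1.92) if male else (0.86, 1.67)   # sex-specific constants
          return duration_min * hrr * k * math.exp(b * hrr)

      def session_rpe_load(duration_min, rpe):
          # Foster's session-RPE: perceived exertion (0-10 scale) times duration
          return duration_min * rpe

      # a triathlon session could sum the segment loads (illustrative numbers):
      load = (banister_trimp(30, 150, 50, 190)     # swim
              + banister_trimp(90, 140, 50, 190)   # bike
              + banister_trimp(45, 160, 50, 190))  # run
      print(round(load, 1))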

  11. k-string tensions in the 4-d SU(N)-inspired dual Abelian-Higgs-type theory

    CERN Document Server

    Antonov, Dmitri; Del Debbio, Luigi; Ebert, Dietmar

    2004-01-01

    The k-string tensions are explored in the 4-d $[U(1)]^{N-1}$-invariant dual Abelian-Higgs-type theory. In the London limit of this theory, the Casimir scaling is found in the approximation when small-sized closed dual strings are disregarded. When these strings are treated in the dilute-plasma approximation, explicit corrections to the Casimir scaling are found. The leading correction due to the deviation from the London limit is also derived. Its N-ality dependence turns out to be the same as that of the first non-trivial correction produced by closed strings. It also turns out that this N-ality dependence coincides with that of the leading correction to the k-string tension, which emerges by way of the non-diluteness of the monopole plasma in the 3-d SU(N) Georgi-Glashow model. Finally, we prove that, in the latter model, Casimir scaling holds even at monopole densities close to the mean one, provided the string world sheet is flat.

  12. Flux-induced soft terms on type IIB/F-theory matter curves and hypercharge dependent scalar masses

    Science.gov (United States)

    Cámara, Pablo G.; Ibáñez, Luis E.; Valenzuela, Irene

    2014-06-01

    Closed string fluxes generically induce SUSY-breaking soft terms on supersymmetric type IIB orientifold compactifications with D3/D7-branes. This was studied in the past by inserting those fluxes into the DBI+CS actions for adjoint D3/D7 fields, where the D7-branes had no magnetic fluxes. In the present work we generalise those computations to the phenomenologically more relevant case of chiral bi-fundamental fields lying at 7-brane intersections and F-theory local matter curves. We also include the effect of 7-brane magnetic flux as well as more general closed string backgrounds, including the effect of distant anti-D3-branes. We discuss several applications of our results. We find that squark/slepton masses become in general flux-dependent in F-theory GUTs. Hypercharge-dependent non-universal scalar masses with a characteristic sfermion hierarchy m_E^2 < m_L^2 < m_Q^2 < m_D^2 < m_U^2 are obtained. There are also flavor-violating soft terms, both for matter fields living at intersecting 7-branes and for those on D3-branes at singularities. They point at a very heavy sfermion spectrum to avoid FCNC constraints. We also discuss the possible microscopic description of the fine-tuning of the EW Higgs boson in compactifications with an MSSM spectrum.

  13. Development of flow network analysis code for block type VHTR core by linear theory method

    International Nuclear Information System (INIS)

    VHTR (Very High Temperature Reactor) is a high-efficiency nuclear reactor capable of generating hydrogen thanks to its high coolant temperature. A PMR (Prismatic Modular Reactor) type reactor consists of hexagonal prismatic fuel blocks and reflector blocks. The flow paths in the prismatic VHTR core consist of coolant holes, bypass gaps and cross gaps. Complicated flow paths form in the core, since the coolant holes and bypass gaps are connected by the cross gaps. The distributed coolant is mixed in the core through the cross gaps, so the flow characteristics cannot be modeled as a simple parallel-pipe system. Analyzing the core flow with CFD requires a great deal of effort and takes a very long time. Hence, it is important to develop a code for VHTR core flow that can predict the core flow distribution quickly and accurately. In this study, a steady-state flow network analysis code was developed using a flow network algorithm. The developed flow network analysis code was named FLASH, and it was validated against experimental data and CFD simulation results. (authors)
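
    As a hedged sketch of what a linear-theory flow network iteration looks like (FLASH itself is not public, and a real core model handles many more junction types): each path obeys a head loss h = r*Q**2, which is linearized as h = (r*|Q_prev|)*Q so that a linear nodal system can be re-solved until the flow split converges. The toy network below mimics two coolant channels joined by a cross gap.

      import numpy as np

      # pipes: (node_i, node_j, resistance r), head loss h_ij = r * Q_ij**2;
      # node 0 = inlet plenum, node 3 = outlet plenum (p = 0), pipe 1-2 = cross gap
      pipes = [(0, 1, 2.0), (0, 2, 3.0), (1, 3, 2.5), (2, 3, 1.5), (1, 2, 10.0)]
      W = 1.0                                  # total inlet flow at node 0
      q = [1.0] * len(pipes)                   # initial flow guesses

      for _ in range(100):
          G = np.zeros((3, 3))                 # unknown pressures p0, p1, p2
          b = np.zeros(3)
          b[0] = W                             # fixed injection at the inlet
          for (i, j, r), qk in zip(pipes, q):
              g = 1.0 / (r * max(abs(qk), 1e-9))   # linearized conductance
              if i < 3: G[i, i] += g
              if j < 3: G[j, j] += g
              if i < 3 and j < 3:
                  G[i, j] -= g
                  G[j, i] -= g
          p = np.append(np.linalg.solve(G, b), 0.0)
          q_new = [(p[i] - p[j]) / (r * max(abs(qk), 1e-9))
                   for (i, j, r), qk in zip(pipes, q)]
          if max(abs(a - c) for a, c in zip(q_new, q)) < 1e-12:
              break
          q = [0.5 * (a + c) for a, c in zip(q_new, q)]  # damping, as in Wood-Charles

      print([round(v, 4) for v in q])          # converged flow distribution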

  14. Statistical image quantification toward optimal scan fusion and change quantification

    Science.gov (United States)

    Potesil, Vaclav; Zhou, Xiang Sean

    2007-03-01

    Recent advance of imaging technology has brought new challenges and opportunities for automatic and quantitative analysis of medical images. With broader accessibility of more imaging modalities for more patients, fusion of modalities/scans from one time point and longitudinal analysis of changes across time points have become the two most critical differentiators to support more informed, more reliable and more reproducible diagnosis and therapy decisions. Unfortunately, scan fusion and longitudinal analysis are both inherently plagued with increased levels of statistical errors. A lack of comprehensive analysis by imaging scientists and a lack of full awareness by physicians pose potential risks in clinical practice. In this paper, we discuss several key error factors affecting imaging quantification, studying their interactions, and introducing a simulation strategy to establish general error bounds for change quantification across time. We quantitatively show that image resolution, voxel anisotropy, lesion size, eccentricity, and orientation are all contributing factors to quantification error; and there is an intricate relationship between voxel anisotropy and lesion shape in affecting quantification error. Specifically, when two or more scans are to be fused at feature level, optimal linear fusion analysis reveals that scans with voxel anisotropy aligned with lesion elongation should receive a higher weight than other scans. As a result of such optimal linear fusion, we will achieve a lower variance than naïve averaging. Simulated experiments are used to validate theoretical predictions. Future work based on the proposed simulation methods may lead to general guidelines and error lower bounds for quantitative image analysis and change detection.
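
    The optimal-linear-fusion claim is the classical inverse-variance weighting result. The short simulation below uses our own error model, with made-up variances standing in for the anisotropy-alignment effect, and shows the fused estimate beating naive averaging.

      import numpy as np

      rng = np.random.default_rng(0)
      true_size = 10.0
      sigma = np.array([0.5, 1.5])      # scan 1 aligned with the lesion, scan 2 not
      scans = true_size + sigma * rng.standard_normal((100_000, 2))

      naive = scans.mean(axis=1)                       # equal weights
      w = (1.0 / sigma**2) / (1.0 / sigma**2).sum()    # inverse-variance weights
      fused = scans @ w

      print(naive.var(), fused.var())       # ~0.625 vs ~0.225
      print(1.0 / (1.0 / sigma**2).sum())   # theoretical optimal variance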

  15. Virus detection and quantification using electrical parameters

    Science.gov (United States)

    Ahmad, Mahmoud Al; Mustafa, Farah; Ali, Lizna M.; Rizvi, Tahir A.

    2014-10-01

    Here we identify and quantitate two similar viruses, human and feline immunodeficiency viruses (HIV and FIV), suspended in a liquid medium without labeling, using a semiconductor technique. The virus count was estimated by calculating the impurities inside a defined volume by observing the change in electrical parameters. Empirically, the virus count was given by the absolute value of the ratio of the change in the dopant concentration of the virus suspension relative to mock, over the change in its Debye volume relative to mock. The virus type was identified by constructing a concentration-mobility relationship which is unique for each kind of virus, allowing for a fast (within minutes) and label-free virus quantification and identification. For validation, the HIV and FIV virus preparations were further quantified by a biochemical technique and the results obtained by both approaches corroborated well. We further demonstrate that the electrical technique can be applied to accurately measure and characterize silica nanoparticles that resemble the virus particles in size. Based on these results, we anticipate our present approach to be a starting point towards establishing the foundation for label-free electrical-based identification and quantification of an unlimited number of viruses and other nano-sized particles.
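
    The counting rule quoted above compresses into one line; the helper below merely restates it, and both the symbol names and the sample numbers are ours, not the authors' data.

      def virus_count(n_virus, n_mock, v_virus, v_mock):
          # |change in dopant concentration| over |change in Debye volume|,
          # both measured relative to the mock (virus-free) suspension
          return abs((n_virus - n_mock) / (v_virus - v_mock))

      # hypothetical extracted parameters, for illustration only:
      print(virus_count(n_virus=1.2e6, n_mock=1.0e6,
                        v_virus=8.0e-4, v_mock=1.0e-3))   # ~1e9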

  16. SWATH enables precise label-free quantification on proteome scale.

    Science.gov (United States)

    Huang, Qiang; Yang, Lu; Luo, Ji; Guo, Lin; Wang, Zhiyuan; Yang, Xiangyun; Jin, Wenhai; Fang, Yanshan; Ye, Juanying; Shan, Bing; Zhang, Yaoyang

    2015-04-01

    MS-based proteomics has emerged as a powerful tool in biological studies. The shotgun proteomics strategy, in which proteolytic peptides are analyzed in data-dependent mode, enables detection of the most comprehensive proteome (>10 000 proteins from whole-cell lysate). Quantitative proteomics uses stable isotopes or label-free methods to measure relative protein abundance. The isotope labeling strategies are more precise and accurate than label-free methods, but the labeling procedures are complicated and expensive, and the sample number and types are also limited. Sequential window acquisition of all theoretical mass spectra (SWATH) is a recently developed technique in which data-independent acquisition is coupled with peptide spectral library matching. In principle, the SWATH method is able to perform label-free quantification in an MRM-like manner, with higher quantification accuracy and precision. Previous data have demonstrated that SWATH can be used to quantify less complex systems, such as spiked-in peptide mixtures or protein complexes. Our study assessed for the first time the quantification performance of the SWATH method on a proteome scale, using a complex mouse-cell lysate sample. In total, 3600 proteins were identified and quantified without sample prefractionation. The SWATH method shows outstanding quantification precision, whereas the quantification accuracy becomes less perfect when protein abundances differ greatly. However, this inaccuracy does not prevent discovering biological correlates, because the measured signal intensities had a linear relationship to the sample loading amounts; thus the SWATH method can precisely predict the significance of a protein. Our results prove that SWATH can provide precise label-free quantification on a proteome scale. PMID:25560523

  17. Synchrotron-maser theory of type II solar radio emission processes - the physical model and generation mechanism

    International Nuclear Information System (INIS)

    A theory is proposed to explain the generation mechanism of type II solar radio bursts. It is suggested that the shock wave formed at the leading edge of a coronal transient can accelerate electrons. Because of the nature of the acceleration process, the energized electrons can possess a hollow-beam type distribution function. When the electron beam propagates along the ambient magnetic field to lower altitudes and attains larger pitch angles, a synchrotron-maser instability can set in. This instability leads to the amplification of unpolarized or weakly polarized radiation. The present discussion incorporates a model which describes the ambient magnetic field and background plasma by means of MHD simulation. The potential emission regions may be located approximately, according to the time-dependent MHD simulation. Since the average local plasma frequency in the source region can be evaluated from the MHD model, the frequency drift associated with the radiation may be estimated. The result seems to be in good agreement with that derived from observations. 65 references

  18. A recipe for EFT uncertainty quantification in nuclear physics

    Science.gov (United States)

    Furnstahl, R. J.; Phillips, D. R.; Wesolowski, S.

    2015-03-01

    The application of effective field theory (EFT) methods to nuclear systems provides the opportunity to rigorously estimate the uncertainties originating in the nuclear Hamiltonian. Yet this is just one source of uncertainty in the observables predicted by calculations based on nuclear EFTs. We discuss the goals of uncertainty quantification in such calculations and outline a recipe to obtain statistically meaningful error bars for their predictions. We argue that the different sources of theory error can be accounted for within a Bayesian framework, as we illustrate using a toy model.
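
    A toy version of such an error bar can be coded in a few lines. The sketch below is our own construction in the spirit of the abstract, not the paper's actual model: the observable is an expansion O = O_ref * sum_n c_n Q^n truncated at order k, the unseen coefficients get a "naturalness" prior c_n ~ N(0, cbar^2), and the omitted terms are sampled to yield a degree-of-belief interval.

      import numpy as np

      rng = np.random.default_rng(1)
      Q, k, O_ref, cbar = 0.3, 3, 1.0, 1.0
      c_known = np.array([1.0, -0.5, 0.3, 0.1])    # fitted coefficients c_0..c_3

      central = O_ref * sum(c * Q**n for n, c in enumerate(c_known))

      # sample the neglected orders n = k+1 ... k+10 from the naturalness prior
      draws = rng.normal(0.0, cbar, size=(50_000, 10))
      powers = Q ** np.arange(k + 1, k + 11)
      truncation = O_ref * draws @ powers

      lo, hi = np.percentile(central + truncation, [16, 84])
      print(f"O = {central:.4f}, 68% DoB interval [{lo:.4f}, {hi:.4f}]")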

  19. Uncertainty Quantification in Multiscale Simulation of Materials: A Prospective

    Science.gov (United States)

    Chernatynskiy, Aleksandr; Phillpot, Simon R.; LeSar, Richard

    2013-07-01

    Simulation has long since joined experiment and theory as a valuable tool to address materials problems. Analysis of errors and uncertainties in experiment and theory is well developed; such analysis for simulations, particularly for simulations linked across length scales and timescales, is much less advanced. In this prospective, we discuss salient issues concerning uncertainty quantification (UQ) from a variety of fields and review the sparse literature on UQ in materials simulations. As specific examples, we examine the development of atomistic potentials and multiscale simulations of crystal plasticity. We identify needs for conceptual advances, needs for the development of best practices, and needs for specific implementations.

  20. A recipe for EFT uncertainty quantification in nuclear physics

    CERN Document Server

    Furnstahl, R J; Phillips, D R; Wesolowski, S

    2014-01-01

    The application of effective field theory (EFT) methods to nuclear systems provides the opportunity to rigorously estimate the uncertainties originating in the nuclear Hamiltonian. Yet this is just one source of uncertainty in the observables predicted by calculations based on nuclear EFTs. We discuss the goals of uncertainty quantification in such calculations and outline a recipe to obtain statistically meaningful error bars for their predictions. We argue that the different sources of theory error can be accounted for within a Bayesian framework, as we illustrate using a toy model.

  1. A note on teleparallel Killing vector fields in Bianchi type VIII and IX space-times in teleparallel theory of gravitation

    International Nuclear Information System (INIS)

    In this paper we classify Bianchi type VIII and IX space-times according to their teleparallel Killing vector fields in the teleparallel theory of gravitation by using a direct integration technique. It turns out that the dimensions of the teleparallel Killing vector fields are either 4 or 5. From the above study we have shown that the Killing vector fields for Bianchi type VIII and IX space-times in the context of teleparallel theory are different from that in general relativity. (general)

  2. Quantification of natural phenomena

    International Nuclear Information System (INIS)

    Science is like a great spider's web in which unexpected connections appear, and it is therefore frequently difficult to know in advance the consequences of new theories for existing ones. Physics is a clear example of this. Newton's laws of mechanics accurately describe the physical phenomena observable by means of our senses or with unsophisticated observation equipment. After their formulation at the beginning of the XVIII century, these laws were recognized in the scientific world as a mathematical model of nature. Together with the laws of electrodynamics, developed in the XIX century, and those of thermodynamics, they constitute what we call classical physics. The state of maturity of classical physics at the end of the last century was such that some scientists believed physics was arriving at its end, having obtained a complete description of physical phenomena. The spider's web of knowledge was supposed to be finished, or at least very near completion. It was even said, arrogantly, that if the initial conditions of the universe were known, we could determine its state at any future moment. Two phenomena related to light would firmly prove how mistaken they were, creating unexpected connections in the great spider's web of knowledge and knocking down part of it. The thermal radiation of bodies, and the fact that light propagates at constant speed in vacuum without any absolute reference system with respect to which this speed is measured, were the decisive factors in the construction of a new physics. The development of sophisticated measurement equipment gave access to more precise information, and it opened the microscopic world to observation and to the confirmation of existing theories.

  3. Structural Quantification of Entanglement

    Science.gov (United States)

    Shahandeh, F.; Sperling, J.; Vogel, W.

    2014-12-01

    We introduce an approach which allows a full structural and quantitative analysis of multipartite entanglement. The sets of states with different structures are convex and nested. Hence, they can be distinguished from each other using appropriate measurable witnesses. We derive equations for the construction of optimal witnesses and discuss general properties arising from our approach. As an example, we formulate witnesses for a 4-cluster state and perform a full quantitative analysis of the entanglement structure in the presence of noise and losses. The strength of the method in multimode continuous variable systems is also demonstrated for a dephased Greenberger-Horne-Zeilinger-type state.

  4. Critical evaluation of HPV16 gene copy number quantification by SYBR green PCR

    OpenAIRE

    Pett Mark R; Herdman Michael T; Stanley Margaret; Foster Nicola; Ng Grace; Roberts Ian; Teschendorff Andrew; Coleman Nicholas

    2008-01-01

    Abstract Background Human papilloma virus (HPV) load and physical status are considered useful parameters for clinical evaluation of cervical squamous cell neoplasia. However, the errors implicit in HPV gene quantification by PCR are not well documented. We have undertaken the first rigorous evaluation of the errors that can be expected when using SYBR green qPCR for quantification of HPV type 16 gene copy numbers. We assessed a modified method, in which external calibration curves were gener...

  5. Quantification of mannan-binding lectin

    DEFF Research Database (Denmark)

    Frederiksen, Pernille D; Thiel, Steffen

    2006-01-01

    Mannan-binding lectin (MBL) is attracting considerable interest due to its role in the immune defense. The high frequency of congenital MBL deficiency makes it feasible to evaluate clinical relevance through epidemiological investigations on fairly limited numbers of patients. MBL deficiency is determined by three mutant allotypes termed B, C and D in the coding region as well as mutations in the promoter region. It has been suggested that individuals with deficiency-associated allotypes may present significant amounts of low molecular weight MBL. We have compared the quantification of MBL by four commercially available assays with results obtained by our own in-house assays. Most assays are selectively sensitive for the wild-type MBL (allotype A), but special combinations of antibodies also detect mutant forms of MBL. Thus a sandwich-type time-resolved immunofluorometric assay (TRIFMA), with a mouse monoclonal antibody (93C) as the catching and detecting antibody, shows B/B and D/D homozygous individuals to present signals corresponding to up to 500 ng MBL per ml (with plasma from an A/A individual as standard), as compared to less than 50 ng/ml and 200 ng/ml, respectively, when measured in other assays. In GPC under isotonic conditions the MBL from B/B and D/D individuals showed an Mr of 450 kDa. This MBL cannot bind to mannan. We further present a new method for quantifying the amount of MBL polypeptide chain. By applying plasma samples to SDS-PAGE under reducing conditions, followed by Western blotting and quantification by chemiluminescence, this approach presents single polypeptide chains to the antibody, independently of allotype differences in the collagen-like region. Titrations of recombinant MBL served as standard. In sera from homozygous mutants (O/O) the MBL concentrations estimated on Western blot were in the range of 100 to 500 ng/ml and correlated with those measured in the 93C-based TRIFMA.

  6. Aerobic physical activity and resistance training: an application of the theory of planned behavior among adults with type 2 diabetes in a random, national sample of Canadians

    OpenAIRE

    Karunamuni Nandini; Trinh Linda; Courneya Kerry S; Plotnikoff Ronald C; Sigal Ronald J

    2008-01-01

    Abstract Background Aerobic physical activity (PA) and resistance training are paramount in the treatment and management of type 2 diabetes (T2D), but few studies have examined the determinants of both types of exercise in the same sample. Objective The primary purpose was to investigate the utility of the Theory of Planned Behavior (TPB) in explaining aerobic PA and resistance training in a population sample of T2D adults. Methods A total of 244 individuals were recruited through a random na...

  7. Application of time-dependent neutron transport theory to high-temperature reactors of pebble bed type

    International Nuclear Information System (INIS)

    The neutronic behaviour of high-temperature reactors (HTR) of pebble bed type is typically studied by means of neutron diffusion methods, which have proven to be sufficiently accurate for most instances of steady-state studies and accident analyses. In this paper, we propose a novel, advanced approach based on neutron transport theory instead of the diffusion approximation, employing the time-dependent transport code DORT-TD. The code is a transient extension of the well-known Discrete Ordinates code DORT, featuring a fully implicit time discretization scheme and a general coupling interface to thermal-hydraulics codes. Using the PBMR268 as a computational prototype, several aspects of steady-state modelling are studied, such as the treatment of large void cavities and the modelling of control rods. With the steady state as the point of departure, the transient features and options of DORT-TD are elaborated in some detail. Moreover, results from the code verification and validation phase, based on a suite of computational transient benchmarks, are reported. Finally, we introduce the coupled neutronics-thermal hydraulics code system DORT-TD/THERMIX, which allows performing transient studies relevant for HTR safety analysis, like depressurization and loss of flow scenarios as well as reactivity initiated transients. Preliminary results from coupled multi-group transport calculations are reported. (authors)

  8. Calculation of Fayet-Iliopoulos D-term in type I string theory revisited: T^6/Z_3 orbifold case

    Science.gov (United States)

    Itoyama, H.; Yano, Kohei

    2013-12-01

    The string one-loop computation of the Fayet-Iliopoulos D-term in type I string theory in the case of T^6/Z_3 orbifold compactification, associated with the annulus (planar) and the Möbius strip string worldsheet diagrams, is reexamined. The mass extracted from the sum of these amplitudes through a limiting procedure is found to be non-vanishing, which is contrary to the earlier computation. The sum can be made finite by a rescaling of the modular parameter in the closed string channel. Both the annulus and the Möbius strip amplitudes are separately divergent. In order to deal with these objects, we analyze them using the variables of the closed string picture rather than those of the original open string picture. This is in accordance with the successful calculations carried out for infinity cancellation [22] and tadpole cancellation [23,24,2] in open string one-loop diagrams; the lattice sum due to the compactification should be included. Upon a change of variables, which may differ for the annulus and for the Möbius strip, this factor becomes important to the issue. The conclusions drawn from our computation are stated in Section 7 and Section 8.

  9. Effective potential in non-supersymmetric SU(N) x SU(N) gauge theory and interactions of type 0 D3-branes

    CERN Document Server

    Tseytlin, Arkady A

    1999-01-01

    We study some aspects of short-distance interaction between parallel D3-branes in type 0 string theory as described by the corresponding world-volume gauge theory. We compute the one-loop effective potential in the non-supersymmetric SU(N) x SU(N) gauge theory (which is a Z_2 projection of the U(2N) N=4 SYM theory) representing dyonic branes composed of N electric and N magnetic D3-branes. The branes of the same type repel at short distances, but an electric and a magnetic brane attract, and the forces between self-dual branes cancel. The self-dual configuration (with the positions of the electric and the magnetic branes, i.e. the diagonal entries of the adjoint scalar fields, being the same) is stable against separation of one electric or one magnetic brane, but is unstable against certain modes of separation of several same-type branes. This instability should be suppressed in the large N limit, i.e. should be irrelevant for the large N CFT interpretation of the gauge theory suggested in hep-th/9901101.

  10. Quantification of atmospheric water soluble inorganic and organic nitrogen

    OpenAIRE

    Benítez, Juan Manuel González

    2010-01-01

    The key aims of this project were: (i) investigation of atmospheric nitrogen deposition, focused on discrimination between bulk, wet and dry deposition, and between particulate matter and gas phase, (ii) accurate quantification of the contributions of dissolved organic and inorganic nitrogen to each type of deposition, and (iii) exploration of the origin and potential sources of atmospheric water soluble organic nitrogen (WSON). This project was particularly focused on the WSON fraction becau...

  11. Object Oriented Design Security Quantification

    OpenAIRE

    Suhel Ahmad Khan

    2011-01-01

    Quantification of security at an early phase produces a significant improvement in understanding the management of security artifacts for best possible results. The proposed study discusses a systematic approach to quantifying security based on complexity factors which have an impact on security attributes. This paper provides a road-map for researchers and software practitioners to assess, and preferably quantify, software security in the design phase. A security assessment through complexity framework (SVD...

  12. Precise Quantification of Nanoparticle Internalization

    OpenAIRE

    Gottstein, Claudia; Wu, Guohui; Wong, Benjamin J.; Zasadzinski, Joseph Anthony

    2013-01-01

    Nanoparticles have opened new exciting avenues for both diagnostic and therapeutic applications in human disease, and targeted nanoparticles are increasingly used as specific drug delivery vehicles. The precise quantification of nanoparticle internalization is of importance to measure the impact of physical and chemical properties on the uptake of nanoparticles into target cells or into cells responsible for rapid clearance. Internalization of nanoparticles has been measured...

  13. Sz.-Nagy-Foias theory and Lax-Phillips type semigroups in the description of quantum mechanical resonances

    International Nuclear Information System (INIS)

    A quantum mechanical version of the Lax-Phillips scattering theory was recently developed. This theory is a natural framework for the description of quantum unstable systems. However, since the spectrum of the generator of evolution in this theory is unbounded from below, the existing framework does not apply to a large class of quantum mechanical scattering problems. It is shown in this work that the fundamental mathematical structure underlying the Lax-Phillips theory, i.e., the Sz.-Nagy-Foias theory of contraction operators on Hilbert space, can be used for the construction of a formalism in which models associated with a semibounded spectrum may be accommodated

  14. Higher dimensional analogue of the Blau-Thompson model and N_T=8, D=2 Hodge-type cohomological gauge theories

    CERN Document Server

    Geyer, B

    2003-01-01

    The higher dimensional analogue of the Blau-Thompson model in D=5 is constructed by a N_T=1 topological twist of N=2, D=5 super Yang-Mills theory. Its dimensional reduction to D=4 and D=3 gives rise to the B-model and the N_T=4 equivariant extension of the Blau-Thompson model, respectively. A further dimensional reduction to D=2 provides another example of a N_T=8 Hodge-type cohomological theory with global symmetry group SU(2) \otimes \bar SU(2). Moreover, it is shown that this theory actually possesses a larger global symmetry group SU(4) and that it agrees with the N_T=8 topological twisting of N=16, D=2 super Yang-Mills theory.

  15. Understanding the p-Type Conduction Properties of the Transparent Conducting Oxide CuBO2: A Density Functional Theory Analysis

    OpenAIRE

    Watson, Graeme William; Scanlon, David

    2009-01-01

    CuCrO2 is the most promising Cu-based delafossite for p-type optoelectronic devices. Despite this, little is known about the p-type conduction mechanism of this material, with both CuI/CuII and CrIII/CrIV hole mechanisms being proposed. In this article we examine the electronic structure, thermodynamic stability and the p-type defect chemistry of this ternary compound using density functional theory with three different approaches to the exchange and correlation; the generalized-gradient-appr...

  16. An uncertainty inventory demonstration - a primary step in uncertainty quantification

    Energy Technology Data Exchange (ETDEWEB)

    Langenbrunner, James R. [Los Alamos National Laboratory; Booker, Jane M [Los Alamos National Laboratory; Hemez, Francois M [Los Alamos National Laboratory; Salazar, Issac F [Los Alamos National Laboratory; Ross, Timothy J [UNM

    2009-01-01

    Tools, methods, and theories for assessing and quantifying uncertainties vary by application. Uncertainty quantification tasks have unique desiderata and circumstances. To realistically assess uncertainty requires the engineer/scientist to specify mathematical models, the physical phenomena of interest, and the theory or framework for assessments. For example, Probabilistic Risk Assessment (PRA) specifically identifies uncertainties using probability theory, and therefore PRAs lack formal procedures for quantifying uncertainties that are not probabilistic. The Phenomena Identification and Ranking Technique (PIRT) proceeds by ranking phenomena using scoring criteria that result in linguistic descriptors, such as importance ranked with the words 'High/Medium/Low.' The use of words allows PIRT to be flexible, but the analysis may then be difficult to combine with other uncertainty theories. We propose that a necessary step for the development of a procedure or protocol for uncertainty quantification (UQ) is the application of an Uncertainty Inventory. An Uncertainty Inventory should be considered and performed in the earliest stages of UQ.

  17. Quantum non-equilibrium and relaxation to equilibrium for a class of de Broglie-Bohm-type theories

    International Nuclear Information System (INIS)

    The de Broglie-Bohm theory is about non-relativistic point-particles that move deterministically along trajectories. The theory reproduces the predictions of standard quantum theory, given that the distribution of particles over an ensemble of systems, all described by the same wavefunction ψ, equals the quantum equilibrium distribution |ψ|². Numerical simulations done by Valentini and Westman (2005 Proc. R. Soc. A 461 253) have illustrated that non-equilibrium particle distributions may relax to quantum equilibrium after some time. Here we consider non-equilibrium distributions and their relaxation properties for a particular class of trajectory theories (first studied in detail by Deotto and Ghirardi (1998 Found. Phys. 28 1)) that are empirically equivalent to the de Broglie-Bohm theory in quantum equilibrium. In the examples we studied of such theories, we found a speed-up of the relaxation, compared to the ordinary de Broglie-Bohm theory. Hence non-equilibrium predictions that depend strongly on relaxation properties, such as those studied recently by Valentini, may vary across different trajectory theories. As such, these theories might be experimentally distinguishable.

  18. ACB-PCR quantification of somatic oncomutation.

    Science.gov (United States)

    Myers, Meagan B; McKinzie, Page B; Wang, Yiying; Meng, Fanxue; Parsons, Barbara L

    2014-01-01

    Allele-specific competitive blocker-polymerase chain reaction (ACB-PCR) is a sensitive approach for the selective amplification of an allele. Using the ACB-PCR technique, hotspot point mutations in oncogenes and tumor-suppressor genes (oncomutations) are being developed as quantitative biomarkers of cancer risk. ACB-PCR employs a mutant-specific primer (with a 3'-penultimate mismatch relative to the mutant DNA sequence, but a double 3'-terminal mismatch relative to the wild-type DNA sequence) to selectively amplify rare mutant DNA molecules. A blocker primer (having a non-extendable 3'-end and with a 3'-penultimate mismatch relative to the wild-type DNA sequence, but a double 3'-terminal mismatch relative to the mutant DNA sequence) is included in ACB-PCR to selectively repress amplification from the abundant wild-type molecules. Consequently, ACB-PCR is capable of quantifying the level of a single basepair substitution mutation in a DNA population when present at a mutant:wild-type ratio of 10^-5 or greater. Quantification of rare mutant alleles is achieved by parallel analysis of unknown samples and mutant fraction (MF) standards (defined mixtures of mutant and wild-type DNA sequences). The ability to quantify specific mutations with known association to cancer has several important applications, including evaluating the carcinogenic potential of chemical exposures in rodent models and in the diagnosis and treatment of cancer. This chapter provides a step-by-step description of the ACB-PCR methodology as it has been used to measure human KRAS codon 12 GGT to GAT mutation. PMID:24623241
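
    The final quantification step, reading unknowns off the MF standards, is ordinary calibration-curve work. The sketch below is a generic log-log version with invented readout numbers, not the published protocol's exact regression model.

      import numpy as np

      mf_std = np.array([1e-5, 1e-4, 1e-3, 1e-2])           # defined mutant:WT mixtures
      signal_std = np.array([12.0, 110.0, 1050.0, 9800.0])  # hypothetical readouts

      # fit the standards on a log-log scale
      slope, intercept = np.polyfit(np.log10(mf_std), np.log10(signal_std), 1)

      def mutant_fraction(signal):
          # invert the log-log calibration line for an unknown sample
          return 10 ** ((np.log10(signal) - intercept) / slope)

      print(mutant_fraction(350.0))   # ~3e-4 with the numbers above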

  19. On a singular Fredholm-type integral equation arising in N=2 super Yang-Mills theories

    CERN Document Server

    Ferrari, Franco

    2012-01-01

    In this work we study the Nekrasov-Shatashvili limit of the Nekrasov instanton partition function of Yang-Mills field theories with N=2 supersymmetry and gauge group SU(n). The theories are coupled with fundamental matter. The equation that determines the density of eigenvalues at the leading order in the saddle-point approximation is exactly solved. The dominating contribution to the instanton free energy is computed. The requirement that this energy is finite imposes quantization conditions on the parameters of the theory that are in agreement with analogous conditions that have been derived in previous works. Using methods borrowed from the theory of matrix models, a field theoretical expression of the full instanton partition function is derived. It is checked that in the Nekrasov-Shatashvili (thermodynamic) limit the action of the field theory obtained in this way reproduces exactly the equation of motion used in the saddle-point calculations.

  20. Quantification of uncertainties of water vapour column retrievals using future instruments

    OpenAIRE

    Diedrich, H.; Preusker, R.; Lindstrot, R.; Fischer, J.

    2012-01-01

    This study presents a quantification of uncertainties of water vapour retrievals based on near-infrared measurements of upcoming instruments. The concepts of three scheduled spectrometers were taken into account: OLCI (Ocean and Land Color Instrument) on Sentinel-3, METimage on MetOp (Meteorological Operational Satellite) and FCI (Flexible Combined Imager) on MTG (Meteosat Third Generation). Optimal estimation theory was used to estimate th...

  1. An EPGPT-based approach for uncertainty quantification

    International Nuclear Information System (INIS)

    Generalized Perturbation Theory (GPT) has been widely used by many scientific disciplines to perform sensitivity analysis and uncertainty quantification. This manuscript employs recent developments in GPT theory, collectively referred to as Exact-to-Precision Generalized Perturbation Theory (EPGPT), to enable uncertainty quantification for computationally challenging models, e.g. nonlinear models associated with many input parameters and many output responses and with general non-Gaussian parameters distributions. The core difference between EPGPT and existing GPT is in the way the problem is formulated. GPT formulates an adjoint problem that is dependent on the response of interest. It tries to capture via the adjoint solution the relationship between the response of interest and the constraints on the state variations. EPGPT recasts the problem in terms of a smaller set of what is referred to as the 'active' responses which are solely dependent on the physics model and the boundary and initial conditions rather than on the responses of interest. The objective of this work is to apply an EPGPT methodology to propagate cross-sections variations in typical reactor design calculations. The goal is to illustrate its use and the associated impact for situations where the typical Gaussian assumption for parameters uncertainties is not valid and when nonlinear behavior must be considered. To allow this demonstration, exaggerated variations will be employed to stimulate nonlinear behavior in simple prototypical neutronics models. (authors)

  2. Method of moments for the continuous transition between the Brillouin-Wigner-type and Rayleigh-Schroumldinger-type multireference coupled cluster theories.

    Czech Academy of Sciences Publication Activity Database

    Pittner, Jiří; Piecuch, P.

    2009-01-01

    Roč. 107, 8-12 (2009), s. 1209-1221. ISSN 0026-8976 R&D Projects: GA ČR GA203/07/0070; GA AV ČR 1ET400400413; GA AV ČR KSK4040110 Institutional research plan: CEZ:AV0Z40400503 Keywords: multireference coupled cluster theory * method of moments of coupled cluster equations * state-universal multireference coupled cluster approach Subject RIV: CF - Physical; Theoretical Chemistry Impact factor: 1.634, year: 2009

  3. Uncertainty quantification for systems of conservation laws

    International Nuclear Information System (INIS)

    Uncertainty quantification through stochastic spectral methods has recently been applied to several kinds of non-linear stochastic PDEs. In this paper, we introduce a formalism based on kinetic theory to tackle uncertain hyperbolic systems of conservation laws with Polynomial Chaos (PC) methods. The idea is to introduce a new variable, the entropic variable, in bijection with our vector of unknowns, which we expand on the polynomial basis: by performing a Galerkin projection, we obtain a deterministic system of conservation laws. We state several properties of this deterministic system in the case of a general uncertain system of conservation laws. We then apply the method to the case of the inviscid Burgers' equation with random initial conditions and we present some preliminary results for the Euler system. We systematically compare results from our new approach to results from the stochastic Galerkin method. In the vicinity of discontinuities, the new method bounds the oscillations due to the Gibbs phenomenon to a certain range through the entropy of the system, without the use of any adaptive random space discretizations. It is found to be more precise than the stochastic Galerkin method for smooth cases, but above all for discontinuous cases.
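
    The Galerkin-projection machinery the method builds on is easiest to see on a scalar toy problem. The sketch below is not the paper's kinetic/entropic-variable construction; it shows a standard polynomial chaos Galerkin projection for du/dt = -k*u with uncertain k = k0 + k1*xi, xi ~ N(0,1), where projecting onto Hermite polynomials turns one random ODE into a coupled deterministic system for the modes.

      import numpy as np
      from numpy.polynomial.hermite_e import hermegauss, hermeval

      P = 6                                     # chaos order
      x, w = hermegauss(40)
      w = w / w.sum()                           # normalize to the N(0,1) measure

      def he(n, t):                             # probabilists' Hermite He_n
          c = np.zeros(n + 1)
          c[n] = 1.0
          return hermeval(t, c)

      norm = np.array([np.sum(w * he(i, x)**2) for i in range(P + 1)])  # = i!

      k0, k1 = 1.0, 0.3                         # uncertain decay rate k(xi)
      # Galerkin matrix A_ij = <k(xi) He_i He_j> / <He_i^2>
      A = np.array([[np.sum(w * (k0 + k1 * x) * he(i, x) * he(j, x)) / norm[i]
                     for j in range(P + 1)] for i in range(P + 1)])

      u = np.zeros(P + 1)
      u[0] = 1.0                                # deterministic initial condition
      dt = 1e-3
      for _ in range(1000):                     # integrate the modes to t = 1
          u = u - dt * (A @ u)

      mean, var = u[0], np.sum(u[1:]**2 * norm[1:])
      print(mean, var)   # exact mean at t = 1 is exp(-k0 + k1**2/2) ~ 0.385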

  4. Dynamic behaviors of spin-1/2 bilayer system within Glauber-type stochastic dynamics based on the effective-field theory

    Science.gov (United States)

    Ertaş, Mehmet; Kantar, Ersin; Keskin, Mustafa

    2014-05-01

    The dynamic phase transitions (DPTs) and dynamic phase diagrams of the kinetic spin-1/2 bilayer system in the presence of a time-dependent oscillating external magnetic field are studied by using Glauber-type stochastic dynamics based on the effective-field theory with correlations for the ferromagnetic/ferromagnetic (FM/FM), antiferromagnetic/ferromagnetic (AFM/FM) and antiferromagnetic/antiferromagnetic (AFM/AFM) interactions. The time variations of the average magnetizations and the temperature dependence of the dynamic magnetizations are investigated. The dynamic phase diagrams for the amplitude of the oscillating field versus temperature are presented. The results are compared with the results of the same system within Glauber-type stochastic dynamics based on the mean-field theory.

  5. AdS3 x_w (S3 x S3 x S1) solutions of type IIB string theory

    International Nuclear Information System (INIS)

    We analyse a recently constructed class of local solutions of type IIB supergravity that consist of a warped product of AdS3 with a seven-dimensional internal space. In one duality frame the only other non-vanishing fields are the NS 3-form and the dilaton. We analyse in detail how these local solutions can be extended to give infinite families of globally well-defined solutions of type IIB string theory, with the internal space having topology S3 x S3 x S1 and with properly quantized 3-form flux.

  6. Medición volumétrica de grasa visceral abdominal con resonancia magnética y su relación con antropometría, en una población diabética Quantification of visceral adipose tissue using magnetic resonance imaging compared with anthropometry, in type 2 diabetic patients

    Directory of Open Access Journals (Sweden)

    Cristóbal Serrano García

    2012-12-01

    Background: Visceral fat accumulation is associated with the development of metabolic diseases. Anthropometry is one of the methods used to quantify it. Aim: To evaluate the relationship between visceral adipose tissue volume (VAT), measured with magnetic resonance imaging (MRI), and anthropometric indexes, such as body mass index (BMI) and waist circumference (WC), in type 2 diabetic patients (DM2). Patients and Methods: Twenty-four type 2 diabetic patients aged 55 to 78 years (15 females) and weighing 61.5 to 97 kg were included. The patients underwent MRI examination on a Philips Intera® 1.5T MR scanner. The MRI protocol included a spectral excitation sequence centered at the fat peak. The field of view extended from L4-L5 to the diaphragmatic border. VAT was measured using the software Image J®. Weight, height, BMI, WC and body fat percentage (BF%), derived from the measurement of four skinfolds with the equation of Durnin and Womersley, were also measured. The association between the MRI VAT measurement and anthropometry was evaluated using Pearson's correlation coefficient. Results: Mean VAT was 2478 ± 758 ml, mean BMI was 29.5 ± 4.7 kg/m², and mean WC was 100 ± 9.7 cm. There was a poor correlation between VAT, BMI (r = 0.18) and WC (r = 0.56). Conclusions: BMI and WC are inaccurate predictors of VAT volume in type 2 diabetic patients.

  7. Medición volumétrica de grasa visceral abdominal con resonancia magnética y su relación con antropometría, en una población diabética / Quantification of visceral adipose tissue using magnetic resonance imaging compared with anthropometry, in type 2 diabetic patients

    Scientific Electronic Library Online (English)

    Cristóbal, Serrano García; Francisco, Barrera; Pilar, Labbé; Jessica, Liberona; Marco, Arrese; Pablo, Irarrázabal; Cristián, Tejos; Sergio, Uribe.

    1535-15-01

    Background: Visceral fat accumulation is associated with the development of metabolic diseases. Anthropometry is one of the methods used to quantify it. Aim: To evaluate the relationship between visceral adipose tissue volume (VAT), measured with magnetic resonance imaging (MRI), and anthropometric indexes, such as body mass index (BMI) and waist circumference (WC), in type 2 diabetic patients (DM2). Patients and Methods: Twenty-four type 2 diabetic patients aged 55 to 78 years (15 females) and weighing 61.5 to 97 kg were included. The patients underwent MRI examination on a Philips Intera® 1.5T MR scanner. The MRI protocol included a spectral excitation sequence centered at the fat peak. The field of view extended from L4-L5 to the diaphragmatic border. VAT was measured using the software Image J®. Weight, height, BMI, WC and body fat percentage (BF%), derived from the measurement of four skinfolds with the equation of Durnin and Womersley, were also measured. The association between the MRI VAT measurement and anthropometry was evaluated using Pearson's correlation coefficient. Results: Mean VAT was 2478 ± 758 ml, mean BMI was 29.5 ± 4.7 kg/m², and mean WC was 100 ± 9.7 cm. There was a poor correlation between VAT, BMI (r = 0.18) and WC (r = 0.56). Conclusions: BMI and WC are inaccurate predictors of VAT volume in type 2 diabetic patients.

  8. Prospects of using the second-order perturbation theory of the MP2 type in the theory of electron scattering by polyatomic molecules.

    Czech Academy of Sciences Publication Activity Database

    Čársky, Petr

    2015-01-01

    Roč. 191, č. 2015 (2015), s. 191-192. ISSN 1551-7616 R&D Projects: GA MŠk OC09079; GA MŠk(CZ) OC10046; GA ČR GA202/08/0631 Grant ostatní: COST(XE) CM0805; COST(XE) CM0601 Institutional support: RVO:61388955 Keywords: electron scattering * calculation of cross sections * second-order perturbation theory Subject RIV: CF - Physical; Theoretical Chemistry

  9. Application of the perturbation theory-differential formalism-for sensitivity analysis in steam generators of PWR type nuclear power plants

    International Nuclear Information System (INIS)

    A homogeneous model which simulates the stationary behavior of steam generators of PWR-type reactors, and which uses the differential formalism of perturbation theory to analyse the sensitivity of linear and non-linear responses, is presented. The PERGEVAP computer code, which calculates the temperature distribution in the steam generator and the associated importance function, was developed. The code also evaluates the effects of thermohydraulic parameter variations on selected functionals. The obtained results are compared with results obtained by the GEVAP computer code. (M.C.K.)

  10. Quantification of Flow Structures in Syntectonic Magmatic Rocks

    Science.gov (United States)

    Kruhl, J. H.; Gerik, A.

    2007-12-01

    Fabrics of syntectonic magmatic rocks provide important information on melt emplacement and crystallization conditions and, consequently, on the state and development of certain parts of the continental crust. Therefore, detailed studies of magmatic fabrics and, specifically, their quantification are a necessary prerequisite for any more detailed study. Fabric anisotropy and heterogeneity are fundamental properties of magmatic rocks. Their quantification can be performed by recently developed modified methods of fractal geometry. (i) A modified Cantor-dust method leads to a direction-related fractal dimension and, consequently, to quantification of fabric anisotropy. (ii) A modified perimeter method allows determination of fractal dimensions of complex curves in relation to their average orientations. (iii) A combination of the box-counting method with kriging results in a contour map of the box-counting dimension, revealing the local fabric heterogeneity. (iv) A combination of method iii and a modified Cantor-dust method leads to mapping of fabric anisotropy (Kruhl et al. 2004, Peternell et al. subm.). Automation of these methods allows fast recording, generation of large data sets and the application of the quantification methods to large areas (Gerik & Kruhl subm.). It leads to a precision of fabric analysis not obtainable by manual execution of the methods. Specifically, the direction-related Cantor-dust method has proven useful for analyzing magmatic flow structures and quantifying the intensity of flow. Application of this method to different types of syntectonic magmatic rocks will be presented and discussed. References: Gerik, A. & Kruhl, J.H.: Towards automated pattern quantification: time-efficient assessment of anisotropy of 2D pattern with AMOCADO. Computers & Geosciences (subm.). Kruhl, J.H., Andries, F., Peternell, M. & Volland, S. 2004: Fractal geometry analyses of rock fabric anisotropies and inhomogeneities. In: Kolymbas, D. (ed.), Fractals in Geotechnical Engineering, Advances in Geotechnical Engineering and Tunnelling, 9, Logos, Berlin, 115-135. Peternell, M., Bitencourt, M.F. & Kruhl, J.H.: New methods for large-scale rock fabric quantification - the Piquiri Syenite Massif, Southern Brazil. Journal of Structural Geology (subm.)
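
    The box-counting ingredient shared by methods (iii) and (iv) fits in a few lines. The sketch below is a generic version (the AMOCADO implementation itself is not public), estimating the fractal dimension as the slope of log N(s) against log(1/s) for a binary fabric image.

      import numpy as np

      def box_counting_dimension(img, sizes=(2, 4, 8, 16, 32)):
          # img: 2-D boolean array marking the fabric element of interest
          counts = []
          for s in sizes:
              h, w = (img.shape[0] // s) * s, (img.shape[1] // s) * s
              blocks = img[:h, :w].reshape(h // s, s, w // s, s)
              counts.append(blocks.any(axis=(1, 3)).sum())   # occupied boxes
          slope, _ = np.polyfit(np.log(1.0 / np.array(sizes, float)),
                                np.log(counts), 1)
          return slope

      # sanity check: a straight line should give a dimension close to 1
      img = np.zeros((256, 256), dtype=bool)
      img[128, :] = True
      print(box_counting_dimension(img))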

  11. Cuantificación del carbono almacenado en formaciones vegetales amazónicas en "CICRA", Madre de Dios (Perú) / Quantification of the carbon storage in amazon vegetation types at "CICRA", Madre de Dios (Peru)

    Scientific Electronic Library Online (English)

    Carlos, Martel; Lianka, Cairampoma.

    2012-08-01

    The Peruvian Amazon basin is characterized by the presence of multiple vegetation types, which are increasingly impacted by human activities such as mining and logging. All this, coupled with global climate change, creates uncertainty about the future of the forests. The identification of the levels of carbon storage in forested areas, and specifically in each vegetation type, would allow better management of conservation areas, as well as the identification of potential areas that could serve to finance carbon sequestration and other environmental services. This study was conducted at the biological station of the Centro de Investigación y Capacitación Río Los Amigos (CICRA). At the station three main vegetation formations were identified: alluvial terrace forests, flood-plain forests and Mauritia palm swamps (aguajales). The alluvial terrace forests cover the largest area and store the highest amount of carbon. As a result, the vegetation at CICRA was valued at approximately 11 million U.S. dollars. Access to the supply of carbon credits could promote the conservation of these Amazon forests.

  12. Selection rules, RR couplings on non-BPS branes and their all order $\\alpha'$-corrections in type IIA(B) super string theories

    CERN Document Server

    Hatefi, Ehsan

    2013-01-01

    By computing three- and four-point functions of non-BPS scattering amplitudes, including a closed string Ramond-Ramond (RR) state, gauge/scalar fields and a tachyon in type IIA (IIB) superstring theory, we discover a unique expansion for tachyon amplitudes in both the non-BPS and the D-brane anti-D-brane formalism. Based on remarks on Chan-Paton factors and arXiv:1304.3711, we propose selection rules for all non-BPS scattering amplitudes of type II superstring theory. These selection rules rule out various non-BPS higher-point correlation functions of the string theory. The amplitude of a closed string RR with one tachyon and one scalar field, and that of a closed string RR with one tachyon, one scalar and one gauge field, together with their all-order $\alpha'$ higher-derivative corrections, have been explored. Having used some of the new Wess-Zumino terms, the tachyonic DBI action, this universal tachyon expansion and the proposed selection rule, we are able to produce an infinite number of $u'$-channel tachyon and $t$-channel massless scalar poles ...

  13. The NASA Langley Multidisciplinary Uncertainty Quantification Challenge

    Science.gov (United States)

    Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.

    2014-01-01

    This paper presents the formulation of an uncertainty quantification challenge problem consisting of five subproblems. These problems focus on key aspects of uncertainty characterization, sensitivity analysis, uncertainty propagation, extreme-case analysis, and robust design.

  14. Theory of quantum frequency conversion and type-II parametric down-conversion in the high-gain regime

    OpenAIRE

    Christ, Andreas; Brecht, Benjamin; Mauerer, Wolfgang; Silberhorn, Christine

    2012-01-01

    Frequency conversion (FC) and type-II parametric down-conversion (PDC) processes serve as basic building blocks for the implementation of quantum optical experiments: type-II PDC enables the efficient creation of quantum states such as photon-number states and Einstein-Podolsky-Rosen-states (EPR-states). FC gives rise to technologies enabling efficient atom-photon coupling, ultrafast pulse gates and enhanced detection schemes. However, despite their widespread deployment, th...

  15. Type II/F-theory superpotentials and Ooguri-Vafa invariants of compact Calabi-Yau threefolds with three deformations

    Science.gov (United States)

    Xu, Feng-Jun; Yang, Fu-Zhong

    2014-04-01

    We calculate the D-brane superpotentials for two Calabi-Yau manifolds with three deformations by the generalized hypergeometric GKZ systems, which give rise to the flux superpotentials $\mathcal{W}_{GVW}$ of the dual F-theory compactification on the relevant Calabi-Yau fourfolds in the weak decoupling limit. We also compute the Ooguri-Vafa invariants from the A-model expansion with mirror symmetry, which are related to the open Gromov-Witten invariants.

  16. Application of perturbation theory to sensitivity calculations of PWR type reactor cores using the two-channel model

    International Nuclear Information System (INIS)

    Sensitivity calculations are very important in the design and safety of nuclear reactor cores. Large codes with a great number of physical considerations have been used to perform sensitivity studies; however, these codes need long computation times, involving high costs. Perturbation theory has proven to be an efficient and economical method for performing sensitivity analysis. The present work is an application of perturbation theory (matricial formalism) to a simplified model of DNB (Departure from Nucleate Boiling) analysis to perform sensitivity calculations in PWR cores. Expressions to calculate the sensitivity coefficients of enthalpy and coolant velocity with respect to coolant density and hot channel area were developed from the proposed model. The CASNUR.FOR code to evaluate these sensitivity coefficients was written in Fortran. The comparison between results obtained from the matricial formalism of perturbation theory and those obtained directly from the proposed model makes evident the efficiency and potentiality of this perturbation method for sensitivity calculations of nuclear reactor cores (author). 23 refs, 4 figs, 7 tabs
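
    As a minimal sketch of the matricial perturbation idea (illustrative only; the 2x2 system and all names are assumptions, not the CASNUR.FOR formulation): for a steady-state balance A(p) x = b(p), first-order perturbation gives dx/dp = A^-1 (db/dp - (dA/dp) x), so one nominal solve serves every parameter of interest.

        import numpy as np

        def sensitivity(A, b, dA_dp, db_dp):
            # Differentiate A(p) x(p) = b(p):  dA x + A dx = db
            x = np.linalg.solve(A, b)                      # nominal solution
            dx_dp = np.linalg.solve(A, db_dp - dA_dp @ x)  # sensitivity coefficients
            return x, dx_dp

        A = np.array([[2.0, -0.5],
                      [-0.5, 1.5]])       # illustrative two-channel coupling matrix
        b = np.array([1.0, 0.8])          # illustrative source term
        dA_dp = np.array([[0.1, 0.0],
                          [0.0, 0.05]])   # d(A)/d(parameter), e.g. coolant density
        db_dp = np.array([0.0, 0.02])     # d(b)/d(parameter)

        x, dx_dp = sensitivity(A, b, dA_dp, db_dp)
        print("nominal response: ", x)
        print("sensitivity dx/dp:", dx_dp)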

  17. Calculation of Fayet–Iliopoulos D-term in type I string theory revisited: T^6/Z_3 orbifold case

    Energy Technology Data Exchange (ETDEWEB)

    Itoyama, H., E-mail: itoyama@sci.osaka-cu.ac.jp [Department of Mathematics and Physics, Graduate School of Science, Osaka City University (Japan); Osaka City University Advanced Mathematical Institute (OCAMI), 3-3-138, Sugimoto, Sumiyoshi-ku, Osaka 558-8585 (Japan); Yano, Kohei, E-mail: kyano@sci.osaka-cu.ac.jp [Department of Mathematics and Physics, Graduate School of Science, Osaka City University (Japan)

    2013-12-18

    The string one-loop computation of the Fayet–Iliopoulos D-term in type I string theory in the case of T^6/Z_3 orbifold compactification associated with annulus (planar) and Möbius strip string worldsheet diagrams is reexamined. The mass extracted from the sum of these amplitudes through a limiting procedure is found to be non-vanishing, which is contrary to the earlier computation. The sum can be made finite by a rescaling of the modular parameter in the closed string channel.

  18. Study on exploration theory and SAR technology for interlayer oxidation zone sandstone type uranium deposit and its application in Eastern Jungar Basin

    International Nuclear Information System (INIS)

    Starting from an analysis of the metallogenetic epochs and spatial distribution of typical interlayer oxidation zone sandstone type uranium deposits both in China and abroad, and of their relation to basin evolution, the authors propose that the last unconformity mainly controls the metallogenetic epoch, while the strength of structural activity after the last unconformity determines the location of the deposits. An exploration strategy whose core principle is to work from the youngest events back to the oldest is put forward. The means and methods of using SAR technology to identify the key ore-controlling factors are discussed. An application study in the Eastern Jungar Basin is performed

  19. Identification and Quantification of Carbonate Species Using Rock-Eval Pyrolysis.

    OpenAIRE

    Pillot D.; Deville E.; Prinzhofer A.

    2014-01-01

    This paper presents a new, reliable and rapid method to characterise and quantify carbonates in solid samples, based on monitoring the CO2 flux emitted by the progressive thermal decomposition of carbonates during programmed heating. The different decomposition peaks allow the different types of carbonates present in the analysed sample to be determined. The quantification of each peak gives the respective proportions of these diffe...
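
    A toy sketch of the peak-separation step (our illustration, not the authors' exact Rock-Eval procedure; the synthetic Gaussian peaks, window width and threshold are assumptions): locate the decomposition peaks in the CO2 flux curve and integrate a window around each one to estimate the relative proportion of each carbonate species.

        import numpy as np
        from scipy.signal import find_peaks

        T = np.linspace(300.0, 1000.0, 1400)               # temperature ramp, deg C
        flux = (np.exp(-((T - 550.0) / 25.0) ** 2) +       # lower-T carbonate peak
                2.0 * np.exp(-((T - 800.0) / 30.0) ** 2))  # higher-T carbonate peak

        peaks, _ = find_peaks(flux, height=0.1)            # destabilisation peaks
        total = np.trapz(flux, T)                          # total emitted CO2
        for p in peaks:
            lo, hi = max(p - 120, 0), min(p + 120, len(T) - 1)
            share = np.trapz(flux[lo:hi], T[lo:hi]) / total
            print(f"peak at {T[p]:.0f} C: {100 * share:.0f}% of emitted CO2")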

  20. A balance theory of peripheral corticotropin-releasing factor receptor type 1 and type 2 signaling to induce colonic contractions and visceral hyperalgesia in rats.

    Science.gov (United States)

    Nozu, Tsukasa; Takakusaki, Kaoru; Okumura, Toshikatsu

    2014-12-01

    Several recent studies suggest that peripheral corticotropin-releasing factor (CRF) receptor type 1 (CRF1) and CRF2 have a counter-regulatory action on gastrointestinal functions. We hypothesized that the activity balance of each CRF subtype signaling may determine the changes in colonic motility and visceral sensation. Colonic contractions were assessed by perfused manometry, and contractions of colonic muscle strips were measured in vitro in rats. The visceromotor response was determined by measuring contractions of the abdominal muscle in response to colorectal distensions (CRDs) (60 mm Hg for 10 min twice with a 30-min rest). All drugs were administered via the intraperitoneal (ip) route in the in vivo studies. CRF increased colonic contractions. Pretreatment with astressin, a nonselective CRF antagonist, blocked the CRF-induced response, but astressin2-B, a selective CRF2 antagonist, enhanced the response to CRF. Cortagine, a selective CRF1 agonist, increased colonic contractions. In the in vitro study, CRF increased contractions of muscle strips. Urocortin 2, a selective CRF2 agonist, did not itself alter the contractions but blocked the increased response to CRF. The visceromotor response to the second CRD was significantly higher than that to the first. Astressin blocked this CRD-induced sensitization, but astressin2-B or CRF did not affect it. Meanwhile, astressin2-B together with CRF significantly enhanced the sensitization. Urocortin 2 blocked, but cortagine significantly enhanced, the sensitization. These results indicate that peripheral CRF1 signaling enhances colonic contractility and induces visceral sensitization, and that these responses are modulated by peripheral CRF2 signaling. The activity balance of each subtype signaling may determine the colonic functions in response to stress. PMID:25279793

  1. Risk Quantification and Evaluation Modelling

    Directory of Open Access Journals (Sweden)

    Manmohan Singh

    2014-07-01

    Full Text Available In this paper the authors discuss risk quantification methods and the evaluation of risks and decision parameters to be used for ranking critical items, for the prioritization of condition monitoring based risk and reliability centered maintenance (CBRRCM). As time passes, any equipment or product degrades to lower effectiveness and its rate of failure or malfunctioning increases, thereby lowering its reliability. Thus, with the passage of time or a number of active tests or periods of work, the reliability of the product or system may fall to a threshold value below which the reliability should not be allowed to dip. Hence, it is necessary to fix a rational basis for determining the appropriate points in the product life cycle where predictive preventive maintenance may be applied, so that the reliability (the probability of successful functioning) can be enhanced, preferably to its original value, by reducing the failure rate and increasing the mean time between failures. This is particularly important for defence applications, where reliability is of prime concern. An attempt is made to develop a mathematical model for risk assessment and for ranking risks. Based on a likeliness coefficient and a risk coefficient, the ranking of the sub-systems can be modelled and used for CBRRCM. Defence Science Journal, Vol. 64, No. 4, July 2014, pp. 378-384, DOI: http://dx.doi.org/10.14429/dsj.64.6366

  2. Quantification of plant safety status

    International Nuclear Information System (INIS)

    In the process of simplifying the complex state of the plant into a binary state, information loss is inevitable. To minimize this loss, the quantification of plant safety status is formulated through the combination of the probability density function arising from the sensor measurements and the membership function representing the expectation of the state of the system. In this context, a safety index is introduced in an attempt to quantify the plant status from the perspective of safety. The combination of the probability density function and the membership function is achieved through the integration of the fuzzy intersection of the two functions, which is often not a simple task because of the complexity of the resulting function. Therefore, a methodology based on the algebra of logic is used to express the fuzzy intersection and the fuzzy union of arbitrary functions analytically. These exact analytical expressions are then numerically integrated by the Monte Carlo method. Benchmark tests for a rectangular area and for both the fuzzy intersection and union of two normal distribution functions have been performed. Lastly, the safety index was determined for the Core Reactivity Control of Yonggwang 3 and 4 using the presented methodology. 10 refs., 7 figs. (author)
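
    A minimal sketch of the numerical core of this idea, Monte Carlo integration of the fuzzy intersection min(f, mu); the density parameters, membership shape and integration range below are assumed stand-ins, not the paper's functions.

        import numpy as np

        rng = np.random.default_rng(0)

        def pdf(x, mean=0.95, sigma=0.02):          # sensor measurement density
            return np.exp(-0.5 * ((x - mean) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

        def membership(x):                          # expectation of a "safe" state:
            return np.clip((1.05 - x) / 0.10, 0.0, 1.0)   # 1 below 0.95, 0 above 1.05

        a, b = 0.8, 1.2                             # integration range
        x = rng.uniform(a, b, 200_000)              # Monte Carlo sample
        safety_index = (b - a) * np.mean(np.minimum(pdf(x), membership(x)))
        print(f"safety index (overlap of pdf and membership): {safety_index:.4f}")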

  3. Damage quantification of shear buildings using deflections obtained by modal flexibility

    International Nuclear Information System (INIS)

    This paper presents a damage quantification method for shear buildings using the damage-induced inter-storey deflections (DI-IDs) estimated by the modal flexibilities from ambient vibration measurements. This study intends to provide a basis for the damage quantification problem of more complex building structures by investigating a rather idealized type of structure, the shear building. Damage in a structure, represented by a loss of stiffness, generally induces additional deflection, which may contain essential information about the damage. From an analytical investigation, the general equation of damage quantification by the damage-induced deflection is proposed, together with its special case for shear buildings based on the damage-induced inter-storey deflection. The proposed damage quantification method is advantageous compared with conventional FE updating approaches, since the number of variables in the optimization problem depends only on the complexity of the damage parametrization, not on the complexity of the structure. For this reason, the damage quantification for shear buildings is simplified to a form that does not require any FE updating. Numerical and experimental studies on a five-storey shear building were carried out for two damage scenarios with 10% column EI reductions. From the numerical study, it was found that the lower four natural frequencies and mode shapes were enough to keep errors in the deflection estimation and the damage quantification below 1%. From the experimental study, deflections estimated by the modal flexibilities were found to agree well with the deflections obtained from static push-over tests. Damage quantifications by the proposed method were also found to agree well with the true amounts of damage obtained from static push-over tests
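
    A sketch of the modal-flexibility step the method relies on (toy numbers, not the paper's test structure): for a shear building with storey stiffnesses k_i and unit storey masses, the flexibility matrix is recovered from the lowest modes as F ~= sum_i phi_i phi_i^T / w_i^2, and deflections follow as d = F @ load.

        import numpy as np

        k = np.array([800.0, 800.0, 800.0, 800.0, 800.0])   # storey stiffnesses
        n = k.size
        K = np.zeros((n, n))
        for i in range(n):                                  # assemble shear-building K
            K[i, i] += k[i]
            if i + 1 < n:
                K[i, i] += k[i + 1]
                K[i, i + 1] = K[i + 1, i] = -k[i + 1]

        w2, phi = np.linalg.eigh(K)                         # unit masses: M = I
        n_modes = 4                                         # use only the lowest modes
        F = sum(np.outer(phi[:, i], phi[:, i]) / w2[i] for i in range(n_modes))

        deflection = F @ np.ones(n)                         # uniform unit lateral load
        interstorey = np.diff(np.concatenate([[0.0], deflection]))
        print("inter-storey deflections:", interstorey)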

  4. Type II/F-theory Superpotentials and Ooguri-Vafa Invariants of Compact Calabi-Yau Threefolds with Three Deformations

    CERN Document Server

    Xu, Feng-Jun

    2012-01-01

    We construct the generalized hypergeometric GKZ systems for two Calabi-Yau manifolds with three parameters and D-branes wrapped on divisors with single open-string moduli, respectively, and further calculate the D-brane superpotentials which give rise to the flux superpotentials $\\mathcal{W}_{GVW}$ of the dual F-theory compactification on the relevant Calabi-Yau fourfolds in the weak decoupling limit. We also compute the Ooguri-Vafa invariants from A-model expansion with mirror symmetry, which are related to the open Gromov-Witten invariants.

  5. The Types of Axisymmetric Exact Solutions Closely Related to n-SOLITONS for Yang-Mills Theory

    Science.gov (United States)

    Zhong, Zai Zhe

    In this letter, we point out that if a symmetric 2×2 real matrix M(ρ,z) obeys the Belinsky-Zakharov equation and |det(M)|=1, then an axisymmetric Bogomol'nyi field exact solution for the Yang-Mills-Higgs theory can be given. By using the inverse scattering technique, some special Bogomol'nyi field exact solutions, which are closely related to the true solitons, are generated. In particular, the Schwarzschild-like solution is a two-soliton-like solution.

  6. Dynamic behaviors of spin-1/2 bilayer system within Glauber-type stochastic dynamics based on the effective-field theory

    International Nuclear Information System (INIS)

    The dynamic phase transitions (DPTs) and dynamic phase diagrams of the kinetic spin-1/2 bilayer system in the presence of a time-dependent oscillating external magnetic field are studied by using Glauber-type stochastic dynamics based on the effective-field theory with correlations for the ferromagnetic/ferromagnetic (FM/FM), antiferromagnetic/ferromagnetic (AFM/FM) and antiferromagnetic/antiferromagnetic (AFM/AFM) interactions. The time variations of the average magnetizations and the temperature dependence of the dynamic magnetizations are investigated. The dynamic phase diagrams for the amplitude of the oscillating field versus temperature are presented. The results are compared with the results of the same system within Glauber-type stochastic dynamics based on the mean-field theory. - Highlights: • The Ising bilayer system is investigated within the Glauber dynamics based on EFT. • The time variations of average order parameters to find phases are studied. • The dynamic phase diagrams are found for the different interaction parameters. • The system displays the critical points as well as a re-entrant behavior

  7. A $W^1_2$-theory of Stochastic Partial Differential Systems of Divergence type on $C^1$ domains

    CERN Document Server

    Kim, Kyeong-Hun

    2010-01-01

    In this paper we study stochastic partial differential systems of divergence type with $C^1$ space domains in $\mathbb{R}^d$. Existence and uniqueness results are obtained in terms of Sobolev spaces with weights, so that we allow the derivatives of the solution to blow up near the boundary. The coefficients of the systems are only measurable and are allowed to blow up near the boundary.

  8. Quantification of Mycobacterium avium subsp. paratuberculosis Strains Representing Distinct Genotypes and Isolated from Domestic and Wildlife Animal Species by Use of an Automatic Liquid Culture System

    OpenAIRE

    Abendaño, Naiara; Sevilla, Iker; Prieto, José M.; Garrido, Joseba M.; Juste, Ramón A.; Alonso-Hearn, Marta

    2012-01-01

    Quantification of 11 clinical strains of Mycobacterium avium subsp. paratuberculosis isolated from domestic (cattle, sheep, and goat) and wildlife (fallow deer, deer, wild boar, and bison) animal species in an automatic liquid culture system (Bactec MGIT 960) was accomplished. The strains were previously isolated and typed using IS1311 PCR followed by restriction endonuclease analysis (PCR-REA) into type C, S, or B. A strain-specific quantification curve was generated for each M. avium subsp....

  9. Analysis of New Type Air-conditioning for Loom Based on CFD Simulation and Theory of Statistics

    Directory of Open Access Journals (Sweden)

    Ruiliang Yang

    2011-01-01

    Full Text Available Based on statistical theory, the main factors affecting the performance of a loom workshop's combined large- and small-zone ventilation are investigated with CFD simulation in this paper. Firstly, an orthogonal experimental table with four factors and three levels is applied to the CFD simulation, and the ranking of the four factors from major to minor is obtained, which provides a theoretical basis for design and operation. Then the single-factor experiment method is applied, varying one factor while the other factors are held at their best levels. Based on the recommended parameters above, CFD software is used to simulate the relative humidity and PMV at the loom. Lastly, comparison between simulation and experiment verifies the feasibility of the simulation results.

  10. A Cahn-Hilliard-type phase-field theory for species diffusion coupled with large elastic deformations: Application to phase-separating Li-ion electrode materials

    Science.gov (United States)

    Di Leo, Claudio V.; Rejovitzky, Elisha; Anand, Lallit

    2014-10-01

    We formulate a unified framework of balance laws and thermodynamically-consistent constitutive equations which couple Cahn-Hilliard-type species diffusion with large elastic deformations of a body. The traditional Cahn-Hilliard theory, which is based on the species concentration c and its spatial gradient ∇c, leads to a partial differential equation for the concentration which involves fourth-order spatial derivatives in c; this necessitates use of basis functions in finite-element solution procedures that are piecewise smooth and globally C1-continuous. In order to use standard C0-continuous finite-elements to implement our phase-field model, we use a split-method to reduce the fourth-order equation into two second-order partial differential equations (pdes). These two pdes, when taken together with the pde representing the balance of forces, represent the three governing pdes for chemo-mechanically coupled problems. These are amenable to finite-element solution methods which employ standard C0-continuous finite-element basis functions. We have numerically implemented our theory by writing a user-element subroutine for the widely used finite-element program Abaqus/Standard. We use this numerically implemented theory to first study the diffusion-only problem of spinodal decomposition in the absence of any mechanical deformation. Next, we use our fully coupled theory and numerical-implementation to study the combined effects of diffusion and stress on the lithiation of a representative spheroidal-shaped particle of a phase-separating electrode material.
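
    The split referred to above can be written compactly; the following is a standard sketch consistent with the abstract (the paper's exact free energy psi, mobility m and gradient coefficient lambda may differ):

        % Standard split of the fourth-order Cahn-Hilliard equation into two
        % second-order pdes (notation is illustrative):
        \begin{align}
          \mu &= \frac{\partial \psi(c)}{\partial c} - \lambda\, \Delta c
              && \text{(chemical potential: second order in } c\text{)} \\
          \frac{\partial c}{\partial t} &= \nabla \cdot \bigl( m(c)\, \nabla \mu \bigr)
              && \text{(species balance: second order in } \mu\text{)}
        \end{align}

    Substituting the first equation into the second recovers the usual fourth-order Cahn-Hilliard equation, which is why two C0-continuous fields (c and mu) suffice in the finite-element setting.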

  11. Black holes in type IIA string on Calabi-Yau threefolds with affine ADE geometries and q-deformed 2d quiver gauge theories

    Energy Technology Data Exchange (ETDEWEB)

    Ahl Laamara, R. [Lab/UFR-Physique des Hautes Energies, Faculte des Sciences, Rabat (Morocco) and GNPHE, Groupement National de Physique des Hautes Energies, Siege focal: FS, Rabat (Morocco)]. E-mail: ahllaamara@gmail.com; Belhaj, A. [Lab/UFR-Physique des Hautes Energies, Faculte des Sciences, Rabat (Morocco) and GNPHE, Groupement National de Physique des Hautes Energies, Siege focal: FS, Rabat (Morocco) and Virtual African Centre for Basic Science and Technology, Focal point, Lab/UFR-PHE, FSR (Morocco) and Departamento de Fisica Teorica, Universidad de Zaragoza, 50009 Zaragoza (Spain)]. E-mail: belhaj@unizar.es; Drissi, L.B. [Lab/UFR-Physique des Hautes Energies, Faculte des Sciences, Rabat (Morocco) and GNPHE, Groupement National de Physique des Hautes Energies, Siege focal: FS, Rabat (Morocco)]. E-mail: lbdrissi@gmail.com; Saidi, E.H. [Lab/UFR-Physique des Hautes Energies, Faculte des Sciences, Rabat (Morocco) and GNPHE, Groupement National de Physique des Hautes Energies, Siege focal: FS, Rabat (Morocco) and Virtual African Centre for Basic Science and Technology, Focal point, Lab/UFR-PHE, FSR (Morocco) and Academie Hassan II des Sciences and Techniques, College Physique-Chimie (Morocco)]. E-mail: h-saidi@fsr.ac.ma

    2007-08-06

    Motivated by studies on 4d black holes and q-deformed 2d Yang-Mills theory, and borrowing ideas from the compact geometry of the blowing up of affine ADE singularities, we build a class of local Calabi-Yau threefolds (CY^3) extending the local 2-torus model O(m)+O(-m)->T^2 considered in [C. Gomez, S. Montanez, A comment on quantum distribution functions and the OSV conjecture, hep-th/0608162] to test the OSV conjecture. We first study toric realizations of T^2 and then build a toric representation of X_3 using intersections of local Calabi-Yau threefolds O(m)+O(-m-2)->P^1. We develop the 2d N=2 linear sigma-model for this class of toric CY^3s. Then we use these local backgrounds to study the partition function of 4d black holes in type IIA string theory and the underlying q-deformed 2d quiver gauge theories. We also make comments on 4d black holes obtained from D-branes wrapping cycles in O(m)+O(-m-2)->B_k with m=(m_1,...,m_k) a k-dim integer vector and B_k a compact complex one-dimensional base consisting of the intersection of k 2-spheres S_i^2 with generic intersection matrix I_ij. We give as well the explicit expression of the q-deformed path integral measure of the partition function of the 2d quiver gauge theory in terms of I_ij. A comment on the link between our analysis and the construction of [N. Caporaso, M. Cirafici, L. Griguolo, S. Pasquetti, D. Seminara, R.J. Szabo, Topological strings, two-dimensional Yang-Mills theory and Chern-Simons theory on torus bundles, hep-th/0609129] is also given.

  12. Shielding Theory

    Directory of Open Access Journals (Sweden)

    Ion N. Chiuta

    2009-05-01

    Full Text Available The paper determines relations for shielding effectiveness relative to several variables, including metal type, metal properties, thickness, distance, frequency, etc. It starts by presenting some relationships regarding magnetic, electric and electromagnetic fields as a pertinent background to understanding and applying field theory. Since the literature about electromagnetic compatibility is replete with discussions of Maxwell's equations and field theory, only a few aspects are presented.
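
    A worked illustration of the kind of relation the paper derives, sketching the classic far-field estimate SE ~= A + R (absorption plus reflection loss, in dB) for a solid non-magnetic sheet; the material values, the plane-wave assumption and the neglect of the re-reflection term B for electrically thick shields are our assumptions, not the paper's exact formulas.

        import numpy as np

        MU0 = 4e-7 * np.pi
        ETA0 = 377.0                                          # free-space impedance

        def shielding_effectiveness_db(f_hz, t_m, sigma, mu_r=1.0):
            delta = 1.0 / np.sqrt(np.pi * f_hz * mu_r * MU0 * sigma)   # skin depth
            absorption = 8.686 * t_m / delta                           # A, dB
            eta_s = np.sqrt(2 * np.pi * f_hz * mu_r * MU0 / sigma)     # shield impedance
            reflection = 20 * np.log10(ETA0 / (4 * eta_s))             # R, dB
            return absorption + reflection

        # 0.5 mm copper sheet at 10 MHz (illustrative values)
        print(f"SE = {shielding_effectiveness_db(10e6, 0.5e-3, 5.8e7):.0f} dB")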

  13. Perturbation theory

    International Nuclear Information System (INIS)

    After noting some advantages of using perturbation theory, some of the various types are related on a chart and described, including many-body nonlinear summations, quartic force-field fits for geometry, and fourth-order correlation approximations, together with a survey of some recent work. Alternative initial approximations in perturbation theory are also discussed. 25 references

  14. Behaviour of a fibre-type thermal insulation for high temperature reactors under rapid depressurization; theory and measurement

    International Nuclear Information System (INIS)

    The depressurization behaviour of a fibre-type thermal insulation has been investigated both by measurements with air and helium and with numerical models. A simple lumped-parameter model has been used to reproduce the measured transients for air as well as for helium. All the experimental data could be reproduced with reasonable accuracy by fitting two empirical parameters: the effective flow areas of the venting holes and of the perforated tube. It is remarkable that the same parameters reproduce the experimental data for gases as different as air and helium. The dependence of the depressurization behaviour on the different parameters has been treated by dimensional analysis. (Auth.)

  15. Preference for a vanishingly small cosmological constant in supersymmetric vacua in a Type IIB string theory model

    International Nuclear Information System (INIS)

    We study the probability distribution P(Λ) of the cosmological constant Λ in a specific set of KKLT type models of supersymmetric IIB vacua. We show that, as we sweep through the quantized flux values in this flux compactification, P(Λ) diverges as Λ → 0 and the median magnitude of Λ drops exponentially as the number of complex structure moduli h^{2,1} increases. Also, owing to the hierarchical and approximate no-scale structure, the probability of having a positive Hessian (mass-squared matrix) approaches unity as h^{2,1} increases

  16. Fiber fine structure during solar type IV radio bursts: Observations and theory of radiation in presence of localized whistler turbulence

    International Nuclear Information System (INIS)

    Observations with a digital spectrometer within the frequency band between 250 and 273 MHz of fiber fine structures during the type IV solar radio burst of 1978 October 1 are presented and analyzed. The results are summarized in histograms. Typical values for drift rates are in the range between -2.3 and -9.9 MHz s^-1. Frequency intervals between absorption and emission within the spectrum were measured to be within 0.9 and 2.7 MHz. Several types of spectra are discussed. A theoretical interpretation is based upon the model of a population of electrons trapped within a magnetic-mirror loop configuration. It is shown that the fiber emission can be explained assuming an interaction between spatially localized strong whistler turbulence (solitons) and a broad-band Langmuir wave spectrum. Estimates using the observed flux values indicate that a fiber is composed of some 10^11-10^14 solitons occupying a volume of about 10^5-10^8 km^3. Ducting of whistler solitons in low-density magnetic loops provides a plausible explanation for coherent behavior during the lifetime of an individual fiber. The magnetic field strength is found to be 6.2 <= B <= 35 gauss at the radio source and 15.3 <= B <= 76 gauss at the lower hybrid wave level, respectively. The quasi-periodicity of the fiber occurrence is interpreted as periodically switched-on soliton production

  17. An explanatory model of adjustment to type I diabetes based on attachment, coping, and self-regulation theories.

    Science.gov (United States)

    Bazzazian, S; Besharat, M A

    2012-01-01

    The aim of this study was to develop and test a model of adjustment to type I diabetes. Three hundred young adults (172 females and 128 males) with type I diabetes were asked to complete the Adult Attachment Inventory (AAI), the Brief Illness Perception Questionnaire (Brief IPQ), the task-oriented subscale of the Coping Inventory for Stressful Situations (CISS), the D-39, and the well-being subscale of the Mental Health Inventory (MHI). HbA1c was obtained from laboratory examination. Results from structural equation analysis partly supported the hypothesized model. Secure and avoidant attachment styles were found to have effects on illness perception; the ambivalent attachment style did not have a significant effect on illness perception. All three attachment styles had significant effects on the task-oriented coping strategy. Avoidant attachment also had a negative direct effect on adjustment. The regression effects of illness perception and the task-oriented coping strategy on adjustment were positive; therefore, positive illness perception and greater use of the task-oriented coping strategy predict better adjustment to diabetes. The results thus confirm the theoretical basis and empirical evidence for the role of attachment styles in adjustment to chronic disease and can be helpful in devising preventive policies, identifying high-risk maladjusted patients, and planning specialized psychological treatment. PMID:21678193

  18. The synthesis and characterization of Ag-N dual-doped p-type ZnO: experiment and theory.

    Science.gov (United States)

    Duan, Li; Wang, Pei; Yu, Xiaochen; Han, Xiao; Chen, Yongnan; Zhao, Peng; Li, Donglin; Yao, Ran

    2014-03-01

    Ag-N dual-doped ZnO films have been fabricated by a chemical bath deposition method. The p-type conductivity of the dual-doped ZnO:(Ag, N) is stable over a long period of time, and the hole concentration in the ZnO:(Ag, N) is much higher than that in mono-doped ZnO:Ag or ZnO:N. We found that this is because Ag_Zn-N_O complex acceptors can be formed in ZnO:(Ag, N). First-principles calculations show that the complex acceptors generate a fully occupied band above the valence band maximum, so the acceptor levels become shallower and the hole concentration is increased. Furthermore, the binding energy of the Ag-N complex in ZnO is negative, so ZnO:(Ag, N) can be stable. These results indicate that Ag-N dual-doping may be a potential route to achieving high-quality p-type ZnO for use in a variety of devices. PMID:24448605

  19. Effective field theory and Ab-initio calculation of p-type (Ga, Fe)N within LDA and SIC approximation

    International Nuclear Information System (INIS)

    Based on first-principles spin-density functional calculations, using the Korringa–Kohn–Rostoker method combined with the coherent potential approximation, we investigated the half-metallic ferromagnetic behavior of (Ga, Fe)N co-doped with carbon within the self-interaction-corrected local density approximation. The mechanism of hybridization and interaction between magnetic ions in p-type (Ga, Fe)N is investigated. The stability energy of the ferromagnetic and disordered local moment states was calculated for different carbon concentrations. The local density and the self-interaction-corrected approximations have been used to explain the strong ferromagnetic interaction observed and the mechanism that stabilizes this state. The transition temperature to the ferromagnetic state has been calculated within the effective field theory, with a Honmura–Kaneyoshi differential operator technique. - Highlights: • The paper focuses on the magnetic properties and electronic structure of p-type (Ga, Fe)N within the LDA and SIC approximations. • These methods allow us to explain the strong ferromagnetic interaction observed, the mechanism of its stability, and the mechanism of hybridization and interaction between magnetic ions in p-type (Ga, Fe)N. • The results obtained are interesting and can serve as a reference in the field of dilute magnetic semiconductors.

  20. Quantification of seismic risk. Modeling of dependencies

    International Nuclear Information System (INIS)

    The German PSA Guideline (as issued in 2005) includes methods for conducting risk analyses for internal and external hazards. These analyses are part of the comprehensive Probabilistic Safety Analysis (PSA) that has to be performed within the safety reviews for German nuclear power plants (NPP). In the recent past, the analytical tools for hazards established in this guideline have been challenged, particularly in the quantification of seismically induced core damage probabilities. This paper contains the results of recent research and development activities regarding seismic PSA. New developments are presented for the comprehensive consideration of dependencies in modeling the seismic failure behavior of an NPP. The accident at the Fukushima Dai-ichi NPP in March 2011 gave reason and indications for re-checking the models and results of seismic risk calculations. Based on a general definition of risk, a model for estimating the seismically induced core damage probability is derived stepwise. It is assumed that the results of site-specific Probabilistic Seismic Hazard Assessments (PSHA) are known for the NPP site under consideration. All possible hazard, event, structure, system or component dependencies which have to be considered in case of an earthquake are identified, analysed and assessed, and proposals for modelling each type of dependency are presented. The following dependencies are considered in this context: hazard-related dependencies, dependencies on the level of initiating events, and dependencies regarding the failure of structures, systems and components (SSC). The paper examines to what extent these dependencies have been considered so far in seismic PSA models and what the consequences of neglecting them may be. The search for and the assessment of dependencies will be implemented in the systematic procedure to compile the seismic equipment list (SEL), which contains all SSC whose failures contribute to the seismically induced core damage probability. In a seismic PSA, a vast quantity of data sets has to be handled to characterize SSC fragilities (which depend on the intensity of the earthquake) as well as all types of dependencies. For that purpose, a database is being developed. (author)

  1. Nonresonant Beam-Type Cyclotron Instability at the Supernova Shock: Analytical Linear Theory and Numerical Simulation of the Nonlinear Evolution

    Science.gov (United States)

    Shapiro, Vitali D.; Quest, Kevin B.; Okolicsanyi, Marco

    1996-11-01

    Upstream from the supernova quasiparallel shock wave, the hot component representing cosmic rays is locked at the shock front due to scattering in the magnetic field fluctuations and is accelerated by the diffusive shock acceleration mechanism. The cosmic ray population, being isotropic in the shock frame and drifting with the shock velocity relative to the interstellar plasma, is shown to be unstable with respect to a non-resonant beam-type electromagnetic instability similar to the fire-hose instability. Analysis of the dispersion relation of the instability shows that for the power-law distribution of cosmic ray particles f(p) ∝ p^-4 resulting from diffusive shock acceleration, the maximum growth rate of magnetic fluctuations occurs at wavelengths comparable to the gyroradius at the upper edge of the energy spectrum. The diffusion of cosmic ray particles caused by such extremely long fluctuations of the magnetic field is calculated and is shown to be especially important for the most energetic part of the cosmic rays.

  2. Electronic structure of cage-type ternaries ARu2Al10 – Theory and XPS experiment (A = Ce and U)

    International Nuclear Information System (INIS)

    Highlights: • Electronic structures of (U,Ce)Ru2Al10 probed by X-ray photoemission and ab initio calculations. • Good agreement between valence-band XPS and calculated (within LDA) spectra. • More itinerant character of the U 5f than the Ce 4f electrons in these compounds. • Reduced Fermi surface of CeRu2Al10 compared with the U-based system. -- Abstract: The electronic structure of the isomorphic, orthorhombic URu2Al10 and CeRu2Al10 aluminides has been studied by X-ray photoelectron spectroscopy (XPS) and ab initio calculations using the fully relativistic full-potential local-orbital (FPLO) method within the local density approximation (LDA). The calculated data of the former system revealed fairly sharp triple peaks of the U 5f states around the Fermi level (EF) and a large broad contribution from the Ru 4d states extending from EF to about 6.5 eV of binding energy. Although the size and positions of the Ru 4d bands for the latter compound are quite similar to those of the U-based one, the double Ce 4f sharp peaks are placed almost completely above EF, underlining their mostly localized character. We have also analyzed the Fermi surfaces (FSs) of these two aluminides. The calculated results of both ternaries were then compared with our experimental XPS data for URu2Al10 and with such data for CeRu2Al10 available in the literature. The results show fairly good agreement between theory and experiment. In particular, the spectral weight of the Ce 4f electrons below EF turned out to be very much reduced, reflecting a rather small f–c hybridization of these electrons compared to the considerably larger one in the U-based compound

  3. Application of adaptive hierarchical sparse grid collocation to the uncertainty quantification of nuclear reactor simulators

    Energy Technology Data Exchange (ETDEWEB)

    Yankov, A.; Downar, T. [University of Michigan, 2355 Bonisteel Blvd, Ann Arbor, MI 48109 (United States)

    2013-07-01

    Recent efforts in the application of uncertainty quantification to nuclear systems have utilized methods based on generalized perturbation theory and stochastic sampling. While these methods have proven to be effective, they both have major drawbacks that may impede further progress. A relatively new approach based on spectral elements for uncertainty quantification is applied in this paper to several problems in reactor simulation. Spectral methods based on collocation attempt to couple the approximation-free nature of stochastic sampling methods with the determinism of generalized perturbation theory. The specific spectral method used in this paper employs both the Smolyak algorithm and adaptivity by using Newton-Cotes collocation points along with linear hat basis functions. Using this approach, a surrogate model for the outputs of a computer code is constructed hierarchically by adaptively refining the collocation grid until the interpolant is converged to a user-defined threshold. The method inherently fits into the framework of parallel computing and allows for the extraction of meaningful statistics and data that are not within reach of stochastic sampling and generalized perturbation theory. This paper aims to demonstrate the advantages of spectral methods, especially when compared to current methods used in reactor physics for uncertainty quantification, and to illustrate their full potential. (authors)
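
    A one-dimensional toy sketch of the hierarchical refinement idea (our illustration, not the authors' multi-dimensional Smolyak implementation): refine a piecewise-linear hat-function interpolant only where the hierarchical surplus, the mismatch between the model and the current interpolant at a new point, exceeds a threshold.

        import numpy as np

        def model(x):                    # stand-in for an expensive code run
            return np.exp(-3 * x) * np.sin(8 * x)

        def adaptive_interpolate(f, a, b, tol=1e-3):
            nodes = {a: f(a), b: f(b)}
            work = [(a, b)]
            while work:
                lo, hi = work.pop()
                mid = 0.5 * (lo + hi)
                predicted = 0.5 * (nodes[lo] + nodes[hi])   # linear interpolant
                surplus = f(mid) - predicted                # hierarchical surplus
                if abs(surplus) > tol:                      # refine only locally
                    nodes[mid] = predicted + surplus
                    work += [(lo, mid), (mid, hi)]
            return nodes

        nodes = adaptive_interpolate(model, 0.0, 1.0)
        print(f"{len(nodes)} collocation points kept for tol=1e-3")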

  5. Predictive transport modelling of type I ELMy H-mode dynamics using a theory-motivated combined ballooning-peeling model

    International Nuclear Information System (INIS)

    This paper discusses predictive transport simulations of the type I ELMy high confinement mode (H-mode) with a theory-motivated edge localized mode (ELM) model based on linear ballooning and peeling mode stability theory. In the model, a total mode amplitude is calculated as a sum of the individual mode amplitudes given by two separate linear differential equations for the ballooning and peeling mode amplitudes. The ballooning and peeling mode growth rates are represented by mutually analogous terms, which differ from zero upon the violation of a critical pressure gradient and an analytical peeling mode stability criterion, respectively. The damping of the modes due to non-ideal magnetohydrodynamic effects is controlled by a term driving the mode amplitude towards the level of background fluctuations. Coupled to simulations with the JETTO transport code, the model qualitatively reproduces the experimental dynamics of type I ELMy H-mode, including an ELM frequency that increases with the external heating power. The dynamics of individual ELM cycles is studied. Each ELM is usually triggered by a ballooning mode instability. The ballooning phase of the ELM reduces the pressure gradient enough to make the plasma peeling unstable, whereby the ELM continues driven by the peeling mode instability, until the edge current density has been depleted to a stable level. Simulations with current ramp-up and ramp-down are studied as examples of situations in which pure peeling and pure ballooning mode ELMs, respectively, can be obtained. The sensitivity with respect to the ballooning and peeling mode growth rates is investigated. Some consideration is also given to an alternative formulation of the model as well as to a pure peeling model

  6. New spin(7) holonomy metrics admitting G2 holonomy reductions and M-theory/type-IIA dualities

    International Nuclear Information System (INIS)

    As is well known, when D6 branes wrap a special Lagrangian cycle on a noncompact Calabi-Yau threefold in such a way that the internal string frame metric is a Kaehler one there exists a dual description, which is given in terms of a purely geometrical 11-dimensional background with an internal metric of G2 holonomy. It is also known that when D6 branes wrap a coassociative cycle of a noncompact G2 manifold in the presence of a self-dual two-form strength the internal part of the string frame metric is conformal to the G2 metric and there exists a dual description, which is expressed in terms of a purely geometrical 11-dimensional background with an internal noncompact metric of spin(7) holonomy. In the present work it is shown that any G2 metric participating in the first of these dualities necessarily participates in one of the second type. Additionally, several explicit spin(7) holonomy metrics admitting a G2 holonomy reduction along one isometry are constructed. These metrics can be described as R fibrations over a 6-dimensional Kaehler metric, thus realizing the pattern spin(7) → G2 → (Kaehler) mentioned above. Several of these examples are further described as fibrations over the Eguchi-Hanson gravitational instanton and, to the best of our knowledge, have not been previously considered in the literature.

  7. Quantum theory of open systems based on stochastic differential equations of generalized Langevin (non-Wiener) type

    International Nuclear Information System (INIS)

    It is shown that the effective Hamiltonian representation, as formulated in the author's papers, serves as a basis for distinguishing, in a broadband environment of an open quantum system, independent noise sources that determine, in terms of the stationary quantum Wiener and Poisson processes in the Markov approximation, the effective Hamiltonian and the equation for the evolution operator of the open system and its environment. General stochastic differential equations of generalized Langevin (non-Wiener) type for the evolution operator and the kinetic equation for the density matrix of an open system are obtained, which allow one to analyze the dynamics of a wide class of localized open systems in the Markov approximation. The main distinctive features of the dynamics of open quantum systems described in this way are the stabilization of excited states with respect to collective processes and an additional frequency shift of the spectrum of the open system. As an illustration of the general approach developed, the photon dynamics is considered in a single-mode cavity without losses on the mirrors, containing identical intracavity atoms coupled to the external vacuum electromagnetic field. For some atomic densities, the photons of the cavity mode are "locked" inside the cavity, thus exhibiting a new phenomenon of radiation trapping and non-Wiener dynamics.

  8. Computation of geometries and frequencies of singlet and triplet nitromethane with density functional theory by using Gaussian-type orbitals

    International Nuclear Information System (INIS)

    The results of a computational study of the structures, energies, dipole moments and IR spectra of singlet and triplet nitromethane are presented. Five different hybrid (BHandH, BHandHLYP, B3LYP, B3P86 and B3PW91), local (SVWN), and nonlocal (BLYP) DFT methods are used with Gaussian-type basis sets of various sizes. The obtained results are compared to HF, MP2, and MCSCF ab initio calculations, as well as to experimental results. Becke's three-parameter hybrid DFT methods outperform the ab initio methods (HF, MP2 and MCSCF), Becke's half-and-half hybrid DFT methods, and the local (SVWN or LSDA) and nonlocal (BLYP) DFT methods. The computed nitromethane geometry, dipole moment, energy difference, and IR frequencies are in extraordinary agreement with the experimental results. Thus, we recommend B3LYP and B3PW91 as the methods of choice when a computational study of small "difficult" molecules is considered

  9. Instantaneous Wavenumber Estimation for Damage Quantification in Layered Plate Structures

    Science.gov (United States)

    Mesnil, Olivier; Leckey, Cara A. C.; Ruzzene, Massimo

    2014-01-01

    This paper illustrates the application of instantaneous and local wavenumber damage quantification techniques for high-frequency guided wave interrogation. The proposed methodologies can be considered as first steps towards a hybrid structural health monitoring/nondestructive evaluation (SHM/NDE) approach for damage assessment in composites. The challenges and opportunities related to the considered type of interrogation and signal processing are explored through the analysis of numerical data obtained via EFIT simulations of damage in CFRP plates. Realistic damage configurations are modeled from X-ray CT scan data of plates subjected to actual impacts, in order to accurately predict wave-damage interactions in terms of scattering and mode conversions. Simulation data are utilized to enhance the information provided by instantaneous and local wavenumbers and to mitigate the complexity related to the multi-modal content of the plate response. Signal processing strategies considered for this purpose include modal decoupling through filtering in the frequency/wavenumber domain, the combination of displacement components, and the exploitation of polarization information for the various modes as evaluated through the dispersion analysis of the considered laminate lay-up sequence. The results presented assess the effectiveness of the proposed wavefield processing techniques as a hybrid SHM/NDE technique for damage detection and quantification in composite, plate-like structures.
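
    One common instantaneous-wavenumber estimate, sketched here on synthetic data (our illustration, not necessarily the authors' exact processing chain): for a spatial slice u(x) of a single, filtered wave mode, differentiate the unwrapped phase of the analytic signal along x; a local shift in wavenumber flags thickness changes such as delamination damage.

        import numpy as np
        from scipy.signal import hilbert

        dx = 0.001                                  # spatial sampling, m
        x = np.arange(0.0, 0.4, dx)
        k_true = np.where(x < 0.2, 600.0, 900.0)    # wavenumber jumps at "damage"
        u = np.sin(np.cumsum(k_true) * dx)          # synthetic single-mode field

        phase = np.unwrap(np.angle(hilbert(u)))     # analytic-signal phase
        k_inst = np.gradient(phase, dx)             # instantaneous wavenumber, rad/m
        print(f"mean k before 0.2 m: {k_inst[x < 0.18].mean():.0f} rad/m")
        print(f"mean k after  0.2 m: {k_inst[x > 0.22].mean():.0f} rad/m")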

  10. Error quantification of polarimetry-based precipitation estimates from X-band radars

    Science.gov (United States)

    Behnke, K. K.; Diederich, M.; Troemel, S.; Ryzhkov, A.; Simmer, C.

    2012-12-01

    Although theory and derived methods for radar polarimetry advanced considerably during the past decades, the quantitative estimation of precipitation from these measurements is still subject to various error sources. Therefore an integrated quantification of the uncertainties of estimated polarimetric moments, including their projection into rainfall rates, is indispensable. Besides improved hydrometeor typing and raindrop size discrimination, polarimetric moments allow for the correction of attenuation and beam blockage, which are major challenges for the derivation of reliable rainfall estimates, especially at shorter wavelengths like X-band. Moreover, polarimetry provides a multitude of rainfall estimators appropriate for different precipitation regimes and types. This study concentrates on the performance of the estimators R(Z), with Z the attenuation- and beam-blockage-corrected reflectivity factor at horizontal polarization; R(KDP), with KDP the specific differential phase; a combination of both estimators; and finally R(A), using the specific attenuation A derived with the ZPHI method. We analyze observations of the operational German polarimetric X-band twin radar system BoXPol (Bonn) and JüXPol (Jülich), with the two radars separated by about 50 km. The large spatial overlap of the mutual observation areas and several in-situ rain measurements in the same area constitute an ideal testbed for our study. We present two approaches for quantifying the error in the resulting rain rates. The first approach is based on a statistical evaluation of the particle concentration Nw and mean drop diameter Dm analyzed from long-term disdrometer observations in the BoXPol/JüXPol region. The different estimator performances are analyzed by comparing the retrieved rain rates with rain gauge observations in relation to the ranges of Nw observed during different synoptic conditions. In the second approach, the estimated drop size distributions are used to simulate the polarimetric moments in order to investigate the variability of the parameters used in R-Z, R-KDP, and R-A relationships and to compare the simulated with the observed polarimetric moments.
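
    As a reference point for the simplest of these estimators, R(Z), a sketch inverting a power-law Z-R relation (the Marshall-Palmer coefficients below are generic defaults; the study tunes coefficients for X-band and adds the polarimetric R(KDP) and R(A) estimators):

        import numpy as np

        def rain_rate_from_z(z_dbz, a=200.0, b=1.6):
            z_lin = 10.0 ** (z_dbz / 10.0)        # dBZ -> mm^6 m^-3
            return (z_lin / a) ** (1.0 / b)       # rain rate, mm/h

        for dbz in (20, 35, 50):
            print(f"{dbz} dBZ -> {rain_rate_from_z(dbz):.1f} mm/h")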

  11. Efficient uncertainty quantification in unsteady aeroelastic simulations:

    OpenAIRE

    Witteveen, J. A. S.; Bijl, H.

    2009-01-01

    An efficient uncertainty quantification method for unsteady problems is presented in order to achieve a constant accuracy in time for a constant number of samples. The approach is applied to the aeroelastic problems of a transonic airfoil flutter system and the AGARD 445.6 wing benchmark with uncertainties in the flow and the structure.

  12. Application of "Uncertainty Quantification" to railway dynamics problems

    DEFF Research Database (Denmark)

    Bigoni, Daniele; Engsig-Karup, Allan Peter

    2013-01-01

    The paper describes the results of the application of "Uncertainty Quantification" methods in railway vehicle dynamics. The system parameters are given by probability distributions. The results of the application of the Monte-Carlo and generalized Polynomial Chaos methods to a simple bogie model will be discussed.

  13. Recurrence quantification analysis in Liu's attractor

    International Nuclear Information System (INIS)

    Recurrence Quantification Analysis is used to detect transitions from chaos to periodic states, or from chaos to chaos, in a new dynamical system proposed by Liu et al. This system contains a control parameter in the second equation and was originally introduced to investigate the forming mechanism of the compound structure of the chaotic attractor, which exists when the control parameter is zero
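
    A minimal sketch of the underlying construction, a recurrence plot and the simplest RQA measure, the recurrence rate (the noisy sine and the threshold eps are illustrative; for Liu's system one would first integrate the ODEs and embed the resulting time series):

        import numpy as np

        def recurrence_matrix(x, eps):
            d = np.abs(x[:, None] - x[None, :])     # pairwise distances
            return (d < eps).astype(int)            # R_ij = 1 if states recur

        t = np.linspace(0, 40, 1000)
        x = np.sin(t) + 0.1 * np.random.default_rng(2).normal(size=t.size)

        R = recurrence_matrix(x, eps=0.1)
        print(f"recurrence rate: {R.mean():.3f}")   # fraction of recurrent pairs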

  14. Precise protein quantification based on peptide quantification using iTRAQ™

    OpenAIRE

    Sickmann Albert; Altenhöfer Daniela; Pütz Stephanie; Boehm Andreas M; Falk Michael

    2007-01-01

    Background: Mass spectrometry based quantification of peptides can be performed using the iTRAQ™ reagent in conjunction with mass spectrometry. This technology yields information about the relative abundance of single peptides. A method for the calculation of reliable quantification information is required in order to obtain biologically relevant data at the protein expression level. Results: A method comprising sound error estimation and statistical methods is presented that allows ...

  15. Aerobic physical activity and resistance training: an application of the theory of planned behavior among adults with type 2 diabetes in a random, national sample of Canadians

    Directory of Open Access Journals (Sweden)

    Karunamuni Nandini

    2008-12-01

    Full Text Available Background: Aerobic physical activity (PA) and resistance training are paramount in the treatment and management of type 2 diabetes (T2D), but few studies have examined the determinants of both types of exercise in the same sample. Objective: The primary purpose was to investigate the utility of the Theory of Planned Behavior (TPB) in explaining aerobic PA and resistance training in a population sample of T2D adults. Methods: A total of 244 individuals were recruited through a random national sample, created by generating a random list of household phone numbers proportionate to the actual number of household telephone numbers for each Canadian province (with the exception of Quebec). These individuals completed self-report TPB constructs of attitude, subjective norm, perceived behavioral control and intention, and a 3-month follow-up that assessed aerobic PA and resistance training. Results: The TPB explained 10% and 8% of the variance, respectively, for aerobic PA and resistance training, and accounted for 39% and 45% of the variance, respectively, for aerobic PA and resistance training intentions. Conclusion: These results may guide the development of appropriate PA interventions for aerobic PA and resistance training based on the TPB.

  16. Entanglement quantification by local unitaries

    CERN Document Server

    Monras, A; Giampaolo, S M; Gualdi, G; Davies, G B; Illuminati, F

    2011-01-01

    Invariance under local unitary operations is a fundamental property that must be obeyed by every proper measure of quantum entanglement. However, this is not the only aspect of entanglement theory where local unitaries play a relevant role. In the present work we show that the application of suitable local unitary operations defines a family of bipartite entanglement monotones, collectively referred to as "shield entanglement". They are constructed by first considering the (squared) Hilbert-Schmidt distance of the state from the set of states obtained by applying to it a given local unitary. To the action of each different local unitary there corresponds a different distance. We then minimize these distances over the sets of local unitaries with different spectra, obtaining an entire family of different entanglement monotones. We show that these shield entanglement monotones are organized in a hierarchical structure, and we establish the conditions that need to be imposed on the spectrum of a local unitary f...

  17. Lax-Phillips scattering theory with perturbations of the type V(x) = φ(x)/|x|^α, where α = 2 - n/s, φ ∈ L^s(R^n), s > 2 and s ≥ n/2

    International Nuclear Information System (INIS)

    A scattering theory for the wave equation with compactly supported perturbations was developed by Lax-Phillips in 1967. Using the Enss approach, Phillips developed a Lax-Phillips scattering theory with short-range perturbations of the type V(x) = o(1/|x|^α), α > 2. In this paper we develop a scattering theory for more general perturbations, i.e. for V(x) = φ(x)/|x|^α, where α = 2 - n/s, φ ∈ L^s(R^n), s > 2 and s ≥ n/2. Refs

  18. Quantification of dichromatism: a characteristic of color in transparent materials.

    Science.gov (United States)

    Kreft, Samo; Kreft, Marko

    2009-07-01

    The color of a material, such as a solution of a dye, can change with parameters like pH, temperature, illumination direction, and illumination type. Dichromatism -- a color change due to the difference in thickness of the material -- has long been known as a property of only a few materials. Here we show that dichromatism is a common property of many substances and materials, and we introduce a method for its quantification. We define the dichromaticity index (DI) as the difference in hue angle (Delta h(ab)) between the color of the sample at the dilution where the chroma is maximal and the color of a four times more diluted (or thinner) or four times more concentrated (or thicker) sample. The two hue angle differences are called the dichromaticity index toward lighter (DI(L)) and the dichromaticity index toward darker (DI(D)), respectively. High dichromaticity was found for materials that were previously known as dichromatic (pumpkin oil, bromophenol). PMID:19568292
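
    A sketch of the proposed index, assuming CIELAB values have already been measured for a dilution series (the sample readings and helper names are illustrative; hue wraparound at 360 degrees is ignored in this toy):

        import numpy as np

        def hue_angle_deg(a, b):
            return np.degrees(np.arctan2(b, a)) % 360.0

        def dichromaticity(lab_by_dilution):
            # lab_by_dilution: {relative concentration: (L*, a*, b*)}
            chroma = {c: np.hypot(v[1], v[2]) for c, v in lab_by_dilution.items()}
            c_max = max(chroma, key=chroma.get)     # dilution of maximal chroma
            h_ref = hue_angle_deg(*lab_by_dilution[c_max][1:])
            DI_L = hue_angle_deg(*lab_by_dilution[c_max / 4][1:]) - h_ref
            DI_D = hue_angle_deg(*lab_by_dilution[c_max * 4][1:]) - h_ref
            return DI_L, DI_D

        samples = {0.25: (85.0, 10.0, 60.0),        # illustrative Lab readings at
                   1.00: (60.0, 35.0, 75.0),        # relative concentrations
                   4.00: (30.0, 45.0, 20.0)}        # 1/4, 1 and 4
        print(dichromaticity(samples))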

  19. Review of Hydroelasticity Theories

    DEFF Research Database (Denmark)

    Chen, Xu-jun; Wu, You-sheng

    2006-01-01

    Existing hydroelastic theories are reviewed. The theories are classified into different types: two-dimensional linear theory, two-dimensional nonlinear theory, three-dimensional linear theory and three-dimensional nonlinear theory. Applications to the analysis of very large floating structures (VLFS) are reviewed and discussed in detail. Special emphasis is placed on papers from China and Japan (in native languages), as these papers are not widely known in the rest of the world.

  20. The application study on building materials with computer color quantification system

    Science.gov (United States)

    Li, Zhendong; Yu, Haiye; Li, Hongnan; Zhao, Hongxia

    2006-01-01

    The first impression a building makes is through its exterior and decoration, so the quality of the decoration work occupies an important position in a building project. Many projects suffer quality problems because of color differences between materials; such differences are common in ordinary projects and are often found even in high-grade decoration. Knowing how to grasp and control the color variation of building materials, and how to quantify color, is therefore of considerable practical importance. Based on color theory, a computer vision system for quantitative color measurement is established, with standard illuminant A selected as the light source. To standardize color evaluation, the mutual conversion between the RGB and XYZ color spaces is studied and realized with a BP (back-propagation) neural network. Following colorimetric theory, a computer program is written to build the software system and realize quantitative color appraisal over the whole color gamut. The LCH model is used to quantify the color of building materials, and the L*a*b* model is used to compare color changes. If wooden flooring is selected and laid improperly during home decoration, a patchy "flower face" appearance easily results, and laths cut from the same tree can still show considerable color discrepancy. The color quantification system can provide a laying scheme; the color difference problem in laying stone materials is also studied in this paper, and a solution scheme is given using this system.
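
    For orientation, the RGB-to-XYZ mapping that the BP network learns has a well-known closed form for one particular choice of primaries and white point. The sketch below uses the standard sRGB/D65 matrix; the study itself works under standard illuminant A, so this shows the shape of the transform rather than the fitted mapping:

```python
import numpy as np

# Standard sRGB (D65) to CIE XYZ matrix; the paper's BP network learns an
# analogous mapping for its own camera and illuminant-A conditions.
M_SRGB_TO_XYZ = np.array([
    [0.4124564, 0.3575761, 0.1804375],
    [0.2126729, 0.7151522, 0.0721750],
    [0.0193339, 0.1191920, 0.9503041],
])

def srgb_to_xyz(rgb8):
    """8-bit sRGB triple -> XYZ: undo gamma, then apply the linear matrix."""
    c = np.asarray(rgb8, dtype=float) / 255.0
    linear = np.where(c <= 0.04045, c / 12.92, ((c + 0.055) / 1.055) ** 2.4)
    return M_SRGB_TO_XYZ @ linear

print(srgb_to_xyz([200, 120, 60]))   # e.g. a wood-like tone
```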

  1. Uncertainty quantification methodology development for the best-estimate safety analysis

    International Nuclear Information System (INIS)

    This study deals with two approaches to uncertainty quantification methodology. In the first approach, an uncertainty quantification methodology is proposed and applied to the estimation of nuclear reactor fuel peak cladding temperature (PCT) uncertainty. The proposed method adopts the use of Latin hypercube sampling (LHS). The independency between the input variables is verified through a correlation coefficient test. The uncertainty of the output variables is estimated through a goodness-of-fit test on the sample data. In the application, the approach taken to quantifying the total mean and total 95% probability PCTs is given. Emphasis is placed upon the PCT uncertainty estimation due to models' or correlations' uncertainties with the assumption that significant sources of PCT uncertainty are determined. In the second approach, an uncertainty quantification methodology is proposed for a severe accident analysis which has large uncertainties. The proposed method adopts the concept of probabilistic belief measure to transform an analyst's belief on a top event into the equivalent probability of that top event. For the purpose of comparison, analyses are done by 1) applying probability theory regarding the occurring probability of the top event as a physical probability or a frequency, 2) applying fuzzy set theory with a fuzzy-numbered occurring probability of the top event, and 3) transforming the analysts' belief on the top event into an equivalent probability by the probabilistic belief measure method
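
    A minimal sketch of the sampling step in the first approach, with a toy response surface standing in for the thermal-hydraulics code (input distributions, sample size, and the response itself are invented):

```python
import numpy as np
from scipy.stats import norm, qmc

sampler = qmc.LatinHypercube(d=2, seed=42)
u = sampler.random(n=200)                        # LHS design on [0,1)^2
k_ht = norm.ppf(u[:, 0], loc=1.0, scale=0.05)    # heat-transfer multiplier
k_dp = norm.ppf(u[:, 1], loc=1.0, scale=0.03)    # decay-power multiplier

def toy_pct(k_ht, k_dp):
    # hypothetical response: hotter with more decay power, cooler with better HT
    return 1000.0 * k_dp / k_ht                  # Kelvin, illustration only

pct = toy_pct(k_ht, k_dp)
# independence check on the inputs, in the spirit of the correlation coefficient test
print(f"input correlation = {np.corrcoef(k_ht, k_dp)[0, 1]:+.3f}")
print(f"mean PCT = {pct.mean():.1f} K, 95% PCT = {np.percentile(pct, 95):.1f} K")
```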

  2. Quantification of arbuscular mycorrhizal fungal DNA in roots: how important is material preservation?

    Science.gov (United States)

    Janoušková, Martina; Püschel, David; Hujslová, Martina; Slavíková, Renata; Jansa, Jan

    2015-04-01

    Monitoring populations of arbuscular mycorrhizal fungi (AMF) in roots is a pre-requisite for improving our understanding of AMF ecology and functioning of the symbiosis in natural conditions. Among other approaches, quantification of fungal DNA in plant tissues by quantitative real-time PCR is one of the advanced techniques with a great potential to process large numbers of samples and to deliver truly quantitative information. Its application potential would greatly increase if the samples could be preserved by drying, but little is currently known about the feasibility and reliability of fungal DNA quantification from dry plant material. We addressed this question by comparing quantification results based on dry root material to those obtained from deep-frozen roots of Medicago truncatula colonized with Rhizophagus sp. The fungal DNA was well conserved in the dry root samples with overall fungal DNA levels in the extracts comparable with those determined in extracts of frozen roots. There was, however, no correlation between the quantitative data sets obtained from the two types of material, and data from dry roots were more variable. Based on these results, we recommend dry material for qualitative screenings but advocate using frozen root materials if precise quantification of fungal DNA is required. PMID:25186648
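
    The quantitative step of such real-time PCR assays is standard-curve arithmetic: fit Cq against log10 copy number for a dilution series, then invert the fit for unknowns. All values below are invented:

```python
import numpy as np

copies = np.array([1e3, 1e4, 1e5, 1e6, 1e7])       # plasmid standard dilutions
cq_std = np.array([33.1, 29.8, 26.4, 23.0, 19.7])  # measured Cq values

# Linear standard curve: Cq = slope * log10(copies) + intercept
slope, intercept = np.polyfit(np.log10(copies), cq_std, 1)
efficiency = 10 ** (-1.0 / slope) - 1.0            # ~1.0 means 100% efficient

def copies_from_cq(cq):
    return 10 ** ((cq - intercept) / slope)

cq_samples = np.array([27.9, 25.2])                # e.g. dry vs. frozen extract
print(f"slope = {slope:.2f}, efficiency = {efficiency:.1%}")
print("estimated fungal DNA copies:", copies_from_cq(cq_samples))
```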

  3. New approach for the quantification of processed animal proteins in feed using light microscopy.

    Science.gov (United States)

    Veys, P; Baeten, V

    2010-07-01

    A revision of European Union's total feed ban on animal proteins in feed will need robust quantification methods, especially for control analyses, if tolerance levels are to be introduced, as for fishmeal in ruminant feed. In 2006, a study conducted by the Community Reference Laboratory for Animal Proteins in feedstuffs (CRL-AP) demonstrated the deficiency of the official quantification method based on light microscopy. The study concluded that the method had to be revised. This paper puts forward an improved quantification method based on three elements: (1) the preparation of permanent slides with an optical adhesive preserving all morphological markers of bones necessary for accurate identification and precision counting; (2) the use of a counting grid eyepiece reticle; and (3) new definitions for correction factors for the estimated portions of animal particles in the sediment. This revised quantification method was tested on feeds adulterated at different levels with bovine meat and bone meal (MBM) and fishmeal, and it proved to be effortless to apply. The results obtained were very close to the expected values of contamination levels for both types of adulteration (MBM or fishmeal). Calculated values were not only replicable, but also reproducible. The advantages of the new approach, including the benefits of the optical adhesive used for permanent slide mounting and the experimental conditions that need to be met to implement the new method correctly, are discussed. PMID:20432096

  4. Forest Carbon Leakage Quantification Methods and Their Suitability for Assessing Leakage in REDD

    Directory of Open Access Journals (Sweden)

    Sabine Henders

    2012-01-01

    This paper assesses quantification methods for carbon leakage from forestry activities for their suitability in leakage accounting in a future Reducing Emissions from Deforestation and Forest Degradation (REDD) mechanism. To that end, we first conducted a literature review to identify specific pre-requisites for leakage assessment in REDD. We then analyzed a total of 34 quantification methods for leakage emissions from the Clean Development Mechanism (CDM), the Verified Carbon Standard (VCS), the Climate Action Reserve (CAR), the CarbonFix Standard (CFS), and from scientific literature sources. We screened these methods for the leakage aspects they address in terms of leakage type, tools used for quantification and the geographical scale covered. Results show that leakage methods can be grouped into nine main methodological approaches, six of which could fulfill the recommended REDD leakage requirements if approaches for primary and secondary leakage are combined. The majority of methods assessed address either primary or secondary leakage; the former mostly on a local or regional scale and the latter on a national scale. The VCS is found to be the only carbon accounting standard at present to fulfill all leakage quantification requisites in REDD. However, a lack of accounting methods was identified for international leakage, which was addressed by only two methods, both from scientific literature.

  5. Programming and Reasoning with Guarded Recursion for Coinductive Types

    OpenAIRE

    Clouston, Ranald; Bizjak, Aleš; Grathwohl, Hans Bugge; Birkedal, Lars

    2015-01-01

    We present the guarded lambda-calculus, an extension of the simply typed lambda-calculus with guarded recursive and coinductive types. The use of guarded recursive types ensures the productivity of well-typed programs. Guarded recursive types may be transformed into coinductive types by a type-former inspired by modal logic and Atkey-McBride clock quantification, allowing the typing of acausal functions. We give a call-by-name operational semantics for the calculus, and defi...

  6. Physiologic upper limits of pore size of different blood capillary types and another perspective on the dual pore theory of microvascular permeability

    Directory of Open Access Journals (Sweden)

    Sarin Hemant

    2010-08-01

    Background: Much of our current understanding of microvascular permeability is based on the findings of classic experimental studies of blood capillary permeability to various-sized lipid-insoluble endogenous and non-endogenous macromolecules. According to the classic small pore theory of microvascular permeability, which was formulated on the basis of the findings of studies on the transcapillary flow rates of various-sized systemically or regionally perfused endogenous macromolecules, transcapillary exchange across the capillary wall takes place through a single population of small pores that are approximately 6 nm in diameter; whereas, according to the dual pore theory of microvascular permeability, which was formulated on the basis of the findings of studies on the accumulation of various-sized systemically or regionally perfused non-endogenous macromolecules in the locoregional tissue lymphatic drainages, transcapillary exchange across the capillary wall also takes place through a separate population of large pores, or capillary leaks, that are between 24 and 60 nm in diameter. The classification of blood capillary types on the basis of differences in the physiologic upper limits of pore size to transvascular flow highlights the differences in the transcapillary exchange routes for the transvascular transport of endogenous and non-endogenous macromolecules across the capillary walls of different blood capillary types. Methods: The findings and published data of studies on capillary wall ultrastructure and capillary microvascular permeability to lipid-insoluble endogenous and non-endogenous molecules from the 1950s to date were reviewed. In this study, the blood capillary types in different tissues and organs were classified on the basis of the physiologic upper limits of pore size to the transvascular flow of lipid-insoluble molecules. Blood capillaries were classified as non-sinusoidal or sinusoidal on the basis of capillary wall basement membrane layer continuity or lack thereof. Non-sinusoidal blood capillaries were further sub-classified as non-fenestrated or fenestrated based on the absence or presence of endothelial cells with fenestrations. The sinusoidal blood capillaries of the liver, myeloid (red) bone marrow, and spleen were sub-classified as reticuloendothelial or non-reticuloendothelial based on the phago-endocytic capacity of the endothelial cells. Results: The physiologic upper limit of pore size for transvascular flow across capillary walls of non-sinusoidal non-fenestrated blood capillaries is less than 1 nm for those with interendothelial cell clefts lined with zona occludens junctions (i.e. brain and spinal cord), and approximately 5 nm for those with clefts lined with macula occludens junctions (i.e. skeletal muscle). The physiologic upper limit of pore size for transvascular flow across the capillary walls of non-sinusoidal fenestrated blood capillaries with diaphragmed fenestrae ranges between 6 and 12 nm (i.e. exocrine and endocrine glands); whereas, the physiologic upper limit of pore size for transvascular flow across the capillary walls of non-sinusoidal fenestrated capillaries with open 'non-diaphragmed' fenestrae is approximately 15 nm (kidney glomerulus).
In the case of the sinusoidal reticuloendothelial blood capillaries of myeloid bone marrow, the transvascular transport of non-endogenous macromolecules larger than 5 nm into the bone marrow interstitial space takes place via reticuloendothelial cell-mediated phago-endocytosis and transvascular release, which is the case for systemic bone marrow imaging agents as large as 60 nm in diameter. Conclusions: The physiologic upper limit of pore size in the capillary walls of most non-sinusoidal blood capillaries to the transcapillary passage of lipid-insoluble endogenous and non-endogenous macromolecules ranges between 5 and 12 nm. Therefore, macromolecules larger than the physiologic upper limits of pore size in the non-sinusoidal blood capillary types generally do not accumulate within the respective tissue interstitial

  7. TiQuant: Software for tissue analysis, quantification and surface reconstruction

    OpenAIRE

    Friebel, Adrian; Neitsch, Johannes; Johann, Tim; Hammad, Seddik; Hengstler, Jan G.; Drasdo, Dirk; Hoehme, Stefan

    2014-01-01

    Motivation: TiQuant is a modular software tool for efficient quantification of biological tissues based on volume data obtained by biomedical image modalities. It includes a number of versatile image and volume processing chains tailored to the analysis of different tissue types which have been experimentally verified. TiQuant implements a novel method for the reconstruction of three-dimensional surfaces of biological systems, data that often cannot be obtained experimentall...

  8. Uncertainty quantification for porous media flows

    International Nuclear Information System (INIS)

    Uncertainty quantification is an increasingly important aspect of many areas of computational science, where the challenge is to make reliable predictions about the performance of complex physical systems in the absence of complete or reliable data. Predicting flows of oil and water through oil reservoirs is an example of a complex system where accuracy in prediction is needed primarily for financial reasons. Simulation of fluid flow in oil reservoirs is usually carried out using large commercially written finite difference simulators solving conservation equations describing the multi-phase flow through the porous reservoir rocks. This paper examines a Bayesian Framework for uncertainty quantification in porous media flows that uses a stochastic sampling algorithm to generate models that match observed data. Machine learning algorithms are used to speed up the identification of regions in parameter space where good matches to observed data can be found
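
    A toy version of the stochastic sampling idea, with a one-parameter analytic stand-in for the reservoir simulator and random-walk Metropolis generating models that match synthetic observations (everything here is invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

def forward(perm):
    """Stand-in for the simulator: water cut vs. time for a permeability multiplier."""
    t = np.linspace(0.0, 1.0, 20)
    return 1.0 - np.exp(-perm * t)

obs = forward(2.5) + rng.normal(0.0, 0.02, 20)     # synthetic observed data

def log_post(perm, sigma=0.02):
    if perm <= 0.0:
        return -np.inf                             # flat prior on perm > 0
    return -np.sum((forward(perm) - obs) ** 2) / (2 * sigma ** 2)

chain, cur, lp = [], 1.0, -np.inf
for _ in range(5000):                              # random-walk Metropolis
    prop = cur + rng.normal(0.0, 0.1)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        cur, lp = prop, lp_prop
    chain.append(cur)

post = np.array(chain[1000:])                      # discard burn-in
print(f"posterior permeability multiplier: {post.mean():.2f} +/- {post.std():.2f}")
```

    In the paper's setting, the expensive forward call is exactly what the machine learning step accelerates; the sampler itself stays the same.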

  9. Assessment of Factors Affecting Self-Care Behavior Among Women With Type 2 Diabetes in Khoy City Diabetes Clinic Using the Extended Theory of Reasoned Action

    Directory of Open Access Journals (Sweden)

    Ebrahim Hajizadeh

    2011-11-01

    Background and Aim: Many studies show that the only way to control diabetes and prevent its debilitating complications is continuous self-care. This study aimed to determine factors affecting the self-care behavior of diabetic women in Khoy City, Iran, based on the extended theory of reasoned action (ETRA). Materials and Methods: A sample of 352 women with type 2 diabetes referring to a diabetes clinic in Khoy City in West Azarbaijan Province, Iran, participated in the study. Appropriate instruments were designed to measure the relevant variables (diabetes knowledge, personal beliefs, subjective norm, self-efficacy, behavioral intention, and self-care behavior) based on ETRA. Reliability and validity of the instruments were determined prior to the study. Statistical analysis of the data was done using SPSS version 16. Results: Based on the data obtained, the proposed model could predict and explain 41% and 26.2% of the variance of behavioral intention and self-care, respectively, in women with type 2 diabetes. The data also indicated that among the constructs of the model, perceived self-efficacy was the strongest predictor of intention for self-care behavior; this construct affected self-care behavior both directly and indirectly. The next strongest predictors were attitudes, social pressures, social norms, and the intervals between patient visits by the treating team. Conclusion: The proposed model predicts self-care behavior very well. Thus, it may form the basis for educational interventions aiming at promoting self-care and, ultimately, controlling diabetes.

  10. Quantification of amyloid precursor protein isoforms using quantification concatamer internal standard.

    Science.gov (United States)

    Chen, Junjun; Wang, Meiyao; Turko, Illarion V

    2013-01-01

    It is likely that expression and/or post-translational generation of various protein isoforms can be indicative of initial pathological changes or pathology development. However, selective quantification of individual protein isoforms remains a challenge, because they simultaneously possess common and unique amino acid sequences. Quantification concatamer (QconCAT) internal standards were originally designed for large-scale proteome quantification and are artificial proteins that are concatamers of tryptic peptides for several proteins. We developed a QconCAT for quantification of various isoforms of amyloid precursor protein (APP). APP-QconCAT includes tryptic peptides that are common for all isoforms of APP concatenated with those tryptic peptides that are unique for specific APP isoforms. Isotope-labeled APP-QconCAT was expressed, purified, characterized, and further used for quantification of total APP, APP695, and amyloid-β (Aβ) in the human frontal cortex from control and severe Alzheimer's disease donors. Potential biological implications of our quantitative measurements are discussed. It is also expected that using APP-QconCAT(s) will advance our understanding of the biological mechanisms by which various APP isoforms are involved in the pathogenesis of Alzheimer's disease. PMID:23186391
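
    The quantitative core of such QconCAT work is isotope-dilution arithmetic: a known amount of the labeled standard is spiked in, and each light/heavy peak-area ratio scales it to the endogenous amount. A sketch with invented numbers:

```python
# Isotope-dilution arithmetic behind QconCAT quantification (values invented).
spiked_heavy_fmol = 50.0       # known amount of labeled QconCAT peptide spiked in

# Light/heavy ratios for a peptide shared by all APP isoforms and for an
# APP695-specific junction peptide:
ratio_total_app = 1.8          # light (endogenous) / heavy (standard)
ratio_app695 = 1.2

total_app = ratio_total_app * spiked_heavy_fmol
app695 = ratio_app695 * spiked_heavy_fmol
other_isoforms = total_app - app695      # remaining isoforms, by difference

print(f"total APP = {total_app:.0f} fmol")
print(f"APP695    = {app695:.0f} fmol ({app695 / total_app:.0%} of total)")
print(f"others    = {other_isoforms:.0f} fmol")
```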

  11. Automated Quantification of Pneumothorax in CT

    OpenAIRE

    Do, Synho; Salvaggio, Kristen; Gupta, Supriya; Kalra, Mannudeep; Ali, Nabeel U.; Pien, Homer

    2012-01-01

    An automated, computer-aided diagnosis (CAD) algorithm for the quantification of pneumothoraces from Multidetector Computed Tomography (MDCT) images has been developed. Algorithm performance was evaluated through comparison to manual segmentation by expert radiologists. A combination of two-dimensional and three-dimensional processing techniques was incorporated to reduce required processing time by two-thirds (as compared to similar techniques). Volumetric measurements on relative pneumothor...

  12. Uncertainty Quantification in Ocean State Estimation

    Science.gov (United States)

    Kalmikov, Alex; Heimbach, Patrick

    2013-07-01

    A Hessian-based method is developed for uncertainty quantification in global ocean state estimation and applied to Drake Passage transport. Large error covariance matrices are evaluated by inverting the Hessian of a model-observation misfit functional. First and second derivative codes of the MIT general circulation model are generated by algorithmic differentiation and used to propagate the uncertainties between observation, control and target variable domains. The dimensionality of the calculations is reduced by eliminating the observation null-space.
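
    In linear-algebra terms the recipe is: invert the misfit Hessian to obtain an error covariance for the controls, then propagate it to a scalar target through the target's gradient. A minimal sketch with an invented Hessian and gradient in place of the adjoint-generated ones:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5                                    # number of control variables (toy size)

A = rng.normal(size=(n, n))
H = A @ A.T + n * np.eye(n)              # misfit Hessian (symmetric positive definite)
g = rng.normal(size=n)                   # gradient of the target quantity
                                         # (e.g. a transport) w.r.t. the controls

cov = np.linalg.inv(H)                   # error covariance of the controls
var_target = g @ cov @ g                 # uncertainty propagated to the target
print(f"target uncertainty (1 sigma): {np.sqrt(var_target):.3f}")
```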

  13. Resource-Bound Quantification for Graph Transformation

    CERN Document Server

    Torrini, Paolo; 10.4204/EPTCS.22.2

    2010-01-01

    Graph transformation has been used to model concurrent systems in software engineering, as well as in biochemistry and life sciences. The application of a transformation rule can be characterised algebraically as construction of a double-pushout (DPO) diagram in the category of graphs. We show how intuitionistic linear logic can be extended with resource-bound quantification, allowing for an implicit handling of the DPO conditions, and how resource logic can be used to reason about graph transformation systems.

  14. Protein quantification using a cleavable reporter peptide.

    Science.gov (United States)

    Duriez, Elodie; Trevisiol, Stephane; Domon, Bruno

    2015-02-01

    Peptide and protein quantification based on isotope dilution and mass spectrometry analysis are widely employed for the measurement of biomarkers and in system biology applications. The accuracy and reliability of such quantitative assays depend on the quality of the stable-isotope labeled standards. Although the quantification using stable-isotope labeled peptides is precise, the accuracy of the results can be severely biased by the purity of the internal standards, their stability and formulation, and the determination of their concentration. Here we describe a rapid and cost-efficient method to recalibrate stable isotope labeled peptides in a single LC-MS analysis. The method is based on the equimolar release of a protein reference peptide (used as surrogate for the protein of interest) and a universal reporter peptide during the trypsinization of a concatenated polypeptide standard. The quality and accuracy of data generated with such concatenated polypeptide standards are highlighted by the quantification of two clinically important proteins in urine samples and compared with results obtained with conventional stable isotope labeled reference peptides. Furthermore, the application of the UCRP standards in complex samples is described. PMID:25411902

  15. Multiplexed quantification for data-independent acquisition.

    Science.gov (United States)

    Minogue, Catherine E; Hebert, Alexander S; Rensvold, Jarred W; Westphall, Michael S; Pagliarini, David J; Coon, Joshua J

    2015-03-01

    Data-independent acquisition (DIA) strategies provide a sensitive and reproducible alternative to data-dependent acquisition (DDA) methods for large-scale quantitative proteomic analyses. Unfortunately, DIA methods suffer from incompatibility with common multiplexed quantification methods, specifically stable isotope labeling approaches such as isobaric tags and stable isotope labeling of amino acids in cell culture (SILAC). Here we expand the use of neutron-encoded (NeuCode) SILAC to DIA applications (NeuCoDIA), producing a strategy that enables multiplexing within DIA scans without further convoluting the already complex MS2 spectra. We demonstrate duplex NeuCoDIA analysis of both mixed-ratio (1:1 and 10:1) yeast and mouse embryo myogenesis proteomes. Analysis of the mixed-ratio yeast samples revealed the strong accuracy and precision of our NeuCoDIA method, both of which were comparable to our established MS1-based quantification approach. NeuCoDIA also uncovered the dynamic protein changes that occur during myogenic differentiation, demonstrating the feasibility of this methodology for biological applications. We consequently establish DIA quantification of NeuCode SILAC as a useful and practical alternative to DDA-based approaches. PMID:25621425

  16. Rate theory modeling of defect evolution under cascade damage conditions: the influence of vacancy-type cascade remnants and application to the defect production characterization by microstructural analysis

    International Nuclear Information System (INIS)

    Recent computational and experimental studies have confirmed that high energy cascades produce clustered defects of both vacancy- and interstitial-types as well as isolated point defects. However, the production probability, configuration, stability and other characteristics of the cascade clusters are not well understood in spite of the fact that clustered defect production would substantially affect the irradiation-induced microstructures and the consequent property changes in a certain range of temperatures and displacement rates. In this work, a model of point defect and cluster evolution in irradiated materials under cascade damage conditions was developed by combining the conventional reaction rate theory and the results from the latest molecular dynamics simulation studies. This paper provides a description of the model and a model-based fundamental investigation of the influence of configuration, production efficiency and the initial size distribution of cascade-produced vacancy clusters. In addition, using the model, issues on characterizing cascade-induced defect production by microstructural analysis will be discussed. In particular, the determination of cascade vacancy cluster configuration, surviving defect production efficiency and cascade-interaction volume is attempted by analyzing the temperature dependence of swelling rate and loop growth rate in austenitic steels and model alloys. (author)

  17. Nonequilibrium magnetic properties in a two-dimensional kinetic mixed Ising system within the effective-field theory and Glauber-type stochastic dynamics approach.

    Science.gov (United States)

    Ertaş, Mehmet; Deviren, Bayram; Keskin, Mustafa

    2012-11-01

    Nonequilibrium magnetic properties in a two-dimensional kinetic mixed spin-2 and spin-5/2 Ising system in the presence of a time-varying (sinusoidal) magnetic field are studied within the effective-field theory (EFT) with correlations. The time evolution of the system is described by using Glauber-type stochastic dynamics. The dynamic EFT equations are derived by employing the Glauber transition rates for two interpenetrating square lattices. We investigate the time dependence of the magnetizations for different interaction parameter values in order to find the phases in the system. We also study the thermal behavior of the dynamic magnetizations, the hysteresis loop area, and dynamic correlation. The dynamic phase diagrams are presented in the reduced magnetic field amplitude and reduced temperature plane and we observe that the system exhibits dynamic tricritical and reentrant behaviors. Moreover, the system also displays a double critical end point (B), a zero-temperature critical point (Z), a critical end point (E), and a triple point (TP). We also performed a comparison with the mean-field prediction in order to point out the effects of correlations and found that some of the dynamic first-order phase lines, which are artifacts of the mean-field approach, disappeared. PMID:23214741
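
    The Glauber dynamics itself is easy to demonstrate directly. The sketch below runs single-spin-flip Glauber Monte Carlo for a plain spin-1/2 square-lattice Ising model in a sinusoidal field, a much simpler stand-in for the mixed spin-2/spin-5/2 EFT system treated in the paper (lattice size, temperature, and field parameters are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)

L, T, J = 32, 1.8, 1.0                     # lattice size, temperature, coupling
h0, omega = 0.3, 2 * np.pi / 100.0         # field amplitude and angular frequency
s = rng.choice([-1, 1], size=(L, L))

def sweep(s, t):
    h = h0 * np.sin(omega * t)             # time-varying (sinusoidal) field
    for _ in range(L * L):
        i, j = rng.integers(L, size=2)
        nb = s[(i+1) % L, j] + s[(i-1) % L, j] + s[i, (j+1) % L] + s[i, (j-1) % L]
        dE = 2 * s[i, j] * (J * nb + h)    # energy cost of flipping spin (i, j)
        if rng.uniform() < 1.0 / (1.0 + np.exp(dE / T)):   # Glauber rate
            s[i, j] *= -1
    return s

mags = []
for t in range(400):
    s = sweep(s, t)
    mags.append(s.mean())

Q = np.mean(mags[-100:])                   # magnetization over one field period
print(f"dynamic order parameter Q = {Q:+.3f}")
```

    The dynamic order parameter Q (magnetization averaged over one field period) is the quantity whose vanishing or non-vanishing marks the dynamic phase transition discussed in the abstract.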

  18. Mathematical Models in Schema Theory

    OpenAIRE

    Burgin, Mark

    2005-01-01

    In this paper, a mathematical schema theory is developed. This theory has three roots: brain theory schemas, grid automata, and block-schemas. In Section 2 of this paper, elements of the theory of grid automata necessary for the mathematical schema theory are presented. In Section 3, elements of brain theory necessary for the mathematical schema theory are presented. In Section 4, other types of schemas are considered. In Section 5, the mathematical schema theory is developed...

  19. Quantification of image registration error

    Science.gov (United States)

    Mahamat, Adoum H.; Shields, Eric A.

    2014-05-01

    Image registration is a digital image processing technique that takes two or more images of a scene in different coordinate systems and transforms them into a single coordinate system. Image registration is a necessary step in many advanced image processing techniques, such as multi-frame super-resolution, so registration accuracy is crucial. While image registration is usually performed on the images themselves, one can perform the registration on metric images as well. This paper presents registration methods and their accuracies at various noise levels for the case of pure translational image motion. Registration techniques are applied to the images themselves as well as to phase congruency images, gradient images, and edge-detected images. This study also investigates registration of under-sampled images. Noise-free images are degraded using three types of noise: additive Gaussian noise, fixed-pattern noise along the column direction, and a combination of the two. The registration error is quantified for two registration algorithms with three different images as a function of the signal-to-noise ratio. A test of the usefulness of image registration and of registration accuracy was performed on the intensity images of a Stokes imaging polarimeter; the Stokes images calculated before and after registration of the intensity images are compared to each other to show the improvement.
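
    For pure translation, one standard registration estimator is phase correlation. The sketch below (not the paper's implementation) recovers a known shift from a synthetic scene at a few signal-to-noise ratios and reports the registration error in pixels:

```python
import numpy as np

def phase_correlation_shift(a, b):
    """Integer shift d such that b is (approximately) a translated by d."""
    F = np.conj(np.fft.fft2(a)) * np.fft.fft2(b)
    F /= np.abs(F) + 1e-12                         # keep phase only
    peak = np.unravel_index(np.argmax(np.fft.ifft2(F).real), a.shape)
    return [p if p <= s // 2 else p - s for p, s in zip(peak, a.shape)]

rng = np.random.default_rng(0)
img = rng.normal(size=(128, 128)).cumsum(0).cumsum(1)   # smooth synthetic scene
shifted = np.roll(img, (5, -3), axis=(0, 1))            # true shift: (5, -3)

for snr in [np.inf, 10.0, 2.0]:
    sd = 0.0 if np.isinf(snr) else img.std() / snr      # additive Gaussian noise
    est = phase_correlation_shift(img + rng.normal(0, sd, img.shape),
                                  shifted + rng.normal(0, sd, img.shape))
    err = np.hypot(est[0] - 5, est[1] + 3)              # registration error, px
    print(f"SNR = {snr}: estimated shift {est}, error {err:.1f} px")
```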

  20. Type I background fields in terms of type IIB ones

    International Nuclear Information System (INIS)

    We choose boundary conditions for open IIB superstring theory which preserve N=1 SUSY. The explicit solution of the boundary conditions yields an effective theory which is symmetric under the world-sheet parity transformation Ω: σ → -σ. We recognize the effective theory as closed type I superstring theory. Its background fields, beside the known Ω-even fields of the initial IIB theory, contain improvements quadratic in the Ω-odd ones

  1. Massive IIA string theory and Matrix theory compactification

    International Nuclear Information System (INIS)

    We propose a Matrix theory approach to Romans' massive Type IIA supergravity. It is obtained by applying the procedure of Matrix theory compactifications to Hull's proposal of the massive Type IIA string theory as M-theory on a twisted torus. The resulting Matrix theory is a super-Yang-Mills theory on large N three-branes with a space-dependent noncommutativity parameter, which is also independently derived by a T-duality approach. We give evidence showing that the energies of a class of physical excitations of the super-Yang-Mills theory show the correct symmetry expected from massive Type IIA string theory in a lightcone quantization

  2. Precise protein quantification based on peptide quantification using iTRAQ™

    Directory of Open Access Journals (Sweden)

    Sickmann Albert

    2007-06-01

    Background: Mass spectrometry based quantification of peptides can be performed using the iTRAQ™ reagent in conjunction with mass spectrometry. This technology yields information about the relative abundance of single peptides. A method for the calculation of reliable quantification information is required in order to obtain biologically relevant data at the protein expression level. Results: A method comprising sound error estimation and statistical methods is presented that allows precise abundance analysis plus error calculation at the peptide as well as at the protein level. This yields the relevant information that is required for quantitative proteomics. Comparing the performance of our method, named Quant, with existing approaches, the error estimation is reliable and offers information for precise bioinformatic models. Quant is shown to generate results that are consistent with those produced by ProQuant™, thus validating both systems. Moreover, the results are consistent with those of Mascot™ 2.2. The MATLAB® scripts of Quant are freely available via http://www.protein-ms.de and http://sourceforge.net/projects/protms/, each under the GNU Lesser General Public License. Conclusion: The software Quant demonstrates improvements in protein quantification using iTRAQ™. Precise quantification data can be obtained at the protein level when using error propagation and adequate visualization. Quant integrates both and additionally provides the possibility to obtain more reliable results by calculation of wise quality measures. Peak area integration has been replaced by the sum of intensities, yielding more reliable quantification results. Additionally, Quant allows the combination of quantitative information obtained by iTRAQ™ with peptide and protein identifications from popular tandem MS identification tools. Hence Quant is a useful tool for the proteomics community and may help improve the analysis of proteomic experimental data. In addition, we have shown that a lognormal distribution fits the data of mass spectrometry based relative peptide quantification.

  3. Protocol for Quantification of Defects in Natural Fibres for Composites

    DEFF Research Database (Denmark)

    Mortensen, Ulrich Andreas; Madsen, Bo

    2014-01-01

    Natural bast-type plant fibres are attracting increasing interest for use in structural composite applications where high-quality fibres with good mechanical properties are required. A protocol for the quantification of defects in natural fibres is presented. The protocol is based on the experimental method of optical microscopy and the image analysis algorithms of the seeded region growing method and Otsu's method. The use of the protocol is demonstrated by examining two types of differently processed flax fibres, giving mean defect contents of 6.9% and 3.9%, a difference which is tested to be statistically significant. The protocol is evaluated with respect to the selection of image analysis algorithms, and Otsu's method is found to be more appropriate than the alternative coefficient-of-variation method. The traditional way of defining defect size by area is compared to the definition of defect size by width, and it is shown that both definitions can be used to give unbiased findings for the comparison between fibre types. Finally, considerations are given with respect to true measures of defect content, the number of determinations, and the number of significant figures used for the descriptive statistics.
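
    Otsu's method reduces to a single data-driven threshold. The sketch below applies it to a synthetic stand-in for a fibre micrograph and reports a defect area fraction, in the spirit of (but not identical to) the protocol; the image, fibre mask, and grey levels are all fabricated:

```python
import numpy as np
from skimage.filters import threshold_otsu

rng = np.random.default_rng(0)

img = rng.normal(0.85, 0.02, (200, 600))                  # bright background
img[80:120, :] = rng.normal(0.35, 0.02, (40, 600))        # the fibre (darker)
img[80:120, 150:165] = rng.normal(0.65, 0.02, (40, 15))   # defect (kink) bands,
img[80:120, 400:410] = rng.normal(0.65, 0.02, (40, 10))   # brighter than fibre

fibre = np.zeros(img.shape, dtype=bool)
fibre[80:120, :] = True                     # fibre region, assumed known here

t = threshold_otsu(img[fibre])              # Otsu threshold within the fibre
defects = fibre & (img > t)
print(f"Otsu threshold = {t:.2f}")
print(f"defect content = {100 * defects.sum() / fibre.sum():.1f}% of fibre area")
```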

  4. Interpretivistic Conception of Quantification: Tool for Enhancing Quality of Life?

    Directory of Open Access Journals (Sweden)

    Denis Larrivee

    2013-11-01

    Quality of life is fast becoming the standard measure of outcome in clinical trials, residential satisfaction, and educational achievement, to name several social settings, with the consequent proliferation of assessment instruments. Yet its interpretation and definition provoke widespread disagreement, thereby rendering the significance of quantification uncertain. Moreover, quality, or qualia, is philosophically distinct from quantity, or quantitas, and so it is unclear how quantification can serve to modulate quality. Is it thus possible for quantification to enhance quality of life? We propose here that an interpretivistic conception of quantification may offer a more valid approach by which to address quality of life in sociological research.

  5. Development of a VHH-Based Erythropoietin Quantification Assay

    DEFF Research Database (Denmark)

    Kol, Stefan; Beuchert Kallehauge, Thomas

    2015-01-01

    Erythropoietin (EPO) quantification during cell line selection and bioreactor cultivation has traditionally been performed with ELISA or HPLC. As these techniques suffer from several drawbacks, we developed a novel EPO quantification assay. A camelid single-domain antibody fragment directed against human EPO was evaluated as a capturing antibody in a label-free biolayer interferometry-based quantification assay. Human recombinant EPO can be specifically detected in Chinese hamster ovary cell supernatants in a sensitive and pH-dependent manner. This method enables rapid and robust quantification of EPO in a high-throughput setting.

  6. Directional biases in phylogenetic structure quantification: a Mediterranean case study.

    Science.gov (United States)

    Molina-Venegas, Rafael; Roquet, Cristina

    2014-06-01

    Recent years have seen an increasing effort to incorporate phylogenetic hypotheses to the study of community assembly processes. The incorporation of such evolutionary information has been eased by the emergence of specialized software for the automatic estimation of partially resolved supertrees based on published phylogenies. Despite this growing interest in the use of phylogenies in ecological research, very few studies have attempted to quantify the potential biases related to the use of partially resolved phylogenies and to branch length accuracy, and no work has examined how tree shape may affect inference of community phylogenetic metrics. In this study, using a large plant community and elevational dataset, we tested the influence of phylogenetic resolution and branch length information on the quantification of phylogenetic structure; and also explored the impact of tree shape (stemminess) on the loss of accuracy in phylogenetic structure quantification due to phylogenetic resolution. For this purpose, we used 9 sets of phylogenetic hypotheses of varying resolution and branch lengths to calculate three indices of phylogenetic structure: the mean phylogenetic distance (NRI), the mean nearest taxon distance (NTI) and phylogenetic diversity (stdPD) metrics. The NRI metric was the less sensitive to phylogenetic resolution, stdPD showed an intermediate sensitivity, and NTI was the most sensitive one; NRI was also less sensitive to branch length accuracy than NTI and stdPD, the degree of sensitivity being strongly dependent on the dating method and the sample size. Directional biases were generally towards type II errors. Interestingly, we detected that tree shape influenced the accuracy loss derived from the lack of phylogenetic resolution, particularly for NRI and stdPD. We conclude that well-resolved molecular phylogenies with accurate branch length information are needed to identify the underlying phylogenetic structure of communities, and also that sensitivity of phylogenetic structure measures to low phylogenetic resolution can strongly differ depending on phylogenetic tree shape. PMID:25076812
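
    Of the three metrics, the NRI-style calculation is the simplest to sketch: the standardized effect size of a community's mean pairwise distance against a null model drawing equally rich assemblages from the species pool. The distance matrix below is synthetic, not a real phylogeny:

```python
import numpy as np

def mpd(dist, sample):
    """Mean pairwise distance among the sampled taxa."""
    sub = dist[np.ix_(sample, sample)]
    return sub[np.triu_indices_from(sub, k=1)].mean()

def nri(dist, sample, n_null=999, seed=0):
    """NRI = -(MPD_obs - mean(MPD_null)) / sd(MPD_null);
    positive values indicate phylogenetic clustering."""
    rng = np.random.default_rng(seed)
    obs = mpd(dist, sample)
    null = np.array([
        mpd(dist, rng.choice(len(dist), size=len(sample), replace=False))
        for _ in range(n_null)])
    return -(obs - null.mean()) / null.std()

rng = np.random.default_rng(1)
x = rng.uniform(size=(20, 2))                    # synthetic 20-species pool
diff = x[:, None, :] - x[None, :, :]
dist = np.hypot(diff[..., 0], diff[..., 1])      # stand-in cophenetic distances
community = np.array([0, 1, 2, 3, 4])            # five co-occurring species
print(f"NRI = {nri(dist, community):+.2f}")
```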

  7. Applying uncertainty quantification to multiphase flow computational fluid dynamics

    Energy Technology Data Exchange (ETDEWEB)

    Gel, A; Garg, R; Tong, C; Shahnam, M; Guenther, C

    2013-07-01

    Multiphase computational fluid dynamics plays a major role in design and optimization of fossil fuel based reactors. There is a growing interest in accounting for the influence of uncertainties associated with physical systems to increase the reliability of computational simulation based engineering analysis. The U.S. Department of Energy's National Energy Technology Laboratory (NETL) has recently undertaken an initiative to characterize uncertainties associated with computer simulation of reacting multiphase flows encountered in energy producing systems such as a coal gasifier. The current work presents the preliminary results in applying non-intrusive parametric uncertainty quantification and propagation techniques with NETL's open-source multiphase computational fluid dynamics software MFIX. For this purpose an open-source uncertainty quantification toolkit, PSUADE developed at the Lawrence Livermore National Laboratory (LLNL) has been interfaced with MFIX software. In this study, the sources of uncertainty associated with numerical approximation and model form have been neglected, and only the model input parametric uncertainty with forward propagation has been investigated by constructing a surrogate model based on data-fitted response surface for a multiphase flow demonstration problem. Monte Carlo simulation was employed for forward propagation of the aleatory type input uncertainties. Several insights gained based on the outcome of these simulations are presented such as how inadequate characterization of uncertainties can affect the reliability of the prediction results. Also a global sensitivity study using Sobol' indices was performed to better understand the contribution of input parameters to the variability observed in response variable.
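
    The forward-propagation and sensitivity steps can be illustrated with a toy response surface standing in for the data-fitted surrogate (pick-and-freeze first-order Sobol' estimator; the inputs and the model are invented):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 20000

def model(x):
    # toy surrogate response; stands in for the fitted response surface
    return x[:, 0] + 0.5 * x[:, 1] ** 2 + 0.2 * x[:, 0] * x[:, 2]

A, B = rng.uniform(size=(N, 3)), rng.uniform(size=(N, 3))
yA, yB = model(A), model(B)
print(f"response mean = {yA.mean():.3f}, std = {yA.std():.3f}")   # forward MC

for i in range(3):                       # first-order Sobol' indices
    ABi = B.copy()
    ABi[:, i] = A[:, i]                  # freeze input i at the A-sample values
    Si = np.mean(yA * (model(ABi) - yB)) / yA.var()
    print(f"S{i + 1} = {Si:.2f}")
```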

  8. Thermal behavior of dynamic magnetizations, hysteresis loop areas and correlations of a cylindrical Ising nanotube in an oscillating magnetic field within the effective-field theory and the Glauber-type stochastic dynamics approach

    International Nuclear Information System (INIS)

    The dynamical aspects of a cylindrical Ising nanotube in the presence of a time-varying magnetic field are investigated within the effective-field theory with correlations and Glauber-type stochastic approach. Temperature dependence of the dynamic magnetizations, dynamic total magnetization, hysteresis loop areas and correlations are investigated in order to characterize the nature of dynamic transitions as well as to obtain the dynamic phase transition temperatures and compensation behaviors. Some characteristic phenomena are found depending on the ratio of the physical parameters in the surface shell and core, i.e., five different types of compensation behaviors in the Néel classification nomenclature exist in the system. -- Highlights: • Kinetic cylindrical Ising nanotube is investigated using the effective-field theory. • The dynamic magnetizations, hysteresis loop areas and correlations are calculated. • The effects of the exchange interactions have been studied in detail. • Five different types of compensation behaviors have been found. • Some characteristic phenomena are found depending on the ratio of the physical parameters.

  9. Thermal behavior of dynamic magnetizations, hysteresis loop areas and correlations of a cylindrical Ising nanotube in an oscillating magnetic field within the effective-field theory and the Glauber-type stochastic dynamics approach

    Energy Technology Data Exchange (ETDEWEB)

    Deviren, Bayram, E-mail: bayram.deviren@nevsehir.edu.tr [Department of Physics, Nevsehir University, 50300 Nevsehir (Turkey); Keskin, Mustafa [Department of Physics, Erciyes University, 38039 Kayseri (Turkey)

    2012-02-20

    The dynamical aspects of a cylindrical Ising nanotube in the presence of a time-varying magnetic field are investigated within the effective-field theory with correlations and Glauber-type stochastic approach. Temperature dependence of the dynamic magnetizations, dynamic total magnetization, hysteresis loop areas and correlations are investigated in order to characterize the nature of dynamic transitions as well as to obtain the dynamic phase transition temperatures and compensation behaviors. Some characteristic phenomena are found depending on the ratio of the physical parameters in the surface shell and core, i.e., five different types of compensation behaviors in the Néel classification nomenclature exist in the system. -- Highlights: • Kinetic cylindrical Ising nanotube is investigated using the effective-field theory. • The dynamic magnetizations, hysteresis loop areas and correlations are calculated. • The effects of the exchange interactions have been studied in detail. • Five different types of compensation behaviors have been found. • Some characteristic phenomena are found depending on the ratio of the physical parameters.

  10. Realistic finite unified theory

    Energy Technology Data Exchange (ETDEWEB)

    Hamidi, S.; Schwarz, J.H.

    1984-11-08

    We suggest that starting from a type I superstring theory one can get to an effective chiral four-dimensional theory with softly broken supersymmetry that is ultraviolet finite to all orders. Previously we classified all two-loop-finite chiral anomaly-free N=1 theories. By requiring that such a theory contain at least three families without mirror partners and the necessary Higgs fields, we are led to a unique theory based on SU(5) and containing exactly three families.

  11. Galois theory in bicategories

    CERN Document Server

    Gomez-Torrecillas, Jose

    2007-01-01

    We develop a Galois (descent) theory for comonads within the framework of bicategories. We give generalizations of Beck's theorem and the Joyal-Tierney theorem. Many examples are provided, including classical descent theory, Hopf-Galois theory over Hopf algebras and Hopf algebroids, Galois theory for corings and group-corings, and Morita-Takeuchi theory for corings. As an application we construct a new type of comatrix corings based on (dual) quasi bialgebras.

  12. Hydration free energy of hard-sphere solute over a wide range of size studied by various types of solution theories

    OpenAIRE

    Matubayasi, N.; Kinoshita, M.; Nakahara, M.

    2007-01-01

    The hydration free energy of hard-sphere solute is evaluated over a wide range of size using the method of energy representation, information-theoretic approach, reference interaction site model, and scaled-particle theory. The former three are distribution function theories and the hydration free energy is formulated to reflect the solution structure through distribution functions. The presence of the volume-dependent term is pointed out for the distribution function theories, and the asympt...

  13. Computer Model Inversion and Uncertainty Quantification in the Geosciences

    Science.gov (United States)

    White, Jeremy T.

    The subject of this dissertation is the use of computer models as data analysis tools in several different geoscience settings, including integrated surface water/groundwater modeling, tephra fallout modeling, geophysical inversion, and hydrothermal groundwater modeling. The dissertation is organized into three chapters, which correspond to three individual publication manuscripts. In the first chapter, a linear framework is developed to identify and estimate the potential predictive consequences of using a simple computer model as a data analysis tool. The framework is applied to a complex integrated surface-water/groundwater numerical model with thousands of parameters. Several types of predictions are evaluated, including particle travel time and surface-water/groundwater exchange volume. The analysis suggests that model simplifications have the potential to corrupt many types of predictions. The implementation of the inversion, including how the objective function is formulated, what minimum of the objective function value is acceptable, and how expert knowledge is enforced on parameters, can greatly influence the manifestation of model simplification. Depending on the prediction, failure to specifically address each of these important issues during inversion is shown to degrade the reliability of some predictions. In some instances, inversion is shown to increase, rather than decrease, the uncertainty of a prediction, which defeats the purpose of using a model as a data analysis tool. In the second chapter, an efficient inversion and uncertainty quantification approach is applied to a computer model of volcanic tephra transport and deposition. The computer model simulates many physical processes related to tephra transport and fallout. The utility of the approach is demonstrated for two eruption events. In both cases, the importance of uncertainty quantification is highlighted by exposing the variability in the conditioning provided by the observations used for inversion. The worth of different types of tephra data to reduce parameter uncertainty is evaluated, as is the importance of different observation error models. The analyses reveal the importance of using tephra granulometry data for inversion, which results in reduced uncertainty for most eruption parameters. In the third chapter, geophysical inversion is combined with hydrothermal modeling to evaluate the enthalpy of an undeveloped geothermal resource in a pull-apart basin located in southeastern Armenia. A high-dimensional gravity inversion is used to define the depth to the contact between the lower-density valley fill sediments and the higher-density surrounding host rock. The inverted basin depth distribution was used to define the hydrostratigraphy for the coupled groundwater-flow and heat-transport model that simulates the circulation of hydrothermal fluids in the system. Evaluation of several different geothermal system configurations indicates that the most likely system configuration is a low-enthalpy, liquid-dominated geothermal system.

  14. Multivariate data analysis for depth resolved chemical classification and quantification of sulfur in SNMS

    International Nuclear Information System (INIS)

    The quantification of elements in quadrupole-based SNMS is hampered by superpositions of atomic and cluster signals. Moreover, the conventional SNMS data evaluation employs only atomic signals to determine elemental concentrations, which does not allow any chemical speciation of the determined elements. Improvements in elemental quantification and additional chemical information can be obtained from kinetic energy analysis and the inclusion of molecular signals in the mass spectra evaluation. With the help of multivariate data analysis techniques, the combined information is used for the first time for a quantitative and chemically distinctive determination of sulfur. The kinetic energy analysis, used to resolve the interference of sulfur with O2 at masses 32-34 D, turned out to be highly important for the new type of evaluation

  15. The Modest, or Quantificational, Account of Truth

    OpenAIRE

    Wolfgang Künne

    2008-01-01

    Truth is a stable, epistemically unconstrained property of propositions, and the concept of truth admits of a non-reductive explanation: that, in a nutshell, is the view for which I argued in Conceptions of Truth. In this paper I try to explain that explanation in a more detailed and, hopefully, more perspicuous way than I did in Ch. 6.2 of the book and to defend its use of sentential quantification against some of the criticisms it has come in for.

  16. Quantification of human performance circadian rhythms.

    Science.gov (United States)

    Freivalds, A; Chaffin, D B; Langolf, G D

    1983-09-01

    The quantification of worker performance changes during a shift is critical to establishing worker productivity. This investigation examined the existence of circadian rhythms in response variables that relate most meaningfully to the physiological and neurological state of the body for three subjects maintaining a resting posture for 25 hours on five separate occasions. Significant circadian variation ranging from 3% to 11% of the mean value was detected for elbow flexion strength, physiological tremor, simple reaction time, information processing rate and critical eye-hand tracking capacity. PMID:6637808
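
    Rhythms of this kind are conventionally quantified by cosinor analysis: least-squares fitting of a cosine with a fixed 24-h period. A sketch on synthetic hourly data (the paper does not state its exact fitting procedure):

```python
import numpy as np

t = np.arange(0.0, 25.0)                 # hours into the 25-h session
rng = np.random.default_rng(0)
# synthetic performance series: 24-h rhythm, peak at hour 15, plus noise
y = 100 + 6 * np.cos(2 * np.pi * (t - 15) / 24) + rng.normal(0, 1.5, t.size)

# cosinor model y = M + beta*cos(wt) + gamma*sin(wt), fitted by least squares
X = np.column_stack([np.ones_like(t),
                     np.cos(2 * np.pi * t / 24),
                     np.sin(2 * np.pi * t / 24)])
m, beta, gamma = np.linalg.lstsq(X, y, rcond=None)[0]

amplitude = np.hypot(beta, gamma)
acrophase = (np.degrees(np.arctan2(gamma, beta)) / 15.0) % 24   # peak time, h
print(f"MESOR = {m:.1f}, amplitude = {amplitude:.1f} "
      f"({100 * amplitude / m:.1f}% of mean), acrophase = {acrophase:.1f} h")
```

    The amplitude expressed as a percentage of the MESOR (rhythm-adjusted mean) corresponds to the 3% to 11% circadian variation reported in the abstract.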

  17. Quantification of extracellular UDP-galactose

    OpenAIRE

    Lazarowski, Eduardo R.

    2009-01-01

    The human P2Y14 receptor is potently activated by UDP-glucose (UDP-Glc), UDP-galactose (UDP-Gal), UDP-N-acetylglucosamine (UDP-GlcNAc), and UDP-glucuronic acid. Recently, cellular release of UDP-Glc and UDP-GlcNAc has been reported, but whether additional UDP-sugars are endogenous agonists for the P2Y14 receptor remains poorly defined. In the present study, we describe an assay for the quantification of UDP-Gal with sub-nanomolar sensitivity. This assay is based on the enzymatic conversion of...

  18. A critique of methods in the quantification of risks, costs and benefits in the Societal choice of energy options

    International Nuclear Information System (INIS)

    A discussion is presented on the assessment of the risks, costs and benefits of proposed nuclear power plants and their alternatives. Topics discussed include: information adequacy and simplifying assumptions; the social cost of information; the quantification of subjective values; the various quantitative methods such as statistical and probability theory; engineering or scientific estimation; the modeling of the ecological, economic and social effects of alternative energy sources. (U.K.)

  19. Processing and quantification of x-ray energy dispersive spectra in the Analytical Electron Microscope

    International Nuclear Information System (INIS)

    Spectral processing in x-ray energy dispersive spectroscopy deals with the extraction of characteristic signals from experimental data. In this text, the four basic procedures for this methodology are reviewed and their limitations outlined. Quantification, on the other hand, deals with the interpretation of the information obtained from spectral processing. Here the limitations are for the most part instrumental in nature. The prospect of higher-voltage operation does not, in theory, present any new problems and may in fact prove to be more desirable, assuming that electron damage effects do not preclude analysis. 28 refs., 6 figs

  20. Quantification of Inositol Hexa-Kis Phosphate in Environmental Samples

    Directory of Open Access Journals (Sweden)

    John A. Tossell

    2012-03-01

    Phosphorus (P) is a major contributor to eutrophication of surface waters, yet a complete understanding of the P cycle remains elusive. Inositol hexa-kis phosphate (IHP) is the primary form of organic P (PO) in the environment and has been implicated as an important sink in aquatic and terrestrial samples. IHP readily forms complexes in the environment due to the 12 acidic sites on the molecule. Quantification of IHP in environmental samples has typically relied on harsh extraction methods that limit understanding of IHP interactions with potential soil and aquatic complexation partners. The ability to quantify IHP in situ at the pH of existing soils provides direct access to the role of IHP in the P cycle. Since it is itself a buffer, adjusting the pH correspondingly alters the charged species of IHP present in soil. Density Functional Theory (DFT) calculations support the charged-species assignments made based on the pKas associated with the IHP molecule. Raman spectroscopy was used to generate pH-dependent spectra of inorganic P (PI) and IHP, as well as of PO from IHP and PI in soil samples. Electrospray ionization mass spectrometry (ESI-MS) was used to quantify IHP-iron complexes in two soil samples using a neutral aqueous extraction.

  1. The Method of Manufactured Universes for validating uncertainty quantification methods

    International Nuclear Information System (INIS)

    The Method of Manufactured Universes is presented as a validation framework for uncertainty quantification (UQ) methodologies and as a tool for exploring the effects of statistical and modeling assumptions embedded in these methods. The framework calls for a manufactured reality from which 'experimental' data are created (possibly with experimental error), an imperfect model (with uncertain inputs) from which simulation results are created (possibly with numerical error), the application of a system for quantifying uncertainties in model predictions, and an assessment of how accurately those uncertainties are quantified. The application presented in this paper manufactures a particle-transport 'universe', models it using diffusion theory with uncertain material parameters, and applies both Gaussian process and Bayesian MARS algorithms to make quantitative predictions about new 'experiments' within the manufactured reality. The results of this preliminary study indicate that, even in a simple problem, the improper application of a specific UQ method or unrealized effects of a modeling assumption may produce inaccurate predictions. We conclude that the validation framework presented in this paper is a powerful and flexible tool for the investigation and understanding of UQ methodologies.
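
    A compressed version of that loop, with a one-dimensional manufactured reality, an imperfect model, and a Gaussian-process discrepancy model standing in for the paper's UQ machinery (functions, noise levels, and kernel are all invented):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

def truth(x):                    # the manufactured universe
    return np.exp(-2 * x) + 0.1 * np.sin(6 * x)

def model(x):                    # the imperfect (diffusion-like) model
    return np.exp(-2 * x)

x_tr = rng.uniform(0, 1, 15)     # "experiments", with measurement error
y_tr = truth(x_tr) + rng.normal(0, 0.01, 15)

gp = GaussianProcessRegressor(RBF(0.2) + WhiteKernel(1e-4), normalize_y=True)
gp.fit(x_tr[:, None], y_tr - model(x_tr))      # learn the model discrepancy

x_te = np.linspace(0, 1, 50)                   # new "experiments"
mu, sd = gp.predict(x_te[:, None], return_std=True)
pred = model(x_te) + mu
covered = np.abs(pred - truth(x_te)) < 2 * sd  # do 2-sigma bands cover truth?
print(f"2-sigma coverage of the manufactured truth: {covered.mean():.0%}")
```

    The point of the exercise, as in the paper, is the last line: because the universe is manufactured, the quality of the quantified uncertainty can be checked against the truth rather than merely asserted.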

  2. Mesh refinement for uncertainty quantification through model reduction

    Science.gov (United States)

    Li, Jing; Stinis, Panos

    2015-01-01

    We present a novel way of deciding when and where to refine a mesh in probability space in order to facilitate uncertainty quantification in the presence of discontinuities in random space. A discontinuity in random space makes the application of generalized polynomial chaos expansion techniques prohibitively expensive. The reason is that for discontinuous problems, the expansion converges very slowly. An alternative to using higher terms in the expansion is to divide the random space in smaller elements where a lower degree polynomial is adequate to describe the randomness. In general, the partition of the random space is a dynamic process since some areas of the random space, particularly around the discontinuity, need more refinement than others as time evolves. In the current work we propose a way to decide when and where to refine the random space mesh based on the use of a reduced model. The idea is that a good reduced model can monitor accurately, within a random space element, the cascade of activity to higher degree terms in the chaos expansion. In turn, this facilitates the efficient allocation of computational resources to the areas of random space where they are more needed. For the Kraichnan-Orszag system, the prototypical system to study discontinuities in random space, we present theoretical results which show why the proposed method is sound and numerical results which corroborate the theory.

  3. The Method of Manufactured Universes for validating uncertainty quantification methods

    Energy Technology Data Exchange (ETDEWEB)

    Stripling, H.F., E-mail: h.stripling@tamu.edu [Nuclear Engineering Department, Texas A and M University, 3133 TAMU, College Station, TX 77843-3133 (United States); Adams, M.L., E-mail: mladams@tamu.edu [Nuclear Engineering Department, Texas A and M University, 3133 TAMU, College Station, TX 77843-3133 (United States); McClarren, R.G., E-mail: rgm@tamu.edu [Nuclear Engineering Department, Texas A and M University, 3133 TAMU, College Station, TX 77843-3133 (United States); Mallick, B.K., E-mail: bmallick@stat.tamu.edu [Department of Statistics, Texas A and M University, 3143 TAMU, College Station, TX 77843-3143 (United States)

    2011-09-15

    The Method of Manufactured Universes is presented as a validation framework for uncertainty quantification (UQ) methodologies and as a tool for exploring the effects of statistical and modeling assumptions embedded in these methods. The framework calls for a manufactured reality from which 'experimental' data are created (possibly with experimental error), an imperfect model (with uncertain inputs) from which simulation results are created (possibly with numerical error), the application of a system for quantifying uncertainties in model predictions, and an assessment of how accurately those uncertainties are quantified. The application presented in this paper manufactures a particle-transport 'universe', models it using diffusion theory with uncertain material parameters, and applies both Gaussian process and Bayesian MARS algorithms to make quantitative predictions about new 'experiments' within the manufactured reality. The results of this preliminary study indicate that, even in a simple problem, the improper application of a specific UQ method or unrealized effects of a modeling assumption may produce inaccurate predictions. We conclude that the validation framework presented in this paper is a powerful and flexible tool for the investigation and understanding of UQ methodologies.

  4. Quantification of abdominal aortic deformation after EVAR

    Science.gov (United States)

    Demirci, Stefanie; Manstad-Hulaas, Frode; Navab, Nassir

    2009-02-01

    Quantification of abdominal aortic deformation is an important requirement for the evaluation of endovascular stenting procedures and the further refinement of stent graft design. During endovascular aortic repair (EVAR), the aortic shape is subject to severe deformation imposed by medical instruments such as guide wires, catheters, and the stent graft. This deformation can affect the flow characteristics and morphology of the aorta, which have been shown to be elicitors of stent graft failure and a reason for the reappearance of aneurysms. We present a method for quantifying the deformation of an aneurysmatic aorta imposed by an inserted stent graft device. The outline of the procedure includes initial rigid alignment of the two abdominal scans, segmentation of abdominal vessel trees, and automatic reduction of their centerline structures to one specified region of interest around the aorta. This is accomplished by preprocessing and remodeling of the pre- and postoperative aortic shapes before performing a non-rigid registration. We further narrow the resulting displacement fields to include only local non-rigid deformation, thereby eliminating all remaining global rigid transformations. Finally, deformations for specified locations can be calculated from the resulting displacement fields. In order to evaluate our method, experiments for the extraction of aortic deformation fields were conducted on 15 patient datasets from EVAR treatment. A visual assessment of the registration results and an evaluation of the use of deformation quantification were performed by two vascular surgeons and one interventional radiologist, all experts in EVAR procedures.
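
    One step of the pipeline, separating global rigid motion from local deformation, lends itself to a compact illustration. The sketch below assumes point correspondences along the centerline and uses standard Kabsch/SVD alignment; the synthetic data and magnitudes are invented, and this is not the authors' registration code.

    ```python
    # Remove the best-fit global rigid transform from corresponding pre-/post-
    # operative centerline points, leaving only the local non-rigid deformation.
    import numpy as np

    def remove_rigid(pre, post):
        cp, cq = pre.mean(0), post.mean(0)
        H = (pre - cp).T @ (post - cq)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflection
        R = Vt.T @ D @ U.T
        rigid = (pre - cp) @ R.T + cq
        return post - rigid  # residual local displacement at each point

    rng = np.random.default_rng(1)
    pre = rng.normal(size=(100, 3))
    theta = 0.1  # synthetic global rotation + translation, plus a local "bulge"
    Rz = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
    post = pre @ Rz.T + np.array([2.0, 0.0, 0.0])
    post[:10] += 0.3                                # local deformation on 10 points
    local = remove_rigid(pre, post)
    print(np.linalg.norm(local, axis=1).round(2)[:12])  # large on bulge, ~0 elsewhere
    ```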

  5. LOCAL DISTANCE’S NEIGHBOURING QUANTIFICATION

    Directory of Open Access Journals (Sweden)

    Meryem Saadoune

    2014-12-01

    Full Text Available Mobility is one of the basic features that define an ad hoc network: an asset that leaves the nodes free to move. This defining aspect of such networks becomes a great disadvantage when it comes to commercial applications; take as an example the automotive networks that allow communication between a group of vehicles. The ad hoc on-demand distance vector (AODV) routing protocol, designed for mobile ad hoc networks, has two main functions. First, it enables route establishment between a source and a destination node by initiating a route discovery process. Second, it maintains the active routes, which means finding alternative routes in the case of a link failure and deleting routes when they are no longer desired. In a highly mobile network those are demanding tasks to be performed efficiently and accurately. In this paper, we focus on the first point, enhancing the local decision of each node in the network by quantifying the mobility of its neighbours. The quantification is built around the RSSI algorithm, a well-known distance-estimation method.
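
    As context for the RSSI-based quantification, the sketch below inverts the standard log-distance path-loss model to estimate a neighbour's distance from received signal strength; the reference power and path-loss exponent are illustrative assumptions, not values from the paper.

    ```python
    # Distance estimation from RSSI via the log-distance path-loss model.
    def distance_from_rssi(rssi_dbm, ref_power_dbm=-40.0, path_loss_exp=2.7):
        """Invert RSSI = ref_power - 10 * n * log10(d) for the distance d in
        metres; ref_power_dbm is the expected RSSI at the 1 m reference distance."""
        return 10 ** ((ref_power_dbm - rssi_dbm) / (10.0 * path_loss_exp))

    # A node could classify a neighbour's mobility by tracking d over time.
    for rssi in (-40, -55, -70):
        print(f"RSSI {rssi} dBm -> ~{distance_from_rssi(rssi):.1f} m")
    ```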

  6. Quantification of ontogenetic allometry in ammonoids.

    Science.gov (United States)

    Korn, Dieter

    2012-01-01

    Ammonoids are well-known objects used for studies on ontogeny and phylogeny, but a quantification of ontogenetic change has not yet been carried out. Their planispirally coiled conchs allow for a study of "longitudinal" ontogenetic data, that is, data of ontogenetic trajectories that can be obtained from a single specimen. Therefore, they provide a good model for ontogenetic studies of geometry in other shelled organisms. Using modifications of three cardinal conch dimensions, computer simulations can model artificial conchs. The trajectories of ontogenetic allometry of these simulations can be analyzed in great detail in a theoretical morphospace. A method for the classification of conch ontogeny and quantification of the degree of allometry is proposed. Using high-precision cross-sections, the allometric conch growth of real ammonoids can be documented and compared. The members of the Ammonoidea show a wide variety of allometric growth, ranging from near isometry to monophasic, biphasic, or polyphasic allometry. Selected examples of Palaeozoic and Mesozoic ammonoids are shown with respect to their degree of change during ontogeny of the conch. PMID:23134208

  7. Cadmium purification and quantification using immunochromatography.

    Science.gov (United States)

    Sasaki, Kazuhiro; Yongvongsoontorn, Nunnarpas; Tawarada, Kei; Ohnishi, Yoshikazu; Arakane, Tamami; Kayama, Fujio; Abe, Kaoru; Oguma, Shinichi; Ohmura, Naoya

    2009-06-10

    One of the pathways by which cadmium enters human beings is through the consumption of agricultural products. The monitoring of cadmium has a significant role in the management of cadmium intake. Cadmium purification and quantification using immunochromatography were conducted in this study as an alternative means of cadmium analysis. The samples used in this study were rice, tomato, lettuce, garden pea, Arabidopsis thaliana (a widely used model organism for studying plants), soil, and fertilizer. The cadmium immunochromatography has been produced from the monoclonal antibody Nx2C3, which recognizes the chelate form of cadmium, Cd·EDTA. The immunochromatography can be used for quantification of cadmium in a range from 0.01 to 0.1 mg/L at 20% mean coefficient of variance. A chelate column employing quaternary ammonium salts was used for the purification of cadmium from HCl extracts of samples. Recoveries of cadmium were near 100%, and the lowest recovery was 76.6% from rice leaves. The estimated cadmium concentrations from the immunochromatography procedure were evaluated by comparison with the results of instrumental analysis (ICP-AES or ICP-MS). By comparison of HCl extracts analyzed by ICP-MS and column eluates analyzed by immunochromatography of the samples, the estimated cadmium concentrations agreed closely, with recoveries from 98 to 116%. PMID:19489614

  8. Estimating influence of cofragmentation on peptide quantification and identification in iTRAQ experiments by simulating multiplexed spectra.

    Science.gov (United States)

    Li, Honglan; Hwang, Kyu-Baek; Mun, Dong-Gi; Kim, Hokeun; Lee, Hangyeore; Lee, Sang-Won; Paek, Eunok

    2014-07-01

    Isobaric tag-based quantification such as iTRAQ and TMT is a promising approach to mass spectrometry-based quantification in proteomics as it provides wide proteome coverage with greatly increased experimental throughput. However, it is known to suffer from inaccurate quantification and identification of a target peptide due to cofragmentation of multiple peptides, which likely leads to under-estimation of differentially expressed peptides (DEPs). A simple method of filtering out cofragmented spectra with less than 100% precursor isolation purity (PIP) would decrease the coverage of iTRAQ/TMT experiments. In order to estimate the impact of cofragmentation on quantification and identification of iTRAQ-labeled peptide samples, we generated multiplexed spectra with varying degrees of PIP by mixing the two MS/MS spectra of 100% PIP obtained in global proteome profiling experiments on gastric tumor-normal tissue pair proteomes labeled by 4-plex iTRAQ. Despite cofragmentation, the simulation experiments showed that more than 99% of multiplexed spectra with PIP greater than 80% were correctly identified by three different database search engines-MODa, MS-GF+, and Proteome Discoverer. Using the multiplexed spectra that have been correctly identified, we estimated the effect of cofragmentation on peptide quantification. In 74% of the multiplexed spectra, however, the cancer-to-normal expression ratio was compressed, and a fair number of spectra showed the "ratio inflation" phenomenon. On the basis of the estimated distribution of distortions on quantification, we were able to calculate cutoff values for DEP detection from cofragmented spectra, which were corrected according to a specific PIP and probability of type I (or type II) error. When we applied these corrected cutoff values to real cofragmented spectra with PIP larger than or equal to 70%, we were able to identify reliable DEPs by removing about 25% of DEPs, which are highly likely to be false positives. Our experimental results provide useful insight into the effect of cofragmentation on isobaric tag-based quantification methods. The simulation procedure as well as the corrected cutoff calculation method could be adopted for quantifying the effect of cofragmentation and reducing false positives (or false negatives) in the DEP identification with general quantification experiments based on isobaric labeling techniques. PMID:24918111
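
    The ratio-compression mechanism described above can be reproduced in a few lines: mixing the reporter-ion intensities of a target peptide with those of a cofragmented interferer at a given precursor isolation purity (PIP) compresses the observed expression ratio toward 1. The channel intensities below are invented for illustration.

    ```python
    # Simulate a multiplexed spectrum and the resulting ratio compression.
    import numpy as np

    def observed_ratio(true_ratio, interferer_ratio, pip):
        target = np.array([1.0, true_ratio])        # reporter channels (normal, cancer)
        noise = np.array([1.0, interferer_ratio])   # cofragmented peptide, ratio ~1
        mixed = pip * target + (1.0 - pip) * noise  # multiplexed spectrum
        return mixed[1] / mixed[0]

    for pip in (1.0, 0.9, 0.8, 0.7):
        print(f"PIP {pip:.0%}: true 4.00 -> observed {observed_ratio(4.0, 1.0, pip):.2f}")
    ```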

  9. Theory of magnetization of p-type Sn1-xGdxTe: Contributions from local moments, lattice diamagnetism and carriers

    International Nuclear Information System (INIS)

    We derive a theory of magnetization for the diluted magnetic semiconductor p-type Sn1-xGdxTe, including the contributions from Gd3+ local moments, carrier-local moment hybridization and lattice diamagnetism, as a function of temperature and magnetic field. The local moment contribution Mlocal is a sum of three contributions: Mlocal = Ms + Mp + Mt, where Ms is the dominant single-spin contribution, and Mp and Mt are the contributions from clusters of two and three spins, respectively. We have also calculated the contribution due to spin-polarized holes for carrier densities of order 10^20 cm^-3, using a k·p model, where p is the momentum operator in the presence of the spin-orbit interaction and k is the hole wave vector. This contribution includes the carrier-local moment hybridization. We have also included a diamagnetic lattice contribution, which comes from inter-band orbital and spin-orbit contributions; in this contribution, the symbol k is used for the electronic wave vector. The local moment contribution is dominant and comes primarily from the isolated spins. However, the two- and three-spin contributions increase with increasing magnetic impurity concentration. The magnitude of the hole-spin polarization is about two orders of magnitude smaller than the local moment contribution even at a field strength of 25 T. However, the magnetization due to the carrier spin density has intrinsic importance due to its role in possible spintronics applications. The lattice diamagnetism shows considerable anisotropy. The total magnetization is calculated from all three contributions: Mlocal, Mc (due to carriers, here holes) and Mdia. We have compared our results with experiment wherever available and the agreement is fairly good. - Highlights: • Local moment contributions include single spins, paired spins and triads. • The single-spin contribution is dominant, but the pair and triad contributions become more important at higher magnetic impurity concentrations. • Diamagnetic and carrier contributions have their intrinsic importance, albeit their contributions are small. • Results are compared with experimental results wherever available and the agreement in most cases is reasonable.

  10. Superspace conformal field theory

    International Nuclear Information System (INIS)

    Conformal sigma models and Wess–Zumino–Witten (WZW) models on coset superspaces provide important examples of logarithmic conformal field theories. They possess many applications to problems in string and condensed matter theory. We review recent results and developments, including the general construction of WZW models on type-I supergroups, the classification of conformal sigma models and their embedding into string theory. (review)

  11. Superspace conformal field theory

    International Nuclear Information System (INIS)

    Conformal sigma models and WZW models on coset superspaces provide important examples of logarithmic conformal field theories. They possess many applications to problems in string and condensed matter theory. We review recent results and developments, including the general construction of WZW models on type I supergroups, the classification of conformal sigma models and their embedding into string theory.

  12. Superspace conformal field theory

    Energy Technology Data Exchange (ETDEWEB)

    Quella, Thomas [Koeln Univ. (Germany). Inst. fuer Theoretische Physik; Schomerus, Volker [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany)

    2013-07-15

    Conformal sigma models and WZW models on coset superspaces provide important examples of logarithmic conformal field theories. They possess many applications to problems in string and condensed matter theory. We review recent results and developments, including the general construction of WZW models on type I supergroups, the classification of conformal sigma models and their embedding into string theory.

  13. Band Calculations for Ce Compounds with AuCu$_{3}$-type Crystal Structure on the basis of Dynamical Mean Field Theory I. CePd$_{3}$ and CeRh$_{3}$

    OpenAIRE

    Sakai, Osamu

    2010-01-01

    Band calculations for Ce compounds with the AuCu$_{3}$-type crystal structure were carried out on the basis of dynamical mean field theory (DMFT). The auxiliary impurity problem was solved by a method named NCA$f^{2}$vc (noncrossing approximation including the $f^{2}$ state as a vertex correction). The calculations take into account the crystal-field splitting, the spin-orbit interaction, and the correct exchange process of the $f^{1} \\rightarrow f^{0},f^{2}$ virtual excitat...

  14. D. M. Armstrong on the Identity Theory of Mind

    OpenAIRE

    Shanjendu Nath

    2013-01-01

    The Identity theory of mind occupies an important place in the history of philosophy. This theory is one of the important representations of the materialistic philosophy. This theory is known as "Materialist Monist Theory of Mind". Sometimes it is called "Type Physicalism", "Type Identity" or "Type-Type Theory" or "Mind-Brain Identity Theory". This theory appears in the philosophical domain as a reaction to the failure of Behaviourism. A number of philosophers developed this theory and among...

  15. Survey and Evaluate Uncertainty Quantification Methodologies

    Energy Technology Data Exchange (ETDEWEB)

    Lin, Guang; Engel, David W.; Eslinger, Paul W.

    2012-02-01

    The Carbon Capture Simulation Initiative (CCSI) is a partnership among national laboratories, industry and academic institutions that will develop and deploy state-of-the-art computational modeling and simulation tools to accelerate the commercialization of carbon capture technologies from discovery to development, demonstration, and ultimately the widespread deployment to hundreds of power plants. The CCSI Toolset will provide end users in industry with a comprehensive, integrated suite of scientifically validated models with uncertainty quantification, optimization, risk analysis and decision making capabilities. The CCSI Toolset will incorporate commercial and open-source software currently in use by industry and will also develop new software tools as necessary to fill technology gaps identified during execution of the project. The CCSI Toolset will (1) enable promising concepts to be more quickly identified through rapid computational screening of devices and processes; (2) reduce the time to design and troubleshoot new devices and processes; (3) quantify the technical risk in taking technology from laboratory-scale to commercial-scale; and (4) stabilize deployment costs more quickly by replacing some of the physical operational tests with virtual power plant simulations. The goal of CCSI is to deliver a toolset that can simulate the scale-up of a broad set of new carbon capture technologies from laboratory scale to full commercial scale. To provide a framework around which the toolset can be developed and demonstrated, we will focus on three Industrial Challenge Problems (ICPs) related to carbon capture technologies relevant to U.S. pulverized coal (PC) power plants. Post combustion capture by solid sorbents is the technology focus of the initial ICP (referred to as ICP A). The goal of the uncertainty quantification (UQ) task (Task 6) is to provide a set of capabilities to the user community for the quantification of uncertainties associated with the carbon capture processes. As such, we will develop, as needed and beyond existing capabilities, a suite of robust and efficient computational tools for UQ to be integrated into a CCSI UQ software framework.

  16. Homogeneity of Inorganic Glasses : Quantification and Ranking

    DEFF Research Database (Denmark)

    Jensen, Martin; Zhang, L.

    2011-01-01

    Homogeneity of glasses is a key factor determining their physical and chemical properties and overall quality. However, quantification of the homogeneity of a variety of glasses is still a challenge for glass scientists and technologists. Here, we show a simple approach by which the homogeneity of different glass products can be quantified and ranked. This approach is based on determination of both the optical intensity and dimension of the striations in glasses. These two characteristic values are obtained using the image processing method established recently. The logarithmic ratio between the dimension and the intensity is used to quantify and rank the homogeneity of glass products. Compared with the refractive index method, the image processing method has a wider detection range and a lower statistical uncertainty.
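
    A minimal sketch of the ranking criterion, assuming the striation dimension and optical intensity have already been extracted by the image-processing step; the numbers are invented stand-ins for real measurements.

    ```python
    # Homogeneity index as the logarithmic ratio of striation dimension to
    # optical intensity, used here only to rank two hypothetical samples.
    import math

    samples = {"glass A": (120.0, 8.0),   # (striation dimension, optical intensity)
               "glass B": (40.0, 15.0)}
    for name, (dim, inten) in samples.items():
        print(f"{name}: homogeneity index log10(D/I) = {math.log10(dim / inten):.2f}")
    ```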

  17. Quantification Methods of Management Skills in Shipping

    Directory of Open Access Journals (Sweden)

    Riana Iren RADU

    2012-04-01

    Full Text Available Romania cannot overcome the financial crisis without business growth, without finding opportunities for economic development and without attracting investment into the country. Successful managers find ways to overcome situations of uncertainty. The purpose of this paper is to determine the managerial skills developed by the Romanian fluvial shipping company NAVROM (hereinafter CNFR NAVROM SA), compared with ten other major competitors in the same domain, using financial information on these companies for the years 2005-2010. To carry out the work, quantification methods for managerial skills are applied to CNFR NAVROM SA Galati, Romania, for example the analysis of financial performance management based on profitability ratios, net profit margin, supplier management, and turnover.

  18. Quantification and Comparison of Network Degree Distributions

    CERN Document Server

    Aliakbary, Sadegh; Movaghar, Ali

    2013-01-01

    The degree distribution of a network is one of its important features. In many applications, we need to compare different networks and for this aim, we can utilize various network features such as clustering coefficient, average path length and also degree distribution. Most of well-known network features (e.g. clustering, average path length, density, average degree, assortativity and modularity) are real numbers and their comparison is a simple task of subtraction. But comparison of degree distributions is more complicated because degree distributions are not simple real numbers. In this paper, we propose a new method for quantification of degree distributions and we also propose a distance metric for comparing two network degree distributions. We evaluate our proposed method and compare the results with two baseline methods: Kolmogorov-Smirnov test and comparison based on fitted power-law exponent. The evaluation shows a great improvement in our method over the baseline methods.
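
    The Kolmogorov-Smirnov baseline the authors compare against can be reproduced directly. The sketch below measures the KS distance between the degree sequences of two synthetic graphs; graph sizes and parameters are arbitrary choices, not the paper's evaluation setup.

    ```python
    # Two-sample KS comparison of the degree distributions of two graphs.
    import networkx as nx
    from scipy.stats import ks_2samp

    g1 = nx.barabasi_albert_graph(1000, 3, seed=1)   # heavy-tailed degrees
    g2 = nx.erdos_renyi_graph(1000, 0.006, seed=1)   # binomial degrees
    d1 = [d for _, d in g1.degree()]
    d2 = [d for _, d in g2.degree()]
    stat, p = ks_2samp(d1, d2)
    print(f"KS distance {stat:.3f} (p = {p:.2e})")   # large distance: different shapes
    ```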

  19. Advanced Uncertainty Quantification For Fission Observables

    Science.gov (United States)

    Neudecker, D.; Talou, P.; Kawano, T.; Tovesson, F.

    2014-04-01

    In order to provide realistic evaluated uncertainties, well-prepared prior information but also adequate quantification of experimental uncertainties and their correlations are needed. In contemporary evaluations, the latter are often estimated roughly or neglected altogether - especially when it comes to correlations of uncertainties between experiments. In this contribution, a statistical analysis of 237Np(n,f) cross section data for 0.5-200 MeV is presented where special care was taken to estimate realistic correlations of uncertainties of the same and between different experiments. It is shown that taking correlations between experiments into account leads to larger evaluated uncertainties in the present example. A program to analyze experimental data and their uncertainties and correlations in a more reproducible and systematic manner is under development at LANL.
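
    The central observation (correlations between experiments enlarge evaluated uncertainties) can be illustrated with a two-point generalized-least-squares average; the measurement values below are invented, not 237Np(n,f) data.

    ```python
    # GLS average of two measurements under varying inter-experiment correlation.
    import numpy as np

    y = np.array([1.52, 1.48])       # two measurements of the same quantity
    sig = np.array([0.05, 0.05])     # their standard uncertainties
    for rho in (0.0, 0.5, 0.9):      # correlation between the experiments
        cov = np.outer(sig, sig) * np.array([[1, rho], [rho, 1]])
        w = np.linalg.solve(cov, np.ones(2))
        var = 1.0 / w.sum()                      # GLS variance of the average
        mean = var * (w @ y)
        print(f"rho={rho}: mean={mean:.3f}, uncertainty={np.sqrt(var):.4f}")
    ```

    As the correlation grows, the two experiments carry less independent information, so the uncertainty of the evaluated mean increases, mirroring the paper's finding.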

  20. Uncertainty quantification of an Aviation Environmental Toolsuite

    International Nuclear Information System (INIS)

    This paper describes uncertainty quantification (UQ) of a complex system computational tool that supports policy-making for aviation environmental impact. The paper presents the methods needed to create a tool that is “UQ-enabled” with a particular focus on how to manage the complexity of long run times and massive input/output datasets. These methods include a process to quantify parameter uncertainties via data, documentation and expert opinion, creating certified surrogate models to accelerate run-times while maintaining confidence in results, and executing a range of mathematical UQ techniques such as uncertainty propagation and global sensitivity analysis. The results and discussion address aircraft performance, aircraft noise, and aircraft emissions modeling

  1. Quantification practices in the nuclear industry

    International Nuclear Information System (INIS)

    In this chapter the quantification of risk practices adopted by the nuclear industries in Germany, Britain and France are examined as representative of the practices adopted throughout Europe. From this examination a number of conclusions are drawn about the common features of the practices adopted. In making this survey, the views expressed in the report of the Task Force on Safety Goals/Objectives appointed by the Commission of the European Communities, are taken into account. For each country considered, the legal requirements for presentation of quantified risk assessment as part of the licensing procedure are examined, and the way in which the requirements have been developed for practical application are then examined. (author)

  2. Recurrence quantification analysis of global stock markets

    Science.gov (United States)

    Bastos, João A.; Caiado, Jorge

    2011-04-01

    This study investigates the presence of deterministic dependencies in international stock markets using recurrence plots and recurrence quantification analysis (RQA). The results are based on a large set of free float-adjusted market capitalization stock indices, covering a period of 15 years. The statistical tests suggest that the dynamics of stock prices in emerging markets is characterized by higher values of RQA measures when compared to their developed counterparts. The behavior of stock markets during critical financial events, such as the burst of the technology bubble, the Asian currency crisis, and the recent subprime mortgage crisis, is analyzed by performing RQA in sliding windows. It is shown that during these events stock markets exhibit a distinctive behavior that is characterized by temporary decreases in the fraction of recurrence points contained in diagonal and vertical structures.
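
    For readers unfamiliar with the RQA measures used here, the sketch below computes recurrence rate (RR) and determinism (DET) for a deterministic and a random series with a simple 1-D embedding; the threshold and series lengths are arbitrary, and a production analysis would use proper delay embedding.

    ```python
    # Recurrence rate and determinism from a scalar series (1-D embedding).
    import numpy as np

    def rqa_measures(x, eps=0.1, lmin=2):
        r = np.abs(x[:, None] - x[None, :]) < eps   # recurrence plot
        n = len(x)
        rr = r.mean()
        diag_pts, total_pts = 0, 0
        for k in range(1, n):                       # off-diagonals; plot is symmetric
            line = r.diagonal(k)
            total_pts += 2 * line.sum()
            run = 0
            for v in np.append(line, False):        # sentinel flushes the last run
                if v:
                    run += 1
                else:
                    if run >= lmin:
                        diag_pts += 2 * run         # points on diagonal lines >= lmin
                    run = 0
        det = diag_pts / total_pts if total_pts else 0.0
        return rr, det

    x = np.empty(400)
    x[0] = 0.4
    for i in range(399):                            # chaotic but deterministic map
        x[i + 1] = 4.0 * x[i] * (1.0 - x[i])
    noise = np.random.default_rng(0).uniform(size=400)
    for name, s in (("logistic map", x), ("white noise", noise)):
        print(name, "RR=%.3f DET=%.3f" % rqa_measures(s))
    ```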

  3. Whole farm quantification of GHG emissions within smallholder farms in developing countries

    Science.gov (United States)

    Seebauer, Matthias

    2014-03-01

    The IPCC has compiled the best available scientific methods into published guidelines for estimating greenhouse gas emissions and emission removals from the land-use sector. In order to evaluate whether existing GHG quantification tools can comprehensively quantify GHG emissions and removals under smallholder conditions, farm-scale quantification was tested with farm data from Western Kenya. After conducting a cluster analysis to identify different farm typologies, GHG quantification was exercised using the VCS SALM methodology complemented with IPCC livestock emission factors and the Cool Farm Tool. The emission profiles of four farm clusters representing the baseline conditions in the year 2009 are compared with 2011, when farmers had adopted sustainable land management practices (SALM). The results demonstrate the variation in both the magnitude of the estimated GHG emissions per ha between different smallholder farm typologies and the emissions estimated by applying two different accounting tools. The farm-scale quantification further shows that the adoption of SALM has a significant impact on emission reductions and removals, and the mitigation benefits range between 4 and 6.5 tCO2 ha-1 yr-1, with significantly different mitigation benefits depending on the typologies of the crop-livestock systems, their different agricultural practices, as well as adoption rates of improved practices. However, the inherent uncertainty related to the emission factors applied by accounting tools has substantial implications for reported agricultural emissions. With regard to uncertainty related to activity data, the assessment confirms the high variability within different farm types as well as between different parameters surveyed to comprehensively quantify GHG emissions within smallholder farms.

  4. Quantification of low levels of fluorine content in thin films

    International Nuclear Information System (INIS)

    Fluorine quantification in thin film samples containing different amounts of fluorine atoms was accomplished by combining proton-Rutherford Backscattering Spectrometry (p-RBS) and proton induced gamma-ray emission (PIGE) using proton beams of 1550 and 2330 keV for p-RBS and PIGE measurements, respectively. The capabilities of the proposed quantification method are illustrated with examples of the analysis of a series of samples of fluorine-doped tin oxides, fluorinated silica, and fluorinated diamond-like carbon films. It is shown that this procedure allows the quantification of F contents as low as 1 at.% in thin films with thicknesses in the 100–400 nm range.

  5. Quantification and disposal of radioactive waste from ITER operation

    International Nuclear Information System (INIS)

    The work on the safety and environment for the Next European Torus (NET) is being performed within the European Fusion Technology Safety and Environment Programme by the NET team and under NET contracts. In the area of NET-oriented investigations concerning waste management and disposal, Studsvik is concentrating on the operational waste from both NET and ITER (International Thermonuclear Experimental Reactor). This paper gives a characterization and quantification of the radioactive waste generated from the operation of ITER during the Physics Phase, and from the replacement of all blanket segments (European shielding blanket option) at the end of the Physics Phase after an integrated first-wall loading of 0.03 MWy/m2. The total activity contents and volumes of packaged waste from the Physics Phase operation and from the blanket replacement are estimated. The waste volume from replacement of the shielding blanket segments of ITER is considerably larger than estimated in earlier calculations for NET due to the fact that the ITER conceptual design includes more of the steel shielding in the removable segments. The waste handling and disposal are described using existing Swedish and German concepts for similar waste categories from nuclear fission reactors. This includes the choice of suitable packagings, intermediate storage time for cooling, and type of repository for final disposal. Some typical cost figures for waste handling are also presented. (orig.)

  6. Fast Padé Transform for Exact Quantification of Time Signals in Magnetic Resonance Spectroscopy

    Science.gov (United States)

    Belkic, Dzevad

    This work employs the fast Padé transform (FPT) for spectral analysis of theoretically generated time signals. The spectral characteristics of these synthesised signals are reminiscent of the corresponding data that are measured experimentally via encoding digitised free induction decay curves from a healthy human brain using Magnetic Resonance Spectroscopy (MRS). In medicine, in vivo MRS is one of the most promising non-invasive diagnostic tools, especially in oncology, due to the provided biochemical information about functionality of metabolites of the scanned tissue. For success of such diagnostics, it is crucial to carry out the most reliable quantifications of the studied time signals. This quantification problem is the harmonic inversion via the spectral decomposition of the given time signal into its damped harmonic constituents. Such a reconstruction finds the unknown total number of resonances, their complex frequencies and the corresponding complex amplitudes. These spectral parameters of the fundamental harmonics give the peak positions, widths, heights, and phases of all the physical resonances. As per the unified theory of quantum-mechanical spectral analysis and signal processing, the FPT represents the exact solver of the quantification problem, which is mathematically ill-conditioned. The exact and unique solution via the FPT is valid for any noiseless synthesised time signal built from an arbitrary number of damped complex exponentials. These attenuated harmonics can appear as a linear combination with both stationary and non-stationary amplitudes. Such sums produce time signals that yield Lorentzian (non-degenerate) and non-Lorentzian (degenerate) spectra for isolated and overlapped resonances from MRS. We give a convergent validation for these virtues of the FPT. This is achieved through the proof-of-principle investigation by developing an algorithmic feasibility for robust and efficient computations of the exact numerical solution of a typical quantification problem from MRS. The systematics in the methodology designed in the present study represent a veritable paradigm shift for solving the quantification problem in MRS with special ramifications in clinical oncology. This is implied by the explicit demonstration of the remarkable ability of the fast Padé transform to unambiguously quantify all the customary spectral structures, ranging from isolated resonances to those that are tightly overlapped and nearly confluent.
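
    The FPT itself does not reduce to a few lines, but the quantification problem it solves does. As a stand-in (explicitly not the Padé transform), the sketch below recovers the complex frequencies of a noiseless synthetic signal of damped exponentials by classical linear prediction, a Prony-type method; all spectral parameters are invented.

    ```python
    # Harmonic inversion of a synthetic FID: recover complex frequencies of
    # damped harmonics via linear prediction (Prony-type stand-in for the FPT).
    import numpy as np

    tau = 1e-3                                      # sampling interval (s)
    freqs = np.array([50.0 + 8j, 120.0 + 15j])      # Hz; imaginary part = damping
    amps = np.array([1.0, 0.5])
    n = np.arange(64)
    signal = (amps * np.exp(2j * np.pi * freqs * tau * n[:, None])).sum(axis=1)

    K = len(freqs)
    # The signal satisfies a K-term linear recurrence whose characteristic
    # polynomial has roots z_k = exp(2*pi*i*f_k*tau).
    A = np.column_stack([signal[k:k + len(signal) - K] for k in range(K)])
    b = signal[K:]
    coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
    roots = np.roots(np.r_[1.0, -coeffs[::-1]])
    recovered = np.log(roots) / (2j * np.pi * tau)
    print(np.sort_complex(recovered))               # ~ [50+8j, 120+15j]
    ```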

  7. Finitely Axiomatized Set Theory: a nonstandard first-order theory implying ZF

    OpenAIRE

    Cabbolet, Marcoen

    2014-01-01

    It is well-known that a finite axiomatization of Zermelo-Fraenkel set theory (ZF) is not possible in the same first-order language. In this note we show that a finite axiomatization is possible if we extend the language of ZF with the new logical concept of `universal quantification over a family of variables indexed in an arbitrary set X'. We axiomatically introduce Finitely Axiomatized Set Theory (FAST), which consists of eleven theorems of ZF plus a new constructive axiom...

  8. Cyclic oligomers in polyamide for food contact material: quantification by HPLC-CLND and single-substance calibration.

    Science.gov (United States)

    Heimrich, M; Bönsch, M; Nickl, H; Simat, T J

    2012-01-01

    Cyclic oligomers are the major substances migrating from polyamide (PA) food contact materials. However, no commercial standards are available for the quantification of these substances. For the first time the quantification of cyclic oligomers was carried out by HPLC coupled with a chemiluminescence nitrogen detector (CLND) and single-substance calibration. The cyclic monomer (MW = 226 Da) and dimer (MW = 452 Da) of PA66 were synthesised, and the equimolar nitrogen response of the CLND to the synthesised oligomers, caprolactam, 6-aminohexanoic acid (monomers of PA6) and caffeine (a typical nitrogen calibrant) was proven. Relative response factors (UVD at 210 nm) referring to caprolactam were determined for cyclic PA6 oligomers from dimer to nonamer, using HPLC-CLND in combination with a UVD. A method for quantification of the cyclic oligomer content in PA materials was introduced using HPLC-CLND analysis and caffeine as a single nitrogen calibrant. The method was applied to the quantification of cyclic PA oligomers in several PA granulates. For two PA6 granulates from different manufacturers, markedly different oligomer contents were found (19.5 versus 13.4 g kg-1). The elution pattern of cyclic oligomers offers the possibility of identifying the PA type and differentiating between PA copolymers and blends. PMID:22329416

  9. Assessment of molecular recognition element for the quantification of human epidermal growth factor using surface plasmon resonance

    Scientific Electronic Library Online (English)

    Ira Amira, Rosti; Ramakrishnan Nagasundara, Ramanan; Tau Chuan, Ling; Arbakariya B, Ariff.

    2013-11-15

    Full Text Available Background: A method for the selection of a suitable molecular recognition element (MRE) for the quantification of human epidermal growth factor (hEGF) using surface plasmon resonance (SPR) is presented. Two types of hEGF antibody, monoclonal and polyclonal, were immobilized on the surface of the chip and [...] validated for their characteristics and performance in the quantification of hEGF. Validation of this analytical procedure was to demonstrate the stability and suitability of the antibody for the quantification of the target protein. Results: Specificity, accuracy and precision for all samples were within acceptable limits for both antibodies. The affinity and kinetic constants of antibody-hEGF binding were evaluated using a 1:1 Langmuir interaction model. The model fitted well to all binding responses simultaneously. The polyclonal antibody (pAb) has better affinity (KD = 7.39e-10 M) than the monoclonal antibody (mAb) (KD = 9.54e-9 M). Further evaluation of the kinetic constants demonstrated that the pAb has a faster reaction rate during sample injection, a slower dissociation rate during buffer injection and a higher level of saturation than the mAb. Besides, the pAb has a longer shelf life and supports a greater number of cycle runs. Conclusions: Thus, the pAb was more suitable to be used as a stable MRE for further quantification work from the consideration of kinetics, binding rate and shelf-life assessment.
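
    The 1:1 Langmuir interaction model used for the kinetic evaluation has a closed-form association phase, so the fit can be sketched compactly; the rate constants, Rmax, analyte concentration and noise level below are illustrative assumptions, not the paper's values.

    ```python
    # Simulate an SPR association phase under the 1:1 Langmuir model and
    # recover KD = kd/ka by nonlinear least squares.
    import numpy as np
    from scipy.optimize import curve_fit

    ka, kd, Rmax, C = 1.0e5, 7.4e-4, 100.0, 5e-8    # 1/(M s), 1/s, RU, M
    t_on = np.linspace(0, 300, 150)

    def association(t, ka_, kd_):
        kobs = ka_ * C + kd_
        return Rmax * ka_ * C / kobs * (1 - np.exp(-kobs * t))

    rng = np.random.default_rng(0)
    data = association(t_on, ka, kd) + rng.normal(0, 0.3, t_on.size)
    (ka_fit, kd_fit), _ = curve_fit(association, t_on, data, p0=(5e4, 1e-3))
    print(f"KD = {kd_fit / ka_fit:.2e} M (true {kd / ka:.2e} M)")
    ```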

  10. Detection and quantification of hogwash oil in soybean oils using low-cost spectroscopy and chemometrics

    Science.gov (United States)

    Mignani, A. G.; Ciaccheri, L.; Mencaglia, A. A.; Cichelli, A.; Xing, J.; Yang, X.; Sun, W.; Yuan, L.

    2013-05-01

    This paper presents the detection and quantification of hogwash oil in soybean oils by means of absorption spectroscopy. Three types of soybean oils were adulterated with different concentrations of hogwash oil. The spectra were measured in the visible band using a white LED and a low-cost spectrometer. The measured spectra were processed by means of multivariate analysis to distinguish the adulteration and, for each soybean oil, to quantify the adulterant concentration. Then the visible spectra were sliced into two bands for modeling a simple setup made of two LEDs only. The successful results indicate the potential for implementing a smartphone-compatible device for self-assessment of soybean oil quality.
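
    A minimal stand-in for the chemometric step follows, with ordinary least squares on two synthetic absorbance bands in place of the paper's multivariate analysis; all absorbance coefficients are invented.

    ```python
    # Two-band linear calibration for adulterant concentration.
    import numpy as np

    rng = np.random.default_rng(2)
    conc = np.linspace(0, 0.5, 11)                  # hogwash-oil fraction
    # Synthetic absorbances in two visible bands, linear in concentration + noise.
    X = np.column_stack([0.8 + 0.9 * conc, 0.3 + 0.2 * conc]) + rng.normal(0, 0.01, (11, 2))
    A = np.column_stack([X, np.ones(len(conc))])    # intercept term
    beta, *_ = np.linalg.lstsq(A, conc, rcond=None)
    pred = A @ beta
    rmse = np.sqrt(np.mean((pred - conc) ** 2))
    print(f"calibration RMSE: {rmse:.4f} (fraction units)")
    ```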

  11. Higher Order Quasi Monte-Carlo Integration in Uncertainty Quantification

    OpenAIRE

    Dick, Josef; Gia, Quoc Thong Le; Schwab, Christoph

    2014-01-01

    We review recent results on dimension-robust higher order convergence rates of Quasi-Monte Carlo Petrov-Galerkin approximations for response functionals of infinite-dimensional, parametric operator equations which arise in computational uncertainty quantification.

  12. Multiphysics modeling and uncertainty quantification for an active composite reflector

    Science.gov (United States)

    Peterson, Lee D.; Bradford, S. C.; Schiermeier, John E.; Agnes, Gregory S.; Basinger, Scott A.

    2013-09-01

    A multiphysics, high resolution simulation of an actively controlled, composite reflector panel is developed to extrapolate from ground test results to flight performance. The subject test article has previously demonstrated sub-micron corrected shape under a controlled laboratory thermal load. This paper develops a model of the on-orbit performance of the panel under realistic thermal loads, with an active heater control system, and performs an uncertainty quantification of the predicted response. The primary contribution of this paper is the first reported application of the Sandia developed Sierra mechanics simulation tools to a spacecraft multiphysics simulation of a closed-loop system, including uncertainty quantification. The simulation was developed so as to have sufficient resolution to capture the residual panel shape error that remains after the thermal and mechanical control loops are closed. An uncertainty quantification analysis was performed to assess the predicted tolerance in the closed-loop wavefront error. Key tools used for the uncertainty quantification are also described.

  13. Synthesis and Review: Advancing agricultural greenhouse gas quantification

    Science.gov (United States)

    Olander, Lydia P.; Wollenberg, Eva; Tubiello, Francesco N.; Herold, Martin

    2014-07-01

    Reducing emissions of agricultural greenhouse gases (GHGs), such as methane and nitrous oxide, and sequestering carbon in the soil or in living biomass can help reduce the impact of agriculture on climate change while improving productivity and reducing resource use. There is an increasing demand for improved, low cost quantification of GHGs in agriculture, whether for national reporting to the United Nations Framework Convention on Climate Change (UNFCCC), underpinning and stimulating improved practices, establishing crediting mechanisms, or supporting green products. This ERL focus issue highlights GHG quantification to call attention to our existing knowledge and opportunities for further progress. In this article we synthesize the findings of 21 papers on the current state of global capability for agricultural GHG quantification and visions for its improvement. We conclude that strategic investment in quantification can lead to significant global improvement in agricultural GHG estimation in the near term.

  14. FRANX. Application for analysis and quantification of the APS fire

    International Nuclear Information System (INIS)

    The FRANX application has been developed by EPRI within the Risk and Reliability User Group in order to facilitate the quantification and updating of the fire PSA (the tool also covers floods and earthquakes). Using the application, fire scenarios are quantified at the plant, integrating the tasks performed during the fire PSA. This paper describes the main features of the program that allow the quantification of a fire PSA. (Author)

  15. On the universality of PIV uncertainty quantification by image matching:

    OpenAIRE

    Sciacchitano, A.; Scarano, F.; Wieneke, B.

    2013-01-01

    The topic of uncertainty quantification in particle image velocimetry (PIV) is recognized as very relevant in the experimental fluid mechanics community, especially when dealing with turbulent flows, where PIV plays a prime role as diagnostic tool. The issue is particularly important when PIV is used to assess the validity of results obtained with computational fluid dynamics (CFD). An approach for PIV data uncertainty quantification based on image matching has been introduced by Sciacchitano...

  16. Experimental entanglement verification and quantification via uncertainty relations

    OpenAIRE

    Wang, Zhi-wei; Huang, Yun-feng; Ren, Xi-feng; Zhang, Yong-sheng; Guo, Guang-can

    2006-01-01

    We report on experimental studies on entanglement quantification and verification based on uncertainty relations for systems consisting of two qubits. The new proposed measure is shown to be invariant under local unitary transformations, by which entanglement quantification is implemented for two-qubit pure states. The nonlocal uncertainty relations for two-qubit pure states are also used for entanglement verification which serves as a basic proposition and promise to be a g...

  17. Enabling Uncertainty Quantification of Large Aircraft System Simulation Models

    OpenAIRE

    Carlsson, Magnus; Steinkellner, Sören; Gavel, Hampus; Ölvander, Johan

    2013-01-01

    A common viewpoint in both academia and industry is that Verification, Validation and Uncertainty Quantification (VV&UQ) of simulation models are vital activities for a successful deployment of model-based system engineering. In the literature, there is no lack of advice regarding methods for VV&UQ. However, for industrial applications the available methods for Uncertainty Quantification (UQ) often seem too detailed or tedious to even try. The consequence is that no UQ is performed, ...

  18. The parallel reaction monitoring method contributes to a highly sensitive polyubiquitin chain quantification

    International Nuclear Information System (INIS)

    Highlights: •The parallel reaction monitoring method was applied to ubiquitin quantification. •The ubiquitin PRM method is highly sensitive even in biological samples. •Using the method, we revealed that Ufd4 assembles the K29-linked ubiquitin chain. -- Abstract: Ubiquitylation is an essential posttranslational protein modification that is implicated in a diverse array of cellular functions. Although cells contain eight structurally distinct types of polyubiquitin chains, the detailed function of several chain types, including K29-linked chains, has remained largely unclear. Current mass spectrometry (MS)-based quantification methods are highly inefficient for low-abundance atypical chains, such as K29- and M1-linked chains, in complex mixtures that typically contain highly abundant proteins. In this study, we applied parallel reaction monitoring (PRM), a quantitative, high-resolution MS method, to quantify ubiquitin chains. The ubiquitin PRM method allows us to quantify 100 attomole amounts of all possible ubiquitin chains in cell extracts. Furthermore, we quantified the ubiquitylation levels of ubiquitin-proline-β-galactosidase (Ub-P-βgal), a historically known model substrate of the ubiquitin fusion degradation (UFD) pathway. In wild-type cells, Ub-P-βgal is modified with ubiquitin chains consisting of 21% K29- and 78% K48-linked chains. In contrast, K29-linked chains are not detected in UFD4 knockout cells, suggesting that Ufd4 assembles the K29-linked ubiquitin chain(s) on Ub-P-βgal in vivo. Thus, ubiquitin PRM is a novel, useful, quantitative method for analyzing the highly complicated ubiquitin system

  19. Uncertainty Quantification with Applications to Engineering Problems

    DEFF Research Database (Denmark)

    Bigoni, Daniele

    2015-01-01

    The systematic quantification of the uncertainties affecting dynamical systems and the characterization of the uncertainty of their outcomes is critical for engineering design and analysis, where risks must be reduced as much as possible. Uncertainties stem naturally from our limitations in measurements, predictions and manufacturing, and we can say that any dynamical system used in engineering is subject to some of these uncertainties. The first part of this work presents an overview of the mathematical framework used in Uncertainty Quantification (UQ) analysis and introduces the spectral tensor-train (STT) decomposition, a novel high-order method for the effective propagation of uncertainties which aims at providing an exponential convergence rate while tackling the curse of dimensionality. The curse of dimensionality is a problem that afflicts many methods based on meta-models, for which the computational cost increases exponentially with the number of inputs of the approximated function – which we will call dimension in the following. The STT-decomposition is based on the Polynomial Chaos (PC) approximation and the low-rank decomposition of the function describing the Quantity of Interest of the considered problem. The low-rank decomposition is obtained through the discrete tensor-train decomposition, which is constructed using an optimization algorithm for the selection of the relevant points on which the function needs to be evaluated. The selection of these points is informed by the approximated function and thus it is able to adapt to its features. The number of function evaluations needed for the construction grows only linearly with the dimension and quadratically with the rank. In this work we will present and use the functional counterpart of this low-rank decomposition and, after proving some auxiliary properties, we will apply PC on it, obtaining the STT-decomposition. This will allow the decoupling of each dimension, leading to a much cheaper construction of the PC surrogate. In the associated paper, the capabilities of the STT-decomposition are checked on commonly used test functions and on an elliptic problem with random inputs. This work will also present three active research directions aimed at improving the efficiency of the STT-decomposition. In this context, we propose three new strategies for solving the ordering problem suffered by the tensor-train decomposition, for computing better estimates with respect to the norms usually employed in UQ and for the anisotropic adaptivity of the method. The second part of this work presents engineering applications of the UQ framework. Both the applications are characterized by functions whose evaluation is computationally expensive and thus the UQ analysis of the associated systems will benefit greatly from the application of methods which require few function evaluations. We first consider the propagation of the uncertainty and the sensitivity analysis of the non-linear dynamics of railway vehicles with suspension components whose characteristics are uncertain. These analysis are carried out using mostly PC methods, and resorting to random sampling methods for comparison and when strictly necessary. The second application of the UQ framework is on the propagation of the uncertainties entering a fully non-linear and dispersive model of water waves. This computationally challenging task is tackled with the adoption of state-of-the-art software for its numerical solution and of efficient PC methods. 
The aim of this study is the construction of stochastic benchmarks where to test UQ methodologies before being applied to full-scale problems, where efficient methods are necessary with today’s computational resources. The outcome of this work was also the creation of several freely available Python modules for Uncertainty Quantification, which are listed and described in the appendix.
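
    The discrete tensor-train decomposition at the heart of the STT construction can be sketched with successive truncated SVDs (the classical TT-SVD algorithm; the thesis then builds a functional, polynomial-chaos analogue on top of it). The test tensor below is a rank-1 separable function, so all TT ranks collapse to 1.

    ```python
    # TT-SVD: factor a full tensor into a chain of low-rank cores.
    import numpy as np

    def tt_svd(tensor, tol=1e-10):
        shape, cores, r = tensor.shape, [], 1
        mat = tensor.reshape(r * shape[0], -1)
        for k in range(len(shape) - 1):
            U, S, Vt = np.linalg.svd(mat, full_matrices=False)
            rank = max(1, int((S > tol * S[0]).sum()))      # truncation rank
            cores.append(U[:, :rank].reshape(r, shape[k], rank))
            r = rank
            mat = (S[:rank, None] * Vt[:rank]).reshape(r * shape[k + 1], -1)
        cores.append(mat.reshape(r, shape[-1], 1))
        return cores

    # A separable (rank-1) function sampled on a 3-D grid compresses to TT ranks 1.
    x = np.linspace(0, 1, 20)
    T = np.sin(x)[:, None, None] * np.cos(x)[None, :, None] * np.exp(x)[None, None, :]
    cores = tt_svd(T)
    print([c.shape for c in cores])   # [(1, 20, 1), (1, 20, 1), (1, 20, 1)]
    ```

    The number of entries stored in the cores grows linearly with the number of dimensions (for fixed ranks), which is the property the thesis exploits against the curse of dimensionality.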

  20. A modified extraction protocol enables detection and quantification of celiac disease-related gluten proteins from wheat

    OpenAIRE

    Broeck, H. C.; America, A. H. P.; Smulders, M. J. M.; Bosch, H. J.; Hamer, R. J.; Gilissen, L. J. W. J.; Meer, I. M.

    2009-01-01

    The detection, analysis, and quantification of individual celiac disease (CD) immune responsive gluten proteins in wheat and related cereals (barley, rye) require an adequate and reliable extraction protocol. Because different types of gluten proteins behave differently in terms of solubility, currently different extraction protocols exist. The performance of various documented gluten extraction protocols is evaluated for specificity and completeness by gel electrophoresis (SDS-PAGE), immunob...

  1. High-Order Metrics for Model Uncertainty Quantification and Validation

    International Nuclear Information System (INIS)

    It is well known that the true values of measured and computed data are impossible to know exactly because of various uncontrollable errors and uncertainties arising in the data measurement and interpretation reduction processes. Hence, all inferences, predictions, engineering computations, and other applications of measured and/or computed data are necessarily based on weighted averages over the possibly true values, with weights indicating the degree of plausibility of each value. Furthermore, combination of data from different sources involves a weighted propagation (e.g., via sensitivities) of all uncertainties, requiring reasoning from incomplete information and using probability theory for extracting optimal values together with 'best-estimate' uncertainties from often sparse, incomplete, error-afflicted, and occasionally discrepant data. The current state-of-the-art data assimilation/model calibration methodologies for large-scale nonlinear systems cannot take into account uncertainties of order higher than second (i.e., covariances), thereby failing to quantify fully the deviations of the problem under consideration from a normal (Gaussian) multivariate distribution. Such deviations would be quantified by the third- and fourth-order moments (skewness and kurtosis) of the model's predicted results (responses). These higher-order moments would be constructed by combining modeling and experimental uncertainties (which also incorporate the corresponding skewness and kurtosis information), using derivatives of the model responses with respect to the model's parameters. This paper presents explicit expressions for skewness and kurtosis of computed responses, thereby permitting quantification of the deviations of the computed response uncertainties from multivariate normality. In addition, this paper presents a new and most efficient procedure for computing the second-order response derivatives with respect to model parameters using the 'adjoint sensitivity analysis procedure' (ASAP)

  2. Volumetric motion quantification by 3D tissue phase mapped CMR

    Directory of Open Access Journals (Sweden)

    Lutz Anja

    2012-10-01

    Full Text Available Abstract. Background: The objective of this study was the quantification of myocardial motion from 3D tissue phase mapped (TPM) CMR. Recent work on myocardial motion quantification by TPM has focussed on multi-slice 2D acquisitions, thus excluding motion information from large regions of the left ventricle. Volumetric motion assessment appears an important next step towards the understanding of volumetric myocardial motion and hence may further improve diagnosis and treatment in patients with myocardial motion abnormalities. Methods: Volumetric motion quantification of the complete left ventricle was performed in 12 healthy volunteers and two patients applying a black-blood 3D TPM sequence. The resulting motion field was analysed regarding motion pattern differences between apical and basal locations as well as for asynchronous motion patterns between different myocardial segments in one or more slices. Motion quantification included velocity, torsion, rotation angle and strain-derived parameters. Results: All investigated motion quantification parameters could be calculated from the 3D-TPM data. Parameters quantifying hypokinetic or asynchronous motion demonstrated differences between motion-impaired and healthy myocardium. Conclusions: 3D-TPM enables the gapless volumetric quantification of motion abnormalities of the left ventricle, which can be applied in future applications as additional information to provide a more detailed analysis of left ventricular function.

  3. The type III manufactory

    OpenAIRE

    Palcoux, Sébastien

    2011-01-01

    Using unusual objects in the theory of von Neumann algebras, such as the Chinese game of Go or Conway's Game of Life (generalized to finitely presented groups), we are able to build, by hand, many type III factors.

  4. Type II universal spacetimes

    CERN Document Server

    Hervik, Sigbjørn; Pravda, Vojtěch; Pravdová, Alena

    2015-01-01

    We study type II universal metrics of the Lorentzian signature. These metrics solve vacuum field equations of all theories of gravitation with the Lagrangian being a polynomial curvature invariant constructed from the metric, the Riemann tensor and its covariant derivatives of arbitrary order. We provide examples of type II universal metrics for all composite number dimensions. On the other hand, we have no examples for prime number dimensions and we prove non-existence of type II universal spacetimes in five dimensions. We also present type II vacuum solutions of selected classes of gravitational theories, such as Lovelock, quadratic and L(Riemann) gravities.

  5. A phenomenological theory of spatially structured local synaptic connectivity.

    Directory of Open Access Journals (Sweden)

    2005-06-01

    Full Text Available The structure of local synaptic circuits is the key to understanding cortical function and how neuronal functional modules such as cortical columns are formed. The central problem in deciphering cortical microcircuits is the quantification of synaptic connectivity between neuron pairs. I present a theoretical model that accounts for the axon and dendrite morphologies of pre- and postsynaptic cells and provides the average number of synaptic contacts formed between them as a function of their relative locations in three-dimensional space. An important aspect of the current approach is the representation of the complex structure of an axonal/dendritic arbor as a superposition of basic structures: synaptic clouds. Each cloud has three structural parameters that can be directly estimated from two-dimensional drawings of the underlying arbor. Using empirical data available in the literature, I applied this theory to three morphologically different types of cell pairs. I found that, within a wide range of cell separations, the theory is in very good agreement with empirical data on (i) axonal-dendritic contacts of pyramidal cells and (ii) somatic synapses formed by the axons of inhibitory interneurons. Since for many types of neurons plane arborization drawings are available in the literature, this theory can provide a practical means for quantitatively deriving local synaptic circuits based on the actual observed densities of specific types of neurons and their morphologies. It can also have significant implications for computational models of cortical networks by making it possible to wire up simulated neural networks in a realistic fashion.
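
    If each arbor is idealized as a single isotropic Gaussian "cloud", the expected number of appositions has a closed form in the intersomatic separation, which conveys the flavour of the cloud superposition; the widths and scale factor below are invented, and this is a strong simplification of the paper's model (the real theory superposes several, generally anisotropic, clouds per arbor).

    ```python
    # Overlap integral of two isotropic 3-D Gaussian densities as a proxy for
    # the expected number of axo-dendritic contacts vs. soma separation.
    import numpy as np

    def expected_contacts(s_um, sigma_axon=150.0, sigma_dend=80.0, scale=5.0):
        """Overlap of two unit-mass Gaussians of widths sigma_a, sigma_d whose
        centres are s apart; 'scale' lumps bouton/spine densities (invented)."""
        var = sigma_axon**2 + sigma_dend**2
        return scale * (2 * np.pi * var) ** -1.5 * np.exp(-s_um**2 / (2 * var)) * 1e6

    for s in (0, 100, 200, 400):
        print(f"separation {s:4d} um -> expected contacts ~ {expected_contacts(s):.3f}")
    ```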

  6. Electrophoresis Gel Quantification with a Flatbed Scanner and Versatile Lighting from a Screen Scavenged from a Liquid Crystal Display (LCD) Monitor

    Science.gov (United States)

    Yeung, Brendan; Ng, Tuck Wah; Tan, Han Yen; Liew, Oi Wah

    2012-01-01

    The use of different types of stains in the quantification of proteins separated on gels by electrophoresis offers the capability of achieving good outcomes in terms of linear dynamic range, sensitivity, and compatibility with specific proteins. An inexpensive, simple, and versatile lighting system based on liquid crystal display backlighting is…

  7. Basic concepts in quantum information theory

    International Nuclear Information System (INIS)

    Quantum information theory provides a framework for the description of quantum systems and their applications in the context of quantum computation and quantum communication. Although several of the basic concepts on which this theory is built are reminiscent of those of (classical) information theory, the new rules provided by quantum mechanics introduce properties with no classical counterpart, and these are responsible for most of the applications. In particular, entangled states appear as one of the basic resources in this context. In this lecture I will introduce the basic concepts and applications in quantum information, particularly stressing the definition of entanglement, its quantification, and its applications. (author)
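
    A minimal sketch of one standard entanglement quantifier covered by such lectures: the entropy of entanglement of a two-qubit pure state, computed as the von Neumann entropy of the reduced density matrix of one qubit.

```python
# Entropy of entanglement of a two-qubit pure state |psi>.
import numpy as np

def entanglement_entropy(state):
    """state: length-4 vector in the basis |00>, |01>, |10>, |11>."""
    psi = np.asarray(state, dtype=complex).reshape(2, 2)
    rho_a = psi @ psi.conj().T              # trace out the second qubit
    evals = np.linalg.eigvalsh(rho_a)
    evals = evals[evals > 1e-12]            # drop numerical zeros
    return float(-np.sum(evals * np.log2(evals)))

bell = np.array([1, 0, 0, 1]) / np.sqrt(2)  # maximally entangled state
prod = np.array([1, 0, 0, 0])               # product state
print(entanglement_entropy(bell))           # 1.0 ebit
print(entanglement_entropy(prod))           # 0.0
```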

  8. Theory of stellar atmospheres

    International Nuclear Information System (INIS)

    Recent progress in the theory of stellar atmospheres is described under the topics of winds from early-type stars, mass-loss from late-type stars, stellar chromospheres and coronae, model atmospheres for late-type stars, and radiative transfer in extended, expanding, and multidimensional atmospheres. (C.F.)

  9. Identification and Quantification of Carbonate Species Using Rock-Eval Pyrolysis

    Directory of Open Access Journals (Sweden)

    Pillot D.

    2013-03-01

    This paper presents a new, reliable and rapid method to characterise and quantify carbonates in solid samples, based on monitoring the CO2 flux emitted by the progressive thermal decomposition of carbonates during programmed heating. The different destabilisation peaks allow the different types of carbonates present in the analysed sample to be determined, and the quantification of each peak gives their respective proportions in the sample. In addition to the chosen procedure presented in this paper, which uses a standard Rock-Eval 6 pyrolyser, characteristic calibration profiles are presented for the carbonates most common in nature. This method should allow different types of application in different disciplines, whether academic or industrial.
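
    The quantification step lends itself to a short sketch: fit the CO2-release thermogram as a sum of Gaussian peaks (one per carbonate type) and take each peak's area as a measure of the corresponding carbonate. The peak temperatures and the synthetic thermogram below are illustrative assumptions; in practice the paper's calibration profiles would take their place.

```python
# Decompose a synthetic CO2-flux thermogram into two Gaussian peaks and
# report each peak's area (proportional to the amount of that carbonate).
import numpy as np
from scipy.optimize import curve_fit

def two_peaks(T, a1, mu1, s1, a2, mu2, s2):
    g = lambda a, mu, s: a * np.exp(-(T - mu) ** 2 / (2 * s ** 2))
    return g(a1, mu1, s1) + g(a2, mu2, s2)

T = np.linspace(300, 900, 601)                       # temperature, deg C
rng = np.random.default_rng(0)
flux = two_peaks(T, 1.0, 520, 30, 2.0, 720, 40) + rng.normal(0, 0.02, T.size)

p0 = [1, 500, 25, 2, 700, 35]                        # rough initial guesses
popt, _ = curve_fit(two_peaks, T, flux, p0=p0)
a1, mu1, s1, a2, mu2, s2 = popt
area = lambda a, s: a * s * np.sqrt(2 * np.pi)       # Gaussian peak area
print(f"peak 1: T={mu1:.0f} C, area={area(a1, s1):.2f}")
print(f"peak 2: T={mu2:.0f} C, area={area(a2, s2):.2f}")
```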

  10. Quantification of nanowire penetration into living cells

    Science.gov (United States)

    Xu, Alexander M.; Aalipour, Amin; Leal-Ortiz, Sergio; Mekhdjian, Armen H.; Xie, Xi; Dunn, Alexander R.; Garner, Craig C.; Melosh, Nicholas A.

    2014-04-01

    High-aspect ratio nanostructures such as nanowires and nanotubes are a powerful new tool for accessing the cell interior for delivery and sensing. Controlling and optimizing cellular access is a critical challenge for this new technology, yet even the most basic aspect of this process, whether these structures directly penetrate the cell membrane, is still unknown. Here we report the first quantification of hollow nanowires (nanostraws) that directly penetrate the membrane, by observing dynamic ion delivery from each 100-nm diameter nanostraw. We discover that penetration is a rare event: 7.1±2.7% of the nanostraws penetrate the cell to provide cytosolic access for an extended period, with an average of 10.7±5.8 penetrations per cell. Using time-resolved delivery, the kinetics of the first penetration event are shown to be adhesion dependent and coincident with recruitment of focal adhesion-associated proteins. These measurements provide a quantitative basis for understanding nanowire-cell interactions, and a means for rapidly assessing membrane penetration.

  11. Quantification of ATOFMS data by multivariate methods.

    Science.gov (United States)

    Fergenson, D P; Song, X H; Ramadan, Z; Allen, J O; Hughes, L S; Cass, G R; Hopke, P K; Prather, K A

    2001-08-01

    Aerosol time-of-flight mass spectrometry (ATOFMS) is capable of measuring the sizes and chemical compositions of individual polydisperse aerosol particles in real time. A qualitative estimate of the particle composition is acquired in the form of a mass spectrum that must subsequently be interpreted in order to draw conclusions regarding atmospheric relevance. The actual problem involves developing a calibration that allows the mass spectral data to be transformed into estimates of the composition of the atmospheric aerosol. A properly calibrated ATOFMS system should be able to quantitatively determine atmospheric concentrations of various species. Ideally, it would be able to accomplish this more rapidly, accurately, with higher size and time resolution, and at a far lower marginal cost than the manual sampling methods that are currently employed. Attempts have already been made to use ATOFMS and similar techniques to extract the bulk chemical species concentrations present in an ensemble of particles. This study presents the use of a multivariate calibration method, two-dimensional partial least-squares analysis, for calibrating single-particle mass spectral data. The method presented here is far less labor-intensive than the univariate methods attempted to date and allows for less observer bias. Because of the labor savings, this is also the most comprehensive calibration performed to date, resulting in the quantification of 44 different chemical species. PMID:11510815
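
    A hedged sketch of the multivariate calibration idea, using scikit-learn's standard PLS implementation (the paper's own two-dimensional PLS variant is not reproduced here): spectra in the rows of `X` are regressed onto reference concentrations in `Y`. All data are synthetic stand-ins for ATOFMS measurements.

```python
# Calibrate mass spectra against reference species concentrations with PLS.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(1)
n_samples, n_mz, n_species = 200, 300, 3
C_true = rng.uniform(0, 1, (n_samples, n_species))        # concentrations
S = rng.uniform(0, 1, (n_species, n_mz))                  # pure-species spectra
X = C_true @ S + rng.normal(0, 0.05, (n_samples, n_mz))   # observed spectra

pls = PLSRegression(n_components=5)
pls.fit(X[:150], C_true[:150])                            # calibration set
C_hat = pls.predict(X[150:])                              # held-out prediction
rmse = np.sqrt(np.mean((C_hat - C_true[150:]) ** 2))
print(f"held-out RMSE: {rmse:.3f}")
```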

  12. Perfusion quantification using Gaussian process deconvolution.

    DEFF Research Database (Denmark)

    Andersen, I.K.; Szymkowiak, A

    2002-01-01

    The quantification of perfusion using dynamic susceptibility contrast MRI (DSC-MRI) requires deconvolution to obtain the residual impulse response function (IRF). In this work, a method using the Gaussian process for deconvolution (GPD) is proposed. The fact that the IRF is smooth is incorporated as a constraint in the method. The GPD method, which automatically estimates the noise level in each voxel, has the advantage that model parameters are optimized automatically. The GPD is compared to singular value decomposition (SVD) using a common threshold for the singular values, and to SVD using a threshold optimized according to the noise level in each voxel. The comparison is carried out using artificial data as well as data from healthy volunteers. It is shown that GPD is comparable to SVD with a variable optimized threshold when determining the maximum of the IRF, which is directly related to the perfusion. GPD provides a better estimate of the entire IRF. As the signal-to-noise ratio (SNR) increases or the time resolution of the measurements increases, GPD is shown to be superior to SVD. This is also found for large distribution volumes.
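
    For orientation, here is a minimal sketch of the SVD baseline this record (and the duplicate record below) compares against, assuming the standard DSC-MRI model in which the tissue curve is the arterial input function (AIF) convolved with the IRF. The gamma-variate AIF, exponential IRF and 20% truncation threshold are illustrative choices, not values from the paper.

```python
# Truncated-SVD deconvolution: solve c = A r for the IRF r, where A is a
# lower-triangular Toeplitz matrix built from the AIF samples.
import numpy as np
from scipy.linalg import toeplitz

def svd_deconvolve(aif, tissue, dt, rel_threshold=0.2):
    A = dt * np.tril(toeplitz(aif))          # discrete convolution matrix
    U, s, Vt = np.linalg.svd(A)
    s_inv = np.where(s > rel_threshold * s[0], 1.0 / s, 0.0)
    return Vt.T @ (s_inv * (U.T @ tissue))   # regularised IRF estimate

t = np.arange(0, 60, 1.0)                    # time, s (dt = 1 s)
aif = (t / 8.0) ** 3 * np.exp(-t / 4.0)      # toy gamma-variate AIF
irf_true = np.exp(-t / 10.0)                 # toy exponential IRF
tissue = np.convolve(aif, irf_true)[:len(t)] # simulated tissue curve
irf_est = svd_deconvolve(aif, tissue, dt=1.0)
print(f"peak of estimated IRF (~ perfusion-related maximum): {irf_est.max():.3f}")
```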

  13. Perfusion quantification using Gaussian process deconvolution

    DEFF Research Database (Denmark)

    Andersen, I K; Szymkowiak, A

    2002-01-01

    The quantification of perfusion using dynamic susceptibility contrast MRI (DSC-MRI) requires deconvolution to obtain the residual impulse response function (IRF). In this work, a method using the Gaussian process for deconvolution (GPD) is proposed. The fact that the IRF is smooth is incorporated as a constraint in the method. The GPD method, which automatically estimates the noise level in each voxel, has the advantage that model parameters are optimized automatically. The GPD is compared to singular value decomposition (SVD) using a common threshold for the singular values, and to SVD using a threshold optimized according to the noise level in each voxel. The comparison is carried out using artificial data as well as data from healthy volunteers. It is shown that GPD is comparable to SVD with a variable optimized threshold when determining the maximum of the IRF, which is directly related to the perfusion. GPD provides a better estimate of the entire IRF. As the signal-to-noise ratio (SNR) increases or the time resolution of the measurements increases, GPD is shown to be superior to SVD. This is also found for large distribution volumes.

  14. A regularized method for peptide quantification.

    Science.gov (United States)

    Yang, Chao; Yang, Can; Yu, Weichuan

    2010-05-01

    Peptide abundance estimation is generally the first step in protein quantification. In peptide abundance estimation, peptide overlapping and peak intensity variation are two challenges. The main objective of this paper is to estimate peptide abundance by taking advantage of the peptide isotopic distribution and the smoothness of the peptide elution profile. Our method addresses the peptide overlapping problem and provides a way to control the variance of the estimate. We compare our method with a commonly used method on simulated data sets and two real data sets of standard protein mixtures. The results show that our method achieves more accurate estimation of peptide abundance on different samples. Our method includes a variance-related parameter; given the well-known trade-off between the variance and the bias of estimation, one should not focus solely on reducing the variance in real applications. A suggestion for parameter selection is given based on a discussion of variance and bias. Matlab source codes and detailed experimental results are available at http://bioinformatics.ust.hk/PeptideQuant/peptidequant.htm. PMID:20201590
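
    A minimal sketch of the regularisation idea under simplifying assumptions (a single non-overlapping peptide, synthetic data): per-scan abundances are fit with a second-difference smoothness penalty, and the penalty weight `lam` plays the role of the paper's variance-related parameter.

```python
# Smooth elution-profile estimation: minimise ||y - a||^2 + lam*||D a||^2,
# where D is the second-difference operator over scans.
import numpy as np

def smooth_abundance(y, lam):
    """y: per-scan summed isotope intensity; returns smoothed abundances."""
    n = len(y)
    D = np.diff(np.eye(n), n=2, axis=0)          # second-difference matrix
    return np.linalg.solve(np.eye(n) + lam * D.T @ D, y)

scans = np.arange(40)
profile = np.exp(-(scans - 20.0) ** 2 / 30.0)     # true elution profile
rng = np.random.default_rng(2)
y = profile + rng.normal(0, 0.1, scans.size)      # noisy observations

for lam in (0.0, 1.0, 10.0):
    a = smooth_abundance(y, lam)
    print(f"lambda={lam:4.1f}  error={np.linalg.norm(a - profile):.3f}")
```

    Increasing `lam` lowers the variance of the estimate at the cost of bias, which is exactly the selection trade-off the paper discusses.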

  15. Tissue quantification for development of pediatric phantom

    International Nuclear Information System (INIS)

    The optimization of the risk-benefit ratio is a major concern in pediatric radiology, due to the greater vulnerability of children to the late somatic and genetic effects of radiation exposure compared to adults. In Brazil, head trauma is estimated to account for 18% of deaths in the 1-5 year age group, and radiography is the primary diagnostic test for the detection of skull fracture. Knowing that image quality is essential to ensure the identification of anatomical structures and to minimize diagnostic interpretation errors, this paper proposes the development and construction of homogeneous skull phantoms for the 1-5 year age group. The homogeneous phantoms were constructed based on the classification and quantification of the tissues present in the skulls of pediatric patients. In this procedure, computational algorithms written in Matlab were used to quantify the distinct biological tissues present in the anatomical regions studied, using retrospective CT images. Preliminary measurements show that, for ages 1-5 years and an average anteroposterior diameter of the pediatric skull region of 145.73 ± 2.97 mm, the skull can be represented by 92.34 ± 5.22 mm of Lucite and 1.75 ± 0.21 mm of aluminum plates in a PEP (patient equivalent phantom) arrangement. After their construction, the phantoms will be used for image and dose optimization in pediatric computed radiography examination protocols.
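
    The tissue-quantification step can be sketched as voxel classification by Hounsfield-unit (HU) windows. The ranges below are common textbook values chosen for illustration; the study's own Matlab algorithms and thresholds may differ.

```python
# Count CT voxels per tissue class using illustrative HU windows.
import numpy as np

HU_RANGES = {
    "air":         (-1100, -400),
    "soft tissue": (-100,   100),
    "bone":        (300,   2000),
}

def tissue_fractions(ct_slice):
    """Return the fraction of voxels falling in each HU window."""
    return {name: float(np.mean((ct_slice >= lo) & (ct_slice <= hi)))
            for name, (lo, hi) in HU_RANGES.items()}

rng = np.random.default_rng(3)                  # fake CT slice for the demo
fake_ct = rng.choice([-1000, 40, 700], size=(256, 256), p=[0.2, 0.6, 0.2])
print(tissue_fractions(fake_ct))
```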

  16. Numerical Uncertainty Quantification for Radiation Analysis Tools

    Science.gov (United States)

    Anderson, Brooke; Blattnig, Steve; Clowdsley, Martha

    2007-01-01

    Recently a new emphasis has been placed on engineering applications of space radiation analyses, and thus a systematic effort of Verification, Validation and Uncertainty Quantification (VV&UQ) of the tools commonly used in radiation analysis for vehicle design and mission planning has begun. This paper addresses two sources of uncertainty in geometric discretization that must be quantified in order to understand the total uncertainty in estimating space radiation exposures. One source of uncertainty is in ray tracing: as the number of rays increases the associated uncertainty decreases, but the computational expense increases, so a cost-benefit analysis optimizing computational time against uncertainty is needed and is addressed in this paper. The second source of uncertainty results from the interpolation over the dose versus depth curves that is needed to determine the radiation exposure. The question, then, is how many shield thicknesses are needed to obtain an accurate result, so convergence testing is performed to quantify the uncertainty associated with interpolating over different shield-thickness spatial grids.
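
    The ray-count convergence study can be sketched as follows: estimate the exposure with an increasing number of rays and take the change between successive refinements as the numerical-uncertainty estimate. The integrand here is a toy stand-in for a real ray-traced thickness distribution.

```python
# Convergence testing of a Monte Carlo ray-tracing estimate (toy model).
import numpy as np

def dose_estimate(n_rays, rng):
    """Average attenuation over random ray directions (toy integrand)."""
    theta = rng.uniform(0, np.pi, n_rays)         # toy ray directions
    thickness = 5.0 + 2.0 * np.sin(theta)         # toy path length, g/cm^2
    return np.mean(np.exp(-thickness / 10.0))     # toy attenuation

rng = np.random.default_rng(4)
prev = None
for n in (100, 1000, 10000, 100000):
    est = dose_estimate(n, rng)
    delta = abs(est - prev) if prev is not None else float("nan")
    print(f"rays={n:6d}  estimate={est:.5f}  change={delta:.5f}")
    prev = est
```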

  17. Uncertainty quantification in reacting flow modeling.

    Energy Technology Data Exchange (ETDEWEB)

    Le Maître, Olivier P. (Université d'Evry Val d'Essonne, Evry, France); Reagan, Matthew T.; Knio, Omar M. (Johns Hopkins University, Baltimore, MD); Ghanem, Roger Georges (Johns Hopkins University, Baltimore, MD); Najm, Habib N.

    2003-10-01

    Uncertainty quantification (UQ) in the computational modeling of physical systems is important for scientific investigation, engineering design, and model validation. In this work we develop techniques for UQ based on spectral and pseudo-spectral polynomial chaos (PC) expansions, and we apply these constructions in computations of reacting flow. We develop and compare both intrusive and non-intrusive spectral PC techniques. In the intrusive construction, the deterministic model equations are reformulated using Galerkin projection into a set of equations for the time evolution of the field variable PC expansion mode strengths. The mode strengths relate specific parametric uncertainties to their effects on model outputs. The non-intrusive construction uses sampling of many realizations of the original deterministic model, and projects the resulting statistics onto the PC modes, arriving at the PC expansions of the model outputs. We investigate and discuss the strengths and weaknesses of each approach, and identify their utility under different conditions. We also outline areas where ongoing and future research are needed to address challenges with both approaches.
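
    A compact sketch of the non-intrusive construction for a single standard-normal input: the deterministic model is evaluated at Gauss-Hermite quadrature nodes and its output is projected onto probabilists' Hermite polynomials. The toy model `exp(0.3*xi)` and the order and quadrature settings are illustrative assumptions, not the reacting-flow model of the report.

```python
# Non-intrusive polynomial chaos: c_k = E[f(xi) He_k(xi)] / k!, xi ~ N(0,1).
import math
import numpy as np
from numpy.polynomial import hermite_e as He

def model(xi):
    return np.exp(0.3 * xi)        # toy deterministic model

order = 5
x, w = He.hermegauss(30)           # nodes/weights for weight exp(-x^2/2)
w = w / np.sqrt(2 * np.pi)         # normalise weights to the N(0,1) pdf

coeffs = []
for k in range(order + 1):
    He_k = He.hermeval(x, [0] * k + [1])          # He_k at quadrature nodes
    c_k = np.sum(w * model(x) * He_k) / math.factorial(k)
    coeffs.append(c_k)

# reconstruct the model from its PC expansion and check at a test point
xi_test = 0.7
pc_value = sum(c * He.hermeval(xi_test, [0] * k + [1])
               for k, c in enumerate(coeffs))
print(f"model: {model(xi_test):.5f}  PC expansion: {pc_value:.5f}")
```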

  18. Cross recurrence quantification for cover song identification

    International Nuclear Information System (INIS)

    There is growing evidence that nonlinear time series analysis techniques can be used to successfully characterize, classify, or process signals derived from real-world dynamics even though these are not necessarily deterministic and stationary. In the present study, we proceed in this direction by addressing an important problem our modern society is facing, the automatic classification of digital information. In particular, we address the automatic identification of cover songs, i.e. alternative renditions of a previously recorded musical piece. For this purpose, we here propose a recurrence quantification analysis measure that allows the tracking of potentially curved and disrupted traces in cross recurrence plots (CRPs). We apply this measure to CRPs constructed from the state space representation of musical descriptor time series extracted from the raw audio signal. We show that our method identifies cover songs with a higher accuracy as compared to previously published techniques. Beyond the particular application proposed here, we discuss how our approach can be useful for the characterization of a variety of signals from different scientific disciplines. We study coupled Roessler dynamics with stochastically modulated mean frequencies as one concrete example to illustrate this point.
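
    The core object can be sketched in a few lines: a cross recurrence plot (CRP) marks where the delay-embedded state of one signal comes close to that of another. The curved-trace tracking measure proposed in the paper is not reproduced here; the embedding parameters and the detuned sine "cover" below are illustrative choices only.

```python
# Build a cross recurrence plot from two delay-embedded time series.
import numpy as np

def embed(x, dim=3, tau=2):
    """Delay embedding of a 1D series into dim-dimensional state space."""
    n = len(x) - (dim - 1) * tau
    return np.stack([x[i * tau:i * tau + n] for i in range(dim)], axis=1)

def cross_recurrence(x, y, eps):
    X, Y = embed(x), embed(y)
    d = np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=2)
    return (d < eps).astype(int)     # 1 where the trajectories are close

t = np.linspace(0, 20 * np.pi, 600)
a = np.sin(t)
b = np.sin(1.02 * t + 0.3)           # slightly detuned "cover" of a
crp = cross_recurrence(a, b, eps=0.3)
print(f"CRP shape {crp.shape}, recurrence rate {crp.mean():.3f}")
```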

  19. Cross recurrence quantification for cover song identification

    Energy Technology Data Exchange (ETDEWEB)

    Serra, Joan; Serra, Xavier; Andrzejak, Ralph G [Department of Information and Communication Technologies, Universitat Pompeu Fabra, Roc Boronat 138, 08018 Barcelona (Spain)], E-mail: joan.serraj@upf.edu

    2009-09-15

    There is growing evidence that nonlinear time series analysis techniques can be used to successfully characterize, classify, or process signals derived from real-world dynamics even though these are not necessarily deterministic and stationary. In the present study, we proceed in this direction by addressing an important problem our modern society is facing, the automatic classification of digital information. In particular, we address the automatic identification of cover songs, i.e. alternative renditions of a previously recorded musical piece. For this purpose, we here propose a recurrence quantification analysis measure that allows the tracking of potentially curved and disrupted traces in cross recurrence plots (CRPs). We apply this measure to CRPs constructed from the state space representation of musical descriptor time series extracted from the raw audio signal. We show that our method identifies cover songs with a higher accuracy as compared to previously published techniques. Beyond the particular application proposed here, we discuss how our approach can be useful for the characterization of a variety of signals from different scientific disciplines. We study coupled Roessler dynamics with stochastically modulated mean frequencies as one concrete example to illustrate this point.

  20. Quantification of Zolpidem in Canine Plasma

    Directory of Open Access Journals (Sweden)

    Mario Giorgi

    2012-01-01

    Problem statement: Zolpidem is a non-benzodiazepine hypnotic agent currently used in human medicine. In contrast to benzodiazepines, zolpidem preferentially binds the ω1 receptors of the GABAA complex while interacting only poorly with the other ω receptor complexes. Recent studies have suggested that zolpidem (ZP) may be used to initiate sedation and diminish severe anxiety responses in dogs. The aim of the present study is to develop and validate a new HPLC-FL based method to quantify zolpidem in canine plasma. Approach: Several parameters of both the extraction and the detection method were evaluated. The applicability of the method was determined by administering zolpidem to one dog. Results: The final mobile phase was acetonitrile:KH2PO4 (15 mM; pH 6.0) 40:60 v/v, with a flow rate of 1 mL min-1 and excitation and emission wavelengths of 254 and 400 nm, respectively. The best extraction solvent was CH2Cl2:Et2O (3:7 v/v), which gave recoveries ranging from 83-95%. The limit of quantification was 1 ng mL-1. The chromatographic runs were specific, with no interfering peaks at the retention times of the analyte. The other validation parameters were in agreement with EMEA guidelines. Conclusion/Recommendations: This method (extraction, separation and applied techniques) is simple and effective, and may have applications in pharmacokinetic or toxicological studies.
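
    The calibration arithmetic behind such a validation can be sketched as a linear fit of peak area versus spiked concentration, with the limit of quantification estimated from an ICH-style 10*sigma/slope criterion. The numbers below are synthetic, not the paper's data.

```python
# Linear calibration curve and limit-of-quantification estimate.
import numpy as np

conc = np.array([1, 5, 10, 50, 100, 500], dtype=float)   # ng/mL spikes
rng = np.random.default_rng(5)
area = 12.0 * conc + 3.0 + rng.normal(0, 8.0, conc.size) # measured peak areas

slope, intercept = np.polyfit(conc, area, 1)
residuals = area - (slope * conc + intercept)
sigma = residuals.std(ddof=2)                            # fit-residual sd
loq = 10.0 * sigma / slope                               # ICH-style LOQ
print(f"slope={slope:.2f}  intercept={intercept:.2f}  LOQ~{loq:.2f} ng/mL")
```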