WorldWideScience

Sample records for quantification theory type

  1. Uncertainty quantification theory, implementation, and applications

    CERN Document Server

    Smith, Ralph C

    2014-01-01

    The field of uncertainty quantification is evolving rapidly because of increasing emphasis on models that require quantified uncertainties for large-scale applications, novel algorithm development, and new computational architectures that facilitate implementation of these algorithms. Uncertainty Quantification: Theory, Implementation, and Applications provides readers with the basic concepts, theory, and algorithms necessary to quantify input and response uncertainties for simulation models arising in a broad range of disciplines. The book begins with a detailed discussion of applications where uncertainty quantification is critical for both scientific understanding and policy. It then covers concepts from probability and statistics, parameter selection techniques, frequentist and Bayesian model calibration, propagation of uncertainties, quantification of model discrepancy, surrogate model construction, and local and global sensitivity analysis. The author maintains a complementary web page where readers ca...
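
    Among the topics this record lists, propagation of uncertainties is the most directly illustrable. Below is a minimal sketch of forward propagation by Monte Carlo sampling; the toy model and the normal distribution assumed for its input are invented for illustration and are not taken from the book.

        import numpy as np

        # Forward uncertainty propagation by Monte Carlo sampling.
        # The model y = exp(-k*t) and the distribution of the rate
        # constant k are illustrative assumptions only.
        rng = np.random.default_rng(0)

        def model(k, t=2.0):
            """Toy simulation model: response at time t for rate constant k."""
            return np.exp(-k * t)

        k_samples = rng.normal(loc=0.5, scale=0.05, size=100_000)  # uncertain input
        y_samples = model(k_samples)                               # propagated output

        print(f"mean response   : {y_samples.mean():.4f}")
        print(f"std of response : {y_samples.std():.4f}")
        print(f"95% interval    : {np.percentile(y_samples, [2.5, 97.5])}")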

  2. A Study of Tongue and Pulse Diagnosis in Traditional Korean Medicine for Stroke Patients Based on Quantification Theory Type II

    OpenAIRE

    Mi Mi Ko; Tae-Yong Park; Ju Ah Lee; Byoung-Kab Kang; Jungsup Lee; Myeong Soo Lee

    2013-01-01

    In traditional Korean medicine (TKM), pattern identification (PI) diagnosis is important for treating diseases. The aim of this study was to comprehensively investigate the relationship between the PI type and tongue diagnosis or pulse diagnosis variables. The study included 1,879 stroke patients who were admitted to 12 oriental medical university hospitals from June 2006 through March 2009. The status of the pulse and tongue was examined in each patient. Additionally, to investigate relative...

  3. Recurrence quantification analysis theory and best practices

    CERN Document Server

    Webber, Charles L., Jr.; Marwan, Norbert

    2015-01-01

    The analysis of recurrences in dynamical systems by using recurrence plots and their quantification is still an emerging field. Over the past decades recurrence plots have proven to be valuable data visualization and analysis tools in the theoretical study of complex, time-varying dynamical systems as well as in various applications in biology, neuroscience, kinesiology, psychology, physiology, engineering, physics, geosciences, linguistics, finance, economics, and other disciplines. This multi-authored book intends to comprehensively introduce and showcase recent advances as well as established best practices concerning both theoretical and practical aspects of recurrence plot based analysis. Edited and authored by leading researchers in the field, the various chapters address an interdisciplinary readership, ranging from theoretical physicists to application-oriented scientists in all data-providing disciplines.
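
    As a concrete illustration of the method the book covers, the sketch below builds a recurrence plot for a scalar time series and computes the recurrence rate, the most basic RQA measure. The signal, the distance threshold, and the lack of time-delay embedding are illustrative simplifications, not prescriptions from the book.

        import numpy as np

        # Minimal recurrence quantification sketch: a recurrence occurs where
        # two samples of the series are closer than a threshold eps. Standard
        # RQA adds time-delay embedding and line-based measures such as
        # determinism; both are omitted here for brevity.
        def recurrence_matrix(x, eps):
            """R[i, j] = 1 where |x_i - x_j| < eps, else 0 (includes the main diagonal)."""
            d = np.abs(x[:, None] - x[None, :])
            return (d < eps).astype(int)

        x = np.sin(np.linspace(0, 8 * np.pi, 400))   # toy periodic signal
        R = recurrence_matrix(x, eps=0.1)
        print(f"recurrence rate: {R.mean():.3f}")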

  4. On Irrelevance and Algorithmic Equality in Predicative Type Theory

    OpenAIRE

    Abel, Andreas; Scherer, Gabriel

    2012-01-01

    Dependently typed programs contain an excessive amount of static terms which are necessary to please the type checker but irrelevant for computation. To separate static and dynamic code, several static analyses and type systems have been put forward. We consider Pfenning's type theory with irrelevant quantification which is compatible with a type-based notion of equality that respects eta-laws. We extend Pfenning's theory to universes and large eliminations and develop its m...

  5. On Irrelevance and Algorithmic Equality in Predicative Type Theory

    CERN Document Server

    Abel, Andreas

    2012-01-01

    Dependently typed programs contain an excessive amount of static terms which are necessary to please the type checker but irrelevant for computation. To separate static and dynamic code, several static analyses and type systems have been put forward. We consider Pfenning's type theory with irrelevant quantification which is compatible with a type-based notion of equality that respects eta-laws. We extend Pfenning's theory to universes and large eliminations and develop its meta-theory. Subject reduction, normalization and consistency are obtained by a Kripke model over the typed equality judgement. Finally, a type-directed equality algorithm is described whose completeness is proven by a second Kripke model.

  6. A dependent nominal type theory

    CERN Document Server

    Cheney, James

    2012-01-01

    Nominal abstract syntax is an approach to representing names and binding pioneered by Gabbay and Pitts. So far nominal techniques have mostly been studied using classical logic or model theory, not type theory. Nominal extensions to simple, dependent and ML-like polymorphic languages have been studied, but decidability and normalization results have only been established for simple nominal type theories. We present a LF-style dependent type theory extended with name-abstraction types, prove soundness and decidability of beta-eta-equivalence checking, discuss adequacy and canonical forms via an example, and discuss extensions such as dependently-typed recursion and induction principles.

  7. Classical field theory via Cohesive homotopy types

    OpenAIRE

    Schreiber, Urs

    2013-01-01

    A brief survey of how classical field theory emerges synthetically in cohesive homotopy type theory. Extended Conference Abstract submitted to the proceedings of the Conference on Type Theory, Homotopy Theory and Univalent Foundations in Barcelona, Fall 2013

  8. Definitional Extension in Type Theory

    OpenAIRE

    Xue, Tao

    2014-01-01

    When we extend a type system, the relation between the original system and its extension is an important issue we want to know. Conservative extension is a traditional relation we study with. But in some cases, like coercive subtyping, it is not strong enough to capture all the properties, more powerful relation between the systems is required. We bring the idea definitional extension from mathematical logic into type theory. In this paper, we study the notion of definitional extension for t...

  9. Causality in Time Series: Its Detection and Quantification by Means of Information Theory.

    Czech Academy of Sciences Publication Activity Database

    Hlaváčková-Schindler, Kateřina

    New York : Springer, 2008 - (Emmert-Streib, F.; Dehmer, M.), s. 183-207 ISBN 978-0-387-84815-0. - (Computer Science) R&D Projects: GA MŠk 2C06001 Institutional research plan: CEZ:AV0Z10750506 Keywords : causality * time series * information theory Subject RIV: BD - Theory of Information http://library.utia.cas.cz/separaty/2009/AS/schindler-causality in time series its detection and quantification by means of information theory .pdf

  10. Use of Multiple Competitors for Quantification of Human Immunodeficiency Virus Type 1 RNA in Plasma

    OpenAIRE

    Vener, Tanya; Nygren, Malin; Andersson, Annalena; Uhlén, Mathias; Albert, Jan; Lundeberg, Joakim

    1998-01-01

    Quantification of human immunodeficiency virus type 1 (HIV-1) RNA in plasma has rapidly become an important tool in basic HIV research and in the clinical care of infected individuals. Here, a quantitative HIV assay based on competitive reverse transcription-PCR with multiple competitors was developed. Four RNA competitors containing identical PCR primer binding sequences as the viral HIV-1 RNA target were constructed. One of the PCR primers was fluorescently labeled, which facilitated discri...

  11. Quantification of uncertainty of performance measures using graph theory

    OpenAIRE

    Lopes, Isabel da Silva; Sousa, Sérgio; Nunes, Eusébio P.

    2013-01-01

    In this paper, the graph theory is used to quantify the uncertainty generated in performance measures during the process of performance measurement. A graph is developed considering all the sources of uncertainty present in this process and their relationship. The permanent function of the matrix associated with the graph is used as the basis for determining an uncertainty index.
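
    The index described here is directly computable. A brute-force sketch follows: the permanent is evaluated by summing over all permutations, which is only feasible for small graphs (the permanent is #P-hard in general). The 4x4 matrix of source weights and interactions is invented for illustration.

        import itertools
        import numpy as np

        # Permanent of the matrix associated with the uncertainty-source graph,
        # used as an uncertainty index in the record above. Brute force, O(n!).
        def permanent(M):
            n = M.shape[0]
            return sum(
                np.prod([M[i, p[i]] for i in range(n)])
                for p in itertools.permutations(range(n))
            )

        # Hypothetical matrix: diagonal = source magnitudes, off-diagonal = interactions.
        M = np.array([
            [0.9, 0.2, 0.0, 0.1],
            [0.3, 0.8, 0.1, 0.0],
            [0.0, 0.2, 0.7, 0.3],
            [0.1, 0.0, 0.2, 0.6],
        ])
        print(f"uncertainty index (permanent): {permanent(M):.4f}")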

  12. Homotopy Type Theory: Univalent Foundations of Mathematics

    OpenAIRE

    The Univalent Foundations Program

    2013-01-01

    Homotopy type theory is a new branch of mathematics, based on a recently discovered connection between homotopy theory and type theory, which brings new ideas into the very foundation of mathematics. On the one hand, Voevodsky's subtle and beautiful "univalence axiom" implies that isomorphic structures can be identified. On the other hand, "higher inductive types" provide direct, logical descriptions of some of the basic spaces and constructions of homotopy theory. Both are ...

  13. Uncertainty Quantification and Propagation in Nuclear Density Functional Theory

    CERN Document Server

    Schunck, N; Higdon, D; Sarich, J; Wild, S M

    2015-01-01

    Nuclear density functional theory (DFT) is one of the main theoretical tools used to study the properties of heavy and superheavy elements, or to describe the structure of nuclei far from stability. While on-going efforts seek to better root nuclear DFT in the theory of nuclear forces [see Duguet et al., this issue], energy functionals remain semi-phenomenological constructions that depend on a set of parameters adjusted to experimental data in finite nuclei. In this paper, we review recent efforts to quantify the related uncertainties, and propagate them to model predictions. In particular, we cover the topics of parameter estimation for inverse problems, statistical analysis of model uncertainties and Bayesian inference methods. Illustrative examples are taken from the literature.

  14. Names and Binding in Type Theory

    OpenAIRE

    Schöpp, Ulrich

    2006-01-01

    Names and name-binding are useful concepts in the theory and practice of formal systems. In this thesis we study them in the context of dependent type theory. We propose a novel dependent type theory with primitives for the explicit handling of names. As the main application, we consider programming and reasoning with abstract syntax involving variable binders. Gabbay and Pitts have shown that Fraenkel Mostowski (FM) set theory models a notion of name using which informal work with names ...

  15. On gauge theories of general type

    International Nuclear Information System (INIS)

    The dependence of the nonrenormalized effective action and the generating functional of vertex functions on canonical transformations is studied. The construction of the effective action in gauge theories of general type is described.

  16. Modeling Martin-Löf Type Theory in Categories

    OpenAIRE

    Lamarche, François

    2012-01-01

    We present a model of Martin-Löf type theory that includes both dependent products and the identity type. It is based on the category of small categories, with cloven Grothendieck bifibrations used to model dependent types. The identity type is modeled by a path functor that seems to have independent interest from the point of view of homotopy theory. We briefly describe this model's strengths and limitations.

  17. Schwarz Type Topological Quantum Field Theories

    CERN Document Server

    Kaul, R K; Ramadevi, P

    2005-01-01

    Topological quantum field theories can be used to probe topological properties of low dimensional manifolds. A class of these theories known as Schwarz type theories comprises Chern-Simons theories and BF theories. In three dimensions both capture the properties of knots and links, leading to invariants characterising them. These can also be used to construct three-manifold invariants. Three dimensional gravity is described by these field theories. BF theories exist also in higher dimensions. In four dimensions, these describe two-dimensional generalization of knots as well as Donaldson invariants.

  18. Extending Type Theory with Forcing

    OpenAIRE

    Jaber, Guilhem; Tabareau, Nicolas; Sozeau, Matthieu

    2012-01-01

    This paper presents an intuitionistic forcing translation for the Calculus of Constructions (CoC), a translation that corresponds to an internalization of the presheaf construction in CoC. Depending on the chosen set of forcing conditions, the resulting type system can be extended with extra logical principles. The translation is proven correct, in the sense that it preserves type checking, and has been implemented in Coq. As a case study, we show how the forcing translation on integers (which ...

  19. Completeness in Hybrid Type Theory

    DEFF Research Database (Denmark)

    Areces, Carlos; Blackburn, Patrick Rowan

    2014-01-01

    We show that basic hybridization (adding nominals and @ operators) makes it possible to give straightforward Henkin-style completeness proofs even when the modal logic being hybridized is higher-order. The key ideas are to add nominals as expressions of type t, and to extend to arbitrary types the way we interpret @i in propositional and first-order hybrid logic. This means: interpret @i αa, where αa is an expression of any type a, as an expression of type a that rigidly returns the value that αa receives at the i-world. The axiomatization and completeness proofs are generalizations of those found in propositional and first-order hybrid logic, and (as is usual in hybrid logic) we automatically obtain a wide range of completeness results for stronger logics and languages. Our approach is deliberately low-tech. We don't, for example, make use of Montague's intensional type s, or Fitting-style intensional models; we build, as simply as we can, hybrid logic over Henkin's logic.

  20. Type II string theory and modularity

    OpenAIRE

    Kriz, Igor; Sati, Hisham

    2005-01-01

    This paper, in a sense, completes a series of three papers. In the previous two hep-th/0404013, hep-th/0410293, we have explored the possibility of refining the K-theory partition function in type II string theories using elliptic cohomology. In the present paper, we make that more concrete by defining a fully quantized free field theory based on elliptic cohomology of 10-dimensional spacetime. Moreover, we describe a concrete scenario how this is related to compactification...

  1. Type II string theory and modularity

    International Nuclear Information System (INIS)

    This paper, in a sense, completes a series of three papers. In the previous two, we have explored the possibility of refining the K-theory partition function in type II string theories using elliptic cohomology. In the present paper, we make that more concrete by defining a fully quantized free field theory based on elliptic cohomology of 10-dimensional spacetime. Moreover, we describe a concrete scenario how this is related to compactification of F-theory on an elliptic curve leading to IIA and IIB theories. We propose an interpretation of the elliptic curve in the context of elliptic cohomology. We discuss the possibility of orbifolding of the elliptic curves and derive certain properties of F-theory. We propose a link of this to type IIB modularity, the structure of the topological lagrangian of M-theory, and Witten's index of loop space Dirac operators. The discussion suggests an S¹-lift of type IIB and an F-theoretic model for type I obtained by orbifolding that for type IIB.

  2. Type II string theory and modularity

    Energy Technology Data Exchange (ETDEWEB)

    Kriz, Igor [Department of Mathematics, University of Michigan, Ann Arbor, MI 48109 (United States); Sati, Hisham [Department of Physics, University of Adelaide, Adelaide, SA 5005 (Australia); Department of Pure Mathematics, University of Adelaide, Adelaide, SA 5005 (Australia)

    2005-08-01

    This paper, in a sense, completes a series of three papers. In the previous two, we have explored the possibility of refining the K-theory partition function in type II string theories using elliptic cohomology. In the present paper, we make that more concrete by defining a fully quantized free field theory based on elliptic cohomology of 10-dimensional spacetime. Moreover, we describe a concrete scenario how this is related to compactification of F-theory on an elliptic curve leading to IIA and IIB theories. We propose an interpretation of the elliptic curve in the context of elliptic cohomology. We discuss the possibility of orbifolding of the elliptic curves and derive certain properties of F-theory. We propose a link of this to type IIB modularity, the structure of the topological lagrangian of M-theory, and Witten's index of loop space Dirac operators. The discussion suggests an S¹-lift of type IIB and an F-theoretic model for type I obtained by orbifolding that for type IIB.

  3. Applicability of Information Theory to the Quantification of Responses to Anthropogenic Noise by Southeast Alaskan Humpback Whales

    Directory of Open Access Journals (Sweden)

    J. Ellen Blue

    2008-05-01

    We assess the effectiveness of applying information theory to the characterization and quantification of the effects of anthropogenic vessel noise on humpback whale (Megaptera novaeangliae) vocal behavior in and around Glacier Bay, Alaska. Vessel noise has the potential to interfere with the complex vocal behavior of these humpback whales, which could have direct consequences on their feeding behavior and thus ultimately on their health and reproduction. Humpback whale feeding calls recorded during conditions of high vessel-generated noise and lower levels of background noise are compared for differences in acoustic structure, use, and organization using information theoretic measures. We apply information theory in a self-referential manner (i.e., orders of entropy) to quantify the changes in signaling behavior. We then compare this with the reduction in channel capacity due to noise in Glacier Bay itself, treating it as a (Gaussian) noisy channel. We find that high vessel noise is associated with an increase in the rate and repetitiveness of sequential use of feeding call types in our averaged sample of humpback whale vocalizations, indicating that vessel noise may be modifying the patterns of use of feeding calls by the endangered humpback whales in Southeast Alaska. The information theoretic approach suggested herein can make a reliable quantitative measure of such relationships and may also be adapted for wider application to many species where environmental noise is thought to be a problem.
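
    The two quantities the study relies on, entropic orders of a call sequence and the capacity of a Gaussian noisy channel, can be sketched in a few lines. The call sequence, bandwidth, and signal-to-noise ratios below are invented; only the formulas follow the abstract.

        import math
        from collections import Counter

        # First-order entropy H1 and a digram-based second-order entropy H2
        # (entropy per symbol of consecutive call pairs) for a call sequence.
        def entropy(seq):
            counts, n = Counter(seq), len(seq)
            return -sum((c / n) * math.log2(c / n) for c in counts.values())

        calls = list("ABABBACAABABAB")               # hypothetical feeding-call sequence
        digrams = list(zip(calls, calls[1:]))
        print(f"H1 = {entropy(calls):.3f} bits/call")
        print(f"H2 = {entropy(digrams) / 2:.3f} bits/call")

        # Shannon capacity of a Gaussian noisy channel, C = B * log2(1 + S/N):
        B = 1000.0                                   # bandwidth in Hz, invented
        for label, snr in [("quiet", 10.0), ("vessel noise", 1.0)]:
            print(f"capacity ({label}): {B * math.log2(1 + snr):.0f} bits/s")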

  4. Uncertainty Quantification for Nuclear Density Functional Theory and Information Content of New Measurements

    CERN Document Server

    McDonnell, J D; Higdon, D; Sarich, J; Wild, S M; Nazarewicz, W

    2015-01-01

    Statistical tools of uncertainty quantification can be used to assess the information content of measured observables with respect to present-day theoretical models; to estimate model errors and thereby improve predictive capability; to extrapolate beyond the regions reached by experiment; and to provide meaningful input to applications and planned measurements. To showcase new opportunities offered by such tools, we make a rigorous analysis of theoretical statistical uncertainties in nuclear density functional theory using Bayesian inference methods. By considering the recent mass measurements from the Canadian Penning Trap at Argonne National Laboratory, we demonstrate how the Bayesian analysis and a direct least-squares optimization, combined with high-performance computing, can be used to assess the information content of the new data with respect to a model based on the Skyrme energy density functional approach. Employing the posterior probability distribution computed with a Gaussian process emulator, w...

  5. Explicit Substitutions for Contextual Type Theory

    CERN Document Server

    Abel, Andreas; DOI: 10.4204/EPTCS.34.3

    2010-01-01

    In this paper, we present an explicit substitution calculus which distinguishes between ordinary bound variables and meta-variables. Its typing discipline is derived from contextual modal type theory. We first present a dependently typed lambda calculus with explicit substitutions for ordinary variables and explicit meta-substitutions for meta-variables. We then present a weak head normalization procedure which performs both substitutions lazily and in a single pass thereby combining substitution walks for the two different classes of variables. Finally, we describe a bidirectional type checking algorithm which uses weak head normalization and prove soundness.

  6. Penner Type Ensemble for Gauge Theories Revisited

    CERN Document Server

    Krefl, Daniel

    2012-01-01

    The Penner type beta-ensemble for Omega-deformed N=2 SU(2) gauge theory with two massless flavors arising as a limiting case from the AGT conjecture is considered. The partition function can be calculated perturbatively in a saddle-point approximation. A large N limit reproduces the gauge theory partition function expanded in a strong coupling regime, for any beta and beyond tree-level, confirming previous results obtained via special geometry and the holomorphic anomaly equation. The leading terms and gap of the gauge theory free energy at the monopole/dyon point follow as a corollary.

  7. Theory of Type 3 and Type 2 Solar Radio Emissions

    Science.gov (United States)

    Robinson, P. A.; Cairns, I. H.

    2000-01-01

    The main features of some current theories of type III and type II bursts are outlined. Among the most common solar radio bursts, type III bursts are produced at frequencies of 10 kHz to a few GHz when electron beams are ejected from solar active regions, entering the corona and solar wind at typical speeds of 0.1c. These beams provide energy to generate Langmuir waves via a streaming instability. In the current stochastic-growth theory, Langmuir waves grow in clumps associated with random low-frequency density fluctuations, leading to the observed spiky waves. Nonlinear wave-wave interactions then lead to secondary emission of observable radio waves near the fundamental and harmonic of the plasma frequency. Subsequent scattering processes modify the dynamic radio spectra, while back-reaction of Langmuir waves on the beam causes it to fluctuate about a state of marginal stability. Theories based on these ideas can account for the observed properties of type III bursts, including the in situ waves and the dynamic spectra of the radiation. Type II bursts are associated with shock waves propagating through the corona and interplanetary space and radiating from roughly 30 kHz to 1 GHz. Their basic emission mechanisms are believed to be similar to those of type III events and radiation from Earth's foreshock. However, several sub-classes of type II bursts may exist with different source regions and detailed characteristics. Theoretical models for type II bursts are briefly reviewed, focusing on a model with emission from a foreshock region upstream of the shock for which observational evidence has just been reported.

  8. Type II string theory and modularity

    CERN Document Server

    Kriz, I; Kriz, Igor; Sati, Hisham

    2005-01-01

    This paper, in a sense, completes a series of three papers. In the previous two hep-th/0404013, hep-th/0410293, we have explored the possibility of refining the K-theory partition function in type II string theories using elliptic cohomology. In the present paper, we make that more concrete by defining a fully quantized free field theory based on elliptic cohomology of 10-dimensional spacetime. Moreover, we describe a concrete scenario how this is related to compactification of F-theory on an elliptic curve leading to IIA and IIB theories. We propose an interpretation of the elliptic curve in the context of elliptic cohomology. We discuss the possibility of orbifolding of the elliptic curves and derive certain properties of F-theory. We propose a link of this to type IIB modularity, the structure of the topological Lagrangian of M-theory, and Witten's index of loop space Dirac operators. We also discuss possible implications of physics in twelve dimensions.

  9. Uncertainty Quantification for Nuclear Density Functional Theory and Information Content of New Measurements

    Science.gov (United States)

    McDonnell, J. D.; Schunck, N.; Higdon, D.; Sarich, J.; Wild, S. M.; Nazarewicz, W.

    2015-03-01

    Statistical tools of uncertainty quantification can be used to assess the information content of measured observables with respect to present-day theoretical models, to estimate model errors and thereby improve predictive capability, to extrapolate beyond the regions reached by experiment, and to provide meaningful input to applications and planned measurements. To showcase new opportunities offered by such tools, we make a rigorous analysis of theoretical statistical uncertainties in nuclear density functional theory using Bayesian inference methods. By considering the recent mass measurements from the Canadian Penning Trap at Argonne National Laboratory, we demonstrate how the Bayesian analysis and a direct least-squares optimization, combined with high-performance computing, can be used to assess the information content of the new data with respect to a model based on the Skyrme energy density functional approach. Employing the posterior probability distribution computed with a Gaussian process emulator, we apply the Bayesian framework to propagate theoretical statistical uncertainties in predictions of nuclear masses, two-neutron dripline, and fission barriers. Overall, we find that the new mass measurements do not impose a constraint that is strong enough to lead to significant changes in the model parameters. The example discussed in this study sets the stage for quantifying and maximizing the impact of new measurements with respect to current modeling and guiding future experimental efforts, thus enhancing the experiment-theory cycle in the scientific method.
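
    A Gaussian process emulator of the kind named here can be sketched compactly: fit a GP to a handful of expensive model runs, then push parameter samples through the cheap emulator. The one-dimensional toy model, kernel hyperparameters, and parameter distribution are all invented; a real application would emulate the nuclear-mass model over the full Skyrme parameter space.

        import numpy as np

        def rbf(a, b, ell=0.5, sigma=1.0):
            """Squared-exponential covariance between two 1-D point sets."""
            return sigma**2 * np.exp(-0.5 * ((a[:, None] - b[None, :]) / ell) ** 2)

        rng = np.random.default_rng(1)
        x_train = np.linspace(-1, 1, 8)          # design points in parameter space
        y_train = np.sin(3 * x_train)            # stand-in for expensive model output

        K = rbf(x_train, x_train) + 1e-6 * np.eye(len(x_train))   # jitter for stability
        L = np.linalg.cholesky(K)
        alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))

        # Propagate an assumed posterior over the parameter through the emulator:
        theta = rng.normal(0.2, 0.1, size=50_000)
        y_pred = rbf(theta, x_train) @ alpha     # emulator posterior mean
        print(f"predicted observable: {y_pred.mean():.3f} +/- {y_pred.std():.3f}")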

  10. HPAEC-PAD quantification of Haemophilus influenzae type b polysaccharide in upstream and downstream samples.

    Science.gov (United States)

    van der Put, Robert M F; de Haan, Alex; van den IJssel, Jan G M; Hamidi, Ahd; Beurret, Michel

    2014-07-19

    Due to the rapidly increasing introduction of Haemophilus influenzae type b (Hib) and other conjugate vaccines worldwide during the last decade, reliable and robust analytical methods are needed for the quantitative monitoring of intermediate samples generated during fermentation (upstream processing, USP) and purification (downstream processing, DSP) of polysaccharide vaccine components. This study describes the quantitative characterization of in-process control (IPC) samples generated during the fermentation and purification of the capsular polysaccharide (CPS), polyribosyl-ribitol-phosphate (PRP), derived from Hib. Reliable quantitative methods are necessary for all stages of production; otherwise, accurate process monitoring and validation is not possible. Prior to the availability of high performance anion exchange chromatography methods, this polysaccharide was predominantly quantified either with immunochemical methods or with the colorimetric orcinol method, which shows interference from fermentation medium components and reagents used during purification. In addition to an improved high performance anion exchange chromatography-pulsed amperometric detection (HPAEC-PAD) method, using a modified gradient elution, both the orcinol assay and high performance size exclusion chromatography (HPSEC) analyses were evaluated. For DSP samples, it was found that the correlation between the results obtained by HPAEC-PAD specific quantification of the PRP monomeric repeat unit released by alkaline hydrolysis and those from the orcinol method was high (R² = 0.8762), and that it was lower between HPAEC-PAD and HPSEC results. Additionally, HPSEC analysis of USP samples yielded surprisingly comparable results to those obtained by HPAEC-PAD. In the early part of the fermentation, medium components interfered with the different types of analysis, but quantitative HPSEC data could still be obtained, although lacking the specificity of the HPAEC-PAD method. Thus, the HPAEC-PAD method has the advantage of giving a specific response compared to the orcinol assay and HPSEC, and does not show interference from various components that can be present in intermediate and purified PRP samples. PMID:25045809

  11. Hoare type theory, polymorphism and separation

    DEFF Research Database (Denmark)

    Nanevski, Alexandar; Morrisett, J. Gregory

    2008-01-01

    We consider the problem of reconciling a dependently typed functional language with imperative features such as mutable higher-order state, pointer aliasing, and nontermination. We propose Hoare type theory (HTT), which incorporates Hoare-style specifications into types, making it possible to statically track and enforce correct use of side effects. The main feature of HTT is the Hoare type {P}x:A{Q} specifying computations with precondition P and postcondition Q that return a result of type A. Hoare types can be nested, combined with other types, and abstracted, leading to a smooth integration with higher-order functions and type polymorphism. We further show that in the presence of type polymorphism, it becomes possible to interpret the Hoare types in the “small footprint” manner, as advocated by separation logic, whereby specifications tightly describe the state required by the computation. We establish that HTT is sound and compositional, in the sense that separate verifications of individual program components suffice to ensure the correctness of the composite program.

  12. Predictions for orientifold field theories from type 0' string theory

    CERN Document Server

    Armoni, A

    2005-01-01

    Two predictions about finite-N non-supersymmetric "orientifold field theories" are made by using the dual type 0' string theory on C^3 / Z_2 x Z_2 orbifold singularity. First, the mass ratio between the lowest pseudoscalar and scalar color-singlets is estimated to be equal to the ratio between the axial anomaly and the scale anomaly at strong coupling, M_- / M_+ ~ C_- / C_+. Second, the ratio between the domain wall tension and the value of the quark condensate is computed.

  13. Quantification of Spatial Parameters in 3D Cellular Constructs Using Graph Theory

    Directory of Open Access Journals (Sweden)

    G. E. Plopper

    2009-01-01

    Multispectral three-dimensional (3D) imaging provides spatial information for biological structures that cannot be measured by traditional methods. This work presents a method of tracking 3D biological structures to quantify changes over time using graph theory. Cell-graphs were generated based on the pairwise distances, in 3D Euclidean space, between nuclei during collagen I gel compaction. From these graphs, quantitative features are extracted that measure both the global topography and the frequently occurring local structures of the “tissue constructs.” The feature trends can be controlled by manipulating compaction through cell density and are significant when compared to random graphs. This work presents a novel methodology to track a simple 3D biological event and quantitatively analyze the underlying structural change. Further application of this method will allow for the study of complex biological problems that require the quantification of temporal-spatial information in 3D and establish a new paradigm in understanding structure-function relationships.
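
    The cell-graph construction itself reduces to thresholding a pairwise distance matrix. The sketch below links nuclei closer than a chosen radius and reports two simple global features; the positions and the linking radius are invented, and the paper's richer local-structure features are omitted.

        import numpy as np

        # Build a cell-graph from 3-D nucleus centroids: nodes are nuclei,
        # edges join pairs closer than r_link in Euclidean distance.
        rng = np.random.default_rng(2)
        pts = rng.uniform(0, 100, size=(60, 3))      # hypothetical centroids (um)
        r_link = 20.0                                # hypothetical linking radius

        d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
        A = (d < r_link) & ~np.eye(len(pts), dtype=bool)   # adjacency matrix

        n_edges = int(A.sum()) // 2
        print(f"edges: {n_edges}, mean degree: {A.sum(axis=1).mean():.2f}")
        print(f"edge density: {n_edges / (len(pts) * (len(pts) - 1) / 2):.3f}")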

  14. Quantification analysis of CT for aphasic patients

    International Nuclear Information System (INIS)

    Using a microcomputer, the loci and extents of the lesions, as demonstrated by computed tomography in 44 aphasic patients with various types of aphasia, were superimposed onto standardized matrices composed of 10 slices with 3,000 points (50 by 60). The relationships between the foci of the lesions and the types of aphasia were investigated on Slices 3, 4, 5, and 6 using quantification theory, Type 3 (pattern analysis). Some regularities were observed on Slices 3, 4, 5, and 6. The group of patients with Broca's aphasia and the group with Wernicke's aphasia were generally separated on the 1st and 2nd components of quantification theory, Type 3. The group with global aphasia, on the other hand, lay between the groups with Broca's and Wernicke's aphasia. The group of patients with amnestic aphasia had no specific findings, and the group with conduction aphasia lay near those with Wernicke's aphasia. These results provide a basis for applying quantification theory, Type 2 (discrimination analysis) and quantification theory, Type 1 (regression analysis). (author)
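
    Quantification theory, Type 3 (Hayashi's pattern analysis) is closely related to correspondence analysis, so the patient scores used to separate the aphasia groups can be sketched via an SVD of the standardized patient-by-lesion-site matrix. The 6x5 binary matrix below is invented; the study's matrices had 3,000 points per slice.

        import numpy as np

        # Correspondence-analysis-style scoring of a binary patients x lesion-sites
        # matrix, in the spirit of quantification theory, Type 3.
        X = np.array([
            [1, 1, 0, 0, 0],
            [1, 1, 1, 0, 0],
            [0, 1, 1, 0, 0],
            [0, 0, 0, 1, 1],
            [0, 0, 1, 1, 1],
            [0, 0, 0, 1, 1],
        ], dtype=float)

        P = X / X.sum()
        r, c = P.sum(axis=1), P.sum(axis=0)
        S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))  # standardized residuals
        U, sv, Vt = np.linalg.svd(S, full_matrices=False)

        # Patient scores on the first two components; similar patients cluster.
        row_scores = U[:, :2] / np.sqrt(r)[:, None] * sv[:2]
        print(np.round(row_scores, 3))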

  15. Uncertainty Quantification of the Pion-Nucleon Low-Energy Coupling Constants up to Fourth Order in Chiral Perturbation Theory

    CERN Document Server

    Wendt, K A; Ekström, A

    2014-01-01

    We extract the statistical uncertainties for the pion-nucleon ($\pi N$) low-energy constants (LECs) up to fourth order $\mathcal{O}(Q^4)$ in the chiral expansion of the nuclear effective Lagrangian. The LECs are optimized with respect to experimental scattering data. For comparison, we also present an uncertainty quantification that is based solely on $\pi N$ scattering phase shifts. Statistical errors on the LECs are critical in order to estimate the subsequent uncertainties in ab initio modeling of light and medium-mass nuclei which exploits chiral effective field theory. As an example of this, we present the first complete predictions, with uncertainty quantification, of peripheral phase shifts of elastic proton-neutron scattering.

  16. Canonical quantization of gauge theories of special type

    International Nuclear Information System (INIS)

    A general description of a class of theories (theories of special type), to which most popular physical field theories belong, is presented. For special type theories, different quantization methods are considered and their equivalence is proved. The explicit form of the gauge transformation generators is determined, and the equivalence of the determination of physical functions in the Lagrangian and Hamiltonian formalisms is also proved.

  17. Quantification of left to right shunts in adults with atrial septal defects of secundum type, using radionuclide technique

    International Nuclear Information System (INIS)

    Quantification of left to right shunt was carried out in 15 adult patients with a suspected ostium secundum atrial septal defect (ASD II). Radionuclide shunt quantitation correlated well with the results of right heart catheterization. The radionuclide technique failed in two patients for technical reasons, but revealed no false negative or false positive results when technically satisfactory. The diagnosis was confirmed at operation. It is concluded that the radionuclide technique is a useful and reliable method which can also be used at follow-up after surgery in patients with atrial septal defects of secundum type. 20 refs., 3 figs., 1 tab

  18. Determination and quantification of collagen types by LC-MS/MS and CE-MS/MS.

    Czech Academy of Sciences Publication Activity Database

    Mikšík, Ivan; Pataridis, Statis; Eckhardt, Adam; Lacinová, Kateřina; Sedláková, Pavla

    Freiberg : Forschungsinstitut für Leder und Kunststoffbahnen (FILK) gGmbH, 2012, s. 131-141. ISBN 978-3-00-039421-8. [Freiberg Collagen Symposium /5./. Freiberg (DE), 04.09.2012-05.09.2012] R&D Projects: GA ČR(CZ) GA203/08/1428; GA ČR(CZ) GAP206/12/0453 Institutional research plan: CEZ:AV0Z50110509 Institutional support: RVO:67985823 Keywords: collagen * protein quantification Subject RIV: CB - Analytical Chemistry, Separation

  19. Uncertainty Quantification of Composite Laminate Damage with the Generalized Information Theory

    Energy Technology Data Exchange (ETDEWEB)

    J. Lucero; F. Hemez; T. Ross; K. Kline; J. Hundhausen; T. Tippetts

    2006-05-01

    This work presents a survey of five theories to assess the uncertainty of projectile impact induced damage on multi-layered carbon-epoxy composite plates. Because the types of uncertainty dealt with in this application are multiple (variability, ambiguity, and conflict) and because the data sets collected are sparse, characterizing the amount of delamination damage with probability theory alone is possible but incomplete. This motivates the exploration of methods contained within a broad Generalized Information Theory (GIT) that rely on less restrictive assumptions than probability theory. Probability, fuzzy sets, possibility, and imprecise probability (probability boxes (p-boxes) and Dempster-Shafer) are used to assess the uncertainty in composite plate damage. Furthermore, this work highlights the usefulness of each theory. The purpose of the study is not to compare directly the different GIT methods but to show that they can be deployed on a practical application and to compare the assumptions upon which these theories are based. The data sets consist of experimental measurements and finite element predictions of the amount of delamination and fiber splitting damage as multilayered composite plates are impacted by a projectile at various velocities. The physical experiments consist of using a gas gun to impact suspended plates with a projectile accelerated to prescribed velocities, then, taking ultrasound images of the resulting delamination. The nonlinear, multiple length-scale numerical simulations couple local crack propagation implemented through cohesive zone modeling to global stress-displacement finite element analysis. The assessment of damage uncertainty is performed in three steps by, first, considering the test data only; then, considering the simulation data only; finally, performing an assessment of total uncertainty where test and simulation data sets are combined. This study leads to practical recommendations for reducing the uncertainty and improving the prediction accuracy of the damage modeling and finite element simulation.
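
    Of the GIT methods surveyed, Dempster-Shafer evidence combination is the easiest to illustrate compactly. The sketch below combines two bodies of evidence about a damage state with Dempster's rule; the frame of discernment and all mass assignments are invented, not taken from the study's data.

        from itertools import product

        # Dempster's rule: m(C) is proportional to the sum over A intersect B = C
        # of m1(A)*m2(B), renormalized by the total non-conflicting mass.
        def combine(m1, m2):
            combined, conflict = {}, 0.0
            for (a, wa), (b, wb) in product(m1.items(), m2.items()):
                c = a & b
                if c:
                    combined[c] = combined.get(c, 0.0) + wa * wb
                else:
                    conflict += wa * wb
            return {k: v / (1.0 - conflict) for k, v in combined.items()}

        # Hypothetical evidence from tests vs. simulation over {low, high} damage:
        m_test = {frozenset({"low"}): 0.6, frozenset({"low", "high"}): 0.4}
        m_sim = {frozenset({"low"}): 0.3, frozenset({"high"}): 0.3,
                 frozenset({"low", "high"}): 0.4}

        for focal, mass in sorted(combine(m_test, m_sim).items(), key=str):
            print(f"m({set(focal)}) = {mass:.3f}")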

  20. Weak omega-categories from intensional type theory

    OpenAIRE

    Lumsdaine, Peter LeFanu

    2008-01-01

    We show that for any type in Martin-Löf Intensional Type Theory, the terms of that type and its higher identity types form a weak omega-category in the sense of Leinster. Precisely, we construct a contractible globular operad of definable composition laws, and give an action of this operad on the terms of any type and its identity types.

  1. Matrix theory of type IIB plane wave from membranes

    International Nuclear Information System (INIS)

    We write down a maximally supersymmetric one-parameter deformation of the field theory action of Bagger and Lambert. We show that this theory on R x T2 is invariant under the superalgebra of the maximally supersymmetric Type IIB plane wave. It is argued that this theory holographically describes the Type IIB plane wave in the discrete light-cone quantization (DLCQ).

  2. Distributions of countable models of theories with continuum many types

    OpenAIRE

    Popkov, Roman A.; Sudoplatov, Sergey V.

    2012-01-01

    We present distributions of countable models and corresponding structural characteristics of complete theories with continuum many types: for prime models over finite sets relative to Rudin-Keisler preorders, for limit models over types and over sequences of types, and for other countable models of the theory.

  3. The dopant type and amount governs the electrochemical performance of graphene platforms for the antioxidant activity quantification.

    Science.gov (United States)

    Hui, Kai Hwee; Ambrosi, Adriano; Sofer, Zdeněk; Pumera, Martin; Bonanni, Alessandra

    2015-05-01

    Graphene doped with heteroatoms can show new or improved properties as compared to the original undoped material. It has been reported that the type of heteroatoms and the doping conditions can have a strong influence on the electronic and electrochemical properties of the resulting material. Here, we wish to compare the electrochemical behavior of two n-type and two p-type doped graphenes, namely boron-doped graphenes and nitrogen-doped graphenes containing different amounts of heteroatoms. We show that the boron-doped graphene containing a higher amount of dopants provides the best electroanalytical performance in terms of calibration sensitivity, selectivity and linearity of response for the detection of gallic acid normally used as the standard probe for the quantification of antioxidant activity of food and beverages. Our findings demonstrate that the type and amount of heteroatoms used for the doping have a profound influence on the electrochemical detection of gallic acid rather than the structural properties of the materials such as amounts of defects, oxygen functionalities and surface area. This finding has a profound influence on the application of doped graphenes in the field of analytical chemistry. PMID:25920751

  4. Godel Type Metrics in Einstein-Aether Theory

    OpenAIRE

    Gurses, Metin

    2008-01-01

    Aether theory is introduced to implement the violation of Lorentz invariance in general relativity. For this purpose, a unit timelike vector field is introduced into the theory in addition to the metric tensor. Aether theory contains four free parameters which must satisfy some inequalities in order for the theory to be consistent with the observations. We show that the Gödel-type metrics of general relativity are also exact solutions of the Einstein-aether theory. The only fi...

  5. Quantification of age-related changes in the structure model type and trabecular thickness of human tibial cancellous bone

    DEFF Research Database (Denmark)

    Ding, Ming; Hvid, I

    2000-01-01

    Structure model type and trabecular thickness are important characteristics in describing cancellous bone architecture. It has been qualitatively observed that a radical change of trabeculae from plate-like to rod-like occurs in aging, bone remodeling, and osteoporosis. Thickness of trabeculae has traditionally been measured using model-based histomorphometric methods on two-dimensional (2-D) sections. However, no quantitative study has been published based on three-dimensional (3-D) methods on the age-related changes in structure model type and trabecular thickness for human peripheral (tibial) cancellous bone. In this study, 160 human proximal tibial cancellous bone specimens from 40 normal donors, aged 16 to 85 years, were collected. These specimens were micro-computed tomography (micro-CT) scanned, then the micro-CT images were segmented using optimal thresholds. From accurate 3-D data sets, structure model type and trabecular thickness were quantified by means of novel 3-D methods. Structure model type was assessed by calculating the structure model index (SMI). The SMI was quantified based on a differential analysis of the triangulated bone surface of a structure. This technique allows quantification of structure model type, such as plate or rod objects, or a mixture of plates and rods. Trabecular thickness is calculated directly from 3-D images, which is especially important for an a priori unknown or changing structure. Furthermore, 2-D trabecular thickness was also calculated based on the plate model. Our results showed that structure model type changed towards more rod-like in the elderly, and that trabecular thickness declined significantly with age. These changes become significant after 80 years of age for human tibial cancellous bone, whereas both properties seem to remain relatively unchanged between 20 and 80 years. Although a fairly close relationship was seen between 3-D trabecular thickness and 2-D trabecular thickness, real 3-D trabecular thickness was significantly underestimated by the 2-D method.
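
    The structure model index mentioned here has a closed form, SMI = 6·V·S′/S², where S′ is the rate of change of the surface area S under an infinitesimal dilation of the structure (ideal values: 0 for plates, 3 for cylinders, 4 for spheres). The sketch below estimates it on a voxelized sphere with a one-voxel dilation; this crude finite-difference stand-in for the paper's triangulated-surface analysis is an assumption of the example, and it requires scipy and scikit-image.

        import numpy as np
        from scipy import ndimage
        from skimage import measure

        def surface_area(vol):
            """Triangulate the boundary (marching cubes) and sum triangle areas."""
            verts, faces, _, _ = measure.marching_cubes(vol.astype(np.float32), level=0.5)
            return measure.mesh_surface_area(verts, faces)

        z, y, x = np.ogrid[-32:32, -32:32, -32:32]
        sphere = (x**2 + y**2 + z**2) <= 20**2       # test structure, radius 20 voxels

        V = float(sphere.sum())                      # volume in voxels
        S = surface_area(sphere)
        dS = surface_area(ndimage.binary_dilation(sphere)) - S   # ~ S' per unit dilation
        print(f"SMI estimate for a sphere: {6.0 * V * dS / S**2:.2f} (ideal: 4)")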

  6. On the strength of proof-irrelevant type theories

    CERN Document Server

    Werner, Benjamin

    2008-01-01

    We present a type theory with some proof-irrelevance built into the conversion rule. We argue that this feature is useful when type theory is used as the logical formalism underlying a theorem prover. We also show a close relation with the subset types of the theory of PVS. We show that in these theories, because of the additional extensionality, the axiom of choice implies the decidability of equality, that is, almost classical logic. Finally, we describe a simple set-theoretic semantics.

  7. Noninvasive quantification of metabotropic glutamate receptor type 1 with [¹¹C]ITDM: a small-animal PET study.

    Science.gov (United States)

    Yamasaki, Tomoteru; Fujinaga, Masayuki; Yui, Joji; Ikoma, Yoko; Hatori, Akiko; Xie, Lin; Wakizaka, Hidekatsu; Kumata, Katsushi; Nengaki, Nobuki; Kawamura, Kazunori; Zhang, Ming-Rong

    2014-04-01

    Because of its role in multiple central nervous system (CNS) pathways, metabotropic glutamate receptor type 1 (mGluR1) is a crucial target in the development of pharmaceuticals for CNS disorders. N-[4-[6-(isopropylamino)-pyrimidin-4-yl]-1,3-thiazol-2-yl]-N-methyl-4-[(11)C]-methylbenzamide ([(11)C]ITDM) was recently developed as a positron emission tomography (PET) ligand for mGluR1. For measurement of the binding potential (BPND) of [(11)C]ITDM to mGluR1, reference tissue methods that replace measurement of the arterial input function are desirable. In this study, we evaluated a noninvasive quantification method for mGluR1 with [(11)C]ITDM, demonstrating its accuracy using the Huntington disease model R6/2 mice. The BPND measurements based on the Logan reference (Logan Ref) method closely approximated those based on the arterial input method. We performed PET analysis with Logan Ref to assess its accuracy in quantifying the decline of mGluR1 expression in R6/2 mice. Significant decreases in BPND values in R6/2 mice were detected in the cerebellum, thalamus, striatum, and cingulate cortex. We compared autoradiographs of R6/2 mouse brain sections with immunohistochemical images and found a close correlation between changes in radioactive signal intensity and the degree of mGluR1 expression. In conclusion, [(11)C]ITDM-PET is a promising tool for in vivo quantification of mGluR1 expression. PMID:24398932
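
    The Logan reference (Logan Ref) method reduces to a late-time linear regression: plotting ∫C_T/C_T against ∫C_R/C_T gives a slope equal to the distribution volume ratio (DVR), with BPND = DVR - 1. The sketch below uses invented time-activity curves and crude cumulative-sum integrals, and it neglects the k2' correction term; none of these choices are taken from the study.

        import numpy as np

        t = np.linspace(0.1, 90, 60)                         # minutes
        C_R = 5.0 * (np.exp(-0.05 * t) - np.exp(-0.5 * t))   # reference-region TAC
        C_T = 2.0 * C_R + 4.0 * (np.exp(-0.03 * t) - np.exp(-0.3 * t))  # target TAC

        dt = t[1] - t[0]
        int_T, int_R = np.cumsum(C_T) * dt, np.cumsum(C_R) * dt   # running integrals

        late = t > 30                                        # pseudo-equilibrium window t*
        slope, intercept = np.polyfit(int_R[late] / C_T[late],
                                      int_T[late] / C_T[late], 1)
        print(f"DVR = {slope:.2f}, BP_ND = {slope - 1:.2f}")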

  8. THE TOTAL DNA QUANTIFICATION FOR THREE TYPES OF TISSUE FROM CARASSIUS AURATUS GIBELIO BLOCH

    OpenAIRE

    Zenovia Olteanu; Cristian Campeanu; Lucian Gorgan

    2005-01-01

    We established the total DNA quantity and the variability intervals for three types of tissue (muscle, liver and spleen) from five individuals of Carassius auratus gibelio Bloch, in order to characterize this species with respect to this parameter.

  9. Radiochemical Separation and Quantification of Tritium in Metallic Radwastes Generated from CANDU Type NPP - 13279

    Energy Technology Data Exchange (ETDEWEB)

    Ahn, H.J.; Choi, K.C.; Choi, K.S.; Park, T.H.; Park, Y.J.; Song, K. [Korea Atomic Energy Research Institute, P.O. Box 105, Yuseong, Daejeon, 305-330 (Korea, Republic of)

    2013-07-01

    As a destructive quantification method of ³H in low and intermediate level radwastes, bomb oxidation, sample oxidation, and wet oxidation methods have been introduced. These methods have some merits and demerits for the radiochemical separation of ³H radionuclides. That is, since the bomb oxidation and sample oxidation methods are techniques using heating at high temperature, the separation methods of the radionuclides are relatively simple. However, since the ³H radionuclide has a property of diffusing deeply into the interior of metals, only the ³H distributed on the surface of the metals can be extracted if these methods are applied. As another separation method, the wet oxidation method makes ³H oxidized with an acidic solution and extracted completely as an oxidized HTO compound. However, incompletely oxidized ³H compounds, which are produced by reactions of acidic solutions and metallic radwastes, can be released into the air. Thus, in this study, a wet oxidation method to extract and quantify the ³H radionuclide from metallic radwastes was established. In particular, a complete extraction method and a complete oxidation method for incomplete chemical compounds of ³H using a Pt catalyst were studied. The radioactivity of ³H in metallic radwastes is extracted and measured using a wet oxidation method and a liquid scintillation counter. Considering the surface dose rate of the sample, the appropriate size of the sample was determined and weighed, and a mixture of oxidants was added to a 200-ml round flask with 3 tubes. The flask was quickly connected to the distilling apparatus. 20 mL of 16 wt% H₂SO₄ was added into the 200-ml round flask through a dropping funnel while stirring and refluxing. After dropping, the temperature of the mixture was raised to 96 °C and the sample was leached and oxidized by refluxing for 3 hours. At that time, the incompletely oxidized ³H compounds were completely oxidized using the Pt catalysts and produced a stable HTO compound. After that, about a 20 ml solution was distilled in the separation apparatus, and the distillate was mixed with Ultima Gold LLT as a cocktail solution. The solution in the vial was left standing for at least 24 hours. The radioactivity of ³H was counted directly using a liquid scintillation analyzer (Packard, 2500 TR/AB, Alpha and Beta Liquid Scintillation Analyzer). (authors)

  10. A Calculational Theory of Pers as Types

    OpenAIRE

    Hutton, Graham; Voermans, Ed

    1992-01-01

    In the calculational approach to programming, programs are derived from specifications by algebraic reasoning. This report presents a calculational programming framework based upon the notion of binary relations as programs, and partial equivalence relations (pers) as types. Working with relations as programs generalises the functional paradigm, admitting non-determinism and the use of relation converse. Working with pers as types permits a natural treatment of types that are subject to law...

  11. Closed tachyon solitons in type II string theory

    CERN Document Server

    García-Etxebarria, Iñaki; Uranga, Angel M

    2015-01-01

    Type II theories can be described as the endpoint of closed string tachyon condensation in certain orbifolds of supercritical type 0 theories. In this paper, we study solitons of this closed string tachyon and analyze the nature of the resulting defects in critical type II theories. The solitons are classified by the real K-theory groups KO of bundles associated to pairs of supercritical dimensions. For real codimension 4 and 8, corresponding to $KO({\\bf S}^4)={\\bf Z}$ and $KO({\\bf S}^8)={\\bf Z}$, the defects correspond to a gravitational instanton and a fundamental string, respectively. We apply these ideas to reinterpret the worldsheet GLSM, regarded as a supercritical theory on the ambient toric space with closed tachyon condensation onto the CY hypersurface, and use it to describe charged solitons under discrete isometries. We also suggest the possible applications of supercritical strings to the physical interpretation of the matrix factorization description of F-theory on singular spaces.

  12. THE TOTAL DNA QUANTIFICATION FOR THREE TYPES OF TISSUE FROM CARASSIUS AURATUS GIBELIO BLOCH

    Directory of Open Access Journals (Sweden)

    Zenovia Olteanu

    2005-08-01

    We established the total DNA quantity and the variability intervals for three types of tissue (muscle, liver and spleen) from five individuals of Carassius auratus gibelio Bloch, in order to characterize this species with respect to this parameter.

  13. Contributions to the theory of Weber-type gravitational antenna

    International Nuclear Information System (INIS)

    The authors generalize the three-dimensional theory of the Weber-type gravitational antenna. The effects of thermal conduction and internal friction are taken into account. The compatibility of the method with general relativity is proved. (author)

  14. Scattering theory for delta-type potentials

    Science.gov (United States)

    Shabat, A. B.

    2015-04-01

    In the example of the Korteweg-de Vries equation, we consider the problem of extending the applicability of the inverse spectral transform method using delta-type potentials and their Darboux transformations. In this case, the problem of the properties of scattering data reduces to studying explicitly given entire functions of the spectral parameter.

  15. Revised theory of Pierce-type electron guns

    International Nuclear Information System (INIS)

    Attempts to date to obtain the shape of the beam-forming electrodes of various Pierce-type electron guns are briefly discussed, with emphasis on the many discrepancies in the results of previous works. A revised theory of Pierce-type electron guns is proposed. The shapes of the beam-forming electrodes for all known configurations of Pierce guns were computed on the basis of the proposed theory. (orig.)

  16. D-term Inflation in Type I String Theory

    OpenAIRE

    Halyo, Edi

    1999-01-01

    D-term inflation realized in heterotic string theory has two problems: the scale of the anomalous D-term is too large to account for the COBE data, and the coupling constant of the anomalous U(1) is too large for supergravity to be valid. We show that both of these problems can be easily solved in D-term inflation based on type I string theory or orientifolds of type IIB strings.

  17. Quantification of the host response proteome after herpes simplex virus type 1 infection.

    Science.gov (United States)

    Berard, Alicia R; Coombs, Kevin M; Severini, Alberto

    2015-05-01

    Viruses employ numerous host cell metabolic functions to propagate and manage to evade the host immune system. For herpes simplex virus type 1 (HSV1), a virus that has evolved to efficiently infect humans without seriously harming the host in most cases, the virus-host interaction is particularly interesting. This interaction can be best characterized by studying the proteomic changes that occur in the host during infection. Previous studies have been successful at identifying numerous host proteins that play important roles in HSV infection; however, there is still much that we do not know. This study identifies host metabolic functions and proteins that play roles in HSV infection, using global quantitative stable isotope labeling by amino acids in cell culture (SILAC) proteomic profiling of the host cell combined with LC-MS/MS. We identified differentially regulated proteins during early, mid and late infection, using both cytosolic and nuclear fractions. We identified hundreds of differentially regulated proteins involved in fundamental cellular functions, including gene expression, DNA replication, inflammatory response, cell movement, cell death, and RNA post-transcriptional modification. Novel differentially regulated proteins in HSV infections include some previously identified in other virus systems, as well as fusion protein involved in malignant liposarcoma (FUS) and hypoxia up-regulated 1 protein precursor (HYOU1), which have not been identified previously in any virus infection. PMID:25815715

  18. Species-independent bioassay for sensitive quantification of antiviral type I interferons

    Directory of Open Access Journals (Sweden)

    Penski Nicola

    2010-02-01

    Background: Studies of the host response to infection often require quantitative measurement of the antiviral type I interferons (IFN-α/β) in biological samples. The amount of IFN is either determined via its ability to suppress a sensitive indicator virus, by an IFN-responding reporter cell line, or by ELISA. These assays, however, are either time-consuming and lack convenient readouts, or they are rather insensitive and restricted to IFN from a particular host species. Results: An IFN-sensitive, Renilla luciferase-expressing Rift Valley fever virus (RVFV-Ren) was generated using reverse genetics. Human, murine and avian cells were tested for their susceptibility to RVFV-Ren after treatment with species-specific IFNs. RVFV-Ren was able to infect cells of all three species, and IFN-mediated inhibition of viral reporter activity occurred in a dose-dependent manner. The sensitivity limit was found to be 1 U/ml IFN, and comparison with a standard curve allowed determination of the activity of an unknown sample. Conclusions: RVFV-Ren replicates in cells of several species and is highly sensitive to pre-treatment with IFN. These properties allowed the development of a rapid, sensitive, and species-independent antiviral assay with a convenient luciferase-based readout.
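
    The standard-curve readout described in the Results can be sketched as a log-linear calibration: reporter luminescence falls with increasing IFN pre-treatment, and an unknown sample is read off the inverted fit. All signals and doses below are invented for illustration.

        import numpy as np

        ifn_units = np.array([1, 4, 16, 64, 256])           # U/ml standards, invented
        signal = np.array([0.92, 0.71, 0.49, 0.30, 0.11])   # relative luciferase activity

        # Fit signal vs log2(dose), then invert the calibration for unknowns:
        slope, intercept = np.polyfit(np.log2(ifn_units), signal, 1)

        def ifn_activity(sample_signal):
            """Map a measured reporter signal back to IFN activity (U/ml)."""
            return 2 ** ((sample_signal - intercept) / slope)

        print(f"unknown sample (signal 0.40): ~{ifn_activity(0.40):.0f} U/ml")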

  19. Intensional type theory with guarded recursive types qua fixed points on universes

    DEFF Research Database (Denmark)

    Møgelberg, Rasmus Ejlers; Birkedal, Lars

    2013-01-01

    Guarded recursive functions and types are useful for giving semantics to advanced programming languages and for higher-order programming with infinite data types, such as streams, e.g., for modeling reactive systems. We propose an extension of intensional type theory with rules for forming fixed points of guarded recursive functions. Guarded recursive types can be formed simply by taking fixed points of guarded recursive functions on the universe of types. Moreover, we present a general model construction for constructing models of the intensional type theory with guarded recursive functions and types. When applied to the groupoid model of intensional type theory with the universe of small discrete groupoids, the construction gives a model of guarded recursion for which there is a one-to-one correspondence between fixed points of functions on the universe of types and fixed points of (suitable) operators on types. In particular, we find that the functor category from the preordered set of natural numbers to the category of groupoids is a model of intensional type theory with guarded recursive types.
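    The fixed-point idea can be illustrated outside type theory as well. Below is a hedged Python analogy in which the recursive occurrence of a stream sits behind a thunk, an operational counterpart of a guard: the fixed point is productive because each step demands only one more element. This is an analogy, not the paper's intensional type theory.

```python
# Minimal sketch: streams as fixed points of "guarded" equations. The thunk
# plays the role of the guard: the recursive tail is only demanded one step
# at a time, so the fixed point is productive rather than divergent.
from typing import Callable, Tuple

Stream = Callable[[], Tuple[int, "Stream"]]

def cons(head: int, tail: Callable[[], Stream]) -> Stream:
    # tail is delayed: forcing the stream yields the head and the forced tail
    return lambda: (head, tail())

def nats_from(n: int) -> Stream:
    # Guarded recursion: the self-reference sits under a lambda (a thunk)
    return cons(n, lambda: nats_from(n + 1))

def take(s: Stream, k: int) -> list:
    out = []
    for _ in range(k):
        head, s = s()
        out.append(head)
    return out

print(take(nats_from(0), 5))  # [0, 1, 2, 3, 4]
```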

  20. Intensional Type Theory with Guarded Recursive Types qua Fixed Points on Universes

    DEFF Research Database (Denmark)

    Birkedal, Lars; Mogelberg, R.E.

    2013-01-01

    Guarded recursive functions and types are useful for giving semantics to advanced programming languages and for higher-order programming with infinite data types, such as streams, e.g., for modeling reactive systems. We propose an extension of intensional type theory with rules for forming fixed points of guarded recursive functions. Guarded recursive types can be formed simply by taking fixed points of guarded recursive functions on the universe of types. Moreover, we present a general model construction for constructing models of the intensional type theory with guarded recursive functions and types. When applied to the groupoid model of intensional type theory with the universe of small discrete groupoids, the construction gives a model of guarded recursion for which there is a one-to-one correspondence between fixed points of functions on the universe of types and fixed points of (suitable) operators on types. In particular, we find that the functor category Grpd^{ω^op} from the preordered set of natural numbers to the category of groupoids is a model of intensional type theory with guarded recursive types.

  1. Axion Inflation in Type II String Theory

    OpenAIRE

    Grimm, Thomas W.

    2007-01-01

    Inflationary models driven by a large number of axion fields are discussed in the context of type IIB compactifications with N=1 supersymmetry. The inflatons arise as the scalar modes of the R-R two-forms evaluated on vanishing two-cycles in the compact geometry. The vanishing cycles are resolved by small two-volumes or NS-NS B-fields which sit together with the inflatons in the same supermultiplets. String world-sheets wrapping the vanishing cycles correct the metric of the...

  2. One type of classical solution in theories with vacuum periodicity

    Science.gov (United States)

    Lavrelashvili, George

    1992-05-01

    We discuss the properties of a new type of classical solution in theories with vacuum periodicity. This type of solution exists in real time and satisfies special boundary conditions. An analytic expression for such a solution in the simplest (0+1)-dimensional model is found.

  3. Type IIB string theory, S-duality, and generalized cohomology

    Energy Technology Data Exchange (ETDEWEB)

    Kriz, Igor [Department of Mathematics, University of Michigan, Ann Arbor, MI 48109 (United States)]. E-mail: ikriz@umich.edu; Sati, Hisham [Department of Physics, University of Adelaide, Adelaide, SA 5005 (Australia) and Department of Pure Mathematics, University of Adelaide, Adelaide, SA 5005 (Australia)]. E-mail: hsati@maths.adelaide.edu.au

    2005-05-30

    In the presence of background Neveu-Schwarz flux, the description of the Ramond-Ramond fields of type IIB string theory using twisted K-theory is not compatible with S-duality. We argue that other possible variants of twisted K-theory would still not resolve this issue. We propose instead a connection of S-duality with elliptic cohomology, and a possible T-duality relation of this to a previous proposal for IIA theory, and higher-dimensional limits. In the process, we obtain some other results which may be interesting on their own. In particular, we prove a conjecture of Witten that the 11-dimensional spin cobordism group vanishes on K(Z,6), which eliminates a potential new θ-angle in type IIB string theory.

  4. Type IIB string theory, S-duality, and generalized cohomology

    International Nuclear Information System (INIS)

    In the presence of background Neveu-Schwarz flux, the description of the Ramond-Ramond fields of type IIB string theory using twisted K-theory is not compatible with S-duality. We argue that other possible variants of twisted K-theory would still not resolve this issue. We propose instead a connection of S-duality with elliptic cohomology, and a possible T-duality relation of this to a previous proposal for IIA theory, and higher-dimensional limits. In the process, we obtain some other results which may be interesting on their own. In particular, we prove a conjecture of Witten that the 11-dimensional spin cobordism group vanishes on K(Z,6), which eliminates a potential new θ-angle in type IIB string theory.

  5. Type IIB String Theory, S-Duality, and Generalized Cohomology

    OpenAIRE

    Kriz, Igor; Sati, Hisham

    2004-01-01

    In the presence of background Neveu-Schwarz flux, the description of the Ramond-Ramond fields of type IIB string theory using twisted K-theory is not compatible with S-duality. We argue that other possible variants of twisted K-theory would still not resolve this issue. We propose instead a possible path to a solution using elliptic cohomology. We also discuss T-duality relation of this to a previous proposal for IIA theory, and higher-dimensional limits. In the process, we ...

  6. Type IIB String Theory, S-Duality, and Generalized Cohomology

    CERN Document Server

    Kriz, I; Kriz, Igor; Sati, Hisham

    2004-01-01

    In the presence of background Neveu-Schwarz flux, the description of the Ramond-Ramond fields of type IIB string theory using twisted K-theory is not compatible with S-duality. We argue that other possible variants of twisted K-theory would still not resolve this issue. We propose instead a possible path to a solution using elliptic cohomology. We also discuss T-duality relation of this to a previous proposal for IIA theory, and higher-dimensional limits.

  7. Development and validation of an enzyme-linked immunosorbent assay for the quantification of a specific MMP-9 mediated degradation fragment of type III collagen--A novel biomarker of atherosclerotic plaque remodeling

    DEFF Research Database (Denmark)

    Barascuk, Natasha; Vassiliadis, Efstathios

    2011-01-01

    Degradation of collagen in the arterial wall by matrix metalloproteinases is the hallmark of atherosclerosis. We have developed an ELISA for the quantification of type III collagen degradation mediated by MMP-9 in urine.

  8. Quantification of Spatial Parameters in 3D Cellular Constructs Using Graph Theory

    OpenAIRE

    Plopper, G E; Zaki, M. J.; Yener, B.; Stegemann, J.P.; L. M. McKeen; Bilgin, C. C.; Hasan, M.A.; Lund, A. W.

    2009-01-01

    Multispectral three-dimensional (3D) imaging provides spatial information for biological structures that cannot be measured by traditional methods. This work presents a method of tracking 3D biological structures to quantify changes over time using graph theory. Cell-graphs were generated based on the pairwise distances, in 3D-Euclidean space, between nuclei during collagen I gel compaction. From these graphs quantitative features are extracted that measure both the global topography and the ...
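    A hedged sketch of the cell-graph construction described above: nuclei positions are joined by an edge whenever their 3D Euclidean distance falls below a threshold, and simple global features are read off the graph. The coordinates and the threshold below are invented for illustration, not the study's data.

```python
# Hedged sketch: build a cell-graph from 3D nuclei coordinates by linking
# nuclei closer than a distance threshold, then extract global features.
import numpy as np
import networkx as nx

rng = np.random.default_rng(4)
nuclei = rng.uniform(0, 100, size=(50, 3))   # 50 nuclei in a 100-unit cube
threshold = 25.0                             # link distance, assumed value

G = nx.Graph()
G.add_nodes_from(range(len(nuclei)))
for i in range(len(nuclei)):
    for j in range(i + 1, len(nuclei)):
        if np.linalg.norm(nuclei[i] - nuclei[j]) < threshold:
            G.add_edge(i, j)

# Global features of the kind used to track gel compaction over time
degrees = [d for _, d in G.degree()]
print(f"mean degree: {np.mean(degrees):.2f}")
print(f"clustering coefficient: {nx.average_clustering(G):.3f}")
print(f"connected components: {nx.number_connected_components(G)}")
```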

  9. Uncertainty Propagation and Quantification using Constrained Coupled Adaptive Forward-Inverse Schemes: Theory and Applications

    Science.gov (United States)

    Ryerson, F. J.; Ezzedine, S. M.; Antoun, T.

    2013-12-01

    The successful implementation and execution of numerous subsurface energy technologies, such as shale gas extraction, geothermal energy and underground coal gasification, rely on detailed characterization of the geology and the subsurface properties. For example, spatial variability of subsurface permeability controls multi-phase flow, and hence impacts the prediction of reservoir performance. Subsurface properties can vary significantly over several length scales, making detailed subsurface characterization infeasible. Therefore, in common practice, only sparse measurements are available to image or characterize the entire reservoir. For example, pressure, P, permeability, k, and production rate, Q, measurements are only available at the monitoring and operational wells. Elsewhere, the spatial distribution of k is determined by various deterministic or stochastic interpolation techniques, and P and Q are calculated from the governing forward mass balance equation assuming k is given at all locations. Several uncertainty drivers, such as PSUADE, are then used to propagate and quantify the uncertainty (UQ) of quantities of interest using forward solvers. Unfortunately, forward-solver techniques and other interpolation schemes are rarely constrained by the inverse problem itself: given P and Q at observation points, determine the spatially variable map of k. The approach presented here, motivated by fluid imaging for subsurface characterization and monitoring, was developed by progressively solving increasingly complex realistic problems. The essence of this novel approach is that the forward and inverse partial differential equations are themselves the interpolators for P, k and Q, rather than extraneous and sometimes ad hoc schemes. Three cases with different data sparsity are investigated. In the simplest case, a sufficient number of passive pressure data (pre-production pressure gradients) are given. Here, only the inverse hyperbolic equation for the distribution of k is solved, provided that Cauchy data are appropriately assigned. In the next stage, only a limited number of passive measurements are provided. In this case, the forward and inverse PDEs are solved simultaneously. This is accomplished by adding regularization terms and filtering the pressure gradients in the inverse problem. The forward and inverse problems are coupled either simultaneously or sequentially and solved using implicit schemes, adaptive mesh refinement and Galerkin finite elements. The final case arises when P, k, and Q data exist only at producing wells. This exceedingly ill-posed problem calls for additional constraints on the forward-inverse coupling to ensure that the production rates are satisfied at the desired locations. Results from all three cases are presented, demonstrating the stability and accuracy of the proposed approach and, more importantly, providing some insight into the consequences of data undersampling, uncertainty propagation and quantification. We illustrate the advantages of this novel approach over common UQ forward drivers on several subsurface energy problems in porous, fractured and/or faulted reservoirs. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
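    As a toy illustration of using the governing equation itself as the interpolator, the hedged Python sketch below inverts noisy 1D pressure data for a permeability field via Darcy's law, with simple smoothing standing in for the regularization terms mentioned above. All values are synthetic, and the scheme is far simpler than the coupled adaptive solvers the abstract describes.

```python
# Toy 1D analogue of inverting pressure data for permeability (hypothetical
# values; a stand-in for the paper's coupled forward-inverse PDE approach).
import numpy as np

n = 101
x = np.linspace(0.0, 1.0, n)
k_true = 1.0 + 0.5 * np.sin(2 * np.pi * x)      # "unknown" permeability field

# Forward problem: steady Darcy flow with unit flux q, so k * dP/dx = -q
q = 1.0
dx = x[1] - x[0]
dPdx_true = -q / k_true
P = np.concatenate([[0.0], np.cumsum(dPdx_true[:-1] * dx)])  # pressure profile
P_obs = P + np.random.default_rng(0).normal(0, 1e-3, n)      # noisy "data"

# Inverse problem: estimate k from observed pressure gradients. Raw inversion
# k = -q / (dP/dx) amplifies noise, so smooth the gradient first (a crude
# analogue of the regularization/filtering step in the abstract).
dPdx_obs = np.gradient(P_obs, dx)
kernel = np.ones(9) / 9.0
dPdx_smooth = np.convolve(dPdx_obs, kernel, mode="same")
k_est = -q / dPdx_smooth

err = np.linalg.norm(k_est[5:-5] - k_true[5:-5]) / np.linalg.norm(k_true[5:-5])
print(f"relative error in recovered permeability: {err:.2%}")
```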

  10. Inflationary constraints on type IIA string theory

    International Nuclear Information System (INIS)

    We prove that inflation is forbidden in the most well understood class of semi-realistic type IIA string compactifications: Calabi-Yau compactifications with only standard NS-NS 3-form flux, R-R fluxes, D6-branes and O6-planes at large volume and small string coupling. With these ingredients, the first slow-roll parameter satisfies ε ≥ 27/13 whenever V > 0, ruling out both inflation (including brane/anti-brane inflation) and de Sitter vacua in this limit. Our proof is based on the dependence of the 4-dimensional potential on the volume and dilaton moduli in the presence of fluxes and branes. We also describe broader classes of IIA models which may include cosmologies with inflation and/or de Sitter vacua. The inclusion of extra ingredients, such as NS 5-branes and geometric or non-geometric NS-NS fluxes, evades the assumptions used in deriving the no-go theorem. We focus on NS 5-branes and outline how such ingredients may prove fruitful for cosmology, but we do not provide an explicit model. We contrast the results of our IIA analysis with the rather different situation in IIB

  11. Axion inflation in type II string theory

    International Nuclear Information System (INIS)

    Inflationary models driven by a large number of axion fields are discussed in the context of type IIB compactifications with N=1 supersymmetry. The inflatons arise as the scalar modes of the R-R two-forms evaluated on vanishing two-cycles in the compact geometry. The vanishing cycles are resolved by small two-volumes or NS-NS B fields which sit together with the inflatons in the same supermultiplets. String world sheets wrapping the vanishing cycles correct the metric of the R-R inflatons. They can help to generate kinetic terms close to the Planck scale and a mass hierarchy between the axions and their nonaxionic partners during inflation. At small string coupling, D-brane corrections are subleading in the metric of the R-R inflatons. However, an axion potential can be generated by D1 instantons or gaugino condensates on D5-branes. Models with a sufficiently large number of axions admit regions of chaotic inflation which can stretch over the whole axion field range for potentials from gaugino condensates. These models could allow for a possibly detectable amount of gravitational waves, with a tensor-to-scalar ratio as high as r ≈ 0.14

  12. Multivariate Bonferroni-type inequalities theory and applications

    CERN Document Server

    Chen, John

    2014-01-01

    Multivariate Bonferroni-Type Inequalities: Theory and Applications presents a systematic account of research discoveries on multivariate Bonferroni-type inequalities published in the past decade. The emergence of new bounding approaches pushes the conventional definitions of optimal inequalities and demands new insights into linear and Fréchet optimality. The book explores these advances in bounding techniques with corresponding innovative applications. It presents the method of linear programming for multivariate bounds, multivariate hybrid bounds, sub-Markovian bounds, and bounds using Hamil

  13. Quantification of genetically modified soybeans using a combination of a capillary-type real-time PCR system and a plasmid reference standard.

    Science.gov (United States)

    Toyota, Akie; Akiyama, Hiroshi; Sugimura, Mitsunori; Watanabe, Takahiro; Kikuchi, Hiroyuki; Kanamori, Hisayuki; Hino, Akihiro; Esaka, Muneharu; Maitani, Tamio

    2006-04-01

    Because, in many countries, the labeling of grains and feed- and foodstuffs is mandatory if the genetically modified organism (GMO) content exceeds a certain level for approved genetically modified varieties, there is a need for a rapid and useful method of GMO quantification in food samples. In this study, a rapid detection system was developed for Roundup Ready Soybean (RRS) quantification using a combination of a capillary-type real-time PCR system, the LightCycler real-time PCR system, and plasmid DNA as the reference standard. In addition, we showed for the first time that plasmid and genomic DNA behave similarly in the established detection system, because the PCR efficiencies obtained using plasmid DNA and genomic DNA were not significantly different. The conversion factor (Cf) used to calculate the RRS content (%) was further determined as the average of values analyzed in three laboratories. The accuracy and reproducibility of this system for RRS quantification at a level of 5.0% were within a range of 4.46 to 5.07% for RRS content and within a range of 2.0% to 7.0% for the relative standard deviation (RSD), respectively. This system allows rapid monitoring for labeling compliance and has acceptable levels of accuracy and precision. PMID:16636447
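    A hedged sketch of the quantification step: Ct values for the event-specific and endogenous targets are converted to copy numbers through plasmid standard curves, and their ratio is combined with the conversion factor Cf. The curve parameters, Ct values, and the exact placement of Cf below are illustrative assumptions, not the published calibration.

```python
# Hypothetical sketch of an RRS (%) calculation: copy numbers for the
# event-specific and endogenous (lectin) targets are read off plasmid
# standard curves, and their ratio is scaled by an assumed conversion factor.
def copies_from_ct(ct: float, slope: float, intercept: float) -> float:
    """Standard curve Ct = slope * log10(copies) + intercept, inverted."""
    return 10 ** ((ct - intercept) / slope)

# Illustrative standard-curve parameters (slope near -3.32 ~ 100% efficiency)
RRS_CURVE = dict(slope=-3.33, intercept=40.1)   # event-specific target
LEC_CURVE = dict(slope=-3.30, intercept=39.6)   # endogenous soybean gene
CF = 0.95  # conversion factor, assumed here to divide the copy-number ratio

def rrs_content(ct_event: float, ct_lectin: float) -> float:
    event_copies = copies_from_ct(ct_event, **RRS_CURVE)
    lectin_copies = copies_from_ct(ct_lectin, **LEC_CURVE)
    return (event_copies / lectin_copies) / CF * 100.0

print(f"RRS content: {rrs_content(ct_event=31.5, ct_lectin=27.2):.1f} %")
```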

  14. Classical instanton and wormhole solutions of Type IIB string theory

    OpenAIRE

    Kim, Jin Young; Lee, H. W.; Myung, Y. S.

    1996-01-01

    We study $p=-1$ D-brane in type IIB superstring theory. In addition to RR instanton, we obtain the RR charged wormhole solution in the Einstein frame. This corresponds to the ten-dimensional singular wormhole solution with infinite euclidean action.

  15. Non-critical type 0 string theories and their field theory duals

    International Nuclear Information System (INIS)

    In this paper we continue the study of the non-critical type 0 string and its field theory duals. We begin by reviewing some facts and conjectures about these theories. We move on to our proposal for the type 0 effective action in any dimension, its RR fields and their Chern-Simons couplings. We then focus on the case without compact dimensions and study its field theory duals. We show that one can parameterize all dual physical quantities in terms of a finite number of unknown parameters. By making some further assumptions on the tachyon couplings, one can still make some 'model independent' statements

  16. Formation of social types in the theory of Orrin Klapp

    Directory of Open Access Journals (Sweden)

    Trifunović Vesna

    2007-01-01

    Full Text Available Orrin Klapp's theory of social types draws attention to the important functions these types serve within a society and argues that they should be taken into consideration if the goal is a more complete knowledge of that society. For Klapp, social types are important social symbols that reflect the society they are part of in telling ways, and for that reason he devotes his work to their meanings and social functions. He holds that we cannot understand a society without knowing the types with which its members identify and which serve them as models in their social activity. These types therefore have cognitive value: according to Klapp, they assist perception and "contain the truth", so knowledge of them allows easier orientation within the social system. Social types also offer insight into the scheme of the social structure, which is otherwise invisible and hidden, but certainly deserves attention if we wish a clearer picture of social relations within a specific community. The aim of this work is to present this interesting and inspiring theory of Orrin Klapp, pointing out its importance as well as the weaknesses that should be kept in mind when applying it in further research.

  17. A Modular Type-checking algorithm for Type Theory with Singleton Types and Proof Irrelevance

    CERN Document Server

    Abel, Andreas; Pagano, Miguel

    2011-01-01

    We define a logical framework with singleton types and one universe of small types. We give the semantics using a PER model, which is used to construct a normalisation-by-evaluation algorithm. We prove completeness and soundness of the algorithm and obtain, as a corollary, the injectivity of type constructors. We then give the definition of a correct and complete type-checking algorithm for terms in normal form, and extend the results to proof-irrelevant propositions.
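    For readers unfamiliar with normalisation by evaluation (NbE), the hedged Python sketch below implements the technique for the untyped lambda calculus only: terms are evaluated into a semantic domain where functions become host-language closures, then reified back into normal forms. The paper's algorithm additionally handles types, singletons, and proof irrelevance, none of which this toy version attempts.

```python
# Hedged illustration of normalisation by evaluation for the untyped lambda
# calculus. Terms use de Bruijn indices:
#   ("var", i), ("lam", body), ("app", f, a)
def evaluate(term, env):
    """Map a term to a semantic value; lambdas become Python closures."""
    tag = term[0]
    if tag == "var":
        return env[term[1]]
    if tag == "lam":
        return ("fun", lambda v: evaluate(term[1], [v] + env))
    f, a = evaluate(term[1], env), evaluate(term[2], env)
    return f[1](a) if f[0] == "fun" else ("app", f, a)   # neutral application

def reify(value, depth):
    """Read a semantic value back into a normal-form term."""
    if value[0] == "fun":
        fresh = ("neutral", depth)               # fresh variable as a level
        return ("lam", reify(value[1](fresh), depth + 1))
    if value[0] == "neutral":
        return ("var", depth - value[1] - 1)     # convert level to index
    return ("app", reify(value[1], depth), reify(value[2], depth))

def normalise(term):
    return reify(evaluate(term, []), 0)

# \y. (\x. x) y  normalises to  \y. y
identity = ("lam", ("var", 0))
prog = ("lam", ("app", identity, ("var", 0)))
print(normalise(prog))   # ('lam', ('var', 0))
```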

  18. A ground many-valued type theory and its extensions.

    Czech Academy of Sciences Publication Activity Database

    Běhounek, Libor

    Linz : Johannes Kepler Universität, 2014 - (Flaminio, T.; Godo, L.; Gottwald, S.; Klement, E.). s. 15-18 [Linz Seminar on Fuzzy Set Theory /35./. 18.02.2014-22.02.2014, Linz] R&D Projects: GA MŠk ED1.1.00/02.0070; GA MŠk EE2.3.30.0010 Institutional support: RVO:67985807 Keywords : type theory * many-valued logics * higher-order logic * teorie typů * vícehodnotové logiky * logika vyššího řádu Subject RIV: BA - General Mathematics

  19. On global anomalies in type IIB string theory

    CERN Document Server

    Sati, Hisham

    2011-01-01

    We study global gravitational anomalies in type IIB string theory with nontrivial middle cohomology. This requires the study of the action of diffeomorphisms on this group. Several results and constructions, including some recent vanishing results via elliptic genera, make it possible to consider this problem. Along the way, we describe in detail the intersection pairing and the action of diffeomorphisms, and highlight the appearance of various structures, including the Rochlin invariant and its variants on the mapping torus.

  20. On global anomalies in type IIB string theory

    OpenAIRE

    Sati, Hisham

    2011-01-01

    We study global gravitational anomalies in type IIB string theory with nontrivial middle cohomology. This requires the study of the action of diffeomorphisms on this group. Several results and constructions, including some recent vanishing results via elliptic genera, make it possible to consider this problem. Along the way, we describe in detail the intersection pairing and the action of diffeomorphisms, and highlight the appearance of various structures, including the Roch...

  1. Dilaton-driven brane inflation in type IIB string theory

    OpenAIRE

    Kim, Jin Young

    2000-01-01

    We consider the cosmological evolution of the three-brane in the background of type IIB string theory. For two different backgrounds which give a nontrivial dilaton profile we have derived Friedmann-like equations. These give a cosmological evolution similar to that induced by matter density on the universe brane. The effective density blows up as we move towards the singularity, exhibiting the initial singularity problem. The analysis shows that when there is axion field...

  2. Type IIB flux vacua from G-theory II

    Science.gov (United States)

    Candelas, Philip; Constantin, Andrei; Damian, Cesar; Larfors, Magdalena; Morales, Jose Francisco

    2015-02-01

    We find analytic solutions of type IIB supergravity on geometries that locally take the form Mink × M_4 × ℂ with M_4 a generalised complex manifold. The solutions involve the metric, the dilaton, NSNS and RR flux potentials (oriented along the M_4) parametrised by functions varying only over ℂ. Under this assumption, the supersymmetry equations are solved using the formalism of pure spinors in terms of a finite number of holomorphic functions. Alternatively, the solutions can be viewed as vacua of maximally supersymmetric supergravity in six dimensions with a set of scalar fields varying holomorphically over ℂ. For a class of solutions characterised by up to five holomorphic functions, we outline how the local solutions can be completed to four-dimensional flux vacua of type IIB theory. A detailed study of this global completion for solutions with two holomorphic functions has been carried out in the companion paper [1]. The fluxes of the global solutions are, as in F-theory, entirely codified in the geometry of an auxiliary K3 fibration over ℂP^1. The results provide a geometric construction of fluxes in F-theory.

  3. Development of Primer-Probe Energy Transfer real-time PCR for the detection and quantification of porcine circovirus type 2

    DEFF Research Database (Denmark)

    Balint, Adam; Tenk, M

    2009-01-01

    A real-time PCR assay, based on Primer-Probe Energy Transfer (PriProET), was developed to improve the detection and quantification of porcine circovirus type 2 (PCV2). PCV2 is recognised as the essential infectious agent in post-weaning multisystemic wasting syndrome (PMWS) and has been associated with other disease syndromes such as porcine dermatitis and nephropathy syndrome (PDNS) and porcine respiratory disease complex (PRDC). Since circoviruses commonly occur in pig populations and there is a correlation between the severity of the disease and the viral load in the organs and blood, it is important not only to detect PCV2 but also to determine the viral load quantitatively. The PriProET real-time PCR assay described in this study was tested on various virus strains and clinical forms of PMWS in order to investigate any correlation between the clinical signs and viral loads in different organs. The data obtained in this study correlate with those described earlier; namely, the viral load in 1 ml plasma and in 500 ng tissue DNA exceeds 10(7) copies in cases of PMWS. The results indicate that the new assay provides a specific, sensitive and robust tool for the improved detection and quantification of PCV2.

  4. Development of a sandwich ELISA-type system for the detection and quantification of hazelnut in model chocolates.

    Science.gov (United States)

    Costa, Joana; Ansari, Parisa; Mafra, Isabel; Oliveira, M Beatriz P P; Baumgartner, Sabine

    2015-04-15

    Hazelnut is one of the most appreciated nuts and is found in a wide range of processed foods. The presence of even trace amounts of hazelnut in foods can represent a potential risk for eliciting allergic reactions in sensitised individuals. Correct labelling of processed foods is mandatory to avoid adverse reactions, so adequate methodology for evaluating the presence of offending foods is of great importance. The aim of this study was therefore to develop a highly specific and sensitive sandwich enzyme-linked immunosorbent assay (ELISA) for the detection and quantification of hazelnut in complex food matrices. Using in-house produced antibodies, an ELISA system was developed capable of detecting hazelnut down to 1 mg kg(-1) and quantifying this nut down to 50 mg kg(-1) in chocolates spiked with known amounts of hazelnut. These results highlight and reinforce the value of ELISA as a rapid and reliable tool for the detection of allergens in foods. PMID:25466021
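    A hedged sketch of how such an ELISA readout might be turned into a concentration: a four-parameter logistic (4PL) standard curve is fitted to calibrator absorbances and inverted for unknowns. The 4PL model is a common choice for sandwich ELISAs, but the numbers and its use here are assumptions, not the study's protocol.

```python
# Hedged sketch: quantifying hazelnut from sandwich-ELISA absorbances via a
# four-parameter logistic (4PL) standard curve. All values are illustrative.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, a, b, c, d):
    """4PL response for concentration x; c is the inflection point."""
    return d + (a - d) / (1.0 + (x / c) ** b)

# Standards: spiked hazelnut (mg/kg) vs. measured absorbance (OD 450 nm)
conc = np.array([1, 5, 10, 50, 100, 500, 1000], dtype=float)
od = np.array([0.08, 0.15, 0.24, 0.71, 1.10, 1.85, 2.10])

params, _ = curve_fit(four_pl, conc, od, p0=[0.05, 1.0, 100.0, 2.3],
                      maxfev=10000)

def quantify(od_sample: float) -> float:
    """Invert the fitted 4PL to estimate hazelnut content (mg/kg)."""
    a, b, c, d = params
    return c * ((a - d) / (od_sample - d) - 1.0) ** (1.0 / b)

print(f"Estimated hazelnut: {quantify(0.5):.0f} mg/kg")
```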

  5. Irregular singularities in Liouville theory and Argyres-Douglas type gauge theories, I

    Energy Technology Data Exchange (ETDEWEB)

    Gaiotto, D. [Institute for Advanced Study (IAS), Princeton, NJ (United States); Teschner, J. [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany)

    2012-03-15

    Motivated by problems arising in the study of N=2 supersymmetric gauge theories we introduce and study irregular singularities in two-dimensional conformal field theory, here Liouville theory. Irregular singularities are associated to representations of the Virasoro algebra in which a subset of the annihilation part of the algebra act diagonally. In this paper we define natural bases for the space of conformal blocks in the presence of irregular singularities, describe how to calculate their series expansions, and how such conformal blocks can be constructed by some delicate limiting procedure from ordinary conformal blocks. This leads us to a proposal for the structure functions appearing in the decomposition of physical correlation functions with irregular singularities into conformal blocks. Taken together, we get a precise prediction for the partition functions of some Argyres-Douglas type theories on S^4. (orig.)

  6. Irregular singularities in Liouville theory and Argyres-Douglas type gauge theories, I

    International Nuclear Information System (INIS)

    Motivated by problems arising in the study of N=2 supersymmetric gauge theories we introduce and study irregular singularities in two-dimensional conformal field theory, here Liouville theory. Irregular singularities are associated to representations of the Virasoro algebra in which a subset of the annihilation part of the algebra act diagonally. In this paper we define natural bases for the space of conformal blocks in the presence of irregular singularities, describe how to calculate their series expansions, and how such conformal blocks can be constructed by some delicate limiting procedure from ordinary conformal blocks. This leads us to a proposal for the structure functions appearing in the decomposition of physical correlation functions with irregular singularities into conformal blocks. Taken together, we get a precise prediction for the partition functions of some Argyres-Douglas type theories on S^4. (orig.)

  7. Church-style type theories over finitary weakly implicative logics.

    Czech Academy of Sciences Publication Activity Database

    Běhounek, Libor

    Vienna : Vienna University of Technology, 2014 - (Baaz, M.; Ciabattoni, A.; Hetzl, S.). s. 131-133 [LATD 2014. Logic, Algebra and Truth Degrees. 16.07.2014-19.07.2014, Vienna] R&D Projects: GA MŠk ED1.1.00/02.0070; GA MŠk EE2.3.30.0010 Institutional support: RVO:67985807 Keywords : type theory * higher-order logic * weakly implicative logics * teorie typů * logika vyššího řádu * slabě implikační logiky Subject RIV: BA - General Mathematics

  8. Type IIB flux vacua from G-theory I

    CERN Document Server

    Candelas, Philip; Damian, Cesar; Larfors, Magdalena; Morales, Jose Francisco

    2014-01-01

    We construct non-perturbatively exact four-dimensional Minkowski vacua of type IIB string theory with non-trivial fluxes. These solutions are found by gluing together, consistently with U-duality, local solutions of type IIB supergravity on $T^4 \\times \\mathbb{C}$ with the metric, dilaton and flux potentials varying along $\\mathbb{C}$ and the flux potentials oriented along $T^4$. We focus on solutions locally related via U-duality to non-compact Ricci-flat geometries. More general solutions and a complete analysis of the supersymmetry equations are presented in the companion paper [1]. We build a precise dictionary between fluxes in the global solutions and the geometry of an auxiliary $K3$ surface fibered over $\\mathbb{CP}^1$. In the spirit of F-theory, the flux potentials are expressed in terms of locally holomorphic functions that parametrize the complex structure moduli space of the $K3$ fiber in the auxiliary geometry. The brane content is inferred from the monodromy data around the degeneration points o...

  9. Threshold anomalies in Horava-Lifshitz-type theories

    International Nuclear Information System (INIS)

    Recently the study of threshold kinematic requirements for particle-production processes has played a very significant role in the phenomenology of theories with departures from Poincaré symmetry. We here specialize these threshold studies to the case of a class of violations of Poincaré symmetry which has been much discussed in the literature on Horava-Lifshitz scenarios. These involve modifications of the energy-momentum ('dispersion') relation that may be different for different types of particles, but always involve even powers of energy-momentum in the correction terms. We establish the requirements for compatibility with the observed cosmic-ray spectrum, which is sensitive to the photopion-production threshold. We find that the implications for the electron-positron pair-production threshold are rather intriguing, in light of some recent studies of TeV emissions by Blazars. Our findings should also provide additional motivation for examining the fate of the law of energy-momentum conservation in Horava-Lifshitz-type theories.

  10. Real-time RT-PCR for detection, identification and absolute quantification of viral haemorrhagic septicaemia virus using different types of standards.

    Science.gov (United States)

    Lopez-Vazquez, C; Bandín, I; Dopazo, C P

    2015-05-21

    In the present study, two real-time RT-PCR systems, one based on SYBR Green and the other on TaqMan, were designed to detect strains from any genotype of viral haemorrhagic septicaemia virus (VHSV) with high sensitivity and repeatability/reproducibility. In addition, the method was optimized for quantitative purposes (qRT-PCR), and standard curves with different types of reference templates were constructed and compared. Specificity was tested against 26 isolates from 4 genotypes. The sensitivity of the procedure was first tested against cell culture isolation, obtaining a limit of detection (LD) of 100 TCID50 ml(-1) (100-fold below the LD using cell culture) at a threshold cycle value (Ct) of 36. Sensitivity was also evaluated using RNA from crude virus (LD = 1 fg; 160 genome copies) and purified virus (100 ag; 16 copies), plasmid DNA (2 copies) and RNA transcript (15 copies). No differences between the two chemistries were observed in sensitivity or dynamic range. To evaluate repeatability and reproducibility, all experiments were performed in triplicate and on 3 different days, by workers with different levels of experience, obtaining Ct values with coefficients of variation always <5. This fact, together with the high efficiency and R^2 values of the standard curves, encouraged us to analyse the reliability of the method for viral quantification. The results not only demonstrate that the procedure can be used for detection, identification and quantification of this virus, but also show a clear correlation between the regression lines obtained with different standards, which will help scientists to compare sensitivity results between different studies. PMID:25993885
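    A hedged sketch of absolute quantification from a dilution series: the standard-curve slope gives the PCR efficiency, and unknown Ct values are converted to copy numbers. The dilution data below are invented for illustration; the study's actual curves were built from crude and purified virus, plasmid DNA, and RNA transcripts.

```python
# Hedged sketch (illustrative numbers): absolute quantification from Ct
# values using a standard curve built on a reference template.
import numpy as np

def fit_standard_curve(log10_copies, cts):
    """Return slope, intercept and PCR efficiency from a dilution series."""
    slope, intercept = np.polyfit(log10_copies, cts, 1)
    efficiency = 10 ** (-1.0 / slope) - 1.0   # 1.0 corresponds to 100 %
    return slope, intercept, efficiency

# Ten-fold dilution series of a plasmid standard (copies per reaction)
log10_copies = np.arange(1, 8, dtype=float)          # 10^1 .. 10^7 copies
cts = np.array([35.9, 32.5, 29.2, 25.9, 22.6, 19.3, 16.0])

slope, intercept, eff = fit_standard_curve(log10_copies, cts)
print(f"slope={slope:.2f}, efficiency={eff:.1%}")

def copies(ct: float) -> float:
    """Invert the standard curve: Ct = slope * log10(copies) + intercept."""
    return 10 ** ((ct - intercept) / slope)

print(f"sample at Ct 30.1 ~ {copies(30.1):.0f} genome copies/reaction")
```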

  11. Preclinical evaluation and quantification of [18F]MK-9470 as a radioligand for PET imaging of the type 1 cannabinoid receptor in rat brain

    International Nuclear Information System (INIS)

    [18F]MK-9470 is an inverse agonist for the type 1 cannabinoid (CB1) receptor allowing its use in PET imaging. We characterized the kinetics of [18F]MK-9470 and evaluated its ability to quantify CB1 receptor availability in the rat brain. Dynamic small-animal PET scans with [18F]MK-9470 were performed in Wistar rats on a FOCUS-220 system for up to 10 h. Both plasma and perfused brain homogenates were analysed using HPLC to quantify radiometabolites. Displacement and blocking experiments were done using cold MK-9470 and another inverse agonist, SR141716A. The distribution volume (VT) of [18F]MK-9470 was used as a quantitative measure and compared to brain uptake expressed as SUV, a simplified method of quantification. The percentage of intact [18F]MK-9470 in arterial plasma samples was 80 ± 23 % at 10 min, 38 ± 30 % at 40 min and 13 ± 14 % at 210 min. A polar radiometabolite fraction was detected in plasma and brain tissue. The brain radiometabolite concentration was uniform across the whole brain. Displacement and pretreatment studies showed that 56 % of the tracer binding was specific and reversible. VT values obtained with a one-tissue compartment model plus constrained radiometabolite input had good identifiability (≤10 %). Ignoring the radiometabolite contribution, using a one-tissue compartment model alone, i.e. without constrained radiometabolite input, overestimated the [18F]MK-9470 VT, but the two measures were correlated. A correlation between [18F]MK-9470 VT and SUV in the brain was also found (R^2 = 0.26-0.33; p ≤ 0.03). While the presence of a brain-penetrating radiometabolite fraction complicates the quantification of [18F]MK-9470 in the rat brain, its tracer kinetics can be modelled using a one-tissue compartment model with and without constrained radiometabolite input. (orig.)
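    A hedged sketch of the kinetic modelling step, without the constrained radiometabolite input used in the study: a one-tissue compartment model is fitted to a synthetic time-activity curve to recover K1, k2, and VT = K1/k2. The input function and all parameter values are invented for illustration.

```python
# Hedged sketch: fit a one-tissue compartment model to a synthetic
# time-activity curve and report VT = K1/k2. All values are illustrative.
import numpy as np
from scipy.integrate import odeint
from scipy.optimize import curve_fit

t = np.linspace(0.01, 120.0, 60)                  # minutes
# Synthetic plasma input function (an assumption, not measured data)
def plasma(ti):
    return 5.0 * ti * np.exp(-0.15 * ti) + 0.05

def tissue_curve(t, K1, k2):
    def dCdt(C, ti):
        # dC_T/dt = K1 * C_p(ti) - k2 * C_T
        return K1 * plasma(ti) - k2 * C
    return odeint(dCdt, 0.0, t).ravel()

# Synthetic "measured" tissue data for K1=0.12, k2=0.06, plus noise
measured = tissue_curve(t, 0.12, 0.06)
measured += np.random.default_rng(1).normal(0, 0.02, t.size)

(K1, k2), _ = curve_fit(tissue_curve, t, measured, p0=[0.1, 0.1])
print(f"K1={K1:.3f}, k2={k2:.3f}, VT={K1/k2:.2f}")
```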

  12. The Biequivalence of Locally Cartesian Closed Categories and Martin-Löf Type Theories

    CERN Document Server

    Clairambault, Pierre

    2011-01-01

    Seely's paper "Locally cartesian closed categories and type theory" contains a well-known result in categorical type theory: that the category of locally cartesian closed categories is equivalent to the category of Martin-Löf type theories with Pi-types, Sigma-types and extensional identity types. However, Seely's proof relies on the problematic assumption that substitution in types can be interpreted by pullbacks. Here we prove a corrected version of Seely's theorem: that the Bénabou-Hofmann interpretation of Martin-Löf type theory in locally cartesian closed categories yields a biequivalence of 2-categories. To facilitate the technical development we employ categories with families as a substitute for syntactic Martin-Löf type theories. As a second result we prove that if we remove Pi-types the resulting categories with families are biequivalent to left exact categories.

  13. U-dualities in Type II string theories and M-theory

    CERN Document Server

    Musaev, Edvard

    2013-01-01

    In this thesis the recently developed duality covariant approach to string and M-theory is investigated. In this formalism the U-duality symmetry of M-theory or T-duality symmetry of Type II string theory becomes manifest upon extending coordinates that describe a background. The effective potential of Double Field Theory is formulated only up to a boundary term and thus does not capture possible topological effects that may come from a boundary. By introducing a generalised normal we derive a manifestly duality covariant boundary term that reproduces the known Gibbons-Hawking action of General Relativity, if the section condition is imposed. It is shown that the full potential can be represented as a sum of the scalar potential of gauged supergravity and a topological term that is a full derivative. The latter is conjectured to capture non-trivial topological information of the corresponding background, such as monodromy around an exotic brane. Next we show that the Scherk-Schwarz reduction of M-theory exten...

  14. Nonlinear stability of solar type 3 radio bursts. I. Theory

    International Nuclear Information System (INIS)

    A theory of the excitation of solar type 3 bursts is presented. Electrons initially unstable to the linear bump-in-tail instability are shown to amplify Langmuir waves rapidly to energy densities characteristic of strong turbulence. The three-dimensional equations which describe the strong coupling (wave-wave) interactions are derived. For parameters characteristic of the interplanetary medium the equations reduce to those for one dimension. In this case, the oscillating two-stream instability (OTSI) is the dominant nonlinear instability, and it is stabilized through the production of nonlinear ion density fluctuations that efficiently scatter Langmuir waves out of resonance with the electron beam. An analytical model of the electron distribution function is also developed and used to estimate the total energy losses suffered by the electron beam as it propagates from the solar corona to 1 A.U. and beyond

  15. Almost Special Holonomy in Type IIA&M Theory

    CERN Document Server

    Cvetic, M; Lü, H; Pope, C N

    2002-01-01

    We consider spaces M_7 and M_8 of G_2 holonomy and Spin(7) holonomy in seven and eight dimensions, with a U(1) isometry. For metrics where the length of the associated circle is everywhere finite and non-zero, one can perform a Kaluza-Klein reduction of supersymmetric M-theory solutions (Minkowski)_4\times M_7 or (Minkowski)_3\times M_8, to give supersymmetric solutions (Minkowski)_4\times Y_6 or (Minkowski)_3\times Y_7 in type IIA string theory with a non-singular dilaton. We study the associated six-dimensional and seven-dimensional spaces Y_6 and Y_7 perturbatively in the regime where the string coupling is weak but still non-zero, for which the metrics remain Ricci-flat but no longer have special holonomy at the linearised level. In fact they have ``almost special holonomy,'' which for the case of Y_6 means almost Kähler, together with a further condition. For Y_7 we are led to introduce the notion of an ``almost G_2 manifold,'' for which the associative 3-form is closed but not co-closed. We o...

  16. Type IIB flux vacua from G-theory II

    CERN Document Server

    Candelas, Philip; Damian, Cesar; Larfors, Magdalena; Morales, Jose Francisco

    2014-01-01

    We find analytic solutions of type IIB supergravity on geometries that locally take the form $\\text{Mink}\\times M_4\\times \\mathbb{C}$ with $M_4$ a generalised complex manifold. The solutions involve the metric, the dilaton, NSNS and RR flux potentials (oriented along the $M_4$) parametrised by functions varying only over $\\mathbb{C}$. Under this assumption, the supersymmetry equations are solved using the formalism of pure spinors in terms of a finite number of holomorphic functions. Alternatively, the solutions can be viewed as vacua of maximally supersymmetric supergravity in six dimensions with a set of scalar fields varying holomorphically over $\\mathbb{C}$. For a class of solutions characterised by up to five holomorphic functions, we outline how the local solutions can be completed to four-dimensional flux vacua of type IIB theory. A detailed study of this global completion for solutions with two holomorphic functions has been carried out in the companion paper [1]. The fluxes of the global solutions ar...

  17. Geometry of model building in type IIB superstring theory and F-theory compactifications

    International Nuclear Information System (INIS)

    The present thesis is devoted to the study and geometrical description of type IIB superstring theory and F-theory model building. After a concise exposition of the basic concepts of type IIB flux compactifications, we explain their relation to F-theory. Moreover, we give a brief introduction to toric geometry, focusing on the construction and the analysis of compact Calabi-Yau (CY) manifolds, which play a prominent role in the compactification of extra spatial dimensions. We study the 'Large Volume Scenario' on explicit new compact four-modulus CY manifolds. We thoroughly analyze the possibility of generating neutral non-perturbative superpotentials from Euclidean D3-branes in the presence of chirally intersecting D7-branes. We find that taking proper account of the Freed-Witten anomaly on non-spin cycles and of the Kähler cone conditions imposes severe constraints on the models. Furthermore, we systematically construct a large number of compact CY fourfolds that are suitable for F-theory model building. These elliptically fibered CYs are complete intersections of two hypersurfaces in a six-dimensional ambient space. We first construct three-dimensional base manifolds that are hypersurfaces in a toric ambient space. We find that elementary conditions, which are motivated by F-theory GUTs (Grand Unified Theories), lead to strong constraints on the geometry, which significantly reduce the number of suitable models. We work out several examples in more detail. At the end, we focus on the complex moduli space of CY threefolds. It is a known result that infinite sequences of type IIB flux vacua with imaginary self-dual flux can only occur in so-called D-limits, corresponding to singular points in complex structure moduli space. We refine this no-go theorem by demonstrating that there are no infinite sequences accumulating to the large complex structure point of a certain class of one-parameter CY manifolds. We perform a similar analysis for conifold points and for the decoupling limit, obtaining identical results. Furthermore, we establish the absence of infinite sequences in a D-limit corresponding to the large complex structure limit of a two-parameter CY. We corroborate our results with a numerical study of the sequences. (author)

  18. Quantification of viral hepatitis type B using the real-time PCR technique (Cuantificación del virus de hepatitis B por la técnica PCR tiempo real)

    Scientific Electronic Library Online (English)

    Elizabeth, Rojas-Cordero.

    2008-11-01

    Full Text Available The polymerase chain reaction (PCR) technique has brought a notable advance in virus quantification for the management of chronic viral infections, especially HIV and the hepatitis B and C viruses. Real-time quantification is performed on the ABI PRISM Sequence Detection System, which specifically amplifies a fragment of the hepatitis B virus genome. The HBV PCR kit is recommended for taking the sample, and a protocol for sample storage has been established. It is important to know that freezing the samples or prolonged storage decreases the sensitivity of the method; samples can be stored for years at -70 °C. Tubes with heparin, or samples from heparinized patients, alter the determination. The lower detectable limit of the B virus is 3.78 IU/ml and the upper limit is 1.4 x 10(11) IU/ml; all genotypes from A to H are determined. HBeAg-positive patients usually have values greater than 1 x 10(6) IU/ml, while HBeAg-negative inactive carriers usually have values below 1 x 10(4) IU/ml.

  19. A Study of the Discrepancies among the Values Orderings of 12 Counseling Theories: The Quantification of Values Differences.

    Science.gov (United States)

    Remer, Rory; Remer, Pamela A.

    1982-01-01

    Demonstrates the relevance of the values ordering framework of Kluckhohn and Strodtbeck to counseling theory and practice. Examines the commonalities in values for 12 counseling theories. Presents a method for quantifying the discrepancies between values orderings produced for different counseling orientations. (JAC)

  20. Mapping and Quantification of Land Area and Cover Types with LandsatTM in Carey Island, Selangor, Malaysia

    Directory of Open Access Journals (Sweden)

    J Hj. Kamaruzaman

    2009-02-01

    Full Text Available Information about current land cover type is essential to ensure the optimum use of land resources. Several approaches can be used to estimate land cover area; remote sensing combined with a Geographic Information System (GIS) is among them. This study was therefore undertaken to evaluate how reliable these technologies are for preparing land cover information for Carey Island, Selangor, Peninsular Malaysia. Erdas Imagine 9.1 was used for digital image processing. Primary Landsat TM data with a spatial resolution of 30 m were acquired from scene 127/58 in July 2007. Area estimates were calculated using the direct expansion method from sample proportions of each segment of land cover type (1 km by 1 km sample size). Four classes of land cover type were identified: oil palm, mangrove, water bodies and urban/bare land. The area estimates are 11039.28 ha (oil palm), 5242.86 ha (mangrove), 4894.92 ha (water bodies) and 4751.96 ha (urban/bare land), respectively. The overall classification accuracy obtained in this study is 96%. The results show that the direct expansion method for estimating land cover area is practical when used with remote sensing approaches.
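    A hedged sketch of the direct expansion estimator used above: the class area is the total area multiplied by the mean class proportion over the sampled segments, with a standard error from the between-segment variance. The segment proportions below are invented; only the total area is taken from the record (the sum of its four class estimates).

```python
# Hedged sketch of the direct expansion estimator: class area = total area
# times the mean class proportion over 1 km x 1 km sample segments.
import numpy as np

total_area_ha = 25_929.0   # sum of the record's four class estimates (ha)
# Proportion of each sampled segment covered by oil palm (invented values)
p = np.array([0.48, 0.39, 0.51, 0.44, 0.37, 0.42, 0.46, 0.40])

p_bar = p.mean()
area_hat = total_area_ha * p_bar                      # direct expansion
se = total_area_ha * p.std(ddof=1) / np.sqrt(p.size)  # standard error

print(f"oil palm: {area_hat:.0f} ha +/- {se:.0f} ha (1 s.e.)")
```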

  1. Quantification of the types of water in Eudragit RLPO polymer and the kinetics of water loss using FTIR

    DEFF Research Database (Denmark)

    Pirayavaraporn, Chompak; Rades, Thomas

    2013-01-01

    Coalescence of polymer particles in polymer matrix tablets influences drug release. The literature has emphasized that coalescence occurs above the glass transition temperature (Tg) of the polymer and that water may plasticize (lower the Tg of) the polymer. However, we have shown previously that nonplasticizing water also influences coalescence of Eudragit RLPO, so there is a need to quantify the different types of water in Eudragit RLPO. The purpose of this study was to distinguish the types of water present in Eudragit RLPO polymer and to investigate the water loss kinetics for these different types of water. Eudragit RLPO was stored in tightly closed chambers at various relative humidities (0, 33, 56, 75, and 94%) until equilibrium was reached. Fourier transform infrared spectroscopy (FTIR)-DRIFTS was used to investigate molecular interactions between water and polymer, and water loss over time. Using a curve-fitting procedure, the water region (3100-3700 cm(-1)) of the spectra was analyzed and used to identify water present in differing environments in the polymer and to determine the water loss kinetics upon purging the sample with dry compressed air. Four environments could be differentiated (dipole interaction of water with quaternary ammonium groups, water clusters, and water indirectly and directly bound to the carbonyl groups of the polymer), but it was not possible to distinguish whether the different types of water were lost at different rates. It is suggested that water is trapped in the polymer in different forms, and this should be considered when investigating coalescence of polymer matrices.
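    A hedged sketch of the curve-fitting procedure: the water region is decomposed into four Gaussian bands, one per proposed water environment, and relative band areas are compared. The band positions, widths, and the synthetic spectrum are assumptions for illustration, not the study's spectra.

```python
# Hedged sketch: decompose the FTIR water region (3100-3700 cm-1) into four
# Gaussian bands, one per proposed water environment. Values are synthetic.
import numpy as np
from scipy.optimize import curve_fit

def gaussians(x, *params):
    """Sum of Gaussians; params = (amp, centre, width) per band."""
    y = np.zeros_like(x)
    for i in range(0, len(params), 3):
        amp, centre, width = params[i:i + 3]
        y += amp * np.exp(-((x - centre) ** 2) / (2 * width ** 2))
    return y

wavenumber = np.linspace(3100, 3700, 300)
true = gaussians(wavenumber, 0.5, 3230, 60, 0.8, 3400, 70,
                 0.6, 3540, 50, 0.3, 3630, 35)
spectrum = true + np.random.default_rng(2).normal(0, 0.01, wavenumber.size)

# Initial guesses: four bands (assumed positions for the four environments)
p0 = [0.4, 3220, 50, 0.7, 3400, 60, 0.5, 3550, 50, 0.2, 3640, 30]
popt, _ = curve_fit(gaussians, wavenumber, spectrum, p0=p0)

areas = [popt[i] * popt[i + 2] * np.sqrt(2 * np.pi) for i in range(0, 12, 3)]
print("relative band areas:", np.round(areas / np.sum(areas), 2))
```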

  2. Mapping and Quantification of Land Area and Cover Types with LandsatTM in Carey Island, Selangor, Malaysia

    OpenAIRE

    J Hj. Kamaruzaman; I Mohd Hasmadi

    2009-01-01

    Information about current land cover type is essential at a certain level to ensure the optimum use of land resources. Several approaches can be used to estimate land cover area; remote sensing combined with a Geographic Information System (GIS) is among them. Therefore, this study was undertaken to evaluate how reliable these technologies are for preparing information about land cover in Carey Island, Selangor of Peninsular Malaysia. Erdas Imagine 9.1 was used in digital image processing. A p...

  3. Type IIA flux compactifications. Vacua, effective theories and cosmological challenges

    International Nuclear Information System (INIS)

    In this thesis, we studied a number of type IIA SU(3)-structure compactifications with O6-planes on nilmanifolds and cosets, which are tractable enough to allow for an explicit derivation of the low energy effective theory. In particular we calculated the mass spectrum of the light scalar modes, using N = 1 supergravity techniques. For the torus and the Iwasawa solution, we have also performed an explicit Kaluza-Klein reduction, which led to the same result. For the nilmanifold examples we have found that there are always three unstabilized moduli corresponding to axions in the RR sector. On the other hand, in the coset models, except for SU(2) x SU(2), all moduli are stabilized. We discussed the Kaluza-Klein decoupling for the supersymmetric AdS vacua and found that it requires going to the nearly-Calabi-Yau limit. We searched for non-trivial de Sitter minima in the original flux potential away from the AdS vacuum. Finally, in chapter 7, we focused on a family of three coset spaces and constructed non-supersymmetric vacua on them. (orig.)

  4. Threshold anomalies in Horava-Lifshitz-type theories

    CERN Document Server

    Amelino-Camelia, Giovanni; Mercati, Flavio

    2009-01-01

    Recently the study of threshold kinematic requirements for particle-production processes has played a very significant role in the phenomenology of theories with departures from Poincaré symmetry. We here specialize these threshold studies to the case of a class of violations of Poincaré symmetry which has been much discussed in the literature on Horava-Lifshitz scenarios. These involve modifications of the energy-momentum ("dispersion") relation that may be different for different types of particles, but always involve even powers of energy-momentum in the correction terms. We establish the requirements for compatibility with the observed cosmic-ray spectrum, which is sensitive to the photopion-production threshold. We find that the implications for the electron-positron pair-production threshold are rather intriguing, in light of some recent studies of TeV emissions by Blazars. Our findings should also provide motivation for examining the fate of the law of energy-momentum conservation in Horava-Lifshitz-...

  5. Type IIA flux compactifications. Vacua, effective theories and cosmological challenges

    Energy Technology Data Exchange (ETDEWEB)

    Koers, Simon

    2009-07-30

    In this thesis, we studied a number of type IIA SU(3)-structure compactifications with O6-planes on nilmanifolds and cosets, which are tractable enough to allow for an explicit derivation of the low energy effective theory. In particular we calculated the mass spectrum of the light scalar modes, using N = 1 supergravity techniques. For the torus and the Iwasawa solution, we have also performed an explicit Kaluza-Klein reduction, which led to the same result. For the nilmanifold examples we have found that there are always three unstabilized moduli corresponding to axions in the RR sector. On the other hand, in the coset models, except for SU(2) x SU(2), all moduli are stabilized. We discussed the Kaluza-Klein decoupling for the supersymmetric AdS vacua and found that it requires going to the nearly-Calabi-Yau limit. We searched for non-trivial de Sitter minima in the original flux potential away from the AdS vacuum. Finally, in chapter 7, we focused on a family of three coset spaces and constructed non-supersymmetric vacua on them. (orig.)

  6. In-vivo segmentation and quantification of coronary lesions by optical coherence tomography images for a lesion type definition and stenosis grading.

    Science.gov (United States)

    Celi, Simona; Berti, Sergio

    2014-10-01

    Optical coherence tomography (OCT) is a catheter-based medical imaging technique that produces cross-sectional images of blood vessels. This technique is particularly useful for studying coronary atherosclerosis. In this paper, we present a new framework that allows segmentation and quantification of OCT images of coronary arteries to define the plaque type and stenosis grading. These analyses are usually carried out on-line on the OCT workstation, where measuring is mainly operator-dependent and mouse-based. The aim of this program is to simplify and improve the processing of OCT images for morphometric investigations and to present a fast procedure for obtaining 3D geometrical models that can also be used for external purposes, such as finite element simulations. The main phases of our toolbox are the lumen segmentation and the identification of the main tissues in the artery wall. We validated the proposed method against identification and segmentation performed manually by expert OCT readers. The method was evaluated on ten datasets from clinical routine, and the validation was performed on 210 images randomly extracted from the pullbacks. Our results show that automated segmentation of the vessel and of the tissue components is possible off-line with a precision that is comparable to manual segmentation for the tissue components and to the proprietary OCT console for the lumen segmentation. Several OCT sections have been processed to provide clinical outcomes. PMID:25077844
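    A hedged sketch of the lumen-segmentation phase on a single synthetic B-scan, using an Otsu threshold and the largest connected component. The study's pipeline is more elaborate and operator-validated; none of its parameters are reproduced here.

```python
# Hedged sketch: crude lumen segmentation for one OCT cross-section
# (Otsu threshold + largest connected component). Illustrative only.
import numpy as np
from skimage import filters, measure, morphology

def segment_lumen(image: np.ndarray, pixel_area_mm2: float) -> float:
    """Return a rough estimate of the lumen area in mm^2 for one B-scan."""
    # In OCT the lumen is dark; invert so the lumen is the bright region
    inverted = image.max() - image
    mask = inverted > filters.threshold_otsu(inverted)
    mask = morphology.remove_small_objects(mask, min_size=64)
    labels = measure.label(mask)
    regions = measure.regionprops(labels)
    if not regions:
        return 0.0
    lumen = max(regions, key=lambda r: r.area)   # keep the largest region
    return lumen.area * pixel_area_mm2

# Synthetic test image: dark circular "lumen" on a brighter wall
yy, xx = np.mgrid[0:256, 0:256]
img = np.where((yy - 128) ** 2 + (xx - 128) ** 2 < 60 ** 2, 0.1, 0.8)
img += np.random.default_rng(3).normal(0, 0.02, img.shape)
print(f"lumen area ~ {segment_lumen(img, pixel_area_mm2=1e-4):.2f} mm^2")
```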

  7. Use of TaqMan Real-Time Reverse Transcription-PCR for Rapid Detection, Quantification, and Typing of Norovirus

    OpenAIRE

    Trujillo, A. Angelica; Mccaustland, Karen A.; Zheng, Du-ping; Hadley, Leslie A.; Vaughn, George; Adams, Susan M.; Ando, Tamie; Glass, Roger I.; Monroe, Stephan S.

    2006-01-01

    Noroviruses (NoVs) are the most commonly identified cause of outbreaks and sporadic cases of acute gastroenteritis. We evaluated and optimized NoV-specific TaqMan real-time reverse transcription (RT)-PCR assays for the rapid detection and typing of NoV strains belonging to genogroups GI and GII and adapted them to the LightCycler platform. We expanded the detection ability of the assays by developing an assay that detects the GIV NoV strain. The assays were validated with 92 clinical samples ...

  8. Quantification of zinc atoms in a surface alloy on copper in an industrial-type methanol synthesis catalyst

    DEFF Research Database (Denmark)

    Kuld, Sebastian; Moses, Poul Georg

    2014-01-01

    Methanol has recently attracted renewed interest because of its potential importance as a solar fuel. Methanol is also an important bulk chemical that is most efficiently formed over the industrial Cu/ZnO/Al2O3 catalyst. The identity of the active site and, in particular, the role of ZnO as a promoter for this type of catalyst is still under intense debate. Structural changes that are strongly dependent on the pretreatment method have now been observed for an industrial-type methanol synthesis catalyst. A combination of chemisorption, reaction, and spectroscopic techniques provides a consistent picture of surface alloying between copper and zinc. This analysis enables a reinterpretation of the methods that have been used for the determination of the Cu surface area and provides an opportunity to independently quantify the specific Cu and Zn areas. This method may also be applied to other systems where metal–support interactions are important, and this work generally addresses the role of the carrier and the nature of the interactions between carrier and metal in heterogeneous catalysts.

  9. Identification of enzymes and quantification of metabolic fluxes in the wild type and in a recombinant Aspergillus oryzae strain

    DEFF Research Database (Denmark)

    Pedersen, Henrik; Carlsen, Morten

    1999-01-01

    Two alpha-amylase-producing strains of Aspergillus oryzae, a wild-type strain and a recombinant containing additional copies of the alpha-amylase gene, were characterized with respect to enzyme activities, localization of enzymes to the mitochondria or cytosol, macromolecular composition, and metabolic fluxes through the central metabolism during glucose-limited chemostat cultivations. Citrate synthase and isocitrate dehydrogenase (NAD) activities were found only in the mitochondria; glucose-6-phosphate dehydrogenase and glutamate dehydrogenase (NADP) activities were found only in the cytosol; and isocitrate dehydrogenase (NADP), glutamate oxaloacetate transaminase, malate dehydrogenase, and glutamate dehydrogenase (NAD) activities were found in both the mitochondria and the cytosol. The measured biomass components and ash could account for 95% (wt/wt) of the biomass. The protein and RNA contents increased linearly with increasing specific growth rate, but the carbohydrate and chitin contents decreased. A metabolic model consisting of 69 fluxes and 59 intracellular metabolites was used to calculate the metabolic fluxes through the central metabolism at several specific growth rates, with ammonia or nitrate as the nitrogen source. The flux through the pentose phosphate pathway increased with increasing specific growth rate. The fluxes through the pentose phosphate pathway were 15 to 26% higher for the recombinant strain than for the wild-type strain.

  10. Quantification of zinc atoms in a surface alloy on copper in an industrial-type methanol synthesis catalyst.

    Science.gov (United States)

    Kuld, Sebastian; Conradsen, Christian; Moses, Poul Georg; Chorkendorff, Ib; Sehested, Jens

    2014-06-01

    Methanol has recently attracted renewed interest because of its potential importance as a solar fuel. Methanol is also an important bulk chemical that is most efficiently formed over the industrial Cu/ZnO/Al2O3 catalyst. The identity of the active site and, in particular, the role of ZnO as a promoter for this type of catalyst is still under intense debate. Structural changes that are strongly dependent on the pretreatment method have now been observed for an industrial-type methanol synthesis catalyst. A combination of chemisorption, reaction, and spectroscopic techniques provides a consistent picture of surface alloying between copper and zinc. This analysis enables a reinterpretation of the methods that have been used for the determination of the Cu surface area and provides an opportunity to independently quantify the specific Cu and Zn areas. This method may also be applied to other systems where metal-support interactions are important, and this work generally addresses the role of the carrier and the nature of the interactions between carrier and metal in heterogeneous catalysts. PMID:24764288

  11. Experimental quantification of dynamic forces and shaft motion in two different types of backup bearings under several contact conditions

    DEFF Research Database (Denmark)

    Lahriri, Said; Santos, Ilmar

    2013-01-01

    This paper presents an experimental study of a shaft impacting its stator under different conditions, focusing mainly on the measured contact forces and the shaft motion in two different types of backup bearings. The measured contact forces are studied thoroughly: they enable the hysteresis loops to be computed and analyzed, and they are plotted against the local deformation in order to assess the contact force loss during the impacts. The shaft motion during contact with the backup bearing is verified with a two-sided spectrum analysis. The analyses show that with a conventional annular guide, the shaft undergoes a direct transition from normal operation to a full annular backward-whirling state in the case of external excitation. However, in a self-excited vibration case, where the speed is gradually increased and decreased through the first critical speed, the investigation revealed that different paths initiated the onset of backward whip and whirling motion. In order to improve the whirling and full annular contact behavior, an unconventional pinned backup bearing is realized. The idea is to utilize pin connections that center the rotor during impacts and prevent the shaft from entering a full annular contact state. The experimental results show that the shaft escapes the pins and returns to a normal operational condition during an impact event.

  12. Heartbeat-related displacement of the thoracic aorta in patients with chronic aortic dissection type B: Quantification by dynamic CTA

    International Nuclear Information System (INIS)

    Purpose: The purpose of this study was to characterize the heartbeat-related displacement of the thoracic aorta in patients with chronic aortic dissection type B (CADB). Materials and methods: Electrocardiogram-gated computed tomography angiography was performed during inspiratory breath-hold in 11 patients with CADB: collimation 16 mm x 1 mm, pitch 0.2, slice thickness 1 mm, reconstruction increment 0.8 mm. Multiplanar reformations were reconstructed at 20 equidistant time points through both the ascending (AAo) and descending aorta (true lumen, DAoT; false lumen, DAoF) and the vertex of the aortic arch (VA). In-plane vessel displacement was determined by region-of-interest analysis. Results: Mean displacement was 5.2 ± 1.7 mm (AAo), 1.6 ± 1.0 mm (VA), 0.9 ± 0.4 mm (DAoT), and 1.1 ± 0.4 mm (DAoF). This indicated a significant reduction of displacement from AAo to VA and DAoT (p < 0.05). The direction of displacement was anterior for AAo and cranial for VA. Conclusion: In CADB, the thoracic aorta undergoes a heartbeat-related displacement that exhibits an unbalanced distribution of magnitude and direction along the thoracic vessel course. Since consecutive traction forces on the aortic wall have to be assumed, these observations may have implications for the pathogenesis of and treatment strategies for CADB.
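
    The region-of-interest analysis described above reduces to tracking an ROI centre across the 20 reconstructed cardiac phases and reporting its peak in-plane excursion. A minimal sketch, assuming the centroids are already extracted per phase (the array layout and pixel spacing here are invented):

```python
import numpy as np

def peak_displacement_mm(centroids: np.ndarray, pixel_spacing: float) -> float:
    """Peak in-plane displacement of a vessel ROI over one cardiac cycle.

    `centroids` holds the (x, y) ROI centre for each of the 20 cardiac
    phases, in pixels; `pixel_spacing` (mm/pixel) converts to millimetres.
    """
    ref = centroids[0]  # first reconstructed phase taken as reference
    dists = np.linalg.norm(centroids - ref, axis=1) * pixel_spacing
    return float(dists.max())

# Example: 20 phases of a synthetic, slightly oscillating AAo centroid.
phases = np.array([[100 + 5 * np.sin(2 * np.pi * t / 20), 80]
                   for t in range(20)])
print(f"displacement: {peak_displacement_mm(phases, 0.7):.1f} mm")
```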

  13. Quantification of the physiochemical constraints on the export of spider silk proteins by Salmonella type III secretion

    Directory of Open Access Journals (Sweden)

    Voigt Christopher A

    2010-10-01

    Background: The type III secretion system (T3SS) is a molecular machine in gram-negative bacteria that exports proteins through both membranes to the extracellular environment. It has been previously demonstrated that the T3SS encoded in Salmonella Pathogenicity Island 1 (SPI-1) can be harnessed to export recombinant proteins. Here, we demonstrate the secretion of a variety of unfolded spider silk proteins and use these data to quantify the constraints of this system with respect to the export of recombinant protein.

    Results: To test how the timing and level of protein expression affect secretion, we designed a hybrid promoter that combines an IPTG-inducible system with a natural genetic circuit that controls effector expression in Salmonella (psicA). LacO operators are placed in various locations in the psicA promoter; the optimal induction occurs when a single operator is placed at +5 nt (234-fold), and a lower basal level of expression is achieved when a second operator is placed at -63 nt to take advantage of DNA looping. Using this tool, we find that the secretion efficiency (protein secreted divided by total expressed) is constant as a function of total expression. We also demonstrate that the secretion flux peaks at 8 hours. We then use whole-gene DNA synthesis to construct codon-optimized spider silk genes for full-length (3129 amino acids) Latrodectus hesperus dragline silk, Bombyx mori cocoon silk, and Nephila clavipes flagelliform silk, and PCR is used to create eight truncations of these genes. These proteins are all unfolded polypeptides encompassing a variety of lengths, charges, and amino acid compositions. We find that proteins shorter than 550 amino acids secrete reliably, and the probability declines significantly beyond ~700 amino acids. There is also a charge optimum at -2.4, and secretion efficiency declines for very positively or negatively charged proteins. There is no significant correlation with hydrophobicity.

    Conclusions: We show that the natural system encoded in SPI-1 only produces high titers of secreted protein for 4-8 hours when the natural psicA promoter is used to drive expression. Secretion efficiency can be high, but declines for charged or large sequences. A quantitative characterization of these constraints will facilitate the effective use and engineering of this system.
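
    The reported constraints (reliable secretion below 550 amino acids, a steep decline past ~700, and a charge optimum near -2.4) can be folded into a toy scoring function. The sigmoidal and Gaussian forms below are illustrative assumptions made for the sketch, not curves fitted by the authors:

```python
import math

def secretion_score(length_aa: int, net_charge: float) -> float:
    """Toy secretion likelihood in [0, 1] from the reported constraints."""
    # Length term: near 1 below 550 aa, sigmoidal drop centred at 700 aa.
    length_term = 1.0 / (1.0 + math.exp((length_aa - 700) / 60))
    # Charge term: Gaussian around the reported optimum of -2.4.
    charge_term = math.exp(-((net_charge + 2.4) ** 2) / (2 * 4.0 ** 2))
    return length_term * charge_term

print(secretion_score(500, -2.4))  # short, optimally charged: close to 1
print(secretion_score(900, +6.0))  # long, very positive: close to 0
```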

  14. Precise iteration formulae of the Maslov-type index theory for symplectic paths

    International Nuclear Information System (INIS)

    In this paper, using homotopy components of symplectic matrices and basic properties of the Maslov-type index theory, we establish precise iteration formulae of the Maslov-type index theory for any path in the symplectic group starting from the identity. (author)

  15. On the Conformal Field Theory Duals of type IIA AdS_4 Flux Compactifications

    OpenAIRE

    Aharony, Ofer; Antebi, Yaron E.; Berkooz, Micha

    2008-01-01

    We study the conformal field theory dual of the type IIA flux compactification model of DeWolfe, Giryavets, Kachru and Taylor, with all moduli stabilized. We find its central charge and properties of its operator spectrum. We concentrate on the moduli space of the conformal field theory, which we investigate through domain walls in the type IIA string theory. The moduli space turns out to consist of many different branches. We use Bezout's theorem and Bernstein's theorem to ...

  16. M Theory, Type IIA Superstrings, and Elliptic Cohomology

    CERN Document Server

    Kriz, I S; Kriz, Igor; Sati, Hisham

    2004-01-01

    The topological part of the M-theory partition function was shown by Witten to be encoded in the index of an E8 bundle in eleven dimensions. This partition function is, however, not automatically anomaly-free. We observe here that the vanishing W_7=0 of the Diaconescu-Moore-Witten anomaly in IIA and compactified M-theory partition function is equivalent to orientability of spacetime with respect to (complex-oriented) elliptic cohomology. Motivated by this, we define an elliptic cohomology correction to the IIA partition function, and propose its relationship to interaction between 2-branes and 5-branes in the M-theory limit.

  17. Quiver Gauge Theories on A-type ALE Spaces

    Science.gov (United States)

    Bruzzo, Ugo; Sala, Francesco; Szabo, Richard J.

    2015-03-01

    We survey and compare recent approaches to the computation of the partition functions and correlators of chiral BPS observables in gauge theories on ALE spaces based on quiver varieties and the minimal resolution X_k of the A_{k-1} toric singularity, in light of their recently conjectured duality with two-dimensional coset conformal field theories. We review and elucidate the rigorous constructions of gauge theories for a particular family of ALE spaces, using their relation to the cohomology of moduli spaces of framed torsion-free sheaves on a suitable orbifold compactification of X_k. We extend these computations to generic superconformal quiver gauge theories, obtaining in these instances new constraints on fractional instanton charges, a rigorous proof of the Nekrasov master formula, and new quantizations of Hitchin systems based on the underlying Seiberg-Witten geometry.

  18. M Theory, Type IIA Superstrings, and Elliptic Cohomology

    OpenAIRE

    Kriz, Igor; Sati, Hisham

    2004-01-01

    The topological part of the M-theory partition function was shown by Witten to be encoded in the index of an E8 bundle in eleven dimensions. This partition function is, however, not automatically anomaly-free. We observe here that the vanishing W_7=0 of the Diaconescu-Moore-Witten anomaly in IIA and compactified M-theory partition function is equivalent to orientability of spacetime with respect to (complex-oriented) elliptic cohomology. Motivated by this, we define an ellip...

  19. Krichever-Novikov type algebras theory and applications

    CERN Document Server

    Schlichenmaier, Martin

    2014-01-01

    Krichever and Novikov introduced certain classes of infinite-dimensional Lie algebras to extend the Virasoro algebra and its related algebras to Riemann surfaces of higher genus. The author of this book generalized and extended them to a more general setting needed by the applications. Examples of applications are Conformal Field Theory, Wess-Zumino-Novikov-Witten models, moduli space problems, integrable systems, Lax operator algebras, and deformation theory of Lie algebras. Furthermore, they constitute an important class of infinite-dimensional Lie algebras which, due to their geometric origin, are

  20. Initial layer theory and model equations of Volterra type

    International Nuclear Information System (INIS)

    It is demonstrated here that there exist initial layers to singularly perturbed Volterra equations whose thicknesses are not of order of magnitude O(ε), ε → 0. It is also shown that the initial layer theory is extremely useful because it allows one to construct the approximate solution to an equation, which is almost identical to the exact solution. (author)

  1. Quantification analysis of CT of ovarian tumors

    International Nuclear Information System (INIS)

    Early symptoms in patients with ovarian tumors are usually few and nonspecific. CT is often very helpful in the diagnosis of ovarian tumors. Although it is difficult to identify normal ovaries, it is usually possible to diagnose ovarian lesions on CT, because with few exceptions they show tumorous enlargement. We can even estimate the histology in typical cases such as dermoid cysts or some types of cystadenomas. However, estimation of histology is difficult in many cases. Tumors other than those of ovarian origin can occur in the pelvis and require differentiation. Ovarian tumors have a close relationship with the uterus and broad ligaments, and make contact with at least one side of the pelvic wall. Enhanced CT with contrast media may facilitate differentiation between a pedunculated subserosal leiomyoma uteri and an ovarian tumor, because the former shows intense enhancement, like the uterine body, whereas the latter is less intense. Thus, we have little difficulty in differentiating between tumors of ovarian origin and those of other origins. Our problem is differentiating between malignant and benign ovarian tumors, and clarifying their histology. In this study, we devised a decision flow chart to attain an accurate diagnosis. In part, we have utilized Hayashi's quantification theory II, a multiple regression analysis in which the predictive variables are categorical and the outside criteria are classificatory. Hayashi stated that the aim of multi-dimensional quantification is to synthetically form a numerical representation of intercorrelated patterns to maximize the efficiency of classification, i.e. the success rate of prediction. Thus, quantification of patterns is thought to be effective in facilitating image diagnosis such as CT and minimizing errors. (author)
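
    In modern terms, Hayashi's quantification theory II amounts to discriminant analysis over dummy-coded categorical predictors. A minimal sketch of that equivalence in Python, with invented toy CT findings and scikit-learn's linear discriminant analysis standing in for the classical solution (the `sparse_output` argument assumes scikit-learn >= 1.2):

```python
import numpy as np
from sklearn.preprocessing import OneHotEncoder
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Toy CT findings: (margin, internal density, septation) per lesion.
# Categories and labels are invented for illustration only.
findings = np.array([
    ["smooth",    "cystic", "none"],
    ["smooth",    "mixed",  "thin"],
    ["irregular", "solid",  "thick"],
    ["irregular", "mixed",  "thick"],
])
labels = np.array(["benign", "benign", "malignant", "malignant"])

# Quantification theory II assigns each category a numerical score so
# that the composite best separates the outcome classes; dummy coding
# followed by linear discriminant analysis is a standard equivalent.
encoder = OneHotEncoder(sparse_output=False)
X = encoder.fit_transform(findings)
model = LinearDiscriminantAnalysis().fit(X, labels)
print(model.predict(encoder.transform(findings)))
```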

  2. Quantification on source/receptor relationship of primary pollutants and secondary aerosols from ground sources. Part 1. Theory

    International Nuclear Information System (INIS)

    A new algorithm has been derived for trajectory models to determine the transfer coefficient of each source along or adjacent to a trajectory and to calculate the concentrations of SO2, NOx, sulfate, nitrate, fine particulate matter (PM) and coarse PM at a receptor. The transfer coefficient tf (s m-1) is defined to be the ratio between the contributed concentration ΔC (µg m-3) at the receptor from a ground source and the emission rate of the source q (µg m-2 s-1) at a grid, i.e. tf ≡ ΔC/q. The model is developed by combining a backward trajectory scheme with a circuit-type parameterization. First, the transfer coefficients of grids along or adjacent to a back-trajectory are calculated. Then, the contributed concentration of each emission grid is determined by multiplying its emission rate by the transfer coefficient of the grid. Finally, the concentration at the receptor is determined by the summation of all the contributed concentrations within the domain of simulation. (author)
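
    The defining relation tf ≡ ΔC/q makes the receptor calculation a single weighted sum over the grids of a back-trajectory. A minimal sketch (grid values invented; units follow the abstract, so tf in s m-1 times q in µg m-2 s-1 gives µg m-3):

```python
import numpy as np

def receptor_concentration(q: np.ndarray, tf: np.ndarray) -> float:
    """Receptor concentration as the sum of per-grid contributions.

    Each grid along the back-trajectory contributes dC = tf_i * q_i,
    so the receptor sees C = sum(tf_i * q_i) over the simulation domain.
    """
    return float(np.sum(tf * q))

q = np.array([2.0, 0.5, 1.2])      # emission rates of three upwind grids
tf = np.array([1e-4, 3e-4, 5e-5])  # transfer coefficients from the model
print(f"C at receptor: {receptor_concentration(q, tf):.2e} ug/m^3")
```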

  3. Quantification of margins and uncertainties: Alternative representations of epistemic uncertainty

    International Nuclear Information System (INIS)

    In 2001, the National Nuclear Security Administration of the U.S. Department of Energy in conjunction with the national security laboratories (i.e., Los Alamos National Laboratory, Lawrence Livermore National Laboratory and Sandia National Laboratories) initiated development of a process designated Quantification of Margins and Uncertainties (QMU) for the use of risk assessment methodologies in the certification of the reliability and safety of the nation's nuclear weapons stockpile. A previous presentation, 'Quantification of Margins and Uncertainties: Conceptual and Computational Basis,' describes the basic ideas that underlie QMU and illustrates these ideas with two notional examples that employ probability for the representation of aleatory and epistemic uncertainty. The current presentation introduces and illustrates the use of interval analysis, possibility theory and evidence theory as alternatives to the use of probability theory for the representation of epistemic uncertainty in QMU-type analyses. The following topics are considered: the mathematical structure of alternative representations of uncertainty, alternative representations of epistemic uncertainty in QMU analyses involving only epistemic uncertainty, and alternative representations of epistemic uncertainty in QMU analyses involving a separation of aleatory and epistemic uncertainty. Analyses involving interval analysis, possibility theory and evidence theory are illustrated with the same two notional examples used in the presentation indicated above to illustrate the use of probability to represent aleatory and epistemic uncertainty in QMU analyses.
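
    Of the alternatives mentioned, interval analysis is the easiest to illustrate: each epistemically uncertain input is an interval, and the response is bounded rather than given a distribution. A minimal sketch by vertex enumeration (adequate for monotone responses; the margin model and bounds are invented):

```python
from itertools import product

def propagate_interval(f, *intervals):
    """Bound f over an input box by evaluating it at every corner.

    Exact for responses monotone in each input; purely illustrative
    otherwise. Each interval is a (lower, upper) pair.
    """
    values = [f(*corner) for corner in product(*intervals)]
    return min(values), max(values)

# Toy margin model: margin = capacity - load, both known only to intervals.
margin = lambda capacity, load: capacity - load
print(propagate_interval(margin, (8.0, 10.0), (3.0, 6.0)))  # (2.0, 7.0)
```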

  4. The Classification of Gun’s Type Using Image Recognition Theory

    OpenAIRE

    M.L.Kulthon Kasemsan

    2014-01-01

    The research aims to develop the Gun's Type and Models Classification (GTMC) system using image recognition theory. It is expected that this study can serve as a guide for law enforcement agencies, or at least as a catalyst for similar research. Master image storage and image recognition are the two main processes. The procedure involves original images, scaling, grayscale conversion, the Canny edge detector, the SUSAN corner detector, block matching templates, and finally gun type's recogniti...

  5. A New Look at Generalized Rewriting in Type Theory

    Directory of Open Access Journals (Sweden)

    Matthieu Sozeau

    2009-01-01

    Rewriting is an essential tool for computer-based reasoning, both automated and assisted. This is because rewriting is a general notion that permits modeling a wide range of problems and provides a means to effectively solve them. In a proof assistant, rewriting can be used to replace terms in arbitrary contexts, generalizing the usual equational reasoning to reasoning modulo arbitrary relations. This can be done provided the necessary proofs that functions appearing in goals are congruent with respect to specific relations. We present a new implementation of generalized rewriting in the Coq proof assistant, making essential use of the expressive power of dependent types and the recently implemented type class mechanism. The new rewrite tactic improves on and generalizes previous versions by natively supporting higher-order functions, polymorphism and subrelations. The type class system inspired by Haskell provides a perfect interface between the user and the tactic, making it easily extensible.

  6. Surveying problem solution with theory and objective type questions

    CERN Document Server

    Chandra, AM

    2005-01-01

    The book provides a lucid and step-by-step treatment of the various principles and methods for solving problems in land surveying. Each chapter starts with basic concepts and definitions, then solution of typical field problems and ends with objective type questions. The book explains errors in survey measurements and their propagation. Survey measurements are detailed next. These include horizontal and vertical distance, slope, elevation, angle, and direction. Measurement using stadia tacheometry and EDM are then highlighted, followed by various types of levelling problems. Traversing is then explained, followed by a detailed discussion on adjustment of survey observations and then triangulation and trilateration.

  7. Symmetry breaking and restoration in Lifshitz type theories

    OpenAIRE

    Farakos, K.; Metaxas, D.

    2011-01-01

    We consider the one-loop effective potential at zero and finite temperature in scalar field theories with anisotropic space-time scaling. For $z=2$, there is a symmetry breaking term induced at one-loop at zero temperature and we find symmetry restoration through a first-order phase transition at high temperature. For $z=3$, we considered at first the case with a positive mass term at tree level and found no symmetry breaking effects induced at one-loop, and then we study th...

  8. Cosmic web-type classification using decision theory

    Science.gov (United States)

    Leclercq, F.; Jasche, J.; Wandelt, B.

    2015-04-01

    Aims: We propose a decision criterion for segmenting the cosmic web into different structure types (voids, sheets, filaments, and clusters) on the basis of their respective probabilities and the strength of data constraints. Methods: Our approach is inspired by an analysis of games of chance where the gambler only plays if a positive expected net gain can be achieved based on some degree of privileged information. Results: The result is a general solution for classification problems in the face of uncertainty, including the option of not committing to a class for a candidate object. As an illustration, we produce high-resolution maps of web-type constituents in the nearby Universe as probed by the Sloan Digital Sky Survey main galaxy sample. Other possible applications include the selection and labelling of objects in catalogues derived from astronomical survey data.
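
    The gambling analogy translates directly into code: bet on a structure type only if its expected net gain is positive, otherwise abstain. A minimal sketch with invented payoffs and posteriors (the paper's calibrated gains are not reproduced here):

```python
import numpy as np

def classify_with_abstention(probs: np.ndarray, gains: np.ndarray,
                             stake: float = 1.0) -> int:
    """Pick the web type with the highest positive expected net gain.

    `probs` are posterior probabilities for (void, sheet, filament,
    cluster); betting on type k costs `stake` and pays `gains[k]` with
    probability `probs[k]`. Returns the winning index, or -1 to signal
    "do not commit" when no bet has positive expectation.
    """
    expected = probs * gains - stake
    best = int(np.argmax(expected))
    return best if expected[best] > 0 else -1

gains = np.array([2.0, 3.0, 4.0, 5.0])
print(classify_with_abstention(np.array([0.9, 0.05, 0.03, 0.02]), gains))  # 0: void
print(classify_with_abstention(np.array([0.4, 0.25, 0.2, 0.15]), gains))   # -1: abstain
```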

  9. Cosmic web-type classification using decision theory

    CERN Document Server

    Leclercq, Florent; Wandelt, Benjamin

    2015-01-01

    We propose a decision criterion for segmenting the cosmic web into different structure types (voids, sheets, filaments and clusters) on the basis of their respective probabilities and the strength of data constraints. Our approach is inspired by an analysis of games of chance where the gambler only plays if a positive expected net gain can be achieved based on some degree of privileged information. The result is a general solution for classification problems in the face of uncertainty, including the option of not committing to a class for a candidate object. As an illustration, we produce high-resolution maps of web-type constituents in the nearby Universe as probed by the Sloan Digital Sky Survey main galaxy sample. Other possible applications include the selection and labeling of objects in catalogs derived from astronomical survey data.

  10. On the Conformal Field Theory Duals of type IIA AdS_4 Flux Compactifications

    CERN Document Server

    Aharony, Ofer; Berkooz, Micha

    2008-01-01

    We study the conformal field theory dual of the type IIA flux compactification model of DeWolfe, Giryavets, Kachru and Taylor, with all moduli stabilized. We find its central charge and properties of its operator spectrum. We concentrate on the moduli space of the conformal field theory, which we investigate through domain walls in the type IIA string theory. The moduli space turns out to consist of many different branches. We use Bezout's theorem and Bernstein's theorem to enumerate the different branches of the moduli space and estimate their dimension.

  11. Symmetry breaking and restoration in Lifshitz type theories

    Science.gov (United States)

    Farakos, K.; Metaxas, D.

    2012-02-01

    We consider the one-loop effective potential at zero and finite temperature in scalar field theories with anisotropic space-time scaling. For z = 2, there is a symmetry breaking term induced at one loop at zero temperature and we find symmetry restoration through a first-order phase transition at high temperature. For z = 3, we considered at first the case with a positive mass term at tree level and found no symmetry breaking effects induced at one loop, and then we study the case with a negative mass term at tree level where we cannot conclude about symmetry restoration effects at high temperature because of the imaginary parts that appear in the effective potential for small values of the scalar field.

  12. Symmetry breaking and restoration in Lifshitz type theories

    CERN Document Server

    Farakos, K

    2011-01-01

    We consider the one-loop effective potential at zero and finite temperature in scalar field theories with anisotropic space-time scaling. For $z=2$, there is a symmetry breaking term induced at one-loop at zero temperature and we find symmetry restoration through a first-order phase transition at high temperature. For $z=3$, we considered at first the case with a positive mass term at tree level and found no symmetry breaking effects induced at one-loop, and then we study the case with a negative mass term at tree level where we cannot conclude about symmetry restoration effects at high temperature because of the imaginary parts that appear in the effective potential for small values of the scalar field.

  13. Ginzburg-Landau-type theory of spin superconductivity.

    Science.gov (United States)

    Bao, Zhi-qiang; Xie, X C; Sun, Qing-feng

    2013-01-01

    Spin superconductivity is a recently proposed analogue of conventional charge superconductivity, in which spin currents flow without dissipation but charge currents do not. Here we derive a universal framework for describing the properties of a spin superconductor along similar lines to the Ginzburg-Landau equations that describe conventional superconductors, and show that the second of these Ginzburg-Landau-type equations is equivalent to a generalized London equation. Just as the Ginzburg-Landau equations enabled researchers to explore the behaviour of charge superconductors, our Ginzburg-Landau-type equations enable us to make a number of non-trivial predictions about the potential behaviour of a putative spin superconductor. They enable us to calculate the super spin current in a spin superconductor under a uniform electric field, or one induced by a thin conducting wire. Moreover, they allow us to predict the emergence of new phenomena, including the spin-current Josephson effect, in which a time-independent magnetic field induces a time-dependent spin current. PMID:24335888

  14. On the field theory of the extended-type electron

    International Nuclear Information System (INIS)

    In a recent paper, the classical theory of Barut and Zhanghi (BZ) for the electron spin [which interpreted the Zitterbewegung (zbw) motion as an internal motion along helical paths] and its "quantum" version have been investigated by using the language of Clifford algebras. In so doing, a new non-linear Dirac-like equation (NDE) was derived. We want to readdress the whole subject, and "complete" it, by adopting - for the sake of physical clarity - the ordinary tensorial language, within the frame of a first-quantization formalism. In particular, we re-derive here the NDE for the electron field and show it to be associated with a new conserved probability current, which allows us to work out a quantum probability interpretation of the NDE. Actually, we propose this equation in substitution for the Dirac equation, which is obtained from the former by averaging over a zbw cycle. We then derive a new equation of motion for the 4-velocity field which allows us to regard the electron as an extended object with a classically intelligible internal structure (thus overcoming some known, long-standing problems). We carefully study the solutions of the NDE, with special attention to those implying (at the classical limit) light-like helical motions, since they appear to be the most adequate solutions for the electron description from a kinematical and physical point of view, and to cope with the electromagnetic properties of the electron. (author). 18 refs

  15. Digital games for type 1 and type 2 diabetes: underpinning theory with three illustrative examples.

    Science.gov (United States)

    Kamel Boulos, Maged N; Gammon, Shauna; Dixon, Mavis C; MacRury, Sandra M; Fergusson, Michael J; Miranda Rodrigues, Francisco; Mourinho Baptista, Telmo; Yang, Stephen P

    2015-01-01

    Digital games are an important class of eHealth interventions in diabetes, made possible by the Internet and a good range of affordable mobile devices (eg, mobile phones and tablets) available to consumers these days. Gamifying disease management can help children, adolescents, and adults with diabetes to better cope with their lifelong condition. Gamification and social in-game components are used to motivate players/patients and positively change their behavior and lifestyle. In this paper, we start by presenting the main challenges facing people with diabetes (children/adolescents and adults) from a clinical perspective, followed by three short illustrative examples of mobile and desktop game apps and platforms designed by Ayogo Health, Inc. (Vancouver, BC, Canada) for type 1 diabetes (one example) and type 2 diabetes (two examples). The games target different age groups with different needs: children with type 1 diabetes versus adults with type 2 diabetes. The paper is not meant to be an exhaustive review of all digital game offerings available for people with type 1 and type 2 diabetes, but rather to serve as a taster of a few of the game genres on offer today for both types of diabetes, with a brief discussion of (1) some of the underpinning psychological mechanisms of gamified digital interventions and platforms as self-management adherence tools, and more, in diabetes, and (2) some of the hypothesized potential benefits that might be gained from their routine use by people with diabetes. More research evidence from full-scale evaluation studies is needed and expected in the near future that will quantify, qualify, and establish the evidence base concerning this gamification potential, such as what works in each age group/patient type, what does not, and under which settings and criteria. PMID:25791276

  16. M theory, type IIA string and 4D N=1 SUSY SU(NL) x SU(NR) gauge theory

    International Nuclear Information System (INIS)

    SU(NL) x SU(NR) gauge theories are investigated as effective field theories on D4-branes in type IIA string theory. The classical gauge configuration is shown to match quantitatively with a corresponding classical U(NL) x U(NR) gauge theory. Quantum effects freeze the U(1) gauge factors and turn some parameters into moduli. The SU(NL) x SU(NR) quantum model is realized in M theory. Starting with an N=2 configuration (parallel NS 5-branes), the rotation of a single NS 5-brane is considered. Generically, this leads to a complete lifting of the Coulomb moduli space. The implications of this result for field theory and the dynamics of branes are discussed. When the initial M 5-brane is reducible, part of the Coulomb branch may survive. Some such situations are considered, leading to curves describing the effective gauge couplings for N=1 models. The generalization to models with more gauge group factors is also discussed. (orig.)

  17. Cartan's equations define a topological field theory of the BF type

    International Nuclear Information System (INIS)

    Cartan's first and second structure equations together with the first and second Bianchi identities can be interpreted as equations of motion for the tetrad, the connection and a set of two-form fields T^I and R^I_J. From this viewpoint, these equations define by themselves a field theory. Restricting the analysis to four-dimensional spacetimes (keeping gravity in mind), it is possible to give an action principle of the BF type from which these equations of motion are obtained. The action turns out to be equivalent to a linear combination of the Nieh-Yan, Pontrjagin, and Euler classes, and so the field theory defined by the action is topological. Once Einstein's equations are added, the resulting theory is general relativity. Therefore, the current results show that the relationship between general relativity and topological field theories of the BF type is also present in the first-order formalism for general relativity.

  18. Bianchi type-V dark energy model in Brans-Dicke theory of gravitation

    Science.gov (United States)

    Rao, V. U. M.; Jaya Sudha, V.

    2015-05-01

    A spatially homogeneous and anisotropic Bianchi type-V model with variable equation of state (EOS) parameter and constant deceleration parameter is investigated in the presence of anisotropic dark energy in the Brans-Dicke (Phys. Rev. 124:925, 1961) scalar-tensor theory of gravitation. The field equations of this theory have been solved by using the variation law for the generalized Hubble parameter proposed by Bermann (Nuovo Cimento B 74:82, 1983). Some physical and kinematical properties of the model are also discussed.

  19. Seiberg-Witten-type Maps for Currents and Energy-Momentum Tensors in Noncommutative Gauge Theories

    OpenAIRE

    Banerjee, Rabin; Lee, Choonkyu; Yang, Hyun Seok

    2003-01-01

    We derive maps relating the currents and energy-momentum tensors in noncommutative (NC) gauge theories with their commutative equivalents. Some uses of these maps are discussed. Especially, in NC electrodynamics, we obtain a generalization of the Lorentz force law. Also, the same map for anomalous currents relates the Adler-Bell-Jackiw type NC covariant anomaly with the standard commutative-theory anomaly. For the particular case of two dimensions, we discuss the implication...

  20. Screening for and validated quantification of phenethylamine-type designer drugs and mescaline in human blood plasma by gas chromatography/mass spectrometry.

    Science.gov (United States)

    Habrdova, Vilma; Peters, Frank T; Theobald, Denis S; Maurer, Hans H

    2005-06-01

    In recent years, several newer designer drugs of the so-called 2C series such as 2C-D, 2C-E, 2C-P, 2C-B, 2C-I, 2C-T-2, and 2C-T-7 have entered the illicit drug market as recreational drugs. Some fatal intoxications involving 2C-T-7 have been reported. Only scarce data have been published about analyses of these substances in human blood and/or plasma. This paper describes a method for screening and simultaneous quantification of the above-mentioned compounds and their analog mescaline in human blood plasma. The analytes were analyzed by gas chromatography/mass spectrometry in the selected-ion monitoring mode, after mixed-mode solid-phase extraction (HCX) and derivatization with heptafluorobutyric anhydride. The method was fully validated according to international guidelines. Validation data for 2C-T-2 and 2C-T-7 were unacceptable. For all other analytes, the method was linear from 5 to 500 microg/L and the data for accuracy (bias) and precision (coefficient of variation) were within the acceptance limits of +/-15% and <15%, respectively (within +/-20% and <20% near the limit of quantification of 5 microg/L). PMID:15827969
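
    The stated acceptance criteria are easy to encode as a validation check. A minimal sketch restating only the limits quoted above (bias within ±15% and CV below 15%, relaxed to 20% near the 5 µg/L limit of quantification); the QC values in the example are invented:

```python
def validation_ok(bias_pct: float, cv_pct: float, near_lloq: bool = False) -> bool:
    """Check one QC level against the quoted accuracy/precision limits."""
    limit = 20.0 if near_lloq else 15.0
    return abs(bias_pct) <= limit and cv_pct < limit

print(validation_ok(bias_pct=-8.2, cv_pct=6.5))                   # mid-range QC
print(validation_ok(bias_pct=18.0, cv_pct=12.0, near_lloq=True))  # LLOQ QC
```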

  1. How to obtain a covariant Breit type equation from relativistic Constraint Theory

    International Nuclear Information System (INIS)

    It is shown that, by an appropriate modification of the structure of the interaction potential, the Breit equation can be incorporated into a set of two compatible, manifestly covariant wave equations, derived from the general rules of Constraint Theory. The equation complementary to the covariant Breit-type equation determines the evolution law in the relative time variable. The interaction potential can be systematically calculated in perturbation theory from Feynman diagrams. The normalization condition of the Breit wave function is determined. The wave equation is reduced, for general classes of potentials, to a single Pauli-Schroedinger-type equation. (author). 27 refs

  2. Construction of a Gauge-Invariant Action for Type II Superstring Field Theory

    OpenAIRE

    Matsunaga, Hiroaki

    2013-01-01

    We construct a gauge-invariant action for covariant type II string field theory in the NS-NS sector. Our construction is based on the large Hilbert space description and Zwiebach's string products are used. First, we rewrite the action for bosonic string field theory into a new form where a state in the kernel of the generator of the gauge transformation appears explicitly. Then we use the same strategy and write down our type II action, where a projector onto the small Hilb...

  3. Maximal R-symmetry violating amplitudes in type IIb superstring theory.

    Science.gov (United States)

    Boels, Rutger H

    2012-08-24

    On-shell superspace techniques are used to quantify R-symmetry violation in type IIB superstring theory amplitudes in a flat background in 10 dimensions. This shows the existence of a particularly simple class of nonvanishing amplitudes in this theory, which violate R symmetry maximally. General properties of the class and some of its extensions are established that at string tree level are shown to determine the first three nontrivial effective field theory contributions to all multiplicity. This leads to a natural conjecture for the exact analytic part of the first two of these. PMID:23002738

  4. Abelian gauge symmetries and fluxed instantons in compactifications of type IIB and F-theory

    CERN Document Server

    Kerstan, Max

    2014-01-01

    We discuss the role of Abelian gauge symmetries in type IIB orientifold compactifications and their F-theory uplift. Particular emphasis is placed on U(1)s which become massive through the geometric Stückelberg mechanism in type IIB. We present a proposal on how to take such geometrically massive U(1)s and the associated fluxes into account in the Kaluza-Klein reduction of F-theory with the help of non-harmonic forms. Evidence for this proposal is obtained by working out the F-theory effective action including such non-harmonic forms and matching the results with the known type IIB expressions. We furthermore discuss how world-volume fluxes on D3-brane instantons affect the instanton charge with respect to U(1) gauge symmetries and the chiral zero mode spectrum. The classical partition function of M5-instantons in F-theory is discussed and compared with the type IIB results for D3-brane instantons. The type IIB match allows us to determine the correct M5 partition function. Selection rules for the absence o...

  5. Nonperturbative type IIB model building in the F-theory framework

    International Nuclear Information System (INIS)

    This dissertation is concerned with the topic of non-perturbative string theory, which is generally considered to be the most promising approach to a consistent description of quantum gravity. The five known 10-dimensional perturbative string theories are all interconnected by numerous dualities, such that an underlying non-perturbative 11-dimensional theory, called M-theory, is postulated. Due to several technical obstacles, little is known about the fundamental objects in this theory. There exists an alternative non-perturbative description to type IIB string theory, namely F-theory. Here the SL(2;Z) self-duality of IIB theory is geometrized in the form of an elliptic fibration over the space-time. Moreover, higher-dimensional objects like 7-branes are included via singularities into the geometric picture. This formally elegant description, however, requires significant technical effort for the construction of suitable compactification geometries, as many different aspects necessarily have to be dealt with at the same time. On the other hand, the generation of essential GUT building blocks like certain Yukawa couplings or spinor representations is easier compared to perturbative string theory. The goal of this study is therefore to formulate a unified theory within the framework of F-theory, that satisfies basic phenomenological constraints. Within this thesis, at first E3-brane instantons in type IIB string theory - 4-dimensional objects that are entirely wrapped around the invisible dimensions of space-time - are matched with M5-branes in F-theory. Such objects are of great importance in the generation of critical Yukawa couplings or the stabilization of the free parameters of a theory. Certain properties of M5-branes then allow to derive a new criterion for E3-branes to contribute to the superpotential. In the aftermath of this analysis, several compactification geometries are constructed and checked for basic properties that are relevant for semi-realistic unified model building. An important aspect is the proper handling of the gauge flux on the 7-branes. Via the spectral cover description - which at first requires further refinements - chiral matter can be generated and the unified gauge group can be broken to the Standard Model. Ultimately, in this thesis an explicit unified model based on the gauge group SU(5) is constructed within the F-theory framework, such that an acceptable phenomenology and the observed three chiral matter generations are obtained. (orig.)

  6. Nonperturbative type IIB model building in the F-theory framework

    Energy Technology Data Exchange (ETDEWEB)

    Jurke, Benjamin Helmut Friedrich

    2011-02-28

    This dissertation is concerned with the topic of non-perturbative string theory, which is generally considered to be the most promising approach to a consistent description of quantum gravity. The five known 10-dimensional perturbative string theories are all interconnected by numerous dualities, such that an underlying non-perturbative 11-dimensional theory, called M-theory, is postulated. Due to several technical obstacles, little is known about the fundamental objects in this theory. There exists an alternative non-perturbative description to type IIB string theory, namely F-theory. Here the SL(2;Z) self-duality of IIB theory is geometrized in the form of an elliptic fibration over the space-time. Moreover, higher-dimensional objects like 7-branes are included via singularities into the geometric picture. This formally elegant description, however, requires significant technical effort for the construction of suitable compactification geometries, as many different aspects necessarily have to be dealt with at the same time. On the other hand, the generation of essential GUT building blocks like certain Yukawa couplings or spinor representations is easier compared to perturbative string theory. The goal of this study is therefore to formulate a unified theory within the framework of F-theory, that satisfies basic phenomenological constraints. Within this thesis, at first E3-brane instantons in type IIB string theory - 4-dimensional objects that are entirely wrapped around the invisible dimensions of space-time - are matched with M5-branes in F-theory. Such objects are of great importance in the generation of critical Yukawa couplings or the stabilization of the free parameters of a theory. Certain properties of M5-branes then allow to derive a new criterion for E3-branes to contribute to the superpotential. In the aftermath of this analysis, several compactification geometries are constructed and checked for basic properties that are relevant for semi-realistic unified model building. An important aspect is the proper handling of the gauge flux on the 7-branes. Via the spectral cover description - which at first requires further refinements - chiral matter can be generated and the unified gauge group can be broken to the Standard Model. Ultimately, in this thesis an explicit unified model based on the gauge group SU(5) is constructed within the F-theory framework, such that an acceptable phenomenology and the observed three chiral matter generations are obtained. (orig.)

  7. What does it take for a specific prospect theory type household to engage in risky investment?

    OpenAIRE

    HLOUSKOVA, Jaroslava; Tsigaris, Panagiotis

    2012-01-01

    This research note examines the conditions that will induce a prospect theory type investor, whose reference level is set by 'playing it safe', to invest in a risky asset. The conditions indicate that this type of investor requires a large equity premium to invest in risky assets. However, once she does invest because of a large risk premium, she becomes aggressive and buys/sells until an externally imposed upper/lower bound is reached.

  8. Semi-quantification of endolymphatic size on MR imaging after intravenous injection of single-dose gadodiamide. Comparison between two types of processing strategies

    International Nuclear Information System (INIS)

    Many inner ear disorders, including Meniere's disease, are believed to be based on endolymphatic hydrops. We evaluated a newly proposed method for semi-quantification of endolymphatic size in patients with suspected endolymphatic hydrops that uses 2 kinds of processed magnetic resonance (MR) images. Twenty-four consecutive patients underwent heavily T2-weighted (hT2W) MR cisternography (MRC), hT2W 3-dimensional (3D) fluid-attenuated inversion recovery (FLAIR) with an inversion time of 2250 ms (positive perilymph image, PPI), and hT2W-3D-IR with an inversion time of 2050 ms (positive endolymph image, PEI) 4 hours after intravenous administration of single-dose gadolinium-based contrast material (IV-SD-GBCM). Two images were generated using 2 new methods to process PPI, PEI, and MRC. Three radiologists contoured the cochlea and vestibule on MRC, copied the regions of interest (ROIs) onto the 2 kinds of generated images, and semi-quantitatively measured the size of the endolymph for the cochlea and vestibule by setting a threshold pixel value. Each observer noted a strong linear correlation between the endolymphatic sizes measured on the 2 kinds of generated images, for both the cochlea and the vestibule. The Pearson correlation coefficients (r) were 0.783, 0.734, and 0.800 in the cochlea and 0.924, 0.930, and 0.933 in the vestibule (P<0.001 for all). In both the cochlea and vestibule, repeated-measures analysis of variance showed no statistically significant difference between observers. Use of the 2 kinds of images generated from MR images obtained 4 hours after IV-SD-GBCM might enable semi-quantification of endolymphatic size with little observer dependency. (author)

  9. Quantification of Endogenous Retinoids

    OpenAIRE

    Kane, Maureen A.; Napoli, Joseph L

    2010-01-01

    Numerous physiological processes require retinoids, including development, nervous system function, immune responsiveness, proliferation, differentiation, and all aspects of reproduction. Reliable retinoid quantification requires suitable handling and, in some cases, resolution of geometric isomers that have different biological activities. Here we describe procedures for reliable and accurate quantification of retinoids, including detailed descriptions for handling retinoids, preparing stand...

  10. The central error of M. W. Evans ECE theory - a type mismatch

    CERN Document Server

    Bruhn, G W

    2006-01-01

    This note corrects an erroneous article by M.W. Evans on his GCUFT theory, which he took over into his GCUFT book. Due to Evans' bad habit of suppressing seemingly unimportant indices, type-mismatch errors occur that cannot be removed. In addition, some further errors of that article/book chapter are pointed out.

  11. Spontaneous supersymmetry breaking and instanton sum in 2D type IIA Superstring Theory

    International Nuclear Information System (INIS)

    We consider a double-well supersymmetric matrix model and its interpretation as a nonperturbative definition of two-dimensional type IIA superstring theory in the presence of a nontrivial Ramond-Ramond background. The interpretation is based on symmetries on both sides of the matrix model and the IIA string theory, and confirmed by direct comparison of various correlation functions. The full nonperturbative free energy of the matrix model in its double scaling limit is represented by the Tracy-Widom distribution in random matrix theory. We show that instanton contributions in the matrix model survive in the double scaling limit and trigger spontaneous supersymmetry breaking. It implies that the target-space supersymmetry is spontaneously broken due to nonperturbative effects in the IIA string theory

  12. E6(6) Exceptional Field Theory: Review and Embedding of Type IIB

    CERN Document Server

    Baguet, Arnaud; Samtleben, Henning

    2015-01-01

    We review E6(6) exceptional field theory with a particular emphasis on the embedding of type IIB supergravity, which is obtained by picking the GL(5) x SL(2) invariant solution of the section constraint. We work out the precise decomposition of the E6(6) covariant fields on the one hand and the Kaluza-Klein-like decomposition of type IIB supergravity on the other. Matching the symmetries, this allows us to establish the precise dictionary between both sets of fields. Finally, we establish on-shell equivalence. In particular, we show how the self-duality constraint for the four-form potential in type IIB is reconstructed from the duality relations in the off-shell formulation of the E6(6) exceptional field theory.

  13. A Novel Framework for Quantification of Supply Chain Risks

    OpenAIRE

    Qazi, Abroon; Quigley, John; Dickson, Alex

    2014-01-01

    Supply chain risk management is an active area of research, and there is a research gap in exploring established risk quantification techniques from other fields for application in the context of supply chain management. We have developed a novel framework for quantification of supply chain risks that integrates two techniques: Bayesian belief networks and game theory. A Bayesian belief network can capture interdependency between risk factors, and game theory can assess risks associated with confl...

  14. Generalized N=1 and N=2 structures in M-theory and type II orientifolds

    CERN Document Server

    Graña, Mariana

    2012-01-01

    We consider M-theory and type IIA reductions to four dimensions with N=2 and N=1 supersymmetry and discuss their interconnection. Our work is based on the framework of Exceptional Generalized Geometry (EGG), which extends the tangent bundle to include all symmetries in M-theory and type II string theory, covariantizing the local U-duality group E7. We describe general N=1 and N=2 reductions in terms of SU(7) and SU(6) structures on this bundle and thereby derive the effective four-dimensional N=1 and N=2 couplings, in particular we compute the Kahler and hyper-Kahler potentials as well as the triplet of Killing prepotentials (or the superpotential in the N=1 case). These structures and couplings can be described in terms of forms on an eight-dimensional tangent space where SL(8) contained in E7 acts, which might indicate a description in terms of an eight-dimensional internal space, similar to F-theory. We finally discuss an orbifold action in M-theory and its reduction to O6 orientifolds, and show how the pr...

  15. The structure of the R8 term in type IIB string theory

    International Nuclear Information System (INIS)

    Based on the structure of the on-shell linearized superspace of type IIB supergravity, we argue that there is a non-BPS 16-derivative interaction in the effective action of type IIB string theory of the form (t_8 t_8 R^4)^2, which we call the R^8 interaction. It lies in the same supermultiplet as the G^8 R^4 interaction. Using the Kawai–Lewellen–Tye relation, we analyze the structure of the tree-level eight-graviton scattering amplitude in the type IIB theory, which leads to the R^8 interaction at the linearized level. This involves an analysis of color-ordered multi-gluon disc amplitudes in the type I theory, which shows an intricate pole structure and transcendentality consistent with various other interactions. Considerations of S-duality show that the R^8 interaction receives non-analytic contributions in the string coupling at one and two loops. Apart from receiving perturbative contributions, we show that the R^8 interaction receives a non-vanishing contribution in the one D-instanton–anti-instanton background at leading order in the weak coupling expansion. (paper)

  16. Inflation of Bianchi type-VII_0 Universe with Dirac Field in Einstein-Cartan theory

    CERN Document Server

    Fang, Wei; Lu, Hui-Qing

    2010-01-01

    We discuss the Bianchi type-VII_0 cosmology with a Dirac field in Einstein-Cartan theory. We obtain the equations of the Dirac field and the gravitational field in Einstein-Cartan theory. We find a Bianchi type-VII_0 inflationary solution.

  17. Specimens: "most of" generic NPs in a contextually flexible type theory

    CERN Document Server

    Retoré, Christian

    2011-01-01

    This paper proposes to compute the meanings associated with sentences containing generic NPs corresponding to the "most of" generalized quantifier. We call these generics specimens, and they resemble stereotypes or prototypes in lexical semantics. The meanings are viewed as logical formulae that can thereafter be interpreted in your favorite models. We rather depart from the dominant Fregean single untyped universe and go for type theory, with hints from Hilbert's epsilon calculus and from medieval philosophy. Our type-theoretic analysis bears some resemblance to ongoing work in lexical semantics. Our model also applies to classical examples involving a class (or a generic element of this class) which is provided by the context. An outcome of this study is that, in the minimalism-contextualism debate, if one adopts a type-theoretical view, terms encode the purely semantic meaning component while their typing is pragmatically determined.

  18. Convex ordering and quantification of quantumness

    Science.gov (United States)

    Sperling, J.; Vogel, W.

    2015-06-01

    The characterization of physical systems requires a comprehensive understanding of quantum effects. One aspect is a proper quantification of the strength of such quantum phenomena. Here, a general convex ordering of quantum states will be introduced which is based on the algebraic definition of classical states. This definition resolves the ambiguity of the quantumness quantification using topological distance measures. Classical operations on quantum states will be considered to further generalize the ordering prescription. Our technique can be used for a natural and unambiguous quantification of general quantum properties whose classical reference has a convex structure. We apply this method to typical scenarios in quantum optics and quantum information theory to study measures which are based on the fundamental quantum superposition principle.

  19. Four types of coping with COPD-induced breathlessness in daily living: a grounded theory study

    DEFF Research Database (Denmark)

    Bastrup, Lene; Dahl, Ronald

    2013-01-01

    Coping with breathlessness is a complex and multidimensional challenge for people with chronic obstructive pulmonary disease (COPD) and involves interacting physiological, cognitive, affective, and psychosocial dimensions. The aim of this study was to explore how people with moderate to most severe COPD predominantly cope with breathlessness during daily living. We chose a multimodal grounded theory design that holds the opportunity to combine qualitative and quantitative data to capture and explain the multidimensional coping behaviour among people with COPD. The participants' main concern in coping with breathlessness appeared to be an endless striving to economise on resources in an effort to preserve their integrity. In this integrity-preserving process, four predominant coping types emerged and were labelled: 'Overrater', 'Challenger', 'Underrater', and 'Leveller'. Each coping type comprised distinctive physiological, cognitive, affective and psychosocial features constituting coping-type-specific indicators. In theory, four predominant coping types with distinct physiological, cognitive, affective and psychosocial properties are observed among people with COPD. The four coping types seem to constitute a coping trajectory. This hypothesis should be further tested in a longitudinal study.

  20. Some properties of the Cauchy-type integral for the Laplace vector fields theory

    International Nuclear Information System (INIS)

    We study the analog of the Cauchy-type integral for the Laplace vector fields theory in the case of a piece-wise Liapunov surface of integration, and we prove the Sokhotski-Plemelj theorem for it as well as the necessary and sufficient condition for the possibility to extend a given Hölder function from such a surface up to a Laplace vector field. A formula for the square of the singular Cauchy-type integral is given. The proofs of all these facts are based on intimate relations between Laplace vector fields and some versions of quaternionic analysis.

  1. WKB-type approximations in the theory of vacuum particle creation in strong fields

    CERN Document Server

    Smolyansky, S A; Panferov, A D; Prozorkevich, A V; Blaschke, D; Juchnowski, L

    2014-01-01

    Within the theory of vacuum creation of an $e^{+}e^{-}$ plasma in the strong electric fields acting in the focal spot of counter-propagating laser beams, we compare predictions based on different WKB-type approximations with results obtained in the framework of a strict kinetic approach. Such a comparison demonstrates a considerable divergence of the results. We analyse possible reasons for this observation and conclude that WKB-type approximations have an insufficient foundation for QED in strong nonstationary fields. The results obtained in this work on the basis of the kinetic approach are most optimistic for the observation of an $e^{+}e^{-}$ plasma in the range of optical and x-ray laser facilities. We also discuss the influence of unphysical features of non-adiabatic field models on the reliability of predictions of the kinetic theory.

  2. A sufficient condition for de Sitter vacua in type IIB string theory

    International Nuclear Information System (INIS)

    We derive a sufficient condition for realizing meta-stable de Sitter vacua with small positive cosmological constant within type IIB string theory flux compactifications with spontaneously broken supersymmetry. There are a number of 'lamp post' constructions of de Sitter vacua in type IIB string theory and supergravity. We show that one of them - the method of 'Kähler uplifting' by F-terms from an interplay between non-perturbative effects and the leading α'-correction - allows for a more general parametric understanding of the existence of de Sitter vacua. The result is a condition on the values of the flux-induced superpotential and the topological data of the Calabi-Yau compactification, which guarantees the existence of a meta-stable de Sitter vacuum if met. Our analysis explicitly includes the stabilization of all moduli, i.e. the Kähler, dilaton and complex structure moduli, by the interplay of the leading perturbative and non-perturbative effects at parametrically large volume. (orig.)

  3. Chiral resonant solitons in Chern–Simons theory and Broer–Kaup type new hydrodynamic systems

    International Nuclear Information System (INIS)

    Highlights: • We reduce chiral solitons in quantum potential from Chern–Simons theory of anyons. • We examine corresponding family of integrable resonant DNLS models. • Models admit the second non-Madelung hydrodynamic representation. • New hydrodynamic systems of the Broer–Kaup type are derived. • Soliton interactions in these systems show the resonant character. - Abstract: New Broer–Kaup type systems of hydrodynamic equations are derived from the derivative reaction–diffusion systems arising in SL(2, R) Kaup–Newell hierarchy, represented in the non-Madelung hydrodynamic form. A relation with the problem of chiral solitons in quantum potential as a dimensional reduction of 2 + 1 dimensional Chern–Simons theory for anyons is shown. By the Hirota bilinear method, soliton solutions are constructed and the resonant character of soliton interaction is found.

  4. Kaluza–Klein-type models of de Sitter and Poincaré gauge theories of gravity

    International Nuclear Information System (INIS)

    We construct Kaluza–Klein-type models with a de Sitter or Minkowski bundle in the de Sitter or Poincaré gauge theory of gravity, respectively. A manifestly gauge-invariant formalism has been given. The gravitational dynamics is constructed by the geometry of the de Sitter or Minkowski bundle and a global section, which plays an important role in the gauge-invariant formalism. Unlike the old Kaluza–Klein-type models of gauge theory of gravity, a suitable cosmological term can be obtained in the Lagrangian of our models, and, in the spin-current-free and torsion-free limit, the models reduce to general relativity with a corresponding cosmological term. We also generalize the results to the case with a variable cosmological term. (paper)

  5. Theory of Type-II Superconductors with Finite London Penetration Depth

    OpenAIRE

    Brandt, Ernst Helmut

    2001-01-01

    Previous continuum theory of type-II superconductors of various shapes with and without vortex pinning in an applied magnetic field and with transport current, is generalized to account for a finite London penetration depth lambda. This extension is particularly important at low inductions B, where the transition to the Meissner state is now described correctly, and for films with thickness comparable to or smaller than lambda. The finite width of the surface layer with scre...

  6. Type I Superconductivity upon Monopole Condensation in Seiberg-Witten Theory

    OpenAIRE

    Vainshtein, A.; Yung, A.

    2000-01-01

    We study the confinement scenario in N=2 supersymmetric SU(2) gauge theory near the monopole point upon breaking of N=2 supersymmetry by the adjoint matter mass term. We confirm claims made previously that the Abrikosov-Nielsen-Olesen string near the monopole point fails to be a BPS state once next-to-leading corrections in the adjoint mass parameter are taken into account. Our results show that type I superconductivity arises upon monopole condensation. This conclusion allows ...

  7. The quantum kinks of two-dimensional theories of the phi4 type

    International Nuclear Information System (INIS)

    We apply a recently established method of kink quantization to two-dimensional theories of the phi4 type possessing either a Z(4)×O(N) or an SU(N) symmetry and containing N complex scalar fields. Correlation functions of kinks are estimated through a 1/N expansion. Quantum kinks are interpolated by a local field whenever a broken symmetry phase occurs. These kinks are massless. This result holds up to all orders in 1/N. (orig.)

  8. Bosonic String and String Field Theory: a solution using Ultradistributions of Exponential Type

    OpenAIRE

    Bollini, C. G.; Rocca, M. C.

    2007-01-01

    In this paper we show that Ultradistributions of Exponential Type (UET) are appropriate for a consistent description of string and string field theories. A new Lagrangian for the closed string is obtained and shown to be equivalent to Nambu-Goto's Lagrangian. We also show that the string field is a linear superposition of UET of compact support (CUET). We evaluate the propagator for the string field, and calculate the convolution of two of them.

  9. Phenomenological theory of 1-3 type multiferroic composite thin film: thickness effect

    International Nuclear Information System (INIS)

    The effect of thickness on the para-ferro-phase transition temperatures, the spontaneous polarization and magnetization and hysteresis loops of 1-3 type multiferroic composite thin films was studied in the framework of Landau phenomenological theory. We took into account the electrostrictive and magnetostrictive effects, misfit strains induced from the interfaces of ferroelectric/ferromagnetic portions and film/substrate. Butterfly loops under external fields were also simulated.

  10. Asymptotic freedom and infrared behavior in the type 0 string approach to gauge theory

    International Nuclear Information System (INIS)

    In a recent paper we considered the type 0 string theories, obtained from the ten-dimensional closed NSR string by a GSO projection which excludes space-time fermions, and studied the low-energy dynamics of N coincident D-branes. This led us to conjecture that the four-dimensional SU(N) gauge theory coupled to six adjoint massless scalars is dual to a background of type 0 theory carrying N units of R-R 5-form flux and involving a tachyon condensate. The tachyon background leads to a 'soft breaking' of conformal invariance, and we derived the corresponding renormalization group equation. Minahan has subsequently found its asymptotic solution for weak coupling and showed that the coupling exhibits logarithmic flow, as expected from the asymptotic freedom of the dual gauge theory. We study this solution in more detail and identify the effect of the 2-loop beta function. We also demonstrate the existence of a fixed point at infinite coupling. Just like the fixed point at zero coupling, it is characterized by the AdS_5 × S^5 Einstein frame metric. We argue that there is a RG trajectory extending all the way from the zero coupling fixed point in the UV to the infinite coupling fixed point in the IR.

  11. Creep design of type 316LN stainless steel by K-R damage theory

    International Nuclear Information System (INIS)

    The Kachanov-Rabotnov (K-R) creep damage theory was reviewed and applied to design a creep curve for type 316LN stainless steel. The seven coefficients used in the theory, i.e., A, B, λ, m, χ, r, and q, were determined, and their physical meanings were analyzed. In order to quantify the damage parameter (ω), the cavity amount was measured in crept specimens taken from interrupted creep tests with time variation, and the amount was then reflected in the K-R damage equations. Coefficient λ, which is regarded as a creep tolerance feature of a material, increased with creep strain. The master curve with λ=2.8 coincided well with the experimental one over the full lifetime. The relationship between the damage parameter and the life fraction matched the theory for the exponent value r=24. It is concluded that the K-R damage equation is reliable as a modelling equation for type 316LN stainless steel. Coefficient data obtained from type 316LN stainless steel can be utilized for life prediction of operating material. (author)
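
    A hedged numerical sketch of the damage/life-fraction relation referred to above, using the textbook closed-form K-R solution omega = 1 - (1 - t/t_r)^(1/(1+r)) with the exponent value r = 24 quoted in the record (the closed form is the standard K-R result, not code from the paper):

        import numpy as np

        def kr_damage(life_fraction, r=24.0):
            """Damage parameter omega as a function of life fraction t/t_r."""
            f = np.asarray(life_fraction, dtype=float)
            return 1.0 - (1.0 - f) ** (1.0 / (1.0 + r))

        # Damage stays small for most of the life and rises sharply near rupture:
        for f in (0.5, 0.9, 0.99, 0.999):
            print(f, round(float(kr_damage(f)), 3))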

  12. Coordinated encoding between cell types in the retina: insights from the theory of phase transitions

    Science.gov (United States)

    Sharpee, Tatyana

    2015-03-01

    In this talk I will describe how the emergence of some types of neurons in the brain can be quantitatively described by the theory of transitions between different phases of matter. The two key parameters that control the separation of neurons into subclasses are the mean and standard deviation of noise levels among neurons in the population. The mean noise level plays the role of temperature in the classic theory of phase transitions, whereas the standard deviation is equivalent to pressure, in the case of liquid-gas transitions, or to magnetic field for magnetic transitions. Our results account for properties of two recently discovered types of salamander OFF retinal ganglion cells, as well as the absence of multiple types of ON cells. We further show that, across visual stimulus contrasts, retinal circuits continued to operate near the critical point whose quantitative characteristics matched those expected near a liquid-gas critical point and described by the nearest-neighbor Ising model in three dimensions. Because the retina needs to operate under changing stimulus conditions, the observed parameters of cell types corresponded to metastable states in the region between the spinodal line and the line describing maximally informative solutions. Such properties of neural circuits can maximize information transmission in a given environment while retaining the ability to quickly adapt to a new environment. NSF CAREER award 1254123 and NIH R01EY019493

  13. Localized Modes in Type II and Heterotic Singular Calabi-Yau Conformal Field Theories

    CERN Document Server

    Mizoguchi, Shun'ya

    2008-01-01

    We consider type II and heterotic string compactifications on an isolated singularity in the noncompact Gepner model approach. The conifold-type ADE noncompact Calabi-Yau threefolds, as well as the ALE twofolds, are modeled by a tensor product of the SL(2,R)/U(1) Kazama-Suzuki model and an N=2 minimal model. Based on the string partition functions on these internal Calabi-Yaus previously obtained by Eguchi and Sugawara, we construct new modular invariant, space-time supersymmetric partition functions for both type II and heterotic string theories, where the GSO projection is performed before the continuous and discrete state contributions are separated. We investigate in detail the massless spectra of the localized modes. In particular, we propose an interesting three generation model, in which each flavor is in the 27+1 representation of E6 and localized on a four-dimensional space-time residing at the tip of the cigar.

  14. A Density Functional Theory Study of Doped Tin Monoxide as a Transparent p-type Semiconductor

    KAUST Repository

    Bianchi Granato, Danilo

    2012-05-01

    In the pursuit of enhancing the electronic properties of transparent p-type semiconductors, this work uses density functional theory to study the effects of doping tin monoxide with nitrogen, antimony, yttrium and lanthanum. An overview of the theoretical concepts and a detailed description of the methods employed are given, including a discussion of the correction scheme for charged defects proposed by Freysoldt and others [Freysoldt 2009]. Analysis of the formation energies of the defects points out that nitrogen substitutes an oxygen atom and does not provide charge carriers. On the other hand, antimony, yttrium, and lanthanum substitute a tin atom and donate n-type carriers. Study of the band structure and density of states indicates that yttrium and lanthanum improve the hole mobility. The present results are in good agreement with available experimental works and help to improve the understanding of how to engineer transparent p-type materials with higher hole mobilities.

  15. Type and structure of time-like singularities in general relativity theory

    International Nuclear Information System (INIS)

    A method is proposed which permits one to determine whether a time-like singularity refers to a point, linear or some other type of gravitational field singularity. It is shown that in the general theory of relativity an altogether different type of source may be possible which does not have any analogs in finite-curvature space. An analysis is made of a number of solutions containing time-like singularities whose type varies depending on the sign of the functions involved in the solutions. The form of the solution near simple linear sources and of generalized anisotropic solutions is determined more accurately. The space-time described by the γ-metric is investigated completely, and the form of the metric near the ends and at singular points of linear Weyl singularities is found.

  16. Canonical BF-type topological field theory and fractional statistics of strings

    International Nuclear Information System (INIS)

    We consider BF-type topological field theory coupled to non-dynamical particle and string sources on spacetime manifolds of the form R^1 × M^3, where M^3 is a 3-manifold without boundary. Canonical quantization of the theory is carried out in the Hamiltonian formalism and explicit solutions of the Schrödinger equation are obtained. We show that the Hilbert space is finite dimensional and the physical states carry a one-dimensional projective representation of the local gauge symmetries. When M^3 is homologically non-trivial the wavefunctions in addition carry a multi-dimensional projective representation, in terms of the linking matrix of the homology cycles of M^3, of the discrete group of large gauge transformations. The wavefunctions also carry a one-dimensional representation of the non-trivial linking of the particle trajectories and string surfaces in M^3. This topological field theory therefore provides a phenomenological generalization of anyons to (3+1) dimensions where the holonomies representing fractional statistics arise from the adiabatic transport of particles around strings. We also discuss a duality between large gauge transformations and these linking operations around the homology cycles of M^3, and show that this canonical quantum field theory provides novel quantum representations of the cohomology of M^3 and its associated motion group. ((orig.))

  17. Geometric approach to gauge theories of the Yang-Mills type

    International Nuclear Information System (INIS)

    An adequate language for the description of field theories of the Yang-Mills type is introduced. The mathematical methods necessary for a geometric description of gauge fields are presented at an elementary level. After a review of the main notions of differential geometry, it is shown in what sense a gauge potential is a connection in some fibre bundle space, and the gauge field is the curvature in this space. It is also shown how global aspects of the theory, for example boundary conditions, affect the bundle structure. Gauge transformations, the equations of motion, and the self-duality equations acquire a global character when they are defined as operations in the fibre bundle. The space of orbits, i.e. the set of gauge-nonequivalent potentials, is also determined, and it is shown why in the non-Abelian case there is no continuous gauge fixing.

  18. Bianchi Type-IX Magnetized Dark Energy Model in Saez-Ballester Theory of Gravitation

    Directory of Open Access Journals (Sweden)

    H. R. Ghate

    2014-03-01

    The Bianchi type-IX cosmological model with variable ω has been studied in the scalar-tensor theory of gravitation proposed by Saez and Ballester [Phys. Lett. A 113: 467, 1985], in the presence and absence of a magnetic field of energy density ρ_b. A special law of variation of Hubble's parameter proposed by Berman [Nuovo Cimento 74 B, 182, 1983] has been used to solve the field equations. The physical and kinematical properties of the model are also discussed.

  19. A Note on Marginally Stable Bound States in Type II String Theory

    OpenAIRE

    Sen, Ashoke

    1995-01-01

    Spectrum of elementary string states in type II string theory contains ultra-short multiplets that are marginally stable. $U$-duality transformation converts these states into bound states at threshold of $p$-branes carrying Ramond-Ramond charges, and wrapped around $p$-cycles of a torus. We propose a test for the existence of these marginally stable bound states. Using the recent results of Polchinski and of Witten, we argue that the spectrum of bound states of $p$-branes i...

  20. On the vacuum stability in the superrenormalized Yukawa-type theory

    International Nuclear Information System (INIS)

    The stability of the ground state and the possibility of the appearance of a phase transition in the superrenormalizable nonlocal Yukawa-type field theory are investigated. A variational estimate of the upper bound for the effective potential is obtained. It is shown that there exists a finite critical value for the boson-fermion coupling constant. The initial vacuum becomes unstable when this coupling constant exceeds the critical value. As a result, the system under consideration goes into the phase with a nonvanishing expectation value of the field. 17 refs.; 3 figs

  1. New Type of Hamiltonians Without Ultraviolet Divergence for Quantum Field Theories

    CERN Document Server

    Teufel, Stefan

    2015-01-01

    We propose a novel type of Hamiltonians for quantum field theories. They are mathematically well-defined (and in particular, ultraviolet finite) without any ultraviolet cut-off such as smearing out the particles over a nonzero radius; rather, the particles are assigned radius zero. We describe explicit examples of such Hamiltonians. Their definition, which is best expressed in the particle-position representation of the wave function, involves a novel type of boundary condition on the wave function, which we call an interior-boundary condition. The relevant configuration space is one of a variable number of particles, and the relevant boundary consists of the configurations with two or more particles at the same location. The interior-boundary condition relates the value (or derivative) of the wave function at a boundary point to the value of the wave function at an interior point (here, in a sector of configuration space corresponding to a lesser number of particles).

  2. T-dualization of type IIB superstring theory in double space

    CERN Document Server

    Nikolić, Bojan

    2015-01-01

    In this article we offer a new interpretation of the T-dualization procedure of type IIB superstring theory in the double space framework. We use the ghost-free action of type IIB superstring in pure spinor formulation in the approximation of constant background fields up to quadratic terms. T-dualization along any subset of the initial coordinates, $x^a$, is equivalent to the permutation of this subset with the subset of the corresponding T-dual coordinates, $y_a$, in the double space coordinate $Z^M=(x^\mu,y_\mu)$. Demanding that the T-dual transformation law after the exchange $x^a\leftrightarrow y_a$ has the same form as the initial one, we obtain the T-dual NS-NS and NS-R background fields. The T-dual R-R field strength is determined up to one arbitrary constant under some assumptions.

  3. On the effective theory of type II string compactifications on nilmanifolds and coset spaces

    International Nuclear Information System (INIS)

    In this thesis we analyzed a large number of type IIA strict SU(3)-structure compactifications with fluxes and O6/D6-sources, as well as type IIB static SU(2)-structure compactifications with fluxes and O5/O7-sources. Restricting to structures and fluxes that are constant in the basis of left-invariant one-forms, these models are tractable enough to allow for an explicit derivation of the four-dimensional low-energy effective theory. The six-dimensional compact manifolds we studied in this thesis are nilmanifolds based on nilpotent Lie-algebras, and, on the other hand, coset spaces based on semisimple and U(1)-groups, which admit a left-invariant strict SU(3)- or static SU(2)-structure. In particular, from the set of 34 distinct nilmanifolds we identified two nilmanifolds, the torus and the Iwasawa manifold, that allow for an AdS4, N = 1 type IIA strict SU(3)-structure solution and one nilmanifold allowing for an AdS4, N = 1 type IIB static SU(2)-structure solution. From the set of all the possible six-dimensional coset spaces, we identified seven coset spaces suitable for strict SU(3)-structure compactifications, four of which also allow for a static SU(2)-structure compactification. For all these models, we calculated the four-dimensional low-energy effective theory using N = 1 supergravity techniques. In order to write down the most general four-dimensional effective action, we also studied how to classify the different disconnected ''bubbles'' in moduli space. (orig.)

  4. On the effective theory of type II string compactifications on nilmanifolds and coset spaces

    Energy Technology Data Exchange (ETDEWEB)

    Caviezel, Claudio

    2009-07-30

    In this thesis we analyzed a large number of type IIA strict SU(3)-structure compactifications with fluxes and O6/D6-sources, as well as type IIB static SU(2)-structure compactifications with fluxes and O5/O7-sources. Restricting to structures and fluxes that are constant in the basis of left-invariant one-forms, these models are tractable enough to allow for an explicit derivation of the four-dimensional low-energy effective theory. The six-dimensional compact manifolds we studied in this thesis are nilmanifolds based on nilpotent Lie-algebras, and, on the other hand, coset spaces based on semisimple and U(1)-groups, which admit a left-invariant strict SU(3)- or static SU(2)-structure. In particular, from the set of 34 distinct nilmanifolds we identified two nilmanifolds, the torus and the Iwasawa manifold, that allow for an AdS_4, N = 1 type IIA strict SU(3)-structure solution and one nilmanifold allowing for an AdS_4, N = 1 type IIB static SU(2)-structure solution. From the set of all the possible six-dimensional coset spaces, we identified seven coset spaces suitable for strict SU(3)-structure compactifications, four of which also allow for a static SU(2)-structure compactification. For all these models, we calculated the four-dimensional low-energy effective theory using N = 1 supergravity techniques. In order to write down the most general four-dimensional effective action, we also studied how to classify the different disconnected ''bubbles'' in moduli space. (orig.)

  5. Evaluation of the VACUTAINER PPT Plasma Preparation Tube for Use with the Bayer VERSANT Assay for Quantification of Human Immunodeficiency Virus Type 1 RNA

    OpenAIRE

    Elbeik, Tarek; Nassos, Patricia; Kipnis, Patricia; Haller, Barbara; Ng, Valerie L.

    2005-01-01

    Separation and storage of plasma within 2 h of phlebotomy is required for the VACUTAINER PPT Plasma Preparation Tube (PPT) versus 4 h for the predecessor VACUTAINER EDTA tube for human immunodeficiency virus type 1 (HIV-1) viral load (HIVL) testing by the VERSANT HIV-1 RNA 3.0 assay (branched DNA). The 2-h limit for PPT imposes time constraints for handling and transporting to the testing laboratory. This study compares HIVL reproducibility from matched blood in EDTA tubes and PPTs and betwee...

  6. Evaluation of the performance of 57 Japanese participating laboratories by two types of z-scores in proficiency test for the quantification of pesticide residues in brown rice.

    Science.gov (United States)

    Otake, Takamitsu; Yarita, Takashi; Aoyagi, Yoshie; Numata, Masahiko; Takatsu, Akiko

    2014-11-01

    A proficiency test for the analysis of pesticide residues in brown rice was carried out to support the upgrading of the analytical skills of participating laboratories. Brown rice containing three target pesticides (etofenprox, fenitrothion, and isoprothiolane) was used as the test sample. The test samples were distributed to the 57 participants and analyzed by appropriate analytical methods chosen by each participant. There was no significant difference among the values reported for different types of analytical method. The analytical results obtained by the National Metrology Institute of Japan (NMIJ) were 3 % to 10 % greater than those obtained by the participants. The results reported by the participants were evaluated using two types of z-scores: one based on the consensus values calculated from the analytical results of the participants, and the other based on the reference values obtained by NMIJ with high reliability. Acceptable z-scores based on the consensus values and the NMIJ reference values were achieved by 87 % to 89 % and 79 % to 94 % of the participants, respectively. PMID:25258285
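
    A minimal sketch of the two z-score variants described above (generic proficiency-testing formulas; the variable names and the assessment spread sigma_hat are illustrative, not taken from the paper):

        import statistics

        def z_scores(reported, participant_values, reference_value, sigma_hat):
            """z against the participant consensus and against the reference value."""
            consensus = statistics.median(participant_values)
            return ((reported - consensus) / sigma_hat,
                    (reported - reference_value) / sigma_hat)

        # Conventionally |z| <= 2 is acceptable and |z| >= 3 unacceptable.
        print(z_scores(0.102, [0.095, 0.100, 0.104, 0.110], 0.108, 0.005))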

  7. Canonical BF-type topological field theory and fractional statistics of strings

    CERN Document Server

    Bergeron, Mario; Semenoff, Gordon W.; Szabó, Richard J.

    1994-01-01

    We consider BF-type topological field theory coupled to non-dynamical particle and string sources on spacetime manifolds of the form R^1 × M^3, where M^3 is a 3-manifold without boundary. Canonical quantization of the theory is carried out in the Hamiltonian formalism and explicit solutions of the Schrödinger equation are obtained. We show that the Hilbert space is finite dimensional and the physical states carry a one-dimensional projective representation of the local gauge symmetries. When M^3 is homologically non-trivial the wavefunctions in addition carry a multi-dimensional projective representation, in terms of the linking matrix of the homology cycles of M^3, of the discrete group of large gauge transformations. The wavefunctions also carry a one-dimensional representation of the non-trivial linking of the particle trajectories and string surfaces in M^3. This topological field theory therefore provides a phenomenological generalization of anyons to (3 + 1) dimensions where the holonomies repres...

  8. The early life origin theory in the development of cardiovascular disease and type 2 diabetes.

    Science.gov (United States)

    Lindblom, Runa; Ververis, Katherine; Tortorella, Stephanie M; Karagiannis, Tom C

    2015-04-01

    Life expectancy has been examined from a variety of perspectives in recent history. Epidemiology is one perspective which examines causes of morbidity and mortality at the population level. Over the past few hundred years there have been dramatic shifts in the major causes of death and expected life length. These shifts have been inconsistent across time and space, with vast inequalities observed between population groups. Currently in focus is the challenge of rising non-communicable diseases (NCD), such as cardiovascular disease and type 2 diabetes mellitus. In the search for methods to combat the rising incidence of these diseases, a number of new theories on the development of morbidity have arisen. A pertinent example is the hypothesis published by David Barker in 1995, which postulates the prenatal and early developmental origin of adult-onset disease and highlights the importance of the maternal environment. This theory has been subject to criticism; however, it has gradually gained acceptance. In addition, the relatively new field of epigenetics is contributing evidence in support of the theory. This review aims to explore the implications and limitations of the developmental origin hypothesis, via an historical perspective, in order to enhance understanding of the increasing incidence of NCDs and facilitate improvements in public health policy planning. PMID:25270249

  9. Study of the OSV Conjecture for 4D N=2 Extremal Black Holes in Type IIB String Theory

    OpenAIRE

    Omidi, Farzad

    2011-01-01

    In this survey we study the OSV conjecture for 4D N=2 extremal black holes of type IIB superstring theory. We apply T-duality to find the generalized prepotential of the low energy limit of this superstring theory up to one-loop order in the closed string coupling. On the other hand, we calculate the tree-level and one-loop free energies of B-model topological string theory. To compare them we will explicitly show that the OSV conjecture holds for type IIB N=2 extremal black ...

  10. Study of the OSV Conjecture for 4D N=2 Extremal Black Holes in Type IIB String Theory

    CERN Document Server

    Omidi, Farzad

    2011-01-01

    In this survey we study the OSV conjecture for 4D N=2 extremal black holes of type IIB superstring theory. We apply T-duality to find the generalized prepotential of the low energy limit of this superstring theory up to one-loop order in the closed string coupling. On the other hand, we calculate the tree-level and one-loop free energies of B-model topological string theory. To compare them we will explicitly show that the OSV conjecture holds for type IIB N=2 extremal black holes.

  11. Method of Moments for the Continuous Transition Between the Brillouin-Wigner-Type and Rayleigh-Schrödinger-Type Multireference Coupled Cluster Theories

    OpenAIRE

    Pittner, Jiri; Piecuch, Piotr

    2009-01-01

    We apply the method of moments to the multireference (MR) coupled cluster (CC) formalism representing the continuous transition between the Brillouin-Wigner-type and Rayleigh-Schrödinger-type theories based on the Jeziorski-Monkhorst wave function ansatz and derive the formula for the noniterative energy corrections to the corresponding MRCC energies that recover the exact, full configuration interaction energies in the general model space case, inclu...

  12. Detection and quantification of bovine papillomavirus type 2 in urinary bladders and lymph nodes in cases of Bovine Enzootic Hematuria from the endemic region of Azores.

    Science.gov (United States)

    Cota, João B; Peleteiro, Maria C; Petti, Lisa; Tavares, Luís; Duarte, Ana

    2015-07-01

    Bovine Enzootic Hematuria (BEH) is a disease with a severe impact on production indexes, characterized by the development of bovine urinary bladder tumors, particularly in the Azores archipelago. The purpose of this study was to investigate and quantify BPV2 tissue distribution in bovine urinary bladder tumors, normal bladders, and iliac lymph nodes of cattle from the Azores. A real-time PCR system targeting the L1 gene was developed and allowed for the specific detection of the virus. BPV2 DNA was detected in a large proportion of the samples tested, both from neoplastic and healthy tissues, indicating that this virus is very prevalent in the bovine population of the Azores. Moreover, all types of tissues tested were positive, confirming a wide viral distribution within the infected animal. Bovine cutaneous papillomas sampled from Portuguese mainland dairy cattle were used as controls. Viral load ranged between 2.2×10^4 copies/cell in the skin papillomas and 0.0002 copies/cell in the urinary bladder tumors from the Azores. This is the first report presenting quantitative data on BPV2 infection in bovine urinary bladder lesions from the Azores. This approach will provide a useful tool to evaluate the role of BPV2 not only in the pathogenesis of BEH but also in cell transformation mechanisms. PMID:26003566

  13. Investigation of the association of growth rate in grower-finishing pigs with the quantification of Lawsonia intracellularis and porcine circovirus type 2

    DEFF Research Database (Denmark)

    Johansen, Markku; Nielsen, MaiBritt

    2013-01-01

    As part of a prospective cohort study in four herds, a nested case-control study was carried out. Five slow-growing pigs (cases) and five fast-growing pigs (controls) out of 60 pigs were selected for euthanasia and laboratory examination at the end of the study in each herd. A total of 238 pigs, all approximately 12 weeks old, were included in the study during the first week in the grower-finisher barn. In each herd, approximately 60 pigs from four pens were individually ear-tagged. The pigs were weighed at the beginning of the study and at the end of the 6-8 week observation period. Clinical data, blood and faecal samples were serially collected from the 60 selected piglets every second week in the observation period. In the killed pigs, serum was examined for antibodies against Lawsonia intracellularis (LI) and porcine circovirus type 2 (PCV2), and in addition the PCV2 viral DNA content was quantified. In faeces, the quantity of LI cells/g faeces and the number of PCV2 copies/g faeces were measured by qPCR. The objective of the study was to examine whether growth rate in grower-finishing pigs is associated with the detection of LI and PCV2 infection or with clinical data. This study has shown that diarrhoea is a significant risk factor for low growth rate and that one log10 unit increase in LI load increases the odds ratio for a pig to have a low growth rate by 2.0 times. Gross lesions in the small intestine and an LI load > log10 6/g were significant risk factors for low growth. No association between PCV2 virus and low growth was found.
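
    To make the reported effect size concrete: in a logistic model, an odds ratio of 2.0 per log10 unit of LI load corresponds to a coefficient beta = ln(2). A hypothetical sketch, not the authors' model code:

        import math

        BETA = math.log(2.0)  # per log10 unit of LI load, from the reported OR of 2.0

        def odds_multiplier(delta_log10_li_load):
            """Multiplicative change in the odds of low growth rate."""
            return math.exp(BETA * delta_log10_li_load)

        print(odds_multiplier(1.0))  # 2.0: one log10 unit doubles the odds
        print(odds_multiplier(3.0))  # 8.0: three log10 units give 2**3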

  14. Evaluation of the VACUTAINER PPT Plasma Preparation Tube for use with the Bayer VERSANT assay for quantification of human immunodeficiency virus type 1 RNA.

    Science.gov (United States)

    Elbeik, Tarek; Nassos, Patricia; Kipnis, Patricia; Haller, Barbara; Ng, Valerie L

    2005-08-01

    Separation and storage of plasma within 2 h of phlebotomy is required for the VACUTAINER PPT Plasma Preparation Tube (PPT) versus 4 h for the predecessor VACUTAINER EDTA tube for human immunodeficiency virus type 1 (HIV-1) viral load (HIVL) testing by the VERSANT HIV-1 RNA 3.0 assay (branched DNA). The 2-h limit for PPT imposes time constraints for handling and transporting to the testing laboratory. This study compares HIVL reproducibility from matched blood in EDTA tubes and PPTs and between PPT pairs following processing within 4 h of phlebotomy, stability of plasma HIV-1 RNA at 24- and 72-h room temperature storage in the tube, and comparative labor and supply requirements. Blood from 159 patients was collected in paired tubes (EDTA/PPT or PPT/PPT): 86 paired EDTA tubes and PPTs were processed 4 h following phlebotomy and their HIVLs were compared, 42 paired PPT/PPT pairs were analyzed for intertube HIVL reproducibility, and 31 PPT/PPT pairs were analyzed for HIV-1 RNA stability by HIVL. Labor and supply requirements were compared between PPT and EDTA tubes. PPTs produce results equivalent to standard EDTA tube results when processed 4 h after phlebotomy. PPT intertube analyte results are reproducible. An average decrease of 13% and 37% in HIVL was observed in PPT plasma after 24 and 72 h of room temperature storage, respectively; thus, plasma can be stored at room temperature up to 24 h in the original tube. PPTs offer labor and supply savings over EDTA tubes. PMID:16081908

  15. Investigation of the association of growth rate in grower-finishing pigs with the quantification of Lawsonia intracellularis and porcine circovirus type 2.

    Science.gov (United States)

    Johansen, Markku; Nielsen, Maibritt; Dahl, Jan; Svensmark, Birgitta; Bækbo, Poul; Kristensen, Charlotte Sonne; Hjulsager, Charlotte Kristiane; Jensen, Tim K; Ståhl, Marie; Larsen, Lars E; Angen, Oystein

    2013-01-01

    As part of a prospective cohort study in four herds, a nested case-control study was carried out. Five slow-growing pigs (cases) and five fast-growing pigs (controls) out of 60 pigs were selected for euthanasia and laboratory examination at the end of the study in each herd. A total of 238 pigs, all approximately 12 weeks old, were included in the study during the first week in the grower-finisher barn. In each herd, approximately 60 pigs from four pens were individually ear-tagged. The pigs were weighed at the beginning of the study and at the end of the 6-8 week observation period. Clinical data, blood and faecal samples were serially collected from the 60 selected piglets every second week in the observation period. In the killed pigs, serum was examined for antibodies against Lawsonia intracellularis (LI) and porcine circovirus type 2 (PCV2), and in addition the PCV2 viral DNA content was quantified. In faeces, the quantity of LI cells/g faeces and the number of PCV2 copies/g faeces were measured by qPCR. The objective of the study was to examine whether growth rate in grower-finishing pigs is associated with the detection of LI and PCV2 infection or with clinical data. This study has shown that diarrhoea is a significant risk factor for low growth rate and that one log10 unit increase in LI load increases the odds ratio for a pig to have a low growth rate by 2.0 times. Gross lesions in the small intestine and an LI load > log10 6/g were significant risk factors for low growth. No association between PCV2 virus and low growth was found. PMID:22854321

  16. Understanding physical activity intentions among French Canadians with type 2 diabetes: an extension of Ajzen's theory of planned behaviour

    OpenAIRE

    Godin Gaston; Boudreau François

    2009-01-01

    Background Regular physical activity is considered a cornerstone for managing type 2 diabetes. However, in Canada, most individuals with type 2 diabetes do not meet national physical activity recommendations. When designing a theory-based intervention, one should first determine the key determinants of physical activity for this population. Unfortunately, there is a lack of information on this aspect among adults with type 2 diabetes. The purpose of this cross-sectional study is to f...

  17. The use of quantitative PCR for identification and quantification of Brachyspira pilosicoli, Lawsonia intracellularis and Escherichia coli fimbrial types F4 and F18 in pig feces

    DEFF Research Database (Denmark)

    Ståhl, Marie; Kokotovic, Branko

    2011-01-01

    Four quantitative PCR (qPCR) assays were evaluated for the quantitative detection of Brachyspira pilosicoli, Lawsonia intracellularis, and E. coli fimbrial types F4 and F18 in pig feces. Standard curves were based on feces spiked with the respective reference strains. The detection limits from the spiking experiments were 10^2 bacteria/g feces for Bpilo-qPCR and Laws-qPCR, and 10^3 CFU/g feces for F4-qPCR and F18-qPCR. The PCR efficiency for all four qPCR assays was between 0.91 and 1.01, with R^2 above 0.993. Standard curves, slopes and elevation, varied between assays and between measurements from pure DNA from reference strains and feces spiked with the respective strains. The linear ranges found for spiked fecal samples differed both from the linear ranges from pure culture of the reference strains and between the qPCR tests. The linear ranges were five log units for F4-qPCR and Laws-qPCR, six log units for F18-qPCR and three log units for Bpilo-qPCR in spiked feces. When measured on pure DNA from the reference strains used in the spiking experiments, the respective log ranges were seven units for Bpilo-qPCR, Laws-qPCR and F18-qPCR and six log units for F4-qPCR. This shows the importance of using specific standard curves, where each pathogen is analysed in the same matrix as the sample DNA. The qPCRs were compared to traditional bacteriological diagnostic methods and found to be more sensitive than cultivation for E. coli and B. pilosicoli. The qPCR assay for Lawsonia was also more sensitive than the previously used method due to improvements in DNA extraction. In addition, as samples were not analysed for all four pathogenic agents by traditional diagnostic methods, many samples were found positive for agents that were not expected on the basis of age and case history. The use of quantitative PCR tests for the diagnosis of enteric diseases provides new possibilities for veterinary diagnostics. The parallel simultaneous analysis for several bacteria in multi-qPCR and the determination of the quantities of the infectious agents increase the information obtained from the samples and the chance of obtaining a relevant diagnosis.
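
    A generic sketch of how a qPCR standard curve yields the amplification efficiency and the load of an unknown sample (standard formulas; the Cq data below are invented for illustration):

        import numpy as np

        log_q = np.array([2.0, 3.0, 4.0, 5.0, 6.0])    # log10(bacteria/g) of spiked standards
        cq = np.array([33.1, 29.8, 26.4, 23.1, 19.8])  # measured quantification cycles

        slope, intercept = np.polyfit(log_q, cq, 1)
        efficiency = 10.0 ** (-1.0 / slope) - 1.0      # ~1.0 means perfect doubling per cycle

        def quantify(cq_unknown):
            """Back-calculate bacteria/g feces from a measured Cq."""
            return 10.0 ** ((cq_unknown - intercept) / slope)

        print(round(efficiency, 2), quantify(28.0))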

  18. A New Type of Coupled Wave Theory Capable of Analytically Describing Diffraction in Polychromatic Gratings and Holograms

    International Nuclear Information System (INIS)

    A new type of coupled wave theory is described which is capable, in a very natural way, of analytically describing polychromatic gratings. In contrast to the well known and extremely successful coupled wave theory of Kogelnik, the new theory is based on a differential formulation of the process of Fresnel reflection within the grating. The fundamental coupled wave equations, which are an exact solution of Maxwell's equations for the case of the un-slanted reflection grating, can be analytically solved with minimal approximation. The equations may also be solved in a rotated frame of reference to provide useful formulae for the diffractive efficiency of the general polychromatic slanted grating in three dimensions. The new theory is compared with Kogelnik's theory where extremely good agreement is found for most cases. The theory has also been compared to a rigorous computational chain matrix simulation of the un-slanted grating with excellent agreement for cases typical to display holography. In contrast, Kogelnik's theory shows small discrepancies away from Bragg resonance. The new coupled wave theory may easily be extended to an N-coupled wave theory for the case of the multiplexed polychromatic grating and indeed for the purposes of analytically describing diffraction in the colour hologram. In the simple case of a monochromatic spatially-multiplexed grating at Bragg resonance the theory is in exact agreement with the predictions of conventional N-coupled wave theory.

  19. A New Type of Coupled Wave Theory Capable of Analytically Describing Diffraction in Polychromatic Gratings and Holograms

    Science.gov (United States)

    Brotherton-Ratcliffe, David

    2013-02-01

    A new type of coupled wave theory is described which is capable, in a very natural way, of analytically describing polychromatic gratings. In contrast to the well known and extremely successful coupled wave theory of Kogelnik, the new theory is based on a differential formulation of the process of Fresnel reflection within the grating. The fundamental coupled wave equations, which are an exact solution of Maxwell's equations for the case of the un-slanted reflection grating, can be analytically solved with minimal approximation. The equations may also be solved in a rotated frame of reference to provide useful formulae for the diffractive efficiency of the general polychromatic slanted grating in three dimensions. The new theory is compared with Kogelnik's theory where extremely good agreement is found for most cases. The theory has also been compared to a rigorous computational chain matrix simulation of the un-slanted grating with excellent agreement for cases typical to display holography. In contrast, Kogelnik's theory shows small discrepancies away from Bragg resonance. The new coupled wave theory may easily be extended to an N-coupled wave theory for the case of the multiplexed polychromatic grating and indeed for the purposes of analytically describing diffraction in the colour hologram. In the simple case of a monochromatic spatially-multiplexed grating at Bragg resonance the theory is in exact agreement with the predictions of conventional N-coupled wave theory.

  20. Validated method for phytohormone quantification in plants

    OpenAIRE

    Almeida Trapp, Marília; De Souza, Gezimar D.; Rodrigues-Filho, Edson; Boland, William; Mithöfer, Axel

    2014-01-01

    Phytohormones have long been known as important components of signaling cascades in plant development and in plant responses to various abiotic and biotic challenges. Quantifications of phytohormone levels in plants are typically carried out using GC- or LC-MS/MS systems, due to their high sensitivity and specificity and the fact that little sample preparation is needed. However, mass spectrometer-based analyses are often affected by the particular sample type (different matrices), extraction proc...

  1. Classification and quantification of leaf curvature

    OpenAIRE

    Liu, Zhongyuan; Jia, Liguo; Mao, Yanfei; He, Yuke

    2010-01-01

    Various mutants of Arabidopsis thaliana deficient in polarity, cell division, and auxin response are characterized by certain types of leaf curvature. However, comparison of curvature for clarification of gene function can be difficult without a quantitative measurement of curvature. Here, a novel method for classification and quantification of leaf curvature is reported. Twenty-two mutant alleles from Arabidopsis mutants and transgenic lines deficient in leaf flatness were selected. The muta...

  2. Adaptation of learning resources based on the MBTI theory of psychological types

    Directory of Open Access Journals (Sweden)

    Amel Behaz

    2012-01-01

    Today, the resources available on the web are increasing significantly. The motivation for the dissemination of knowledge and its acquisition by learners is central to learning. However, learners show differences in the ways of learning that suit them best. The objective of the work presented in this paper is to study how it is possible to integrate models from cognitive theories and ontologies for the adaptation of educational resources. The goal is to provide the system with the capability to reason over the descriptions obtained, in order to automatically adapt the resources to a learner according to his preferences. We rely on the MBTI (Myers-Briggs Type Indicator) model for the consideration of the learning styles of learners as a criterion for adaptation.
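
    A hypothetical sketch of such an adaptation rule: order candidate resources by the formats preferred for a learner's MBTI letters (the preference table and field names below are invented; the paper derives its adaptation from ontology-based reasoning):

        # Invented preference table: MBTI letter -> preferred resource formats.
        PREFERENCES = {
            "S": ["worked-example", "simulation"],  # Sensing: concrete material
            "N": ["concept-map", "overview-text"],  # Intuition: abstract material
        }

        def adapt(resources, mbti_type):
            """Order resources so formats preferred by the learner come first."""
            preferred = {fmt for letter in mbti_type
                         for fmt in PREFERENCES.get(letter, [])}
            return sorted(resources, key=lambda r: r["format"] not in preferred)

        lessons = [{"id": 1, "format": "overview-text"},
                   {"id": 2, "format": "worked-example"}]
        print(adapt(lessons, "ISTJ"))  # the worked example comes first for a Sensing learner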

  3. Natural inflation with and without modulations in type IIB string theory

    Science.gov (United States)

    Abe, Hiroyuki; Kobayashi, Tatsuo; Otsuka, Hajime

    2015-04-01

    We propose a mechanism for the natural inflation with and without modulation in the framework of type IIB string theory on toroidal orientifold or orbifold. We explicitly construct the stabilization potential of complex structure, dilaton and Kähler moduli, where one of the imaginary component of complex structure moduli becomes light which is identified as the inflaton. The inflaton potential is generated by the gaugino-condensation term which receives the one-loop threshold corrections determined by the field value of complex structure moduli and the axion decay constant of inflaton is enhanced by the inverse of one-loop factor. We also find the threshold corrections can also induce the modulations to the original scalar potential for the natural inflation. Depending on these modulations, we can predict several sizes of tensor-to-scalar ratio as well as the other cosmological observables reported by WMAP, Planck and/or BICEP2 collaborations.

  4. Natural inflation with and without modulations in type IIB string theory

    CERN Document Server

    Abe, Hiroyuki; Otsuka, Hajime

    2014-01-01

    We propose a mechanism for the natural inflation with and without modulation in the framework of type IIB string theory on toroidal orientifold or orbifold. We explicitly construct the stabilization potential of complex structure, dilaton and Kähler moduli, where one of the imaginary component of complex structure moduli becomes light which is identified as the inflaton. The inflaton potential is generated by the gaugino-condensation term which receives the one-loop threshold corrections determined by the field value of complex structure moduli and the axion decay constant of inflaton is enhanced by the inverse of one-loop factor. We also find the threshold corrections can also induce the modulations to the original scalar potential for the natural inflation. Depending on these modulations, we can predict several sizes of tensor-to-scalar ratio as well as the other cosmological observables reported by WMAP, Planck and/or BICEP2 collaborations.

  5. HANDWRITTEN SIGNATURE VERIFICATIONS USING ADAPTIVE RESONANCE THEORY TYPE-2 (ART-2) NET

    Directory of Open Access Journals (Sweden)

    Tirtharaj Dash

    2012-09-01

    Authorizing hand-written signatures has always been a challenge to prevent illegal transactions, especially when the forged and the original signatures are very 'similar-looking' in nature. In this paper, we aim to automate the forged-signature verification process, offline, using Adaptive Resonance Theory type-2 (ART-2), which has been implemented in the 'C' language using both sequential and parallel programming. The said network has been trained with the original signature and tested with twelve very similar-looking but forged signatures. The mismatch threshold is set as 5%; however, it can be set flexibly as per the requirement from case to case. In order to obtain the desired result, the vigilance parameter (ρ) and the cluster size (m) have been tuned by carefully conducted parametric studies. The accuracy of the ART-2 net has been computed as almost 100% with ρ = 0.97 and m = 20.
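
    A minimal ART-style vigilance test, sketched for illustration (this is not the authors' 'C' implementation, and the feature vectors are invented):

        import numpy as np

        def matches(prototype, candidate, rho=0.97):
            """ART-2-style match test on L2-normalized feature vectors.

            The candidate joins the prototype's cluster only if the normalized
            match reaches the vigilance rho; otherwise it is treated as a
            mismatch (here: a suspected forgery)."""
            p = prototype / np.linalg.norm(prototype)
            c = candidate / np.linalg.norm(candidate)
            return float(p @ c) >= rho

        genuine = np.array([0.8, 0.6, 0.9, 0.7])  # stored signature features
        probe = np.array([0.7, 0.5, 0.4, 0.9])    # features of a questioned signature
        print(matches(genuine, probe))            # False -> flagged as a forgery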

  6. The proteomics quantification dilemma.

    Science.gov (United States)

    Jungblut, Peter R

    2014-07-31

    Proteomics is dominated today by the protein expression discourse, which favors the bottom-up approach because of its high throughput and its high sensitivity. For quantification this procedure is misleading if a protein is present as more than one protein species in the sample to be analyzed. The protein speciation discourse considers this more realistic situation and requires top-down procedures, or at least a separation of the protein species in advance of identification and quantification. Today all of the top-down procedures are one order of magnitude less sensitive than the bottom-up ones. Increasing sensitivity and increasing throughput are major challenges for proteomics in the coming years. This article is part of a Special Issue entitled: 20 years of Proteomics in memory of Vitaliano Pallini. Guest Editors: Luca Bini, Juan J. Calvete, Natacha Turck, Denis Hochstrasser and Jean-Charles Sanchez. PMID:24681132

  7. Theory of carrier electronic structure and some related quantities of n-type SnTe

    International Nuclear Information System (INIS)

    We present a theory of the electronic structure calculation for SnTe, based on a k·π method (π being the momentum operator in the presence of the spin–orbit interaction). The calculation envisages a six-level basis for the energy states at the L-point of the Brillouin zone. The k·π Hamiltonians for the band-edge states are diagonalized exactly, and the far bands are treated using second-order perturbation theory. We obtain non-parabolic dispersion for the carriers. We calculate the Fermi energy, the density of states, the effective mass and the g-factors, using the cylindrical coordinate system for the k-space integrations wherever they occur. All these quantities are calculated and studied as functions of carrier concentration for n-type SnTe. We could not compare our carrier-concentration-dependent quantities with experimental parameters, due to the unavailability of data, but the extrapolated band-edge values of the effective masses and the g-factors agree fairly well with the values obtained by previous calculations. In view of the recent interest in carrier-induced magnetism in semiconductors, our values would be of interest to both theorists and experimentalists.
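
    As a toy illustration of the kind of non-parabolic dispersion such band-edge treatments produce, here is the textbook Kane form E(1 + E/Eg) = (hbar*k)^2/(2m*) solved in closed form (the effective mass and gap values are invented and are not SnTe parameters from the paper):

        import numpy as np

        HBAR = 1.054571817e-34   # J s
        M0 = 9.1093837015e-31    # electron mass, kg
        EV = 1.602176634e-19     # J per eV

        def kane_dispersion(k, m_eff=0.1 * M0, e_gap=0.3 * EV):
            """Energy E(k) satisfying E(1 + E/Eg) = (hbar k)^2 / (2 m*)."""
            gamma = (HBAR * k) ** 2 / (2.0 * m_eff)
            alpha = 1.0 / e_gap
            return (-1.0 + np.sqrt(1.0 + 4.0 * alpha * gamma)) / (2.0 * alpha)

        k = np.linspace(0.0, 1e9, 5)    # wavevectors in 1/m
        print(kane_dispersion(k) / EV)  # energies in eV, flattening at large k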

  8. Effective-mass theory of p-type heterostructures under transverse magnetic fields

    Science.gov (United States)

    Wu, G. Y.; Hung, K.-M.; Chen, C.-J.

    1992-07-01

    We present, within the transfer-matrix formalism, a second-order k.p effective-mass theory for p-type heterostructures under transverse magnetic fields (parallel to the layers). The Luttinger Hamiltonian of Γ8 states is employed to describe valence bands, with the coupling between heavy and light holes included. We expand the envelope functions in parabolic cylinder functions, and reduce the effective-mass equations from coupled differential equations to matrix equations. The matrices involved are of small dimensionality (typically 8×8), allowing for treating hole magnetic levels and magnetotunneling in a concise manner. The theory has been applied to a single-barrier structure and a double-barrier structure, both made of GaAs and Al_xGa_{1-x}As. In the former case, we present valence bands and wave functions, and, in the latter, we show hole transmission under transverse magnetic fields for various channels. The model presented here is well suited to magneto-optic- and magnetotransport-property calculations.

  9. Efficient quantification of non-Gaussian spin distributions

    CERN Document Server

    Dubost, B; Napolitano, M; Behbood, N; Sewell, R J; Mitchell, M W

    2011-01-01

    We study theoretically and experimentally the quantification of non-Gaussian distributions via non-destructive measurements. Using the theory of cumulants, their unbiased estimators, and the uncertainties of these estimators, we describe a quantification which is simultaneously efficient, unbiased by measurement noise, and suitable for hypothesis tests, e.g., to detect non-classical states. The theory is applied to cold $^{87}$Rb spin ensembles prepared in non-Gaussian states by optical pumping and measured by non-destructive Faraday rotation probing. We find an optimal use of measurement resources under realistic conditions, e.g., in atomic ensemble quantum memories.
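
    The cumulant machinery referenced above can be made concrete. The sketch below (ours, not the authors' code) computes the standard unbiased cumulant estimators, the k-statistics; since every cumulant beyond the second vanishes for a Gaussian, significantly nonzero k3 or k4 quantifies non-Gaussianity:

    ```python
    import numpy as np

    def k_statistics(x):
        """Unbiased estimators (k-statistics) of the first four cumulants,
        written in terms of central sample moments m_r."""
        x = np.asarray(x, dtype=float)
        n = len(x)
        m1 = x.mean()
        m2 = ((x - m1) ** 2).mean()
        m3 = ((x - m1) ** 3).mean()
        m4 = ((x - m1) ** 4).mean()
        k1 = m1                                   # mean
        k2 = n * m2 / (n - 1)                     # unbiased variance
        k3 = n ** 2 * m3 / ((n - 1) * (n - 2))    # third cumulant
        k4 = (n ** 2 * ((n + 1) * m4 - 3 * (n - 1) * m2 ** 2)
              / ((n - 1) * (n - 2) * (n - 3)))    # fourth cumulant
        return k1, k2, k3, k4

    # A Gaussian sample: k3 and k4 should be consistent with zero.
    print(k_statistics(np.random.default_rng(1).normal(size=10_000)))
    ```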

  10. Three-dimensional theory of quantum memories based on Λ-type atomic ensembles

    International Nuclear Information System (INIS)

    We develop a three-dimensional theory for quantum memories based on light storage in ensembles of Λ-type atoms, where two long-lived atomic ground states are employed. We consider light storage in an ensemble of finite spatial extent and we show that within the paraxial approximation the Fresnel number of the atomic ensemble and the optical depth are the only important physical parameters determining the quality of the quantum memory. We analyze the influence of these parameters on the storage of light followed by either forward or backward read-out from the quantum memory. We show that for small Fresnel numbers the forward memory provides higher efficiencies, whereas for large Fresnel numbers the backward memory is advantageous. The optimal light modes to store in the memory are presented together with the corresponding spin waves and outcoming light modes. We show that for high optical depths such Λ-type atomic ensembles allow for highly efficient backward and forward memories even for small Fresnel numbers F ≳ 0.1.

  11. Three-dimensional theory of quantum memories based on Λ-type atomic ensembles

    Energy Technology Data Exchange (ETDEWEB)

    Zeuthen, Emil; Grodecka-Grad, Anna; Soerensen, Anders S. [QUANTOP, Danish National Research Foundation Center for Quantum Optics, Niels Bohr Institute, University of Copenhagen, DK-2100 Copenhagen O (Denmark)

    2011-10-15

    We develop a three-dimensional theory for quantum memories based on light storage in ensembles of Λ-type atoms, where two long-lived atomic ground states are employed. We consider light storage in an ensemble of finite spatial extent and we show that within the paraxial approximation the Fresnel number of the atomic ensemble and the optical depth are the only important physical parameters determining the quality of the quantum memory. We analyze the influence of these parameters on the storage of light followed by either forward or backward read-out from the quantum memory. We show that for small Fresnel numbers the forward memory provides higher efficiencies, whereas for large Fresnel numbers the backward memory is advantageous. The optimal light modes to store in the memory are presented together with the corresponding spin waves and outcoming light modes. We show that for high optical depths such Λ-type atomic ensembles allow for highly efficient backward and forward memories even for small Fresnel numbers F ≳ 0.1.

  12. LRS Bianchi type-V cosmology with heat flow in scalar-tensor theory

    Scientific Electronic Library Online (English)

    Singh, C.P.

    2009-12-01

    Full Text Available In this paper we present a spatially homogeneous, locally rotationally symmetric (LRS) Bianchi type-V perfect fluid model with heat conduction in the scalar-tensor theory proposed by Saez and Ballester. The field equations are solved with and without heat conduction by using a law of variation for the mean Hubble parameter, which is related to the average scale factor of the metric and yields a constant value for the deceleration parameter. The law of variation for the mean Hubble parameter generates two types of cosmologies, one of power-law form and the second of exponential form. Using these two forms, singular and non-singular solutions are obtained with and without heat conduction. We observe that a constant value of the deceleration parameter gives a reasonable description of the different phases of the universe. We arrive at the conclusion that the universe decelerates for positive values of the deceleration parameter whereas it accelerates for negative ones. The physical constraints on the solutions of the field equations, and, in particular, the thermodynamical laws and energy conditions that govern such solutions, are discussed in some detail. The behavior of observationally important parameters like the expansion scalar, anisotropy parameter and shear scalar is considered in detail.

  13. Non-perturbative black holes in Type-IIA String Theory versus the No-Hair conjecture

    International Nuclear Information System (INIS)

    We obtain the first black hole solution of Type-IIA String Theory compactified on an arbitrary self-mirror Calabi–Yau manifold in the presence of non-perturbative quantum corrections. Remarkably enough, the solution involves multivalued functions, which could lead to a violation of the No-Hair conjecture. We discuss how String Theory forbids such a scenario. However, the possibility still remains open in the context of four-dimensional ungauged Supergravity. (paper)

  14. Quantification of Human T-lymphotropic virus type I (HTLV-I) provirus load in a rural West African population: no enhancement of human immunodeficiency virus type 2 pathogenesis, but HTLV-I provirus load relates to mortality

    DEFF Research Database (Denmark)

    Ariyoshi, K; Berry, N

    2003-01-01

    Human T-lymphotropic virus type I (HTLV-I) provirus load was examined in a cohort of a population in Guinea-Bissau among whom human immunodeficiency virus (HIV) type 2 is endemic. The geometric mean HIV-2 RNA load among HTLV-I-coinfected subjects was significantly lower than that in subjects infected with HIV-2 alone (212 vs. 724 copies/mL; P=.02). Adjusted for age, sex, and HIV status, the risk of death increased with HTLV-I provirus load; the mortality hazard ratio was 1.59 for each log10 increase in HTLV-I provirus copies (P=.038). There is no enhancing effect of HTLV-I coinfection on HIV-2 disease, but high HTLV-I provirus loads may contribute to mortality.

  15. Level 2 probabilistic event analyses and quantification

    International Nuclear Information System (INIS)

    In this paper an example of the quantification of a severe accident phenomenological event is given. The analysis performed to assess the probability that the debris released from the reactor vessel was in a coolable configuration in the lower drywell is presented, together with the assessment of the type of core/concrete attack that would occur. The evaluation of ex-vessel debris coolability by an event in the Simplified Boiling Water Reactor (SBWR) Containment Event Tree (CET), and a detailed Decomposition Event Tree (DET) developed to aid in the quantification of this CET event, are considered. The headings in the DET, selected to represent plant physical states (e.g., reactor vessel pressure at the time of vessel failure) and the uncertainties associated with the occurrence of critical physical phenomena (e.g., debris configuration in the lower drywell) considered important to assessing whether the debris was coolable or not coolable ex-vessel, are also discussed
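
    As a schematic illustration of how an event in such a containment event tree is quantified (the branch structure and numbers below are invented, not the study's CET/DET), each path frequency is the product of the initiating-event frequency and the conditional branch probabilities along the path:

    ```python
    # Hypothetical initiating frequency and conditional branch probabilities.
    init_freq = 1.0e-6       # per reactor-year, assumed
    p_high_pressure = 0.3    # vessel fails at high pressure, assumed
    p_coolable = 0.6         # debris lands in a coolable configuration, assumed

    def path_frequency(init, conditionals):
        """Frequency of one event-tree path: product along its branches."""
        freq = init
        for p in conditionals:
            freq *= p
        return freq

    # Path: high-pressure vessel failure AND non-coolable debris ex-vessel.
    f = path_frequency(init_freq, [p_high_pressure, 1.0 - p_coolable])
    print(f"non-coolable end state: {f:.2e} per reactor-year")
    ```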

  16. Chern class identities from tadpole matching in type IIB and F-theory

    CERN Document Server

    Aluffi, Paolo

    2009-01-01

    In light of Sen's weak coupling limit of F-theory as a type IIB orientifold, the compatibility of the tadpole conditions leads to a non-trivial identity relating the Euler characteristics of an elliptically fibered Calabi-Yau fourfold and of certain related surfaces. We present the physical argument leading to the identity, and a mathematical derivation of a Chern class identity which confirms it, after taking into account singularities of the relevant loci. This identity of Chern classes holds in arbitrary dimension, and for varieties that are not necessarily Calabi-Yau. Singularities are essential in both the physics and the mathematics arguments: the tadpole relation may be interpreted as an identity involving stringy invariants of a singular hypersurface, and corrections for the presence of pinch-points. The mathematical discussion is streamlined by the use of Chern-Schwartz-MacPherson classes of singular varieties. We also show how the main identity may be obtained by applying 'Verdier specialization' to suitable constructible functions.

  17. Chern class identities from tadpole matching in type IIB and F-theory

    International Nuclear Information System (INIS)

    In light of Sen's weak coupling limit of F-theory as a type IIB orientifold, the compatibility of the tadpole conditions leads to a non-trivial identity relating the Euler characteristics of an elliptically fibered Calabi-Yau fourfold and of certain related surfaces. We present the physical argument leading to the identity, and a mathematical derivation of a Chern class identity which confirms it, after taking into account singularities of the relevant loci. This identity of Chern classes holds in arbitrary dimension, and for varieties that are not necessarily Calabi-Yau. Singularities are essential in both the physics and the mathematics arguments: the tadpole relation may be interpreted as an identity involving stringy invariants of a singular hypersurface, and corrections for the presence of pinch-points. The mathematical discussion is streamlined by the use of Chern-Schwartz-MacPherson classes of singular varieties. We also show how the main identity may be obtained by applying 'Verdier specialization' to suitable constructible functions.

  18. LRS Bianchi type-I cosmological model in f(R, T) theory of gravity with Λ(T)

    Science.gov (United States)

    Sahoo, P. K.; Sivakumar, M.

    2015-05-01

    The locally rotationally symmetric (LRS) Bianchi type-I cosmological models have been investigated in f(R, T) theory of gravity, where R is the Ricci scalar and T is the trace of the energy-momentum tensor, for some choices of the functional f(R, T) = f1(R) + f2(T). The exact solutions of the field equations are obtained for the linearly varying deceleration parameter q(t) proposed by Akarsu and Dereli (2012). Keeping an eye on the accelerating nature of the universe in the present epoch, the dynamics and physical behaviour of the models have been discussed. It is interesting to note that in one of the models the universe ends with a Big Rip. By taking different functional forms for f2(T) we have investigated whether or not the Big Rip can be avoided. We found that the Big Rip situation cannot be avoided and may be inherent in the linearly varying deceleration parameter. We have also applied the Statefinder diagnostics to get the geometrical dynamics of the universe at different phases.

  19. f(T) theories from holographic dark energy models within Bianchi type I universe

    Science.gov (United States)

    Fayaz, V.; Hossienkhani, H.; Pasqua, A.; Amirabadi, M.; Ganji, M.

    2015-02-01

    Recently, the teleparallel Lagrangian density described by the torsion scalar T has been extended to a function of T. This f(T) modified teleparallel gravity has been proposed as a natural gravitational alternative to dark energy to explain the late-time acceleration of the universe. We consider a spatially homogeneous and anisotropic Bianchi type I universe in the context of f(T) gravity. The purpose of this work is to develop a reconstruction of the f(T) gravity model according to the holographic dark energy model. We have considered an action of the form T + g(T) + L_m, describing Einstein's gravity plus a function of the torsion scalar. In the framework of the said modified gravity theory, we have considered the equation of state of the holographic dark energy density. Subsequently, we have developed a reconstruction scheme for modified gravity with f(T) action. Finally, we have also studied the de Sitter and power-law solutions when the universe enters a phantom phase and shown that such solutions may exist for some f(T) solutions with the holographic and new agegraphic dark energy scenarios.

  20. From Peierls brackets to a generalized Moyal bracket for type-I gauge theories

    CERN Document Server

    Esposito, G; Esposito, Giampiero; Stornaiolo, Cosimo

    2006-01-01

    In the space-of-histories approach to gauge fields and their quantization, the Maxwell, Yang–Mills and gravitational fields are well known to share the property of being type-I theories, i.e. Lie brackets of the vector fields which leave the action functional invariant are linear combinations of such vector fields, with coefficients of linear combination given by structure constants. The corresponding gauge-field operator in the functional integral for the in-out amplitude is an invertible second-order differential operator. For such an operator, we consider advanced and retarded Green functions giving rise to a Peierls bracket among group-invariant functionals. Our Peierls bracket is a Poisson bracket on the space of all group-invariant functionals in two cases only: either the gauge-fixing is arbitrary but the gauge fields lie on the dynamical sub-space; or the gauge-fixing is a linear functional of gauge fields, which are generic points of the space of histories. In both cases, the resulting Peierls bracke...

  1. Isotropization of Bianchi-type cosmological solutions in Brans-Dicke theory

    CERN Document Server

    Chauvet, P; Chauvet, P; Cervantes-Cota, J L

    1995-01-01

    The cosmic, general analytic solutions of the Brans–Dicke theory for the flat space of homogeneous and isotropic models containing perfect, barotropic fluids are seen to belong to a wider class of solutions, which includes cosmological models with the open and the closed spaces of the Friedmann–Robertson–Walker metric, as well as solutions for models with homogeneous but anisotropic spaces corresponding to the Bianchi-type metric classification, when all these solutions are expressed in terms of reduced variables. The existence of such a class lies in the fact that the scalar field, φ, times a function of the mean scale factor or "volume element", a^3 = a_1 a_2 a_3, which depends on time and on the barotropic index of the equation of state used, can be written as a function of a "cosmic time" reduced in terms of another function of the mean scale factor, itself depending on the barotropic index but independent of the metrics employed here. This reduction procedure permits one to analyze if ...

  2. Accident sequence quantification with KIRAP

    International Nuclear Information System (INIS)

    The tasks of a probabilistic safety assessment (PSA) consist of the identification of initiating events, the construction of an event tree for each initiating event, the construction of fault trees for the event tree logic, the analysis of reliability data and, finally, accident sequence quantification. In a PSA, accident sequence quantification calculates the core damage frequency and performs importance and uncertainty analyses. Accident sequence quantification requires an understanding of the whole PSA model, because it has to combine all event tree and fault tree models, and it requires an efficient computer code because of the long computation times involved. The Advanced Research Group of the Korea Atomic Energy Research Institute (KAERI) has developed the PSA workstation KIRAP (Korea Integrated Reliability Analysis Code Package) for PSA work. This report describes the procedures for performing accident sequence quantification, how to use KIRAP's cut set generator, and how to perform accident sequence quantification with KIRAP. (author). 6 refs
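
    KIRAP's own algorithms are not reproduced in this record; as a generic illustration of the final quantification step, the sketch below applies the textbook rare-event approximation and the min-cut upper bound to hypothetical minimal cut sets:

    ```python
    # Each minimal cut set lists the failure probabilities of its basic
    # events; all numbers are hypothetical.
    cut_sets = [
        [1e-3, 2e-4],        # e.g. pump fails AND valve fails
        [5e-5],              # e.g. a common-cause failure
        [1e-3, 1e-3, 1e-2],  # a triple combination
    ]

    def cut_set_prob(cs):
        p = 1.0
        for q in cs:
            p *= q
        return p

    # Rare-event approximation: sum of cut-set probabilities.
    rare_event = sum(cut_set_prob(cs) for cs in cut_sets)

    # Min-cut upper bound: 1 - prod(1 - P(cut set)).
    mcub = 1.0
    for cs in cut_sets:
        mcub *= 1.0 - cut_set_prob(cs)
    mcub = 1.0 - mcub

    print(f"rare-event approx: {rare_event:.3e}, MCUB: {mcub:.3e}")
    ```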

  3. Quantification and Negation in Event Semantics

    Directory of Open Access Journals (Sweden)

    Lucas Champollion

    2010-12-01

    Full Text Available Recently, it has been claimed that event semantics does not go well together with quantification, especially if one rejects syntactic, LF-based approaches to quantifier scope. This paper shows that such fears are unfounded, by presenting a simple, variable-free framework which combines a Neo-Davidsonian event semantics with a type-shifting based account of quantifier scope. The main innovation is that the event variable is bound inside the verbal denotation, rather than at sentence level by existential closure. Quantifiers can then be interpreted in situ. The resulting framework combines the strengths of event semantics and type-shifting accounts of quantifiers and thus does not force the semanticist to posit either a default underlying word order or a syntactic LF-style level. It is therefore well suited for applications to languages where word order is free and quantifier scope is determined by surface order. As an additional benefit, the system leads to a straightforward account of negation, which has also been claimed to be problematic for event-based frameworks.

  4. Advances in type-2 fuzzy sets and systems theory and applications

    CERN Document Server

    Mendel, Jerry; Tahayori, Hooman

    2013-01-01

    This book explores recent developments in the theoretical foundations and novel applications of general and interval type-2 fuzzy sets and systems, including: algebraic properties of type-2 fuzzy sets, geometric-based definition of type-2 fuzzy set operators, generalizations of the continuous KM algorithm, adaptiveness and novelty of interval type-2 fuzzy logic controllers, relations between conceptual spaces and type-2 fuzzy sets, type-2 fuzzy logic systems versus perceptual computers; modeling human perception of real world concepts with type-2 fuzzy sets, different methods for generating membership functions of interval and general type-2 fuzzy sets, and applications of interval type-2 fuzzy sets to control, machine tooling, image processing and diet.  The applications demonstrate the appropriateness of using type-2 fuzzy sets and systems in real world problems that are characterized by different degrees of uncertainty.

  5. Understanding physical activity intentions among French Canadians with type 2 diabetes: an extension of Ajzen's theory of planned behaviour

    Directory of Open Access Journals (Sweden)

    Godin, Gaston

    2009-06-01

    Full Text Available Abstract Background: Regular physical activity is considered a cornerstone of managing type 2 diabetes. However, in Canada, most individuals with type 2 diabetes do not meet national physical activity recommendations. When designing a theory-based intervention, one should first determine the key determinants of physical activity for this population. Unfortunately, there is a lack of information on this aspect among adults with type 2 diabetes. The purpose of this cross-sectional study is to fill this gap using an extended version of Ajzen's Theory of Planned Behavior (TPB) as the reference framework. Methods: A total of 501 individuals with type 2 diabetes residing in the Province of Quebec (Canada) completed the study. Questionnaires were sent and returned by mail. Results: Multiple hierarchical regression analyses indicated that TPB variables explained 60% of the variance in intention. The addition of other psychosocial variables to the model added 7% of explained variance. The final model included perceived behavioral control (β = .38). Conclusion: The findings suggest that interventions aimed at individuals with type 2 diabetes should ensure that people have the necessary resources to overcome potential obstacles to behavioral performance. Interventions should also favor the development of feelings of personal responsibility to exercise and promote the advantages of exercising for individuals with type 2 diabetes.
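
    The two-step hierarchical regression reported above can be sketched with ordinary least squares; the data, variable names and effect sizes below are synthetic stand-ins, not the study's:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 501  # sample size as in the study; the data themselves are synthetic

    # Step 1 block: TPB variables; step 2 block: an extra psychosocial variable.
    attitude, norm, pbc = rng.normal(size=(3, n))
    moral_norm = rng.normal(size=n)
    intention = 0.5 * pbc + 0.3 * attitude + 0.2 * moral_norm + rng.normal(size=n)

    def r_squared(predictors, y):
        """R^2 of an OLS fit with intercept."""
        X = np.column_stack([np.ones(len(y))] + list(predictors))
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        return 1.0 - resid.var() / y.var()

    r2_tpb = r_squared([attitude, norm, pbc], intention)
    r2_full = r_squared([attitude, norm, pbc, moral_norm], intention)
    print(f"TPB block: R^2 = {r2_tpb:.2f}; additional block: +{r2_full - r2_tpb:.2f}")
    ```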

  6. Inflation and Singularity of a Bianchi Type-VII0 Universe with a Dirac Field in the Einstein–Cartan Theory

    Science.gov (United States)

    Huang, Zeng-Guang; Fang, Wei; Lu, Hui-Qing

    2011-08-01

    We discuss Bianchi type-VII0 cosmology with a Dirac field in the Einstein–Cartan (E-C) theory and obtain the equations of the Dirac and gravitational fields in the E-C theory. A Bianchi type-VII0 inflationary solution is found. When (3/16)S^2 - λ^2 > 0, the Universe may avoid singularity.

  7. Inflation and Singularity of a Bianchi Type-VII0 Universe with a Dirac Field in the Einstein–Cartan Theory

    International Nuclear Information System (INIS)

    We discuss Bianchi type-VII0 cosmology with a Dirac field in the Einstein–Cartan (E-C) theory and obtain the equations of the Dirac and gravitational fields in the E-C theory. A Bianchi type-VII0 inflationary solution is found. When (3/16)S^2 - λ^2 > 0, the Universe may avoid singularity. (geophysics, astronomy, and astrophysics)

  8. Stringy Unification of Type IIA and IIB Supergravities under N=2 D=10 Supersymmetric Double Field Theory

    OpenAIRE

    Jeon, Imtak; Lee, Kanghoon; Park, Jeong-hyuck; Suh, Yoonji

    2012-01-01

    To the full order in fermions, we construct D=10 type II supersymmetric double field theory. We spell out the precise N=2 supersymmetry transformation rules for the 32 supercharges. The constructed action unifies type IIA and IIB supergravities in a manifestly covariant manner with respect to O(10,10) T-duality and a pair of local Lorentz groups, Spin(1,9) × Spin(9,1), besides the usual general covariance of supergravities or the generalized diffeomorphism. While the theo...

  9. Training load quantification in triathlon

    OpenAIRE

    Roberto Cejuela Anta; Jonathan Esteve-Lanao

    2011-01-01

    There are different indices of training stress of varying complexity for quantifying training load. Examples include the training impulse (TRIMP), the session-RPE, Lucia's TRIMP and the Summated Zone Score. But triathlon, a combined sport in which there are interactions between the different segments, complicates the quantification of training. The aim of this paper is to review current methods of quantification, and to propose a scale to quantify the training load in triat...

  10. Holographic-Type Gravitation via Non-Differentiability in Weyl-Dirac Theory

    Directory of Open Access Journals (Sweden)

    Mihai Pricop

    2013-08-01

    Full Text Available In the Weyl-Dirac non-relativistic hydrodynamics approach, the non-linear interaction between the sub-quantum level and the particle gives non-differentiable properties to the space. Therefore, the movement trajectories are fractal curves, the dynamics are described by a complex speed field, and the equation of motion is identified with the geodesics of a fractal space, which corresponds to a non-linear Schrodinger equation. The real part of the complex speed field assures, through a quantification condition, the compatibility between the Weyl-Dirac non-relativistic hydrodynamic model and wave mechanics. The mean value of the fractal speed potential, identified with the Shannon informational energy, specifies, by a maximization principle, that the sub-quantum level "stores" and "transfers" the informational energy in the form of force. The wave-particle duality is achieved by means of cnoidal oscillation modes of the state density, the dominance of one of the characters, wave or particle, being put into correspondence with two flow regimes (non-quasi-autonomous and quasi-autonomous) of the Weyl-Dirac fluid. All these show a direct connection between the fractal structure of space and the holographic principle.

  11. A relation between D=10, D=26 and D=32: Analysing the spacetime content of string-type field theories

    International Nuclear Information System (INIS)

    A detailed analysis of the spacetime content of string-type field theories is presented. We rigorously explain the appearance of D=10 and D=26, and argue that they are naturally contained in D=32. Our results suggest a new way to the compactification of the D=10 fermionic string and possible new (fermionic) models for D=4, D=6 and D=18. Some connected aspects are also discussed. (orig.)

  12. N=2 quiver gauge theories on A-type ALE spaces

    OpenAIRE

    Bruzzo, Ugo; Sala, Francesco; Szabo, Richard J.

    2014-01-01

    We survey and compare recent approaches to the computation of the partition functions and correlators of chiral BPS observables in $\\mathcal{N}=2$ gauge theories on ALE spaces based on quiver varieties and the minimal resolution $X_k$ of the $A_{k-1}$ toric singularity $\\mathbb{C}^2/\\mathbb{Z}_k$, in light of their recently conjectured duality with two-dimensional coset conformal field theories. We review and elucidate the rigorous constructions of gauge theories for a parti...

  13. Spectral analysis of polynomial potentials and its relation with ABJ/M-type theories

    Energy Technology Data Exchange (ETDEWEB)

    Garcia del Moral, M.P., E-mail: garciamormaria@uniovi.e [Departamento de Fisica, Universidad de Oviedo, Calvo Sotelo 18, 33007 Oviedo (Spain); Martin, I., E-mail: isbeliam@usb.v [Departamento de Fisica, Universidad Simon Bolivar, Apartado 89000, Caracas 1080-A (Venezuela, Bolivarian Republic of); Navarro, L., E-mail: lnavarro@ma.usb.v [Departamento de Matematicas, Universidad Simon Bolivar, Apartado 89000, Caracas 1080-A (Venezuela, Bolivarian Republic of); Perez, A.J., E-mail: ajperez@ma.usb.v [Departamento de Matematicas, Universidad Simon Bolivar, Apartado 89000, Caracas 1080-A (Venezuela, Bolivarian Republic of); Restuccia, A., E-mail: arestu@usb.v [Departamento de Fisica, Universidad Simon Bolivar, Apartado 89000, Caracas 1080-A (Venezuela, Bolivarian Republic of)

    2010-11-01

    We obtain a general class of polynomial potentials for which the Schroedinger operator has a discrete spectrum. This class includes all the scalar potentials in membrane, 5-brane, p-brane, multiple-M2-brane, BLG and ABJM theories. We provide a proof of the discreteness of the spectrum of the associated Schroedinger operators. This is the first step towards analyzing BLG and ABJM supersymmetric theories from a non-perturbative point of view.

  14. Type and structure of timelike singularities in the general theory of relativity: from the gamma metric to the general solution

    International Nuclear Information System (INIS)

    A method is proposed that makes it possible to determine whether a timelike singularity corresponds to a point, linear, or other type of gravitational field source. It is shown that in the general theory of relativity it is also possible to have sources of a quite different type with no analogs in a space of finite curvature. An analysis is made of some well-known solutions containing timelike singularities whose type varies depending on the signs of the functions that occur in the solutions. The form of the solution near simple linear sources [W. Israel, Phys. Rev. D15, 935 (1977)] and generalized anisotropic solutions [S. L. Parnovsky, Physica (Utrecht) 104A, 210 (1980); E. M. Lifshitz and I. M. Khalatnikov, Sov. Phys. Usp. 6, 359 (1963)] is determined more accurately; the space-time described by the γ metric is completely investigated; and the form of the metric near the ends and singular points of linear Weyl singularities is found

  15. Probabilistic bounding analysis in the Quantification of Margins and Uncertainties

    International Nuclear Information System (INIS)

    The current challenge of nuclear weapon stockpile certification is to assess the reliability of complex, high-consequence, and aging systems without the benefit of full-system test data. In the absence of full-system testing, disparate kinds of information are used to inform certification assessments, such as archival data, experimental data on partial systems, data on related or similar systems, computer models and simulations, and expert knowledge. In some instances, data can be scarce and information incomplete. The challenge of Quantification of Margins and Uncertainties (QMU) is to develop a methodology to support decision-making in this informational context. Given the difficulty presented by mixed and incomplete information, we contend that the uncertainty representation for the QMU methodology should be expanded to include more general characterizations that reflect imperfect information. One type of generalized uncertainty representation, known as probability bounds analysis, constitutes the union of probability theory and interval analysis, where a class of distributions is defined by two bounding distributions. This has the advantage of rigorously bounding the uncertainty when inputs are imperfectly known. We argue for the inclusion of probability bounds analysis as one of many tools that are relevant for QMU and demonstrate its usefulness, as compared to other methods, in a reliability example with imperfect input information.
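
    A minimal sketch of probability bounds analysis, assuming (purely for illustration) that the only imperfectly known input is a normal mean confined to an interval; the two bounding CDFs then give rigorous bounds on any exceedance probability:

    ```python
    import math

    def normal_cdf(x, mu, sigma):
        return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

    # Imperfect knowledge: the mean is only known to lie in [mu_lo, mu_hi].
    mu_lo, mu_hi, sigma = 9.0, 11.0, 1.0

    def cdf_bounds(x):
        """Bounding CDFs of the p-box; every distribution in the class
        lies between these two curves."""
        lower = normal_cdf(x, mu_hi, sigma)  # pointwise smallest CDF value
        upper = normal_cdf(x, mu_lo, sigma)  # pointwise largest CDF value
        return lower, upper

    # Rigorous bounds on the exceedance probability P(X > 12):
    lo, up = cdf_bounds(12.0)
    print(f"P(X > 12) lies in [{1 - up:.4f}, {1 - lo:.4f}]")
    ```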

  16. Constraints on Nonlinear and Stochastic Growth Theories for Type 3 Solar Radio Bursts from the Corona to 1 AU

    Science.gov (United States)

    Cairns, Iver H.; Robinson, P. A.

    1998-01-01

    Existing, competing theories for coronal and interplanetary type III solar radio bursts appeal to one or more of modulational instability, electrostatic (ES) decay processes, or stochastic growth physics to preserve the electron beam, limit the levels of Langmuir-like waves driven by the beam, and produce wave spectra capable of coupling nonlinearly to generate the observed radio emission. Theoretical constraints exist on the wavenumbers and relative sizes of the wave bandwidth and nonlinear growth rate for which Langmuir waves are subject to modulational instability and the parametric and random phase versions of ES decay. A constraint also exists on whether stochastic growth theory (SGT) is appropriate. These constraints are evaluated here using the beam, plasma, and wave properties (1) observed in specific interplanetary type III sources, (2) predicted nominally for the corona, and (3) predicted at heliocentric distances greater than a few solar radii by power-law models based on interplanetary observations. It is found that the Langmuir waves driven directly by the beam have wavenumbers that are almost always too large for modulational instability but are appropriate to ES decay. Even for waves scattered to lower wavenumbers (by ES decay, for instance), the wave bandwidths are predicted to be too large and the nonlinear growth rates too small for modulational instability to occur for the specific interplanetary events studied or the great majority of Langmuir wave packets in type III sources at arbitrary heliocentric distances. Possible exceptions are for very rare, unusually intense, narrowband wave packets, predominantly close to the Sun, and for the front portion of very fast beams traveling through unusually dilute, cold solar wind plasmas. Similar arguments demonstrate that the ES decay should proceed almost always as a random phase process rather than a parametric process, with similar exceptions. These results imply that it is extremely rare for modulational instability or parametric decay to proceed in type III sources at any heliocentric distance: theories for type III bursts based on modulational instability or parametric decay are therefore not viable in general. In contrast, the constraint on SGT can be satisfied and random phase ES decay can proceed at all heliocentric distances under almost all circumstances. (The contrary circumstances involve unusually slow, broad beams moving through unusually hot regions of the Corona.) The analyses presented here strongly justify extending the existing SGT-based model for interplanetary type III bursts (which includes SGT physics, random phase ES decay, and specific electromagnetic emission mechanisms) into a general theory for type III bursts from the corona to beyond 1 AU. This extended theory enjoys strong theoretical support, explains the characteristics of specific interplanetary type III bursts very well, and can account for the detailed dynamic spectra of type III bursts from the lower corona and solar wind.

  17. Self-consistent nonperturbative theory: Treatment of colloidal-type interactions

    Science.gov (United States)

    Serrano-Illán, J.; Navascués, G.; Velasco, E.; Mederos, L.

    2003-07-01

    We generalize a recently proposed self-consistent nonperturbative theory for classical systems by introducing the effect of the interaction potential in the functional form of the correlation function. The theory may be relevant for colloidal systems characterized by interactions that can be expressed in terms of a hard core plus a short-ranged term, and it is applied to two- and three-dimensional systems with Yukawa interactions. The results for the correlation function are in very good agreement with simulations, which confirms the suitability of the functional form that we propose. The thermodynamic properties are also in fair agreement with the predictions obtained by simulation, and this agreement carries over to the complete phase diagram. We believe that the theory is capable of providing more reliable results than simulation in fluid regions of the phase diagram where signals of crystallization make it difficult to accurately obtain the location of the fluid-to-solid phase transition. The theoretical predictions remain accurate even at relatively low fluid densities, a region where the theory is not intended to perform well, and an explanation based on clustering effects is provided.

  18. On a generalization of renormalization group equations to quantum field theories of an arbitrary type

    International Nuclear Information System (INIS)

    A generalization of renormalization group equations to theories with arbitrary Lagrangians, including nonrenormalizable ones, is presented. In the framework of dimensional regularization these equations enable us to determine the coefficient functions of the higher poles starting from a simple pole, or generalized β-functions

  19. How Many Types of Thermodynamical Equilibrium are There: Relation to Information Theory and Holism

    CERN Document Server

    Koleva, M K

    2006-01-01

    A major revision of thermodynamics is made in order to provide a rigorous foundation for functional diversity of a holistic type. It turns out that the new approach ensures the reproducibility of information as well.

  20. Search for different links with the same Jones' type polynomials: Ideas from graph theory and statistical mechanics

    CERN Document Server

    Przytycki, J H

    1995-01-01

    We describe in this talk three methods of constructing different links with the same Jones-type invariant. All three can be thought of as generalizations of mutation. The first combines the satellite construction with mutation. The second uses the notion of a rotant, taken from graph theory; the third, invented by Jones, transplants into knot theory the idea of the Yang-Baxter equation with the spectral parameter (an idea employed by Baxter in the theory of solvable models in statistical mechanics). We extend the Jones result and relate it to Traczyk's work on rotors of links. We also show further applications of the Jones idea, e.g. to 3-string links in the solid torus. We stress the fact that ideas coming from various areas of mathematics (and theoretical physics) have been fruitfully used in knot theory, and vice versa. (This is the detailed version of the talk given at the Banach Center Colloquium, Warsaw, Poland, March 24, 1994: ``W poszukiwaniu nietrywialnego wezla z trywialnym wielomianem Jonesa: grafy i me...

  1. Session Types = Intersection Types + Union Types

    CERN Document Server

    Padovani, Luca

    2011-01-01

    We propose a semantically grounded theory of session types which relies on intersection and union types. We argue that intersection and union types are natural candidates for modeling branching points in session types and we show that the resulting theory overcomes some important defects of related behavioral theories. In particular, intersections and unions provide a native solution to the problem of computing joins and meets of session types. Also, the subtyping relation turns out to be a pre-congruence, while this is not always the case in related behavioral theories.

  2. Introduction to string theory

    International Nuclear Information System (INIS)

    Open and closed bosonic string theories are discussed in a classical framework, highlighting the physical interpretation of conformal symmetry and the Virasoro (1970) algebra. The quantization of bosonic strings is carried out within the old covariant operator formalism. This method is much less elegant and powerful than BRST quantization, but it quickly reveals the physical content of the quantum theory. The generalization to theories with fermionic degrees of freedom is introduced: the Neveu-Schwarz (1971) and Ramond (1971) models, their reduced (two-dimensional) supersymmetry, and the Gliozzi, Scherk and Olive (1977) projection, which leads to a theory with supersymmetry in the usual meaning of the term

  3. Cyclic uniaxial and biaxial hardening of type 304 stainless steel modeled by the viscoplasticity theory based on overstress

    Science.gov (United States)

    Yao, David; Krempl, Erhard

    1988-01-01

    The isotropic theory of viscoplasticity based on overstress does not use a yield surface or a loading and unloading criterion. The inelastic strain rate depends on the overstress, the difference between the stress and the equilibrium stress, and is assumed to be rate dependent. Special attention is paid to the modeling of elastic regions. For the modeling of cyclic hardening, such as that observed in annealed Type 304 stainless steel, an additional growth law for a scalar quantity which represents the rate-independent asymptotic value of the equilibrium stress is added. It is made to increase with inelastic deformation using a new scalar measure which differentiates between nonproportional and proportional loading. The theory is applied to correlate uniaxial data under two-step amplitude loading, including the effect of further hardening at the high amplitude, and proportional and nonproportional cyclic loadings. Results are compared with corresponding experiments.

  4. KK-monopoles and G-structures in M-theory/type IIA reductions

    Science.gov (United States)

    Danielsson, Ulf; Dibitetto, Giuseppe; Guarino, Adolfo

    2015-02-01

    We argue that M-theory/massive IIA backgrounds including KK-monopoles are suitably described in the language of G-structures and their intrinsic torsion. To this end, we study classes of minimal supergravity models that admit an interpretation as twisted reductions in which the twist parameters are not restricted to satisfy the Jacobi constraints ω ω = 0 required by an ordinary Scherk-Schwarz reduction. We first derive the correspondence between four-dimensional data and torsion classes of the internal space and, then, check the one-to-one correspondence between higher-dimensional and four-dimensional equations of motion. Remarkably, the whole construction holds regardless of the Jacobi constraints, thus shedding light upon the string/M-theory interpretation of (smeared) KK-monopoles.

  5. Chern-Simons and Born-Infeld gravity theories and Maxwell algebras type

    International Nuclear Information System (INIS)

    Recently it was shown that standard odd- and even-dimensional general relativity can be obtained from a (2n + 1)-dimensional Chern-Simons Lagrangian invariant under the B2n+1 algebra and from a (2n)-dimensional Born-Infeld Lagrangian invariant under a subalgebra LB2n+1, respectively. Very recently, it was shown that the generalized Inönü-Wigner contraction of the generalized AdS-Maxwell algebras provides Maxwell algebras of type Mm which correspond to the so-called Bm Lie algebras. In this article we report on a simple model that suggests a mechanism by which standard odd-dimensional general relativity may emerge as the weak coupling constant limit of a (2p + 1)-dimensional Chern-Simons Lagrangian invariant under the Maxwell algebra type M2m+1, if and only if m ≥ p. Similarly, we show that standard even-dimensional general relativity emerges as the weak coupling constant limit of a (2p)-dimensional Born-Infeld type Lagrangian invariant under a subalgebra LM2m of the Maxwell algebra type, if and only if m ≥ p. It is shown that when m < p, standard general relativity is not recovered, either for a (2p + 1)-dimensional Chern-Simons Lagrangian invariant under M2m+1 or for a (2p)-dimensional Born-Infeld type Lagrangian invariant under the LM2m algebra. (orig.)

  6. The double Mellin-Barnes type integrals and their applications to convolution theory

    CERN Document Server

    Hai, Nguyen Thanh

    1992-01-01

    This book presents new results in the theory of the double Mellin-Barnes integrals, popularly known as the general H-function of two variables. A general integral convolution is constructed by the authors; it contains the Laplace convolution as a particular case and possesses a factorization property for the one-dimensional H-transform. Many examples of convolutions for classical integral transforms are obtained, and they can be applied to the evaluation of series and integrals.

  7. Electrostatic field in superconductors IV: theory of Ginzburg-Landau type.

    Czech Academy of Sciences Publication Activity Database

    Lipavský, Pavel; Koláček, Jan

    2009-01-01

    Roč. 23, 20-21 (2009), s. 4505-4511. ISSN 0217-9792 R&D Projects: GA ČR GA202/04/0585; GA ČR GA202/05/0173; GA AV ČR IAA1010312 Institutional research plan: CEZ:AV0Z10100521 Keywords: superconductivity * Ginzburg-Landau theory Subject RIV: BM - Solid Matter Physics; Magnetism Impact factor: 0.408, year: 2009

  8. Electrostatic field in superconductors IV: theory of Ginzburg-Landau type.

    Czech Academy of Sciences Publication Activity Database

    Lipavský, P.; Koláček, Jan

    Singapore : World Scientific Publ. Co, 2010 - (Kusmartsev, F.), s. 581-587 ISBN 978-981-4289-14-6. - (24). [International Workshop on Condensed Matter Theories /32./. Loughborough (GB), 12.08.2008-19.08.2008] Institutional research plan: CEZ:AV0Z10100521 Keywords : superconductor * electric field Subject RIV: BM - Solid Matter Physics ; Magnetism http://eproceedings.worldscinet.com/9789814289153/9789814289153.shtml

  9. Classical Morse theory revisited I -- Backward $\\lambda$-Lemma and homotopy type

    OpenAIRE

    Weber, Joa

    2014-01-01

    We introduce a tool, dynamical thickening, which overcomes the infamous discontinuity of the gradient flow endpoint map near non-degenerate critical points. More precisely, we interpret the stable foliations of certain Conley pairs $(N,L)$, established in [4], as a \\emph{dynamical thickening of the stable manifold}. As a first application and to illustrate efficiency of the concept we reprove a fundamental theorem of classical Morse theory, Milnor's homotopical cell attachme...

  10. Adaptation of learning resources based on the MBTI theory of psychological types

    OpenAIRE

    Amel Behaz; Mahieddine Djoudi

    2012-01-01

    Today, the resources available on the web are increasing significantly. The motivation for the dissemination of knowledge and its acquisition by learners is central to learning. However, learners show differences in the ways of learning that suit them best. The objective of the work presented in this paper is to study how it is possible to integrate models from cognitive theories and ontologies for the adaptation of educational resources. The goal is to provide the system capabilities to c...

  11. Classification of Bianchi Type i Spacetimes According to Their Proper Teleparallel Homothetic Vector Fields in the Teleparallel Theory of Gravitation

    Science.gov (United States)

    Shabbir, Ghulam; Khan, Suhail

    In this paper we explore teleparallel homothetic vector fields in Bianchi type I spacetimes in the teleparallel theory of gravitation using a direct integration technique. It turns out that the dimensions of the teleparallel homothetic vector fields are 4, 5, 7 or 11, which are the same in number as in general relativity. In the cases of 4, 5 or 7, proper teleparallel homothetic vector fields exist for special choices of the spacetimes. In the case of 11 teleparallel homothetic vector fields all the torsion components are zero. The homothetic vector fields of general relativity are recovered in this case and the spacetime becomes Minkowski.

  12. Calculation of Fayet–Iliopoulos D-term in type I string theory revisited: T6/Z3 orbifold case

    International Nuclear Information System (INIS)

    The string one-loop computation of the Fayet–Iliopoulos D-term in type I string theory in the case of T6/Z3 orbifold compactification associated with annulus (planar) and the Möbius strip string worldsheet diagrams is reexamined. The mass extracted from the sum of these amplitudes through a limiting procedure is found to be non-vanishing, which is contrary to the earlier computation. The sum can be made finite by a rescaling of the modular parameter in the closed string channel

  13. BCS-type mean-field theory for t-J model in su(2|1) superalgebra representation

    OpenAIRE

    Kochetov, Evgueny; Mierzejewski, Marcin

    2000-01-01

    A simple version of the Bardeen-Cooper-Schrieffer (BCS)-type mean-field theory for the t-J model is developed. The present approach rigorously treats the constraint of no doubly occupied states and invokes two local order parameters to implement spontaneous breaking of the global U(1) × U(1) symmetry. This is achieved by identifying the Hubbard operators with generators of the su(2|1) superalgebra in the fundamental representation and employing the CP^{1|1} parameterizat...

  14. Statistical image quantification toward optimal scan fusion and change quantification

    Science.gov (United States)

    Potesil, Vaclav; Zhou, Xiang Sean

    2007-03-01

    Recent advance of imaging technology has brought new challenges and opportunities for automatic and quantitative analysis of medical images. With broader accessibility of more imaging modalities for more patients, fusion of modalities/scans from one time point and longitudinal analysis of changes across time points have become the two most critical differentiators to support more informed, more reliable and more reproducible diagnosis and therapy decisions. Unfortunately, scan fusion and longitudinal analysis are both inherently plagued with increased levels of statistical errors. A lack of comprehensive analysis by imaging scientists and a lack of full awareness by physicians pose potential risks in clinical practice. In this paper, we discuss several key error factors affecting imaging quantification, studying their interactions, and introducing a simulation strategy to establish general error bounds for change quantification across time. We quantitatively show that image resolution, voxel anisotropy, lesion size, eccentricity, and orientation are all contributing factors to quantification error; and there is an intricate relationship between voxel anisotropy and lesion shape in affecting quantification error. Specifically, when two or more scans are to be fused at feature level, optimal linear fusion analysis reveals that scans with voxel anisotropy aligned with lesion elongation should receive a higher weight than other scans. As a result of such optimal linear fusion, we will achieve a lower variance than naïve averaging. Simulated experiments are used to validate theoretical predictions. Future work based on the proposed simulation methods may lead to general guidelines and error lower bounds for quantitative image analysis and change detection.
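
    Under the usual assumption of independent Gaussian errors per scan (our simplification; the paper's simulation framework is richer), feature-level optimal linear fusion reduces to inverse-variance weighting, which reproduces the conclusion above: the scan whose voxel anisotropy aligns with the lesion has the lower variance, receives the higher weight, and the fused variance beats naive averaging:

    ```python
    import numpy as np

    def fuse(estimates, variances):
        """Inverse-variance weighted fusion of per-scan estimates;
        optimal for independent, zero-mean Gaussian errors."""
        v = np.asarray(variances, dtype=float)
        w = (1.0 / v) / np.sum(1.0 / v)
        fused = float(np.dot(w, estimates))
        fused_var = 1.0 / np.sum(1.0 / v)
        return fused, fused_var, w

    # Hypothetical lesion-volume estimates (mL) from two scans; the second
    # scan's voxel anisotropy is aligned with the lesion, hence lower variance.
    vol, var, w = fuse([10.2, 9.6], [0.50, 0.20])
    naive_var = (0.50 + 0.20) / 4.0   # variance of the plain average
    print(f"fused {vol:.2f} mL, var {var:.3f} (naive average: {naive_var:.3f})")
    ```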

  15. AdS3 xw (S3 x S3 x S1) solutions of type IIB string theory

    International Nuclear Information System (INIS)

    We analyse a recently constructed class of local solutions of type IIB supergravity that consist of a warped product of AdS3 with a seven-dimensional internal space. In one duality frame the only other nonvanishing fields are the NS three-form and the dilaton. We analyse in detail how these local solutions can be extended to globally well-defined solutions of type IIB string theory, with the internal space having topology S3 x S3 x S1 and with properly quantised three-form flux. We show that many of the dual (0,2) SCFTs are exactly marginal deformations of the (0,2) SCFTs whose holographic duals are warped products of AdS3 with seven-dimensional manifolds of topology S3 x S2 x T2. (orig.)

  16. Modeling the size dependent pull-in instability of beam-type NEMS using strain gradient theory

    Scientific Electronic Library Online (English)

    Koochi, Ali; Sedighi, Hamid M.; Abadyan, Mohamadreza.


    Full Text Available It is well recognized that size dependency of materials characteristics, i.e. the size-effect, often plays a significant role in the performance of nano-structures. Herein, strain gradient continuum theory is employed to investigate the size-dependent pull-in instability of beam-type nano-electromechanical systems (NEMS). The two most common types of NEMS, i.e. the nano-bridge and the nano-cantilever, are considered. Effects of the electrostatic field and dispersion forces, i.e. Casimir and van der Waals (vdW) attractions, have been considered in the nonlinear governing equations of the systems. Two different solution methods, numerical and Rayleigh-Ritz, have been employed to solve the constitutive differential equations of the system. The effect of dispersion forces, the size dependency and the importance of the coupling between them on the instability performance are discussed.

  17. Warped anti-de Sitter spaces from brane intersections in type II string theory

    CERN Document Server

    Orlando, Domenico

    2010-01-01

    We consider explicit type II string constructions of backgrounds containing warped and squashed anti-de Sitter spaces. These are obtained via Hopf T-duality from brane intersections including dyonic black strings, plane waves and monopoles. We also study the supersymmetry of these solutions and discuss special values of the deformation parameters.

  18. Investigating Strength and Frequency Effects in Recognition Memory Using Type-2 Signal Detection Theory

    Science.gov (United States)

    Higham, Philip A.; Perfect, Timothy J.; Bruno, Davide

    2009-01-01

    Criterion- versus distribution-shift accounts of frequency and strength effects in recognition memory were investigated with Type-2 signal detection receiver operating characteristic (ROC) analysis, which provides a measure of metacognitive monitoring. Experiment 1 demonstrated a frequency-based mirror effect, with a higher hit rate and lower…

  19. Fixed point theory for compact absorbing contractions in extension type spaces

    OpenAIRE

    Donal O'Regan

    2010-01-01

    Several new fixed point results for self maps in extension type spaces are presented in this paper. In particular we discuss compact absorbing contractions.

  20. Quantification and finitism

    OpenAIRE

    Marion, Mathieu

    1991-01-01

    My aim is to clarify Wittgenstein's foundational outlook. I shall argue that he was neither a strict finitist nor an intuitionist, but a finitist (Skolem and Goodstein). In chapter I, I argue that Wittgenstein was a "revisionist" in the philosophy of mathematics. In chapter II, I set up a distinction between Kronecker's divisor-theoretical approach to algebraic number theory and the set-theoretic style of Dedekind's ideal-theoretic approach, in order to show that Wittgenstein...

  1. Virus detection and quantification using electrical parameters

    Science.gov (United States)

    Ahmad, Mahmoud Al; Mustafa, Farah; Ali, Lizna M.; Rizvi, Tahir A.

    2014-10-01

    Here we identify and quantitate two similar viruses, human and feline immunodeficiency viruses (HIV and FIV), suspended in a liquid medium without labeling, using a semiconductor technique. The virus count was estimated by calculating the impurities inside a defined volume by observing the change in electrical parameters. Empirically, the virus count was similar to the absolute value of the ratio of the change of the virus suspension dopant concentration relative to the mock dopant over the change in virus suspension Debye volume relative to mock Debye volume. The virus type was identified by constructing a concentration-mobility relationship which is unique for each kind of virus, allowing for a fast (within minutes) and label-free virus quantification and identification. For validation, the HIV and FIV virus preparations were further quantified by a biochemical technique and the results obtained by both approaches corroborated well. We further demonstrate that the electrical technique could be applied to accurately measure and characterize silica nanoparticles that resemble the virus particles in size. Based on these results, we anticipate our present approach to be a starting point towards establishing the foundation for label-free electrical-based identification and quantification of an unlimited number of viruses and other nano-sized particles.
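
    One literal reading of the empirical relation above (our interpretation, with invented numbers) takes the ratio of the relative change in dopant concentration to the relative change in Debye volume, both measured against the mock sample:

    ```python
    def estimate_count(n_virus, n_mock, v_virus, v_mock):
        """Virus count ~ |relative dopant-concentration change| /
        |relative Debye-volume change|; one reading of the abstract's
        empirical relation, not the authors' code."""
        d_conc = (n_virus - n_mock) / n_mock     # relative concentration change
        d_debye = (v_virus - v_mock) / v_mock    # relative Debye-volume change
        return abs(d_conc / d_debye)

    # Entirely hypothetical values, for illustration only:
    print(estimate_count(n_virus=1.2e15, n_mock=1.0e15,
                         v_virus=0.95e-21, v_mock=1.0e-21))
    ```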

  2. SWATH enables precise label-free quantification on proteome scale.

    Science.gov (United States)

    Huang, Qiang; Yang, Lu; Luo, Ji; Guo, Lin; Wang, Zhiyuan; Yang, Xiangyun; Jin, Wenhai; Fang, Yanshan; Ye, Juanying; Shan, Bing; Zhang, Yaoyang

    2015-04-01

    MS-based proteomics has emerged as a powerful tool in biological studies. The shotgun proteomics strategy, in which proteolytic peptides are analyzed in data-dependent mode, enables detection of a very comprehensive proteome (>10 000 proteins from whole-cell lysate). Quantitative proteomics uses stable isotopes or label-free methods to measure relative protein abundance. Isotope-labeling strategies are more precise and accurate than label-free methods, but the labeling procedures are complicated and expensive, and the number and types of samples are limited. Sequential window acquisition of all theoretical mass spectra (SWATH) is a recently developed technique in which data-independent acquisition is coupled with peptide spectral library matching. In principle, the SWATH method can perform label-free quantification in an MRM-like manner, with higher quantification accuracy and precision. Previous data have demonstrated that SWATH can be used to quantify less complex systems, such as spiked-in peptide mixtures or protein complexes. Our study is the first to assess the quantification performance of the SWATH method on a proteome scale using a complex mouse-cell lysate sample. In total, 3600 proteins were identified and quantified without sample prefractionation. The SWATH method shows outstanding quantification precision, whereas the quantification accuracy degrades when protein abundances differ greatly. However, this inaccuracy does not prevent discovering biological correlates, because the measured signal intensities were linearly related to the sample loading amounts; thus the SWATH method can precisely assess the significance of a protein. Our results prove that SWATH can provide precise label-free quantification on a proteome scale. PMID:25560523

  3. Nonlocal theory of drift type waves in a collisionless dusty plasma

    International Nuclear Information System (INIS)

    A nonlocal theory is formulated to study drift waves in a collisionless multicomponent (dusty) plasma in a sheared slab geometry. The dynamics of dust particles and ions are treated by fluid models, whereas the electrons are assumed to follow the Boltzmann distribution. It is found that the usual stability of drift waves in a sheared slab geometry is destroyed by the presence of dust particles. A drift wave is excited which propagates with a new characteristic frequency modified by dust particles. This result is similar to our earlier work for the collisional dusty plasma [Chakraborty et al., Phys. Plasmas 8, 1514 (2001)].

  4. A calculation methodology applied for fuel management in PWR type reactors using first order perturbation theory

    International Nuclear Information System (INIS)

    An attempt has been made to devise a strategy that is coherent with the available tools and that can be extended in future developments. A calculation methodology was developed for fuel reloads in PWR reactors, which involves cell calculation with the HAMMER-TECHNION code and neutronics calculation with the CITATION code. The management strategy adopted consists of changing fuel element positions at the beginning of each reactor cycle in order to decrease the radial peak factor. Two-dimensional, two-group first-order perturbation theory was used for the mathematical modeling. (L.C.J.A.)
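
    The first-order estimate underlying such a methodology is the standard perturbation formula of reactor physics (our rendering, with notation assumed here rather than quoted from the paper): for perturbations $\delta M$ and $\delta F$ of the two-group loss and fission-production operators,

        $\delta\rho \approx \dfrac{\langle \phi^{\dagger}, (\delta F - \delta M)\, \phi \rangle}{\langle \phi^{\dagger}, F\, \phi \rangle}$

    where $\phi$ is the unperturbed two-group flux and $\phi^{\dagger}$ the adjoint flux. Only unperturbed fluxes enter, which is what makes scanning many candidate reload patterns cheap.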

  5. Algebraic Signal Processing Theory: Cooley-Tukey Type Algorithms for DCTs and DSTs

    CERN Document Server

    Pueschel, Markus; Moura, Jose M. F.

    2007-01-01

    This paper presents a systematic methodology based on the algebraic theory of signal processing to classify and derive fast algorithms for linear transforms. Instead of manipulating the entries of transform matrices, our approach derives the algorithms by stepwise decomposition of the associated signal models, or polynomial algebras. This decomposition is based on two generic methods or algebraic principles that generalize the well-known Cooley-Tukey FFT and make the algorithms' derivations concise and transparent. Application to the 16 discrete cosine and sine transforms yields a large class of fast algorithms, many of which have not been found before.
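
    As a concrete instance of a Cooley-Tukey-style reduction for one of these transforms (Makhoul's classical FFT identity for the DCT-II, shown purely as an illustration; it is not the algebraic derivation used in the paper), the unnormalized DCT-II of a length-N sequence can be computed from a single length-N complex FFT:

        import numpy as np
        from scipy.fft import dct, fft

        def dct2_via_fft(x):
            # Unnormalized DCT-II via one complex FFT (Makhoul, 1980).
            x = np.asarray(x, dtype=float)
            N = len(x)
            # Permute: even-indexed samples, then odd-indexed samples reversed.
            v = np.concatenate([x[::2], x[1::2][::-1]])
            V = fft(v)
            k = np.arange(N)
            # Twiddle and keep the real part: C_k = 2 Re{exp(-i pi k / 2N) V_k}.
            return 2.0 * np.real(np.exp(-1j * np.pi * k / (2.0 * N)) * V)

        x = np.random.rand(16)
        assert np.allclose(dct2_via_fft(x), dct(x, type=2))  # matches scipy's DCT-II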

  6. Quantification of various phosphatidylcholines in liposomes by enzymatic assay.

    Science.gov (United States)

    Grohganz, Holger; Ziroli, Vittorio; Massing, Ulrich; Brandl, Martin

    2003-12-15

    The purpose of this research was to adapt a colorimetric, phospholipase D-based serum-phospholipid assay for the quantification of phosphatidylcholine (PC) in liposomes using a microtitre plate reader. PC from natural egg PC liposomes was quantified reliably. In contrast, poor sensitivity was found for liposomes composed of saturated PCs (di-palmitoyl-phosphatidylcholine [DPPC], hydrogenated egg PC). Triton X-100 was then added to the liposomes followed by heating above the phase transition temperature. This modified sample preparation resulted in recoveries of 102.6% ± 1.0%, 104.4% ± 7.6%, and 109.4% ± 3.2% for E80, E80-3/cholesterol, and DPPC liposomes, respectively. Absolute quantification of unknown PCs against a choline chloride standard is feasible, but relative measurements against the very same PC are recommended whenever possible. Validation experiments revealed an absolute quantification limit of 1.25 µg per assay, good linearity in the range of 25 to 1000 µg/mL PC (r² ≥ 0.9990), high accuracy (99.8%-101.4% of theory) and good precision (relative standard deviation ≤ 3.2%) for all 3 PCs studied. The method is thus regarded as suitable for sensitive, rapid, and reliable routine quantification of PCs in liposomes. PMID:15198558
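
    The quoted figures of merit are straightforward to reproduce; a small sketch with made-up replicate values (only the formulas, percent-of-theory accuracy and %RSD precision, mirror the abstract's usage):

        import numpy as np

        def assay_stats(measured_ug_ml, nominal_ug_ml):
            m = np.asarray(measured_ug_ml, dtype=float)
            accuracy = 100.0 * m.mean() / nominal_ug_ml    # % of theory
            rsd = 100.0 * m.std(ddof=1) / m.mean()         # precision, %RSD
            return accuracy, rsd

        # three hypothetical replicates of a 500 ug/mL DPPC sample
        print(assay_stats([502.1, 498.9, 507.4], 500.0))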

  7. A note on canonical bases and one-based types in supersimple theories

    CERN Document Server

    Chatzidakis, Zoé

    2012-01-01

    This paper studies the CBP, a model-theoretic property first discovered by Pillay and Ziegler. We first show a general decomposition result of types of canonical bases, which one can think of as a sort of primary decomposition. This decomposition is then used to show that existentially closed difference fields of any characteristic have the CBP. We also derive consequences of the CBP, and use these results for applications to differential and difference varieties, and algebraic dynamics.

  8. Theory of Decoupling in the Mixed Phase of Extremely Type-II Layered Superconductors

    OpenAIRE

    Rodriguez, J. P.

    2000-01-01

    The mixed phase of extremely type-II layered superconductors in perpendicular magnetic field is studied theoretically via the layered XY model with uniform frustration. A partial duality analysis is carried out in the weak-coupling limit. It consistently accounts for both intra-layer (pancake) and inter-layer (Josephson) vortex excitations. The main conclusion reached is that dislocations of the two-dimensional (2D) vortex lattices within layers drive a unique second-order m...

  9. Theory of amplifying instabilities from optical phonon-drift current interaction in a type I superlattice

    Science.gov (United States)

    Wallis, R. F.; Martin, B. G.

    1992-10-01

    A theoretical investigation has been made on amplifying instabilities that arise from the interaction of optical phonons and a dc drift current in a solid state plasma. We consider both an infinite and a truncated type I superlattice where alternate layers contain a dc drift current parallel to the interfaces. The dispersion relation for localized modes is obtained neglecting carrier damping and diffusion effects. Calculated results indicate that amplifying instabilities exist for a certain frequency range.

  10. Renormalizations and Rigidity Theory for Circle Homeomorphisms with Singularities of the Break Type

    Science.gov (United States)

    Khanin, K.; Khmelev, D.

    Circle homeomorphisms with singularities of the break type are considered in the case when rotation numbers have periodic continued fraction expansion. We establish hyperbolicity for renormalizations and then use it in order to prove the following rigidity result. Namely, we show that any two homeomorphisms with a single break point are smoothly conjugate to each other provided they have the same quadratic irrational rotation number and the same "size" of a break.

  11. A Global View on The Search for de-Sitter Vacua in (type IIA) String Theory

    OpenAIRE

    Chen, Xingang (Department of Physics, The University of Texas at Dallas, Richardson, TX 75083, USA); Shiu, Gary; Sumitomo, Yoske; Tye, S.-H. Henry

    2011-01-01

    The search for classically stable Type IIA de-Sitter vacua typically starts with an ansatz that gives Anti-de-Sitter supersymmetric vacua and then raises the cosmological constant by modifying the compactification. As one raises the cosmological constant, the couplings typically destabilize the classically stable vacuum, so the probability that this approach will lead to a classically stable de-Sitter vacuum is Gaussianly suppressed. This suggests that classically stable de-...

  12. Theory and design of quantum cascade lasers in (111) n-type Si/SiGe

    OpenAIRE

    Valavanis, A.; Lever, L.; Evans, C. A.; Ikonić, Z.; Kelsall, R. W.

    2009-01-01

    Although most work toward the realization of group IV quantum cascade lasers (QCLs) has focused on valence-band transitions, there are many desirable properties associated with the conduction band. We show that the commonly cited shortcomings of n-type Si/SiGe heterostructures can be overcome by moving to the (111) growth direction. Specifically, a large band offset and low effective mass are achievable and subband degeneracy is preserved. We predict net gain up to lattice t...

  13. Critical state theory for nonparallel flux line lattices in type-II superconductors

    OpenAIRE

    Badia, A.; Lopez, C.

    2001-01-01

    Coarse-grained flux density profiles in type-II superconductors with non-parallel vortex configurations are obtained by a proposed phenomenological least action principle. We introduce a functional $C[H(x)]$, which is minimized under a constraint of the kind $J \in \Delta$ for the current density vector, where $\Delta$ is a bounded set. This generalizes the concept of critical current density introduced by C. P. Bean for parallel vortex configurations. In particular, ...

  14. A constrained theory of non-BCS type superconductivity in gapped Graphene

    OpenAIRE

    Vyas, Vivek M.; Panigrahi, Prasanta K.

    2011-01-01

    We show that gapped Graphene, with a local constraint that the currents arising from the two valley fermions be exactly equal, exhibits a non-BCS type of superconductivity. Unlike conventional mechanisms, this superconductivity does not require any pairing. We estimate the critical temperature for the superconducting-to-normal transition via the Berezinskii-Kosterlitz-Thouless mechanism, and find that it is proportional to the gap.

  15. A New Survey of Types of Uncertainties in Nonlinear Systems with Fuzzy Theory

    Directory of Open Access Journals (Sweden)

    Fereshteh Mohammadi

    2013-03-01

    This paper is an attempt to introduce a new framework to handle both uncertainty and time in the spatial domain. The application of the fuzzy temporal constraint network (FTCN) method is proposed for representing and reasoning about uncertain temporal data. A brief introduction to fuzzy set theory is followed by a description of the FTCN method and its main algorithms. The paper then discusses the issues of incorporating the fuzzy approach into a current spatio-temporal processing framework. The general temporal data model is extended to accommodate uncertainties in temporal data and in the relationships among events. A theoretical FTCN process of fuzzy transition for imprecise information is introduced with an example. A summary of the paper is given, together with an outline of its contributions and future research directions.

  16. Extension Theory and Krein-type Resolvent Formulas for Nonsmooth Boundary Value Problems

    DEFF Research Database (Denmark)

    Abels, Helmut; Grubb, Gerd

    2014-01-01

    The theory of selfadjoint extensions of symmetric operators, and more generally the theory of extensions of dual pairs, was implemented some years ago for boundary value problems for elliptic operators on smooth bounded domains. Recently, the questions have been taken up again for nonsmooth domains. In the present work we show that pseudodifferential methods can be used to obtain a full characterization, including Kreĭn resolvent formulas, of the realizations of nonselfadjoint second-order operators on $C^{3/2+\varepsilon}$ domains; more precisely, we treat domains with $B^{3/2}_{p,2}$-smoothness and operators with $H^1_q$-coefficients, for suitable $p > 2(n-1)$ and $q > n$. The advantage of the pseudodifferential boundary operator calculus is that the operators are represented by a principal part and a lower-order remainder, leading to regularity results; in particular we analyze resolvents, Poisson solution operators and Dirichlet-to-Neumann operators in this way, also in Sobolev spaces of negative order.

  17. Blaschke-type conditions in unbounded domains, generalized convexity and applications in perturbation theory

    CERN Document Server

    Favorov, S

    2012-01-01

    We introduce a new geometric characteristic of compact sets in the plane called $r$-convexity, which fits nicely into the concept of generalized convexity and essentially extends conventional convexity. For a class of subharmonic functions on unbounded domains with $r$-convex compact complement, with growth governed by the distance to the boundary, we obtain a Blaschke-type condition for their Riesz measures. The result is applied to the study of the convergence of the discrete spectrum for Schatten-von Neumann perturbations.

  18. On the Theory of a Class of Model Linear Non-classical Volterra-Type Integral Equations with a Left Boundary Singular Point

    Directory of Open Access Journals (Sweden)

    Nusrat Rajabov

    2013-08-01

    In this work we investigate a class of Volterra-type integral equations in the model case when the kernels have a first-order fixed singularity and a logarithmic singularity. The case n = 3 is studied in detail. Depending on the signs of the parameters, the solution of this integral equation may contain three arbitrary constants, two arbitrary constants, or one constant, or the equation may have a unique solution. In the cases when the general solution of the integral equation contains arbitrary constants, we pose and investigate various boundary value problems in which the conditions are given at the singular point. Moreover, the solution of the integral equation under consideration can be represented as a generalized power series. Some results are obtained in the general model case.

  19. Electrical and mechanical fully coupled theory and experimental verification of Rosen-type piezoelectric transformers.

    Science.gov (United States)

    Hsu, Yu-Hsiang; Lee, Chih-Kung; Hsiao, Wen-Hsin

    2005-10-01

    A piezoelectric transformer is a power transfer device that converts its input and output voltage as well as current by effectively using the electrical and mechanical coupling effects of piezoelectric materials. Equivalent-circuit models, which are traditionally used to analyze piezoelectric transformers, merge each mechanical resonance effect into a series of ordinary differential equations. Because they rely on ordinary differential equations, equivalent-circuit models are insufficient to reflect the mechanical behavior of piezoelectric plates. Electromechanically fully coupled governing equations of Rosen-type piezoelectric transformers, which are partial differential equations in nature, can be derived to address the deficiencies of the equivalent-circuit models. It can be shown that the modal actuator concept can be adopted to optimize the electromechanical coupling effect of the driving section once the added spatial-domain design parameters, namely the three-dimensional spatial dependencies of the electromechanical properties, are taken into account. The maximum power transfer condition for a Rosen-type piezoelectric transformer is detailed. Experimental results, which lead us to a series of new design rules, are also presented to prove the validity and effectiveness of the theoretical predictions. PMID:16382636

  20. Quantification of natural phenomena

    International Nuclear Information System (INIS)

    Science is like a great spider's web in which unexpected connections appear, and it is therefore often difficult to know in advance the consequences of new theories for existing ones. Physics is a clear example of this. Newton's laws of mechanics accurately describe the physical phenomena observable with our senses or with relatively unsophisticated instruments. After their formulation at the beginning of the XVIII century, these laws were recognized in the scientific world as a mathematical model of nature. Together with the laws of electrodynamics, developed in the XIX century, and those of thermodynamics, they constitute what we call classical physics. By the end of the XIX century, classical physics had matured to such a degree that some scientists believed physics was nearing its end, having obtained a complete description of physical phenomena. The spider's web of knowledge was thought to be finished, or at least very near completion. It was even claimed, arrogantly, that if the initial conditions of the universe were known, its state at any future moment could be determined. Two phenomena related to light would firmly prove how mistaken they were, creating unexpected connections in the great spider's web of knowledge and knocking down part of it. The thermal radiation of bodies, and the fact that light propagates at constant speed in vacuum without an absolute system of reference with respect to which this speed is measured, were the decisive factors in the construction of a new physics. The development of sophisticated measuring equipment gave access to more precise information and opened the microscopic world to observation and to the confirmation of existing theories.

  1. A recipe for EFT uncertainty quantification in nuclear physics

    Science.gov (United States)

    Furnstahl, R. J.; Phillips, D. R.; Wesolowski, S.

    2015-03-01

    The application of effective field theory (EFT) methods to nuclear systems provides the opportunity to rigorously estimate the uncertainties originating in the nuclear Hamiltonian. Yet this is just one source of uncertainty in the observables predicted by calculations based on nuclear EFTs. We discuss the goals of uncertainty quantification in such calculations and outline a recipe to obtain statistically meaningful error bars for their predictions. We argue that the different sources of theory error can be accounted for within a Bayesian framework, as we illustrate using a toy model.

  2. A recipe for EFT uncertainty quantification in nuclear physics

    International Nuclear Information System (INIS)

    The application of effective field theory (EFT) methods to nuclear systems provides the opportunity to rigorously estimate the uncertainties originating in the nuclear Hamiltonian. Yet this is just one source of uncertainty in the observables predicted by calculations based on nuclear EFTs. We discuss the goals of uncertainty quantification in such calculations and outline a recipe to obtain statistically meaningful error bars for their predictions. We argue that the different sources of theory error can be accounted for within a Bayesian framework, as we illustrate using a toy model. (paper)
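
    The paper's actual prescription is Bayesian; purely to illustrate the underlying idea, the sketch below (function name, inputs and numbers are all ours) estimates a truncation error from the first omitted term, assuming the order-by-order corrections scale like $X_{\mathrm{ref}}\, c_n Q^n$ with coefficients $c_n$ of natural size:

        import numpy as np

        def truncation_error(orders, X_ref, Q):
            # `orders` = cumulative predictions X_0, ..., X_k at successive EFT
            # orders; X_ref sets the observable's natural scale; Q is the
            # expansion parameter (e.g. typical momentum / breakdown scale).
            X = np.asarray(orders, dtype=float)
            k = len(X) - 1
            corrections = np.diff(X, prepend=0.0)              # order-n terms
            c = corrections / (X_ref * Q ** np.arange(k + 1))  # dimensionless c_n
            c_bar = np.sqrt(np.mean(c**2))                     # typical |c_n|
            # first-omitted-term estimate, summed as a geometric tail
            return X_ref * c_bar * Q**(k + 1) / (1.0 - Q)

        print(truncation_error([1.00, 1.18, 1.13], X_ref=1.0, Q=0.33))

    A full Bayesian treatment would instead place a prior on the $c_n$ and report a degree-of-belief interval rather than a single number.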

  3. A Pieri-type formula for the K-theory of a flag manifold

    CERN Document Server

    Lenart, Cristian; Sottile, Frank

    2004-01-01

    We derive explicit Pieri-type multiplication formulas in the Grothendieck ring of a flag variety. These expand the product of an arbitrary Schubert class and a special Schubert class in the basis of Schubert classes. These special Schubert classes are indexed by a cycle which has either the form (k-p+1,k-p+2,...,k+1) or the form (k+p,k+p-1,...,k), and are pulled back from a Grassmannian projection. Our formulas are in terms of certain labeled chains in the k-Bruhat order on the symmetric group and are combinatorial in that they involve no cancellations. We also show that the multiplicities in the Pieri formula are naturally certain binomial coefficients.

  4. Lovelock type gravity and small black holes in heterotic string theory

    International Nuclear Information System (INIS)

    We analyze near horizon behavior of small D-dimensional 2-charge black holes by modifying tree level effective action of heterotic string with all extended Gauss-Bonnet densities. We show that there is a nontrivial and unique choice of parameters, independent of D, for which the black hole entropy in any dimension is given by $4\pi\sqrt{nw}$, which is exactly the statistical entropy of 1/2-BPS states of heterotic string compactified on $T^{9-D} \times S^1$ with momentum $n$ and winding $w$. This, in a sense, extends the results of Sen [JHEP 07 (2005) 073] to all dimensions. We also show that our Lovelock type action belongs to the more general class of actions sharing the similar behaviour on the $AdS_2 \times S^{D-2}$ near horizon geometry.

  5. Boussinesq Systems of Bona-Smith Type on Plane Domains: Theory and Numerical Analysis

    CERN Document Server

    Dougalis, Vassilios; Saut, Jean-Claude

    2009-01-01

    We consider a class of Boussinesq systems of Bona-Smith type in two space dimensions approximating surface wave flows modelled by the three-dimensional Euler equations. We show that various initial-boundary-value problems for these systems, posed on a bounded plane domain, are well posed locally in time. In the case of reflective boundary conditions, the systems are discretized by a modified Galerkin method which is proved to converge in $L^2$ at an optimal rate. Numerical experiments are presented with the aim of simulating two-dimensional surface waves in complex plane domains with a variety of initial and boundary conditions, and comparing numerical solutions of Bona-Smith systems with analogous solutions of the BBM-BBM system.

  6. Understanding microwave heating effects in single mode type cavities: theory and experiment.

    Science.gov (United States)

    Robinson, John; Kingman, Sam; Irvine, Derek; Licence, Peter; Smith, Alastair; Dimitrakis, Georgios; Obermayer, David; Kappe, C Oliver

    2010-05-14

    This paper explains the phenomena which occur in commercially available laboratory microwave equipment, and highlights several situations where experimental observations are often misinterpreted as a 'microwave effect'. Electromagnetic simulations and heating experiments were used to show the quantitative effects of solvent type, solvent volume, vessel material, vessel internals and stirring rate on the distribution of the electric field, the power density and the rate of heating. The simulations and experiments show how significant temperature gradients can exist within the heated materials, and that very different results can be obtained depending on the method used to measure temperature. The overall energy balance is shown for a number of different solvents, and the interpretation and implications of using the results from commercially available microwave equipment are discussed. PMID:20428555

  7. Secret symmetries of type IIB superstring theory on AdS3 × S3 × M4

    International Nuclear Information System (INIS)

    We establish features of so-called Yangian secret symmetries for AdS3 type IIB superstring backgrounds, thus verifying the persistence of such symmetries to this new instance of the AdS/CFT correspondence. Specifically, we find two a priori different classes of secret symmetry generators. One class of generators, anticipated from the previous literature, is more naturally embedded in the algebra governing the integrable scattering problem. The other class of generators is more elusive and somewhat closer in its form to its higher-dimensional AdS5 counterpart. All of these symmetries respect left-right crossing. In addition, by considering the interplay between left and right representations, we gain a new perspective on the AdS5 case. We also study the RTT-realisation of the Yangian in AdS3 backgrounds, thus establishing a new incarnation of the Beisert–de Leeuw construction. (paper)

  8. Secret symmetries of type IIB superstring theory on AdS3 × S3 × M4

    Science.gov (United States)

    Pittelli, Antonio; Torrielli, Alessandro; Wolf, Martin

    2014-11-01

    We establish features of so-called Yangian secret symmetries for AdS3 type IIB superstring backgrounds, thus verifying the persistence of such symmetries to this new instance of the AdS/CFT correspondence. Specifically, we find two a priori different classes of secret symmetry generators. One class of generators, anticipated from the previous literature, is more naturally embedded in the algebra governing the integrable scattering problem. The other class of generators is more elusive and somewhat closer in its form to its higher-dimensional AdS5 counterpart. All of these symmetries respect left-right crossing. In addition, by considering the interplay between left and right representations, we gain a new perspective on the AdS5 case. We also study the RTT-realisation of the Yangian in AdS3 backgrounds, thus establishing a new incarnation of the Beisert–de Leeuw construction.

  9. Quantification of atmospheric water soluble inorganic and organic nitrogen

    OpenAIRE

    Benítez, Juan Manuel González

    2010-01-01

    The key aims of this project were: (i) investigation of atmospheric nitrogen deposition, focused on discrimination between bulk, wet and dry deposition, and between particulate matter and gas phase, (ii) accurate quantification of the contributions of dissolved organic and inorganic nitrogen to each type of deposition, and (iii) exploration of the origin and potential sources of atmospheric water soluble organic nitrogen (WSON). This project was particularly focused on the WSON fraction becau...

  10. Bianchi Type-I Massive String Magnetized Barotropic Perfect Fluid Cosmological Model in the Bimetric Theory of Gravitation

    International Nuclear Information System (INIS)

    We investigate the Bianchi type-I massive string magnetized barotropic perfect fluid cosmological model in Rosen's bimetric theory of gravitation with and without a magnetic field by applying the techniques used by Letelier (1979, 1980) and Stachel (1983). To obtain a deterministic model of the universe, it is assumed that the universe is filled with a barotropic perfect fluid distribution. The physical and geometrical significance of the model are discussed. By comparing our model with the model of Bali et al. (2007), it is realized that there are no big-bang and big-crunch singularities in our model and T = 0 is not the time of the big bang, whereas the model of Bali et al. starts with a big bang at T = 0. Further, our model is in agreement with Bali et al. (2007) as time increases in the presence, as well as in the absence, of a magnetic field. (geophysics, astronomy, and astrophysics)

  11. Object Oriented Design Security Quantification

    OpenAIRE

    Suhel Ahmad Khan

    2011-01-01

    Quantification of security at an early phase produces a significant improvement in understanding the management of security artifacts for the best possible results. The proposed study discusses a systematic approach to quantify security, based on complexity factors that have an impact on security attributes. This paper provides a road-map for researchers and software practitioners to assess, and preferably quantify, software security in the design phase. A security assessment through complexity framework (SVD...

  12. k-string tensions in the 4-d SU(N)-inspired dual abelian-Higgs-type theory

    International Nuclear Information System (INIS)

    The k-string tensions are explored in the 4-d $[U(1)]^{N-1}$-invariant dual abelian-Higgs-type theory. In the London limit of this theory, the Casimir scaling is found in the approximation when small-sized closed dual strings are disregarded. When these strings are treated in the dilute-plasma approximation, explicit corrections to the Casimir scaling are found. The leading correction due to the deviation from the London limit is also derived. Its N-ality dependence turns out to be the same as that of the first non-trivial correction produced by closed strings. It also turns out that this N-ality dependence coincides with that of the leading correction to the k-string tension, which emerges by way of the non-diluteness of the monopole plasma in the 3-d SU(N) Georgi-Glashow model. Finally, we prove that, in the latter model, Casimir scaling holds even at monopole densities close to the mean one, provided the string world sheet is flat. (author)
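
    For reference, the Casimir-scaling law invoked in the abstract states that the tension of the $k$-string (a source in the $k$-index antisymmetric representation of SU(N)) follows the ratio of quadratic Casimirs,

        $\sigma_k = \sigma_1 \, \dfrac{k\,(N-k)}{N-1},$

    so the corrections computed in the paper are deviations from this law with a definite N-ality (i.e. $k$ mod $N$) dependence.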

  13. Flux-induced Soft Terms on Type IIB/F-theory Matter Curves and Hypercharge Dependent Scalar Masses

    CERN Document Server

    Camara, Pablo G.; Ibanez, Luis E.; Valenzuela, Irene

    2014-01-01

    Closed string fluxes generically induce SUSY-breaking soft terms on supersymmetric type IIB orientifold compactifications with D3/D7 branes. This was studied in the past by inserting those fluxes in the DBI+CS actions for adjoint D3/D7 fields, where D7-branes had no magnetic fluxes. In the present work we generalise those computations to the phenomenologically more relevant case of chiral bi-fundamental fields lying at 7-brane intersections and F-theory local matter curves. We also include the effect of 7-brane magnetic flux as well as more general closed string backgrounds, including the effect of distant (anti-)D3-branes. We discuss several applications of our results. We find that squark/slepton masses become in general flux-dependent in F-theory GUT's. Hypercharge-dependent non-universal scalar masses with a characteristic sfermion hierarchy m_E^2 < m_L^2 < m_Q^2 < m_D^2 < m_U^2 are obtained. There are also flavor-violating soft terms both for matter fields living at intersecting 7-branes or ...

  14. Flux-induced soft terms on type IIB/F-theory matter curves and hypercharge dependent scalar masses

    Science.gov (United States)

    Cámara, Pablo G.; Ibáñez, Luis E.; Valenzuela, Irene

    2014-06-01

    Closed string fluxes generically induce SUSY-breaking soft terms on supersymmetric type IIB orientifold compactifications with D3/D7 branes. This was studied in the past by inserting those fluxes in the DBI+CS actions for adjoint D3/D7 fields, where D7-branes had no magnetic fluxes. In the present work we generalise those computations to the phenomenologically more relevant case of chiral bi-fundamental fields lying at 7-brane intersections and F-theory local matter curves. We also include the effect of 7-brane magnetic flux as well as more general closed string backgrounds, including the effect of distant (anti-)D3-branes. We discuss several applications of our results. We find that squark/slepton masses become in general flux-dependent in F-theory GUT's. Hypercharge-dependent non-universal scalar masses with a characteristic sfermion hierarchy m_E^2 < m_L^2 < m_Q^2 < m_D^2 < m_U^2 are obtained. There are also flavor-violating soft terms both for matter fields living at intersecting 7-branes or on D3-branes at singularities. They point at a very heavy sfermion spectrum to avoid FCNC constraints. We also discuss the possible microscopic description of the fine-tuning of the EW Higgs boson in compactifications with an MSSM spectrum.

  15. Evaluation of Bianchi type VI0 magnetized anisotropic dark energy models with constant deceleration parameter in bimetric theory of gravitation

    Science.gov (United States)

    Borkar, M. S.; Ameen, A.

    2015-01-01

    In this paper, Bianchi type VI0 magnetized anisotropic dark energy models with constant deceleration parameter have been studied by solving Rosen's field equations in the bimetric theory of gravitation. The models corresponding to power-law expansion and exponential-law expansion have been evaluated, and their nature studied geometrically and physically. It is seen that real visible matter (baryonic matter) appears suddenly only for a small interval of time 0.7 ≤ t …; the visible matter in the universe is about 4%, and the dark energy causes the accelerating expansion of the universe. Several high-precision observational experiments, especially the Wilkinson Microwave Anisotropy Probe (WMAP) satellite experiment (see [C. L. Bennett et al., Astrophys. J. Suppl. Ser. 148 (2003) 1; WMAP Collab. (D. N. Spergel et al.), Astrophys. J. Suppl. Ser. 148 (2003) 175; D. N. Spergel et al., Astrophys. J. Suppl. 170 (2007) 377; WMAP Collab. (E. Komatsu et al.), Astrophys. J. Suppl. 180 (2009) 330; WMAP Collab. (G. Hinshaw et al.), Astrophys. J. Suppl. 208 (2013) 19; Planck Collab. (P. A. R. Ade), arXiv:1303.5076; arXiv:1303.5082]), conclude that dark energy occupies about 73% of the energy of the universe and dark matter about 23%. In the exponential law of expansion, our model is fully occupied by real visible matter and there is no room for dark energy or dark matter.

  16. Self-Organized Criticality as Witten-type Topological Field Theory with Spontaneously Broken BRST-Symmetry

    CERN Document Server

    Ovchinnikov, Igor V

    2011-01-01

    Here we propose a scenario according to which a generic self-organized critical (SOC) system can be looked upon as a Witten-type topological field theory (W-TFT) with spontaneously broken BRST-symmetry. One of the conditions for the SOC is the slow external driving, which unambiguously suggests the Stratonovich interpretation of noise in the corresponding stochastic differential equation (SDE). This necessitates the use of the Parisi-Wu quantization of the SDE, leading to a model with a BRST-exact action, i.e., to a W-TFT. For a general SDE with a mixed-type drift term (Langevin + Hamilton parts), the BRST-symmetry is spontaneously broken and there is a Goldstone mode of Faddeev-Popov ghosts. In the low-energy/long-wavelength limit, the ghosts represent instanton/avalanche moduli and, being gapless, are responsible for the critical distribution of avalanches. The above arguments are robust against a moderate variation of the SDE's parameters and the criticality is "self-tuned". Our proposition suggests tha...

  17. An uncertainty inventory demonstration - a primary step in uncertainty quantification

    Energy Technology Data Exchange (ETDEWEB)

    Langenbrunner, James R. [Los Alamos National Laboratory; Booker, Jane M [Los Alamos National Laboratory; Hemez, Francois M [Los Alamos National Laboratory; Salazar, Issac F [Los Alamos National Laboratory; Ross, Timothy J [UNM

    2009-01-01

    Tools, methods, and theories for assessing and quantifying uncertainties vary by application. Uncertainty quantification tasks have unique desiderata and circumstances. To realistically assess uncertainty requires the engineer/scientist to specify mathematical models, the physical phenomena of interest, and the theory or framework for assessments. For example, Probabilistic Risk Assessment (PRA) specifically identifies uncertainties using probability theory, and therefore PRAs lack formal procedures for quantifying uncertainties that are not probabilistic. The Phenomena Identification and Ranking Technique (PIRT) proceeds by ranking phenomena using scoring criteria that result in linguistic descriptors, such as importance ranked with the words 'High/Medium/Low.' The use of words allows PIRT to be flexible, but the analysis may then be difficult to combine with other uncertainty theories. We propose that a necessary step in the development of a procedure or protocol for uncertainty quantification (UQ) is the application of an Uncertainty Inventory. An Uncertainty Inventory should be considered and performed in the earliest stages of UQ.

  18. The necessity of operational risk management and quantification

    Directory of Open Access Journals (Sweden)

    Barbu Teodora Cristina

    2008-04-01

    Starting from the fact that the high-performing strategies of financial institutions include programmes and management procedures for banking risks, whose main objective is to minimize the probability of risk generation and the bank's potential exposure, this paper presents methods for managing and quantifying operational risk. It also presents the modality of determining the minimum capital requirement for operational risk. The first part presents the conceptual approach to operational risks from the point of view of the financial institutions exposed to this type of risk. The second part describes the management and evaluation methods for operational risk. The final part presents the approach assumed by a financial institution with a precise purpose: the quantification of the minimum capital requirements for operational risk.

  19. Aerobic physical activity and resistance training: an application of the theory of planned behavior among adults with type 2 diabetes in a random, national sample of Canadians

    OpenAIRE

    Karunamuni, Nandini; Trinh, Linda; Courneya, Kerry S.; Plotnikoff, Ronald C.; Sigal, Ronald J.

    2008-01-01

    Abstract Background Aerobic physical activity (PA) and resistance training are paramount in the treatment and management of type 2 diabetes (T2D), but few studies have examined the determinants of both types of exercise in the same sample. Objective The primary purpose was to investigate the utility of the Theory of Planned Behavior (TPB) in explaining aerobic PA and resistance training in a population sample of T2D adults. Methods A total of 244 individuals were recruited through a random na...

  20. Quantum non-equilibrium and relaxation to equilibrium for a class of de Broglie-Bohm-type theories

    CERN Document Server

    Colin, Samuel

    2009-01-01

    The de Broglie-Bohm theory is about non-relativistic point-particles that move deterministically along trajectories. The theory reproduces the predictions of standard quantum theory, given that the distribution of particles over an ensemble of systems, all described by the same wavefunction $\psi$, equals the quantum equilibrium distribution $|\psi|^2$. Numerical simulations by Valentini and Westman have illustrated that non-equilibrium particle distributions may relax to quantum equilibrium after some time. Here we consider non-equilibrium distributions and their relaxation properties for a particular class of trajectory theories, first studied in detail by Deotto and Ghirardi, that are empirically equivalent to the de Broglie-Bohm theory in quantum equilibrium. For the examples of such theories that we consider, we find a speed-up of the relaxation compared to the ordinary de Broglie-Bohm theory.

  1. Self-organized criticality as Witten-type topological field theory with spontaneously broken Becchi-Rouet-Stora-Tyutin symmetry

    International Nuclear Information System (INIS)

    Here, a scenario is proposed, according to which a generic self-organized critical (SOC) system can be looked upon as a Witten-type topological field theory (W-TFT) with spontaneously broken Becchi-Rouet-Stora-Tyutin (BRST) symmetry. One of the conditions for the SOC is the slow driving noise, which unambiguously suggests the Stratonovich interpretation of the corresponding stochastic differential equation (SDE). This, in turn, necessitates the use of the Parisi-Sourlas-Wu stochastic quantization procedure, which straightforwardly leads to a model with BRST-exact action, i.e., to a W-TFT. In the parameter space of the SDE, there must exist full-dimensional regions where the BRST symmetry is spontaneously broken by instantons, which in the context of SOC are essentially avalanches. In these regions, the avalanche-type SOC dynamics is liberated from the otherwise rightful dynamics-less W-TFT, and a Goldstone mode of Faddeev-Popov ghosts exists. Goldstinos represent moduli of instantons (avalanches) and, being gapless, are responsible for the critical avalanche distribution in the low-energy, long-wavelength limit. The above arguments are robust against moderate variations of the SDE's parameters and the criticality is 'self-tuned'. The proposition of this paper suggests that the machinery of W-TFTs may find its applications in many different areas of modern science studying various physical realizations of SOC. It also suggests that there may in principle exist a connection between some SOCs and the concept of topological quantum computing.

  2. Self-organized criticality as Witten-type topological field theory with spontaneously broken Becchi-Rouet-Stora-Tyutin symmetry

    Science.gov (United States)

    Ovchinnikov, Igor V.

    2011-05-01

    Here, a scenario is proposed, according to which a generic self-organized critical (SOC) system can be looked upon as a Witten-type topological field theory (W-TFT) with spontaneously broken Becchi-Rouet-Stora-Tyutin (BRST) symmetry. One of the conditions for the SOC is the slow driving noise, which unambiguously suggests the Stratonovich interpretation of the corresponding stochastic differential equation (SDE). This, in turn, necessitates the use of the Parisi-Sourlas-Wu stochastic quantization procedure, which straightforwardly leads to a model with BRST-exact action, i.e., to a W-TFT. In the parameter space of the SDE, there must exist full-dimensional regions where the BRST symmetry is spontaneously broken by instantons, which in the context of SOC are essentially avalanches. In these regions, the avalanche-type SOC dynamics is liberated from the otherwise rightful dynamics-less W-TFT, and a Goldstone mode of Faddeev-Popov ghosts exists. Goldstinos represent moduli of instantons (avalanches) and, being gapless, are responsible for the critical avalanche distribution in the low-energy, long-wavelength limit. The above arguments are robust against moderate variations of the SDE’s parameters and the criticality is “self-tuned.” The proposition of this paper suggests that the machinery of W-TFTs may find its applications in many different areas of modern science studying various physical realizations of SOC. It also suggests that there may in principle exist a connection between some SOCs and the concept of topological quantum computing.

  3. Low energy expansion of the four-particle genus-one amplitude in type II superstring theory

    International Nuclear Information System (INIS)

    A diagrammatic expansion of coefficients in the low-momentum expansion of the genus-one four-particle amplitude in type II superstring theory is developed. This is applied to determine coefficients up to order $s^6 R^4$ (where $s$ is a Mandelstam invariant and $R$ the linearized super-curvature), and partial results are obtained beyond that order. This involves integrating powers of the scalar propagator on a toroidal world-sheet, as well as integrating over the modulus of the torus. At any given order in $s$ the coefficients of these terms are given by rational numbers multiplying multiple zeta values (or Euler-Zagier sums) that, up to the order studied here, reduce to products of Riemann zeta values. We are careful to disentangle the analytic pieces from logarithmic threshold terms, which involves a discussion of the conditions imposed by unitarity. We further consider the compactification of the amplitude on a circle of radius $r$, which results in a plethora of terms that are power-behaved in $r$. These coefficients provide boundary 'data' that must be matched by any non-perturbative expression for the low-energy expansion of the four-graviton amplitude. The paper includes an appendix by Don Zagier.

  4. Towards an integrative account of social cognition: marrying theory of mind and interactionism to study the interplay of Type 1 and Type 2 processes

    OpenAIRE

    Bohl, Vivian; van den Bos, Wouter

    2012-01-01

    Traditional theory of mind accounts of social cognition have been at the basis of most studies in the social cognitive neurosciences. However, in recent years, the need to go beyond traditional theory of mind accounts for understanding real life social interactions has become all the more pressing. At the same time it remains unclear whether alternative accounts, such as interactionism, can yield a sufficient description and explanation of social interactions. We argue that instead of conside...

  5. Quantification of visceral adipose tissue using magnetic resonance imaging compared with anthropometry, in type 2 diabetic patients

    Scientific Electronic Library Online (English)

    Serrano García, Cristóbal; Barrera, Francisco; Labbé, Pilar; Liberona, Jessica; Arrese, Marco; Irarrázabal, Pablo; Tejos, Cristián; Uribe, Sergio

    2012-12-01

    Background: Visceral fat accumulation is associated with the development of metabolic diseases. Anthropometry is one of the methods used to quantify it. Aim: To evaluate the relationship between visceral adipose tissue volume (VAT), measured with magnetic resonance imaging (MRI), and anthropometric indexes, such as body mass index (BMI) and waist circumference (WC), in type 2 diabetic patients (DM2). Patients and Methods: Twenty-four type 2 diabetic patients aged 55 to 78 years (15 females) and weighing 61.5 to 97 kg were included. The patients underwent MRI examination on a Philips Intera 1.5 T MR scanner. The MRI protocol included a spectral excitation sequence centered at the fat peak. The field of view extended from L4-L5 to the diaphragmatic border. VAT was measured using the software Image J. Weight, height, BMI, WC and body fat percentage (BF%), derived from the measurement of four skinfolds with the equation of Durnin and Womersley, were also measured. The association between the MRI VAT measurement and anthropometry was evaluated using Pearson's correlation coefficient. Results: Mean VAT was 2478 ± 758 ml, mean BMI 29.5 ± 4.7 kg/m², and mean WC was 100 ± 9.7 cm. There was a poor correlation between VAT, BMI (r = 0.18) and WC (r = 0.56). Conclusions: BMI and WC are inaccurate predictors of VAT volume in type 2 diabetic patients.
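
    The association measure used is Pearson's correlation coefficient; as a minimal sketch (the paired values below are invented, not the study's data):

        import numpy as np

        vat = np.array([2100.0, 2600.0, 1900.0, 3200.0, 2500.0])  # VAT volume, mL
        bmi = np.array([27.0, 31.5, 25.8, 33.0, 29.1])            # BMI, kg/m2

        r = np.corrcoef(vat, bmi)[0, 1]   # Pearson's r
        print(f"r = {r:.2f}")             # values near 0 indicate poor correlation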

  6. Uncertainty quantification for systems of conservation laws

    International Nuclear Information System (INIS)

    Uncertainty quantification through stochastic spectral methods has been recently applied to several kinds of non-linear stochastic PDEs. In this paper, we introduce a formalism based on kinetic theory to tackle uncertain hyperbolic systems of conservation laws with Polynomial Chaos (PC) methods. The idea is to introduce a new variable, the entropic variable, in bijection with our vector of unknowns, which we develop on the polynomial basis: by performing a Galerkin projection, we obtain a deterministic system of conservation laws. We state several properties of this deterministic system in the case of a general uncertain system of conservation laws. We then apply the method to the case of the inviscid Burgers' equation with random initial conditions and we present some preliminary results for the Euler system. We systematically compare results from our new approach to results from the stochastic Galerkin method. In the vicinity of discontinuities, the new method bounds the oscillations due to the Gibbs phenomenon to a certain range through the entropy of the system, without the use of any adaptive random space discretizations. It is found to be more precise than the stochastic Galerkin method for smooth cases but above all for discontinuous cases.

  7. Uncertainty quantification for systems of conservation laws

    Science.gov (United States)

    Poëtte, Gaël; Després, Bruno; Lucor, Didier

    2009-04-01

    Uncertainty quantification through stochastic spectral methods has been recently applied to several kinds of non-linear stochastic PDEs. In this paper, we introduce a formalism based on kinetic theory to tackle uncertain hyperbolic systems of conservation laws with Polynomial Chaos (PC) methods. The idea is to introduce a new variable, the entropic variable, in bijection with our vector of unknowns, which we develop on the polynomial basis: by performing a Galerkin projection, we obtain a deterministic system of conservation laws. We state several properties of this deterministic system in the case of a general uncertain system of conservation laws. We then apply the method to the case of the inviscid Burgers' equation with random initial conditions and we present some preliminary results for the Euler system. We systematically compare results from our new approach to results from the stochastic Galerkin method. In the vicinity of discontinuities, the new method bounds the oscillations due to the Gibbs phenomenon to a certain range through the entropy of the system, without the use of any adaptive random space discretizations. It is found to be more precise than the stochastic Galerkin method for smooth cases but above all for discontinuous cases.
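
    To make the Galerkin-projection step concrete: for a toy uncertain advection equation $u_t + a(\xi) u_x = 0$ (a plain stochastic Galerkin setup, not the entropic-variable formulation the paper develops), projecting onto a Legendre chaos basis turns the scalar stochastic PDE into a coupled deterministic hyperbolic system for the PC modes. A minimal sketch, with illustrative parameter values:

        import numpy as np
        from numpy.polynomial.legendre import leggauss, legval

        P = 4                                   # highest retained PC mode
        xi, w = leggauss(P + 2)                 # Gauss-Legendre nodes/weights
        w = w / 2.0                             # weights for uniform density on [-1, 1]

        # Legendre basis P_0..P_P evaluated at the quadrature nodes.
        Phi = np.array([legval(xi, np.eye(P + 1)[i]) for i in range(P + 1)])
        norms = (Phi**2 * w).sum(axis=1)        # <P_i^2>

        a = 1.0 + 0.3 * xi                      # uncertain wave speed a(xi)
        # Galerkin coupling matrix A[i, j] = <a P_i P_j> / <P_i^2>.
        A = np.einsum('ik,jk,k,k->ij', Phi, Phi, a, w) / norms[:, None]

        # The PC modes u_0..u_P satisfy the deterministic hyperbolic system
        #   d/dt u_i + sum_j A[i, j] d/dx u_j = 0,
        # whose characteristic speeds are the (real) eigenvalues of A.
        print(np.sort(np.linalg.eigvals(A).real))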

  8. The use of self-quantification systems for personal health information: big data management activities and prospects

    Science.gov (United States)

    2015-01-01

    Background Self-quantification is seen as an emerging paradigm for health care self-management. Self-quantification systems (SQS) can be used for tracking, monitoring, and quantifying health aspects including mental, emotional, physical, and social aspects in order to gain self-knowledge. However, there has been a lack of a systematic approach for conceptualising and mapping the essential activities that are undertaken by individuals who are using SQS in order to improve health outcomes. In this paper, we propose a new model of personal health information self-quantification systems (PHI-SQS). PHI-SQS model describes two types of activities that individuals go through during their journey of health self-managed practice, which are 'self-quantification' and 'self-activation'. Objectives In this paper, we aimed to examine thoroughly the first type of activity in PHI-SQS which is 'self-quantification'. Our objectives were to review the data management processes currently supported in a representative set of self-quantification tools and ancillary applications, and provide a systematic approach for conceptualising and mapping these processes with the individuals' activities. Method We reviewed and compared eleven self-quantification tools and applications (Zeo Sleep Manager, Fitbit, Actipressure, MoodPanda, iBGStar, Sensaris Senspod, 23andMe, uBiome, Digifit, BodyTrack, and Wikilife), that collect three key health data types (Environmental exposure, Physiological patterns, Genetic traits). We investigated the interaction taking place at different data flow stages between the individual user and the self-quantification technology used. Findings We found that these eleven self-quantification tools and applications represent two major tool types (primary and secondary self-quantification systems). In each type, the individuals experience different processes and activities which are substantially influenced by the technologies' data management capabilities. Conclusions Self-quantification in personal health maintenance appears promising and exciting. However, more studies are needed to support its use in this field. The proposed model will in the future lead to developing a measure for assessing the effectiveness of interventions to support using SQS for health self-management (e.g., assessing the complexity of self-quantification activities, and activation of the individuals).

  9. Sz.-Nagy-Foias theory and Lax-Phillips type semigroups in the description of quantum mechanical resonances

    International Nuclear Information System (INIS)

    A quantum mechanical version of the Lax-Phillips scattering theory was recently developed. This theory is a natural framework for the description of quantum unstable systems. However, since the spectrum of the generator of evolution in this theory is unbounded from below, the existing framework does not apply to a large class of quantum mechanical scattering problems. It is shown in this work that the fundamental mathematical structure underlying the Lax-Phillips theory, i.e., the Sz.-Nagy-Foias theory of contraction operators on Hilbert space, can be used for the construction of a formalism in which models associated with a semibounded spectrum may be accommodated.

  10. Development of Quantification Method for Bioluminescence Imaging

    International Nuclear Information System (INIS)

    Optical molecular luminescence imaging is widely used for detection and imaging of bio-photons emitted by luminescent luciferase activation. The measured photons in this method provide the degree of molecular alteration or cell numbers with the advantage of high signal-to-noise ratio. To extract useful information from the measured results, an analysis based on a proper quantification method is necessary. In this research, we propose a quantification method presenting a linear response of the measured light signal to measurement time. We detected the luminescence signal by using lab-made optical imaging equipment of an animal light imaging system (ALIS) and two different kinds of light sources. One is three bacterial light-emitting sources containing different numbers of bacteria. The other is three different non-bacterial light sources emitting very weak light. By using the concepts of the candela and the flux, we could derive a simplified linear quantification formula. After experimentally measuring the light intensity, the data were processed with the proposed quantification function. We could obtain a linear response of photon counts to measurement time by applying the pre-determined quantification function. The ratio of the re-calculated photon counts to measurement time presents a constant value although different light sources were applied. The quantification function for linear response could be applicable to the standard quantification process. The proposed method could be used for exact quantitative analysis in various light imaging equipment, presenting linear response behavior of constant light-emitting sources to measurement time.

  11. Development of Quantification Method for Bioluminescence Imaging

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Hyeon Sik; Min, Jung Joon; Lee, Byeong Il [Chonnam National University Hospital, Hwasun (Korea, Republic of); Choi, Eun Seo [Chosun University, Gwangju (Korea, Republic of); Tak, Yoon O; Choi, Heung Kook; Lee, Ju Young [Inje University, Kimhae (Korea, Republic of)

    2009-10-15

    Optical molecular luminescence imaging is widely used for detection and imaging of bio-photons emitted by luminescent luciferase activation. The measured photons in this method provide the degree of molecular alteration or cell numbers with the advantage of high signal-to-noise ratio. To extract useful information from the measured results, an analysis based on a proper quantification method is necessary. In this research, we propose a quantification method presenting a linear response of the measured light signal to measurement time. We detected the luminescence signal by using lab-made optical imaging equipment of an animal light imaging system (ALIS) and two different kinds of light sources. One is three bacterial light-emitting sources containing different numbers of bacteria. The other is three different non-bacterial light sources emitting very weak light. By using the concepts of the candela and the flux, we could derive a simplified linear quantification formula. After experimentally measuring the light intensity, the data were processed with the proposed quantification function. We could obtain a linear response of photon counts to measurement time by applying the pre-determined quantification function. The ratio of the re-calculated photon counts to measurement time presents a constant value although different light sources were applied. The quantification function for linear response could be applicable to the standard quantification process. The proposed method could be used for exact quantitative analysis in various light imaging equipment, presenting linear response behavior of constant light-emitting sources to measurement time.
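
    The linearity check at the heart of the proposed method takes only a few lines; the exposure times and photon counts below are invented for illustration:

        import numpy as np

        t = np.array([10.0, 20.0, 40.0, 80.0])                # exposure times, s
        counts = np.array([1.02e4, 2.05e4, 4.01e4, 8.10e4])   # photon counts

        # For a constant source, counts grow linearly with time, so counts/t
        # estimates a constant photon flux; a linear fit checks the response.
        print("flux per exposure:", counts / t)               # ~constant if linear
        slope, intercept = np.polyfit(t, counts, 1)
        print("fitted flux:", slope, "counts/s")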

  12. Accessible quantification of multiparticle entanglement

    CERN Document Server

    Cianciaruso, Marco; Adesso, Gerardo

    2015-01-01

    Entanglement is a key ingredient for quantum technologies and a fundamental signature of quantumness in a broad range of phenomena encompassing many-body physics, thermodynamics, cosmology, and life sciences. For arbitrary multiparticle systems, the quantification of entanglement typically involves hard optimisation problems, and requires demanding tomographical techniques. In this paper we show that such difficulties can be overcome by developing an experimentally friendly method to evaluate measures of multiparticle entanglement via a geometric approach. The method provides exact analytical results for a relevant class of mixed states of $N$ qubits, and computable lower bounds to entanglement for any general state. For practical purposes, the entanglement determination requires local measurements in just three settings for any $N$. We demonstrate the power of our approach to quantify multiparticle entanglement in $N$-qubit bound entangled states and other states recently engineered in laboratory using quant...

  13. Cuantificación del carbono almacenado en formaciones vegetales amazónicas en "CICRA", Madre de Dios (Perú) / Quantification of carbon storage in Amazon vegetation types at "CICRA", Madre de Dios (Peru)

    Scientific Electronic Library Online (English)

    Martel, Carlos; Cairampoma, Lianka.

    2012-08-01

    Full Text Available The Peruvian Amazon basin is characterized by the presence of multiple vegetation types, which receive an ever greater impact from anthropogenic activities such as mining and logging. All of this, together with global climate change, creates uncertainty about the future of these forests. Identifying the levels of carbon storage in forested areas, and specifically in each vegetation type, would allow better management of conservation areas, as well as the identification of potential areas that could serve to finance carbon sequestration and other environmental services. The present study was carried out at the biological station of the Centro de Investigación y Capacitación Río Los Amigos (CICRA). Three main vegetation formations were identified at CICRA: alluvial terrace forest, flood terrace forest and Mauritia swamp (aguajal), with the alluvial terrace forests having the greatest extent and the largest amount of stored carbon. As a result, the vegetation present at CICRA was valued at around 11 million U.S. dollars. Entry into the carbon-credit market could promote the conservation of these forests.
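
    The valuation step in such studies reduces to simple arithmetic: carbon stock per hectare times area, converted from C to CO2 and multiplied by a credit price. A back-of-envelope sketch follows; all stocks, areas, and the price are illustrative placeholders, not the study's data.

```python
# Back-of-envelope carbon valuation (all numbers hypothetical).
C_TO_CO2 = 44.0 / 12.0        # mass conversion from C to CO2
price_per_t_co2 = 5.0         # USD/tCO2, assumed carbon-credit price

stocks_t_c_per_ha = {"alluvial terrace forest": 120.0,
                     "flood terrace forest": 90.0,
                     "Mauritia swamp": 70.0}
areas_ha = {"alluvial terrace forest": 300.0,
            "flood terrace forest": 100.0,
            "Mauritia swamp": 50.0}

total_usd = sum(stocks_t_c_per_ha[k] * areas_ha[k] * C_TO_CO2 * price_per_t_co2
                for k in stocks_t_c_per_ha)
print(f"estimated value: {total_usd:,.0f} USD")
```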

  14. Contribution to the theory of tool generation for the rotor gearing of screw-type machines; Beitrag zur Theorie eines abwaelzenden Werkzeuges fuer Rotorverzahnungen von Schraubenmaschinen

    Energy Technology Data Exchange (ETDEWEB)

    Svigler, J. [Westboehmische Univ. Pilsen (Czech Republic). Fak. fuer Angewandte Wissenschaften

    2001-07-01

    The paper deals with the design of screw machine rotor gearings and provides the general and particular theory of the design of a tool for gearing production. This tool is a generating (hobbing) cutter or, alternatively, a grinding or planing tool. The theory is based on kinematical principles which, in contrast to the geometrical method, provide a simple and concrete process for the production of cutters. The concrete and rather simple theory of tool production makes possible a rapid realisation of necessary modifications to the tool, which guarantees the correct gearing changes. (orig.)

  15. Method of moments for the continuous transition between the Brillouin-Wigner-type and Rayleigh-Schrödinger-type multireference coupled cluster theories.

    Czech Academy of Sciences Publication Activity Database

    Pittner, Jiří; Piecuch, P.

    2009-01-01

    Roč. 107, 8-12 (2009), s. 1209-1221. ISSN 0026-8976 R&D Projects: GA ČR GA203/07/0070; GA AV ČR 1ET400400413; GA AV ČR KSK4040110 Institutional research plan: CEZ:AV0Z40400503 Keywords: multireference coupled cluster theory * method of moments of coupled cluster equations * state-universal multireference coupled cluster approach Subject RIV: CF - Physical; Theoretical Chemistry Impact factor: 1.634, year: 2009

  16. Students' Personality Types, Intended Majors, and College Expectations: Further Evidence Concerning Psychological and Sociological Interpretations of Holland's Theory

    Science.gov (United States)

    Pike, Gary R.

    2006-01-01

    Because it focuses on the interactions between students and their environments, Holland's theory of vocational choice provides a powerful framework for studying college experiences. The present study assessed the relative merits of psychological and sociological interpretations of Holland's theory by examining the relationships among students' …

  17. Quantification model for energy consumption in edification

    Directory of Open Access Journals (Sweden)

    Mercader, Mª P.

    2012-12-01

    Full Text Available The research presented in this paper focuses on the generation of a model for the quantification of energy consumption in building. The model is based on one of the most relevant environmental impact indicators associated with the weight per m2 of construction: the energy consumed in manufacturing the materials used in building construction. The practical application of the proposed model to different building typologies in Seville provides information on the most significant building materials, subsystems and construction elements, and makes it possible to observe the influence of the built surface on the environmental impact generated. The results obtained are intended to serve as a reference for the scientific community, providing quantitative data comparable with other building types and geographical areas, and to allow the analysis and characterization of feasible solutions for reducing the environmental impact generated by the different materials, subsystems and construction elements commonly used in the building types defined in this study.
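
    The core bookkeeping of such a model can be sketched in a few lines: for one built m2, sum the mass of each material times its embodied-energy coefficient. Material names and coefficients below are illustrative assumptions, not the paper's dataset.

```python
# Embodied-energy accounting per built m2 (illustrative bill of materials).
materials_kg_per_m2 = {"concrete": 950.0, "steel": 55.0, "ceramic brick": 180.0}
embodied_MJ_per_kg = {"concrete": 1.1, "steel": 24.0, "ceramic brick": 3.0}

energy_MJ_per_m2 = sum(mass * embodied_MJ_per_kg[name]
                       for name, mass in materials_kg_per_m2.items())
print(f"embodied energy: {energy_MJ_per_m2:.0f} MJ per built m2")
```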

  18. Quantification of nanowire uptake by live cells

    KAUST Repository

    Margineanu, Michael B.

    2015-05-01

    Nanostructures fabricated by different methods have become increasingly important for various applications at the cellular level. In order to understand how these nanostructures “behave” and for studying their internalization kinetics, several attempts have been made at tagging and investigating their interaction with living cells. In this study, magnetic iron nanowires with an iron oxide layer are coated with (3-Aminopropyl)triethoxysilane (APTES), and subsequently labeled with a fluorogenic pH-dependent dye pHrodo™ Red, covalently bound to the aminosilane surface. Time-lapse live imaging of human colon carcinoma HCT 116 cells interacting with the labeled iron nanowires is performed for 24 hours. As the pHrodo™ Red conjugated nanowires are non-fluorescent outside the cells but fluoresce brightly inside, internalized nanowires are distinguished from non-internalized ones and their behavior inside the cells can be tracked for the respective time length. A machine learning-based computational framework dedicated to automatic analysis of live cell imaging data, Cell Cognition, is adapted and used to classify cells with internalized and non-internalized nanowires and subsequently determine the uptake percentage by cells at different time points. An uptake of 85 % by HCT 116 cells is observed after 24 hours incubation at NW-to-cell ratios of 200. While the approach of using pHrodo™ Red for internalization studies is not novel in the literature, this study reports for the first time the utilization of a machine-learning based time-resolved automatic analysis pipeline for quantification of nanowire uptake by cells. This pipeline has also been used for comparison studies with nickel nanowires coated with APTES and labeled with pHrodo™ Red, and another cell line derived from the cervix carcinoma, HeLa. It has thus the potential to be used for studying the interaction of different types of nanostructures with potentially any live cell types.
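
    Once a classifier has labeled each cell as containing internalized nanowires or not, the uptake percentage at each time point is simply the positive fraction. A small sketch with synthetic labels follows; the classifier itself, as in the study, would come from a tool such as Cell Cognition.

```python
# Uptake percentage from per-cell classification labels (synthetic data).
import numpy as np

rng = np.random.default_rng(0)
hours = [2, 6, 12, 24]
# 1 = cell with internalized nanowires, 0 = none; uptake grows with time here
labels_per_timepoint = {t: rng.random(500) < min(0.9, 0.05 * t) for t in hours}

for t in hours:
    labels = labels_per_timepoint[t]
    print(f"{t:>2} h: uptake = {100.0 * labels.mean():.1f} % of {labels.size} cells")
```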

  19. A Leonard-Sanders-Budiansky-Koiter-Type Nonlinear Shell Theory with a Hierarchy of Transverse-Shearing Deformations

    Science.gov (United States)

    Nemeth, Michael P.

    2013-01-01

    A detailed exposition on a refined nonlinear shell theory suitable for nonlinear buckling analyses of laminated-composite shell structures is presented. This shell theory includes the classical nonlinear shell theory attributed to Leonard, Sanders, Koiter, and Budiansky as an explicit proper subset. This approach is used in order to leverage the existing experience base and to make the theory attractive to industry. In addition, the formalism of general tensors is avoided in order to expose the details needed to fully understand and use the theory. The shell theory is based on "small" strains and "moderate" rotations, and no shell-thinness approximations are used. As a result, the strain-displacement relations are exact within the presumptions of "small" strains and "moderate" rotations. The effects of transverse-shearing deformations are included in the theory by using analyst-defined functions to describe the through-the-thickness distributions of transverse-shearing strains. Constitutive equations for laminated-composite shells are derived without using any shell-thinness approximations, and simplified forms and special cases are presented.

  20. Application of the perturbation theory-differential formalism-for sensitivity analysis in steam generators of PWR type nuclear power plants

    International Nuclear Information System (INIS)

    A homogeneous model which simulates the steady-state behavior of steam generators of PWR type reactors and uses the differential formalism of perturbation theory for analysing the sensitivity of linear and non-linear responses is presented. The PERGEVAP computer code, which calculates the temperature distribution in the steam generator and the associated importance function, was developed. The code also evaluates the effects of thermohydraulic parameter variations on selected functionals. The results obtained are compared with results obtained by the GEVAP computer code. (M.C.K.)

  1. Ex vivo activity quantification in micrometastases at the cellular scale using the α-camera technique

    DEFF Research Database (Denmark)

    Chouin, Nicolas; Lindegren, Sture

    2013-01-01

    Targeted α-therapy (TAT) appears to be an ideal therapeutic technique for eliminating malignant circulating, minimal residual, or micrometastatic cells. These types of malignancies are typically infraclinical, complicating the evaluation of potential treatments. This study presents a method of ex vivo activity quantification with an α-camera device, allowing measurement of the activity taken up by tumor cells in biologic structures a few tens of microns in size.
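
    The activity arithmetic behind such ex vivo measurements can be sketched as counts divided by detection efficiency and acquisition time, with a decay correction back to the sampling time. All numbers below (counts, efficiency, half-life, delay) are hypothetical, not the study's calibration.

```python
# Counts-to-activity conversion with decay correction (hypothetical values).
import math

counts = 1.8e4            # net counts in the region of interest
t_acq = 600.0             # s, acquisition time
efficiency = 0.45         # assumed detection efficiency (counts per decay)
half_life_s = 7.2 * 3600  # e.g. an At-211-like half-life, for illustration
delay_s = 1800.0          # time between sampling and measurement

rate = counts / (t_acq * efficiency)                              # decays/s now
activity_bq = rate * math.exp(math.log(2) * delay_s / half_life_s)  # back-corrected
print(f"activity at sampling time: {activity_bq:.1f} Bq")
```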

  2. The relationship of theory of mind and executive functions to symptom type and severity in children with autism

    OpenAIRE

    Joseph, Robert M.; Tager–flusberg, Helen

    2004-01-01

    Although neurocognitive impairments in theory of mind and in executive functions have both been hypothesized to play a causal role in autism, there has been little research investigating the explanatory power of these impairments with regard to autistic symptomatology. The present study examined the degree to which individual differences in theory of mind and executive functions could explain variations in the severity of autism symptoms. Participants included 31 verbal, school-aged children ...

  3. Damage quantification of shear buildings using deflections obtained by modal flexibility

    International Nuclear Information System (INIS)

    This paper presents a damage quantification method for shear buildings using the damage-induced inter-storey deflections (DI-IDs) estimated by the modal flexibilities from ambient vibration measurements. This study intends to provide a basis for the damage quantification problem of more complex building structures by investigating a rather idealized type of structures, shear buildings. Damage in a structure represented by loss of stiffness generally induces additional deflection, which may contain essential information about the damage. From an analytical investigation, the general equation of damage quantification by the damage-induced deflection is proposed and its special case for shear buildings is also proposed based on the damage-induced inter-storey deflection. The proposed damage quantification method is advantageous compared to conventional FE updating approaches since the number of variables in the optimization problem is only dependent on the complexity of damage parametrization, not on the complexity of the structure. For this reason, the damage quantification for shear buildings is simplified to a form that does not require any FE updating. Numerical and experimental studies on a five-storey shear building were carried out for two damage scenarios with 10% column EI reductions. From the numerical study, it was found that the lower four natural frequencies and mode shapes were enough to keep the errors in the deflection estimation and the damage quantification below 1%. From the experimental study, deflections estimated by the modal flexibilities were found to agree well with the deflections obtained from static push-over tests. Damage quantifications by the proposed method were also found to agree well with true amounts of damage obtained from static push-over tests
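
    The deflection estimate at the heart of the method can be sketched from the first few modes: with mass-normalized mode shapes, the modal flexibility is F ≈ Σ_i φ_i φ_iᵀ/ω_i², and the deflection under a load vector L is u = F·L. A toy example with synthetic mode data follows; the shapes and frequencies are stand-ins, not measurements.

```python
# Deflection from modal flexibility (synthetic 5-DOF shear-building example).
import numpy as np

def modal_flexibility(phis, omegas):
    """phis: (n_dof, n_modes) mass-normalized mode shapes (one mode per column),
    omegas: (n_modes,) natural frequencies in rad/s."""
    return (phis / omegas**2) @ phis.T  # sum_i phi_i phi_i^T / omega_i^2

omegas = np.array([10.0, 29.0, 46.0, 60.0, 70.0])
rng = np.random.default_rng(1)
phis, _ = np.linalg.qr(rng.standard_normal((5, 5)))  # orthonormal stand-in shapes

F = modal_flexibility(phis, omegas)
u = F @ np.ones(5)                                   # deflection under uniform unit load
interstorey = np.diff(np.concatenate(([0.0], u)))    # inter-storey deflections
print("inter-storey deflections:", interstorey)
```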

  4. Risk Quantification and Evaluation Modelling

    Directory of Open Access Journals (Sweden)

    Manmohan Singh

    2014-07-01

    Full Text Available In this paper the authors discuss risk quantification methods and the evaluation of risks and decision parameters used for ranking critical items and for prioritization in condition monitoring based risk and reliability centered maintenance (CBRRCM). As time passes, any equipment or product degrades to lower effectiveness and its rate of failure or malfunctioning increases, thereby lowering reliability. Thus, with the passage of time, or after a number of active tests or periods of work, the reliability of the product or system may fall to a threshold value below which it should not be allowed to dip. Hence it is necessary to fix a normal basis for determining the appropriate points in the product life cycle where predictive preventive maintenance may be applied, so that the reliability (the probability of successful functioning) can be enhanced, preferably to its original value, by reducing the failure rate and increasing the mean time between failures. This is particularly important for defence applications, where reliability is of prime concern. An attempt is made to develop a mathematical model for risk assessment and for ranking risks. Based on a likeliness coefficient and a risk coefficient, the ranking of the sub-systems can be modelled and used for CBRRCM. Defence Science Journal, Vol. 64, No. 4, July 2014, pp. 378-384, DOI: http://dx.doi.org/10.14429/dsj.64.6366

  5. Synthesis of nanodiamond derivatives carrying amino functions and quantification by a modified Kaiser test

    Directory of Open Access Journals (Sweden)

    Gerald Jarre

    2014-11-01

    Full Text Available Nanodiamonds functionalized with different organic moieties carrying terminal amino groups have been synthesized. These include conjugates generated by Diels–Alder reactions of ortho-quinodimethanes formed in situ from pyrazine and 5,6-dihydrocyclobuta[d]pyrimidine derivatives. For the quantification of primary amino groups a modified photometric assay based on the Kaiser test has been developed and validated for different types of aminated nanodiamond. The results correspond well to values obtained by thermogravimetry. The method represents an alternative wet-chemical quantification method in cases where other techniques like elemental analysis fail due to unfavourable combustion behaviour of the analyte or other impediments.
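
    The quantification step of a Kaiser-type assay is Beer-Lambert arithmetic: the absorbance gives the chromophore concentration, and the amino-group loading follows from the assay volume and the sample mass. A sketch with an assumed extinction coefficient and readings, not the paper's validated calibration:

```python
# Amino-group loading from a Kaiser-type photometric reading (assumed values).
def amino_loading_umol_per_g(absorbance, volume_mL, mass_mg,
                             epsilon_M_cm=15000.0, path_cm=1.0, dilution=1.0):
    """Beer-Lambert: c = A / (epsilon * l); loading = n / m."""
    conc_M = absorbance * dilution / (epsilon_M_cm * path_cm)
    mol = conc_M * volume_mL / 1000.0          # moles in the assay volume
    return mol * 1e6 / (mass_mg / 1000.0)      # micromol per gram of sample

# Example reading: A = 0.35 in 3 mL assay volume on 5 mg of nanodiamond
print(f"{amino_loading_umol_per_g(0.35, 3.0, 5.0):.1f} umol/g")
```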

  6. Prospects of using the second-order perturbation theory of the MP2 type in the theory of electron scattering by polyatomic molecules.

    Czech Academy of Sciences Publication Activity Database

    Čársky, Petr

    2015-01-01

    Roč. 191, č. 2015 (2015), s. 191-192. ISSN 1551-7616 R&D Projects: GA MŠk OC09079; GA MŠk(CZ) OC10046; GA ČR GA202/08/0631 Grant ostatní: COST(XE) CM0805; COST(XE) CM0601 Institutional support: RVO:61388955 Keywords: electron scattering * calculation of cross sections * second-order perturbation theory Subject RIV: CF - Physical; Theoretical Chemistry

  7. Inverse theorems in the theory of approximation of vectors in a Banach space with exponential type entire vectors

    CERN Document Server

    Torba, S

    2008-01-01

    An arbitrary operator A on a Banach space X which is the generator of a C_0-group with a certain growth condition at infinity is considered. The relationship between its exponential type entire vectors and its spectral subspaces is found. Inverse theorems on the connection between the degree of smoothness of a vector $x\in X$ with respect to the operator A, the rate of convergence to zero of the best approximation of x by exponential type entire vectors for the operator A, and the k-modulus of continuity are established. Also, a generalization of the Bernstein-type inequality is obtained. The results allow one to obtain Bernstein-type inequalities in weighted L_p spaces.

  8. Quantification of nerolidol in mouse plasma using gas chromatography-mass spectrometry.

    Science.gov (United States)

    Saito, Alexandre Yukio; Sussmann, Rodrigo Antonio Ceschini; Kimura, Emilia Akemi; Cassera, Maria Belen; Katzin, Alejandro Miguel

    2015-07-10

    Nerolidol is a naturally occurring sesquiterpene found in the essential oils of many types of flowers and plants. It is frequently used in cosmetics, as a food flavoring agent, and in cleaning products. In addition, nerolidol is used as a skin penetration enhancer for transdermal delivery of therapeutic drugs. However, nerolidol is hemolytic at low concentrations. A simple and fast GC-MS method was developed for preliminary quantification and assessment of biological interferences of nerolidol in mouse plasma after oral dosing. Calibration curves were linear in the concentration range of 0.010-5 μg/mL nerolidol in mouse plasma with correlation coefficients (r) greater than 0.99. Limits of detection and quantification were 0.0017 and 0.0035 μg/mL, respectively. The optimized method was successfully applied to the quantification of nerolidol in mouse plasma. PMID:25880240
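
    The calibration arithmetic for such an assay can be sketched as a linear fit plus the common LOD = 3.3σ/S and LOQ = 10σ/S convention (the paper may use a different convention; the data points below are synthetic):

```python
# Linear calibration with LOD/LOQ from residual noise (synthetic standards).
import numpy as np

conc = np.array([0.010, 0.050, 0.10, 0.50, 1.0, 5.0])           # ug/mL standards
response = np.array([0.011, 0.052, 0.103, 0.49, 1.02, 5.05])    # synthetic signal

slope, intercept = np.polyfit(conc, response, 1)
resid = response - (slope * conc + intercept)
sigma = resid.std(ddof=2)        # residual standard deviation of the fit

r = np.corrcoef(conc, response)[0, 1]
print(f"r = {r:.4f}, LOD = {3.3 * sigma / slope:.4f} ug/mL, "
      f"LOQ = {10 * sigma / slope:.4f} ug/mL")
```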

  9. Pancreas++: Automated Quantification of Pancreatic Islet Cells in Microscopy Images

    Directory of Open Access Journals (Sweden)

    Stuart Maudsley

    2013-01-01

    Full Text Available The microscopic image analysis of pancreatic Islet of Langerhans morphology is crucial for the investigation of diabetes and metabolic diseases. Besides the general size of the islet, the percentage and relative position of glucagon-containing alpha- and insulin-containing beta-cells are also important for pathophysiological analyses, especially in rodents. Hence, the ability to identify, quantify and spatially locate peripheral and 'involuted' alpha-cells in the islet core is an important analytical goal. There is a dearth of software available for the automated and sophisticated positional quantification of multiple cell types in the islet core. Manual analytical methods for these analyses, while relatively accurate, can suffer from a slow throughput rate as well as user-based biases. Here we describe a newly developed pancreatic islet analytical software program, Pancreas++, which facilitates the fully automated, non-biased, and highly reproducible investigation of islet area and alpha- and beta-cell quantity as well as position within the islet for either single images or large batches of fluorescent images. We demonstrate the utility and accuracy of Pancreas++ by comparing its performance to other pancreatic islet size and cell type (alpha, beta) quantification methods. Our Pancreas++ analysis was significantly faster than other methods, while still retaining low error rates and a high degree of result correlation with the manually generated reference standard.

  10. Quantification of petroleum-type hydrocarbons in avian tissue

    Science.gov (United States)

    Gay, M.L.; Belisle, A.A.; Patton, J.F.

    1980-01-01

    Summary: Methods were developed for the analysis of 16 hydrocarbons in avian tissue. Mechanical extraction with pentane was followed by clean-up on Florisil and Silicar. Residues were determined by gas-liquid chromatography and gas-liquid chromatography-mass spectrometry. The method was applied to the analysis of liver, kidney, fat, and brain tissue of mallard ducks (Anas platyrhynchos) fed a mixture of hydrocarbons. Measurable concentrations of all compounds analyzed were present in all tissues except brain. Highest concentrations were in fat.

  11. Carotid intraplaque neovascularization quantification software (CINQS).

    Science.gov (United States)

    Akkus, Zeynettin; van Burken, Gerard; van den Oord, Stijn C H; Schinkel, Arend F L; de Jong, Nico; van der Steen, Antonius F W; Bosch, Johan G

    2015-01-01

    Intraplaque neovascularization (IPN) is an important biomarker of atherosclerotic plaque vulnerability. As IPN can be detected by contrast enhanced ultrasound (CEUS), imaging biomarkers derived from CEUS may allow early prediction of plaque vulnerability. To select the best quantitative imaging biomarkers for the prediction of plaque vulnerability, a systematic analysis of IPN with existing and new analysis algorithms is necessary. Currently available commercial contrast quantification tools are not applicable for quantitative analysis of carotid IPN due to substantial motion of the carotid artery, artifacts, and intermittent perfusion of plaques. We therefore developed a specialized software package called Carotid Intraplaque Neovascularization Quantification Software (CINQS). It was designed for the effective and systematic comparison of sets of quantitative imaging biomarkers. CINQS includes several analysis algorithms for carotid IPN quantification and overcomes the limitations of current contrast quantification tools and existing carotid IPN quantification approaches. CINQS has a modular design which allows the integration of new analysis tools. Wizard-like analysis tools and its graphical user interface facilitate its usage. In this paper, we describe the concept, analysis tools, and performance of CINQS and present analysis results of 45 plaques of 23 patients. The results in 45 plaques showed excellent agreement with visual IPN scores for two quantitative imaging biomarkers (areas under the receiver operating characteristic curve of 0.92 and 0.93). PMID:25561454
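
    Scoring a quantitative imaging biomarker against visual IPN grades, as in the AUC values quoted above, amounts to computing the area under the ROC curve. A self-contained rank-based sketch with synthetic scores and labels:

```python
# Rank-based AUC of a biomarker against binary visual grades (synthetic data).
import numpy as np

def auc(scores, labels):
    """AUC via the Mann-Whitney U statistic (assumes no tied scores)."""
    order = np.argsort(scores)
    ranks = np.empty(len(scores), dtype=float)
    ranks[order] = np.arange(1, len(scores) + 1)
    pos = np.asarray(labels, dtype=bool)
    n_pos, n_neg = pos.sum(), (~pos).sum()
    return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

rng = np.random.default_rng(2)
labels = rng.random(45) < 0.4                     # synthetic "vulnerable" flags
scores = labels + 0.6 * rng.standard_normal(45)   # synthetic biomarker values
print(f"AUC = {auc(scores, labels):.2f}")
```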

  12. WaveletQuant, an improved quantification software based on wavelet signal threshold de-noising for labeled quantitative proteomic analysis

    Directory of Open Access Journals (Sweden)

    Li Song

    2010-04-01

    Full Text Available Abstract Background Quantitative proteomics technologies have been developed to comprehensively identify and quantify proteins in two or more complex samples. Quantitative proteomics based on differential stable isotope labeling is one such quantification technology. Mass spectrometric data generated for peptide quantification are often noisy, and peak detection and definition require various smoothing filters to remove noise in order to achieve accurate peptide quantification. Many traditional smoothing filters, such as the moving average filter, Savitzky-Golay filter and Gaussian filter, have been used to reduce noise in MS peaks. However, limitations of these filtering approaches often result in inaccurate peptide quantification. Here we present the WaveletQuant program, based on wavelet theory, for improved or alternative MS-based proteomic quantification. Results We developed a novel discrete wavelet transform (DWT) and a 'Spatial Adaptive Algorithm' to remove noise and to identify true peaks. We programmed and compiled WaveletQuant using Visual C++ 2005 Express Edition. We then incorporated the WaveletQuant program into the Trans-Proteomic Pipeline (TPP), a commonly used open source proteomics analysis pipeline. Conclusions We showed that WaveletQuant was able to quantify more proteins and to quantify them more accurately than ASAPRatio, a program that performs quantification in the TPP pipeline, first using known mixed ratios of yeast extracts and then using a data set from ovarian cancer cell lysates. The program and its documentation can be downloaded from our website at http://systemsbiozju.org/data/WaveletQuant.
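
    Wavelet threshold de-noising of the kind WaveletQuant builds on can be sketched with PyWavelets: decompose, soft-threshold the detail coefficients, reconstruct. The universal-threshold rule and the synthetic peak below are generic illustrations, not WaveletQuant's 'Spatial Adaptive Algorithm'.

```python
# Wavelet soft-threshold de-noising of a synthetic MS-like peak.
import numpy as np
import pywt

t = np.linspace(0, 1, 1024)
peak = np.exp(-0.5 * ((t - 0.5) / 0.01) ** 2)               # synthetic peak
noisy = peak + 0.05 * np.random.default_rng(3).standard_normal(t.size)

coeffs = pywt.wavedec(noisy, "db4", level=5)
sigma = np.median(np.abs(coeffs[-1])) / 0.6745              # robust noise estimate
thr = sigma * np.sqrt(2 * np.log(noisy.size))               # universal threshold
denoised = pywt.waverec(
    [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]],
    "db4")
print("peak height before/after de-noising:", noisy.max(), denoised.max())
```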

  13. Theory of thermally activated vortex bundle flow over the directional-dependent potential barriers in type-II superconductors

    OpenAIRE

    Chen, Wei Yeu

    2009-01-01

    The thermally activated vortex bundle flow over the directional-dependent energy barrier in type-II superconductors is investigated. The coherent oscillation frequency and the mean direction of the random collective pinning force of the vortex bundles are evaluated by applying the random walk theorem. The thermally activated vortex bundle flow velocity is obtained. The temperature- and field-dependent Hall and longitudinal resistivities induced by the bundle flow for type-II ...

  14. Quantification of fuel rod cladding failure during LOCA accident

    International Nuclear Information System (INIS)

    The paper describes a methodology for quantification of fuel cladding failure as a result of a Loss of Coolant Accident (LOCA) for the WWER-440 reactor type. The methodology is based on external coupling of the thermo-hydraulic code RELAP5 and the thermo-mechanical code TRANSURANUS. The thermo-hydraulic response of the unit to the accident is simulated by the RELAP5 code, providing initial and boundary conditions for the thermo-mechanical simulation by the TRANSURANUS code. The cladding failure criterion of the TRANSURANUS code, derived and implemented into the code in the framework of the EXTRA project of the EURATOM Fifth Framework Programme, is used. Cladding failure probability is evaluated by a Monte Carlo algorithm varying the outer cladding temperature. In the second part of the paper, an example of application of the methodology to a typical maximum design accident of the WWER-440 is given, presenting every step of the methodology and the typical failure rate for this type of accident (Authors)
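
    The Monte Carlo step can be sketched by sampling the outer cladding temperature around its thermo-hydraulic prediction and counting exceedances of a failure criterion. The criterion and all numbers below are simplified stand-ins, not the TRANSURANUS model:

```python
# Monte Carlo cladding failure probability (illustrative stand-in criterion).
import numpy as np

rng = np.random.default_rng(4)
t_nominal = 1050.0   # K, hypothetical predicted peak cladding temperature
t_sigma = 40.0       # K, assumed uncertainty of the prediction

def fails(temperature_K, burst_limit_K=1150.0):
    """Stand-in failure criterion; a real criterion also depends on
    hoop stress, heating rate, oxidation, etc."""
    return temperature_K > burst_limit_K

samples = rng.normal(t_nominal, t_sigma, 100_000)
print(f"cladding failure probability ~ {fails(samples).mean():.4f}")
```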

  15. Quantifications and Modeling of Human Failure Events in a Fire PSA

    International Nuclear Information System (INIS)

    USNRC and EPRI developed guidance, 'Fire Human Reliability Analysis Guidelines, NUREG-1921', for estimating human error probabilities (HEPs) for HFEs under fire conditions. NUREG-1921 classifies HFEs into four types associated with the following human actions: - Type 1: New and existing Main Control Room (MCR) actions - Type 2: New and existing ex-MCR actions - Type 3: Actions associated with using alternate shutdown means (ASD) - Type 4: Actions relating to errors of commission (EOCs) or errors of omission (EOOs) as a result of incorrect indications (SPI) In this paper, approaches for the quantification and modeling of HFEs related to Type 1, 2 and 3 human actions are introduced. This paper introduces the human reliability analysis process for a fire PSA of Hanul Unit 3. A multiplier of 10 was used to re-estimate the HEPs for the pre-existing internal human actions. The HEPs for all ex-MCR actions were assumed to be one. New MCR human actions were quantified using the scoping analysis method of NUREG-1921. If a quantified human action was identified to be risk-significant, detailed approaches (modeling and quantification) were used to incorporate fire situations into it. Multiple HFEs for a single human action were defined and separately quantified to incorporate the specific fire situations into them. From this study, we can confirm that the modeling as well as the quantification of human actions is very important for treating them appropriately in PSA logic structures

  16. Model based quantification of EELS spectra

    International Nuclear Information System (INIS)

    Recent advances in model based quantification of electron energy loss spectra (EELS) are reported. The maximum likelihood method for the estimation of physical parameters describing an EELS spectrum, the validation of the model used in this estimation procedure, and the computation of the attainable precision, that is, the theoretical lower bound on the variance of these estimates, are discussed. Experimental examples on Au and GaAs samples show the power of the maximum likelihood method and show that the theoretical prediction of the attainable precision can be closely approached even for spectra with overlapping edges where conventional EELS quantification fails. To provide end-users with a low threshold alternative to conventional quantification, a user friendly program was developed which is freely available under a GNU public license
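
    A maximum likelihood fit of a spectral model to Poisson-distributed counts, the estimation procedure discussed above, can be sketched as minimizing the Poisson negative log-likelihood. The power-law background plus edge shape below is a toy stand-in for a real EELS model:

```python
# Maximum likelihood fit of a toy EELS-like model to Poisson counts.
import numpy as np
from scipy.optimize import minimize

E = np.linspace(300.0, 600.0, 300)              # energy-loss axis, eV
edge = (E > 401.0) * (E / 401.0) ** -3.0        # crude edge shape (toy)

def model(p):
    A, r, s = p                                  # background amplitude, exponent, edge scale
    return A * (E / 300.0) ** -r + s * edge

rng = np.random.default_rng(5)
data = rng.poisson(model([500.0, 3.0, 80.0]))    # synthetic "measured" spectrum

def nll(p):                                      # Poisson negative log-likelihood
    mu = np.clip(model(p), 1e-9, None)
    return np.sum(mu - data * np.log(mu))

fit = minimize(nll, x0=[400.0, 2.5, 50.0], method="Nelder-Mead")
print("estimated (A, r, edge scale):", fit.x)
```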

  17. Quantification of seismic risk. Modeling of dependencies

    International Nuclear Information System (INIS)

    The German PSA Guideline (as issued in 2005) includes methods for conducting risk analyses for internal and external hazards. These analyses are part of the comprehensive Probabilistic Safety Analysis (PSA) that has to be performed within the safety reviews for German nuclear power plants (NPP). In the recent past, the analytical tools for hazards established in this guideline have been challenged, particularly in the quantification of seismically induced core damage probabilities. This paper contains the results of recent research and development activities regarding Seismic PSA. New developments are presented on the comprehensive consideration of dependencies in modeling the seismic failure behavior of a NPP. The accident at the Fukushima Dai-ichi NPP in March 2011 gave reason to re-examine the models and results used in calculating seismic risk. Based on a general definition of risk, a model for estimating the seismically induced core damage probability is derived stepwise. It is assumed that the results of site specific Probabilistic Seismic Hazard Assessments (PSHA) are known for the NPP site under consideration. All possible hazard, event, structure, system or component dependencies which have to be considered in case of an earthquake are identified, analysed and assessed. Proposals for modelling each type of dependency identified are presented. The following dependencies are considered in this context: hazard-related dependencies, dependencies on the level of initiating events, and dependencies regarding the failure of structures, systems and components (SSC). It is examined to what extent these dependencies have been considered so far in seismic PSA models and what the consequences of neglecting them may be. The search for and assessment of dependencies will be implemented in the systematic procedure used to compile the seismic equipment list (SEL). The SEL contains all SSC whose failures contribute to the seismically induced core damage probability. In a Seismic PSA, a vast quantity of data sets has to be handled to characterize SSC fragilities (which depend on the intensity of the earthquake) as well as all types of dependencies. For that purpose, a database is being developed. (author)
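
    The quantity at stake is usually obtained by convolving the site hazard with the plant-level fragility, CDF = ∫ |dH/da| · P_fail(a) da. A sketch with an illustrative hazard curve and lognormal fragility; the parameters are assumed, not taken from any plant:

```python
# Seismic core damage frequency as hazard-fragility convolution (toy numbers).
import numpy as np
from scipy.stats import norm

a = np.linspace(0.01, 3.0, 2000)          # peak ground acceleration grid, g
H = 1e-4 * a ** -2.0                      # hazard exceedance frequency per year (toy)
density = -np.gradient(H, a)              # occurrence density |dH/da|

Am, beta = 0.9, 0.45                      # median capacity (g), composite log-std
p_fail = norm.cdf(np.log(a / Am) / beta)  # lognormal plant-level fragility

cdf = np.trapz(density * p_fail, a)
print(f"seismic core damage frequency ~ {cdf:.2e} per year")
```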

  18. Trace elements quantification in Portuguese red wines

    OpenAIRE

    Santos, Susana Isabel Barros Dos

    2011-01-01

    The aim of this thesis is to characterize Portuguese red wines in terms of trace element composition. The wines were chosen so that the whole country was represented and studied. For trace element quantification (As, Hg, Cd, Ni and Pb), various sample treatments were tested, including, for all trace elements, acid digestion and the presence or absence of a spike. The need for H2O2 addition in order to oxidize organic compounds was analyzed for Hg, Cd, Ni and Pb. Quantification of all trace el...

  19. Analysis of New Type Air-conditioning for Loom Based on CFD Simulation and Theory of Statistics

    OpenAIRE

    Ruiliang Yang; Yide Zhou; Nannan Zhao; Gaoju Song

    2011-01-01

    Based on the theory of statistics, the main factors affecting the performance of large- and small-zone ventilation in loom workshops are analysed using CFD simulation in this paper. Firstly, an orthogonal experimental table with four factors and three levels is applied to the CFD simulation, and the order of the four factors from major to minor is obtained, which provides a theoretical basis for design and operation. Then the single-factor experiment method is applied to the CFD simulation, and the effect of changing a certain factor can be obtained w...

  20. Effects of three training types on vitality among older adults: A self-determination theory perspective

    OpenAIRE

    Solberg, Paul Andre; Hopkins, Will; Ommundsen, Yngvar; Halvari, Hallgeir

    2012-01-01

    Objectives To investigate effects of endurance, functional and strength training on subjective vitality in older adults. Using the self-determination theory (SDT) framework we tested the moderating effects of autonomy support and mediating effects of need satisfaction on participants’ changes in vitality. Design Parallel-groups randomized controlled trial. Methods 138 older adults (M = 74.2 years, SD = 4.5) were randomized to a training group or wait-list control, with assessments at base...

  1. Application of perturbation theory to sensitivity calculations of PWR type reactor cores using the two-channel model

    International Nuclear Information System (INIS)

    Sensitivity calculations are very important in the design and safety of nuclear reactor cores. Large codes incorporating a great number of physical considerations have been used to perform sensitivity studies; however, these codes need long computation times, involving high costs. Perturbation theory has therefore constituted an efficient and economical method for performing sensitivity analysis. The present work is an application of perturbation theory (matricial formalism) to a simplified model of DNB (Departure from Nucleate Boiling) analysis to perform sensitivity calculations in PWR cores. Expressions to calculate the sensitivity coefficients of enthalpy and coolant velocity with respect to coolant density and hot channel area were developed from the proposed model. The CASNUR.FOR code to evaluate these sensitivity coefficients was written in Fortran. The comparison between results obtained from the matricial formalism of perturbation theory and those obtained directly from the proposed model makes evident the efficiency and potential of this perturbation method for sensitivity calculations in nuclear reactor cores (author). 23 refs, 4 figs, 7 tabs

  2. Some Exact Solutions of Bianchi Type-II String Cosmological Models with Magnetic Field in Brans-Dicke Theory of Gravitation

    Science.gov (United States)

    Sharma, N. K.; Singh, J. K.

    2015-05-01

    The spatially homogeneous and totally anisotropic Bianchi type-II cosmological solutions of massive strings have been investigated, in the presence as well as the absence of a magnetic field, in the framework of the Brans-Dicke theory of gravitation (Brans and Dicke, Phys. Rev. 124, 925, 1961). The energy conditions of the models have been examined. With the help of Takabayasi's equation of state, some exact solutions of this model have been obtained. The physical and kinematical behaviors of the models have also been discussed.

  3. Study on exploration theory and SAR technology for interlayer oxidation zone sandstone type uranium deposit and its application in Eastern Jungar Basin

    International Nuclear Information System (INIS)

    Starting from an analysis of the features of the metallogenetic epoch and the spatial distribution of typical interlayer oxidation zone sandstone type uranium deposits both in China and abroad, and of their relation to basin evolution, the authors propose the idea that the last unconformity mainly controls the metallogenetic epoch and that the strength of tectonic activity after the last unconformity determines the deposit space. An exploration theory whose kernel is to proceed from the newest events to the older ones is put forward. The means and methods of using SAR technology to identify the key ore-controlling factors are discussed. An application study in the Eastern Jungar Basin is performed

  4. BCS-like action and Lagrangian from the gradient expansion of the determinant of Fermi fields in QCD type, non-Abelian gauge theories with chiral anomalies

    OpenAIRE

    Mieck, Bernhard

    2009-01-01

    An effective field theory of BCS quark pairs is derived from an ordinary QCD type path integral with SU(3) non-Abelian gauge fields. We consider the BCS quark pairs as constituents of nuclei and as the remaining degrees of freedom in a coset decomposition SO(M,M)/U(M)xU(M) of a corresponding total self-energy. The underlying dimension 'M=24' is determined by the product of '2' isospin degrees of freedom, by the 4x4 Dirac gamma matrices with factor '4' and the '3' colour degr...

  5. Direct Theorems in the Theory of Approximation of the Banach Space Vectors by Entire Vectors of Exponential Type

    CERN Document Server

    Grushka, Ya

    2007-01-01

    For an arbitrary operator A on a Banach space X which is the generator of a C_0-group with a certain growth condition at infinity, direct theorems are given on the connection between the degree of smoothness of a vector $x\in X$ with respect to the operator A, the order of convergence to zero of the best approximation of x by exponential type entire vectors for the operator A, and the k-modulus of continuity. The results obtained allow one to acquire Jackson-type inequalities in many classical spaces of periodic functions and weighted $L_p$ spaces.

  6. Quantification of margins and uncertainties: A probabilistic framework

    International Nuclear Information System (INIS)

    Quantification of margins and uncertainties (QMU) was originally introduced as a framework for assessing confidence in nuclear weapons, and has since been extended to more general complex systems. We show that when uncertainties are strictly bounded, QMU is equivalent to a graphical model, provided confidence is identified with reliability one. In the more realistic case that uncertainties have long tails, we find that QMU confidence is not always a good proxy for reliability, as computed from the graphical model. We explore the possibility of defining QMU in terms of the graphical model, rather than through the original procedures. The new formalism, which we call probabilistic QMU, or pQMU, is fully probabilistic and mathematically consistent, and shows how QMU may be interpreted within the framework of system reliability theory.
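
    The distinction the paper draws can be illustrated in a few lines: with a long-tailed uncertainty, a comfortable margin-to-uncertainty ratio can coexist with a non-negligible failure probability. The distributions and numbers below are illustrative only:

```python
# M/U ratio vs. probabilistic reliability under a long-tailed uncertainty.
import numpy as np

rng = np.random.default_rng(6)
threshold = 10.0
performance = 7.0 + rng.standard_t(df=2, size=1_000_000)  # heavy-tailed spread

margin = threshold - 7.0          # nominal margin
uncertainty = 1.0                 # nominal "one sigma" uncertainty bound
print("M/U ratio:", margin / uncertainty)                        # looks comfortable
print("failure probability:", (performance > threshold).mean())  # yet not negligible
```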

  7. Splitting the Reference Time: Temporal Anaphora and Quantification in DRT

    CERN Document Server

    Nelken, Rani; Francez, Nissim

    1995-01-01

    This paper presents an analysis of temporal anaphora in sentences which contain quantification over events, within the framework of Discourse Representation Theory. The analysis in (Partee 1984) of quantified sentences, introduced by a temporal connective, gives the wrong truth-conditions when the temporal connective in the subordinate clause is "before" or "after". This problem has been previously analyzed in (de Swart 1991) as an instance of the proportion problem, and given a solution from a Generalized Quantifier approach. By using a careful distinction between the different notions of reference time, based on (Kamp and Reyle 1993), we propose a solution to this problem, within the framework of DRT. We show some applications of this solution to additional temporal anaphora phenomena in quantified sentences.

  8. Entanglement quantification by local unitaries

    OpenAIRE

    Monras, A.; Adesso, G.; Giampaolo, S. M.; Gualdi, G.; Davies, G. B.; Illuminati, F.

    2011-01-01

    Invariance under local unitary operations is a fundamental property that must be obeyed by every proper measure of quantum entanglement. However, this is not the only aspect of entanglement theory where local unitaries play a relevant role. In the present work we show that the application of suitable local unitary operations defines a family of bipartite entanglement monotones, collectively referred to as "mirror entanglement". They are constructed by first considering the (...

  9. Nonlocal problems for one class of nonlinear operator equations that arise in the theory of Sobolev type equations

    International Nuclear Information System (INIS)

    This paper considers Hilbert spaces forming a sequence of compact embeddings. Four nonlocal problems are studied for a nonlinear equation. Examples are given for nonlinear dissipative equations of Sobolev type that are reducible to an abstract nonlinear equation (e.g. equations of motion of a Kelvin-Voight fluid). 28 refs

  10. Theory of quantum frequency conversion and type-II parametric down-conversion in the high-gain regime

    International Nuclear Information System (INIS)

    Frequency conversion (FC) and type-II parametric down-conversion (PDC) processes serve as basic building blocks for the implementation of quantum optical experiments: type-II PDC enables the efficient creation of quantum states such as photon-number states and Einstein–Podolsky–Rosen (EPR)-states. FC gives rise to technologies enabling efficient atom–photon coupling, ultrafast pulse gates and enhanced detection schemes. However, despite their widespread deployment, their theoretical treatment remains challenging. Especially the multi-photon components in the high-gain regime as well as the explicit time-dependence of the involved Hamiltonians hamper an efficient theoretical description of these nonlinear optical processes. In this paper, we investigate these effects and put forward two models that enable a full description of FC and type-II PDC in the high-gain regime. We present a rigorous numerical model relying on the solution of coupled integro-differential equations that covers the complete dynamics of the process. As an alternative, we develop a simplified model that, at the expense of neglecting time-ordering effects, enables an analytical solution. While the simplified model approximates the correct solution with high fidelity in a broad parameter range, sufficient for many experimental situations, such as FC with low efficiency, entangled photon-pair generation and the heralding of single photons from type-II PDC, our investigations reveal that the rigorous model predicts a decreased performance for FC processes in quantum pulse gate applications and an enhanced EPR-state generation rate during type-II PDC, when EPR squeezing values above 12 dB are considered. (paper)

  11. The Types of Axisymmetric Exact Solutions Closely Related to n-SOLITONS for Yang-Mills Theory

    Science.gov (United States)

    Zhong, Zai Zhe

    In this letter, we point out that if a symmetric 2×2 real matrix M(ρ,z) obeys the Belinsky-Zakharov equation and |det(M)|=1, then an axisymmetric Bogomol'nyi field exact solution for the Yang-Mills-Higgs theory can be given. By using the inverse scattering technique, some special Bogomol'nyi field exact solutions, which are closely related to the true solitons, are generated. In particular, the Schwarzschild-like solution is a two-soliton-like solution.

  12. Advancing agricultural greenhouse gas quantification*

    Science.gov (United States)

    Olander, Lydia; Wollenberg, Eva; Tubiello, Francesco; Herold, Martin

    2013-03-01

    1. Introduction
    Better information on greenhouse gas (GHG) emissions and mitigation potential in the agricultural sector is necessary to manage these emissions and identify responses that are consistent with the food security and economic development priorities of countries. Critical activity data (what crops or livestock are managed in what way) are poor or lacking for many agricultural systems, especially in developing countries. In addition, the currently available methods for quantifying emissions and mitigation are often too expensive or complex or not sufficiently user friendly for widespread use. The purpose of this focus issue is to capture the state of the art in quantifying greenhouse gases from agricultural systems, with the goal of better understanding our current capabilities and near-term potential for improvement, with particular attention to quantification issues relevant to smallholders in developing countries. This work is timely in light of international discussions and negotiations around how agriculture should be included in efforts to reduce and adapt to climate change impacts, and considering that significant climate financing to developing countries in post-2012 agreements may be linked to their increased ability to identify and report GHG emissions (Murphy et al 2010, CCAFS 2011, FAO 2011).
    2. Agriculture and climate change mitigation
    The main agricultural GHGs, methane and nitrous oxide, account for 10%-12% of anthropogenic emissions globally (Smith et al 2008), or around 50% and 60% of total anthropogenic methane and nitrous oxide emissions, respectively, in 2005. Net carbon dioxide fluxes between agricultural land and the atmosphere linked to food production are relatively small, although significant carbon emissions are associated with degradation of organic soils for plantations in tropical regions (Smith et al 2007, FAO 2012). Population growth and shifts in dietary patterns toward more meat and dairy consumption will lead to increased emissions unless we improve production efficiencies and management. Developing countries currently account for about three-quarters of direct emissions and are expected to be the most rapidly growing emission sources in the future (FAO 2011).
    Reducing agricultural emissions and increasing carbon sequestration in the soil and biomass has the potential to reduce agriculture's contribution to climate change by 5.5-6.0 gigatons (Gt) of carbon dioxide equivalent (CO2eq)/year. Economic potentials, which take into account costs of implementation, range from 1.5 to 4.3 Gt CO2eq/year, depending on marginal abatement costs assumed and financial resources committed, with most of this potential in developing countries (Smith et al 2007). The opportunity for mitigation in agriculture is thus significant, and, if realized, would contribute to making this sector carbon neutral. Yet it is only through a robust and shared understanding of how much carbon can be stored or how much CO2 is reduced from mitigation practices that informed decisions can be made about how to identify, implement, and balance a suite of mitigation practices as diverse as enhancing soil organic matter, increasing the digestibility of feed for cattle, and increasing the efficiency of nitrogen fertilizer applications. Only by selecting a portfolio of options adapted to regional characteristics and goals can mitigation needs be best matched to also serve rural development goals, including food security and increased resilience to climate change.
    Expansion of agricultural land also remains a major contributor of greenhouse gases, with deforestation, largely linked to clearing of land for cultivation or pasture, generating 80% of emissions from developing countries (Hosonuma et al 2012). There are clear opportunities for these countries to address mitigation strategies from the forest and agriculture sector, recognizing that agriculture plays a large role in economic and development potential. In this context, multiple development goals can be reinforced by specific climate funding granted on the basis of

  13. Anwendung der "Uncertainty Quantification" bei eisenbahndynamischen Problemen / Application of uncertainty quantification to problems in railway vehicle dynamics

    DEFF Research Database (Denmark)

    Bigoni, Daniele; Engsig-Karup, Allan Peter

    2013-01-01

    The paper describes the results of the application of "Uncertainty Quantification" methods in railway vehicle dynamics. The system parameters are given by probability distributions. The results of the application of the Monte-Carlo and generalized Polynomial Chaos methods to a simple bogie model will be discussed.
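
    The Monte-Carlo side of such a study can be sketched by propagating a parameter distribution through a response function and reading off output statistics. The "critical speed" model below is a placeholder, not a bogie simulation:

```python
# Monte Carlo propagation of a parameter distribution (toy response model).
import numpy as np

rng = np.random.default_rng(7)
stiffness = rng.normal(1.0, 0.1, 50_000)   # normalized suspension stiffness samples

def critical_speed(k):
    """Toy response: critical hunting speed as a function of stiffness."""
    return 60.0 * np.sqrt(k)

v = critical_speed(stiffness)
print(f"mean = {v.mean():.2f} m/s, std = {v.std():.2f} m/s, "
      f"5th percentile = {np.percentile(v, 5):.2f} m/s")
```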

  14. Dynamic behaviors of spin-1/2 bilayer system within Glauber-type stochastic dynamics based on the effective-field theory

    International Nuclear Information System (INIS)

    The dynamic phase transitions (DPTs) and dynamic phase diagrams of the kinetic spin-1/2 bilayer system in the presence of a time-dependent oscillating external magnetic field are studied by using Glauber-type stochastic dynamics based on the effective-field theory with correlations for the ferromagnetic/ferromagnetic (FM/FM), antiferromagnetic/ferromagnetic (AFM/FM) and antiferromagnetic/antiferromagnetic (AFM/AFM) interactions. The time variations of average magnetizations and the temperature dependence of the dynamic magnetizations are investigated. The dynamic phase diagrams for the amplitude of the oscillating field versus temperature were presented. The results are compared with the results of the same system within Glauber-type stochastic dynamics based on the mean-field theory. - Highlights: • The Ising bilayer system is investigated within the Glauber dynamics based on EFT. • The time variations of average order parameters to find phases are studied. • The dynamic phase diagrams are found for the different interaction parameters. • The system displays the critical points as well as a re-entrant behavior

  15. Random data Cauchy theory for nonlinear wave equations of power-type on $\\mathbb{R}^3$

    OpenAIRE

    Luhrmann, Jonas; Mendelson, Dana

    2013-01-01

    We consider the defocusing nonlinear wave equation of power-type on $\\mathbb{R}^3$. We establish an almost sure global existence result with respect to a suitable randomization of the initial data. In particular, this provides examples of initial data of super-critical regularity which lead to global solutions. The proof is based upon Bourgain's high-low frequency decomposition and improved averaging effects for the free evolution of the randomized initial data.

  16. A Cahn-Hilliard-type phase-field theory for species diffusion coupled with large elastic deformations: Application to phase-separating Li-ion electrode materials

    Science.gov (United States)

    Di Leo, Claudio V.; Rejovitzky, Elisha; Anand, Lallit

    2014-10-01

    We formulate a unified framework of balance laws and thermodynamically-consistent constitutive equations which couple Cahn-Hilliard-type species diffusion with large elastic deformations of a body. The traditional Cahn-Hilliard theory, which is based on the species concentration c and its spatial gradient ∇c, leads to a partial differential equation for the concentration which involves fourth-order spatial derivatives in c; this necessitates use of basis functions in finite-element solution procedures that are piecewise smooth and globally C1-continuous. In order to use standard C0-continuous finite-elements to implement our phase-field model, we use a split-method to reduce the fourth-order equation into two second-order partial differential equations (pdes). These two pdes, when taken together with the pde representing the balance of forces, represent the three governing pdes for chemo-mechanically coupled problems. These are amenable to finite-element solution methods which employ standard C0-continuous finite-element basis functions. We have numerically implemented our theory by writing a user-element subroutine for the widely used finite-element program Abaqus/Standard. We use this numerically implemented theory to first study the diffusion-only problem of spinodal decomposition in the absence of any mechanical deformation. Next, we use our fully coupled theory and numerical-implementation to study the combined effects of diffusion and stress on the lithiation of a representative spheroidal-shaped particle of a phase-separating electrode material.
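
    One common form of the split (notation assumed here, not copied from the paper) introduces the chemical potential μ as an independent field, so the single fourth-order equation in c becomes two coupled second-order equations:

```latex
% Mixed (split) form of a Cahn-Hilliard-type equation: psi is the free
% energy density, lambda the gradient-energy coefficient, m(c) the mobility.
\mu = \frac{\partial \psi}{\partial c} - \lambda\, \Delta c ,
\qquad
\frac{\partial c}{\partial t} = \nabla \cdot \bigl( m(c)\, \nabla \mu \bigr) .
```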

  17. Preference for a vanishingly small cosmological constant in supersymmetric vacua in a Type IIB string theory model

    International Nuclear Information System (INIS)

    We study the probability distribution P(Λ) of the cosmological constant Λ in a specific set of KKLT type models of supersymmetric IIB vacua. We show that, as we sweep through the quantized flux values in this flux compactification, P(Λ) behaves divergently at Λ=0 and the median magnitude of Λ drops exponentially as the number of complex structure moduli h^{2,1} increases. Also, owing to the hierarchical and approximate no-scale structure, the probability of having a positive Hessian (mass-squared matrix) approaches unity as h^{2,1} increases.

  18. Exploring a type of central pattern generator based on Hindmarsh-Rose model: from theory to application.

    Science.gov (United States)

    Zhang, Dingguo; Zhang, Qing; Zhu, Xiangyang

    2015-02-01

    This paper proposes the idea that the Hindmarsh-Rose (HR) neuronal model can be used to develop a new type of central pattern generator (CPG). Some key properties of the HR model are studied and proved to meet the requirements of a CPG. Pros and cons of the HR model are provided. A CPG network based on the HR model is developed and the related properties are investigated. We explore the bipedal primary gaits generated by the CPG network. The preliminary applications of the HR model are tested on a humanoid locomotion model and a functional electrical stimulation (FES) walking system. The positive results of stimulation and experiment show the feasibility of the HR model as a valid CPG. PMID:25146328
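
    For reference, the HR unit such a CPG is built from can be integrated directly; the sketch below uses the classic textbook parameter values, not the paper's tuning:

```python
# Integrate a single Hindmarsh-Rose neuron and count spikes.
import numpy as np
from scipy.integrate import solve_ivp

def hindmarsh_rose(t, u, I=3.0, a=1.0, b=3.0, c=1.0, d=5.0,
                   r=0.006, s=4.0, x_rest=-1.6):
    x, y, z = u
    return [y - a * x**3 + b * x**2 - z + I,   # membrane potential
            c - d * x**2 - y,                  # fast recovery variable
            r * (s * (x - x_rest) - z)]        # slow adaptation (bursting)

sol = solve_ivp(hindmarsh_rose, (0.0, 2000.0), [-1.6, 0.0, 0.0], max_step=0.1)
x = sol.y[0]
spikes = ((x[1:] > 1.0) & (x[:-1] <= 1.0)).sum()   # upward threshold crossings
print("spike count over the run:", int(spikes))
```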

  19. Entanglement quantification by local unitaries

    CERN Document Server

    Monras, A; Giampaolo, S M; Gualdi, G; Davies, G B; Illuminati, F

    2011-01-01

    Invariance under local unitary operations is a fundamental property that must be obeyed by every proper measure of quantum entanglement. However, this is not the only aspect of entanglement theory where local unitaries play a relevant role. In the present work we show that the application of suitable local unitary operations defines a family of bipartite entanglement monotones, collectively referred to as "shield entanglement". They are constructed by first considering the (squared) Hilbert-Schmidt distance of the state from the set of states obtained by applying to it a given local unitary. To the action of each different local unitary there corresponds a different distance. We then minimize these distances over the sets of local unitaries with different spectra, obtaining an entire family of different entanglement monotones. We show that these shield entanglement monotones are organized in a hierarchical structure, and we establish the conditions that need to be imposed on the spectrum of a local unitary f...

  20. Effective field theory and Ab-initio calculation of p-type (Ga, Fe)N within LDA and SIC approximation

    Energy Technology Data Exchange (ETDEWEB)

    Salmani, E. [LMPHE, associé au CNRST (URAC 12), Faculté des Sciences, Université Mohammed V-Agdal, Rabat (Morocco); Mounkachi, O. [Institute of Nanomaterials and Nanotechnology, MAScIR, Rabat (Morocco); Ez-Zahraouy, H., E-mail: ezahamid@fsr.ac.ma [LMPHE, associé au CNRST (URAC 12), Faculté des Sciences, Université Mohammed V-Agdal, Rabat (Morocco); El Kenz, A. [LMPHE, associé au CNRST (URAC 12), Faculté des Sciences, Université Mohammed V-Agdal, Rabat (Morocco); Hamedoun, M. [Institute of Nanomaterials and Nanotechnology, MAScIR, Rabat (Morocco); Benyoussef, A. [LMPHE, associé au CNRST (URAC 12), Faculté des Sciences, Université Mohammed V-Agdal, Rabat (Morocco); Institute of Nanomaterials and Nanotechnology, MAScIR, Rabat (Morocco); Hassan II Academy of Science and Technology, Rabat (Morocco)

    2013-03-15

    Based on first-principles spin-density functional calculations, using the Korringa-Kohn-Rostoker method combined with the coherent potential approximation, we investigated the half-metallic ferromagnetic behavior of (Ga, Fe)N co-doped with carbon within the self-interaction-corrected local density approximation. The mechanism of hybridization and interaction between magnetic ions in p-type (Ga, Fe)N is investigated. The stability energy of the ferromagnetic and disordered local moment states was calculated for different carbon concentrations. The local density and the self-interaction-corrected approximations have been used to explain the strong ferromagnetic interaction observed and the mechanism that stabilizes this state. The transition temperature to the ferromagnetic state has been calculated within the effective field theory, with a Honmura-Kaneyoshi differential operator technique. - Highlights: • The paper focuses on the magnetic properties and electronic structure of p-type (Ga, Fe)N within the LDA and SIC approximations. • These methods allow us to explain the strong ferromagnetic interaction observed, the mechanism of its stability, and the mechanism of hybridization and interaction between magnetic ions in p-type (Ga, Fe)N. • The results obtained are interesting and can serve as a reference in the field of dilute magnetic semiconductors.

  1. Effective field theory and Ab-initio calculation of p-type (Ga, Fe)N within LDA and SIC approximation

    International Nuclear Information System (INIS)

    Based on first-principles spin-density functional calculations, using the Korringa–Kohn–Rostoker method combined with the coherent potential approximation, we investigated the half-metallic ferromagnetic behavior of (Ga, Fe)N co-doped with carbon within the self-interaction-corrected local density approximation. The mechanism of hybridization and interaction between magnetic ions in p-type (Ga, Fe)N is investigated. The stability energy of the ferromagnetic and disordered local moment states was calculated for different carbon concentrations. The local density and the self-interaction-corrected approximations have been used to explain the strong ferromagnetic interaction observed and the mechanism that stabilizes this state. The transition temperature to the ferromagnetic state has been calculated within the effective field theory, with a Honmura–Kaneyoshi differential operator technique. - Highlights: • The paper focuses on the magnetic properties and electronic structure of p-type (Ga, Fe)N within the LDA and SIC approximations. • These methods allow us to explain the strong ferromagnetic interaction observed, the mechanism of its stability, and the mechanism of hybridization and interaction between magnetic ions in p-type (Ga, Fe)N. • The results obtained are interesting and can serve as a reference in the field of dilute magnetic semiconductors.

  2. Predictive Game Theory

    Science.gov (United States)

    Wolpert, David H.

    2005-01-01

    Probability theory governs the outcome of a game; there is a distribution over mixed strategies, not a single "equilibrium". To predict a single mixed strategy, one must use a loss function external to the game's players. This provides a quantification of any strategy's rationality. We prove that rationality falls as the cost of computation rises (for players who have not previously interacted). All of this extends to games with varying numbers of players.

  3. New approach for the quantification of processed animal proteins in feed using light microscopy.

    Science.gov (United States)

    Veys, P; Baeten, V

    2010-07-01

    A revision of European Union's total feed ban on animal proteins in feed will need robust quantification methods, especially for control analyses, if tolerance levels are to be introduced, as for fishmeal in ruminant feed. In 2006, a study conducted by the Community Reference Laboratory for Animal Proteins in feedstuffs (CRL-AP) demonstrated the deficiency of the official quantification method based on light microscopy. The study concluded that the method had to be revised. This paper puts forward an improved quantification method based on three elements: (1) the preparation of permanent slides with an optical adhesive preserving all morphological markers of bones necessary for accurate identification and precision counting; (2) the use of a counting grid eyepiece reticle; and (3) new definitions for correction factors for the estimated portions of animal particles in the sediment. This revised quantification method was tested on feeds adulterated at different levels with bovine meat and bone meal (MBM) and fishmeal, and it proved to be effortless to apply. The results obtained were very close to the expected values of contamination levels for both types of adulteration (MBM or fishmeal). Calculated values were not only replicable, but also reproducible. The advantages of the new approach, including the benefits of the optical adhesive used for permanent slide mounting and the experimental conditions that need to be met to implement the new method correctly, are discussed. PMID:20432096
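
    The counting arithmetic behind such a grid-based estimate can be sketched as follows (a hypothetical sketch: the function, the correction factor, and the sediment fraction are illustrative values, not the revised CRL-AP protocol):

    ```python
    # Hypothetical sketch of grid-based particle counting with a
    # correction factor; values are illustrative, not the CRL-AP ones.

    def estimate_content(counted, total_grid_points, correction_factor,
                         sediment_fraction):
        """Estimate the mass fraction (%) of an animal ingredient in feed.

        counted           -- grid points falling on identified particles
        total_grid_points -- grid points evaluated on the slide
        correction_factor -- converts the counted particle portion of the
                             sediment into a mass portion of the ingredient
        sediment_fraction -- mass fraction of the feed recovered as sediment
        """
        particle_portion = counted / total_grid_points
        return 100.0 * particle_portion * correction_factor * sediment_fraction

    # Example: 12 of 400 grid points land on bone fragments in a 2% sediment,
    # with an assumed correction factor of 2.5:
    print(estimate_content(12, 400, 2.5, 0.02))  # -> 0.15 (% MBM)
    ```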

  4. Initial growth mechanism of atomic layer deposited titanium dioxide using cyclopentadienyl-type precursor: A density functional theory study

    Energy Technology Data Exchange (ETDEWEB)

    Zhou, Guangfen [College of Science, Beijing Institute of Technology, Beijing 100081 (China); College of Science, Hebei University of Science and Technology, Shijiazhuang 050018 (China); Ren, Jie, E-mail: renjie@fudan.edu.cn [College of Science, Hebei University of Science and Technology, Shijiazhuang 050018 (China); Zhang, Shaowen [College of Science, Beijing Institute of Technology, Beijing 100081 (China)

    2012-12-01

    The initial reaction mechanism of atomic-layer-deposited TiO2 thin film on the silicon surface using Cp*Ti(OCH3)3 as the metal precursor has been investigated by using density functional theory. We find that a Cp*Ti(OCH3)3 adsorbed state can be formed via the hydrogen bonding interaction between CH3O ligands and the Si-OH sites, which is in good agreement with the quadrupole mass spectrometry (QMS) experimental observations. Moreover, the desorption of adsorbed Cp*Ti(OCH3)3 is favored in the thermodynamic equilibrium state. The elimination reaction of CH3OH can occur more readily than that of Cp*H during the Cp*Ti(OCH3)3 pulse. This conclusion is also confirmed by the QMS experimental results. - Highlights: • The initial reaction mechanism of atomic layer deposition of TiO2 has been studied. • The Cp*Ti(OCH3)3 adsorbed state on the silicon surface is formed by hydrogen bonds. • The elimination of CH3OH occurs more readily than that of Cp*H in Cp*Ti(OCH3)3. • Cp*Ti(OCH3)3 adsorbs on the silicon surface via the CH3O ligand.

  5. Theory of drift current instabilities in n-type GaAs taking into account the electron thermal pressure gradient

    Science.gov (United States)

    Martin, B. G.; Wallis, R. F.

    1991-11-01

    A theoretical investigation has been made of the effect of the electron thermal pressure gradient on the interaction of surface optical phonons and surface space-charge waves in n-type GaAs in which a drift current is flowing parallel to the surface. The range of frequency for which amplification of the slow surface space-charge wave occurs has been calculated in the nonretarded limit as a function of the ratio of the thermal pressure gradient coefficient to the drift velocity V0. For values of this ratio greater than approximately 0.15, the frequency range for amplification decreases with increasing ratio, reaching zero when the ratio reaches unity.

  6. Structure and dynamics of Xn-type clusters (n = 3, 4, 6) from spontaneous symmetry breaking theory

    International Nuclear Information System (INIS)

    On the basis of three symmetries of nature, homogeneity and isotropy of space and indistinguishability of identical particles, we have found a group of coordinate transformations that leaves invariant the electronic energy and the potential energy of nuclei in every molecule subjected to no external fields. From these transformations we derived the formula for the dynamical representation and proved that every molecule has at least one Raman-active, totally symmetric normal mode of vibration. As an example, we studied stable configurations and dynamics of Xn-type molecules (clusters), n = 3, 4, 6, within symmetry-adapted, second-order expansion of the electronic energy with respect to nuclear coordinates, around the united atom. Within this approximation, for a positive coefficient in the expansion, a homonuclear three- (four-, six-) atomic cluster has a stable configuration of D3h (Td, Oh) symmetry. Our calculated mutual ratios of vibrational frequencies for clusters with these geometries are in reasonable agreement with experiment. (paper)

  7. Cross recurrence quantification analysis of indefinite anaphora in Swedish dialog: an eye-tracking pilot experiment

    OpenAIRE

    Diderichsen, Philip

    2006-01-01

    A new method is used in an eye-tracking pilot experiment which shows that it is possible to detect differences in common ground associated with the use of minimally different types of indefinite anaphora. Following Richardson and Dale (2005), cross recurrence quantification analysis (CRQA) was used to show that the tandem eye movements of two Swedish-speaking interlocutors are slightly more coupled when they are using fully anaphoric indefinite expressions ...

  8. QuasR: quantification and annotation of short reads in R

    OpenAIRE

    Gaidatzis, Dimos; Lerch, Anita; Hahne, Florian; Stadler, Michael B.

    2014-01-01

    Summary: QuasR is a package for the integrated analysis of high-throughput sequencing data in R, covering all steps from read preprocessing, alignment and quality control to quantification. QuasR supports different experiment types (including RNA-seq, ChIP-seq and Bis-seq) and analysis variants (e.g. paired-end, stranded, spliced and allele-specific), and is integrated in Bioconductor so that its output can be directly processed for statistical analysis and visualization.

  9. DNA quantification by real time PCR and short tandem repeats (STRs) amplification results

    OpenAIRE

    Zoppis S; D’Alessio A; Rosini M; Vecchiotti C

    2012-01-01

    Determining the DNA amount in a forensic sample is fundamental for PCR-based analyses: if, on the one hand, an excessive amount of template may cause the appearance of additional or out-of-scale peaks, on the other hand a low quantity can lead to stochastic phenomena affecting the PCR reaction and the subsequent interpretation of typing results. In the common practice of forensic genetics laboratories, the quantification results provided by Real Time PCR (qPCR) assume the role...

  10. Defect and damage evolution quantification in dynamically-deformed metals using orientation-imaging microscopy

    Energy Technology Data Exchange (ETDEWEB)

    Gray, George T., III [Los Alamos National Laboratory; Livescu, Veronica [Los Alamos National Laboratory; Cerreta, Ellen K [Los Alamos National Laboratory

    2010-03-18

    Orientation-imaging microscopy offers unique capabilities to quantify the defects and damage evolution occurring in metals following dynamic and shock loading. Examples of the quantification of the types of deformation twins activated, volume fraction of twinning, and damage evolution as a function of shock loading in Ta are presented. Electron back-scatter diffraction (EBSD) examination of the damage evolution in sweeping-detonation-wave shock loading to study spallation in Cu is also presented.

  11. Localization and Relative Quantification of Carbon Nanotubes in Cells with Multispectral Imaging Flow Cytometry

    OpenAIRE

    Marangon, Iris; Boggetto, Nicole; Ménard-Moyon, Cécilia; Luciani, Nathalie; Wilhelm, Claire; Bianco, Alberto; Gazeau, Florence

    2013-01-01

    Carbon-based nanomaterials, like carbon nanotubes (CNTs), belong to a type of nanoparticle that is very difficult to discriminate from carbon-rich cell structures, and de facto there is still no quantitative method to assess their distribution at the cell and tissue levels. What we propose here is an innovative method allowing the detection and quantification of CNTs in cells using a multispectral imaging flow cytometer (ImageStream, Amnis). This newly developed device integrates both a high...

  12. Type T Marital Therapy.

    Science.gov (United States)

    Farley, Frank; Carlson, Jon

    1991-01-01

    Briefly reviews Farley's Type T theory of personality and then considers a range of issues in marital therapy from the perspective of Type T. Suggests that Type T theory may be relevant in dealing with infidelity, sexual problems, love, marital abuse, child rearing, drug and alcohol use, money, division of household labor, recreation, and…

  13. Uncertainty quantification for porous media flows

    Science.gov (United States)

    Christie, Mike; Demyanov, Vasily; Erbas, Demet

    2006-09-01

    Uncertainty quantification is an increasingly important aspect of many areas of computational science, where the challenge is to make reliable predictions about the performance of complex physical systems in the absence of complete or reliable data. Predicting flows of oil and water through oil reservoirs is an example of a complex system where accuracy in prediction is needed primarily for financial reasons. Simulation of fluid flow in oil reservoirs is usually carried out using large commercially written finite difference simulators solving conservation equations describing the multi-phase flow through the porous reservoir rocks. This paper examines a Bayesian Framework for uncertainty quantification in porous media flows that uses a stochastic sampling algorithm to generate models that match observed data. Machine learning algorithms are used to speed up the identification of regions in parameter space where good matches to observed data can be found.
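
    The stochastic-sampling step can be illustrated with a minimal random-walk Metropolis sketch, assuming a toy quadratic misfit in place of the reservoir simulator (the misfit, priors, and step size are placeholders, not the authors' setup):

    ```python
    # Minimal random-walk Metropolis sketch: retain parameter sets in
    # proportion to how well they match observed data. Toy misfit only.
    import numpy as np

    rng = np.random.default_rng(0)

    def misfit(theta):
        """Placeholder sum-of-squares data misfit (stands in for running
        the reservoir simulator and comparing to production history)."""
        return np.sum((theta - np.array([1.0, 2.0])) ** 2)

    def metropolis(n_steps=5000, step=0.3):
        theta = np.zeros(2)          # initial model parameters
        samples = []
        for _ in range(n_steps):
            proposal = theta + step * rng.standard_normal(theta.shape)
            # Metropolis acceptance ratio for a Gaussian likelihood.
            log_alpha = 0.5 * (misfit(theta) - misfit(proposal))
            if np.log(rng.random()) < log_alpha:
                theta = proposal
            samples.append(theta.copy())
        return np.array(samples)

    posterior = metropolis()
    print(posterior.mean(axis=0))    # posterior mean of the parameters
    ```

    In practice the expensive step is the misfit evaluation, which is why the authors use machine-learning surrogates to pre-screen promising regions of parameter space.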

  14. Uncertainty quantification for porous media flows

    International Nuclear Information System (INIS)

    Uncertainty quantification is an increasingly important aspect of many areas of computational science, where the challenge is to make reliable predictions about the performance of complex physical systems in the absence of complete or reliable data. Predicting flows of oil and water through oil reservoirs is an example of a complex system where accuracy in prediction is needed primarily for financial reasons. Simulation of fluid flow in oil reservoirs is usually carried out using large commercially written finite difference simulators solving conservation equations describing the multi-phase flow through the porous reservoir rocks. This paper examines a Bayesian Framework for uncertainty quantification in porous media flows that uses a stochastic sampling algorithm to generate models that match observed data. Machine learning algorithms are used to speed up the identification of regions in parameter space where good matches to observed data can be found

  15. A Type System for Parallel Components

    OpenAIRE

    Carvalho-junior, Francisco Heron; Lins, Rafael Dueire

    2009-01-01

    The # component model was proposed to improve the practice of parallel programming. This paper introduces a type system for # programming systems, aiming to lift the abstraction and safety of programming for parallel computing architectures by introducing a notion of abstract component based on universal and existential bounded quantification. Issues about the implementation of such type system in HPE, a # programming system, are also discussed.

  16. Scale Theory and Metal-Insulator Transition in Metallic N-Type InP Semiconductor at Very Low Temperatures with Magnetic Field

    Science.gov (United States)

    El kaaouachi, A.; Abdia, R.; Nafidi, A.; Zatni, A.; Sahsah, H.; Biskupski, G.

    2010-04-01

    The metal-insulator transition (MIT) induced by magnetic field in barely metallic and compensated n-type InP has been analyzed using a scale theory. The experiments were carried out at low temperature in the range (4.2-0.066 K) and in magnetic fields up to 11 T. We have determined the magnetic field at which the conductivity changes from metallic behaviour to the insulating regime. On the metallic side of the MIT, the electrical conductivity is found to obey σ = σ0 + mT^(1/2) down to 66 mK. The zero-temperature conductivity can be described by scaling laws. A physical explanation of the temperature dependence of the conductivity on the metallic side of the MIT is given in terms of a competition between the different characteristic length scales involved in the conduction mechanisms, such as the correlation length and the interaction length.
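
    Since σ = σ0 + mT^(1/2) is linear in √T, the zero-temperature conductivity σ0 can be extracted by a straight-line fit; a minimal sketch with synthetic data (the numbers are placeholders, not the paper's measurements):

    ```python
    # Extract sigma_0 and m from sigma = sigma_0 + m*sqrt(T) by a linear
    # fit in sqrt(T). Synthetic placeholder data, not the paper's values.
    import numpy as np

    T = np.array([0.066, 0.1, 0.3, 1.0, 4.2])     # temperature (K)
    sigma = 12.0 + 3.5 * np.sqrt(T)               # toy conductivity data

    m, sigma0 = np.polyfit(np.sqrt(T), sigma, 1)  # slope, intercept
    print(sigma0, m)   # zero-temperature conductivity and T^(1/2) slope
    ```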

  17. Quantification of atherosclerosis with MRI

    International Nuclear Information System (INIS)

    Cardiovascular disease due to atherosclerosis is a major cause of death in the United States. A major limitation in the current treatment of atherosclerosis is the lack of a quantitative means to non-invasively evaluate the extent of the disease. Recent studies suggest that Magnetic Resonance Imaging (MRI) has the potential for the detection of atherosclerotic plaque. It has been demonstrated that multi-dimensional pattern recognition can be applied to multi-pulse sequence MR images to identify different tissue types. The authors reported the identification of tissues involved in the atherosclerotic disease process, such as normal endothelium, smooth muscle, thrombus, fat or lipid, connective tissue and calcified plaque. The work reported in this abstract presents preliminary results of applying quantitative 3-D reconstruction to the problem of identifying and quantifying atherosclerotic plaque in vitro

  18. Tarde’s idea of quantification

    OpenAIRE

    Latour, Bruno

    2010-01-01

    Even though Tarde is said to have had a literary view of social science, he himself was deeply involved in statistics (especially criminal statistics) and took an essentially quantitative view of social phenomena. What is so paradoxical in his view of quantification is that it relies not only on aggregates but also on the individual element. The paper reviews this paradox and the reason why Tarde was so intent on finding a quantitative grasp for establishing the social sciences, and relates ...

  19. Near-optimal RNA-Seq quantification

    OpenAIRE

    Bray, Nicolas; Pimentel, Harold; Melsted, Páll; Pachter, Lior

    2015-01-01

    We present a novel approach to RNA-Seq quantification that is near optimal in speed and accuracy. Software implementing the approach, called kallisto, can be used to analyze 30 million unaligned paired-end RNA-Seq reads in less than 5 minutes on a standard laptop computer while providing results as accurate as those of the best existing tools. This removes a major computational bottleneck in RNA-Seq analysis.

  20. Quantification of nitrotyrosine in nitrated proteins

    OpenAIRE

    Yang, Hong; Zhang, Yingyi; Pöschl, Ulrich

    2010-01-01

    For kinetic studies of protein nitration reactions, we have developed a method for the quantification of nitrotyrosine residues in protein molecules by liquid chromatography coupled to a diode array detector of ultraviolet-visible absorption. Nitrated bovine serum albumin (BSA) and nitrated ovalbumin (OVA) were synthesized and used as standards for the determination of the protein nitration degree (ND), which is defined as the average number of nitrotyrosine residues divided by the total numb...

  1. Moral and immoral in economic quantification

    OpenAIRE

    Simion, Doina Maria

    2009-01-01

    Could there be something immoral in economic measurements and quantification? Could there be immorality in statistics? It is often said that statistics is a lie, an untruth, a delusion. Lies are dishonoring and deeply immoral, and are incriminated by both religious and juridical norms. Where do these accusations against statistics come from? They derive from the obvious modern striving for excessive simplification, from ignoring scientific rigor and from eluding theoretical principles ...

  2. Quantum theory of open systems based on stochastic differential equations of generalized Langevin (non-Wiener) type

    International Nuclear Information System (INIS)

    It is shown that the effective Hamiltonian representation, as it is formulated in author’s papers, serves as a basis for distinguishing, in a broadband environment of an open quantum system, independent noise sources that determine, in terms of the stationary quantum Wiener and Poisson processes in the Markov approximation, the effective Hamiltonian and the equation for the evolution operator of the open system and its environment. General stochastic differential equations of generalized Langevin (non-Wiener) type for the evolution operator and the kinetic equation for the density matrix of an open system are obtained, which allow one to analyze the dynamics of a wide class of localized open systems in the Markov approximation. The main distinctive features of the dynamics of open quantum systems described in this way are the stabilization of excited states with respect to collective processes and an additional frequency shift of the spectrum of the open system. As an illustration of the general approach developed, the photon dynamics in a single-mode cavity without losses on the mirrors is considered, which contains identical intracavity atoms coupled to the external vacuum electromagnetic field. For some atomic densities, the photons of the cavity mode are “locked” inside the cavity, thus exhibiting a new phenomenon of radiation trapping and non-Wiener dynamics.
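
    For orientation, the Markov-approximation evolution equations referred to above generalize the textbook quantum stochastic differential equation of Hudson-Parthasarathy type, which already combines Wiener-type (dA, dA†) and Poisson-type (dΛ) noise; the form below is a generic sketch, not the author's specific effective-Hamiltonian construction:

    ```latex
    % Generic Hudson-Parthasarathy quantum stochastic differential
    % equation; H is the system Hamiltonian, L the system-bath coupling,
    % S a unitary scattering operator. Illustrative, not the paper's form.
    \begin{equation}
      dU(t) = \Bigl[ (S - 1)\, d\Lambda(t) + L\, dA^{\dagger}(t)
            - L^{\dagger} S\, dA(t)
            - \bigl( \tfrac{1}{2} L^{\dagger} L + i H \bigr) dt \Bigr]\, U(t)
    \end{equation}
    ```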

  3. New spin(7) holonomy metrics admitting G2 holonomy reductions and M-theory/type-IIA dualities

    International Nuclear Information System (INIS)

    As is well known, when D6 branes wrap a special Lagrangian cycle on a noncompact Calabi-Yau threefold in such a way that the internal string frame metric is a Kaehler one, there exists a dual description, which is given in terms of a purely geometrical 11-dimensional background with an internal metric of G2 holonomy. It is also known that when D6 branes wrap a coassociative cycle of a noncompact G2 manifold in the presence of a self-dual two-form strength, the internal part of the string frame metric is conformal to the G2 metric and there exists a dual description, which is expressed in terms of a purely geometrical 11-dimensional background with an internal noncompact metric of spin(7) holonomy. In the present work it is shown that any G2 metric participating in the first of these dualities necessarily participates in one of the second type. Additionally, several explicit spin(7) holonomy metrics admitting a G2 holonomy reduction along one isometry are constructed. These metrics can be described as R fibrations over a 6-dimensional Kaehler metric, thus realizing the pattern spin(7) → G2 → Kaehler mentioned above. Several of these examples are further described as fibrations over the Eguchi-Hanson gravitational instanton and, to the best of our knowledge, have not been previously considered in the literature.

  4. Predictive transport modelling of type I ELMy H-mode dynamics using a theory-motivated combined ballooning-peeling model

    International Nuclear Information System (INIS)

    This paper discusses predictive transport simulations of the type I ELMy high confinement mode (H-mode) with a theory-motivated edge localized mode (ELM) model based on linear ballooning and peeling mode stability theory. In the model, a total mode amplitude is calculated as a sum of the individual mode amplitudes given by two separate linear differential equations for the ballooning and peeling mode amplitudes. The ballooning and peeling mode growth rates are represented by mutually analogous terms, which differ from zero upon the violation of a critical pressure gradient and an analytical peeling mode stability criterion, respectively. The damping of the modes due to non-ideal magnetohydrodynamic effects is controlled by a term driving the mode amplitude towards the level of background fluctuations. Coupled to simulations with the JETTO transport code, the model qualitatively reproduces the experimental dynamics of type I ELMy H-mode, including an ELM frequency that increases with the external heating power. The dynamics of individual ELM cycles is studied. Each ELM is usually triggered by a ballooning mode instability. The ballooning phase of the ELM reduces the pressure gradient enough to make the plasma peeling unstable, whereby the ELM continues driven by the peeling mode instability, until the edge current density has been depleted to a stable level. Simulations with current ramp-up and ramp-down are studied as examples of situations in which pure peeling and pure ballooning mode ELMs, respectively, can be obtained. The sensitivity with respect to the ballooning and peeling mode growth rates is investigated. Some consideration is also given to an alternative formulation of the model as well as to a pure peeling model

  5. Quantification of Permafrost Creep by Remote Sensing

    Science.gov (United States)

    Roer, I.; Kaeaeb, A.

    2008-12-01

    Rockglaciers and frozen talus slopes are distinct landforms representing the occurrence of permafrost conditions in high mountain environments. The interpretation of ongoing permafrost creep and its reaction times is still limited due to the complex setting of interrelating processes within the system. Therefore, a detailed monitoring of rockglaciers and frozen talus slopes seems advisable to better understand the system as well as to assess possible consequences like rockfall hazards or debris-flow starting zones. In this context, remote sensing techniques are increasingly important. High accuracy techniques and data with high spatial and temporal resolution are required for the quantification of rockglacier movement. Digital Terrain Models (DTMs) derived from optical stereo, synthetic aperture radar (SAR) or laser scanning data are the most important data sets for the quantification of permafrost-related mass movements. Correlation image analysis of multitemporal orthophotos allows for the quantification of horizontal displacements, while vertical changes in landform geometry are computed by DTM comparisons. In the European Alps the movement of rockglaciers has been monitored over several decades by the combined application of remote sensing and geodetic methods. The resulting kinematics (horizontal and vertical displacements) as well as spatio-temporal variations thereof are considered in terms of rheology. The distinct changes in process rates or landform failures - probably related to permafrost degradation - are analysed in combination with data on surface and subsurface temperatures and internal structures (e.g., ice content, unfrozen water content).
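
    The core of correlation image analysis for horizontal displacements can be sketched with simple phase correlation between two co-registered image patches (a minimal sketch with integer-pixel accuracy; operational tools add subpixel refinement and outlier filtering):

    ```python
    # Phase correlation between two repeat orthophoto patches: the peak
    # of the normalized cross-power spectrum gives the (dy, dx) shift.
    import numpy as np

    def phase_correlation_shift(img1, img2):
        """Return the (dy, dx) shift that aligns img2 with img1."""
        F1, F2 = np.fft.fft2(img1), np.fft.fft2(img2)
        cross = F1 * np.conj(F2)
        cross /= np.abs(cross) + 1e-12    # keep only the phase
        corr = np.fft.ifft2(cross).real
        dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
        # Map FFT indices to signed shifts.
        if dy > img1.shape[0] // 2:
            dy -= img1.shape[0]
        if dx > img1.shape[1] // 2:
            dx -= img1.shape[1]
        return dy, dx

    # Toy check: a patch shifted by (3, -2) pixels is recovered.
    rng = np.random.default_rng(1)
    a = rng.random((64, 64))
    b = np.roll(a, (3, -2), axis=(0, 1))
    print(phase_correlation_shift(b, a))  # -> (3, -2)
    ```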

  6. Extending Existential Quantification in Conjunctions of BDDs

    Directory of Open Access Journals (Sweden)

    Sean A. Weaver

    2006-06-01

    We introduce new approaches intended to speed up determining the satisfiability of a given Boolean formula φ expressed as a conjunction of Boolean functions. A common practice in such cases, when using constraint-oriented methods, is to represent the functions as BDDs, then repeatedly cluster BDDs containing one or more variables, and finally existentially quantify those variables away from the cluster. Clustering is essential because, in general, existential quantification cannot be applied unless the variables occur in only a single BDD. But clustering incurs significant overhead and may result in BDDs that are too big to allow the process to complete in a reasonable amount of time. There are two significant contributions in this paper. First, we identify elementary conditions under which the existential quantification of a subset of variables V may be distributed over all BDDs without clustering. We show that when these conditions are satisfied, safe assignments to the variables of V are automatically generated. This is significant because these assignments can be applied, as though they were inferences, to simplify φ. Second, some efficient operations based on these conditions are introduced and can be integrated into existing frameworks of both search-oriented and constraint-oriented methods of satisfiability. All of these operations are relaxations in the use of existential quantification and therefore may fail to find one or more existing safe assignments. Finally, we compare and contrast the relationship of these operations to autarkies and present some preliminary results.
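
    Why clustering (or the paper's side conditions) is needed can be seen from a tiny example: existential quantification does not, in general, distribute over a conjunction when the quantified variable occurs in several conjuncts. A minimal sketch, with Boolean functions modelled as Python callables over an assignment dict (illustrative only, not a BDD implementation):

    ```python
    # (exists v. f AND g) vs. (exists v. f) AND (exists v. g):
    # distributing the quantifier over-approximates when v is shared.

    def exists(v, f):
        """(exists v. f) as a new Boolean function."""
        return lambda a: f({**a, v: False}) or f({**a, v: True})

    f = lambda a: a["x"] or a["y"]        # depends on x
    g = lambda a: not a["x"]              # also depends on x
    conj = lambda a: f(a) and g(a)

    sound = exists("x", conj)             # quantify over the conjunction
    distributed = lambda a: exists("x", f)(a) and exists("x", g)(a)

    a = {"x": False, "y": False}
    print(sound(a), distributed(a))       # False True -- they disagree
    ```

    When a variable occurs in only a single conjunct (or the paper's elementary conditions hold), the two computations coincide and the quantifier can be pushed inward without building the clustered BDD.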

  7. Perturbation theory

    International Nuclear Information System (INIS)

    After noting some advantages of using perturbation theory, some of the various types are related on a chart and described, including many-body nonlinear summations, quartic force-field fit for geometry, fourth-order correlation approximations, and a survey of some recent work. Alternative initial approximations in perturbation theory are also discussed. 25 references

  8. Shielding Theory

    Directory of Open Access Journals (Sweden)

    Ion N.Chiuta

    2009-05-01

    The paper determines relations for shielding effectiveness relative to several variables, including metal type, metal properties, thickness, distance, frequency, etc. It starts by presenting some relationships regarding magnetic, electric and electromagnetic fields as a pertinent background to understanding and applying field theory. Since the literature about electromagnetic compatibility is replete with discussions about Maxwell equations and field theory, only a few aspects are presented.

  9. Stellar convection theory. III. Dynamical coupling of the two convection zones in A -type stars by penetrative motions

    International Nuclear Information System (INIS)

    Anelastic modal equations are used to examine thermal convection occurring over many density scale heights in the entire outer envelope of an A-type star, encompassing both the hydrogen and helium convectively unstable zones. Single-mode anelastic solutions for such compressible convection display strong overshooting of the motions into adjacent radiative zones. Such mixing would preclude diffusive separation of elements in the supposedly quiescent region between the two unstable zones. Indeed, the anelastic solutions reveal that the two zones of convective instability are dynamically coupled by the overshooting motions. The nonlinear single-mode equations admit two solutions for the same horizontal wavelength, and these are distinguished by the sense of the vertical velocity at the center of the three-dimensional cell. The upward directed flows experience large pressure effects when they penetrate into regions where the vertical scale height has become small compared to their horizontal scale. The fluctuating pressure can modify the density fluctuations so that the sense of the buoyancy force is changed, with buoyancy braking actually achieved near the top of the convection zone, even though the mean stratification is still superadiabatic. The pressure and buoyancy work there serves to decelerate the vertical motions and deflect them laterally, leading to strong horizontal shearing motions. Thus the shallow but highly unstable hydrogen ionization zone may serve to prevent convection with a horizontal scale comparable to supergranulation from getting through into the atmosphere with any significant portion of its original momentum. This suggests that strong horizontal shear flows should be present just below the surface of the star, and similarly that the large-scale motions extending into the stable atmosphere would appear mainly as horizontal flows

  10. Methodological strategies for transgene copy number quantification in goats (Capra hircus) using real-time PCR.

    Science.gov (United States)

    Batista, Ribrio I T P; Luciano, Maria C S; Teixeira, Dárcio I A; Freitas, Vicente J F; Melo, Luciana M; Andreeva, Lyudmila E; Serova, Irina A; Serov, Oleg L

    2014-01-01

    Taking into account the importance of goats as transgenic models, as well as the rarity of copy number (CN) studies in farm animals, the present work aimed to evaluate methodological strategies for accurate and precise transgene CN quantification in goats using quantitative polymerase chain reaction (qPCR). Mouse and goat lines transgenic for human granulocyte-colony stimulating factor were used. After selecting the best genomic DNA extraction method to be applied in mouse and goat samples, intra-assay variations, accuracy and precision of CN quantifications were assessed. The optimized conditions were submitted to mathematical strategies and used to quantify CN in goat lines. The findings were as follows: validation of qPCR conditions is required, and amplification efficiency is the most important. Absolute and relative quantifications are able to produce similar results. For normalized absolute quantification, the same plasmid fragment used to generate goat lines must be mixed with wild-type goat genomic DNA, allowing the choice of an endogenous reference gene for data normalization. For relative quantifications, a resin-based genomic DNA extraction method is strongly recommended when using mouse tail tips as calibrators to avoid tissue-specific inhibitors. Efficient qPCR amplifications (≥95%) allow reliable CN measurements with SYBR technology. TaqMan must be used with caution in goats if the nucleotide sequence of the endogenous reference gene is not yet well understood. Adhering to these general guidelines can result in more exact CN determination in goats. Even when working under nonoptimal circumstances, if assays are performed that respect the minimum qPCR requirements, good estimations of transgene CN can be achieved. PMID:25044808
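
    For efficiencies near 100%, the relative quantification referred to above is commonly computed with the standard 2^-ΔΔCt method against a calibrator of known copy number; a hedged sketch (the Ct values are illustrative, and the paper's exact workflow may differ):

    ```python
    # Relative copy-number estimate by the standard 2^-ddCt method,
    # assuming ~100% amplification efficiency. Illustrative Ct values.

    def copy_number(ct_tg, ct_ref, ct_tg_cal, ct_ref_cal, calibrator_cn):
        """Copy number relative to a calibrator of known copy number.

        ct_tg / ct_ref         -- sample Cts: transgene / endogenous gene
        ct_tg_cal / ct_ref_cal -- the same Cts for the calibrator
        calibrator_cn          -- known copy number of the calibrator
        """
        ddct = (ct_tg - ct_ref) - (ct_tg_cal - ct_ref_cal)
        return calibrator_cn * 2.0 ** (-ddct)

    print(copy_number(22.1, 20.0, 24.3, 20.1, calibrator_cn=2))  # ~8.6 copies
    ```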

  11. Characterization and LC-MS/MS based quantification of hydroxylated fullerenes

    Science.gov (United States)

    Chao, Tzu-Chiao; Song, Guixue; Hansmeier, Nicole; Westerhoff, Paul; Herckes, Pierre; Halden, Rolf U.

    2011-01-01

    Highly water-soluble hydroxylated fullerene derivatives are being investigated for a wide range of commercial products as well as for potential cytotoxicity. However, no analytical methods are currently available for their quantification at sub-ppm concentrations in environmental matrices. Here, we report on the development and comparison of liquid chromatography-ultraviolet/visible spectroscopy (LC-UV/vis) and mass spectrometry (LC-MS) based detection and quantification methods for commercial fullerols. We achieved good separation efficiency using an amide-type hydrophilic interaction liquid chromatography (HILIC) column (plate number >2000) under isocratic conditions with 90% acetonitrile as the mobile phase. The method detection limits (MDLs) ranged from 42.8 ng/mL (UV detection) to 0.19 pg/mL (using MS with multiple reaction monitoring, MRM). Other MS measurement modes achieved MDLs of 125 pg/mL (single quad scan, Q1) and 1.5 pg/mL (multiple ion monitoring, MI). Each detection method exhibited a good linear response over several orders of magnitude. Moreover, we tested the robustness of these methods in the presence of Suwannee River fulvic acids (SRFA) as an example of organic matter commonly found in environmental water samples. While SRFA significantly interfered with UV- and Q1-based quantifications, the interference was relatively low using MI or MRM (relative error in presence of SRFA: 8.6% and 2.5%, respectively). This first report of a robust MS-based quantification method for modified fullerenes dissolved in water suggests the feasibility of implementing MS techniques more broadly for identification and quantification of fullerols and other water-soluble fullerene derivatives in environmental samples. PMID:21294534

  12. Characterization and liquid chromatography-MS/MS based quantification of hydroxylated fullerenes.

    Science.gov (United States)

    Chao, Tzu-Chiao; Song, Guixue; Hansmeier, Nicole; Westerhoff, Paul; Herckes, Pierre; Halden, Rolf U

    2011-03-01

    Highly water-soluble hydroxylated fullerene derivatives are being investigated for a wide range of commercial products as well as for potential cytotoxicity. However, no analytical methods are currently available for their quantification at sub-ppm concentrations in environmental matrixes. Here, we report on the development and comparison of liquid chromatography-ultraviolet/visible spectroscopy (LC-UV/vis) and liquid chromatography-mass spectrometry (LC-MS) based detection and quantification methods for commercial fullerols. We achieved good separation efficiency using an amide-type hydrophilic interaction liquid chromatography (HILIC) column (plate number >2000) under isocratic conditions with 90% acetonitrile as the mobile phase. The method detection limits (MDLs) ranged from 42.8 ng/mL (UV detection) to 0.19 pg/mL (using MS with multiple reaction monitoring, MRM). Other MS measurement modes achieved MDLs of 125 pg/mL (single quad scan, Q1) and 1.5 pg/mL (multiple ion monitoring, MI). Each detection method exhibited a good linear response over several orders of magnitude. Moreover, we tested the robustness of these methods in the presence of Suwannee River fulvic acids (SRFA) as an example of organic matter commonly found in environmental water samples. While SRFA significantly interfered with UV- and Q1-based quantifications, the interference was relatively low using MI or MRM (relative error in presence of SRFA: 8.6% and 2.5%, respectively). This first report of a robust MS-based quantification method for modified fullerenes dissolved in water suggests the feasibility of implementing MS techniques more broadly for identification and quantification of fullerols and other water-soluble fullerene derivatives in environmental samples. PMID:21294534

  13. Aerobic physical activity and resistance training: an application of the theory of planned behavior among adults with type 2 diabetes in a random, national sample of Canadians

    Directory of Open Access Journals (Sweden)

    Karunamuni Nandini

    2008-12-01

    Background: Aerobic physical activity (PA) and resistance training are paramount in the treatment and management of type 2 diabetes (T2D), but few studies have examined the determinants of both types of exercise in the same sample. Objective: The primary purpose was to investigate the utility of the Theory of Planned Behavior (TPB) in explaining aerobic PA and resistance training in a population sample of T2D adults. Methods: A total of 244 individuals were recruited through a random national sample, created by generating a random list of household phone numbers proportionate to the actual number of household telephone numbers for each Canadian province (with the exception of Quebec). These individuals completed self-report TPB constructs of attitude, subjective norm, perceived behavioral control and intention, and a 3-month follow-up that assessed aerobic PA and resistance training. Results: The TPB explained 10% and 8% of the variance, respectively, for aerobic PA and resistance training, and accounted for 39% and 45% of the variance, respectively, for aerobic PA and resistance training intentions. Conclusion: These results may guide the development of appropriate PA interventions for aerobic PA and resistance training based on the TPB.
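
    The "variance explained" figures correspond to R² from a regression of each outcome on the TPB constructs; a minimal sketch of that computation with placeholder data (random numbers, not the study's dataset):

    ```python
    # R^2 of an ordinary least-squares regression of behaviour on the
    # TPB constructs. The data below are random placeholders.
    import numpy as np

    rng = np.random.default_rng(2)
    n = 244
    X = rng.random((n, 3))                 # attitude, norm, control
    y = X @ np.array([0.4, 0.2, 0.3]) + rng.standard_normal(n)

    X1 = np.column_stack([np.ones(n), X])  # add an intercept column
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    r2 = 1.0 - resid.var() / y.var()
    print(round(r2, 2))                    # proportion of variance explained
    ```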

  14. Protocol for Quantification of Defects in Natural Fibres for Composites

    DEFF Research Database (Denmark)

    Mortensen, Ulrich Andreas; Madsen, Bo

    2014-01-01

    Natural bast-type plant fibres are attracting increasing interest for use in structural composite applications where high-quality fibres with good mechanical properties are required. A protocol for the quantification of defects in natural fibres is presented. The protocol is based on the experimental method of optical microscopy and the image analysis algorithms of the seeded region growing method and Otsu’s method. The use of the protocol is demonstrated by examining two types of differently processed flax fibres, giving mean defect contents of 6.9 and 3.9%, a difference which is tested to be statistically significant. The protocol is evaluated with respect to the selection of image analysis algorithms, and Otsu’s method is found to be more appropriate than the alternative coefficient of variation method. The traditional way of defining defect size by area is compared to the definition of defect size by width, and it is shown that both definitions can be used to give unbiased findings for the comparison between fibre types. Finally, considerations are given with respect to true measures of defect content, the number of determinations, and the number of significant figures used for the descriptive statistics.
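
    The thresholding step of the protocol can be sketched in a few lines, assuming defects image darker than the surrounding fibre (illustrative only; the published protocol adds seeded region growing and restricts the analysis to the fibre area):

    ```python
    # Otsu-threshold a grayscale fibre micrograph and report the defect
    # area fraction. Toy image; defects assumed darker than the fibre.
    import numpy as np
    from skimage.filters import threshold_otsu

    def defect_area_fraction(image, defects_are_dark=True):
        """Percent of pixels classified as defect by Otsu thresholding."""
        t = threshold_otsu(image)
        mask = image < t if defects_are_dark else image > t
        return 100.0 * mask.mean()

    # Toy micrograph: bright fibre with one dark defect band.
    img = np.full((100, 100), 200, dtype=np.uint8)
    img[45:52, :] = 40
    print(defect_area_fraction(img))  # -> 7.0
    ```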

  15. High-throughput multiplex microsatellite marker assay for detection and quantification of adulteration in Basmati rice (Oryza sativa).

    Science.gov (United States)

    Archak, Sunil; Lakshminarayanareddy, V; Nagaraju, Javaregowda

    2007-07-01

    Basmati rice is a very special type of aromatic rice known world-wide for its extra long grains and pleasant and distinct aroma. Traditional Basmati rice cultivars, confined to Indo-Gangetic regions of the Indian subcontinent, are often reported to be adulterated with crossbred Basmati varieties and long-grain non-Basmati varieties in the export market. At present, there is no commercial scale technology to reliably detect adulteration. We report here a CE-based multiplex microsatellite marker assay for detection as well as quantification of adulteration in Basmati rice samples. The single-tube assay multiplexes eight microsatellite loci to generate variety-specific allele profiles that can detect adulteration from 1% upwards. The protocol also incorporates a quantitative-competitive PCR-based analysis for quantification of adulteration. Accuracy of quantification has been shown to be +/-1.5%. The experiments used to develop and validate the methodology are described. PMID:17577195

  16. Efficient Quantification of Uncertainties in Complex Computer Code Results Project

    National Aeronautics and Space Administration — This proposal addresses methods for efficient quantification of margins and uncertainties (QMU) for models that couple multiple, large-scale commercial or...

  17. Aerodynamic Modeling with Heterogeneous Data Assimilation and Uncertainty Quantification Project

    National Aeronautics and Space Administration — Clear Science Corp. proposes to develop an aerodynamic modeling tool that assimilates data from different sources and facilitates uncertainty quantification. The...

  18. Development of a VHH-Based Erythropoietin Quantification Assay

    DEFF Research Database (Denmark)

    Kol, Stefan; Beuchert Kallehauge, Thomas

    2015-01-01

    Erythropoietin (EPO) quantification during cell line selection and bioreactor cultivation has traditionally been performed with ELISA or HPLC. As these techniques suffer from several drawbacks, we developed a novel EPO quantification assay. A camelid single-domain antibody fragment directed against human EPO was evaluated as a capturing antibody in a label-free biolayer interferometry-based quantification assay. Human recombinant EPO can be specifically detected in Chinese hamster ovary cell supernatants in a sensitive and pH-dependent manner. This method enables rapid and robust quantification of EPO in a high-throughput setting.

  19. Interpretivistic Conception of Quantification: Tool for Enhancing Quality of Life?

    Directory of Open Access Journals (Sweden)

    Denis Larrivee

    2013-11-01

    Quality of life is fast becoming the standard measure of outcome in clinical trials, residential satisfaction, and educational achievement, to name several social settings, with the consequent proliferation of assessment instruments. Yet its interpretation and definition provoke widespread disagreement, thereby rendering the significance of quantification uncertain. Moreover, quality, or qualia, is philosophically distinct from quantity, or quantitas, and so it is unclear how quantification can serve to modulate quality. Is it thus possible for quantification to enhance quality of life? We propose here that an interpretivistic conception of quantification may offer a more valid approach by which to address quality of life in sociological research.

  20. Efficient Quantification of Uncertainties in Complex Computer Code Results Project

    National Aeronautics and Space Administration — Propagation of parameter uncertainties through large computer models can be very resource intensive. Frameworks and tools for uncertainty quantification are...

  1. Quantification of Uncertainties in Integrated Spacecraft System Models Project

    National Aeronautics and Space Administration — The proposed effort is to investigate a novel uncertainty quantification (UQ) approach based on non-intrusive polynomial chaos (NIPC) for computationally efficient...

  2. Lax-Phillips scattering theory with perturbations of the type: V(x) = ρ(x)/|x|^α, where α = 2 - n/s, ρ is an element of L^s(R^n), s > 2 and s ≥ n/2

    International Nuclear Information System (INIS)

    A scattering theory for the wave equation with compactly supported perturbations was developed by Lax and Phillips in 1967. Using the Enss approach, Phillips developed a Lax-Phillips scattering theory with short range perturbations of the type V(x) = o(1/|x|^α), α > 2. In this paper we develop a scattering theory for more general perturbations, i.e. for V(x) = ρ(x)/|x|^α, where α = 2 - n/s, ρ is an element of L^s(R^n), s > 2 and s ≥ n/2. Refs

  3. Use of X-ray fluorescence for the quantification of multicomponent samples composition

    International Nuclear Information System (INIS)

    X-ray spectrometry has become an indispensable tool in cases where a quick and accurate determination of the elemental concentration of samples of different origin is necessary. Research aimed at improving the results given by different radiation-detector source combinations is focused on improving the precision of the results obtained and on lowering detection limits. In this sense, the quantification techniques of characteristic radiation provide the most accurate results. Quantification by appropriate detectors of X-ray intensity and energy discrimination allows three types of spectrometric studies to be performed: 1) electronic structure (information on chemical binding); 2) atomic structure by diffraction (crystallography); 3) elemental composition (spectrochemistry). The application of X-ray spectroscopy to the determination of relative elemental concentrations is considered in this work

  4. DNA quantification by real time PCR and short tandem repeats (STRs amplification results

    Directory of Open Access Journals (Sweden)

    Zoppis S

    2012-11-01

    Determining the DNA amount in a forensic sample is fundamental for PCR-based analyses: if, on the one hand, an excessive amount of template may cause the appearance of additional or out-of-scale peaks, on the other hand a low quantity can lead to stochastic phenomena affecting the PCR reaction and the subsequent interpretation of typing results. In the common practice of forensic genetics laboratories, the quantification results provided by Real Time PCR (qPCR) assume the role of “boundary line” between the possibility for a given DNA sample to be subjected or not to the subsequent analytical steps, on the basis of an optimal amount of DNA in the range indicated by the manufacturer of the specific commercial kit. However, some studies have shown the possibility of obtaining STR typing results even with an extremely low DNA concentration or, paradoxically, one equal to zero (1). Regardless of the amount of DNA used for the quantification of the testing sample, specific software is able to use the standard curve to calculate concentration values far below the manufacturer’s reported optimal detection limit (0.023 ng/µL). Consequently, laboratories face the critical decision of interrupting the analyses, giving up the possibility of obtaining a genetic profile (although partial), or of attempting the amplification of the extract with an awareness of the interpretation issues that this implies. The authors present the quantification results obtained by qPCR performed on numerous samples collected from items of forensic interest, subjected to DNA extraction using magnetic beads. Following the quantification step, the extracts were subjected to DNA amplification and STR typing using last generation commercial kits. Samples that showed quantification values below the limit of detection for the method were included in the analysis in order to check for a correlation between the DNA quantification results by qPCR and the possibility of obtaining a genetic profile useful for identification purposes. Our study, performed on 558 samples from forensic casework items, has shown a correlation between the DNA amount obtained from qPCR analysis and the possibility of obtaining a genetic profile useful for identification purposes. In spite of the increasing sensitivity of last generation commercial kits for STR analysis, as demonstrated by the ability to detect allelic peaks from extremely low DNA quantities (with concentrations far below the limit of detection of the specific quantification kit, even corresponding to 0 or “Undetermined”), the results obtained show a correlation between qPCR quantification values and STR typing results. Thus the qPCR method confirms being today a useful and valid instrument for both qualitative and quantitative evaluation of genetic samples for human identification purposes.

  5. Computer Model Inversion and Uncertainty Quantification in the Geosciences

    Science.gov (United States)

    White, Jeremy T.

    The subject of this dissertation is the use of computer models as data analysis tools in several different geoscience settings, including integrated surface water/groundwater modeling, tephra fallout modeling, geophysical inversion, and hydrothermal groundwater modeling. The dissertation is organized into three chapters, which correspond to three individual publication manuscripts. In the first chapter, a linear framework is developed to identify and estimate the potential predictive consequences of using a simple computer model as a data analysis tool. The framework is applied to a complex integrated surface-water/groundwater numerical model with thousands of parameters. Several types of predictions are evaluated, including particle travel time and surface-water/groundwater exchange volume. The analysis suggests that model simplifications have the potential to corrupt many types of predictions. The implementation of the inversion, including how the objective function is formulated, what minimum of the objective function value is acceptable, and how expert knowledge is enforced on parameters, can greatly influence the manifestation of model simplification. Depending on the prediction, failure to specifically address each of these important issues during inversion is shown to degrade the reliability of some predictions. In some instances, inversion is shown to increase, rather than decrease, the uncertainty of a prediction, which defeats the purpose of using a model as a data analysis tool. In the second chapter, an efficient inversion and uncertainty quantification approach is applied to a computer model of volcanic tephra transport and deposition. The computer model simulates many physical processes related to tephra transport and fallout. The utility of the approach is demonstrated for two eruption events. In both cases, the importance of uncertainty quantification is highlighted by exposing the variability in the conditioning provided by the observations used for inversion. The worth of different types of tephra data to reduce parameter uncertainty is evaluated, as is the importance of different observation error models. The analyses reveal the importance of using tephra granulometry data for inversion, which results in reduced uncertainty for most eruption parameters. In the third chapter, geophysical inversion is combined with hydrothermal modeling to evaluate the enthalpy of an undeveloped geothermal resource in a pull-apart basin located in southeastern Armenia. A high-dimensional gravity inversion is used to define the depth to the contact between the lower-density valley fill sediments and the higher-density surrounding host rock. The inverted basin depth distribution was used to define the hydrostratigraphy for the coupled groundwater-flow and heat-transport model that simulates the circulation of hydrothermal fluids in the system. Evaluation of several different geothermal system configurations indicates that the most likely system configuration is a low-enthalpy, liquid-dominated geothermal system.
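
    The linear framework of the first chapter is, in essence, first-order second-moment (FOSM) uncertainty propagation: prior parameter uncertainty is mapped through a model Jacobian and conditioned on the observations. A hedged numerical sketch with toy matrices (not the dissertation's actual model or numbers):

    ```python
    # FOSM-style predictive uncertainty: condition a prior parameter
    # covariance on observations via a Schur complement, then propagate
    # to a scalar prediction. All matrices are toy placeholders.
    import numpy as np

    J = np.array([[1.0, 0.5, 0.0],
                  [0.2, 1.0, 0.3]])   # observation sensitivities (2 obs x 3 pars)
    y = np.array([0.4, 0.1, 0.8])     # prediction sensitivity vector
    C = np.diag([1.0, 2.0, 0.5])      # prior parameter covariance
    R = 0.1 * np.eye(2)               # observation noise covariance

    # Posterior parameter covariance after assimilating the observations:
    C_post = C - C @ J.T @ np.linalg.inv(J @ C @ J.T + R) @ J @ C

    print(y @ C @ y)       # prior predictive variance
    print(y @ C_post @ y)  # posterior predictive variance (reduced by data)
    ```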

  6. Quantification of the detriment and comparison of health risks. Methodological problems

    International Nuclear Information System (INIS)

    Some of the methodological problems involved in the quantitative estimate of the health detriment of different energy sources and in risk comparison are described. First, the question of determining the detriment is discussed from the point of view of the distortions introduced in the quantification when dealing with risks for which the amount of information available varies widely. The main criteria applied to classifying types of detriment are then recalled. Finally, the problems involved in comparisons are outlined: spatial and temporal variations in the types of detriment, operation under normal and accident conditions, and the risks to the public and workers. (author)

  7. Physiologic upper limits of pore size of different blood capillary types and another perspective on the dual pore theory of microvascular permeability

    Directory of Open Access Journals (Sweden)

    Sarin Hemant

    2010-08-01

    Full Text Available Abstract Background Much of our current understanding of microvascular permeability is based on the findings of classic experimental studies of blood capillary permeability to various-sized lipid-insoluble endogenous and non-endogenous macromolecules. According to the classic small pore theory of microvascular permeability, which was formulated on the basis of the findings of studies on the transcapillary flow rates of various-sized systemically or regionally perfused endogenous macromolecules, transcapillary exchange across the capillary wall takes place through a single population of small pores that are approximately 6 nm in diameter; whereas, according to the dual pore theory of microvascular permeability, which was formulated on the basis of the findings of studies on the accumulation of various-sized systemically or regionally perfused non-endogenous macromolecules in the locoregional tissue lymphatic drainages, transcapillary exchange across the capillary wall also takes place through a separate population of large pores, or capillary leaks, that are between 24 and 60 nm in diameter. The classification of blood capillary types on the basis of differences in the physiologic upper limits of pore size to transvascular flow highlights the differences in the transcapillary exchange routes for the transvascular transport of endogenous and non-endogenous macromolecules across the capillary walls of different blood capillary types. Methods The findings and published data of studies on capillary wall ultrastructure and capillary microvascular permeability to lipid-insoluble endogenous and non-endogenous molecules from the 1950s to date were reviewed. In this study, the blood capillary types in different tissues and organs were classified on the basis of the physiologic upper limits of pore size to the transvascular flow of lipid-insoluble molecules. Blood capillaries were classified as non-sinusoidal or sinusoidal on the basis of capillary wall basement membrane layer continuity or lack thereof. Non-sinusoidal blood capillaries were further sub-classified as non-fenestrated or fenestrated based on the absence or presence of endothelial cells with fenestrations. The sinusoidal blood capillaries of the liver, myeloid (red) bone marrow, and spleen were sub-classified as reticuloendothelial or non-reticuloendothelial based on the phago-endocytic capacity of the endothelial cells. Results The physiologic upper limit of pore size for transvascular flow across capillary walls of non-sinusoidal non-fenestrated blood capillaries is less than 1 nm for those with interendothelial cell clefts lined with zona occludens junctions (i.e. brain and spinal cord), and approximately 5 nm for those with clefts lined with macula occludens junctions (i.e. skeletal muscle). The physiologic upper limit of pore size for transvascular flow across the capillary walls of non-sinusoidal fenestrated blood capillaries with diaphragmed fenestrae ranges between 6 and 12 nm (i.e. exocrine and endocrine glands); whereas, the physiologic upper limit of pore size for transvascular flow across the capillary walls of non-sinusoidal fenestrated capillaries with open 'non-diaphragmed' fenestrae is approximately 15 nm (kidney glomerulus). In the case of the sinusoidal reticuloendothelial blood capillaries of myeloid bone marrow, the transvascular transport of non-endogenous macromolecules larger than 5 nm into the bone marrow interstitial space takes place via reticuloendothelial cell-mediated phago-endocytosis and transvascular release, which is the case for systemic bone marrow imaging agents as large as 60 nm in diameter. Conclusions The physiologic upper limit of pore size in the capillary walls of most non-sinusoidal blood capillaries to the transcapillary passage of lipid-insoluble endogenous and non-endogenous macromolecules ranges between 5 and 12 nm. Therefore, macromolecules larger than the physiologic upper limits of pore size in the non-sinusoidal blood capillary types generally do not accumulate within the respective tissue interstitial spaces.

  8. The influence of sampling design on tree-ring-based quantification of forest growth.

    Science.gov (United States)

    Nehrbass-Ahles, Christoph; Babst, Flurin; Klesse, Stefan; Nötzli, Magdalena; Bouriaud, Olivier; Neukom, Raphael; Dobbertin, Matthias; Frank, David

    2014-09-01

    Tree-rings offer one of the few possibilities to empirically quantify and reconstruct forest growth dynamics over years to millennia. As the scientific community employing tree-ring parameters grows, recent research has suggested that commonly applied sampling designs (i.e. how and which trees are selected for dendrochronological sampling) may introduce considerable biases in quantifications of forest responses to environmental change. To date, a systematic assessment of the consequences of sampling design on dendroecological and dendroclimatological conclusions has not yet been performed. Here, we investigate potential biases by sampling a large population of trees and replicating diverse sampling designs. This is achieved by retroactively subsetting the population and specifically testing for biases emerging for climate reconstruction, growth response to climate variability, long-term growth trends, and quantification of forest productivity. We find that commonly applied sampling designs can impart systematic biases of varying magnitude to any type of tree-ring-based investigation, independent of the total number of samples considered. Quantifications of forest growth and productivity are particularly susceptible to biases, whereas growth responses to short-term climate variability are less affected by the choice of sampling design. The world's most frequently applied sampling design, focusing on dominant trees only, can bias absolute growth rates by up to 459% and trends in excess of 200%. Our findings challenge paradigms where a subset of samples is typically considered to be representative of the entire population. The only two sampling strategies meeting the requirements for all types of investigations are (i) sampling of all individuals within a fixed area; and (ii) fully randomized selection of trees. This result argues for the consistent implementation of a widely applicable sampling design to simultaneously reduce uncertainties in tree-ring-based quantifications of forest growth and increase the comparability of datasets beyond individual studies, investigators, laboratories, and geographical boundaries. PMID:24729489
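
    The dominant-tree bias is easy to reproduce in a toy simulation (hypothetical numbers, not the study's data): when ring-width increment scales with tree size, sampling only the largest trees overestimates stand-level growth no matter how many rings are measured.

        import numpy as np

        rng = np.random.default_rng(1)

        # Hypothetical stand: 1000 trees with size-dependent annual increment.
        diameter = rng.lognormal(mean=3.0, sigma=0.4, size=1000)    # cm
        growth = 0.02 * diameter + rng.normal(0.0, 0.1, size=1000)  # cm/yr
        true_mean = growth.mean()

        # Design 1: the 20 largest ("dominant") trees only.
        dominant = growth[np.argsort(diameter)[-20:]]
        # Design 2: fully randomized selection of 20 trees.
        randomized = rng.choice(growth, size=20, replace=False)

        for name, sample in [("dominant-only", dominant), ("randomized", randomized)]:
            bias = 100.0 * (sample.mean() - true_mean) / true_mean
            print(f"{name:13s}: mean {sample.mean():.3f} cm/yr (bias {bias:+.0f}%)")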

  9. Quantification of collagen contraction in three-dimensional cell culture.

    Science.gov (United States)

    Kopanska, Katarzyna S; Bussonnier, Matthias; Geraldo, Sara; Simon, Anthony; Vignjevic, Danijela; Betz, Timo

    2015-01-01

    Many different cell types, including fibroblasts, smooth muscle cells, endothelial cells, and cancer cells, exert traction forces on the fibrous components of the extracellular matrix. This can be observed as matrix contraction both macro- and microscopically in three-dimensional (3D) tissue models such as collagen type I gels. The quantification of local contraction at the micron scale, including its directionality and speed, in correlation with other parameters such as cell invasion or local protein or gene expression, can provide useful information for studying wound healing, organism development, and cancer metastasis. In this article, we present a set of tools to quantify the flow dynamics of collagen contraction induced by cells migrating out of a multicellular cancer spheroid into a 3D collagen matrix. We adapted a pseudo-speckle technique that can be applied to bright-field and fluorescent microscopy time series. The image analysis presented here is based on in-house software developed in the Matlab (Mathworks) programming environment. The analysis program is freely available from GitHub following the link: http://dx.doi.org/10.5281/zenodo.10116. This tool provides an automated technique to measure collagen contraction that can be utilized in different 3D cellular systems. PMID:25640438
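
    The authors' Matlab tool is linked above and is not reproduced here; as a generic illustration of the displacement-estimation step that underlies speckle-type flow measurements, the integer-pixel shift between two frames can be recovered by FFT phase correlation:

        import numpy as np

        def phase_correlation_shift(frame_a, frame_b):
            """Integer-pixel displacement of frame_b relative to frame_a."""
            Fa, Fb = np.fft.fft2(frame_a), np.fft.fft2(frame_b)
            cross = Fb * np.conj(Fa)
            cross /= np.abs(cross) + 1e-12        # normalized cross-power spectrum
            corr = np.fft.ifft2(cross).real
            peak = np.unravel_index(np.argmax(corr), corr.shape)
            # Peaks in the upper half of each axis wrap to negative shifts.
            return tuple(p - s if p > s // 2 else p for p, s in zip(peak, corr.shape))

        rng = np.random.default_rng(2)
        a = rng.random((128, 128))                 # synthetic speckle frame
        b = np.roll(a, shift=(3, -5), axis=(0, 1)) # frame after a rigid "flow" step
        print(phase_correlation_shift(a, b))       # -> (3, -5)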

  10. Estimating influence of cofragmentation on peptide quantification and identification in iTRAQ experiments by simulating multiplexed spectra.

    Science.gov (United States)

    Li, Honglan; Hwang, Kyu-Baek; Mun, Dong-Gi; Kim, Hokeun; Lee, Hangyeore; Lee, Sang-Won; Paek, Eunok

    2014-07-01

    Isobaric tag-based quantification such as iTRAQ and TMT is a promising approach to mass spectrometry-based quantification in proteomics, as it provides wide proteome coverage with greatly increased experimental throughput. However, it is known to suffer from inaccurate quantification and identification of a target peptide due to cofragmentation of multiple peptides, which likely leads to underestimation of differentially expressed peptides (DEPs). A simple method of filtering out cofragmented spectra with less than 100% precursor isolation purity (PIP) would decrease the coverage of iTRAQ/TMT experiments. In order to estimate the impact of cofragmentation on quantification and identification of iTRAQ-labeled peptide samples, we generated multiplexed spectra with varying degrees of PIP by mixing pairs of MS/MS spectra of 100% PIP obtained in global proteome profiling experiments on gastric tumor-normal tissue pair proteomes labeled by 4-plex iTRAQ. Despite cofragmentation, the simulation experiments showed that more than 99% of multiplexed spectra with PIP greater than 80% were correctly identified by three different database search engines: MODa, MS-GF+, and Proteome Discoverer. Using the multiplexed spectra that had been correctly identified, we estimated the effect of cofragmentation on peptide quantification. However, in 74% of the multiplexed spectra the cancer-to-normal expression ratio was compressed, and a fair number of spectra showed the "ratio inflation" phenomenon. On the basis of the estimated distribution of distortions on quantification, we were able to calculate cutoff values for DEP detection from cofragmented spectra, corrected according to a specific PIP and probability of type I (or type II) error. When we applied these corrected cutoff values to real cofragmented spectra with PIP larger than or equal to 70%, we were able to identify reliable DEPs by removing about 25% of DEPs, which are highly likely to be false positives. Our experimental results provide useful insight into the effect of cofragmentation on isobaric tag-based quantification methods. The simulation procedure, as well as the corrected cutoff calculation method, could be adopted for quantifying the effect of cofragmentation and reducing false positives (or false negatives) in DEP identification for general quantification experiments based on isobaric labeling techniques. PMID:24918111
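
    The ratio-compression mechanism can be reproduced with a two-peptide mixing model (all intensities hypothetical): the observed reporter-ion ratio is an intensity-weighted blend of the target's and the contaminant's ratios, pulled toward the contaminant's ratio as PIP decreases.

        def observed_ratio(target_ratio, contaminant_ratio, pip):
            """Reporter-ion ratio seen when a target peptide (cancer/normal =
            target_ratio) is cofragmented with a contaminant peptide, at a given
            precursor isolation purity (pip = fraction of target ion current).
            Both peptides are normalized to intensity 1.0 in the normal channel."""
            cancer = pip * target_ratio + (1.0 - pip) * contaminant_ratio
            normal = pip * 1.0 + (1.0 - pip) * 1.0
            return cancer / normal

        for pip in (1.0, 0.9, 0.8, 0.7):
            r = observed_ratio(target_ratio=4.0, contaminant_ratio=1.0, pip=pip)
            print(f"PIP {pip:.0%}: true ratio 4.0 -> observed {r:.2f}")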

  11. The thermodynamic, electronic and elastic properties of the early-transition-metal diborides with AlB2-type structure: A density functional theory study

    International Nuclear Information System (INIS)

    Highlights: • The thermodynamic characters of TMB2 have been studied for the first time using the QHA method. • WB2 and TaB2 are good candidates for structural application at high temperature. • Most of the early-transition-metal diborides cannot be easily machined. • The correlations between elastic constants and VECs of TMB2 have been discussed. - Abstract: The thermodynamic, electronic and elastic properties of a class of early-transition-metal diborides (TMB2, TM = Sc, Ti, V, Cr, Y, Zr, Nb, Mo, Hf, Ta, W) with AlB2-type structure have been investigated using the quasi-harmonic Debye model and ab initio calculations based on density functional theory. According to the characters of the temperature-dependent bulk modulus and coefficient of thermal expansion, the TMB2 compounds can be divided into three groups. The results also indicate that 4d- and 5d-TMB2 compounds are good high-temperature structural materials. The five independent stiffness coefficients and the bulk and shear moduli of the diborides are obtained and agree well with the available experimental and theoretical data. The correlations between elastic properties and electronic structure are discussed in detail. Due to their high hardness, the VIB-transition-metal diborides with relatively high B/G and B/C44 ratios are still difficult to machine with usual methods.

  12. Rate theory modeling of defect evolution under cascade damage conditions: the influence of vacancy-type cascade remnants and application to the defect production characterization by microstructural analysis

    International Nuclear Information System (INIS)

    Recent computational and experimental studies have confirmed that high energy cascades produce clustered defects of both vacancy- and interstitial-types as well as isolated point defects. However, the production probability, configuration, stability and other characteristics of the cascade clusters are not well understood, despite the fact that clustered defect production would substantially affect the irradiation-induced microstructures and the consequent property changes in a certain range of temperatures and displacement rates. In this work, a model of point defect and cluster evolution in irradiated materials under cascade damage conditions was developed by combining the conventional reaction rate theory and the results from the latest molecular dynamics simulation studies. This paper provides a description of the model and a model-based fundamental investigation of the influence of the configuration, production efficiency and initial size distribution of cascade-produced vacancy clusters. In addition, issues in characterizing cascade-induced defect production by microstructural analysis are discussed using the model. In particular, the determination of cascade vacancy cluster configuration, surviving defect production efficiency and cascade-interaction volume is attempted by analyzing the temperature dependence of the swelling rate and loop growth rate in austenitic steels and model alloys. (author)

  13. Nonequilibrium magnetic properties in a two-dimensional kinetic mixed Ising system within the effective-field theory and Glauber-type stochastic dynamics approach.

    Science.gov (United States)

    Ertaş, Mehmet; Deviren, Bayram; Keskin, Mustafa

    2012-11-01

    Nonequilibrium magnetic properties in a two-dimensional kinetic mixed spin-2 and spin-5/2 Ising system in the presence of a time-varying (sinusoidal) magnetic field are studied within the effective-field theory (EFT) with correlations. The time evolution of the system is described by using Glauber-type stochastic dynamics. The dynamic EFT equations are derived by employing the Glauber transition rates for two interpenetrating square lattices. We investigate the time dependence of the magnetizations for different interaction parameter values in order to find the phases in the system. We also study the thermal behavior of the dynamic magnetizations, the hysteresis loop area, and dynamic correlation. The dynamic phase diagrams are presented in the reduced magnetic field amplitude and reduced temperature plane and we observe that the system exhibits dynamic tricritical and reentrant behaviors. Moreover, the system also displays a double critical end point (B), a zero-temperature critical point (Z), a critical end point (E), and a triple point (TP). We also performed a comparison with the mean-field prediction in order to point out the effects of correlations and found that some of the dynamic first-order phase lines, which are artifacts of the mean-field approach, disappeared. PMID:23214741
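
    For readers unfamiliar with Glauber-type stochastic dynamics, the sketch below simulates a plain spin-1/2 Ising square lattice (not the paper's mixed spin-2/spin-5/2 effective-field system) in a sinusoidal field, using the Glauber flip probability 1/(1 + exp(dE/T)); the cycle-averaged magnetization Q is the dynamic order parameter whose vanishing signals the dynamic phase transition.

        import numpy as np

        rng = np.random.default_rng(3)
        L, J, T = 32, 1.0, 2.0       # lattice size, coupling, temperature
        h0, period = 0.5, 200        # field amplitude and period (in sweeps)
        spins = rng.choice([-1, 1], size=(L, L))

        def glauber_sweep(spins, h):
            """One sweep: attempt L*L single-spin flips with Glauber rates."""
            for _ in range(L * L):
                i, j = rng.integers(L, size=2)
                nb = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
                      + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
                dE = 2.0 * spins[i, j] * (J * nb + h)
                if rng.random() < 1.0 / (1.0 + np.exp(dE / T)):
                    spins[i, j] *= -1

        m = []
        for t in range(2 * period):
            glauber_sweep(spins, h0 * np.sin(2.0 * np.pi * t / period))
            m.append(spins.mean())
        Q = np.mean(m[-period:])     # dynamic order parameter over one field cycle
        print(f"cycle-averaged magnetization Q = {Q:+.3f}")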

  14. Calibration improves uncertainty quantification in production forecasting

    Energy Technology Data Exchange (ETDEWEB)

    McVay, D.A.; Lee, W.J.; Alvarado, M.G.

    2005-07-01

    Despite recent advances in uncertainty quantification, the petroleum industry continues to underestimate the uncertainties associated with reservoir production forecasts. This paper describes a calibration process that can improve quantification of the uncertainties associated with reservoir performance prediction. Existing methods underestimate uncertainty because they fail to account for all, and particularly unknown, factors affecting reservoir performance and because they do not investigate all combinations of reservoir parameter values. However, the primary limitation of existing methods is that their reliability cannot be verified, because testing an uncertainty estimate from existing methods yields only one sample for what is inherently a statistical result. Verification and improvement of uncertainty estimates can be achieved with calibration: comparison of actual performance with previous uncertainty estimates, then using the results to scale subsequent uncertainty estimates. Calibration of uncertainty estimates can be achieved with a more frequent, if not continuous, process of data acquisition, model calibration, model prediction and uncertainty assessment, similar to the process employed in weather forecasting. Improved ability to quantify production forecast uncertainty should result in better investment decision making and, ultimately, increased profitability. (author)
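
    A minimal numeric illustration of the calibration idea (all forecasts hypothetical): check the empirical coverage of past uncertainty intervals against their nominal level, then rescale subsequent intervals by the spread of the standardized errors.

        import numpy as np

        rng = np.random.default_rng(4)
        n = 200
        truth = rng.normal(size=n)
        forecast = truth + rng.normal(scale=1.0, size=n)  # actual error sd = 1.0
        claimed_sd = np.full(n, 0.5)                      # ...but 0.5 was claimed

        z80 = 1.2816                                      # two-sided 80% normal quantile
        err = truth - forecast
        covered = np.abs(err) <= z80 * claimed_sd
        print(f"nominal 80% coverage, empirical {covered.mean():.0%}")

        # Calibration: scale so that past standardized errors have unit spread.
        scale = np.std(err / claimed_sd)
        covered_cal = np.abs(err) <= z80 * scale * claimed_sd
        print(f"after calibration (scale {scale:.2f}): {covered_cal.mean():.0%}")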

  15. The direct l-type resonance spectrum of CF3CCH in the vibrational state v10=3: Extension of the theory of reduction to H6n terms.

    Czech Academy of Sciences Publication Activity Database

    Wötzel, U.; Mäder, H.; Harder, H.; Pracna, Petr; Sarka, K.

    780-781, - (2007), s. 206-221. ISSN 0022-2860 R&D Projects: GA AV ČR 1ET400400410 Institutional research plan: CEZ:AV0Z40400503 Keywords : symmetric top * Fourier transform microwave spectroscopy * direct l-type resonance and rotational spectrum * theory of reduction Subject RIV: CF - Physical ; Theoretical Chemistry Impact factor: 1.486, year: 2007

  16. Uncertainty Quantification for Airfoil Icing using Polynomial Chaos Expansions

    CERN Document Server

    DeGennaro, Anthony M; Martinelli, Luigi

    2014-01-01

    The formation and accretion of ice on the leading edge of a wing can be detrimental to airplane performance. Complicating this reality is the fact that even a small amount of uncertainty in the shape of the accreted ice may result in a large amount of uncertainty in aerodynamic performance metrics (e.g., stall angle of attack). The main focus of this work concerns using the techniques of Polynomial Chaos Expansions (PCE) to quantify icing uncertainty much more quickly than traditional methods (e.g., Monte Carlo). First, we present a brief survey of the literature concerning the physics of wing icing, with the intention of giving a certain amount of intuition for the physical process. Next, we give a brief overview of the background theory of PCE. Finally, we compare the results of Monte Carlo simulations to PCE-based uncertainty quantification for several different airfoil icing scenarios. The results are in good agreement and confirm that PCE methods are much more efficient for the canonical airfoil icing uncertainty quantification problems considered.
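
    As a self-contained illustration of the PCE machinery (with a toy one-dimensional response standing in for the icing and flow solvers), the mean and variance of a function of a standard-normal parameter can be recovered from a probabilists'-Hermite expansion computed by Gauss-Hermite quadrature and checked against Monte Carlo:

        import numpy as np
        from numpy.polynomial.hermite_e import hermegauss, hermeval
        from math import factorial, sqrt, pi

        def metric(xi):
            """Hypothetical performance metric vs. normalized ice-shape parameter."""
            return np.exp(0.3 * xi) + 0.1 * xi**2

        # PCE coefficients c_k = E[f(xi) He_k(xi)] / k! by Gauss-Hermite quadrature.
        order, nq = 8, 40
        x, w = hermegauss(nq)              # weight exp(-x^2/2), sum(w) = sqrt(2*pi)
        fx = metric(x)
        c = [np.sum(w * fx * hermeval(x, np.eye(order + 1)[k]))
             / (sqrt(2.0 * pi) * factorial(k)) for k in range(order + 1)]

        pce_mean = c[0]
        pce_var = sum(c[k] ** 2 * factorial(k) for k in range(1, order + 1))

        mc = metric(np.random.default_rng(5).normal(size=200_000))
        print(f"mean: PCE {pce_mean:.4f}  MC {mc.mean():.4f}")
        print(f"var : PCE {pce_var:.4f}  MC {mc.var():.4f}")

    The PCE moments here use 40 model evaluations versus 200,000 for Monte Carlo, which illustrates the kind of efficiency gain the paper reports for its icing scenarios.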

  17. Processing and quantification of x-ray energy dispersive spectra in the Analytical Electron Microscope

    International Nuclear Information System (INIS)

    Spectral processing in x-ray energy dispersive spectroscopy deals with the extraction of characteristic signals from experimental data. In this text, the four basic procedures for this methodology are reviewed and their limitations outlined. Quantification, on the other hand, deals with the interpretation of the information obtained from spectral processing. Here the limitations are for the most part instrumental in nature. The prospect of higher-voltage operation does not, in theory, present any new problems and may in fact prove more desirable, assuming that electron damage effects do not preclude analysis. 28 refs., 6 figs

  18. A critique of methods in the quantification of risks, costs and benefits in the Societal choice of energy options

    International Nuclear Information System (INIS)

    A discussion is presented on the assessment of the risks, costs and benefits of proposed nuclear power plants and their alternatives. Topics discussed include: information adequacy and simplifying assumptions; the social cost of information; the quantification of subjective values; the various quantitative methods such as statistical and probability theory; engineering or scientific estimation; the modeling of the ecological, economic and social effects of alternative energy sources. (U.K.)

  19. Gauge Theories of Gravitation

    OpenAIRE

    Blagojević, Milutin; Hehl, Friedrich W.

    2012-01-01

    During the last five decades, gravity, as one of the fundamental forces of nature, has been formulated as a gauge field theory of the Weyl-Cartan-Yang-Mills type. The resulting theory, the Poincaré gauge theory of gravity, encompasses Einstein's gravitational theory as well as the teleparallel theory of gravity as subcases. In general, the spacetime structure is enriched by Cartan's torsion and the new theory can accommodate fermionic matter and its spin in a perfectly natural way.

  20. Review of Hydroelasticity Theories

    DEFF Research Database (Denmark)

    Chen, Xu-jun; Wu, You-sheng

    2006-01-01

    Existing hydroelastic theories are reviewed. The theories are classified into different types: two-dimensional linear theory, two-dimensional nonlinear theory, three-dimensional linear theory and three-dimensional nonlinear theory. Applications to the analysis of very large floating structures (VLFS) are reviewed and discussed in detail. Special emphasis is placed on papers from China and Japan (in native languages) as these papers are not generally publicly known in the rest of the world.

  1. Using psychological theory to understand the clinical management of type 2 diabetes in Primary Care: a comparison across two European countries

    Directory of Open Access Journals (Sweden)

    Johnston Marie

    2009-08-01

    Full Text Available Abstract Background Long term management of patients with Type 2 diabetes is well established within Primary Care. However, despite extensive efforts to implement high quality care, both service provision and patient health outcomes remain sub-optimal. Several recent studies suggest that psychological theories about individuals' behaviour can provide a valuable framework for understanding generalisable factors underlying health professionals' clinical behaviour. In the context of the team management of chronic diseases such as diabetes, however, the application of such models is less well established. The aim of this study was to identify motivational factors underlying health professional teams' clinical management of diabetes using a psychological model of human behaviour. Methods A predictive questionnaire based on the Theory of Planned Behaviour (TPB) investigated health professionals' (HPs') cognitions (e.g., beliefs, attitudes and intentions) about the provision of two aspects of care for patients with diabetes: prescribing statins and inspecting feet. General practitioners and practice nurses in England and the Netherlands completed parallel questionnaires, cross-validated for equivalence in English and Dutch. Behavioural data were practice-level patient-reported rates of foot examination and use of statin medication. Relationships between the cognitive antecedents of behaviour proposed by the TPB and healthcare teams' clinical behaviour were explored using multiple regression. Results In both countries, attitude and subjective norm were important predictors of health professionals' intention to inspect feet (Attitude: beta = .40; Subjective Norm: beta = .28; Adjusted R2 = .34) and to prescribe statins (Adjusted R2 = .40). Conclusion Using the TPB, we identified modifiable factors underlying health professionals' intentions to perform two clinical behaviours, providing a rationale for the development of targeted interventions. However, we did not observe a relationship between health professionals' intentions and our proxy measure of team behaviour. Significant methodological issues were highlighted concerning the use of models of individual behaviour to explain behaviours performed by teams. In order to investigate clinical behaviours performed by teams, it may be necessary to develop measures that reflect the collective cognitions of the members of the team to facilitate the application of these theoretical models to team behaviours.

  2. Thermal behavior of dynamic magnetizations, hysteresis loop areas and correlations of a cylindrical Ising nanotube in an oscillating magnetic field within the effective-field theory and the Glauber-type stochastic dynamics approach

    Energy Technology Data Exchange (ETDEWEB)

    Deviren, Bayram, E-mail: bayram.deviren@nevsehir.edu.tr [Department of Physics, Nevsehir University, 50300 Nevsehir (Turkey); Keskin, Mustafa [Department of Physics, Erciyes University, 38039 Kayseri (Turkey)

    2012-02-20

    The dynamical aspects of a cylindrical Ising nanotube in the presence of a time-varying magnetic field are investigated within the effective-field theory with correlations and the Glauber-type stochastic approach. The temperature dependence of the dynamic magnetizations, dynamic total magnetization, hysteresis loop areas and correlations is investigated in order to characterize the nature of the dynamic transitions as well as to obtain the dynamic phase transition temperatures and compensation behaviors. Some characteristic phenomena are found depending on the ratio of the physical parameters in the surface shell and core, i.e., five different types of compensation behaviors in the Néel classification nomenclature exist in the system. -- Highlights: • Kinetic cylindrical Ising nanotube is investigated using the effective-field theory. • The dynamic magnetizations, hysteresis loop areas and correlations are calculated. • The effects of the exchange interactions have been studied in detail. • Five different types of compensation behaviors have been found. • Some characteristic phenomena are found depending on the ratio of physical parameters.

  4. Survey and Evaluate Uncertainty Quantification Methodologies

    Energy Technology Data Exchange (ETDEWEB)

    Lin, Guang; Engel, David W.; Eslinger, Paul W.

    2012-02-01

    The Carbon Capture Simulation Initiative (CCSI) is a partnership among national laboratories, industry and academic institutions that will develop and deploy state-of-the-art computational modeling and simulation tools to accelerate the commercialization of carbon capture technologies from discovery to development, demonstration, and ultimately the widespread deployment to hundreds of power plants. The CCSI Toolset will provide end users in industry with a comprehensive, integrated suite of scientifically validated models with uncertainty quantification, optimization, risk analysis and decision making capabilities. The CCSI Toolset will incorporate commercial and open-source software currently in use by industry and will also develop new software tools as necessary to fill technology gaps identified during execution of the project. The CCSI Toolset will (1) enable promising concepts to be more quickly identified through rapid computational screening of devices and processes; (2) reduce the time to design and troubleshoot new devices and processes; (3) quantify the technical risk in taking technology from laboratory-scale to commercial-scale; and (4) stabilize deployment costs more quickly by replacing some of the physical operational tests with virtual power plant simulations. The goal of CCSI is to deliver a toolset that can simulate the scale-up of a broad set of new carbon capture technologies from laboratory scale to full commercial scale. To provide a framework around which the toolset can be developed and demonstrated, we will focus on three Industrial Challenge Problems (ICPs) related to carbon capture technologies relevant to U.S. pulverized coal (PC) power plants. Post combustion capture by solid sorbents is the technology focus of the initial ICP (referred to as ICP A). The goal of the uncertainty quantification (UQ) task (Task 6) is to provide a set of capabilities to the user community for the quantification of uncertainties associated with the carbon capture processes. As such, we will develop, as needed and beyond existing capabilities, a suite of robust and efficient computational tools for UQ to be integrated into a CCSI UQ software framework.

  5. Identification and quantification of Rhizoctonia solani and R. oryzae using real-time polymerase chain reaction.

    Science.gov (United States)

    Okubara, P A; Schroeder, K L; Paulitz, T C

    2008-07-01

    Rhizoctonia solani and R. oryzae are the principal causal agents of Rhizoctonia root rot in dryland cereal production systems of the Pacific Northwest. To facilitate the identification and quantification of these pathogens in agricultural samples, we developed SYBR Green I-based real-time quantitative polymerase chain reaction (Q-PCR) assays specific to the internal transcribed spacers ITS1 and ITS2 of the nuclear ribosomal DNA of R. solani and R. oryzae. The assays were diagnostic for R. solani AG-2-1, AG-8, and AG-10, three genotypes of R. oryzae, and an AG-I-like binucleate Rhizoctonia species. Quantification was reproducible at or below a cycle threshold (Ct) of 33, or 2 to 10 fg of mycelial DNA from cultured fungi, 200 to 500 fg of pathogen DNA from root extracts, and 20 to 50 fg of pathogen DNA from soil extracts. However, pathogen DNA could be specifically detected in all types of extracts at about 100-fold below the quantification levels. Soils from Ritzville, WA, showing acute Rhizoctonia bare patch harbored 9.4 to 780 pg of R. solani AG-8 DNA per gram of soil. Blastn, primer-template duplex stability, and phylogenetic analyses predicted that the Q-PCR assays will be diagnostic for isolates from Australia, Israel, Japan, and other countries. PMID:18943261
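
    Quantification in such Q-PCR assays proceeds through a standard curve relating cycle threshold (Ct) to known template amounts; a sketch with hypothetical calibration values:

        import numpy as np

        # Hypothetical standard curve: Ct measured for known DNA amounts.
        dna_fg = np.array([1e5, 1e4, 1e3, 1e2, 1e1])   # template DNA (fg)
        ct = np.array([16.1, 19.5, 22.8, 26.2, 29.7])  # cycle thresholds

        # Fit Ct = slope * log10(quantity) + intercept.
        slope, intercept = np.polyfit(np.log10(dna_fg), ct, 1)
        efficiency = 10.0 ** (-1.0 / slope) - 1.0      # amplification efficiency

        def quantify(ct_sample):
            """Invert the standard curve (valid below the Ct ~33 cut-off noted above)."""
            return 10.0 ** ((ct_sample - intercept) / slope)

        print(f"slope {slope:.2f}, efficiency {efficiency:.0%}")
        print(f"Ct 24.5 -> {quantify(24.5):,.0f} fg template DNA")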

  6. LC-MS3 quantification of O-glycopeptides in human serum.

    Science.gov (United States)

    Sanda, Miloslav; Pompach, Petr; Benicky, Julius; Goldman, Radoslav

    2013-08-01

    Quantitative analysis of site-specific glycosylation of proteins is a challenging part of glycoproteomic research. Multiple enrichment steps are typically required in the analytical workflows to achieve adequate characterization of the site-specific glycoforms. In spite of recent advances, quantitative workflows need further development. Here, we report a selective and sensitive multiple reaction monitoring mass spectrometric workflow based on MS3 (MS2 followed by further fragmentation in the linear ion-trap analyzer) for direct analysis of O-glycopeptides in a difficult matrix such as serum. Method optimization was performed using two serum glycoproteins, hemopexin (HPX) and sex hormone binding globulin. With the optimized MS3 workflow, we were able to analyze major glycoforms of HPX directly in human serum. Quantification of the minor glycoforms of HPX and glycoforms of sex hormone binding globulin required enrichment of the protein, because these analytes were below the sensitivity of the 4000 quadrupole ion trap hybrid mass spectrometer in the complex serum background. In conclusion, we present a quantitative method for site-specific analysis of O-glycosylation with general applicability to mucin-type glycoproteins. Our results document reliable application of the optimized MS3 multiple reaction monitoring workflow to the relative quantification of O-glycosylation microheterogeneity of HPX in human serum. Introduction of isotopically labeled standards would be desirable to achieve absolute quantification of the analytes. The possibility to analyze serum samples directly represents a significant improvement of quantitative glycopeptide workflows, with potential for use in clinical applications. PMID:23765987

  7. Ecosystem Service Potentials, Flows and Demands – Concepts for Spatial Localisation, Indication and Quantification

    Directory of Open Access Journals (Sweden)

    Benjamin Burkhard

    2014-06-01

    Full Text Available The high variety of ecosystem service categorisation systems, assessment frameworks, indicators, quantification methods and spatial localisation approaches allows scientists and decision makers to harness experience, data, methods and tools. On the other hand, this variety of concepts and disagreements among scientists hamper an integration of ecosystem services into contemporary environmental management and decision making. In this article, the current state of the art of ecosystem service science regarding spatial localisation, indication and quantification of multiple ecosystem service supply and demand is reviewed and discussed. Concepts and tables for regulating, provisioning and cultural ecosystem service definitions, distinguishing between ecosystem service potential supply (stocks), flows (real supply) and demands, as well as related indicators for quantification, are provided. Furthermore, spatial concepts of service providing units, benefitting areas, spatial relations, rivalry, and spatial and temporal scales are elaborated. Finally, matrices linking CORINE land cover types to ecosystem service potentials, flows, demands and budget estimates are provided. The matrices show that ecosystem service potentials of landscapes differ from their flows, especially for provisioning ecosystem services.

  8. Recurrence quantification analysis of global stock markets

    Science.gov (United States)

    Bastos, João A.; Caiado, Jorge

    2011-04-01

    This study investigates the presence of deterministic dependencies in international stock markets using recurrence plots and recurrence quantification analysis (RQA). The results are based on a large set of free float-adjusted market capitalization stock indices, covering a period of 15 years. The statistical tests suggest that the dynamics of stock prices in emerging markets is characterized by higher values of RQA measures when compared to their developed counterparts. The behavior of stock markets during critical financial events, such as the burst of the technology bubble, the Asian currency crisis, and the recent subprime mortgage crisis, is analyzed by performing RQA in sliding windows. It is shown that during these events stock markets exhibit a distinctive behavior that is characterized by temporary decreases in the fraction of recurrence points contained in diagonal and vertical structures.
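
    A compact sketch of the two most common RQA measures, recurrence rate (RR) and determinism (DET), for a one-dimensional series (no embedding, main diagonal kept for simplicity, parameter values illustrative):

        import numpy as np

        def recurrence_matrix(x, eps):
            """Thresholded pairwise-distance matrix of a scalar series."""
            return (np.abs(x[:, None] - x[None, :]) < eps).astype(int)

        def rqa(R, lmin=2):
            n = R.shape[0]
            rr = R.sum() / n**2                     # recurrence rate
            det_points = 0                          # points on diagonals >= lmin
            for k in range(-(n - 1), n):
                diag = np.diagonal(R, k)
                edges = np.flatnonzero(np.diff(np.r_[0, diag, 0]))
                runs = np.diff(edges)[::2]          # lengths of runs of ones
                det_points += runs[runs >= lmin].sum()
            return rr, det_points / max(R.sum(), 1)

        rng = np.random.default_rng(6)
        t = np.linspace(0.0, 20.0 * np.pi, 500)
        for name, series in [("sine", np.sin(t)), ("noise", rng.normal(size=500))]:
            rr, det = rqa(recurrence_matrix(series, eps=0.1))
            print(f"{name:5s}: RR {rr:.3f}  DET {det:.3f}")

    Deterministic dynamics fill the plot with long diagonal structures (high DET), which is the signature such studies track in sliding windows around crisis periods.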

  9. Homogeneity of Inorganic Glasses : Quantification and Ranking

    DEFF Research Database (Denmark)

    Jensen, Martin; Zhang, L.

    2011-01-01

    Homogeneity of glasses is a key factor determining their physical and chemical properties and overall quality. However, quantification of the homogeneity of a variety of glasses is still a challenge for glass scientists and technologists. Here, we show a simple approach by which the homogeneity of different glass products can be quantified and ranked. This approach is based on determination of both the optical intensity and the dimension of the striations in glasses. These two characteristic values are obtained using the image processing method established recently. The logarithmic ratio between the dimension and the intensity is used to quantify and rank the homogeneity of glass products. Compared with the refractive index method, the image processing method has a wider detection range and a lower statistical uncertainty.

  10. Common cause failures: identification and quantification

    International Nuclear Information System (INIS)

    In the context of Probabilistic Safety Analysis (PSA), treatment of Common Cause Failures (CCFs) may have critical influence on the credibility of the studies, on the question of completeness and on interpretation of results. A Nordic project 'Risk analysis', initiated in 1985, has among its main objectives to perform in-depth studies of dependent failures and human interactions, and generally to investigate assumptions and limitations of current PSAs. During the first phase of the project the activities concentrated on performing a Benchmark Exercise (BE) concerning CCF-data. Preliminary results of the exercise are presented in this report. The main findings concern both procedures for search for CCFs, use of classification systems, and quantification of CCF-contributions by means of direct assessment and use of parametric models

  11. Uncertainty quantification of an Aviation Environmental Toolsuite

    International Nuclear Information System (INIS)

    This paper describes uncertainty quantification (UQ) of a complex system computational tool that supports policy-making for aviation environmental impact. The paper presents the methods needed to create a tool that is “UQ-enabled” with a particular focus on how to manage the complexity of long run times and massive input/output datasets. These methods include a process to quantify parameter uncertainties via data, documentation and expert opinion, creating certified surrogate models to accelerate run-times while maintaining confidence in results, and executing a range of mathematical UQ techniques such as uncertainty propagation and global sensitivity analysis. The results and discussion address aircraft performance, aircraft noise, and aircraft emissions modeling

  12. Quantification Methods of Management Skills in Shipping

    Directory of Open Access Journals (Sweden)

    Riana Iren RADU

    2012-04-01

    Full Text Available Romania cannot overcome the financial crisis without business growth, without finding opportunities for economic development and without attracting investment into the country. Successful managers find ways to overcome situations of uncertainty. The purpose of this paper is to determine the managerial skills developed by the Romanian fluvial shipping company NAVROM (hereinafter CNFR NAVROM SA), compared with ten other major competitors in the same domain, using financial information on these companies for the years 2005-2010. To carry out this work, quantification methods for managerial skills are applied to CNFR NAVROM SA Galati, Romania, including the analysis of financial performance management based on profitability ratios, net profit margin, supplier management, and turnover.

  13. N = 1 Field Theory Duality from M-theory

    OpenAIRE

    Schmaltz, Martin; Sundrum, Raman

    1997-01-01

    We investigate Seiberg's N=1 field theory duality for four-dimensional supersymmetric QCD with the M-theory 5-brane. We find that the M-theory configuration for the magnetic dual theory arises via a smooth deformation of the M-theory configuration for the electric theory. The creation of Dirichlet 4-branes as Neveu-Schwarz 5-branes are passed through each other in Type IIA string theory is given a nice derivation from M-theory.

  14. In vivo cell tracking and quantification method in adult zebrafish

    Science.gov (United States)

    Zhang, Li; Alt, Clemens; Li, Pulin; White, Richard M.; Zon, Leonard I.; Wei, Xunbin; Lin, Charles P.

    2012-03-01

    Zebrafish have become a powerful vertebrate model organism for drug discovery, cancer and stem cell research. A recently developed transparent adult zebrafish, a double pigmentation mutant called casper, provides unparalleled imaging power for in vivo longitudinal analysis of biological processes at an anatomic resolution not readily achievable in murine or other systems. In this paper we introduce an optical method for simultaneous visualization and cell quantification, which combines laser scanning confocal microscopy (LSCM) and in vivo flow cytometry (IVFC). The system is designed specifically for non-invasive tracking of both stationary and circulating cells in adult casper zebrafish, under physiological conditions in the same fish over time. The confocal imaging part of this system serves the dual purposes of imaging fish tissue microstructure and providing a 3D navigation tool to locate a suitable vessel for circulating cell counting. The multi-color, multi-channel instrument allows the detection of multiple cell populations or different tissues or organs simultaneously. We demonstrate initial testing of this novel instrument by imaging vasculature and tracking circulating cells in CD41:GFP/Gata1:DsRed transgenic casper fish whose thrombocytes/erythrocytes express green and red fluorescent proteins. Circulating fluorescent cell incidents were recorded and counted repeatedly over time and in different types of vessels. Great application opportunities in cancer and stem cell research are discussed.

  15. Quantification of bronchial dimensions at MDCT using dedicated software

    International Nuclear Information System (INIS)

    This study aimed to assess the feasibility of quantification of bronchial dimensions at MDCT using dedicated software (BronCare). We evaluated the reliability of the software to segment the airways and defined criteria ensuring accurate measurements. BronCare was applied on two successive examinations in 10 mild asthmatic patients. Acquisitions were performed at pneumotachographically controlled lung volume (65% TLC), with reconstructions focused on the right lung base. Five validation criteria were imposed: (1) bronchus type: segmental or subsegmental; (2) lumen area (LA) > 4 mm2; (3) bronchus length (Lg) > 7 mm; (4) confidence index (CI), giving the percentage of the bronchus not abutted by a vessel, greater than 55% for validation of wall area (WA); and (5) a minimum of 10 contiguous cross-sectional images fulfilling the criteria. A complete segmentation procedure on both acquisitions made possible an evaluation of LA and WA in 174/223 (78%) and 171/174 (98%) of bronchi, respectively. The validation criteria were met for 56/69 (81%) and for 16/69 (23%) of segmental bronchi and for 73/102 (72%) and 58/102 (57%) of subsegmental bronchi, for LA and WA, respectively. In conclusion, BronCare is reliable for segmenting the airways in clinical practice. The proposed criteria seem appropriate to select bronchi candidates for measurement. (orig.)
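
    The five validation criteria translate directly into a record filter; a sketch with hypothetical field names and values (not BronCare's data model):

        # Hypothetical bronchus measurements; field names are illustrative.
        bronchi = [
            {"type": "segmental",    "lumen_mm2": 6.2, "length_mm": 9.1,
             "ci_pct": 62, "n_slices": 12},
            {"type": "subsegmental", "lumen_mm2": 3.5, "length_mm": 8.0,
             "ci_pct": 70, "n_slices": 15},
            {"type": "segmental",    "lumen_mm2": 5.0, "length_mm": 7.5,
             "ci_pct": 48, "n_slices": 11},
        ]

        def valid_for_lumen_area(b):      # criteria (1), (2), (3) and (5)
            return (b["type"] in ("segmental", "subsegmental")
                    and b["lumen_mm2"] > 4.0 and b["length_mm"] > 7.0
                    and b["n_slices"] >= 10)

        def valid_for_wall_area(b):       # criterion (4) on top of the others
            return valid_for_lumen_area(b) and b["ci_pct"] > 55

        print(sum(valid_for_lumen_area(b) for b in bronchi), "valid for LA")
        print(sum(valid_for_wall_area(b) for b in bronchi), "valid for WA")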

  16. Hydration free energy of hard-sphere solute over a wide range of size studied by various types of solution theories

    OpenAIRE

    Matubayasi, N.; Kinoshita, M.; Nakahara, M.

    2007-01-01

    The hydration free energy of a hard-sphere solute is evaluated over a wide range of sizes using the method of energy representation, the information-theoretic approach, the reference interaction site model, and scaled-particle theory. The former three are distribution function theories, in which the hydration free energy is formulated to reflect the solution structure through distribution functions. The presence of the volume-dependent term is pointed out for the distribution function theories, and the asymptotic behavior of the hydration free energy at large solute sizes is discussed.

  17. Assessment of molecular recognition element for the quantification of human epidermal growth factor using surface plasmon resonance

    Scientific Electronic Library Online (English)

    Ira Amira, Rosti; Ramakrishnan Nagasundara, Ramanan; Tau Chuan, Ling; Arbakariya B, Ariff.

    2013-11-15

    Full Text Available Background: A method for the selection of a suitable molecular recognition element (MRE) for the quantification of human epidermal growth factor (hEGF) using surface plasmon resonance (SPR) is presented. Two types of hEGF antibody, monoclonal and polyclonal, were immobilized on the chip surface and [...] validated for their characteristics and performance in the quantification of hEGF. Validation of this analytical procedure was to demonstrate the stability and suitability of the antibody for the quantification of the target protein. Results: Specificity, accuracy and precision for all samples were within acceptable limits for both antibodies. The affinity and kinetic constants of antibody-hEGF binding were evaluated using a 1:1 Langmuir interaction model. The model fitted well to all binding responses simultaneously. The polyclonal antibody (pAb) has better affinity (KD = 7.39e-10 M) than the monoclonal antibody (mAb) (KD = 9.54e-9 M). Further evaluation of the kinetic constants demonstrated that pAb has a faster reaction rate during sample injection, a slower dissociation rate during buffer injection and a higher level of saturation state than mAb. In addition, pAb has a longer shelf life and a greater number of cycles run. Conclusions: Thus, pAb was more suitable for use as a stable MRE for further quantification work, considering the kinetics, binding rate and shelf-life assessment.
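
    The 1:1 Langmuir model referred to above has a closed-form sensorgram: an exponential approach to steady state at rate kobs = ka*C + kd during analyte injection, followed by exponential decay at rate kd in buffer. A sketch with rate constants of the same order as the reported pAb affinity (values illustrative):

        import numpy as np

        def langmuir_1to1(t, ka, kd, conc, rmax):
            """SPR association and dissociation phases for a 1:1 interaction."""
            kobs = ka * conc + kd
            req = rmax * ka * conc / kobs            # steady-state response
            r_assoc = req * (1.0 - np.exp(-kobs * t))
            r_dissoc = r_assoc[-1] * np.exp(-kd * t)
            return r_assoc, r_dissoc

        t = np.linspace(0.0, 300.0, 301)             # seconds
        ka, kd = 1.0e5, 7.4e-5                       # 1/(M s), 1/s
        ra, rd = langmuir_1to1(t, ka, kd, conc=5e-9, rmax=100.0)
        print(f"KD = {kd / ka:.2e} M")               # -> 7.4e-10 M, pAb-like
        print(f"response at end of injection: {ra[-1]:.1f} RU")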

  18. Identification and quantification of selected chemicals in laser pyrolysis products of mammalian tissues

    Science.gov (United States)

    Spleiss, Martin; Weber, Lothar W.; Meier, Thomas H.; Treffler, Bernd

    1995-01-01

    Liver and muscle tissue have been irradiated with a surgical CO2 laser. The prefiltered fumes were adsorbed on different sorbents (activated charcoal type NIOSH and Carbotrap) and desorbed with different solvents (carbon disulphide and acetone). Analysis was done by gas chromatography/mass spectrometry. An updated list of identified substances is shown. Typical Maillard reaction products, as found in warmed-over flavour, such as aldehydes, aromatics, heterocyclic and sulphur compounds, were detected. Quantification of some toxicologically relevant substances is presented. The amounts of these substances are given in relation to the laser parameters and the different tissues for further toxicological assessment.

  19. Synthesis and Review: Advancing agricultural greenhouse gas quantification

    Science.gov (United States)

    Olander, Lydia P.; Wollenberg, Eva; Tubiello, Francesco N.; Herold, Martin

    2014-07-01

    Reducing emissions of agricultural greenhouse gases (GHGs), such as methane and nitrous oxide, and sequestering carbon in the soil or in living biomass can help reduce the impact of agriculture on climate change while improving productivity and reducing resource use. There is an increasing demand for improved, low cost quantification of GHGs in agriculture, whether for national reporting to the United Nations Framework Convention on Climate Change (UNFCCC), underpinning and stimulating improved practices, establishing crediting mechanisms, or supporting green products. This ERL focus issue highlights GHG quantification to call attention to our existing knowledge and opportunities for further progress. In this article we synthesize the findings of 21 papers on the current state of global capability for agricultural GHG quantification and visions for its improvement. We conclude that strategic investment in quantification can lead to significant global improvement in agricultural GHG estimation in the near term.

  20. Uncertainty Quantification for Production Navier-Stokes Solvers Project

    National Aeronautics and Space Administration — The uncertainty quantification methods developed under this program are designed for use with current state-of-the-art flow solvers developed by and in use at NASA....

  1. Quantification of the sequestration of indium 111 labelled platelets

    International Nuclear Information System (INIS)

    A simple method is proposed for accurate quantification of the splenic and/or hepatic sequestration of 111In-labelled platelets. It could allow a better prediction of the efficacy of splenectomy in idiopathic thrombocytopenic purpura.

  2. The parallel reaction monitoring method contributes to a highly sensitive polyubiquitin chain quantification

    International Nuclear Information System (INIS)

    Highlights: •The parallel reaction monitoring method was applied to ubiquitin quantification. •The ubiquitin PRM method is highly sensitive even in biological samples. •Using the method, we revealed that Ufd4 assembles the K29-linked ubiquitin chain. -- Abstract: Ubiquitylation is an essential posttranslational protein modification that is implicated in a diverse array of cellular functions. Although cells contain eight structurally distinct types of polyubiquitin chains, the detailed function of several chain types, including K29-linked chains, has remained largely unclear. Current mass spectrometry (MS)-based quantification methods are highly inefficient for low-abundance atypical chains, such as K29- and M1-linked chains, in complex mixtures that typically contain highly abundant proteins. In this study, we applied parallel reaction monitoring (PRM), a quantitative, high-resolution MS method, to quantify ubiquitin chains. The ubiquitin PRM method allows us to quantify 100 attomole amounts of all possible ubiquitin chains in cell extracts. Furthermore, we quantified ubiquitylation levels of ubiquitin-proline-β-galactosidase (Ub-P-βgal), a historically known model substrate of the ubiquitin fusion degradation (UFD) pathway. In wild-type cells, Ub-P-βgal is modified with ubiquitin chains consisting of 21% K29- and 78% K48-linked chains. In contrast, K29-linked chains are not detected in UFD4 knockout cells, suggesting that Ufd4 assembles the K29-linked ubiquitin chain(s) on Ub-P-βgal in vivo. Thus, the ubiquitin PRM is a novel, useful, quantitative method for analyzing the highly complicated ubiquitin system

  3. Quantification of mRNA expression by competitive PCR using non-homologous competitors containing a shifted restriction site

    OpenAIRE

    Watzinger, Franz; Ho?rth, Elfriede; Lion, Thomas

    2001-01-01

    Despite the recent introduction of real-time PCR methods, competitive PCR techniques continue to play an important role in nucleic acid quantification because of the significantly lower cost of equipment and consumables. Here we describe a shifted restriction-site competitive PCR (SRS-cPCR) assay based on a modified type of competitor. The competitor fragments are designed to contain a recognition site for a restriction endonuclease that is also present in the target sequence, but at a shifted position.

  4. On the universality of PIV uncertainty quantification by image matching:

    OpenAIRE

    Sciacchitano, A.; Scarano, F.; Wieneke, B.

    2013-01-01

    The topic of uncertainty quantification in particle image velocimetry (PIV) is recognized as very relevant in the experimental fluid mechanics community, especially when dealing with turbulent flows, where PIV plays a prime role as diagnostic tool. The issue is particularly important when PIV is used to assess the validity of results obtained with computational fluid dynamics (CFD). An approach for PIV data uncertainty quantification based on image matching has been introduced by Sciacchitano...

  5. FRANX. Application for analysis and quantification of the APS fire

    International Nuclear Information System (INIS)

    The FRANX application has been developed by EPRI within the Risk and Reliability User Group to facilitate the quantification and updating of the fire APS (it also covers floods and earthquakes). Using the application, fire scenarios are quantified for the plant by integrating the tasks performed during the fire APS. This paper describes the main features of the program that enable the quantification of a fire APS. (Author)

  6. A Micropillar Compression Methodology for Ductile Damage Quantification

    OpenAIRE

    Tasan, Cc; Hoefnagels, Jpm; Geers, Mgd

    2012-01-01

    Microstructural damage evolution is reported to influence significantly the failures of new high-strength alloys. Its accurate quantification is, therefore, critical for (1) microstructure optimization and (2) continuum damage models to predict failures of these materials. As existing methodologies do not fulfill the requirements, there is an active search for an accurate damage quantification methodology. In this article, a new micropillar-compression-based methodology is presented.

  7. Quantification of global myocardial oxygenation in humans: initial experience

    OpenAIRE

    Gropler Robert J; Woodard Pamela K; Lyons Matt; Lesniak Donna; O'Connor Robert; McCommis Kyle S; Zheng Jie

    2010-01-01

    Abstract Purpose To assess the feasibility of our newly developed cardiovascular magnetic resonance (CMR) methods to quantify global and/or regional myocardial oxygen consumption rate (MVO2) at rest and during pharmacologically-induced vasodilation in normal volunteers. Methods A breath-hold T2 quantification method is developed to calculate oxygen extraction fraction (OEF) and MVO2 rate at rest and/or during hyperemia, using a two-compartment model. A previously reported T2 quantification method...

  8. Quantification of protein complexes by blue native electrophoresis.

    Science.gov (United States)

    Heidler, Juliana; Strecker, Valentina; Csintalan, Florian; Bleier, Lea; Wittig, Ilka

    2013-01-01

    Blue native electrophoresis (BNE) is a long-established method for the analysis of native protein complexes. Applications of BNE range from investigating subunit composition, stoichiometry, and assembly of single protein complexes to profiling of whole complexomes. BNE is an indispensable tool for the diagnostic analysis of cells and tissues from patients with mitochondrial disorders or from model organisms. Since functional proteomic studies often require quantification of protein complexes, we describe here different quantification methods subsequent to protein complex separation by BNE. PMID:23996189

  9. Interpretivistic Conception of Quantification: Tool for Enhancing Quality of Life?

    OpenAIRE

    Denis Larrivee; Adriana Gini

    2013-01-01

    Quality of life is fast becoming the standard measure of outcome in clinical trials, residential satisfaction, and educational achievement, to name several social settings, with the consequent proliferation of assessment instruments. Yet its interpretation and definition provoke widespread disagreement, thereby rendering the significance of quantification uncertain. Moreover, quality, or qualia, is philosophically distinct from quantity, or quantitas, and so it is unclear how quantification can be applied to quality of life.

  10. Quantification Model for Estimating Temperature Field Distributions of Apple Fruit

    OpenAIRE

    Zhang, Min; Yang, Le; Zhao, Huizhong; Zhang, Leijie; Zhong, Zhiyou; Liu, Yanling; Chen, Jianhua

    2010-01-01

    A quantification model of transient heat conduction was provided to simulate apple fruit temperature distribution in the cooling process. The model was based on the energy variation at different points of the apple fruit. It took into account the heat exchange of the representative elemental volume, metabolic heat and external heat. The following conclusions could be obtained: first, the quantification model can satisfactorily describe the tendency of apple fruit temperature distribution in the cooling process.
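
    A minimal one-dimensional explicit finite-difference sketch of such a transient-conduction model (constant properties and hypothetical parameter values; the paper's representative-elemental-volume formulation is not reproduced):

        import numpy as np

        alpha = 1.4e-7          # thermal diffusivity of fruit flesh, m^2/s (approx.)
        q_met = 2.0e-5          # metabolic heating rate, K/s (hypothetical)
        L, nx = 0.04, 41        # half-thickness of the fruit (m), grid points
        dx = L / (nx - 1)
        dt = 0.4 * dx**2 / alpha                    # stable explicit time step

        T = np.full(nx, 20.0)                       # initial fruit temperature, C
        T_air = 2.0                                 # cold-store air temperature, C

        for _ in range(int(3600.0 / dt)):           # one hour of cooling
            Tn = T.copy()
            T[1:-1] = (Tn[1:-1]
                       + alpha * dt / dx**2 * (Tn[2:] - 2.0 * Tn[1:-1] + Tn[:-2])
                       + q_met * dt)
            T[0] = T[1]                             # symmetry at the centre
            T[-1] = T_air                           # surface at air temperature

        print(f"centre {T[0]:.1f} C, midpoint {T[nx // 2]:.1f} C after 1 h")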

  11. Quantification of total pigments in citrus essential oils by thermal wave resonant cavity photopyroelectric spectroscopy.

    Science.gov (United States)

    López-Muñoz, Gerardo A; Antonio-Pérez, Aurora; Díaz-Reyes, J

    2015-05-01

    A general theory of thermal wave resonant cavity photopyroelectric spectroscopy (TWRC-PPE) was recently proposed by Balderas-López (2012) for the thermo-optical characterisation of substances in a condensed phase. This theory is used to quantify the total carotenoids and chlorophylls in several folded and un-folded citrus essential oils, to demonstrate the viability of this technique as an alternative analytical method for the quantification of total pigments in citrus oils. An analysis of variance (ANOVA) reveals significant differences (p < 0.05) between oils. The experimental results show that TWRC-PPE spectroscopy can quantify total carotenoid and chlorophyll concentrations in citrus oils up to five times higher than UV-Vis spectroscopy can, without sample preparation or dilution. The optical limits of this technique and possible interferences are also described. PMID:25529658

  12. Uncertainty Quantification with Applications to Engineering Problems

    DEFF Research Database (Denmark)

    Bigoni, Daniele

    2015-01-01

    The systematic quantification of the uncertainties affecting dynamical systems and the characterization of the uncertainty of their outcomes is critical for engineering design and analysis, where risks must be reduced as much as possible. Uncertainties stem naturally from our limitations in measurements, predictions and manufacturing, and we can say that any dynamical system used in engineering is subject to some of these uncertainties. The first part of this work presents an overview of the mathematical framework used in Uncertainty Quantification (UQ) analysis and introduces the spectral tensor-train (STT) decomposition, a novel high-order method for the effective propagation of uncertainties which aims at providing an exponential convergence rate while tackling the curse of dimensionality. The curse of dimensionality is a problem that afflicts many methods based on meta-models, for which the computational cost increases exponentially with the number of inputs of the approximated function – which we will call the dimension in the following. The STT-decomposition is based on the Polynomial Chaos (PC) approximation and the low-rank decomposition of the function describing the Quantity of Interest of the considered problem. The low-rank decomposition is obtained through the discrete tensor-train decomposition, which is constructed using an optimization algorithm for the selection of the relevant points on which the function needs to be evaluated. The selection of these points is informed by the approximated function and thus it is able to adapt to its features. The number of function evaluations needed for the construction grows only linearly with the dimension and quadratically with the rank. In this work we present and use the functional counterpart of this low-rank decomposition and, after proving some auxiliary properties, we apply PC to it, obtaining the STT-decomposition. This allows the decoupling of each dimension, leading to a much cheaper construction of the PC surrogate. In the associated paper, the capabilities of the STT-decomposition are checked on commonly used test functions and on an elliptic problem with random inputs. This work also presents three active research directions aimed at improving the efficiency of the STT-decomposition. In this context, we propose three new strategies: for solving the ordering problem suffered by the tensor-train decomposition, for computing better estimates with respect to the norms usually employed in UQ, and for the anisotropic adaptivity of the method. The second part of this work presents engineering applications of the UQ framework. Both applications are characterized by functions whose evaluation is computationally expensive, and thus the UQ analysis of the associated systems benefits greatly from methods which require few function evaluations. We first consider the propagation of uncertainty and the sensitivity analysis of the non-linear dynamics of railway vehicles with suspension components whose characteristics are uncertain. These analyses are carried out mostly using PC methods, resorting to random sampling methods for comparison and when strictly necessary. The second application of the UQ framework is the propagation of the uncertainties entering a fully non-linear and dispersive model of water waves. This computationally challenging task is tackled with the adoption of state-of-the-art software for its numerical solution and of efficient PC methods. The aim of this study is the construction of stochastic benchmarks on which to test UQ methodologies before they are applied to full-scale problems, where efficient methods are necessary with today's computational resources. An outcome of this work is also the creation of several freely available Python modules for Uncertainty Quantification, which are listed and described in the appendix.
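
    As an illustration of the Polynomial Chaos building block on which the STT-decomposition rests, the sketch below projects a function of a single standard normal input onto probabilists' Hermite polynomials by Gauss quadrature. It is a minimal, one-dimensional example under our own naming, not the thesis code.

```python
import math
import numpy as np
from numpy.polynomial import hermite_e as He

def pc_coefficients(f, order, n_quad=64):
    """Project f(X), X ~ N(0,1), onto probabilists' Hermite polynomials."""
    x, w = He.hermegauss(n_quad)          # nodes/weights for weight exp(-x^2/2)
    w = w / np.sqrt(2.0 * np.pi)          # normalize to the standard normal measure
    fx = f(x)
    return np.array([np.sum(w * fx * He.hermeval(x, [0.0] * k + [1.0]))
                     / math.factorial(k)  # E[He_k(X)^2] = k!
                     for k in range(order + 1)])

# Surrogate of f(X) = exp(X): the PC mean (coefficient 0) should be exp(1/2)
c = pc_coefficients(np.exp, order=8)
print(c[0], np.exp(0.5))   # both close to 1.6487
```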

  13. Motivational predictors of increases in physical activity behaviour, health, and well-being among patients with Diabetes Mellitus Type 2 and cardiovascular disease: Testing self-determination theory in a randomized clinical trial

    OpenAIRE

    Healey, Jeanette

    2013-01-01

    A randomized clinical trial tested an experimental model and the self-determination theory (SDT) process model of changes in physical activity (PA) behaviour, health, and wellbeing. Adult patients (N=137) of both sexes, all diagnosed with diabetes mellitus type 2 and cardiovascular disease, were recruited to a one-year experiment. They were randomly assigned to an organized exercise intervention group or to a non-exercise control group. At baseline and after 12 months we measured the followin...

  14. Electrophoresis Gel Quantification with a Flatbed Scanner and Versatile Lighting from a Screen Scavenged from a Liquid Crystal Display (LCD) Monitor

    Science.gov (United States)

    Yeung, Brendan; Ng, Tuck Wah; Tan, Han Yen; Liew, Oi Wah

    2012-01-01

    The use of different types of stains in the quantification of proteins separated on gels using electrophoresis offers the capability of deriving good outcomes in terms of linear dynamic range, sensitivity, and compatibility with specific proteins. An inexpensive, simple, and versatile lighting system based on liquid crystal display backlighting is…

  15. High-Order Metrics for Model Uncertainty Quantification and Validation

    International Nuclear Information System (INIS)

    It is well known that the true values of measured and computed data are impossible to know exactly because of the various uncontrollable errors and uncertainties arising in the data measurement and interpretation/reduction processes. Hence, all inferences, predictions, engineering computations, and other applications of measured and/or computed data are necessarily based on weighted averages over the possibly true values, with weights indicating the degree of plausibility of each value. Furthermore, combination of data from different sources involves a weighted propagation (e.g., via sensitivities) of all uncertainties, requiring reasoning from incomplete information and using probability theory for extracting optimal values together with 'best-estimate' uncertainties from often sparse, incomplete, error-afflicted, and occasionally discrepant data. The current state-of-the-art data assimilation/model calibration methodologies for large-scale nonlinear systems cannot take into account uncertainties of order higher than second (i.e., covariances), thereby failing to quantify fully the deviations of the problem under consideration from a normal (Gaussian) multivariate distribution. Such deviations would be quantified by the third- and fourth-order moments (skewness and kurtosis) of the model's predicted results (responses). These higher-order moments would be constructed by combining modeling and experimental uncertainties (which also incorporate the corresponding skewness and kurtosis information), using derivatives of the model responses with respect to the model's parameters. This paper presents explicit expressions for the skewness and kurtosis of computed responses, thereby permitting quantification of the deviations of the computed response uncertainties from multivariate normality. In addition, this paper presents a new and most efficient procedure for computing the second-order response derivatives with respect to model parameters using the 'adjoint sensitivity analysis procedure' (ASAP).
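
    To make the role of the third- and fourth-order moments concrete, the following sketch estimates the skewness and excess kurtosis of a sampled model response. It is a generic sampling illustration of the quantities discussed, not the paper's adjoint-based (ASAP) procedure.

```python
import numpy as np

def skewness_kurtosis(samples):
    """Sample skewness and excess kurtosis of a model response."""
    mu = samples.mean()
    sigma = samples.std(ddof=0)
    z = (samples - mu) / sigma
    return (z**3).mean(), (z**4).mean() - 3.0   # both 0 for a Gaussian

rng = np.random.default_rng(0)
response = np.exp(rng.normal(size=100_000))     # log-normal: clearly non-Gaussian
skew, exkurt = skewness_kurtosis(response)
print(f"skewness={skew:.2f}, excess kurtosis={exkurt:.2f}")  # both far from 0
```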

  16. Identification and Quantification of Carbonate Species Using Rock-Eval Pyrolysis

    Directory of Open Access Journals (Sweden)

    Pillot D.

    2013-03-01

    This paper presents a new reliable and rapid method to characterise and quantify carbonates in solid samples, based on monitoring the CO2 flux emitted by the progressive thermal decomposition of carbonates during programmed heating. The different destabilisation peaks allow the different types of carbonates present in the analysed sample to be determined, and the quantification of each peak gives the respective proportions of these carbonate types in the sample. In addition to the chosen procedure presented in this paper, using a standard Rock-Eval 6 pyrolyser, characteristic calibration profiles are also presented for the carbonates most common in nature. This method should allow different types of application in different disciplines, either academic or industrial.

  17. Matrix Theory on Non-Orientable Surfaces

    OpenAIRE

    Zwart, Gysbert

    1997-01-01

    We construct the Matrix theory descriptions of M-theory on the Möbius strip and the Klein bottle. In a limit, these provide the matrix string theories for the CHL string and an orbifold of type IIA string theory.

  18. Perfusion quantification using Gaussian process deconvolution.

    DEFF Research Database (Denmark)

    Andersen, I K; Szymkowiak, A

    2002-01-01

    The quantification of perfusion using dynamic susceptibility contrast MRI (DSC-MRI) requires deconvolution to obtain the residual impulse response function (IRF). In this work, a method using the Gaussian process for deconvolution (GPD) is proposed. The fact that the IRF is smooth is incorporated as a constraint in the method. The GPD method, which automatically estimates the noise level in each voxel, has the advantage that model parameters are optimized automatically. The GPD is compared to singular value decomposition (SVD) using a common threshold for the singular values, and to SVD using a threshold optimized according to the noise level in each voxel. The comparison is carried out using artificial data as well as data from healthy volunteers. It is shown that GPD is comparable to SVD with a variable optimized threshold when determining the maximum of the IRF, which is directly related to the perfusion. GPD provides a better estimate of the entire IRF. As the signal-to-noise ratio (SNR) increases or the time resolution of the measurements increases, GPD is shown to be superior to SVD. This is also found for large distribution volumes.
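
    For reference, a minimal sketch of the truncated-SVD deconvolution that GPD is compared against: build the convolution matrix from the arterial input function, discard singular values below a threshold, and recover the IRF. The synthetic curves, threshold and all names are our assumptions.

```python
import numpy as np
from scipy.linalg import toeplitz

def svd_deconvolve(aif, conc, dt, rel_threshold=0.05):
    """Recover the IRF from tissue curve = dt * conv(aif, irf) via truncated SVD."""
    A = dt * toeplitz(aif, np.zeros_like(aif))   # lower-triangular convolution matrix
    U, s, Vt = np.linalg.svd(A)
    s_inv = np.where(s > rel_threshold * s[0], 1.0 / s, 0.0)  # zero small values
    return Vt.T @ (s_inv * (U.T @ conc))

# Synthetic example: gamma-variate AIF, exponential residue function
dt = 1.0
t = np.arange(0, 60, dt)
aif = (t / 6.0)**3 * np.exp(-t / 2.0)
irf_true = np.exp(-t / 8.0)                      # maximum relates to perfusion
conc = dt * np.convolve(aif, irf_true)[:t.size]
irf_est = svd_deconvolve(aif, conc, dt)
print("recovered IRF maximum:", irf_est.max())   # roughly 1 for noise-free data
```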

  19. Quantification of Zolpidem in Canine Plasma

    Directory of Open Access Journals (Sweden)

    Mario Giorgi

    2012-01-01

    Problem statement: Zolpidem is a non-benzodiazepine hypnotic agent currently used in human medicine. In contrast to benzodiazepines, zolpidem preferentially binds to the GABAA-complex ω1 receptors while poorly interacting with the other ω receptor complexes. Recent studies have suggested that zolpidem may be used to initiate sedation and diminish severe anxiety responses in dogs. The aim of the present study is to develop and validate a new HPLC-FL based method to quantify zolpidem in canine plasma. Approach: Several parameters, both in the extraction and in the detection method, were evaluated. The applicability of the method was determined by administering zolpidem to one dog. Results: The final mobile phase was acetonitrile:KH2PO4 (15 mM; pH 6.0) 40:60 v/v, with a flow rate of 1 mL min-1 and excitation and emission wavelengths of 254 and 400 nm, respectively. The best extraction solvent was CH2Cl2:Et2O (3:7 v/v), which gave recoveries ranging from 83-95%. The limit of quantification was 1 ng mL-1. The chromatographic runs were specific, with no interfering peaks at the retention times of the analyte. The other validation parameters were in agreement with EMEA guidelines. Conclusion/Recommendations: This method (extraction, separation and applied techniques) is simple and effective. This technique may have applications in pharmacokinetic or toxicological studies.

  20. Tissue quantification for development of pediatric phantom

    International Nuclear Information System (INIS)

    The optimization of the risk-benefit ratio is a major concern in pediatric radiology, due to the greater vulnerability of children to the late somatic and genetic effects of radiation exposure compared to adults. In Brazil, head trauma is estimated to account for 18% of deaths in the 1-5 year age group, and radiography is the primary diagnostic test for the detection of skull fracture. Knowing that image quality is essential to ensure the identification of anatomical structures and to minimize errors of diagnostic interpretation, this paper proposes the development and construction of homogeneous skull phantoms for the 1-5 year age group. The homogeneous phantoms were constructed using the classification and quantification of the tissues present in the skulls of pediatric patients. In this procedure, computational algorithms written in Matlab were used to quantify the distinct biological tissues present in the anatomical regions studied, using retrospective CT images. Preliminary data obtained from measurements show that, between the ages of 1-5 years, an average anteroposterior diameter of the pediatric skull region of 145.73 ± 2.97 mm can be represented by 92.34 ± 5.22 mm of lucite and 1.75 ± 0.21 mm of aluminum plates in a PEP (patient equivalent phantom) arrangement. After construction, the phantoms will be used for image and dose optimization in pediatric computed radiography examination protocols.

  1. Shape regression for vertebra fracture quantification

    Science.gov (United States)

    Lund, Michael Tillge; de Bruijne, Marleen; Tanko, Laszlo B.; Nielsen, Mads

    2005-04-01

    Accurate and reliable identification and quantification of vertebral fractures constitute a challenge both in clinical trials and in diagnosis of osteoporosis. Various efforts have been made to develop reliable, objective, and reproducible methods for assessing vertebral fractures, but at present there is no consensus concerning a universally accepted diagnostic definition of vertebral fractures. In this project we want to investigate whether or not it is possible to accurately reconstruct the shape of a normal vertebra, using a neighbouring vertebra as prior information. The reconstructed shape can then be used to develop a novel vertebra fracture measure, by comparing the segmented vertebra shape with its reconstructed normal shape. The vertebrae in lateral x-rays of the lumbar spine were manually annotated by a medical expert. With this dataset we built a shape model, with equidistant point distribution between the four corner points. Based on the shape model, a multiple linear regression model of a normal vertebra shape was developed for each dataset using leave-one-out cross-validation. The reconstructed shape was calculated for each dataset using these regression models. The prediction error for the annotated shape was 3% on average.
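
    A minimal sketch of the leave-one-out regression step, assuming scikit-learn and random stand-in landmark data rather than annotated radiographs: the landmark vector of one vertebra is predicted from its neighbour, and the held-out prediction error is accumulated.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut

rng = np.random.default_rng(1)
n_cases, n_points = 40, 6
neighbour = rng.normal(size=(n_cases, 2 * n_points))     # flattened (x, y) landmarks
# Synthetic "normal vertebra" shapes loosely coupled to the neighbour's landmarks
target = neighbour @ rng.normal(scale=0.1, size=(2 * n_points, 2 * n_points)) \
         + rng.normal(scale=0.05, size=(n_cases, 2 * n_points))

errors = []
for train, test in LeaveOneOut().split(neighbour):
    model = LinearRegression().fit(neighbour[train], target[train])
    pred = model.predict(neighbour[test])
    errors.append(np.abs(pred - target[test]).mean())

print(f"mean leave-one-out landmark error: {np.mean(errors):.3f}")
```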

  2. Reliability of the quantification of EELS measurements

    International Nuclear Information System (INIS)

    Over the past few years at ORNL, a significant effort has been expended on developing the transition metal boride TiB2 for applications that take advantage of its high hardness, high melting point, and chemical inertness. The major portion of this work has centered on the TiB2-Ni system. AEM has been used extensively to provide microstructural information for structure-property correlations. Particular emphasis has been placed on identifying phases that develop during high-temperature exposure. Because of the presence of boron and other low-atomic-number elements in these materials, electron energy loss spectroscopy (EELS) has been an important tool in these investigations. However, repeated analyses of secondary phases, as well as of TiB2 itself, led to the conclusion that compositions determined by quantification of EELS spectra with the use of both K and L ionization edges were suspect. In an earlier paper the authors presented some of these results and suggested that the problem might lie in the computation of cross sections for L absorption edges. Since that time, interest in this problem has continued, and the authors have concentrated on identifying the source of the inaccuracy with greater certainty. 12 references, 3 figures

  3. Cross recurrence quantification for cover song identification

    International Nuclear Information System (INIS)

    There is growing evidence that nonlinear time series analysis techniques can be used to successfully characterize, classify, or process signals derived from real-world dynamics even though these are not necessarily deterministic and stationary. In the present study, we proceed in this direction by addressing an important problem our modern society is facing, the automatic classification of digital information. In particular, we address the automatic identification of cover songs, i.e. alternative renditions of a previously recorded musical piece. For this purpose, we propose a recurrence quantification analysis measure that allows the tracking of potentially curved and disrupted traces in cross recurrence plots (CRPs). We apply this measure to CRPs constructed from the state space representation of musical descriptor time series extracted from the raw audio signal. We show that our method identifies cover songs with higher accuracy than previously published techniques. Beyond the particular application proposed here, we discuss how our approach can be useful for the characterization of a variety of signals from different scientific disciplines. We study coupled Rössler dynamics with stochastically modulated mean frequencies as one concrete example to illustrate this point.
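
    The core object of the method, the cross recurrence plot, can be sketched in a few lines: time-delay embed the two series, form the pairwise distance matrix, and threshold it. This is a generic CRP computation, not the authors' measure for tracking curved and disrupted traces.

```python
import numpy as np

def embed(x, dim, tau):
    """Time-delay embedding of a scalar series."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

def cross_recurrence_plot(x, y, dim=3, tau=2, eps=0.5):
    """Binary CRP: 1 where embedded states of x and y are closer than eps."""
    X, Y = embed(x, dim, tau), embed(y, dim, tau)
    d = np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=-1)
    return (d < eps).astype(int)

t = np.linspace(0, 20 * np.pi, 800)
song = np.sin(t)
cover = np.sin(1.02 * t + 0.3)              # slightly detuned "rendition"
crp = cross_recurrence_plot(song, cover)
print("recurrence rate:", crp.mean())       # diagonal-like traces give a nonzero rate
```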

  5. APPLICATIONS OF GAME THEORY TO EDUCATION: FOR WHICH TYPES AND LEVELS OF EDUCATION, WHICH MODELS, WHICH RESULTS?

    OpenAIRE

    Garrouste, Christelle; Loi, Massimo

    2009-01-01

    This paper examines the use of game theory in the educational sciences. It describes the evolution of game theory from the defining axioms of Von Neumann and Morgenstern in 1944 to the present. After the introduction and this description, the third part presents the methodology used to select the research articles compared in this study. The fourth and final part presents the results of this literature review. The study reveals a similar interest on the part of economists and educators for the applicat...

  6. A methodology for uncertainty quantification in quantitative technology valuation based on expert elicitation

    Science.gov (United States)

    Akram, Muhammad Farooq Bin

    The management of technology portfolios is an important element of aerospace system design. New technologies are often applied to new product designs to ensure their competitiveness at the time they are introduced to market. The future performance of yet-to-be-designed components is inherently uncertain, necessitating subject matter expert knowledge, statistical methods and financial forecasting. Estimates of the appropriate parameter settings often come from disciplinary experts, who may disagree with each other because of varying experience and background. Due to the inherently uncertain nature of expert elicitation in the technology valuation process, appropriate uncertainty quantification and propagation are critical. The uncertainty in defining the impact of an input on the performance parameters of a system makes it difficult to use traditional probability theory. Often the available information is not enough to assign appropriate probability distributions to uncertain inputs. Another problem faced during technology elicitation pertains to technology interactions in a portfolio. When multiple technologies are applied simultaneously to a system, their cumulative impact is often non-linear. Current methods assume that technologies are either incompatible or linearly independent. It is observed that, in case of lack of knowledge about the problem, epistemic uncertainty is the most suitable representation of the process. It reduces the number of assumptions made during the elicitation process, where experts would otherwise be forced to assign probability distributions to their opinions without sufficient knowledge. Epistemic uncertainty can be quantified by many techniques. In the present research it is proposed that interval analysis and the Dempster-Shafer theory of evidence are better suited for the quantification of epistemic uncertainty in the technology valuation process. The proposed technique seeks to offset some of the problems faced when using deterministic or traditional probabilistic approaches for uncertainty propagation. Non-linear behavior in technology interactions is captured through expert-elicitation-based technology synergy matrices (TSM). The proposed TSMs increase the fidelity of current technology forecasting methods by including higher-order technology interactions. A test case for the quantification of epistemic uncertainty on a large-scale problem, a combined cycle power generation system, was selected. A detailed multidisciplinary modeling and simulation environment was adopted for this problem. Results have shown that the evidence theory based technique provides more insight into the uncertainties arising from incomplete information or lack of knowledge than deterministic or probability theory methods. Margin analysis was also carried out for both techniques. A detailed description of TSMs and their usage in conjunction with technology impact matrices and technology compatibility matrices is discussed. Various combination methods are also proposed for higher-order interactions, which can be applied according to expert opinion or historical data. The introduction of the technology synergy matrix enabled the capturing of higher-order technology interactions and an improvement in the predicted system performance.
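
    To illustrate the evidence-theory ingredient, the sketch below combines the basic probability assignments of two experts over a small frame of discernment using Dempster's rule of combination. The frame and the mass values are invented for the example.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule for mass functions with frozenset focal elements."""
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb          # mass assigned to contradictory claims
    return {fe: w / (1.0 - conflict) for fe, w in combined.items()}

LOW, HIGH = frozenset({"low"}), frozenset({"high"})
EITHER = LOW | HIGH                      # ignorance: "could be either"
expert1 = {LOW: 0.6, EITHER: 0.4}        # partial ignorance, not a probability
expert2 = {LOW: 0.3, HIGH: 0.5, EITHER: 0.2}
print(dempster_combine(expert1, expert2))
```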

  7. Methodological considerations in quantification of oncological FDG PET studies

    Energy Technology Data Exchange (ETDEWEB)

    Vriens, Dennis; Visser, Eric P.; Geus-Oei, Lioe-Fee de; Oyen, Wim J.G. [Radboud University Nijmegen Medical Centre, Department of Nuclear Medicine, Nijmegen (Netherlands)

    2010-07-15

    This review aims to provide insight into the factors that influence the quantification of glucose metabolism by FDG PET images in oncology, as well as their influence on repeated-measures studies (i.e. treatment response assessment), offering improved understanding for both clinical practice and research. Structured PubMed searches have been performed for the many factors affecting the quantification of glucose metabolism by FDG PET. Review articles and reference lists have been used to supplement the search findings. Biological factors such as fasting blood glucose level, FDG uptake period, FDG distribution and clearance, patient motion (breathing) and patient discomfort (stress) all influence quantification. Acquisition parameters should be adjusted to maximize the signal-to-noise ratio without exposing the patient to a higher than strictly necessary radiation dose. This is especially challenging in pharmacokinetic analysis, where the temporal resolution is of significant importance. The literature is reviewed on the influence of attenuation correction on parameters for glucose metabolism, and on the effect of motion, metal artefacts and contrast agents on the quantification of CT attenuation-corrected images. Reconstruction settings (analytical versus iterative reconstruction, post-reconstruction filtering and image matrix size) all potentially influence quantification due to artefacts, noise levels and lesion size dependency. Many region-of-interest definitions are available, but increased complexity does not necessarily result in improved performance. Different methods for the quantification of the tissue of interest can introduce systematic and random inaccuracy. This review provides an up-to-date overview of the many factors that influence the quantification of glucose metabolism by FDG PET. (orig.)

  8. Uncertainty quantification of bacterial aerosol neutralization in shock heated gases

    Science.gov (United States)

    Schulz, J. C.; Gottiparthi, K. C.; Menon, S.

    2015-01-01

    A potential method for the neutralization of bacterial endospores is the use of explosive charges, since the high thermal and mechanical stresses in the post-detonation flow are thought to be sufficient to reduce endospore survivability to levels that pose no significant health threat. While several experiments have attempted to quantify endospore survivability by emulating such environments in shock tube configurations, numerical simulations are necessary to provide information in scenarios where experimental data are difficult to obtain. Since such numerical predictions require complex, multi-physics models, significant uncertainties could be present. This work investigates the uncertainty in determining endospore survivability from a reduced order model based on a critical endospore temperature. Understanding the uncertainty in such a model is necessary for quantifying the variability in predictions using large-scale, realistic simulations of bacterial endospore neutralization by explosive charges. This work extends the analysis of previous large-scale simulations of endospore neutralization [Gottiparthi et al., Shock Waves, 2014, doi:10.1007/s00193-014-0504-9] by focusing on the uncertainty quantification of predicting endospore neutralization. For a given initial mass distribution of the bacterial endospore aerosol, predictions of the intact endospore percentage using nominal values of the input parameters match the experimental data well. The uncertainty in these predictions is then investigated using the Dempster-Shafer theory of evidence and polynomial chaos expansion. The studies show that endospore survivability is governed largely by the endospores' mass distribution and their exposure or residence time at the elevated temperatures and pressures. Deviations from the nominal predictions can be as much as 20-30% in the intermediate temperature ranges. At high temperatures, i.e., strong shocks, which are of the most interest, the residence time is observed to be a dominant parameter, and this, coupled with the analysis resulting from the Dempster-Shafer theory of evidence, seems to indicate that confident predictions of less than 1% endospore viability can only be achieved by extending the residence time of the fluid-particle interaction.

  9. Stochastic methods for uncertainty quantification in radiation transport

    Science.gov (United States)

    Fichtl, Erin D.

    The use of stochastic spectral expansions, specifically generalized polynomial chaos (gPC) and Karhunen-Loeve (KL) expansions, is investigated for uncertainty quantification in radiation transport. The gPC represents second-order random processes in terms of an expansion of orthogonal polynomials of random variables. The KL expansion is a Fourier-type expansion that represents a second-order random process with known covariance function in terms of a set of uncorrelated random variables and the eigenmodes of the covariance function. The flux and, in multiplying materials, the k-eigenvalue, which are the problem unknowns, are always expanded in a gPC expansion since their covariance functions are also unknown. This work assumes a single uncertain input---the total macroscopic cross section---although this does not represent a limitation of the approaches considered here. Two particular types of input parameter uncertainty are investigated: The cross section as a univariate Gaussian, log-normal, gamma or beta random variable, and the cross section as a spatially varying Gaussian or log-normal random process. In the first case, a gPC expansion in terms of a univariate random variable suffices, while in the second, a truncated KL expansion is first necessary followed by a gPC expansion in terms of multivariate random variables. Two solution methods are examined: The Stochastic Finite Element Method (SFEM) and the Stochastic Collocation Method (SCM). The SFEM entails taking Galerkin projections onto the orthogonal basis, which yields a system of fully-coupled equations for the PC coefficients of the flux and the k-eigenvalue. This system is linear when there is no multiplication and can be solved using Richardson iteration, employing a standard operator splitting such as block Gauss-Seidel or block Jacobi, or a Krylov iterative method, which can be preconditioned using these splittings. When multiplication is present, the SFEM system is non-linear and a Newton-Krylov method is employed to solve it. The SCM utilizes a suitable quadrature rule to compute the moments or PC coefficients of the flux and k-eigenvalue, and thus involves the solution of a system of independent deterministic transport equations. The accuracy and efficiency of the two methods are compared and contrasted. Both are shown to accurately compute the PC coefficients of the unknown, and numerical proof is provided that the two methods are in fact equivalent in certain cases. The PC coefficients are used to compute the moments and probability density functions of the unknowns, which are shown to be accurate by comparing with Monte Carlo results. An analytic diffusion analysis, corroborated by numerical results, reveals that the random transport equation is well approximated by a deterministic diffusion equation when the medium is diffusive with respect to the average cross section but without constraint on the amplitude of the random fluctuations. Our work shows that stochastic spectral expansions are a viable alternative to random sampling-based uncertainty quantification techniques since both provide a complete characterization of the distribution of the flux and the k-eigenvalue. Furthermore, it is demonstrated that, unlike perturbation methods, SFEM and SCM can handle large parameter uncertainty.
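
    A minimal sketch of the truncated KL step for a spatially varying input, assuming an exponential covariance kernel on a 1-D grid: realizations of a log-normal random cross section are generated from the leading eigenmodes of the discretized covariance. Grid, kernel and truncation level are illustrative choices of ours.

```python
import numpy as np

n, corr_len, sigma = 200, 0.2, 0.3
x = np.linspace(0.0, 1.0, n)
C = sigma**2 * np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)

# Discrete KL: eigen-decomposition of the (symmetric) covariance matrix
evals, evecs = np.linalg.eigh(C)
idx = np.argsort(evals)[::-1][:10]            # keep the 10 leading modes
lam, phi = evals[idx], evecs[:, idx]

rng = np.random.default_rng(42)
xi = rng.normal(size=10)                      # uncorrelated standard normal variables
log_xs = phi @ (np.sqrt(lam) * xi)            # truncated KL realization (zero mean)
cross_section = np.exp(log_xs)                # log-normal random cross section
print("cross-section range:", cross_section.min(), cross_section.max())
```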

  10. D. M. Armstrong on the Identity Theory of Mind

    OpenAIRE

    Shanjendu Nath

    2013-01-01

    The Identity theory of mind occupies an important place in the history of philosophy. This theory is one of the important representations of the materialistic philosophy. This theory is known as "Materialist Monist Theory of Mind". Sometimes it is called "Type Physicalism", "Type Identity" or "Type-Type Theory" or "Mind-Brain Identity Theory". This theory appears in the philosophical domain as a reaction to the failure of Behaviourism. A number of philosophers developed this theory and among...

  11. Quantification of protein backbone hydrogen-deuterium exchange rates by solid state NMR spectroscopy

    International Nuclear Information System (INIS)

    We present the quantification of backbone amide hydrogen-deuterium exchange (HDX) rates for immobilized proteins. The experiments make use of the deuterium isotope effect on the amide nitrogen chemical shift, as well as of proton dilution by deuteration. We find that backbone amides in the microcrystalline α-spectrin SH3 domain exchange rather slowly with the solvent (with exchange rates negligible within the individual 15N-T1 timescales). We observed chemical exchange for 6 residues, with HDX rates in the range from 0.2 to 5 s-1. Backbone amide 15N longitudinal relaxation times that we determined previously are not significantly affected for most residues, yielding no systematic artifacts upon quantification of backbone dynamics (Chevelkov et al. 2008b). Significant exchange was observed for the backbone amides of R21, S36 and K60, as well as for the sidechain amides of N38, N35 and for W41ε. These residues could not be fit in our previous motional analysis, demonstrating that amide proton chemical exchange needs to be considered in the analysis of protein dynamics in the solid state when D2O is employed as a solvent for sample preparation. Due to the intrinsically long 15N relaxation times in the solid state, the approach proposed here can expand the range of accessible HDX rates in the intermediate regime that is so far not accessible with exchange-quench and MEXICO-type experiments.

  12. Quantification of the adrenal cortex hormones with radioimmunoassay

    Energy Technology Data Exchange (ETDEWEB)

    Badillo A, V.; Carrera D, A. A.; Ibarra M, C. M., E-mail: vbadillocren@hotmail.co [Universidad Autonoma de Zacatecas, Unidad Academica de Estudios Nucleares, Calle Cipres No. 10, Fracc. La Penuela, 98068 Zacatecas (Mexico)

    2010-10-15

    The pathologies of the adrenal cortex (adrenal insufficiency and Cushing syndrome) have their origin in the deficit or hypersecretion of some of the hormones secreted by the adrenal cortex, which is divided into three anatomically defined zones: the outer zone, also called the zona glomerulosa, which is the main production site of aldosterone and mineralocorticoids; the inner zone, or zona reticularis, which produces androgens; and the middle zone, or zona fasciculata, which is responsible for producing glucocorticoids. In this work, a quantitative analysis of these hormones and their pathological triggers was made; the quantification was performed in the laboratory by means of highly sensitive and specific techniques, in this case radioimmunoassay, in which the radioisotope I-125 is used. This technique is based on a biochemical bond-type reaction, because it requires a substance called the binder, which bonds to another called the ligand. This reaction is also known as the antigen-antibody (Ag-Ab) reaction, where the results depend on the quantity of antigen in the sample and on its affinity for the antibody. In this work, a study of 56 patients (13 men and 43 women) was carried out. The cortisol, ACTH, androsterone and DHEA values were very elevated in the majority of the cases corresponding to women, with cortisol predominating, while in men a notable elevation of 17α-OH-PRG and DHEA-SO4 was observed. Based on that, we can conclude that 51 of the patients did not have major complications, because they only visited the laboratory once, while the remaining 5 were under medical monitoring and visited the laboratory on more than one occasion, indicating difficulty in their improvement. According to the results, an approximate women:men ratio of 8:2 becomes clear for the hormonal pathologies of the adrenal cortex. (Author)
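
    Quantification in competitive assays of this kind is typically read off a standard curve; as an illustration, the sketch below fits a four-parameter logistic curve to invented calibration counts and inverts it for an unknown sample. This is a generic RIA data-reduction example, not the laboratory's actual procedure.

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, a, b, c, d):
    """Four-parameter logistic: bound counts as a function of concentration."""
    return d + (a - d) / (1.0 + (x / c) ** b)

conc = np.array([0.5, 1, 2, 5, 10, 20, 50])        # standards, ng/mL (invented)
counts = np.array([9100, 8300, 6900, 4800, 3200, 2100, 1300])  # bound I-125 counts

popt, _ = curve_fit(four_pl, conc, counts, p0=[9500, 1.0, 5.0, 1000])
a, b, c, d = popt

def invert(y):
    """Concentration giving measured counts y on the fitted curve."""
    return c * ((a - d) / (y - d) - 1.0) ** (1.0 / b)

print(f"sample with 4000 counts ~ {invert(4000.0):.1f} ng/mL")
```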

  13. An architectural model for software reliability quantification: sources of data

    International Nuclear Information System (INIS)

    Software reliability assessment models in use today treat software as a monolithic block. An aversion towards 'atomic' models seems to exist. These models appear to add complexity to the modeling and to the data collection, and seem intrinsically difficult to generalize. In 1997, we introduced an architecturally based software reliability model called FASRE. The model is based on an architecture derived from the requirements which captures both functional and nonfunctional requirements and on a generic classification of functions, attributes and failure modes. The model focuses on evaluation of failure mode probabilities and uses a Bayesian quantification framework. Failure mode probabilities of functions and attributes are propagated to the system level using fault trees. It can incorporate any type of prior information such as results of developers' testing, historical information on a specific functionality and its attributes, and is ideally suited for reusable software. By building an architecture and deriving its potential failure modes, the model forces early appraisal and understanding of the weaknesses of the software, allows reliability analysis of the structure of the system, and provides assessments at a functional level as well as at a systems level. In order to quantify the probability of failure (or the probability of success) of a specific element of our architecture, data are needed. The term 'element of the architecture' is used here in its broadest sense to mean a single failure mode or a higher level of abstraction such as a function. The paper surveys the potential sources of software reliability data available during software development. Next, the mechanisms for incorporating these sources of relevant data into the FASRE model are identified.

  14. Quantification of isotopic turnover in agricultural systems

    Science.gov (United States)

    Braun, A.; Auerswald, K.; Schnyder, H.

    2012-04-01

    The isotopic turnover, which is a proxy for the metabolic rate, is gaining scientific importance. It is quantified for an increasing range of organisms, from microorganisms over plants to animals, including agricultural livestock. Additionally, the isotopic turnover is analyzed on different scales, from organs to organisms to ecosystems and even to the biosphere. In particular, the quantification of the isotopic turnover of specific tissues within the same organism, e.g. organs like liver and muscle and products like milk and faeces, has brought new insights that improve the understanding of nutrient cycles and fluxes. Thus, the knowledge of isotopic turnover is important in many areas, including physiology, e.g. milk synthesis, ecology, e.g. soil retention time of water, and medical science, e.g. cancer diagnosis. So far, the isotopic turnover is quantified by applying time-, cost- and expertise-intensive tracer experiments. Usually, this comprises two isotopic equilibration periods: a first equilibration period with a constant isotopic input signal is followed by a second equilibration period with a distinct constant isotopic input signal. This yields a smooth signal change from the first to the second signal in the object under consideration. This approach presents at least three major problems. (i) The input signals must be controlled isotopically, which is almost impossible in many realistic cases, such as free-ranging animals. (ii) Both equilibration periods may be very long, especially when the turnover rate of the object under consideration is very slow, which aggravates the first problem. (iii) The detection of small or slow pools is improved by large isotopic signal changes, but large isotopic changes also involve a considerable change in the input material; e.g. animal studies are usually carried out as diet-switch experiments, where the diet is switched between C3 and C4 plants, since C3 and C4 plants differ strongly in their isotopic signal. The additional change in nutrition induces changes in physiology that are likely to bias the estimation of the isotopic turnover. We designed an experiment with lactating cows which were successively exposed to the diet's natural isotopic variation and to a diet switch. We examined whether the same turnover information can be obtained from the natural (uncontrolled, short-term) isotopic variation as from the diet-switch experiment. Statistical methods to retrieve the turnover characteristics comprised multi-pool compartmental modeling for the diet-switch experiment, as well as correlation analysis to perform wiggle-matching and quantification of autocorrelation (geostatistics) for the analysis of the natural variation. All three methods yielded similar results but differed in their strengths and weaknesses, which will be highlighted. Combining the strengths of the new methods can make this tool even more advantageous than diet-switch experiments in many cases. In particular, the new approach empowers studying isotopic turnover under a wider range of husbandry conditions, wildlife conditions and species, yielding turnover estimates that are not biased by changes in nutrition.
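
    For the diet-switch case, the isotopic signal change is classically described by one or more exponential pools. Below is a one-pool sketch fitted to invented delta-13C values to recover the turnover rate; scipy is assumed, and the numbers are not from the experiment described.

```python
import numpy as np
from scipy.optimize import curve_fit

def one_pool(t, delta_old, delta_new, k):
    """One-pool turnover: exponential approach to the new diet signal."""
    return delta_new + (delta_old - delta_new) * np.exp(-k * t)

days = np.array([0, 2, 5, 10, 20, 40, 60], dtype=float)
d13C = np.array([-27.0, -24.8, -21.9, -18.4, -15.0, -13.3, -13.1])  # invented data

popt, _ = curve_fit(one_pool, days, d13C, p0=[-27.0, -13.0, 0.1])
k = popt[2]
print(f"turnover rate k = {k:.3f} per day, half-life = {np.log(2)/k:.1f} days")
```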

  15. Quantification of water in hydrous ringwoodite

    Science.gov (United States)

    Thomas, Sylvia-Monique; Jacobsen, Steven; Bina, Craig; Reichart, Patrick; Moser, Marcus; Hauri, Erik; Koch-Müller, Monika; Smyth, Joseph; Dollinger, Günther

    2014-12-01

    Ringwoodite, γ-(Mg,Fe)2SiO4, in the lower 150 km of Earth’s mantle transition zone (410-660 km depth) can incorporate up to 1.5-2 wt% H2O as hydroxyl defects. We present a mineral-specific IR calibration for the absolute water content in hydrous ringwoodite by combining results from Raman spectroscopy, secondary ion mass spectrometry (SIMS) and proton-proton (pp) scattering on a suite of synthetic Mg- and Fe-bearing hydrous ringwoodites. H2O concentrations in the crystals studied here range from 0.46 to 1.7 wt% H2O (absolute methods), with the maximum H2O in the same sample giving 2.5 wt% by SIMS calibration. Anchoring our spectroscopic results to absolute H-atom concentrations from pp-scattering measurements, we report frequency-dependent integrated IR-absorption coefficients for water in ringwoodite ranging from 78,180 to 158,880 L mol-1 cm-2, depending upon the frequency of the OH absorption. We further report a linear wavenumber IR calibration for H2O quantification in hydrous ringwoodite across the Mg2SiO4-Fe2SiO4 solid solution, which will lead to more accurate estimations of the water content in both laboratory-grown and naturally occurring ringwoodites. Re-evaluation of the IR spectrum for a natural hydrous ringwoodite inclusion in diamond from the study of Pearson et al. (2014) indicates the crystal contains 1.43 ± 0.27 wt% H2O, thus confirming near-maximum amounts of H2O for this sample from the transition zone.

  16. Superspace conformal field theory

    International Nuclear Information System (INIS)

    Conformal sigma models and WZW models on coset superspaces provide important examples of logarithmic conformal field theories. They possess many applications to problems in string and condensed matter theory. We review recent results and developments, including the general construction of WZW models on type I supergroups, the classification of conformal sigma models and their embedding into string theory.

  17. Superspace conformal field theory

    International Nuclear Information System (INIS)

    Conformal sigma models and Wess–Zumino–Witten (WZW) models on coset superspaces provide important examples of logarithmic conformal field theories. They possess many applications to problems in string and condensed matter theory. We review recent results and developments, including the general construction of WZW models on type-I supergroups, the classification of conformal sigma models and their embedding into string theory. (review)

  18. Superspace conformal field theory

    Energy Technology Data Exchange (ETDEWEB)

    Quella, Thomas [Koeln Univ. (Germany). Inst. fuer Theoretische Physik; Schomerus, Volker [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany)

    2013-07-15

    Conformal sigma models and WZW models on coset superspaces provide important examples of logarithmic conformal field theories. They possess many applications to problems in string and condensed matter theory. We review recent results and developments, including the general construction of WZW models on type I supergroups, the classification of conformal sigma models and their embedding into string theory.

  19. GMO quantification: valuable experience and insights for the future.

    Science.gov (United States)

    Milavec, Mojca; Dobnik, David; Yang, Litao; Zhang, Dabing; Gruden, Kristina; Zel, Jana

    2014-10-01

    Cultivation and marketing of genetically modified organisms (GMOs) have been unevenly adopted worldwide. To facilitate international trade and to provide information to consumers, labelling requirements have been set up in many countries. Quantitative real-time polymerase chain reaction (qPCR) is currently the method of choice for detection, identification and quantification of GMOs. This has been critically assessed and the requirements for the method performance have been set. Nevertheless, there are challenges that should still be highlighted, such as measuring the quantity and quality of DNA, and determining the qPCR efficiency, possible sequence mismatches, characteristics of taxon-specific genes and appropriate units of measurement, as these remain potential sources of measurement uncertainty. To overcome these problems and to cope with the continuous increase in the number and variety of GMOs, new approaches are needed. Statistical strategies of quantification have already been proposed and expanded with the development of digital PCR. The first attempts have been made to use new generation sequencing also for quantitative purposes, although accurate quantification of the contents of GMOs using this technology is still a challenge for the future, and especially for mixed samples. New approaches are needed also for the quantification of stacks, and for potential quantification of organisms produced by new plant breeding techniques. PMID:25182968
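
    As an illustration of the standard-curve arithmetic that underlies qPCR-based GMO quantification, the sketch below converts Ct values of the event-specific and taxon-specific targets to copy numbers via their calibration lines and reports the copy-number ratio. Slopes, intercepts and Ct values are invented.

```python
def copies_from_ct(ct, slope, intercept):
    """Standard curve: Ct = slope * log10(copies) + intercept."""
    return 10.0 ** ((ct - intercept) / slope)

# Invented calibration lines (a perfectly efficient assay has slope ~ -3.32)
event = dict(slope=-3.35, intercept=40.1)    # GMO event-specific target
taxon = dict(slope=-3.30, intercept=39.6)    # taxon-specific reference gene

ct_event, ct_taxon = 31.2, 24.9              # measured Ct values (invented)
gmo_copies = copies_from_ct(ct_event, **event)
ref_copies = copies_from_ct(ct_taxon, **taxon)
print(f"GMO content ~ {100.0 * gmo_copies / ref_copies:.2f} % (copy/copy)")
```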

  20. Automated lobar quantification of emphysema in patients with severe COPD

    International Nuclear Information System (INIS)

    Automated lobar quantification of emphysema has not yet been evaluated. Unenhanced 64-slice MDCT was performed in 47 patients evaluated before bronchoscopic lung-volume reduction. CT images reconstructed with a standard (B20) and high-frequency (B50) kernel were analyzed using dedicated prototype software (MevisPULMO) allowing lobar quantification of emphysema extent. Lobar quantification was obtained following (a) a fully automatic delineation of the lobar limits by the software and (b) a semiautomatic delineation with manual correction of the lobar limits when necessary, and was compared with the visual scoring of emphysema severity per lobe. No statistically significant difference existed between automated and semiautomated lobar quantification (p>0.05 in the five lobes), with differences ranging from 0.4 to 3.9%. The agreement between the two methods (intraclass correlation coefficient, ICC) was excellent for the left upper lobe (ICC=0.94), the left lower lobe (ICC=0.98), and the right lower lobe (ICC=0.80). The agreement was good for the right upper lobe (ICC=0.68) and moderate for the middle lobe (ICC=0.53). The Bland-Altman plots confirmed these results. Good agreement was observed between the software-assessed and visually assessed lobar predominance of emphysema (kappa 0.78; 95% CI 0.64-0.92). Automated and semiautomated lobar quantifications of emphysema are concordant and show good agreement with visual scoring. (orig.)
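
    Emphysema extent of the kind such software reports is commonly computed as a density mask, i.e. the percentage of voxels below a Hounsfield-unit threshold, evaluated per lobe. A minimal sketch with synthetic HU values and invented lobe labels follows; it is not the MevisPULMO algorithm.

```python
import numpy as np

def lobar_emphysema_index(hu, lobe_labels, threshold=-950):
    """Percentage of voxels below `threshold` HU for each labelled lobe."""
    return {int(lobe): 100.0 * np.mean(hu[lobe_labels == lobe] < threshold)
            for lobe in np.unique(lobe_labels) if lobe != 0}

rng = np.random.default_rng(7)
hu = rng.normal(-870, 60, size=(64, 64, 64))          # synthetic lung HU values
lobes = rng.integers(0, 6, size=hu.shape)             # 0 = background, 1-5 = lobes
print(lobar_emphysema_index(hu, lobes))
```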

  1. Lamb wave-based damage quantification and probability of detection modeling for fatigue life assessment of riveted lap joint

    Science.gov (United States)

    He, Jingjing; Wang, Dengjiang; Zhang, Weifang

    2015-03-01

    This paper presents an experimental and modeling study for damage detection and quantification in riveted lap joints. Embedded lead zirconate titanate (PZT) piezoelectric ceramic wafer-type sensors are employed to perform in-situ non-destructive testing during fatigue cyclic loading. A multi-feature integration method is developed to quantify the crack size using the signal features of correlation coefficient, amplitude change, and phase change. In addition, a probability of detection (POD) model is constructed to quantify the reliability of the developed sizing method. Using the developed crack size quantification method and the resulting POD curve, probabilistic fatigue life prediction can be performed to provide comprehensive information for decision-making. The effectiveness of the overall methodology is demonstrated and validated using several aircraft lap joint specimens from different manufacturers and under different loading conditions.
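
    A sketch of the classical a-hat-versus-a POD construction that such reliability models typically follow: regress the log of the measured damage feature on the log of the true crack size, then turn the regression scatter into a detection probability at a decision threshold. Data and threshold are invented.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)
a = np.linspace(0.5, 5.0, 60)                       # true crack sizes, mm (invented)
a_hat = np.exp(0.9 * np.log(a) + 0.1 + rng.normal(0, 0.25, a.size))  # sensor feature

# Linear regression of log(a_hat) on log(a)
X = np.column_stack([np.ones_like(a), np.log(a)])
beta, *_ = np.linalg.lstsq(X, np.log(a_hat), rcond=None)
resid_std = np.std(np.log(a_hat) - X @ beta, ddof=2)

def pod(size, threshold=1.0):
    """P(a_hat > threshold | crack size) under the log-linear model."""
    mean = beta[0] + beta[1] * np.log(size)
    return norm.sf((np.log(threshold) - mean) / resid_std)

for size in (0.5, 1.0, 2.0, 4.0):
    print(f"POD({size} mm) = {pod(size):.2f}")
```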

  2. Electric cell-substrate impedance sensing for the quantification of endothelial proliferation, barrier function, and motility.

    Science.gov (United States)

    Szulcek, Robert; Bogaard, Harm Jan; van Nieuw Amerongen, Geerten P

    2014-01-01

    Electric Cell-substrate Impedance Sensing (ECIS) is an in vitro impedance measuring system to quantify the behavior of cells within adherent cell layers. To this end, cells are grown in special culture chambers on top of opposing, circular gold electrodes. A constant small alternating current is applied between the electrodes and the potential across them is measured. The insulating properties of the cell membrane create a resistance towards the electrical current flow, resulting in an increased electrical potential between the electrodes. Measuring cellular impedance in this manner allows the automated study of cell attachment, growth, morphology, function, and motility. Although the ECIS measurement itself is straightforward and easy to learn, the underlying theory is complex, and the selection of the right settings and the correct analysis and interpretation of the data are not self-evident. Yet a clear protocol describing the individual steps from the experimental design to preparation, realization, and analysis of the experiment is not available. In this article the basic measurement principle as well as possible applications, experimental considerations, advantages and limitations of the ECIS system are discussed. A guide is provided for the study of cell attachment, spreading and proliferation; quantification of cell behavior in a confluent layer, with regard to barrier function, cell motility, quality of cell-cell and cell-substrate adhesions; and quantification of wound healing and cellular responses to vasoactive stimuli. Representative results are discussed based on human microvascular (MVEC) and human umbilical vein endothelial cells (HUVEC), but are applicable to all adherently growing cells. PMID:24747269

  3. Gamma camera based Positron Emission Tomography: a study of the viability on quantification

    International Nuclear Information System (INIS)

    Positron Emission Tomography (PET) is a Nuclear Medicine imaging modality for diagnostic purposes. Pharmaceuticals labeled with positron emitters are used, and images which represent the in vivo biochemical processes within tissues can be obtained. The positron/electron annihilation photons are detected in coincidence, and this information is used for object reconstruction. Presently, there are two types of systems available for this imaging modality: dedicated systems and those based on gamma camera technology. In this work, we utilized PET/SPECT systems, which also allow for traditional Nuclear Medicine studies based on single photon emitters. There are inherent difficulties which affect the quantification of activity and other indices. They are related to the Poisson nature of radioactivity, to radiation interactions with the patient body and detector, to noise due to the statistical nature of these interactions and of all the detection processes, and to the patient acquisition protocols. Corrections are described in the literature, and not all of them are implemented by the manufacturers: scatter, attenuation, randoms, decay, dead time, spatial resolution, and others related to the properties of each piece of equipment. The goal of this work was to assess the methods adopted by two manufacturers, as well as the influence of some technical characteristics of PET/SPECT systems, on the estimation of SUV. Data from a set of phantoms were collected in 3D mode on one camera and in 2D mode on the other. We concluded that quantification is viable in PET/SPECT systems, including the estimation of SUVs. This is only possible if, apart from the above mentioned corrections, the camera is well tuned and coefficients for sensitivity normalization and partial volume corrections are applied. We also verified that the shapes of the sources used for obtaining these factors play a role in the final results and should be dealt with carefully in clinical quantification. Finally, the choice of the region of interest is critical, and it should be the same as that used to calculate the correction factors. (author)
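
    The SUV mentioned above is a simple normalization of the measured activity concentration; a minimal sketch of the body-weight definition with decay correction of the injected dose follows, with invented values.

```python
def suv_bw(c_img_bq_ml, injected_mbq, weight_kg, minutes_since_injection,
           half_life_min=109.77):                       # F-18 half-life
    """Body-weight SUV: tissue activity / (decay-corrected dose / body weight)."""
    dose_bq = injected_mbq * 1e6 * 2.0 ** (-minutes_since_injection / half_life_min)
    return c_img_bq_ml / (dose_bq / (weight_kg * 1000.0))  # 1 g ~ 1 mL assumed

# Invented example: 5 kBq/mL lesion, 370 MBq injected, 70 kg patient, 60 min uptake
print(f"SUV = {suv_bw(5000.0, 370.0, 70.0, 60.0):.2f}")
```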

  4. Iron overload in the liver diagnostic and quantification

    International Nuclear Information System (INIS)

    Hereditary Hemochromatosis is the most frequent modality of iron overload. Since 1996 genetic tests have facilitated significantly the non-invasive diagnosis of the disease. There are however many cases of negative genetic tests that require confirmation by hepatic iron quantification which is traditionally performed by hepatic biopsy. There are many studies that have demonstrated the possibility of performing hepatic iron quantification with Magnetic Resonance. However, a consensus has not been reached yet regarding the technique or the possibility to reproduce the same method of calculus in different machines. This article reviews the state of the art of the question and delineates possible future lines to standardise this non-invasive method of hepatic iron quantification

  5. Iron overload in the liver diagnostic and quantification.

    Science.gov (United States)

    Alústiza, Jose M; Castiella, Agustin; De Juan, Maria D; Emparanza, Jose I; Artetxe, Jose; Uranga, Maite

    2007-03-01

    Hereditary Hemochromatosis is the most frequent modality of iron overload. Since 1996 genetic tests have facilitated significantly the non-invasive diagnosis of the disease. There are however many cases of negative genetic tests that require confirmation by hepatic iron quantification which is traditionally performed by hepatic biopsy. There are many studies that have demonstrated the possibility of performing hepatic iron quantification with Magnetic Resonance. However, a consensus has not been reached yet regarding the technique or the possibility to reproduce the same method of calculus in different machines. This article reviews the state of the art of the question and delineates possible future lines to standardise this non-invasive method of hepatic iron quantification. PMID:17166681

  6. Iron overload in the liver diagnostic and quantification

    Energy Technology Data Exchange (ETDEWEB)

    Alustiza, Jose M. [Osatek SA, P Dr. Beguiristain 109, 20014, San Sebastian, Guipuzcoa (Spain)]. E-mail: jmalustiza@osatek.es; Castiella, Agustin [Osatek SA, P Dr. Beguiristain 109, 20014, San Sebastian, Guipuzcoa (Spain); Juan, Maria D. de [Osatek SA, P Dr. Beguiristain 109, 20014, San Sebastian, Guipuzcoa (Spain); Emparanza, Jose I. [Osatek SA, P Dr. Beguiristain 109, 20014, San Sebastian, Guipuzcoa (Spain); Artetxe, Jose [Osatek SA, P Dr. Beguiristain 109, 20014, San Sebastian, Guipuzcoa (Spain); Uranga, Maite [Osatek SA, P Dr. Beguiristain 109, 20014, San Sebastian, Guipuzcoa (Spain)

    2007-03-15

    Hereditary Hemochromatosis is the most frequent modality of iron overload. Since 1996 genetic tests have facilitated significantly the non-invasive diagnosis of the disease. There are however many cases of negative genetic tests that require confirmation by hepatic iron quantification which is traditionally performed by hepatic biopsy. There are many studies that have demonstrated the possibility of performing hepatic iron quantification with Magnetic Resonance. However, a consensus has not been reached yet regarding the technique or the possibility to reproduce the same method of calculus in different machines. This article reviews the state of the art of the question and delineates possible future lines to standardise this non-invasive method of hepatic iron quantification.

  7. Uncertainty Quantification Techniques of SCALE/TSUNAMI

    Energy Technology Data Exchange (ETDEWEB)

    Rearden, Bradley T [ORNL; Mueller, Don [ORNL

    2011-01-01

    The Standardized Computer Analysis for Licensing Evaluation (SCALE) code system developed at Oak Ridge National Laboratory (ORNL) includes the Tools for Sensitivity and Uncertainty Analysis Methodology Implementation (TSUNAMI). The TSUNAMI code suite can quantify the predicted change in system responses, such as keff, reactivity differences, or ratios of fluxes or reaction rates, due to changes in the energy-dependent, nuclide-reaction-specific cross-section data. Where uncertainties in the neutron cross-section data are available, the sensitivity of the system to the cross-section data can be used to propagate the uncertainties in the cross-section data to an uncertainty in the system response. Uncertainty quantification is useful for identifying potential sources of computational biases and highlighting parameters important to code validation. Traditional validation techniques often examine one or more average physical parameters to characterize a system and identify applicable benchmark experiments. With TSUNAMI, however, correlation coefficients are developed by propagating the uncertainties in neutron cross-section data, through sensitivity coefficients, to uncertainties in the computed responses for experiments and safety applications. The bias in the experiments, as a function of their correlation coefficient with the intended application, is extrapolated to predict the bias and bias uncertainty in the application through trending analysis or generalized linear least squares techniques, often referred to as 'data adjustment.' Even with advanced tools to identify benchmark experiments, analysts occasionally find that the application models include some feature or material for which adequately similar benchmark experiments do not exist to support validation. For example, a criticality safety analyst may want to take credit for the presence of fission products in spent nuclear fuel. In such cases, analysts sometimes rely on 'expert judgment' to select an additional administrative margin to account for the gap in the validation data, or to conclude that the impact on the calculated bias and bias uncertainty is negligible. As a result of advances in computer programs and the evolution of cross-section covariance data, analysts can use the sensitivity and uncertainty analysis tools in the TSUNAMI codes to estimate the potential impact on the application-specific bias and bias uncertainty resulting from nuclides not represented in available benchmark experiments. This paper presents the application of methods described in a companion paper.
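
    The propagation step described above follows the standard first-order "sandwich rule": given a vector S of sensitivity coefficients (relative change in the response per relative change in each cross section) and the relative covariance matrix C of the cross-section data, the relative variance of the response is S^T C S. A minimal illustrative sketch with hypothetical numbers (not TSUNAMI's actual data formats or file interfaces):

        import numpy as np

        # Hypothetical sensitivity coefficients of keff to three cross-section
        # parameters: relative change in keff per relative change in each datum.
        S = np.array([0.45, -0.12, 0.08])

        # Hypothetical relative covariance matrix of those cross-section data.
        C = np.array([
            [2.5e-4, 1.0e-5, 0.0],
            [1.0e-5, 4.0e-4, 2.0e-5],
            [0.0,    2.0e-5, 9.0e-4],
        ])

        # First-order "sandwich rule": relative variance of the response.
        rel_var = S @ C @ S
        print(f"relative keff uncertainty: {np.sqrt(rel_var):.4%}")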

  8. Uncertainty Quantification Techniques of SCALE/TSUNAMI

    International Nuclear Information System (INIS)

    The Standardized Computer Analysis for Licensing Evaluation (SCALE) code system developed at Oak Ridge National Laboratory (ORNL) includes the Tools for Sensitivity and Uncertainty Analysis Methodology Implementation (TSUNAMI). The TSUNAMI code suite can quantify the predicted change in system responses, such as keff, reactivity differences, or ratios of fluxes or reaction rates, due to changes in the energy-dependent, nuclide-reaction-specific cross-section data. Where uncertainties in the neutron cross-section data are available, the sensitivity of the system to the cross-section data can be used to propagate the uncertainties in the cross-section data to an uncertainty in the system response. Uncertainty quantification is useful for identifying potential sources of computational biases and highlighting parameters important to code validation. Traditional validation techniques often examine one or more average physical parameters to characterize a system and identify applicable benchmark experiments. With TSUNAMI, however, correlation coefficients are developed by propagating the uncertainties in neutron cross-section data, through sensitivity coefficients, to uncertainties in the computed responses for experiments and safety applications. The bias in the experiments, as a function of their correlation coefficient with the intended application, is extrapolated to predict the bias and bias uncertainty in the application through trending analysis or generalized linear least squares techniques, often referred to as 'data adjustment.' Even with advanced tools to identify benchmark experiments, analysts occasionally find that the application models include some feature or material for which adequately similar benchmark experiments do not exist to support validation. For example, a criticality safety analyst may want to take credit for the presence of fission products in spent nuclear fuel. In such cases, analysts sometimes rely on 'expert judgment' to select an additional administrative margin to account for the gap in the validation data, or to conclude that the impact on the calculated bias and bias uncertainty is negligible. As a result of advances in computer programs and the evolution of cross-section covariance data, analysts can use the sensitivity and uncertainty analysis tools in the TSUNAMI codes to estimate the potential impact on the application-specific bias and bias uncertainty resulting from nuclides not represented in available benchmark experiments. This paper presents the application of methods described in a companion paper.

  9. A Micropillar Compression Methodology for Ductile Damage Quantification

    Science.gov (United States)

    Tasan, C. C.; Hoefnagels, J. P. M.; Geers, M. G. D.

    2012-03-01

    Microstructural damage evolution is reported to significantly influence the failure of new high-strength alloys. Its accurate quantification is, therefore, critical for (1) microstructure optimization and (2) continuum damage models that predict the failure of these materials. As existing methodologies do not fulfill these requirements, there is an active search for an accurate damage quantification methodology. In this article, a new micropillar-compression-based methodology is presented, whereby damage evolution can be successfully quantified through the degradation of the elastic modulus caused by prior deformation.
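
    In continuum damage mechanics, modulus degradation is conventionally mapped onto a scalar damage variable; the abstract does not state the exact measure used here, but the standard definition is

        D = 1 - \frac{\tilde{E}}{E_0}

    where E_0 is the Young's modulus of the undamaged material and \tilde{E} is the modulus measured after deformation, so D runs from 0 (intact) toward 1 (fully damaged).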

  10. Large-Scale Inverse Problems and Quantification of Uncertainty

    CERN Document Server

    Biegler, Lorenz; Ghattas, Omar

    2010-01-01

    Large-scale inverse problems and the associated quantification of uncertainty have become an important area of research, central to a wide range of science and engineering applications. Written by leading experts in the field, Large-scale Inverse Problems and Quantification of Uncertainty focuses on the computational methods used to analyze and simulate inverse problems. The text provides PhD students, researchers, advanced undergraduate students, and engineering practitioners with the perspectives of researchers in areas of inverse problems and data assimilation, ranging from statistics and large-scale…

  11. Leakage quantification of compressed air on pipes using thermovision

    Directory of Open Access Journals (Sweden)

    Dudić Slobodan P.

    2012-01-01

    Full Text Available Nondestructive testing methods are increasingly in use. With these methods it is possible to obtain the desired information about a system without altering or damaging it in any way. This paper examines the possibilities of applying such methods to the quantification of losses incurred through leakage of compressed air from a system, with a view to increasing its energy efficiency. The emphasis is on the application of an ultrasound detector and an IR (infrared) thermographic camera. The potentials and limitations of these technologies for leakage quantification on a steel pipe in compressed air systems are analyzed, as well as the reliability and accuracy of the results thus obtained.

  12. Half-Metallic p-Type LaAlO3/EuTiO3 Heterointerface from Density-Functional Theory

    Science.gov (United States)

    Lu, Hai-Shuang; Cai, Tian-Yi; Ju, Sheng; Gong, Chang-De

    2015-03-01

    The two-dimensional electron gas (2DEG) observed at the LaAlO3/SrTiO3 heterointerface has attracted intense research interest in recent years. The high mobility, electric tunability, and giant persistent photoconductivity suggest its potential for electronic and photonic applications. The lack of a p-type counterpart as well as a highly spin-polarized carrier in the LaAlO3/SrTiO3 system, however, restricts its widespread application, since both multiple carriers and high spin polarization are very desirable for electronic devices. Here, we report a system of LaAlO3/EuTiO3 digital heterostructures that may overcome these limitations. Results from first-principles calculations reveal that the 2DEG in the n-type LaAlO3/EuTiO3 is a normal ferromagnet. The p-type two-dimensional hole gas, on the other hand, is a 100% spin-polarized half-metal. For digital heterostructures with alternating n-type and p-type interfaces, a magnetic-field-driven insulator-to-metal transition, together with spatially separated electrons and holes, can be realized by tuning the intrinsic polar field. At low temperatures, the spin-polarized electron-hole pairs may result in spin-triplet exciton condensation, which provides an experimentally accessible system for achieving the theoretically proposed dissipationless spin transport. Our findings open a path for exploring spintronics at the heterointerface of transition-metal oxides.

  13. Analyzing Social Interactions: Promises and Challenges of Cross Recurrence Quantification Analysis

    DEFF Research Database (Denmark)

    Fusaroli, Riccardo; Konvalinka, Ivana

    2014-01-01

    The scientific investigation of social interactions presents substantial challenges: interacting agents engage each other at many different levels and timescales (motor and physiological coordination, joint attention, linguistic exchanges, etc.), often making their behaviors interdependent in non-linear ways. In this paper we review the current use of Cross Recurrence Quantification Analysis (CRQA) in the analysis of social interactions and assess its potential and challenges. We argue that the method can sensitively capture the dynamics of human interactions, and that it has started producing valuable knowledge about them. However, much work is still necessary: more systematic analysis and interpretation of the recurrence indices, more consistent reporting of results, more emphasis on theory-driven studies, the exploration of interactions involving more than two agents and multiple aspects of coordination, and the assessment and quantification of complementary coordinative mechanisms. These challenges are discussed and operationalized in recommendations to further develop the field.
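
    At its core, CRQA scores a recurrence whenever the states of two agents' time series come within a radius eps of each other, across all pairs of time points; quantification indices are then computed from the resulting cross-recurrence matrix. A minimal one-dimensional sketch of the simplest index, the recurrence rate (real analyses typically embed the series in phase space first; all names, signals, and thresholds here are illustrative):

        import numpy as np

        def cross_recurrence_rate(x, y, eps):
            """Fraction of (i, j) time-point pairs at which x[i] and y[j] recur."""
            x = np.asarray(x, dtype=float)
            y = np.asarray(y, dtype=float)
            # Pairwise distance matrix between the two 1-D series.
            dist = np.abs(x[:, None] - y[None, :])
            crp = dist < eps          # cross-recurrence plot as a boolean matrix
            return crp.mean()         # recurrence rate (RR)

        # Hypothetical example: two noisy, slightly lagged versions of one rhythm.
        t = np.linspace(0.0, 10.0, 500)
        x = np.sin(t) + 0.05 * np.random.randn(t.size)
        y = np.sin(t + 0.1) + 0.05 * np.random.randn(t.size)
        print(f"RR = {cross_recurrence_rate(x, y, eps=0.2):.3f}")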

  14. Basic concepts in quantum information theory

    International Nuclear Information System (INIS)

    Full text: Quantum information theory provides a framework for the description of quantum systems and their applications in the context of quantum computation and quantum communication. Although several of the basic concepts on which this theory is built are reminiscent of those of (classical) information theory, the new rules provided by quantum mechanics introduce properties that have no classical counterpart and that are responsible for most of the applications. In particular, entangled states appear as one of the basic resources in this context. In this lecture I introduce the basic concepts and applications of quantum information, particularly stressing the definition of entanglement, its quantification, and its applications. (author)

  15. Methane-oxygen electrochemical coupling in an ionic liquid: a robust sensor for simultaneous quantification.

    Science.gov (United States)

    Wang, Zhe; Guo, Min; Baker, Gary A; Stetter, Joseph R; Lin, Lu; Mason, Andrew J; Zeng, Xiangqun

    2014-10-21

    Current sensor devices for the detection of methane or natural gas emissions are either expensive, with high power requirements, or fail to provide a rapid response. This report describes an electrochemical methane sensor utilizing a non-volatile and conductive pyrrolidinium-based ionic liquid (IL) electrolyte and an innovative internal-standard method for methane and oxygen dual-gas detection with high sensitivity, selectivity, and stability. At a platinum electrode in bis(trifluoromethylsulfonyl)imide (NTf2)-based ILs, methane is electro-oxidized to produce CO2 and water when an oxygen reduction process is included. The in situ generated CO2 arising from methane oxidation was shown to provide an excellent internal standard for quantification of the electrochemical oxygen sensor signal. The simultaneous quantification of both methane and oxygen in real time strengthens the reliability of the measurements by cross-validation of two ambient gases occurring within a single sample matrix, and allows for the elimination of several types of random and systematic errors in the detection. We have also validated this IL-based methane sensor employing both conventional solid macroelectrodes and flexible microfabricated electrodes using single- and double-potential-step chronoamperometry. PMID:25093213
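
    For context, quantification by potential-step chronoamperometry, the technique used for validation here, conventionally rests on the Cottrell relation for the diffusion-limited current following the potential step (the abstract does not give the authors' exact calibration procedure):

        i(t) = n F A C \sqrt{\frac{D}{\pi t}}

    where n is the number of electrons transferred, F the Faraday constant, A the electrode area, C the bulk concentration of the analyte, and D its diffusion coefficient in the ionic liquid.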

  16. Reference tissue quantification of DCE-MRI data without a contrast agent calibration

    Science.gov (United States)

    Walker-Samuel, Simon; Leach, Martin O.; Collins, David J.

    2007-02-01

    The quantification of dynamic contrast-enhanced (DCE) MRI data conventionally requires a conversion from signal intensity to contrast agent concentration by measuring a change in the tissue longitudinal relaxation rate, R1. In this paper, it is shown that the use of a spoiled gradient-echo acquisition sequence (optimized so that signal intensity scales linearly with contrast agent concentration), in conjunction with a reference tissue-derived vascular input function (VIF), avoids the need for the conversion to Gd-DTPA concentration. This study evaluates how to optimize such sequences and which dynamic time-series parameters are most suitable for this type of analysis. It is shown that signal difference and relative enhancement provide useful alternatives when full contrast agent quantification cannot be achieved, but that pharmacokinetic parameters derived from both contain sources of error (such as those caused by differences in proton density and native T1 values between the reference tissue and the region of interest). It is shown in a rectal cancer study that these sources of uncertainty are smaller when using signal difference than when using relative enhancement (15 ± 4% compared with 33 ± 4%). Both of these uncertainties are of the order of those associated with the conversion to Gd-DTPA concentration, according to literature estimates.
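
    The two time-series parameters compared above are conventionally defined from the pre-contrast baseline signal S_0 and the dynamic signal S(t):

        \Delta S(t) = S(t) - S_0, \qquad \mathrm{RE}(t) = \frac{S(t) - S_0}{S_0}

    Both depend on the baseline signal, and hence on proton density and native T1, which is the origin of the reference-tissue uncertainties quoted in the abstract.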

  17. Reference tissue quantification of DCE-MRI data without a contrast agent calibration

    International Nuclear Information System (INIS)

    The quantification of dynamic contrast-enhanced (DCE) MRI data conventionally requires a conversion from signal intensity to contrast agent concentration by measuring a change in the tissue longitudinal relaxation rate, R1. In this paper, it is shown that the use of a spoiled gradient-echo acquisition sequence (optimized so that signal intensity scales linearly with contrast agent concentration), in conjunction with a reference tissue-derived vascular input function (VIF), avoids the need for the conversion to Gd-DTPA concentration. This study evaluates how to optimize such sequences and which dynamic time-series parameters are most suitable for this type of analysis. It is shown that signal difference and relative enhancement provide useful alternatives when full contrast agent quantification cannot be achieved, but that pharmacokinetic parameters derived from both contain sources of error (such as those caused by differences in proton density and native T1 values between the reference tissue and the region of interest). It is shown in a rectal cancer study that these sources of uncertainty are smaller when using signal difference than when using relative enhancement (15 ± 4% compared with 33 ± 4%). Both of these uncertainties are of the order of those associated with the conversion to Gd-DTPA concentration, according to literature estimates.

  18. MALDI-MS and multivariate analysis for the detection and quantification of different milk species.

    Science.gov (United States)

    Nicolaou, Nicoletta; Xu, Yun; Goodacre, Royston

    2011-04-01

    The extensive consumption of milk and dairy products makes these foodstuffs targets for potential adulteration, with financial gains for unscrupulous producers. Such practices must be detected, as they can impact negatively on product quality, labelling, and even health. Matrix-assisted laser desorption/ionisation time-of-flight mass spectrometry (MALDI-ToF-MS) is a potentially useful technique, with proven abilities in protein identification and, more recently, in the quantification of specific proteins or peptides through the use of internal standards. In the current work, we therefore aim to explore the accuracy and attributes of MALDI-ToF-MS with chemometrics for the detection and quantification of milk adulteration. Three binary mixtures containing cows' and goats', cows' and sheep's, and goats' and sheep's milk, and a fourth ternary mixture containing all three types of milk, were prepared and analysed directly using MALDI-ToF-MS. In these mixtures, the concentration of each milk varied from 0% to 100% in 5% steps. Multivariate statistical methods, including partial least squares (PLS) regression and non-linear kernel PLS (KPLS) regression, were employed for multivariate calibration and final interpretation of the results. The results for PLS and KPLS were encouraging, with between 2% and 13% root mean squared error of prediction on independent data; KPLS slightly outperformed PLS. We believe that these results show that MALDI-ToF-MS has excellent potential for future use in the dairy industry as a rapid method for the detection and enumeration of milk adulteration. PMID:21298416
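
    The calibration workflow described here, spectra in and mixture fractions out, maps directly onto standard chemometrics tooling. A minimal sketch with synthetic stand-in data (the real inputs would be binned MALDI-ToF spectra; all array shapes and values below are hypothetical):

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.model_selection import train_test_split

        # Stand-ins for the real data: each row is a spectrum of binned
        # intensities; y is the fraction of one milk species (0 to 1).
        rng = np.random.default_rng(0)
        y = rng.uniform(0.0, 1.0, size=200)
        X = rng.normal(size=(200, 500))                       # 500 fake m/z bins
        X[:, :20] += np.outer(y, rng.uniform(0.5, 2.0, 20))   # species-linked peaks

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                                  random_state=1)
        pls = PLSRegression(n_components=5).fit(X_tr, y_tr)

        # Root mean squared error of prediction (RMSEP) on held-out data,
        # the figure of merit reported in the abstract.
        rmsep = np.sqrt(np.mean((pls.predict(X_te).ravel() - y_te) ** 2))
        print(f"RMSEP = {rmsep:.3f}")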

  19. Suitability of Tedlar gas sampling bags for siloxane quantification in landfill gas.

    Science.gov (United States)

    Ajhar, M; Wens, B; Stollenwerk, K H; Spalding, G; Yüce, S; Melin, T

    2010-06-30

    Landfill or digester gas can contain man-made volatile methylsiloxanes (VMS), usually in the range of a few milligrams per normal cubic metre (Nm³). To date, no standard method for siloxane quantification exists, and there is controversy over which sampling procedure is most suitable. This paper presents an analytical and a sampling procedure for the quantification of common VMS in biogas via GC-MS and polyvinyl fluoride (Tedlar) bags. Two commercially available Tedlar bag models are studied. One is equipped with a polypropylene valve with an integrated septum, the other with a dual-port fitting made from stainless steel. Siloxane recovery in landfill gas samples is investigated as a function of storage time, temperature, surface-to-volume ratio, and background gas. Recovery was found to depend on the type of fitting employed. The siloxanes sampled in the bag with the polypropylene valve show high and stable recovery, even after more than 30 days. Sufficiently low detection limits, below 10 µg Nm⁻³, and good reproducibility can be achieved. The method is therefore well suited to biogas, greatly facilitating sampling in comparison with other common techniques involving siloxane enrichment on sorption media. PMID:20685441

  20. Localization and relative quantification of carbon nanotubes in cells with multispectral imaging flow cytometry.

    Science.gov (United States)

    Marangon, Iris; Boggetto, Nicole; Ménard-Moyon, Cécilia; Luciani, Nathalie; Wilhelm, Claire; Bianco, Alberto; Gazeau, Florence

    2013-01-01

    Carbon-based nanomaterials, like carbon nanotubes (CNTs), belong to the type of nanoparticles that are very difficult to discriminate from carbon-rich cell structures, and de facto there is still no quantitative method to assess their distribution at the cell and tissue levels. What we propose here is an innovative method allowing the detection and quantification of CNTs in cells using a multispectral imaging flow cytometer (ImageStream, Amnis). This newly developed device combines high cell throughput with high-resolution imaging, thus providing images of each cell directly in flow and therefore statistically relevant image analysis. Each cell image is acquired on bright-field (BF), dark-field (DF), and fluorescence channels, giving access, respectively, to the level and distribution of light absorption, light scattering, and fluorescence for each cell. The analysis then consists of a pixel-by-pixel comparison of the images of the 7,000-10,000 cells acquired for each condition of the experiment. Localization and quantification of CNTs are made possible by particular intrinsic properties of CNTs, namely strong light absorption and scattering: CNTs appear as strongly absorbing dark spots on BF and bright spots on DF, with precise colocalization. This methodology could have a considerable impact on studies of interactions between nanomaterials and cells, given that the protocol is applicable to a large range of nanomaterials, insofar as they absorb (and/or scatter) light strongly enough. PMID:24378540
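
    The detection criterion described above, dark on bright field and bright on dark field, colocalized pixel by pixel, can be sketched as a simple two-channel mask. This is only an illustration of the idea, not the authors' ImageStream analysis pipeline; all thresholds and images below are hypothetical:

        import numpy as np

        def cnt_pixel_mask(bf, df, bf_dark_thresh, df_bright_thresh):
            """CNT-like pixels: strongly absorbing in BF AND strongly scattering
            in DF, i.e. dark spots and bright spots that colocalize."""
            bf = np.asarray(bf, dtype=float)
            df = np.asarray(df, dtype=float)
            return (bf < bf_dark_thresh) & (df > df_bright_thresh)

        def cnt_score(bf, df, bf_dark_thresh=0.3, df_bright_thresh=0.7):
            """Per-cell relative CNT load: fraction of colocalized pixels."""
            return cnt_pixel_mask(bf, df, bf_dark_thresh, df_bright_thresh).mean()

        # Synthetic 64x64 "cell images" normalized to [0, 1], with one
        # CNT-like spot that is dark in BF and bright in DF.
        rng = np.random.default_rng(0)
        bf = rng.uniform(0.4, 1.0, (64, 64))
        df = rng.uniform(0.0, 0.6, (64, 64))
        bf[20:25, 20:25] = 0.1
        df[20:25, 20:25] = 0.9
        print(f"CNT-like pixel fraction: {cnt_score(bf, df):.4f}")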