Quantum Theory without Quantification
Piron, Constantin
1996-01-01
After having explained Samuel Clarke's conception of the new philosophy of physical reality, we will treat the electron field in this context as a field modifying the void. From this we will be able to derive the so-called quantum rules just from Noether's theorem on conserved currents. Thus quantum theory appears as a kind of nonlocal field theory, in fact a new theory.
Uncertainty quantification theory, implementation, and applications
Smith, Ralph C
2014-01-01
The field of uncertainty quantification is evolving rapidly because of increasing emphasis on models that require quantified uncertainties for large-scale applications, novel algorithm development, and new computational architectures that facilitate implementation of these algorithms. Uncertainty Quantification: Theory, Implementation, and Applications provides readers with the basic concepts, theory, and algorithms necessary to quantify input and response uncertainties for simulation models arising in a broad range of disciplines. The book begins with a detailed discussion of applications where uncertainty quantification is critical for both scientific understanding and policy. It then covers concepts from probability and statistics, parameter selection techniques, frequentist and Bayesian model calibration, propagation of uncertainties, quantification of model discrepancy, surrogate model construction, and local and global sensitivity analysis. The author maintains a complementary web page where readers ca...
Recurrence quantification analysis theory and best practices
Webber, Charles L., Jr.; Marwan, Norbert
2015-01-01
The analysis of recurrences in dynamical systems by using recurrence plots and their quantification is still an emerging field. Over the past decades recurrence plots have proven to be valuable data visualization and analysis tools in the theoretical study of complex, time-varying dynamical systems as well as in various applications in biology, neuroscience, kinesiology, psychology, physiology, engineering, physics, geosciences, linguistics, finance, economics, and other disciplines. This multi-authored book intends to comprehensively introduce and showcase recent advances as well as established best practices concerning both theoretical and practical aspects of recurrence plot based analysis. Edited and authored by leading researchers in the field, the various chapters address an interdisciplinary readership, ranging from theoretical physicists to application-oriented scientists in all data-providing disciplines.
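The recurrence-plot machinery described above can be sketched in a few lines. This is a minimal illustration only (a scalar series with no delay embedding, and a hypothetical threshold value), not the full RQA toolbox the book covers:

```python
import numpy as np

def recurrence_matrix(x, eps):
    # R[i, j] = 1 when the distance between states x[i] and x[j] is below eps
    d = np.abs(x[:, None] - x[None, :])
    return (d < eps).astype(int)

# A periodic signal yields the diagonal-line structure typical of recurrence plots
x = np.sin(np.linspace(0.0, 8.0 * np.pi, 200))
R = recurrence_matrix(x, 0.1)
recurrence_rate = R.mean()  # simplest RQA measure: density of recurrence points
```

In practice RQA starts from a delay embedding of the time series and computes further measures (determinism, laminarity, entropy of diagonal-line lengths) on top of this matrix.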
On Irrelevance and Algorithmic Equality in Predicative Type Theory
Abel, Andreas
2012-01-01
Dependently typed programs contain an excessive amount of static terms which are necessary to please the type checker but irrelevant for computation. To separate static and dynamic code, several static analyses and type systems have been put forward. We consider Pfenning's type theory with irrelevant quantification which is compatible with a type-based notion of equality that respects eta-laws. We extend Pfenning's theory to universes and large eliminations and develop its meta-theory. Subject reduction, normalization and consistency are obtained by a Kripke model over the typed equality judgement. Finally, a type-directed equality algorithm is described whose completeness is proven by a second Kripke model.
Guarded dependent type theory with coinductive types
Bizjak, Aleš; Grathwohl, Hans Bugge; Clouston, Ranald; Birkedal, Lars; Møgelberg, Rasmus Ejlers
2015-01-01
We present guarded dependent type theory, gDTT, an extensional dependent type theory with a 'later' modality and clock quantifiers for programming and proving with guarded recursive and coinductive types. The later modality is used to ensure the productivity of recursive definitions in a modular, type-based way. Clock quantifiers are used for controlled elimination of the later modality and for encoding coinductive types using guarded recursive types. Key to the development of gDTT are novel type and term formers involving what we call 'delayed substitutions'. These generalise the applicative...
Linear contextual modal type theory
Schack-Nielsen, Anders; Schürmann, Carsten
Abstract. When one implements a logical framework based on linear type theory, for example the Celf system [?], one is immediately confronted with questions about its equational theory and how to deal with logic variables. In this paper, we propose linear contextual modal type theory that gives a mathematical account of the nature of logic variables. Our type theory is conservative over intuitionistic contextual modal type theory proposed by Nanevski, Pfenning, and Pientka. Our main contributions include a mechanically checked proof of soundness and a working implementation.
Guallart, Nino
2014-01-01
Pure type systems arise as a generalisation of simply typed lambda calculus. The contemporary development of mathematics has renewed the interest in type theories, as they are not just the object of mere historical research, but have an active role in the development of computational science and core mathematics. It is worth exploring some of them in depth, particularly predicative Martin-Löf's intuitionistic type theory and impredicative Coquand's calculus of constructions. The logical and...
Computational semantics in type theory
Ranta, Aarne
2006-01-01
This paper aims to show how Montague-style grammars can be completely formalized and thereby declaratively implemented by using the Grammatical Framework GF. The implementation covers the fundamental operations of Montague’s PTQ model: the construction of analysis trees, the linearization of trees into strings, and the interpretation of trees as logical formulas. Moreover, a parsing algorithm is derived from the grammar. Given that GF is a constructive type theory with dependent types, the te...
Definitional Extension in Type Theory
Xue, Tao
2014-01-01
When we extend a type system, the relation between the original system and its extension is an important issue we want to know. Conservative extension is a traditional relation we study with. But in some cases, like coercive subtyping, it is not strong enough to capture all the properties, more powerful relation between the systems is required. We bring the idea definitional extension from mathematical logic into type theory. In this paper, we study the notion of definitional extension for t...
A "Toy" Model for Operational Risk Quantification using Credibility Theory
Bühlmann, Hans; Shevchenko, Pavel V.; Wüthrich, Mario V.
2009-01-01
To meet the Basel II regulatory requirements for the Advanced Measurement Approaches in operational risk, the bank's internal model should make use of the internal data, relevant external data, scenario analysis and factors reflecting the business environment and internal control systems. One of the unresolved challenges in operational risk is how to combine these data sources appropriately. In this paper we focus on quantification of the low frequency high impact losses exceeding some high thr...
Causality in Time Series: Its Detection and Quantification by Means of Information Theory
Hlaváčková-Schindler, Kateřina
New York: Springer, 2008 (Emmert-Streib, F.; Dehmer, M., eds.), pp. 183-207. ISBN 978-0-387-84815-0. Keywords: causality, time series, information theory. http://library.utia.cas.cz/separaty/2009/AS/schindler-causality in time series its detection and quantification by means of information theory.pdf
A computable type theory for control systems
Collins, P.J.; Guo, L.; Baillieul, J.
2009-01-01
In this paper, we develop a theory of computable types suitable for the study of control systems. The theory uses type-two effectivity as the underlying computational model, but we quickly develop a type system which can be manipulated abstractly and for which all allowable operations are guaranteed to be computable. We apply the theory to the study of hybrid systems, reachability analysis, and control synthesis.
Dissipative relativistic fluid theories of divergence type
We investigate the theories of dissipative relativistic fluids in which all of the dynamical equations can be written as total-divergence equations. Extending the analysis of Liu, Mueller, and Ruggeri, we find the general theory of this type. We discuss various features of these theories, including the causality of the full nonlinear evolution equations and the nature and stability of the equilibrium states.
Some Properties of Type I' String Theory
Schwarz, John H.
1999-01-01
The T-dual formulation of Type I superstring theory, sometimes called Type I' theory, has a number of interesting features. Here we review some of them including the role of D0-branes and D8-branes in controlling possible gauge symmetry enhancement.
Completeness in Hybrid Type Theory
Areces, Carlos; Blackburn, Patrick Rowan; Huertas, Antonia; Manzano, Maria
way we interpret @i in propositional and first-order hybrid logic. This means: interpret @iαa, where αa is an expression of any type a, as an expression of type a that rigidly returns the value that αa receives at the i-world. The axiomatization and completeness proofs are generalizations of those found in propositional and first-order hybrid logic, and (as is usual in hybrid logic) we automatically obtain a wide range of completeness results for stronger logics and languages. Our approach is deliberately low-tech. We don't, for example, make use of Montague's intensional type s, or Fitting...
Uncertainty Quantification and Propagation in Nuclear Density Functional Theory
Schunck, N; Higdon, D; Sarich, J; Wild, S M
2015-01-01
Nuclear density functional theory (DFT) is one of the main theoretical tools used to study the properties of heavy and superheavy elements, or to describe the structure of nuclei far from stability. While on-going efforts seek to better root nuclear DFT in the theory of nuclear forces [see Duguet et al., this issue], energy functionals remain semi-phenomenological constructions that depend on a set of parameters adjusted to experimental data in finite nuclei. In this paper, we review recent efforts to quantify the related uncertainties, and propagate them to model predictions. In particular, we cover the topics of parameter estimation for inverse problems, statistical analysis of model uncertainties and Bayesian inference methods. Illustrative examples are taken from the literature.
Quantification of digital forensic hypotheses using probability theory
Overill, RE; Silomon, JAM; Tse, HKS; Chow, KP
2013-01-01
The issue of downloading illegal material from a website onto a personal digital device is considered from the perspective of conventional (Pascalian) probability theory. We present quantitative results for a simple model system by which we analyse and counter the putative defence case that the forensically recovered illegal material was downloaded accidentally by the defendant. The model is applied to two actual prosecutions involving possession of child pornography.
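The odds-form Bayesian update underlying such a Pascalian analysis can be sketched as follows; the numbers here are hypothetical illustrations, not figures from the paper:

```python
def posterior_odds(prior_odds, likelihood_ratio):
    # Pascalian (Bayesian) odds update: posterior odds = LR x prior odds
    return likelihood_ratio * prior_odds

def odds_to_probability(odds):
    # convert odds in favour of a hypothesis to a probability
    return odds / (1.0 + odds)

# Hypothetical numbers, NOT taken from the paper: suppose the recovered
# evidence is 100 times more probable under deliberate download than under
# accidental download (LR = 100), starting from even prior odds.
post = posterior_odds(prior_odds=1.0, likelihood_ratio=100.0)
p_deliberate = odds_to_probability(post)  # about 0.99
```

The model in the paper plays out this kind of update against the "accidental download" defence for a concrete model of browsing behaviour.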
Quantification of Uncertainties in Nuclear Density Functional theory
Schunck, N; Higdon, D; Sarich, J; Wild, S
2014-01-01
Reliable predictions of nuclear properties are needed as much to answer fundamental science questions as in applications such as reactor physics or data evaluation. Nuclear density functional theory is currently the only microscopic, global approach to nuclear structure that is applicable throughout the nuclear chart. In the past few years, a lot of effort has been devoted to setting up a general methodology to assess theoretical uncertainties in nuclear DFT calculations. In this paper, we summarize some of the recent progress in this direction. Most of the new material discussed here will be published in separate articles.
Type II string theory and modularity
This paper, in a sense, completes a series of three papers. In the previous two, we have explored the possibility of refining the K-theory partition function in type II string theories using elliptic cohomology. In the present paper, we make that more concrete by defining a fully quantized free field theory based on elliptic cohomology of 10-dimensional spacetime. Moreover, we describe a concrete scenario of how this is related to compactification of F-theory on an elliptic curve leading to IIA and IIB theories. We propose an interpretation of the elliptic curve in the context of elliptic cohomology. We discuss the possibility of orbifolding of the elliptic curves and derive certain properties of F-theory. We propose a link of this to type IIB modularity, the structure of the topological lagrangian of M-theory, and Witten's index of loop space Dirac operators. The discussion suggests an S1-lift of type IIB and an F-theoretic model for type I obtained by orbifolding that for type IIB.
Quantification of margins and mixed uncertainties using evidence theory and stochastic expansions
The objective of this paper is to implement Dempster–Shafer Theory of Evidence (DSTE) in the presence of mixed (aleatory and multiple sources of epistemic) uncertainty to the reliability and performance assessment of complex engineering systems through the use of quantification of margins and uncertainties (QMU) methodology. This study focuses on quantifying the simulation uncertainties, both in the design condition and the performance boundaries along with the determination of margins. To address the possibility of multiple sources and intervals for epistemic uncertainty characterization, DSTE is used for uncertainty quantification. An approach to incorporate aleatory uncertainty in Dempster–Shafer structures is presented by discretizing the aleatory variable distributions into sets of intervals. In view of excessive computational costs for large scale applications and repetitive simulations needed for DSTE analysis, a stochastic response surface based on point-collocation non-intrusive polynomial chaos (NIPC) has been implemented as the surrogate for the model response. The technique is demonstrated on a model problem with non-linear analytical functions representing the outputs and performance boundaries of two coupled systems. Finally, the QMU approach is demonstrated on a multi-disciplinary analysis of a high speed civil transport (HSCT).
Highlights:
• Quantification of margins and uncertainties (QMU) methodology with evidence theory.
• Treatment of both inherent and epistemic uncertainties within evidence theory.
• Stochastic expansions for representation of performance metrics and boundaries.
• Demonstration of QMU on an analytical problem.
• QMU analysis applied to an aerospace system (high speed civil transport).
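As a minimal sketch of the Dempster–Shafer machinery the paper builds on, belief and plausibility of a query set can be computed from a body of evidence; the focal elements and masses below are hypothetical, and the [Bel, Pl] interval is what DSTE uses to represent epistemic uncertainty:

```python
def belief(focal, query):
    # Bel(A) = total mass of focal elements wholly contained in A
    return sum(m for s, m in focal if s <= query)

def plausibility(focal, query):
    # Pl(A) = total mass of focal elements that intersect A
    return sum(m for s, m in focal if s & query)

# Hypothetical body of evidence over outcomes {a, b, c}: masses on sets,
# summing to 1, as obtained e.g. from expert intervals
focal = [(frozenset("a"), 0.5), (frozenset("ab"), 0.3), (frozenset("abc"), 0.2)]
A = frozenset("a")
bel = belief(focal, A)        # lower bound on P(A)
pl = plausibility(focal, A)   # upper bound on P(A)
```

The gap between Bel and Pl shrinks as evidence becomes more specific; a probability measure is the special case where every focal element is a singleton.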
Divergence-type theory of conformal fields
Peralta-Ramos, J
2009-01-01
We present a nonlinear hydrodynamical description of a conformal plasma within the framework of divergence-type theories (DTTs), which are not based on a gradient expansion. We compare the equations of the DTT and the second-order theory (based on conformal invariants), for the case of Bjorken flow. The approach to ideal hydrodynamics is faster in the DTT, indicating that our results can be useful in the study of early-time dynamics in relativistic heavy-ion collisions.
Fixed point theory in metric type spaces
Agarwal, Ravi P; O’Regan, Donal; Roldán-López-de-Hierro, Antonio Francisco
2015-01-01
Written by a team of leading experts in the field, this volume presents a self-contained account of the theory, techniques and results in metric type spaces (in particular in G-metric spaces); that is, the text approaches this important area of fixed point analysis beginning from the basic ideas of metric space topology. The text is structured so that it leads the reader from preliminaries and historical notes on metric spaces (in particular G-metric spaces) and on mappings, to Banach type contraction theorems in metric type spaces, fixed point theory in partially ordered G-metric spaces, fixed point theory for expansive mappings in metric type spaces, generalizations, present results and techniques in a very general abstract setting and framework. Fixed point theory is one of the major research areas in nonlinear analysis. This is partly due to the fact that in many real world problems fixed point theory is the basic mathematical tool used to establish the existence of solutions to problems which arise natur...
Game-theoretic Interpretation of Type Theory Part I: Intuitionistic Type Theory with Universes
Yamada, Norihiro
2016-01-01
We present a game semantics for intuitionistic type theory. Concretely, we propose categories with families of games and strategies for both extensional and intensional type theories, which support dependent product, dependent sum, and Id-types as well as universes. The intensional interpretation of the Id-types in particular has interesting phenomena: It admits the principle of uniqueness of identity proofs as well as Streicher's first and second Criteria of Intensionality, but refutes the t...
Double Field Theory of Type II Strings
Hohm, Olaf; Kwak, Seung Ki; Zwiebach, Barton
2011-01-01
We use double field theory to give a unified description of the low energy limits of type IIA and type IIB superstrings. The Ramond-Ramond potentials fit into spinor representations of the duality group O(D, D) and field-strengths are obtained by acting with the Dirac operator on the potentials. The action, supplemented by a Spin+ (D, D)-covariant self-duality condition on field strengths, reduces to the IIA and IIB theories in different frames. As usual, the NS-NS gravitational variables are...
Explicit Substitutions for Contextual Type Theory
Abel, Andreas; 10.4204/EPTCS.34.3
2010-01-01
In this paper, we present an explicit substitution calculus which distinguishes between ordinary bound variables and meta-variables. Its typing discipline is derived from contextual modal type theory. We first present a dependently typed lambda calculus with explicit substitutions for ordinary variables and explicit meta-substitutions for meta-variables. We then present a weak head normalization procedure which performs both substitutions lazily and in a single pass thereby combining substitution walks for the two different classes of variables. Finally, we describe a bidirectional type checking algorithm which uses weak head normalization and prove soundness.
Multi-level Contextual Type Theory
Mathieu Boespflug
2011-10-01
Contextual type theory distinguishes between bound variables and meta-variables to write potentially incomplete terms in the presence of binders. It has found good use as a framework for concise explanations of higher-order unification, for characterizing holes in proofs, and for developing a foundation for programming with higher-order abstract syntax, as embodied by the programming and reasoning environment Beluga. However, to reason about these applications, we need to introduce meta^2-variables to characterize the dependency on meta-variables and bound variables. In other words, we must go beyond a two-level system granting only bound variables and meta-variables. In this paper we generalize contextual type theory to n levels for arbitrary n, so as to obtain a formal system offering bound variables, meta-variables and so on all the way to meta^n-variables. We obtain a uniform account by collapsing all these different kinds of variables into a single notion of variable indexed by some level k. We give a decidable bi-directional type system which characterizes beta-eta-normal forms together with a generalized substitution operation.
Doyle, Laurance R.; McCowan, Brenda; Hanser, Sean F.; Chyba, Christopher; Bucci, Taylor; Blue, J. E.
2008-06-01
We assess the effectiveness of applying information theory to the characterization and quantification of the effects of anthropogenic vessel noise on humpback whale (Megaptera novaeangliae) vocal behavior in and around Glacier Bay, Alaska. Vessel noise has the potential to interfere with the complex vocal behavior of these humpback whales which could have direct consequences on their feeding behavior and thus ultimately on their health and reproduction. Humpback whale feeding calls recorded during conditions of high vessel-generated noise and lower levels of background noise are compared for differences in acoustic structure, use, and organization using information theoretic measures. We apply information theory in a self-referential manner (i.e., orders of entropy) to quantify the changes in signaling behavior. We then compare this with the reduction in channel capacity due to noise in Glacier Bay itself treating it as a (Gaussian) noisy channel. We find that high vessel noise is associated with an increase in the rate and repetitiveness of sequential use of feeding call types in our averaged sample of humpback whale vocalizations, indicating that vessel noise may be modifying the patterns of use of feeding calls by the endangered humpback whales in Southeast Alaska. The information theoretic approach suggested herein can make a reliable quantitative measure of such relationships and may also be adapted for wider application to many species where environmental noise is thought to be a problem.
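The "orders of entropy" idea can be illustrated with a crude per-symbol k-gram entropy on a symbolic call sequence; the sequence here is hypothetical and the estimator is a simplification of the information-theoretic measures the study uses:

```python
import math
from collections import Counter

def entropy_order(seq, k):
    # k-th order entropy: Shannon entropy of the k-gram distribution,
    # normalised per symbol; lower values mean more predictable structure
    grams = [tuple(seq[i:i + k]) for i in range(len(seq) - k + 1)]
    n = len(grams)
    counts = Counter(grams)
    h = -sum(c / n * math.log2(c / n) for c in counts.values())
    return h / k

calls = list("ABABABAB")      # hypothetical call-type sequence, not real data
h1 = entropy_order(calls, 1)  # 1.0 bit: call types A and B equally frequent
h2 = entropy_order(calls, 2)  # lower: pairs of calls are highly predictable
```

Increased repetitiveness under noise, as reported in the abstract, would show up as a drop in the higher-order entropies relative to the first-order one.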
McDonnell, J. D. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Schunck, N. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Higdon, D. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Sarich, J. [Argonne National Lab. (ANL), Argonne, IL (United States); Wild, S. M. [Argonne National Lab. (ANL), Argonne, IL (United States); Nazarewicz, W. [Michigan State Univ., East Lansing, MI (United States); Oak Ridge National Lab., Oak Ridge, TN (United States); Univ. of Warsaw, Warsaw (Poland)
2015-03-24
Statistical tools of uncertainty quantification can be used to assess the information content of measured observables with respect to present-day theoretical models, to estimate model errors and thereby improve predictive capability, to extrapolate beyond the regions reached by experiment, and to provide meaningful input to applications and planned measurements. To showcase new opportunities offered by such tools, we make a rigorous analysis of theoretical statistical uncertainties in nuclear density functional theory using Bayesian inference methods. By considering the recent mass measurements from the Canadian Penning Trap at Argonne National Laboratory, we demonstrate how the Bayesian analysis and a direct least-squares optimization, combined with high-performance computing, can be used to assess the information content of the new data with respect to a model based on the Skyrme energy density functional approach. Employing the posterior probability distribution computed with a Gaussian process emulator, we apply the Bayesian framework to propagate theoretical statistical uncertainties in predictions of nuclear masses, two-neutron dripline, and fission barriers. Overall, we find that the new mass measurements do not impose a constraint that is strong enough to lead to significant changes in the model parameters. As a result, the example discussed in this study sets the stage for quantifying and maximizing the impact of new measurements with respect to current modeling and guiding future experimental efforts, thus enhancing the experiment-theory cycle in the scientific method.
Aldegunde, Manuel; Kermode, James R.; Zabaras, Nicholas
2016-04-01
This paper presents the development of a new exchange-correlation functional from the point of view of machine learning. Using atomization energies of solids and small molecules, we train a linear model for the exchange enhancement factor using a Bayesian approach which allows for the quantification of uncertainties in the predictions. A relevance vector machine is used to automatically select the most relevant terms of the model. We then test this model on atomization energies and also on bulk properties. The average model provides a mean absolute error of only 0.116 eV for the test points of the G2/97 set but a larger 0.314 eV for the test solids. In terms of bulk properties, the prediction for transition metals and monovalent semiconductors has a very low test error. However, as expected, predictions for types of materials not represented in the training set such as ionic solids show much larger errors.
Dynamic Typing: Syntax and Proof Theory
Henglein, Fritz
1994-01-01
Dynamic typing, coercions, dynamically typed lambda-calculus, type inference coherence, completions, safety, minimality.
Field theory in Goedel-type spacetimes
Marecki, Piotr [Institut fuer Theoretische Physik, Universitaet Leipzig, 04009 Leipzig (Germany)
2008-07-01
I will discuss mathematical aspects of the massless scalar field in spacetimes of Goedel type. Due to their high symmetry, these spacetimes might provide an arena for the next step of development of concrete models of quantum fields in curved spacetimes, such as those already developed for the de Sitter spacetime. While the motion of the sources of Goedel spacetimes (dust with non-vanishing vorticity) is physically interesting and not too implausible, a difficulty with causality is encountered: sufficiently large regions of Goedel spacetimes possess CTCs. A complete picture of the classical solutions of the wave equation, which will be presented, sheds some light on the seriousness of this difficulty from the point of view of classical field theory and provides a link to known treatments of quantum fields in simple non-globally hyperbolic spacetimes such as time-like cylinders etc. I present an algebraic construction of the solutions based on the symmetry-generators of Goedel-type spacetimes and a connection to the analysis of unitary irreducible representations of SU(1,1).
Chiron: A Set Theory with Types, Undefinedness, Quotation, and Evaluation
Farmer, William M.
2013-01-01
Chiron is a derivative of von Neumann-Bernays-Gödel (NBG) set theory that is intended to be a practical, general-purpose logic for mechanizing mathematics. Unlike traditional set theories such as Zermelo-Fraenkel (ZF) and NBG, Chiron is equipped with a type system, lambda notation, and definite and indefinite description. The type system includes a universal type, dependent types, dependent function types, subtypes, and possibly empty types. Unlike traditional logics such as first-order log...
Distributions of countable models of theories with continuum many types
Popkov, Roman A.; Sudoplatov, Sergey V.
2012-01-01
We present distributions of countable models and corresponding structural characteristics of complete theories with continuum many types: for prime models over finite sets relative to Rudin-Keisler preorders, for limit models over types and over sequences of types, and for other countable models of theory.
Simple Type Theory as Framework for Combining Logics
Benzmueller, Christoph
2010-01-01
Simple type theory is suited as framework for combining classical and non-classical logics. This claim is based on the observation that various prominent logics, including (quantified) multimodal logics and intuitionistic logics, can be elegantly embedded in simple type theory. Furthermore, simple type theory is sufficiently expressive to model combinations of embedded logics and it has a well understood semantics. Off-the-shelf reasoning systems for simple type theory exist that can be uniformly employed for reasoning within and about combinations of logics.
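The shallow-embedding idea the abstract describes (modal operators interpreted as terms over an explicit type of possible worlds) can be sketched in Python with higher-order functions; the three-world Kripke frame below is hypothetical:

```python
# A tiny hypothetical Kripke frame: worlds and an accessibility relation
worlds = {0, 1, 2}
R = {(0, 1), (0, 2), (1, 2), (2, 2)}

def box(p):
    # []p holds at w when p holds at every world accessible from w
    return lambda w: all(p(v) for v in worlds if (w, v) in R)

def dia(p):
    # <>p holds at w when p holds at some world accessible from w
    return lambda w: any(p(v) for v in worlds if (w, v) in R)

p = lambda w: w == 2   # atomic proposition true exactly at world 2
b1 = box(p)(1)         # True: the only successor of 1 is 2
b0 = box(p)(0)         # False: world 1 is accessible from 0 and refutes p
d0 = dia(p)(0)         # True: world 2 is accessible from 0
```

In simple type theory the same move is made with lambda terms of type (world -> o) -> world -> o, which is what makes uniform reasoning across the embedded logics possible.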
Type Arithmetics: Computation based on the theory of types
Kiselyov, Oleg
2001-01-01
The present paper shows how meta-programming can turn into programming rich enough to express arbitrary arithmetic computations. We demonstrate a type system that implements Peano arithmetic, slightly generalized to negative numbers. Certain types in this system denote numerals. Arithmetic operations on such type-numerals (addition, subtraction, and even division) are expressed as type reduction rules executed by a compiler. A remarkable trait is that division by zero becomes a type error - ...
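The abstract's type-level Peano arithmetic can be mimicked at runtime. The sketch below is a Python analogue, assumed purely for illustration: the paper performs these reductions inside the compiler's type checker at compile time, whereas here Zero/Succ terms are reduced by ordinary functions, with division by zero surfacing as an error in the spirit of the "division by zero becomes a type error" remark.

```python
# Peano numerals as nested Zero/Succ terms (runtime analogue of
# the paper's type-numerals).
class Zero: pass
class Succ:
    def __init__(self, pred): self.pred = pred

def to_int(n):
    return 0 if isinstance(n, Zero) else 1 + to_int(n.pred)

def from_int(k):
    n = Zero()
    for _ in range(k): n = Succ(n)
    return n

def add(m, n):  # reduction rule: add(m, Succ(n)) -> Succ(add(m, n))
    return m if isinstance(n, Zero) else Succ(add(m, n.pred))

def div(m, n):  # repeated subtraction; division by Zero is rejected,
                # mirroring "division by zero is a type error"
    if isinstance(n, Zero):
        raise TypeError("division by zero")
    q = 0
    while to_int(m) >= to_int(n):
        m = from_int(to_int(m) - to_int(n))
        q += 1
    return from_int(q)

print(to_int(add(from_int(2), from_int(3))))  # 5
print(to_int(div(from_int(7), from_int(2))))  # 3
```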
Orbifolds of M-theory and type II string theories in two dimensions
We consider several orbifold compactifications of M-theory and their corresponding type II duals in two space-time dimensions. In particular, we show that while the orbifold compactification of M-theory on T9/J9 is dual to the orbifold compactification of type IIB string theory on T8/I8, the same orbifold T8/I8 of type IIA string theory is dual to M-theory compactified on a smooth product manifold K3 x T5. Similarly, while the orbifold compactification of M-theory on (K3 x T5)/?.J5 is dual to the orbifold compactification of type IIB string theory on (K3 x T4)/?.I4, the same orbifold of type IIA string theory is dual to the orbifold T4 x (K3 x S1)/?.J1 of M-theory. The spectra of various orbifold compactifications of M-theory and type II string theories on both sides are compared, giving evidence in favor of these duality conjectures. We also comment on a connection between the Dasgupta-Mukhi-Witten conjecture and the Dabholkar-Park-Sen conjecture for the six-dimensional orbifold models of type IIB string theory and M-theory. (orig.)
Uncertainty Quantification of Composite Laminate Damage with the Generalized Information Theory
J. Lucero; F. Hemez; T. Ross; K. Kline; J. Hundhausen; T. Tippetts
2006-05-01
This work presents a survey of five theories to assess the uncertainty of projectile impact induced damage on multi-layered carbon-epoxy composite plates. Because the types of uncertainty dealt with in this application are multiple (variability, ambiguity, and conflict) and because the data sets collected are sparse, characterizing the amount of delamination damage with probability theory alone is possible but incomplete. This motivates the exploration of methods contained within a broad Generalized Information Theory (GIT) that rely on less restrictive assumptions than probability theory. Probability, fuzzy sets, possibility, and imprecise probability (probability boxes (p-boxes) and Dempster-Shafer) are used to assess the uncertainty in composite plate damage. Furthermore, this work highlights the usefulness of each theory. The purpose of the study is not to compare directly the different GIT methods but to show that they can be deployed on a practical application and to compare the assumptions upon which these theories are based. The data sets consist of experimental measurements and finite element predictions of the amount of delamination and fiber splitting damage as multilayered composite plates are impacted by a projectile at various velocities. The physical experiments consist of using a gas gun to impact suspended plates with a projectile accelerated to prescribed velocities, then, taking ultrasound images of the resulting delamination. The nonlinear, multiple length-scale numerical simulations couple local crack propagation implemented through cohesive zone modeling to global stress-displacement finite element analysis. The assessment of damage uncertainty is performed in three steps by, first, considering the test data only; then, considering the simulation data only; finally, performing an assessment of total uncertainty where test and simulation data sets are combined. 
This study leads to practical recommendations for reducing the uncertainty and improving the prediction accuracy of the damage modeling and finite element simulation.
Kripke Semantics for Martin-Löf's Extensional Type Theory
Awodey, Steve
2011-01-01
It is well-known that simple type theory is complete with respect to non-standard set-valued models. Completeness for standard models only holds with respect to certain extended classes of models, e.g., the class of cartesian closed categories. Similarly, dependent type theory is complete for locally cartesian closed categories. However, it is usually difficult to establish the coherence of interpretations of dependent type theory, i.e., to show that the interpretations of equal expressions are indeed equal. Several classes of models have been used to remedy this problem. We contribute to this investigation by giving a semantics that is standard, coherent, and sufficiently general for completeness while remaining relatively easy to compute with. Our models interpret types of Martin-Löf's extensional dependent type theory as sets indexed over posets or, equivalently, as fibrations over posets. This semantics can be seen as a generalization to dependent type theory of the interpretation of intuitionistic firs...
Methods of earthquake resistance quantification for old types of PWR type nuclear power plants
The basic principles are presented of the technology developed by the US NRC. The Systematic Evaluation Programme consists of a detailed inspection of the plant, of an analysis of existing reports and calculations, and of a new assessment of critical buildings and technological equipment. This programme is followed by a seismic Probabilistic Safety Analysis and a Seismic Margin Review. Instead of the failure probability, this analysis introduces the High Confidence of Low Probability of Failure, which identifies an acceleration value such that the probability of component failure is less than 5%. For WWER-type reactors the technique according to the Systematic Evaluation Programme seems to be the most viable. (M.D.) 12 refs., 2 figs
On Types of Observables in Constrained Theories
Anderson, Edward
2016-01-01
The Kuchar observables notion is shown to apply only to a limited range of theories. Relational mechanics, slightly inhomogeneous cosmology and supergravity are used as examples that require further notions of observables. A suitably general notion of A-observables is then given to cover all of these cases. `A' here stands for `algebraic substructure'; A-observables can be defined by association with each closed algebraic substructure of a theory's constraints. Both constrained algebraic structures and associated notions of A-observables form bounded lattices.
Numerical Domain Wall Type Solutions in phi**4 Theory
Karkowski, J.; Swierczynski, Z.
1996-01-01
The well-known domain wall type solutions are nowadays of great physical interest in classical field theory. These solutions can mostly be found only approximately. Recently the Hilbert-Chapman-Enskog method was successfully applied to obtain solutions of this type in phi**4 theory. The goal of the present paper is to verify these perturbative results by numerical computations.
Motion in Bimetric Type Theories of Gravity
Kahil, M E
2015-01-01
The problem of motion for different test particles, charged and spinning objects with constant spin tensor, in different versions of bimetric theories of gravity is treated by deriving their corresponding path and path deviation equations, using a modified Bazanski method in the presence of Riemannian geometry. This method enables us to find path and path deviation equations of different objects orbiting very strong gravitational fields.
Water type quantification in the Skagerrak, the Kattegat and off the Jutland west coast
Trond Kristiansen
2015-04-01
An extensive data series of salinity, nutrients and coloured dissolved organic material (CDOM) was collected in the Skagerrak, the northern part of the Kattegat and off the Jutland west coast in April each year during the period 1996-2000 by the Institute of Marine Research in Norway. In this month, after the spring bloom, German Bight Water differs from its surrounding waters by a higher nitrate content and higher nitrate/phosphate and nitrate/silicate ratios. The spreading of this water type into the Skagerrak is of special interest with regard to toxic algal blooms. The quantification of the spatial distributions of the different water types required the development of a new algorithm for the area containing the Norwegian Coastal Current, while an earlier Danish algorithm was applied for the rest of the area. From the upper 50 m, a total of 2227 observations of salinity and CDOM content have been used to calculate the mean concentration of water from the German Bight, the North Sea (Atlantic water), the Baltic Sea and Norwegian rivers. The Atlantic Water was the dominant water type, with a mean concentration of 79%; German Bight Water constituted 11%, Baltic Water 8%, and Norwegian River Water 2%. At the surface the mean percentages of these water types were found to be 68%, 15%, 15%, and 3%, respectively. Within the northern part of the Skagerrak, closer to the Norwegian coast, the surface waters were estimated to consist of 74% Atlantic Water, 20% Baltic Water, and 7% Norwegian River Water. The analysis indicates that the content of German Bight Water in this part is less than 5%.
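The water-type calculation described above is, at heart, a linear unmixing problem: each conservative tracer contributes one equation, and the mixing fractions must sum to one. The Python sketch below solves a toy three-end-member version in pure Python; the end-member tracer values and the observed sample are invented for illustration and are not the calibrated values used in the study.

```python
def solve3(A, b):
    # Gaussian elimination with partial pivoting for a 3x3 system.
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    n = 3
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            for c in range(i, n + 1):
                M[r][c] -= f * M[i][c]
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (M[i][n] - sum(M[i][c] * x[c] for c in range(i + 1, n))) / M[i][i]
    return x

# Rows: salinity, CDOM, mass conservation (fractions sum to 1).
# Columns: Atlantic, German Bight, Baltic end members (hypothetical values).
A = [[35.0, 32.0, 8.0],   # salinity of each end member
     [0.2,  1.5,  3.0],   # CDOM of each end member
     [1.0,  1.0,  1.0]]   # fractions sum to 1
b = [31.0, 0.8, 1.0]      # observed sample: salinity, CDOM, total
f_atl, f_gb, f_bal = solve3(A, b)
print(round(f_atl, 3), round(f_gb, 3), round(f_bal, 3))
```

With more tracers than water types, the same setup would be solved by least squares, which is closer to what the algorithms mentioned in the abstract do.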
Hesheng Tang; Yu Su; Jiao Wang
2015-08-01
The paper describes a procedure for the uncertainty quantification (UQ) using evidence theory in buckling analysis of semi-rigid jointed frame structures under mixed epistemic–aleatory uncertainty. The design uncertainties (geometrical, material, strength, and manufacturing) are often prevalent in engineering applications. Due to lack of knowledge or incomplete, inaccurate, unclear information in the modeling, simulation, measurement, and design, there are limitations in using only one framework (probability theory) to quantify uncertainty in a system because of the impreciseness of data or knowledge. Evidence theory provides an alternative to probability theory for the representation of epistemic uncertainty that derives from a lack of knowledge with respect to the appropriate values to use for various inputs to the model. Unfortunately, propagation of an evidence theory representation for uncertainty through a model is more computationally demanding than propagation of a probabilistic representation for uncertainty. In order to alleviate the computational difficulties in the evidence theory based UQ analysis, a differential evolution-based computational strategy for propagation of epistemic uncertainty in a system with evidence theory is presented here. A UQ analysis for the buckling load of steel-plane frames with semi-rigid connections is given herein to demonstrate accuracy and efficiency of the proposed method.
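The belief/plausibility bracketing that evidence theory provides can be sketched in a few lines. The example below uses hypothetical masses, unrelated to the paper's buckling data: it computes the Dempster-Shafer belief and plausibility of an event from a basic probability assignment over subsets of a frame of discernment.

```python
from fractions import Fraction

frame = frozenset({"low", "medium", "high"})  # hypothetical outcome levels
bpa = {                                        # masses must sum to 1
    frozenset({"low"}): Fraction(1, 2),
    frozenset({"low", "medium"}): Fraction(3, 10),
    frame: Fraction(1, 5),                     # mass on "don't know"
}

def belief(A):
    # total mass committed to subsets of A (evidence that implies A)
    return sum(m for B, m in bpa.items() if B <= A)

def plausibility(A):
    # total mass not contradicting A (evidence consistent with A)
    return sum(m for B, m in bpa.items() if B & A)

A = frozenset({"low", "medium"})
print(belief(A), plausibility(A))  # 4/5 1
```

The gap between belief and plausibility is exactly the epistemic uncertainty the abstract refers to; a single probability measure would collapse the interval to one number.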
On representation theory of affine Hecke algebras of type B
Miemietz, Vanessa
2007-01-01
Ariki's and Grojnowski's approach to the representation theory of affine Hecke algebras of type $A$ is applied to type $B$ with unequal parameters to obtain -- under certain restrictions on the eigenvalues of the lattice operators -- analogous multiplicity-one results and a classification of irreducibles with partial branching rules as in type $A$.
van der Put, Robert M F; de Haan, Alex; van den IJssel, Jan G M; Hamidi, Ahd; Beurret, Michel
2015-11-27
Due to the rapidly increasing introduction of Haemophilus influenzae type b (Hib) and other conjugate vaccines worldwide during the last decade, reliable and robust analytical methods are needed for the quantitative monitoring of intermediate samples generated during fermentation (upstream processing, USP) and purification (downstream processing, DSP) of polysaccharide vaccine components. This study describes the quantitative characterization of in-process control (IPC) samples generated during the fermentation and purification of the capsular polysaccharide (CPS), polyribosyl-ribitol-phosphate (PRP), derived from Hib. Reliable quantitative methods are necessary for all stages of production; otherwise accurate process monitoring and validation is not possible. Prior to the availability of high performance anion exchange chromatography methods, this polysaccharide was predominantly quantified either with immunochemical methods, or with the colorimetric orcinol method, which shows interference from fermentation medium components and reagents used during purification. In addition to an improved high performance anion exchange chromatography-pulsed amperometric detection (HPAEC-PAD) method, using a modified gradient elution, both the orcinol assay and high performance size exclusion chromatography (HPSEC) analyses were evaluated. For DSP samples, it was found that the correlation between the results obtained by HPAEC-PAD specific quantification of the PRP monomeric repeat unit released by alkaline hydrolysis, and those from the orcinol method was high (R(2)=0.8762), and that it was lower between HPAEC-PAD and HPSEC results. Additionally, HPSEC analysis of USP samples yielded surprisingly comparable results to those obtained by HPAEC-PAD. In the early part of the fermentation, medium components interfered with the different types of analysis, but quantitative HPSEC data could still be obtained, although lacking the specificity of the HPAEC-PAD method.
Thus, the HPAEC-PAD method has the advantage of giving a specific response compared to the orcinol assay and HPSEC, and does not show interference from various components that can be present in intermediate and purified PRP samples. PMID:25045809
Extensions of flat functors and theories of presheaf type
Caramello, Olivia
2014-01-01
We develop a general theory of extensions of flat functors along geometric morphisms of toposes, and apply it to the study of the class of theories whose classifying topos is equivalent to a presheaf topos. As a result, we obtain a characterization theorem providing necessary and sufficient semantic conditions for a theory to be of presheaf type. This theorem subsumes all the previous partial results obtained on the subject and has several corollaries which can be used in practice for testing...
Type IIB string theory, S-duality, and generalized cohomology
Kriz, Igor (Department of Mathematics, University of Michigan, Ann Arbor, MI 48109, United States; ikriz@umich.edu); Sati, Hisham (Department of Physics and Department of Pure Mathematics, University of Adelaide, Adelaide, SA 5005, Australia; hsati@maths.adelaide.edu.au)
2005-05-30
In the presence of background Neveu-Schwarz flux, the description of the Ramond-Ramond fields of type IIB string theory using twisted K-theory is not compatible with S-duality. We argue that other possible variants of twisted K-theory would still not resolve this issue. We propose instead a connection of S-duality with elliptic cohomology, and a possible T-duality relation of this to a previous proposal for IIA theory, and higher-dimensional limits. In the process, we obtain some other results which may be interesting on their own. In particular, we prove a conjecture of Witten that the 11-dimensional spin cobordism group vanishes on K(Z,6), which eliminates a potential new θ-angle in type IIB string theory.
Intensional Type Theory with Guarded Recursive Types qua Fixed Points on Universes
Birkedal, Lars; Møgelberg, R.E.
Guarded recursive functions and types are useful for giving semantics to advanced programming languages and for higher-order programming with infinite data types, such as streams, e.g., for modeling reactive systems. We propose an extension of intensional type theory with rules for forming fixed points of guarded recursive functions. Guarded recursive types can be formed simply by taking fixed points of guarded recursive functions on the universe of types. Moreover, we present a general model construction for constructing models of the intensional type theory with guarded recursive functions and types. When applied to the groupoid model of intensional type theory with the universe of small discrete groupoids, the construction gives a model of guarded recursion for which there is a one-to-one correspondence between fixed points of functions on the universe of types and fixed points of (suitable...
Introduction to type-2 fuzzy logic control theory and applications
Mendel, Jerry M; Tan, Woei-Wan; Melek, William W; Ying, Hao
2014-01-01
Written by world-class leaders in type-2 fuzzy logic control, this book offers a self-contained reference for both researchers and students. The coverage provides both background and an extensive literature survey on fuzzy logic and related type-2 fuzzy control. It also includes research questions, experiment and simulation results, and downloadable computer programs on an associated website. This key resource will prove useful to students and engineers wanting to learn type-2 fuzzy control theory and its applications.
Applying genre theory to improve exposition-type essay writing
Martínez Lirola, María; Tabuenca Cuevas, María Felicidad
2010-01-01
The study reported in this paper focuses on the use of Genre Theory in multilingual classrooms as an appropriate framework for English L2 writing. Our students' mother tongues were Spanish, Valencian, French, Flemish, Italian, German and Romanian. Genre Theory was applied to increase students' literacy skills through the study of text types and specific grammar structures that appear in these texts. As an adequate evaluation process had to be implemented, the computer programme Markin...
Gauge theory on a space with linear Lie type fuzziness
Khorrami, M; Shariati, A
2013-01-01
The U(1) gauge theory on a space with Lie-type noncommutativity is constructed. The construction is based on the group of translations in Fourier space, which, in contrast to the space itself, is commutative. In analogy with lattice gauge theory, the object playing the role of the flux of field strength per plaquette, as well as the action, are constructed. It is observed that the theory, in comparison with ordinary U(1) gauge theory, has an extra gauge field component. This phenomenon is reminiscent of similar ones in the formulation of SU(N) gauge theory on spaces with canonical noncommutativity, and also of the appearance of a gauge field component in the discrete direction of Connes' construction of the Standard Model.
Closed tachyon solitons in type II string theory
García-Etxebarria, Iñaki (Max Planck Institute for Physics, Munich, Germany); Montero, Miguel (Instituto de Física Teórica IFT-UAM/CSIC, C/Nicolás Cabrera 13-15, Universidad Autónoma de Madrid, Spain; Departamento de Física Teórica, Universidad Autónoma de Madrid, Spain); Uranga, Angel M. (Instituto de Física Teórica IFT-UAM/CSIC, C/Nicolás Cabrera 13-15, Universidad Autónoma de Madrid, Spain)
2015-09-15
Type II theories can be described as the endpoint of closed string tachyon condensation in certain orbifolds of supercritical type 0 theories. In this paper, we study solitons of this closed string tachyon and analyze the nature of the resulting defects in critical type II theories. The solitons are classified by the real K-theory groups KO of bundles associated to pairs of supercritical dimensions. For real codimension 4 and 8, corresponding to KO(S^4) = Z and KO(S^8) = Z, the defects correspond to a gravitational instanton and a fundamental string, respectively. We apply these ideas to reinterpret the worldsheet GLSM, regarded as a supercritical theory on the ambient toric space with closed tachyon condensation onto the CY hypersurface, and use it to describe charged solitons under discrete isometries. We also suggest the possible applications of supercritical strings to the physical interpretation of the matrix factorization description of F-theory on singular spaces. (copyright 2015 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim)
Type IIB theory on half-flat manifolds
In this paper we derive the low-energy effective action of type IIB theory compactified on half-flat manifolds and show that this precisely coincides with the low-energy effective action of type IIA theory compactified on a Calabi-Yau manifold in the presence of NS three-form fluxes. In this way we provide a further check of the recently formulated conjecture that half-flat manifolds appear as mirror partners of Calabi-Yau manifolds when NS fluxes are turned on
Non-critical type 0 string theories and their field theory duals
In this paper we continue the study of the non-critical type 0 string and its field theory duals. We begin by reviewing some facts and conjectures about these theories. We move on to our proposal for the type 0 effective action in any dimension, its RR fields and their Chern-Simons couplings. We then focus on the case without compact dimensions and study its field theory duals. We show that one can parameterize all dual physical quantities in terms of a finite number of unknown parameters. By making some further assumptions about the tachyon couplings, one can still make some 'model-independent' statements.
Module-based Hybrid Uncertainty Quantification for Multi-physics Applications: Theory and Software
Tong, Charles [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Chen, Xiao [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Iaccarino, Gianluca [Stanford Univ., CA (United States); Mittal, Akshay [Stanford Univ., CA (United States)
2013-10-08
In this project we proposed to develop an innovative uncertainty quantification methodology that captures the best of the two competing approaches in UQ, namely, intrusive and non-intrusive approaches. The idea is to develop the mathematics and the associated computational framework and algorithms to facilitate the use of intrusive or non-intrusive UQ methods in different modules of a multi-physics, multi-module simulation model, in a way that physics code developers for different modules are shielded (as much as possible) from the chores of accounting for the uncertainties introduced by the other modules. As a result of our research and development, we have produced a number of publications, conference presentations, and a software product.
Restoration of Lorentz Symmetry for Lifshitz Type Scalar Theory
Kikuchi, Kengo
2011-01-01
The purpose of this paper is to present our study of the restoration of Lorentz symmetry for a Lifshitz-type scalar theory in the infrared region using non-perturbative methods. We apply the Wegner-Houghton equation, one of the exact renormalization group equations, to Lifshitz theory. Analyzing the equation for a z=2, d=3+1 Lifshitz-type scalar model, and using some variable transformations, we find that the symmetry-breaking terms vanish in the infrared region. This shows that the Lifshitz scalar model dynamically restores Lorentz symmetry at low energy. Our result gives a definition of ultraviolet-complete renormalizable scalar field theories with nontrivial interaction terms \hat{\lambda}_{n}\phi^{n} (n=4, 6, 8, 10).
Rainich theory for type D aligned Einstein-Maxwell solutions
Ferrando, Joan Josep; Sáez, Juan Antonio
2007-01-01
The original Rainich theory for the non-null Einstein-Maxwell solutions consists of a set of algebraic conditions and the Rainich (differential) equation. We show here that the subclass of type D aligned solutions can be characterized just by algebraic restrictions.
Calabi-Yau compactifications of type IIB superstring theory
Böhm, R
2001-01-01
Starting from a non-self-dual action for ten-dimensional type IIB supergravity, this theory is compactified on a Calabi-Yau 3-fold and 4-fold. The compactifications are performed in the limit in which the volumes of the manifolds are large compared to the string scale.
A model of PCF in guarded type theory
Paviotti, Marco; Møgelberg, Rasmus Ejlers; Birkedal, Lars
2015-01-01
Guarded recursion is a form of recursion where recursive calls are guarded by delay modalities. Previous work has shown how guarded recursion is useful for constructing logics for reasoning about programming languages with advanced features, as well as for constructing and reasoning about elements of coinductive types. In this paper we investigate how type theory with guarded recursion can be used as a metalanguage for denotational semantics useful both for constructing models and for proving properties of these. We do this by constructing a fairly intensional model of PCF and proving it computationally adequate. The model construction is related to Escardo's metric model for PCF, but here everything is carried out entirely in type theory with guarded recursion, including the formulation of the operational semantics, the model construction and the proof of adequacy...
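The guarded-recursion idea above, that recursive occurrences sit behind a delay so that definitions stay productive, has a simple runtime analogue. The Python sketch below is an assumed illustration using explicit thunks, not the type-theoretic delay modality: an infinite stream is built as the fixed point of a definition whose self-reference is guarded by a lambda.

```python
def cons(head, tail_thunk):
    # The tail is a zero-argument function (a thunk), i.e. "guarded":
    # it is not evaluated until explicitly forced.
    return (head, tail_thunk)

def nats_from(n):
    # Guarded self-reference: the recursive call sits under a lambda,
    # so this definition is productive despite being non-terminating.
    return cons(n, lambda: nats_from(n + 1))

def take(k, stream):
    out = []
    for _ in range(k):
        head, tail = stream
        out.append(head)
        stream = tail()  # force exactly one guarded step
    return out

print(take(5, nats_from(0)))  # [0, 1, 2, 3, 4]
```

Without the lambda guard, `nats_from` would recurse forever before returning; with it, each observation forces exactly one step, mirroring how the delay modality makes fixed points of guarded functions well-defined.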
Constructive Type Theory and the Dialogical Approach to Meaning
Shahid Rahman
2013-12-01
In its origins, dialogical logic constituted one part of a new movement called the Erlangen School or Erlangen Constructivism. Its goal was to provide a new start for a general theory of language and of science. According to the Erlangen School, language is not just a fact that we discover, but a human cultural accomplishment whose construction reason can and should control. The resulting project of intentionally constructing a scientific language was called the Orthosprache project. Unfortunately, the Orthosprache project was not further developed and seemed to fade away. It is possible that one of the reasons for this fading away is that the link between dialogical logic and Orthosprache was not sufficiently developed; in particular, the new theory of meaning to be found in dialogical logic seemed to be cut off from both the project of establishing the basis for scientific language and from a general theory of meaning. We would like to contribute to clarifying one possible way in which a general dialogical theory of meaning could be linked to dialogical logic. The idea behind the proposal is to make use of constructive type theory, in which logical inferences are preceded by the description of a fully interpreted language. The latter, we think, provides the means for a new start not only for the project of Orthosprache, but also for a general dialogical theory of meaning.
A ground many-valued type theory and its extensions
Běhounek, Libor
Linz : Johannes Kepler Universität, 2014 - (Flaminio, T.; Godo, L.; Gottwald, S.; Klement, E.). s. 15-18 [Linz Seminar on Fuzzy Set Theory /35./. 18.02.2014-22.02.2014, Linz] R&D Projects: GA MŠk ED1.1.00/02.0070 Grant ostatní: GA MŠk EE2.3.30.0010 Institutional support: RVO:67985807 Keywords: type theory * many-valued logics * higher-order logic Subject RIV: BA - General Mathematics
D-Branes in Type IIA and Type IIB Theories from Tachyon Condensation
Kluson, J.
2000-01-01
In this paper we will construct all BPS and non-BPS D-branes in Type IIA and Type IIB theories from tachyon condensation. We also propose form of Wess-Zumino term for non-BPS D-brane and we will show that tachyon condensation in this term leads to standard Wess-Zumino term for BPS D-brane.
Formation of social types in the theory of Orrin Klapp
Trifunović Vesna
2007-01-01
Orrin Klapp's theory of social types draws attention to the important functions these types serve within particular societies, and suggests that they should be taken into consideration if our goal is a more complete knowledge of a society. For Klapp, social types are important social symbols which reflect, in an interesting way, the society they are part of, and for that reason the author dedicates his work to considering their meanings and social functions. He holds that we cannot understand a society without knowledge of the types with which its members identify and which serve them as models in their social activity. These types therefore have cognitive value: according to Klapp, they assist perception and "contain the truth", so knowledge of them allows easier orientation within the social system. Social types also offer insight into the scheme of the social structure, which is otherwise invisible and hidden, but certainly deserves attention if we want a clearer picture of social relations within a specific community. The aim of this work is to present this interesting and inspiring theory of Orrin Klapp, pointing out its importance but also the weaknesses which should be kept in mind during its application in further research.
Dirac theory on a space with linear Lie type fuzziness
Shariati, A.; Khorrami, M.; Fatollahi, A. H.
2012-01-01
A spinor theory on a space with linear Lie type noncommutativity among spatial coordinates is presented. The model is based on the Fourier space corresponding to spatial coordinates, as this Fourier space is commutative. When the group is compact, the real space exhibits lattice characteristics (as the eigenvalues of space operators are discrete), and the similarity of such a \emph{lattice} with ordinary lattices is manifested, among other things, in a phenomenon resembling the famous \emph{fermion doubling} problem. A projection is introduced to make the dynamical number of spinors equal to that corresponding to the ordinary space. The actions for free and interacting spinors (with Fermi-like interactions) are presented. The Feynman rules are extracted and 1-loop corrections are investigated.
D-branes in type I string theory
Frau, M.; Gallot, L.; Lerda, A.; Strigazzi, P.
2000-01-01
We review the boundary state description of D-branes in type I string theory and show that the only stable non-BPS configurations are the D-particle and the D-instanton. We also compute the gauge and gravitational interactions of the non-BPS D-particles and compare them with the interactions of the dual non-BPS states of the heterotic string, finding complete agreement.
D-branes in type I string theory
Frau, M; Lerda, A; Strigazzi, P
2001-01-01
We review the boundary state description of D-branes in type I string theory and show that the only stable non-BPS configurations are the D-particle and the D-instanton. We also compute the gauge and gravitational interactions of the non-BPS D-particles and compare them with the interactions of the dual non-BPS states of the heterotic string, finding complete agreement.
Fuzzy type theory as higher order fuzzy logic
Novák, Vilém
Bangkok : Assumption University of Bangkok, 2005, s. 21-26. ISBN 974-615-226-2. [InTech'05 /6./. Phuket (TH), 14.12.2005-16.12.2005] R&D Projects: GA ČR(CZ) GA201/04/1033 Institutional research plan: CEZ:AV0Z10750506 Keywords: fuzzy type theory * fuzzy logic * LPi-logic * Lukasiewicz logic Subject RIV: BA - General Mathematics
Multivariate Bonferroni-type inequalities theory and applications
Chen, John
2014-01-01
Multivariate Bonferroni-Type Inequalities: Theory and Applications presents a systematic account of research discoveries on multivariate Bonferroni-type inequalities published in the past decade. The emergence of new bounding approaches pushes the conventional definitions of optimal inequalities and demands new insights into linear and Fréchet optimality. The book explores these advances in bounding techniques with corresponding innovative applications. It presents the method of linear programming for multivariate bounds, multivariate hybrid bounds, sub-Markovian bounds, and bounds using Hamil
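As a concrete point of reference for the bounding techniques discussed in this abstract, the classical degree-one and degree-two Bonferroni inequalities can be sketched in a few lines. The events and probability space below are invented for illustration and are not taken from the book:

```python
from itertools import combinations

def bonferroni_bounds(events, n_outcomes):
    """Classic Bonferroni bounds on P(union of events) from the first two
    binomial moments S1 = sum P(A_i) and S2 = sum P(A_i & A_j).
    Each event is a set of equally likely outcomes 0..n_outcomes-1."""
    p = 1.0 / n_outcomes
    s1 = sum(len(a) * p for a in events)
    s2 = sum(len(a & b) * p for a, b in combinations(events, 2))
    upper = min(1.0, s1)        # degree-1 upper bound
    lower = max(0.0, s1 - s2)   # degree-2 lower bound
    exact = len(set().union(*events)) * p
    return lower, exact, upper

lo, exact, hi = bonferroni_bounds([{0, 1, 2}, {2, 3}, {3, 4, 5}], 10)
assert lo <= exact <= hi  # lower and upper bounds bracket the exact probability
```

Higher-degree multivariate refinements of the kind treated in the book tighten these elementary bounds by incorporating further joint moments.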
D-branes and KK-theory in type I string theory
We analyse unstable D-brane systems in type I string theory. Generalizing the proposal in hep-th/0108085, we give a physical interpretation for real KK-theory and claim that the D-branes embedded in a product space X×Y which are made from the unstable Dp-brane system wrapped on Y are classified by a real KK-theory group KKO^{p-1}(X,Y). The field contents of the unstable D-brane systems are systematically described by a hidden Clifford algebra structure. We also investigate the matrix theory based on non-BPS D-instantons and show that the spectrum of D-branes in the theory is exactly what we expect in type I string theory, including stable non-BPS D-branes with Z_2 charge. We explicitly construct the D-brane solutions in the framework of BSFT and analyse the physical properties making use of the Clifford algebra. (author)
Type I/heterotic duality and M-theory amplitudes
Green, Michael B
2016-01-01
This paper investigates relationships between low-energy four-particle scattering amplitudes with external gauge particles and gravitons in the E_8 × E_8 and SO(32) heterotic string theories and the type I and type IA superstring theories by considering a variety of tree-level and one-loop Feynman diagrams describing such amplitudes in eleven-dimensional supergravity in a Horava-Witten background compactified on a circle. This accounts for a number of perturbative and non-perturbative aspects of low-order higher-derivative terms in the low-energy expansion of string theory amplitudes, which are expected to be protected by half-maximal supersymmetry from receiving corrections beyond one or two loops. It also suggests the manner in which type I/heterotic duality may be realised for certain higher-derivative interactions that are not so obviously protected. For example, our considerations suggest that R^4 interactions (where R is the Riemann curvature) might receive no perturbative corrections beyond one loop ...
Irregular singularities in Liouville theory and Argyres-Douglas type gauge theories, I
Gaiotto, D. [Institute for Advanced Study (IAS), Princeton, NJ (United States); Teschner, J. [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany)
2012-03-15
Motivated by problems arising in the study of N=2 supersymmetric gauge theories, we introduce and study irregular singularities in two-dimensional conformal field theory, here Liouville theory. Irregular singularities are associated to representations of the Virasoro algebra in which a subset of the annihilation part of the algebra acts diagonally. In this paper we define natural bases for the space of conformal blocks in the presence of irregular singularities, describe how to calculate their series expansions, and how such conformal blocks can be constructed by a delicate limiting procedure from ordinary conformal blocks. This leads us to a proposal for the structure functions appearing in the decomposition of physical correlation functions with irregular singularities into conformal blocks. Taken together, we get a precise prediction for the partition functions of some Argyres-Douglas type theories on S^4. (orig.)
Hui, Kai Hwee; Ambrosi, Adriano; Sofer, Zdeněk; Pumera, Martin; Bonanni, Alessandra
2015-05-01
Graphene doped with heteroatoms can show new or improved properties as compared to the original undoped material. It has been reported that the type of heteroatoms and the doping conditions can have a strong influence on the electronic and electrochemical properties of the resulting material. Here, we wish to compare the electrochemical behavior of two n-type and two p-type doped graphenes, namely boron-doped graphenes and nitrogen-doped graphenes containing different amounts of heteroatoms. We show that the boron-doped graphene containing a higher amount of dopants provides the best electroanalytical performance in terms of calibration sensitivity, selectivity and linearity of response for the detection of gallic acid, normally used as the standard probe for the quantification of antioxidant activity of food and beverages. Our findings demonstrate that the type and amount of heteroatoms used for the doping have a profound influence on the electrochemical detection of gallic acid, rather than the structural properties of the materials such as amounts of defects, oxygen functionalities and surface area. This finding has a profound influence on the application of doped graphenes in the field of analytical chemistry. Electronic supplementary information (ESI) available. See DOI: 10.1039/c5nr01045d
Enhanced gauge symmetry in type II string theory
We show how enhanced gauge symmetry in type II string theory compactified on a Calabi-Yau threefold arises from singularities in the geometry of the target space. When the target space of the type IIA string acquires a genus g curve C of A_{N-1} singularities, we find that an SU(N) gauge theory with g adjoint hypermultiplets appears at the singularity. The new massless states correspond to solitons wrapped about the collapsing cycles, and their dynamics is described by a twisted supersymmetric gauge theory on C × R^4. We reproduce this result from an analysis of the S-dual D-manifold. We check that the predictions made by this model about the nature of the Higgs branch, the monodromy of period integrals, and the asymptotics of the one-loop topological amplitude are in agreement with geometrical computations. In one of our examples we find that the singularity occurs at strong coupling in the heterotic dual proposed by Kachru and Vafa. (orig.)
Enhanced gauge symmetry in type II string theory
Katz, S; Plesser, M R; Katz, Sheldon; Morrison, David R; Plesser, M Ronen
1996-01-01
We show how enhanced gauge symmetry in type II string theory compactified on a Calabi--Yau threefold arises from singularities in the geometry of the target space. When the target space of the type IIA string acquires a genus g curve C of A_{N-1} singularities, we find that an SU(N) gauge theory with g adjoint hypermultiplets appears at the singularity. The new massless states correspond to solitons wrapped about the collapsing cycles, and their dynamics is described by a twisted supersymmetric gauge theory on C\\times \\R^4. We reproduce this result from an analysis of the S-dual D-manifold. We check that the predictions made by this model about the nature of the Higgs branch, the monodromy of period integrals, and the asymptotics of the one-loop topological amplitude are in agreement with geometrical computations. In one of our examples we find that the singularity occurs at strong coupling in the heterotic dual proposed by Kachru and Vafa.
Uncertainty quantification for proton-proton fusion in chiral effective field theory
Acharya, B; Ekström, A; Forssén, C; Platter, L
2016-01-01
We compute the $S$-factor of the proton-proton ($pp$) fusion reaction using chiral effective field theory ($\\chi$EFT) up to next-to-next-to-leading order (NNLO) and perform a rigorous uncertainty analysis of the results. We quantify the uncertainties due to (i) the computational method used to compute the $pp$ cross section in momentum space, (ii) the statistical uncertainties in the low-energy coupling constants of $\\chi$EFT, (iii) the systematic uncertainty due to the $\\chi$EFT cutoff, and (iv) systematic variations in the database used to calibrate the nucleon-nucleon interaction. We also examine the robustness of the polynomial extrapolation procedure, which is commonly used to extract the threshold $S$-factor and its energy-derivatives. By performing a statistical analysis of the polynomial fit of the energy-dependent $S$-factor at several different energy intervals, we eliminate a systematic uncertainty that can arise from the choice of the fit interval in our calculations. In addition, we explore the s...
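The fit-interval robustness check described above can be illustrated with a toy calculation: extract the threshold value S(0) from polynomial fits over several energy intervals and compare the results. The S-factor polynomial and energy grid below are synthetic stand-ins, not the χEFT results:

```python
import numpy as np

# Synthetic energy-dependent "S-factor" (arbitrary units) standing in for
# the chi-EFT output; by construction the true threshold value is S(0) = 4.0.
def s_factor(E):
    return 4.0 + 11.0 * E + 3.0 * E**2

energies = np.linspace(0.005, 0.10, 40)   # hypothetical energy grid (MeV)
data = s_factor(energies)

# Extract the threshold S-factor from quadratic fits over several intervals,
# mimicking the fit-interval sensitivity study described in the abstract.
thresholds = []
for e_max in (0.03, 0.05, 0.10):
    mask = energies <= e_max
    coeffs = np.polyfit(energies[mask], data[mask], deg=2)
    thresholds.append(np.polyval(coeffs, 0.0))   # extrapolated S(E=0)

spread = max(thresholds) - min(thresholds)
assert spread < 1e-8   # exact polynomial data: no interval dependence
```

With real, noisy cross-section data the spread across intervals would be nonzero; quantifying it is precisely the systematic uncertainty the authors eliminate by their statistical analysis of the fit.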
D-brane Instantons in Type II String Theory
Blumenhagen, Ralph; /Munich, Max Planck Inst.; Cvetic, Mirjam; /Pennsylvania U.; Kachru, Shamit; /Stanford U., Phys. Dept. /SLAC; Weigand, Timo; /SLAC
2009-06-19
We review recent progress in determining the effects of D-brane instantons in N=1 supersymmetric compactifications of Type II string theory to four dimensions. We describe the abstract D-brane instanton calculus for holomorphic couplings such as the superpotential, the gauge kinetic function and higher fermionic F-terms. This includes a discussion of multi-instanton effects and the implications of background fluxes for the instanton sector. Our presentation also highlights, but is not restricted to, the computation of D-brane instanton effects in quiver gauge theories on D-branes at singularities. We then summarize the concrete consequences of stringy D-brane instantons for the construction of semi-realistic models of particle physics or SUSY-breaking in compact and non-compact geometries.
Quantum Bianchi Type IX Cosmology in K-Essence Theory
Espinoza-García, Abraham; Socorro, J.; Pimentel, Luis O.
2014-09-01
We use one of the simplest forms of the K-essence theory and apply it to the anisotropic Bianchi type IX cosmological model, with a barotropic perfect fluid modeling the usual matter content. We show that the most important contribution of the scalar field occurs during a stiff matter phase. Also, we present a canonical quantization procedure of the theory which can be simplified by reinterpreting the scalar field as an exotic part of the total matter content. The solutions to the Wheeler-DeWitt equation were found using the Bohmian formulation of quantum mechanics, Bohm (Phys. Rev. 85(2):166, 1952), employing the amplitude-real-phase approach of Moncrief and Ryan (Phys. Rev. D 44:2375, 1991), where the ansatz for the wave function is of the form Ψ(ℓ_μ) = χ(φ) W(ℓ_μ), where S is the superpotential function, which plays an important role in solving the Hamilton-Jacobi equation.
A type reduction theory for systems with replicated components
Mazur, Tomasz
2012-01-01
The Parameterised Model Checking Problem asks whether an implementation $Impl(t)$ satisfies a specification $\\Spec(t)$ for all instantiations of parameter $t$. In general, $t$ can determine numerous entities: the number of processes used in a network, the type of data, the capacities of buffers, etc. The main theme of this paper is automation of uniform verification of a subclass of PMCP with the parameter of the first kind, i.e. the number of processes in the network. We use CSP as our formalism. We present a type reduction theory, which, for a given verification problem, establishes a function $\\phi$ that maps all (sufficiently large) instantiations $T$ of the parameter to some fixed type $\\shiftedhat{T}$ and allows us to deduce that if $\\Spec(\\shiftedhat{T})$ is refined by $\\phi(Impl(T))$, then (subject to certain assumptions) $\\Spec(T)$ is refined by $Impl(T)$. The theory can be used in practice by combining it with a suitable abstraction method that produces a $t$-independent process $Abstr$ that is refi...
Type II Superstring Field Theory: Geometric Approach and Operadic Description
Jurco, Branislav
2013-01-01
We outline the construction of type II superstring field theory leading to a geometric and algebraic BV master equation, analogous to Zwiebach's construction for the bosonic string. The construction uses the small Hilbert space. Elementary vertices of the non-polynomial action are described with the help of a properly formulated minimal area problem. They give rise to an infinite tower of superstring field products defining a $\\mathcal{N}=1$ generalization of a loop homotopy Lie algebra, the genus zero part generalizing a homotopy Lie algebra. Finally, we give an operadic interpretation of the construction.
Type II superstring field theory: geometric approach and operadic description
Jurčo, Branislav; Münster, Korbinian
2013-04-01
We outline the construction of type II superstring field theory leading to a geometric and algebraic BV master equation, analogous to Zwiebach's construction for the bosonic string. The construction uses the small Hilbert space. Elementary vertices of the non-polynomial action are described with the help of a properly formulated minimal area problem. They give rise to an infinite tower of superstring field products defining a {N} = 1 generalization of a loop homotopy Lie algebra, the genus zero part generalizing a homotopy Lie algebra. Finally, we give an operadic interpretation of the construction.
Church-style type theories over finitary weakly implicative logics
Běhounek, Libor
Vienna : Vienna University of Technology, 2014 - (Baaz, M.; Ciabattoni, A.; Hetzl, S.). s. 131-133 [LATD 2014. Logic, Algebra and Truth Degrees. 16.07.2014-19.07.2014, Vienna] R&D Projects: GA MŠk ED1.1.00/02.0070 Grant ostatní: GA MŠk EE2.3.30.0010 Institutional support: RVO:67985807 Keywords: type theory * higher-order logic * weakly implicative logics Subject RIV: BA - General Mathematics
Nucleation of vacuum bubbles in Brans-Dicke type theory
Kim, Hongsu; Lee, Bum-Hoon (Center for Quantum Spacetime, Sogang University, Seoul, 121-742, Republic of Korea); Lee, WonWoo; Lee, Young Jae; Yeom, Dong-han (Leung Center for Cosmology and Particle Astrophysics, National Taiwan University, Taipei 10617, Taiwan)
2010-01-01
In this paper, we explore the nucleation of vacuum bubbles in the Brans-Dicke type theory of gravity. In the Euclidean signature, we evaluate the fields at the vacuum bubbles as solutions of the Euler-Lagrange equations of motion, as well as the bubble nucleation probabilities, by integrating the Euclidean action. We illustrate three possible ways to obtain vacuum bubbles: true vacuum bubbles for \omega > -3/2, false vacuum bubbles for \omega < -3/2, and false vacuum bubbles for \omega > -3/2 when the vacuum energy of the false vacuum in the ...
String scattering from D-branes in type 0 theories
We derive fully covariant expressions for all two-point scattering amplitudes involving a closed string tachyon and massless strings from the Dirichlet brane in type 0 theories. The amplitude for two massless D-brane fluctuations to produce a closed string tachyon is also evaluated. We then examine in detail these string scattering amplitudes in order to extract world-volume couplings of the tachyon with itself and with massless fields on a D-brane. We find that the tachyon appears as an overall coupling function in the Born-Infeld action and conjecture the form of the function
Compactifications of type IIB string theory and F-theory models using toric geometry
In this work we focus on the toric construction of type IIB and F-theory models. After introducing the main concepts of type IIB orientifold and F-theory compactifications as well as their connection via the Sen limit, we provide the toric tools to explicitly construct and describe the manifolds involved in our setups. On the type IIB side, we study the 'Large Volume Scenario' on four-modulus, 'Swiss cheese' Calabi-Yau manifolds obtained from four-dimensional simplicial lattice polytopes. We thoroughly analyze the possibility of generating neutral, non-perturbative superpotentials from Euclidean D3-branes in the presence of chirally intersecting D7-branes. We find that taking proper account of the Freed-Witten anomaly on non-spin cycles and the Kaehler cone conditions imposes severe constraints on the models. Nevertheless, we are able to create setups where the constraints are solved, and up to three moduli are stabilized. In the case of F-theory compactifications, we make use of toric geometry to construct a class of grand unified theory (GUT) models in F-theory. The base manifolds are hypersurfaces of the four-dimensional projective space with toric point and curve blowups. The associated Calabi-Yau fourfolds are complete intersections of two hypersurfaces in the P[231] fibered toric sixfolds. We construct SO(10) GUT models on suitable divisors of the base manifolds using the spectral cover construction. By means of abelian fluxes we break the SO(10) gauge group to SU(5)xU(1), which is interpreted as a flipped SU(5) model. With the GUT Higgses in this model it is possible to further break the gauge symmetry to the Standard Model. We present several phenomenologically attractive examples in detail. (author)
A Co-Operative Phenomena Type Local Realistic Theory
Buonomano, V
1999-01-01
We analyze a conceivable type of local realistic theory, which we call a co-operative phenomena type local realistic theory. In an experimental apparatus to measure second- or fourth-order interference effects, it imagines that there exists a stable global pattern or mode in a hypothesized medium that is at least the size of the coherence volume of all the involved beams. If you change the position of a mirror, beam splitter, polarizer, or the state preparation, or block a beam, then a new and different stable global state is entered very quickly. In an interferometer a photon passes only one arm of the apparatus but knows if the other arm is open or closed, since the global pattern through which it travels contains this information and guides it appropriately. In a polarization correlation experiment, two distant polarizers are part of the same global pattern or state, which is very rapidly determined by the whole apparatus. It is experimentally testable. The situation in relation to special relativity is...
Balint, Adam; Tenk, Miklós; Deim, Zoltán; Rasmussen, Thomas Bruun; Uttenthal, Åse; Csagola, Attila; Tuboly, Tamás; Farsang, Attila; Berg, Mikael; Belak, Sandor
2009-01-01
A real-time PCR assay, based on Primer-Probe Energy Transfer (PriProET), was developed to improve the detection and quantification of porcine circovirus type 2 (PCV2). PCV2 is recognised as the essential infectious agent in post-weaning multisystemic wasting syndrome (PMWS) and has been associated...
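The quantification step of a real-time PCR assay typically rests on a standard curve relating threshold cycle (Ct) to template copy number. A minimal sketch with invented dilution data, not the PriProET assay's actual values:

```python
import numpy as np

# Hypothetical standard curve: Ct values measured for 10-fold dilutions
# of a template of known copy number (~3.4 cycles per decade expected).
log10_copies = np.array([7.0, 6.0, 5.0, 4.0, 3.0])
ct = np.array([15.1, 18.5, 21.8, 25.2, 28.6])

# Linear fit of Ct against log10(copies); slope near -3.32 means ~100% efficiency
slope, intercept = np.polyfit(log10_copies, ct, deg=1)
efficiency = 10.0 ** (-1.0 / slope) - 1.0   # ideal PCR amplification: ~1.0

def copies_from_ct(ct_sample):
    """Interpolate an unknown sample's copy number from the standard curve."""
    return 10.0 ** ((ct_sample - intercept) / slope)

estimate = copies_from_ct(23.5)   # copy number for a hypothetical sample
```

The same inversion underlies absolute quantification in any probe-based real-time PCR chemistry; only the fluorescence readout differs between assay formats.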
Khademi, April; Hosseinzadeh, Danoush
2014-03-01
Alzheimer's disease (AD) is the most common form of dementia in the elderly, characterized by extracellular deposition of amyloid plaques (AP). Using animal models, AP loads have been manually measured from histological specimens to understand disease etiology as well as response to treatment. Due to the manual nature of these approaches, obtaining the AP load is laborious, subjective and error prone. Automated algorithms can be designed to alleviate these challenges by objectively segmenting AP. In this paper, we focus on the development of a novel algorithm for AP segmentation based on robust preprocessing and a Type II fuzzy system. Type II fuzzy systems offer important advantages over traditional Type I fuzzy systems, since ambiguity in the membership function may be modeled and exploited to generate excellent segmentation results. The ambiguity in the membership function is defined as an adaptively changing parameter that is tuned based on the local contrast characteristics of the image. Using transgenic mouse brains with AP ground truth, validation studies were carried out showing a high degree of overlap and a low degree of oversegmentation (0.8233 and 0.0917, respectively). The results highlight that such a framework is able to handle plaques of various types (diffuse, punctate), plaques with varying Aβ concentrations, as well as intensity variation caused by treatment effects or staining variability.
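A toy interval type-2 fuzzy thresholding sketch conveys the core idea of modeling membership ambiguity: each pixel gets a lower and an upper membership, and the gap between them widens where local contrast is low. This is an illustrative simplification, not the authors' algorithm:

```python
import numpy as np

def type2_fuzzy_segment(img, center=0.5, base_width=0.1):
    """Toy interval type-2 fuzzy thresholding (hypothetical scheme).
    The footprint of uncertainty between the lower and upper membership
    widens where local contrast is low, modelling membership ambiguity."""
    img = img.astype(float)
    # crude local-contrast proxy: absolute deviation from the global mean
    contrast = np.abs(img - img.mean())
    width = base_width + base_width * (1.0 - contrast / (contrast.max() + 1e-12))

    def sigmoid(x, c, w):
        return 1.0 / (1.0 + np.exp(-(x - c) / w))

    upper = sigmoid(img, center - 0.05, width)   # optimistic membership
    lower = sigmoid(img, center + 0.05, width)   # pessimistic membership
    # simple type reduction: average the interval bounds, defuzzify at 0.5
    return (0.5 * (upper + lower)) > 0.5

# Hypothetical 3x3 "image" with bright plaque-like pixels above 0.5
img = np.array([[0.10, 0.20, 0.90],
                [0.80, 0.15, 0.95],
                [0.05, 0.85, 0.90]])
mask = type2_fuzzy_segment(img)
```

A real pipeline would compute contrast in a local window and tune the uncertainty band from training data; the interval structure shown here is what distinguishes Type II from Type I systems.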
Dirac theory on a space with linear Lie type fuzziness
Shariati, A; Fatollahi, A H; 10.1142/S0217751X12501059
2012-01-01
A spinor theory on a space with linear Lie type noncommutativity among spatial coordinates is presented. The model is based on the Fourier space corresponding to spatial coordinates, as this Fourier space is commutative. When the group is compact, the real space exhibits lattice characteristics (as the eigenvalues of space operators are discrete), and the similarity of such a \\emph{lattice} with ordinary lattices is manifested, among other things, in a phenomenon resembling the famous \\emph{fermion doubling} problem. A projection is introduced to make the dynamical number of spinors equal to that corresponding to the ordinary space. The actions for free and interacting spinors (with Fermi-like interactions) are presented. The Feynman rules are extracted and 1-loop corrections are investigated.
String Scattering from D-branes in Type 0 Theories
Garousi, M R
1999-01-01
We derive fully covariant expressions for all two-point scattering amplitudes involving closed string tachyon and massless strings from Dirichlet brane in type 0 theories. The amplitude for two massless D-brane fluctuations to produce closed string tachyon is also evaluated. We then examine in detail these string scattering amplitudes in order to extract world-volume couplings of the tachyon with itself and with massless fields on a D-brane. We find that the tachyon appears as an overall coupling function in the Born-Infeld action. For D3-brane, the coupling function is the same as the tachyon coupling function to the Ramond-Ramond field in the bulk space. Hence, the effective Yang-Mills coupling is slightly different from the one suggested in hep-th/9812089.
Maupetit-Méhouas, Stéphanie; Mariot, Virginie; Reynes, Christelle; Bertrand, Gyulène; Feuillet, François; Carel, Jean-Claude; Simon, Dominique; Bihan, Hélène; Gajdos, Vincent; Devouge, Eve; Shenoy, Savitha; Agbo-Kpati, Placide; Ronan, Anne; Naud, Catherine; Lienhardt-Roussie, Anne
2010-01-01
BACKGROUND: Pseudohypoparathyroidism type Ib (PHP-Ib) is due to epigenetic changes at the imprinted GNAS locus, including loss of methylation at the A/B differentially methylated region (DMR) and sometimes at the XL and AS DMRs, and gain of methylation at the NESP DMR. OBJECTIVE: To investigate whether quantitative measurement of the methylation at the GNAS DMRs identifies subtypes of PHP-Ib. DESIGN AND METHODS: In 19 patients with PHP-Ib and 7 controls, methylation was ...
Herbst-Kralovetz, Melissa M.; Pyles, Richard B.
2006-01-01
Alternative strategies for controlling the growing herpes simplex virus type 2 (HSV-2) epidemic are needed. A novel class of immunomodulatory microbicides has shown promise as antiherpetics, including intravaginally applied CpG-containing oligodeoxynucleotides that stimulate toll-like receptor 9 (TLR9). In the current study, we quantified protection against experimental genital HSV-2 infection provided by an alternative nucleic acid-based TLR agonist, polyinosine-poly(C) (PIC) (TLR3 agonist)....
What is the Nature of a Post-Materialist Paradigm? Three Types of Theories.
Schwartz, Gary E
2016-01-01
What does it mean to have a post-materialist theory? I propose that there are three classes or categories of theories. (1) Type I post-materialist theories: neo-physical theories that are derived from materialist theories, where the materialist theories are still seen as primary and are viewed as being fundamentally necessary to create "non-material" (yet physical) phenomena such as consciousness. (2) Type II post-materialist theories: post-materialist theories of consciousness existing alongside materialist theories, where each class of theories is seen as primary and is viewed as not being derivable from (i.e. not reducible to) the other. And (3) Type III post-materialist theories: where materialist theories are derived from, and are a subset of, more inclusive post-materialist theories of consciousness; here post-materialist theories are seen as primary and are viewed as the ultimate origin of material systems. Type I theories are the least controversial; Type III are the most controversial. The three types of theories are considered in the context of the history of the emergence of post-materialist science. PMID:26898794
Quantification of Aerosol Type, and Sources of Aerosols Over the Indo- Gangetic Plain
Kedia, Sumita; Ramachandran, S.; Holben, Brent N.; Tripathi, S. N.
2014-01-01
Differences and similarities in aerosol characteristics over two environmentally distinct locations in the Indo-Gangetic Plain (IGP), Kanpur (KPR, an urban location) and Gandhi College (GC, a rural site), are examined for the first time. Aerosol optical depths (AODs) exhibit pronounced seasonal variability, with higher values during winter and premonsoon. Aerosol fine mode fraction (FMF) and Ångström exponent (α) are higher over GC than KPR, indicating relatively higher fine-mode aerosol concentration over GC. The higher FMF over GC is attributed to local biomass burning activities. Analysis of AOD spectra revealed that the aerosol size distribution is dominated by a wide range of fine mode fractions or a mixture of modes during winter and postmonsoon, while during premonsoon and monsoon coarse-mode aerosols are more abundant. Single scattering albedo (SSA) is lower over GC than KPR. The SSA spectra reveal an abundance of fine-mode absorbing (coarse-mode scattering) aerosols during winter and postmonsoon (premonsoon and monsoon). Spectral SSA features reveal that the OC contribution to enhanced absorption is negligible. Analysis shows that absorbing aerosols can be classified as Mostly Black Carbon (BC), and Mixed BC and Dust, over the IGP. Mixed BC and Dust is always higher over KPR, while Mostly BC is higher over GC throughout the year. The amount of long-range transported dust exhibits a gradient between KPR (higher) and GC (lower). Results on seasonally varying aerosol types, absorbing aerosol types and their gradients over an aerosol hotspot are important for tuning models and reducing the uncertainty in the radiative and climate impact of aerosols.
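The Ångström exponent used above to diagnose fine-mode dominance follows from AODs at two wavelengths via the power law τ(λ) ∝ λ^(−α). A minimal sketch with hypothetical AOD values, not the actual KPR/GC measurements:

```python
import numpy as np

def angstrom_exponent(aod_1, lam_1, aod_2, lam_2):
    """Angstrom exponent alpha from AOD at two wavelengths (in microns):
    tau(lam) ~ lam**(-alpha), so alpha = -ln(tau1/tau2) / ln(lam1/lam2).
    Larger alpha indicates a relatively larger fine-mode fraction."""
    return -np.log(aod_1 / aod_2) / np.log(lam_1 / lam_2)

# Hypothetical AODs at 440 nm and 870 nm for two illustrative sites
alpha_rural = angstrom_exponent(0.60, 0.440, 0.25, 0.870)  # fine-mode dominated
alpha_urban = angstrom_exponent(0.45, 0.440, 0.28, 0.870)
assert alpha_rural > alpha_urban  # consistent with a higher FMF at the rural site
```

In practice α is usually obtained from a least-squares fit of ln(AOD) against ln(λ) over several channels rather than from a single wavelength pair.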
Several analytical techniques that are currently available can be used to determine the spatial distribution and amount of austenite, ferrite and precipitate phases in steels. The application of magnetic force microscopy, in particular, to study the local microstructure of stainless steels is beneficial due to the selectivity of this technique for the detection of ferromagnetic phases. In the comparison of Magnetic Force Microscopy and Electron Back-Scatter Diffraction for the morphological mapping and quantification of ferrite, the degree of sub-surface measurement has been found to be critical. Through the use of surface shielding, it has been possible to show that Magnetic Force Microscopy has a measurement depth of 105–140 nm. A comparison of the two techniques together with the depth of measurement capabilities are discussed.
Highlights:
• MFM used to map distribution and quantify ferrite in type 321 stainless steels.
• MFM results compared with EBSD for same region, showing good spatial correlation.
• MFM gives higher area fraction of ferrite than EBSD due to sub-surface measurement.
• From controlled experiments MFM depth sensitivity measured from 105 to 140 nm.
• A correction factor to calculate area fraction from MFM data is estimated.
In vivo quantification of brain injury in adult Niemann-Pick Disease Type C.
Zaaraoui, Wafaa; Crespy, Lydie; Rico, Audrey; Faivre, Anthony; Soulier, Elisabeth; Confort-Gouny, Sylviane; Cozzone, Patrick J; Pelletier, Jean; Ranjeva, Jean-Philippe; Kaphan, Elsa; Audoin, Bertrand
2011-06-01
Development of surrogate markers is necessary to assess the potential efficacy of new therapeutics in Niemann-Pick Disease Type C (NP-C). In the present study, magnetization transfer ratio (MTR) imaging, a quantitative MR imaging technique sensitive to subtle brain microstructural changes, was applied in two patients suffering from adult NP-C. Statistical mapping analysis was performed to compare each patient's MTR maps with those of a group of 34 healthy controls in order to quantify and localize the extent of brain injury in each patient. Using this method, pathological changes were evidenced in the cerebellum, the thalami and the lenticular nuclei in both patients, and also in the fronto-temporal cortices in the patient with the worse functional deficit. In addition, white matter changes were located in the midbrain, the cerebellum and the fronto-temporal lobes in the patient with the higher level of disability, and in only one limited periventricular white matter region in the other patient. A 6-month follow-up was performed in the patient with the lower functional deficit and evidenced significant extension of grey matter (GM) and white matter (WM) injuries over the follow-up period (a 14% increase in injury for GM and 53% for WM). This study demonstrates that significant brain injury related to clinical deficit can be assessed in vivo in adult NP-C using MTR imaging. Although preliminary, these findings suggest that MTR imaging may be a relevant candidate for the development of biomarkers in NP-C. PMID:21397539
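The statistical mapping step described above (comparing a single patient's MTR map voxel-wise against a control group) can be sketched as a simple z-score map. This is a generic illustration with synthetic toy data, not the authors' pipeline:

```python
import numpy as np

def mtr_zscore_map(patient, controls, z_thresh=3.0):
    """Voxel-wise z-score of a patient MTR map against stacked control maps.

    controls: array of shape (n_subjects, ...) of control MTR maps.
    Returns (zmap, abnormal_mask); the mask flags voxels with |z| >= z_thresh.
    """
    mu = controls.mean(axis=0)
    sigma = controls.std(axis=0, ddof=1)
    z = (patient - mu) / sigma
    return z, np.abs(z) >= z_thresh

rng = np.random.default_rng(0)
controls = rng.normal(40.0, 2.0, size=(34, 8, 8))  # 34 controls, toy 8x8 maps
patient = controls.mean(axis=0).copy()
patient[0, 0] -= 20.0                               # one strongly abnormal voxel
zmap, mask = mtr_zscore_map(patient, controls)
print(int(mask.sum()))  # prints 1
```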
Five Type 304L stainless steel specimens were subjected to incrementally increasing values of plastic strain. At each value of strain, the associated static stress was recorded and the specimen was subjected to positron annihilation spectroscopy (PAS) using the Doppler Broadening method. A calibration curve for the 'S' parameter as a function of stress was developed based on the five specimens. Seven different specimens (blind specimens labeled B1-B7) of 304L stainless steel were subjected to values of stress inducing plastic deformation. The values of stress ranged from 310 to 517 MPa. The seven specimens were subjected to PAS post-loading using the Doppler Broadening method, and the results were compared against the developed curve from the previous five specimens. It was found that a strong correlation exists between the 'S' parameter, stress, and strain up to a strain value of 15%, corresponding to a stress value of 500 MPa, beyond which saturation of the 'S' parameter occurs. Research Highlights: → Specimens were initially in an annealed/recrystallized condition. → Calibration results indicate positron annihilation measurements yield correlation. → Deformation produced by cold work was likely larger than the maximum strain.
Walters, Thomas W., E-mail: Thomas.Walters@inl.gov [Idaho National Laboratory, P.O. Box 1625, Idaho Falls, ID, 83415-6188 (United States); Walters, Leon C. [Argonne National Laboratory, 9700 Cass Ave., Argonne, IL, 60439 (United States); Schoen, Marco P.; Naidu, D. Subbaram [Idaho State University, 921 S. 8th Avenue, Pocatello, ID, 83201 (United States); Dickerson, Charles [Positron Systems, Inc., 1500 Alvin Ricken Dr., Pocatello, ID, 83201-2783 (United States); Perrenoud, Ben C. [Idaho National Laboratory, P.O. Box 1625, Idaho Falls, ID, 83415-6188 (United States)
2011-04-15
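The calibration-and-inversion workflow described above (fit the 'S' parameter against stress on the calibration specimens, then invert the curve for blind specimens) can be sketched as follows; the data values here are hypothetical, not the study's measurements:

```python
import numpy as np

# Hypothetical calibration data: applied stress (MPa) vs. Doppler-broadening
# 'S' parameter. Values are illustrative only.
stress = np.array([0.0, 100.0, 200.0, 300.0, 400.0, 500.0])
s_param = np.array([0.480, 0.492, 0.505, 0.517, 0.530, 0.542])

# Linear calibration S = a*stress + b; the paper reports correlation only up
# to ~500 MPa (15% strain), beyond which S saturates.
a, b = np.polyfit(stress, s_param, 1)

def stress_from_s(s):
    """Invert the calibration to estimate stress for a blind specimen."""
    return (s - b) / a

print(round(stress_from_s(0.517), 0))
```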
Type IIA flux compactifications. Vacua, effective theories and cosmological challenges
Koers, Simon
2009-07-30
In this thesis, we studied a number of type IIA SU(3)-structure compactifications with O6-planes on nilmanifolds and cosets, which are tractable enough to allow for an explicit derivation of the low energy effective theory. In particular, we calculated the mass spectrum of the light scalar modes using N = 1 supergravity techniques. For the torus and the Iwasawa solution, we have also performed an explicit Kaluza-Klein reduction, which led to the same result. For the nilmanifold examples we have found that there are always three unstabilized moduli corresponding to axions in the RR sector. On the other hand, in the coset models, except for SU(2) x SU(2), all moduli are stabilized. We discussed the Kaluza-Klein decoupling for the supersymmetric AdS vacua and found that it requires going to the nearly-Calabi-Yau limit. We searched for non-trivial de Sitter minima in the original flux potential away from the AdS vacuum. Finally, in chapter 7, we focused on a family of three coset spaces and constructed non-supersymmetric vacua on them. (orig.)
On SYM theory and all order Bulk Singularity Structures of BPS Strings in type II theory
Hatefi, Ehsan
2016-01-01
The complete form of the S-matrix elements of three supersymmetric Yang-Mills (SYM) fields, namely a transverse scalar field and two world-volume gauge fields, with a potential $C_{n-1}$ Ramond-Ramond (RR) form field has been investigated. Basically, in order to find an infinite number of $t, s, (t+s+u)$ channel bulk singularity structures of that particular mixed closed-open amplitude, we employ all the conformal field theory techniques and explore the entire set of correlation functions as well as all order $\alpha'$ contact interactions of these SYM couplings. Comparisons with the other symmetric analyses are also carried out in detail. Various couplings from the pull-back of branes, Myers terms and several generalized Bianchi identities should be taken into account to be able to reconstruct all order $\alpha'$ bulk singularities of type IIB (IIA) superstring theory. Finally, we make a comment on how to derive without any ambiguity all order $\alpha'$ contact terms of those elements of the S-matrix that...
Type-Token Dichotomy in the Identity Theory of Mind
Nath, Dr. Shanjendu
2014-01-01
Identity theory of mind occupies an important place in the history of philosophy of mind. According to this theory, mental events are nothing but physical events in the brain. The theory came into existence as a reaction to behaviourism and was developed by U. T. Place, J. J. C. Smart, H. Feigl and others. But there is a debate among the proponents of the theory, namely whether it is a claim about concrete particulars (e.g., individual instances occurring in a particular subject at a particular ...
Rotational Invariance in the M(atrix) Formulation of Type IIB Theory
Sethi, S K; Sethi, Savdeep; Susskind, Leonard
1997-01-01
The matrix model formulation of M-theory can be generalized by compactification to ten-dimensional type II string theory, formulated in the infinite momentum frame. Both the type IIA and IIB string theories can be formulated in this way. In the M-theory and type IIA cases, the transverse rotational invariance is manifest, but in the IIB case, one of the transverse dimensions materializes in a completely different way from the other seven. The full O(8) rotational symmetry then follows in a surprising way from the electric-magnetic duality of supersymmetric Yang-Mills field theory.
A Nominal Theory of Objects with Dependent Types
Odersky, Martin; Cremet, Vincent; Röckl, Christine; Zenger, Matthias
2002-01-01
We design and study newObj, a calculus and dependent type system for objects and classes which can have types as members. Type members can be aliases, abstract types, or new types. The type system can model the essential concepts of Java's inner classes as well as virtual types and family polymorphism found in BETA or gbeta. It can also model most concepts of SML-style module systems, including sharing constraints and higher-order functors, but excluding applicative functors. The type system ...
Approximate Newton-type methods via theory of control
Yap, Chui Ying; Leong, Wah June
2014-12-01
In this paper, we investigate the possible use of control theory, particularly theory on optimal control to derive some numerical methods for unconstrained optimization problems. Based upon this control theory, we derive a Levenberg-Marquardt-like method that guarantees greatest descent in a particular search region. The implementation of this method in its original form requires inversion of a non-sparse matrix or equivalently solving a linear system in every iteration. Thus, an approximation of the proposed method via quasi-Newton update is constructed. Numerical results indicate that the new method is more effective and practical.
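A Levenberg-Marquardt-like iteration of the kind discussed above (a damped Newton step x_{k+1} = x_k − (H + λI)⁻¹g) can be sketched as follows. The damping-update heuristic below is the standard one, not the paper's control-theoretic rule, and the test problem is the usual Rosenbrock function:

```python
import numpy as np

def lm_minimize(f, grad, hess, x0, lam=1.0, tol=1e-8, max_iter=500):
    """Damped-Newton (Levenberg-Marquardt-like) minimization:
    step = (H + lam*I)^{-1} g, with lam increased on failed steps and
    decreased on successful ones (standard LM heuristic)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        x_new = x - np.linalg.solve(hess(x) + lam * np.eye(len(x)), g)
        if f(x_new) < f(x):
            x, lam = x_new, lam * 0.5   # accept step, relax damping
        else:
            lam *= 2.0                  # reject step, damp harder
    return x

# Rosenbrock test problem; the minimizer is (1, 1).
f = lambda v: (1 - v[0])**2 + 100*(v[1] - v[0]**2)**2
grad = lambda v: np.array([-2*(1 - v[0]) - 400*v[0]*(v[1] - v[0]**2),
                           200*(v[1] - v[0]**2)])
hess = lambda v: np.array([[2 - 400*(v[1] - 3*v[0]**2), -400*v[0]],
                           [-400*v[0], 200.0]])
x_star = lm_minimize(f, grad, hess, [-1.2, 1.0])
print(np.round(x_star, 4))
```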
Song, Fenhong; El-Demerdash, Aref; Lee, Shwn-Ji Susie H
2012-11-01
A flow injection tandem mass spectrometry method (FI-MS/MS) has been developed to detect inhibitors of the enzyme phosphodiesterase type 5, including tadalafil, sildenafil, and vardenafil. Multiple reaction monitoring (MRM) was used to detect the drugs, and product ion ratios were used for identification. FI-MS/MS was used for semi-quantification, and liquid chromatography tandem mass spectrometry (LC-MS/MS) was used for further confirmation and quantification. One of 13 samples was found to be adulterated with prescription levels of tadalafil and also a low level of sildenafil. The method can be used for screening large numbers of herbal products for adulteration since it takes less than 1 min without chromatographic separation on a column. PMID:22695818
Banerjee, Sourav; Ahmed, Riaz
2013-06-01
A systematic framework for quantifying the incubation-of-damage state in composites is almost absent in current practice. Identification and quantification of the material state at its early stage has become significantly important in the field of structural health monitoring. Interactions between the intrinsic material state and ultrasonic wave signals, e.g., nonlinear ultrasonics, higher harmonic generation, etc., are quite well known and well documented in the literature for metals. However, it is extremely challenging to quantify the precursor to the damage state in composite materials. Thus, in this paper, a comparatively simple but efficient novel approach is proposed to quantify the "incubation of damage" state using scanning acoustic microscopy. The proposed approach exploits hybrid microcontinuum field theory to quantify the intrinsic (multi-scale) damage state. Defying the conventional route of bottom-up multi-scale modeling methods, a hybrid top-down approach is presented, which is then correlated with the ultrasonic signature obtained from the materials. A parameter to quantify incubation of damage at the meso-scale is identified in this paper. The intrinsic length-scale-dependent parameter, called "damage entropy," closely reflects the material state resulting from fatigue, extreme environments, operational hazards or spatio-temporal variability, etc. The proposed quantification process involves a fusion between micromorphic physics and high-frequency ultrasonics in an unconventional way. The approach is validated through an experimental study conducted on mechanically fatigued glass-fiber reinforced polymer composites. Specimens were characterized under a scanning acoustic microscope at 50 and 100 MHz. The imaging data and the sensor signals are characterized to quantify the incubation-of-damage state by the new parameter, damage entropy. PMID:25004477
Krichever-Novikov type algebras theory and applications
Schlichenmaier, Martin
2014-01-01
Krichever and Novikov introduced certain classes of infinite-dimensional Lie algebras to extend the Virasoro algebra and its related algebras to Riemann surfaces of higher genus. The author of this book generalized and extended them to a more general setting needed by the applications. Examples of applications are conformal field theory, Wess-Zumino-Novikov-Witten models, moduli space problems, integrable systems, Lax operator algebras, and deformation theory of Lie algebras. Furthermore, they constitute an important class of infinite-dimensional Lie algebras which, due to their geometric origin, are ...
Properties of vacuum and brane spectrum of Type IIB string theory
Tetiana Obikhod
2015-10-01
F-theory has been receiving more attention in the past few years because its rich structure allows one to solve many problems of the Standard Model and Grand Unification Theory. This theory is also important because of the necessity to solve the problem of vacuum stability. A simpler solution of F-theory is used to describe Type IIB string theory. K-theory is applied for the classification of D-brane charges in Type IIB superstring theory. This approach provides access to gauge fields connected with vector bundles, classified by K-theory. This technique enables the resolution of issues related to the structure of scales and hierarchies, the gauge group and the charged matter content.
Precise iteration formulae of the Maslov-type index theory for symplectic paths
In this paper, using homotopy components of symplectic matrices, and basic properties of the Maslov-type index theory, we establish precise iteration formulae of the Maslov-type index theory for any path in the symplectic group starting from the identity. (author)
Initial layer theory and model equations of Volterra type
It is demonstrated here that there exist initial layers to singularly perturbed Volterra equations whose thicknesses are not of order of magnitude O(ε), ε → 0. It is also shown that the initial layer theory is extremely useful because it allows one to construct an approximate solution to an equation which is almost identical to the exact solution. (author)
Toyota, Akie; Akiyama, Hiroshi; Sugimura, Mitsunori; Watanabe, Takahiro; Kikuchi, Hiroyuki; Kanamori, Hisayuki; Hino, Akihiro; Esaka, Muneharu; Maitani, Tamio
2006-04-01
Because the labeling of grains and feed- and foodstuffs is mandatory in many countries if the genetically modified organism (GMO) content exceeds a certain level of approved genetically modified varieties, there is a need for a rapid and useful method of GMO quantification in food samples. In this study, a rapid detection system was developed for Roundup Ready Soybean (RRS) quantification using a combination of a capillary-type real-time PCR system, the LightCycler real-time PCR system, and plasmid DNA as the reference standard. In addition, we showed for the first time that plasmid and genomic DNA behave similarly in the established detection system, because the PCR efficiencies using plasmid DNA and using genomic DNA were not significantly different. The conversion factor (Cf) used to calculate the RRS content (%) was further determined from the average value analyzed in three laboratories. The accuracy and reproducibility of this system for RRS quantification at a level of 5.0% were within a range of 4.46 to 5.07% for RRS content and within a range of 2.0% to 7.0% for the relative standard deviation (RSD), respectively. This system allows rapid monitoring for labeling purposes with acceptable accuracy and precision. PMID:16636447
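The quantification step with a conversion factor, as described above, amounts to a simple copy-number ratio calculation. A minimal sketch; the Cf value and copy numbers below are hypothetical, not the paper's:

```python
def rrs_content_percent(event_copies, taxon_copies, cf):
    """GMO content (%) from real-time PCR copy numbers.

    event_copies: copies measured with the event-specific (RRS) assay
    taxon_copies: copies measured with the soybean taxon-specific assay
    cf: conversion factor, the event/taxon copy ratio of 100% RRS material
    """
    return (event_copies / taxon_copies) / cf * 100.0

# Hypothetical run; the Cf value is illustrative, not the determined one.
print(round(rrs_content_percent(9500.0, 200000.0, 0.95), 2))  # prints 5.0
```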
New Type IIB Backgrounds and Aspects of Their Field Theory Duals
Caceres, Elena; Nunez, Carlos
2014-01-01
In this paper we study aspects of geometries in Type IIA and Type IIB string theory and elaborate on their field theory dual pairs. The backgrounds are associated with reductions to Type IIA of solutions with $G_2$ holonomy in eleven dimensions. We classify these backgrounds according to their G-structure, perform a non-Abelian T-duality on them and find new Type IIB configurations presenting dynamical $SU(2)$-structure. We study some aspects of the associated field theories defined by these new backgrounds. Various technical details are clearly spelled out.
The Clairaut-type formalism for degenerate Lagrangian theories
Duplij, Steven
2010-01-01
A self-consistent description of degenerate Lagrangian theories is made by introducing a Clairaut-like version of the Hamiltonian formalism. A generalization of the Legendre transform to the case when the Hessian is zero is done using the mixed (envelope/general) solutions of the multidimensional Clairaut equation. The corresponding system of equations of motion is equivalent to the Lagrange equations and has a subsystem for "unresolved" velocities. Then it is presented in Hamiltonian-like form by introducing a new (non-Lie) bracket. This is a "shortened" formalism, since finally it does not contain "nondynamical" (degenerate) momenta at all, and therefore there is no notion of constraint: nothing to constrain. It is shown that any classical degenerate Lagrangian theory in its Clairaut-like Hamiltonian form is equivalent to the many-time classical dynamics.
A matrix model for heterotic Spin(32)/Z2 and type I string theory
We consider heterotic string theories in the DLCQ. We derive that the matrix model of the Spin(32)/Z2 heterotic theory is the theory living on N D-strings in type I wound on a circle with no Spin(32)/Z2 Wilson line on the circle. This is an O(N) gauge theory. We rederive the matrix model for the E8xE8 heterotic string theory, explicitly taking care of the Wilson line around the lightlike circle. The result is the same theory as for Spin(32)/Z2 except that now there is a Wilson line on the circle. We also see that the integer N labeling the sector of the O(N) matrix model is not just the momentum around the lightlike circle, but a shifted momentum depending on the Wilson line. We discuss the aspect of level matching, GSO projections and why, from the point of view of matrix theory the E8xE8 theory, and not the Spin(32)/Z2, develops an 11th dimension for strong coupling. Furthermore a matrix theory for type I is derived. This is again the O(N) theory living on the D-strings of type I. For small type I coupling the system is 0+1-dimensional quantum mechanics
Quantification and Domain Restriction in Basque
Etxeberria, Urtzi
2005-01-01
The main goal of this dissertation is to contribute to the understanding of the internal structure of Basque quantification in particular and natural language quantification in general within the framework of Generalized Quantifier Theory. The proposal put forward in it demonstrates that the standard analysis of Generalized Quantifiers is correct and that it can account for quantification in natural languages, pace alternative analyses that argue for a revision. Assuming that quantification i...
Bianchi Type VI1 Viscous Fluid Cosmological Model in Wesson's Theory of Gravitation
KHADEKAR, Govardhan S.; AVACHAR, Gajanan Rambhau
2007-01-01
Field equations of a scale invariant theory of gravitation proposed by Wesson [1, 2] are obtained in the presence of viscous fluid with the aid of Bianchi type VIh space-time with the time dependent gauge function (Dirac gauge). It is found that Bianchi type VIh (h = 1) space-time with viscous fluid is feasible in this theory, whereas Bianchi type VIh (h = -1, 0) space-times are not feasible in this theory, even in the presence of viscosity. For the feasible case, by assuming a relat...
Natural dark matter from type I string theory
We study neutralino dark matter within a semi-realistic type I string model, where supersymmetry breaking arises from F-terms of moduli fields parameterised in terms of Goldstino angles, which automatically gives rise to non-universal soft third-family sfermion and gaugino masses. We study the fine-tuning sensitivities for dark matter and electroweak symmetry breaking across the parameter space of the type I string model, and compare the results to a similar analysis in the non-universal MSSM. Within the type I string model we find that neutralino dark matter can be naturally implemented in the stau bulk region, the Z0 resonance region and the maximally tempered Bino/Wino/Higgsino region, in agreement with the results of the non-universal MSSM analysis. We also find that in the type I string model the 'well-tempered' Bino/Wino region is less fine-tuned than in the MSSM, whereas the stau co-annihilation region exhibits a significantly higher degree of fine-tuning than in the MSSM
On M-theory and the symmetries of type II string effective actions
We study the 'ordinary' Scherk-Schwarz dimensional reduction of the bosonic sector of the low energy effective action of a hypothetical M-theory on S1 x S1 ≅ T2. We thus obtain the low energy effective actions of type IIA string theory in both ten and nine space-time dimensions. We point out how to obtain the O(1,1) invariance of the NS-NS sector of the string effective action correctly in nine dimensions. We dimensionally reduce the type IIB string effective action on S1 and show that the resulting nine-dimensional theory can be mapped, purely from bosonic considerations, exactly to the type IIA theory by O(1,1) or Buscher's T-duality transformations. We then give a dynamical argument, in analogy with that for the type IIB theory in ten dimensions, to show how an S-duality in the type IIA theory can be understood from the underlying nine-dimensional theory by compactifying M-theory on a T-dual torus T2. (orig.)
Eady Solitary Waves: A Theory of Type B Cyclogenesis.
Mitsudera, Humio
1994-11-01
Localized baroclinic instability in a weakly nonlinear, long-wave limit using an Eady model is studied. The resulting evolution equations have a form of the KdV type, including extra terms representing linear coupling. Baroclinic instability is triggered locally by the collision between two neutral solitary waves (one trapped at the upper boundary and the other at the lower boundary) if their incident amplitudes are sufficiently large. This characteristic is explained from the viewpoint of resonance when the relative phase speed, which depends on the amplitudes, is less than a critical value. The upper and lower disturbances grow in a coupled manner (resembling a normal-mode structure) initially, but they reverse direction slowly as the amplitudes increase, and eventually separate from each other. The motivation of this study is to investigate a type of extratropical cyclogenesis that involves a preexisting upper trough (termed Type B development) from the viewpoint of resonant solitary waves. Two cases are of particular interest. First, the author examines a case where an upper disturbance preexists over an undisturbed low-level waveguide. The solitary waves exhibit behavior similar to that conceived by Hoskins et al. for Type B development; the lower disturbance is forced one-sidedly by a preexisting upper disturbance initially, but in turn forces the latter once the former attains a sufficient amplitude, thus resulting in mutual reinforcement. Second, if a weak perturbation exists at the surface ahead of the preexisting strong upper disturbance, baroclinic instability is triggered when the two waves interact. Even though the amplitude of the lower disturbance is initially much weaker, it is intensified quickly and catches up with the amplitude of the upper disturbance, so that the coupled vertical structure eventually resembles that of an unstable normal mode. These results describe the observed behavior in Type B atmospheric cyclogenesis quite well.
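The paper's evolution equations are of KdV type with extra linear coupling terms. As a quick self-contained check of the basic KdV ingredient, one can verify numerically that the standard one-soliton solution satisfies u_t + 6uu_x + u_xxx = 0 (a common convention; the paper's nondimensionalization may differ):

```python
import numpy as np

def soliton(x, t, c=1.0):
    """One-soliton solution of the KdV equation u_t + 6 u u_x + u_xxx = 0."""
    return 0.5 * c / np.cosh(0.5 * np.sqrt(c) * (x - c * t)) ** 2

# Verify the PDE residual numerically with central differences.
x = np.linspace(-20.0, 20.0, 4001)
dx = x[1] - x[0]
dt = 1e-5
u = soliton(x, 0.0)
u_t = (soliton(x, dt) - soliton(x, -dt)) / (2 * dt)
u_x = np.gradient(u, dx)
u_xxx = np.gradient(np.gradient(u_x, dx), dx)
residual = np.max(np.abs(u_t + 6.0 * u * u_x + u_xxx))
print(residual < 1e-3)  # residual vanishes up to discretization error
```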
Applications of differential sensitivity theory for extremum-type responses
A recently developed sensitivity theory for nonlinear systems with responses defined at critical points, e.g. maxima, minima, or saddle points, of a function of the system's state variables and parameters is applied to a protected transient with scram on high power level in the Fast Flux Test Facility. The single-phase segment of the fast reactor safety code MELT-III B is used to model this transient. Two responses of practical importance, viz. the maximum fuel temperature in the hot channel and the maximum normalized reactor power level, are considered. For the purposes of sensitivity analysis, a complete characterization of such responses requires consideration of both the numerical value of the response at the maximum and the location in phase-space where the maximum occurs. This is because variations in the system parameters alter not only the value at this maximum but also the location of the maximum in phase-space.
On the theory of supernova type Ia explosion.
Liberman, M. A.
2000-03-01
A self-consistent model of white dwarf burning in supernova Ia events is presented, which includes the consequent stages of the flame, the spontaneous explosion and the detonation. The spontaneous explosion triggers the detonation, which incinerates the rest of the pre-expanded star. The expansion of the white dwarf during the flame stage of burning leads to the production of intermediate mass elements (S, Si, Ca, etc.) in agreement with the observed spectrum. Stability analysis of the thermonuclear detonation in a white dwarf shows that the detonation is unstable and self-quenching at high densities of the degenerate matter, ρ > 2.1·10⁷ g/cm³, and it becomes stable at lower densities. The detonation overcomes gravitational binding and causes mass ejection. The proposed theory provides the physical basis for the explanation of the observed spectrum of supernovae Ia.
Gödel-type metrics in Einstein-Aether theory II: nonflat background in arbitrary dimensions
Gürses, Metin; Şentürk, Çetin
2016-05-01
It was previously proved that the Gödel-type metrics with flat three-dimensional background metric solve exactly the field equations of the Einstein-Aether theory in four dimensions. We generalize this result by showing that the stationary Gödel-type metrics with nonflat background in D dimensions solve exactly the field equations of the Einstein-Aether theory. The reduced field equations are the (D-1)-dimensional Euclidean Ricci-flat and the (D-1)-dimensional source-free Maxwell equations, and the parameters of the theory are left free except c1-c3=1. We give a method to produce exact solutions of the Einstein-Aether theory from the Gödel-type metrics in D dimensions. By using this method, we present explicit exact solutions to the theory by considering the particular cases: (D-1)-dimensional Euclidean flat, conformally flat, and Tangherlini backgrounds.
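Schematically, following the abstract above (index conventions and signature are assumptions, not copied from the paper), the Gödel-type ansatz and the reduced field equations take the form:

```latex
% Godel-type ansatz: (D-1)-dimensional background h_{ij} and a "vector potential" u_i
ds^2 = -\bigl(dt + u_i\,dx^i\bigr)^2 + h_{ij}\,dx^i dx^j, \qquad i,j = 1,\dots,D-1,

% Reduced Einstein-Aether field equations (parameters free except c_1 - c_3 = 1):
\tilde{R}_{ij} = 0, \qquad \tilde{\nabla}^i f_{ij} = 0,
\qquad f_{ij} \equiv \partial_i u_j - \partial_j u_i,
```

i.e. a Euclidean Ricci-flat background together with source-free Maxwell equations for the field strength built from u_i, as stated in the abstract.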
Gödel-Type Metrics in Einstein-Aether Theory II: Nonflat Background in Arbitrary Dimensions
Gürses, Metin
2015-01-01
It was previously proved that the Gödel-type metrics with flat three-dimensional background metric solve exactly the field equations of the Einstein-Aether theory in four dimensions. We generalize this result by showing that the stationary Gödel-type metrics with nonflat background in $D$ dimensions solve exactly the field equations of the Einstein-Aether theory. The reduced field equations are the $(D-1)$-dimensional Euclidean Ricci-flat and the $(D-1)$-dimensional source-free Maxwell equations, and the parameters of the theory are left free except $c_{1}-c_{3}=1$. We give a method to produce exact solutions of the Einstein-Aether theory from the Gödel-type metrics in $D$ dimensions. By using this method, we present explicit exact solutions to the theory by considering the particular cases: ($D-1$)-dimensional Euclidean flat, conformally flat, and Tangherlini backgrounds.
Gödel-type Spacetimes in Induced Matter Gravity Theory
Carrion, H L; Teixeira, A F F
1999-01-01
Five-dimensional (5D) generalized Gödel-type manifolds are examined in the light of the equivalence problem techniques, as formulated by Cartan. The necessary and sufficient conditions for local homogeneity of these 5D manifolds are derived. The local equivalence of these homogeneous Riemannian manifolds is studied. It is found that they are characterized by three essential parameters $k$, $m^2$ and $\omega$: identical triads $(k, m^2, \omega)$ correspond to locally equivalent 5D manifolds. An irreducible set of isometrically nonequivalent 5D locally homogeneous Riemannian generalized Gödel-type metrics is exhibited. A classification of these manifolds based on the essential parameters is presented, and the Killing vector fields as well as the corresponding Lie algebra of each class are determined. It is shown that the generalized Gödel-type 5D manifolds admit a maximal group of isometry $G_r$ with $r=7$, $r=9$ or $r=15$ depending on the essential parameters $k$, $m^2$ and $\omega$. The breakdown of causa...
Lifshitz-type SU(N) lattice gauge theory in five dimensions
Kanazawa, Takuya
2015-01-01
We present a lattice formulation of non-Abelian Lifshitz-type gauge theories. Due to anisotropic scaling of space and time, the theory is asymptotically free even in five dimensions. We show results of Monte Carlo simulations that suggest a smooth approach to the continuum limit.
Quantum Field Theory Applications of Heun Type Functions
Birkandan, T
2016-01-01
After a brief introduction to Heun-type functions, we note that the actual solutions of the eigenvalue equation emerging in the calculation of the one-loop contribution to QCD from the Belavin-Polyakov-Schwarz-Tyupkin instanton, and of the similar calculation for a Dirac particle coupled to a scalar $CP^1$ model in two dimensions, can be given in terms of the confluent Heun equation in their original forms. These equations were previously modified to be solved by more elementary functions. We also show that polynomial solutions with discrete eigenvalues are impossible to find in the unmodified equations.
Theory of zeolite supralattices: Se in zeolite Linde type A
We study theoretically properties of Se clusters in zeolites, choosing zeolite Linde type A (LTA) as a prototype system. The geometries of free-space Se clusters are first determined, and we report the energetics and electronic and vibrational properties of these clusters. The work on clusters includes an investigation of the energetics of C3-C1 defect formation in Se rings and chains. The electronic properties of two Se crystalline polymorphs, trigonal Se and α-monoclinic Se, are also determined. Electronic and vibrational properties of the zeolite LTA are investigated. Next we investigate the electronic and optical properties of ring-like Se clusters inside the large α-cages of LTA. We find that Se clusters inside cages of silaceous LTA have very little interaction with the zeolite, and that the HOMO-LUMO gaps (HOMO standing for highest occupied molecular orbital and LUMO for lowest unoccupied molecular orbital) are nearly those of the isolated cluster. The HOMO-LUMO gaps of Se6, Se8, and Se12 are found to be similar, which makes it difficult to identify them experimentally by absorption spectroscopy. We find that the zeolite/Se8 nanocomposite is lower in energy than the two separated systems. We also investigate two types of infinite chain encapsulated in LTA. Finally, we carry out finite-temperature molecular dynamics simulations for an encapsulated Se12 cluster, which show cluster melting and formation of nanoscale Se droplets in the α-cages of LTA. (author)
Weyl's Predicative Classical Mathematics as a Logic-Enriched Type Theory
Adams, Robin
2008-01-01
We construct a logic-enriched type theory LTTW that corresponds closely to the predicative system of foundations presented by Hermann Weyl in Das Kontinuum. We formalise many results from that book in LTTW, including Weyl's definition of the cardinality of a set and several results from real analysis, using the proof assistant Plastic that implements the logical framework LF. This case study shows how type theory can be used to represent a non-constructive foundation for mathematics.
Denotation of syntax and metaprogramming in contextual modal type theory (CMTT)
Gabbay, Murdoch
2012-01-01
The modal logic S4 can be used via a Curry-Howard style correspondence to obtain a lambda-calculus. Modal (boxed) types are intuitively interpreted as `closed syntax of the calculus'. This lambda-calculus is called modal type theory --- this is the basic case of a more general contextual modal type theory, or CMTT. CMTT has never been given a denotational semantics in which modal types are given denotation as closed syntax. We show how this can indeed be done, with a twist. We also use the denotation to prove some properties of the system.
$\mathcal{N}=2$ supersymmetric field theories on 3-manifolds with A-type boundaries
Aprile, Francesco
2016-01-01
General half-BPS A-type boundary conditions are formulated for N=2 supersymmetric field theories on compact 3-manifolds with boundary. We observe that under suitable conditions manifolds of the real A-type admitting two complex supersymmetries (related by charge conjugation) possess, besides a contact structure, a natural integrable toric foliation. A boundary, or a general co-dimension-1 defect, can be inserted along any leaf of this preferred foliation to produce manifolds with boundary that have the topology of a solid torus. We show that supersymmetric field theories on such manifolds can be endowed with half-BPS A-type boundary conditions. We specify the natural curved space generalization of the A-type projection of bulk supersymmetries and analyze the resulting A-type boundary conditions in generic 3d non-linear sigma models and YM/CS-matter theories.
Krull dimension of types in a class of first-order theories
Zambella, Domenico
2008-01-01
We study a class of first-order theories whose complete quantifier-free types with one free variable either have a trivial positive part or are isolated by a positive quantifier-free formula--plus a few other technical requirements. The theory of vector spaces and the theory of fields are examples. We prove the amalgamation property and the existence of a model-companion. We show that the model-companion is strongly minimal. We also prove that the length of any increasing sequence of prime types...
Type-I D-branes in an H-flux and twisted KO-theory
Witten has argued that charges of type-I D-branes in the presence of an H-flux, take values in twisted KO-theory. We begin with the study of real bundle gerbes and their holonomy. We then introduce the notion of real bundle gerbe KO-theory which we establish is a geometric realization of twisted KO-theory. We examine the relation with twisted K-theory, the Chern character and provide some examples. We conclude with some open problems. (author)
An orbifold of Type 0B strings and non-supersymmetric gauge theories
We study a Z2 orbifold of Type 0B string theory by reflection of six of the coordinates (this theory may also be thought of as a Z4 orbifold of Type IIB by a rotation by π in three independent planes). We show that the only massless mode localized on the fixed fourplane R3,1 is a U(1) gauge field. After introducing D3-branes parallel to the fixed fourplane we find non-supersymmetric non-abelian gauge theories on their worldvolume. One of our results is that the theory on equal numbers of electric and magnetic D3-branes placed at the fourplane is the Z4 orbifold of N=4 supersymmetric Yang-Mills theory by the center of its R-symmetry group.
Abelian gauge symmetries and fluxed instantons in compactifications of type IIB and F-theory
Buenaventura Kerstan, Max Bromo
2013-11-13
We discuss the role of Abelian gauge symmetries in type IIB orientifold compactifications and their F-theory uplift. Particular emphasis is placed on U(1)s which become massive through the geometric Stueckelberg mechanism in type IIB. We present a proposal on how to take such geometrically massive U(1)s and the associated fluxes into account in the Kaluza-Klein reduction of F-theory with the help of non-harmonic forms. Evidence for this proposal is obtained by working out the F-theory effective action including such non-harmonic forms and matching the results with the known type IIB expressions. We furthermore discuss how world-volume fluxes on D3-brane instantons affect the instanton charge with respect to U(1) gauge symmetries and the chiral zero mode spectrum. The classical partition function of M5-instantons in F-theory is discussed and compared with the type IIB results for D3-brane instantons. The type IIB match allows us to determine the correct M5 partition function. Selection rules for the absence of chiral charged zero modes on M5-instantons in backgrounds with G4 flux are discussed and compared with the type IIB results. The dimensional reduction of the democratic formulation of M-theory is presented in the appendix.
Ingale, S. V.; Datta, D.
2010-10-01
Consequence of the accidental release of radioactivity from a nuclear power plant is assessed in terms of exposure or dose to the members of the public. Assessment of risk is routed through this dose computation. Dose computation basically depends on the basic dose assessment model and exposure pathways. One of the exposure pathways is the ingestion of contaminated food. The aim of the present paper is to compute the uncertainty associated with the risk to the members of the public due to the ingestion of contaminated food. The governing parameters of the ingestion dose assessment model being imprecise, we have approached evidence theory to compute the bound of the risk. The uncertainty is addressed by the belief and plausibility fuzzy measures.
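The belief/plausibility bounding sketched in this abstract can be illustrated with a toy Dempster-Shafer computation; the focal intervals, masses, and the 1.0 mSv limit below are illustrative assumptions, not values from the study.

```python
import math

# Toy Dempster-Shafer (evidence theory) bound on the risk that an ingestion
# dose exceeds a limit. Focal elements are dose intervals (mSv) with basic
# probability masses; all numbers are illustrative, not from the study.

def belief_plausibility(focal, event):
    """Bel sums the masses of focal intervals wholly inside the event interval;
    Pl sums the masses of focal intervals that merely intersect it."""
    lo, hi = event
    bel = sum(m for (a, b), m in focal if lo <= a and b <= hi)
    pl = sum(m for (a, b), m in focal if not (b < lo or a > hi))
    return bel, pl

focal = [((0.0, 0.5), 0.3),   # evidence: dose surely below 0.5 mSv
         ((0.3, 1.2), 0.5),   # imprecise evidence straddling the limit
         ((1.0, 2.0), 0.2)]   # evidence: dose at or above 1.0 mSv

# Event of interest: dose exceeds the 1.0 mSv limit, i.e. lies in [1.0, inf).
bel, pl = belief_plausibility(focal, (1.0, math.inf))
# The imprecise risk is then bounded by the interval [Bel, Pl] = [0.2, 0.7].
```

Belief and plausibility act as lower and upper probability bounds on the event, which is exactly the role the abstract assigns them for imprecise dose-model parameters.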
D-branes and dual gauge theories in type 0 strings
We consider the type 0 theories, obtained from the closed NSR string by a diagonal GSO projection which excludes space-time fermions, and study the D-branes in these theories. The low-energy dynamics of N coincident D-branes is governed by a U(N) gauge theory coupled to adjoint scalar fields. It is tempting to look for the type 0 string duals of such bosonic gauge theories in the background of the R-R charged p-brane classical solutions. This results in a picture analogous to the one recently proposed by Polyakov (hep-th/9809057). One of the serious problems that needs to be resolved is the closed string tachyon mode which couples to the D-branes and appears to cause an instability. We study the tachyon terms in the type 0 effective action and argue that the background R-R flux provides a positive shift of the (mass)^2 of the tachyon. Thus, for sufficiently large flux, the tachyonic instability may be cured, removing the most basic obstacle to constructing the type 0 duals of non-supersymmetric gauge theories. We further find that the tachyon acquires an expectation value in the presence of the R-R flux. This effect is crucial for breaking the conformal invariance in the dual description of the 3+1-dimensional non-supersymmetric gauge theory.
Murray, A. Brad; Gasparini, Nicole M.; Goldstein, Evan B.; van der Wegen, Mick
2016-05-01
In Earth-surface science, numerical models are used for a range of purposes, from making quantitatively accurate predictions for practical or scientific purposes ('simulation' models) to testing hypotheses about the essential causes of poorly understood phenomena ('exploratory' models). We argue in this contribution that whereas established methods for uncertainty quantification (UQ) are appropriate (and crucial) for simulation models, their application to exploratory models are less straightforward, and in some contexts not relevant. Because most models fall between the end members of simulation and exploratory models, examining the model contexts under which UQ is most and least appropriate is needed. Challenges to applying state-of-the-art UQ to Earth-surface science models center on quantifying 'model-form' uncertainty-the uncertainty in model predictions related to model imperfections. These challenges include: 1) the difficulty in deterministically comparing model predictions to observations when positive feedbacks and associated autogenic dynamics (a.k.a. 'free' morphodynamics) determine system behavior over the timescales of interest (a difficulty which could be mitigated in a UQ approach involving statistical comparisons); 2) the lack of available data sets at sufficiently large space and/or time scales; 3) the inability to disentangle uncertainties arising from model parameter values and model form in some cases; and 4) the inappropriateness of model 'validation' in the UQ sense for models toward the exploratory end member of the modeling spectrum.
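The "statistical comparison" mitigation mentioned in challenge 1 can be sketched as follows: instead of matching an autogenic time series point-by-point, compare distributions of an emergent quantity. The dune-spacing numbers and the choice of a two-sample Kolmogorov-Smirnov distance are illustrative, not from the paper.

```python
import random

def ks_distance(xs, ys):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum gap between
    the empirical CDFs of the two samples."""
    grid = sorted(set(xs) | set(ys))
    def ecdf(sample, t):
        return sum(1 for v in sample if v <= t) / len(sample)
    return max(abs(ecdf(xs, t) - ecdf(ys, t)) for t in grid)

random.seed(0)
# 'Observed' and 'modelled' dune spacings (m): drawn from the same distribution
# but as different realizations, so a deterministic point-by-point comparison
# would fail even though the model statistics are right.
observed = [random.gauss(120, 15) for _ in range(200)]
modelled = [random.gauss(120, 15) for _ in range(200)]
biased   = [random.gauss(150, 15) for _ in range(200)]

d_ok  = ks_distance(observed, modelled)   # small: distributions agree
d_bad = ks_distance(observed, biased)     # large: model form is off
```

A free-morphodynamics model judged this way can be "right" statistically while never reproducing any particular observed configuration, which is the point of the statistical comparison.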
Digital games for type 1 and type 2 diabetes: underpinning theory with three illustrative examples.
Kamel Boulos, Maged N; Gammon, Shauna; Dixon, Mavis C; MacRury, Sandra M; Fergusson, Michael J; Miranda Rodrigues, Francisco; Mourinho Baptista, Telmo; Yang, Stephen P
2015-01-01
Digital games are an important class of eHealth interventions in diabetes, made possible by the Internet and a good range of affordable mobile devices (eg, mobile phones and tablets) available to consumers these days. Gamifying disease management can help children, adolescents, and adults with diabetes to better cope with their lifelong condition. Gamification and social in-game components are used to motivate players/patients and positively change their behavior and lifestyle. In this paper, we start by presenting the main challenges facing people with diabetes-children/adolescents and adults-from a clinical perspective, followed by three short illustrative examples of mobile and desktop game apps and platforms designed by Ayogo Health, Inc. (Vancouver, BC, Canada) for type 1 diabetes (one example) and type 2 diabetes (two examples). The games target different age groups with different needs-children with type 1 diabetes versus adults with type 2 diabetes. The paper is not meant to be an exhaustive review of all digital game offerings available for people with type 1 and type 2 diabetes, but rather to serve as a taster of a few of the game genres on offer today for both types of diabetes, with a brief discussion of (1) some of the underpinning psychological mechanisms of gamified digital interventions and platforms as self-management adherence tools, and more, in diabetes, and (2) some of the hypothesized potential benefits that might be gained from their routine use by people with diabetes. More research evidence from full-scale evaluation studies is needed and expected in the near future that will quantify, qualify, and establish the evidence base concerning this gamification potential, such as what works in each age group/patient type, what does not, and under which settings and criteria. PMID:25791276
Cartan's equations define a topological field theory of the BF type
Cuesta, Vladimir; Montesinos, Merced
2007-11-01
Cartan’s first and second structure equations together with first and second Bianchi identities can be interpreted as equations of motion for the tetrad, the connection and a set of two-form fields TI and RJI. From this viewpoint, these equations define by themselves a field theory. Restricting the analysis to four-dimensional spacetimes (keeping gravity in mind), it is possible to give an action principle of the BF type from which these equations of motion are obtained. The action turns out to be equivalent to a linear combination of the Nieh-Yan, Pontrjagin, and Euler classes, and so the field theory defined by the action is topological. Once Einstein’s equations are added, the resulting theory is general relativity. Therefore, the current results show that the relationship between general relativity and topological field theories of the BF type is also present in the first-order formalism for general relativity.
Abelian gauge invariance of the WZ-type coupling in ABJM theory
Dongmin Jang
2015-09-01
We construct the interaction terms between the worldvolume fields of multiple M2-branes and the 3-form gauge field of 11-dimensional supergravity, in the context of ABJM theory. The obtained Wess-Zumino-type coupling is simultaneously invariant under the U_L(N)×U_R(N) non-Abelian gauge transformation of the ABJM theory and the Abelian gauge transformation of the 3-form field in 11-dimensional supergravity.
Hamiltonian BRST deformation of a class of n-dimensional BF-type theories
Consistent Hamiltonian interactions that can be added to an Abelian free BF-type class of theories in any n≥4 spacetime dimensions are constructed in the framework of the Hamiltonian BRST deformation based on cohomological techniques. The resulting model is an interacting field theory in higher dimensions with an open algebra of on-shell reducible first-class constraints. We argue that the Hamiltonian couplings are related to a natural Poisson-manifold structure on the target space. (author)
A COSSERAT-TYPE PLATE THEORY AND ITS APPLICATION TO CARBON NANOTUBE MICROSTRUCTURE
Abdellatif Selmi; Hedi Hassis; Issam Doghri; Hatem Zenzri
2014-01-01
The predictive capabilities of plate and shell theories greatly depend on their underlying kinematic assumptions. In this study, we develop a Cosserat-type elastic plate theory which accounts for rotations around the normal to the mid-surface plane (so-called drilling rotations). Internal loads, equilibrium equations, boundary conditions and constitutive equations are derived. The case of a single-walled carbon nanotube (SWNT) modelled as a Cosserat medium is taken here as a reference example...
Nonperturbative type IIB model building in the F-theory framework
Jurke, Benjamin Helmut Friedrich
2011-02-28
This dissertation is concerned with the topic of non-perturbative string theory, which is generally considered to be the most promising approach to a consistent description of quantum gravity. The five known 10-dimensional perturbative string theories are all interconnected by numerous dualities, such that an underlying non-perturbative 11-dimensional theory, called M-theory, is postulated. Due to several technical obstacles, little is known about the fundamental objects in this theory. There exists an alternative non-perturbative description to type IIB string theory, namely F-theory. Here the SL(2;Z) self-duality of IIB theory is geometrized in the form of an elliptic fibration over the space-time. Moreover, higher-dimensional objects like 7-branes are included via singularities into the geometric picture. This formally elegant description, however, requires significant technical effort for the construction of suitable compactification geometries, as many different aspects necessarily have to be dealt with at the same time. On the other hand, the generation of essential GUT building blocks like certain Yukawa couplings or spinor representations is easier compared to perturbative string theory. The goal of this study is therefore to formulate a unified theory within the framework of F-theory, that satisfies basic phenomenological constraints. Within this thesis, at first E3-brane instantons in type IIB string theory - 4-dimensional objects that are entirely wrapped around the invisible dimensions of space-time - are matched with M5-branes in F-theory. Such objects are of great importance in the generation of critical Yukawa couplings or the stabilization of the free parameters of a theory. Certain properties of M5-branes then allow to derive a new criterion for E3-branes to contribute to the superpotential. 
Following this analysis, several compactification geometries are constructed and checked for basic properties that are relevant for semi-realistic unified model building. An important aspect is the proper handling of the gauge flux on the 7-branes. Via the spectral cover description - which at first requires further refinements - chiral matter can be generated and the unified gauge group can be broken to the Standard Model. Ultimately, in this thesis an explicit unified model based on the gauge group SU(5) is constructed within the F-theory framework, such that an acceptable phenomenology and the observed three chiral matter generations are obtained. (orig.)
Bianchi Type VI1 Viscous Fluid Cosmological Model in Wesson's Theory of Gravitation
Khadekar, G. S.; Avachar, G. R.
2007-03-01
Field equations of a scale invariant theory of gravitation proposed by Wesson [1, 2] are obtained in the presence of viscous fluid with the aid of Bianchi type VIh space-time with the time dependent gauge function (Dirac gauge). It is found that Bianchi type VIh (h = 1) space-time with viscous fluid is feasible in this theory, whereas Bianchi type VIh (h = -1, 0) space-times are not feasible in this theory, even in the presence of viscosity. For the feasible case, by assuming a relation connecting viscosity and metric coefficient, we have obtained a nonsingular-radiating model. We have discussed some physical and kinematical properties of the models.
Krull dimension of types in a class of first-order theories
Zambella, Domenico
2011-01-01
We study a class of first-order theories whose complete quantifier-free types with one free variable either have a trivial positive part or are isolated by a positive quantifier-free formula---plus a few other technical requirements. The theory of vector spaces and the theory of fields are examples. We prove the amalgamation property and the existence of a model-companion. We show that the model-companion is strongly minimal. We also prove that the length of any increasing sequence of...
Real Separated Algebraic Curves, Quadrature Domains, Ahlfors Type Functions and Operator Theory
Yakubovich, D V
2005-01-01
The aim of this paper is to inter-relate several algebraic and analytic objects, such as real-type algebraic curves, quadrature domains, functions on them and rational matrix functions with special properties, and some objects from Operator Theory, such as vector Toeplitz operators and subnormal operators. Our tools come from operator theory, but some of our results have purely algebraic formulation. We make use of Xia's theory of subnormal operators and of the previous results by the author in this direction. We also correct (in Section 5) some inaccuracies in two papers by the author in Revista Matematica Iberoamericana (1998).
Yang, Paul; Gambino, Nicola; Kock, Joachim
2015-01-01
The two parts of the present volume contain extended conference abstracts corresponding to selected talks given by participants at the "Conference on Geometric Analysis" (thirteen abstracts) and at the "Conference on Type Theory, Homotopy Theory and Univalent Foundations" (seven abstracts), both held at the Centre de Recerca Matemàtica (CRM) in Barcelona from July 1st to 5th, 2013, and from September 23rd to 27th, 2013, respectively. Most of them are brief articles, containing preliminary presentations of new results not yet published in regular research journals. The articles are the result of a direct collaboration between active researchers in the area after working in a dynamic and productive atmosphere. The first part is about Geometric Analysis and Conformal Geometry; this modern field lies at the intersection of many branches of mathematics (Riemannian, Conformal, Complex or Algebraic Geometry, Calculus of Variations, PDEs, etc) and relates directly to the physical world, since many natural phenomena...
How to obtain a covariant Breit type equation from relativistic Constraint Theory
It is shown that, by an appropriate modification of the structure of the interaction potential, the Breit equation can be incorporated into a set of two compatible manifestly covariant wave equations, derived from the general rules of Constraint Theory. The complementary equation to the covariant Breit type equation determines the evolution law in the relative time variable. The interaction potential can be systematically calculated in perturbation theory from Feynman diagrams. The normalization condition of the Breit wave function is determined. The wave equation is reduced, for general classes of potential, to a single Pauli-Schroedinger type equation. (author)
The $D^{10}R^4$ term in type IIB string theory
The modular invariant coefficient of the $D^{2k}R^4$ term in the effective action of type IIB superstring theory is expected to satisfy a Poisson equation on the fundamental domain of SL(2,Z). Under certain assumptions, we obtain the equation satisfied by $D^{10}R^4$ using the tree-level and one-loop results for four-graviton scattering in type II string theory. This leads to the conclusion that the perturbative contributions to $D^{10}R^4$ vanish above three loops, and also predicts the coefficients at two and three loops.
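For orientation, the lower-order case whose form is known from earlier work (the $D^6R^4$ coefficient of Green and Vanhove) shows what "Poisson equation on the fundamental domain" means here; $\Delta_\tau$ is the hyperbolic Laplacian in the complexified string coupling $\tau$, and the specific $D^{10}R^4$ equation is the subject of the abstract itself:

```latex
\Delta_\tau \equiv 4\tau_2^2\,\partial_\tau\partial_{\bar\tau}\,, \qquad
\left(\Delta_\tau - 12\right)\mathcal{E}_{(0,1)}(\tau,\bar\tau)
  = -\bigl(E_{3/2}(\tau,\bar\tau)\bigr)^2\,,
```

where $E_{3/2}$ is the nonholomorphic Eisenstein series that multiplies the $R^4$ term, so the source term couples the equation to the lower-derivative coefficients.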
Cosmic string solution in a Born-Infeld type theory of gravity
Rocha, W.J. da [Universidade de Brasilia (UnB), DF (Brazil). Inst. de Fisica; Naves de Oliveira, A.L. [Universidade Federal de Vicosa (UFV), Rio Paranaiba, MG (Brazil); Guimaraes, M.E.X. [Universidade Federal Fluminense (UFF), Niteroi, RJ (Brazil). Inst. de Fisica
2009-07-01
Advances in the formal structure of string theory point to the emergence, and necessity, of a scalar-tensorial theory of gravity. It seems that, at least at high energy scales, Einstein's theory is not enough to explain gravitational phenomena. In other words, the existence of a scalar (gravitational) field acting as a mediator of the gravitational interaction, together with the usual purely rank-2 tensorial field, is indeed a natural prediction of unification models such as supergravity, superstrings and M-theory. This type of modified gravitation was first introduced in a different context in the 1960s in order to incorporate Mach's principle into relativity, but it has since acquired a different sense in cosmology and gravity theories. Although such unification theories are the most acceptable, they all exist in higher-dimensional spaces. The compactification from these higher dimensions to 4-dimensional physics is not unique, and there exist many effective theories of gravity which come from the unification process. Each of them must, of course, satisfy some predictions. Here, in this paper, we will deal with one of them, the so-called NDL theory. One important assumption in General Relativity is that all fields interact in the same way with gravity. This is the so-called Strong Equivalence Principle (SEP). It is well known, with good accuracy, that this is true for matter-to-matter interaction, i.e., the Weak Equivalence Principle (WEP) is tested. But, until now, there is no direct observational confirmation of this statement for the gravity-to-gravity interaction. An extension of the field-theoretical description of General Relativity is used to propose an alternative field theory of gravity. In this theory gravitons propagate in a different spacetime, and the velocity of propagation of gravitational waves does not coincide with the General Relativity prediction. (author)
Bianchi Type-I, V and VI_0 models in modified generalized scalar–tensor theory
T Singh; R Chaubey
2007-08-01
In modified generalized scalar–tensor (GST) theory, the cosmological term is a function of the scalar field $\phi$ and its derivative $\dot{\phi}^{2}$. We obtain exact solutions of the field equations in Bianchi Type-I, V and VI_0 space–times. The evolution of the scale factor, the scalar field and the cosmological term has been discussed. The Bianchi Type-I model has been discussed in detail. Further, Bianchi Type-V and VI_0 models can be studied along lines similar to the Bianchi Type-I model.
Pirayavaraporn, Chompak; Rades, Thomas; Gordon, Keith C; Tucker, Ian G
2013-01-01
Coalescence of polymer particles in polymer matrix tablets influences drug release. The literature has emphasized that coalescence occurs above the glass transition temperature (Tg) of the polymer and that water may plasticize (lower Tg) the polymer. However, we have shown previously that nonplasticizing water also influences coalescence of Eudragit RLPO; so there is a need to quantify the different types of water in Eudragit RLPO. The purpose of this study was to distinguish the types of water present in Eudragit RLPO polymer and to investigate the water loss kinetics for these different types of water. Several types of water could be differentiated (dipole interaction of water with quaternary ammonium groups, water cluster, and water indirectly and directly binding to the carbonyl groups of the polymer), but it was not possible to distinguish whether the different types of water were lost at different rates. It is suggested...
Segami, Shoji; Nakanishi, Yoichi; Sato, Masa H; Maeshima, Masayoshi
2010-08-01
Most plants have two types of H(+)-translocating inorganic pyrophosphatases (H(+)-PPases), I and II, which differ in primary sequence and K(+) dependence of enzyme function. Arabidopsis thaliana has three genes for H(+)-PPases: one for type I and two for type II. The type I H(+)-PPase requires K(+) for maximal enzyme activity and functions together with H(+)-ATPase in vacuolar membranes. The physiological role of the type II enzyme, which does not require K(+), is not clear. We focused on the type II enzymes (AtVHP2;1 and AtVHP2;2) of A. thaliana. Total amounts of AtVHP2s were quantified immunochemically using a specific antibody and determined to be 22 and 12 ng mg(-1) of total protein in the microsomal fractions of suspension-cultured cells and young roots, respectively, and the values are approximately 0.1 and 0.2%, respectively, of the vacuolar H(+)-PPase. In plants, AtVHP2s were detected immunochemically in all tissues except mature leaves, and were abundant in roots and flowers. The intracellular localization of AtVHP2s in suspension cells was determined by sucrose density gradient centrifugation and immunoblotting. Comparison with a number of marker proteins revealed localization in the Golgi apparatus and the trans-Golgi network. These results suggest that the type II H(+)-PPase functions as a proton pump in the Golgi and related vesicles in young tissues, although its content is very low compared with the type I enzyme. PMID:20605924
What does it take for a specific prospect theory type household to engage in risky investment?
Hlouskova, Jaroslava; Tsigaris, Panagiotis
2012-01-01
This research note examines the conditions which will induce a prospect theory type investor, whose reference level is set by playing it safe, to invest in a risky asset. The conditions indicate that this type of investor requires a large equity premium to invest in risky assets. However, once she does invest because of a large risk premium, she becomes aggressive and buys/sells until an externally imposed upper/lower bound is reached.
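The investment condition summarized above can be illustrated with the standard Tversky–Kahneman value function, v(x) = x^α for gains and −λ(−x)^α for losses, evaluated relative to the safe reference. A minimal sketch under assumed parameter values and a binary-gamble setup (both are illustrative assumptions, not the paper's specification):

```python
def prospect_value(x, alpha=0.88, lam=2.25):
    # S-shaped value function over gains/losses relative to the reference point
    return x ** alpha if x >= 0 else -lam * (-x) ** alpha

def invests(p, r_good, r_bad, r_safe):
    # invest if the prospect value of the excess return is non-negative
    pv = p * prospect_value(r_good - r_safe) + (1 - p) * prospect_value(r_bad - r_safe)
    return pv >= 0
```

With these parameters, a 50/50 gamble of +10%/−5% over the safe rate is rejected despite its positive expected premium, while +20%/−5% is accepted; loss aversion (λ > 1) is what makes the required equity premium large.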
Ding, Ming; Hvid, I
2000-01-01
Structure model type and trabecular thickness have traditionally been measured using model-based histomorphometric methods on two-dimensional (2-D) sections. However, no quantitative study has been published based on three-dimensional (3-D) methods on the age-related changes in structure model type and trabecular thickness for human peripheral (tibial) cancellous bone. In this study, 160 human proximal tibial cancellous bone specimens from 40 normal donors, aged 16 to 85 years, were collected. These specimens were micro-computed tomography (micro-CT) scanned, then the micro-CT images were segmented using optimal thresholds. From accurate 3-D data sets, structure model type and trabecular thickness were quantified by means of novel 3-D methods. Structure model type was assessed by calculating the structure model index (SMI). The SMI was quantified based on a differential analysis of the triangulated bone surface of a structure. This technique allows...
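The structure model index used in this record follows the standard definition SMI = 6·S′·V/S², where S′ is the derivative of the surface area S under an infinitesimal dilation and V is the bone volume; ideal plates, cylinders and spheres give SMI values of 0, 3 and 4. A minimal sketch of the computation from surface/volume measurements (a simplified illustration, not the authors' micro-CT pipeline):

```python
import math

def structure_model_index(surface, volume, dilated_surface, dr):
    # numerical surface derivative under a small dilation dr
    s_prime = (dilated_surface - surface) / dr
    return 6.0 * s_prime * volume / surface ** 2

# sanity check with a unit sphere: S = 4*pi, V = 4*pi/3, SMI should be 4
r, dr = 1.0, 1e-6
S = 4.0 * math.pi * r ** 2
V = 4.0 / 3.0 * math.pi * r ** 3
S_d = 4.0 * math.pi * (r + dr) ** 2
smi = structure_model_index(S, V, S_d, dr)  # ≈ 4.0
```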
Universal properties of type IIB and F-theory flux compactifications at large complex structure
Marsh, M. C. David; Sousa, Kepa
2016-03-01
We consider flux compactifications of type IIB string theory and F-theory in which the respective superpotentials at large complex structure are dominated by cubic or quartic terms in the complex structure moduli. In this limit, the low-energy effective theory exhibits universal properties that are insensitive to the details of the compactification manifold or the flux configuration. Focussing on the complex structure and axio-dilaton sector, we show that there are no vacua in this region and the spectrum of the Hessian matrix is highly peaked and consists only of three distinct eigenvalues ($0$, $2m_{3/2}^{2}$ and $8m_{3/2}^{2}$), independently of the number of moduli. We briefly comment on how the inclusion of Kähler moduli affects these findings. Our results generalise those of Brodie & Marsh [1], in which these universal properties were found in a subspace of the large complex structure limit of type IIB compactifications.
Isotropization of Bianchi type models and a new FRW solution in Brans-Dicke theory
Cervantes-Cota, J L; Cervantes-Cota, Jorge L.; Nahmad, Marcos
2001-01-01
Using scaled variables we are able to integrate an equation valid for isotropic and anisotropic Bianchi type I, V, IX models in Brans-Dicke (BD) theory. We analyze known and new solutions for these models in relation with the possibility that anisotropic models asymptotically isotropize, and/or possess inflationary properties. In particular, a new solution of curve ($k...
Localized energy associated with Bianchi-Type VI universe in $f(R)$ theory of gravity
Korunur, M
2016-01-01
In the present work, focusing on one of the most popular problems in modern gravitation theories, we consider the generalized Landau-Lifshitz energy-momentum relation to calculate the energy distribution of the Bianchi-Type VI spacetime in $f(R)$ gravity. Additionally, the results are specified by using some well-known $f(R)$-gravity models.
T-duality in type II string theory via noncommutative geometry and beyond
This brief survey of how noncommutative and nonassociative geometry appears naturally in the study of T-duality in type II string theory is essentially a transcript of my talks given at the 21st Nishinomiya-Yukawa Memorial Symposium on Theoretical Physics: Noncommutative Geometry and Quantum Spacetime in Physics, Japan, 11-15 November 2006. (author)
The central error of M. W. Evans ECE theory - a type mismatch
Bruhn, G W
2006-01-01
This note corrects an erroneous article by M.W. Evans on his GCUFT theory, which he took over into his GCUFT book. Due to Evans' habit of suppressing seemingly unimportant indices, type-mismatch errors occur that cannot be removed. In addition, some further errors of that article/book chapter are pointed out.
Pedersen, Henrik; Carlsen, Morten; Nielsen, Jens Bredal
1999-01-01
Two alpha-amylase-producing strains of Aspergillus oryzae, a wild-type strain and a recombinant containing additional copies of the alpha-amylase gene, were characterized with respect to enzyme activities, localization of enzymes to the mitochondria or cytosol, macromolecular composition, and...
Exceptional field theory. I. E6(6)-covariant form of M-theory and type IIB
Hohm, Olaf; Samtleben, Henning
2014-03-01
We present the details of the recently constructed E6(6)-covariant extension of 11-dimensional supergravity. This theory requires a 5+27-dimensional spacetime in which the "internal" coordinates transform in the $\overline{27}$ of E6(6). All fields are E6(6) tensors and transform under (gauged) internal generalized diffeomorphisms. The "Kaluza-Klein" vector field acts as a gauge field for the E6(6)-covariant "E-bracket" rather than a Lie bracket, requiring the presence of 2-forms akin to the tensor hierarchy of gauged supergravity. We construct the complete and unique action that is gauge invariant under generalized diffeomorphisms in the internal and external coordinates. The theory is subject to covariant section constraints on the derivatives, implying that only a subset of the extra 27 coordinates is physical. We give two solutions of the section constraints: the first preserves GL(6) and embeds the action of the complete (i.e. untruncated) 11-dimensional supergravity; the second preserves GL(5)×SL(2) and embeds complete type IIB supergravity. As a byproduct, we thus obtain an off-shell action for type IIB supergravity.
Thermodynamic limit of the Nekrasov-type formula for E-string theory
We give a proof of the Nekrasov-type formula proposed by one of the authors for the Seiberg-Witten prepotential for the E-string theory on $\mathbb{R}^{4}\times T^{2}$. We take the thermodynamic limit of the Nekrasov-type formula following the example of Nekrasov-Okounkov and reproduce the Seiberg-Witten description of the prepotential. The Seiberg-Witten curve obtained directly from the Nekrasov-type formula is of genus greater than one. We find that this curve is transformed into the known elliptic curve by a simple map. We consider the cases in which the low energy theory has E8, E7⊕A1 or E6⊕A2 as a global symmetry.
Lahriri, Said; Santos, Ilmar
2013-01-01
This paper treats an experimental study of a shaft impacting its stator in different cases. The paper focuses mainly on the measured contact forces and the shaft motion for two different types of backup bearings. As such, the measured contact forces are thoroughly studied. These measured contact forces enable the hysteresis loops to be computed and analyzed. Consequently, the contact forces are plotted against the local deformation in order to assess the contact force loss during the impacts. The shaft motion during contact with the backup bearing is verified with two-sided spectrum analyses...
Hall, Hardy C.; Fakhrzadeh, Azadeh; Luengo Hendriks, Cris L.; Fischer, Urs
2016-01-01
While novel whole-plant phenotyping technologies have been successfully implemented into functional genomics and breeding programs, the potential of automated phenotyping with cellular resolution is largely unexploited. Laser scanning confocal microscopy has the potential to close this gap by providing spatially highly resolved images containing anatomic as well as chemical information on a subcellular basis. However, in the absence of automated methods, the assessment of the spatial patterns and abundance of fluorescent markers with subcellular resolution is still largely qualitative and time-consuming. Recent advances in image acquisition and analysis, coupled with improvements in microprocessor performance, have brought such automated methods within reach, so that information from thousands of cells per image for hundreds of images may be derived in an experimentally convenient time-frame. Here, we present a MATLAB-based analytical pipeline to (1) segment radial plant organs into individual cells, (2) classify cells into cell type categories based upon Random Forest classification, (3) divide each cell into sub-regions, and (4) quantify fluorescence intensity to a subcellular degree of precision for a separate fluorescence channel. In this research advance, we demonstrate the precision of this analytical process for the relatively complex tissues of Arabidopsis hypocotyls at various stages of development. High speed and robustness make our approach suitable for phenotyping of large collections of stem-like material and other tissue types. PMID:26904081
Kuld, Sebastian; Conradsen, Christian; Moses, Poul Georg; Chorkendorff, Ib; Sehested, Jens
2014-06-01
Methanol has recently attracted renewed interest because of its potential importance as a solar fuel. Methanol is also an important bulk chemical that is most efficiently formed over the industrial Cu/ZnO/Al2O3 catalyst. The identity of the active site and, in particular, the role of ZnO as a promoter for this type of catalyst is still under intense debate. Structural changes that are strongly dependent on the pretreatment method have now been observed for an industrial-type methanol synthesis catalyst. A combination of chemisorption, reaction, and spectroscopic techniques provides a consistent picture of surface alloying between copper and zinc. This analysis enables a reinterpretation of the methods that have been used for the determination of the Cu surface area and provides an opportunity to independently quantify the specific Cu and Zn areas. This method may also be applied to other systems where metal-support interactions are important, and this work generally addresses the role of the carrier and the nature of the interactions between carrier and metal in heterogeneous catalysts. PMID:24764288
Two-dimensional nonlocal heating theory of planar-type inductively coupled plasma discharge
A two-dimensional heating theory of planar-type inductively coupled plasma (ICP) discharge is developed. The theory includes the anomalous skin effect with an arbitrary value of electron collision frequency and source current. Based on the uniqueness theorem of the wave equation, wave excitation by the source current is determined. With the calculated electromagnetic fields, plasma resistance is expressed as a function of various parameters such as plasma density n_p, electron temperature T_e, radius of chamber R, length of plasma L_p, shielding cap length L_s, electron collision frequency ν, excitation frequency ω, and the position and size of the antenna coil. copyright 1997 The American Physical Society
The Classification of Guns Type Using Image Recognition Theory
M. L. Kulthon Kasemsan
2014-01-01
The research aims to develop the Guns Type and Models Classification (GTMC) system using image recognition theory. It is expected that this study can serve as a guide for law enforcement agencies or at least serve as the catalyst for a similar type of research. Master image storage and image recognition are the two main processes. The procedures involved original images, scaling, gray scale, canny edge detector, SUSAN corner detector, block matching template, and finally gun type recognition. Of the 505 images, 80 were control or master images, and 425 were experimental images of the eight gun types. The findings from the experiment indicated that the GTMC was able to classify the images of the semi-automatic gun with the highest accuracy of 99.06 percent, and the average accuracy of gun image classification was 81.25 percent.
The structure of the R8 term in type IIB string theory
Based on the structure of the on-shell linearized superspace of type IIB supergravity, we argue that there is a non-BPS 16-derivative interaction in the effective action of type IIB string theory of the form $(t_8t_8R^4)^2$, which we call the $R^8$ interaction. It lies in the same supermultiplet as the $G^8R^4$ interaction. Using the Kawai–Lewellen–Tye relation, we analyze the structure of the tree-level eight-graviton scattering amplitude in the type IIB theory, which leads to the $R^8$ interaction at the linearized level. This involves an analysis of color-ordered multi-gluon disc amplitudes in the type I theory, which shows an intricate pole structure and transcendentality consistent with various other interactions. Considerations of S-duality show that the $R^8$ interaction receives non-analytic contributions in the string coupling at one and two loops. Apart from receiving perturbative contributions, we show that the $R^8$ interaction receives a non-vanishing contribution in the one D-instanton-anti-instanton background at leading order in the weak coupling expansion. (paper)
Casteels, Cindy [K.U. Leuven, University Hospital Leuven, Division of Nuclear Medicine, Leuven (Belgium); K.U. Leuven, MoSAIC, Molecular Small Animal Imaging Center, Leuven (Belgium); University Hospital Gasthuisberg, Division of Nuclear Medicine, Leuven (Belgium); Koole, Michel; Laere, Koen van [K.U. Leuven, University Hospital Leuven, Division of Nuclear Medicine, Leuven (Belgium); K.U. Leuven, MoSAIC, Molecular Small Animal Imaging Center, Leuven (Belgium); Celen, Sofie; Bormans, Guy [K.U. Leuven, MoSAIC, Molecular Small Animal Imaging Center, Leuven (Belgium); K.U. Leuven, Laboratory for Radiopharmacy, Leuven (Belgium)
2012-09-15
[18F]MK-9470 is an inverse agonist for the type 1 cannabinoid (CB1) receptor allowing its use in PET imaging. We characterized the kinetics of [18F]MK-9470 and evaluated its ability to quantify CB1 receptor availability in the rat brain. Dynamic small-animal PET scans with [18F]MK-9470 were performed in Wistar rats on a FOCUS-220 system for up to 10 h. Both plasma and perfused brain homogenates were analysed using HPLC to quantify radiometabolites. Displacement and blocking experiments were done using cold MK-9470 and another inverse agonist, SR141716A. The distribution volume (VT) of [18F]MK-9470 was used as a quantitative measure and compared to the use of brain uptake, expressed as SUV, a simplified method of quantification. The percentage of intact [18F]MK-9470 in arterial plasma samples was 80 ± 23 % at 10 min, 38 ± 30 % at 40 min and 13 ± 14 % at 210 min. A polar radiometabolite fraction was detected in plasma and brain tissue. The brain radiometabolite concentration was uniform across the whole brain. Displacement and pretreatment studies showed that 56 % of the tracer binding was specific and reversible. VT values obtained with a one-tissue compartment model plus constrained radiometabolite input had good identifiability (≤10 %). Ignoring the radiometabolite contribution using a one-tissue compartment model alone, i.e. without constrained radiometabolite input, overestimated the [18F]MK-9470 VT, but was correlated. A correlation between [18F]MK-9470 VT and SUV in the brain was also found (R^2 = 0.26-0.33; p ≤ 0.03). While the presence of a brain-penetrating radiometabolite fraction complicates the quantification of [18F]MK-9470 in the rat brain, its tracer kinetics can be modelled using a one-tissue compartment model with and without constrained radiometabolite input. (orig.)
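The one-tissue compartment model used in this record has the standard form dCt/dt = K1·Cp(t) − k2·Ct(t), with distribution volume VT = K1/k2. A minimal numerical sketch (illustrative parameter values, and without the study's constrained radiometabolite input term):

```python
import numpy as np

def simulate_1tcm(K1, k2, Cp, dt):
    """Euler integration of dCt/dt = K1*Cp(t) - k2*Ct(t)."""
    Ct = np.zeros_like(Cp)
    for i in range(1, len(Cp)):
        Ct[i] = Ct[i - 1] + dt * (K1 * Cp[i - 1] - k2 * Ct[i - 1])
    return Ct

# with a constant plasma input, tissue activity approaches VT = K1/k2
dt, t_end = 0.01, 300.0  # minutes
Cp = np.ones(int(t_end / dt))
Ct = simulate_1tcm(K1=0.5, k2=0.1, Cp=Cp, dt=dt)
VT_est = Ct[-1] / Cp[-1]  # ≈ K1/k2 = 5.0
```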
A COSSERAT-TYPE PLATE THEORY AND ITS APPLICATION TO CARBON NANOTUBE MICROSTRUCTURE
Abdellatif Selmi
2014-01-01
The predictive capabilities of plate and shell theories greatly depend on their underlying kinematic assumptions. In this study, we develop a Cosserat-type elastic plate theory which accounts for rotations around the normal to the mid-surface plane (so-called drilling rotations). Internal loads, equilibrium equations, boundary conditions and constitutive equations are derived. The case of a Single-Walled carbon NanoTube (SWNT) modelled as a Cosserat medium is taken here as a reference example. Material parameters are identified and the proposed theory is used to solve analytically the problem of a polymer-SWNT composite tube under torsion. Predictions such as an absolute size effect are compared to those of the classical Cauchy-de Saint-Venant results.
Weber, Tim F. [University of Heidelberg, Department of Diagnostic and Interventional Radiology, Im Neuenheimer Feld 110, 69120 Heidelberg (Germany)], E-mail: tim.weber@med.uni-heidelberg.de; Ganten, Maria-Katharina [German Cancer Research Center, Department of Radiology, Im Neuenheimer Feld 280, 69120 Heidelberg (Germany)], E-mail: m.ganten@dkfz.de; Boeckler, Dittmar [University of Heidelberg, Department of Vascular and Endovascular Surgery, Im Neuenheimer Feld 110, 69120 Heidelberg (Germany)], E-mail: dittmar.boeckler@med.uni-heidelberg.de; Geisbuesch, Philipp [University of Heidelberg, Department of Vascular and Endovascular Surgery, Im Neuenheimer Feld 110, 69120 Heidelberg (Germany)], E-mail: philipp.geisbuesch@med.uni-heidelberg.de; Kauczor, Hans-Ulrich [University of Heidelberg, Department of Diagnostic and Interventional Radiology, Im Neuenheimer Feld 110, 69120 Heidelberg (Germany)], E-mail: hu.kauczor@med.uni-heidelberg.de; Tengg-Kobligk, Hendrik von [German Cancer Research Center, Department of Radiology, Im Neuenheimer Feld 280, 69120 Heidelberg (Germany)], E-mail: h.vontengg@dkfz.de
2009-12-15
Purpose: The purpose of this study was to characterize the heartbeat-related displacement of the thoracic aorta in patients with chronic aortic dissection type B (CADB). Materials and methods: Electrocardiogram-gated computed tomography angiography was performed during inspiratory breath-hold in 11 patients with CADB: Collimation 16 mm x 1 mm, pitch 0.2, slice thickness 1 mm, reconstruction increment 0.8 mm. Multiplanar reformations were taken for 20 equidistant time instances through both ascending (AAo) and descending aorta (true lumen, DAoT; false lumen, DAoF) and the vertex of the aortic arch (VA). In-plane vessel displacement was determined by region of interest analysis. Results: Mean displacement was 5.2 ± 1.7 mm (AAo), 1.6 ± 1.0 mm (VA), 0.9 ± 0.4 mm (DAoT), and 1.1 ± 0.4 mm (DAoF). This indicated a significant reduction of displacement from AAo to VA and DAoT (p < 0.05). The direction of displacement was anterior for AAo and cranial for VA. Conclusion: In CADB, the thoracic aorta undergoes a heartbeat-related displacement that exhibits an unbalanced distribution of magnitude and direction along the thoracic vessel course. Since consecutive traction forces on the aortic wall have to be assumed, these observations may have implications on pathogenesis of and treatment strategies for CADB.
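The region-of-interest analysis described above amounts to tracking a vessel-lumen centroid across the 20 reconstructed cardiac phases and taking the largest in-plane excursion. A minimal sketch (synthetic coordinates, not the authors' software):

```python
import numpy as np

def inplane_displacement(centroids_mm):
    """Max in-plane distance of ROI centroids from their position at phase 0.

    centroids_mm: (n_phases, 2) array of centroid coordinates in mm.
    """
    offsets = centroids_mm - centroids_mm[0]
    return float(np.linalg.norm(offsets, axis=1).max())

# hypothetical centroid track over a few cardiac phases
phases = np.array([[0.0, 0.0], [1.2, 0.9], [3.0, 4.0], [1.0, 0.5]])
d = inplane_displacement(phases)  # 5.0 mm for this synthetic track
```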
Miyaoka, Yuichiro; Berman, Jennifer R; Cooper, Samantha B; Mayerl, Steven J; Chan, Amanda H; Zhang, Bin; Karlin-Neumann, George A; Conklin, Bruce R
2016-01-01
Precise genome-editing relies on the repair of sequence-specific nuclease-induced DNA nicking or double-strand breaks (DSBs) by homology-directed repair (HDR). However, nonhomologous end-joining (NHEJ), an error-prone repair, acts concurrently, reducing the rate of high-fidelity edits. The identification of genome-editing conditions that favor HDR over NHEJ has been hindered by the lack of a simple method to measure HDR and NHEJ directly and simultaneously at endogenous loci. To overcome this challenge, we developed a novel, rapid, digital PCR-based assay that can simultaneously detect one HDR or NHEJ event out of 1,000 copies of the genome. Using this assay, we systematically monitored genome-editing outcomes of CRISPR-associated protein 9 (Cas9), Cas9 nickases, catalytically dead Cas9 fused to FokI, and transcription activator-like effector nuclease at three disease-associated endogenous gene loci in HEK293T cells, HeLa cells, and human induced pluripotent stem cells. Although it is widely thought that NHEJ generally occurs more often than HDR, we found that more HDR than NHEJ was induced under multiple conditions. Surprisingly, the HDR/NHEJ ratios were highly dependent on gene locus, nuclease platform, and cell type. The new assay system, and our findings based on it, will enable mechanistic studies of genome-editing and help improve genome-editing technology. PMID:27030102
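Digital PCR quantification of the kind described above is normally Poisson-corrected, since a positive partition may contain more than one template copy: the mean copies per droplet is λ = −ln(1 − k/n) for k positive droplets out of n. A hedged sketch of how an HDR/NHEJ ratio could be computed from droplet counts (the counts and the two-channel setup below are hypothetical, not the authors' assay design):

```python
import math

def copies_per_droplet(positives, total):
    # Poisson correction: a positive droplet may hold more than one copy
    return -math.log(1.0 - positives / total)

def hdr_nhej_ratio(hdr_positives, nhej_positives, total):
    # ratio of Poisson-corrected concentrations in the two channels
    return copies_per_droplet(hdr_positives, total) / copies_per_droplet(nhej_positives, total)

# hypothetical droplet counts for one edited sample
ratio = hdr_nhej_ratio(hdr_positives=1000, nhej_positives=500, total=20000)
```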
Asymptotic freedom and infrared behavior in the type 0 string approach to gauge theory
In a recent paper we considered the type 0 string theories, obtained from the ten-dimensional closed NSR string by a GSO projection which excludes space-time fermions, and studied the low-energy dynamics of N coincident D-branes. This led us to conjecture that the four-dimensional SU(N) gauge theory coupled to six adjoint massless scalars is dual to a background of type 0 theory carrying N units of R-R 5-form flux and involving a tachyon condensate. The tachyon background leads to a 'soft breaking' of conformal invariance, and we derived the corresponding renormalization group equation. Minahan has subsequently found its asymptotic solution for weak coupling and showed that the coupling exhibits logarithmic flow, as expected from the asymptotic freedom of the dual gauge theory. We study this solution in more detail and identify the effect of the 2-loop beta function. We also demonstrate the existence of a fixed point at infinite coupling. Just like the fixed point at zero coupling, it is characterized by the $AdS_5 \times S^5$ Einstein-frame metric. We argue that there is a RG trajectory extending all the way from the zero coupling fixed point in the UV to the infinite coupling fixed point in the IR.
WKB - type approximations in the theory of vacuum particle creation in strong fields
Smolyansky, S A; Panferov, A D; Prozorkevich, A V; Blaschke, D; Juchnowski, L
2014-01-01
Within the theory of vacuum creation of an $e^{+}e^{-}$ plasma in the strong electric fields acting in the focal spot of counter-propagating laser beams, we compare predictions based on different WKB-type approximations with results obtained in the framework of a strict kinetic approach. Such a comparison demonstrates a considerable divergence of results. We analyse some reasons for this observation and conclude that WKB-type approximations have an insufficient foundation for QED in strong nonstationary fields. The results obtained in this work on the basis of the kinetic approach are most optimistic for the observation of an $e^{+}e^{-}$ plasma in the range of optical and X-ray laser facilities. We also discuss the influence of unphysical features of non-adiabatic field models on the reliability of predictions of the kinetic theory.
G\\"odel and G\\"odel-type universes in Brans-Dicke theory
Agudelo, J A; Petrov, A Yu; Porfírio, P J; Santos, A F
2016-01-01
In this paper, conditions for the existence of Gödel and Gödel-type solutions in Brans-Dicke (BD) scalar-tensor theory and their main features are studied. Special attention is paid to the consistency of the equations of motion, causality, the existence of CTCs (closed time-like curves), and the role which the cosmological constant and the Mach principle play in achieving the consistency of this model.
New Proposal for a 5-dimensional Theory of Kaluza-Klein type
Macedo, Paulo G.
2001-01-01
A new 5-dimensional Classical Unified Field Theory of Kaluza-Klein type is formulated using two separate scalar fields which are related in such a way as to make the 5-dimensional matter-geometry coupling parameter constant. It is shown that this procedure solves the problem of the variability of the gravity coupling parameter without having to assume conformal invariance. The corresponding field equations are discussed, paying particular attention to the possible induction of scalar field gra...
Didarloo, A.; Shojaeizadeh, D.; Gharaaghaji Asl, R.; Niknami, S.; Khorami, A.
2014-01-01
The study evaluated the efficacy of the Theory of Reasoned Action (TRA), along with self-efficacy to predict dietary behaviour in a group of Iranian women with type 2 diabetes. A sample of 352 diabetic women referred to Khoy Diabetes Clinic, Iran, were selected and given a self-administered survey to assess eating behaviour, using the extended TRA constructs. Bivariate correlations and Enter regression analyses of the extended TRA model were performed with SPSS software. Overall, the proposed...
A tale of two cascades: Higgsing and Seiberg-duality cascades from type IIB string theory
Conde, Eduardo; Gaillard, Jerome; Nunez, Carlos; Piai, Maurizio; Ramallo, Alfonso V.
2011-01-01
We construct explicitly new solutions of type IIB supergravity with brane sources, the duals of which are N = 1 supersymmetric field theories exhibiting two very interesting phenomena. The far UV dynamics is controlled by a cascade of Seiberg dualities analogous to the Klebanov-Strassler backgrounds. At intermediate scales a cascade of Higgsing appears, in the sense that the gauge group undergoes a sequence of spontaneous symmetry breaking steps which reduces its rank. Deep in the IR, the th...
The epsilon regime of chiral perturbation theory with Wilson-type fermions
Jansen, K. [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany). John von Neumann-Inst. fuer Computing NIC; Shindler, A. [Liverpool Univ. (United Kingdom). Theoretical Physics Division
2009-11-15
In this proceedings contribution we report on the ongoing effort to simulate Wilson-type fermions in the so-called epsilon regime of chiral perturbation theory (cPT). We present results for the chiral condensate and the pseudoscalar decay constant obtained with Wilson twisted mass fermions employing two lattice spacings, two different physical volumes and several quark masses. With this set of simulations we make a first attempt to estimate the systematic uncertainties. (orig.)
Cultivating New-type Farmers Based on the Theory of Human Resources Development
Zhang, Li
2010-01-01
Under the direction of the theory of human resources development, this thesis analyzes the impact of rural human resources development on cultivating new-type farmers. Firstly, it increases the input to rural basic education; secondly, it reinforces vocational education and technology training; thirdly, it promotes rural medical and public health services; fourthly, it quickens the rural labor transfer. The status quo of China's rural human resources has been analyzed as follows: in ter...
Bianchi type VIh cosmological model with wet dark fluid in scale invariant theory of gravitation
Mishra, B
2014-01-01
In this paper, we have investigated Bianchi type VIh, II and III cosmological models with wet dark fluid in scale invariant theory of gravity, where the matter field is in the form of a perfect fluid with a time-dependent gauge function (Dirac gauge). A non-singular model for the universe filled with disordered radiation is constructed and some physical behaviours of the model are studied for the feasible VIh (h = 1) space-time.
The criticality problem in reflected slab type reactor in the two-group transport theory
The criticality problem in a reflected slab-type reactor is solved for the first time in two-group neutron transport theory by singular eigenfunction expansion. The singular integrals obtained through the continuity conditions of the angular distributions at the interface are regularized by a recently proposed method. The result is a coupled system of regular integral equations for the expansion coefficients; this system is solved by an ordinary iterative method. Numerical results that can be used as a comparative standard for approximation methods are presented.
Social Cognition in Later Life: Effects of Aging and Task Type on Theory of Mind Performance
Doyle, Claire L.
2009-01-01
Recent studies assessing the effects of age and task type on theory of mind (ToM) have found mixed results. However, these studies have not considered the possibility that by using a series of distinct and unrelated tasks, other confounding factors are likely to affect performance, such as the type of ToM reasoning required, the length of the social interactions, the characters involved, etc. Moreover, most have relied on traditional ToM tests which lack resemblance to real-world s...
Some properties of the Cauchy-type integral for the Laplace vector fields theory
We study the analog of the Cauchy-type integral for the Laplace vector fields theory in the case of a piece-wise Liapunov surface of integration, and we prove the Sokhotski-Plemelj theorem for it as well as the necessary and sufficient condition for the possibility to extend a given Hölder function from such a surface to a Laplace vector field. A formula for the square of the singular Cauchy-type integral is given. The proofs of all these facts are based on intimate relations between Laplace vector fields and some versions of quaternionic analysis.
Fluxes, moduli fixing and MSSM-like vacua in Type IIA String Theory
Camara, P G
2006-01-01
We review some of the features of Type IIA compactifications in the presence of fluxes. In particular, the case of $T^6/(\Omega (-1)^{F_L} \sigma)$ orientifolds with RR, NS and metric fluxes is considered. These have turned out to possess remarkable properties, such as vacua with all the closed string moduli stabilized, null or negative contributions to the RR tadpoles, and supersymmetry on the branes enforced by the closed string background. In this way, Type IIA compactifications with non-trivial fluxes seem to constitute a new window into the building of semi-realistic models in String Theory.
Voigt Christopher A
2010-10-01
Background: The type III secretion system (T3SS) is a molecular machine in gram-negative bacteria that exports proteins through both membranes to the extracellular environment. It has been previously demonstrated that the T3SS encoded in Salmonella Pathogenicity Island 1 (SPI-1) can be harnessed to export recombinant proteins. Here, we demonstrate the secretion of a variety of unfolded spider silk proteins and use these data to quantify the constraints of this system with respect to the export of recombinant protein. Results: To test how the timing and level of protein expression affect secretion, we designed a hybrid promoter that combines an IPTG-inducible system with a natural genetic circuit that controls effector expression in Salmonella (psicA). LacO operators are placed in various locations in the psicA promoter; the optimal induction occurs when a single operator is placed at +5nt (234-fold), and a lower basal level of expression is achieved when a second operator is placed at -63nt to take advantage of DNA looping. Using this tool, we find that the secretion efficiency (protein secreted divided by total expressed) is constant as a function of total expressed. We also demonstrate that the secretion flux peaks at 8 hours. We then use whole-gene DNA synthesis to construct codon-optimized spider silk genes for full-length (3129 amino acids) Latrodectus hesperus dragline silk, Bombyx mori cocoon silk, and Nephila clavipes flagelliform silk, and PCR is used to create eight truncations of these genes. These proteins are all unfolded polypeptides and they encompass a variety of lengths, charges, and amino acid compositions. We find that proteins shorter than 550 amino acids secrete reliably, and the probability declines significantly after ~700 amino acids. There is also a charge optimum at -2.4, and secretion efficiency declines for very positively or negatively charged proteins. There is no significant correlation with hydrophobicity.
Conclusions: We show that the natural system encoded in SPI-1 only produces high titers of secreted protein for 4-8 hours when the natural psicA promoter is used to drive expression. Secretion efficiency can be high, but declines for charged or large sequences. A quantitative characterization of these constraints will facilitate the effective use and engineering of this system.
Specimens: "most of" generic NPs in a contextually flexible type theory
Retoré, Christian
2011-01-01
This paper proposes to compute the meanings associated to sentences with generic NPs corresponding to the "most of" generalized quantifier. We call these generics specimens and they resemble stereotypes or prototypes in lexical semantics. The meanings are viewed as logical formulae that can be thereafter interpreted in your favorite models. We rather depart from the dominant Fregean single untyped universe and go for type theory with hints from Hilbert epsilon calculus and from medieval philosophy. Our type theoretic analysis bears some resemblance to ongoing work in lexical semantics. Our model also applies to classical examples involving a class (or a generic element of this class) which is provided by the context. An outcome of this study is that, in the minimalism-contextualism debate, if one adopts a type theoretical view, terms encode the purely semantic meaning component while their typing is pragmatically determined.
Ma, Fuyin; Wu, Jiu Hui; Huang, Meng
2015-09-01
To overcome the influence of structural resonance on continuous structures and obtain a lightweight thin-layer structure that can effectively isolate low-frequency noise, an elastic membrane structure was proposed. In the low-frequency range below 500 Hz, the sound transmission loss (STL) of this membrane-type structure is much higher than that of EVA (ethylene-vinyl acetate copolymer), the current sound insulation material of vehicles, so it is possible to replace the EVA by the membrane-type metamaterial structure in practical engineering. Based on the band structure, modal shapes, and the sound transmission simulation, the sound insulation mechanism of the designed membrane-type acoustic metamaterials was analyzed from a new perspective and validated experimentally. It is suggested that in the frequency range above 200 Hz for this membrane-mass type structure, the sound insulation effect was principally due not to the low-level locally resonant mode of the mass block, but to the continuous vertical resonant modes of the localized membrane. Based on this physical property, a resonant modal group theory is first proposed in this paper. In addition, the sound insulation mechanisms of the membrane-type structure and the thin-plate structure were combined by the membrane/plate resonant theory.
Optimal Uncertainty Quantification
Owhadi, Houman; Sullivan, Timothy John; McKerns, Mike; Ortiz, Michael
2010-01-01
We propose a rigorous framework for Uncertainty Quantification (UQ) in which the UQ objectives and the assumptions/information set are brought to the forefront. This framework, which we call \emph{Optimal Uncertainty Quantification} (OUQ), is based on the observation that, given a set of assumptions and information about the problem, there exist optimal bounds on uncertainties: these are obtained as extreme values of well-defined optimization problems corresponding to extremizing probabilities of failure, or of deviations, subject to the constraints imposed by the scenarios compatible with the assumptions and information. In particular, this framework does not implicitly impose inappropriate assumptions, nor does it repudiate relevant information. Although OUQ optimization problems are extremely large, we show that under general conditions, they have finite-dimensional reductions. As an application, we develop \emph{Optimal Concentration Inequalities} (OCI) of Hoeffding and McDiarmid type. Surprisingly, contr...
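As a toy illustration of the OUQ idea described in the abstract above (not taken from the paper itself), the sketch below extremizes a probability of failure over all two-point distributions on [0, 1] compatible with a single mean constraint; the function name, grid resolution, and numerical values are all illustrative assumptions.

```python
import itertools

def ouq_upper_bound(mean, threshold, grid=200):
    """Brute-force the largest P(X >= threshold) over two-point
    distributions supported on [0, 1] with E[X] = mean -- a toy
    instance of an OUQ optimization problem."""
    best = 0.0
    xs = [i / grid for i in range(grid + 1)]
    for x1, x2 in itertools.product(xs, repeat=2):
        if x1 == x2:
            continue
        # weight p on x2 chosen so that p*x2 + (1-p)*x1 = mean
        p = (mean - x1) / (x2 - x1)
        if not 0.0 <= p <= 1.0:
            continue
        prob = (p if x2 >= threshold else 0.0) + ((1 - p) if x1 >= threshold else 0.0)
        best = max(best, prob)
    return best

# For E[X] = 0.2 and threshold 0.5, the optimal bound is mean/threshold = 0.4.
print(round(ouq_upper_bound(0.2, 0.5), 3))  # prints 0.4
```

Here the search recovers mean/threshold, i.e. Markov's inequality is tight under this information set; richer information sets generally yield strictly sharper optimal bounds, which is the point of the OUQ framework.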
Type synthesis for 4-DOF parallel press mechanism using GF set theory
He, Jun; Gao, Feng; Meng, Xiangdun; Guo, Weizhong
2015-07-01
Parallel mechanisms are used in large-capacity servo presses to avoid the over-constraint of traditional redundant actuation. Current research mainly focuses on performance analysis for specific parallel press mechanisms; the type synthesis and evaluation of parallel press mechanisms are seldom studied, especially for four-degree-of-freedom (4-DOF) press mechanisms. The type synthesis of 4-DOF parallel press mechanisms is carried out based on the generalized function (GF) set theory. Five design criteria for 4-DOF parallel press mechanisms are first proposed. The general procedure of type synthesis of parallel press mechanisms is obtained, which includes number synthesis, symmetrical synthesis of constraint GF sets, decomposition of motion GF sets and design of limbs. Nine combinations of constraint GF sets of 4-DOF parallel press mechanisms, ten combinations of GF sets of active limbs, and eleven combinations of GF sets of passive limbs are synthesized. Thirty-eight kinds of press mechanisms are presented and then different structures of kinematic limbs are designed. Finally, the geometrical constraint complexity (GCC), kinematic pair complexity (KPC), and type complexity (TC) are proposed to evaluate the press types, and the optimal press type is identified. General methodologies of type synthesis and evaluation for parallel press mechanisms are suggested.
A Density Functional Theory Study of Doped Tin Monoxide as a Transparent p-type Semiconductor
Bianchi Granato, Danilo
2012-05-01
In the pursuit of enhancing the electronic properties of transparent p-type semiconductors, this work uses density functional theory to study the effects of doping tin monoxide with nitrogen, antimony, yttrium and lanthanum. An overview of the theoretical concepts and a detailed description of the methods employed are given, including a discussion of the correction scheme for charged defects proposed by Freysoldt and others [Freysoldt 2009]. Analysis of the formation energies of the defects points out that nitrogen substitutes an oxygen atom and does not provide charge carriers. On the other hand, antimony, yttrium, and lanthanum substitute a tin atom and donate n-type carriers. Study of the band structure and density of states indicates that yttrium and lanthanum improve the hole mobility. The present results are in good agreement with available experimental works and help to improve the understanding of how to engineer transparent p-type materials with higher hole mobilities.
Canonical BF-type topological field theory and fractional statistics of strings
We consider BF-type topological field theory coupled to non-dynamical particle and string sources on spacetime manifolds of the form R^1 x M^3, where M^3 is a 3-manifold without boundary. Canonical quantization of the theory is carried out in the hamiltonian formalism and explicit solutions of the Schrödinger equation are obtained. We show that the Hilbert space is finite dimensional and the physical states carry a one-dimensional projective representation of the local gauge symmetries. When M^3 is homologically non-trivial the wavefunctions in addition carry a multi-dimensional projective representation, in terms of the linking matrix of the homology cycles of M^3, of the discrete group of large gauge transformations. The wavefunctions also carry a one-dimensional representation of the non-trivial linking of the particle trajectories and string surfaces in M^3. This topological field theory therefore provides a phenomenological generalization of anyons to (3+1) dimensions where the holonomies representing fractional statistics arise from the adiabatic transport of particles around strings. We also discuss a duality between large gauge transformations and these linking operations around the homology cycles of M^3, and show that this canonical quantum field theory provides novel quantum representations of the cohomology of M^3 and its associated motion group. ((orig.))
A tale of two cascades: Higgsing and Seiberg-duality cascades from type IIB string theory
Conde, Eduardo; Nunez, Carlos; Piai, Maurizio; Ramallo, Alfonso V
2011-01-01
We construct explicitly new solutions of type IIB supergravity with brane sources, the duals of which are N = 1 supersymmetric field theories exhibiting two very interesting phenomena. The far UV dynamics is controlled by a cascade of Seiberg dualities analogous to the Klebanov-Strassler backgrounds. At intermediate scales a cascade of Higgsing appears, in the sense that the gauge group undergoes a sequence of spontaneous symmetry breaking steps which reduces its rank. Deep in the IR, the theory confines, and the gravity background has a non-singular end of space. We explain in detail how to generate such solutions, discuss some of the physics associated with them and briefly comment on the possible applications.
Social cognitive theory correlates of moderate-intensity exercise among adults with type 2 diabetes.
Heiss, Valerie J; Petosa, R L
2016-01-01
The purpose of this study was to identify social cognitive theory (SCT) correlates of moderate- to vigorous-intensity exercise (MVPA) among adults with type 2 diabetes. Adults with type 2 diabetes (N=181) participated in the study. Participants were recruited through ResearchMatch.org to complete an online survey. The survey used previously validated instruments to measure dimensions of self-efficacy, self-regulation, social support, outcome expectations, the physical environment, and minutes of MVPA per week. Spearman Rank Correlations were used to determine the relationship between SCT variables and MVPA. Classification and Regression Analysis using a decision tree model was used to determine the amount of variance in MVPA explained by SCT variables. Due to low levels of vigorous activity, only moderate-intensity exercise (MIE) was analyzed. SCT variables explained 42.4% of the variance in MIE. Self-monitoring, social support from family, social support from friends, and self-evaluative outcome expectations all contributed to the variability in MIE. Other contributing variables included self-reward, task self-efficacy, social outcome expectations, overcoming barriers, and self-efficacy for making time for exercise. SCT is a useful theory for identifying correlates of MIE among adults with type 2 diabetes. The SCT correlates can be used to refine diabetes education programs to target the adoption and maintenance of regular exercise. PMID:25753761
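The Spearman rank correlations mentioned in the abstract above are straightforward to reproduce; the sketch below implements the tie-aware rank transform and the rho statistic in plain Python. The variable names echo the study's constructs, but the data values are invented for illustration.

```python
def _ranks(values):
    """Average ranks (1-based), handling ties as in Spearman's rho."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(values):
        j = i
        # extend over a run of tied values
        while j + 1 < len(values) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of the tied positions, 1-based
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman_rho(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks."""
    rx, ry = _ranks(x), _ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx) ** 0.5
    vy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (vx * vy)

# Hypothetical data: self-monitoring scores vs. weekly minutes of exercise.
self_monitoring = [1, 2, 3, 4, 5, 6]
exercise_minutes = [10, 30, 25, 60, 90, 150]
print(round(spearman_rho(self_monitoring, exercise_minutes), 2))  # prints 0.94
```

Because the statistic depends only on ranks, it is the natural choice for the ordinal survey scales described in the study.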
S-matrix elements and covariant tachyon action in type 0 theory
Mohammad R. Garousi
2003-01-01
We evaluate the sphere-level S-matrix element of two tachyons and two massless NS states, the S-matrix element of four tachyons, and the S-matrix element of two tachyons and two Ramond-Ramond vertex operators, in type 0 theory. We then find an expansion for these amplitudes whose leading-order terms correspond to a covariant tachyon action. To the order considered, there are no $T^4$, $T^2(\partial T)^2$, $T^2H^2$, nor $T^2R$ tachyon couplings, whereas the tachyon couplings $F\bar{F} T$ and $T^...
Hossienkhani, Hossien
2016-01-01
A spatially homogeneous and anisotropic Bianchi type I universe has been studied with the ghost dark energy (GDE) in the framework of Brans-Dicke theory. For this purpose, we use the squared sound speed $v_s^2$ whose sign determines the stability of the model. At first, we obtain the equation of state parameter, $\omega_\Lambda$, the deceleration parameter $q$ and the evolution equation of the ghost dark energy. Then, we extend our study to the case of ghost dark energy in a non-isotropic and...
From Peierls brackets to a generalized Moyal bracket for type-I gauge theories
Esposito, Giampiero; Stornaiolo, Cosimo
2006-01-01
In the space-of-histories approach to gauge fields and their quantization, the Maxwell, Yang-Mills and gravitational fields are well known to share the property of being type-I theories, i.e. Lie brackets of the vector fields which leave the action functional invariant are linear combinations of such vector fields, with coefficients of linear combination given by structure constants. The corresponding gauge-field operator in the functional integral for the in-out amplitude is an invertible se...
Comment on the one-loop finiteness in type-I superstring theory
By using the Pauli-Villars method, the one-loop divergence of the 4-point amplitude in SO(N) type-I superstring theory is studied. If one assigns the same mass to the Pauli-Villars regulators appearing in the planar and nonorientable diagrams, the one-loop finiteness does not hold for N = 32. From the present viewpoint, the principal-part prescription by Green and Schwarz corresponds to a different regulator mass assignment for the planar and nonorientable diagrams. (author)
Stable non-BPS D-branes in Type I string theory
Frau, M; Lerda, A; Strigazzi, P
2000-01-01
We use the boundary state formalism to study, from the closed string point of view, superpositions of branes and anti-branes which are relevant in some non-perturbative string dualities. Treating the tachyon instability of these systems as proposed by A. Sen, we show how to incorporate the effects of the tachyon condensation directly in the boundary state by means of a "generalized T-duality". In this way we manage to show explicitly that the D1-anti-D1 pair of Type I is a stable non-BPS D-particle, and compute its mass. We also generalize this construction to describe other non-BPS D-branes of Type I. By requiring the absence of tachyons in the open string spectrum, we find which configurations are stable and compute their tensions. Our classification is in complete agreement with the results recently obtained using the K-theory of space-time.
Stable non-BPS D-branes in type I string theory
We use the boundary state formalism to study, from the closed string point of view, superpositions of branes and anti-branes which are relevant in some non-perturbative string dualities. Treating the tachyon instability of these systems as proposed by Sen, we show how to incorporate the effects of the tachyon condensation directly in the boundary state. In this way we manage to show explicitly that the D1-anti-D1 pair of Type I is a stable non-BPS D-particle, and compute its mass. We also generalize this construction to describe other non-BPS D-branes of Type I. By requiring the absence of tachyons in the open string spectrum, we find which configurations are stable and compute their tensions. Our classification is in complete agreement with the results recently obtained using the K-theory of space-time
Four types of coping with COPD-induced breathlessness in daily living: a grounded theory study
Bastrup, Lene; Dahl, Ronald; Pedersen, Preben Ulrich; Lomborg, Kirsten
2013-01-01
Coping with breathlessness is a complex and multidimensional challenge for people with chronic obstructive pulmonary disease (COPD) and involves interacting physiological, cognitive, affective, and psychosocial dimensions. The aim of this study was to explore how people with moderate to most severe COPD predominantly cope with breathlessness during daily living. We chose a multimodal grounded theory design that offers the opportunity to combine qualitative and quantitative data to capture and explain the multidimensional coping behaviour among people with COPD. The participants' main concern in coping with breathlessness appeared to be an endless striving to economise on resources in an effort to preserve their integrity. In this integrity-preserving process, four predominant coping types emerged, labelled 'Overrater', 'Challenger', 'Underrater', and 'Leveller'. Each coping type...
Robinson, P. A.; Cairns, I. H.; Gurnett, D. A.
1993-01-01
Detailed comparisons are made between the Langmuir-wave properties predicted by the recently developed stochastic-growth theory of type III sources and those observed by the plasma wave experiment on ISEE 3, after correcting for the main instrumental and selection effects. Analysis of the observed field-strength distribution confirms the theoretically predicted form and implies that wave growth fluctuates both spatially and temporally in sign and magnitude, leading to an extremely clumpy distribution of fields. A cutoff in the field-strength distribution is seen at a few mV/m, corresponding to saturation via nonlinear effects. Analysis of the size distribution of Langmuir clumps yields results in accord with those obtained in earlier work and with the size distribution of ambient density fluctuations in the solar wind. This confirms that the inhomogeneities in the Langmuir growth rate are determined by the density fluctuations and that these fluctuations persist during type III events.
The early life origin theory in the development of cardiovascular disease and type 2 diabetes.
Lindblom, Runa; Ververis, Katherine; Tortorella, Stephanie M; Karagiannis, Tom C
2015-04-01
Life expectancy has been examined from a variety of perspectives in recent history. Epidemiology is one perspective which examines causes of morbidity and mortality at the population level. Over the past few hundred years there have been dramatic shifts in the major causes of death and expected life length. This change has suffered from inconsistency across time and space, with vast inequalities observed between population groups. Currently in focus is the challenge of rising non-communicable diseases (NCD), such as cardiovascular disease and type 2 diabetes mellitus. In the search to discover methods to combat the rising incidence of these diseases, a number of new theories on the development of morbidity have arisen. A pertinent example is the hypothesis published by David Barker in 1995, which postulates the prenatal and early developmental origin of adult-onset disease and highlights the importance of the maternal environment. This theory has been subject to criticism; however, it has gradually gained acceptance. In addition, the relatively new field of epigenetics is contributing evidence in support of the theory. This review aims to explore the implications and limitations of the developmental origin hypothesis, via an historical perspective, in order to enhance understanding of the increasing incidence of NCDs, and facilitate an improvement in planning public health policy. PMID:25270249
Mild to severe social fears: ranking types of feared social situations using item response theory.
Crome, Erica; Baillie, Andrew
2014-06-01
Social anxiety disorder is one of the most common mental disorders, and is associated with long-term impairment, distress and vulnerability to secondary disorders. Certain types of social fears are more common than others, with public speaking fears typically the most prevalent in epidemiological surveys. The distinction between performance- and interaction-based fears has been the focus of long-standing debate in the literature, with evidence that performance-based fears may reflect milder presentations of social anxiety. This study aims to explicitly test whether different types of social fears differ in underlying social anxiety severity using item response theory techniques. Different types of social fears were assessed using items from three different structured diagnostic interviews in four different epidemiological surveys in the United States (n=2261, n=5411) and Australia (n=1845, n=1497), and ranked using 2-parameter logistic item response theory models. Overall, patterns of underlying severity indicated by different fears were consistent across the four samples, with items functioning across a range of social anxiety. Public performance fears and speaking at meetings/classes indicated the lowest levels of social anxiety, with increasing severity indicated by situations such as being assertive or attending parties. Fears of using public bathrooms or eating, drinking or writing in public reflected the highest levels of social anxiety. Understanding differences in the underlying severity of different types of social fears has important implications for the underlying structure of social anxiety, and may also enhance the delivery of social anxiety treatment at a population level. PMID:24873885
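The 2-parameter logistic (2PL) item response theory model used in the study above has a simple closed form; the sketch below evaluates it for a few hypothetical fear items whose difficulty parameters mimic the reported severity ordering. All parameter values here are invented for illustration, not taken from the study.

```python
import math

def p_endorse(theta, a, b):
    """2PL IRT model: probability that a person with latent severity
    `theta` endorses an item with discrimination `a` and difficulty
    (severity threshold) `b`."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# Hypothetical items echoing the ranking in the abstract: public-speaking
# fears sit low on the severity continuum, eating/writing in public sits high.
items = {
    "public speaking": (1.5, -1.0),
    "attending parties": (1.5, 0.5),
    "eating/writing in public": (1.5, 1.5),
}
theta = 0.0  # a person of average social anxiety
for name, (a, b) in items.items():
    print(f"{name}: {p_endorse(theta, a, b):.2f}")
```

At average severity this prints roughly 0.82, 0.32, and 0.10 respectively: the lower an item's difficulty parameter, the more prevalent the fear, which is exactly the pattern the study exploits to rank social situations.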
On the effective theory of type II string compactifications on nilmanifolds and coset spaces
Caviezel, Claudio
2009-07-30
In this thesis we analyzed a large number of type IIA strict SU(3)-structure compactifications with fluxes and O6/D6-sources, as well as type IIB static SU(2)-structure compactifications with fluxes and O5/O7-sources. Restricting to structures and fluxes that are constant in the basis of left-invariant one-forms, these models are tractable enough to allow for an explicit derivation of the four-dimensional low-energy effective theory. The six-dimensional compact manifolds we studied in this thesis are nilmanifolds based on nilpotent Lie-algebras, and, on the other hand, coset spaces based on semisimple and U(1)-groups, which admit a left-invariant strict SU(3)- or static SU(2)-structure. In particular, from the set of 34 distinct nilmanifolds we identified two nilmanifolds, the torus and the Iwasawa manifold, that allow for an AdS_4, N = 1 type IIA strict SU(3)-structure solution and one nilmanifold allowing for an AdS_4, N = 1 type IIB static SU(2)-structure solution. From the set of all the possible six-dimensional coset spaces, we identified seven coset spaces suitable for strict SU(3)-structure compactifications, four of which also allow for a static SU(2)-structure compactification. For all these models, we calculated the four-dimensional low-energy effective theory using N = 1 supergravity techniques. In order to write down the most general four-dimensional effective action, we also studied how to classify the different disconnected "bubbles" in moduli space. (orig.)
Scale relativity theory and integrative systems biology: 2. Macroscopic quantum-type mechanics.
Nottale, Laurent; Auffray, Charles
2008-05-01
In these two companion papers, we provide an overview and a brief history of the multiple roots, current developments and recent advances of integrative systems biology and identify multiscale integration as its grand challenge. Then we introduce the fundamental principles and the successive steps that have been followed in the construction of the scale relativity theory, which aims at describing the effects of a non-differentiable and fractal (i.e., explicitly scale dependent) geometry of space-time. The first paper of this series was devoted, in this new framework, to the construction from first principles of scale laws of increasing complexity, and to the discussion of some tentative applications of these laws to biological systems. In this second review and perspective paper, we describe the effects induced by the internal fractal structures of trajectories on motion in standard space. Their main consequence is the transformation of classical dynamics into a generalized, quantum-like self-organized dynamics. A Schrödinger-type equation is derived as an integral of the geodesic equation in a fractal space. We then indicate how gauge fields can be constructed from a geometric re-interpretation of gauge transformations as scale transformations in fractal space-time. Finally, we introduce a new tentative development of the theory, in which quantum laws would hold also in scale space, introducing complexergy as a measure of organizational complexity. Initial possible applications of this extended framework to the processes of morphogenesis and the emergence of prokaryotic and eukaryotic cellular structures are discussed. 
Having founded elements of the evolutionary, developmental, biochemical and cellular theories on the first principles of scale relativity theory, we introduce proposals for the construction of an integrative theory of life and for the design and implementation of novel macroscopic quantum-type experiments and devices. We discuss their potential applications for the analysis, engineering and management of physical and biological systems and properties, as well as the consequences for the organization of transdisciplinary research and the scientific curriculum in the context of the SYSTEMOSCOPE Consortium research and development agenda.
Lazar, Markus; Agiasofitou, Eleni; Polyzos, Demosthenes
2015-01-01
The Comment by Aifantis that criticizes the article 'On non-singular crack fields in Helmholtz type enriched elasticity theories' [Lazar, M., Polyzos, D., 2014. Int. J. Solids Struct. doi: 10.1016/j.ijsolstr.2014.01.002] is refuted by means of clear and straightforward arguments. Important theoretical aspects of gradient enriched elasticity theories which emerge in this work are also discussed.
A sufficient condition for de Sitter vacua in type IIB string theory
Rummel, Markus
2011-01-01
We derive a sufficient condition for realizing meta-stable de Sitter vacua with small positive cosmological constant within type IIB string theory flux compactifications with spontaneously broken supersymmetry. There are a number of 'lamp post' constructions of de Sitter vacua in type IIB string theory and supergravity. We show that one of them -- the method of 'Kähler uplifting' by F-terms from an interplay between non-perturbative effects and the leading α'-correction -- allows for a more general parametric understanding of the existence of de Sitter vacua. The result is a condition on the values of the flux-induced superpotential and the topological data of the Calabi-Yau compactification, which guarantees the existence of a meta-stable de Sitter vacuum if met. Our analysis explicitly includes the stabilization of all moduli, i.e. the Kähler, dilaton and complex structure moduli, by the interplay of the leading perturbative and non-perturbative effects at parametrically large volume.
A sufficient condition for de Sitter vacua in type IIB string theory
Rummel, Markus [Hamburg Univ. (Germany). 2. Inst. fuer Theoretische Physik; Westphal, Alexander [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany)
2011-07-15
We derive a sufficient condition for realizing meta-stable de Sitter vacua with small positive cosmological constant within type IIB string theory flux compactifications with spontaneously broken supersymmetry. There are a number of 'lamp post' constructions of de Sitter vacua in type IIB string theory and supergravity. We show that one of them - the method of 'Kähler uplifting' by F-terms from an interplay between non-perturbative effects and the leading α'-correction - allows for a more general parametric understanding of the existence of de Sitter vacua. The result is a condition on the values of the flux-induced superpotential and the topological data of the Calabi-Yau compactification, which guarantees the existence of a meta-stable de Sitter vacuum if met. Our analysis explicitly includes the stabilization of all moduli, i.e. the Kähler, dilaton and complex structure moduli, by the interplay of the leading perturbative and non-perturbative effects at parametrically large volume.
On heterotic orbifolds, M-theory and type I' brane engineering
Horava-Witten M-theory ↔ heterotic string duality poses special problems for the twisted sectors of heterotic orbifolds. In our previous paper we explained how in M-theory the twisted states couple to gauge fields apparently living on M9 branes at both ends of the eleventh dimension at the same time. The resolution involves 7D gauge fields which live on fixed planes of the (T^4/Z_N) × (S^1/Z_2) × R^{5,1} orbifold and lock onto the 10D gauge fields along the intersection planes. The physics of such intersection planes does not follow directly from the M-theory, but there are stringent kinematic constraints due to duality and local consistency, which allowed us to deduce the local fields and the boundary conditions at each intersection. In this paper we explain various phenomena at the intersection planes in terms of duality between Horava-Witten and type I′ superstring theories. The orbifold fixed planes are dual to stacks of D6 branes, the M9 planes are dual to O8 orientifold planes accompanied by D8 branes, and the intersections are dual to brane junctions. We engineer several junction types which lead to distinct patterns of 7D/10D gauge field locking, 7D symmetry breaking and/or local 6D fields. Another aspect of brane engineering is putting the junctions together; sometimes, the combined effect is rather spectacular from the HW point of view, and the quantum numbers of some twisted states have to 'bounce' off both ends of the eleventh dimension before their heterotic identity becomes clear. Some models involve D6/O8 junctions where the string coupling diverges towards the orientifold plane. We use the heterotic ↔ HW ↔ I′ duality to predict what should happen at such junctions. For example, pinning down an NS5 half-brane to a definite location on an O8 plane with diverging string coupling requires precisely four D6 branes.
Demonstration of a viable quantitative theory for interplanetary type II radio bursts
Schmidt, J. M.; Cairns, Iver H.
2016-03-01
Between 29 November and 1 December 2013 the two widely separated spacecraft STEREO A and B observed a long-lasting, intermittent, type II radio burst over the extended frequency range ≈ 4 MHz to 30 kHz, including an intensification when the shock wave of the associated coronal mass ejection (CME) reached STEREO A. We demonstrate for the first time our ability to quantitatively and accurately simulate the fundamental (F) and harmonic (H) emission of type II bursts from the higher corona (near 11 solar radii) to 1 AU. Our modeling requires the combination of data-driven three-dimensional magnetohydrodynamic simulations for the CME and plasma background, carried out with the BATS-R-US code, with an analytic quantitative kinetic model for both F and H radio emission, including the electron reflection at the shock, growth of Langmuir waves and radio waves, and the radiation's propagation to an arbitrary observer. The intensities and frequencies of the observed radio emissions vary hugely, by factors ≈ 10^6 and ≈ 10^3, respectively; the theoretical predictions are impressively accurate, being typically in error by less than a factor of 10 and 20%, respectively, for both STEREO A and B. We also obtain accurate predictions for the timing and characteristics of the shock and local radio onsets at STEREO A, the lack of such onsets at STEREO B, and the z-component of the magnetic field at STEREO A ahead of the shock and in the sheath. These multiple agreements provide very strong support for the theory, the efficacy of the BATS-R-US code, and the vision of using type IIs and associated data-theory iterations to predict whether a CME will impact Earth's magnetosphere and drive space weather events.
A new type of coupled wave theory is described which is capable, in a very natural way, of analytically describing polychromatic gratings. In contrast to the well known and extremely successful coupled wave theory of Kogelnik, the new theory is based on a differential formulation of the process of Fresnel reflection within the grating. The fundamental coupled wave equations, which are an exact solution of Maxwell's equations for the case of the un-slanted reflection grating, can be analytically solved with minimal approximation. The equations may also be solved in a rotated frame of reference to provide useful formulae for the diffractive efficiency of the general polychromatic slanted grating in three dimensions. The new theory is compared with Kogelnik's theory, where extremely good agreement is found for most cases. The theory has also been compared to a rigorous computational chain matrix simulation of the un-slanted grating, with excellent agreement for cases typical of display holography. In contrast, Kogelnik's theory shows small discrepancies away from Bragg resonance. The new coupled wave theory may easily be extended to an N-coupled wave theory for the case of the multiplexed polychromatic grating, and indeed for the purposes of analytically describing diffraction in the colour hologram. In the simple case of a monochromatic spatially-multiplexed grating at Bragg resonance, the theory is in exact agreement with the predictions of conventional N-coupled wave theory.
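For reference, the benchmark against which the new theory is compared is Kogelnik's closed-form efficiency for a lossless un-slanted reflection grating at Bragg incidence; the formula below is quoted from Kogelnik's 1969 coupled wave analysis and is not stated in the abstract itself:

```latex
% Kogelnik (1969): lossless un-slanted reflection grating at Bragg incidence.
% n_1 = amplitude of the index modulation, d = grating thickness,
% \lambda = free-space wavelength, \theta = internal angle of incidence.
\eta = \tanh^{2}\!\left(\frac{\pi n_{1} d}{\lambda \cos\theta}\right)
```

The corresponding transmission-grating result replaces tanh² by sin², which is why reflection gratings saturate toward 100% efficiency while transmission gratings oscillate with thickness.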
Quantification analysis of CT of ovarian tumors
Early symptoms in patients with ovarian tumors are usually few and nonspecific. CT is often very helpful in the diagnosis of ovarian tumors. Although it is difficult to identify normal ovaries, it is usually possible to diagnose ovarian lesions on CT, because with few exceptions they show tumorous enlargement. We can even estimate the histology in typical cases such as dermoid cysts or some types of cystadenomas. However, estimation of histology is difficult in many cases. Tumors other than those of ovarian origin can occur in the pelvis and require differentiation. Ovarian tumors have a close relationship with the uterus and broad ligaments, and make contact with at least one side of the pelvic wall. Enhanced CT with contrast media may facilitate differentiation between pedunculated subserosal leiomyoma uteri and ovarian tumor, because the former shows intense enhancement as a uterine body; the latter is less intense. Thus, we have little difficulty in differentiating between tumors of ovarian origin and those of other origins. Our problem is differentiating between malignant and benign ovarian tumors, and clarification of their histology. In this study, we devised a decision flow chart to attain an accurate diagnosis. In part, we have utilized Hayashi's quantification theory II, a multiple regression analysis where predictive variables are categorical and outside criteria are classificatory. Hayashi stated that the aim of multi-dimensional quantification is to synthetically form numerical representation of intercorrelated patterns to maximize the efficiency of classification, i.e. the success rate of prediction. Thus, quantification of patterns is thought to be effective in facilitating image diagnosis such as CT and minimizing errors. (author)
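The core of quantification theory II can be sketched in a few lines: categorical predictors are dummy-coded and assigned numerical scores that best separate the outcome classes. The sketch below (the CT findings, category names and cases are hypothetical illustrations, not the study's data) fits the scores by least squares against a 0/1 outcome, which captures the spirit of the method rather than Hayashi's exact formulation:

```python
# Illustrative sketch of quantification theory II: dummy-code categorical
# predictors and compute numerical category scores that separate classes.
# All findings, categories and cases below are hypothetical examples.

features = [("margin", ["smooth", "irregular"]), ("content", ["cystic", "solid"])]
cases = [  # (findings, outcome: 0 = benign, 1 = malignant)
    ({"margin": "smooth", "content": "cystic"}, 0),
    ({"margin": "smooth", "content": "cystic"}, 0),
    ({"margin": "smooth", "content": "solid"}, 0),
    ({"margin": "irregular", "content": "solid"}, 1),
    ({"margin": "irregular", "content": "solid"}, 1),
    ({"margin": "irregular", "content": "cystic"}, 1),
]

def encode(findings):
    row = [1.0]  # intercept
    for name, cats in features:
        for c in cats[1:]:  # drop the first level to avoid collinearity
            row.append(1.0 if findings[name] == c else 0.0)
    return row

def solve(A, b):
    """Solve A w = b by Gauss-Jordan elimination with partial pivoting."""
    n = len(A)
    M = [A[i][:] + [b[i]] for i in range(n)]
    for i in range(n):
        piv = max(range(i, n), key=lambda r: abs(M[r][i]))
        M[i], M[piv] = M[piv], M[i]
        for r in range(n):
            if r != i:
                f = M[r][i] / M[i][i]
                M[r] = [a - f * c for a, c in zip(M[r], M[i])]
    return [M[i][n] / M[i][i] for i in range(n)]

X = [encode(f) for f, _ in cases]
y = [float(out) for _, out in cases]
p = len(X[0])
XtX = [[sum(X[r][i] * X[r][j] for r in range(len(X))) for j in range(p)]
       for i in range(p)]
Xty = [sum(X[r][i] * y[r] for r in range(len(X))) for i in range(p)]
w = solve(XtX, Xty)  # learned category scores

predicted = [1 if sum(wi * xi for wi, xi in zip(w, encode(f))) > 0.5 else 0
             for f, _ in cases]
```

A new case is then scored by summing the learned category weights of its findings, which is exactly the kind of synthesized numerical representation the abstract describes.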
Liu, Qiang; Nie, Jianhui; Huang, Weijin; Meng, Shufang; Yuan, Baozhu; Gao, Dongying; Xu, Xuemei; Wang, Youchun
2012-01-01
Background The presence of various levels of Adenovirus serotype 5 neutralizing antibodies (Ad5NAb) is thought to contribute to the inconsistent clinical results obtained from vaccination and gene therapy studies. Currently, two platforms based on high-throughput technology are available for Ad5NAb quantification, chemiluminescence- and fluorescence-based assays. The aim of this study was to compare the results of two assays in the seroepidemiology of Ad5NAb in a local population of donors. M...
Mirage Models Confront the LHC: II. Flux-Stabilized Type IIB String Theory
Kaufman, Bryan
2013-01-01
We continue the study of a class of string-motivated effective supergravity theories in light of current data from the CERN Large Hadron Collider (LHC). In this installment we consider Type IIB string theory compactified on a Calabi-Yau orientifold in the presence of fluxes, in the manner originally formulated by Kachru et al. We allow for a variety of potential uplift mechanisms and embeddings of the Standard Model field content into D3 and D7 brane configurations. We find that an uplift sector independent of the Kähler moduli, as is the case with anti-D3 branes, is inconsistent with data unless the matter and Higgs sectors are localized on D7 branes exclusively, or are confined to twisted sectors between D3 and D7 branes. We identify regions of parameter space for all possible D-brane configurations that remain consistent with PLANCK observations on the dark matter relic density and measurements of the CP-even Higgs mass at the LHC. Constraints arising from LHC searches at 8 TeV center-of-mass energies, an...
Mirage models confront the LHC. II. Flux-stabilized type IIB string theory
Kaufman, Bryan L.; Nelson, Brent D.
2014-04-01
We continue the study of a class of string-motivated effective supergravity theories in light of current data from the CERN Large Hadron Collider (LHC). In this installment we consider type IIB string theory compactified on a Calabi-Yau orientifold in the presence of fluxes, in the manner originally formulated by Kachru et al. We allow for a variety of potential uplift mechanisms and embeddings of the Standard Model field content into D3-and D7-brane configurations. We find that an uplift sector independent of the Kähler moduli, as is the case with anti-D3-branes, is inconsistent with data unless the matter and Higgs sectors are localized on D7 branes exclusively, or are confined to twisted sectors between D3-and D7-branes. We identify regions of parameter space for all possible D-brane configurations that remain consistent with Planck observations on the dark matter relic density and measurements of the CP-even Higgs mass at the LHC. Constraints arising from LHC searches at √s =8 TeV and the LUX dark matter detection experiment are discussed. The discovery prospects for the remaining parameter space at dark matter direct-detection experiments are described, and signatures for detection of superpartners at the LHC with √s =14 TeV are analyzed.
Bianchi type I Universe and instability of new agegraphic dark energy in Brans-Dicke theories
Fayaz, V.
2016-02-01
In this paper, we consider the new agegraphic dark energy (NADE) in a Bianchi type-I metric (which is spatially homogeneous and anisotropic) in the framework of Brans-Dicke theory. For this purpose, we use the squared sound speed $v_s^2$, whose sign determines the stability of the model. We explore the stability of this model in the presence/absence of interaction between dark energy and dark matter in both flat and non-isotropic geometry. The equation of state and the deceleration parameter of the new agegraphic dark energy in an anisotropic Universe are obtained. We show that the combination of the Brans-Dicke field and new agegraphic dark energy can accommodate the $\omega_\Lambda = -1$ crossing for the equation of state of noninteracting dark energy. When an interaction between dark energy and dark matter is taken into account, the transition of $\omega_\Lambda$ to the phantom regime can be more easily accounted for than when the model is restored into the Einstein field equations. In conclusion, we find evidence that the new agegraphic dark energy in BD theory cannot lead to a stable Universe favored by observations at the present time. The anisotropy of the Universe decreases and the Universe transits to an isotropic flat FRW Universe accommodating the present acceleration.
Engineering of Quantum Hall Effect from type IIA string theory on the K3 surface
Using D-brane configurations on the K3 surface, we give six-dimensional type IIA stringy realizations of the Quantum Hall Effect (QHE) in 1+2 dimensions. Based on the vertical and horizontal lines of the K3 Hodge diamond, we engineer two different stringy realizations. The vertical line presents a realization in terms of D2 and D6-branes wrapping the K3 surface. The horizontal one is associated with hierarchical stringy descriptions obtained from a quiver gauge theory living on a stack of D4-branes wrapping intersecting 2-spheres embedded in the K3 surface with deformed singularities. These geometries are classified by three kinds of Kac-Moody algebras: ordinary, i.e. finite dimensional, affine and indefinite. We find that no stringy QHE in 1+2 dimensions can occur in the quiver gauge theory living on intersecting 2-spheres arranged as affine Dynkin diagrams. Stringy realizations of QHE can be done only for the finite and indefinite geometries. In particular, the finite Lie algebras give models with fractional filling fractions, while the indefinite ones classify models with negative filling fractions which can be associated with the physics of holes in graphene.
Hossienkhani, Hossien
2016-01-01
A spatially homogeneous and anisotropic Bianchi type I universe has been studied with the ghost dark energy (GDE) in the framework of Brans-Dicke theory. For this purpose, we use the squared sound speed $v_s^2$, whose sign determines the stability of the model. At first, we obtain the equation of state parameter, $\omega_\Lambda$, the deceleration parameter $q$ and the evolution equation of the ghost dark energy. Then, we extend our study to the case of ghost dark energy in a non-isotropic and Brans-Dicke framework and find that the transition of $\omega_\Lambda$ to the phantom regime can be more easily accounted for than when it is restored into the Einstein field equations. Our numerical results show the effects of the interaction and anisotropy on the evolutionary behaviour of the ghost dark energy models. In conclusion, we find evidence that the ghost dark energy in BD theory can lead to a stable universe favored by observations at the present time.
Brane Curvature Corrections to the $\\mathcal{N}=1$ Type II/F-theory Effective Action
Junghans, Daniel
2014-01-01
We initiate a study of corrections to the Kähler potential of $\mathcal{N}=1$ type II/F-theory compactifications that arise from curvature terms in the action of D-branes and orientifold planes. We first show that a recently proposed correction, which was argued to appear at order $\alpha^{\prime 2}g_s$ and be proportional to the intersection volume of D7-branes and O7-planes, is an artifact of an inconvenient field basis in the dual M-theory frame and can be removed by a field redefinition. We then analyze to what extent curvature terms in the DBI and WZ action may still lead to corrections of a similar kind and identify two general mechanisms that can potentially modify the volume dependence of the Kähler potential in the presence of D-branes and O-planes. The first mechanism is related to an induced Einstein-Hilbert term on warped brane worldvolumes, which leads to a shift in the classical volume of the compactification manifold. The resulting corrections are generic and can appear at one-loop orde...
Umar Farooq
2015-04-01
A rosane-type diterpenoid has been isolated from the ethyl acetate-soluble fraction of Stachys parviflora. The structure elucidation was based primarily on 1D- and 2D-NMR techniques, including correlation spectroscopy (COSY), heteronuclear multiple quantum coherence (HMQC), heteronuclear multiple bond correlation (HMBC), and nuclear Overhauser effect spectroscopy (NOESY). Density functional theory calculations have been performed to gain insight into the geometric, electronic and spectroscopic properties of the isolated diterpenoid. The geometries, vibrational spectrum and electronic properties were modeled at B3LYP/6-31G(d), and the theoretical data correlated nicely with the experimental values. Simulated chemical shifts at B3LYP/6-311+G(2d,p) showed much better correlation with the experimental chemical shifts, compared to B3LYP/6-31G(d) and WP04/6-31G(d).
Adaptation of learning resources based on the MBTI theory of psychological types
Amel Behaz
2012-01-01
Today, the learning resources available on the web are increasing significantly. The motivation for the dissemination of knowledge and its acquisition by learners is central to learning. However, learners differ in the ways of learning that suit them best. The objective of the work presented in this paper is to study how models from cognitive theories can be integrated with ontologies for the adaptation of educational resources. The goal is to give the system the capability to reason on the descriptions obtained in order to automatically adapt resources to a learner according to his preferences. We rely on the MBTI model (Myers-Briggs Type Indicator) to take the learning styles of learners into account as a criterion for adaptation.
The Bianchi type-V Dark Energy Cosmology in Self Interacting Brans Dicke Theory of Gravity
Singh, J K
2016-01-01
This paper deals with a spatially homogeneous and totally anisotropic Bianchi type-V cosmological model within the framework of self-interacting Brans-Dicke theory of gravity in the background of anisotropic dark energy (DE) with variable equation of state (EoS) parameter and constant deceleration parameter. The constant deceleration parameter leads to two models of the universe, i.e. a power-law model and an exponential model. The EoS parameter ω and its existing range for the models are in good agreement with the most recent observational data. We notice that ω given by Eq. (37), i.e. ω(t) = log(k₁t), is more suitable for explaining the evolution of the universe. The physical behaviors of the solutions have also been discussed using some physical quantities. Finally, we observe that despite having several prominent features, both of the DE models discussed fail in their details.
So far, second-order perturbation theory has been applied only to the hydrogen molecule. No application has been attempted for any other molecule, probably because of the technical difficulties of such calculations. The purpose of this contribution is to show that calculations of this type are now feasible for larger polyatomic molecules, even on commonly used computers.
Delimited continuations in natural language: quantification and polarity sensitivity
Shan, C
2004-01-01
Making a linguistic theory is like making a programming language: one typically devises a type system to delineate the acceptable utterances and a denotational semantics to explain observations on their behavior. Via this connection, the programming language concept of delimited continuations can help analyze natural language phenomena such as quantification and polarity sensitivity. Using a logical metalanguage whose syntax includes control operators and whose semantics involves evaluation order, these analyses can be expressed in direct style rather than continuation-passing style, and these phenomena can be thought of as computational side effects.
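The central idea can be sketched outside the paper's formal metalanguage: a quantified noun phrase denotes a function over its own (delimited) continuation, "the rest of the sentence", and the order in which quantifiers consume their continuations produces scope ambiguity. A minimal illustrative sketch in continuation-passing style (the toy domain and `saw` facts are invented for the example, not drawn from the paper):

```python
# Generalized quantifiers as functions on their continuations: a quantified
# NP takes a predicate k ("the rest of the sentence") and decides how to
# run it over the domain. Toy domain and hypothetical facts.
students = ["alice", "bob"]
saw = {("alice", "bob"), ("bob", "alice")}

def every(dom):
    return lambda k: all(k(x) for x in dom)

def some(dom):
    return lambda k: any(k(x) for x in dom)

# "Every student saw some student" -- surface-scope reading (every > some):
wide_every = every(students)(lambda x: some(students)(lambda y: (x, y) in saw))

# Inverse-scope reading (some > every), obtained by letting the object
# quantifier capture the larger continuation -- the kind of reordering that
# delimited control operators make explicit:
wide_some = some(students)(lambda y: every(students)(lambda x: (x, y) in saw))
```

With these facts the surface-scope reading is true (each student saw somebody) while the inverse-scope reading is false (no single student was seen by everybody), so the two evaluation orders really denote different propositions.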
Critical phenomena in dynamical Ising-type thin films by effective-field theory
The stationary-state solutions of Ising-type thin films with different numbers of layers in the presence of an external oscillatory field are examined within the effective-field theory. The study focuses on the effects of the external field frequency and amplitude on the overall behavior. Particular attention is paid to the evolution of the special point with dynamic field frequency, corresponding to the critical temperature of the three-dimensional infinite bulk system where the surface and modified exchange parameters are of no importance. Some findings such as the surface enhancement phenomenon and the effect of thickness on the dynamic process are introduced together with some other well-known characteristics. An attempt is made to explain the relations between the competing time scales (the intrinsic microscopic relaxation time of the system and the time period of the external oscillatory field) and the frequency dispersion of the critical temperature coordinate of the special point. - Highlights: • Dynamical ferromagnetic Ising-type thin films were examined. • Variation of dynamical order parameters with temperature was plotted. • The profiles of average magnetizations on each layer were presented. • Dynamic phase boundaries were plotted in related planes. • The frequency dispersion of the related coordinate of the special point was presented.
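The competing-time-scales picture can be illustrated with a much simpler model than the layered effective-field equations of the paper: a single-site mean-field caricature with hypothetical parameters, integrated with the explicit Euler method. Below the static mean-field critical temperature the period-averaged magnetization Q (the dynamic order parameter) stays finite; above it, the magnetization merely oscillates symmetrically about zero:

```python
import math

def dynamic_order_parameter(T, h0=0.3, omega=1.0, z=6, J=1.0,
                            steps_per_period=400, periods=120):
    """Euler-integrate dm/dt = -m + tanh((J*z*m + h0*cos(omega*t)) / T)
    and return m averaged over the second half of the run (a mean-field
    caricature of the dynamic order parameter, not the paper's equations)."""
    dt = (2.0 * math.pi / omega) / steps_per_period
    m, t = 0.9, 0.0  # start from an ordered state
    total = periods * steps_per_period
    acc, count = 0.0, 0
    for i in range(total):
        m += dt * (-m + math.tanh((J * z * m + h0 * math.cos(omega * t)) / T))
        t += dt
        if i >= total // 2:  # discard the transient half
            acc += m
            count += 1
    return acc / count

# Static mean-field critical temperature is z*J = 6 for these parameters.
Q_ordered = dynamic_order_parameter(T=2.0)     # dynamically ordered: Q != 0
Q_disordered = dynamic_order_parameter(T=10.0) # symmetric oscillation: Q ~ 0
```

Sweeping T for fixed h0 and omega locates the dynamic phase boundary, the single-site analogue of the special-point analysis described above.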
Godin Gaston; Boudreau François
2009-01-01
Background Regular physical activity is considered a cornerstone for managing type 2 diabetes. However, in Canada, most individuals with type 2 diabetes do not meet national physical activity recommendations. When designing a theory-based intervention, one should first determine the key determinants of physical activity for this population. Unfortunately, there is a lack of information on this aspect among adults with type 2 diabetes. The purpose of this cross-sectional study is to f...
Holographic-Type Gravitation via Non-Differentiability in Weyl-Dirac Theory
Mihai Pricop; Mugur Răut; Zoltan Borsos; Anca Baciu; Maricel Agop
2013-01-01
In the Weyl-Dirac non-relativistic hydrodynamics approach, the non-linear interaction between the sub-quantum level and the particle gives non-differentiable properties to space. Therefore, the movement trajectories are fractal curves, the dynamics are described by a complex speed field, and the equation of motion is identified with the geodesics of a fractal space, which corresponds to a non-linear Schrödinger equation. The real part of the complex speed field assures, through a quantification co...
Pál, L.; Pázsit, I.
2015-09-01
The signals of fission chambers are usually evaluated with the help of the so-called Campbelling techniques. These are based on the Campbell theorem, which states that if the primary incoming events, generating the detector pulses, are independent, then relationships exist between the moments of various orders of the signal in the current mode. This gives the possibility to determine the mean value of the intensity of the detection events, which is proportional to the static flux, from the higher moments of the detector current, which has certain advantages. However, the main application area of fission chambers is measurements in power reactors where, as is well known, the individual detection events are not independent, due to the branching character of the neutron chains (neutron multiplication). Therefore it is of interest to extend the Campbelling-type theory to the case of correlated neutron events. Such a theory could address two questions: first, to investigate the bias when the traditional Campbell techniques are used for correlated incoming events; and second, to see whether the correlation properties of the detection events, which carry information on the multiplying medium, could be extracted from the measurements. This paper is devoted to the investigation of these questions. The results show that there is a potential possibility to extract the same information from fission chamber signals in the current mode as with the Rossi- or Feynman-alpha methods, or from coincidence and multiplicity measurements, which so far have required detectors working in the pulse mode. It is also shown that application of the standard Campbelling techniques to neutron detection in multiplying systems does not lead to an error in estimating the stationary flux as long as the detector is calibrated in in situ measurements.
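Campbell's theorem for the uncorrelated case is easy to check numerically. For a shot-noise current I(t) = Σᵢ f(t − tᵢ) built from independent Poisson events of rate s, the first two moments obey ⟨I⟩ = s∫f dt and Var(I) = s∫f² dt. The sketch below uses rectangular pulses, for which the sampled current is simply h times a Poisson-distributed pulse count (the rate, height and width are arbitrary illustrative values):

```python
import math
import random

random.seed(42)

s, h, w = 200.0, 3.0, 0.05  # event rate, pulse height, pulse width

def poisson(lam):
    """Knuth's multiplication method; adequate for small lam."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

# For rectangular pulses, the current sampled at any instant equals
# h * N, where N ~ Poisson(s*w) counts the pulses overlapping that instant.
samples = [h * poisson(s * w) for _ in range(100_000)]
mean = sum(samples) / len(samples)
var = sum((v - mean) ** 2 for v in samples) / len(samples)

# Campbell's theorem for this pulse shape:
#   <I>   = s * integral(f)   = s * h * w     = 30
#   Var(I) = s * integral(f^2) = s * h^2 * w  = 90
```

When the events are correlated (branching chains), the Poisson count above would be replaced by an over-dispersed one, and exactly this deviation of the higher moments from the Campbell relations is what carries the multiplication information discussed in the abstract.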
Non-perturbative black holes in Type-IIA String Theory versus the No-Hair conjecture
We obtain the first black hole solution to Type-IIA String Theory compactified on an arbitrary self-mirror Calabi–Yau manifold in the presence of non-perturbative quantum corrections. Remarkably enough, the solution involves multivalued functions, which could lead to a violation of the No-Hair conjecture. We discuss how String Theory forbids such a scenario. However, the possibility still remains open in the context of four-dimensional ungauged Supergravity.
The D^6 R^4 term in type IIB string theory on T^2 and U-duality
Basu, Anirban
2007-01-01
We propose a manifestly U-duality invariant modular form for the D^6 R^4 interaction in the effective action of type IIB string theory compactified on T^2. It receives perturbative contributions up to genus three, as well as non-perturbative contributions from D-instantons and (p,q)-string instantons wrapping T^2. Our construction is based on constraints coming from string perturbation theory, U-duality, the decompactification limit to ten dimensions, and the equality of the perturbative part of the amplitude in type IIA and type IIB string theories. Using duality, parts of the perturbative amplitude are also shown to match exactly the results obtained from eleven-dimensional supergravity compactified on T^3 at one loop. We also obtain parts of the genus one and genus k amplitudes for the D^{2k} R^4 interaction for arbitrary k > 3. We enhance a part of this amplitude to a U-duality invariant modular form.
Mahanta, Snigdhayan
2011-01-01
We study the twisted K-theory and K-homology of some infinite dimensional spaces, like $\\operatorname {SU}(\\infty)$ , in the bivariant setting. Using a general procedure due to Cuntz we construct a bivariant K-theory on the category of separable $\\sigma$ - $C^{*}$ -algebras that generalizes both the twisted K-theory and K-homology of (locally) compact spaces. We construct a bivariant Chern–Connes-type character taking values in a bivariant local cyclic homology. We analyze the structure of th...
Improved BCS-type pairing for the relativistic mean-field theory
A density-dependent δ interaction (DDDI) is proposed in the formalism of BCS-type pairing correlations for exotic nuclei whose Fermi surfaces are close to the threshold of the unbound state. It provides the possibility to pick out those states whose wave functions are concentrated in the nuclear region by making the pairing matrix elements state dependent. On this basis, the energy level distributions, occupations, and ground-state properties are self-consistently studied in the RMF theory with deformation. Calculations are performed for the Sr isotopic chain. A good description of the total energy per nucleon, deformations, two-neutron separation energies and isotope shifts from the proton drip line to the neutron drip line is found. Especially, by comparing the single-particle structure from the DDDI pairing interaction with that from the constant pairing interaction for a very neutron-rich nucleus, it is demonstrated that the DDDI pairing method improves the treatment of pairing in the continuum.
Sandryhaila, Aliaksei; Pueschel, Markus
2010-01-01
A polynomial transform is the multiplication of an input vector $x\\in\\C^n$ by a matrix $\\PT_{b,\\alpha}\\in\\C^{n\\times n},$ whose $(k,\\ell)$-th element is defined as $p_\\ell(\\alpha_k)$ for polynomials $p_\\ell(x)\\in\\C[x]$ from a list $b=\\{p_0(x),\\dots,p_{n-1}(x)\\}$ and sample points $\\alpha_k\\in\\C$ from a list $\\alpha=\\{\\alpha_0,\\dots,\\alpha_{n-1}\\}$. Such transforms find applications in the areas of signal processing, data compression, and function interpolation. Important examples include the discrete Fourier and cosine transforms. In this paper we introduce a novel technique to derive fast algorithms for polynomial transforms. The technique uses the relationship between polynomial transforms and the representation theory of polynomial algebras. Specifically, we derive algorithms by decomposing the regular modules of these algebras as a stepwise induction. As an application, we derive novel $O(n\\log{n})$ general-radix algorithms for the discrete Fourier transform and the discrete cosine transform of type 4.
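To make the definition concrete, the naive O(n²) polynomial transform below recovers the discrete Fourier transform as the special case where the polynomial list is the monomial basis p_ℓ(t) = t^ℓ and the sample points are the n-th roots of unity; it is this baseline that the paper's algebraic decompositions accelerate to O(n log n). (Notation is adapted from the abstract; the sketch is an illustration, not the paper's algorithm.)

```python
import cmath

def polynomial_transform(b, alpha, x):
    """Return P x, where P[k][l] = b[l](alpha[k]) -- naive O(n^2) evaluation."""
    n = len(x)
    return [sum(b[l](alpha[k]) * x[l] for l in range(n)) for k in range(n)]

n = 4
b = [lambda t, l=l: t ** l for l in range(n)]                  # p_l(t) = t^l
alpha = [cmath.exp(-2j * cmath.pi * k / n) for k in range(n)]  # roots of unity

x = [1.0, 2.0, 3.0, 4.0]
X = polynomial_transform(b, alpha, x)

# Direct DFT sum for comparison: X[k] = sum_l x[l] * exp(-2*pi*i*k*l/n)
X_dft = [sum(x[l] * cmath.exp(-2j * cmath.pi * k * l / n) for l in range(n))
         for k in range(n)]
```

Other choices of b and alpha yield the discrete cosine transforms mentioned in the abstract (Chebyshev polynomials sampled at cosine-spaced points).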
Didarloo, A; Shojaeizadeh, D; Gharaaghaji Asl, R; Niknami, S; Khorami, A
2014-06-01
The study evaluated the efficacy of the Theory of Reasoned Action (TRA), extended with self-efficacy, to predict dietary behaviour in a group of Iranian women with type 2 diabetes. A sample of 352 diabetic women referred to Khoy Diabetes Clinic, Iran, were selected and given a self-administered survey to assess eating behaviour, using the extended TRA constructs. Bivariate correlations and Enter-method regression analyses of the extended TRA model were performed with SPSS software. Overall, the proposed model explained 31.6% of the variance in behavioural intention and 21.5% of the variance in dietary behaviour. Among the model constructs, self-efficacy was the strongest predictor of intentions and dietary practice. In addition to the model variables, two sociodemographic factors, patients' visit intervals and their source of information about diabetes, were also associated with the dietary behaviours of the diabetic women. This research has highlighted the relative importance of the extended TRA constructs for behavioural intention and subsequent behaviour. The use of the present research model in designing educational interventions to increase adherence to dietary behaviours among diabetic patients is therefore recommended. PMID:25076670
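The "percentage of variance explained" reported above is the regression $R^2$. As a minimal sketch of what that statistic means, here is a single-predictor ordinary least-squares fit on hypothetical scores (the data are invented for illustration; the study's actual model has multiple predictors):

```python
def r_squared(xs, ys):
    # Ordinary least-squares fit y = a + b*x, returning the
    # proportion of variance in y explained by x.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx
    ss_res = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return 1 - ss_res / ss_tot

# Hypothetical self-efficacy scores vs. dietary-behaviour scores
se = [2, 3, 4, 5, 6, 7]
diet = [1.9, 3.2, 3.8, 5.1, 5.9, 7.1]
r2 = r_squared(se, diet)  # close to 1: x explains nearly all variance in y
```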
From Peierls brackets to a generalized Moyal bracket for type-I gauge theories
Esposito, G; Esposito, Giampiero; Stornaiolo, Cosimo
2006-01-01
In the space-of-histories approach to gauge fields and their quantization, the Maxwell, Yang-Mills and gravitational fields are well known to share the property of being type-I theories, i.e. Lie brackets of the vector fields which leave the action functional invariant are linear combinations of such vector fields, with coefficients of linear combination given by structure constants. The corresponding gauge-field operator in the functional integral for the in-out amplitude is an invertible second-order differential operator. For such an operator, we consider advanced and retarded Green functions giving rise to a Peierls bracket among group-invariant functionals. Our Peierls bracket is a Poisson bracket on the space of all group-invariant functionals in two cases only: either the gauge-fixing is arbitrary but the gauge fields lie on the dynamical sub-space; or the gauge-fixing is a linear functional of gauge fields, which are generic points of the space of histories. In both cases, the resulting Peierls bracke...
Chern class identities from tadpole matching in type IIB and F-theory
In light of Sen's weak coupling limit of F-theory as a type IIB orientifold, the compatibility of the tadpole conditions leads to a non-trivial identity relating the Euler characteristics of an elliptically fibered Calabi-Yau fourfold and of certain related surfaces. We present the physical argument leading to the identity, and a mathematical derivation of a Chern class identity which confirms it, after taking into account singularities of the relevant loci. This identity of Chern classes holds in arbitrary dimension, and for varieties that are not necessarily Calabi-Yau. Singularities are essential in both the physics and the mathematics arguments: the tadpole relation may be interpreted as an identity involving stringy invariants of a singular hypersurface, and corrections for the presence of pinch-points. The mathematical discussion is streamlined by the use of Chern-Schwartz-MacPherson classes of singular varieties. We also show how the main identity may be obtained by applying 'Verdier specialization' to suitable constructible functions.
Exceptional Field Theory I: $E_{6(6)}$ covariant Form of M-Theory and Type IIB
Hohm, Olaf
2013-01-01
We present the details of the recently constructed $E_{6(6)}$ covariant extension of 11-dimensional supergravity. This theory requires a 5+27 dimensional spacetime in which the `internal' coordinates transform in the $\\bar{\\bf 27}$ of $E_{6(6)}$. All fields are $E_{6(6)}$ tensors and transform under (gauged) internal generalized diffeomorphisms. The `Kaluza-Klein' vector field acts as a gauge field for the $E_{6(6)}$ covariant `E-bracket' rather than a Lie bracket, requiring the presence of two-forms akin to the tensor hierarchy of gauged supergravity. We construct the complete and unique action that is gauge invariant under generalized diffeomorphisms in the internal and external coordinates. The theory is subject to covariant section constraints on the derivatives, implying that only a subset of the extra 27 coordinates is physical. We give two solutions of the section constraints: the first preserves GL(6) and embeds the action of the complete (i.e. untruncated) 11-dimensional supergravity; the second pres...
Many inner ear disorders, including Meniere's disease, are believed to be based on endolymphatic hydrops. We evaluated a newly proposed method for semi-quantification of endolymphatic size in patients with suspected endolymphatic hydrops that uses 2 kinds of processed magnetic resonance (MR) images. Twenty-four consecutive patients underwent heavily T2-weighted (hT2W) MR cisternography (MRC), hT2W 3-dimensional (3D) fluid-attenuated inversion recovery (FLAIR) with inversion time of 2250 ms (positive perilymph image, PPI), and hT2W-3D-IR with inversion time of 2050 ms (positive endolymph image, PEI) 4 hours after intravenous administration of single-dose gadolinium-based contrast material (IV-SD-GBCM). Two images were generated using 2 new methods to process PPI, PEI, and MRC. Three radiologists contoured the cochlea and vestibule on MRC, copied regions of interest (ROIs) onto the 2 kinds of generated images, and semi-quantitatively measured the size of the endolymph for the cochlea and vestibule by setting a threshold pixel value. For each observer, the endolymphatic sizes measured on the 2 kinds of generated images showed a strong linear correlation for both the cochlea and the vestibule. The Pearson correlation coefficients (r) were 0.783, 0.734, and 0.800 in the cochlea and 0.924, 0.930, and 0.933 in the vestibule (P<0.001, for all). In both the cochlea and vestibule, repeated-measures analysis of variance showed no statistically significant difference between observers. Use of the 2 kinds of images generated from MR images obtained 4 hours after IV-SD-GBCM might enable semi-quantification of endolymphatic size with little observer dependency. (author)
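The semi-quantification step, setting a threshold pixel value within a copied ROI, can be sketched as follows. The pixel values and threshold below are hypothetical; real use would operate on the generated MR images.

```python
def endolymph_fraction(roi_pixels, threshold):
    # Fraction of ROI pixels at or above the threshold value: a simple
    # stand-in for the semi-quantification of endolymphatic size.
    hits = sum(1 for p in roi_pixels if p >= threshold)
    return hits / len(roi_pixels)

roi = [12, 40, 55, 9, 61, 58, 20, 70]        # hypothetical pixel values
frac = endolymph_fraction(roi, threshold=50)  # 4 of 8 pixels pass
```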
MAMA Software Features: Visual Examples of Quantification
Ruggiero, Christy E. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Porter, Reid B. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2014-05-20
This document shows examples of the results from quantifying objects of certain sizes and types in the software. It is intended to give users a better feel for some of the quantification calculations, and, more importantly, to help users understand the challenges with using a small set of ‘shape’ quantification calculations for objects that can vary widely in shapes and features. We will add more examples to this in the coming year.
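As an illustration of the kind of 'shape' quantification calculation discussed, here is one common measure, circularity. This is a generic textbook example, not necessarily one of MAMA's own calculations.

```python
import math

def circularity(area, perimeter):
    # 4*pi*A / P^2: equals 1 for a perfect disc and decreases for
    # more irregular or elongated shapes.
    return 4 * math.pi * area / perimeter ** 2

# A circle of radius 2 vs. a 4x4 square (hypothetical objects)
circ = circularity(math.pi * 4, 2 * math.pi * 2)
sq = circularity(16.0, 16.0)  # pi/4, about 0.785
```

The example hints at the challenge the document describes: a single scalar like circularity cannot distinguish the many ways a shape can deviate from a disc.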
The D^4 R^4 term in type IIB string theory on T^2 and U-duality
Basu, Anirban
2007-01-01
We propose a manifestly U-duality invariant modular form for the D^4 R^4 interaction in type IIB string theory compactified on T^2. It receives perturbative contributions up to two loops, and non-perturbative contributions from D-instantons and (p,q) string instantons wrapping T^2. We provide evidence for this modular form by showing that the coefficients at tree level and at one loop precisely match those obtained using string perturbation theory. Using duality, parts of the perturbative amplitude are also shown to match exactly the results obtained from eleven dimensional supergravity compactified on T^3 at one loop. Decompactifying the theory to nine dimensions, we obtain a U-duality invariant modular form, whose coefficients at tree level and at one loop agree with string perturbation theory.
Anderson, Edward
2013-01-01
I already showed that Kendall's shape geometry work provides the geometrical description of the reduced configuration spaces (alias shape spaces) of Barbour's relational mechanics. I now describe the extent to which Kendall's subsequent statistical application to examples such as the `standing stones problem' realizes further ideas along the lines of Barbour-type timeless records theories, albeit just at the classical level.
Lu, Lyan-Ywan; Chen, Pei-Rong; Pong, Kuan-Wen
2016-03-01
Although it has been proven that seismic isolation is an effective technology for seismic protection of structures and equipment, most existing isolation systems are for mitigating horizontal ground motions, and in practice there are very few vertical isolation systems. Part of the reason is a conflict in the demands on the isolation stiffness. In other words, a vertical isolation system must have sufficient vertical rigidity to sustain the weight of the isolated object, while it must also have sufficient flexibility in order to elongate the vibration period under seismic excitation. In order to overcome this difficulty, a novel system is proposed in this study, called an inertia-type vertical isolation system (IVIS). The primary difference between the IVIS and a traditional system is that the former has an additional leverage mechanism with a counterweight. The counterweight provides a static uplifting force and an extra dynamic inertia force, such that the effective vertical stiffness of the IVIS is higher in its static state and lower in the dynamic one. The theory underlying the IVIS is developed and verified experimentally by a seismic simulation test in this work. The results show that the IVIS leads to less static settlement and, at the same time, a lower effective isolation frequency. The test results also demonstrate that the isolator displacement demand of the IVIS is only about 30-40 percent of that of the traditional system for all types of earthquakes considered. With regard to the reduction of acceleration response, the IVIS is particularly effective for near-fault earthquakes or near-resonant excitations, but is less effective for far-field earthquakes with more high-frequency content, as compared with the traditional system.
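A plausible single-degree-of-freedom idealization of the mechanism described above (the lever arm ratio $r$ and counterweight mass $m_c$ are my own illustrative notation, not taken from the paper):

```latex
% Payload m on a spring k; a counterweight m_c coupled through a lever of
% arm ratio r adds a static uplift r m_c g but a dynamic inertia r^2 m_c,
% so the static deflection shrinks while the isolation frequency drops.
f_{\mathrm{IVIS}} = \frac{1}{2\pi}\sqrt{\frac{k}{m + r^{2} m_{c}}}
  \;<\; \frac{1}{2\pi}\sqrt{\frac{k}{m}},
\qquad
\delta_{\mathrm{static}} = \frac{(m - r\,m_{c})\,g}{k} .
```

Under this sketch, the lever-amplified inertia lowers the effective dynamic frequency without softening the spring that carries the static weight, consistent with the behavior reported in the abstract.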
Advances in type-2 fuzzy sets and systems theory and applications
Mendel, Jerry; Tahayori, Hooman
2013-01-01
This book explores recent developments in the theoretical foundations and novel applications of general and interval type-2 fuzzy sets and systems, including: algebraic properties of type-2 fuzzy sets, geometric-based definition of type-2 fuzzy set operators, generalizations of the continuous KM algorithm, adaptiveness and novelty of interval type-2 fuzzy logic controllers, relations between conceptual spaces and type-2 fuzzy sets, type-2 fuzzy logic systems versus perceptual computers; modeling human perception of real world concepts with type-2 fuzzy sets, different methods for generating membership functions of interval and general type-2 fuzzy sets, and applications of interval type-2 fuzzy sets to control, machine tooling, image processing and diet. The applications demonstrate the appropriateness of using type-2 fuzzy sets and systems in real world problems that are characterized by different degrees of uncertainty.
Brane webs in the presence of an $O5^-$-plane and 4d class S theories of type D
Zafrir, Gabi
2016-01-01
In this article we conjecture a relationship between 5d SCFTs, which can be engineered by 5-brane webs in the presence of an $O5^-$-plane, and 4d class S theories of type D. The specific relation is that compactification of the former on a circle leads to the latter. We present evidence for this conjecture. One piece of evidence, which is also an interesting application, is that the conjecture suggests identifications between different class S theories. This can in turn be tested by comparing their central charges.
A Novel Framework for Quantification of Supply Chain Risks
Qazi, Abroon; Quigley, John; Dickson, Alex
2014-01-01
Supply chain risk management is an active area of research, and there is a research gap in exploring established risk quantification techniques from other fields for application in the context of supply chain management. We have developed a novel framework for the quantification of supply chain risks that integrates two techniques, Bayesian belief networks and game theory. A Bayesian belief network can capture interdependency between risk factors, and game theory can assess risks associated with confl...
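The Bayesian belief network component is only named, not specified, in the abstract. As an illustration of the general idea of capturing interdependency between risk factors, here is a minimal sketch of exact inference by enumeration over a hypothetical three-node risk network; all structure and probabilities are invented for illustration.

```python
# Hypothetical network: two root risks feeding one consequence node.
p_supplier_fail = 0.10
p_port_strike = 0.05
# P(delay | supplier_fail, port_strike): illustrative conditional table
p_delay = {(True, True): 0.95, (True, False): 0.60,
           (False, True): 0.40, (False, False): 0.02}

def marginal_delay():
    # Enumerate the joint distribution and sum out the parent nodes.
    total = 0.0
    for s in (True, False):
        for p in (True, False):
            ps = p_supplier_fail if s else 1 - p_supplier_fail
            pp = p_port_strike if p else 1 - p_port_strike
            total += ps * pp * p_delay[(s, p)]
    return total

risk = marginal_delay()  # overall probability of a delivery delay
```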
Spectral analysis of polynomial potentials and its relation with ABJ/M-type theories
Garcia del Moral, M.P., E-mail: garciamormaria@uniovi.e [Departamento de Fisica, Universidad de Oviedo, Calvo Sotelo 18, 33007 Oviedo (Spain); Martin, I., E-mail: isbeliam@usb.v [Departamento de Fisica, Universidad Simon Bolivar, Apartado 89000, Caracas 1080-A (Venezuela, Bolivarian Republic of); Navarro, L., E-mail: lnavarro@ma.usb.v [Departamento de Matematicas, Universidad Simon Bolivar, Apartado 89000, Caracas 1080-A (Venezuela, Bolivarian Republic of); Perez, A.J., E-mail: ajperez@ma.usb.v [Departamento de Matematicas, Universidad Simon Bolivar, Apartado 89000, Caracas 1080-A (Venezuela, Bolivarian Republic of); Restuccia, A., E-mail: arestu@usb.v [Departamento de Fisica, Universidad Simon Bolivar, Apartado 89000, Caracas 1080-A (Venezuela, Bolivarian Republic of)
2010-11-01
We obtain a general class of polynomial potentials for which the Schroedinger operator has a discrete spectrum. This class includes all the scalar potentials in membrane, 5-brane, p-brane, multiple M2-brane, BLG and ABJM theories. We provide a proof of the discreteness of the spectrum of the associated Schroedinger operators. This is the first step towards analyzing the BLG and ABJM supersymmetric theories from a non-perturbative point of view.
Godin Gaston
2009-06-01
Background: Regular physical activity is considered a cornerstone for managing type 2 diabetes. However, in Canada, most individuals with type 2 diabetes do not meet national physical activity recommendations. When designing a theory-based intervention, one should first determine the key determinants of physical activity for this population. Unfortunately, there is a lack of information on this aspect among adults with type 2 diabetes. The purpose of this cross-sectional study is to fill this gap using an extended version of Ajzen's Theory of Planned Behavior (TPB) as reference. Methods: A total of 501 individuals with type 2 diabetes residing in the Province of Quebec (Canada) completed the study. Questionnaires were sent and returned by mail. Results: Multiple hierarchical regression analyses indicated that TPB variables explained 60% of the variance in intention. The addition of other psychosocial variables in the model added 7% of the explained variance. The final model included perceived behavioral control (β = .38, p ...). Conclusion: The findings suggest that interventions aimed at individuals with type 2 diabetes should ensure that people have the necessary resources to overcome potential obstacles to behavioral performance. Interventions should also favor the development of feelings of personal responsibility to exercise and promote the advantages of exercising for individuals with type 2 diabetes.
Microscopic entropy of the most general BPS black hole for type II/M-theory on tori
In the present dissertation we review the statistical computation of the entropy for the most general static BPS black hole solution in the framework of toroidally compactified type II/M-theory. This work is part of a research project aimed at the study of the microscopic properties of this kind of solutions in relation to U-duality invariants (e.g. the entropy) computed on the corresponding macroscopic (supergravity) description. (orig.)
Clark, Stephen; Zinchenko, Maxim
2010-01-01
We prove local and global versions of Borg-Marchenko-type uniqueness theorems for half-lattice and full-lattice CMV operators (CMV for Cantero, Moral, and Velazquez) with matrix-valued Verblunsky coefficients. While our half-lattice results are formulated in terms of matrix-valued Weyl-Titchmarsh functions, our full-lattice results involve the diagonal and main off-diagonal Green's matrices. We also develop the basics of Weyl-Titchmarsh theory for CMV operators with matrix-valued Verblunsky coefficients as this is of independent interest and an essential ingredient in proving the corresponding Borg-Marchenko-type uniqueness theorems.
Liu, Qiang; Nie, Jianhui; Huang, Weijin; Meng, Shufang; Yuan, Baozhu; Gao, Dongying; Xu, Xuemei; Wang, Youchun
2012-01-01
Background The presence of various levels of Adenovirus serotype 5 neutralizing antibodies (Ad5NAb) is thought to contribute to the inconsistent clinical results obtained from vaccination and gene therapy studies. Currently, two platforms based on high-throughput technology are available for Ad5NAb quantification, chemiluminescence- and fluorescence-based assays. The aim of this study was to compare the results of the two assays in the seroepidemiology of Ad5NAb in a local population of donors. Methodology/Principal Findings The fluorescence-based neutralizing antibody detection test (FRNT) using recombinant Ad5-EGFP virus and the chemiluminescence-based neutralizing antibody test (CLNT) using Ad5-Fluc were developed and standardized for detecting the presence of Ad5NAb in serum samples from the population of donors in Beijing and Anhui provinces, China. First, the overall percentage of people positive for Ad5NAb as determined by CLNT was higher than that obtained by FRNT (85.4 vs 69.9%, p<0.001). There was an 84.5% concordance between the two assays for the 206 samples tested (144 positive in both assays and 30 negative in both assays). All 32 discordant sera were CLNT-positive/FRNT-negative and were confirmed positive by western blot. Secondly, for all 144 sera positive by both assays, the two assays showed high correlation (r = 0.94, p<0.001) and close agreement (mean difference: 0.395 log10, 95% CI: −0.054 log10 to 0.845 log10). Finally, it was found by both assays that there was no significant difference in titer or prevalence by gender (p = 0.503 vs 0.818 for the two assays); however, age range (p = 0.049 vs 0.010) and geographic origin (p = 0.007 vs 0.011) were correlated with Ad5NAb prevalence in northern regions of China. Conclusion The CLNT assay was simpler and had higher sensitivity than the FRNT assay for determining Ad5NAb titers.
It is strongly suggested that the CLNT assay be used for future epidemiological studies of Ad5NAb in other localities. PMID:22655054
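The agreement statistics reported above, a Pearson correlation and a mean difference with a 95% CI, can be sketched on hypothetical paired log10 titers; the data below and the normal-approximation CI are illustrative assumptions, not the study's values or exact method.

```python
import math

def pearson_r(a, b):
    # Sample Pearson correlation coefficient between paired lists.
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / math.sqrt(va * vb)

def mean_diff_ci(a, b):
    # Mean paired difference with an approximate 95% CI
    # (normal approximation, 1.96 * standard error).
    d = [x - y for x, y in zip(a, b)]
    n = len(d)
    m = sum(d) / n
    sd = math.sqrt(sum((x - m) ** 2 for x in d) / (n - 1))
    half = 1.96 * sd / math.sqrt(n)
    return m, m - half, m + half

# Hypothetical log10 titers for the same sera measured by two assays
clnt = [2.1, 2.8, 3.0, 3.6, 4.2, 2.5]
frnt = [1.8, 2.4, 2.7, 3.1, 3.9, 2.2]
r = pearson_r(clnt, frnt)
m, lo, hi = mean_diff_ci(clnt, frnt)
```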
School Type and Academic Culture: Evidence for the Differentiation-Polarization Theory
Van Houtte, Mieke
2006-01-01
Several decades ago it was shown that the differentiation of pupils into tracks and streams led to a polarization into "anti-school" and "pro-school" cultures. Support for this differentiation-polarization theory is mainly based on case studies. This paper presents findings of a quantitative study in Belgium (Flanders). Attention is given to the…
Communication: Cosolvency and cononsolvency explained in terms of a Flory-Huggins type theory
Dudowicz, Jacek; Freed, Karl F.; Douglas, Jack F.
2015-10-01
Standard Flory-Huggins (FH) theory is utilized to describe the enigmatic cosolvency and cononsolvency phenomena for systems of polymers dissolved in mixed solvents. In particular, phase boundaries (specifically upper critical solution temperature spinodals) are calculated for solutions of homopolymers B in pure solvents and in binary mixtures of small molecule liquids A and C. The miscibility (or immiscibility) patterns for the ternary systems are classified in terms of the FH binary interaction parameters $\{\chi_{\alpha\beta}\}$ and the ratio $r = \phi_A/\phi_C$ of the concentrations $\phi_A$ and $\phi_C$ of the two solvents. The trends in miscibility are compared to those observed for blends of random copolymers ($A_xC_{1-x}$) with homopolymers (B) and to those deduced for A/B/C solutions of polymers B in liquid mixtures of small molecules A and C that associate into polymeric clusters $(A_pC_q)_i$ ($i = 1, 2, \dots, \infty$). Although the classic FH theory is able to explain cosolvency and cononsolvency phenomena, the theory does not include a consideration of the mutual association of the solvent molecules and the competitive association between the solvent molecules and the polymer. These interactions can be incorporated in refinements of the FH theory, and the present paper provides a foundation for such extensions for modeling the rich thermodynamics of polymers in mixed solvents.
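As background for the spinodal calculations described above, the standard binary FH spinodal (the simplest case, in my notation, rather than the paper's ternary expressions) follows from the mean-field free energy per lattice site:

```latex
% Binary Flory-Huggins free energy per lattice site for a polymer of
% length N at volume fraction \phi; the spinodal is \partial^2 f/\partial\phi^2 = 0.
\frac{f}{k_B T} = \frac{\phi}{N}\ln\phi + (1-\phi)\ln(1-\phi) + \chi\,\phi(1-\phi),
\qquad
\frac{1}{N\phi} + \frac{1}{1-\phi} - 2\chi = 0 .
```

For the ternary A/B/C systems of the paper, the same second-derivative (stability-matrix) condition is applied to a free energy with three volume fractions and the pairwise $\chi_{\alpha\beta}$ parameters.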
On Heisenberg's algebra of field theory and vector type fibre spaces
The generalization of commutation relations between the elements of the Lie algebra of the Poincare group and the operators of boson fields is presented. The elements of Heisenberg's algebra of boson field theory are considered as geometrical objects related to the differentiable manifold of space-like hyperplanes of Minkowski space. This makes it possible to obtain the commutation relations using fibre spaces and similar constructions. (D.Gy.)
BPS-type equations in the non-anticommutative N=2 supersymmetric U(1) gauge theory
Ketov, Sergei V.; Sasaki, Shin (Department of Physics, Kitasato University, Sagamihara, 252-0373, Japan)
2004-01-01
We investigate the equations of motion in the four-dimensional non-anticommutative N=2 supersymmetric U(1) gauge field theory, in the search for BPS configurations. The BPS-like equations, generalizing the abelian (anti)self-duality conditions, are proposed. We prove full solvability of our BPS-like equations, as well as their consistency with the equations of motion. Certain restrictions on the allowed scalar field values are also found. The surviving supersymmetry is briefly discussed as well.
The double Mellin-Barnes type integrals and their applications to convolution theory
Hai, Nguyen Thanh
1992-01-01
This book presents new results in the theory of the double Mellin-Barnes integrals, popularly known as the general H-function of two variables. A general integral convolution is constructed by the authors; it contains the Laplace convolution as a particular case and possesses a factorization property for the one-dimensional H-transform. Many examples of convolutions for classical integral transforms are obtained, and they can be applied to the evaluation of series and integrals.
Electrostatic field in superconductors IV: theory of Ginzburg-Landau type
Lipavský, Pavel; Koláček, Jan
2009-01-01
Vol. 23, Nos. 20-21 (2009), pp. 4505-4511. ISSN 0217-9792. R&D Projects: GA ČR GA202/04/0585; GA ČR GA202/05/0173; GA AV ČR IAA1010312. Institutional research plan: CEZ:AV0Z10100521. Keywords: superconductivity; Ginzburg-Landau theory. Subject RIV: BM - Solid Matter Physics; Magnetism. Impact factor: 0.408 (2009)
A method is proposed that makes it possible to determine whether a timelike singularity corresponds to a point, linear, or other type of gravitational field source. It is shown that in the general theory of relativity it is also possible to have sources of a quite different type with no analogs in a space of finite curvature. An analysis is made of some well-known solutions containing timelike singularities whose type varies depending on the signs of the functions that occur in the solutions. The form of the solution near simple linear sources [W. Israel, Phys. Rev. D15, 935 (1977)] and generalized anisotropic solutions [S. L. Parnovsky, Physica (Utrecht) 104A, 210 (1980); E. M. Lifshitz and I. M. Khalatnikov, Sov. Phys. Usp. 6, 359 (1963)] is determined more accurately; the space-time described by the γ metric (3) is completely investigated; and the form of the metric near the ends and singular points of linear Weyl singularities is found
Navarro, Jordana N; Jasinski, Jana L
2015-01-01
This article presents an analysis of the relationship between online sexual offenders' demographic background and characteristics indicative of motivation and offense type. Specifically, we investigate whether these characteristics can distinguish different online sexual offender groups from one another as well as inform routine activity theorists on what potentially motivates perpetrators. Using multinomial logistic regression, this study found that online sexual offenders' demographic backgrounds and characteristics indicative of motivation do vary by offense types. Two important implications of this study are that the term "online sexual offender" encompasses different types of offenders, including some who do not align with mainstream media's characterization of "predators," and that the potential offender within routine activity theory can be the focus of empirical investigation rather than taken as a given in research. PMID:26480242
Li, Yiping; Handberg, K.J.; Kabell, Susanne; Kusk, M.; Zhang, M.F.; Jorgensen, P.H.
2007-01-01
In the present study, different types of infectious bursal disease virus (IBDV), the virulent strain DK01, the classic strain F52/70 and the vaccine strain D78, were quantified and detected in infected bursa of Fabricius (BF) and cloacal swabs using quantitative real-time RT-PCR with SYBR Green dye. For selection...
An attempt has been made to obtain a strategy coherent with the available instruments that could also be implemented with future developments. A calculation methodology was developed for fuel reloads in PWR reactors, which involves cell calculation with the HAMMER-TECHNION code and neutronics calculation with the CITATION code. The management strategy adopted consists of changing fuel element positions at the beginning of each reactor cycle in order to decrease the radial peak factor. Two-dimensional, two-group first-order perturbation theory was used for the mathematical modeling. (L.C.J.A.)
Weyl Group Multiple Dirichlet Series Type A Combinatorial Theory (AM-175)
Brubaker, Ben; Friedberg, Solomon
2011-01-01
Weyl group multiple Dirichlet series are generalizations of the Riemann zeta function. Like the Riemann zeta function, they are Dirichlet series with analytic continuation and functional equations, having applications to analytic number theory. By contrast, these Weyl group multiple Dirichlet series may be functions of several complex variables and their groups of functional equations may be arbitrary finite Weyl groups. Furthermore, their coefficients are multiplicative up to roots of unity, generalizing the notion of Euler products. This book proves foundational results about these series an
Massless particles, orthosymplectic symmetry and another type of Kaluza-Klein theory
The superalgebra osp(8/1) is intimately related to the twistor program. Its most singular representation has the following property: restricted to the conformal subalgebra it contains each and every massless representation exactly once. In other words, one irreducible representation of osp(8/1) describes all massless particles with maximal efficiency. It is believed that such unification is required if massless fields of high spins are to have self-consistent interactions. There are other reasons for studying massless particles of all spins simultaneously. There is a very appealing model in which massless particles are viewed as states of two so(3,2) singletons. The astounding fact is that all free two-singleton states are precisely massless. The most singular representation of osp(8/2) is irreducible on osp(8/1) and completely determined by the latter representation. It finds direct application in supergravity theories. The most interesting Sp(8/R) homogeneous space is 10-dimensional. The action of the conformal subgroup leaves invariant a unique 4-dimensional submanifold that can be identified with space time. Kaluza-Klein expansion of the scalar field on 10-space, around this 4-dimensional manifold, leads to a field theory of massless particles with all integer spins on space time. A supersymmetric extension is also possible. (Auth.)
Ståhl, M.; Kokotovic, B; Hjulsager, C.K.; Breum, S.Ø.; Angen, Ø.
2011-01-01
Four quantitative PCR (qPCR) assays were evaluated for quantitative detection of Brachyspira pilosicoli, Lawsonia intracellularis, and E. coli fimbrial types F4 and F18 in pig feces. Standard curves were based on feces spiked with the respective reference strains. The detection limits from the spiking experiments were 10^2 bacteria/g feces for Bpilo-qPCR and Laws-qPCR, and 10^3 CFU/g feces for F4-qPCR and F18-qPCR. The PCR efficiency for all four qPCR assays was between 0.91 and...
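The standard curves and PCR efficiencies mentioned above follow the usual linear relation between Ct and log10 template amount. A minimal sketch with hypothetical calibration points, using the common definition E = 10^(-1/slope) - 1 (a slope near -3.32 corresponds to perfect doubling per cycle); the data are invented and not from the study:

```python
def fit_line(xs, ys):
    # Least-squares slope and intercept for a standard curve.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# Hypothetical standard curve: log10(copies per reaction) vs. Ct
logq = [2, 3, 4, 5, 6]
ct = [33.2, 29.9, 26.6, 23.3, 20.0]
slope, intercept = fit_line(logq, ct)
efficiency = 10 ** (-1 / slope) - 1  # ~1.0 means the amplicon doubles each cycle
```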
Open and closed bosonic string theories are discussed in a classical framework, highlighting the physical interpretation of conformal symmetry and the Virasoro (1970) algebra. The quantization of bosonic strings is carried out within the old covariant operator formalism. This method is much less elegant and powerful than BRST quantization, but it quickly reveals the physical content of the quantum theory. Generalization to theories with fermionic degrees of freedom is then introduced: the Neveu-Schwarz (1971) and Ramond (1971) models, their reduced, two-dimensional supersymmetry, and the Gliozzi, Scherk and Olive (1977) projection, which leads to a supersymmetric theory in the usual sense of the term
A Gaussian-type quadrature formula is derived for Lebesgue-Stieltjes integrals pertaining to neutron transport theory (one-velocity, isotropic scattering, plane geometry). The quadrature formula originates from an orthogonality property satisfied by the well-known g_n(c, ν) functions which appear in the solution by Legendre expansion of the transport equation. The quadrature formula thus obtained reduces to the classical Gaussian one in the case of a purely capturing medium. An application to the Milne problem is given. Examples of numerical quadratures are carried out in the appendix
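The classical Gaussian rule that the formula reduces to can be sketched as follows. The two-point Gauss-Legendre rule shown here is the textbook case, not the paper's transport-specific quadrature: with nodes at ±1/√3 and unit weights, it integrates every polynomial of degree ≤ 3 exactly on [-1, 1].

```python
import math

def gauss2(f):
    # Two-point Gauss-Legendre rule on [-1, 1]: nodes +/- 1/sqrt(3),
    # weights 1; exact for all polynomials of degree <= 3.
    x = 1 / math.sqrt(3)
    return f(-x) + f(x)

# Exact value of the integral of 3t^2 + t^3 over [-1, 1] is 2
integral = gauss2(lambda t: 3 * t ** 2 + t ** 3)
```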
The classical Yang-Baxter equation and the associated Yangian symmetry of gauged WZW-type theories
Itsios, Georgios; Siampos, Konstantinos; Torrielli, Alessandro
2014-01-01
We construct the Lax-pair, the classical monodromy matrix and the corresponding solution of the Yang-Baxter equation, for a class of integrable gauged WZW-type theories interpolating between the WZW model and the non-Abelian T-dual of the principal chiral model for a simple group. We derive in full detail the Yangian algebra using two independent methods: by computing the algebra of the non-local charges and alternatively through an expansion of the Maillet brackets for the monodromy matrix. As a byproduct, we also provide a detailed general proof of the Serre relations for the Yangian symmetry.
Five-Dimensional Bianchi Type-V Space-Time in f(R,T) Theory of Gravity
Ladke, L.S.
2016-02-01
We study the spatially homogeneous anisotropic Bianchi type-V universe in f(R,T) theory of gravity, where R is the Ricci scalar and T is the trace of the energy-momentum tensor. We assume the variation law of the mean Hubble parameter and a constant deceleration parameter to find two different five-dimensional exact solutions of the modified field equations. The first solution yields a singular model for n ≠ 0 while the second gives a nonsingular model for n = 0. The physical quantities are discussed for both models in the future evolution of the universe.
S D Katore; R S Rane; K S Wankhade
2011-04-01
Bianchi type-I massive string cosmological model for perfect fluid distribution in the presence of a magnetic field is investigated in Rosen's [Gen. Relativ. Gravit. 4, 435 (1973)] bimetric theory of gravitation. To obtain the deterministic model in terms of cosmic time, we have used the condition $A = (BC)^n$, where n is a constant, between the metric potentials. The magnetic field is due to the electric current produced along the x-axis with infinite electrical conductivity. Some physical and geometrical properties of the exhibited model are discussed and studied.
Calculation of Fayet–Iliopoulos D-term in type I string theory revisited: T6/Z3 orbifold case
The string one-loop computation of the Fayet–Iliopoulos D-term in type I string theory in the case of T6/Z3 orbifold compactification associated with the annulus (planar) and Möbius strip string worldsheet diagrams is reexamined. The mass extracted from the sum of these amplitudes through a limiting procedure is found to be non-vanishing, which is contrary to the earlier computation. The sum can be made finite by a rescaling of the modular parameter in the closed string channel.
Types of two-dimensional N = 4 superconformal field theories
Abbas Ali
2003-12-01
Various types of N = 4 superconformal symmetries in two dimensions are considered. It is proposed that apart from the well-known cases of SU(2) and SU(2)×SU(2)×U(1), their Kac–Moody symmetry can also be SU(2)×(U(1))⁴. Operator product expansions for the last case are derived. A complete free field realization for the same is obtained.
Ge, Junyi; Gutierrez, Joffre; Cuppens, Jo; Moshchalkov, Victor V., E-mail: Victor.Moshchalkov@fys.kuleuven.be
2014-08-15
Highlights: Flux tubes with various vorticities are observed in the intermediate state of a type-I superconducting film. The stability of the stripe pattern is probed under the drive of an ac field. The stripe patterns represent a more stable state compared with the flux-tube state. All the stripe patterns have very close energies. - Abstract: The intermediate state in a type-I superconducting Pb film is studied by using scanning Hall probe microscopy, which shows quantized flux tubes with distinct flux densities. The vorticity of the flux tubes is quantified using the monopole model. It is found that the vorticity of the flux tubes can be tuned by using a flux expulsion process under different magnetic fields and temperatures. The stability of stripe patterns at high fields is studied, with new stripe patterns formed after shaking with an ac field. No flux tube is observed even after shaking with intense ac fields. All the results suggest that the stripe patterns have very close energies, which makes them more favorable than the flux-tube state.
We studied Faujasite-type molecular sieves by using Fermi-Dirac statistics and the quantum theory of dielectricity. We developed an empirical relationship for quantum capacitance which follows an inverse Gaussian profile in the frequency range of 66 Hz - 3 MHz. We calculated quantum capacitance, sample crystal momentum, charge quantization and quantized energy of Faujasite-type molecular sieves in the frequency range of 0.1 Hz - 10⁴ MHz. Our calculations for the diameter of sodalite and super-cages of Faujasite-type molecular sieves are in agreement with experimental results reported in this manuscript. We also calculated quantum polarizability, quantized molecular field, orientational polarizability and deformation polarizability by using the experimental results of Ligia Frunza et al. The phonons are overdamped in the frequency range 0.1 Hz - 10 kHz and become a source for producing cages in the Faujasite-type molecular sieves. Ion exchange recovery processes occur due to overdamped phonon excitations in Faujasite-type molecular sieves and with increasing temperatures. (author)
Chern-Simons and Born-Infeld gravity theories and Maxwell algebras type
Concha, P.K.; Penafiel, D.M.; Rodriguez, E.K.; Salgado, P. [Universidad de Concepcion, Departamento de Fisica, Concepcion (Chile)
2014-02-15
Recently it was shown that standard odd- and even-dimensional general relativity can be obtained from a (2n+1)-dimensional Chern-Simons Lagrangian invariant under the B_{2n+1} algebra and from a (2n)-dimensional Born-Infeld Lagrangian invariant under a subalgebra L^{B_{2n+1}}, respectively. Very recently, it was shown that the generalized Inönü-Wigner contraction of the generalized AdS-Maxwell algebras provides Maxwell algebras of type M_m, which correspond to the so-called B_m Lie algebras. In this article we report on a simple model that suggests a mechanism by which standard odd-dimensional general relativity may emerge as the weak-coupling-constant limit of a (2p+1)-dimensional Chern-Simons Lagrangian invariant under the Maxwell algebra type M_{2m+1}, if and only if m ≥ p. Similarly, we show that standard even-dimensional general relativity emerges as the weak-coupling-constant limit of a (2p)-dimensional Born-Infeld type Lagrangian invariant under a subalgebra L^{M_{2m}} of the Maxwell algebra type, if and only if m ≥ p. It is shown that when m < p this is not possible for a (2p+1)-dimensional Chern-Simons Lagrangian invariant under M_{2m+1} and for a (2p)-dimensional Born-Infeld type Lagrangian invariant under the L^{M_{2m}} algebra. (orig.)
The quality of Mueller type functionals in reduced density matrix functional theory
Reduced density matrix functional theory, which uses the one-body density matrix as its fundamental variable, provides a powerful tool for the description of many-electron systems. While the kinetic energy is known exactly as a functional of the one-body density matrix, the correlation energy needs to be approximated. Most approximations that are currently employed are modifications of the Mueller functional. The adiabatic extension of these functionals into the time-dependent domain proves problematic because it leads to time-independent occupation numbers. We assess the general quality of these approximations for an exactly solvable two-electron system as well as for calculations of the fundamental gap. In addition, we address the impact of these functionals on excited-state properties in optics.
Conformity to the power PC theory of causal induction depends on the type of probe question.
Collins, Darrell J; Shanks, David R
2006-02-01
P. W. Cheng's (1997) power PC theory of causal induction proposes that causal estimates are based on the power (P) of a potential cause, where P is the contingency between the cause and effect normalized by the base rate of the effect. Most previous research using a standard causal probe question has failed to support the predictions of the power PC model but recently Buehner, Cheng, and Clifford (2003) found that participants responded in terms of causal power when probed with a counterfactual test question, which they argued prompted participants to consider the base rate of the effect. However, Buehner et al. framed their counterfactual question in terms of frequency, a factor that has been demonstrated to decrease base rate neglect in judgements under uncertainty. In the experiment reported here, we sought to disentangle the influence of counterfactual and frequency framing of the probe question to determine which factor is responsible for encouraging responses in terms of causal power. PMID:16618631
Theory of light-matter interactions in cascade and diamond type atomic ensembles
Jen, Hsiang-Hua
2011-01-01
In this thesis, we investigate the quantum mechanical interaction of light with matter in the form of a gas of ultracold atoms: the atomic ensemble. We present a theoretical analysis of two problems, which involve the interaction of quantized electromagnetic fields (called signal and idler) with the atomic ensemble: (i) cascade two-photon emission in an atomic ladder configuration, and (ii) photon frequency conversion in an atomic diamond configuration. The motivation of these studies comes from potential applications in long-distance quantum communication, where it is desirable to generate quantum correlations between telecommunication-wavelength light fields and ground-level atomic coherences. We develop a theory of signal-idler pair correlations. The analysis is complicated by the possible generation of multiple excitations in the atomic ensemble. An analytical treatment is given in the limit of a single excitation assuming adiabatic laser excitations. The analysis predicts superradiant timescales ...
A New Survey of types of Uncertainties in Nonlinear System with Fuzzy Theory
Fereshteh Mohammadi
2013-03-01
This paper is an attempt to introduce a new framework to handle both uncertainty and time in the spatial domain. The application of the fuzzy temporal constraint network (FTCN) method is proposed for representation of and reasoning about uncertain temporal data. A brief introduction to fuzzy set theory is followed by a description of the FTCN method with its main algorithms. The paper then discusses the issues of incorporating the fuzzy approach into the current spatio-temporal processing framework. The general temporal data model is extended to accommodate uncertainties in temporal data and relationships among events. A theoretical FTCN process of fuzzy transition for imprecise information is introduced with an example. A summary of the paper is given together with an outline of some contributions of the paper and future research directions.
Ståhl, Marie; Kokotovic, Branko; Hjulsager, Charlotte Kristiane; Breum, Solvej Østergaard; Angen, Øystein
Four quantitative PCR (qPCR) assays were evaluated for quantitative detection of Brachyspira pilosicoli, Lawsonia intracellularis, and E. coli fimbrial types F4 and F18 in pig feces. Standard curves were based on feces spiked with the respective reference strains. Quantification was performed using specific standard curves, where each pathogen is analysed in the same matrix as the sample DNA. The qPCRs were compared to traditional bacteriological diagnostic methods and found to be more sensitive than cultivation for E. coli and B. pilosicoli. The qPCR assay for Lawsonia was also more sensitive than the earlier used method due to improvements in DNA extraction. In addition, as samples were not analysed for all four pathogenic agents by traditional diagnostic methods, many samples were found positive for agents that were not expected on the basis of age and case history. The use of quantitative...
AdS3 ×w (S3 × S3 × S1) solutions of type IIB string theory
We analyse a recently constructed class of local solutions of type IIB supergravity that consist of a warped product of AdS3 with a seven-dimensional internal space. In one duality frame the only other non-vanishing fields are the NS three-form and the dilaton. We analyse in detail how these local solutions can be extended to globally well-defined solutions of type IIB string theory, with the internal space having topology S3 × S3 × S1 and with properly quantised three-form flux. We show that many of the dual (0,2) SCFTs are exactly marginal deformations of the (0,2) SCFTs whose holographic duals are warped products of AdS3 with seven-dimensional manifolds of topology S3 × S2 × T2. (orig.)
Generic Investigations on Transport Theory Modelling of High Temperature Reactors of Pebble Bed Type
Sureda Sureda, Antonio Jaime
2008-01-01
The GRS (Gesellschaft fuer Anlagen- und Reaktorsicherheit = Company for Plant and Reactor Safety) maintains and further develops the code system DORT-TD/HERMIX-DIREKT, a complex tool for the simulation of coupled neutronics/thermal-hydraulics transients and accident scenarios of high-temperature gas-cooled reactors of pebble bed type. With this tool, GRS takes part in the international benchmark activity "OECD/NEA PBMR400 Transient Benchmark", which aims at the simulation of transient...
Recent progress on Kubas-type hydrogen-storage nanomaterials: from theories to experiments
Chung, ChiHye; Ihm, Jisoon; Lee, Hoonkyung
2015-06-01
Transition-metal (TM) atoms are known to form TM-H2 complexes, which are collectively called Kubas dihydrogen complexes. The TM-H2 complexes are formed through the hybridization of the TM d orbitals with the H2 σ and σ* orbitals. The adsorption energy of H2 molecules in the TM-H2 complexes is usually within the range of energy required for reversible H2 storage at room temperature and ambient pressure (−0.4 to −0.2 eV/H2). Thus, TM-H2 complexes have been investigated as potential Kubas-type hydrogen-storage materials. Recently, TM-decorated nanomaterials have attracted much attention because of their promising high capacity and reversibility as Kubas-type hydrogen-storage materials. The hydrogen-storage capacity of TM-decorated nanomaterials is expected to be as large as ~9 wt%, which is suitable for certain vehicular applications. However, in the TM-decorated nanostructures, the TM atoms prefer to form clusters because of the large cohesive energy (approximately 4 eV), which leads to a significant reduction in the hydrogen-storage capacity. On the other hand, Ca atoms can form complexes with H2 molecules via Kubas-like interactions. Ca atoms attached to nanomaterials have been reported to be able to adsorb as many H2 molecules as TM atoms. Ca atoms tend to cluster less because of the small cohesive energy of bulk Ca (1.83 eV), which is much smaller than those of bulk TMs. These observations suggest that Kubas interactions can occur in d-orbital-free elements, thereby making Ca a more suitable element for attracting H2 in hydrogen-storage materials. Recently, Kubas-type TM-based hydrogen-storage materials were experimentally synthesized, and the Kubas-type interactions were measured to be stronger than the van der Waals interactions. In this review, the recent progress of Kubas-type hydrogen-storage materials will be discussed from both theoretical and experimental viewpoints.
Uncertainty quantification of effective nuclear interactions
Perez, R Navarro; Arriola, E Ruiz
2016-01-01
We give a brief review of the development of phenomenological NN interactions and the corresponding quantification of statistical uncertainties. We look into the uncertainty of effective interactions broadly used in mean-field calculations through the Skyrme parameters and effective field theory counter-terms by estimating both statistical and systematic uncertainties stemming from the NN interaction. We also comment on the role played by different fitting strategies in light of recent developments.
Hu, Xiao-long; Du, Hai; Xu, Yan
2015-12-01
Chinese strong-aroma type liquor (CSAL) is a popular distilled alcoholic beverage in China. It is produced by a complex fermentation process that is conducted in pits in the ground. Ethyl caproate is a key flavor compound in CSAL and is thought to originate from caproic acid produced by Clostridia inhabiting the fermentation pit mud. However, the particular species of Clostridium associated with this production are poorly understood and problematic to quantify by culturing. In this study, a total of 28 closest relatives, including 15 Clostridia and 8 Bacilli species, were detected in pit muds from three CSAL distilleries by culture-dependent and -independent methods. Among them, Clostridium kluyveri was identified as the main producer of caproic acid. One representative strain, C. kluyveri N6, could produce caproic, butyric and octanoic acids and their corresponding ethyl esters, contributing significantly to CSAL flavor. A real-time quantitative PCR assay for C. kluyveri in pit muds was developed and showed that the concentration of 1.79×10⁷ 16S rRNA gene copies/g pit mud in the LZ-old pit was approximately six times higher than that in the HLM and YH pits, and sixty times higher than that in the LZ-new pit. This method can be used to improve the management of pit mud microbiology and its impact on CSAL quality. PMID:26267890
Fujita, Masahiko
2016-03-01
Lesions of the cerebellum result in large errors in movements. The cerebellum adaptively controls the strength and timing of motor command signals depending on the internal and external environments of movements. The present theory describes how the cerebellar cortex can control signals for accurate and timed movements. A model network of the cerebellar Golgi and granule cells is shown to be equivalent to a multiple-input (from mossy fibers) hierarchical neural network with a single hidden layer of threshold units (granule cells) that receive a common recurrent inhibition (from a Golgi cell). The weighted sum of the hidden unit signals (Purkinje cell output) is theoretically analyzed regarding the capability of the network to perform two types of universal function approximation. The hidden units begin firing as the excitatory inputs exceed the recurrent inhibition. This simple threshold feature leads to the first approximation theory, and the network final output can be any continuous function of the multiple inputs. When the input is constant, this output becomes stationary. However, when the recurrent unit activity is triggered to decrease or the recurrent inhibition is triggered to increase through a certain mechanism (metabotropic modulation or extrasynaptic spillover), the network can generate any continuous signals for a prolonged period of change in the activity of recurrent signals, as the second approximation theory shows. By incorporating the cerebellar capability of two such types of approximations to a motor system, in which learning proceeds through repeated movement trials with accompanying corrections, accurate and timed responses for reaching the target can be adaptively acquired. Simple models of motor control can solve the motor error vs. sensory error problem, as well as the structural aspects of credit (or error) assignment problem. 
Two physiological experiments are proposed for examining the delay and trace conditioning of eyelid responses, as well as saccade adaptation, to investigate this novel idea of cerebellar processing. PMID:26799130
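The first approximation property described in this record — threshold hidden units whose weighted sum can reproduce any continuous function of the input — can be illustrated with a staircase construction. The sketch below is illustrative only: the target function, interval, and unit count are arbitrary choices, not taken from the paper. Each "granule-like" unit fires once its input exceeds its threshold, and the output weights are the successive increments of the target, so the weighted sum is a staircase approximation that tightens as units are added.

```python
import math


def heaviside(z):
    """Threshold unit: fires once the net excitatory input exceeds zero."""
    return 1.0 if z > 0 else 0.0


def build_threshold_approximator(f, a, b, n):
    """Approximate f on [a, b] by a weighted sum of n threshold units.

    Unit k has threshold t_k and output weight f(t_k) - f(t_{k-1}); the
    summed output equals f(t_j) on each subinterval (t_j, t_{j+1}], a
    staircase whose error shrinks like max|f'| * (b - a) / n.
    """
    ts = [a + (b - a) * k / n for k in range(n + 1)]
    base = f(a)
    weights = [f(ts[k]) - f(ts[k - 1]) for k in range(1, n + 1)]

    def g(x):
        return base + sum(w * heaviside(x - t) for w, t in zip(weights, ts[1:]))

    return g
```

With 300 units on [0, 3], the staircase tracks sin(x) to within about one step height, which is the flavor of the universal-approximation argument for a single hidden layer of threshold units.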
Introduction to uncertainty quantification
Sullivan, T J
2015-01-01
Uncertainty quantification is a topic of increasing practical importance at the intersection of applied mathematics, statistics, computation, and numerous application areas in science and engineering. This text provides a framework in which the main objectives of the field of uncertainty quantification are defined, and an overview of the range of mathematical methods by which they can be achieved. Complete with exercises throughout, the book will equip readers with both theoretical understanding and practical experience of the key mathematical and algorithmic tools underlying the treatment of uncertainty in modern applied mathematics. Students and readers alike are encouraged to apply the mathematical methods discussed in this book to their own favourite problems to understand their strengths and weaknesses, also making the text suitable as a self-study. This text is designed as an introduction to uncertainty quantification for senior undergraduate and graduate students with a mathematical or statistical back...
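As a concrete instance of the forward uncertainty propagation that such a text covers, here is a minimal Monte Carlo sketch (an assumed example, not taken from the book): uncertainty in an input parameter is pushed through a model by sampling, and the response distribution is summarized by its mean, standard deviation, and an empirical 95% interval.

```python
import random
import statistics


def propagate(model, sampler, n=10000, seed=0):
    """Forward uncertainty propagation by plain Monte Carlo sampling.

    Draws n input samples from `sampler`, pushes them through `model`,
    and returns (mean, standard deviation, empirical 95% interval).
    """
    rng = random.Random(seed)
    ys = [model(sampler(rng)) for _ in range(n)]
    mean = statistics.fmean(ys)
    sd = statistics.stdev(ys)
    ys.sort()
    lo, hi = ys[int(0.025 * n)], ys[int(0.975 * n)]
    return mean, sd, (lo, hi)
```

For the toy model y = θ² with θ ~ N(1, 0.1²), the exact response mean is 1.01 and the response standard deviation is about 0.20 (from Var(θ²) = 4μ²σ² + 2σ⁴), which the sampler recovers to within Monte Carlo error.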
Six-Dimensional Superconformal Theories and their Compactifications from Type IIA Supergravity
Apruzzi, Fabio; Fazzi, Marco; Passias, Achilleas; Rota, Andrea; Tomasiello, Alessandro
2015-08-01
We describe three analytic classes of infinitely many AdS_d supersymmetric solutions of massive IIA supergravity, for d = 7, 5, 4. The three classes are related by simple universal maps. For example, the AdS7×M3 solutions (where M3 is topologically S3) are mapped to AdS5×Σ2×M3', where Σ2 is a Riemann surface of genus g ≥ 2 and the metric on M3' is obtained by distorting M3 in a certain way. The solutions can have localized D6 or O6 sources, as well as an arbitrary number of D8-branes. The AdS7 case (previously known only numerically) is conjecturally dual to an NS5-D6-D8 system. The field theories in three and four dimensions are not known, but their number of degrees of freedom can be computed in the supergravity approximation. The AdS4 solutions have numerical "attractor" generalizations that might be useful for flux compactification purposes.
Electron interactions and transport theory in n-type silicon and HCP metals
Electron interactions with impurities, phonons, and other electrons are studied in a calculation of linear screening and electron mobility in n-type silicon. The dielectric function is calculated at non-zero temperatures in both the Random Phase Approximation (RPA) and the Singwi-Tosi-Land-Sjölander (STLS) approximation. Significant differences are found at non-zero temperatures between exact solutions of the Boltzmann equation for electron-impurity scattering in the RPA Born approximation and the less accurate memory-function formula for the electrical resistivity. RPA screening of impurity potentials combined with exact phase-shift cross-sections yields electron mobilities in n-type silicon at 300 K and 77 K that agree more closely with experiment than simpler models. The electron-electron differential scattering rate in the Born approximation is derived in terms of the nonequilibrium electron density-density correlation function and is evaluated in the RPA to determine expressions for the inelastic electron lifetime and the Boltzmann equation collision term. The plasmon-pole contribution to the structure factor is found to be strongly damped in n-type silicon. The Fermi-surface density of states and the Drude plasma frequency tensor are calculated for 14 metallic elements with hcp structures. By comparison with measured anisotropic resistivity components, electron-phonon coupling constants λtr are extracted which compare reasonably well with λ from Tc for the ten superconducting elements. For Sc and Y, λtr is sufficiently high (0.5-0.6) to require spin-fluctuation suppression of Tc. Resistivity anisotropy is moderately well accounted for by anisotropy of the Drude plasma frequency, except for the sp elements, which have significant scattering anisotropy. A systematic onset of "resistivity saturation" is found when the mean free path l ≤ 10 angstrom.
Kingshuk Pal
2015-10-01
This protocol demonstrates a multi-disciplinary approach to combining evidence from multiple sources to create 'HeLP-Diabetes': a theory- and evidence-based online self-management intervention for adults with type 2 diabetes.
Fay, Stephane
2003-01-01
We look for necessary conditions such that a minimally coupled scalar-tensor theory with a massive scalar field and a perfect fluid in the Bianchi type I model isotropises. We then derive the asymptotic dynamical properties of the Universe.
To the theory of $q$-ary Steiner and other-type trades
Krotov, Denis; Mogilnykh, Ivan; Potapov, Vladimir
2014-01-01
We introduce the concept of a clique bitrade, which generalizes several known types of bitrades, including latin bitrades, Steiner $T(k-1,k,v)$ bitrades, and extended $1$-perfect bitrades. For a distance-regular graph, we show a one-to-one correspondence between the clique bitrades that meet the weight-distribution lower bound on the cardinality and the bipartite isometric subgraphs that are distance-regular with certain parameters. As an application of the results, we find the minimum cardinalit...
Theory of superfluid states with singlet and triplet types of pairing in nuclear matter
The paper presents the results of an investigation of superfluid states in a two-component Fermi liquid in the framework of the Fermi liquid approach. Particular attention is paid to superfluid states in nuclear matter which are characterized by the superposition of singlet and triplet types of pairing in spin and isospin spaces. The authors have formulated the basic points of the Fermi liquid approach which are used in the study of superfluidity in nuclear matter with the superposition of singlet and triplet types of pairing. The derivation of the system of self-consistency equations and their solution are presented. For concrete calculations the interaction in the Skyrme model is taken. Using this model, the conditions for the existence of the considered states are determined. These conditions impose certain constraints on the potential of interaction and on the density of particles in the system. It is shown that the states with a complete set of nonzero order parameters are realized only in a narrow density range, whose width and position on the density scale depend on the choice of a particular Skyrme force. Eighteen different parameterizations are considered, and it is indicated for which of them the studied types of superfluid states may appear. The problem of stability of the states with superposition of singlet and triplet types of pairing is studied. It is shown that the lowest value of the thermodynamic potential corresponds to purely triplet states, followed, in increasing order, by the thermodynamic potentials of purely singlet states and of mixed singlet-triplet states. The case of unitary states is considered separately. For these states the solutions of the self-consistency equations are analyzed too. The density range for these states is defined and it is shown that this range differs from that which corresponds to the nonunitary states.
In addition, studied is the problem of the existence of unitary superfluid states with the superposition of singlet and triplet superfluidity in the case of asymmetrical nuclear matter. It is shown that the appearance of asymmetry causes the unitarity of superfluid states in nuclear matter to be broken.
Wilson, D; Mills, M; Wang, B [University of Louisville, Louisville, KY (United States)
2014-06-15
Purpose: Carbon fiber materials have been increasingly used clinically, mainly in orthopedics, as an alternative to metallic implants because of their minimal artifacts on CT and MRI images. This study characterizes the transmission and backscatter properties of carbon fiber plates (CarboFix Orthopedics, Herzeliya, Israel) with measurements for radiation therapy applications, and compares them to traditional stainless steel (SS) and titanium (Ti) metal materials. Methods: For the transmission measurements, a 1-mm-thick test plate was placed upstream from a plane-parallel Markus chamber, separated by various thicknesses of polystyrene plates in 0.5 cm increments between 0 and 5 cm. With this setup, we quantified the radiation transmission as a function of distance to the inhomogeneity interface. The LINAC source-to-detector distance was maintained at 100 cm and 200 MU was delivered for each measurement. Two 3-cm solid water phantoms were placed at the top and bottom to provide buildup. All the measurements were performed for 6 MV and 18 MV photons. The backscatter measurements had the identical setup, except that the test plate was downstream of the chamber from radiation. Results: The carbon fiber plates did not introduce any measurable inhomogeneity effect on the transmission and backscatter factor because of their low atomic number. In contrast, traditional metal implant materials caused up to 15% dose difference upstream and 25% backscatter downstream from radiation. Such differences decrease as the distance to the inhomogeneity interface increases and become unmeasurable at distances of 3 cm and 1 cm for upstream and downstream, respectively. Conclusion: A new type of carbon fiber implant plate was evaluated and found to have minimal inhomogeneity effect in MV radiation beams. Patients would benefit from a carbon-based implant over metal for radiation therapy due to its minimal backscatter and imaging artifacts.
Flexural wave band gaps in metamaterial beams with membrane-type resonators: theory and experiment
Zhang, Hao; Xiao, Yong; Wen, Jihong; Yu, Dianlong; Wen, Xisen
2015-11-01
This paper deals with flexural wave band gaps in metamaterial beams with membrane-type resonators. The proposed membrane-type resonator consists of a tensioned elastic membrane and a mass block attached to the center of the membrane. Numerical models based on the finite element method are presented to predict the dispersion relation, band gaps and eigenmodes. It is shown that the metamaterial beams exhibit unique wave physics. A broad Bragg band gap (BBG) and two low-frequency locally resonant band gaps (LRBGs) can be observed, due to the structural periodicity and the locally resonant behavior, respectively. The first LRBG can be ascribed to the combined resonance of the membranes and the masses, while the second LRBG is caused by the resonance of the membranes. The study of the effective properties shows that negative mass density occurs in the LRBGs. The effects of membrane tension and mass magnitude (the weight of the mass block) on the LRBGs are further analyzed. It is shown that both LRBGs shift to higher frequencies as the membrane tension increases. However, as the mass magnitude increases, the first LRBG shifts to lower frequencies while the second LRBG remains almost unchanged. It is further demonstrated that, when a larger unit cell with multiple kinds of masses is used (a larger unit cell incorporating multiple basic unit cells, but with different weights of mass blocks within each basic unit cell), the first LRBG can be broadened, which can be employed to achieve broadband vibration attenuation. Moreover, experimental measurements of vibration transmittance are conducted to validate the theoretical predictions. Good agreement between the experimental results and the theoretical predictions is observed.
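The negative effective mass density mentioned above can be illustrated with the standard single-resonator model; the formula and parameter values below are a generic textbook sketch, not taken from this paper's finite element models.

```python
# Generic single-resonator sketch of negative effective mass density,
# m_eff(w) = m0 + mr * wr^2 / (wr^2 - w^2). Parameter values here are
# illustrative assumptions, not the paper's membrane/mass parameters.
import math

def effective_mass(omega, m0, mr, omega_r):
    """Dynamic effective mass of a host mass m0 carrying an internal
    resonator of mass mr with resonance frequency omega_r."""
    return m0 + mr * omega_r**2 / (omega_r**2 - omega**2)

m0, mr = 1.0, 0.5            # host and resonator masses (arbitrary units)
omega_r = 2 * math.pi * 100  # resonator frequency, here 100 Hz

# m_eff is negative for omega_r < omega < omega_r * sqrt(1 + mr/m0),
# which is where a locally resonant band gap (LRBG) opens.
upper = omega_r * math.sqrt(1 + mr / m0)
print(effective_mass(1.05 * omega_r, m0, mr, omega_r))  # negative: inside the gap
print(upper / (2 * math.pi))  # upper gap edge in Hz (~122 Hz here)
```

In this toy model, raising the membrane tension raises omega_r and shifts the whole gap upward, while increasing the resonator mass mr widens it, consistent with the trends reported above.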
Type Ia Supernovae and their Environment: Theory and Applications to SN 2014J
Dragulin, Paul
2015-01-01
We present theoretical semi-analytic models for the interaction of stellar winds with the interstellar medium (ISM) or prior mass loss, implemented in our code SPICE (Supernovae Progenitor Interaction Calculator for parameterized Environments, available on request), assuming spherical symmetry and power-law ambient density profiles and using the Pi-theorem. This allows us to test a wide variety of configurations and their functional dependencies, and to find classes of solutions for given observations. Here, we study Type Ia supernova (SN Ia) surroundings of single and double degenerate systems, and their observational signatures. Winds may originate from the progenitor prior to the white dwarf (WD) stage, the WD, a donor star, or an accretion disk (AD). For M_Ch explosions, the AD wind dominates and produces a low-density void several light years across surrounded by a dense shell. The bubble explains the lack of observed interaction in late-time SN light curves for, at least, several years. The shell produces narrow ISM l...
Quantification of SPECT images
Quantifying images involves detecting and locating an injured organ, determining its dimensions, and evaluating the relative activity concentration. The precision of radioactivity quantification in SPECT studies depends on diverse factors that affect image quality. These factors can be grouped into physical, technical, and human factors.
Quantification of protein carbonylation.
Wehr, Nancy B; Levine, Rodney L
2013-01-01
Protein carbonylation is the most commonly used measure of oxidative modification of proteins. It is most often measured spectrophotometrically or immunochemically by derivatizing proteins with the classical carbonyl reagent 2,4-dinitrophenylhydrazine (DNPH). We present protocols for the derivatization and quantification of protein carbonylation with these two methods, including a newly described dot blot with greatly increased sensitivity. PMID:23296665
Type Ia Supernovae and Their Environment: Theory and Applications to SN 2014J
Dragulin, Paul; Hoeflich, Peter
2016-02-01
We present theoretical semi-analytic models for the interaction of stellar winds with the interstellar medium (ISM) or prior mass loss, implemented in our code SPICE, assuming spherical symmetry and power-law ambient density profiles and using the Π-theorem. This allows us to test a wide variety of configurations and their functional dependencies, and to find classes of solutions for given observations. Here, we study Type Ia supernova (SN Ia) surroundings of single and double degenerate systems, and their observational signatures. Winds may originate from the progenitor prior to the white dwarf (WD) stage, the WD, a donor star, or an accretion disk (AD). For M_Ch explosions, the AD wind dominates and produces a low-density void several light years across, surrounded by a dense shell. The bubble explains the lack of observed interaction in late-time SN light curves for, at least, several years. The shell produces narrow ISM lines Doppler shifted by 10-100 km s⁻¹, with equivalent widths of ≈100 mÅ and ≈1 mÅ in cases of ambient environments with constant density and produced by prior mass loss, respectively. For SN 2014J, both mergers and M_Ch explosions have been suggested based on radio and narrow lines. As a consistent and most likely solution, we find an AD wind running into an environment produced by the red-giant wind of the progenitor during the pre-WD stage, and a short delay, 0.013-1.4 Myr, between WD formation and the explosion. Our framework may be applied more generally to stellar winds and star-formation feedback in large-scale galactic evolution simulations.
Quantification of Cannabinoid Content in Cannabis
Tian, Y.; Zhang, F.; Jia, K.; Wen, M.; Yuan, Ch.
2015-09-01
Cannabis is an economically important plant that is used in many fields, in addition to being the most commonly consumed illicit drug worldwide. Monitoring the spatial distribution of cannabis cultivation and judging whether it is drug- or fiber-type cannabis is critical for governments and international communities to understand the scale of the illegal drug trade. The aim of this study was to investigate whether the cannabinoid content in cannabis could be spectrally quantified using a spectrometer and to identify the optimal wavebands for quantifying cannabinoid content. Spectral reflectance data of dried cannabis leaf samples and the cannabis canopy were measured in the laboratory and in the field, respectively. Correlation analysis and the stepwise multivariate regression method were used to select the optimal wavebands for cannabinoid content quantification based on the laboratory-measured spectral data. The results indicated that the delta-9-tetrahydrocannabinol (THC) content in cannabis leaves could be quantified using laboratory-measured spectral reflectance data and that the 695 nm band is the optimal band for THC content quantification. This study provides prerequisite information for designing spectral equipment to enable immediate quantification of THC content in cannabis and to discriminate drug- from fiber-type cannabis based on THC content quantification in the field.
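The band-selection step (correlating reflectance at each waveband with THC content and picking the band with the strongest correlation) can be sketched as follows. The spectra here are synthetic and deliberately constructed so that only the 695 nm band carries a THC signal; the real study used measured leaf spectra.

```python
# Hedged sketch of correlation-based waveband selection. Data are synthetic:
# only the 695 nm band is constructed to track THC content, so the selection
# procedure should recover it. Not the study's measured spectra.
import random

random.seed(0)
bands = list(range(400, 1001, 5))  # wavelengths in nm
n_samples = 30
thc = [random.uniform(0.2, 12.0) for _ in range(n_samples)]  # % THC, synthetic

def reflectance(band, thc_value):
    # Only the 695 nm band is made to depend on THC; the rest is noise.
    signal = -0.02 * thc_value if band == 695 else 0.0
    return 0.5 + signal + random.gauss(0, 0.005)

spectra = [{b: reflectance(b, t) for b in bands} for t in thc]

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / (sxx * syy) ** 0.5

# Pick the band whose reflectance correlates most strongly with THC content.
best_band = max(bands, key=lambda b: abs(pearson([s[b] for s in spectra], thc)))
print(best_band)
```

A stepwise multivariate regression, as used in the study, would then add further bands only while they significantly improve the fit; the single-band correlation screen above is just its first step.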
Lueddeke, Sara E; Higham, Philip A
2011-09-01
This paper presents an experimental investigation into how individuals make decisions under uncertainty when faced with different payout structures in the context of gambling. Type 2 signal detection theory was used to compare sensitivity to bias manipulations between regular nonproblem gamblers and nongamblers in a novel probability-based gambling task. The results indicated that both regular gamblers and nongamblers responded to changes in rewards for correct responses (Experiment 1) and penalties for errors (Experiment 2) when setting their gambling criteria, but that regular gamblers were more sensitive to these manipulations of bias. Regular gamblers also set gambling criteria that were closer to optimal. The results are discussed in terms of an expertise-transference hypothesis. PMID:21846266
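In a type 2 analysis of this kind, the "signal" trials are the participant's own correct responses, the "noise" trials are errors, and the decision is whether to gamble. A minimal sketch of the standard computation is below; the trial counts and the hit/false-alarm convention (gambling on a correct response = hit, gambling on an error = false alarm) are illustrative assumptions, not the paper's data.

```python
# Hedged sketch of a type 2 signal detection analysis with the standard
# Gaussian equal-variance model. Counts below are invented.
from statistics import NormalDist

def type2_sdt(bet_correct, no_bet_correct, bet_error, no_bet_error):
    """Return (d', c) from type 2 hit and false-alarm rates.
    Hit = gambling on a correct response; FA = gambling on an error."""
    z = NormalDist().inv_cdf  # inverse standard normal CDF
    hit_rate = bet_correct / (bet_correct + no_bet_correct)
    fa_rate = bet_error / (bet_error + no_bet_error)
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))
    return d_prime, criterion

d, c = type2_sdt(bet_correct=70, no_bet_correct=30, bet_error=20, no_bet_error=80)
print(round(d, 2), round(c, 2))  # 1.37 0.16
```

Shifts in the criterion c across reward and penalty conditions are the "bias manipulations" the study compares between groups; d' indexes how well gambling decisions track response accuracy.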
Tachyon and fixed scalars of D5pm-D1pm black hole in type 0B string theory
In type 0B string theory, we discuss the role of the tachyon (T) and the fixed scalars (ν, λ). The issue is to explain the difference between the tachyon and the fixed scalars in the D5pm-D1pm black hole background. For this purpose, we perform a semiclassical calculation. Here one finds a mixing between (ν, λ, T) and the other fields. Using a decoupling procedure, one finds the linearized equation for the tachyon. From the potential analysis, it turns out that ν serves well as a test field, while the tachyon induces an instability of the Minkowski space vacuum. However, the roles of ν and T are the same in the near-horizon geometry. Finally, we discuss the stability problem. (author)
Borkar, M. S.; Ameen, A.
2015-01-01
In this paper, Bianchi type VI0 magnetized anisotropic dark energy models with constant deceleration parameter have been studied by solving Rosen's field equations in the bimetric theory of gravitation. The models corresponding to power-law expansion and exponential-law expansion have been evaluated and their nature studied geometrically and physically. It is seen that real visible (baryonic) matter appears suddenly only for a small interval of time 0.7 ≤ t ... [Planck Collab. (P. A. R. Ade et al.), arXiv:1303.5076; arXiv:1303.5082] conclude that dark energy occupies about 73% of the energy of the universe and dark matter about 23%. In the exponential law of expansion, our model is fully occupied by real visible matter and there is no room for dark energy or dark matter.
Band-gap corrected density functional theory calculations for InAs/GaSb type II superlattices
Wang, Jianwei; Zhang, Yong [Department of Electrical and Computer Engineering, The University of North Carolina at Charlotte, 9201 University City Boulevard, Charlotte, North Carolina 28223 (United States)
2014-12-07
We performed pseudopotential based density functional theory (DFT) calculations for GaSb/InAs type II superlattices (T2SLs), with bandgap errors from the local density approximation mitigated by applying an empirical method to correct the bulk bandgaps. Specifically, this work (1) compared the calculated bandgaps with experimental data and non-self-consistent atomistic methods; (2) calculated the T2SL band structures with varying structural parameters; (3) investigated the interfacial effects associated with the no-common-atom heterostructure; and (4) studied the strain effect due to lattice mismatch between the two components. This work demonstrates the feasibility of applying the DFT method to more exotic heterostructures and defect problems related to this material system.
The role of the l1-norm in quantum information theory and two types of the Yang-Baxter equation
The role of the l1-norm in the Yang-Baxter system has been studied through Wigner's D-functions, where the l1-norm means ∑_i |C_i| for |ψ⟩ = ∑_i C_i |ψ_i⟩, with |ψ_i⟩ being the orthonormal basis. It is shown that the two existing types of braiding matrices, which can be viewed as particular solutions of the Yang-Baxter equation (YBE) with different spectral parameters, can be unified in the 2D YBE. We prove that the maximum of the l1-norm is connected with the maximally entangled states and topological quantum field theory with two-component anyons, while the minimum leads to the deformed permutation related to the familiar integrable models.
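The l1-norm in question can be checked numerically for two familiar two-qubit states. This small sketch only illustrates the definition and the maximum/minimum contrast, not the braiding-matrix results of the paper.

```python
# Numerical check of the l1-norm: for a normalized state, sum_i |C_i| is
# minimal (1) for a basis state and reaches sqrt(n) when all n amplitudes
# have equal magnitude, as for a maximally entangled two-qubit state.
import math

def l1_norm(amplitudes):
    """l1-norm of a state's expansion coefficients in an orthonormal basis."""
    return sum(abs(c) for c in amplitudes)

product_state = [1.0, 0.0, 0.0, 0.0]                          # |00>
bell_state = [1 / math.sqrt(2), 0.0, 0.0, 1 / math.sqrt(2)]   # (|00> + |11>)/sqrt(2)

print(l1_norm(product_state))  # 1.0, the minimum for a normalized state
print(l1_norm(bell_state))     # sqrt(2), larger for the entangled state
```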
The classical Yang–Baxter equation and the associated Yangian symmetry of gauged WZW-type theories
Itsios, Georgios, E-mail: gitsios@upatras.gr [Department of Mathematics, University of Patras, 26110 Patras (Greece); Sfetsos, Konstantinos, E-mail: ksfetsos@phys.uoa.gr [Department of Nuclear and Particle Physics, Faculty of Physics, University of Athens, 15771 Athens (Greece); Siampos, Konstantinos, E-mail: konstantinos.siampos@umons.ac.be [Mécanique et Gravitation, Université de Mons, 7000 Mons (Belgium); Torrielli, Alessandro, E-mail: a.torrielli@surrey.ac.uk [Department of Mathematics, University of Surrey, Guildford GU2 7XH (United Kingdom)
2014-12-15
We construct the Lax pair, the classical monodromy matrix and the corresponding solution of the Yang–Baxter equation for a two-parameter deformation of the principal chiral model for a simple group. This deformation includes, as a one-parameter subset, a class of integrable gauged WZW-type theories interpolating between the WZW model and the non-Abelian T-dual of the principal chiral model. We derive in full detail the Yangian algebra using two independent methods: by computing the algebra of the non-local charges and, alternatively, through an expansion of the Maillet brackets for the monodromy matrix. As a byproduct, we also provide a detailed general proof of the Serre relations for the Yangian symmetry.
Bouland, Olivier H.
2016-03-01
This article supplies an overview of issues related to the interpretation of surrogate measurement results for neutron-incident cross-section predictions, difficulties that are somewhat masked by the historical conversion route based on the Weisskopf-Ewing approximation. Our proposal is to handle the various difficulties with a more rigorous approach relying on Monte Carlo simulation of transfer reactions with extended R-matrix theory. The multiple deficiencies of the historical surrogate treatment are recalled, but only one is examined in some detail here: the calculation of in- and out-going channel width fluctuation correction factors (WFCFs), whose behavior partly reflects the breakdown of Niels Bohr's compound-nucleus picture. Relevant WFCF calculations for neutron-induced surrogate and cross-section types, as a function of the neutron-induced fluctuating energy range [0 - 2.1 MeV], are presented and discussed for the 240Pu* and 241Pu* compound nuclei.
Aditya, Y.; Rao, V. U. M.; Vijaya Santhi, M.
2016-02-01
Spatially homogeneous Bianchi type-II, VIII and IX perfect fluid cosmological models in f(R,T) modified theory of gravity have been investigated for a special choice of f(R,T) = f1(R) + f2(T) with f1(R) = λ1 R and f2(T) = λ2 T. This special choice leads to a cosmological constant Λ, which depends on the stress-energy tensor of the matter source. To get a deterministic model of the Universe, we assume that the expansion scalar (θ) in the model is proportional to the shear scalar (σ). This condition leads to a relation between the metric potentials, which yields a time-dependent deceleration parameter. Various physical and geometrical features of the models are also discussed.
We investigate Bianchi type-I massive string magnetized barotropic perfect fluid cosmological models in Rosen's bimetric theory of gravitation, with and without a magnetic field, by applying the techniques used by Letelier (1979, 1980) and Stachel (1983). To obtain a deterministic model of the universe, it is assumed that the universe is filled with a barotropic perfect fluid distribution. The physical and geometrical significance of the model is discussed. By comparing our model with that of Bali et al. (2007), it is realized that there are no big-bang and big-crunch singularities in our model and T = 0 is not the time of the big bang, whereas the model of Bali et al. starts with a big bang at T = 0. Further, our model is in agreement with Bali et al. (2007) as time increases, in the presence as well as in the absence of a magnetic field. (geophysics, astronomy, and astrophysics)
The classical Yang–Baxter equation and the associated Yangian symmetry of gauged WZW-type theories
We construct the Lax pair, the classical monodromy matrix and the corresponding solution of the Yang–Baxter equation for a two-parameter deformation of the principal chiral model for a simple group. This deformation includes, as a one-parameter subset, a class of integrable gauged WZW-type theories interpolating between the WZW model and the non-Abelian T-dual of the principal chiral model. We derive in full detail the Yangian algebra using two independent methods: by computing the algebra of the non-local charges and, alternatively, through an expansion of the Maillet brackets for the monodromy matrix. As a byproduct, we also provide a detailed general proof of the Serre relations for the Yangian symmetry.
Band-gap corrected density functional theory calculations for InAs/GaSb type II superlattices
We performed pseudopotential based density functional theory (DFT) calculations for GaSb/InAs type II superlattices (T2SLs), with bandgap errors from the local density approximation mitigated by applying an empirical method to correct the bulk bandgaps. Specifically, this work (1) compared the calculated bandgaps with experimental data and non-self-consistent atomistic methods; (2) calculated the T2SL band structures with varying structural parameters; (3) investigated the interfacial effects associated with the no-common-atom heterostructure; and (4) studied the strain effect due to lattice mismatch between the two components. This work demonstrates the feasibility of applying the DFT method to more exotic heterostructures and defect problems related to this material system
Barutello, Vivina; Jadanza, Riccardo D.; Portaluri, Alessandro
2016-01-01
It is well known that the linear stability of the Lagrangian elliptic solutions in the classical planar three-body problem depends on a mass parameter β and on the eccentricity e of the orbit. We consider only the circular case (e = 0), but under the action of a broader family of singular potentials: α-homogeneous potentials, for α in (0, 2), and the logarithmic one. It turns out that the Lagrangian circular orbit persists in this more general setting. We identify a region of linear stability expressed in terms of the homogeneity parameter α and the mass parameter β, then compute the Morse index of this orbit and of its iterates, and find that the boundary of the stability region is the envelope of a family of curves on which the Morse indices of the iterates jump. Our analysis relies on a Maslov-type index theory devised and developed by Y. Long, X. Hu and S. Sun; a key role is played by an appropriate index theorem and by precise computations of suitable Maslov-type indices.
Here, a scenario is proposed according to which a generic self-organized critical (SOC) system can be looked upon as a Witten-type topological field theory (W-TFT) with spontaneously broken Becchi-Rouet-Stora-Tyutin (BRST) symmetry. One of the conditions for SOC is slow driving noise, which unambiguously suggests the Stratonovich interpretation of the corresponding stochastic differential equation (SDE). This, in turn, necessitates the Parisi-Sourlas-Wu stochastic quantization procedure, which straightforwardly leads to a model with BRST-exact action, i.e., to a W-TFT. In the parameter space of the SDE there must exist full-dimensional regions where the BRST symmetry is spontaneously broken by instantons, which in the context of SOC are essentially avalanches. In these regions, the avalanche-type SOC dynamics is liberated from the otherwise dynamics-less W-TFT, and a Goldstone mode of Faddeev-Popov ghosts exists. Goldstinos represent moduli of instantons (avalanches) and, being gapless, are responsible for the critical avalanche distribution in the low-energy, long-wavelength limit. The above arguments are robust against moderate variations of the SDE's parameters, and the criticality is 'self-tuned'. The proposition of this paper suggests that the machinery of W-TFTs may find applications in many different areas of modern science studying various physical realizations of SOC. It also suggests that there may in principle exist a connection between some SOCs and the concept of topological quantum computing.
Tarde's idea of quantification
LATOUR, Bruno
2010-01-01
Even though Tarde is said to have had a literary view of social science, he himself was deeply involved in statistics (especially criminal statistics) and took an essentially quantitative view of social phenomena. What is paradoxical in his view of quantification is that it relies not only on aggregates but also on the individual element. The paper reviews this paradox and the reason why Tarde was so intent on finding a quantitative grasp for establishing the social sciences, and relates ...
Topic-Focus Structure and Quantification of Dou 'all'
Joonho Shin
2007-06-01
This paper examines a type of dou quantification found in wh-questions such as ta dou mai le shenme? 'What are all the things that he bought?' This type differs from the well-known dou quantification in that the leftness condition cannot be applied to it. I propose that this type of quantification is subject to the topic-focus structure rather than to the syntactic structure, which means that the domain of the quantification is determined in relation to 'old' and 'new' information in a sentence. Sentences including dou can be divided into topic and focus, and each part is mapped onto the restrictor and the nuclear scope in a tripartite structure of dou quantification. This analysis accounts for why a list answer is appropriate to questions with dou, why wh-words in the questions cannot be quantity expressions, and why wh-words should either have a plural interpretation or take the plural form. It also explains the distribution of dou, i.e., dou should c-command a focused phrase. Finally, I point out that the analysis extends to declaratives, which are rare but still observable, and that the two types of dou quantification can arise simultaneously.
Quantification and Negation in Event Semantics
Lucas Champollion
2010-01-01
Recently, it has been claimed that event semantics does not go well together with quantification, especially if one rejects syntactic, LF-based approaches to quantifier scope. This paper shows that such fears are unfounded, by presenting a simple, variable-free framework which combines a Neo-Davidsonian event semantics with a type-shifting based account of quantifier scope. The main innovation is that the event variable is bound inside the verbal denotation, rather than at sentence level by e...
In this paper we classify Bianchi type VIII and IX spacetimes according to their teleparallel Killing vector fields in the teleparallel theory of gravitation, using a direct integration technique. It turns out that the dimensions of the teleparallel Killing vector fields are either 4 or 5. This study shows that the Killing vector fields for Bianchi type VIII and IX spacetimes in the teleparallel theory differ from those in general relativity. (general)
Transparent quantification into hyperintensional contexts
Duží, M.; Jespersen, Bjorn
London : College Publications, 2011 - (Peliš, M.; Punčochář, V.), s. 81-97 ISBN 978-1-84890-038-7. [ LOGICA 2010. Hejnice (CZ), 21.06.2010-25.06.2010] Institutional research plan: CEZ:AV0Z90090514 Keywords : hyperintensions * type theory * Transparent intensional logic * propositional attitudes Subject RIV: AA - Philosophy ; Religion
Various phenomenological theories of wave-type heat transport, which can be interpreted as models of an isotropic rigid heat conductor with an internal vector state variable, have been proposed in the literature with the objective of describing second sound propagation in dielectric crystals. The aim of this paper is to analyze the relation between these phenomenological approaches and phonon gas hydrodynamics. The four-moment phonon gas hydrodynamics based on the maximum entropy closure of the moment equations, with a nonlinear isotropic phonon dispersion relation, is considered for this purpose. We reformulate the equations of this hydrodynamics in terms of energy and quasi-momentum as the primitive fields and subsequently demonstrate that, from the macroscopic point of view, they can be understood as describing the reference model of an isotropic rigid heat conductor with quasi-momentum playing the role of the internal vector state variable. This model is determined by the entropy function and an additional scalar potential, but if the finite domain of phonon wave vectors is approximated by the whole space, the additional potential can be expressed in terms of the entropy function and its first derivatives. Then the transformation of primitive fields and the expansion of thermodynamic potentials in powers of the square of quasi-momentum enable us to compare the reference model with the models proposed earlier in the literature. It is shown that the previous models require some subtle modifications in order to achieve full consistency with phonon gas hydrodynamics.
Bouland, Olivier H.
2016-01-01
This article supplies an overview of issues related to the interpretation of surrogate measurement results for neutron-incident cross-section predictions, difficulties that are somewhat masked by the historical conversion route based on the Weisskopf-Ewing approximation. Our proposal is to handle the various difficulties with a more rigorous approach relying on Monte Carlo simulation of transfer reactions with extended R-matrix theory. The multiple deficiencies of the historical surrogate treatment are recalled, but only one is examined in some detail here: the calculation of in- and out-going channel width fluctuation correction factors (WFCFs), whose behavior partly reflects the breakdown of Niels Bohr's compound-nucleus picture. Relevant WFCF calculations for neutron-induced surrogate and cross-section types, as a function of the neutron-induced fluctuating energy range [0 - 2.1 MeV], are presented and discussed for the 240Pu* and 241Pu* compound nuclei.
Session Types = Intersection Types + Union Types
Padovani, Luca
2011-01-01
We propose a semantically grounded theory of session types which relies on intersection and union types. We argue that intersection and union types are natural candidates for modeling branching points in session types and we show that the resulting theory overcomes some important defects of related behavioral theories. In particular, intersections and unions provide a native solution to the problem of computing joins and meets of session types. Also, the subtyping relation turns out to be a pre-congruence, while this is not always the case in related behavioral theories.
Juul Lise
2011-11-01
Background: Treatment recommendations for the prevention of type 2 diabetes complications often require radical and life-long health behaviour changes. Observational studies based on self-determination theory (SDT) propose substantial factors for the maintenance of behaviour changes and concomitant well-being, but experimental research is needed to develop and evaluate SDT-based interventions. The aims of this paper were to describe (1) the design of a trial assessing the effectiveness of a training course for practice nurses in autonomy support on patient-perceived motivation, HbA1c, cholesterol, and well-being in a diabetes population, (2) the actual intervention to a level of detail that allows its replication, and (3) the connection between SDT recommendations for health-care-provider behaviour and the content of the training course. Methods/Design: The study is a cluster-randomised pragmatic trial including 40 Danish general practices with nurse-led diabetes consultations, and the associated diabetes population, identified from registers (n = 4034). The intervention was a 16-hour course with interactive training for practice nurses, delivered over four afternoons at Aarhus University plus a half-hour practice visit by one of the course teachers, over a period of 10 months (at 0, 2, 5, and 10 months). The intervention is depicted by a PaT Plot showing the timeline and the characteristics of the intervention components. Effectiveness will be assessed on the diabetes population with regard to well-being (PAID, SF-12), HbA1c and cholesterol levels, perceived autonomy support (HCCQ), type of motivation (TSRQ), and perceived competence for diabetes care (PCD) 15-21 months after the core course, i.e., the completion of the second course afternoon. Data will be retrieved from registers and by questionnaires. Discussion: Challenges and advantages of the pragmatic design are discussed.
In a real-world setting, this study will determine the impact on motivation, HbA1c, cholesterol, and well-being for people with diabetes by offering a training course in autonomy support to practice-nurses from general practices with nurse-led consultations. Trial registration ClinicalTrials.gov: NCT01187069
Accident sequence quantification with KIRAP
The tasks of a probabilistic safety assessment (PSA) consist of identifying initiating events, constructing an event tree for each initiating event, constructing fault trees for the event tree logic, analysing reliability data, and finally quantifying the accident sequences. In a PSA, accident sequence quantification calculates the core damage frequency and includes importance analysis and uncertainty analysis. Accident sequence quantification requires an understanding of the whole PSA model, because all event tree and fault tree models must be combined, and it requires an efficient computer code, because the computation is time-consuming. The Advanced Research Group of the Korea Atomic Energy Research Institute (KAERI) has developed the PSA workstation KIRAP (Korea Integrated Reliability Analysis Code Package) for PSA work. This report describes the procedures for performing accident sequence quantification, the use of KIRAP's cut set generator, and the method for performing accident sequence quantification with KIRAP. (author). 6 refs
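The final quantification step described above can be sketched with the standard rare-event approximation over minimal cut sets. The event names, probabilities, and cut sets below are invented for illustration and are not KIRAP input or output.

```python
# Hedged sketch of accident sequence quantification: sum minimal cut set
# frequencies (rare-event approximation) to estimate core damage frequency
# (CDF), plus a Fussell-Vesely importance measure. All data are invented.
from math import prod

basic_events = {"LOSP": 1e-1,           # loss of offsite power (per year)
                "DG-A-FAILS": 1e-2,     # diesel generator A fails
                "DG-B-FAILS": 1e-2,     # diesel generator B fails
                "BATTERY-FAILS": 5e-3}  # station battery fails
min_cut_sets = [["LOSP", "DG-A-FAILS", "DG-B-FAILS"],
                ["LOSP", "BATTERY-FAILS"]]

def cut_set_freq(cut_set, probs):
    """Frequency of one minimal cut set: product of its basic-event values."""
    return prod(probs[e] for e in cut_set)

# Rare-event approximation: CDF ~ sum over minimal cut sets.
cdf = sum(cut_set_freq(cs, basic_events) for cs in min_cut_sets)

def fussell_vesely(event, cut_sets, probs, total):
    """Fraction of CDF contributed by cut sets containing the event."""
    return sum(cut_set_freq(cs, probs) for cs in cut_sets if event in cs) / total

print(f"CDF = {cdf:.2e} per year")
print(f"FV(BATTERY-FAILS) = {fussell_vesely('BATTERY-FAILS', min_cut_sets, basic_events, cdf):.3f}")
```

The rare-event approximation is adequate when individual cut set frequencies are small; codes like KIRAP also support exact and min-cut-upper-bound quantification for larger probabilities.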
Čársky, Petr
2015-01-01
Vol. 191, No. 2015 (2015), pp. 191-192. ISSN 1551-7616 R&D Projects: GA MŠk OC09079; GA MŠk(CZ) OC10046; GA ČR GA202/08/0631 Other grants: COST(XE) CM0805; COST(XE) CM0601 Institutional support: RVO:61388955 Keywords : electron scattering * calculation of cross sections * second-order perturbation theory Subject RIV: CF - Physical ; Theoretical Chemistry
Sonier, J E; Miller, R I; Boaknin, E; Taillefer, L; Kiefl, R F; Brewer, J H; Poon, K F; Brewer, J D
2004-01-01
A key ingredient missing from the much-relied-upon macroscopic Ginzburg-Landau (GL) theory is the quasiparticle excitations. As a result, the GL description of the vortex lattice in a type-II superconductor does not account for the electronic structure of the magnetic vortices. Here we report experimental results on the conventional type-II superconductor V3Si that provide clear evidence for changes to the inner structure of a vortex due to the delocalization of bound quasiparticle core states. A consequence is that even for simple type-II superconductors, the full microscopic theory is necessary to physically describe the vortex lattice. This detail is a potential explanation for many experimental anomalies.
Disease quantification in dermatology
Greve, Tanja Maria; Kamp, Søren; Jemec, Gregor B E
2013-01-01
Accurate documentation of disease severity is a prerequisite for clinical research and the practice of evidence-based medicine. The quantification of skin diseases such as psoriasis currently relies heavily on clinical scores. Although these clinical scoring methods are well established and very useful in quantifying disease severity, they require extensive clinical experience and carry a risk of subjectivity. We explore the opportunity to use in vivo near-infrared (NIR) spectra as an objective and noninvasive method for local disease severity assessment in 31 psoriasis patients, in whom selected plaques were scored clinically. A partial least squares (PLS) regression model was used to analyze and predict the severity scores on the NIR spectra of psoriatic and uninvolved skin. The correlation between predicted and clinically assigned scores was R=0.94 (RMSE=0.96), suggesting that in vivo ...
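A one-component PLS fit of the kind used above can be sketched in a few lines. The data below are synthetic and exact by construction; the study itself used measured NIR spectra with a multi-component PLS model.

```python
# Minimal one-component PLS regression (NIPALS-style), pure Python, to
# illustrate mapping spectra onto severity scores. Synthetic data only.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def pls1_fit(X, y):
    """One-component PLS1: returns (column means, y mean, coefficients)."""
    n, p = len(X), len(X[0])
    means = [sum(row[j] for row in X) / n for j in range(p)]
    ybar = sum(y) / n
    Xc = [[row[j] - means[j] for j in range(p)] for row in X]  # center X
    yc = [v - ybar for v in y]                                 # center y
    # Weight vector w ~ X^T y, normalized to unit length.
    w = [dot([row[j] for row in Xc], yc) for j in range(p)]
    norm = dot(w, w) ** 0.5
    w = [v / norm for v in w]
    # Scores t = Xc w; regress y on t to get the inner coefficient b.
    t = [dot(row, w) for row in Xc]
    b = dot(yc, t) / dot(t, t)
    return means, ybar, [b * v for v in w]

def pls1_predict(model, row):
    means, ybar, coefs = model
    return ybar + sum((row[j] - means[j]) * coefs[j] for j in range(len(row)))

# Synthetic "spectra": severity depends linearly on two of five bands.
X = [[i, 2 * i, 0.5, 1.0, 3.0] for i in range(10)]
y = [3.0 * i + 1.0 for i in range(10)]
model = pls1_fit(X, y)
print(round(pls1_predict(model, [4, 8, 0.5, 1.0, 3.0]), 2))  # 13.0
```

Real NIR work would use several latent components chosen by cross-validation; the single component here suffices because the synthetic responses lie exactly in a one-dimensional subspace of the spectra.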
On a singular Fredholm-type integral equation arising in N=2 super-Yang-Mills theories
Ferrari, Franco, E-mail: ferrari@fermi.fiz.univ.szczecin.pl [Institute of Physics and CASA, University of Szczecin, Wielkopolska 15, 70451 Szczecin (Poland); Piatek, Marcin, E-mail: piatek@fermi.fiz.univ.szczecin.pl [Institute of Physics and CASA, University of Szczecin, Wielkopolska 15, 70451 Szczecin (Poland); Bogoliubov Laboratory of Theoretical Physics, Joint Institute for Nuclear Research, 141980 Dubna (Russian Federation)
2013-01-08
In this work we study the Nekrasov-Shatashvili limit of the Nekrasov instanton partition function of Yang-Mills field theories with N=2 supersymmetry and gauge group SU(N_c). The theories are coupled with N_f flavors of fundamental matter. The equation that determines the density of eigenvalues at the leading order in the saddle-point approximation is exactly solved when N_f=2N_c. The dominating contribution to the instanton free energy is computed. The requirement that this energy is finite imposes quantization conditions on the parameters of the theory that are in agreement with analogous conditions derived in previous works. The instanton energy, and thus the instanton contribution to the prepotential of the gauge theory, is computed in closed form.
Joseph, Robert M; Tager-Flusberg, Helen
2004-01-01
Although neurocognitive impairments in theory of mind and in executive functions have both been hypothesized to play a causal role in autism, there has been little research investigating the explanatory power of these impairments with regard to autistic symptomatology. The present study examined the degree to which individual differences in theory of mind and executive functions could explain variations in the severity of autism symptoms. Participants included 31 verbal, school-aged children ...
El Naschie's ε(∞) space-time, hydrodynamic model of scale relativity theory and some applications
A generalization of Nottale's scale relativity theory is elaborated: the generalized Schroedinger equation results as an irrotational movement of Navier-Stokes-type fluids having an imaginary viscosity coefficient. Then ψ simultaneously becomes the wave function and the speed potential. In the hydrodynamic formulation of scale relativity theory, some implications for the gravitational morphogenesis of structures are analyzed: planetary motion quantizations, Saturn's rings motion quantizations, redshift quantization in binary galaxies, global redshift quantization, etc. The correspondence with El Naschie's ε(∞) space-time implies a special type of superconductivity (El Naschie's superconductivity) and Cantorian-fractal sequences in the quantification of the Universe
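The hydrodynamic formulation invoked in this abstract rests on the standard Madelung substitution; as a reminder of that textbook correspondence (not this paper's specific generalization), writing the wave function in polar form splits the Schroedinger equation into fluid equations:

```latex
% Madelung substitution: polar form of the wave function
\psi = \sqrt{\rho}\, e^{iS/\hbar}, \qquad \mathbf{v} = \frac{\nabla S}{m},
% yields a continuity equation
\frac{\partial \rho}{\partial t} + \nabla\!\cdot\!(\rho\,\mathbf{v}) = 0,
% and an Euler-type equation with an additional quantum potential Q
\frac{\partial \mathbf{v}}{\partial t} + (\mathbf{v}\!\cdot\!\nabla)\mathbf{v}
  = -\frac{1}{m}\nabla\!\left(V + Q\right), \qquad
Q = -\frac{\hbar^2}{2m}\,\frac{\nabla^2 \sqrt{\rho}}{\sqrt{\rho}}.
```

The abstract's "imaginary viscosity" route runs this correspondence in reverse, recovering a Schroedinger-type equation from an irrotational fluid flow.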
Advancing agricultural greenhouse gas quantification*
Olander, Lydia; Wollenberg, Eva; Tubiello, Francesco; Herold, Martin
2013-03-01
1. Introduction Better information on greenhouse gas (GHG) emissions and mitigation potential in the agricultural sector is necessary to manage these emissions and identify responses that are consistent with the food security and economic development priorities of countries. Critical activity data (what crops or livestock are managed in what way) are poor or lacking for many agricultural systems, especially in developing countries. In addition, the currently available methods for quantifying emissions and mitigation are often too expensive or complex or not sufficiently user friendly for widespread use. The purpose of this focus issue is to capture the state of the art in quantifying greenhouse gases from agricultural systems, with the goal of better understanding our current capabilities and near-term potential for improvement, with particular attention to quantification issues relevant to smallholders in developing countries. This work is timely in light of international discussions and negotiations around how agriculture should be included in efforts to reduce and adapt to climate change impacts, and considering that significant climate financing to developing countries in post-2012 agreements may be linked to their increased ability to identify and report GHG emissions (Murphy et al 2010, CCAFS 2011, FAO 2011). 2. Agriculture and climate change mitigation The main agricultural GHGs—methane and nitrous oxide—account for 10%-12% of anthropogenic emissions globally (Smith et al 2008), or around 50% and 60% of total anthropogenic methane and nitrous oxide emissions, respectively, in 2005. Net carbon dioxide fluxes between agricultural land and the atmosphere linked to food production are relatively small, although significant carbon emissions are associated with degradation of organic soils for plantations in tropical regions (Smith et al 2007, FAO 2012). 
Population growth and shifts in dietary patterns toward more meat and dairy consumption will lead to increased emissions unless we improve production efficiencies and management. Developing countries currently account for about three-quarters of direct emissions and are expected to be the most rapidly growing emission sources in the future (FAO 2011). Reducing agricultural emissions and increasing carbon sequestration in the soil and biomass have the potential to reduce agriculture's contribution to climate change by 5.5-6.0 gigatons (Gt) of carbon dioxide equivalent (CO2eq)/year. Economic potentials, which take into account costs of implementation, range from 1.5 to 4.3 Gt CO2eq/year, depending on marginal abatement costs assumed and financial resources committed, with most of this potential in developing countries (Smith et al 2007). The opportunity for mitigation in agriculture is thus significant, and, if realized, would contribute to making this sector carbon neutral. Yet it is only through a robust and shared understanding of how much carbon can be stored or how much CO2 is reduced from mitigation practices that informed decisions can be made about how to identify, implement, and balance a suite of mitigation practices as diverse as enhancing soil organic matter, increasing the digestibility of feed for cattle, and increasing the efficiency of nitrogen fertilizer applications. Only by selecting a portfolio of options adapted to regional characteristics and goals can mitigation needs be best matched to also serve rural development goals, including food security and increased resilience to climate change. Expansion of agricultural land also remains a major contributor of greenhouse gases, with deforestation, largely linked to clearing of land for cultivation or pasture, generating 80% of emissions from developing countries (Hosonuma et al 2012).
There are clear opportunities for these countries to address mitigation strategies from the forest and agriculture sector, recognizing that agriculture plays a large role in economic and development potential. In this context, multiple development goals can be reinforced by specific climate funding granted on the basis of multiple benefits and synergies, for instance through currently negotiated mechanisms such as Nationally Appropriate Mitigation Actions (NAMAs) and REDD+ (Kissinger et al 2012). 3. Challenges to quantifying GHG information for the agricultural sector The quantification of GHG emissions from agriculture is fundamental to identifying mitigation solutions that are consistent with the goals of achieving greater resilience in production systems, food security, and rural welfare. GHG emissions data are already needed for such varied purposes as guiding national planning for low-emissions development, generating and trading carbon credits, certifying sustainable agriculture practices, informing consumers' choices with regard to reducing their carbon footprints, assessing product supply chains, and supporting farmers in adopting less carbon-intensive farming practices. Demonstrating the robustness, feasibility, and cost effectiveness of agricultural GHG inventories and monitoring is a necessary technical foundation for including agriculture in the international negotiations under the United Nations Framework Convention on Climate Change (UNFCCC), and is needed to provide robust data and methodology platforms for global corporate supply-chain initiatives (e.g., SAFA, FAO 2012).
Given such varied drivers for GHG reductions, there are a number of uses for agricultural GHG information, including (1) reporting and accounting at the national or company level, (2) land-use planning and management to achieve specific objectives, (3) monitoring and evaluating the impact of management, (4) developing a credible and thus tradable offset credit, and (5) research and capacity development. The information needs for these uses are likely to differ in the required level of certainty, scale of analysis, and need for comparability across systems or repeatability over time, and they may depend on whether descriptive trends are sufficient or an understanding of drivers and causes is needed. While there are certainly similar needs across uses and users, the necessary methods, data, and models for quantifying GHGs may vary. Common challenges for quantification noted in an informal survey of users of GHG information by Olander et al (2013) include the following. 3.1. Need for user-friendly methods that work across scales, regions, and systems Much of the data gathered and models developed by the research community provide high confidence in data or indicators computed at one place or for one issue; thus they are relevant for only specific uses, not transparent, or not comparable. These research approaches need to be translated to practitioners through the development of farmer-friendly, transparent, comparable, and broadly applicable methods. Many users noted the need for quantification data and methods that work and are accurate across regions and scales. One of the interviewed users, Charlotte Streck, summed it up nicely: 'A priority would be to produce comparable datasets for agricultural GHG emissions of particular agricultural practices for a broad set of countries ... with a gradual increase in accuracy'. 3.2.
Need for lower-cost, feasible approaches Concerns about the cost and complexity of existing quantification methods were raised by a number of users interviewed in the survey. In the field it is difficult to measure changes in GHGs from agricultural management due to spatial and temporal variability, and the scale of the management-induced changes relative to background pools and fluxes. Many users noted data gaps and inconsistencies and insufficient technical capacity and infrastructure to generate necessary information, particularly in developing countries. The need for creative approaches to data collection and analysis, such as crowd sourcing and mobile technology, was noted. 3.3. Need for methods that can crosswalk between emission-reduction strategies and inventories or reporting A few users emphasized the need for information and quantification approaches that can not only track GHGs but also help with strategic planning on what to grow where and when to maximize mitigation and adaptation benefits. Methods need to incorporate the quantification context, taking into account climate impacts, viability, and cost of management options. Thus, data and methods are needed that integrate climate impacts into models used to assess the potential and costs of GHG mitigation strategies. 3.4. Need for confidence thresholds and rules that are appropriate for use Users noted that national inventories through the UNFCCC or Intergovernmental Panel on Climate Change (IPCC) require 95% confidence, while some offset market standards leave confidence levels to the discretion of the developer, using discounts in value for greater uncertainty. Nonetheless, these standards tend to have expectations of 20% confidence or better. In fact, both regulatory and voluntary reporting suffer from large uncertainties in the underlying activity data as well as in emission factors. In some circumstances emission factors may add as much as 50-150% uncertainty to GHG estimates (IPCC 2006).
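To make the uncertainty figures above concrete: in the IPCC's simple error-propagation approach, when an emission is estimated as activity data times an emission factor, the percentage uncertainties of the two inputs combine in quadrature. A minimal illustration (the 20% and 100% inputs are assumed example values, not figures from the survey):

```python
import math

def combined_uncertainty(u_activity_pct, u_ef_pct):
    # IPCC (2006) Approach 1 error propagation for a product:
    # percentage uncertainties add in quadrature.
    return math.sqrt(u_activity_pct**2 + u_ef_pct**2)

# A 100% emission-factor uncertainty swamps a 20% activity-data uncertainty:
print(round(combined_uncertainty(20.0, 100.0), 1))  # 102.0
```

This is why improving emission factors, not just activity data, dominates the accuracy of agricultural GHG estimates.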
Uncertainty clearly needs to be assessed in implementing projects and programs. In some cases there are uncertainty thresholds, while in others uncertainty is assessed and used as part of the quantification process. What is not always clear is where uncertainty thresholds are necessary to maintain the usefulness of the information and where they are hindering early progress. 3.5. Easily understood and common metrics for policy and market users Inventories usually track tons of CO2 equivalents, while supply-chain and corporate reporting are more likely to track efficiency metrics, such as GHG emissions per unit of product; offset protocols may combine both approaches. As demand for food rises, efficiency of production becomes an increasingly important metric, even if total CO2 equivalents need to be tracked in parallel to assess climate impacts. For livestock systems it is unclear which metrics are most important to track: GHGs per unit of meat or milk, or perhaps per calorie? Different metrics are likely needed for different uses. 3.6. Capacity development in developing countries There is a need to address the current lack of capacity to monitor land use and land-use change and their associated GHG emissions and removals for national inventories (UNFCCC 2008, Romijn et al 2012). Since there are ongoing efforts to improve data, methods, and capacities for monitoring forests in the context of REDD+ (Herold and Skutsch 2011), synergies should be sought to use and build upon joint data sources and approaches, such as remote sensing, field inventories, crowd sourcing, and human capacities to estimate and report on GHG balance in both forests and agriculture. A number of specific objectives to meet these challenges are discussed in this special issue: Improve the accuracy of emission factors across regional differences. Improve national inventory data of management activities, crop type and variety, and livestock breeds.
Use historical data and data collection over time to show trends. Test the extent of model applications through field validation (e.g., can they be used in regions with less data?). Enhance technical capacity and infrastructure for data acquisition and for application of mitigation strategies in field programs. Increase understanding of which mitigation practices result in more resilient systems. Improve understanding of the GHG tradeoffs of expanding fertilizer use. While data sources and methods are improving and research and operational monitoring are increasing, the international community can be strategic in targeting support for this work and coordinating data and information collection to move toward revised good practice guidelines that would address the particular circumstances and practices dominant in developing countries. 4. Current data infrastructure and systems supporting GHG quantification in the agricultural sector To understand the challenges facing GHG quantification it is helpful to understand the existing supporting infrastructure and systems for quantification. The existing and developing structures for national and local data acquisition and management are the foundation for the empirical and process-based models used by most countries and projects currently quantifying agricultural greenhouse gases. Direct measurement can be used to complement and supplement such models, but this is not yet sufficient by itself given costs, complexities, and uncertainties. One of the primary purposes of data acquisition and quantification is for national-level inventories and planning. For such efforts countries are conducting national-level collection of activity data (who is doing which agricultural practices where) and some are also developing national or regional-level emissions factors. Infrastructure that supports these efforts includes intergovernmental panels, global alliances, and data-sharing networks. 
Multilateral data sharing for applications, such as the FAO Statistical Database (FAOSTAT) (FAO 2012), the IPCC Emission Factor Database (IPCC 2012), and UNFCCC national inventories (UNFCCC 2012), is building greater consistency and standardization by using global standards such as the IPCC's Good Practice Guidance for Land Use, Land-Use Change and Forestry (e.g., IPCC 1996, 2003, 2006). There is also work on common quantification methods and accounting, for example agreed-upon global warming potentials for different contributing gases and GHG quantification methodologies for projects (e.g., the Verified Carbon Standard Sustainable Agricultural Land Management [SALM] protocol, VCS 2011). Other examples include the Global Research Alliance on Agricultural Greenhouse Gases (2012) and GRACEnet (Greenhouse gas Reduction through Agricultural Carbon Enhancement network) (USDA Agricultural Research Service 2011), which aim to improve consistency of field measurement and data collection for soil carbon sequestration and soil nitrous oxide fluxes. Often these national-level activity data and emission factors are the basis for regional and smaller-scale applications. Such data are used for model-based estimates of changes in GHGs at a project or regional level (Olander et al 2011). To complement national data for regional-, landscape-, or field-level applications, new data are often collected through farmer knowledge or records and field sampling. Ideally such data could be collected in a standardized manner, perhaps through some type of crowd sourcing model to improve regional- and national-level data, as well as to improve consistency of locally collected data. Data can also be collected by companies working with agricultural suppliers and in-country networks, within efforts aimed at understanding firm and product (supply-chain) sustainability and risks (FAO 2009). Such data may feed into various certification processes or reporting requirements from buyers.
Unfortunately, these data are likely proprietary. A new process is needed to aggregate and share private data in a way that would not be a competitive concern, so that such data could complement or supplement national data and add value. A number of papers in this focus issue discuss issues surrounding quantification methods and systems at large scales, global and national levels, while others explore landscape- and field-scale approaches. A few explore the intersection of top-down and bottom-up data measurement and modeling approaches. 5. The agricultural greenhouse gas quantification project and ERL focus issue Important land management decisions are often made with poor or few data, especially in developing countries. Current systems for quantifying GHG emissions are inadequate in most low-income countries, due to a lack of funding, human resources, and infrastructure. Most non-Annex I countries reporting agricultural emissions to the UNFCCC have used only Tier I default emission factors (Nihart 2012, unpublished data), yet default numbers are based on a very limited number of studies. Furthermore, most non-Annex I countries have reported their National Communications only one or two times in the period 1990-2010. China, for instance, has not submitted agricultural inventory data since 1994. As we move toward the next IPCC assessment report on climate change and while UNFCCC negotiations give greater attention to the role of agriculture within international agreements, it is valuable to understand our current and potential near-term capacity to quantify and track emissions and assess mitigation potential in the agriculture sector, providing countries, especially least developed countries (LDCs), with the information they need to promote and implement actions that, while conducive to mitigation, are also consistent with their rural development and food security goals.
The purpose of this focus issue is to improve the knowledge and practice of quantifying GHG emissions from agriculture around the globe. The issue discusses methodological, data, and capacity gaps and needs across scales of quantification, from global and national-scale inventories to landscape- and farm-scale measurement. The inherent features of agriculture and especially smallholder farming have made quantification expensive and complicated, as farming systems and farmers' practices are diverse and impermanent and exhibit high temporal and spatial variability. Quantifying the emissions of the complex crop-livestock or diverse cropping systems that characterize smallholder systems presents particular challenges. New ideas, methods, and uses of technology are needed to address these challenges. Many papers in this special issue synthesize the state of the art in their respective fields, analyze gaps, identify innovations, and make recommendations for improving quantification. Special attention is given to methods appropriate to low-income countries, where strategies are needed for getting robust data with extremely limited resources in order to support national mitigation planning within widely accepted standards and thus provide access to essential international support, including climate funding. Managing agricultural emissions needs to occur in tandem with managing for agricultural productivity, resilience to climate change, and ecosystem impacts. Management decisions and priorities will require measures and information that identify GHG efficiencies in production and reduce inputs without reducing yields, while addressing climate resilience and maintaining other essential environmental services, such as water quality and support for pollinators.
Another set of papers in this issue considers the critical synergies and tradeoffs possible between these multiple objectives of mitigation, resilience, and production efficiency to help us understand how we need to tackle these in our quantification systems. Significant capacity to quantify greenhouse gases has already been built and, with some near-term strategic investment, could become an increasingly robust and useful tool for planning and development in the agricultural sector around the world. Acknowledgments The Climate Change Agriculture and Food Security Program of the Consultative Group on International Agricultural Research, the Technical Working Group on Agricultural Greenhouse Gases (T-AGG) at Duke University's Nicholas Institute for Environmental Policy Solutions, and the United Nations Food and Agriculture Organization (FAO) have come together to guide the development of this focus issue and associated activities and papers, given their common desire to improve our understanding of the state of agricultural greenhouse gas (GHG) quantification and to advance ideas for building data and methods that will help mitigation policy and programs move forward around the world. We thank the David and Lucile Packard Foundation for their support of this initiative. The project has been developed with guidance from an esteemed steering group of experts and users of mitigation information (http://nicholasinstitute.duke.edu/ecosystem/t-agg/international-project). Many of the papers in this issue were commissioned. Authors of each of the commissioned papers met with guest editors at FAO in Rome in April 2012 to further develop their ideas, synthesize state-of-the-art knowledge, and generate new ideas (http://nicholasinstitute.duke.edu/ecosystem/t-agg/events-and-presentations). Additional interesting and important research has come forward through the general call for papers and has been incorporated into this issue.
References CCAFS (Climate Change, Agriculture and Food Security) 2011 Victories for food and farming in Durban climate deals Press Release 13 December 2011 (http://ccafs.cgiar.org/news/press-releases/victories-food-and-farming-durban-climate-deals) FAO (Food and Agriculture Organization of the United Nations) 2009 Expert consultation on GHG emissions and mitigation potentials in the agricultural, forestry and fisheries sectors (Rome: FAO) FAO 2011 Linking Sustainability and Climate Financing: Implications for Agriculture (Rome: FAO) FAO 2012 FAOSTAT online database (http://faostat.fao.org/) Global Research Alliance on Agricultural Greenhouse Gases 2012 www.globalresearchalliance.org/ Herold M and Skutsch M 2011 Monitoring, reporting and verification for national REDD+ programmes: two proposals Environ. Res. Lett. 6 014002 Hosonuma N, Herold M, De Sy V, De Fries R S, Brockhaus M, Verchot L, Angelsen A and Romijn E 2012 An assessment of deforestation and forest degradation drivers in developing countries Environ. Res. Lett. 7 044009 IPCC (Intergovernmental Panel on Climate Change) 1996 Guidelines for National Greenhouse Gas Inventories (Paris: Organisation for Economic Co-operation and Development) IPCC 2003 Good Practice Guidance for Land Use, Land-Use Change and Forestry (Hayama: IPCC National Greenhouse Gas Inventories Programme) IPCC 2006 Guidelines for National Greenhouse Gas Inventories. 
Prepared by the National Greenhouse Gas Inventories Programme ed H S Eggleston et al (Hayama: IGES) IPCC 2012 IPCC Emission Factor Database (EFDB) (www.ipcc-nggip.iges.or.jp/EFDB/main.php) Kissinger G, Herold M and De Sy V 2012 Drivers of Deforestation and Forest Degradation: A Synthesis Report for REDD+ Policymakers (Vancouver: Lexeme Consulting) (www.decc.gov.uk/assets/decc/11/tackling-climate-change/international-climate-change/6316-drivers-deforestation-report.pdf) Murphy D, McCandless M and Drexhage J 2010 Expanding Agriculture's Role in the International Climate Change Regime: Capturing the Opportunities (Winnipeg: International Institute for Sustainable Development) Nihart A 2012 unpublished data Olander L, Wollenberg L and Van de Bogert A 2013 Understanding the users and uses of agricultural greenhouse gas information CCAFS/NI T-AGG Report (in progress) Olander L P and Haugen-Kozyra K with contributions from Del Grosso S, Izaurralde C, Malin D, Paustian K and Salas W 2011 Using Biogeochemical Process Models to Quantify Greenhouse Gas Mitigation from Agricultural Management Projects (Durham, NC: Nicholas Institute for Environmental Policy Solutions, Duke University) (http://nicholasinstitute.duke.edu/ecosystem/t-agg/using-biogeochemical-process) Romijn J E, Herold M, Kooistra L, Murdiyarso D and Verchot L 2012 Assessing capacities of non-Annex I countries for national forest monitoring in the context of REDD+ Environ. Sci. Policy 20 33-48 Smith P et al 2007 Agriculture Climate Change 2007: Mitigation. Contribution of Working Group III to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change ed B Metz, O R Davidson, P R Bosch, R Dave and L A Meyer (Cambridge: Cambridge University Press) Smith P et al 2008 Greenhouse gas mitigation in agriculture Phil. Trans. R. Soc. 
B 363 789-813 UNFCCC (United Nations Framework Convention on Climate Change) 2008 Financial support provided by the Global Environment Facility for the preparation of National Communications from Parties not included in Annex I to the Convention FCCC/SBI/2008/INF.10 (http://unfccc.int/resource/docs/2008/sbi/eng/inf10.pdf) UNFCCC 2012 GHG Data from UNFCCC (http://unfccc.int/ghg_data/ghg_data_unfccc/items/4146.php) USDA (US Department of Agriculture) 2011 Agricultural Research Service (www.ars.usda.gov/research/programs/programs.htm?np_code=204&docid=17271) VCS (Verified Carbon Standard) 2011 New Methodology: VM0017 Sustainable Agricultural Land Management (http://v-c-s.org/SALM_methodology_approved) * We dedicate this special issue to the memory of Daniel Martino, a generous leader in greenhouse gas quantification and accounting from agriculture, land-use change, and forestry.
Merikanto, Joonas; Duplissy, Jonathan; Määttänen, Anni; Henschel, Henning; Donahue, Neil M.; Brus, David; Schobesberger, Siegfried; Kulmala, Markku; Vehkamäki, Hanna
2016-02-01
We derive a version of Classical Nucleation Theory normalized by quantum chemical results on sulfuric acid-water hydration to describe neutral and ion-induced particle formation in the binary sulfuric acid-water system. The theory is extended to treat the kinetic regime where the nucleation free energy barrier vanishes at high sulfuric acid concentrations or low temperatures. In the kinetic regime particle formation rates become proportional to the sulfuric acid concentration to the second power in the neutral system or the first power in the ion-induced system. We derive simple general expressions for the prefactors in kinetic-type and activation-type particle formation calculations applicable also to more complex systems stabilized by other species. The theory predicts that the binary water-sulfuric acid system can produce strong new particle formation in the free troposphere both through barrier crossing and through kinetic pathways. At cold stratospheric and upper free tropospheric temperatures neutral formation dominates the binary particle formation rates. At midtropospheric temperatures the ion-induced pathway becomes the dominant mechanism. However, even the ion-induced binary mechanism does not produce significant particle formation in warm boundary layer conditions, as it requires temperatures below 0°C to take place at atmospheric concentrations. The theory successfully reproduces the characteristics of measured charged and neutral binary particle formation in CERN CLOUD3 and CLOUD5 experiments, as discussed in a companion paper.
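The kinetic-regime scalings stated in this abstract can be illustrated directly; the prefactors below are placeholder values, not the paper's derived expressions, and serve only to show the power-law behavior:

```python
# Illustrative power laws from the abstract's kinetic regime:
# neutral formation rate ~ [H2SO4]^2, ion-induced rate ~ [H2SO4]^1.
# k_neutral and k_ion are assumed placeholder prefactors.
def j_neutral(h2so4, k_neutral=1e-14):
    return k_neutral * h2so4**2

def j_ion(h2so4, k_ion=1e-7):
    return k_ion * h2so4

c = 1e7  # molecules cm^-3, an assumed sulfuric acid concentration
ratio_n = j_neutral(2 * c) / j_neutral(c)  # doubling quadruples the rate
ratio_i = j_ion(2 * c) / j_ion(c)          # doubling only doubles it
print(ratio_n, ratio_i)
```

The different exponents are what let measurements discriminate the neutral pathway from the ion-induced one.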
Verb aspect, alternations and quantification
Svetla Koeva
2015-11-01
In this paper we briefly discuss the nature of Bulgarian verb aspect and argue that the members of a verb aspect pair are different lexical units with different (although related) meanings, different argument structure (reflecting categories, explicitness, and referential status of arguments), and different sets of semantic and syntactic alternations. The verb prefixes resulting in the derivation of perfective verbs can in some cases be interpreted as lexical quantifiers as well. Thus Bulgarian verb aspect is related (in different ways) both to the potential for the generation of alternations and to prefixal lexical quantification. It is shown that the scope of the lexical quantification by means of verbal prefixes is the quantified verb phrase, and that the scope remains constant in all derived alternations. The paper concerns the basic issues of these complex problems, while the detailed description of the conditions satisfying a particular alternation or a particular lexical quantification is the subject of a more detailed study.
Pittner, Jiří; Piecuch, P.
2009-01-01
Roč. 107, 8-12 (2009), s. 1209-1221. ISSN 0026-8976 R&D Projects: GA ČR GA203/07/0070; GA AV ČR 1ET400400413; GA AV ČR KSK4040110 Institutional research plan: CEZ:AV0Z40400503 Keywords : multireference coupled cluster theory * method of moments of coupled cluster equations * state-universal multireference coupled cluster approach Subject RIV: CF - Physical ; Theoretical Chemistry Impact factor: 1.634, year: 2009
A homogeneous model that simulates the stationary behavior of steam generators of PWR-type reactors and uses the differential formalism of perturbation theory for analysing the sensitivity of linear and non-linear responses is presented. The PERGEVAP computer code, which calculates the temperature distribution in the steam generator and the associated importance function, is developed. The code also evaluates the effects of thermohydraulic parameter variation on selected functionals. The obtained results are compared with results obtained by the GEVAP computer code. (M.C.K.)
Johnston Marie; Dijkstra Rob; Bosch Marije; Francis Jill J; Eccles Martin P; Hrisos Susan; Grol Richard; Kaner Eileen FS; Steen Ian N
2009-01-01
Abstract Background Long term management of patients with Type 2 diabetes is well established within Primary Care. However, despite extensive efforts to implement high quality care both service provision and patient health outcomes remain sub-optimal. Several recent studies suggest that psychological theories about individuals' behaviour can provide a valuable framework for understanding generalisable factors underlying health professionals' clinical behaviour. In the context of the team mana...
Uncertainty quantification and stochastic modeling with Matlab
Souza de Cursi, Eduardo
2015-01-01
Uncertainty Quantification (UQ) is a relatively new research area which describes the methods and approaches used to supply quantitative descriptions of the effects of uncertainty, variability and errors in simulation problems and models. It is rapidly becoming a field of increasing importance, with many real-world applications within statistics, mathematics, probability and engineering, but also within the natural sciences. Literature on the topic has up until now been largely based on polynomial chaos, which raises difficulties when considering different types of approximation and does no
After noting some advantages of using perturbation theory some of the various types are related on a chart and described, including many-body nonlinear summations, quartic force-field fit for geometry, fourth-order correlation approximations, and a survey of some recent work. Alternative initial approximations in perturbation theory are also discussed. 25 references
Ion N.Chiuta
2009-05-01
The paper determines relations for shielding effectiveness relative to several variables, including metal type, metal properties, thickness, distance, frequency, etc. It starts by presenting some relationships regarding magnetic, electric and electromagnetic fields as a pertinent background to understanding and applying field theory. Since the literature on electromagnetic compatibility is replete with discussions of Maxwell's equations and field theory, only a few aspects are presented.
Uncertainty Quantification in Climate Modeling
Sargsyan, K.; Safta, C.; Berry, R.; Debusschere, B.; Najm, H.
2011-12-01
We address challenges that sensitivity analysis and uncertainty quantification methods face when dealing with complex computational models. In particular, climate models are computationally expensive and typically depend on a large number of input parameters. We consider the Community Land Model (CLM), which consists of a nested computational grid hierarchy designed to represent the spatial heterogeneity of the land surface. Each computational cell can be composed of multiple land types, and each land type can incorporate one or more sub-models describing the spatial and depth variability. Even for simulations at a regional scale, the computational cost of a single run is quite high and the number of parameters that control the model behavior is very large. Therefore, the parameter sensitivity analysis and uncertainty propagation face significant difficulties for climate models. This work employs several algorithmic avenues to address some of the challenges encountered by classical uncertainty quantification methodologies when dealing with expensive computational models, specifically focusing on the CLM as a primary application. First of all, since the available climate model predictions are extremely sparse due to the high computational cost of model runs, we adopt a Bayesian framework that effectively incorporates this lack-of-knowledge as a source of uncertainty, and produces robust predictions with quantified uncertainty even if the model runs are extremely sparse. In particular, we infer Polynomial Chaos spectral expansions that effectively encode the uncertain input-output relationship and allow efficient propagation of all sources of input uncertainties to outputs of interest. Secondly, the predictability analysis of climate models strongly suffers from the curse of dimensionality, i.e. the large number of input parameters. 
While single-parameter perturbation studies can be efficiently performed in a parallel fashion, the multivariate uncertainty analysis requires a large number of training runs, as well as an output parameterization with respect to a fast-growing spectral basis set. To alleviate this issue, we adopt the Bayesian view of compressive sensing, well-known in the image recognition community. The technique efficiently finds a sparse representation of the model output with respect to a large number of input variables, effectively obtaining a reduced order surrogate model for the input-output relationship. The methodology is preceded by a sampling strategy that takes into account input parameter constraints by an initial mapping of the constrained domain to a hypercube via the Rosenblatt transformation, which preserves probabilities. Furthermore, a sparse quadrature sampling, specifically tailored for the reduced basis, is employed in the unconstrained domain to obtain accurate representations. The work is supported by the U.S. Department of Energy's CSSEF (Climate Science for a Sustainable Energy Future) program. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
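As a rough sketch of the surrogate idea described above (not the CLM workflow itself; the toy model, basis order, and sample counts below are invented for illustration), a Polynomial Chaos expansion can be fit to a modest number of model runs by regression, after which the output mean and variance are read directly off the coefficients via orthogonality:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "expensive model": y = f(x1, x2), inputs uniform on [-1, 1].
def model(x):
    return np.exp(0.3 * x[:, 0]) + 0.5 * x[:, 1] ** 2

# Legendre polynomials up to degree 2: P0 = 1, P1 = x, P2 = (3x^2 - 1)/2.
def legendre(deg, x):
    if deg == 0:
        return np.ones_like(x)
    if deg == 1:
        return x
    return (3.0 * x ** 2 - 1.0) / 2.0

# Total-order-2 multi-indices for the 2-D tensorized basis.
multi_indices = [(0, 0), (1, 0), (0, 1), (2, 0), (1, 1), (0, 2)]

X = rng.uniform(-1, 1, size=(200, 2))       # "training" model runs
y = model(X)
Phi = np.column_stack([legendre(i, X[:, 0]) * legendre(j, X[:, 1])
                       for i, j in multi_indices])
coef, *_ = np.linalg.lstsq(Phi, y, rcond=None)

# Mean is the constant coefficient; variance follows from orthogonality:
# Var = sum_k c_k^2 <Psi_k^2>, with <P_n^2> = 1/(2n+1) for uniform inputs.
norms = np.array([1.0 / ((2 * i + 1) * (2 * j + 1)) for i, j in multi_indices])
pc_mean = coef[0]
pc_var = np.sum(coef[1:] ** 2 * norms[1:])

# Cross-check against brute-force Monte Carlo on fresh samples.
Xmc = rng.uniform(-1, 1, size=(200000, 2))
ymc = model(Xmc)
```

Once fitted, the surrogate replaces the expensive model for propagation and sensitivity studies; in the Bayesian compressive-sensing variant discussed above, the least-squares step is replaced by a sparsity-promoting inference over a much larger basis.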
Analysis of New Type Air-conditioning for Loom Based on CFD Simulation and Theory of Statistics
Ruiliang Yang; Yide Zhou; Nannan Zhao; Gaoju Song
2011-01-01
Based on the theory of statistics, the main factors affecting the performance of large- and small-zone ventilation in a loom workshop are studied by CFD simulation in this paper. Firstly, an orthogonal experimental table with four factors and three levels is applied to the CFD simulation, and the ranking of the four factors from major to minor is obtained, which can provide a theoretical basis for design and operation. Then the single-factor experiment method is applied to the CFD simulation, and the effect of changing a certain factor can be obtained w...
CORTÉS CASTRO, LUIS ALBERTO
2010-01-01
Coronary heart disease is the first cause of mortality among Colombia's female population (1) and constitutes a health problem that implies deterioration of the quality of life of this group. The objective of this study was to categorize ischemic thoracic pain in women in light of the Theory of Unpleasant Symptoms. The study was designed to be descriptive-exploratory with a qualitative-quantitative approach. The information was collected by means of a semi-structured interview and sampl...
Quantification and Negation in Event Semantics
Lucas Champollion
2010-12-01
Recently, it has been claimed that event semantics does not go well together with quantification, especially if one rejects syntactic, LF-based approaches to quantifier scope. This paper shows that such fears are unfounded, by presenting a simple, variable-free framework which combines a Neo-Davidsonian event semantics with a type-shifting based account of quantifier scope. The main innovation is that the event variable is bound inside the verbal denotation, rather than at sentence level by existential closure. Quantifiers can then be interpreted in situ. The resulting framework combines the strengths of event semantics and type-shifting accounts of quantifiers and thus does not force the semanticist to posit either a default underlying word order or a syntactic LF-style level. It is therefore well suited for applications to languages where word order is free and quantifier scope is determined by surface order. As an additional benefit, the system leads to a straightforward account of negation, which has also been claimed to be problematic for event-based frameworks.
References
Barker, Chris. 2002. ‘Continuations and the nature of quantification’. Natural Language Semantics 10: 211–242. http://dx.doi.org/10.1023/A:1022183511876
Barker, Chris & Shan, Chung-chieh. 2008. ‘Donkey anaphora is in-scope binding’. Semantics and Pragmatics 1: 1–46.
Beaver, David & Condoravdi, Cleo. 2007. ‘On the logic of verbal modification’. In Maria Aloni, Paul Dekker & Floris Roelofsen (eds.), Proceedings of the Sixteenth Amsterdam Colloquium, 3–9. Amsterdam, Netherlands: University of Amsterdam.
Beghelli, Filippo & Stowell, Tim. 1997. ‘Distributivity and negation: The syntax of each and every’. In Anna Szabolcsi (ed.), Ways of scope taking, 71–107. Dordrecht, Netherlands: Kluwer.
Brasoveanu, Adrian. 2010. ‘Modified Numerals as Post-Suppositions’. In Maria Aloni, Harald Bastiaanse, Tikitu de Jager & Katrin Schulz (eds.),
‘Logic, Language and Meaning’, Lecture Notes in Computer Science, vol. 6042, 203–212. Berlin, Germany: Springer.
Carlson, Gregory N. 1977. Reference to Kinds in English. Ph.D. thesis, University of Massachusetts, Amherst, MA.
Carlson, Gregory N. 1984. ‘Thematic roles and their role in semantic interpretation’. Linguistics 22: 259–279. http://dx.doi.org/10.1515/ling.1984.22.3.259
Champollion, Lucas. 2010. Parts of a whole: Distributivity as a bridge between aspect and measurement. Ph.D. thesis, University of Pennsylvania, Philadelphia, PA.
Champollion, Lucas, Tauberer, Josh & Romero, Maribel. 2007. ‘The Penn Lambda Calculator: Pedagogical software for natural language semantics’. In Tracy Holloway King & Emily Bender (eds.), Proceedings of the Grammar Engineering Across Frameworks (GEAF 2007) Workshop. Stanford, CA: CSLI Online Publications.
Condoravdi, Cleo. 2002. ‘Punctual until as a scalar NPI’. In Sharon Inkelas & Kristin Hanson (eds.), The nature of the word, 631–654. Cambridge, MA: MIT Press.
Csirmaz, Aniko. 2006. ‘Aspect, Negation and Quantifiers’. In Liliane Haegeman, Joan Maling, James McCloskey & Katalin E. Kiss (eds.), Event Structure And The Left Periphery, Studies in Natural Language and Linguistic Theory, vol. 68, 225–253. Springer Netherlands.
Davidson, Donald. 1967. ‘The logical form of action sentences’. In Nicholas Rescher (ed.), The logic of decision and action, 81–95. Pittsburgh, PA: University of Pittsburgh Press.
de Swart, Henriëtte. 1996. ‘Meaning and use of not . . . until’. Journal of Semantics 13: 221–263. http://dx.doi.org/10.1093/jos/13.3.221
de Swart, Henriëtte & Molendijk, Arie. 1999. ‘Negation and the temporal structure of narrative discourse’. Journal of Semantics 16: 1–42. http://dx.doi.org/10.1093/jos/16.1.1
Dowty, David R. 1979. Word meaning and Montague grammar. Dordrecht, Netherlands: Reidel.
Eckardt, Regine. 2010. ‘A Logic for Easy Linking Semantics’. In Maria Aloni, Harald Bastiaanse, Tikitu de Jager & Katrin Schulz (eds.), ‘Logic, Language and Meaning’, Lecture Notes in Computer Science, vol. 6042, 274–283. Berlin / Heidelberg: Springer. http://dx.doi.org/10.1007/978-3-642-14287-1_28
Giannakidou, Anastasia. 2002. ‘UNTIL, aspect and negation: A novel argument for two untils’. In Semantics and Linguistic Theory (SALT), vol. 12, 84–103.
Groenendijk, Jeroen & Stokhof, Martin. 1990. ‘Dynamic Montague grammar’. In Lászlo Kálman & Lászlo Polos (eds.), Papers from the Second Symposium on Logic and Language. Budapest, Hungary: Akadémiai Kiadó.
Heim, Irene & Kratzer, Angelika. 1998. Semantics in Generative Grammar. Oxford, UK: Blackwell Publishing.
Hendriks, Herman. 1993. Studied flexibility. Ph.D. thesis, University of Amsterdam, Amsterdam, Netherlands.
Jacobson, Pauline. 1999. ‘Towards a variable-free semantics’. Linguistics and Philosophy 117–184. http://dx.doi.org/10.1023/A:1005464228727
Krifka, Manfred. 1989. ‘Nominal reference, temporal constitution and quantification in event semantics’. In Renate Bartsch, Johan van Benthem & P. van Emde Boas (eds.), Semantics and contextual expression, 75–115. Dordrecht, Netherlands: Foris.
Krifka, Manfred. 1998. ‘The origins of telicity’. In Susan Rothstein (ed.), Events and grammar, 197–235. Dordrecht, Netherlands: Kluwer.
Krifka, Manfred. 1999. ‘At Least Some Determiners Aren’t Determiners’. In K. Turner (ed.), The Semantics/Pragmatics Interface from Different Points of View, 257–291. Amsterdam, Netherlands: Elsevier.
Landman, Fred. 1996. ‘Plurality’. In Shalom Lappin (ed.), Handbook of Contemporary Semantics, 425–457. Oxford, UK: Blackwell Publishing.
Landman, Fred. 2000. Events and plurality: The Jerusalem lectures. Dordrecht, Netherlands: Kluwer.
May, Robert. 1985. Logical form: Its structure and derivation. Cambridge, MA: MIT Press.
Parsons, Terence. 1990. Events in the semantics of English. Cambridge, MA: MIT Press.
Partee, Barbara H. 1973. ‘Some structural analogies between tenses and pronouns in English’. The Journal of Philosophy 70: 601–609. http://dx.doi.org/10.2307/2025024
Partee, Barbara H. 1987. ‘Noun phrase interpretation and type-shifting principles’. In Jeroen Groenendijk, Dick de Jongh & Martin Stokhof (eds.), Studies in Discourse Representation Theory and the Theory of Generalized Quantifiers, 115–143. Dordrecht, Netherlands: Foris.
Rathert, Monika. 2004. Textures of time. Berlin, Germany: Akademie Verlag.
Smith, Steven Bradley. 1975. Meaning and negation. The Hague, Netherlands: Mouton.
von Stechow, Arnim. 2009. ‘Tenses in compositional semantics’. In Wolfgang Klein & Ping Li (eds.), The expression of time, 129–166. Berlin, Germany: Mouton de Gruyter.
Winter, Yoad & Zwarts, Joost. 2011. ‘Event semantics and Abstract Categorial Grammar’. In Makoto Kanazawa, Marcus Kracht & Hiroyuki Seki (eds.), Proceedings of Mathematics of Language 12, Lecture Notes in Computer Science / Lecture Notes in Artificial Intelligence, vol. 6878, 174–191. Berlin / Heidelberg: Springer.
Zucchi, Sandro & White, Michael. 2001. ‘Twigs, sequences and the temporal constitution of predicates’. Linguistics and Philosophy 24: 187–222. http://dx.doi.org/10.1023/A:1005690022190
Sensitivity calculations are very important in the design and safety of nuclear reactor cores. Large codes including a great number of physical considerations have been used to perform sensitivity studies; however, these codes need long computation times, involving high costs. Perturbation theory provides an efficient and economical method of performing sensitivity analysis. The present work applies perturbation theory (matricial formalism) to a simplified model of DNB (Departure from Nucleate Boiling) analysis to perform sensitivity calculations in PWR cores. Expressions for the sensitivity coefficients of enthalpy and coolant velocity with respect to coolant density and hot-channel area were developed from the proposed model. The CASNUR.FOR code to evaluate these sensitivity coefficients was written in Fortran. The comparison between results obtained from the matricial formalism of perturbation theory and those obtained directly from the proposed model demonstrates the efficiency and potential of this perturbation method for sensitivity calculations in nuclear reactor cores (author). 23 refs, 4 figs, 7 tabs
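The appeal of the matricial perturbation approach can be sketched on a toy linear model (the 2x2 system, response, and parameter below are invented for illustration and are not the DNB channel equations): one solves a single adjoint (importance) system, after which the sensitivity coefficient follows without re-running the model for each perturbed parameter.

```python
import numpy as np

# Toy linear model A(p) x = b with scalar response R = c.T @ x.
def A(p):
    return np.array([[2.0 + p, -1.0],
                     [-1.0,     2.0]])

dA_dp = np.array([[1.0, 0.0],      # derivative of A with respect to p
                  [0.0, 0.0]])
b = np.array([1.0, 0.0])
c = np.array([0.0, 1.0])
p0 = 0.5

x = np.linalg.solve(A(p0), b)

# Adjoint (importance) equation: A.T lam = c; then dR/dp = -lam.T dA/dp x.
lam = np.linalg.solve(A(p0).T, c)
sens_adjoint = -lam @ dA_dp @ x

# Direct check: recompute the full model at perturbed parameter values.
dp = 1e-6
R = lambda p: c @ np.linalg.solve(A(p), b)
sens_fd = (R(p0 + dp) - R(p0 - dp)) / (2 * dp)
```

For this system R(p) = 1/(2p + 3), so the exact sensitivity at p0 = 0.5 is -2/16 = -0.125; the adjoint result matches it to machine precision while the direct route needs two extra model solves per parameter.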
Simple tangent, hard site chains near a hard wall are modeled with a density functional (DF) theory that uses the direct correlation function, c(r), as its "input." Two aspects of this DF theory are focused upon: (1) the consequences of variations in the detailed form of c(r); and (2) the correct way to introduce c(r) into the DF formalism. The most important aspect of c(r) is found to be its integrated value, c(0). Indeed, it appears that, for fixed c(0), all reasonable guesses of the detailed shape of c(r) result in surprisingly similar density distributions, ρ(r). Of course, the more accurate the c(r), the better the ρ(r). As long as the length scale introduced by c(r) is roughly the hard site diameter and as long as the solution remains liquid-like, the ρ(r) is found to be in good agreement with simulation results. The c(r) is used in DF theory to calculate the medium-induced potential, UM(r), from the density distribution, ρ(r). The form of UM(r) can be chosen to be one of a number of different forms. It is found that the forms for UM(r) which yield the most accurate results for the wall problem are also those which were suggested as accurate in previous, related studies. (c) 2000 American Institute of Physics
Bianchi type-V cosmological models with perfect fluid and heat flow in Saez–Ballester theory
Shri Ram; M Zeyauddin; C P Singh
2009-02-01
In this paper we discuss the variation law for Hubble's parameter with the average scale factor in a spatially homogeneous and anisotropic Bianchi type-V space-time model, which yields a constant value of the deceleration parameter. We derive two laws of variation of the average scale factor with cosmic time, one of power-law type and the other of exponential form. Exact solutions of the Einstein field equations with perfect fluid and heat conduction are obtained for the Bianchi type-V space-time in these two types of cosmologies. In the power-law cosmology, the solutions correspond to a cosmological model which starts expanding from the singular state with a positive deceleration parameter. In the exponential cosmology, we present an accelerating non-singular model of the Universe. We find that a constant value of the deceleration parameter is reasonable for the present-day Universe and gives an appropriate description of its evolution. We also discuss the different types of physical and kinematical behaviour of the models in these two types of cosmologies.
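The two variation laws can be checked symbolically. In this sketch (a generic check, not the authors' derivation; the exponent parameterization is ours), a power-law scale factor a = t^(1/n) gives the constant deceleration parameter q = -a a''/a'^2 = n - 1, positive (decelerating, singular at t = 0) for n > 1, while a = exp(k t) gives q = -1 (accelerating, non-singular):

```python
import sympy as sp

t, n, k = sp.symbols('t n k', positive=True)

def deceleration(a):
    # Deceleration parameter q = -a * a'' / a'^2.
    return sp.simplify(-a * sp.diff(a, t, 2) / sp.diff(a, t) ** 2)

q_power = deceleration(t ** (1 / n))   # power-law cosmology: q = n - 1
q_exp = deceleration(sp.exp(k * t))    # exponential cosmology: q = -1
```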
Uncertainty quantification in hybrid dynamical systems
Sahai, Tuhin; Pasini, Jos Miguel
2013-03-01
Uncertainty quantification (UQ) techniques are frequently used to ascertain output variability in systems with parametric uncertainty. Traditional algorithms for UQ are either system-agnostic and slow (such as Monte Carlo) or fast with stringent assumptions on smoothness (such as polynomial chaos and Quasi-Monte Carlo). In this work, we develop a fast UQ approach for hybrid dynamical systems by extending the polynomial chaos methodology to these systems. To capture discontinuities, we use a wavelet-based Wiener-Haar expansion. We develop a boundary layer approach to propagate uncertainty through separable reset conditions. We also introduce a transport theory based approach for propagating uncertainty through hybrid dynamical systems. Here the expansion yields a set of hyperbolic equations that are solved by integrating along characteristics. The solution of the partial differential equation along the characteristics allows one to quantify uncertainty in hybrid or switching dynamical systems. The above methods are demonstrated on example problems.
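For intuition about why smoothness assumptions fail here, consider a minimal hybrid system, a bouncing ball with an uncertain restitution coefficient (the dynamics, parameters, and distribution below are invented for illustration and are unrelated to the paper's examples). The reset map at each guard crossing makes the output a non-smooth function of the uncertain parameter, which is what breaks global polynomial expansions; brute-force Monte Carlo still applies:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hybrid system: free fall dh/dt = v, dv/dt = -g, with the reset map
# v -> -e*v applied at the guard h = 0. The restitution e is uncertain.
def height_at(T, e, h0=1.0, g=9.81):
    t, h, v = 0.0, h0, 0.0
    for _ in range(1000):                 # bounce cap guards against Zeno behavior
        # time of flight to the next impact from state (h, v)
        dt = (v + np.sqrt(v * v + 2.0 * g * h)) / g
        if t + dt >= T:
            s = T - t
            return h + v * s - 0.5 * g * s * s
        t += dt
        v = -e * (v - g * dt)             # reset: reverse and damp the velocity
        h = 0.0
    return 0.0

# Before the first impact the flight is smooth and hand-checkable.
h_early = height_at(0.2, 0.7)             # equals 1 - 0.5*9.81*0.2**2

# Monte Carlo propagation of e ~ Uniform(0.5, 0.9) to the height at T = 1 s.
samples = rng.uniform(0.5, 0.9, size=5000)
heights = np.array([height_at(1.0, e) for e in samples])
mc_mean, mc_std = heights.mean(), heights.std()
```

A Wiener-Haar (wavelet) expansion, as used in the paper, handles such outputs because its basis functions are localized rather than global polynomials.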
Uncertainty Quantification in Hybrid Dynamical Systems
Sahai, Tuhin
2011-01-01
Uncertainty quantification (UQ) techniques are frequently used to ascertain output variability in systems with parametric uncertainty. Traditional algorithms for UQ are either system-agnostic and slow (such as Monte Carlo) or fast with stringent assumptions on smoothness (such as polynomial chaos and Quasi-Monte Carlo). In this work, we develop a fast UQ approach for hybrid dynamical systems by extending the polynomial chaos methodology to these systems. To capture discontinuities, we use a wavelet-based Wiener-Haar expansion. We develop a boundary layer approach to propagate uncertainty through separable reset conditions. We also introduce a transport theory based approach for propagating uncertainty through hybrid dynamical systems. Here the expansion yields a set of hyperbolic equations that are solved by integrating along characteristics. The solution of the partial differential equation along the characteristics allows one to quantify uncertainty in hybrid or switching dynamical systems. The above method...
Palla, Mirko; Bosco, Filippo Giacomo; Yang, Jaeyoung; Rindzevicius, Tomas; Alstrøm, Tommy Sonne; Schmidt, Michael Stenbak; Lin, Qiao; Ju, Jingyue; Boisen, Anja
This paper presents the development of a novel statistical method for quantifying trace amounts of biomolecules by surface-enhanced Raman spectroscopy (SERS) using a rigorous, single molecule (SM) theory based mathematical derivation. Our quantification framework could be generalized for planar SERS substrates, in which the nanostructured features can be approximated as a closely spaced electromagnetic dimer problem. The potential for SM detection was also shown, which opens up an exciting opportunity in the field of SERS quantification.
The Types of Axisymmetric Exact Solutions Closely Related to n-SOLITONS for Yang-Mills Theory
Zhong, Zai Zhe
In this letter, we point out that if a symmetric 2×2 real matrix M(ρ,z) obeys the Belinsky-Zakharov equation and |det(M)|=1, then an axisymmetric Bogomol'nyi field exact solution for the Yang-Mills-Higgs theory can be given. By using the inverse scattering technique, some special Bogomol'nyi field exact solutions, which are closely related to the true solitons, are generated. In particular, the Schwarzschild-like solution is a two-soliton-like solution.
Pressure-induced phase transformation in zircon-type orthovanadate SmVO4 from experiment and theory
Popescu, C.; Garg, Alka B; Errandonea, D.; Sans, J. A.; Rodriguez-Hernandez, P.; Radescu, S; Munoz, A.; Achary, S.N.; Tyagi, A. K.
2016-01-01
The compression behavior of zircon-type samarium orthovanadate, SmVO4, has been investigated using synchrotron-based powder x-ray diffraction and ab-initio calculations up to 21 GPa. The results indicate the instability of ambient zircon phase at around 6 GPa, which transforms to a high-density scheelite-type phase. The high-pressure phase remains stable up to 21 GPa, the highest pressure reached in the present investigations. On pressure release, the scheelite phase is recovered. Crystal str...
Starting from an analysis of the metallogenetic epoch and spatial distribution of typical interlayer oxidation zone sandstone-type uranium deposits in China and abroad, and of their relation to basin evolution, the authors propose that the last unconformity mainly controls the metallogenetic epoch, while the strength of tectonic activity after the last unconformity determines the deposit space. An exploration strategy whose kernel is to work from the newest events back to the oldest is put forward. The means and methods of using SAR technology to identify the key ore-controlling factors are discussed, and an application study in the Eastern Jungar Basin is performed.
MAMA Software Features: Quantification Verification Documentation-1
Ruggiero, Christy E. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Porter, Reid B. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2014-05-21
This document reviews the verification of the basic shape quantification attributes in the MAMA software against hand calculations in order to show that the calculations are implemented mathematically correctly and give the expected quantification results.
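To give a flavor of what verifying shape quantification against hand calculations looks like (a generic sketch, not the MAMA code or its attribute set; the functions and test shape are ours), a 3 x 4 rectangle has area 12, perimeter 14, and circularity 4*pi*A/P^2 ~ 0.77 by hand, which the computed attributes must reproduce:

```python
import math

# Basic shape attributes for a polygon given as a list of (x, y) vertices.
def area(poly):
    # Shoelace formula.
    n = len(poly)
    s = sum(poly[i][0] * poly[(i + 1) % n][1] - poly[(i + 1) % n][0] * poly[i][1]
            for i in range(n))
    return abs(s) / 2.0

def perimeter(poly):
    n = len(poly)
    return sum(math.dist(poly[i], poly[(i + 1) % n]) for i in range(n))

def circularity(poly):
    # 4*pi*A / P^2: equals 1 for a circle, less for elongated shapes.
    return 4 * math.pi * area(poly) / perimeter(poly) ** 2

# Hand-checkable case: a 3 x 4 axis-aligned rectangle.
rect = [(0, 0), (4, 0), (4, 3), (0, 3)]
```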
Frequency conversion (FC) and type-II parametric down-conversion (PDC) processes serve as basic building blocks for the implementation of quantum optical experiments: type-II PDC enables the efficient creation of quantum states such as photon-number states and Einstein–Podolsky–Rosen (EPR)-states. FC gives rise to technologies enabling efficient atom–photon coupling, ultrafast pulse gates and enhanced detection schemes. However, despite their widespread deployment, their theoretical treatment remains challenging. Especially the multi-photon components in the high-gain regime as well as the explicit time-dependence of the involved Hamiltonians hamper an efficient theoretical description of these nonlinear optical processes. In this paper, we investigate these effects and put forward two models that enable a full description of FC and type-II PDC in the high-gain regime. We present a rigorous numerical model relying on the solution of coupled integro-differential equations that covers the complete dynamics of the process. As an alternative, we develop a simplified model that, at the expense of neglecting time-ordering effects, enables an analytical solution. While the simplified model approximates the correct solution with high fidelity in a broad parameter range, sufficient for many experimental situations, such as FC with low efficiency, entangled photon-pair generation and the heralding of single photons from type-II PDC, our investigations reveal that the rigorous model predicts a decreased performance for FC processes in quantum pulse gate applications and an enhanced EPR-state generation rate during type-II PDC, when EPR squeezing values above 12 dB are considered. (paper)
The potential distribution in cyclotron-type gaps with liner (outer electrodes at ground potential) is determined via Schwarz-Christoffel transformation as well as by computer analysis (relaxation method). First-order focusing formulas for both static as well as time-varying potentials are derived. In addition exact calculations were carried out by direct numerical integration of the equations of motion with a computer program. The numerical data permitted an accurate evaluation of the validity of the analytical approximation as well as further improvement of the theoretical formulas. Focusing relations are presented in a generalized form which shows the scaling laws and is readily applicable to different types of particles, energies or lens geometries. As an example, the theory is applied in the axial motion of ions in a cyclotron
Quantification of natural phenomena
Science is like a great spider's web in which unexpected connections appear, and it is therefore often difficult to foresee the consequences of new theories for existing ones. Physics is a clear example of this. Newton's laws of mechanics accurately describe the physical phenomena observable by means of our senses or with relatively unsophisticated instruments. After their formulation at the beginning of the XVIII century, these laws were recognized in the scientific world as a mathematical model of nature. Together with the laws of electrodynamics, developed in the XIX century, and of thermodynamics, they constitute what we call classical physics. The state of maturity of classical physics at the end of the last century was such that some scientists believed physics was reaching its end, having achieved a complete description of physical phenomena. The spider's web of knowledge was supposed to be finished, or at least very near completion. It was even said, arrogantly, that if the initial conditions of the universe were known, we could determine its state at any future moment. Two phenomena related to light would firmly prove how mistaken they were, creating unexpected connections in the great spider's web of knowledge and knocking down part of it. The thermal radiation of bodies, and the fact that light propagates at constant speed in vacuum, without an absolute frame of reference with respect to which this speed is measured, were the decisive factors in the construction of a new physics. The development of sophisticated measuring equipment gave access to more precise information, opening the microscopic world to observation and to the confirmation of existing theories.
A new type of hollow cathode discharge gun used in ion beam coating apparatus and theory analysis
In recent years, the hollow cathode discharge gun has been widely used in the metallurgical coating field. However, the design of its electromagnetic field has been unreasonable: the gun body is large, its structure intricate, and its operation inconvenient. To overcome these shortcomings, through many years of study the authors have developed a new type of hollow cathode discharge gun characterised by a simple structure, steady efficiency, convenient inspection, etc. This gun uses only one magnet as a combined focusing and deflecting pole instead of the usual four magnetic focusing poles. The electron beam can be made to strike exactly the centre of the crucible by controlling the distance between the gun and the crucible and the electromagnetic field density. The new type of gun has been successfully applied in industry. (author)
Renormalisation of non-commutative field theories (Renormalisation des théories de champs non commutatives)
Vignes-Tourneret, Fabien
2006-12-01
Very high energy physics needs a coherent description of the four fundamental forces. Non-commutative geometry is a promising mathematical framework which has already allowed the unification of general relativity and the standard model, at the classical level, thanks to the spectral action principle. Quantum field theories on non-commutative spaces are a first step towards the quantization of such a model. These theories cannot be obtained simply by writing the usual field theories on non-commutative spaces: such attempts exhibit a new type of divergence, called ultraviolet/infrared mixing, which prevents renormalisability. H. Grosse and R. Wulkenhaar showed, with an example, that a modification of the propagator may restore renormalisability. This thesis studies the generalisation of such a method. We studied two different models which allowed us to clarify certain aspects of non-commutative field theory. In x space, the major technical difficulty is due to oscillations in the interaction part. We generalised the results of T. Filk in order to exploit such oscillations as fully as possible. We were then able to distinguish between two kinds of mixing, renormalisable or not. We also bring the notion of orientability to light: the orientable non-commutative Gross-Neveu model is renormalisable without any modification of its propagator. The adaptation of multi-scale analysis to the matrix basis emphasised the importance of dual graphs and represents a first step towards a formulation of field theory independent of the underlying space.
Pressure-induced phase transformation in zircon-type orthovanadate SmVO4 from experiment and theory.
Popescu, C; Garg, Alka B; Errandonea, D; Sans, J A; Rodriguez-Hernández, P; Radescu, S; Muñoz, A; Achary, S N; Tyagi, A K
2016-01-27
The compression behavior of zircon-type samarium orthovanadate, SmVO4, has been investigated using synchrotron-based powder x-ray diffraction and ab initio calculations up to 21 GPa. The results indicate the instability of the ambient zircon phase at around 6 GPa, which transforms to a high-density scheelite-type phase. The high-pressure phase remains stable up to 21 GPa, the highest pressure reached in the present investigations. On pressure release, the scheelite phase is recovered. The crystal structure of the high-pressure phase and the equations of state for the zircon- and scheelite-type phases have been determined. Various compressibilities, such as the bulk, axial and bond compressibilities, estimated from the experimental data are found to be in good agreement with the results obtained from theoretical calculations. The calculated elastic constants show that the zircon structure becomes mechanically unstable beyond the transition pressure. Overall there is good agreement between the experimental and theoretical findings. PMID:26733093
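Equations of state of the kind fitted here can be illustrated with a minimal sketch (the volume and bulk-modulus numbers below are invented for illustration, not the fitted SmVO4 values). With the second-order Birch-Murnaghan form, where B0' is fixed at 4, the pressure is linear in the bulk modulus B0, so the fit reduces to one-parameter least squares:

```python
import numpy as np

# Second-order Birch-Murnaghan EOS: P(V) = B0 * 1.5 * (x^(7/3) - x^(5/3)),
# with x = V0/V and the pressure derivative B0' fixed at 4.
def bm2_basis(V, V0):
    x = V0 / V
    return 1.5 * (x ** (7.0 / 3.0) - x ** (5.0 / 3.0))

V0 = 320.0                       # zero-pressure cell volume (illustrative units)
B0_true = 130.0                  # bulk modulus in GPa (assumed, not SmVO4's)
V = np.linspace(250.0, 320.0, 15)
P = B0_true * bm2_basis(V, V0)   # synthetic pressure-volume data

# One-parameter linear least squares: B0 = (phi . P) / (phi . phi).
phi = bm2_basis(V, V0)
B0_fit = float(phi @ P / (phi @ phi))
```

In practice V0 and B0' are fitted too (making the problem nonlinear), and experimental P-V data carry noise; this sketch only shows the functional form being fitted.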
Quantified PIRT and Uncertainty Quantification for Computer Code Validation
Luo, Hu
This study is intended to investigate and propose a systematic method of uncertainty quantification for the computer code validation application. Uncertainty quantification has gained more and more attention in recent years. The U.S. Nuclear Regulatory Commission (NRC) requires the use of realistic best estimate (BE) computer codes following the rigorous Code Scaling, Applicability and Uncertainty (CSAU) methodology. In CSAU, the Phenomena Identification and Ranking Table (PIRT) was developed to identify important code uncertainty contributors. To support and examine the traditional PIRT with quantified judgments, this study proposes a novel approach, the Quantified PIRT (QPIRT), to identify important code models and parameters for uncertainty quantification. Dimensionless analysis of the code field equations to generate dimensionless (pi) groups, using code simulation results, serves as the foundation for the QPIRT. Uncertainty quantification using the DAKOTA code is proposed in this study based on a sampling approach. Nonparametric statistical theory identifies the fixed number of code runs needed to assure 95 percent probability and 95 percent confidence in the code uncertainty intervals.
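The "fixed number of code runs" comes from Wilks' nonparametric tolerance-limit formula. A small sketch (the function name and interface are ours) reproduces the standard 95/95 values: 59 runs when the sample maximum is used as the bound (first order), 93 when the second-largest sample is used:

```python
import math

# Wilks' formula: smallest n such that the `order`-th largest of n samples
# bounds the `coverage` quantile with probability at least `confidence`.
def wilks_runs(coverage=0.95, confidence=0.95, order=1):
    n = order
    while True:
        # P(fewer than `order` of n i.i.d. samples exceed the coverage quantile)
        conf = 1.0 - sum(math.comb(n, k)
                         * coverage ** (n - k) * (1.0 - coverage) ** k
                         for k in range(order))
        if conf >= confidence:
            return n
        n += 1
```

For order=1 this is just the smallest n with 1 - 0.95**n >= 0.95, i.e. n = 59, which is why 59 (or 93) code runs appear so often in BEPU analyses.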
Zhou, Guangfen [College of Science, Beijing Institute of Technology, Beijing 100081 (China); College of Science, Hebei University of Science and Technology, Shijiazhuang 050018 (China); Ren, Jie, E-mail: renjie@fudan.edu.cn [College of Science, Hebei University of Science and Technology, Shijiazhuang 050018 (China); Zhang, Shaowen [College of Science, Beijing Institute of Technology, Beijing 100081 (China)
2012-12-01
The initial reaction mechanism of atomic-layer-deposited TiO{sub 2} thin film on the silicon surface using Cp*Ti(OCH{sub 3}){sub 3} as the metal precursor has been investigated using density functional theory. We find that a Cp*Ti(OCH{sub 3}){sub 3} adsorbed state can be formed via the hydrogen bonding interaction between CH{sub 3}O ligands and the Si-OH sites, which is in good agreement with the quadrupole mass spectrometry (QMS) experimental observations. Moreover, the desorption of adsorbed Cp*Ti(OCH{sub 3}){sub 3} is favored in the thermodynamic equilibrium state. The elimination reaction of CH{sub 3}OH can occur more readily than that of Cp*H during the Cp*Ti(OCH{sub 3}){sub 3} pulse. This conclusion is also confirmed by the QMS experimental results. - Highlights: ► Initial reaction mechanism of atomic layer deposition of TiO{sub 2} has been studied. ► The Cp*Ti(OCH{sub 3}){sub 3} adsorbed state on the silicon surface is formed by hydrogen bonds. ► The elimination of CH{sub 3}OH occurs more readily than that of Cp*H in Cp*Ti(OCH{sub 3}){sub 3}. ► The Cp*Ti(OCH{sub 3}){sub 3} adsorbs on the silicon surface via the CH{sub 3}O ligand.
Exact solutions of Bianchi-type I and V spacetimes in the f(R) theory of gravity
In this paper, the crucial phenomenon of the expansion of the universe has been discussed. For this purpose, we study the vacuum solutions of Bianchi-type I and V spacetimes in the framework of f(R) gravity. In particular, we find two exact solutions in each case using the variation law of Hubble parameter. These solutions correspond to two models of the universe. The first solution gives a singular model, while the second solution provides a non-singular model. The physical behavior of these models is discussed. Moreover, the function of the Ricci scalar is evaluated for both models in each case.
The effect of anisotropy on the measurement, by muon spin rotation (μSR), of the London penetration depth in high-Tc uniaxial type II superconductors is considered in detail. Expressions are derived which allow the principal penetration depths, λ1 and λ2, to be determined from measurements of the μSR line width from single crystals. For polycrystalline, powder or sintered, samples an expression is derived which allows an effective penetration depth, λeff, to be determined from the measured μSR line width. Further, it is shown that for all anisotropy ratios λ2/λ1 greater than five, λ1 ≈ 0.81λeff. (author)
We study the probability distribution P(Λ) of the cosmological constant Λ in a specific set of KKLT-type models of supersymmetric IIB vacua. We show that, as we sweep through the quantized flux values in this flux compactification, P(Λ) diverges as Λ→0− and the median magnitude of Λ drops exponentially as the number of complex structure moduli h^{2,1} increases. Also, owing to the hierarchical and approximate no-scale structure, the probability of having a positive Hessian (mass-squared matrix) approaches unity as h^{2,1} increases.
The depressurization behaviour of a fibre-type thermal insulation has been investigated both by measurements with air and helium and with numerical models. A simple lumped parameter model has been used to reproduce the measured transients for air as well as for helium. All the experimental data have been obtained with reasonable accuracy by fitting two empirical parameters, the effective surfaces of the flow through the venting holes and the flow through the perforated tube. It is remarkable that the same parameters reproduce the experimental data for such different gases as air and helium. The dependence of the depressurization behaviour on the different parameters has been treated by a dimensional analysis. (Auth.)
Electronic structure of cage-type ternaries ARu2Al10 – Theory and XPS experiment (A = Ce and U)
Highlights: ► Electronic structures of (U;Ce)Ru2Al10 probed by X-ray photoemission and ab initio. ► Good agreement between valence-band XPS and calculated (within LDA) spectra. ► More itinerant character of the U 5f than Ce 4f electrons in these compounds. ► Reduced Fermi surface of CeRu2Al10 compared with the U-based system. -- Abstract: The electronic structures of the isomorphic, orthorhombic URu2Al10 and CeRu2Al10 aluminides have been studied by X-ray photoelectron spectroscopy (XPS) and ab initio calculations using the fully relativistic full-potential local-orbital (FPLO) method within the local density approximation (LDA). The calculated data for the former system revealed fairly sharp triple peaks of the U 5f states around the Fermi level (EF) and a large broad contribution from the Ru 4d states extending from EF to about 6.5 eV binding energy. Although the size and positions of the Ru 4d bands for the latter compound are quite similar to those of the U-based one, the double Ce 4f sharp peaks are placed almost completely above EF, underlining their mostly localized character. We have also analyzed the Fermi surfaces (FSs) of these two aluminides. The calculated results for both ternaries were then compared with our experimental XPS data for URu2Al10 and with such data for CeRu2Al10 available in the literature. There is fairly good agreement between theory and experiment; in particular, the spectral weight of the Ce 4f electrons below EF turns out to be very much reduced, reflecting a rather small f–c hybridization of these electrons compared to the considerably larger one in the U-based compound.
Wolpert, David H.
2005-01-01
Probability theory governs the outcome of a game: there is a distribution over mixed strategies, not a single "equilibrium". To predict a single mixed strategy one must use a loss function external to the game's players. This provides a quantification of any strategy's rationality. We prove that rationality falls as the cost of computation rises (for players who have not previously interacted). All of this extends to games with varying numbers of players.
Secret symmetries of type IIB superstring theory on AdS3 × S3 × M4
We establish features of so-called Yangian secret symmetries for AdS3 type IIB superstring backgrounds, thus verifying the persistence of such symmetries to this new instance of the AdS/CFT correspondence. Specifically, we find two a priori different classes of secret symmetry generators. One class of generators, anticipated from the previous literature, is more naturally embedded in the algebra governing the integrable scattering problem. The other class of generators is more elusive and somewhat closer in its form to its higher-dimensional AdS5 counterpart. All of these symmetries respect left-right crossing. In addition, by considering the interplay between left and right representations, we gain a new perspective on the AdS5 case. We also study the RTT-realisation of the Yangian in AdS3 backgrounds, thus establishing a new incarnation of the Beisert–de Leeuw construction. (paper)
Caramello, Olivia
2013-01-01
We introduce an abstract topos-theoretic framework for building Galois-type theories in a variety of different mathematical contexts; such theories are obtained from representations of certain atomic two-valued toposes as toposes of continuous actions of a topological group. Our framework subsumes in particular Grothendieck's Galois theory and allows one to build Galois-type equivalences in new contexts, such as graph theory and finite group theory.
It is shown that the effective Hamiltonian representation, as formulated in the author's papers, serves as a basis for distinguishing, in a broadband environment of an open quantum system, independent noise sources that determine, in terms of the stationary quantum Wiener and Poisson processes in the Markov approximation, the effective Hamiltonian and the equation for the evolution operator of the open system and its environment. General stochastic differential equations of generalized Langevin (non-Wiener) type for the evolution operator and the kinetic equation for the density matrix of an open system are obtained, which allow one to analyze the dynamics of a wide class of localized open systems in the Markov approximation. The main distinctive features of the dynamics of open quantum systems described in this way are the stabilization of excited states with respect to collective processes and an additional frequency shift of the spectrum of the open system. As an illustration of the general approach developed, the photon dynamics in a lossless single-mode cavity containing identical intracavity atoms coupled to the external vacuum electromagnetic field is considered. For some atomic densities, the photons of the cavity mode are “locked” inside the cavity, thus exhibiting a new phenomenon of radiation trapping and non-Wiener dynamics.
New spin(7) holonomy metrics admitting G2 holonomy reductions and M-theory/type-IIA dualities
As is well known, when D6 branes wrap a special Lagrangian cycle on a noncompact Calabi-Yau threefold in such a way that the internal string frame metric is a Kaehler one there exists a dual description, which is given in terms of a purely geometrical 11-dimensional background with an internal metric of G2 holonomy. It is also known that when D6 branes wrap a coassociative cycle of a noncompact G2 manifold in the presence of a self-dual two-form strength the internal part of the string frame metric is conformal to the G2 metric and there exists a dual description, which is expressed in terms of a purely geometrical 11-dimensional background with an internal noncompact metric of spin(7) holonomy. In the present work it is shown that any G2 metric participating in the first of these dualities necessarily participates in one of the second type. Additionally, several explicit spin(7) holonomy metrics admitting a G2 holonomy reduction along one isometry are constructed. These metrics can be described as R fibrations over a 6-dimensional Kaehler metric, thus realizing the pattern spin(7)→G2→(Kahler) mentioned above. Several of these examples are further described as fibrations over the Eguchi-Hanson gravitational instanton and, to the best of our knowledge, have not been previously considered in the literature.
Structure and dynamics of Xn-type clusters (n = 3, 4, 6) from spontaneous symmetry breaking theory
On the basis of three symmetries of nature, homogeneity and isotropy of space and indistinguishability of identical particles, we have found a group of coordinate transformations that leaves invariant the electronic energy and the potential energy of nuclei in every molecule subjected to no external fields. From these transformations we derived the formula for the dynamical representation and proved that every molecule has at least one Raman-active, totally symmetric normal mode of vibration. As an example, we studied stable configurations and dynamics of Xn-type molecules (clusters), n = 3, 4, 6, within symmetry-adapted, second-order expansion of the electronic energy with respect to nuclear coordinates, around the united atom. Within this approximation, for a positive coefficient in the expansion, a homonuclear three- (four-, six-) atomic cluster has a stable configuration of D3h (Td, Oh) symmetry. Our calculated mutual ratios of vibrational frequencies for clusters with these geometries are in reasonable agreement with experiment. (paper)
Anelastic modal equations are used to examine thermal convection occurring over many density scale heights in the entire outer envelope of an A-type star, encompassing both the hydrogen and helium convectively unstable zones. Single-mode anelastic solutions for such compressible convection display strong overshooting of the motions into adjacent radiative zones. Such mixing would preclude diffusive separation of elements in the supposedly quiescent region between the two unstable zones. Indeed, the anelastic solutions reveal that the two zones of convective instability are dynamically coupled by the overshooting motions. The nonlinear single-mode equations admit two solutions for the same horizontal wavelength, and these are distinguished by the sense of the vertical velocity at the center of the three-dimensional cell. The upward directed flows experience large pressure effects when they penetrate into regions where the vertical scale height has become small compared to their horizontal scale. The fluctuating pressure can modify the density fluctuations so that the sense of the buoyancy force is changed, with buoyancy braking actually achieved near the top of the convection zone, even though the mean stratification is still superadiabatic. The pressure and buoyancy work there serves to decelerate the vertical motions and deflect them laterally, leading to strong horizontal shearing motions. Thus the shallow but highly unstable hydrogen ionization zone may serve to prevent convection with a horizontal scale comparable to supergranulation from getting through into the atmosphere with any significant portion of its original momentum. This suggests that strong horizontal shear flows should be present just below the surface of the star, and similarly that the large-scale motions extending into the stable atmosphere would appear mainly as horizontal flows.
Sharma, Leigh; Markon, Kristian E; Clark, Lee Anna
2014-03-01
Impulsivity is considered a personality trait affecting behavior in many life domains, from recreational activities to important decision making. When extreme, it is associated with mental health problems, such as substance use disorders, as well as with interpersonal and social difficulties, including juvenile delinquency and criminality. Yet trait impulsivity may not be a unitary construct. We review commonly used self-report measures of personality trait impulsivity and related constructs (e.g., sensation seeking), plus the opposite pole, control or constraint. A meta-analytic principal-components factor analysis demonstrated that these scales comprise 3 distinct factors, each of which aligns with a broad, higher-order personality factor: Neuroticism/Negative Emotionality, Disinhibition versus Constraint/Conscientiousness, and Extraversion/Positive Emotionality/Sensation Seeking. Moreover, Disinhibition versus Constraint/Conscientiousness comprises 2 correlated but distinct subfactors: Disinhibition versus Constraint and Conscientiousness/Will versus Resourcelessness. We also review laboratory tasks that purport to measure a construct similar to trait impulsivity. A meta-analytic principal-components factor analysis demonstrated that these tasks constitute 4 factors (Inattention, Inhibition, Impulsive Decision-Making, and Shifting). Although relations between these 2 measurement models are consistently low to very low, relations between both trait scales and laboratory behavioral tasks and daily-life impulsive behaviors are moderate. That is, both independently predict problematic daily-life impulsive behaviors, such as substance use, gambling, and delinquency; their joint use has incremental predictive power over the use of either type of measure alone and furthers our understanding of these important, problematic behaviors.
Future use of confirmatory methods should help to ascertain with greater precision the number of and relations between impulsivity-related components. PMID:24099400
An uncertainty inventory demonstration - a primary step in uncertainty quantification
Langenbrunner, James R. [Los Alamos National Laboratory; Booker, Jane M [Los Alamos National Laboratory; Hemez, Francois M [Los Alamos National Laboratory; Salazar, Issac F [Los Alamos National Laboratory; Ross, Timothy J [UNM
2009-01-01
Tools, methods, and theories for assessing and quantifying uncertainties vary by application. Uncertainty quantification tasks have unique desiderata and circumstances. To realistically assess uncertainty requires the engineer/scientist to specify mathematical models, the physical phenomena of interest, and the theory or framework for assessments. For example, Probabilistic Risk Assessment (PRA) specifically identifies uncertainties using probability theory, and therefore PRAs lack formal procedures for quantifying uncertainties that are not probabilistic. The Phenomena Identification and Ranking Technique (PIRT) proceeds by ranking phenomena using scoring criteria that result in linguistic descriptors, such as importance ranked with the words 'High/Medium/Low.' The use of words allows PIRT to be flexible, but the analysis may then be difficult to combine with other uncertainty theories. We propose that a necessary step in the development of a procedure or protocol for uncertainty quantification (UQ) is the application of an Uncertainty Inventory. An Uncertainty Inventory should be considered and performed in the earliest stages of UQ.
Andrade-Ines, Eduardo; Beaugé, Cristian; Michtchenko, Tatiana; Robutel, Philippe
2016-04-01
We analyse the secular dynamics of planets on S-type coplanar orbits in tight binary systems, based on first- and second-order analytical models, and compare their predictions with full N-body simulations. The perturbation parameter adopted for the development of these models depends on the masses of the stars and on the semimajor axis ratio between the planet and the binary. We show that each model has both advantages and limitations. While the first-order analytical model is algebraically simple and easy to implement, it is only applicable in regions of the parameter space where the perturbations are sufficiently small. The second-order model, although more complex, has a larger range of validity and must be taken into account for dynamical studies of some real exoplanetary systems such as γ Cephei and HD 41004A. However, in some extreme cases, neither of these analytical models yields quantitatively correct results, requiring either higher-order theories or direct numerical simulations. Finally, we determine the limits of applicability of each analytical model in the parameter space of the system, giving an important visual aid for deciding which secular theory should be adopted for any given planetary system in a close binary.
Austin, Stéphanie; Senécal, Caroline; Guay, Frédéric; Nouwen, Arie
2011-09-01
This study tests a model derived from Self-Determination Theory (SDT) (Deci and Ryan, 2000) to explain the mechanisms by which non-modifiable factors influence dietary self-care in adolescents with type 1 diabetes (n = 289). SEM analyses adjusted for HbA1c levels revealed that longer diabetes duration and female gender were indicative of poorer dietary self-care. This effect was mediated by contextual and motivational factors as posited by SDT. Poorer autonomy support from practitioners was predominant in girls with longer diabetes duration. Perceived autonomous motivation and self-efficacy were indicative of greater autonomy support, and led to better dietary self-care. PMID:21430132
El kaaouachi, A.; Abdia, R.; Nafidi, A.; Zatni, A.; Sahsah, H.; Biskupski, G.
2010-04-01
The metal-insulator transition (MIT) induced by magnetic field in barely metallic, compensated n-type InP has been analyzed using scaling theory. The experiments were carried out at low temperature in the range 4.2-0.066 K and in magnetic fields up to 11 T. We have determined the magnetic field at which the conductivity changes from metallic behaviour to the insulating regime. On the metallic side of the MIT, the electrical conductivity is found to obey σ = σ0 + mT^{1/2} down to 66 mK. The zero-temperature conductivity can be described by scaling laws. A physical explanation of the temperature dependence of the conductivity on the metallic side of the MIT is given in terms of a competition between the different characteristic length scales involved in the conduction mechanisms, such as the correlation length and the interaction length.
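The σ = σ0 + mT^{1/2} law above reduces to a straight-line fit in √T; a minimal sketch with synthetic data (the values below are hypothetical, not the measured InP data):

```python
import numpy as np

# Synthetic, noise-free conductivity data obeying the stated law
# sigma(T) = sigma_0 + m * sqrt(T) on the metallic side of the MIT.
T = np.array([0.066, 0.1, 0.5, 1.0, 2.0, 4.2])   # temperature (K)
sigma = 12.0 + 3.5 * np.sqrt(T)                  # conductivity (arbitrary units)

# Least-squares line in sqrt(T); the intercept is the T -> 0 conductivity sigma_0,
# whose sign and magnitude locate the sample relative to the transition.
m, sigma_0 = np.polyfit(np.sqrt(T), sigma, 1)
```

With real data, the extrapolated intercept σ0 is the zero-temperature conductivity entering the scaling analysis.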
Ertaş, Mehmet; Kantar, Ersin; Kocakaplan, Yusuf; Keskin, Mustafa
2016-02-01
Dynamic magnetic properties in the kinetic Ising ferromagnet on a triangular lattice are studied within the effective-field theory with correlations and using Glauber-type stochastic dynamics. In particular, we investigate the time variations of average order parameters and thermal behaviors of the dynamic total magnetizations and present the dynamic phase diagrams. The tricritical point, the triple point and zero critical end point as well as reentrant behaviors are found in the dynamic phase diagrams. We also study the dynamic hysteresis behaviors of the system. When the hysteresis behaviors of the system are examined, single hysteresis loop as well as S-shaped thin loops and elliptical shapes are observed for various values of the physical parameters. Results are compared with some other dynamic studies and quantitatively good agreement is found.
Droplet digital PCR for absolute quantification of pathogens.
Gutiérrez-Aguirre, Ion; Rački, Nejc; Dreo, Tanja; Ravnikar, Maja
2015-01-01
The recent advent of different digital PCR (dPCR) platforms is enabling the expansion of this technology for research and diagnostic applications worldwide. The main principle of dPCR, as in other PCR-based methods including quantitative PCR (qPCR), is the specific amplification of a nucleic acid target. The distinctive feature of dPCR is the separation of the reaction mixture into thousands to millions of partitions, followed by real-time or end-point detection of the amplification. The distribution of target sequences among partitions is described by the Poisson distribution, thus allowing accurate and absolute quantification of the target from the ratio of positive partitions to all partitions at the end of the reaction. This obviates the need for reference materials with known target concentrations and increases the accuracy of quantification at low target concentrations compared to qPCR. dPCR has also shown higher resilience to inhibitors in a number of different types of samples. In this chapter we describe the droplet digital PCR (ddPCR) workflow for the detection and quantification of pathogens using the Bio-Rad QX100 droplet digital platform. We present as an example the quantification of the quarantine plant-pathogenic bacterium Erwinia amylovora. PMID:25981265
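The Poisson step described above can be sketched in a few lines; the droplet volume used here is an assumed nominal value for illustration, not a figure taken from this chapter:

```python
import math

def ddpcr_copies_per_ul(positive, total, droplet_volume_nl=0.85):
    """Absolute quantification from ddPCR partition counts.

    The fraction of negative droplets estimates the Poisson zero term
    P(0) = exp(-lam), so the mean copies per droplet is
    lam = -ln(1 - positive/total); dividing by the (assumed) droplet
    volume converts this to copies per microlitre of reaction mix.
    """
    lam = -math.log(1.0 - positive / total)   # mean copies per droplet
    return lam / (droplet_volume_nl * 1e-3)   # nl -> ul

# e.g. 4000 positive droplets out of 15000 accepted droplets
concentration = ddpcr_copies_per_ul(4000, 15000)
```

Note that because the estimate uses only the ratio of positive to total partitions, no standard curve is required.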
McCloskey, Douglas; Gangoiti, Jon A.; Palsson, Bernhard O.; Feist, Adam M.
2015-01-01
existing reverse phase ion-pairing liquid chromatography methods for separation and detection of polar and anionic compounds that comprise key nodes of intracellular metabolism by optimizing pH and solvent composition. In addition, the presented method utilizes multiple scan types provided by hybrid...... instrumentation to improve confidence in compound identification. The developed method was validated for a broad coverage of polar and anionic metabolites of intracellular metabolism...
Karunamuni Nandini
2008-12-01
Full Text Available Abstract Background Aerobic physical activity (PA) and resistance training are paramount in the treatment and management of type 2 diabetes (T2D), but few studies have examined the determinants of both types of exercise in the same sample. Objective The primary purpose was to investigate the utility of the Theory of Planned Behavior (TPB) in explaining aerobic PA and resistance training in a population sample of T2D adults. Methods A total of 244 individuals were recruited through a random national sample, created by generating a random list of household phone numbers proportionate to the actual number of household telephone numbers for each Canadian province (with the exception of Quebec). These individuals completed self-report TPB constructs of attitude, subjective norm, perceived behavioral control and intention, and a 3-month follow-up that assessed aerobic PA and resistance training. Results TPB explained 10% and 8% of the variance for aerobic PA and resistance training, respectively, and accounted for 39% and 45% of the variance for aerobic PA and resistance training intentions, respectively. Conclusion These results may guide the development of appropriate PA interventions for aerobic PA and resistance training based on the TPB.
Quantification of Information in a One-Way Plant-to-Animal Communication System
Laurance R. Doyle
2009-08-01
Full Text Available In order to demonstrate possible broader applications of information theory to the quantification of non-human communication systems, we apply calculations of information entropy to a simple chemical communication from the cotton plant (Gossypium hirsutum) to the wasp (Cardiochiles nigriceps) studied by DeMoraes et al. The purpose of this chemical communication from cotton plants to wasps is presumed to be to allow the predatory wasp to more easily locate its preferred prey, one of two types of parasitic herbivores feeding on the cotton plants. By specifying which plant-eating herbivore is feeding on them, the cotton plants preferentially attract the wasps to those individual plants. We interpret the emission of nine chemicals by the plants as individual signal differences (depending on the herbivore type) to be detected by the wasps, constituting a nine-signal one-way communication system across kingdoms (from the kingdom Plantae to the kingdom Animalia). We use fractional differences in the chemical abundances (emitted as a result of the two herbivore types) to calculate the Shannon information entropic measures (marginal, joint, and mutual entropies, as well as the ambiguity, etc.) of the transmitted message. We then compare these results with the subsequent behavior of the wasps (calculating the equivocation in the message reception) for possible insights into the history and actual working of this one-way communication system.
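The entropic measures named above can be illustrated with a toy joint distribution over (signal, response) pairs; the numbers below are hypothetical, not the measured chemical abundances:

```python
import math

def entropy(probs):
    """Shannon entropy in bits of a probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Hypothetical joint distribution over (emitted signal, wasp response).
joint = {("A", "a"): 0.4, ("A", "b"): 0.1,
         ("B", "a"): 0.1, ("B", "b"): 0.4}

px, py = {}, {}
for (x, y), p in joint.items():
    px[x] = px.get(x, 0.0) + p   # marginal over signals
    py[y] = py.get(y, 0.0) + p   # marginal over responses

H_x, H_y = entropy(px.values()), entropy(py.values())
H_xy = entropy(joint.values())
mutual = H_x + H_y - H_xy        # information actually transmitted
equivocation = H_x - mutual      # H(X|Y): what the receiver fails to resolve
```

The same bookkeeping applies with nine signals instead of two; only the size of the joint table changes.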
MODELS OF CAPITAL COSTS QUANTIFICATION
Tomáš KLIEŠTIK; Katarína VALÁŠKOVÁ
2013-01-01
The present contribution deals with the quantification of capital costs and is written on a theoretical basis. Costs are quantified for financing by equity only, for financing by debt capital only, and for so-called mixed financing, in which weighted average costs of capital are quantified. The cost of capital can be seen from three different perspectives: in the assets part of a company, in the liability part of a company and in the part of potential ...
Uncertainty quantification for systems of conservation laws
Uncertainty quantification through stochastic spectral methods has recently been applied to several kinds of non-linear stochastic PDEs. In this paper, we introduce a formalism based on kinetic theory to tackle uncertain hyperbolic systems of conservation laws with Polynomial Chaos (PC) methods. The idea is to introduce a new variable, the entropic variable, in bijection with our vector of unknowns, which we expand on the polynomial basis: by performing a Galerkin projection, we obtain a deterministic system of conservation laws. We state several properties of this deterministic system in the case of a general uncertain system of conservation laws. We then apply the method to the case of the inviscid Burgers' equation with random initial conditions and present some preliminary results for the Euler system. We systematically compare results from our new approach to results from the stochastic Galerkin method. In the vicinity of discontinuities, the new method bounds the oscillations due to the Gibbs phenomenon to a certain range through the entropy of the system, without the use of any adaptive random-space discretizations. It is found to be more precise than the stochastic Galerkin method for smooth cases, but above all for discontinuous cases.
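A minimal sketch of the PC machinery underlying such methods, assuming a single standard-normal uncertain input ξ and probabilists' Hermite polynomials; this illustrates PC expansions generally, not the entropic-variable formulation of the paper:

```python
import math
import numpy as np
from numpy.polynomial import hermite_e as He

def pc_coefficients(f, order=8, quad=40):
    """Project f(xi), xi ~ N(0,1), onto probabilists' Hermite polynomials:
    c_n = E[f(xi) * He_n(xi)] / n!, using Gauss-Hermite quadrature for the
    weight exp(-x**2 / 2), whose weights sum to sqrt(2*pi)."""
    x, w = He.hermegauss(quad)
    w = w / np.sqrt(2.0 * np.pi)        # normalise to a probability measure
    fx = f(x)
    return np.array([np.sum(w * fx * He.hermeval(x, [0.0] * n + [1.0]))
                     / math.factorial(n)
                     for n in range(order + 1)])

def pc_mean_var(c):
    """Statistics read off the expansion: E = c_0, Var = sum_{n>=1} c_n**2 * n!."""
    return c[0], sum(c[n] ** 2 * math.factorial(n) for n in range(1, len(c)))

# Sanity check with f(xi) = xi**2, which is exactly He_2(xi) + 1:
mean, var = pc_mean_var(pc_coefficients(lambda x: x ** 2))
```

Intrusive Galerkin methods work with the same expansions but project the governing equations themselves onto the basis rather than sampling the solution map.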
A scattering theory for the wave equation with compactly supported perturbations was developed by Lax-Phillips in 1967. Using the Enss approach, Phillips developed a Lax-Phillips scattering theory with short-range perturbations of the type V(x) = o(1/|x|^β), β > 2. In this paper we develop a scattering theory for more general perturbations, i.e. for V(x) = φ(x)/|x|^β, where β = 2 - n/s, φ ∈ L^s(R^n), s > 2 and s ≥ n/2. Refs
Statistical Approach to Protein Quantification
Gerster, Sarah; Kwon, Taejoon; Ludwig, Christina; Matondo, Mariette; Vogel, Christine; Marcotte, Edward M.; Aebersold, Ruedi; Bühlmann, Peter
2014-01-01
A major goal in proteomics is the comprehensive and accurate description of a proteome. This task includes not only the identification of proteins in a sample, but also the accurate quantification of their abundance. Although mass spectrometry typically provides information on peptide identity and abundance in a sample, it does not directly measure the concentration of the corresponding proteins. Specifically, most mass-spectrometry-based approaches (e.g. shotgun proteomics or selected reaction monitoring) allow one to quantify peptides using chromatographic peak intensities or spectral counting information. Ultimately, based on these measurements, one wants to infer the concentrations of the corresponding proteins. Inferring properties of the proteins based on experimental peptide evidence is often a complex problem because of the ambiguity of peptide assignments and different chemical properties of the peptides that affect the observed concentrations. We present SCAMPI, a novel generic and statistically sound framework for computing protein abundance scores based on quantified peptides. In contrast to most previous approaches, our model explicitly includes information from shared peptides to improve protein quantitation, especially in eukaryotes with many homologous sequences. The model accounts for uncertainty in the input data, leading to statistical prediction intervals for the protein scores. Furthermore, peptides with extreme abundances can be reassessed and classified as either regular data points or actual outliers. We used the proposed model with several datasets and compared its performance to that of other, previously used approaches for protein quantification in bottom-up mass spectrometry. PMID:24255132
Quantification of wastewater sludge dewatering.
Skinner, Samuel J; Studer, Lindsay J; Dixon, David R; Hillis, Peter; Rees, Catherine A; Wall, Rachael C; Cavalida, Raul G; Usher, Shane P; Stickland, Anthony D; Scales, Peter J
2015-10-01
Quantification and comparison of the dewatering characteristics of fifteen sewage sludges from a range of digestion scenarios are described. The proposed method uses laboratory dewatering measurements and integrity analysis of the extracted material properties. These properties were used as inputs to a model of filtration, whose output provides the dewatering comparison. This method is shown to be necessary for quantification and comparison of dewaterability, as the permeability and compressibility of the sludges vary by up to ten orders of magnitude in the range of solids concentration of interest to industry. This makes the dewaterability comparison highly sensitive to the starting concentration of laboratory tests, so that simple comparison based on parameters such as the specific resistance to filtration is difficult. The new approach is demonstrated to be robust relative to traditional methods such as specific resistance to filtration analysis and has an in-built integrity check. Comparison of the quantified dewaterability of the fifteen sludges with their relative volatile solids content showed a very strong correlation in the volatile solids range from 40 to 80%. The data indicate that the volatile solids parameter is a strong indicator of the dewatering behaviour of sewage sludges. PMID:26003332
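For context, the traditional specific-resistance-to-filtration (SRF) analysis that the authors compare against can be sketched as follows. The relation used is the standard constant-pressure Ruth filtration equation, and every numerical parameter below is an assumed value, not data from this study:

```python
import numpy as np

# Assumed constant-pressure filtration test parameters (not from the paper).
dP = 1.0e5        # applied pressure, Pa
A  = 1.963e-3     # filter area, m^2 (approx. 50 mm diameter cell)
mu = 1.0e-3       # filtrate viscosity, Pa.s
c  = 20.0         # deposited solids per unit filtrate volume, kg/m^3

# Synthetic data obeying the classical Ruth filtration equation
#   t/V = (mu * alpha * c / (2 * dP * A^2)) * V + mu * Rm / (dP * A),
# generated with a chosen "true" specific resistance alpha.
alpha_true = 5.0e13                               # m/kg
slope_true = mu * alpha_true * c / (2 * dP * A**2)
V = np.linspace(1e-5, 2e-4, 20)                   # cumulative filtrate, m^3
t_over_V = slope_true * V + 50.0                  # 50 s/m^3: membrane term

# SRF analysis: alpha is recovered from the slope of t/V versus V.
slope = np.polyfit(V, t_over_V, 1)[0]
alpha = 2 * slope * dP * A**2 / (mu * c)
print(f"alpha = {alpha:.3e} m/kg")
```

Because alpha itself depends strongly on solids concentration, a single SRF number is exactly the kind of comparison the abstract argues is unreliable across sludges.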
Martel, Carlos; Cairampoma, Lianka
2012-08-01
Full Text Available The Peruvian Amazon basin is characterized by the presence of multiple vegetation types. These face growing impact from human activities such as mining and logging, which, coupled with global climate change, creates uncertainty about the future of the forests. Identifying the levels of carbon storage in forested areas, and specifically in each vegetation type, would allow better management of conservation areas and identify potential areas that could serve to finance carbon sequestration and other environmental services. This study was conducted at the Biological Station of the Rio Los Amigos Research and Training Center (CICRA, Spanish acronym). Three main vegetation formations were identified at the station: alluvial terrace forests, flood terrace forests and Mauritia palm swamps. The alluvial terrace forests cover the largest area and store the highest amount of carbon. As a result, the vegetation at CICRA was valued at approximately 11 million US dollars. Entry into the carbon credit market could promote the conservation of the Amazon forests.
The necessity of operational risk management and quantification
Barbu Teodora Cristina
2008-04-01
Full Text Available Starting from the fact that the performance strategies of financial institutions include programmes and management procedures for banking risks, whose main objective is to minimize the probability of risk occurrence and the bank's potential exposure, this paper presents methods for managing and quantifying operational risk. It also presents how the minimum capital requirement for operational risk is determined. The first part presents the conceptual approach to operational risks from the point of view of the financial institutions exposed to this type of risk. The second part describes the management and evaluation methods for operational risk. The final part presents the approach adopted by a financial institution with a precise purpose: the quantification of the minimum capital requirement for operational risk.
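As one concrete, widely used minimum-capital rule for operational risk, the Basel II Basic Indicator Approach can be sketched as follows. The paper discusses capital requirements in general; the specific rule and the income figures below are illustrative, not taken from it:

```python
# Basel II Basic Indicator Approach (sketch): the operational-risk
# capital charge is alpha = 15 % of the average annual gross income
# over the previous three years, counting only years with positive income.
ALPHA = 0.15

def bia_capital_charge(gross_income):
    positive = [gi for gi in gross_income if gi > 0]
    if not positive:          # no positive year -> no charge in this sketch
        return 0.0
    return ALPHA * sum(positive) / len(positive)

# Hypothetical gross income figures in millions; the loss-making year
# is excluded from both numerator and denominator.
print(bia_capital_charge([120.0, -30.0, 150.0]))  # 0.15 * (120+150)/2 = 20.25
```

The more advanced Basel approaches (standardised and AMA) refine this by business line or internal modelling, which is where risk quantification in the paper's sense comes in.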
Keun Woo Lee
2012-04-01
Full Text Available 11β-Hydroxysteroid dehydrogenase type 1 (11βHSD1) regulates the conversion from inactive cortisone to active cortisol. Increased cortisol results in diabetes, hence quelling the activity of 11βHSD1 has been thought of as an effective approach for the treatment of diabetes. Quantitative hypotheses were developed and validated to identify the critical chemical features with reliable geometric constraints that contribute to the inhibition of 11βHSD1 function. The best hypothesis, Hypo1, which contains one HBA, one Hy-Ali, and two RA features, was validated using Fischer's randomization method, a test and a decoy set. The well-validated Hypo1 was used as a 3D query to perform a virtual screening of three different chemical databases. Compounds selected by Hypo1 in the virtual screening were filtered by applying Lipinski's rule of five, ADMET, and molecular docking. Finally, five hit compounds were selected as virtual novel hit molecules for 11βHSD1 based on their electronic properties calculated by density functional theory.
Sakkiah, Sugunadevi; Meganathan, Chandrasekaran; Sohn, Young-Sik; Namadevan, Sundaraganesan; Lee, Keun Woo
2012-01-01
11β-Hydroxysteroid dehydrogenase type 1 (11βHSD1) regulates the conversion from inactive cortisone to active cortisol. Increased cortisol results in diabetes, hence quelling the activity of 11βHSD1 has been thought of as an effective approach for the treatment of diabetes. Quantitative hypotheses were developed and validated to identify the critical chemical features with reliable geometric constraints that contribute to the inhibition of 11βHSD1 function. The best hypothesis, Hypo1, which contains one HBA, one Hy-Ali, and two RA features, was validated using Fischer's randomization method, a test and a decoy set. The well-validated Hypo1 was used as a 3D query to perform a virtual screening of three different chemical databases. Compounds selected by Hypo1 in the virtual screening were filtered by applying Lipinski's rule of five, ADMET, and molecular docking. Finally, five hit compounds were selected as virtual novel hit molecules for 11βHSD1 based on their electronic properties calculated by density functional theory. PMID:22606035
Mangir Murshed, M.; Mendive, Cecilia B.; Curti, Mariano; ehovi?, Malik; Friedrich, Alexandra; Fischer, Michael; Gesing, Thorsten M.
2015-09-01
Polycrystalline Bi2Al4O9 powder samples were synthesized using the glycerine method. Single crystals were produced from the powder product in a Bi2O3 melt. The lattice thermal expansion of the mullite-type compound was studied using X-ray diffraction, Raman spectroscopy and density functional theory (DFT). The metric parameters were modeled using the Grüneisen approximation for the zero-pressure equation of state, where the temperature-dependent vibrational internal energy was calculated from the Debye characteristic frequency. Both the first-order and second-order Grüneisen approximations were applied for modeling the volumetric expansion, and the second-order approach provided physically meaningful axial parameters. The phonon density of states as well as the phonon dispersion guided the choice of the characteristic frequency for the simulation. The experimental infrared and Raman phonon bands were compared with those calculated from the DFT calculations. Selected Raman modes were analyzed for their thermal anharmonic behavior using a simplified Klemens model. The respective mode Grüneisen parameters were calculated from the pressure-dependent Raman spectra. - Graphical abstract: Crystal structure of mullite-type Bi2Al4O9 showing the edge-sharing AlO6 octahedra running parallel to the c-axis. - Highlights: • Thermal expansion of Bi2Al4O9 was studied using XRD, FTIR, Raman and DFT. • Metric parameters were modeled using the Grüneisen approximation. • Phonon DOS and phonon dispersion helped to set the Debye frequency. • Mode Grüneisen parameters were calculated from the pressure-dependent Raman spectra. • Anharmonicity was analyzed for some selected Raman modes
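The first-order Grüneisen approximation described above can be sketched numerically, with the Debye-model internal energy evaluated by simple quadrature. Every material parameter below is an illustrative placeholder, not a fitted Bi2Al4O9 value, and the zero-point term is omitted:

```python
import numpy as np

kB = 1.380649e-23  # Boltzmann constant, J/K

def debye_internal_energy(T, theta_D, n_atoms):
    """Debye-model vibrational internal energy (zero-point term omitted):
    U(T) = 9 n kB T (T/theta_D)^3 * Int_0^{theta_D/T} x^3/(e^x - 1) dx."""
    if T == 0:
        return 0.0
    x = np.linspace(1e-8, theta_D / T, 20000)
    f = x**3 / np.expm1(x)
    integral = np.sum((f[:-1] + f[1:]) / 2 * np.diff(x))  # trapezoid rule
    return 9 * n_atoms * kB * T * (T / theta_D)**3 * integral

# First-order Grueneisen approximation for the zero-pressure cell volume:
#   V(T) ~ V0 * (1 + gamma * U(T) / (B0 * V0))
# All parameters below are assumed, NOT the paper's fitted values.
V0      = 700e-30   # cell volume, m^3
B0      = 100e9     # bulk modulus, Pa
gamma   = 1.5       # Grueneisen parameter
theta_D = 600.0     # Debye temperature, K
n       = 30        # atoms per cell

for T in (300.0, 900.0):
    V = V0 * (1 + gamma * debye_internal_energy(T, theta_D, n) / (B0 * V0))
    print(f"T = {T:5.0f} K   V/V0 = {V / V0:.5f}")
```

The second-order approximation adds a term in U² with the pressure derivative of the bulk modulus, which is what the abstract credits with physically meaningful axial parameters.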
Recent computational and experimental studies have confirmed that high energy cascades produce clustered defects of both vacancy- and interstitial-types as well as isolated point defects. However, the production probability, configuration, stability and other characteristics of the cascade clusters are not well understood in spite of the fact that clustered defect production would substantially affect the irradiation-induced microstructures and the consequent property changes in a certain range of temperatures and displacement rates. In this work, a model of point defect and cluster evolution in irradiated materials under cascade damage conditions was developed by combining the conventional reaction rate theory and the results from the latest molecular dynamics simulation studies. This paper provides a description of the model and a model-based fundamental investigation of the influence of configuration, production efficiency and the initial size distribution of cascade-produced vacancy clusters. In addition, using the model, issues on characterizing cascade-induced defect production by microstructural analysis will be discussed. In particular, the determination of cascade vacancy cluster configuration, surviving defect production efficiency and cascade-interaction volume is attempted by analyzing the temperature dependence of swelling rate and loop growth rate in austenitic steels and model alloys. (author)
Andrade-Ines, Eduardo; Michtchenko, Tatiana; Robutel, Philippe
2015-01-01
We analyse the secular dynamics of planets on S-type coplanar orbits in tight binary systems, based on first- and second-order analytical models, and compare their predictions with full N-body simulations. The perturbation parameter adopted for the development of these models depends on the masses of the stars and on the semimajor axis ratio between the planet and the binary. We show that each model has both advantages and limitations. While the first-order analytical model is algebraically simple and easy to implement, it is only applicable in regions of the parameter space where the perturbations are sufficiently small. The second-order model, although more complex, has a larger range of validity and must be taken into account for dynamical studies of some real exoplanetary systems such as $\\gamma$-Cephei and HD 41004A. However, in some extreme cases, neither of these analytical models yields quantitatively correct results, requiring either higher-order theories or direct numerical simulations. Finally, we ...
Xu, Xuewen, E-mail: xuxuewen@hebut.edu.cn [School of Materials Science and Engineering, Hebei University of Technology, Tianjin 300130 (China); Fu, Kun, E-mail: fukun@hebut.edu.cn [School of Computer Science and Engineering, Hebei University of Technology, Tianjin 300130 (China); Yu, Man, E-mail: 781092332@qq.com [School of Materials Science and Engineering, Hebei University of Technology, Tianjin 300130 (China); Lu, Zunming, E-mail: luzm@hebut.edu.cn [School of Materials Science and Engineering, Hebei University of Technology, Tianjin 300130 (China); Zhang, Xinghua, E-mail: zhangxinghua@hebut.edu.cn [School of Materials Science and Engineering, Hebei University of Technology, Tianjin 300130 (China); Liu, Guodong, E-mail: gdliu1978@126.com [School of Materials Science and Engineering, Hebei University of Technology, Tianjin 300130 (China); Tang, Chengchun, E-mail: tangcc@hebut.edu.cn [School of Materials Science and Engineering, Hebei University of Technology, Tianjin 300130 (China)
2014-09-01
Highlights: • The thermodynamic characteristics of TMB2 have been studied for the first time using the QHA method. • WB2 and TaB2 are good candidates for structural applications at high temperature. • Most of the early-transition-metal diborides cannot be easily machined. • The correlations between elastic constants and VECs of TMB2 are discussed. - Abstract: The thermodynamic, electronic and elastic properties of a class of early-transition-metal diborides (TMB2, TM = Sc, Ti, V, Cr, Y, Zr, Nb, Mo, Hf, Ta, W) with the AlB2-type structure have been investigated using the quasi-harmonic Debye model and ab initio calculations based on density functional theory, respectively. According to the character of the temperature-dependent bulk modulus and coefficient of thermal expansion, the TMB2 compounds can be divided into three groups. The results also indicate that the 4d- and 5d-TMB2 compounds are good high-temperature structural materials. The five independent stiffness coefficients and the bulk and shear moduli of the diborides are obtained and are in good agreement with the available experimental and theoretical data. The correlations between elastic properties and electronic structure are discussed in detail. Due to their high hardness, the VIB-transition-metal diborides with relatively high B/G and B/C44 ratios are still difficult to machine by usual methods.
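The link between the stiffness coefficients and the B/G indicator mentioned above can be sketched with the standard Voigt averages for a hexagonal (AlB2-type) crystal. The input constants are rough, assumed values (loosely TiB2-like), not this paper's results:

```python
def voigt_moduli_hexagonal(c11, c12, c13, c33, c44):
    """Voigt-average bulk and shear moduli (same units as the inputs)
    for a hexagonal crystal such as the AlB2-type diborides."""
    c66 = (c11 - c12) / 2.0
    b_v = (2 * c11 + c33 + 2 * c12 + 4 * c13) / 9.0
    g_v = (2 * c11 + c33 - c12 - 2 * c13 + 6 * c44 + 3 * c66) / 15.0
    return b_v, g_v

# Illustrative stiffness constants in GPa (assumed values, not the
# paper's calculated data).
b_v, g_v = voigt_moduli_hexagonal(655.0, 65.0, 95.0, 460.0, 260.0)

# Pugh's empirical criterion: B/G above ~1.75 suggests ductile
# (machinable) behaviour, below suggests brittle behaviour.
print(f"B_V = {b_v:.1f} GPa, G_V = {g_v:.1f} GPa, B/G = {b_v / g_v:.2f}")
```

A low B/G value like the one this toy input produces is the signature of a hard, brittle ceramic, consistent with the abstract's remark that these diborides are difficult to machine.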
Wenger, A; Mischke, C
2015-10-01
Type 2 diabetes is on the increase among immigrants in Switzerland. Patients' cultural backgrounds present new linguistic and sociocultural barriers and are gaining importance for health care. In order to develop patient-centred care, it is necessary to focus on the different sociocultural aspects of the everyday life and experiences of immigrants with diabetes from the former republics of Yugoslavia, who have rarely been studied in Switzerland. Based on these insights, needs for counselling can be identified and nursing interventions designed accordingly. Using a Grounded Theory approach, 5 interviews were analysed according to the Corbin and Strauss coding paradigm. The central phenomenon found is the experience of living in 2 different cultures; the complexity arises from the tension of living in 2 cultural settings at the same time. It turns out that the immigrants adjust their disease management while in their country of origin: the changed daily rhythm and the more traditional role model affect aspects of disease management such as diet and/or drug therapy. The different strategies impact the persons' roles, emotions, everyday lives and families. The study provides an insight into the perspective of immigrants from the former republics of Yugoslavia in Switzerland suffering from diabetes. Many questions remain unanswered and further research will be required. PMID:26270044
Marlene Silva de Moraes
2008-03-01
Full Text Available This paper describes a pilot-scale device and a simple method for comparing liquid distributor efficiencies. The technique consists basically of analyzing the mass of liquid collected in 21 vertical pipes, 52 mm in internal diameter and 800 mm in length, placed in a quadratic arrangement below the distributor. A 50 mm thick acrylic blanket that does not disperse liquid was fixed between the distributor and the pipe bank to avoid splashes. As an example application, assays were carried out with nine ladder-type (herringbone) distributors, each with 4 parallel pipes, for a column 400 mm in diameter. The number (n) of orifices (95, 127 and 159 orifices/m²), the orifice diameter (d) (2, 3 and 4 mm) and the flow rate (q) (1.2, 1.4 and 1.6 m³/h) were varied. The best spreading efficiency, defined by the lowest standard deviation, was achieved with n = 159, d = 2 mm and q = 1.4 m³/h, indicating the limitations of practical design rules. The pressure (p) at the distributor inlet for this condition was only 51,000 Pa (0.51 kgf/cm²), while the average velocity (v) in each orifice was 6.3 m/s.
Accessible quantification of multiparticle entanglement
Cianciaruso, Marco; Adesso, Gerardo
2015-01-01
Entanglement is a key ingredient for quantum technologies and a fundamental signature of quantumness in a broad range of phenomena encompassing many-body physics, thermodynamics, cosmology, and life sciences. For arbitrary multiparticle systems, the quantification of entanglement typically involves hard optimisation problems, and requires demanding tomographical techniques. In this paper we show that such difficulties can be overcome by developing an experimentally friendly method to evaluate measures of multiparticle entanglement via a geometric approach. The method provides exact analytical results for a relevant class of mixed states of $N$ qubits, and computable lower bounds to entanglement for any general state. For practical purposes, the entanglement determination requires local measurements in just three settings for any $N$. We demonstrate the power of our approach to quantify multiparticle entanglement in $N$-qubit bound entangled states and other states recently engineered in laboratory using quant...
Johnston Marie
2009-08-01
Full Text Available Abstract Background Long term management of patients with Type 2 diabetes is well established within Primary Care. However, despite extensive efforts to implement high quality care both service provision and patient health outcomes remain sub-optimal. Several recent studies suggest that psychological theories about individuals' behaviour can provide a valuable framework for understanding generalisable factors underlying health professionals' clinical behaviour. In the context of the team management of chronic disease such as diabetes, however, the application of such models is less well established. The aim of this study was to identify motivational factors underlying health professional teams' clinical management of diabetes using a psychological model of human behaviour. Methods A predictive questionnaire based on the Theory of Planned Behaviour (TPB) investigated health professionals' (HPs') cognitions (e.g., beliefs, attitudes and intentions) about the provision of two aspects of care for patients with diabetes: prescribing statins and inspecting feet. General practitioners and practice nurses in England and the Netherlands completed parallel questionnaires, cross-validated for equivalence in English and Dutch. Behavioural data were practice-level patient-reported rates of foot examination and use of statin medication. Relationships between the cognitive antecedents of behaviour proposed by the TPB and healthcare teams' clinical behaviour were explored using multiple regression. Results In both countries, attitude and subjective norm were important predictors of health professionals' intention to inspect feet (Attitude: beta = .40; Subjective Norm: beta = .28; Adjusted R2 = .34) and of intention to prescribe statins (Adjusted R2 = .40). Conclusion Using the TPB, we identified modifiable factors underlying health professionals' intentions to perform two clinical behaviours, providing a rationale for the development of targeted interventions.
However, we did not observe a relationship between health professionals' intentions and our proxy measure of team behaviour. Significant methodological issues were highlighted concerning the use of models of individual behaviour to explain behaviours performed by teams. In order to investigate clinical behaviours performed by teams it may be necessary to develop measures that reflect the collective cognitions of the members of the team to facilitate the application of these theoretical models to team behaviours.
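The multiple-regression step described above can be sketched with synthetic data. The scores below are invented, not the study's, and the practice-level behavioural outcome is omitted:

```python
import numpy as np

# Invented TPB scores for six respondents (hypothetical data):
# predictors are attitude and subjective norm; outcome is intention.
attitude  = np.array([3.0, 4.5, 5.0, 2.5, 6.0, 4.0])
subj_norm = np.array([4.0, 4.0, 5.5, 3.0, 6.5, 3.5])
intention = np.array([3.5, 4.4, 5.4, 2.8, 6.3, 3.9])

# Multiple regression of intention on the two TPB antecedents,
# with an intercept column in the design matrix.
X = np.column_stack([np.ones_like(attitude), attitude, subj_norm])
beta, *_ = np.linalg.lstsq(X, intention, rcond=None)

pred = X @ beta
ss_res = float(np.sum((intention - pred) ** 2))
ss_tot = float(np.sum((intention - intention.mean()) ** 2))
r2 = 1.0 - ss_res / ss_tot
print(f"intercept/slopes = {np.round(beta, 2)}, R^2 = {r2:.3f}")
```

In the study's analyses the reported coefficients are standardised betas with an adjusted R²; the unstandardised least-squares fit above only illustrates the mechanics.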
Development of Quantification Method for Bioluminescence Imaging
Optical molecular luminescence imaging is widely used for the detection and imaging of bio-photons emitted upon luminescent luciferase activation. The photons measured with this method indicate the degree of molecular alteration or the cell number, with the advantage of a high signal-to-noise ratio. To extract useful information from the measured results, analysis based on a proper quantification method is necessary. In this research, we propose a quantification method presenting a linear response of the measured light signal to measurement time. We detected the luminescence signal using lab-made optical imaging equipment, the animal light imaging system (ALIS), and two different kinds of light sources: three bacterial light-emitting sources containing different numbers of bacteria, and three different non-bacterial light sources emitting very weak light. By using the concepts of the candela and the flux, we could derive a simplified linear quantification formula. After experimentally measuring the light intensity, the data were processed with the proposed quantification function. We obtained a linear response of photon counts to measurement time by applying the pre-determined quantification function. The ratio of the re-calculated photon counts to the measurement time presents a constant value even though different light sources were applied. The quantification function for linear response could be applicable to the standard quantification process. The proposed method could be used for exact quantitative analysis in various light imaging equipment presenting linear response behaviour of constant light-emitting sources to measurement time
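A minimal numerical sketch of the linear-response idea: a simple offset correction stands in for the paper's quantification function, and all numbers are assumptions:

```python
import numpy as np

# Simulated photon counts from a constant light source (all numbers are
# assumptions): counts grow linearly with measurement time, plus a fixed
# detector offset (e.g. dark counts).
t = np.array([10.0, 20.0, 40.0, 80.0, 160.0])   # measurement time, s
counts = 350.0 * t + 120.0                      # 350 counts/s source + offset

# The raw counts/time ratio drifts because of the offset ...
print(np.round(counts / t, 1))

# ... but after an offset correction (standing in for the paper's
# quantification function) the ratio is constant: the linear response
# of photon counts to measurement time for a constant source.
slope, offset = np.polyfit(t, counts, 1)
print(np.round((counts - offset) / t, 1))
```

A constant corrected counts/time ratio, independent of the source, is exactly the behaviour the abstract proposes as a basis for a standard quantification process.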
Supersymmetric Gauge Theories from String Theory
Metzger, Steffen
2005-01-01
The subject of this thesis is the construction of four-dimensional quantum field theories from string theory. In a first part we study the generation of a supersymmetric Yang-Mills theory, coupled to an adjoint chiral superfield, from type IIB string theory on non-compact Calabi-Yau manifolds, with D-branes wrapping certain subcycles. Properties of the gauge theory are then mapped to the geometric structure of the Calabi-Yau space. In particular, the low energy effective superpotential...
Superspace conformal field theory
Quella, Thomas [Koeln Univ. (Germany). Inst. fuer Theoretische Physik; Schomerus, Volker [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany)
2013-07-15
Conformal sigma models and WZW models on coset superspaces provide important examples of logarithmic conformal field theories. They possess many applications to problems in string and condensed matter theory. We review recent results and developments, including the general construction of WZW models on type I supergroups, the classification of conformal sigma models and their embedding into string theory.
Battenfeld, Ingo
2008-01-01
This thesis presents Topological Domain Theory as a powerful and flexible framework for denotational semantics. Topological Domain Theory models a wide range of type constructions and can interpret many computational features. Furthermore, it has close connections to established frameworks for denotational semantics, as well as to well-studied mathematical theories, such as topology and computable analysis.
Nonrelativistic superstring theories
We construct a supersymmetric version of the critical nonrelativistic bosonic string theory [B. S. Kim, Phys. Rev. D 76, 106007 (2007).] with its manifest global symmetry. We introduce the anticommuting bc conformal field theory (CFT) which is the super partner of the βγ CFT. The conformal weights of the b and c fields are both 1/2. The action of the fermionic sector can be transformed into that of the relativistic superstring theory. We explicitly quantize the theory with manifest SO(8) symmetry and find that the spectrum is similar to that of type IIB superstring theory. There is one notable difference: the fermions are nonchiral. We further consider noncritical generalizations of the supersymmetric theory using the superspace formulation. There is an infinite range of possible string theories similar to the supercritical string theories. We comment on the connection between the critical nonrelativistic string theory and the lightlike linear dilaton theory
Birkett Nicholas
2010-01-01
Full Text Available Abstract Background The primary aim of this study was to compare the efficacy of three physical activity (PA) behavioural intervention strategies in a sample of adults with type 2 diabetes. Method/Design Participants (N = 287) were randomly assigned to one of three groups consisting of the following intervention strategies: (1) standard printed PA educational materials provided by the Canadian Diabetes Association [i.e., Group 1/control group]; (2) standard printed PA educational materials as in Group 1, pedometers, a log book and printed PA information matched to individuals' PA stage of readiness provided every 3 months (i.e., Group 2); and (3) a PA telephone counseling protocol matched to PA stage of readiness and tailored to personal characteristics, in addition to the materials provided in Groups 1 and 2 (i.e., Group 3). PA behaviour measured by the Godin Leisure Time Exercise Questionnaire and related social-cognitive measures were assessed at baseline, 3, 6, 9, 12 and 18 months (i.e., 6-month follow-up). Clinical (biomarker) and health-related quality of life assessments were conducted at baseline, 12 months, and 18 months. Linear Mixed Model (LMM) analyses will be used to examine time-dependent changes from baseline across study time points for Groups 2 and 3 relative to Group 1. Discussion ADAPT will determine whether tailored but low-cost interventions can lead to sustainable increases in PA behaviours. The results may have implications for practitioners in designing and implementing theory-based physical activity promotion programs for this population. Clinical Trials Registration ClinicalTrials.gov identifier: NCT00221234
Matrix Theory on Non-Orientable Surfaces
Zwart, Gysbert
1997-01-01
We construct the Matrix theory descriptions of M-theory on the Mobius strip and the Klein bottle. In a limit, these provide the matrix string theories for the CHL string and an orbifold of type IIA string theory.
Quantification of petroleum-type hydrocarbons in avian tissue
Gay, M.L.; Belisle, A.A.; Patton, J.F.
1980-01-01
Summary: Methods were developed for the analysis of 16 hydrocarbons in avian tissue. Mechanical extraction with pentane was followed by clean-up on Florisil and Silicar. Residues were determined by gas-liquid chromatography and gas-liquid chromatography-mass spectrometry. The method was applied to the analysis of liver, kidney, fat, and brain tissue of mallard ducks (Anas platyrhynchos) fed a mixture of hydrocarbons. Measurable concentrations of all compounds analyzed were present in all tissues except brain. Highest concentrations were in fat.
Tractability of Theory Patching
Argamon-Engelson, S
1998-01-01
In this paper we consider the problem of `theory patching', in which we are given a domain theory, some of whose components are indicated to be possibly flawed, and a set of labeled training examples for the domain concept. The theory patching problem is to revise only the indicated components of the theory, such that the resulting theory correctly classifies all the training examples. Theory patching is thus a type of theory revision in which revisions are made to individual components of the theory. Our concern in this paper is to determine for which classes of logical domain theories the theory patching problem is tractable. We consider both propositional and first-order domain theories, and show that the theory patching problem is equivalent to that of determining what information contained in a theory is `stable' regardless of what revisions might be performed to the theory. We show that determining stability is tractable if the input theory satisfies two conditions: that revisions to each theory compone...
Uncertainty Quantification in Aerodynamics Simulations Project
National Aeronautics and Space Administration — The objective of the proposed work (Phases I and II) is to develop uncertainty quantification methodologies and software suitable for use in CFD simulations of...
Quantification of nanowire uptake by live cells
Margineanu, Michael B.
2015-05-01
Nanostructures fabricated by different methods have become increasingly important for various applications at the cellular level. In order to understand how these nanostructures “behave” and for studying their internalization kinetics, several attempts have been made at tagging and investigating their interaction with living cells. In this study, magnetic iron nanowires with an iron oxide layer are coated with (3-Aminopropyl)triethoxysilane (APTES), and subsequently labeled with a fluorogenic pH-dependent dye pHrodo™ Red, covalently bound to the aminosilane surface. Time-lapse live imaging of human colon carcinoma HCT 116 cells interacting with the labeled iron nanowires is performed for 24 hours. As the pHrodo™ Red conjugated nanowires are non-fluorescent outside the cells but fluoresce brightly inside, internalized nanowires are distinguished from non-internalized ones and their behavior inside the cells can be tracked for the respective time length. A machine learning-based computational framework dedicated to automatic analysis of live cell imaging data, Cell Cognition, is adapted and used to classify cells with internalized and non-internalized nanowires and subsequently determine the uptake percentage by cells at different time points. An uptake of 85 % by HCT 116 cells is observed after 24 hours incubation at NW-to-cell ratios of 200. While the approach of using pHrodo™ Red for internalization studies is not novel in the literature, this study reports for the first time the utilization of a machine-learning based time-resolved automatic analysis pipeline for quantification of nanowire uptake by cells. This pipeline has also been used for comparison studies with nickel nanowires coated with APTES and labeled with pHrodo™ Red, and another cell line derived from the cervix carcinoma, HeLa. It has thus the potential to be used for studying the interaction of different types of nanostructures with potentially any live cell types.
Vague quantification in the scientific journal article
Banks, David
2014-01-01
Although scientific writing is generally considered precise, it contains a significant number of instances of vague quantification. A study of a small corpus of six scientific research articles shows that expressions consisting solely of words are distributed differently from expressions that include numerals. In some cases, the imprecision is compensated for within the text itself. The use of vague quantification is linked to the phenomenon of h...
Uncertainty Quantification in Solidification Modelling
Fezi, K.; Krane, M. J. M.
2015-06-01
Numerical models have been used to simulate solidification processes, to gain insight into physical phenomena that cannot be observed experimentally. Often validation of such models has been done through comparison to a few or single experiments, in which agreement is dependent on both model and experimental uncertainty. As a first step to quantifying the uncertainty in the models, sensitivity and uncertainty analysis were performed on a simple steady-state 1D solidification model of continuous casting of weld filler rod. This model, which includes conduction, advection, and release of latent heat, was developed for use in uncertainty quantification of the calculated positions of the liquidus and solidus and of the solidification time. Using this model, a Smolyak sparse grid algorithm constructed a response surface that fit model outputs based on the range of uncertainty in the inputs to the model. The response surface was then used to determine the probability density functions (PDFs) of the model outputs and the sensitivities of the inputs. This process was done for a linear fraction solid-temperature relationship, for which there is an analytical solution, and for a Scheil relationship. Similar analysis was also performed on a transient 2D model of solidification in a rectangular domain.
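The surrogate-based workflow the abstract describes can be sketched compactly: sample the uncertain inputs, fit a cheap response surface to the model outputs, and read statistics off the surrogate. The model function, input ranges, and the quadratic basis below are illustrative assumptions for the sketch, not the paper's Smolyak sparse-grid implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical solidification model: an output (say, solidus position)
# as a function of conductivity k and a scaled latent heat L. The form
# and input ranges are invented for the demo.
def model(k, L):
    return 0.5 * k / (1.0 + L)

k = rng.uniform(20.0, 40.0, 2000)   # W/(m K), assumed range
L = rng.uniform(2.5, 3.5, 2000)     # latent heat in units of 1e5 J/kg

y = model(k, L)

# Quadratic response surface fitted by least squares.
X = np.column_stack([np.ones_like(k), k, L, k * k, L * L, k * L])
coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
y_surr = X @ coeffs

# The surrogate is cheap to evaluate, so output statistics (a PDF
# estimate via a histogram, moments, input sensitivities from the
# coefficients) are then nearly free.
print(round(float(y_surr.mean()), 3), round(float(y_surr.std()), 3))
```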
A Theory of Noninterference for the π-Calculus
Crafa, Silvia; Rossi, Sabina
We develop a theory of noninterference for a typed version of the π-calculus where types are used to assign secrecy levels to channels. We provide two equivalent characterizations of noninterference based on a typed behavioural equivalence relative to a security level σ, which captures the idea of external observers of level σ. The first characterization involves a universal quantification over all the possible active attacks, i.e., malicious processes which interact with the system, possibly leaking secret information. The second definition of noninterference is expressed in terms of an unwinding condition, which deals with so-called passive attacks trying to infer confidential information just by observing the behaviour of the system. This unwinding-based characterization naturally leads to efficient methods for the verification and construction of (compositional) secure systems. Furthermore, we characterize noninterference in terms of bisimulation-like (partial) equivalence relations in the style of a stream of similar studies for other process calculi (e.g., CCS and CryptoSPA) and languages (e.g., imperative and multi-threaded languages).
Tan, Uner
2014-01-01
Two consanguineous families with Uner Tan Syndrome (UTS) were analyzed in relation to self-organizing processes in complex systems, and the evolutionary emergence of human bipedalism. The cases had the key symptoms of previously reported cases of UTS, such as quadrupedalism, mental retardation, and dysarthric or no speech, but the new cases also exhibited infantile hypotonia and are designated UTS Type-II. There were 10 siblings in Branch I and 12 siblings in Branch II. Of these, there were seven cases exhibiting habitual quadrupedal locomotion (QL): four deceased and three living. The infantile hypotonia in the surviving cases gradually disappeared over a period of years, so that they could sit by about 10 years, crawl on hands and knees by about 12 years. They began walking on all fours around 14 years, habitually using QL. Neurological examinations showed normal tonus in their arms and legs, no Babinski sign, brisk tendon reflexes especially in the legs, and mild tremor. The patients could not walk in a straight line, but (except in one case) could stand up and maintain upright posture with truncal ataxia. Cerebello-vermial hypoplasia and mild gyral simplification were noted in their MRIs. The results of the genetic analysis were inconclusive: no genetic code could be identified as the triggering factor for the syndrome in these families. Instead, the extremely low socio-economic status of the patients was thought to play a role in the emergence of UTS, possibly by epigenetically changing the brain structure and function, with a consequent selection of ancestral neural networks for QL during locomotor development. It was suggested that UTS may be regarded as one of the unpredictable outcomes of self-organization within a complex system. It was also noted that the prominent feature of this syndrome, the diagonal-sequence habitual QL, generated an interference between ipsilateral hands and feet, as in non-human primates. 
It was suggested that this may have been the triggering factor for the attractor state "bipedal locomotion" (BL), which had visual and manual benefits for our ape-like ancestors and therefore enhanced their chances for survival, with consequent developments in the psychomotor domain of humans. This was put forward as a novel theory of the evolution of BL in human beings. PMID:24795558
Uncertainty Quantification for Safeguards Measurements
Part of the scientific method requires all calculated and measured results to be accompanied by a description that meets user needs and provides an adequate statement of the confidence one can have in the results. The scientific art of generating quantitative uncertainty statements is closely related to the mathematical disciplines of applied statistics, sensitivity analysis, optimization, and inversion, but in the field of non-destructive assay it also often draws heavily on expert judgment based on experience. We call this process uncertainty quantification (UQ). Philosophical approaches to UQ, along with the formal tools available for UQ, have advanced considerably over recent years, and these advances, we feel, may be useful to include in the analysis of data gathered from safeguards instruments. This paper sets out what we hope to achieve during a three-year US DOE NNSA research project recently launched to address the potential of advanced UQ to improve safeguards conclusions. By way of illustration we discuss measurement of uranium enrichment by the enrichment meter principle (also known as the infinite thickness technique), which relies on gamma counts near the 186 keV peak directly from 235U. This method has strong foundations in fundamental physics and so we have a basis for the choice of response model, although in some implementations peak area extraction may result in a bias when applied over a wide dynamic range. It also allows us to describe a common but usually neglected aspect of applying a calibration curve, namely the error structure in the predictors. We illustrate this using a combination of measured data and simulation. (author)
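The calibration-curve step mentioned above is easy to sketch. The numbers below are invented for illustration (a linear count-rate response with made-up coefficients), and the fit deliberately treats the declared enrichments of the standards as exact:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical calibration standards: declared 235U enrichment (wt%)
# versus net 186 keV count rate (counts/s). The linear response and its
# coefficients are invented for the demo.
enrichment = np.array([0.7, 2.0, 3.5, 5.0, 20.0, 93.0])
counts = 12.0 * enrichment + 3.0 + rng.normal(0.0, 0.5, 6)  # counting noise

# Classical calibration: regress the count rate on enrichment, then
# invert the fitted line to assay an unknown item.
slope, intercept = np.polyfit(enrichment, counts, 1)

def predict_enrichment(rate):
    return (rate - intercept) / slope

# An unknown item measured at 63 counts/s:
print(round(float(predict_enrichment(63.0)), 2))
```

When the standards' declared enrichments themselves carry uncertainty, ordinary regression of this kind attenuates the fitted slope; that error structure in the predictors is the neglected aspect the abstract highlights.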
Weak Set Theory for Grothendieck's Number Theory
McLarty, Colin
2011-01-01
Grothendieck preempted set theoretic issues in cohomology by positing universes, where his version made these sets so large that Zermelo Fraenkel set theory (ZFC) cannot prove they exist. We show the weak fragment of ZFC called MacLane set theory (MC) suffices for existing applications in number theory. It has the proof theoretic strength of simple type theory. Adding a version of Mac Lane's axiom of one universe gives MC+U, also a weak fragment of ZFC yet sufficient for the whole SGA.
Motivation Internalization and Simplex Structure in Self-Determination Theory
Ünlü, Ali; Dettweiler, Ulrich
2015-12-01
Self-determination theory, as proposed by Deci and Ryan, postulated different types of motivation regulation. As to the introjected and identified regulation of extrinsic motivation, their internalizations were described as "somewhat external" and "somewhat internal" and remained undetermined in the theory. This paper introduces a constrained regression analysis that allows these vaguely expressed motivations to be estimated in an "optimal" manner, in any given empirical context. The approach was even generalized and applied for simplex structure analysis in self-determination theory. The technique was exemplified with an empirical study comparing science teaching in a classical school class versus an expeditionary outdoor program. Based on a sample of 84 German pupils (43 girls, 41 boys, 10 to 12 years old), data were collected using the German version of the Academic Self-Regulation Questionnaire. The science-teaching format was seen to not influence the pupils' internalization of identified regulation. The internalization of introjected regulation differed and shifted more toward the external pole in the outdoor teaching format. The quantification approach supported the simplex structure of self-determination theory, whereas correlations may disconfirm the simplex structure. PMID:26595290
Inverse problems Tikhonov theory and algorithms
Ito, Kazufumi
2014-01-01
Inverse problems arise in practical applications whenever one needs to deduce unknowns from observables. This monograph is a valuable contribution to the highly topical field of computational inverse problems. Both mathematical theory and numerical algorithms for model-based inverse problems are discussed in detail. The mathematical theory focuses on nonsmooth Tikhonov regularization for linear and nonlinear inverse problems. The computational methods include nonsmooth optimization algorithms, direct inversion methods and uncertainty quantification via Bayesian inference. The book offers a c
D. M. Armstrong on the Identity Theory of Mind
Shanjendu Nath
2013-01-01
The Identity theory of mind occupies an important place in the history of philosophy. This theory is one of the important representations of the materialistic philosophy. This theory is known as "Materialist Monist Theory of Mind". Sometimes it is called "Type Physicalism", "Type Identity" or "Type-Type Theory" or "Mind-Brain Identity Theory". This theory appears in the philosophical domain as a reaction to the failure of Behaviourism. A number of philosophers developed this theory and among...
Gerald Jarre
2014-11-01
Nanodiamonds functionalized with different organic moieties carrying terminal amino groups have been synthesized. These include conjugates generated by Diels–Alder reactions of ortho-quinodimethanes formed in situ from pyrazine and 5,6-dihydrocyclobuta[d]pyrimidine derivatives. For the quantification of primary amino groups a modified photometric assay based on the Kaiser test has been developed and validated for different types of aminated nanodiamond. The results correspond well to values obtained by thermogravimetry. The method represents an alternative wet-chemical quantification method in cases where other techniques like elemental analysis fail due to unfavourable combustion behaviour of the analyte or other impediments.
Lint, J.H. van; Nieuwland, G.Y.
1996-01-01
Coding theory lies naturally at the intersection of a large number of disciplines in pure and applied mathematics: algebra and number theory, probability theory and statistics, communication theory, discrete mathematics and combinatorics, complexity theory, and statistical physics. The workshop on coding theory covered many facets of the recent research advances.
Li Song
2010-04-01
Background: Quantitative proteomics technologies have been developed to comprehensively identify and quantify proteins in two or more complex samples. Quantitative proteomics based on differential stable isotope labeling is one of the proteomics quantification technologies. Mass spectrometric data generated for peptide quantification are often noisy, and peak detection and definition require various smoothing filters to remove noise in order to achieve accurate peptide quantification. Many traditional smoothing filters, such as the moving average filter, Savitzky-Golay filter and Gaussian filter, have been used to reduce noise in MS peaks. However, limitations of these filtering approaches often result in inaccurate peptide quantification. Here we present the WaveletQuant program, based on wavelet theory, for better or alternative MS-based proteomic quantification. Results: We developed a novel discrete wavelet transform (DWT) and a 'Spatial Adaptive Algorithm' to remove noise and to identify true peaks. We programmed and compiled WaveletQuant using Visual C++ 2005 Express Edition. We then incorporated the WaveletQuant program into the Trans-Proteomic Pipeline (TPP), a commonly used open source proteomics analysis pipeline. Conclusions: We showed that WaveletQuant was able to quantify more proteins and to quantify them more accurately than ASAPRatio, a program that performs quantification in the TPP pipeline, first using known mixed ratios of yeast extracts and then using a data set from ovarian cancer cell lysates. The program and its documentation can be downloaded from our website at http://systemsbiozju.org/data/WaveletQuant.
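As a toy illustration of the wavelet-denoising idea (a single-level Haar transform with soft thresholding, not the authors' multi-level DWT or their 'Spatial Adaptive Algorithm'):

```python
import numpy as np

def haar_denoise(signal, threshold):
    """One-level Haar DWT, soft-threshold the detail coefficients,
    then invert. A minimal stand-in for the multi-level transform
    described in the abstract."""
    x = np.asarray(signal, dtype=float)
    assert len(x) % 2 == 0
    approx = (x[0::2] + x[1::2]) / np.sqrt(2.0)   # low-pass half
    detail = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # high-pass half
    # Soft thresholding: shrink small (noise-dominated) details to zero.
    detail = np.sign(detail) * np.maximum(np.abs(detail) - threshold, 0.0)
    # Inverse Haar transform.
    out = np.empty_like(x)
    out[0::2] = (approx + detail) / np.sqrt(2.0)
    out[1::2] = (approx - detail) / np.sqrt(2.0)
    return out

# A synthetic Gaussian-like MS peak with a high-frequency disturbance:
t = np.linspace(-1, 1, 64)
peak = np.exp(-t**2 / 0.05)
noisy = peak + 0.05 * np.sin(40 * t)   # deterministic "noise" for the demo
smoothed = haar_denoise(noisy, threshold=0.05)
```

With `threshold=0` the transform round-trips exactly, which is a handy sanity check on any DWT implementation.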
Quantification of brain lipids by FTIR spectroscopy and partial least squares regression
Dreissig, Isabell; Machill, Susanne; Salzer, Reiner; Krafft, Christoph
2009-01-01
Brain tissue is characterized by high lipid content. Its content decreases and the lipid composition changes during transformation from normal brain tissue to tumors. Therefore, the analysis of brain lipids might complement the existing diagnostic tools to determine the tumor type and tumor grade. Objective of this work is to extract lipids from gray matter and white matter of porcine brain tissue, record infrared (IR) spectra of these extracts and develop a quantification model for the main lipids based on partial least squares (PLS) regression. IR spectra of the pure lipids cholesterol, cholesterol ester, phosphatidic acid, phosphatidylcholine, phosphatidylethanolamine, phosphatidylserine, phosphatidylinositol, sphingomyelin, galactocerebroside and sulfatide were used as references. Two lipid mixtures were prepared for training and validation of the quantification model. The composition of lipid extracts that were predicted by the PLS regression of IR spectra was compared with lipid quantification by thin layer chromatography.
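The quantification step rests on a linear mixture model of the reference spectra. The sketch below uses plain least squares with entirely synthetic spectra; the paper itself uses PLS regression, which additionally copes with collinear spectra.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical reference IR spectra of three pure lipids over 200
# wavenumber channels (columns of R); values are invented for the demo.
n_channels = 200
R = np.abs(rng.normal(1.0, 0.3, (n_channels, 3)))

# Linear mixture model (Beer-Lambert-style assumption): the mixture
# spectrum is a concentration-weighted sum of the references plus noise.
true_conc = np.array([0.5, 0.3, 0.2])
mixture = R @ true_conc + rng.normal(0.0, 0.01, n_channels)

# Ordinary least squares estimate of the concentrations; the textbook
# counterpart of the PLS quantification step.
conc_hat, *_ = np.linalg.lstsq(R, mixture, rcond=None)
print(np.round(conc_hat, 2))
```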
Techniques for quantification of liver fat in risk stratification of diabetics
Fatty liver disease plays an important role in the development of type 2 diabetes. Accurate techniques for detection and quantification of liver fat are essential for clinical diagnostics. Chemical shift-encoded magnetic resonance imaging (MRI) is a simple approach to quantify liver fat content. Liver fat quantification using chemical shift-encoded MRI is influenced by several bias factors, such as T2* decay, T1 recovery and the multispectral complexity of fat. The confounder-corrected proton density fat fraction is a simple approach to quantify liver fat with comparable results independent of the software and hardware used. The proton density fat fraction is an accurate biomarker for assessment of liver fat. An accurate and reproducible quantification of liver fat using chemical shift-encoded MRI requires calculation of the proton density fat fraction. (orig.)
Comparison of five DNA quantification methods
Nielsen, Karsten; Mogensen, Helle Smidt; Hedman, Johannes; Niedersttter, Harald; Parson, Walther; Morling, Niels
2008-01-01
Six commercial preparations of human genomic DNA were quantified using five quantification methods: UV spectrometry, SYBR-Green dye staining, slot blot hybridization with the probe D17Z1, the Quantifiler Human DNA Quantification kit and RB1 rt-PCR. All methods measured higher DNA concentrations than expected based on the information by the manufacturers. UV spectrometry, SYBR-Green dye staining, slot blot and RB1 rt-PCR gave 39, 27, 11 and 12%, respectively, higher concentrations than expected based on the manufacturers' information. The DNA preparations were quantified using the Quantifiler Human DNA Quantification kit in two experiments. The measured DNA concentrations with Quantifiler were 125 and 160% higher than expected based on the manufacturers' information. When the Quantifiler human DNA standard (Raji cell line) was replaced by the commercial human DNA preparation G147A (Promega) to generate the DNA...
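The "X% higher than expected" figures reported above are plain relative deviations. The helper and numbers below are illustrative, not the study's raw data:

```python
# Relative deviation of a measured DNA concentration from the
# manufacturer's declared value, in percent.
def percent_above_expected(measured, expected):
    return 100.0 * measured / expected - 100.0

# A preparation declared at 100 ng/uL but measured at 139 ng/uL by UV
# spectrometry corresponds to a "39% higher" figure:
print(percent_above_expected(139.0, 100.0))  # -> 39.0
```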
De Mey, Marjan; Lequeux, Gaspard; Maertens, Jo; De Maeseneire, Sofie; Soetaert, Wim; Vandamme, Erick
2006-06-15
Recent developments in cellular and molecular biology require the accurate quantification of DNA and RNA in large numbers of samples at a sensitivity that enables determination on small quantities. In this study, five current methods for nucleic acid quantification were compared: (i) UV absorbance spectroscopy at 260 nm, (ii) colorimetric reaction with orcinol reagent, (iii) colorimetric reaction based on diphenylamine, (iv) fluorescence detection with Hoechst 33258 reagent, and (v) fluorescence detection with thiazole orange reagent. Genomic DNA of three different microbial species (with widely different G+C content) was used, as were two different types of yeast RNA and a mixture of equal quantities of DNA and RNA. We can conclude that for nucleic acid quantification, a standard curve with DNA of the microbial strain under study is the best reference. Fluorescence detection with Hoechst 33258 reagent is a sensitive and precise method for DNA quantification if the G+C content is less than 50%. In addition, this method allows quantification of very low levels of DNA (nanogram scale). Moreover, the samples can be crude cell extracts. Also, UV absorbance at 260 nm and fluorescence detection with thiazole orange reagent are sensitive methods for nucleic acid detection, but only if purified nucleic acids need to be measured. PMID:16545766
Yankov, A.; Downar, T. [University of Michigan, 2355 Bonisteel Blvd, Ann Arbor, MI 48109 (United States)
2013-07-01
Recent efforts in the application of uncertainty quantification to nuclear systems have utilized methods based on generalized perturbation theory and stochastic sampling. While these methods have proven to be effective, they both have major drawbacks that may impede further progress. A relatively new approach based on spectral elements for uncertainty quantification is applied in this paper to several problems in reactor simulation. Spectral methods based on collocation attempt to couple the approximation-free nature of stochastic sampling methods with the determinism of generalized perturbation theory. The specific spectral method used in this paper employs both the Smolyak algorithm and adaptivity by using Newton-Cotes collocation points along with linear hat basis functions. Using this approach, a surrogate model for the outputs of a computer code is constructed hierarchically by adaptively refining the collocation grid until the interpolant is converged to a user-defined threshold. The method inherently fits into the framework of parallel computing and allows for the extraction of meaningful statistics and data that are not within reach of stochastic sampling and generalized perturbation theory. This paper aims to demonstrate the advantages of spectral methods, especially when compared to current methods used in reactor physics for uncertainty quantification, and to illustrate their full potential. (authors)
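In one dimension, the adaptive hierarchical refinement described above reduces to something compact: insert a midpoint wherever the linear interpolant still misses the model by more than a tolerance (the "hierarchical surplus"). The model function and tolerance below are invented; the paper's method is multi-dimensional, uses Newton-Cotes points, and builds the Smolyak construction on top of this idea.

```python
import numpy as np

def model(x):
    # Hypothetical expensive-code output for one uncertain input in [0, 1].
    return np.sin(3.0 * x) + 0.5 * x**2

def adaptive_grid(f, tol=1e-3, max_sweeps=12):
    """Refine [0, 1] by inserting interval midpoints wherever the
    hierarchical surplus (true value minus linear interpolant at the
    midpoint) still exceeds tol."""
    xs = [0.0, 1.0]
    for _ in range(max_sweeps):
        new_pts = [0.5 * (a + b) for a, b in zip(xs[:-1], xs[1:])
                   if abs(f(0.5 * (a + b)) - 0.5 * (f(a) + f(b))) > tol]
        if not new_pts:     # interpolant converged everywhere
            break
        xs = sorted(xs + new_pts)
    return np.array(xs)

xs = adaptive_grid(model)
ys = model(xs)

# The converged grid defines a piecewise-linear (hat-function) surrogate:
x_test = np.linspace(0.0, 1.0, 101)
err = float(np.max(np.abs(np.interp(x_test, xs, ys) - model(x_test))))
print(len(xs))
```

The surrogate concentrates points where the model curves most, which is the same economy the sparse-grid construction exploits per dimension.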
Schwabe, O.; Shehab, E.; Erkoyuncu, J.
2015-08-01
The lack of defensible methods for quantifying cost estimate uncertainty over the whole product life cycle of aerospace innovations such as propulsion systems or airframes poses a significant challenge to the creation of accurate and defensible cost estimates. Based on the axiomatic definition of uncertainty as the actual prediction error of the cost estimate, this paper provides a comprehensive overview of metrics used for the uncertainty quantification of cost estimates, based on a literature review, an evaluation of publicly funded projects such as those within the CORDIS or Horizon 2020 programs, and an analysis of established approaches used by organizations such as NASA, the U.S. Department of Defense, the ESA, and various commercial companies. The metrics are categorized based on their foundational character (foundations), their use in practice (state-of-practice), their availability for practice (state-of-art) and those suggested for future exploration (state-of-future). Insights gained were that a variety of uncertainty quantification metrics exist whose suitability depends on the volatility of available relevant information, as defined by technical and cost readiness level, and on the number of whole product life cycle phases the estimate is intended to be valid for. Information volatility and the number of whole product life cycle phases can hereby be considered as defining multi-dimensional probability fields admitting various uncertainty quantification metric families with identifiable thresholds for transitioning between them. The key research gaps identified were the lack of theoretically grounded guidance for the selection of uncertainty quantification metrics and the lack of practical alternatives to metrics based on the Central Limit Theorem.
An innovative uncertainty quantification framework, consisting of a set-theory-based typology, a data library, a classification system, and a corresponding input-output model, is put forward to address this research gap as the basis for future work in this field.
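As a concrete instance of the Central Limit Theorem-based metrics the review wants alternatives to, a cost estimate assembled from many independent work packages is often summarized by a normal confidence interval. All numbers below are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)

# A programme cost estimate assembled from 30 work packages whose costs
# are uncertain; means and spreads are invented for the demo.
wp_means = rng.uniform(1.0, 5.0, 30)   # M$ per work package
wp_sds = 0.2 * wp_means                # assumed 20% spread each

total_mean = float(wp_means.sum())
# Independence assumption: variances add, and the total is then
# approximately normal by the Central Limit Theorem.
total_sd = float(np.sqrt((wp_sds**2).sum()))

# 90% two-sided CLT interval, the standard metric in question:
z90 = 1.645
low, high = total_mean - z90 * total_sd, total_mean + z90 * total_sd
print(round(low, 1), round(high, 1))
```

The interval's defensibility rests entirely on the independence and normality assumptions, which is why the review flags the lack of practical alternatives as a gap.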
Agop, M. [Department of Physics, University of Athens, Athens 15771 (Greece) and Department of Physics, Gh. Asachi Technical University, Iasi 700050 (Romania)]. E-mail: magop@phys.tuiasi.ro; Nica, P. [Department of Physics, University of Athens, Athens 15771 (Greece); Department of Physics, Gh. Asachi Technical University, Iasi 700050 (Romania)]; Ioannou, P.D. [Department of Physics, University of Athens, Athens 15771 (Greece)]; Malandraki, Olga [Department of Physics, University of Athens, Athens 15771 (Greece)]; Gavanas-Pahomi, I. [Department of Physics, Gh. Asachi Technical University, Iasi 700050 (Romania)]
2007-12-15
A generalization of Nottale's scale relativity theory is elaborated: the generalized Schrödinger equation results as an irrotational movement of Navier-Stokes-type fluids having an imaginary viscosity coefficient. Then ψ simultaneously becomes the wave function and the speed potential. In the hydrodynamic formulation of scale relativity theory, some implications in the gravitational morphogenesis of structures are analyzed: planetary motion quantizations, Saturn's rings motion quantizations, redshift quantization in binary galaxies, global redshift quantization, etc. The correspondence with El Naschie's ε(∞) space-time implies a special type of superconductivity (El Naschie's superconductivity) and Cantorian-fractal sequences in the quantification of the Universe.
Molecular quantification of genes encoding for green-fluorescent proteins
Felske, A; Vandieken, V; Pauling, B V; von Canstein, H F; Wagner-Döbler, I
2003-01-01
A quantitative PCR approach is presented to analyze the amount of recombinant green fluorescent protein (gfp) genes in environmental DNA samples. The quantification assay is a combination of specific PCR amplification and temperature gradient gel electrophoresis (TGGE). Gene quantification is...
Abendaño, Naiara; Sevilla, Iker; Prieto, José M.; Garrido, Joseba M; Juste, Ramon A.; Alonso-Hearn, Marta
2012-01-01
Quantification of 11 clinical strains of Mycobacterium avium subsp. paratuberculosis isolated from domestic (cattle, sheep, and goat) and wildlife (fallow deer, deer, wild boar, and bison) animal species in an automatic liquid culture system (Bactec MGIT 960) was accomplished. The strains were previously isolated and typed using IS1311 PCR followed by restriction endonuclease analysis (PCR-REA) into type C, S, or B. A strain-specific quantification curve was generated for each M. avium subsp....
Via compactification on a circle, the matrix model of M-theory proposed by Banks et al. suggests a concrete identification between the large N limit of two-dimensional N=8 supersymmetric Yang-Mills theory and type IIA string theory. In this paper we collect evidence that supports this identification. We explicitly identify the perturbative string states and their interactions, and describe the appearance of D-particle and D-membrane states. (orig.)
Introduction to superstring theory
Nunez, Carmen [Instituto de Astronomia y Fisica del Espacio, Buenos Aires (Argentina)], e-mail: carmen@iafe.uba.ar
2009-07-01
This is a very basic introduction to the AdS/CFT correspondence. The first lecture motivates the duality between gauge theories and gravity/string theories. The next two lectures introduce the bosonic and supersymmetric string theories. The fourth lecture is devoted to the study of Dp-branes and finally, in the fifth lecture, I discuss the two worlds: N=4 SYM in 3+1 flat dimensions and type IIB superstrings in AdS5 x S5. (author)
This paper presents the quantification of the resonance interference effect on multi-group effective cross-sections in lattice physics calculations. In resonance self-shielding methods based on the equivalence theory, the resonance interference effect among multiple nuclides cannot be treated directly in the multi-group effective cross-section. Continuous-energy or ultra-fine-group treatments can consider the effect directly, but their application to fuel assembly geometry is not realistic within practical computation time. In the present study, the resonance interference effect on the multi-group effective cross-section is simply quantified by the resonance interference factor (RIF) in order to confirm the benefit of considering the effect. The RIF is generated for a typical pin-cell geometry of a water-moderated system. The multi-group effective cross-sections with and without RIFs are compared with a continuous-energy Monte Carlo result. As a result, a significant impact of the resonance interference effect is confirmed only for a limited set of nuclides, reaction types and energy groups. Fortunately, these have a small effect on k-infinity because the resonance interference effect is mainly induced by the wide resonances of 238U on the other minor nuclides (e.g., 235U, 239Pu) in limited resonance energy ranges. The results also show that the effect is small for the absorption cross-section of 238U, which is the dominant resonance nuclide in the fuel. The quantification results in the present study provide useful material for investigating more advanced resonance treatments for next-generation lattice physics codes. (author)
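The RIF itself is just a ratio of effective cross-sections computed with and without the co-resident resonance nuclides; a multi-group correction then multiplies the isolated-nuclide value by it. The numbers below are invented, not from the paper:

```python
# Resonance interference factor: effective cross-section of a nuclide
# computed in the full mixture divided by the value computed with the
# nuclide self-shielded alone. Values are illustrative (barns).
def rif(sigma_mixture, sigma_isolated):
    return sigma_mixture / sigma_isolated

sigma_alone = 48.0        # hypothetical 235U absorption, one group
sigma_in_mixture = 45.6   # depressed by overlap with broad 238U resonances

factor = rif(sigma_in_mixture, sigma_alone)
corrected = sigma_alone * factor   # recovers the interference-aware value
print(round(factor, 4))
```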
Li, Xu; Chu, Xiakun; Yan, Zhiqiang; Zheng, Xiliang; Zhang, Kun; Zhang, Feng; Yan, Han; Wu, Wei; Wang, Jin
2016-01-01
In this review, we explore the physical mechanisms of biological processes such as protein folding and recognition, ligand binding, and systems biology, including cell cycle, stem cell, cancer, evolution, ecology, and neural networks. Our approach is based on the landscape and flux theory for nonequilibrium dynamical systems. This theory provides a unifying principle and foundation for investigating the underlying mechanisms and physical quantification of biological systems. Project supported by the Natural Science Foundation of China (Grant Nos. 21190040, 11174105, 91225114, 91430217, and 11305176) and Jilin Province Youth Foundation, China (Grant No. 20150520082JH).
Yamazaki, Masahito
2013-01-01
We propose a new concept of entanglement for quantum systems: entanglement in theory space. This is defined by decomposing a theory into two by an un-gauging procedure. We provide two examples where this newly-introduced entanglement is closely related with conventional geometric entropies: deconstruction and AGT-type correspondence.
Entanglement quantification by local unitaries
Monras, A; Giampaolo, S M; Gualdi, G; Davies, G B; Illuminati, F
2011-01-01
Invariance under local unitary operations is a fundamental property that must be obeyed by every proper measure of quantum entanglement. However, this is not the only aspect of entanglement theory where local unitaries play a relevant role. In the present work we show that the application of suitable local unitary operations defines a family of bipartite entanglement monotones, collectively referred to as "shield entanglement". They are constructed by first considering the (squared) Hilbert-Schmidt distance of the state from the set of states obtained by applying to it a given local unitary. To the action of each different local unitary there corresponds a different distance. We then minimize these distances over the sets of local unitaries with different spectra, obtaining an entire family of different entanglement monotones. We show that these shield entanglement monotones are organized in a hierarchical structure, and we establish the conditions that need to be imposed on the spectrum of a local unitary f...
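The basic ingredient, the squared Hilbert-Schmidt distance between a state and its image under a local unitary, is easy to compute for two qubits. The states and the unitary below are standard textbook choices, not examples taken from the paper:

```python
import numpy as np

def hs_distance_sq(rho, u_local):
    """Squared Hilbert-Schmidt distance between rho and U rho U^dagger,
    where U = kron(u_local, I) acts on the first qubit only."""
    U = np.kron(u_local, np.eye(2, dtype=complex))
    rho_u = U @ rho @ U.conj().T
    diff = rho - rho_u
    return float(np.trace(diff.conj().T @ diff).real)

# Maximally entangled Bell state (|00> + |11>)/sqrt(2):
psi = np.array([1.0, 0.0, 0.0, 1.0], dtype=complex) / np.sqrt(2.0)
rho_bell = np.outer(psi, psi.conj())

# Separable product state |00><00|:
e00 = np.zeros(4, dtype=complex)
e00[0] = 1.0
rho_prod = np.outer(e00, e00.conj())

# Local unitary with spectrum {1, -1} acting on the first qubit:
uz = np.diag([1.0, -1.0]).astype(complex)

d_bell = hs_distance_sq(rho_bell, uz)   # Bell state moved to an orthogonal state
d_prod = hs_distance_sq(rho_prod, uz)   # product state left invariant
print(d_bell, d_prod)
```

The contrast hints at the monotone construction: for this product state the distance vanishes outright, while the maximally entangled state is displaced, and the actual measure then minimizes such distances over all local unitaries with a fixed spectrum.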
Quantifications and Modeling of Human Failure Events in a Fire PSA
Kang, Dae Il; Kim, Kilyoo; Jang, Seung-Cheol [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)]
2014-10-15
USNRC and EPRI developed guidance, 'Fire Human Reliability Analysis Guidelines, NUREG-1921', for estimating human error probabilities (HEPs) for HFEs under fire conditions. NUREG-1921 classifies HFEs into four types associated with the following human actions: - Type 1: New and existing Main Control Room (MCR) actions - Type 2: New and existing ex-MCR actions - Type 3: Actions associated with using alternate shutdown means (ASD) - Type 4: Actions relating to errors of commission (EOCs) or errors of omission (EOOs) as a result of incorrect indications (SPI) In this paper, approaches for the quantification and modeling of HFEs related to Type 1, 2 and 3 human actions are introduced. The paper presents the human reliability analysis process for a fire PSA of Hanul Unit 3. A multiplier of 10 was used to re-estimate the HEPs for the preexisting internal human actions. The HEPs for all ex-MCR actions were assumed to be one. New MCR human actions were quantified using the scoping analysis method of NUREG-1921. If a quantified human action was identified to be risk-significant, detailed approaches (modeling and quantification) were used for incorporating fire situations into it. Multiple HFEs for a single human action were defined and separately quantified to incorporate the specific fire situations into them. From this study, we confirm that the modeling as well as the quantification of human actions is very important for treating them appropriately in PSA logic structures.
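The scoping re-estimation quoted above (a multiplier of 10 on pre-fire HEPs, with probabilities capped at one, as was assumed for the ex-MCR actions) amounts to the following rule; this is a sketch of the rule as stated in the abstract, not NUREG-1921 code, and the sample HEP values are invented:

```python
def fire_hep(internal_hep, multiplier=10.0):
    """Scale a pre-fire internal-events HEP by the fire multiplier,
    capping the result at 1.0 since it is a probability."""
    return min(internal_hep * multiplier, 1.0)

# An internal-events HEP of 3e-3 becomes 3e-2 under fire conditions;
# a large HEP saturates at 1.0, matching the ex-MCR assumption
print(fire_hep(3e-3), fire_hep(0.5))
```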
HPC Analytics Support. Requirements for Uncertainty Quantification Benchmarks
Paulson, Patrick R. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Purohit, Sumit [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Rodriguez, Luke R. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)
2015-05-01
This report outlines techniques for extending benchmark generation products so they support uncertainty quantification by benchmarked systems. We describe how uncertainty quantification requirements can be presented to candidate analytical tools supporting SPARQL. We describe benchmark data sets for evaluating uncertainty quantification, as well as an approach for using our benchmark generator to produce such data sets.
Recurrence quantification analysis in Liu's attractor
Recurrence Quantification Analysis is used to detect transitions from chaos to periodic states, or from chaos to chaos, in a new dynamical system proposed by Liu et al. This system contains a control parameter in the second equation and was originally introduced to investigate the forming mechanism of the compound structure of the chaotic attractor, which exists when the control parameter is zero.
Application of "Uncertainty Quantification" to railway dynamics problems
Bigoni, Daniele; Engsig-Karup, Allan Peter; True, Hans
The paper describes the results of the application of "Uncertainty Quantification" methods in railway vehicle dynamics. The system parameters are given by probability distributions. The results of the application of the Monte-Carlo and generalized Polynomial Chaos methods to a simple bogie model...
Strong laws for recurrence quantification analysis
Grendár, Marian; Majerová, Jana; Špitalský, Vladimír
2013-01-01
The recurrence rate and determinism are two of the basic complexity measures studied in the recurrence quantification analysis. In this paper, the recurrence rate and determinism are expressed in terms of the correlation sums, and strong laws of large numbers are given for them.
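The two measures named above admit a compact sketch (illustrative code only; the signal, the threshold eps, and the minimum line length lmin are arbitrary choices, no delay embedding is used, and the line of identity is included for simplicity): the recurrence rate is the fraction of recurrent pairs, i.e. the correlation sum at threshold eps, while determinism counts the recurrent points that lie on diagonal lines.

```python
import numpy as np

def recurrence_matrix(x, eps):
    """Binary recurrence matrix: R[i, j] = 1 iff |x_i - x_j| < eps."""
    d = np.abs(x[:, None] - x[None, :])
    return (d < eps).astype(int)

def recurrence_rate(R):
    """Fraction of recurrent pairs: the correlation sum at threshold eps."""
    return R.mean()

def determinism(R, lmin=2):
    """Fraction of recurrent points on diagonal lines of length >= lmin."""
    n = R.shape[0]
    on_lines = 0
    for k in range(-(n - 1), n):        # scan every diagonal
        run = 0
        for v in np.append(np.diagonal(R, k), 0):  # sentinel 0 flushes last run
            if v == 1:
                run += 1
            else:
                if run >= lmin:
                    on_lines += run
                run = 0
    total = R.sum()
    return on_lines / total if total else 0.0

x = np.sin(np.linspace(0, 8 * np.pi, 200))  # periodic signal: high determinism
R = recurrence_matrix(x, eps=0.1)
print(recurrence_rate(R), determinism(R))
```

For a periodic signal the recurrent points organize into long diagonals, so determinism is close to one; for white noise it drops sharply.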
Automated quantification and analysis of mandibular asymmetry
Darvann, T. A.; Hermann, N. V.; Larsen, P.; Ólafsdóttir, Hildur; Hansen, I. V.; Hove, H. D.; Christensen, L.; Rueckert, D.; Kreiborg, S.
We present an automated method of spatially detailed 3D asymmetry quantification in mandibles extracted from CT and apply it to a population of infants with unilateral coronal synostosis (UCS). An atlas-based method employing non-rigid registration of surfaces is used for determining deformation ...
Perfusion Quantification Using Gaussian Process Deconvolution
Andersen, Irene Klærke; Have, Anna Szynkowiak; Rasmussen, Carl Edward; Hansson, Lars; Marstrand, J. R.; Larsson, Henrik B.W.; Hansen, Lars Kai
2002-01-01
The quantification of perfusion using dynamic susceptibility contrast MRI (DSC-MRI) requires deconvolution to obtain the residual impulse response function (IRF). In this work, a method using the Gaussian process for deconvolution (GPD) is proposed. The fact that the IRF is smooth is incorporated...
Colour thresholding and objective quantification in bioimaging
Fermin, C. D.; Gerber, M. A.; Torre-Bueno, J. R.
1992-01-01
Computer imaging is rapidly becoming an indispensable tool for the quantification of variables in research and medicine. Whilst its use in medicine has largely been limited to qualitative observations, imaging in applied basic sciences, medical research and biotechnology demands objective quantification of the variables in question. In black and white densitometry (0-256 levels of intensity) the separation of subtle differences between closely related hues from stains is sometimes very difficult. True-colour and real-time video microscopy analysis offer choices not previously available with monochrome systems. In this paper we demonstrate the usefulness of colour thresholding, which has so far proven indispensable for proper objective quantification of the products of histochemical reactions and/or subtle differences in tissue and cells. In addition, we provide interested, but untrained readers with basic information that may assist decisions regarding the most suitable set-up for a project under consideration. Data from projects in progress at Tulane are shown to illustrate the advantage of colour thresholding over monochrome densitometry and for objective quantification of subtle colour differences between experimental and control samples.
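The advantage claimed above, separating closely related hues that monochrome densitometry cannot, can be sketched in a few lines; the pixel values and the threshold window below are invented for illustration, not data from the paper:

```python
import numpy as np

# Two stains with closely related hues and nearly equal luminance (values invented)
stain_a = np.array([120, 100, 100])   # slightly reddish reaction product
stain_b = np.array([100, 120, 100])   # slightly greenish counterstain

def to_grey(rgb):
    """Monochrome densitometry: ITU-R BT.601 luminance weighting."""
    return 0.299 * rgb[0] + 0.587 * rgb[1] + 0.114 * rgb[2]

def in_colour_window(rgb, lo, hi):
    """Colour threshold: accept a pixel only if every channel lies in its window."""
    return bool(np.all((rgb >= lo) & (rgb <= hi)))

# In grey levels (0-255) the two stains differ by only a few units
print(to_grey(stain_a), to_grey(stain_b))

# A per-channel colour window isolates stain_a cleanly
lo, hi = np.array([110, 0, 0]), np.array([255, 110, 255])
print(in_colour_window(stain_a, lo, hi), in_colour_window(stain_b, lo, hi))
```

The grey levels are nearly indistinguishable, so no single intensity threshold separates the stains, while the colour window does so exactly.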
Blagojević, Milutin
2012-01-01
During the last five decades, gravity, as one of the fundamental forces of nature, has been formulated as a gauge field theory of the Weyl-Cartan-Yang-Mills type. The resulting theory, the Poincaré gauge theory of gravity, encompasses Einstein's gravitational theory as well as the teleparallel theory of gravity as subcases. In general, the spacetime structure is enriched by Cartan's torsion and the new theory can accommodate fermionic matter and its spin in a perfectly natural way. The present reprint volume contains articles from the most prominent proponents of the theory and is supplemented by detailed commentaries of the editors. This guided tour starts from special relativity and leads, in its first part, to general relativity and its gauge type extensions a la Weyl and Cartan. Subsequent stopping points are the theories of Yang-Mills and Utiyama and, as a particular vantage point, the theory of Sciama and Kibble. Later, the Poincaré gauge theory and its generalizations are explored and specific topi...
Instantaneous Wavenumber Estimation for Damage Quantification in Layered Plate Structures
Mesnil, Olivier; Leckey, Cara A. C.; Ruzzene, Massimo
2014-01-01
This paper illustrates the application of instantaneous and local wavenumber damage quantification techniques for high frequency guided wave interrogation. The proposed methodologies can be considered as first steps towards a hybrid structural health monitoring/ nondestructive evaluation (SHM/NDE) approach for damage assessment in composites. The challenges and opportunities related to the considered type of interrogation and signal processing are explored through the analysis of numerical data obtained via EFIT simulations of damage in CRFP plates. Realistic damage configurations are modeled from x-ray CT scan data of plates subjected to actual impacts, in order to accurately predict wave-damage interactions in terms of scattering and mode conversions. Simulation data is utilized to enhance the information provided by instantaneous and local wavenumbers and mitigate the complexity related to the multi-modal content of the plate response. Signal processing strategies considered for this purpose include modal decoupling through filtering in the frequency/wavenumber domain, the combination of displacement components, and the exploitation of polarization information for the various modes as evaluated through the dispersion analysis of the considered laminate lay-up sequence. The results presented assess the effectiveness of the proposed wavefield processing techniques as a hybrid SHM/NDE technique for damage detection and quantification in composite, plate-like structures.
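The instantaneous-wavenumber idea can be illustrated in one dimension (a sketch only, not the authors' EFIT-based pipeline; the wavefield, the two wavenumbers, and the location of the change are invented): the spatial derivative of the unwrapped phase of the analytic signal recovers the local wavenumber, and an abrupt shift in it marks the region standing in for damage.

```python
import numpy as np

def analytic_signal(u):
    """FFT-based analytic signal (Hilbert transform); assumes even length."""
    n = len(u)
    U = np.fft.fft(u)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:n // 2] = 2.0
    h[n // 2] = 1.0
    return np.fft.ifft(U * h)

# Wavefield snapshot with an abrupt wavenumber change (a crude stand-in
# for a local thickness reduction): k = 200 rad/m, then 400 rad/m
x = np.linspace(0.0, 1.0, 2000)
k = np.where(x < 0.5, 200.0, 400.0)
u = np.sin(np.cumsum(k) * (x[1] - x[0]))

phase = np.unwrap(np.angle(analytic_signal(u)))
k_inst = np.gradient(phase, x)   # instantaneous wavenumber = spatial phase derivative
print(k_inst[500], k_inst[1500])
```

Away from the edges and the transition, the estimate recovers the two local wavenumbers, which is the quantity the damage maps in the paper are built from.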
Assay for the quantification of intact/fragmented genomic DNA.
Georgiou, Christos D; Papapostolou, Ioannis
2006-11-15
This study shows that the accuracy of the quantification of genomic DNA by the commonly used Hoechst- and PicoGreen-based assays is drastically affected by its degree of fragmentation. Specifically, it was shown that these assays underestimate by 70% the concentration of double-stranded DNA (dsDNA) with sizes less than 23 kb. On the other hand, DNA sizes greater and less than approximately 23 kb are commonly characterized as intact and fragmented genomic DNA, respectively, by the agarose electrophoresis DNA smearing assay and are evaluated only qualitatively by this assay. The need for accurate quantification of fragmented and total genomic DNA, combined with the lack of specific, reliable, and simple quantitative methods, prompted us to develop a Hoechst/PicoGreen-based fluorescent assay that quantifies both types of DNA. This assay addresses these problems, and in its Hoechst and PicoGreen versions it accurately quantifies dsDNA as being either intact (≥23 kb) or fragmented (<23 kb), as well as the individual fractions of intact/fragmented DNA existing in any proportions in a total DNA sample, in concentrations as low as 10 ng ml-1 or 15 pg ml-1 with Hoechst or PicoGreen, respectively. Because the assay discriminates total genomic DNA in the two size ranges (≥23 and <23 kb), it complements the agarose electrophoresis DNA smearing assay. PMID:16942746
Birmingham, D. (CERN, Geneva (Switzerland). Theory Div.); Blau, M. (CNRS, 13 - Marseille (France). Centre de Physique Theorique NIKHEF-H, Amsterdam (Netherlands)); Rakowski, M.; Thompson, G. (Mainz Univ. (Germany). Inst. fuer Physik)
1991-12-01
We begin with a general discussion of topological field theories, their defining properties, and classification. The first model we consider in detail (section 3) is supersymmetric quantum mechanics. Topological sigma models, their observables, and the associated mathematics of complex geometry and intersection theory are presented in section 4. Following this, topological gauge theories are discussed in section 5, with particular emphasis on Donaldson theory. The mathematics here is necessarily much more sophisticated than at any other point in this report, and to bridge this gap, a mathematical review of gauge theory and moduli spaces has been included. An analysis of the geometry underlying Donaldson theory gives a general recipe for constructing field theories associated to moduli spaces in arbitrary dimensions, and as an example, we analyze in detail the super BF theories associated with flat connections. Chern-Simons theory and related BF models are the subject of section 6. The connections with knot theory are briefly reviewed and the link with 2D conformal field theory is sketched. We also consider 3D gravity from the Chern-Simons point of view. A presentation of the metric and gauge theory approaches to topological gravity in two dimensions is given. As in all quantum field theories, the issue of renormalization needs to be addressed, and one is obliged to show that the formal topological properties of these theories survive quantization. This point is considered in section 8. We present a detailed analysis of the beta function in certain Witten type theories, and compute one-loop effects in Chern-Simons theory. (orig./HSI).
Manceur, Aziza P; Kamen, Amine A
2015-11-01
Significant improvements in production and purification have been achieved since the first approved influenza vaccines were administered 75 years ago. Global surveillance and fast response have limited the impact of the last pandemic in 2009. In case of another pandemic, vaccines can be generated within three weeks with certain platforms. However, our Achilles heel is at the quantification level. Production of reagents for the quantification of new vaccines using the SRID, the main method formally approved by regulatory bodies, requires two to three months. The impact of such delays can be tragic for vulnerable populations. Therefore, efforts have been directed toward developing alternative quantification methods, which are sensitive, accurate, easy to implement and independent of the availability of specific reagents. The use of newly-developed antibodies against a conserved region of hemagglutinin (HA), a surface protein of influenza, holds great promise as they are able to recognize multiple subtypes of influenza; these new antibodies could be used in immunoassays such as ELISA and slot-blot analysis. HA concentration can also be determined using reversed-phase high performance liquid chromatography (RP-HPLC), which obviates the need for antibodies but still requires a reference standard. The number of viral particles can be evaluated using ion-exchange HPLC and techniques based on flow cytometry principles, but non-viral vesicles have to be taken into account with cellular production platforms. As new production systems are optimized, new quantification methods that are adapted to the type of vaccine produced are required. The nature of these new-generation vaccines might dictate which quantification method to use. In all cases, an alternative method will have to be validated against the current SRID assay. A consensus among the scientific community would have to be reached so that the adoption of new quantification methods would be harmonized between international laboratories. PMID:26271833
Sugimoto, Shigeki; Takahashi, Kazuyoshi
2004-01-01
We analyze the D9-D9bar system in type IIB string theory using Dp-brane probes. It is shown that the world-volume theory of the probe Dp-brane contains two-dimensional and four-dimensional QED in the cases with p=1 and p=3, respectively, and some applications of the realization of these well-studied quantum field theories are discussed. In particular, the two-dimensional QED (the Schwinger model) is known to be a solvable theory and we can apply the powerful field theoretical techniques, such...
Techniques used in conventional project appraisal are mathematically very simple in comparison to those used in reservoir modelling, and in the geosciences. Clearly it would be possible to value assets in mathematically more sophisticated ways if it were meaningful and worthwhile to do so. The DCF approach in common use has recognized limitations; the inability to select a meaningful discount rate being particularly significant. Financial theory has advanced enormously over the last few years, along with computational techniques, and methods are beginning to appear which may change the way we do project evaluations in practice. The starting point for all of this was a paper by Black and Scholes, which asserts that almost all corporate liabilities can be viewed as options of varying degrees of complexity. Although the financial presentation may be unfamiliar to engineers and geoscientists, some of the concepts used will not be. This paper outlines, in plain English, the basis of option pricing theory for assessing the market value of a project. It also attempts to assess the future role of this type of approach in practical Petroleum Exploration and Engineering economics. Reference is made to relevant published Natural Resource literature.
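The Black-Scholes result mentioned above can be stated concretely for the simplest case, a European call; the project numbers below, which treat an asset as a call on its developed reserves, are purely illustrative and not taken from the paper:

```python
from math import log, sqrt, exp
from statistics import NormalDist

def bs_call(S, K, r, sigma, T):
    """Black-Scholes price of a European call: spot S, strike K,
    risk-free rate r, volatility sigma, time to expiry T (years)."""
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    N = NormalDist().cdf
    return S * N(d1) - K * exp(-r * T) * N(d2)

# Illustrative project: reserve value 100, development cost 95,
# 5% risk-free rate, 30% volatility, one year to decide
print(bs_call(S=100.0, K=95.0, r=0.05, sigma=0.30, T=1.0))
```

The option value always exceeds the discounted intrinsic value, which is exactly the extra flexibility premium that a plain DCF appraisal misses.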
String Theory or Field Theory?
Marshakov, A.
2002-01-01
The status of string theory is reviewed, and major recent developments - especially those in going beyond perturbation theory in the string theory and quantum field theory frameworks - are discussed. This analysis helps better understand the role and place of string theory in the modern picture of the physical world. Even though quantum field theory describes a wide range of experimental phenomena, it is emphasized that there are some insurmountable problems inherent in it - notably the impos...
Uncertainty quantification methodology development for the best-estimate safety analysis
This study deals with two approaches to uncertainty quantification methodology. In the first approach, an uncertainty quantification methodology is proposed and applied to the estimation of nuclear reactor fuel peak cladding temperature (PCT) uncertainty. The proposed method adopts the use of Latin hypercube sampling (LHS). The independency between the input variables is verified through a correlation coefficient test. The uncertainty of the output variables is estimated through a goodness-of-fit test on the sample data. In the application, the approach taken to quantifying the total mean and total 95% probability PCTs is given. Emphasis is placed upon the PCT uncertainty estimation due to models' or correlations' uncertainties with the assumption that significant sources of PCT uncertainty are determined. In the second approach, an uncertainty quantification methodology is proposed for a severe accident analysis which has large uncertainties. The proposed method adopts the concept of probabilistic belief measure to transform an analyst's belief on a top event into the equivalent probability of that top event. For the purpose of comparison, analyses are done by 1) applying probability theory regarding the occurring probability of top event as a physical probability or a frequency, 2) applying fuzzy set theory with fuzzy numbered occurring probability of top event, and 3) transforming the analysts' belief on the top event into equivalent probability by the probabilistic belief measure method
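The sampling step of the first approach can be sketched generically (not the study's code; the sample size, dimension, and seed are arbitrary): each input variable receives exactly one draw per equal-probability stratum, and the correlation-coefficient test then checks the independence of the input columns.

```python
import numpy as np

def latin_hypercube(n, d, rng):
    """LHS on the unit hypercube: one draw per equal-probability stratum,
    with the strata independently permuted for each variable."""
    u = np.empty((n, d))
    for j in range(d):
        perm = rng.permutation(n)            # stratum order for variable j
        u[:, j] = (perm + rng.random(n)) / n  # one point inside each stratum
    return u  # uniform(0,1) marginals; map through inverse CDFs as needed

rng = np.random.default_rng(1)
u = latin_hypercube(200, 2, rng)
# Independence check via the sample correlation coefficient, as in the abstract
r = np.corrcoef(u[:, 0], u[:, 1])[0, 1]
print(r)
```

Mapping each column through the inverse CDF of the corresponding input distribution yields the stratified input samples whose propagated outputs are then tested for goodness of fit.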
Quantification of atherosclerosis with MRI
Cardiovascular disease due to atherosclerosis is a major cause of death in the United States. A major limitation in the current treatment of atherosclerosis is the lack of a quantitative means to non-invasively evaluate the extent of the disease. Recent studies suggest that Magnetic Resonance Imaging (MRI) has the potential for the detection of atherosclerotic plaque. It has been demonstrated that multi-dimensional pattern recognition can be applied to multi-pulse sequence MR images to identify different tissue types. The authors reported the identification of tissues involved in the atherosclerotic disease process, such as normal endothelium, smooth muscle, thrombus, fat or lipid, connective tissue and calcified plaque. The work reported in this abstract presents preliminary results of applying quantitative 3-D reconstruction to the problem of identifying and quantifying atherosclerotic plaque in vitro
AdS{sub 3} x{sub w} (S{sup 3} x S{sup 3} x S{sup 1}) solutions of type IIB string theory
Donos, Aristomenis [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany); Gauntlett, Jerome P. [Imperial College, London (United Kingdom). Blackett Lab.]|[Imperial College, London (United Kingdom). The Institute for Mathematical Sicences; Sparks, James [Oxford Univ. (United Kingdom). Mathematical Institute
2008-10-15
We analyse a recently constructed class of local solutions of type IIB supergravity that consist of a warped product of AdS{sub 3} with a sevendimensional internal space. In one duality frame the only other nonvanishing fields are the NS three-form and the dilaton. We analyse in detail how these local solutions can be extended to globally well-defined solutions of type IIB string theory, with the internal space having topology S{sup 3} x S{sup 3} x S{sup 1} and with properly quantised three-form flux. We show that many of the dual (0,2) SCFTs are exactly marginal deformations of the (0,2) SCFTs whose holographic duals are warped products of AdS{sub 3} with seven-dimensional manifolds of topology S{sup 3} x S{sup 2} x T{sup 2}. (orig.)
Marino Beiras, Marcos
2011-01-01
We give an overview of the relations between matrix models and string theory, focusing on topological string theory and the Dijkgraaf--Vafa correspondence. We discuss applications of this correspondence and its generalizations to supersymmetric gauge theory, enumerative geometry and mirror symmetry. We also present a brief overview of matrix quantum mechanical models in superstring theory.
Jara, Pascual; Torrecillas, Blas
1988-01-01
The papers in this proceedings volume are selected research papers in different areas of ring theory, including graded rings, differential operator rings, K-theory of noetherian rings, torsion theory, regular rings, cohomology of algebras, local cohomology of noncommutative rings. The book will be important for mathematicians active in research in ring theory.
Tan, Uner
2014-01-01
Two consanguineous families with Uner Tan Syndrome (UTS) were analyzed in relation to self-organizing processes in complex systems, and the evolutionary emergence of human bipedalism. The cases had the key symptoms of previously reported cases of UTS, such as quadrupedalism, mental retardation, and dysarthric or no speech, but the new cases also exhibited infantile hypotonia and are designated UTS Type-II. There were 10 siblings in Branch I and 12 siblings in Branch II. Of these, there were s...
String Theory: Lessons for Low Energy Physics
Dine, Michael
1992-01-01
This talk considers possible lessons of string theory for low energy physics. These are of two types. First, assuming that string theory is the correct underlying theory of all interactions, we ask whether there are any generic predictions the theory makes, and we compare the predictions of string theory with those of conventional grand unified theories. Second, string theory offers some possible answers to a number of troubling naturalness questions. These include problems of discrete and co...
The paper traces the development of string theory, and was presented at Professor Sir Rudolf Peierls' 80th Birthday Symposium. String theory is discussed with respect to the interaction of strings, the inclusion of both gauge theory and gravitation, inconsistencies in the theory, and the role of space-time. The physical principles underlying string theory are also outlined. (U.K.)
Quantification of carotid vessel atherosclerosis
Chiu, Bernard; Egger, Micaela; Spence, J. D.; Parraga, Grace; Fenster, Aaron
2006-03-01
Atherosclerosis is characterized by the development of plaques in the arterial wall, which ultimately leads to heart attacks and stroke. 3D ultrasound (US) has been used to screen patients' carotid arteries. Plaque measurements obtained from these images may aid in the management and monitoring of patients, and in evaluating the effect of new treatment options. Different types of measures for ultrasound phenotypes of atherosclerosis have been proposed. Here, we report on the development and application of a method used to analyze changes in carotid plaque morphology from 3D US images obtained at two different time points. We evaluated our technique using manual segmentations of the wall and lumen of the carotid artery from images acquired in two US scanning sessions. To incorporate the effect of intraobserver variability in our evaluation, manual segmentation was performed five times each for the arterial wall and lumen. From this set of five segmentations, the mean wall and lumen surfaces were reconstructed, with the standard deviation at each point mapped onto the surfaces. A correspondence map between the mean wall and lumen surfaces was then established, and the thickness of the atherosclerotic plaque at each point in the vessel was estimated as the distance between the points of each correspondence pair. The two-sample Student's t-test was used to judge whether the difference between the thickness values at each pair of corresponding points of the arteries in the two 3D US images was statistically significant.
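The per-point statistical test described above can be sketched with the pooled two-sample Student's t statistic; the thickness readings below (mm, five repeated segmentations per scanning session, as in the abstract) are invented for illustration:

```python
import math
from statistics import mean, stdev

def student_t(x, y):
    """Pooled two-sample Student's t statistic (equal-variance form)."""
    nx, ny = len(x), len(y)
    sp2 = ((nx - 1) * stdev(x) ** 2 + (ny - 1) * stdev(y) ** 2) / (nx + ny - 2)
    return (mean(x) - mean(y)) / math.sqrt(sp2 * (1 / nx + 1 / ny))

# Plaque thickness (mm) at one corresponding point pair, five repeated
# manual segmentations per scanning session (values invented)
baseline = [2.10, 2.05, 2.15, 2.08, 2.12]
followup = [2.55, 2.60, 2.48, 2.57, 2.52]

t = student_t(baseline, followup)
# |t| above the df = 8, alpha = 0.05 critical value (about 2.306) flags
# a statistically significant thickness change at this point
print(t)
```

Repeating this at every corresponding point pair gives the per-point significance map of plaque change.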
Masuda, J.; Mori, S.; Harada, T. [Science University of Tokyo, Chiba (Japan). Faculty of Science and Technology
1997-01-30
A discussion using a game-theory type optimization was given on setting power charges in areas where three parties, a utility company, a cogenerator and general users, coexist. In the discussion model, a cogenerator is supposed to possess such ancillary facilities to take care of thermal demand as a gas engine, a cogeneration system and a boiler, and to be connected to the systems of utility companies. Power generated by utility companies is classified into three grades, wherein privately generated power is regarded as equivalent to low quality power. General users determine how much power they will buy from the utility companies and the cogenerators according to its quality and price. The above three parties form a game-theory type market with respect to power price and amount of trade. The balanced price and the trade amount form an intersection of a critical utility function for consumers and a supply function for producers. An analysis using numerical experiments derived a result which nearly satisfies a hypothesis, although a few problems are left unresolved. A similar result was also obtained in an analysis on combinations other than shops and hospitals. 5 refs., 18 figs., 6 tabs.
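The balance described above, the intersection of a consumers' utility (demand) function and a producers' supply function, can be sketched with invented linear curves and a bisection on excess demand; none of the numbers come from the paper:

```python
# Invented linear demand and supply curves; the equilibrium is their intersection
def demand(p):
    """Quantity consumers buy at price p (decreasing in p)."""
    return max(100.0 - 2.0 * p, 0.0)

def supply(p):
    """Quantity producers offer at price p (increasing in p)."""
    return 5.0 * p

def equilibrium_price(lo=0.0, hi=100.0, tol=1e-9):
    """Bisection on excess demand, demand(p) - supply(p), which is decreasing."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if demand(mid) - supply(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

p_star = equilibrium_price()
print(p_star, demand(p_star))
```

At the balanced price the traded quantity read off either curve is the same, which is the fixed point the game-theoretic market in the abstract settles on.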
Automatic quantification in lung scintigraphy: functional atlas
In spite of the development of new techniques, ventilation-perfusion lung scintigraphy retains an important place in the diagnosis of pulmonary embolism (PE). To improve the reliability and reproducibility of the diagnosis, this study proposes an automatic quantification of the distribution of the radioactive tracers by pulmonary segment. Measurements are made following a procedure of non-rigid matching of morphological 2-D charts of the lungs onto the scintigraphic images. The adaptation of these charts to the patients' morphology is carried out by exploiting iso-contour information of the images and using Fourier descriptors to determine the parameters of the transformation. The study was performed on a population of 30 patients with zero probability of pulmonary embolism. After a study of the robustness of the quantification, 2-D segmental functional reference charts (according to the conditions of acquisition) were proposed. In the perfusion case, with four views, the following lobar distribution is measured, in relative value: Right Inferior Lobe = 23.39%, Middle Lobe = 10.41%, Right Superior Lobe = 20.37%, Left Inferior Lobe = 20.6% and Left Superior Lobe = 25.6%, with culmen = 18.8% and lingula = 6.8%; values comparable with those in the literature. The process of quantification is adaptable to ventilation lung scans. The segmental quantifications of a patient, carried out under the same conditions of acquisition as the functional reference charts, can be compared with the reference data and provide indicators for diagnosis, but also for patient follow-up and preoperative evaluation of lung cancers. (authors)
Chapitre 1. Quantification of the Scientific Diaspora
Johnson, Jean
2013-01-01
Introduction The quantification of expatriate scientists and engineers (S&Es) provides information in three main sections: 1) the international comparison of foreign S&E graduate student flows and stay rates; 2) the presence of foreign-born scientists and engineers in the U.S. labor force, and 3) the reverse flow of S&E knowledge. The international flow of foreign students compares their graduate enrollment in the United States, the United Kingdom and France, and includes some information on ...