Uncertainty quantification theory, implementation, and applications
Smith, Ralph C
2014-01-01
The field of uncertainty quantification is evolving rapidly because of increasing emphasis on models that require quantified uncertainties for large-scale applications, novel algorithm development, and new computational architectures that facilitate implementation of these algorithms. Uncertainty Quantification: Theory, Implementation, and Applications provides readers with the basic concepts, theory, and algorithms necessary to quantify input and response uncertainties for simulation models arising in a broad range of disciplines. The book begins with a detailed discussion of applications where uncertainty quantification is critical for both scientific understanding and policy. It then covers concepts from probability and statistics, parameter selection techniques, frequentist and Bayesian model calibration, propagation of uncertainties, quantification of model discrepancy, surrogate model construction, and local and global sensitivity analysis. The author maintains a complementary web page where readers ca...
Recurrence quantification analysis theory and best practices
Webber, Charles L., Jr.; Marwan, Norbert
2015-01-01
The analysis of recurrences in dynamical systems by using recurrence plots and their quantification is still an emerging field. Over the past decades, recurrence plots have proven to be valuable data visualization and analysis tools in the theoretical study of complex, time-varying dynamical systems as well as in various applications in biology, neuroscience, kinesiology, psychology, physiology, engineering, physics, geosciences, linguistics, finance, economics, and other disciplines. This multi-authored book intends to comprehensively introduce and showcase recent advances as well as established best practices concerning both theoretical and practical aspects of recurrence-plot-based analysis. Edited and authored by leading researchers in the field, the various chapters address an interdisciplinary readership, ranging from theoretical physicists to application-oriented scientists in all data-providing disciplines.
Andrews' Type Theory with Undefinedness
Farmer, William M.
2014-01-01
${\cal Q}_0$ is an elegant version of Church's type theory formulated and extensively studied by Peter B. Andrews. Like other traditional logics, ${\cal Q}_0$ does not admit undefined terms. The "traditional approach to undefinedness" in mathematical practice is to treat undefined terms as legitimate, nondenoting terms that can be components of meaningful statements. ${\cal Q}^{\rm u}_{0}$ is a modification of Andrews' type theory ${\cal Q}_0$ that directly formalizes the tr...
Definitional Extension in Type Theory
Xue, Tao
2014-01-01
When we extend a type system, the relation between the original system and its extension is an important issue we want to understand. Conservative extension is the relation traditionally studied, but in some cases, such as coercive subtyping, it is not strong enough to capture all the properties; a more powerful relation between the systems is required. We bring the idea of definitional extension from mathematical logic into type theory. In this paper, we study the notion of definitional extension for t...
Homotopy limits in type theory
Avigad, Jeremy; Kapulkin, Chris; Lumsdaine, Peter LeFanu
2013-01-01
Working in homotopy type theory, we provide a systematic study of homotopy limits of diagrams over graphs, formalized in the Coq proof assistant. We discuss some of the challenges posed by this approach to formalizing homotopy-theoretic material. We also compare our constructions with the more classical approach to homotopy limits via fibration categories.
A "Toy" Model for Operational Risk Quantification using Credibility Theory
Bühlmann, Hans; Shevchenko, Pavel V.; Wüthrich, Mario V.
2009-01-01
To meet the Basel II regulatory requirements for the Advanced Measurement Approaches in operational risk, the bank's internal model should make use of internal data, relevant external data, scenario analysis, and factors reflecting the business environment and internal control systems. One of the unresolved challenges in operational risk is how to combine these data sources appropriately. In this paper we focus on quantification of the low frequency high impact losses excee...
Causality in Time Series: Its Detection and Quantification by Means of Information Theory.
Czech Academy of Sciences Publication Activity Database
Hlaváčková-Schindler, Kateřina
New York : Springer, 2008 - (Emmert-Streib, F.; Dehmer, M.), s. 183-207 ISBN 978-0-387-84815-0. - (Computer Science) R&D Projects: GA MŠk 2C06001 Institutional research plan: CEZ:AV0Z10750506 Keywords: causality * time series * information theory Subject RIV: BD - Theory of Information http://library.utia.cas.cz/separaty/2009/AS/schindler-causality in time series its detection and quantification by means of information theory.pdf
Idempotents in intensional type theory
Shulman, Michael
2015-01-01
We study idempotents in intensional Martin-Löf type theory, and in particular the question of when and whether they split. We show that in the presence of propositional truncation and Voevodsky's univalence axiom, there exist idempotents that do not split; thus in plain MLTT not all idempotents can be proven to split. On the other hand, assuming only function extensionality, an idempotent can be split if and only if its witness of idempotency satisfies one extra coherence ...
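Set-theoretically, every idempotent splits through its image; the abstract's point is that this can fail in intensional type theory without an extra coherence condition. A small Python sketch of the set-level notion only (the function and domain below are invented for illustration):

```python
def is_idempotent(e, domain):
    """Check e(e(x)) == e(x) for every x in a finite domain."""
    return all(e(e(x)) == e(x) for x in domain)

def split(e, domain):
    """Split an idempotent e : X -> X through its image: e = s . r with r . s = id."""
    image = sorted({e(x) for x in domain})
    r = lambda x: e(x)   # retraction X -> image
    s = lambda y: y      # section image -> X (plain inclusion)
    return r, s, image

# Hypothetical example: absolute value is idempotent on a finite set of integers
e = lambda x: abs(x)
dom = range(-3, 4)
r, s, img = split(e, dom)
```

In sets this splitting is automatic; the paper shows the analogous construction in plain MLTT needs the witness of idempotency to satisfy additional coherence.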
Quantification of bottleneck effects for different types of facilities
Zhang, Jun
2015-01-01
Restrictions of pedestrian flow can be triggered by directional changes, merging of streams, and other changes or disturbances causing effects similar to bottlenecks formed by geometrical narrowings. In this contribution we analyze empirically how the type of change or disturbance influences the capacity of a facility. For this purpose four types of facilities are investigated: a short narrowing, a long narrowing, a corner, and a T-junction. It is found that the reduction of pedestrian flow depends on the shape and the length of the narrowing. The maximum observed flow at the corner (about 1.45 (m·s)^-1) is the lowest of all facilities studied, whereas that of the short narrowing is the highest. This finding indicates that the use of a single fundamental diagram to describe pedestrian flow at different kinds of geometrical narrowings is limited.
Extensional concepts in intensional type theory
Hofmann, Martin
1995-01-01
Theories of dependent types have been proposed as a foundation of constructive mathematics and as a framework in which to construct certified programs. In these applications an important role is played by identity types which internalise equality and therefore are essential for accommodating proofs and programs in the same formal system. This thesis attempts to reconcile the two different ways that type theories deal with identity types. In extensional type theory the propositional equali...
Use of Multiple Competitors for Quantification of Human Immunodeficiency Virus Type 1 RNA in Plasma
Vener, Tanya; Nygren, Malin; Andersson, AnnaLena; Uhlén, Mathias; Albert, Jan; Lundeberg, Joakim
1998-01-01
Quantification of human immunodeficiency virus type 1 (HIV-1) RNA in plasma has rapidly become an important tool in basic HIV research and in the clinical care of infected individuals. Here, a quantitative HIV assay based on competitive reverse transcription-PCR with multiple competitors was developed. Four RNA competitors containing identical PCR primer binding sequences as the viral HIV-1 RNA target were constructed. One of the PCR primers was fluorescently labeled, which facilitated discri...
Quantification of uncertainty of performance measures using graph theory
Lopes, Isabel da Silva; Sousa, Sérgio; Nunes, Eusébio P.
2013-01-01
In this paper, graph theory is used to quantify the uncertainty generated in performance measures during the process of performance measurement. A graph is developed considering all the sources of uncertainty present in this process and their relationships. The permanent function of the matrix associated with the graph is used as the basis for determining an uncertainty index.
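The uncertainty index described above rests on the permanent of the matrix associated with the graph. A brute-force sketch of that computation (the 3×3 matrix below is invented; diagonal entries weight individual uncertainty sources, off-diagonal entries their interactions):

```python
from itertools import permutations

def permanent(M):
    """Permanent of a square matrix: like the determinant but with no sign
    alternation. Brute force over permutations, O(n!) -- fine for small graphs."""
    n = len(M)
    total = 0
    for p in permutations(range(n)):
        prod = 1
        for i, j in enumerate(p):
            prod *= M[i][j]
        total += prod
    return total

# Hypothetical matrix for a 3-source uncertainty digraph
M = [[0.9, 0.2, 0.0],
     [0.1, 0.8, 0.3],
     [0.0, 0.4, 0.7]]
index = permanent(M)  # scalar uncertainty index for the whole graph
```

For larger graphs, Ryser's formula reduces the cost to O(2^n · n), but the idea is the same: the permanent aggregates every way the sources and their interactions can combine.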
Banks, H. T.; Holm, Kathleen; Robbins, Danielle
2010-01-01
We computationally investigate two approaches for uncertainty quantification in inverse problems for nonlinear parameter dependent dynamical systems. We compare the bootstrapping and asymptotic theory approaches for problems involving data with several noise forms and levels. We consider both constant variance absolute error data and relative error which produces non-constant variance data in our parameter estimation formulations. We compare and contrast parameter estimates, standard errors, ...
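The bootstrapping-versus-asymptotic-theory comparison described above can be sketched for the simplest case, a linear model's slope under constant-variance absolute error. All data below are synthetic and the setup is illustrative, not the authors' actual formulation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: y = 2*x + noise (constant-variance absolute error)
x = np.linspace(0.0, 1.0, 50)
y = 2.0 * x + rng.normal(0.0, 0.1, size=x.size)

coef = np.polyfit(x, y, 1)             # [slope, intercept] by least squares
resid = y - np.polyval(coef, x)
n = x.size

# Asymptotic (OLS) standard error of the slope
s2 = resid @ resid / (n - 2)
se_asym = np.sqrt(s2 / np.sum((x - x.mean()) ** 2))

# Residual-bootstrap standard error of the slope
boot_slopes = []
for _ in range(500):
    y_star = np.polyval(coef, x) + rng.choice(resid, size=n, replace=True)
    boot_slopes.append(np.polyfit(x, y_star, 1)[0])
se_boot = np.std(boot_slopes, ddof=1)
```

For well-behaved constant-variance data the two standard errors should roughly agree; the paper's interest is precisely in how they diverge for nonlinear dynamics and non-constant-variance (relative) error.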
Quantum Gauge Field Theory in Cohesive Homotopy Type Theory
Schreiber, Urs; Shulman, Michael
2014-01-01
We implement in the formal language of homotopy type theory a new set of axioms called cohesion. Then we indicate how the resulting cohesive homotopy type theory naturally serves as a formal foundation for central concepts in quantum gauge field theory. This is a brief survey of work by the authors developed in detail elsewhere.
Directory of Open Access Journals (Sweden)
Tuncay Kagan
2007-01-01
Background: Gene expression microarray and other multiplex data hold promise for addressing the challenges of cellular complexity, refined diagnoses and the discovery of well-targeted treatments. A new approach to the construction and quantification of transcriptional regulatory networks (TRNs) is presented that integrates gene expression microarray data and cell modeling through information theory. Given a partial TRN and time series data, a probability density is constructed that is a functional of the time course of transcription factor (TF) thermodynamic activities at the site of gene control, and is a function of mRNA degradation and transcription rate coefficients, and equilibrium constants for TF/gene binding. Results: Our approach yields physicochemical information that complements the results of network structure delineation methods, and thereby can serve as an element of a comprehensive TRN discovery/quantification system. The most probable TF time courses and values of the aforementioned parameters are obtained by maximizing the probability constructed through entropy maximization. Observed time delays between mRNA expression and activity are accounted for implicitly, since the time course of the activity of a TF is determined by probability functional maximization and is not assumed to be proportional to the expression level of the mRNA type that translates into the TF. This allows one to investigate post-translational and TF activation mechanisms of gene regulation. Accuracy and robustness of the method are evaluated. A kinetic formulation is used to facilitate the analysis of phenomena with a strongly dynamical character, while a physically-motivated regularization of the TF time course is found to overcome difficulties due to omnipresent noise and data sparsity that plague other methods of gene expression data analysis. An application to Escherichia coli is presented.
Conclusion: Multiplex time series data can be used for the construction of the network of cellular processes and the calibration of the associated physicochemical parameters. We have demonstrated these concepts in the context of gene regulation understood through the analysis of gene expression microarray time series data. Casting the approach in a probabilistic framework has allowed us to address the uncertainties in gene expression microarray data. Our approach was found to be robust to error in the gene expression microarray data and mistakes in a proposed TRN.
Combinatorial realizability models of type theory
Hofstra, Pieter; Warren, Michael A.
2012-01-01
We introduce a new model construction for Martin-Löf intensional type theory, which is sound and complete for the 1-truncated version of the theory. The model formally combines the syntactic model with a notion of realizability; it also encompasses the well-known Hofmann–Streicher groupoid semantics. As our main application, we use the model to analyse the syntactic groupoid associated to the type theory generated by a graph G, showing that it has the same homotopy type...
Some Properties of Type I' String Theory
Schwarz, John H.
1999-01-01
The T-dual formulation of Type I superstring theory, sometimes called Type I' theory, has a number of interesting features. Here we review some of them including the role of D0-branes and D8-branes in controlling possible gauge symmetry enhancement.
Type II string theory and modularity
Kriz, Igor; Sati, Hisham
2005-01-01
This paper, in a sense, completes a series of three papers. In the previous two, hep-th/0404013 and hep-th/0410293, we explored the possibility of refining the K-theory partition function in type II string theories using elliptic cohomology. In the present paper, we make that more concrete by defining a fully quantized free field theory based on elliptic cohomology of 10-dimensional spacetime. Moreover, we describe a concrete scenario for how this is related to compactification...
Type II string theory and modularity
International Nuclear Information System (INIS)
This paper, in a sense, completes a series of three papers. In the previous two, we explored the possibility of refining the K-theory partition function in type II string theories using elliptic cohomology. In the present paper, we make that more concrete by defining a fully quantized free field theory based on elliptic cohomology of 10-dimensional spacetime. Moreover, we describe a concrete scenario for how this is related to compactification of F-theory on an elliptic curve leading to the IIA and IIB theories. We propose an interpretation of the elliptic curve in the context of elliptic cohomology. We discuss the possibility of orbifolding the elliptic curves and derive certain properties of F-theory. We propose a link of this to type IIB modularity, the structure of the topological Lagrangian of M-theory, and Witten's index of loop space Dirac operators. The discussion suggests an S^1-lift of type IIB and an F-theoretic model for type I obtained by orbifolding that for type IIB
Type II string theory and modularity
Energy Technology Data Exchange (ETDEWEB)
Kriz, Igor [Department of Mathematics, University of Michigan, Ann Arbor, MI 48109 (United States); Sati, Hisham [Department of Physics, University of Adelaide, Adelaide, SA 5005 (Australia); Department of Pure Mathematics, University of Adelaide, Adelaide, SA 5005 (Australia)
2005-08-01
This paper, in a sense, completes a series of three papers. In the previous two, we explored the possibility of refining the K-theory partition function in type II string theories using elliptic cohomology. In the present paper, we make that more concrete by defining a fully quantized free field theory based on elliptic cohomology of 10-dimensional spacetime. Moreover, we describe a concrete scenario for how this is related to compactification of F-theory on an elliptic curve leading to the IIA and IIB theories. We propose an interpretation of the elliptic curve in the context of elliptic cohomology. We discuss the possibility of orbifolding the elliptic curves and derive certain properties of F-theory. We propose a link of this to type IIB modularity, the structure of the topological Lagrangian of M-theory, and Witten's index of loop space Dirac operators. The discussion suggests an S^1-lift of type IIB and an F-theoretic model for type I obtained by orbifolding that for type IIB.
Doyle, Laurance R.; McCowan, Brenda; Hanser, Sean F.; Chyba, Christopher; Bucci, Taylor; Blue, J. E.
2008-06-01
We assess the effectiveness of applying information theory to the characterization and quantification of the effects of anthropogenic vessel noise on humpback whale (Megaptera novaeangliae) vocal behavior in and around Glacier Bay, Alaska. Vessel noise has the potential to interfere with the complex vocal behavior of these humpback whales, which could have direct consequences on their feeding behavior and thus ultimately on their health and reproduction. Humpback whale feeding calls recorded during conditions of high vessel-generated noise and lower levels of background noise are compared for differences in acoustic structure, use, and organization using information theoretic measures. We apply information theory in a self-referential manner (i.e., orders of entropy) to quantify the changes in signaling behavior. We then compare this with the reduction in channel capacity due to noise in Glacier Bay itself, treating it as a (Gaussian) noisy channel. We find that high vessel noise is associated with an increase in the rate and repetitiveness of sequential use of feeding call types in our averaged sample of humpback whale vocalizations, indicating that vessel noise may be modifying the patterns of use of feeding calls by the endangered humpback whales in Southeast Alaska. The information theoretic approach suggested herein can provide a reliable quantitative measure of such relationships and may also be adapted for wider application to many species where environmental noise is thought to be a problem.
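The two information-theoretic quantities used in this kind of analysis can be illustrated on a toy sequence: Shannon block entropy of increasing order (the "orders of entropy") for a call sequence, and the capacity of a Gaussian noisy channel. The call symbols and signal-to-noise ratio below are invented:

```python
import math
from collections import Counter

def block_entropy(seq, n=1):
    """Shannon block entropy of order n, in bits per symbol.

    n=1 uses single-symbol frequencies; larger n uses length-n blocks,
    capturing sequential structure (repetitiveness lowers the value)."""
    blocks = [tuple(seq[i:i + n]) for i in range(len(seq) - n + 1)]
    counts = Counter(blocks)
    total = sum(counts.values())
    h = -sum((c / total) * math.log2(c / total) for c in counts.values())
    return h / n

def gaussian_capacity(snr):
    """Shannon capacity of a Gaussian channel, bits per channel use."""
    return 0.5 * math.log2(1.0 + snr)

# Hypothetical sequence of feeding-call types
calls = list("ababababab")
h1 = block_entropy(calls, 1)   # first-order entropy
h2 = block_entropy(calls, 2)   # second-order entropy (drops for repetitive use)
```

Comparing entropy orders across noise conditions, and against the channel capacity implied by the measured noise, is the general shape of the comparison the abstract describes.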
Directory of Open Access Journals (Sweden)
J. Ellen Blue
2008-05-01
We assess the effectiveness of applying information theory to the characterization and quantification of the effects of anthropogenic vessel noise on humpback whale (Megaptera novaeangliae) vocal behavior in and around Glacier Bay, Alaska. Vessel noise has the potential to interfere with the complex vocal behavior of these humpback whales, which could have direct consequences on their feeding behavior and thus ultimately on their health and reproduction. Humpback whale feeding calls recorded during conditions of high vessel-generated noise and lower levels of background noise are compared for differences in acoustic structure, use, and organization using information theoretic measures. We apply information theory in a self-referential manner (i.e., orders of entropy) to quantify the changes in signaling behavior. We then compare this with the reduction in channel capacity due to noise in Glacier Bay itself, treating it as a (Gaussian) noisy channel. We find that high vessel noise is associated with an increase in the rate and repetitiveness of sequential use of feeding call types in our averaged sample of humpback whale vocalizations, indicating that vessel noise may be modifying the patterns of use of feeding calls by the endangered humpback whales in Southeast Alaska. The information theoretic approach suggested herein can provide a reliable quantitative measure of such relationships and may also be adapted for wider application to many species where environmental noise is thought to be a problem.
McDonnell, J D; Higdon, D; Sarich, J; Wild, S M; Nazarewicz, W
2015-01-01
Statistical tools of uncertainty quantification can be used to assess the information content of measured observables with respect to present-day theoretical models; to estimate model errors and thereby improve predictive capability; to extrapolate beyond the regions reached by experiment; and to provide meaningful input to applications and planned measurements. To showcase new opportunities offered by such tools, we make a rigorous analysis of theoretical statistical uncertainties in nuclear density functional theory using Bayesian inference methods. By considering the recent mass measurements from the Canadian Penning Trap at Argonne National Laboratory, we demonstrate how the Bayesian analysis and a direct least-squares optimization, combined with high-performance computing, can be used to assess the information content of the new data with respect to a model based on the Skyrme energy density functional approach. Employing the posterior probability distribution computed with a Gaussian process emulator, w...
Damond, F.; Benard, A.; Ruelle, J.; Alabi, A.; Kupfer, B; Gomes, P (Pedro); Rodes, B; Albert, J.; Böni, J; Garson, J.; Ferns, B; Matheron, S; Chene, G; Brun-Vezinet, F.
2008-01-01
Human immunodeficiency virus type 2 (HIV-2) RNA quantification assays used in nine laboratories of the ACHI(E)V(2E) (A Collaboration on HIV-2 Infection) study group were evaluated. In a blinded experimental design, laboratories quantified three series of aliquots of an HIV-2 subtype A strain, each at a different theoretical viral load. Quantification varied between laboratories, and international standardization of quantification assays is strongly needed.
Voevodsky's Univalence Axiom in homotopy type theory
Awodey, Steve; Pelayo, Álvaro; Warren, Michael A.
2013-01-01
In this short note we give a glimpse of homotopy type theory, a new field of mathematics at the intersection of algebraic topology and mathematical logic, and we explain Vladimir Voevodsky's univalent interpretation of it. This interpretation has given rise to the univalent foundations program, which is the topic of the current special year at the Institute for Advanced Study.
Hoare type theory, polymorphism and separation
DEFF Research Database (Denmark)
Nanevski, Alexandar; Morrisett, J. Gregory
2008-01-01
We consider the problem of reconciling a dependently typed functional language with imperative features such as mutable higher-order state, pointer aliasing, and nontermination. We propose Hoare type theory (HTT), which incorporates Hoare-style specifications into types, making it possible to statically track and enforce correct use of side effects. The main feature of HTT is the Hoare type {P}x:A{Q} specifying computations with precondition P and postcondition Q that return a result of type A. Hoare types can be nested, combined with other types, and abstracted, leading to a smooth integration with higher-order functions and type polymorphism. We further show that in the presence of type polymorphism, it becomes possible to interpret the Hoare types in the “small footprint” manner, as advocated by separation logic, whereby specifications tightly describe the state required by the computation. We establish that HTT is sound and compositional, in the sense that separate verifications of individual program components suffice to ensure the correctness of the composite program.
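HTT enforces the Hoare type {P}x:A{Q} statically through the type system. A runtime-checked analogue in Python conveys the shape of such a specification, though none of the static guarantees (the decorator, state representation, and example are all hypothetical, not HTT's actual formalism):

```python
from functools import wraps

def hoare(pre, post):
    """Attach a precondition pre(state) and postcondition post(old, new, result)
    to a state-transforming computation -- a dynamic echo of {P}x:A{Q}."""
    def deco(f):
        @wraps(f)
        def run(state):
            assert pre(state), "precondition violated"
            old = dict(state)          # snapshot for the two-state postcondition
            result = f(state)
            assert post(old, state, result), "postcondition violated"
            return result
        return run
    return deco

# Hypothetical example: increment a counter stored at key "x"
@hoare(pre=lambda s: "x" in s,
       post=lambda old, new, r: new["x"] == old["x"] + 1 and r == new["x"])
def incr(s):
    s["x"] += 1
    return s["x"]
```

The "small footprint" reading mentioned in the abstract would correspond to `pre`/`post` mentioning only the state the computation actually touches; here that discipline is by convention, whereas HTT makes it part of the type.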
McDonnell, J. D.; Schunck, N.; Higdon, D.; Sarich, J.; Wild, S. M.; Nazarewicz, W.
2015-03-01
Statistical tools of uncertainty quantification can be used to assess the information content of measured observables with respect to present-day theoretical models, to estimate model errors and thereby improve predictive capability, to extrapolate beyond the regions reached by experiment, and to provide meaningful input to applications and planned measurements. To showcase new opportunities offered by such tools, we make a rigorous analysis of theoretical statistical uncertainties in nuclear density functional theory using Bayesian inference methods. By considering the recent mass measurements from the Canadian Penning Trap at Argonne National Laboratory, we demonstrate how the Bayesian analysis and a direct least-squares optimization, combined with high-performance computing, can be used to assess the information content of the new data with respect to a model based on the Skyrme energy density functional approach. Employing the posterior probability distribution computed with a Gaussian process emulator, we apply the Bayesian framework to propagate theoretical statistical uncertainties in predictions of nuclear masses, two-neutron dripline, and fission barriers. Overall, we find that the new mass measurements do not impose a constraint that is strong enough to lead to significant changes in the model parameters. The example discussed in this study sets the stage for quantifying and maximizing the impact of new measurements with respect to current modeling and guiding future experimental efforts, thus enhancing the experiment-theory cycle in the scientific method.
International Nuclear Information System (INIS)
The purpose of this paper is to quantify uncertainties of fuel pin cell or fuel assembly (FA) homogenized few-group diffusion theory constants generated from the B1 theory-augmented Monte Carlo (MC) method. A mathematical formulation of the first kind is presented to quantify uncertainties of the few-group constants in terms of the two major sources of the MC method: statistical uncertainties, and nuclear cross section and nuclide number density input data uncertainties. The formulation is incorporated into the Seoul National Univ. MC code McCARD. It is then used to compute the uncertainties of the burnup-dependent homogenized two-group constants of a low-enriched UO2 fuel pin cell and a PWR FA, on the condition that nuclear cross section input data of U-235 and U-238 from the JENDL 3.3 library and nuclide number densities from the solution to fuel depletion equations have uncertainties. The contribution of the MC input data uncertainties to the uncertainties of the two-group constants of the two fuel systems is separated from that of the statistical uncertainties. The utility of uncertainty quantification is then discussed from the standpoints of safety analysis of existing power reactors, development of new fuel or reactor system designs, and improvement of covariance files of the evaluated nuclear data libraries. (authors)
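The separation of the two uncertainty sources described above amounts, schematically, to adding an MC statistical variance to a "sandwich-rule" propagation of input-data covariance through sensitivity coefficients. A numerical sketch under that assumption, with entirely made-up numbers (not McCARD's formulation or values):

```python
import numpy as np

# Hypothetical relative sensitivities of one two-group constant to two
# input data (say, U-235 and U-238 cross sections) -- invented values.
S = np.array([0.8, -0.3])

# Hypothetical relative covariance matrix of those input data.
V = np.array([[4.0e-4, 1.0e-5],
              [1.0e-5, 9.0e-4]])

var_input = float(S @ V @ S)   # sandwich rule: S V S^T
var_stat = 2.5e-5              # MC statistical variance (invented)
var_total = var_stat + var_input
```

Reporting `var_stat` and `var_input` separately is what lets one tell how much of the group-constant uncertainty could be reduced by running more MC histories versus improving the nuclear data covariances.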
Multi-level Contextual Type Theory
Mathieu Boespflug; Brigitte Pientka
2011-01-01
Contextual type theory distinguishes between bound variables and meta-variables in order to write potentially incomplete terms in the presence of binders. It has found good use as a framework for concisely explaining higher-order unification, for characterizing holes in proofs, and for developing a foundation for programming with higher-order abstract syntax, as embodied by the programming and reasoning environment Beluga. However, to reason about these applications, we need to introduce...
J. Ellen Blue; Taylor Bucci; Christopher Chyba; Sean F. Hanser; Brenda McCowan; Laurance R. Doyle
2008-01-01
We assess the effectiveness of applying information theory to the characterization and quantification of the effects of anthropogenic vessel noise on humpback whale (Megaptera novaeangliae) vocal behavior in and around Glacier Bay, Alaska. Vessel noise has the potential to interfere with the complex vocal behavior of these humpback whales, which could have direct consequences on their feeding behavior and thus ultimately on their health and reproduction. Humpback whale feeding calls recorded d...
Quantification of Spatial Parameters in 3D Cellular Constructs Using Graph Theory
Directory of Open Access Journals (Sweden)
G. E. Plopper
2009-01-01
Multispectral three-dimensional (3D) imaging provides spatial information for biological structures that cannot be measured by traditional methods. This work presents a method of tracking 3D biological structures to quantify changes over time using graph theory. Cell-graphs were generated based on the pairwise distances, in 3D Euclidean space, between nuclei during collagen I gel compaction. From these graphs, quantitative features are extracted that measure both the global topography and the frequently occurring local structures of the "tissue constructs." The feature trends can be controlled by manipulating compaction through cell density and are significant when compared to random graphs. This work presents a novel methodology to track a simple 3D biological event and quantitatively analyze the underlying structural change. Further application of this method will allow for the study of complex biological problems that require the quantification of temporal-spatial information in 3D and establish a new paradigm in understanding structure-function relationships.
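The cell-graph construction described above (edges between nuclei whose pairwise 3D Euclidean distance falls below a threshold) can be sketched as follows; the coordinates and threshold are invented, and real cell-graphs would go on to extract many more features than the average degree shown here:

```python
import math
from itertools import combinations

def cell_graph(points, threshold):
    """Link every pair of nuclei whose 3D Euclidean distance is below threshold."""
    return [(i, j)
            for (i, p), (j, q) in combinations(enumerate(points), 2)
            if math.dist(p, q) < threshold]

def average_degree(n_nodes, edges):
    """One simple global feature: mean number of neighbors per nucleus."""
    return 2 * len(edges) / n_nodes

# Hypothetical nuclei coordinates (arbitrary units): three close, one far away
pts = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (5, 5, 5)]
e = cell_graph(pts, threshold=1.5)
```

Tracking how such features change between imaging time points is what turns the graphs into a quantification of gel compaction over time.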
Multi-level Contextual Type Theory
Directory of Open Access Journals (Sweden)
Mathieu Boespflug
2011-10-01
Contextual type theory distinguishes between bound variables and meta-variables in order to write potentially incomplete terms in the presence of binders. It has found good use as a framework for concisely explaining higher-order unification, for characterizing holes in proofs, and for developing a foundation for programming with higher-order abstract syntax, as embodied by the programming and reasoning environment Beluga. However, to reason about these applications, we need to introduce meta^2-variables to characterize the dependency on meta-variables and bound variables. In other words, we must go beyond a two-level system granting only bound variables and meta-variables. In this paper we generalize contextual type theory to n levels for arbitrary n, so as to obtain a formal system offering bound variables, meta-variables and so on all the way to meta^n-variables. We obtain a uniform account by collapsing all these different kinds of variables into a single notion of variable indexed by some level k. We give a decidable bi-directional type system which characterizes beta-eta-normal forms together with a generalized substitution operation.
Wendt, K A; Ekström, A
2014-01-01
We extract the statistical uncertainties for the pion-nucleon ($\pi N$) low energy constants (LECs) up to fourth order $\mathcal{O}(Q^4)$ in the chiral expansion of the nuclear effective Lagrangian. The LECs are optimized with respect to experimental scattering data. For comparison, we also present an uncertainty quantification based solely on $\pi N$ scattering phase shifts. Statistical errors on the LECs are critical in order to estimate the subsequent uncertainties in \textit{ab initio} modeling of light and medium mass nuclei which exploits chiral effective field theory. As an example of this, we present the first complete predictions with uncertainty quantification of peripheral phase shifts of elastic proton-neutron scattering.
Multi-level Contextual Type Theory
Boespflug, Mathieu; Pientka, Brigitte; doi:10.4204/EPTCS.71.3
2011-01-01
Contextual type theory distinguishes between bound variables and meta-variables in order to write potentially incomplete terms in the presence of binders. It has found good use as a framework for concisely explaining higher-order unification, for characterizing holes in proofs, and for developing a foundation for programming with higher-order abstract syntax, as embodied by the programming and reasoning environment Beluga. However, to reason about these applications, we need to introduce meta^2-variables to characterize the dependency on meta-variables and bound variables. In other words, we must go beyond a two-level system granting only bound variables and meta-variables. In this paper we generalize contextual type theory to n levels for arbitrary n, so as to obtain a formal system offering bound variables, meta-variables and so on all the way to meta^n-variables. We obtain a uniform account by collapsing all these different kinds of variables into a single notion of variable indexed by some level k. We give a decidable ...
Tang, Pingjun
The general goal of this research was to study percolation and sieving segregation patterns of particulate materials: quantification, mechanistic theory, model development and validation. A second generation primary segregation shear cell (PSSC-II) was designed and fabricated to model the sieving and percolation segregation mechanisms of particulate materials. Two test materials used in this research were spherical glass beads (denoted as G) and irregularly shaped mash poultry feed (denoted as F), which are considered representative of ideal and real-world materials, respectively. The PSSC-II test results showed that there is a linear relationship between normalized segregation rate (NSR) and absolute size or size ratio for GG and FG combinations; whereas a linear relationship does not hold for FF and GF combinations, although the effect of absolute size and size ratio on NSR was significant (P methods. The MTB model, for the first time, successfully correlated the effect of particle size, density, and shape to the segregation potential of binary mixtures in one quantitative equation. Furthermore, the MTB model has the potential to accommodate additional effects such as surface texture and electrostatic charge to generalize the model. Finally, as a case study, the effect of feed particle segregation on bird performance was examined to assess the effectiveness of the research results. The results showed that, due to bird selection behavior and particle segregation, birds did not sufficiently consume those nutrients that are contained in smaller feed particles (<1,180 μm). The results of feed particle size and nutrient analysis verified the above observations. (Abstract shortened by UMI.)
Determination and quantification of collagen types by LC-MS/MS and CE-MS/MS.
Czech Academy of Sciences Publication Activity Database
Mikšík, Ivan; Pataridis, Statis; Eckhardt, Adam; Lacinová, Kateřina; Sedláková, Pavla
Freiberg : Forschungsinstitut für Leder und Kunststoffbahnen (FILK) gGmbH, 2012, s. 131-141. ISBN 978-3-00-039421-8. [Freiberg Collagen Symposium /5./. Freiberg (DE), 04.09.2012-05.09.2012] R&D Projects: GA ČR(CZ) GA203/08/1428; GA ČR(CZ) GAP206/12/0453 Institutional research plan: CEZ:AV0Z50110509 Institutional support: RVO:67985823 Keywords : collagen * protein quantification Subject RIV: CB - Analytical Chemistry, Separation
Uncertainty Quantification of Composite Laminate Damage with the Generalized Information Theory
Energy Technology Data Exchange (ETDEWEB)
J. Lucero; F. Hemez; T. Ross; K. Kline; J. Hundhausen; T. Tippetts
2006-05-01
This work presents a survey of five theories to assess the uncertainty of projectile impact induced damage on multi-layered carbon-epoxy composite plates. Because the types of uncertainty dealt with in this application are multiple (variability, ambiguity, and conflict) and because the data sets collected are sparse, characterizing the amount of delamination damage with probability theory alone is possible but incomplete. This motivates the exploration of methods contained within a broad Generalized Information Theory (GIT) that rely on less restrictive assumptions than probability theory. Probability, fuzzy sets, possibility, and imprecise probability (probability boxes (p-boxes) and Dempster-Shafer) are used to assess the uncertainty in composite plate damage. Furthermore, this work highlights the usefulness of each theory. The purpose of the study is not to compare directly the different GIT methods but to show that they can be deployed on a practical application and to compare the assumptions upon which these theories are based. The data sets consist of experimental measurements and finite element predictions of the amount of delamination and fiber splitting damage as multilayered composite plates are impacted by a projectile at various velocities. The physical experiments consist of using a gas gun to impact suspended plates with a projectile accelerated to prescribed velocities and then taking ultrasound images of the resulting delamination. The nonlinear, multiple length-scale numerical simulations couple local crack propagation implemented through cohesive zone modeling to global stress-displacement finite element analysis. The assessment of damage uncertainty is performed in three steps: first by considering the test data only; then by considering the simulation data only; and finally by performing an assessment of total uncertainty where test and simulation data sets are combined.
This study leads to practical recommendations for reducing the uncertainty and improving the prediction accuracy of the damage modeling and finite element simulation.
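To make the contrast with probability theory concrete, the Dempster-Shafer belief/plausibility bounds mentioned above can be sketched in a few lines; the frame of discernment, focal sets, and masses below are hypothetical illustrations, not data from the study.

```python
# Toy Dempster-Shafer evidence structure over delamination damage levels.
# Focal sets and masses are hypothetical, chosen only to illustrate the method.
frame = {"low", "medium", "high"}

# Basic probability assignment: mass on subsets of the frame (sums to 1).
bpa = {
    frozenset({"low"}): 0.3,
    frozenset({"low", "medium"}): 0.4,          # ambiguity between low and medium
    frozenset({"low", "medium", "high"}): 0.3,  # total ignorance
}

def belief(event, bpa):
    """Sum of masses of focal sets entirely contained in the event."""
    return sum(m for s, m in bpa.items() if s <= event)

def plausibility(event, bpa):
    """Sum of masses of focal sets intersecting the event."""
    return sum(m for s, m in bpa.items() if s & event)

event = frozenset({"low", "medium"})
# Bel(event) <= P(event) <= Pl(event): the interval brackets the unknown probability.
print(belief(event, bpa), plausibility(event, bpa))
```

The gap between belief and plausibility is exactly the "incompleteness" that a single probability measure cannot express for sparse data.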
Type Arithmetics: Computation based on the theory of types
Kiselyov, O
2001-01-01
The present paper shows how meta-programming can turn a type system into a programming language rich enough to express arbitrary arithmetic computations. We demonstrate a type system that implements Peano arithmetic, slightly generalized to negative numbers. Certain types in this system denote numerals. Arithmetic operations on such type-numerals - addition, subtraction, and even division - are expressed as type reduction rules executed by a compiler. A remarkable trait is that division by zero becomes a type error, and is reported as such by the compiler.
Hui, Kai Hwee; Ambrosi, Adriano; Sofer, Zdeněk; Pumera, Martin; Bonanni, Alessandra
2015-05-01
Graphene doped with heteroatoms can show new or improved properties as compared to the original undoped material. It has been reported that the type of heteroatoms and the doping conditions can have a strong influence on the electronic and electrochemical properties of the resulting material. Here, we wish to compare the electrochemical behavior of two n-type and two p-type doped graphenes, namely boron-doped graphenes and nitrogen-doped graphenes containing different amounts of heteroatoms. We show that the boron-doped graphene containing a higher amount of dopants provides the best electroanalytical performance in terms of calibration sensitivity, selectivity and linearity of response for the detection of gallic acid normally used as the standard probe for the quantification of antioxidant activity of food and beverages. Our findings demonstrate that the type and amount of heteroatoms used for the doping have a profound influence on the electrochemical detection of gallic acid rather than the structural properties of the materials such as amounts of defects, oxygen functionalities and surface area. This finding has a profound influence on the application of doped graphenes in the field of analytical chemistry. Electronic supplementary information (ESI) available. See DOI: 10.1039/c5nr01045d
Gödel Type Metrics in Einstein-Aether Theory
Gurses, Metin
2008-01-01
Aether theory is introduced to implement the violation of Lorentz invariance in general relativity. For this purpose a unit timelike vector field is introduced into the theory in addition to the metric tensor. Aether theory contains four free parameters which must satisfy some inequalities in order for the theory to be consistent with observations. We show that the Gödel type metrics of general relativity are also exact solutions of the Einstein-aether theory. The only fi...
Numerical Domain Wall Type Solutions in phi**4 Theory
Karkowski, J.; Swierczynski, Z.
1996-01-01
The well-known domain wall type solutions are nowadays of great physical interest in classical field theory. These solutions can mostly be found only approximately. Recently the Hilbert-Chapman-Enskog method was successfully applied to obtain this type of solution in phi**4 theory. The goal of the present paper is to verify these perturbative results by numerical computations.
Toward a Theory of Psychological Type Congruence for Advertisers.
McBride, Michael H.; And Others
Focusing on the impact of advertisers' persuasive selling messages on consumers, this paper discusses topics relating to the theory of psychological type congruence. Based on an examination of persuasion theory and relevant psychological concepts, including recent cognitive stability theory, personality and needs theory, and the older concept of…
DEFF Research Database (Denmark)
Ding, Ming; Hvid, I
2000-01-01
Structure model type and trabecular thickness are important characteristics in describing cancellous bone architecture. It has been qualitatively observed that a radical change of trabeculae from plate-like to rod-like occurs in aging, bone remodeling, and osteoporosis. Thickness of trabeculae has traditionally been measured using model-based histomorphometric methods on two-dimensional (2-D) sections. However, no quantitative study has been published based on three-dimensional (3-D) methods on the age-related changes in structure model type and trabecular thickness for human peripheral (tibial) cancellous bone. In this study, 160 human proximal tibial cancellous bone specimens from 40 normal donors, aged 16 to 85 years, were collected. These specimens were micro-computed tomography (micro-CT) scanned, and the micro-CT images were then segmented using optimal thresholds. From accurate 3-D data sets, structure model type and trabecular thickness were quantified by means of novel 3-D methods. Structure model type was assessed by calculating the structure model index (SMI). The SMI was quantified based on a differential analysis of the triangulated bone surface of a structure. This technique allows quantification of structure model type, such as plate-like objects, rod-like objects, or mixtures of plates and rods. Trabecular thickness is calculated directly from 3-D images, which is especially important for an a priori unknown or changing structure. Furthermore, 2-D trabecular thickness was also calculated based on the plate model. Our results showed that structure model type became more rod-like in the elderly, and that trabecular thickness declined significantly with age. These changes become significant after 80 years of age for human tibial cancellous bone, whereas both properties seem to remain relatively unchanged between 20 and 80 years.
Although a fairly close relationship was seen between 3-D trabecular thickness and 2-D trabecular thickness, real 3-D trabecular thickness was significantly underestimated by the 2-D method.
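For reference, the 2-D plate-model thickness mentioned above is conventionally computed as Tb.Th = 2·BV/BS (an assumption here; the study may use a variant of this histomorphometric convention). A minimal sketch:

```python
def plate_model_thickness(bv, bs):
    """Plate-model trabecular thickness, Tb.Th = 2 * BV / BS.

    Assumes the conventional histomorphometric plate model.
    bv: bone volume (e.g. mm^3); bs: bone surface (e.g. mm^2).
    Returns the mean thickness in the corresponding length unit (mm).
    """
    return 2.0 * bv / bs

# Hypothetical specimen values, for illustration only.
print(plate_model_thickness(10.0, 160.0))  # 0.125
```

Because this estimate assumes an ideal plate geometry, it is expected to diverge from direct 3-D thickness measurements as the structure becomes more rod-like, consistent with the underestimation reported above.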
Closed tachyon solitons in type II string theory
García-Etxebarria, Iñaki; Uranga, Angel M
2015-01-01
Type II theories can be described as the endpoint of closed string tachyon condensation in certain orbifolds of supercritical type 0 theories. In this paper, we study solitons of this closed string tachyon and analyze the nature of the resulting defects in critical type II theories. The solitons are classified by the real K-theory groups KO of bundles associated to pairs of supercritical dimensions. For real codimension 4 and 8, corresponding to $KO({\\bf S}^4)={\\bf Z}$ and $KO({\\bf S}^8)={\\bf Z}$, the defects correspond to a gravitational instanton and a fundamental string, respectively. We apply these ideas to reinterpret the worldsheet GLSM, regarded as a supercritical theory on the ambient toric space with closed tachyon condensation onto the CY hypersurface, and use it to describe charged solitons under discrete isometries. We also suggest the possible applications of supercritical strings to the physical interpretation of the matrix factorization description of F-theory on singular spaces.
Khademi, April; Hosseinzadeh, Danoush
2014-03-01
Alzheimer's disease (AD) is the most common form of dementia in the elderly, characterized by extracellular deposition of amyloid plaques (AP). Using animal models, AP loads have been manually measured from histological specimens to understand disease etiology, as well as response to treatment. Due to the manual nature of these approaches, obtaining the AP load is laborious, subjective and error-prone. Automated algorithms can be designed to alleviate these challenges by objectively segmenting AP. In this paper, we focus on the development of a novel algorithm for AP segmentation based on robust preprocessing and a Type II fuzzy system. Type II fuzzy systems are much more advantageous than the traditional Type I fuzzy systems, since ambiguity in the membership function may be modeled and exploited to generate excellent segmentation results. The ambiguity in the membership function is defined as an adaptively changing parameter that is tuned based on the local contrast characteristics of the image. Using transgenic mouse brains with AP ground truth, validation studies were carried out showing a high degree of overlap and a low degree of oversegmentation (0.8233 and 0.0917, respectively). The results highlight that such a framework is able to handle plaques of various types (diffuse, punctate), plaques with varying Aβ concentrations, as well as intensity variation caused by treatment effects or staining variability.
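The "ambiguity in the membership function" idea above can be sketched with an interval type-2 membership, where each grade is an interval rather than a single number; the Gaussian primary membership and its uncertain width below are illustrative assumptions, not the paper's actual system.

```python
import math

def gaussian(x, mean, sigma):
    """Gaussian primary membership function."""
    return math.exp(-((x - mean) ** 2) / (2.0 * sigma ** 2))

def interval_type2_membership(x, mean, sigma_lo, sigma_hi):
    """Return the (lower, upper) membership interval for an interval type-2
    fuzzy set whose Gaussian primary membership has an uncertain width
    sigma in [sigma_lo, sigma_hi] (the footprint of uncertainty)."""
    a = gaussian(x, mean, sigma_lo)
    b = gaussian(x, mean, sigma_hi)
    return (min(a, b), max(a, b))

# At the mean the interval collapses to [1, 1]; away from it the interval
# widens, modeling the ambiguity that a type-1 system cannot express.
print(interval_type2_membership(1.0, 0.0, 0.8, 1.2))
```

In a segmentation setting, the width of this interval (here fixed by `sigma_lo`/`sigma_hi`) would play the role of the adaptively tuned ambiguity parameter described above.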
Theory and Practice of Higher-type Computation (Tutorial)
Escardó, Martin; Tutorials
2009-01-01
In higher-type computation, established by Kleene and Kreisel in the late 1950's (independently), one works with the data types obtained from the discrete natural numbers by closing under finite products and function spaces. For the theory of higher-type programming languages, it is natural to work with a corresponding hierarchy, or type structure, of domains, identified by Ershov and Scott in the late 1960's (again independently). The Kleene-Kreisel and Ershov-Scott hierarchies account for t...
Simple Type Theory with Undefinedness, Quotation, and Evaluation
Farmer, William M.
2014-01-01
This paper presents a version of simple type theory called ${\\cal Q}^{\\rm uqe}_{0}$ that is based on ${\\cal Q}_0$, the elegant formulation of Church's type theory created and extensively studied by Peter B. Andrews. ${\\cal Q}^{\\rm uqe}_{0}$ directly formalizes the traditional approach to undefinedness in which undefined expressions are treated as legitimate, nondenoting expressions that can be components of meaningful statements. ${\\cal Q}^{\\rm uqe}_{0}$ is also equipped wit...
Gravitational anomaly cancellation in type I superstring theory
International Nuclear Information System (INIS)
By explicit calculations we show that the gravitational anomaly of type I superstring theory vanishes at the string level. There are contributions from four topologically different diagrams to the anomaly: annulus, Moebius strip, torus, and Klein bottle. We explicitly show how the non-trivial cancellation occurs between the open (annulus and Moebius strip) and closed (Klein bottle) sectors. The anomaly of the torus diagram has the same form of type II superstring theory and vanishes because of the modular invariance. (orig.)
Gravitational anomaly cancellation in type I superstring theory
Energy Technology Data Exchange (ETDEWEB)
Hayashi, Masahito; Kawamotu, Noboru; Kuramoto, Tetsuji; Shigemoto, Kazuyasu
1988-01-11
By explicit calculations we show that the gravitational anomaly of type I superstring theory vanishes at the string level. There are contributions from four topologically different diagrams to the anomaly: annulus, Moebius strip, torus, and Klein bottle. We explicitly show how the non-trivial cancellation occurs between the open (annulus and Moebius strip) and closed (Klein bottle) sectors. The anomaly of the torus diagram has the same form of type II superstring theory and vanishes because of the modular invariance.
Isotropization of Bianchi type models in string effective theories
Energy Technology Data Exchange (ETDEWEB)
Cervantes C, J.L.; Rodriguez M, M.A.; Nahmad, M. [Dep. de Fisica, ININ, A.P. 18-1027, 11801 Mexico D.F. (Mexico)]. e-mail: jorge@nuclear.inin.mx
2003-07-01
Using scaled variables we are able to integrate an equation valid for isotropic and anisotropic Bianchi type I, V, and IX models in Brans-Dicke theory. We specialize our analysis to the case in which ω = -1, which corresponds to string effective theories. In these theories we analyze the possibility that anisotropic models asymptotically isotropize, and/or possess inflationary properties. Additionally, a new solution of curved (k ≠ 0) Friedmann-Robertson-Walker (FRW) cosmologies is discussed. (Author)
International Nuclear Information System (INIS)
Several analytical techniques that are currently available can be used to determine the spatial distribution and amount of austenite, ferrite and precipitate phases in steels. The application of magnetic force microscopy, in particular, to study the local microstructure of stainless steels is beneficial due to the selectivity of this technique for detection of ferromagnetic phases. In the comparison of Magnetic Force Microscopy and Electron Back-Scatter Diffraction for the morphological mapping and quantification of ferrite, the degree of sub-surface measurement has been found to be critical. Through the use of surface shielding, it has been possible to show that Magnetic Force Microscopy has a measurement depth of 105–140 nm. A comparison of the two techniques together with the depth of measurement capabilities are discussed. - Highlights: • MFM used to map distribution and quantify ferrite in type 321 stainless steels. • MFM results compared with EBSD for same region, showing good spatial correlation. • MFM gives higher area fraction of ferrite than EBSD due to sub-surface measurement. • From controlled experiments MFM depth sensitivity measured from 105 to 140 nm. • A correction factor to calculate area fraction from MFM data is estimated
Intensional Type Theory with Guarded Recursive Types qua Fixed Points on Universes
DEFF Research Database (Denmark)
Birkedal, Lars; Mogelberg, R.E.
2013-01-01
Guarded recursive functions and types are useful for giving semantics to advanced programming languages and for higher-order programming with infinite data types, such as streams, e.g., for modeling reactive systems. We propose an extension of intensional type theory with rules for forming fixed points of guarded recursive functions. Guarded recursive types can be formed simply by taking fixed points of guarded recursive functions on the universe of types. Moreover, we present a general model construction for constructing models of the intensional type theory with guarded recursive functions and types. When applied to the groupoid model of intensional type theory with the universe of small discrete groupoids, the construction gives a model of guarded recursion for which there is a one-to-one correspondence between fixed points of functions on the universe of types and fixed points of (suitable) operators on types. In particular, we find that the functor category Grpd^{ω^op} from the preordered set of natural numbers to the category of groupoids is a model of intensional type theory with guarded recursive types.
Intensional type theory with guarded recursive types qua fixed points on universes
DEFF Research Database (Denmark)
Møgelberg, Rasmus Ejlers; Birkedal, Lars
2013-01-01
Guarded recursive functions and types are useful for giving semantics to advanced programming languages and for higher-order programming with infinite data types, such as streams, e.g., for modeling reactive systems. We propose an extension of intensional type theory with rules for forming fixed points of guarded recursive functions. Guarded recursive types can be formed simply by taking fixed points of guarded recursive functions on the universe of types. Moreover, we present a general model construction for constructing models of the intensional type theory with guarded recursive functions and types. When applied to the groupoid model of intensional type theory with the universe of small discrete groupoids, the construction gives a model of guarded recursion for which there is a one-to-one correspondence between fixed points of functions on the universe of types and fixed points of (suitable) operators on types. In particular, we find that the functor category from the preordered set of natural numbers to the category of groupoids is a model of intensional type theory with guarded recursive types.
Type IIB string theory, S-duality, and generalized cohomology
Energy Technology Data Exchange (ETDEWEB)
Kriz, Igor [Department of Mathematics, University of Michigan, Ann Arbor, MI 48109 (United States)]. E-mail: ikriz@umich.edu; Sati, Hisham [Department of Physics, University of Adelaide, Adelaide, SA 5005 (Australia) and Department of Pure Mathematics, University of Adelaide, Adelaide, SA 5005 (Australia)]. E-mail: hsati@maths.adelaide.edu.au
2005-05-30
In the presence of background Neveu-Schwarz flux, the description of the Ramond-Ramond fields of type IIB string theory using twisted K-theory is not compatible with S-duality. We argue that other possible variants of twisted K-theory would still not resolve this issue. We propose instead a connection of S-duality with elliptic cohomology, and a possible T-duality relation of this to a previous proposal for IIA theory, and higher-dimensional limits. In the process, we obtain some other results which may be interesting on their own. In particular, we prove a conjecture of Witten that the 11-dimensional spin cobordism group vanishes on K(Z,6), which eliminates a potential new θ-angle in type IIB string theory.
Type IIB string theory, S-duality, and generalized cohomology
International Nuclear Information System (INIS)
In the presence of background Neveu-Schwarz flux, the description of the Ramond-Ramond fields of type IIB string theory using twisted K-theory is not compatible with S-duality. We argue that other possible variants of twisted K-theory would still not resolve this issue. We propose instead a connection of S-duality with elliptic cohomology, and a possible T-duality relation of this to a previous proposal for IIA theory, and higher-dimensional limits. In the process, we obtain some other results which may be interesting on their own. In particular, we prove a conjecture of Witten that the 11-dimensional spin cobordism group vanishes on K(Z,6), which eliminates a potential new θ-angle in type IIB string theory.
Type IIB String Theory, S-Duality, and Generalized Cohomology
Kriz, Igor; Sati, Hisham
2004-01-01
In the presence of background Neveu-Schwarz flux, the description of the Ramond-Ramond fields of type IIB string theory using twisted K-theory is not compatible with S-duality. We argue that other possible variants of twisted K-theory would still not resolve this issue. We propose instead a possible path to a solution using elliptic cohomology. We also discuss T-duality relation of this to a previous proposal for IIA theory, and higher-dimensional limits. In the process, we ...
Introduction to type-2 fuzzy logic control theory and applications
Mendel, Jerry M; Tan, Woei-Wan; Melek, William W; Ying, Hao
2014-01-01
Written by world-class leaders in type-2 fuzzy logic control, this book offers a self-contained reference for both researchers and students. The coverage provides both background and an extensive literature survey on fuzzy logic and related type-2 fuzzy control. It also includes research questions, experiment and simulation results, and downloadable computer programs on an associated website. This key resource will prove useful to students and engineers wanting to learn type-2 fuzzy control theory and its applications.
Calculating the Fundamental Group of the Circle in Homotopy Type Theory
Daniel R. Licata; Shulman, Michael
2013-01-01
Recent work on homotopy type theory exploits an exciting new correspondence between Martin-Löf's dependent type theory and the mathematical disciplines of category theory and homotopy theory. The category theory and homotopy theory suggest new principles to add to type theory, and type theory can be used in novel ways to formalize these areas of mathematics. In this paper, we formalize a basic result in algebraic topology, that the fundamental group of the circle is the inte...
DEFF Research Database (Denmark)
Barascuk, Natasha; Vassiliadis, Efstathios
2011-01-01
Degradation of collagen in the arterial wall by matrix metalloproteinases is the hallmark of atherosclerosis. We have developed an ELISA for the quantification of type III collagen degradation mediated by MMP-9 in urine.
Quantification of Spatial Parameters in 3D Cellular Constructs Using Graph Theory
G. E. Plopper; Zaki, M. J.; B. Yener; J. P. Stegemann; McKeen, L. M.; Bilgin, C. C.; Hasan, M. A.; A. W. Lund
2009-01-01
Multispectral three-dimensional (3D) imaging provides spatial information for biological structures that cannot be measured by traditional methods. This work presents a method of tracking 3D biological structures to quantify changes over time using graph theory. Cell-graphs were generated based on the pairwise distances, in 3D-Euclidean space, between nuclei during collagen I gel compaction. From these graphs quantitative features are extracted that measure both the global topography and the ...
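The cell-graph construction described above can be sketched by linking nuclei whose pairwise 3-D Euclidean distance falls below a threshold; the nucleus coordinates and threshold below are hypothetical.

```python
import itertools
import math

def cell_graph(nuclei, threshold):
    """Build an undirected cell-graph: nodes are nucleus indices, and an edge
    joins two nuclei whose 3-D Euclidean distance is below the threshold."""
    edges = []
    for (i, a), (j, b) in itertools.combinations(enumerate(nuclei), 2):
        if math.dist(a, b) < threshold:
            edges.append((i, j))
    return edges

# Hypothetical nucleus centroids (x, y, z), e.g. in micrometres.
nuclei = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (5.0, 5.0, 5.0)]
print(cell_graph(nuclei, threshold=2.0))  # [(0, 1)]
```

Graph-theoretic features (degree distribution, clustering, average edge length) computed on such graphs over time are what quantify the compaction dynamics described above.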
Ryerson, F. J.; Ezzedine, S. M.; Antoun, T.
2013-12-01
The success of implementation and execution of numerous subsurface energy technologies such as shale gas extraction, geothermal energy, and underground coal gasification relies on detailed characterization of the geology and the subsurface properties. For example, spatial variability of subsurface permeability controls multi-phase flow, and hence impacts the prediction of reservoir performance. Subsurface properties can vary significantly over several length scales, making detailed subsurface characterization infeasible, if not impossible. Therefore, in common practice, only sparse measurements of data are available to image or characterize the entire reservoir. For example, pressure, P, permeability, k, and production rate, Q, measurements are only available at the monitoring and operational wells. Elsewhere, the spatial distribution of k is determined by various deterministic or stochastic interpolation techniques, and P and Q are calculated from the governing forward mass balance equation assuming k is given at all locations. Several uncertainty drivers, such as PSUADE, are then used to propagate and quantify the uncertainty (UQ) of quantities (variables) of interest using forward solvers. Unfortunately, forward-solver techniques and other interpolation schemes are rarely constrained by the inverse problem itself: given P and Q at observation points, determine the spatially variable map of k. The approach presented here, motivated by fluid imaging for subsurface characterization and monitoring, was developed by progressively solving increasingly complex realistic problems. The essence of this novel approach is that the forward and inverse partial differential equations are the interpolators themselves for P, k and Q, rather than extraneous and sometimes ad hoc schemes. Three cases with different sparsity of data are investigated. In the simplest case, a sufficient number of passive pressure data (pre-production pressure gradients) are given.
Here, only the inverse hyperbolic equation for the distribution of k is solved, provided that Cauchy data are appropriately assigned. In the next stage, only a limited number of passive measurements are provided. In this case, the forward and inverse PDEs are solved simultaneously. This is accomplished by adding regularization terms and filtering the pressure gradients in the inverse problem. Both the forward and the inverse problem are either simultaneously or sequentially coupled and solved using implicit schemes, adaptive mesh refinement, and Galerkin finite elements. The final case arises when P, k, and Q data only exist at producing wells. This exceedingly ill-posed problem calls for additional constraints on the forward-inverse coupling to ensure that the production rates are satisfied at the desired locations. Results from all three cases are presented, demonstrating stability and accuracy of the proposed approach and, more importantly, providing some insights into the consequences of data under-sampling, uncertainty propagation and quantification. We illustrate the advantages of this novel approach over the common UQ forward drivers on several subsurface energy problems in porous, fractured, or faulted reservoirs. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
Penner Type Matrix Model and Seiberg-Witten Theory
Eguchi, Tohru; Maruyoshi, Kazunobu
2009-01-01
We discuss the Penner type matrix model recently proposed by Dijkgraaf and Vafa as a possible explanation of the relation between four-dimensional gauge theory and Liouville theory, making use of the connection of the matrix model to two-dimensional CFT. We first consider how the gauge couplings defined in the UV and IR regimes of N_f = 4, N = 2 supersymmetric gauge theory are related as $q_{{\\rm UV}}={\\vartheta_2(q_{{\\rm IR}})^4/\\vartheta_3(q_{{\\rm IR}})^4}$. We th...
Multivariate Bonferroni-type inequalities theory and applications
Chen, John
2014-01-01
Multivariate Bonferroni-Type Inequalities: Theory and Applications presents a systematic account of research discoveries on multivariate Bonferroni-type inequalities published in the past decade. The emergence of new bounding approaches pushes the conventional definitions of optimal inequalities and demands new insights into linear and Fréchet optimality. The book explores these advances in bounding techniques with corresponding innovative applications. It presents the method of linear programming for multivariate bounds, multivariate hybrid bounds, sub-Markovian bounds, and bounds using Hamil
Calabi-Yau compactifications of type IIB superstring theory
International Nuclear Information System (INIS)
Starting from a non-self-dual action for ten-dimensional type IIB supergravity, this theory is compactified on a Calabi-Yau 3-fold and 4-fold. The compactifications are thereby performed in the limit in which the volumes of the manifolds are large compared to the string scale.
Rainich theory for type D aligned Einstein-Maxwell solutions
Ferrando, Joan Josep; Sáez, Juan Antonio
2007-01-01
The original Rainich theory for the non-null Einstein-Maxwell solutions consists of a set of algebraic conditions and the Rainich (differential) equation. We show here that the subclass of type D aligned solutions can be characterized just by algebraic restrictions.
On the homotopy theory of n-types
Biedermann, G
2006-01-01
An n-truncated model structure on simplicial (pre-)sheaves is described having as weak equivalences maps that induce isomorphisms on certain homotopy sheaves only up to degree n. Starting from one of Jardine's intermediate model structures we construct such an n-type model structure via Bousfield-Friedlander localization and exhibit useful generating sets of trivial cofibrations. Injectively fibrant objects in these categories are called n-hyperstacks. The whole setup can consequently be viewed as a description of the homotopy theory of higher hyperstacks. More importantly, we construct analogous n-truncations on simplicial groupoids and prove a Quillen equivalence between these settings. We achieve a classification of n-types of simplicial presheaves in terms of (n-1)-types of presheaves of simplicial groupoids. Our classification holds for general n. Therefore this can also be viewed as the homotopy theory of (pre-)sheaves of (weak) higher groupoids.
A ground many-valued type theory and its extensions.
Czech Academy of Sciences Publication Activity Database
Běhounek, Libor
Linz : Johannes Kepler Universität, 2014 - (Flaminio, T.; Godo, L.; Gottwald, S.; Klement, E.). s. 15-18 [Linz Seminar on Fuzzy Set Theory /35./. 18.02.2014-22.02.2014, Linz] R&D Projects: GA MŠk ED1.1.00/02.0070; GA MŠk EE2.3.30.0010 Institutional support: RVO:67985807 Keywords : type theory * many-valued logics * higher-order logic * teorie typů * vícehodnotové logiky * logika vyššího řádu Subject RIV: BA - General Mathematics
Gauge-Higgs Unification in Lifshitz Type Gauge Theory
Hatanaka, Hisaki; Sakamoto, Makoto; Takenaga, Kazunori
2011-01-01
We discuss the gauge-Higgs unification in a framework of Lifshitz type gauge theory. We study a higher dimensional gauge theory on R^{D-1}\\times S^{1} in which the normal second (first) order derivative terms for scalar (fermion) fields in the action are replaced by higher order derivative ones for the direction of the extra dimension. We provide some mathematical tools to evaluate a one-loop effective potential for the zero mode of the extra component of a higher dimensiona...
Type 1 2HDM as Effective Theory of Supersymmetry
International Nuclear Information System (INIS)
It is generally believed that the low energy effective theory of the minimal supersymmetric standard model is the type 2 two Higgs doublet model. We show that the type 1 two Higgs doublet model can also be the effective theory of supersymmetry in a specific case with high-scale supersymmetry breaking and gauge mediation. If the other electroweak doublet obtains a vacuum expectation value after electroweak symmetry breaking, the Higgs spectrum is quite different. A remarkable feature is that the physical Higgs boson mass can be 125 GeV, unlike in the ordinary models with high-scale supersymmetry in which the Higgs mass is generally around 140 GeV.
On global anomalies in type IIB string theory
Sati, Hisham
2011-01-01
We study global gravitational anomalies in type IIB string theory with nontrivial middle cohomology. This requires the study of the action of diffeomorphisms on this group. Several results and constructions, including some recent vanishing results via elliptic genera, make it possible to consider this problem. Along the way, we describe in detail the intersection pairing and the action of diffeomorphisms, and highlight the appearance of various structures, including the Rochlin invariant and its variants on the mapping torus.
Enhanced gauge symmetry in type II string theory
International Nuclear Information System (INIS)
We show how enhanced gauge symmetry in type II string theory compactified on a Calabi-Yau threefold arises from singularities in the geometry of the target space. When the target space of the type IIA string acquires a genus g curve C of A_{N-1} singularities, we find that an SU(N) gauge theory with g adjoint hypermultiplets appears at the singularity. The new massless states correspond to solitons wrapped about the collapsing cycles, and their dynamics is described by a twisted supersymmetric gauge theory on C x R^4. We reproduce this result from an analysis of the S-dual D-manifold. We check that the predictions made by this model about the nature of the Higgs branch, the monodromy of period integrals, and the asymptotics of the one-loop topological amplitude are in agreement with geometrical computations. In one of our examples we find that the singularity occurs at strong coupling in the heterotic dual proposed by Kachru and Vafa. (orig.)
A type reduction theory for systems with replicated components
Mazur, Tomasz
2012-01-01
The Parameterised Model Checking Problem asks whether an implementation $Impl(t)$ satisfies a specification $\\Spec(t)$ for all instantiations of parameter $t$. In general, $t$ can determine numerous entities: the number of processes used in a network, the type of data, the capacities of buffers, etc. The main theme of this paper is automation of uniform verification of a subclass of PMCP with the parameter of the first kind, i.e. the number of processes in the network. We use CSP as our formalism. We present a type reduction theory, which, for a given verification problem, establishes a function $\\phi$ that maps all (sufficiently large) instantiations $T$ of the parameter to some fixed type $\\shiftedhat{T}$ and allows us to deduce that if $\\Spec(\\shiftedhat{T})$ is refined by $\\phi(Impl(T))$, then (subject to certain assumptions) $\\Spec(T)$ is refined by $Impl(T)$. The theory can be used in practice by combining it with a suitable abstraction method that produces a $t$-independent process $Abstr$ that is refi...
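The reduction described above can be caricatured in a few lines: a map φ collapses all sufficiently large data values onto a fixed representative type, and the (infinite) family of verification problems is replaced by a single check on the reduced system. The sketch below is a crude stand-in for the paper's CSP refinement machinery; the threshold, the trace representation, and the set-inclusion notion of "refinement" are all illustrative assumptions:

```python
def phi(value, threshold=2):
    # Collapse all "sufficiently large" values onto a fixed representative,
    # mimicking the map from large instantiations T to the fixed type T-hat.
    return value if value < threshold else threshold

def reduce_trace(trace, threshold=2):
    return [phi(v, threshold) for v in trace]

def refines(spec_traces, impl_traces, threshold=2):
    # Toy refinement check on finite trace sets: every reduced implementation
    # trace must be a specification trace (a stand-in for CSP trace refinement).
    reduced = {tuple(reduce_trace(t, threshold)) for t in impl_traces}
    return reduced <= {tuple(t) for t in spec_traces}

# An implementation using an arbitrarily large value (5) is checked against a
# specification phrased over the small reduced type {0, 1, 2}.
print(refines([(0,), (1,), (2,)], [(0,), (5,)]))
```

The point of the theory is precisely that such a check on the fixed reduced type, under the stated assumptions, soundly implies refinement for every sufficiently large instantiation of the parameter.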
A graphical representation of Weisskopf-Wigner type theories
International Nuclear Information System (INIS)
It is shown that the usual (non-existing) Hamiltonian operator H' governing the interaction of a one-electron atom with transverse photons can be written as the sum of a finite number of self-adjoint and bounded 'partial' interaction Hamiltonians L, where each L has a well-defined physical meaning. The simplest of the Weisskopf-Wigner type theories are defined by a single L, and practically all model Hamiltonians used in quantum optics are closely related either to a single L or to sums of very few L. The systematic 'Weisskopf-Wigner approximation scheme' introduced previously consists of special sequences of partial sums of L. The system of partial sums of L is here equipped with a system of graphs, where each graph uniquely defines a certain Weisskopf-Wigner theory and visualises its physical content in a way comparable to a Feynman graph. Finally some applications are given. (author)
Irregular singularities in Liouville theory and Argyres-Douglas type gauge theories, I
International Nuclear Information System (INIS)
Motivated by problems arising in the study of N=2 supersymmetric gauge theories we introduce and study irregular singularities in two-dimensional conformal field theory, here Liouville theory. Irregular singularities are associated to representations of the Virasoro algebra in which a subset of the annihilation part of the algebra acts diagonally. In this paper we define natural bases for the space of conformal blocks in the presence of irregular singularities, describe how to calculate their series expansions, and how such conformal blocks can be constructed by some delicate limiting procedure from ordinary conformal blocks. This leads us to a proposal for the structure functions appearing in the decomposition of physical correlation functions with irregular singularities into conformal blocks. Taken together, we get a precise prediction for the partition functions of some Argyres-Douglas type theories on S^4. (orig.)
Irregular singularities in Liouville theory and Argyres-Douglas type gauge theories, I
Energy Technology Data Exchange (ETDEWEB)
Gaiotto, D. [Institute for Advanced Study (IAS), Princeton, NJ (United States); Teschner, J. [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany)
2012-03-15
Motivated by problems arising in the study of N=2 supersymmetric gauge theories we introduce and study irregular singularities in two-dimensional conformal field theory, here Liouville theory. Irregular singularities are associated to representations of the Virasoro algebra in which a subset of the annihilation part of the algebra acts diagonally. In this paper we define natural bases for the space of conformal blocks in the presence of irregular singularities, describe how to calculate their series expansions, and how such conformal blocks can be constructed by some delicate limiting procedure from ordinary conformal blocks. This leads us to a proposal for the structure functions appearing in the decomposition of physical correlation functions with irregular singularities into conformal blocks. Taken together, we get a precise prediction for the partition functions of some Argyres-Douglas type theories on S^4. (orig.)
Church-style type theories over finitary weakly implicative logics.
Czech Academy of Sciences Publication Activity Database
Běhounek, Libor
Vienna : Vienna University of Technology, 2014 - (Baaz, M.; Ciabattoni, A.; Hetzl, S.). s. 131-133 [LATD 2014. Logic, Algebra and Truth Degrees. 16.07.2014-19.07.2014, Vienna] R&D Projects: GA MŠk ED1.1.00/02.0070; GA MŠk EE2.3.30.0010 Institutional support: RVO:67985807 Keywords : type theory * higher-order logic * weakly implicative logics * teorie typů * logika vyššího řádu * slabě implikační logiky Subject RIV: BA - General Mathematics
Nucleation of vacuum bubbles in Brans-Dicke type theory
Kim, Hongsu; Lee, Bum-Hoon; Lee, Wonwoo; Lee, Young Jae; Yeom, Dong-han(Yukawa Institute for Theoretical Physics, Kyoto University, Kyoto, 606-8502, Japan)
2010-01-01
In this paper, we explore the nucleation of vacuum bubbles in the Brans-Dicke type theory of gravity. In the Euclidean signature, we evaluate the fields at the vacuum bubbles as solutions of the Euler-Lagrange equations of motion as well as the bubble nucleation probabilities by integrating the Euclidean action. We illustrate three possible ways to obtain vacuum bubbles: true vacuum bubbles for \\omega>-3/2, false vacuum bubbles for \\omega<-3/2 when the vacuum energy of the false...
DEFF Research Database (Denmark)
Balint, Adam; Tenk, Miklós
2009-01-01
A real-time PCR assay, based on Primer-Probe Energy Transfer (PriProET), was developed to improve the detection and quantification of porcine circovirus type 2 (PCV2). PCV2 is recognised as the essential infectious agent in post-weaning multisystemic wasting syndrome (PMWS) and has been associated with other disease syndromes such as porcine dermatitis and nephropathy syndrome (PDNS) and porcine respiratory disease complex (PRDC). Since circoviruses commonly occur in pig populations and there is a correlation between the severity of the disease and the viral load in the organs and blood, it is important not only to detect PCV2 but also to determine the quantitative aspects of viral load. The PriProET real-time PCR assay described in this study was tested on various virus strains and clinical forms of PMWS in order to investigate any correlation between the clinical signs and viral loads in different organs. The data obtained in this study correlate with those described earlier; namely, the viral load in 1 ml plasma and in 500 ng tissue DNA exceeds 10(7) copies in the case of PMWS. The results indicate that the new assay provides a specific, sensitive and robust tool for the improved detection and quantification of PCV2.
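In real-time PCR assays of this kind, absolute quantification is typically read off a standard curve relating the threshold cycle (Ct) to the log10 copy number of a dilution series. A minimal sketch of that conversion; the slope and intercept below are hypothetical calibration values for illustration, not parameters reported in this study:

```python
def copies_from_ct(ct, slope=-3.32, intercept=38.0):
    """Interpolate copy number from a qPCR standard curve.

    Assumes the usual linear calibration Ct = slope * log10(copies) + intercept;
    a slope of -3.32 corresponds to ~100% amplification efficiency.
    slope/intercept here are illustrative, not from the assay described above.
    """
    return 10 ** ((ct - intercept) / slope)

# A sample crossing the threshold at cycle 14.76 would read as ~1e7 copies,
# the PMWS-associated load mentioned in the abstract.
print(copies_from_ct(14.76))
```

Lower Ct means earlier detection and therefore a higher starting copy number, which is why the exponent in the conversion is negative in Ct.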
Warren, A D; Harniman, R L; Collins, A M; Davis, S A; Younes, C M; Flewitt, P E J; Scott, T B
2015-01-01
Several analytical techniques that are currently available can be used to determine the spatial distribution and amount of austenite, ferrite and precipitate phases in steels. The application of magnetic force microscopy, in particular, to study the local microstructure of stainless steels is beneficial due to the selectivity of this technique for the detection of ferromagnetic phases. In the comparison of Magnetic Force Microscopy and Electron Back-Scatter Diffraction for the morphological mapping and quantification of ferrite, the degree of sub-surface measurement has been found to be critical. Through the use of surface shielding, it has been possible to show that Magnetic Force Microscopy has a measurement depth of 105-140 nm. A comparison of the two techniques, together with their depth of measurement capabilities, is discussed. PMID:25195013
Costa, Joana; Ansari, Parisa; Mafra, Isabel; Oliveira, M Beatriz P P; Baumgartner, Sabine
2015-04-15
Hazelnut is one of the most appreciated nuts and is found in a wide range of processed foods. The presence of trace amounts of hazelnut in foods can represent a potential risk for eliciting allergic reactions in sensitised individuals. The correct labelling of processed foods is mandatory to avoid adverse reactions; therefore, adequate methodology for evaluating the presence of offending foods is of great importance. The aim of this study was to develop a highly specific and sensitive sandwich enzyme-linked immunosorbent assay (ELISA) for the detection and quantification of hazelnut in complex food matrices. Using in-house produced antibodies, an ELISA system was developed capable of detecting hazelnut down to 1 mg kg(-1) and of quantifying this nut down to 50 mg kg(-1) in chocolates spiked with known amounts of hazelnut. These results highlight and reinforce the value of ELISA as a rapid and reliable tool for the detection of allergens in foods. PMID:25466021
Type IIB flux vacua from G-theory I
Candelas, Philip; Damian, Cesar; Larfors, Magdalena; Morales, Jose Francisco
2014-01-01
We construct non-perturbatively exact four-dimensional Minkowski vacua of type IIB string theory with non-trivial fluxes. These solutions are found by gluing together, consistently with U-duality, local solutions of type IIB supergravity on $T^4 \\times \\mathbb{C}$ with the metric, dilaton and flux potentials varying along $\\mathbb{C}$ and the flux potentials oriented along $T^4$. We focus on solutions locally related via U-duality to non-compact Ricci-flat geometries. More general solutions and a complete analysis of the supersymmetry equations are presented in the companion paper [1]. We build a precise dictionary between fluxes in the global solutions and the geometry of an auxiliary $K3$ surface fibered over $\\mathbb{CP}^1$. In the spirit of F-theory, the flux potentials are expressed in terms of locally holomorphic functions that parametrize the complex structure moduli space of the $K3$ fiber in the auxiliary geometry. The brane content is inferred from the monodromy data around the degeneration points o...
Aspects of moduli stabilization in type IIB string theory
Khalil, Shaaban; Nassar, Ali
2015-01-01
We review moduli stabilization in type IIB string theory compactification with fluxes. We focus on the KKLT and Large Volume Scenario (LVS). We show that the predicted soft SUSY breaking terms in the KKLT model are not phenomenologically viable. In LVS, the following result for the scalar mass, gaugino mass, and trilinear term is obtained: $m_0 =m_{1/2}= - A_0=m_{3/2}$, which may account for the Higgs mass limit if $m_{3/2} \\sim {\\cal O}(1.5)$ TeV. However, in this case the relic abundance of the lightest neutralino cannot be consistent with the measured limits. We also study the cosmological consequences of moduli stabilization in both models. In particular, the associated inflation models such as racetrack inflation and K\\"ahler inflation are analyzed. Finally the problem of moduli destabilization and the effect of string moduli backreaction on the inflation models are discussed.
Raimundo, S; Giray, J; Volff, J N; Schwab, M; Altenbuchner, J; Ratge, D; Wisser, H
1999-07-01
Adenylyl cyclases (ACs) V and VI are the predominant forms of ACs in mammalian heart, where they are part of the beta-adrenergic pathway. Up to now, the sequences of both enzymes from human tissues have not been reported. We investigated the mRNA expression of AC V and VI in human colon, heart, liver, lung and MNL. Partial cDNAs of human type V and VI ACs were amplified by PCR using oligonucleotides derived from the cytoplasmic domain sequences of the corresponding enzymes from dog heart. Primers derived from the human sequence were used to detect the mRNAs corresponding to both ACs. For quantification of the mRNAs we constructed internal standards for competitive quantitative reverse transcriptase PCR (RT-PCR). Both types of transcripts could be found in all investigated tissues except MNL, where only type VI could be detected. Furthermore, we demonstrated a more than 60 times higher amount of AC V mRNA in human heart compared to AC VI mRNA. PMID:10481931
Geometry of model building in type IIB superstring theory and F-theory compactifications
International Nuclear Information System (INIS)
The present thesis is devoted to the study and geometrical description of type IIB superstring theory and F-theory model building. After a concise exposition of the basic concepts of type IIB flux compactifications, we explain their relation to F-theory. Moreover, we give a brief introduction to toric geometry focusing on the construction and the analysis of compact Calabi-Yau (CY) manifolds, which play a prominent role in the compactification of extra spatial dimensions. We study the 'Large Volume Scenario' on explicit new compact four-modulus CY manifolds. We thoroughly analyze the possibility of generating neutral non-perturbative superpotentials from Euclidean D3-branes in the presence of chirally intersecting D7-branes. We find that taking proper account of the Freed-Witten anomaly on non-spin cycles and of the Kaehler cone conditions imposes severe constraints on the models. Furthermore, we systematically construct a large number of compact CY fourfolds that are suitable for F-theory model building. These elliptically fibered CYs are complete intersections of two hypersurfaces in a six-dimensional ambient space. We first construct three-dimensional base manifolds that are hypersurfaces in a toric ambient space. We find that elementary conditions, which are motivated by F-theory GUTs (Grand Unified Theory), lead to strong constraints on the geometry, which significantly reduce the number of suitable models. We work out several examples in more detail. At the end, we focus on the complex moduli space of CY threefolds. It is a known result that infinite sequences of type IIB flux vacua with imaginary self-dual flux can only occur in so-called D-limits, corresponding to singular points in complex structure moduli space. We refine this no-go theorem by demonstrating that there are no infinite sequences accumulating to the large complex structure point of a certain class of one-parameter CY manifolds.
We perform a similar analysis for conifold points and for the decoupling limit, obtaining identical results. Furthermore, we establish the absence of infinite sequences in a D-limit corresponding to the large complex structure limit of a two-parameter CY. We corroborate our results with a numerical study of the sequences. (author)
Blondeel, A.; Clauws, P.
1999-12-01
The characterization of high-purity (HP) Ge for the fabrication of γ-ray detectors poses very specific demands due to the high degree of purity of the material (shallow concentration of the order of 10^9-10^10 cm^-3). Deep level transient spectroscopy (DLTS) may still be applied to this kind of material since the sensitivity is relative to the shallow doping concentration. In contrast with p-type HP Ge, which was characterized extensively in the 1980s, very little is known about deep defects in n-type HP Ge. Two optical variants of DLTS have been applied to n-type HP Ge and quantified for the first time. Several deep minority carrier traps are detected and identified as mainly Cu-related traps with concentrations in the 10^6-10^8 cm^-3 range. These Cu-related traps, which are well known as the majority carrier traps appearing in typical p-type HP Ge, are thus present as minority carrier traps in typical n-type HP Ge. The conclusion that deep-level defects in n- and p-type HP Ge are very similar could be expected from the similarity in growing conditions for the two types of materials. In the first DLTS variant, known as optical DLTS or ODLTS, the deep levels are filled by optical injection (with light of above-bandgap energy) at the back ohmic contact of a reverse-biased diode. The spectrum is generated by the capacitance transients following the optical excitation. In the second variant, known as photo-induced (current) transient spectroscopy or PI(C)TS, the deep levels are also filled optically with intrinsic light, but here a neutral structure is used with two ohmic contacts in sandwich configuration. The spectrum is generated by current transients instead of capacitance transients. This method is especially suited for high-resistivity or semi-insulating materials which cannot be measured with capacitance-based DLTS. PICTS was applied to n-type Ge with a shallow concentration as low as 10^9 cm^-3.
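The remark that DLTS sensitivity is relative to the shallow doping concentration follows from the standard small-signal estimate N_T ≈ 2 N_shallow (ΔC/C0) for the trap concentration extracted from a capacitance transient. A short illustration; the numerical values are hypothetical, chosen only to land in the concentration ranges quoted above:

```python
def trap_concentration(delta_c, c0, n_shallow):
    # Standard DLTS estimate: N_T ≈ 2 * N_shallow * (ΔC / C0),
    # valid in the dilute limit N_T << N_shallow.
    return 2.0 * n_shallow * (delta_c / c0)

# Hypothetical example: shallow concentration 1e9 cm^-3 and a 0.5% relative
# capacitance transient amplitude give N_T = 1e7 cm^-3, inside the
# 1e6-1e8 cm^-3 range reported for the Cu-related traps.
print(trap_concentration(delta_c=0.005, c0=1.0, n_shallow=1e9))
```

Because the measurable quantity is the relative amplitude ΔC/C0, a lower shallow concentration directly lowers the smallest trap concentration that can be resolved, which is what makes HP Ge with N_shallow ~ 10^9 cm^-3 accessible down to ~10^6 cm^-3 traps.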
Type IIA flux compactifications. Vacua, effective theories and cosmological challenges
Energy Technology Data Exchange (ETDEWEB)
Koers, Simon
2009-07-30
In this thesis, we studied a number of type IIA SU(3)-structure compactifications with O6-planes on nilmanifolds and cosets, which are tractable enough to allow for an explicit derivation of the low energy effective theory. In particular we calculated the mass spectrum of the light scalar modes, using N = 1 supergravity techniques. For the torus and the Iwasawa solution, we have also performed an explicit Kaluza-Klein reduction, which led to the same result. For the nilmanifold examples we have found that there are always three unstabilized moduli corresponding to axions in the RR sector. On the other hand, in the coset models, except for SU(2) x SU(2), all moduli are stabilized. We discussed the Kaluza-Klein decoupling for the supersymmetric AdS vacua and found that it requires going to the nearly-Calabi-Yau limit. We searched for non-trivial de Sitter minima in the original flux potential away from the AdS vacuum. Finally, in chapter 7, we focused on a family of three coset spaces and constructed non-supersymmetric vacua on them. (orig.)
Type IIA flux compactifications. Vacua, effective theories and cosmological challenges
International Nuclear Information System (INIS)
In this thesis, we studied a number of type IIA SU(3)-structure compactifications with O6-planes on nilmanifolds and cosets, which are tractable enough to allow for an explicit derivation of the low energy effective theory. In particular we calculated the mass spectrum of the light scalar modes, using N = 1 supergravity techniques. For the torus and the Iwasawa solution, we have also performed an explicit Kaluza-Klein reduction, which led to the same result. For the nilmanifold examples we have found that there are always three unstabilized moduli corresponding to axions in the RR sector. On the other hand, in the coset models, except for SU(2) x SU(2), all moduli are stabilized. We discussed the Kaluza-Klein decoupling for the supersymmetric AdS vacua and found that it requires going to the nearly-Calabi-Yau limit. We searched for non-trivial de Sitter minima in the original flux potential away from the AdS vacuum. Finally, in chapter 7, we focused on a family of three coset spaces and constructed non-supersymmetric vacua on them. (orig.)
Bauzá, Antonio; Quiñonero, David; Frontera, Antonio; Ballester, Pablo
2015-01-01
In this manuscript we consider from a theoretical point of view the recently reported experimental quantification of anion-π interactions (the attractive force between electron deficient aromatic rings and anions) in solution using aryl extended calix[4]pyrrole receptors as model systems. Experimentally, two series of calix[4]pyrrole receptors functionalized, respectively, with two and four aryl rings at the meso positions, were used to assess the strength of chloride-π interactions in acetonitrile solution. As a result of these studies the contribution of each individual chloride-π interaction was quantified to be very small (<1 kcal/mol). This result is in contrast with the values derived from most theoretical calculations. Herein we report a theoretical study using high-level density functional theory (DFT) calculations that provides a plausible explanation for the observed disagreement between theory and experiment. The study reveals the existence of molecular interactions between solvent molecules and the aromatic walls of the receptors that strongly modulate the chloride-π interaction. In addition, the obtained theoretical results also suggest that the chloride-calix[4]pyrrole complex used as reference to dissect experimentally the contribution of the chloride-π interactions to the total binding energy for both the two- and four-wall aryl-extended calix[4]pyrrole model systems is probably not ideal. PMID:25913375
DEFF Research Database (Denmark)
Pirayavaraporn, Chompak; Rades, Thomas
2013-01-01
Coalescence of polymer particles in polymer matrix tablets influences drug release. The literature has emphasized that coalescence occurs above the glass transition temperature (Tg) of the polymer and that water may plasticize (lower the Tg of) the polymer. However, we have shown previously that nonplasticizing water also influences coalescence of Eudragit RLPO, so there is a need to quantify the different types of water in Eudragit RLPO. The purpose of this study was to distinguish the types of water present in Eudragit RLPO polymer and to investigate the water loss kinetics for these different types of water. Eudragit RLPO was stored in tightly closed chambers at various relative humidities (0, 33, 56, 75, and 94%) until equilibrium was reached. Fourier transform infrared spectroscopy (FTIR)-DRIFTS was used to investigate molecular interactions between water and polymer, and water loss over time. Using a curve-fitting procedure, the water region (3100-3700 cm(-1)) of the spectra was analyzed and used to identify water present in differing environments in the polymer and to determine the water loss kinetics upon purging the sample with dry compressed air. It was found that four environments can be differentiated (dipole interaction of water with quaternary ammonium groups, water clusters, and water indirectly and directly binding to the carbonyl groups of the polymer), but it was not possible to distinguish whether the different types of water were lost at different rates. It is suggested that water is trapped in the polymer in different forms, and this should be considered when investigating coalescence of polymer matrices.
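Curve-fitting of an O-H stretch region, as described here, is commonly implemented as a least-squares decomposition of the band envelope into a sum of Gaussians. A hedged sketch on synthetic data; the band positions, widths, and the use of only two components are illustrative assumptions (the study above distinguishes four environments):

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amp, center, width):
    # One spectral band modeled as a Gaussian line shape.
    return amp * np.exp(-((x - center) ** 2) / (2 * width ** 2))

def two_gaussians(x, a1, c1, w1, a2, c2, w2):
    return gaussian(x, a1, c1, w1) + gaussian(x, a2, c2, w2)

# Synthetic water region (3100-3700 cm^-1) built from two hypothetical
# water environments; in practice this would be measured DRIFTS data.
x = np.linspace(3100, 3700, 300)
y = two_gaussians(x, 1.0, 3250.0, 60.0, 0.6, 3500.0, 50.0)

# Least-squares fit recovers the band parameters from the envelope.
popt, _ = curve_fit(two_gaussians, x, y, p0=[0.8, 3240, 50, 0.5, 3480, 40])
print(sorted([popt[1], popt[4]]))  # fitted band centers
```

Integrating each fitted component over repeated spectra during the dry-air purge would then give per-environment water loss curves, which is the kind of kinetic readout the study attempts.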
Kamaruzaman, J. Hj.; Mohd Hasmadi, I.
2009-01-01
Information about current land cover type is essential at a certain level to ensure the optimum use of land resources. Several approaches can be used to estimate land cover area; remote sensing and Geographic Information Systems (GIS) are among them. Therefore, this study was undertaken to evaluate how reliable these technologies are in preparing information about land cover in Carey Island, Selangor, Peninsular Malaysia. Erdas Imagine 9.1 was used in digital image processing. A p...
Optimal Uncertainty Quantification
Owhadi, Houman; Sullivan, Timothy John; McKerns, Mike; Ortiz, Michael
2010-01-01
We propose a rigorous framework for Uncertainty Quantification (UQ) in which the UQ objectives and the assumptions/information set are brought to the forefront. This framework, which we call \\emph{Optimal Uncertainty Quantification} (OUQ), is based on the observation that, given a set of assumptions and information about the problem, there exist optimal bounds on uncertainties: these are obtained as extreme values of well-defined optimization problems corresponding to extremizing probabilities of failure, or of deviations, subject to the constraints imposed by the scenarios compatible with the assumptions and information. In particular, this framework does not implicitly impose inappropriate assumptions, nor does it repudiate relevant information. Although OUQ optimization problems are extremely large, we show that under general conditions, they have finite-dimensional reductions. As an application, we develop \\emph{Optimal Concentration Inequalities} (OCI) of Hoeffding and McDiarmid type. Surprisingly, contr...
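The gap between a generic concentration inequality and an OUQ-optimal bound can be seen already in one dimension: for a random variable on [0,1] with known mean, the tight tail bound is attained by an explicit two-point distribution and can be sharper than the Hoeffding-type bound. A toy comparison; the single-variable setting and the specific numbers are illustrative, not taken from the paper:

```python
import math

def hoeffding_bound(m, a, n=1):
    # Hoeffding: P(mean of n i.i.d. [0,1] variables >= a) <= exp(-2 n (a - m)^2),
    # for a > m, using only the mean m and the boundedness assumption.
    return math.exp(-2.0 * n * (a - m) ** 2)

def optimal_bound(m, a):
    # OUQ-style tight bound: the supremum of P(X >= a) over ALL distributions
    # on [0,1] with E[X] = m is attained by a two-point mass on {0, a},
    # giving m / a (Markov's inequality, which is sharp here).
    return min(1.0, m / a)

m, a = 0.5, 0.9
print(hoeffding_bound(m, a))  # generic bound
print(optimal_bound(m, a))    # extremal-scenario bound, sharper here
```

This mirrors the paper's theme: given exactly the stated information set (support and mean), the extreme value of a well-defined optimization problem over admissible scenarios gives the best possible certificate, while a generic inequality may be loose.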
Celi, Simona; Berti, Sergio
2014-10-01
Optical coherence tomography (OCT) is a catheter-based medical imaging technique that produces cross-sectional images of blood vessels. This technique is particularly useful for studying coronary atherosclerosis. In this paper, we present a new framework that allows segmentation and quantification of OCT images of coronary arteries to define the plaque type and stenosis grading. These analyses are usually carried out on-line on the OCT workstation, where measuring is mainly operator-dependent and mouse-based. The aim of this program is to simplify and improve the processing of OCT images for morphometric investigations and to present a fast procedure to obtain 3D geometrical models that can also be used for external purposes such as finite element simulations. The main phases of our toolbox are the lumen segmentation and the identification of the main tissues in the artery wall. We validated the proposed method against identification and segmentation manually performed by expert OCT readers. The method was evaluated on ten datasets from clinical routine, and the validation was performed on 210 images randomly extracted from the pullbacks. Our results show that automated segmentation of the vessel and of the tissue components is possible off-line with a precision that is comparable to manual segmentation for the tissue components and to the proprietary OCT console for the lumen segmentation. Several OCT sections have been processed to provide clinical outcomes. PMID:25077844
DEFF Research Database (Denmark)
Jensen, Charlotte Harken; Hansen, M
1998-01-01
This paper compares the results of procollagen type I N-terminal propeptide (PINP) quantification by radioimmunoassay (RIA) and enzyme-linked immunosorbent assay (ELISA). PINP in serum from a patient with uremic hyperparathyroidism was measured by RIA and ELISA at 20 micrograms l-1 and 116 micrograms l-1, and the corresponding concentrations in dialysis fluid were 94.5 micrograms l-1 and 140 micrograms l-1, respectively. PINP antigen appears in two distinct peaks following size chromatography, and the two peak fractions display immunological identity and identical M(r)'s (27 kDa; SDS-PAGE). Analysis of fractions from size-separated amniotic fluid, serum and dialysis fluid demonstrated that the RIA failed to measure the low molecular weight form of PINP. However, the anti-PINP supplied with the RIA kit and the anti-PINP applied in the ELISA reacted equally well with both molecular forms of PINP when analysed in a direct ELISA. It is concluded that the major difference between the ELISA and RIA results is due to assay efficacy with respect to the low molecular weight form of PINP. Publication date: 12 Jan 1998.
DEFF Research Database (Denmark)
Pedersen, Henrik; Carlsen, Morten
1999-01-01
Two alpha-amylase-producing strains of Aspergillus oryzae, a wild-type strain and a recombinant containing additional copies of the alpha-amylase gene, were characterized with respect to enzyme activities, localization of enzymes to the mitochondria or cytosol, macromolecular composition, and metabolic fluxes through the central metabolism during glucose-limited chemostat cultivations. Citrate synthase and isocitrate dehydrogenase (NAD) activities were found only in the mitochondria; glucose-6-phosphate dehydrogenase and glutamate dehydrogenase (NADP) activities were found only in the cytosol; and isocitrate dehydrogenase (NADP), glutamate oxaloacetate transaminase, malate dehydrogenase, and glutamate dehydrogenase (NAD) activities were found in both the mitochondria and the cytosol. The measured biomass components and ash could account for 95% (wt/wt) of the biomass. The protein and RNA contents increased linearly with increasing specific growth rate, but the carbohydrate and chitin contents decreased. A metabolic model consisting of 69 fluxes and 59 intracellular metabolites was used to calculate the metabolic fluxes through the central metabolism at several specific growth rates, with ammonia or nitrate as the nitrogen source. The flux through the pentose phosphate pathway increased with increasing specific growth rate. The fluxes through the pentose phosphate pathway were 15 to 26% higher for the recombinant strain than for the wild-type strain.
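Metabolic flux calculations of the kind described reduce to linear algebra: the stoichiometric matrix S constrains the flux vector v through the steady-state balance S v = 0, and measured exchange fluxes pin down the remaining degrees of freedom. A toy version with 2 metabolites and 3 fluxes; the network is hypothetical and far smaller than the paper's 69-flux, 59-metabolite model:

```python
import numpy as np

# Stoichiometric matrix for a toy linear pathway (rows: metabolites A, B;
# columns: fluxes v1: uptake -> A, v2: A -> B, v3: B -> excretion).
S = np.array([[1.0, -1.0,  0.0],
              [0.0,  1.0, -1.0]])

# Steady state (S v = 0) plus one measured exchange flux (v1 = 1.0) fully
# determines the system; real models combine many such measurements.
A = np.vstack([S, [1.0, 0.0, 0.0]])
b = np.array([0.0, 0.0, 1.0])
v, *_ = np.linalg.lstsq(A, b, rcond=None)
print(v)  # -> approximately [1., 1., 1.]
```

With more fluxes than balances, the same least-squares formulation yields a best-fit flux distribution, which is how intracellular fluxes such as the pentose phosphate pathway split are estimated from extracellular measurements.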
DEFF Research Database (Denmark)
Kuld, Sebastian; Moses, Poul Georg
2014-01-01
Methanol has recently attracted renewed interest because of its potential importance as a solar fuel.1 Methanol is also an important bulk chemical that is most efficiently formed over the industrial Cu/ZnO/Al2O3 catalyst. The identity of the active site and, in particular, the role of ZnO as a promoter for this type of catalyst is still under intense debate.2 Structural changes that are strongly dependent on the pretreatment method have now been observed for an industrial-type methanol synthesis catalyst. A combination of chemisorption, reaction, and spectroscopic techniques provides a consistent picture of surface alloying between copper and zinc. This analysis enables a reinterpretation of the methods that have been used for the determination of the Cu surface area and provides an opportunity to independently quantify the specific Cu and Zn areas. This method may also be applied to other systems where metal–support interactions are important, and this work generally addresses the role of the carrier and the nature of the interactions between carrier and metal in heterogeneous catalysts.
On Behavioral Types for OSGi: From Theory to Implementation
Blech, Jan Olaf; Rueß, Harald; Schätz, Bernhard
2013-01-01
This report presents our work on behavioral types for OSGi component systems. It extends previously published work and presents features and details that have not yet been published. In particular, we cover a discussion of behavioral types in general, and Eclipse-based implementation work on behavioral types. The implementation work covers: editors, means for comparing types at development and runtime, a tool connection to resolve incompatibilities, and an AspectJ based inf...
DEFF Research Database (Denmark)
Lahriri, Said; Santos, Ilmar
2013-01-01
This paper treats an experimental study of a shaft impacting its stator for different cases. The paper focuses mainly on the measured contact forces and the shaft motion in two different types of backup bearings. As such, the measured contact forces are thoroughly studied. These measured contact forces enable the hysteresis loops to be computed and analyzed. Consequently, the contact forces are plotted against the local deformation in order to assess the contact force loss during the impacts. The shaft motion during contact with the backup bearing is verified with two-sided spectrum analyses. The analyses show that by use of a conventional annular guide, the shaft undergoes a direct transition from normal operation to a full annular backward whirling state for the case of external excitation. However, in a self-excited vibration case, where the speed is gradually increased and decreased through the first critical speed, the investigation revealed that different paths initiated the onset of backward whip and whirling motion. In order to improve the whirling and the full annular contact behavior, an unconventional pinned backup bearing is realized. The idea is to utilize pin connections that center the rotor during impacts and prevent the shaft from entering a full annular contact state. The experimental results show that the shaft escapes the pins and returns to a normal operational condition during an impact event. © 2013 Elsevier Ltd. All rights reserved.
Energy Technology Data Exchange (ETDEWEB)
Weber, Tim F. [University of Heidelberg, Department of Diagnostic and Interventional Radiology, Im Neuenheimer Feld 110, 69120 Heidelberg (Germany)], E-mail: tim.weber@med.uni-heidelberg.de; Ganten, Maria-Katharina [German Cancer Research Center, Department of Radiology, Im Neuenheimer Feld 280, 69120 Heidelberg (Germany)], E-mail: m.ganten@dkfz.de; Boeckler, Dittmar [University of Heidelberg, Department of Vascular and Endovascular Surgery, Im Neuenheimer Feld 110, 69120 Heidelberg (Germany)], E-mail: dittmar.boeckler@med.uni-heidelberg.de; Geisbuesch, Philipp [University of Heidelberg, Department of Vascular and Endovascular Surgery, Im Neuenheimer Feld 110, 69120 Heidelberg (Germany)], E-mail: philipp.geisbuesch@med.uni-heidelberg.de; Kauczor, Hans-Ulrich [University of Heidelberg, Department of Diagnostic and Interventional Radiology, Im Neuenheimer Feld 110, 69120 Heidelberg (Germany)], E-mail: hu.kauczor@med.uni-heidelberg.de; Tengg-Kobligk, Hendrik von [German Cancer Research Center, Department of Radiology, Im Neuenheimer Feld 280, 69120 Heidelberg (Germany)], E-mail: h.vontengg@dkfz.de
2009-12-15
Purpose: The purpose of this study was to characterize the heartbeat-related displacement of the thoracic aorta in patients with chronic aortic dissection type B (CADB). Materials and methods: Electrocardiogram-gated computed tomography angiography was performed during inspiratory breath-hold in 11 patients with CADB: Collimation 16 mm x 1 mm, pitch 0.2, slice thickness 1 mm, reconstruction increment 0.8 mm. Multiplanar reformations were taken for 20 equidistant time instances through both ascending (AAo) and descending aorta (true lumen, DAoT; false lumen, DAoF) and the vertex of the aortic arch (VA). In-plane vessel displacement was determined by region of interest analysis. Results: Mean displacement was 5.2 ± 1.7 mm (AAo), 1.6 ± 1.0 mm (VA), 0.9 ± 0.4 mm (DAoT), and 1.1 ± 0.4 mm (DAoF). This indicated a significant reduction of displacement from AAo to VA and DAoT (p < 0.05). The direction of displacement was anterior for AAo and cranial for VA. Conclusion: In CADB, the thoracic aorta undergoes a heartbeat-related displacement that exhibits an unbalanced distribution of magnitude and direction along the thoracic vessel course. Since consecutive traction forces on the aortic wall have to be assumed, these observations may have implications on pathogenesis of and treatment strategies for CADB.
Approximate Newton-type methods via theory of control
Yap, Chui Ying; Leong, Wah June
2014-12-01
In this paper, we investigate the possible use of control theory, particularly optimal control theory, to derive numerical methods for unconstrained optimization problems. Based upon this control theory, we derive a Levenberg-Marquardt-like method that guarantees greatest descent in a particular search region. The implementation of this method in its original form requires inversion of a non-sparse matrix, or equivalently solving a linear system, in every iteration. Thus, an approximation of the proposed method via a quasi-Newton update is constructed. Numerical results indicate that the new method is more effective and practical.
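To make the damped-Newton update behind such Levenberg-Marquardt-like methods concrete, here is a minimal sketch on a hypothetical quadratic objective. The damping parameter `lam`, the matrix `A`, and the vector `b` are illustrative assumptions, not the paper's exact scheme, and the quasi-Newton approximation of the Hessian is omitted:

```python
import numpy as np

def lm_step(grad, hessian, lam):
    """One damped-Newton (Levenberg-Marquardt-like) step.

    The damping parameter lam biases the step toward steepest descent
    and effectively restricts it to a region around the current iterate;
    the linear solve avoids forming an explicit matrix inverse.
    """
    n = grad.size
    return np.linalg.solve(hessian + lam * np.eye(n), -grad)

# Hypothetical quadratic test objective f(x) = 0.5 x^T A x - b^T x,
# whose minimizer is the solution of A x = b.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
x = np.zeros(2)
for _ in range(50):
    g = A @ x - b          # gradient of the quadratic at x
    x = x + lm_step(g, A, lam=0.1)

print(np.allclose(x, np.linalg.solve(A, b)))  # True: converged to the minimizer
```

Note that for a quadratic objective each damped step contracts the error, so the iterates converge to the exact minimizer; in the paper's setting the Hessian would additionally be replaced by a quasi-Newton approximation to avoid the dense solve.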
Directory of Open Access Journals (Sweden)
Voigt Christopher A
2010-10-01
Background: The type III secretion system (T3SS) is a molecular machine in gram-negative bacteria that exports proteins through both membranes to the extracellular environment. It has previously been demonstrated that the T3SS encoded in Salmonella Pathogenicity Island 1 (SPI-1) can be harnessed to export recombinant proteins. Here, we demonstrate the secretion of a variety of unfolded spider silk proteins and use these data to quantify the constraints of this system with respect to the export of recombinant protein. Results: To test how the timing and level of protein expression affect secretion, we designed a hybrid promoter that combines an IPTG-inducible system with a natural genetic circuit that controls effector expression in Salmonella (psicA). LacO operators are placed in various locations in the psicA promoter; the optimal induction occurs when a single operator is placed at +5nt (234-fold), and a lower basal level of expression is achieved when a second operator is placed at -63nt to take advantage of DNA looping. Using this tool, we find that the secretion efficiency (protein secreted divided by total expressed) is constant as a function of total expressed. We also demonstrate that the secretion flux peaks at 8 hours. We then use whole-gene DNA synthesis to construct codon-optimized spider silk genes for full-length (3129 amino acids) Latrodectus hesperus dragline silk, Bombyx mori cocoon silk, and Nephila clavipes flagelliform silk, and PCR is used to create eight truncations of these genes. These proteins are all unfolded polypeptides, and they encompass a variety of lengths, charges, and amino acid compositions. We find that proteins shorter than 550 amino acids secrete reliably, and the probability of secretion declines significantly beyond ~700 amino acids. There is also a charge optimum at -2.4, and secretion efficiency declines for very positively or negatively charged proteins. There is no significant correlation with hydrophobicity.
Conclusions: We show that the natural system encoded in SPI-1 only produces high titers of secreted protein for 4-8 hours when the natural psicA promoter is used to drive expression. Secretion efficiency can be high, but declines for charged or large sequences. A quantitative characterization of these constraints will facilitate the effective use and engineering of this system.
DEFF Research Database (Denmark)
Li, Yiping; Handberg, K.J.
2007-01-01
In the present study, different types of infectious bursal disease virus (IBDV), the virulent strain DK01, the classic strain F52/70, and the vaccine strain D78, were quantified and detected in infected bursa of Fabricius (BF) and cloacal swabs using quantitative real-time RT-PCR with SYBR green dye. For selection of a suitable internal control gene, real-time PCR parameters were evaluated for three candidate genes: glyceraldehyde-3-phosphate dehydrogenase (GAPDH), 28S rRNA, and beta-actin. On this basis, beta-actin was selected as an internal control for quantification of IBDVs in BF. All BF samples inoculated with D78, DK01, or F52/70 were detected as virus positive at day 1 post inoculation (p.i.). The D78 viral load peaked at day 4 and day 8 p.i., while the DK01 and F52/70 viral loads showed relatively high levels at day 2 p.i. In cloacal swabs, viruses were detectable at day 2 p.i. for DK01 and F52/70 and at day 8 p.i. for D78. Importantly, the primer sets were specific: the D78 primer set gave no amplification of F52/70 and DK01, and the DK01 primer set gave no amplification of D78; thus DK01 and D78 could be quantified simultaneously in dually infected chickens by use of these two sets of primers. The method described here is robust and may serve as a useful tool with high capacity for diagnostics as well as for viral pathogenesis studies.
Substructural Simple Type Theories for Separation and In-place Update
Atkey, Robert
2006-01-01
This thesis studies two substructural simple type theories, extending the "separation" and "number-of-uses" readings of the basic substructural simply typed lambda-calculus with exchange. The first calculus, lambda_sep, extends the alpha lambda-calculus of O'Hearn and Pym by directly considering the representation of separation in a type system. We define type contexts with separation relations and introduce new type constructors of separated products and separated funct...
Krichever-Novikov type algebras theory and applications
Schlichenmaier, Martin
2014-01-01
Krichever and Novikov introduced certain classes of infinite dimensional Lie algebras to extend the Virasoro algebra and its related algebras to Riemann surfaces of higher genus. The author of this book generalized and extended them to a more general setting needed by the applications. Examples of applications are Conformal Field Theory, Wess-Zumino-Novikov-Witten models, moduli space problems, integrable systems, Lax operator algebras, and deformation theory of Lie algebras. Furthermore, they constitute an important class of infinite dimensional Lie algebras which due to their geometric origin are
The ultimate speed implied by theories of Weber's type
International Nuclear Information System (INIS)
As in the last few years there has been a renewed interest in the laws of Ampere for the force between current elements and of Weber for the force between charges, we analyze the limiting velocity which appears in Weber's law. Then we make the same analysis for Phipps' potential and for generalizations of it. Comparing the results with the relativistic calculation, we obtain that these theories can yield c for the ultimate speed of the charges or for the ultimate relative speed between the charges, but not for both simultaneously, as is the case in the special theory of relativity. 59 refs., 2 figs
M Theory, Type IIA Superstrings, and Elliptic Cohomology
Kriz, Igor; Sati, Hisham
2004-01-01
The topological part of the M-theory partition function was shown by Witten to be encoded in the index of an E8 bundle in eleven dimensions. This partition function is, however, not automatically anomaly-free. We observe here that the vanishing W_7=0 of the Diaconescu-Moore-Witten anomaly in IIA and compactified M-theory partition function is equivalent to orientability of spacetime with respect to (complex-oriented) elliptic cohomology. Motivated by this, we define an ellip...
Bianchi Type VI1 Viscous Fluid Cosmological Model in Wesson's Theory of Gravitation
KHADEKAR, Govardhan S.; AVACHAR, Gajanan Rambhau
2007-01-01
Field equations of a scale invariant theory of gravitation proposed by Wesson [1, 2] are obtained in the presence of viscous fluid with the aid of Bianchi type VIh space-time with the time dependent gauge function (Dirac gauge). It is found that Bianchi type VIh (h = 1) space-time with viscous fluid is feasible in this theory, whereas Bianchi type VIh (h = -1, 0) space-times are not feasible in this theory, even in the presence of viscosity. For the feasible case, by assuming a relat...
A New Look at Generalized Rewriting in Type Theory
Directory of Open Access Journals (Sweden)
Matthieu Sozeau
2009-01-01
Rewriting is an essential tool for computer-based reasoning, both automated and assisted. This is because rewriting is a general notion that permits modeling a wide range of problems and provides a means to effectively solve them. In a proof assistant, rewriting can be used to replace terms in arbitrary contexts, generalizing the usual equational reasoning to reasoning modulo arbitrary relations. This can be done provided the necessary proofs that functions appearing in goals are congruent with respect to specific relations. We present a new implementation of generalized rewriting in the Coq proof assistant, making essential use of the expressive power of dependent types and the recently implemented type class mechanism. The new rewrite tactic improves on and generalizes previous versions by natively supporting higher-order functions, polymorphism and subrelations. The type class system inspired by Haskell provides a perfect interface between the user and the tactic, making it easily extensible.
The universal theory of ordered equidecomposability types semigroups
Wehrung, Friedrich
1994-01-01
We prove that a commutative preordered monoid $S$ embeds into the space of all equidecomposability types of subsets of some set equipped with a group action (in short, a full type space) if and only if for all $x$, $y$, $u$, $v$ in $S$, the following statements hold: $0\leq x$; $x\leq y$ and $y\leq x$ implies that $x=y$; $x+u\leq y+u$ and $u\leq v$ implies that $x+v\leq y+v$; $mx\leq my$ implies that $x\leq y$, for all positive integers $m$. Furthermore, such a structure can always be embedde...
The Classification of Gun’s Type Using Image Recognition Theory
M.L.Kulthon Kasemsan
2014-01-01
The research aims to develop the Gun’s Type and Models Classification (GTMC) system using image recognition theory. It is expected that this study can serve as a guide for law enforcement agencies or at least serve as the catalyst for a similar type of research. Master image storage and image recognition are the two main processes. The procedures involved original images, scaling, gray scale, canny edge detector, SUSAN corner detector, block matching template, and finally gun type’s recogniti...
Eady Solitary Waves: A Theory of Type B Cyclogenesis.
Mitsudera, Humio
1994-11-01
Localized baroclinic instability in a weakly nonlinear, long-wave limit using an Eady model is studied. The resulting evolution equations have a form of the KdV type, including extra terms representing linear coupling. Baroclinic instability is triggered locally by the collision between two neutral solitary waves (one trapped at the upper boundary and the other at the lower boundary) if their incident amplitudes are sufficiently large. This characteristic is explained from the viewpoint of resonance when the relative phase speed, which depends on the amplitudes, is less than a critical value. The upper and lower disturbances grow in a coupled manner (resembling a normal-mode structure) initially, but they reverse direction slowly as the amplitudes increase, and eventually separate from each other. The motivation of this study is to investigate a type of extratropical cyclogenesis that involves a preexisting upper trough (termed Type B development) from the viewpoint of resonant solitary waves. Two cases are of particular interest. First, the author examines a case where an upper disturbance preexists over an undisturbed low-level waveguide. The solitary waves exhibit behavior similar to that conceived by Hoskins et al. for Type B development; the lower disturbance is forced one-sidedly by a preexisting upper disturbance initially, but in turn forces the latter once the former attains a sufficient amplitude, thus resulting in mutual reinforcement. Second, if a weak perturbation exists at the surface ahead of the preexisting strong upper disturbance, baroclinic instability is triggered when the two waves interact. Even though the amplitude of the lower disturbance is initially much weaker, it is intensified quickly and catches up with the amplitude of the upper disturbance, so that the coupled vertical structure eventually resembles that of an unstable normal mode. These results describe the observed behavior in Type B atmospheric cyclogenesis quite well.
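To fix ideas, evolution equations "of the KdV type, including extra terms representing linear coupling" can be sketched schematically as follows, where $A(x,t)$ and $B(x,t)$ denote the upper- and lower-boundary wave amplitudes. The coefficients are illustrative placeholders, not the paper's exact system:

```latex
% Schematic coupled KdV-type system (illustrative coefficients only):
\begin{aligned}
\partial_t A + c_A \,\partial_x A + \alpha_A A \,\partial_x A + \beta_A \,\partial_x^3 A &= \gamma_A B, \\
\partial_t B + c_B \,\partial_x B + \alpha_B B \,\partial_x B + \beta_B \,\partial_x^3 B &= \gamma_B A .
\end{aligned}
```

The right-hand sides are the linear coupling terms: when the two solitary waves overlap and their relative phase speed is small enough, this coupling can feed one wave from the other, which is the resonance mechanism described above.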
Matrix Models, Integrable Structures, and T-duality of Type 0 String Theory
Yin, Xi
2003-01-01
Instanton matrix models (IMM) for two dimensional string theories are obtained from the matrix quantum mechanics (MQM) of the T-dual theory. In this paper we study the connection between the IMM and MQM, which amounts to understand T-duality from the viewpoint of matrix models. We show that type 0A and type 0B matrix models perturbed by purely closed string momentum modes (or purely winding modes) have the integrable structure of Toda hierarchies, extending the well known re...
Cosmic web-type classification using decision theory
Leclercq, Florent; Wandelt, Benjamin
2015-01-01
We propose a decision criterion for segmenting the cosmic web into different structure types (voids, sheets, filaments and clusters) on the basis of their respective probabilities and the strength of data constraints. Our approach is inspired by an analysis of games of chance where the gambler only plays if a positive expected net gain can be achieved based on some degree of privileged information. The result is a general solution for classification problems in the face of uncertainty, including the option of not committing to a class for a candidate object. As an illustration, we produce high-resolution maps of web-type constituents in the nearby Universe as probed by the Sloan Digital Sky Survey main galaxy sample. Other possible applications include the selection and labeling of objects in catalogs derived from astronomical survey data.
Cosmic web-type classification using decision theory
Leclercq, F.; Jasche, J.; Wandelt, B.
2015-04-01
Aims: We propose a decision criterion for segmenting the cosmic web into different structure types (voids, sheets, filaments, and clusters) on the basis of their respective probabilities and the strength of data constraints. Methods: Our approach is inspired by an analysis of games of chance where the gambler only plays if a positive expected net gain can be achieved based on some degree of privileged information. Results: The result is a general solution for classification problems in the face of uncertainty, including the option of not committing to a class for a candidate object. As an illustration, we produce high-resolution maps of web-type constituents in the nearby Universe as probed by the Sloan Digital Sky Survey main galaxy sample. Other possible applications include the selection and labelling of objects in catalogues derived from astronomical survey data.
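The gambler's criterion described above, play only when the expected net gain is positive, otherwise abstain, can be sketched in a few lines. The posterior probabilities and the gain/loss values below are hypothetical placeholders, not the paper's calibrated utilities:

```python
import numpy as np

def decide(probs, gain_correct=1.0, loss_wrong=-1.0):
    """Pick the class with the largest expected net gain,
    or abstain (return None) if no class achieves a positive one."""
    expected = probs * gain_correct + (1.0 - probs) * loss_wrong
    best = int(np.argmax(expected))
    return best if expected[best] > 0.0 else None

# Hypothetical posteriors for four web types at one location
# (void, sheet, filament, cluster); values are illustrative only.
print(decide(np.array([0.55, 0.25, 0.15, 0.05])))   # 0: data constrain the type
print(decide(np.array([0.30, 0.30, 0.20, 0.20])))   # None: not committing
```

With symmetric gains the expected gain of class $i$ is $2p_i - 1$, so the rule commits only where some posterior exceeds 1/2; asymmetric gain/loss values shift that threshold, which is how the strength of the data constraints enters the classification.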
On the homotopy theory of n-types
Biedermann, Georg
2006-01-01
An n-truncated model structure on simplicial (pre-)sheaves is described having as weak equivalences maps that induce isomorphisms on certain homotopy sheaves only up to degree n. Starting from one of Jardine's intermediate model structures we construct such an n-type model structure via Bousfield-Friedlander localization and exhibit useful generating sets of trivial cofibrations. Injectively fibrant objects in these categories are called n-hyperstacks. The whole setup can co...
Counting BPS Blackholes in Toroidal Type II String Theory
Maldacena, Juan; Moore, Gregory; Strominger, Andrew
1999-01-01
We derive a $U$-duality invariant formula for the degeneracies of BPS multiplets in a D1-D5 system for toroidal compactification of the type II string. The elliptic genus for this system vanishes, but it is found that BPS states can nevertheless be counted using a certain topological partition function involving two insertions of the fermion number operator. This is possible due to four extra toroidal U(1) symmetries arising from a Wigner contraction of a large $\mathcal{N}=4$ algeb...
Quantification of margins and uncertainties: Alternative representations of epistemic uncertainty
International Nuclear Information System (INIS)
In 2001, the National Nuclear Security Administration of the U.S. Department of Energy in conjunction with the national security laboratories (i.e., Los Alamos National Laboratory, Lawrence Livermore National Laboratory and Sandia National Laboratories) initiated development of a process designated Quantification of Margins and Uncertainties (QMU) for the use of risk assessment methodologies in the certification of the reliability and safety of the nation's nuclear weapons stockpile. A previous presentation, 'Quantification of Margins and Uncertainties: Conceptual and Computational Basis,' describes the basic ideas that underlie QMU and illustrates these ideas with two notional examples that employ probability for the representation of aleatory and epistemic uncertainty. The current presentation introduces and illustrates the use of interval analysis, possibility theory and evidence theory as alternatives to the use of probability theory for the representation of epistemic uncertainty in QMU-type analyses. The following topics are considered: the mathematical structure of alternative representations of uncertainty, alternative representations of epistemic uncertainty in QMU analyses involving only epistemic uncertainty, and alternative representations of epistemic uncertainty in QMU analyses involving a separation of aleatory and epistemic uncertainty. Analyses involving interval analysis, possibility theory and evidence theory are illustrated with the same two notional examples used in the presentation indicated above to illustrate the use of probability to represent aleatory and epistemic uncertainty in QMU analyses.
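The contrast between probabilistic and non-probabilistic representations of epistemic uncertainty can be illustrated with interval analysis, the simplest of the alternatives named above. The model `f` and the input bounds below are hypothetical, not drawn from the QMU examples:

```python
# Minimal sketch of interval analysis for epistemic uncertainty:
# the response y = f(a, b) is bounded when the inputs are known only
# to lie in intervals and no probability distribution is assumed.
# The model f and the bounds are hypothetical.

def f(a, b):
    return a * b + a  # monotone in both arguments for a, b > 0

def interval_propagate(f, a_lo, a_hi, b_lo, b_hi):
    """Propagate interval bounds through a function by evaluating it
    at the corners (valid here because f is monotone in each input)."""
    corners = [f(a, b) for a in (a_lo, a_hi) for b in (b_lo, b_hi)]
    return min(corners), max(corners)

lo, hi = interval_propagate(f, 1.0, 2.0, 3.0, 4.0)
print(lo, hi)  # 4.0 10.0: all that can be asserted without a distribution
```

Where a probabilistic analysis would report a distribution (and hence percentiles) for the response, the interval analysis reports only the bounds [4, 10]; possibility theory and evidence theory refine this picture by attaching graded structures to such sets.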
E7 type modular invariant Wess-Zumino theory and Gepner's string compactification
International Nuclear Information System (INIS)
The report addresses the development of a general procedure to study the structure of the operator algebra in off-diagonal modular invariant theories. An effort is made to carry out this procedure in the E7 type modular invariant Wess-Zumino-Witten theory and to explicitly check the closure of the operator product algebra, which is required for any consistent conformal field theory. The conformal field theory is utilized to construct perturbative vacua in string theory. Apparently quite nontrivial vacua can be constructed out of minimal models of the N = 2 superconformal theory. Here, an investigation is made of the Yukawa couplings of such a model, which uses E7 type off-diagonal modular invariance. Phenomenological properties of this model are also discussed. Although off-diagonal modular invariant theories are rather special, realistic models seem to require very special manifolds. Therefore they may enhance the viability of string theory to describe the real world. A study is also made of Verlinde's fusion algebra in the E7 modular invariant theory. It is determined in the holomorphic sector only. Furthermore, the indicator is given by the modular transformation matrix. A pair of operators which operate on the characters plays a crucial role in this theory. (Nogami, K.)
Theory of zeolite supralattices: Se in zeolite Linde type A
International Nuclear Information System (INIS)
We theoretically study the properties of Se clusters in zeolites, choosing zeolite Linde type A (LTA) as a prototype system. The geometries of free-space Se clusters are first determined, and we report the energetics and electronic and vibrational properties of these clusters. The work on clusters includes an investigation of the energetics of C3-C1 defect formation in Se rings and chains. The electronic properties of two Se crystalline polymorphs, trigonal Se and α-monoclinic Se, are also determined. Electronic and vibrational properties of the zeolite LTA are investigated. Next we investigate the electronic and optical properties of ring-like Se clusters inside the large α-cages of LTA. We find that Se clusters inside cages of silaceous LTA have very little interaction with the zeolite, and that the HOMO-LUMO gaps (HOMO standing for highest occupied molecular orbital and LUMO for lowest unoccupied molecular orbital) are nearly those of the isolated cluster. The HOMO-LUMO gaps of Se6, Se8, and Se12 are found to be similar, which makes it difficult to identify them experimentally by absorption spectroscopy. We find that the zeolite/Se8 nanocomposite is lower in energy than the two separated systems. We also investigate two types of infinite chain encapsulated in LTA. Finally, we carry out finite-temperature molecular dynamics simulations for an encapsulated Se12 cluster, which shows cluster melting and formation of nanoscale Se droplets in the α-cages of LTA. (author)
On the field theory of the extended-type electron
International Nuclear Information System (INIS)
In a recent paper, the classical theory of Barut and Zhanghi (BZ) for the electron spin [which interpreted the Zitterbewegung (zbw) motion as an internal motion along helical paths] and its ''quantum'' version have been investigated by using the language of Clifford algebras. In so doing, a new non-linear Dirac-like equation (NDE) was derived. We want to readdress the whole subject, and ''complete'' it, by adopting - for the sake of physical clarity - the ordinary tensorial language, within the frame of a first quantization formalism. In particular, we re-derive here the NDE for the electron field and show it to be associated with a new conserved probability current, which allows us to work out a quantum probability interpretation of the NDE. Actually, we propose this equation in substitution for the Dirac equation, which is obtained from the former by averaging over a zbw cycle. We then derive a new equation of motion for the 4-velocity field which will allow us to regard the electron as an extended object with a classically intelligible internal structure (thus overcoming some known, long-standing problems). We carefully study the solutions of the NDE, with special attention to those implying (at the classical limit) light-like helical motions, since they appear to be the most adequate solutions for the electron description from a kinematical and physical point of view, and to cope with the electromagnetic properties of the electron. (author). 18 refs
Digital games for type 1 and type 2 diabetes: underpinning theory with three illustrative examples.
Kamel Boulos, Maged N; Gammon, Shauna; Dixon, Mavis C; MacRury, Sandra M; Fergusson, Michael J; Miranda Rodrigues, Francisco; Mourinho Baptista, Telmo; Yang, Stephen P
2015-01-01
Digital games are an important class of eHealth interventions in diabetes, made possible by the Internet and a good range of affordable mobile devices (eg, mobile phones and tablets) available to consumers these days. Gamifying disease management can help children, adolescents, and adults with diabetes to better cope with their lifelong condition. Gamification and social in-game components are used to motivate players/patients and positively change their behavior and lifestyle. In this paper, we start by presenting the main challenges facing people with diabetes-children/adolescents and adults-from a clinical perspective, followed by three short illustrative examples of mobile and desktop game apps and platforms designed by Ayogo Health, Inc. (Vancouver, BC, Canada) for type 1 diabetes (one example) and type 2 diabetes (two examples). The games target different age groups with different needs-children with type 1 diabetes versus adults with type 2 diabetes. The paper is not meant to be an exhaustive review of all digital game offerings available for people with type 1 and type 2 diabetes, but rather to serve as a taster of a few of the game genres on offer today for both types of diabetes, with a brief discussion of (1) some of the underpinning psychological mechanisms of gamified digital interventions and platforms as self-management adherence tools, and more, in diabetes, and (2) some of the hypothesized potential benefits that might be gained from their routine use by people with diabetes. More research evidence from full-scale evaluation studies is needed and expected in the near future that will quantify, qualify, and establish the evidence base concerning this gamification potential, such as what works in each age group/patient type, what does not, and under which settings and criteria. PMID:25791276
International Nuclear Information System (INIS)
This paper describes the numerical analysis of the magnetization process of type II superconductors based on the Ginzburg-Landau theory. A computer code based on the finite difference method was developed and applied to simulate the magnetization process of type I and type II superconductors. As regards the time integration method, the CPU times required for the computation were compared for two difference schemes, the backward and the forward schemes, for several values of the GL parameter.
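The trade-off between the forward and backward schemes compared above can be illustrated on a much simpler problem. The sketch below applies both schemes to a 1-D diffusion equation, not the actual Ginzburg-Landau system, and the grid and time step are hypothetical:

```python
import numpy as np

# Illustrative sketch (not the paper's Ginzburg-Landau code): the
# forward (explicit) scheme is cheap per step but only conditionally
# stable, while the backward (implicit) scheme requires a linear solve
# per step yet remains stable for large time steps.

n, dx, D = 50, 1.0, 1.0
dt = 2.0  # deliberately above the explicit stability limit dx**2 / (2*D)

# Standard second-order finite-difference Laplacian.
lap = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
       + np.diag(np.ones(n - 1), -1)) / dx**2

u0 = np.exp(-((np.arange(n) - n / 2) ** 2) / 10.0)  # smooth initial profile
u_fwd, u_bwd = u0.copy(), u0.copy()
for _ in range(100):
    u_fwd = u_fwd + dt * D * (lap @ u_fwd)                    # forward Euler
    u_bwd = np.linalg.solve(np.eye(n) - dt * D * lap, u_bwd)  # backward Euler

print(np.abs(u_fwd).max() > 1e6)                # True: explicit scheme blows up
print(np.abs(u_bwd).max() <= np.abs(u0).max())  # True: implicit scheme stays bounded
```

This is the essential CPU-time trade-off the abstract refers to: the backward scheme pays for a linear solve every step but can take far larger stable time steps than the forward scheme.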
Vortex-type half-BPS solitons in Aharony-Bergman-Jafferis-Maldacena theory
International Nuclear Information System (INIS)
We study the Aharony-Bergman-Jafferis-Maldacena (ABJM) theory without and with mass deformation. It is shown that maximally supersymmetry preserving, D-term, and F-term mass deformations of a single mass parameter are equivalent. We obtain vortex-type half-BPS equations and the corresponding energy bound. For the undeformed ABJM theory, the resulting half-BPS equation is the same as that in supersymmetric Yang-Mills theory, and no finite energy regular BPS solution is found. For the mass-deformed ABJM theory, the half-BPS equations for the U(2)xU(2) case reduce to the vortex equation in Maxwell-Higgs theory, which supports static regular multivortex solutions. In the U(N)xU(N) case with N>2, the non-Abelian vortex equation of Yang-Mills-Higgs theory is obtained.
MAMA Software Features: Visual Examples of Quantification
Energy Technology Data Exchange (ETDEWEB)
Ruggiero, Christy E. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Porter, Reid B. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2014-05-20
This document shows examples of the results from quantifying objects of certain sizes and types in the software. It is intended to give users a better feel for some of the quantification calculations, and, more importantly, to help users understand the challenges with using a small set of ‘shape’ quantification calculations for objects that can vary widely in shapes and features. We will add more examples to this in the coming year.
An SL(2, Z) Multiplet of Black Holes in $D = 4$ Type II Superstring Theory
Das, A C; Roy, S; Das, Ashok; Maharana, Jnanadeva; Roy, Shibaji
1998-01-01
It is well known that the conjectured SL(2, Z) invariance of type IIB string theory in ten dimensions also persists in lower dimensions when the theory is compactified on tori. Making use of this observation, we construct an infinite family of magnetically charged black hole solutions of type II superstring theory in four space-time dimensions. These solutions are characterized by two relatively prime integers corresponding to the magnetic charges associated with the two gauge fields (from the NS-NS and R-R sectors) of the theory and form an SL(2, Z) multiplet. In the extremal limit these solutions are stable, as they are prevented from decaying into black holes of lower masses by a 'mass gap' equation.
Cartan's equations define a topological field theory of the BF type
International Nuclear Information System (INIS)
Cartan's first and second structure equations, together with the first and second Bianchi identities, can be interpreted as equations of motion for the tetrad, the connection and a set of two-form fields T^I and R^I_J. From this viewpoint, these equations define by themselves a field theory. Restricting the analysis to four-dimensional spacetimes (keeping gravity in mind), it is possible to give an action principle of the BF type from which these equations of motion are obtained. The action turns out to be equivalent to a linear combination of the Nieh-Yan, Pontrjagin, and Euler classes, and so the field theory defined by the action is topological. Once Einstein's equations are added, the resulting theory is general relativity. Therefore, the current results show that the relationship between general relativity and topological field theories of the BF type is also present in the first-order formalism for general relativity.
Maximal R-symmetry violating amplitudes in type IIB superstring theory.
Boels, Rutger H
2012-08-24
On-shell superspace techniques are used to quantify R-symmetry violation in type IIB superstring theory amplitudes in a flat background in 10 dimensions. This shows the existence of a particularly simple class of nonvanishing amplitudes in this theory, which violate R-symmetry maximally. General properties of the class and some of its extensions are established; at string tree level these are shown to determine the first three nontrivial effective field theory contributions at all multiplicities. This leads to a natural conjecture for the exact analytic part of the first two of these. PMID:23002738
Nonperturbative type IIB model building in the F-theory framework
Energy Technology Data Exchange (ETDEWEB)
Jurke, Benjamin Helmut Friedrich
2011-02-28
This dissertation is concerned with the topic of non-perturbative string theory, which is generally considered to be the most promising approach to a consistent description of quantum gravity. The five known 10-dimensional perturbative string theories are all interconnected by numerous dualities, such that an underlying non-perturbative 11-dimensional theory, called M-theory, is postulated. Due to several technical obstacles, little is known about the fundamental objects in this theory. There exists an alternative non-perturbative description to type IIB string theory, namely F-theory. Here the SL(2;Z) self-duality of IIB theory is geometrized in the form of an elliptic fibration over the space-time. Moreover, higher-dimensional objects like 7-branes are included via singularities into the geometric picture. This formally elegant description, however, requires significant technical effort for the construction of suitable compactification geometries, as many different aspects necessarily have to be dealt with at the same time. On the other hand, the generation of essential GUT building blocks like certain Yukawa couplings or spinor representations is easier compared to perturbative string theory. The goal of this study is therefore to formulate a unified theory within the framework of F-theory, that satisfies basic phenomenological constraints. Within this thesis, at first E3-brane instantons in type IIB string theory - 4-dimensional objects that are entirely wrapped around the invisible dimensions of space-time - are matched with M5-branes in F-theory. Such objects are of great importance in the generation of critical Yukawa couplings or the stabilization of the free parameters of a theory. Certain properties of M5-branes then allow to derive a new criterion for E3-branes to contribute to the superpotential. 
Following this analysis, several compactification geometries are constructed and checked for basic properties relevant for semi-realistic unified model building. An important aspect is the proper handling of the gauge flux on the 7-branes. Via the spectral cover description - which at first requires further refinements - chiral matter can be generated and the unified gauge group can be broken to the Standard Model. Ultimately, an explicit unified model based on the gauge group SU(5) is constructed within the F-theory framework, such that an acceptable phenomenology and the observed three chiral matter generations are obtained. (orig.)
What does it take for a specific prospect theory type household to engage in risky investment?
HLOUSKOVA, Jaroslava; Tsigaris, Panagiotis
2012-01-01
This research note examines the conditions that will induce a prospect theory type investor, whose reference level is set by 'playing it safe', to invest in a risky asset. The conditions indicate that this type of investor requires a large equity premium to invest in risky assets. However, once she does invest because of a large premium, she becomes aggressive and buys or sells until an externally imposed upper or lower bound is reached.
On the theory of chemical evolution of neutral atmospheres of Halley-type comets
International Nuclear Information System (INIS)
An analytical theory of the chemical evolution of the neutral atmospheres of Halley-type comets, whose icy nuclei consist mainly of water ice, is presented. The distribution law of hydrogen atoms, which appear in the atmospheres of such comets as a result of photodissociation of water molecules escaping from the nucleus under the influence of short-wave solar radiation, is derived, and the conditions under which this law is realized are shown.
International Nuclear Information System (INIS)
In this paper, the iteration formula of the Maslov-type index theory for linear Hamiltonian systems with continuous periodic and symmetric coefficients is established. This formula yields a new method to determine the minimality of the period for solutions of nonlinear autonomous Hamiltonian systems via their Maslov-type indices. Applications of this formula give new results on the existence of periodic solutions with prescribed minimal period for such systems. (author). 40 refs
Cosmic string solution in a Born-Infeld type theory of gravity
Energy Technology Data Exchange (ETDEWEB)
Rocha, W.J. da [Universidade de Brasilia (UnB), DF (Brazil). Inst. de Fisica; Naves de Oliveira, A.L. [Universidade Federal de Vicosa (UFV), Rio Paranaiba, MG (Brazil); Guimaraes, M.E.X. [Universidade Federal Fluminense (UFF), Niteroi, RJ (Brazil). Inst. de Fisica
2009-07-01
Advances in the formal structure of string theory point to the emergence, and necessity, of a scalar-tensor theory of gravity. It seems that, at least at high energy scales, Einstein's theory is not enough to explain gravitational phenomena. In other words, the existence of a scalar (gravitational) field acting as a mediator of the gravitational interaction, together with the usual rank-2 tensor field, is a natural prediction of unification models such as supergravity, superstrings and M-theory. This type of modified gravitation was first introduced in a different context in the 1960s, in order to incorporate Mach's principle into relativity, but it has since acquired a different role in cosmology and gravity theories. Although such unification theories are the most widely accepted, they are all formulated in higher-dimensional spaces. The compactification from these higher dimensions to 4-dimensional physics is not unique, and many effective theories of gravity arise from the unification process. Each of them must, of course, satisfy certain predictions. In this paper we deal with one of them, the so-called NDL theory. One important assumption in General Relativity is that all fields interact in the same way with gravity; this is the Strong Equivalence Principle (SEP). It is well known, with good accuracy, that this holds for matter-to-matter interactions, i.e., the Weak Equivalence Principle (WEP) is tested. But, until now, there is no direct observational confirmation for the gravity-to-gravity interaction. An extension of the field-theoretical description of General Relativity is used to propose an alternative field theory of gravity, in which gravitons propagate in a different spacetime and the velocity of propagation of gravitational waves does not coincide with the predictions of General Relativity. (author)
Modular invariance and the gravitational anomaly in type II superstring theory
Energy Technology Data Exchange (ETDEWEB)
Hayashi, Masahito; Kawamoto, Noboru; Kuramoto, Tetsuji; Shigemoto, Kazuyasu
1987-12-07
By explicit calculations we show that the one-loop parity-violating amplitude with six external gravitons is modular invariant and finite. As a natural consequence of the modular invariance and double periodicity of the amplitude with respect to torus parameters, the gravitational anomaly of type II superstring theory vanishes.
The central error of M. W. Evans ECE theory - a type mismatch
Bruhn, G W
2006-01-01
This note corrects an erroneous article by M.W. Evans on his GCUFT theory, which he took over into his GCUFT book. Due to Evans' habit of suppressing seemingly unimportant indices, type-mismatch errors occur that cannot be removed. In addition, some further errors of that article/book chapter are pointed out.
Sparticle spectrum and dark matter in type I string theory with an intermediate scale
Bailin, David; Love, A
2000-01-01
The supersymmetric particle spectrum is calculated in type I string theories formulated as orientifold compactifications of type IIB string theory. A string scale at an intermediate value of $10^{11}-10^{12}$ GeV is assumed and extra vector-like matter states are introduced to allow unification of gauge coupling constants to occur at this scale. The qualitative features of the spectrum are compared with Calabi-Yau compactification of the weakly coupled heterotic string and with the eleven dimensional supergravity limit of $M$-theory. Some striking differences are observed. Assuming that the lightest neutralino provides the dark matter in the universe, further constraints on the sparticle spectrum are obtained. Direct detection rates for dark matter are estimated.
International Nuclear Information System (INIS)
Many inner ear disorders, including Meniere's disease, are believed to be based on endolymphatic hydrops. We evaluated a newly proposed method for semi-quantification of endolymphatic size in patients with suspected endolymphatic hydrops that uses 2 kinds of processed magnetic resonance (MR) images. Twenty-four consecutive patients underwent heavily T2-weighted (hT2W) MR cisternography (MRC), hT2W 3-dimensional (3D) fluid-attenuated inversion recovery (FLAIR) with inversion time of 2250 ms (positive perilymph image, PPI), and hT2W-3D-IR with inversion time of 2050 ms (positive endolymph image, PEI) 4 hours after intravenous administration of single-dose gadolinium-based contrast material (IV-SD-GBCM). Two images were generated using 2 new methods to process PPI, PEI, and MRC. Three radiologists contoured the cochlea and vestibule on MRC, copied regions of interest (ROIs) onto the 2 kinds of generated images, and semi-quantitatively measured the size of the endolymph for the cochlea and vestibule by setting a threshold pixel value. Each observer noted a strong linear correlation between endolymphatic size of both the cochlea and vestibule of the 2 kinds of generated images. The Pearson correlation coefficients (r) were 0.783, 0.734, and 0.800 in the cochlea and 0.924, 0.930, and 0.933 in the vestibule (P<0.001, for all). In both the cochlea and vestibule, repeated-measures analysis of variance showed no statistically significant difference between observers. Use of the 2 kinds of generated images generated from MR images obtained 4 hours after IV-SD-GBCM might enable semi-quantification of endolymphatic size with little observer dependency. (author)
Spontaneous supersymmetry breaking and instanton sum in 2D type IIA Superstring Theory
International Nuclear Information System (INIS)
We consider a double-well supersymmetric matrix model and its interpretation as a nonperturbative definition of two-dimensional type IIA superstring theory in the presence of a nontrivial Ramond-Ramond background. The interpretation is based on symmetries on both sides of the matrix model and the IIA string theory, and is confirmed by direct comparison of various correlation functions. The full nonperturbative free energy of the matrix model in its double scaling limit is represented by the Tracy-Widom distribution of random matrix theory. We show that instanton contributions in the matrix model survive in the double scaling limit and trigger spontaneous supersymmetry breaking. This implies that the target-space supersymmetry is spontaneously broken by nonperturbative effects in the IIA string theory.
E$_{6(6)}$ Exceptional Field Theory: Review and Embedding of Type IIB
Baguet, Arnaud; Samtleben, Henning
2015-01-01
We review E$_{6(6)}$ exceptional field theory with a particular emphasis on the embedding of type IIB supergravity, which is obtained by picking the GL$(5)\times{\rm SL}(2)$ invariant solution of the section constraint. We work out the precise decomposition of the E$_{6(6)}$ covariant fields on the one hand and the Kaluza-Klein-like decomposition of type IIB supergravity on the other. Matching the symmetries, this allows us to establish the precise dictionary between both sets of fields. Finally, we establish on-shell equivalence. In particular, we show how the self-duality constraint for the four-form potential in type IIB is reconstructed from the duality relations in the off-shell formulation of the E$_{6(6)}$ exceptional field theory.
The Classification of Gun’s Type Using Image Recognition Theory
Directory of Open Access Journals (Sweden)
M. L. Kulthon Kasemsan
2014-01-01
The research aims to develop the Gun's Type and Models Classification (GTMC) system using image recognition theory. It is expected that this study can serve as a guide for law enforcement agencies, or at least as the catalyst for similar research. Master-image storage and image recognition are the two main processes. The procedure involves original images, scaling, grayscale conversion, the Canny edge detector, the SUSAN corner detector, block matching against templates, and finally gun-type recognition. Of the 505 images, 80 were control (master) images and 425 were experimental images of the eight gun types. The experiment indicated that the GTMC classified images of semi-automatic guns with the highest accuracy, 99.06 percent, while the average accuracy of gun-image classification was 81.25 percent.
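The block-matching stage of a pipeline like the one this abstract describes can be sketched with plain normalized cross-correlation. The toy below (NumPy only; a stand-in illustration, not the GTMC system, which adds Canny/SUSAN feature extraction before matching) slides a template over an image and returns the best-matching position.

```python
import numpy as np

def match_template(image, template):
    """Locate `template` in `image` by normalized cross-correlation.

    Returns the (row, col) of the best-matching top-left corner.
    A toy stand-in for the block-matching stage of a recognition
    pipeline; real systems add edge/corner feature extraction first."""
    th, tw = template.shape
    t = template - template.mean()
    tnorm = np.sqrt((t ** 2).sum())
    best, best_pos = -np.inf, (0, 0)
    for i in range(image.shape[0] - th + 1):
        for j in range(image.shape[1] - tw + 1):
            w = image[i:i + th, j:j + tw]
            wc = w - w.mean()
            denom = np.sqrt((wc ** 2).sum()) * tnorm
            score = (wc * t).sum() / denom if denom > 0 else 0.0
            if score > best:
                best, best_pos = score, (i, j)
    return best_pos

# Synthetic test: hide a bright cross-shaped patch in noise and recover it.
rng = np.random.default_rng(0)
img = rng.normal(0.0, 0.1, (40, 40))
patch = np.array([[0., 1., 0.], [1., 1., 1.], [0., 1., 0.]])
img[10:13, 20:23] += patch
print(match_template(img, patch))  # expected near (10, 20)
```

Normalizing both windows makes the score insensitive to brightness and contrast, which is why this criterion is the usual choice for template matching against master images.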
Conifold Type Singularities, N=2 Liouville and SL(2;R)/U(1) Theories
Eguchi, T; Eguchi, Tohru; Sugawara, Yuji
2005-01-01
In this paper we discuss various aspects of non-compact models of CFT of the type $\prod_{j=1}^{N_L} (\text{N=2 Liouville theory})_j \otimes \prod_{i=1}^{N_M} (\text{N=2 minimal model})_i$ and $\prod_{j=1}^{N_L} (SL(2;R)/U(1)\ \text{supercoset})_j \otimes \prod_{i=1}^{N_M} (\text{N=2 minimal model})_i$. These models are related to each other by T-duality. Such string vacua are expected to describe non-compact Calabi-Yau compactifications, typically ALE fibrations over (weighted) projective spaces. We find that when the Liouville ($SL(2;R)/U(1)$) theory is coupled to minimal models, there exist only (c,c), (a,a) ((c,a), (a,c))-type massless states in CY 3- and 4-folds and the theory possesses only complex (Kähler) structure deformations. Thus the space-time has the characteristic feature of a conifold-type singularity whose deformation (resolution) is given by the N=2 Liouville (SL(2;R)/U(1)) theory. Spectra of compact BPS D-branes determined from the open string sector are compared with those of massless moduli. We compute the ...
Generalized N=1 and N=2 structures in M-theory and type II orientifolds
Graña, Mariana
2012-01-01
We consider M-theory and type IIA reductions to four dimensions with N=2 and N=1 supersymmetry and discuss their interconnection. Our work is based on the framework of Exceptional Generalized Geometry (EGG), which extends the tangent bundle to include all symmetries in M-theory and type II string theory, covariantizing the local U-duality group E7. We describe general N=1 and N=2 reductions in terms of SU(7) and SU(6) structures on this bundle and thereby derive the effective four-dimensional N=1 and N=2 couplings; in particular we compute the Kähler and hyper-Kähler potentials as well as the triplet of Killing prepotentials (or the superpotential in the N=1 case). These structures and couplings can be described in terms of forms on an eight-dimensional tangent space where SL(8) contained in E7 acts, which might indicate a description in terms of an eight-dimensional internal space, similar to F-theory. We finally discuss an orbifold action in M-theory and its reduction to O6 orientifolds, and show how the pr...
The structure of the R^8 term in type IIB string theory
International Nuclear Information System (INIS)
Based on the structure of the on-shell linearized superspace of type IIB supergravity, we argue that there is a non-BPS 16-derivative interaction in the effective action of type IIB string theory of the form (t_8 t_8 R^4)^2, which we call the R^8 interaction. It lies in the same supermultiplet as the G^8 R^4 interaction. Using the Kawai–Lewellen–Tye relation, we analyze the structure of the tree-level eight-graviton scattering amplitude in the type IIB theory, which leads to the R^8 interaction at the linearized level. This involves an analysis of color-ordered multi-gluon disc amplitudes in the type I theory, which shows an intricate pole structure and transcendentality consistent with various other interactions. Considerations of S-duality show that the R^8 interaction receives non-analytic contributions in the string coupling at one and two loops. Apart from receiving perturbative contributions, we show that the R^8 interaction receives a non-vanishing contribution in the one D-instanton-anti-instanton background at leading order in the weak-coupling expansion. (paper)
A Novel Framework for Quantification of Supply Chain Risks
Qazi, Abroon; Quigley, John; Dickson, Alex
2014-01-01
Supply chain risk management is an active area of research, and there is a gap in exploring established risk-quantification techniques from other fields for application in the context of supply chain management. We have developed a novel framework for the quantification of supply chain risks that integrates two techniques: Bayesian belief networks and game theory. Bayesian belief networks can capture interdependency between risk factors, and game theory can assess risks associated with confl...
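The Bayesian-belief-network side of such a framework reduces, in its simplest form, to marginalizing over parent nodes and applying Bayes' rule for diagnosis. The sketch below uses a hypothetical two-node network with made-up probabilities (not values from the paper) to show both directions of inference.

```python
# Minimal Bayesian-belief-network sketch for risk propagation.
# All probabilities are illustrative placeholders, not values from the paper.

p_supplier_fail = 0.10          # P(Supplier = fail)
p_disrupt_given = {             # CPT: P(Disruption = yes | Supplier state)
    "fail": 0.80,
    "ok": 0.05,
}

# Predictive inference: marginalize over the supplier node
# to get the overall disruption risk.
p_disrupt = (p_supplier_fail * p_disrupt_given["fail"]
             + (1 - p_supplier_fail) * p_disrupt_given["ok"])

# Diagnostic (Bayes) update: probability the supplier failed,
# given that a disruption was observed.
p_fail_given_disrupt = p_supplier_fail * p_disrupt_given["fail"] / p_disrupt

print(round(p_disrupt, 4))             # 0.125
print(round(p_fail_given_disrupt, 3))  # 0.64
```

Real supply-chain networks have many interdependent risk nodes, so inference is done with a BN library rather than by hand, but the arithmetic per node is exactly this.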
Ma, Fuyin; Wu, Jiu Hui; Huang, Meng
2015-09-01
In order to overcome the influence of structural resonance in continuous structures and to obtain a lightweight thin-layer structure that can effectively isolate low-frequency noise, an elastic membrane structure was proposed. In the low-frequency range below 500 Hz, the sound transmission loss (STL) of this membrane-type structure is much higher than that of the current vehicle sound-insulation material EVA (ethylene-vinyl acetate copolymer), so it is possible to replace the EVA with the membrane-type metamaterial structure in engineering practice. Based on the band structure, modal shapes and sound-transmission simulation, the sound-insulation mechanism of the designed membrane-type acoustic metamaterials was analyzed from a new perspective, which was validated experimentally. It is suggested that, in the frequency range above 200 Hz for this membrane-mass structure, the sound-insulation effect is due principally not to the low-level locally resonant mode of the mass block but to the continuous vertical resonant modes of the localized membrane. Based on this physical property, a resonant modal group theory is proposed in this paper. In addition, the sound-insulation mechanisms of the membrane-type structure and the thin-plate structure are combined through membrane/plate resonant theory.
Four types of coping with COPD-induced breathlessness in daily living: a grounded theory study
DEFF Research Database (Denmark)
Bastrup, Lene; Dahl, Ronald
2013-01-01
Coping with breathlessness is a complex and multidimensional challenge for people with chronic obstructive pulmonary disease (COPD) and involves interacting physiological, cognitive, affective, and psychosocial dimensions. The aim of this study was to explore how people with moderate to most severe COPD predominantly cope with breathlessness during daily living. We chose a multimodal grounded theory design, which offers the opportunity to combine qualitative and quantitative data to capture and explain the multidimensional coping behaviour among people with COPD. The participants' main concern in coping with breathlessness appeared to be an endless striving to economise on resources in an effort to preserve their integrity. In this integrity-preserving process, four predominant coping types emerged, labelled 'Overrater', 'Challenger', 'Underrater', and 'Leveller'. Each coping type comprised distinctive physiological, cognitive, affective and psychosocial features constituting coping-type-specific indicators. In theory, four predominant coping types with distinct physiological, cognitive, affective and psychosocial properties are observed among people with COPD. The four coping types seem to constitute a coping trajectory. This hypothesis should be further tested in a longitudinal study.
Towards reduction of type II theories on SU(3) structure manifolds
Kashani-Poor, Amir-Kian; Minasian, Ruben
2006-01-01
We revisit the reduction of type II supergravity on SU(3) structure manifolds, conjectured to lead to gauged N=2 supergravity in 4 dimensions. The reduction proceeds by expanding the invariant 2- and 3-forms of the SU(3) structure as well as the gauge potentials of the type II theory in the same set of forms, the analogues of harmonic forms in the case of Calabi-Yau reductions. By focussing on the metric sector, we arrive at a list of constraints these expansion forms should...
A sufficient condition for de Sitter vacua in type IIB string theory
International Nuclear Information System (INIS)
We derive a sufficient condition for realizing meta-stable de Sitter vacua with small positive cosmological constant within type IIB string theory flux compactifications with spontaneously broken supersymmetry. There are a number of 'lamp post' constructions of de Sitter vacua in type IIB string theory and supergravity. We show that one of them - the method of 'Kähler uplifting' by F-terms from an interplay between non-perturbative effects and the leading α'-correction - allows for a more general parametric understanding of the existence of de Sitter vacua. The result is a condition on the values of the flux-induced superpotential and the topological data of the Calabi-Yau compactification, which guarantees the existence of a meta-stable de Sitter vacuum if met. Our analysis explicitly includes the stabilization of all moduli, i.e. the Kähler, dilaton and complex structure moduli, by the interplay of the leading perturbative and non-perturbative effects at parametrically large volume. (orig.)
WKB - type approximations in the theory of vacuum particle creation in strong fields
Smolyansky, S A; Panferov, A D; Prozorkevich, A V; Blaschke, D; Juchnowski, L
2014-01-01
Within the theory of vacuum creation of an $e^{+}e^{-}$ plasma in the strong electric fields acting in the focal spot of counter-propagating laser beams, we compare predictions based on different WKB-type approximations with results obtained in the framework of a strict kinetic approach. Such a comparison demonstrates a considerable divergence of results. We analyse some reasons for this observation and conclude that WKB-type approximations have an insufficient foundation for QED in strong nonstationary fields. The results obtained in this work on the basis of the kinetic approach are most optimistic for the observation of an $e^{+}e^{-}$ plasma in the range of optical and x-ray laser facilities. We also discuss the influence of unphysical features of non-adiabatic field models on the reliability of predictions of the kinetic theory.
Type IIB string theory on AdS_5 x T^{nn'}
International Nuclear Information System (INIS)
We study the Kaluza-Klein spectrum of type IIB string theory compactified on AdS_5 x T^{nn'} in the context of the AdS/CFT correspondence. We examine some of the modes of the complexified 2-form potential as an example and show that for the states at the bottom of the Kaluza-Klein tower the corresponding d=4 boundary field operators have rational conformal dimensions. The masses of some of the fermionic modes at the bottom of each tower, as functions of the R charge in the boundary conformal theory, are also rational. Furthermore, the modes at the bottom of the towers originating from q-forms on T^{11} can be put in correspondence with the BRS cohomology classes of the c=1 noncritical string theory with ghost number q. (author)
Specimens: "most of" generic NPs in a contextually flexible type theory
Retoré, Christian
2011-01-01
This paper proposes to compute the meanings associated with sentences containing generic NPs corresponding to the 'most of' generalized quantifier. We call these generics specimens; they resemble stereotypes or prototypes in lexical semantics. The meanings are viewed as logical formulae that can thereafter be interpreted in your favorite models. We depart from the dominant Fregean single untyped universe and opt for type theory, with hints from Hilbert's epsilon calculus and from medieval phi...
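For contrast with the paper's type-theoretic account, the plain extensional reading of the 'most' quantifier is easy to state: 'most A are B' holds when more than half of A falls in B. The toy below implements only that majority reading, which is exactly the view the paper departs from.

```python
def most(A, B):
    """'Most A are B' under the simple majority reading: |A ∩ B| > |A| / 2.

    An extensional toy model only; the paper's 'specimen' account is
    type-theoretic and goes beyond set cardinalities."""
    A, B = set(A), set(B)
    return len(A & B) * 2 > len(A)

birds = {"sparrow", "robin", "eagle", "penguin"}
fliers = {"sparrow", "robin", "eagle"}
print(most(birds, fliers))       # True: 3 of 4 birds fly
print(most(birds, {"penguin"}))  # False: only 1 of 4
```

Generalized quantifiers like this one are not definable from the first-order quantifiers alone, which is one motivation for richer frameworks such as the typed approach the abstract describes.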
Didarloo, A.; Shojaeizadeh, D.; asl, R. Gharaaghaji; S NIKNAMI; A. Khorami
2014-01-01
The study evaluated the efficacy of the Theory of Reasoned Action (TRA), along with self-efficacy to predict dietary behaviour in a group of Iranian women with type 2 diabetes. A sample of 352 diabetic women referred to Khoy Diabetes Clinic, Iran, were selected and given a self-administered survey to assess eating behaviour, using the extended TRA constructs. Bivariate correlations and Enter regression analyses of the extended TRA model were performed with SPSS software. Overall, the proposed...
Transient theory of double slope floating cum tilted - wick type solar still
International Nuclear Information System (INIS)
A double-slope floating cum tilted-wick solar still has been fabricated, and a transient theory of the floating cum tilted-wick type solar still has been proposed. Analytical expressions have been derived for the temperatures of the different components of the proposed system. To illustrate the analytical results, numerical calculations have been carried out using the meteorological parameters for a typical summer day in Coimbatore. The analytical results are found to be in close agreement with the experimental results. (authors)
Bianchi type VI1 cosmological model with wet dark fluid in scale invariant theory of gravitation
Mishra, B
2014-01-01
In this paper, we have investigated Bianchi type VI_h, II and III cosmological models with wet dark fluid in the scale-invariant theory of gravity, where the matter field is a perfect fluid and the gauge function is time dependent (Dirac gauge). A non-singular model for a universe filled with disordered radiation is constructed, and some physical behaviors of the model are studied for the feasible VI_h (h = 1) space-time.
Type I Superconductivity upon Monopole Condensation in Seiberg-Witten Theory
Vainshtein, A.; Yung, A.
2000-01-01
We study the confinement scenario in N=2 supersymmetric SU(2) gauge theory near the monopole point upon breaking of N=2 supersymmetry by the adjoint matter mass term. We confirm claims made previously that the Abrikosov-Nielsen-Olesen string near the monopole point fails to be a BPS state once next-to-leading corrections in the adjoint mass parameter are taken into account. Our results show that type I superconductivity arises upon monopole condensation. This conclusion allows ...
Generalization of Wertheim's theory for the assembly of various types of rings.
Tavares, J M; Almarza, N G; Telo da Gama, M M
2015-07-15
We generalize Wertheim's first order perturbation theory to account for the effect on the thermodynamics of the self-assembly of rings characterized by two energy scales. The theory is applied to a lattice model of patchy particles and tested against Monte Carlo simulations on an fcc lattice. These particles have 2 patches of type A and 10 patches of type B, which may form bonds AA or AB that decrease the energy by εAA and by εAB = rεAA, respectively. The angle θ between the 2 A-patches on each particle is fixed at 60°, 90° or 120°. For values of r below 1/2 and above a threshold rth(θ) the models exhibit a phase diagram with two critical points. Both theory and simulation predict that rth increases when θ decreases. We show that the mechanism that prevents phase separation for models with decreasing values of θ is related to the formation of loops containing AB bonds. Moreover, we show that by including the free energy of B-rings (loops containing one AB bond), the theory describes the trends observed in the simulation results, but that for the lowest values of θ, the theoretical description deteriorates due to the increasing number of loops containing more than one AB bond. PMID:26098611
Type synthesis for 4-DOF parallel press mechanism using GF set theory
He, Jun; Gao, Feng; Meng, Xiangdun; Guo, Weizhong
2015-07-01
Parallel mechanisms are used in large capacity servo presses to avoid the over-constraint of traditional redundant actuation. Current research mainly focuses on performance analysis for specific parallel press mechanisms. However, the type synthesis and evaluation of parallel press mechanisms are seldom studied, especially for four degrees of freedom (DOF) press mechanisms. The type synthesis of 4-DOF parallel press mechanisms is carried out based on the generalized function (GF) set theory. Five design criteria for 4-DOF parallel press mechanisms are first proposed. The general procedure of type synthesis of parallel press mechanisms is obtained, which includes number synthesis, symmetrical synthesis of constraint GF sets, decomposition of motion GF sets and design of limbs. Nine combinations of constraint GF sets of 4-DOF parallel press mechanisms, ten combinations of GF sets of active limbs, and eleven combinations of GF sets of passive limbs are synthesized. Thirty-eight kinds of press mechanisms are presented and then different structures of kinematic limbs are designed. Finally, the geometrical constraint complexity (GCC), kinematic pair complexity (KPC), and type complexity (TC) are proposed to evaluate the press types, and the optimal press type is identified. General methodologies of type synthesis and evaluation for parallel press mechanisms are suggested.
Convex ordering and quantification of quantumness
Sperling, J.; Vogel, W.
2015-06-01
The characterization of physical systems requires a comprehensive understanding of quantum effects. One aspect is a proper quantification of the strength of such quantum phenomena. Here, a general convex ordering of quantum states will be introduced which is based on the algebraic definition of classical states. This definition resolves the ambiguity of the quantumness quantification using topological distance measures. Classical operations on quantum states will be considered to further generalize the ordering prescription. Our technique can be used for a natural and unambiguous quantification of general quantum properties whose classical reference has a convex structure. We apply this method to typical scenarios in quantum optics and quantum information theory to study measures which are based on the fundamental quantum superposition principle.
Type and structure of time-like singularities in general relativity theory
International Nuclear Information System (INIS)
A method is proposed which permits one to determine whether a time-like singularity refers to a point, linear or some other type of gravitational field singularity. It is shown that in the general theory of relativity an altogether different type of source may be possible which does not have any analogs in finite curvature space. An analysis is made of a number of solutions containing time-like singularities whose type varies depending on the sign of the functions involved in the solutions. The form of the solution near simple linear sources and of generalized anisotropic solutions is determined more accurately. The space-time described by the γ-metric is investigated completely and the form of the metric near the ends and at singular points of linear Weyl singularities is found.
Towards reduction of type II theories on SU(3) structure manifolds
International Nuclear Information System (INIS)
We revisit the reduction of type II supergravity on SU(3) structure manifolds conjectured to lead to gauged N = 2 supergravity in 4 dimensions. The reduction proceeds by expanding the invariant 2- and 3-forms of the SU(3) structure as well as the gauge potentials of the type II theory in the same set of forms, the analogues of harmonic forms in the case of Calabi-Yau reductions. By focussing on the metric sector, we arrive at a list of constraints these expansion forms should satisfy to yield a base point independent reduction. Identifying these constraints is a first step towards a first-principles reduction of type II on SU(3) structure manifolds
Towards reduction of type II theories on SU(3) structure manifolds
Kashani-Poor, A K; Kashani-Poor, Amir-Kian; Minasian, Ruben
2007-01-01
We revisit the reduction of type II supergravity on SU(3) structure manifolds, conjectured to lead to gauged N=2 supergravity in 4 dimensions. The reduction proceeds by expanding the invariant 2- and 3-forms of the SU(3) structure as well as the gauge potentials of the type II theory in the same set of forms, the analogues of harmonic forms in the case of Calabi-Yau reductions. By focussing on the metric sector, we arrive at a list of constraints these expansion forms should satisfy to yield a base point independent reduction. Identifying these constraints is a first step towards a first-principles reduction of type II on SU(3) structure manifolds.
A Density Functional Theory Study of Doped Tin Monoxide as a Transparent p-type Semiconductor
Bianchi Granato, Danilo
2012-05-01
In the pursuit of enhancing the electronic properties of transparent p-type semiconductors, this work uses density functional theory to study the effects of doping tin monoxide with nitrogen, antimony, yttrium and lanthanum. An overview of the theoretical concepts and a detailed description of the methods employed are given, including a discussion of the correction scheme for charged defects proposed by Freysoldt and others [Freysoldt 2009]. Analysis of the formation energies of the defects points out that nitrogen substitutes an oxygen atom and does not provide charge carriers. On the other hand, antimony, yttrium, and lanthanum substitute a tin atom and donate n-type carriers. Study of the band structure and density of states indicates that yttrium and lanthanum improve the hole mobility. The present results are in good agreement with available experimental works and help to improve the understanding of how to engineer transparent p-type materials with higher hole mobilities.
Localized Modes in Type II and Heterotic Singular Calabi-Yau Conformal Field Theories
Mizoguchi, Shun'ya
2008-01-01
We consider type II and heterotic string compactifications on an isolated singularity in the noncompact Gepner model approach. The conifold-type ADE noncompact Calabi-Yau threefolds, as well as the ALE twofolds, are modeled by a tensor product of the SL(2,R)/U(1) Kazama-Suzuki model and an N=2 minimal model. Based on the string partition functions on these internal Calabi-Yaus previously obtained by Eguchi and Sugawara, we construct new modular invariant, space-time supersymmetric partition functions for both type II and heterotic string theories, where the GSO projection is performed before the continuous and discrete state contributions are separated. We investigate in detail the massless spectra of the localized modes. In particular, we propose an interesting three generation model, in which each flavor is in the 27+1 representation of E6 and localized on a four-dimensional space-time residing at the tip of the cigar.
A construction principle for ADM-type theories in maximal slicing gauge
Gomes, Henrique
2013-01-01
The differing concepts of time in general relativity and quantum mechanics are widely blamed as the main culprits in our persistent failure to find a complete theory of quantum gravity. Here we address this issue by constructing ADM-type theories in a particular time gauge directly from first principles. The principles are expressed as conditions on phase space constraints: we search for two sets of spatially covariant constraints, which generate symmetries (are first class) and gauge-fix each other, leaving two propagating degrees of freedom. One of the sets is the Weyl generator tr$(\pi)$, and the other is a one-parameter family containing the ADM scalar constraint $\lambda R - \beta(\pi^{ab}\pi_{ab}+(\mbox{tr}(\pi))^2/2)$. The two sets of constraints can be seen as defining ADM-type theories with a maximal slicing gauge-fixing. The principles above are motivated by a heuristic argument relying on the relation between symmetry doubling and exact renormalization arguments for quantum gravity, aside...
Canonical BF-type topological field theory and fractional statistics of strings
International Nuclear Information System (INIS)
We consider BF-type topological field theory coupled to non-dynamical particle and string sources on spacetime manifolds of the form R^1 × M^3, where M^3 is a 3-manifold without boundary. Canonical quantization of the theory is carried out in the Hamiltonian formalism and explicit solutions of the Schrödinger equation are obtained. We show that the Hilbert space is finite dimensional and the physical states carry a one-dimensional projective representation of the local gauge symmetries. When M^3 is homologically non-trivial the wavefunctions in addition carry a multi-dimensional projective representation, in terms of the linking matrix of the homology cycles of M^3, of the discrete group of large gauge transformations. The wavefunctions also carry a one-dimensional representation of the non-trivial linking of the particle trajectories and string surfaces in M^3. This topological field theory therefore provides a phenomenological generalization of anyons to (3+1) dimensions where the holonomies representing fractional statistics arise from the adiabatic transport of particles around strings. We also discuss a duality between large gauge transformations and these linking operations around the homology cycles of M^3, and show that this canonical quantum field theory provides novel quantum representations of the cohomology of M^3 and its associated motion group. ((orig.))
Reissner-Nordström-de Sitter-type Solution by a Gauge Theory of Gravity
International Nuclear Information System (INIS)
We use the theory based on a gravitational gauge group (Wu's model) to obtain a spherically symmetric solution of the field equations for the gravitational potential on a Minkowski spacetime. The gauge group, the gauge covariant derivative, the strength tensor of the gauge field, the gauge invariant Lagrangian with the cosmological constant, the field equations of the gauge potentials with a gravitational energy-momentum tensor as well as with a tensor of the field of a point-like source are determined. Finally, a Reissner-Nordström-de Sitter-type metric on the gauge group space is obtained.
S-matrix elements and covariant tachyon action in type 0 theory
Garousi, Mohammad R.
2003-01-01
We evaluate the sphere level S-matrix element of two tachyons and two massless NS states, the S-matrix element of four tachyons, and the S-matrix element of two tachyons and two Ramond-Ramond vertex operators, in type 0 theory. We then find an expansion of these amplitudes whose leading order terms correspond to a covariant tachyon action. To the order considered, there are no $T^4$, $T^2(\partial T)^2$, $T^2H^2$, nor $T^2R$ tachyon couplings, whereas the tachyon coupling...
A new type of disconnectedness problem in a field-theory model of the NNπ system
International Nuclear Information System (INIS)
When treated as an effective three-body problem in the framework of a simple field-theory model, the NNπ system acquires, in addition to the disconnected subsystem interactions usually considered, a new type of disconnected driving term, possible only for non-conserved particles such as the π. These terms pose a disconnectedness problem more intricate than that solved by Faddeev's equations or their known modifications for connected three-body forces. The solution of this problem in terms of a set of connected-kernel integral equations is presented. (Auth.)
Comment on the one-loop finiteness in type-I superstring theory
International Nuclear Information System (INIS)
By using the Pauli-Villars method, the one-loop divergence of the 4-point amplitude in SO(N) type-I superstring theory is studied. If one assigns equal mass to the Pauli-Villars regulators appearing in the planar and nonorientable diagrams, one-loop finiteness does not hold for N = 32. From the present viewpoint, the principal-part prescription by Green and Schwarz corresponds to a different regulator mass assignment for the planar and nonorientable diagrams. (author)
New Type of Hamiltonians Without Ultraviolet Divergence for Quantum Field Theories
Teufel, Stefan
2015-01-01
We propose a novel type of Hamiltonians for quantum field theories. They are mathematically well-defined (and in particular, ultraviolet finite) without any ultraviolet cut-off such as smearing out the particles over a nonzero radius; rather, the particles are assigned radius zero. We describe explicit examples of such Hamiltonians. Their definition, which is best expressed in the particle-position representation of the wave function, involves a novel type of boundary condition on the wave function, which we call an interior-boundary condition. The relevant configuration space is one of a variable number of particles, and the relevant boundary consists of the configurations with two or more particles at the same location. The interior-boundary condition relates the value (or derivative) of the wave function at a boundary point to the value of the wave function at an interior point (here, in a sector of configuration space corresponding to a lesser number of particles).
T-dualization of type IIB superstring theory in double space
Nikolić, Bojan
2015-01-01
In this article we offer a new interpretation of the T-dualization procedure of type IIB superstring theory in the double space framework. We use the ghost-free action of type IIB superstring theory in the pure spinor formulation, in the approximation of constant background fields up to quadratic terms. T-dualization along any subset of the initial coordinates, $x^a$, is equivalent to the permutation of this subset with the subset of the corresponding T-dual coordinates, $y_a$, in the double space coordinate $Z^M=(x^\mu,y_\mu)$. Demanding that the T-dual transformation law after the exchange $x^a\leftrightarrow y_a$ has the same form as the initial one, we obtain the T-dual NS-NS and NS-R background fields. The T-dual R-R field strength is determined up to one arbitrary constant under some assumptions.
On the effective theory of type II string compactifications on nilmanifolds and coset spaces
Energy Technology Data Exchange (ETDEWEB)
Caviezel, Claudio
2009-07-30
In this thesis we analyzed a large number of type IIA strict SU(3)-structure compactifications with fluxes and O6/D6-sources, as well as type IIB static SU(2)-structure compactifications with fluxes and O5/O7-sources. Restricting to structures and fluxes that are constant in the basis of left-invariant one-forms, these models are tractable enough to allow for an explicit derivation of the four-dimensional low-energy effective theory. The six-dimensional compact manifolds we studied in this thesis are nilmanifolds based on nilpotent Lie-algebras, and, on the other hand, coset spaces based on semisimple and U(1)-groups, which admit a left-invariant strict SU(3)- or static SU(2)-structure. In particular, from the set of 34 distinct nilmanifolds we identified two nilmanifolds, the torus and the Iwasawa manifold, that allow for an AdS_4, N = 1 type IIA strict SU(3)-structure solution and one nilmanifold allowing for an AdS_4, N = 1 type IIB static SU(2)-structure solution. From the set of all the possible six-dimensional coset spaces, we identified seven coset spaces suitable for strict SU(3)-structure compactifications, four of which also allow for a static SU(2)-structure compactification. For all these models, we calculated the four-dimensional low-energy effective theory using N = 1 supergravity techniques. In order to write down the most general four-dimensional effective action, we also studied how to classify the different disconnected "bubbles" in moduli space. (orig.)
Mild to severe social fears: ranking types of feared social situations using item response theory.
Crome, Erica; Baillie, Andrew
2014-06-01
Social anxiety disorder is one of the most common mental disorders, and is associated with long term impairment, distress and vulnerability to secondary disorders. Certain types of social fears are more common than others, with public speaking fears typically the most prevalent in epidemiological surveys. The distinction between performance- and interaction-based fears has been the focus of long-standing debate in the literature, with evidence that performance-based fears may reflect milder presentations of social anxiety. This study aims to explicitly test whether different types of social fears differ in underlying social anxiety severity using item response theory techniques. Different types of social fears were assessed using items from three different structured diagnostic interviews in four different epidemiological surveys in the United States (n=2261, n=5411) and Australia (n=1845, n=1497), and ranked using 2-parameter logistic item response theory models. Overall, patterns of underlying severity indicated by different fears were consistent across the four samples, with items functioning across a range of social anxiety. Public performance fears and speaking at meetings/classes indicated the lowest levels of social anxiety, with increasing severity indicated by situations such as being assertive or attending parties. Fears of using public bathrooms or eating, drinking or writing in public reflected the highest levels of social anxiety. Understanding differences in the underlying severity of different types of social fears has important implications for the underlying structure of social anxiety, and may also enhance the delivery of social anxiety treatment at a population level. PMID:24873885
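The 2-parameter logistic (2PL) model used for this ranking has a simple closed form; item difficulty plays the role of the severity level at which a fear is endorsed. A minimal sketch, with purely illustrative item parameters (not the study's estimates):

```python
import math

def irt_2pl(theta, a, b):
    """2PL item response model: probability that a respondent with latent
    severity `theta` endorses an item with discrimination `a` and
    difficulty (severity location) `b`."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# Hypothetical parameters: public-speaking fears sit at low severity
# (low b); eating or writing in public at high severity (high b).
p_speaking = irt_2pl(0.0, a=1.5, b=-1.0)
p_eating = irt_2pl(0.0, a=1.5, b=2.0)
assert p_speaking > p_eating  # the milder fear is endorsed more readily
```

At a fixed latent severity, items with lower difficulty are endorsed with higher probability, which is exactly how the ranking of feared situations above is obtained.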
On the effective theory of type II string compactifications on nilmanifolds and coset spaces
International Nuclear Information System (INIS)
In this thesis we analyzed a large number of type IIA strict SU(3)-structure compactifications with fluxes and O6/D6-sources, as well as type IIB static SU(2)-structure compactifications with fluxes and O5/O7-sources. Restricting to structures and fluxes that are constant in the basis of left-invariant one-forms, these models are tractable enough to allow for an explicit derivation of the four-dimensional low-energy effective theory. The six-dimensional compact manifolds we studied in this thesis are nilmanifolds based on nilpotent Lie-algebras, and, on the other hand, coset spaces based on semisimple and U(1)-groups, which admit a left-invariant strict SU(3)- or static SU(2)-structure. In particular, from the set of 34 distinct nilmanifolds we identified two nilmanifolds, the torus and the Iwasawa manifold, that allow for an AdS4, N = 1 type IIA strict SU(3)-structure solution and one nilmanifold allowing for an AdS4, N = 1 type IIB static SU(2)-structure solution. From the set of all the possible six-dimensional coset spaces, we identified seven coset spaces suitable for strict SU(3)-structure compactifications, four of which also allow for a static SU(2)-structure compactification. For all these models, we calculated the four-dimensional low-energy effective theory using N = 1 supergravity techniques. In order to write down the most general four-dimensional effective action, we also studied how to classify the different disconnected "bubbles" in moduli space. (orig.)
DEFF Research Database (Denmark)
Kjems, L L; Vølund, A
2001-01-01
AIMS/HYPOTHESIS: We compared four methods to assess their accuracy in measuring insulin secretion during an intravenous glucose tolerance test in patients with Type II (non-insulin-dependent) diabetes mellitus and with varying beta-cell function and matched control subjects. METHODS: Eight control subjects and eight Type II diabetic patients underwent an intravenous glucose tolerance test with tolbutamide and an intravenous bolus injection of C-peptide to assess C-peptide kinetics. Insulin secretion rates were determined by the Eaton deconvolution (reference method), the Insulin SECretion method (ISEC) based on population kinetic parameters as well as one-compartment and two-compartment versions of the combined model of insulin and C-peptide kinetics. To allow a comparison of the accuracy of the four methods, fasting rates and amounts of insulin secreted during the first phase (0-10 min) and the second phase (10-180 min) were calculated. RESULTS: All secretion responses from the ISEC method were strongly correlated to those obtained by the Eaton deconvolution method (r = 0.83-0.92). The one-compartment combined model, however, showed a high correlation to the reference method only for the first-phase insulin response (r = 0.78). The two-compartment combined model failed to provide reliable estimates of insulin secretion in three of the control subjects and in two patients with Type II diabetes. The four methods were accurate with respect to mean basal and first-phase secretion response. The one-compartment and two-compartment combined models were less accurate in measuring the second-phase response. CONCLUSION/INTERPRETATION: The ISEC method can be applied to normal, obese or Type II diabetic patients. In patients with deviating kinetics of C-peptide the Eaton deconvolution method is the method of choice while the one-compartment combined model is suitable for measuring only the first-phase insulin secretion.
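The one-compartment combined model referenced above rests on kinetics of the form dC/dt = S(t)/V - kC, from which secretion rates can be recovered by deconvolution. A toy sketch under that assumption (parameter values hypothetical, not the study's):

```python
def secretion_from_cpeptide(conc, dt, k, vol):
    """Recover secretion rates from C-peptide concentrations, assuming
    one-compartment kinetics dC/dt = S(t)/V - k*C.
    k: elimination rate constant, vol: distribution volume (illustrative)."""
    rates = []
    for i in range(len(conc) - 1):
        dcdt = (conc[i + 1] - conc[i]) / dt
        rates.append(vol * (dcdt + k * conc[i]))
    return rates

# Forward-simulate a constant secretion rate, then recover it by deconvolution.
k, vol, dt, s_true = 0.06, 5.0, 1.0, 3.0
conc = [0.0]
for _ in range(50):
    conc.append(conc[-1] + dt * (s_true / vol - k * conc[-1]))
recovered = secretion_from_cpeptide(conc, dt, k, vol)
assert all(abs(r - s_true) < 1e-6 for r in recovered)
```

The actual methods compared in the study (Eaton deconvolution, ISEC, combined models) use richer two-compartment C-peptide kinetics and individual or population parameters; this only illustrates the core deconvolution idea.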
Zhang, Mei; Xu, Wei; Deng, Yulin
2013-01-01
The early diagnosis of diabetes, one of the top three chronic incurable diseases, is becoming increasingly important. Here, we investigated the applicability of an 18O-labeling technique for the development of a standard-free, label-free liquid chromatography-tandem mass spectrometry (LC-MS/MS) method for the early diagnosis of type 2 diabetes mellitus (T2DM). Rather than attempting to identify quantitative differences in proteins as biomarkers, glycation of the highest abundance protein in h...
A sufficient condition for de Sitter vacua in type IIB string theory
Energy Technology Data Exchange (ETDEWEB)
Rummel, Markus [Hamburg Univ. (Germany). 2. Inst. fuer Theoretische Physik; Westphal, Alexander [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany)
2011-07-15
We derive a sufficient condition for realizing meta-stable de Sitter vacua with small positive cosmological constant within type IIB string theory flux compactifications with spontaneously broken supersymmetry. There are a number of 'lamp post' constructions of de Sitter vacua in type IIB string theory and supergravity. We show that one of them - the method of 'Kaehler uplifting' by F-terms from an interplay between non-perturbative effects and the leading α'-correction - allows for a more general parametric understanding of the existence of de Sitter vacua. The result is a condition on the values of the flux induced superpotential and the topological data of the Calabi-Yau compactification, which guarantees the existence of a meta-stable de Sitter vacuum if met. Our analysis explicitly includes the stabilization of all moduli, i.e. the Kaehler, dilaton and complex structure moduli, by the interplay of the leading perturbative and non-perturbative effects at parametrically large volume. (orig.)
Cahill, Roberta J; Pigeon, Kathleen; Strong-Townsend, Marilyn I; Drexel, Jan P; Clark, Genevieve H; Buch, Jesse S
2015-01-01
N-terminal pro-B-type natriuretic peptide (NT-proBNP) has been shown to have clinical utility as a biomarker in dogs with heart disease. There were several limitations associated with early diagnostic assay formats including a limited dynamic range and the need for protease inhibitors to maintain sample stability. A second-generation Cardiopet® proBNP enzyme-linked immunosorbent assay (IDEXX Laboratories Inc., Westbrook, Maine) was developed to address these limitations, and the present study reports the results of the analytical method validation for the second-generation assay. Coefficients of variation for intra-assay, interassay, and total precision based on 8 samples ranged from 3.9% to 8.9%, 2.0% to 5.0%, and 5.5% to 10.6%, respectively. Analytical sensitivity was established at 102 pmol/l. Accuracy averaged 102.0% based on the serial dilutions of 5 high-dose canine samples. Bilirubin, lipids, and hemoglobin had no effect on results. Reproducibility across 3 unique assay lots was excellent with an average coefficient of determination (r^2) of 0.99 and slope of 1.03. Both ethylenediamine tetra-acetic acid plasma and serum gave equivalent results at time of blood draw (slope = 1.02, r^2 = 0.89; n = 51) but NT-proBNP was more stable in plasma at 25°C with median half-life measured at 244 hr and 136 hr for plasma and serum, respectively. Plasma is the preferred sample type and is considered stable up to 48 hr at room temperature whereas serum should be frozen or refrigerated when submitted for testing. Results of this study validate the second-generation canine Cardiopet proBNP assay for accurate and precise measurement of NT-proBNP in routine sample types from canine patients. PMID:25525139
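The precision figures quoted above are coefficients of variation across replicate measurements; a minimal sketch of that calculation (the replicate readings below are made up for illustration):

```python
import statistics

def cv_percent(values):
    """Coefficient of variation (%): sample SD over mean, as used to
    report intra-assay, interassay and total precision."""
    return 100.0 * statistics.stdev(values) / statistics.mean(values)

# Hypothetical replicate NT-proBNP readings (pmol/l) of a single sample:
replicates = [980.0, 1010.0, 995.0, 1022.0, 988.0]
intra_cv = cv_percent(replicates)
assert intra_cv < 10.6  # within the total-precision range reported above
```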
Triques, K.; Coste, J.; Perret, J. L.; Segarra, C; Mpoudi, E.; Reynes, J.; Delaporte, E; Butcher, A; Dreyer, K; Herman, S; Spadoro, J.; Peeters, M.
1999-01-01
Three versions of a commercial human immunodeficiency virus (HIV) type 1 (HIV-1) load test (the AMPLICOR HIV-1 MONITOR Test versions 1.0, 1.0+, and 1.5; Roche Diagnostics, Branchburg, N.J.) were evaluated for their ability to detect and quantify HIV-1 RNA of different genetic subtypes. Plasma samples from 96 patients infected with various subtypes of HIV-1 (55 patients infected with subtype A, 9 with subtype B, 21 with subtype C, 2 with subtype D, 7 with subtype E, and 2 with subtype G) and c...
Ramamonjisoa, Nirilanto; Ratiney, Helene; Mutel, Elodie; Guillou, Herve; Mithieux, Gilles; Pilleul, Frank; Rajas, Fabienne; Beuf, Olivier; Cavassila, Sophie
2013-01-01
The assessment of liver lipid content and composition is needed in preclinical research to investigate steatosis and steatosis-related disorders. The purpose of this study was to quantify in vivo hepatic fatty acid content and composition using a method based on short echo time proton magnetic resonance spectroscopy (MRS) at 7 Tesla. A mouse model of glycogen storage disease type 1a with inducible liver-specific deletion of the glucose-6-phosphatase gene (L-G6pc-/-) mice and control mice were...
Quantification of Cannabinoid Content in Cannabis
Tian, Y.; Zhang, F.; Jia, K.; Wen, M.; Yuan, Ch.
2015-09-01
Cannabis is an economically important plant that is used in many fields, in addition to being the most commonly consumed illicit drug worldwide. Monitoring the spatial distribution of cannabis cultivation and judging whether it is drug- or fiber-type cannabis is critical for governments and international communities to understand the scale of the illegal drug trade. The aim of this study was to investigate whether the cannabinoids content in cannabis could be spectrally quantified using a spectrometer and to identify the optimal wavebands for quantifying the cannabinoid content. Spectral reflectance data of dried cannabis leaf samples and the cannabis canopy were measured in the laboratory and in the field, respectively. Correlation analysis and the stepwise multivariate regression method were used to select the optimal wavebands for cannabinoid content quantification based on the laboratory-measured spectral data. The results indicated that the delta-9-tetrahydrocannabinol (THC) content in cannabis leaves could be quantified using laboratory-measured spectral reflectance data and that the 695 nm band is the optimal band for THC content quantification. This study provides prerequisite information for designing spectral equipment to enable immediate quantification of THC content in cannabis and to discriminate drug- from fiber-type cannabis based on THC content quantification in the field.
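The band-selection step described (correlating reflectance at each waveband with THC content and keeping the strongest correlate) can be sketched as follows; the spectra and THC values below are synthetic, not the study's data:

```python
def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

def best_band(spectra, thc, bands):
    """Pick the band whose reflectance correlates most strongly (by |r|)
    with THC content across samples."""
    return max(bands, key=lambda b: abs(pearson_r([s[b] for s in spectra], thc)))

# Toy data: reflectance at three wavebands (nm) for four leaf samples.
bands = [550, 695, 800]
spectra = [{550: 0.30, 695: 0.52, 800: 0.41},
           {550: 0.31, 695: 0.47, 800: 0.40},
           {550: 0.29, 695: 0.41, 800: 0.43},
           {550: 0.32, 695: 0.35, 800: 0.39}]
thc = [1.0, 2.0, 3.0, 4.0]  # synthetic THC content (%)
assert best_band(spectra, thc, bands) == 695
```

The study's actual selection used stepwise multivariate regression on laboratory-measured spectra; the correlation screen above is only the first filtering stage of such a pipeline.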
Pittner, Jiri; Piecuch, Piotr
2009-01-01
Abstract We apply the method of moments to the multireference (MR) coupled cluster (CC) formalism representing the continuous transition between the Brillouin-Wigner-type and Rayleigh-Schrödinger-type theories based on the Jeziorski-Monkhorst wave function ansatz and derive the formula for the noniterative energy corrections to the corresponding MRCC energies that recover the exact, full configuration interaction energies in the general model space case, inclu...
Lazar, Markus; Agiasofitou, Eleni; Polyzos, Demosthenes
2015-01-01
The Comment by Aifantis that criticizes the article 'On non-singular crack fields in Helmholtz type enriched elasticity theories' [Lazar, M., Polyzos, D., 2014. Int. J. Solids Struct. doi: 10.1016/j.ijsolstr.2014.01.002] is refuted by means of clear and straightforward arguments. Important theoretical aspects of gradient enriched elasticity theories which emerge in this work are also discussed.
Godin Gaston; Boudreau François
2009-01-01
Abstract Background Regular physical activity is considered a cornerstone for managing type 2 diabetes. However, in Canada, most individuals with type 2 diabetes do not meet national physical activity recommendations. When designing a theory-based intervention, one should first determine the key determinants of physical activity for this population. Unfortunately, there is a lack of information on this aspect among adults with type 2 diabetes. The purpose of this cross-sectional study is to f...
Holographic-Type Gravitation via Non-Differentiability in Weyl-Dirac Theory
Mihai Pricop; Mugur Răut; Zoltan Borsos; Anca Baciu; Maricel Agop
2013-01-01
In the Weyl-Dirac non-relativistic hydrodynamics approach, the non-linear interaction between sub-quantum level and particle gives non-differentiable properties to the space. Therefore, the movement trajectories are fractal curves, the dynamics are described by a complex speed field and the equation of motion is identified with the geodesics of a fractal space which corresponds to a Schrodinger non-linear equation. The real part of the complex speed field assures, through a quantification co...
Dimitriou-Christidis, Petros; Bonvin, Alex; Samanipour, Saer; Hollender, Juliane; Rutler, Rebecca; Westphale, Jimmy; Gros, Jonas; Arey, J Samuel
2015-07-01
We report the development and validation of a method to detect and quantify diverse nonpolar halogenated micropollutants in wastewater treatment plant (WWTP) influent, effluent, primary sludge, and secondary sludge matrices (including both the liquid and particle phases) by comprehensive two-dimensional gas chromatography (GC×GC) coupled to micro-electron capture detector (μECD). The 59 target analytes included toxaphenes, polychlorinated naphthalenes, organochlorine pesticides, polychlorinated biphenyls, polybrominated diphenyl ethers, and emerging persistent and bioaccumulative chemicals. The method is robust for a wide range of nonpolar halogenated micropollutants in all matrices. For most analytes, recoveries fell between 70% and 130% in all matrix types. GC×GC-μECD detections of several target analytes were confirmed qualitatively by further analysis with GC×GC coupled to electron capture negative chemical ionization-time-of-flight mass spectrometry (ENCI-TOFMS). We then quantified the concentrations and apparent organic solid-water partition coefficients (Kp) of target micropollutants in samples from a municipal WWTP in Switzerland. Several analyzed pollutants exhibited a high frequency of occurrence in WWTP stream samples, including octachloronaphthalene, PCB-44, PCB-52, PCB-153, PCB-180, several organochlorine pesticides, PBDE-10, PBDE-28, PBDE-116, musk tibetene, and pentachloronitrobenzene. Our results suggest that sorption to dissolved organic carbon (DOC) can contribute substantially to the apparent solids-liquid distribution of hydrophobic micropollutants in WWTP streams. PMID:26066666
International Nuclear Information System (INIS)
A new type of coupled wave theory is described which is capable, in a very natural way, of analytically describing polychromatic gratings. In contrast to the well known and extremely successful coupled wave theory of Kogelnik, the new theory is based on a differential formulation of the process of Fresnel reflection within the grating. The fundamental coupled wave equations, which are an exact solution of Maxwell's equations for the case of the un-slanted reflection grating, can be analytically solved with minimal approximation. The equations may also be solved in a rotated frame of reference to provide useful formulae for the diffraction efficiency of the general polychromatic slanted grating in three dimensions. The new theory is compared with Kogelnik's theory, and extremely good agreement is found for most cases. The theory has also been compared to a rigorous computational chain matrix simulation of the un-slanted grating, with excellent agreement for cases typical of display holography. In contrast, Kogelnik's theory shows small discrepancies away from Bragg resonance. The new coupled wave theory may easily be extended to an N-coupled wave theory for the case of the multiplexed polychromatic grating and indeed for the purposes of analytically describing diffraction in the colour hologram. In the simple case of a monochromatic spatially-multiplexed grating at Bragg resonance the theory is in exact agreement with the predictions of conventional N-coupled wave theory.
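As a reference point for the comparison with Kogelnik's theory above, his classic closed-form result for a lossless unslanted reflection grating at Bragg incidence is η = tanh²(π n₁ d / (λ cos θ)). A minimal sketch of that formula; the parameter values below are illustrative, not taken from the text:

```python
import math

# Kogelnik's diffraction efficiency for a lossless unslanted reflection
# grating at Bragg incidence: eta = tanh^2(pi * n1 * d / (lambda * cos(theta))).
# n1 = index modulation, d = grating thickness, theta = internal Bragg angle.
def kogelnik_reflection_efficiency(n1, d, wavelength, theta_rad=0.0):
    nu = math.pi * n1 * d / (wavelength * math.cos(theta_rad))
    return math.tanh(nu) ** 2

# Hypothetical display-holography-like parameters: n1 = 0.02, d = 7 um, 532 nm.
eta = kogelnik_reflection_efficiency(n1=0.02, d=7e-6, wavelength=532e-9)
print(round(eta, 3))
```

Efficiency saturates toward 1 as the modulation-thickness product grows, which is why thick reflection holograms can be highly efficient.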
Adaptation of learning resources based on the MBTI theory of psychological types
Directory of Open Access Journals (Sweden)
Amel Behaz
2012-01-01
Full Text Available Today, the number of resources available on the web is increasing significantly. The motivation for the dissemination of knowledge and its acquisition by learners is central to learning. However, learners differ in the ways of learning that suit them best. The objective of the work presented in this paper is to study how models from cognitive theories can be integrated with ontologies for the adaptation of educational resources. The goal is to give the system the capability to reason over the obtained descriptions in order to automatically adapt resources to a learner according to his preferences. We rely on the MBTI model (Myers-Briggs Type Indicator) to take learners' learning styles into account as a criterion for adaptation.
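The adaptation idea above can be caricatured as a rule table from MBTI letters to resource formats. This is an invented illustration only: the paper's system reasons over ontologies, and both the rules and the format names below are hypothetical.

```python
# Hypothetical rule table: each MBTI dimension letter maps to a preferred
# learning-resource format. The mapping is invented for illustration.
RULES = {
    "E": "group discussion forums",   # Extraversion
    "I": "self-paced reading",        # Introversion
    "S": "worked examples",           # Sensing
    "N": "concept maps",              # Intuition
    "T": "formal definitions",        # Thinking
    "F": "case studies",              # Feeling
    "J": "structured course plans",   # Judging
    "P": "open-ended exploration",    # Perceiving
}

def adapt_resources(mbti: str) -> list[str]:
    """Return candidate resource formats for a 4-letter MBTI code."""
    mbti = mbti.upper()
    if len(mbti) != 4 or any(c not in RULES for c in mbti):
        raise ValueError(f"not a valid MBTI code: {mbti!r}")
    return [RULES[c] for c in mbti]

print(adapt_resources("INTJ"))
```

A real adapter would replace the hard-coded table with queries against the learner-model ontology.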
A calibration of mixing length theory based on RHD simulations of solar-type convection
Ludwig, H.-G.; Freytag, B.; Steffen, M.
1997-12-01
Radiation hydrodynamics (RHD) models provide detailed information about the dynamics, thermal structure, and convective efficiency of the superadiabatic region at the top of solar-type convection zones, and allow an extrapolation of the entropy (s*) in their deep, adiabatic layers. For the Sun we find a close agreement between s* inferred from our RHD models and an empirical determination of s* from helioseismology. In the framework of mixing length theory (MLT), s* is translated to an effective mixing-length parameter (αc) appropriate to construct global stellar models. The calibration based on our present set of 2D RHD models shows a moderate variation of αc across the domain of the HRD investigated so far.
On quasi-local charges and Newman--Penrose type quantities in Yang--Mills theories
Farkas, Réka
2010-01-01
We generalize the notion of quasi-local charges, introduced by P. Tod for Yang--Mills fields with unitary groups, to non-Abelian gauge theories with arbitrary gauge group, and calculate its small sphere and large sphere limits both at spatial and null infinity. We show that for semisimple gauge groups no reasonable definition yields conserved total charges and Newman--Penrose (NP) type quantities at null infinity in generic, radiative configurations. The conditions of their conservation, both in terms of the field configurations and the structure of the gauge group, are clarified. We also calculate the NP quantities for stationary, asymptotic solutions of the field equations with vanishing magnetic charges, and illustrate these by explicit solutions with various gauge groups.
On quasi-local charges and Newman-Penrose type quantities in Yang-Mills theories
Energy Technology Data Exchange (ETDEWEB)
Farkas, Reka [Institute for Theoretical Physics, Roland Eoetvoes University, H-1117 Budapest, Pazmany P. setany 1/A (Hungary); Szabados, Laszlo B, E-mail: lbszab@rmki.kfki.hu [Research Institute for Particle and Nuclear Physics, H-1525 Budapest 114, PO Box 49 (Hungary)
2011-07-21
We generalize the notion of quasi-local charges, introduced by P Tod for Yang-Mills fields with unitary gauge groups, to non-Abelian gauge theories with arbitrary gauge groups, and calculate its small sphere and large sphere limits both at spatial and null infinity. We show that for semisimple gauge groups no reasonable definition yields conserved total charges and Newman-Penrose (NP) type quantities at null infinity in generic, radiative configurations. The conditions of their conservation, both in terms of the field configurations and the structure of the gauge group, are clarified. We also calculate the NP quantities for stationary, asymptotic solutions of the field equations with vanishing magnetic charges, and illustrate these by explicit solutions with various gauge groups.
DEFF Research Database (Denmark)
Johansen, Markku; Nielsen, MaiBritt
2013-01-01
As a part of a prospective cohort study in four herds, a nested case-control study was carried out. Five slow-growing pigs (cases) and five fast-growing pigs (controls) out of 60 pigs were selected for euthanasia and laboratory examination at the end of the study in each herd. A total of 238 pigs, all approximately 12 weeks old, were included in the study during the first week in the grower–finisher barn. In each herd, approximately 60 pigs from four pens were individually ear-tagged. The pigs were weighed at the beginning of the study and at the end of the 6–8-week observation period. Clinical data, blood and faecal samples were serially collected from the 60 selected piglets every second week in the observation period. In the killed pigs, serum was examined for antibodies against Lawsonia intracellularis (LI) and porcine circovirus type 2 (PCV2), and in addition the PCV2 viral DNA content was quantified. In faeces, the quantity of LI cells/g faeces and the number of PCV2 copies/g faeces were measured by qPCR. The objective of the study was to examine whether growth rate in grower-finishing pigs is associated with the detection of LI and PCV2 infection or clinical data. This study has shown that diarrhoea is a significant risk factor for low growth rate and that a one log10 unit increase in LI load increases the odds of a pig having a low growth rate by 2.0 times. Gross lesions in the small intestine and an LI load > log10 6/g were significant risk factors for low growth. No association between PCV2 virus and low growth was found.
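The reported effect size reads as a logistic-regression odds ratio: an OR of 2.0 per log10 unit of LI load corresponds to a coefficient β = ln 2. A small sketch of that arithmetic; the coefficient is back-derived from the stated odds ratio, not taken from the paper's fitted model:

```python
import math

# Odds ratio of 2.0 per log10 unit of LI load implies a logistic-regression
# coefficient beta = ln(2) for that predictor (back-derived, illustrative).
beta = math.log(2.0)

def odds_ratio(delta_log10_load: float) -> float:
    """Multiplicative change in the odds of low growth for a given load change."""
    return math.exp(beta * delta_log10_load)

print(round(odds_ratio(1.0), 2))  # one log10 unit: odds double
print(round(odds_ratio(3.0), 2))  # three log10 units: 8-fold odds
```

The multiplicative structure is why a pig at > log10 6/g can have substantially elevated odds relative to a low-shedding pen mate.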
Topological and geometrical quantum computation in cohesive Khovanov homotopy type theory
Ospina, Juan
2015-05-01
The recently proposed Cohesive Homotopy Type Theory is exploited as a formal foundation for central concepts in Topological and Geometrical Quantum Computation. Specifically, the Cohesive Homotopy Type Theory provides a formal, logical approach to concepts like smoothness, cohomology and Khovanov homology; this approach makes it possible to clarify the quantum algorithms in the context of Topological and Geometrical Quantum Computation. In particular we consider the so-called "open-closed stringy topological quantum computer", a theoretical topological quantum computer that employs a system of open-closed strings whose worldsheets are open-closed cobordisms. The open-closed stringy topological computer is able to compute the Khovanov homology for tangles and is hence a universal quantum computer, given that any quantum computation can be reduced to an instance of computation of the Khovanov homology for tangles. The universal algebra in this case is the Frobenius algebra, and the possible open-closed stringy topological quantum computers form a symmetric monoidal category which is equivalent to the category of knowledgeable Frobenius algebras. The mathematical design of an open-closed stringy topological quantum computer thus involves computations and theorem proving for generalized Frobenius algebras. Such computations and theorem proving can be performed automatically using Automated Theorem Provers with the TPTP language and the SMT-solver Z3 with the SMT-LIB language. Some examples of the application of ATPs and SMT-solvers in the mathematical setup of an open-closed stringy topological quantum computer are provided.
Spontaneous N=2 to N=1 Supersymmetry Breaking in Supergravity and Type II String Theory
Louis, Jan; Triendl, Hagen
2009-01-01
Using the embedding tensor formalism we give the general conditions for the existence of N=1 vacua in spontaneously broken N=2 supergravities. Our results confirm the necessity of having both electrically and magnetically charged multiplets in the spectrum, but also show that no further constraints on the special Kähler geometry of the vector multiplets arise. The quaternionic field space of the hypermultiplets must have two commuting isometries, and as an example we discuss the special quaternionic-Kähler geometries which appear in the low-energy limit of type II string theories. For these cases we find the general solution for stable Minkowski and AdS N=1 vacua, and determine the charges in terms of the holomorphic prepotentials. We find that the string theory realisation of the N=1 Minkowski vacua requires the presence of non-geometric fluxes, whereas they are not needed for the AdS vacua. We also argue that our results should hold in the presence of spacetime and worldsheet instanton corrections.
Constraining f(R) theories with Type Ia Supernovae and Gamma Ray Bursts
Cardone, Vincenzo F; Camera, Stefano
2009-01-01
Fourth-order gravity theories have received much interest in recent years thanks to their ability to provide an accelerated cosmic expansion in a matter-only universe. In these theories, the Lagrangian density of the gravitational field has the form R + f(R), and the explicit choice of the arbitrary function f(R) must meet the local tests of gravity and the constraints from the primordial abundance of the light elements. Two popular classes of f(R) models, which are expected to fulfill all the above requirements, have recently been proposed. However, neither of these models has ever been quantitatively tested against the available astrophysical data. Here, by combining Type Ia Supernovae and Gamma Ray Bursts, we investigate the ability of these models to reproduce the observed Hubble diagram over the redshift range (0, 7). We find that both models fit this dataset very well, with present-day values of the matter density and deceleration parameters that agree with previous estimates. However, the strong ...
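The Hubble-diagram test described above compares observed distance moduli of SNeIa and GRBs with a model prediction. A minimal sketch of the prediction side, assuming a fiducial flat ΛCDM expansion rate rather than the paper's specific f(R) models, with illustrative H0 and Ωm values:

```python
import math

# Distance modulus mu(z) = 5*log10(d_L/Mpc) + 25 for a fiducial flat LambdaCDM
# background. Parameter values are illustrative, not fitted values from the paper.
H0 = 70.0        # Hubble constant, km/s/Mpc
OM = 0.3         # matter density parameter
C = 299792.458   # speed of light, km/s

def E(z):
    """Dimensionless expansion rate H(z)/H0 for flat LambdaCDM."""
    return math.sqrt(OM * (1 + z) ** 3 + (1 - OM))

def lum_distance(z, steps=1000):
    """Luminosity distance in Mpc via trapezoidal integration of 1/E."""
    dz = z / steps
    integral = sum((1 / E(i * dz) + 1 / E((i + 1) * dz)) / 2 * dz
                   for i in range(steps))
    return (C / H0) * (1 + z) * integral

def distance_modulus(z):
    return 5 * math.log10(lum_distance(z)) + 25

print(round(distance_modulus(0.5), 2))
```

Fitting an f(R) model would replace `E(z)` with the expansion rate from the modified Friedmann equations and minimize the residuals to the observed moduli.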
Statistical analysis of 4 types of neck whiplash injuries based on classical meridian theory.
Chen, Yemeng; Zhao, Yan; Xue, Xiaolin; Li, Hui; Wu, Xiuyan; Zhang, Qunce; Zheng, Xin; Wang, Tianfang
2015-01-01
As one component of the Chinese medicine meridian system, the meridian sinew (Jingjin, (see text), tendino-musculo) is specially described as being for acupuncture treatment of the musculoskeletal system because of its dynamic attributes and tender point correlations. In recent decades, the therapeutic importance of the sinew meridian has become revalued in clinical application. Based on this theory, the authors have established therapeutic strategies of acupuncture treatment in Whiplash-Associated Disorders (WAD) by categorizing four types of neck symptom presentations. The advantage of this new system is to make it much easier for the clinician to find effective acupuncture points. This study attempts to prove the significance of the proposed therapeutic strategies by analyzing data collected from a clinical survey of various WAD using non-supervised statistical methods, such as correlation analysis, factor analysis, and cluster analysis. The clinical survey data have successfully verified discrete characteristics of four neck syndromes, based upon the range of motion (ROM) and tender point location findings. A summary of the relationships among the symptoms of the four neck syndromes has shown statistically significant correlation coefficients among the symptoms, which implies that syndrome factors are more related to the Liver, as originally described in classical theory. The hypothesis of meridian sinew syndromes in WAD is clearly supported by the statistical analysis of the clinical trials. This new discovery should be beneficial in improving therapeutic outcomes. PMID:25980049
Contributions to the theory of Tonks-Langmuir-type bounded plasma systems
International Nuclear Information System (INIS)
One of the most basic and important bounded-plasma models is the 'classical' Tonks-Langmuir (TL) model [Phys. Rev. 34, 876 (1929)], a quasineutral, collisionless symmetric discharge with Boltzmann-distributed electrons and a cold ion source provided by electron-impact ionization of cold neutrals. Over the decades, this model has been the basis of many similar ('TL-type') bounded-plasma models and fundamental studies associated with them. The main contributions of the present thesis to the theory of TL-type systems may be summarized as follows. (i) The first correct fluid treatment of the classical TL model has been given by using in the closure equation the correct ion 'polytropic-coefficient function (PCF)' γi(z), which is a strongly varying function of position. It is concluded that the cold-ion or γi = const closure approximations used previously are inadequate and, wherever correct fluid solutions are important in practice, should be substantially improved by means of kinetic methods. (ii) A new general matrix formalism for calculating the potential distribution in the plasma region of a TL-type bounded-plasma model has been developed. In this formalism, the electron number density and the ionization rate may a priori have any dependence on the potential. (iii) A new classification scheme for ion and electron phase-space domains has been introduced on the basis of the 'CSS-free trajectories' (where 'CSS' stands for 'collision/sink/source'), which may be of types p (passing), r (reflected) or t (trapped). (iv) A new class of electron velocity distribution functions (VDFs) allowing for cases distinctly different from the ones considered previously has been proposed, taking into account the different physical properties of the type-p and type-t domains of electron phase space.
Since the potential distribution has been shown to depend visibly on the electron VDF, we conclude that in the future, instead of approximating by the Maxwellian VDF, efforts should be made towards calculating more realistic VDFs for the electrons. (author)
DEFF Research Database (Denmark)
Ståhl, Marie; Kokotovic, Branko
2011-01-01
Four quantitative PCR (qPCR) assays were evaluated for quantitative detection of Brachyspira pilosicoli, Lawsonia intracellularis, and E. coli fimbrial types F4 and F18 in pig feces. Standard curves were based on feces spiked with the respective reference strains. The detection limits from the spiking experiments were 10² bacteria/g feces for Bpilo-qPCR and Laws-qPCR, and 10³ CFU/g feces for F4-qPCR and F18-qPCR. The PCR efficiency for all four qPCR assays was between 0.91 and 1.01 with R² above 0.993. Standard-curve slopes and elevations varied between assays and between measurements from pure DNA of the reference strains and from feces spiked with the respective strains. The linear ranges found for spiked fecal samples differed both from the linear ranges from pure culture of the reference strains and between the qPCR tests. The linear ranges were five log units for F4-qPCR and Laws-qPCR, six log units for F18-qPCR and three log units for Bpilo-qPCR in spiked feces. When measured on pure DNA from the reference strains used in spiking experiments, the respective log ranges were: seven units for Bpilo-qPCR, Laws-qPCR and F18-qPCR and six log units for F4-qPCR. This shows the importance of using specific standard curves, where each pathogen is analysed in the same matrix as the sample DNA. The qPCRs were compared to traditional bacteriological diagnostic methods and found to be more sensitive than cultivation for E. coli and B. pilosicoli. The qPCR assay for Lawsonia was also more sensitive than the earlier used method due to improvements in DNA extraction. In addition, as samples were not analysed for all four pathogenic agents by traditional diagnostic methods, many samples were found positive for agents that were not expected on the basis of age and case history. The use of quantitative PCR tests for diagnosis of enteric diseases provides new possibilities for veterinary diagnostics.
The parallel simultaneous analysis of several bacteria by multi-qPCR and the determination of the quantities of the infectious agents increase the information obtained from the samples and the chance of obtaining a relevant diagnosis.
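The PCR efficiencies quoted above (0.91–1.01) are conventionally derived from the slope of the standard curve, E = 10^(−1/slope) − 1, where a slope of −3.32 corresponds to perfect doubling per cycle. A sketch of that arithmetic; the slope value used below is illustrative, not one reported in the study:

```python
# PCR amplification efficiency from a standard-curve slope. A standard curve
# plots Cq against log10(template amount); perfect doubling per cycle gives
# slope -3.32 and efficiency 1.0. The example slope is hypothetical.
def pcr_efficiency(slope: float) -> float:
    """Amplification efficiency E = 10**(-1/slope) - 1."""
    return 10 ** (-1.0 / slope) - 1.0

print(round(pcr_efficiency(-3.45), 3))  # a slightly sub-optimal assay
```

An efficiency near 1.0, together with R² above 0.99 over the linear range, is what justifies converting Cq values into bacteria or CFU per gram of feces.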
Directory of Open Access Journals (Sweden)
Marlene Silva de Moraes
2008-03-01
Full Text Available This paper describes a device developed on the pilot scale and a simple method to compare liquid distributor efficiencies. The technique consists basically of analyzing the mass of the liquid collected in 21 vertical pipes measuring 52 mm in internal diameter and 800 mm in length, placed in a quadratic arrangement below the distributor. A 50 mm thick acrylic blanket that does not disperse liquids was placed between the distributor and the pipe bank to avoid splashes. As an example of the application, assays were carried out with nine ladder-type distributors equipped with 4 parallel pipes each, for a column measuring 400 mm in diameter. The number (n) of orifices (95, 127, and 159 orifices/m²), the orifice diameter (d) (2, 3, and 4 mm) and the flow rate (q) (1.2, 1.4, and 1.6 m³/h) were varied. The best spread efficiency, i.e. the lowest standard deviation, was achieved with n = 159, d = 2 mm and q = 1.4 m³/h, indicating the limitations of practical design rules. The pressure (p) at the distributor's inlet for this condition was only 51000 Pa (0.51 kgf/cm²), while the average velocity (v) was 6.3 m/s in each orifice.
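The spread-efficiency criterion described above ranks distributors by the dispersion of the masses collected in the 21 tubes. A minimal sketch using the coefficient of variation (standard deviation normalized by the mean, so distributors at different flow rates compare fairly); the mass values are invented for illustration:

```python
import statistics

# Coefficient of variation of the liquid masses collected across the tube
# bank: lower CV means a more uniform spread.
def spread_cv(masses_g):
    mean = statistics.mean(masses_g)
    return statistics.stdev(masses_g) / mean

# Hypothetical collections from two distributors (21 tubes each).
uniform = [100 + (-1) ** i * 2 for i in range(21)]    # nearly even spread
channeled = [160] * 5 + [90] * 16                     # liquid biased to 5 tubes

print(spread_cv(uniform) < spread_cv(channeled))      # uniform distributor wins
```

The paper ranks by raw standard deviation; normalizing by the mean is a common variant when total collected mass differs between runs.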
Scientific Electronic Library Online (English)
Marlene Silva de, Moraes; José Renato Baptista de, Lima; Deovaldo de, Moraes Júnior; Luis Renato Bastos, Lia; Sandro Megale, Pizzo.
2008-03-01
Full Text Available This paper describes a device developed on the pilot scale and a simple approach to compare liquid distributor efficiencies. The technique consists basically of analyzing the mass of the liquid collected in 21 vertical pipes measuring 52 mm in internal diameter and 800 mm in length, placed in a quadratic arrangement and positioned below the distributor. A 50 mm thick acrylic blanket that does not disperse liquids was placed between the distributor and the pipe bank to avoid splashes. Assays were carried out with nine ladder-type distributors equipped with 4 parallel pipes each for a column measuring 400 mm in diameter as an example of the application. The number (n) of orifices (95, 127, and 159 orifices/m²), the orifice diameter (d) (2, 3, and 4 mm) and the flow rate (q) (1.2, 1.4, and 1.6 m³/h) were varied. The best spread efficiency, which presented the lowest standard deviation, was achieved with 159 orifices, 2 mm and 1.4 m³/h. The pressure (p) at the distributor's inlet for this condition was only 51000 Pa (0.51 kgf/cm²), while the average velocity (v) was 6.3 m/s in each orifice. These results show some limitations of the practical rules used in distributor designs.
LRS Bianchi type -V cosmology with heat flow in scalar: tensor theory
Scientific Electronic Library Online (English)
C.P., Singh.
2009-12-01
Full Text Available In this paper we present a spatially homogeneous, locally rotationally symmetric (LRS) Bianchi type-V perfect fluid model with heat conduction in the scalar-tensor theory proposed by Saez and Ballester. The field equations are solved with and without heat conduction by using a law of variation for the mean Hubble parameter, which is related to the average scale factor of the metric and yields a constant value for the deceleration parameter. The law of variation for the mean Hubble parameter generates two types of cosmologies: one of power-law form and the other of exponential form. Using these two forms, singular and non-singular solutions are obtained with and without heat conduction. We observe that a constant value of the deceleration parameter is a reasonable description of the different phases of the universe. We arrive at the conclusion that the universe decelerates for positive values of the deceleration parameter, whereas it accelerates for negative ones. The physical constraints on the solutions of the field equations, and, in particular, the thermodynamical laws and energy conditions that govern such solutions, are discussed in some detail. The behavior of the observationally important parameters like the expansion scalar, anisotropy parameter and shear scalar is considered in detail.
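The "law of variation for the mean Hubble parameter" invoked above is commonly taken in the Berman-type form; assuming that standard ansatz (the abstract does not spell it out), the constant deceleration parameter and the two branches of solutions follow in a few lines:

```latex
H \equiv \frac{\dot a}{a} = D\,a^{-n}, \qquad D > 0,\ n \ \text{const.}
\quad\Longrightarrow\quad
q \equiv -\frac{a\ddot a}{\dot a^{2}} = n - 1 \quad \text{(constant)}.

% Integrating \dot a = D a^{1-n}:
a(t) =
\begin{cases}
\left(nDt + c_{1}\right)^{1/n}, & n \neq 0 \quad \text{(power-law form)},\\[4pt]
c_{2}\, e^{Dt}, & n = 0 \quad \text{(exponential form)},
\end{cases}
```

with deceleration ($q>0$) for $n>1$ and acceleration ($q<0$) for $n<1$, matching the sign discussion in the abstract.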
BRST symmetric formulation of a theory with Gribov-type copies
International Nuclear Information System (INIS)
A path integral with BRST symmetry can be formulated by summing the Gribov-type copies in a very specific way if the functional correspondence between τ and the gauge parameter ω defined by τ(x) = f(A_μ^ω) is globally single valued, where f(A_μ) = 0 specifies the gauge condition. A soluble gauge model with Gribov-type copies recently analyzed by Friedberg, Lee, Pang and Ren satisfies this criterion. A detailed BRST analysis of the soluble model proposed by the above authors is presented. The BRST symmetry, when consistently implemented, ensures the gauge independence of physical quantities. In particular, the vacuum (ground) state and the perturbative corrections to the ground state energy in the above model are analyzed from the viewpoint of BRST symmetry and the R_ξ-gauge. Possible implications of the present analysis for some aspects of the Gribov problem in non-Abelian gauge theory, such as the 1/N expansion in QCD and also the dynamical instability of BRST symmetry, are briefly discussed. (orig.)
International Nuclear Information System (INIS)
Čársky, Petr
2015-01-01
So far, the second-order perturbation theory has only been applied to the hydrogen molecule. No application to any other molecule has been attempted, probably because of the technical difficulties of such calculations. The purpose of this contribution is to show that calculations of this type are now feasible for larger polyatomic molecules, even on commonly used computers.
Classification and quantification of leaf curvature
Liu, Zhongyuan; Jia, Liguo; Mao, Yanfei; He, Yuke
2010-01-01
Various mutants of Arabidopsis thaliana deficient in polarity, cell division, and auxin response are characterized by certain types of leaf curvature. However, comparison of curvature for clarification of gene function can be difficult without a quantitative measurement of curvature. Here, a novel method for classification and quantification of leaf curvature is reported. Twenty-two mutant alleles from Arabidopsis mutants and transgenic lines deficient in leaf flatness were selected. The muta...
Quantification and Negation in Event Semantics
Lucas Champollion
2010-01-01
Recently, it has been claimed that event semantics does not go well together with quantification, especially if one rejects syntactic, LF-based approaches to quantifier scope. This paper shows that such fears are unfounded, by presenting a simple, variable-free framework which combines a Neo-Davidsonian event semantics with a type-shifting based account of quantifier scope. The main innovation is that the event variable is bound inside the verbal denotation, rather than at sentence level by e...
Non-perturbative black holes in Type-IIA String Theory versus the No-Hair conjecture
International Nuclear Information System (INIS)
We obtain the first black hole solution to Type-IIA String Theory compactified on an arbitrary self-mirror Calabi–Yau manifold in the presence of non-perturbative quantum corrections. Remarkably enough, the solution involves multivalued functions, which could lead to a violation of the No-Hair conjecture. We discuss how String Theory forbids such scenario. However, the possibility still remains open in the context of four-dimensional ungauged Supergravity. (paper)
The D^4 R^4 term in type IIB string theory on T^2 and U-duality
Basu, Anirban
2007-01-01
We propose a manifestly U-duality invariant modular form for the D^4 R^4 interaction in type IIB string theory compactified on T^2. It receives perturbative contributions up to two loops, and non-perturbative contributions from D-instantons and (p,q) string instantons wrapping T^2. We provide evidence for this modular form by showing that the coefficients at tree level and at one loop precisely match those obtained using string perturbation theory. Using duality, parts of the...
Cosmic String Solution in a Born-Infeld Type Theory of Gravity
da Rocha, W J; Guimarães, M E X
2009-01-01
In this work we derive an exact solution for the exterior metric of a local cosmic string in an effective theory of gravity, the so-called NDL theory, which is inspired by the Born-Infeld theory. The solution is given by a family of parameters and presents quite different features from those of General Relativity. The differences come from the specific choice of the gravitational Lagrangian, which is based on a spin-1 construction of the gravitation theory.
Efficient quantification of non-Gaussian spin distributions
Dubost, B; Napolitano, M; Behbood, N; Sewell, R J; Mitchell, M W
2011-01-01
We study theoretically and experimentally the quantification of non-Gaussian distributions via non-destructive measurements. Using the theory of cumulants, their unbiased estimators, and the uncertainties of these estimators, we describe a quantification which is simultaneously efficient, unbiased by measurement noise, and suitable for hypothesis tests, e.g., to detect non-classical states. The theory is applied to cold $^{87}$Rb spin ensembles prepared in non-Gaussian states by optical pumping and measured by non-destructive Faraday rotation probing. We find an optimal use of measurement resources under realistic conditions, e.g., in atomic ensemble quantum memories.
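The unbiased cumulant estimators mentioned above are the classical k-statistics. A minimal sketch up to third order, applied to synthetic Gaussian data (for which the third cumulant should fluctuate around zero); the spin-measurement context itself is not modelled here:

```python
import random

# Unbiased cumulant estimators (k-statistics) up to third order.
def k_statistics(xs):
    n = len(xs)
    mean = sum(xs) / n
    m2 = sum((x - mean) ** 2 for x in xs) / n   # central sample moments
    m3 = sum((x - mean) ** 3 for x in xs) / n
    k1 = mean                                   # unbiased for the mean
    k2 = n / (n - 1) * m2                       # unbiased variance
    k3 = n ** 2 / ((n - 1) * (n - 2)) * m3      # unbiased third cumulant
    return k1, k2, k3

random.seed(0)
gauss = [random.gauss(0, 1) for _ in range(20000)]
k1, k2, k3 = k_statistics(gauss)
print(round(k1, 3), round(k2, 3), round(k3, 3))
```

For a genuinely non-Gaussian state, k3 (or the fourth k-statistic) deviates significantly from zero relative to its estimator uncertainty, which is the basis of the hypothesis tests the abstract describes.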
A theory of the thermal depinning transition in type-2 superconductors
Yamafuji, K.; Fujiyoshi, T.; Kiss, T.
2003-10-01
It is pointed out theoretically that the electric field (E) versus current density (J) characteristic of the vortex glass state is different from that predicted by M.P.A. Fisher; that is, the electric resistivity ρ = E/J has a finite value even as J → 0, while the Fisher theory predicts ρ → 0 as J → 0. It is also pointed out that the vortex glass-liquid transition is merely a kind of bifurcation transition in the mixed state of type-2 superconductors containing pinning centers, which may be called the thermal depinning transition between the pinning state and the depinning state of fluxoids resulting from the thermal agitation of fluxoids. Furthermore, it is shown theoretically that only the flux flow resistivity obeys the scaling law near the thermal depinning transition temperature, Tdp, while the flux creep resistivity approaches a finite value as J is decreased to zero, even in the pinning state below Tdp. When ρ is measured down to a very small level of E, therefore, a noticeable deviation from the scaling law of ρ against J predicted by Fisher is expected to appear due to the above-mentioned behavior of the flux creep resistivity. This theoretical conclusion, contrary to Fisher's prediction, nevertheless seems to be supported by recently observed data over a very wide range of the electric field E provided by Kodama et al., because the present theoretical expression for the E vs. J characteristics agrees quantitatively with these observed data.
Sandryhaila, Aliaksei; Pueschel, Markus
2010-01-01
A polynomial transform is the multiplication of an input vector $x\in\C^n$ by a matrix $\PT_{b,\alpha}\in\C^{n\times n}$, whose $(k,\ell)$-th element is defined as $p_\ell(\alpha_k)$ for polynomials $p_\ell(x)\in\C[x]$ from a list $b=\{p_0(x),\dots,p_{n-1}(x)\}$ and sample points $\alpha_k\in\C$ from a list $\alpha=\{\alpha_0,\dots,\alpha_{n-1}\}$. Such transforms find applications in the areas of signal processing, data compression, and function interpolation. Important examples include the discrete Fourier and cosine transforms. In this paper we introduce a novel technique to derive fast algorithms for polynomial transforms. The technique uses the relationship between polynomial transforms and the representation theory of polynomial algebras. Specifically, we derive algorithms by decomposing the regular modules of these algebras as a stepwise induction. As an application, we derive novel $O(n\log{n})$ general-radix algorithms for the discrete Fourier transform and the discrete cosine transform of type 4.
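The definition in this abstract is easy to instantiate directly: building the matrix $[p_\ell(\alpha_k)]$ from monomials $p_\ell(x)=x^\ell$ at the roots of unity recovers the DFT matrix. A naive $O(n^2)$ sketch (illustrative only; the paper's contribution is the fast algorithms, not this construction):

```python
import numpy as np

def polynomial_transform(b, alpha):
    """Return the n x n matrix whose (k, l) entry is p_l(alpha_k).

    b     : list of polynomials, each given as ascending coefficient lists
    alpha : list of sample points
    """
    n = len(alpha)
    M = np.empty((n, n), dtype=complex)
    for k, a in enumerate(alpha):
        for l, p in enumerate(b):
            M[k, l] = sum(c * a**j for j, c in enumerate(p))
    return M

n = 8
# DFT_n as a polynomial transform: p_l(x) = x^l, alpha_k = exp(-2*pi*i*k/n)
monomials = [[0] * l + [1] for l in range(n)]
alphas = [np.exp(-2j * np.pi * k / n) for k in range(n)]
F = polynomial_transform(monomials, alphas)
```

Applying `F` to a vector then agrees with `np.fft.fft`, which is the sense in which the DFT is an "important example" of a polynomial transform.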
From Peierls brackets to a generalized Moyal bracket for type-I gauge theories
Esposito, G; Esposito, Giampiero; Stornaiolo, Cosimo
2006-01-01
In the space-of-histories approach to gauge fields and their quantization, the Maxwell, Yang--Mills and gravitational fields are well known to share the property of being type-I theories, i.e. Lie brackets of the vector fields which leave the action functional invariant are linear combinations of such vector fields, with coefficients of linear combination given by structure constants. The corresponding gauge-field operator in the functional integral for the in-out amplitude is an invertible second-order differential operator. For such an operator, we consider advanced and retarded Green functions giving rise to a Peierls bracket among group-invariant functionals. Our Peierls bracket is a Poisson bracket on the space of all group-invariant functionals in two cases only: either the gauge-fixing is arbitrary but the gauge fields lie on the dynamical sub-space; or the gauge-fixing is a linear functional of gauge fields, which are generic points of the space of histories. In both cases, the resulting Peierls bracke...
Schematic homotopy types and non-abelian Hodge theory I The Hodge decomposition
Katzarkov, L; Toen, B
2001-01-01
In this work we use Hodge theoretic methods to study homotopy types of complex projective manifolds with arbitrary fundamental groups. The main tool we use is the schematization functor $X \\mapsto (X\\otimes \\mathbb{C})^{sch}$, introduced by the third author as a substitute for the rationalization functor in homotopy theory in the case of non-simply connected spaces. Our main result is the construction of a Hodge decomposition on $(X\\otimes\\mathbb{C})^{sch}$. This Hodge decomposition is encoded in an action of the discrete group $\\mathbb{C}^{\\times \\delta}$ on the object $(X\\otimes \\mathbb{C})^{sch}$ and is shown to recover the usual Hodge decomposition on cohomology, the Hodge filtration on the pro-algebraic fundamental group as defined by C.Simpson, and in the simply connected case, the Hodge decomposition on the complexified homotopy groups as defined by P.Deligne, P.Griffiths, J.Morgan and D.Sullivan. Finally, using the construction $X \\mapsto (X\\otimes \\mathbb{C})^{sch}$, we define new homotopy invariants...
Didarloo, A; Shojaeizadeh, D; Gharaaghaji Asl, R; Niknami, S; Khorami, A
2014-06-01
The study evaluated the efficacy of the Theory of Reasoned Action (TRA), along with self-efficacy to predict dietary behaviour in a group of Iranian women with type 2 diabetes. A sample of 352 diabetic women referred to Khoy Diabetes Clinic, Iran, were selected and given a self-administered survey to assess eating behaviour, using the extended TRA constructs. Bivariate correlations and Enter regression analyses of the extended TRA model were performed with SPSS software. Overall, the proposed model explained 31.6% of variance of behavioural intention and 21.5% of variance of dietary behaviour. Among the model constructs, self-efficacy was the strongest predictor of intentions and dietary practice. In addition to the model variables, visit intervals of patients and source of obtaining information about diabetes from sociodemographic factors were also associated with dietary behaviours of the diabetics. This research has highlighted the relative importance of the extended TRA constructs upon behavioural intention and subsequent behaviour. Therefore, use of the present research model in designing educational interventions to increase adherence to dietary behaviours among diabetic patients was recommended and emphasized. PMID:25076670
Anderson, Edward
2013-01-01
I already showed that Kendall's shape geometry work was the geometrical description of Barbour's relational mechanics' reduced configuration spaces (alias shape spaces). I now describe the extent to which Kendall's subsequent statistical application to such as the `standing stones problem' realizes further ideas along the lines of Barbour-type timeless records theories, albeit just at the classical level.
Advances in type-2 fuzzy sets and systems theory and applications
Mendel, Jerry; Tahayori, Hooman
2013-01-01
This book explores recent developments in the theoretical foundations and novel applications of general and interval type-2 fuzzy sets and systems, including: algebraic properties of type-2 fuzzy sets, geometric-based definition of type-2 fuzzy set operators, generalizations of the continuous KM algorithm, adaptiveness and novelty of interval type-2 fuzzy logic controllers, relations between conceptual spaces and type-2 fuzzy sets, type-2 fuzzy logic systems versus perceptual computers; modeling human perception of real world concepts with type-2 fuzzy sets, different methods for generating membership functions of interval and general type-2 fuzzy sets, and applications of interval type-2 fuzzy sets to control, machine tooling, image processing and diet. The applications demonstrate the appropriateness of using type-2 fuzzy sets and systems in real world problems that are characterized by different degrees of uncertainty.
Accident sequence quantification with KIRAP
International Nuclear Information System (INIS)
The tasks of probabilistic safety assessment (PSA) consist of the identification of initiating events, the construction of an event tree for each initiating event, the construction of fault trees for the event tree logic, the analysis of reliability data, and finally the accident sequence quantification. In the PSA, accident sequence quantification calculates the core damage frequency and performs importance and uncertainty analyses. Accident sequence quantification requires an understanding of the whole PSA model, because it has to combine all event tree and fault tree models, and requires an efficient computer code, because the computation takes a long time. The Advanced Research Group of the Korea Atomic Energy Research Institute (KAERI) has developed the PSA workstation KIRAP (Korea Integrated Reliability Analysis Code Package) for PSA work. This report describes the procedures to perform accident sequence quantification, the method to use KIRAP's cut set generator, and the method to perform accident sequence quantification with KIRAP. (author). 6 refs
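Once minimal cut sets are generated, quantification typically combines them under an independence assumption. A sketch of the two standard point estimates (the min-cut-set upper bound and the rare-event approximation); this illustrates the generic calculation, not KIRAP's actual algorithms:

```python
from math import prod

def mcs_upper_bound(cut_sets, p):
    """Min-cut-set upper bound on the top-event probability.

    cut_sets : list of minimal cut sets, each a list of basic-event names
    p        : dict mapping basic events to probabilities (independence assumed)
    """
    q = [prod(p[e] for e in cs) for cs in cut_sets]   # cut-set probabilities
    return 1.0 - prod(1.0 - qi for qi in q)

def rare_event(cut_sets, p):
    """First-order (rare-event) approximation: sum of cut-set probabilities."""
    return sum(prod(p[e] for e in cs) for cs in cut_sets)
```

For small basic-event probabilities the two estimates nearly coincide, with the rare-event sum always at least as large as the upper bound formula's argument-by-argument product.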
Quantification and Negation in Event Semantics
Directory of Open Access Journals (Sweden)
Lucas Champollion
2010-12-01
Recently, it has been claimed that event semantics does not go well together with quantification, especially if one rejects syntactic, LF-based approaches to quantifier scope. This paper shows that such fears are unfounded by presenting a simple, variable-free framework which combines a Neo-Davidsonian event semantics with a type-shifting based account of quantifier scope. The main innovation is that the event variable is bound inside the verbal denotation, rather than at sentence level by existential closure. Quantifiers can then be interpreted in situ. The resulting framework combines the strengths of event semantics and type-shifting accounts of quantifiers and thus does not force the semanticist to posit either a default underlying word order or a syntactic LF-style level. It is therefore well suited for applications to languages where word order is free and quantifier scope is determined by surface order. As an additional benefit, the system leads to a straightforward account of negation, which has also been claimed to be problematic for event-based frameworks.
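The key move in this abstract, closing off the event variable inside the verb's denotation so that quantifiers apply in situ, can be mimicked in a toy finite model (a sketch only; the paper's framework is variable-free and far richer):

```python
# Toy Neo-Davidsonian fragment: each verb denotes a property of individuals,
# with the event variable existentially closed inside the denotation.
EVENTS = [
    {"pred": "run", "agent": "john"},
    {"pred": "run", "agent": "mary"},
    {"pred": "sleep", "agent": "bill"},
]

def verb(pred):
    # type <e,t>: true of x iff some event with predicate `pred` has agent x
    return lambda x: any(e["pred"] == pred and e["agent"] == x for e in EVENTS)

def some(domain):
    # generalized quantifier, type <<e,t>,t>, applied in situ to the verb
    return lambda vp: any(vp(x) for x in domain)

def every(domain):
    return lambda vp: all(vp(x) for x in domain)
```

Because `verb` already quantifies over events internally, `some(...)` and `every(...)` combine with it directly, with no sentence-level existential closure step.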
Directory of Open Access Journals (Sweden)
Godin Gaston
2009-06-01
Background: Regular physical activity is considered a cornerstone for managing type 2 diabetes. However, in Canada, most individuals with type 2 diabetes do not meet national physical activity recommendations. When designing a theory-based intervention, one should first determine the key determinants of physical activity for this population. Unfortunately, there is a lack of information on this aspect among adults with type 2 diabetes. The purpose of this cross-sectional study is to fill this gap using an extended version of Ajzen's Theory of Planned Behavior (TPB) as reference. Methods: A total of 501 individuals with type 2 diabetes residing in the Province of Quebec (Canada) completed the study. Questionnaires were sent and returned by mail. Results: Multiple hierarchical regression analyses indicated that TPB variables explained 60% of the variance in intention. The addition of other psychosocial variables in the model added 7% of the explained variance. The final model included perceived behavioral control (β = .38, p ...). Conclusion: The findings suggest that interventions aimed at individuals with type 2 diabetes should ensure that people have the necessary resources to overcome potential obstacles to behavioral performance. Interventions should also favor the development of feelings of personal responsibility to exercise and promote the advantages of exercising for individuals with type 2 diabetes.
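The hierarchical ("block") regression reported here amounts to comparing R² before and after adding a block of predictors. A minimal sketch on simulated data (the variable names and effect sizes are invented for illustration, not taken from the study):

```python
import numpy as np

def r_squared(X, y):
    """R^2 of an ordinary least-squares fit with intercept."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1.0 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))

rng = np.random.default_rng(1)
n = 500
tpb = rng.normal(size=(n, 3))      # block 1: attitude, subjective norm, PBC (simulated)
extra = rng.normal(size=(n, 1))    # block 2: an additional psychosocial variable (simulated)
y = tpb @ np.array([0.5, 0.2, 0.6]) + 0.4 * extra[:, 0] + rng.normal(size=n)

r2_block1 = r_squared(tpb, y)
r2_block2 = r_squared(np.column_stack([tpb, extra]), y)
delta_r2 = r2_block2 - r2_block1   # variance explained added by block 2
```

The reported "added 7% of the explained variance" corresponds to this `delta_r2` quantity.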
Transparent quantification into hyperintensional contexts.
Czech Academy of Sciences Publication Activity Database
Duží, M.; Jespersen, Bjorn
London : College Publications, 2011 - (Peliš, M.; Punčochář, V.), s. 81-97 ISBN 978-1-84890-038-7. [LOGICA 2010. Hejnice (CZ), 21.06.2010-25.06.2010] Institutional research plan: CEZ:AV0Z90090514 Keywords: hyperintensions * type theory * Transparent intensional logic * propositional attitudes Subject RIV: AA - Philosophy; Religion
International Nuclear Information System (INIS)
We discuss Bianchi type-VII0 cosmology with a Dirac field in the Einstein-Cartan (E-C) theory and obtain the equations of the Dirac and gravitational fields in the E-C theory. A Bianchi type-VII0 inflationary solution is found. When (3/16)S² − ?² > 0, the Universe may avoid singularity. (geophysics, astronomy, and astrophysics)
Quantification of plant volatiles.
Qualley, Anthony V; Dudareva, Natalia
2014-01-01
Plant volatiles occupy diverse roles as signaling molecules, defensive compounds, hormones, and even waste products. Exponential growth in the related literature coupled with the availability of new analytical and computational technologies has inspired novel avenues of inquiry while giving researchers the tools to analyze the plant metabolome to an unprecedented level of detail. As availability of instrumentation and the need for qualitative and especially quantitative metabolic analysis grow within the scientific community so does the need for robust, adaptable, and widely disseminated protocols to enable rapid progression from experimental design to data analysis with minimal input toward method development. This protocol describes the collection and quantitative analysis of plant volatile headspace compounds. It is intended to guide those with little to no experience in analytical chemistry in the quantification of plant volatiles using gas chromatography coupled to mass spectrometry by describing procedures for calibrating and optimizing collection and analysis of these diverse compounds. PMID:24218209
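The calibration step this protocol describes is, at its core, an external-standard calibration: fit the detector response against known amounts, then invert the line for unknowns. A sketch with invented numbers (the units and response values are assumptions, not the protocol's data):

```python
import numpy as np

# External-standard calibration curve: peak areas of a volatile standard
# measured at known on-column amounts (illustrative values).
amounts = np.array([1.0, 5.0, 10.0, 50.0, 100.0])        # ng on column (assumed)
areas = np.array([210.0, 1020.0, 2050.0, 10100.0, 20300.0])  # detector response

slope, intercept = np.polyfit(amounts, areas, 1)          # linear calibration fit

def quantify(area):
    """Invert the calibration line: peak area -> amount."""
    return (area - intercept) / slope
```

In practice an internal standard and response factors per compound would refine this, but the invert-the-line step is the same.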
Plotnikoff, Ronald C; Lippke, Sonia; Courneya, Kerry; Birkett, Nicholas; Sigal, Ronald
2010-01-01
Physical activity (PA) plays a key role in the management of Type 1 (T1D) and Type 2 diabetes (T2D), but there are few theory-based, effective programs to promote PA for individuals with diabetes. The purpose of this study was to investigate the utility of the Theory of Planned Behaviour (TPB) in understanding PA in an adult population with T1D or T2D. A total of 2311 individuals (691 T1D; 1614 T2D) completed self-report TPB constructs of attitude, subjective norms, perceived behavioural control (PBC), intention and PA at baseline, and 1717 (524 T1D; 1123 T2D) completed the PA measure again at 6-month follow-up. Multi-group Structural Equation Modelling was conducted to: (1) test the fit of the TPB structure; (2) determine the TPB structural invariance between the two types of diabetes; and (3) examine the explained variance in PA and compare the strength of associations of the TPB constructs in the two types of diabetes. The TPB constructs explained ≥40% of the variance in intentions for both diabetes groups. In cross-sectional models, the TPB accounted for 23 and 19% of the variance in PA for T1D and T2D, respectively. In prospective models, the TPB explained 13 and 8% of the variance in PA for T1D and T2D, respectively. When adjusting for past PA behaviour, the impact of PBC and intention on behaviour was reduced in both groups. The findings provide evidence for the utility of the TPB for the design of PA promotion interventions for adults with either T1D or T2D. PMID:20391204
Microscopic entropy of the most general BPS black hole for type II/M-theory on torii
International Nuclear Information System (INIS)
In the present dissertation we review the statistical computation of the entropy for the most general static BPS black hole solution in the framework of toroidally compactified type II/M-theory. This achievement is part of a research project aimed at studying the microscopic properties of this kind of solution in relation to U-duality invariants (e.g. the entropy) computed on the corresponding macroscopic (supergravity) description. (orig.)
Cosmological solution of Bianchi type I in a new theory of gravitation
International Nuclear Information System (INIS)
We present a homogeneous, plane-symmetric, matter-free solution to a new theory of gravitation. In the limit of large t, the solution goes over into the plane-symmetric Kasner metric of general relativity
Exact combinatorics of Bern-Kosower-type amplitudes for two-loop φ³ theory
International Nuclear Information System (INIS)
By counting the contribution rate of a world-line formula to Feynman diagrams in φ³ theory, we explain how to determine the precise combinatorics of Bern-Kosower-like amplitudes derived from a bosonic string theory for N-point two-loop Feynman amplitudes. In this connection we also present a method to derive simple and compact world-line forms for the effective action. (orig.)
Spectral analysis of polynomial potentials and its relation with ABJ/M-type theories
Energy Technology Data Exchange (ETDEWEB)
Garcia del Moral, M.P., E-mail: garciamormaria@uniovi.e [Departamento de Fisica, Universidad de Oviedo, Calvo Sotelo 18, 33007 Oviedo (Spain); Martin, I., E-mail: isbeliam@usb.v [Departamento de Fisica, Universidad Simon Bolivar, Apartado 89000, Caracas 1080-A (Venezuela, Bolivarian Republic of); Navarro, L., E-mail: lnavarro@ma.usb.v [Departamento de Matematicas, Universidad Simon Bolivar, Apartado 89000, Caracas 1080-A (Venezuela, Bolivarian Republic of); Perez, A.J., E-mail: ajperez@ma.usb.v [Departamento de Matematicas, Universidad Simon Bolivar, Apartado 89000, Caracas 1080-A (Venezuela, Bolivarian Republic of); Restuccia, A., E-mail: arestu@usb.v [Departamento de Fisica, Universidad Simon Bolivar, Apartado 89000, Caracas 1080-A (Venezuela, Bolivarian Republic of)
2010-11-01
We obtain a general class of polynomial potentials for which the Schroedinger operator has a discrete spectrum. This class includes all the scalar potentials in membrane, 5-brane, p-branes, multiple M2 branes, BLG and ABJM theories. We provide a proof of the discreteness of the spectrum of the associated Schroedinger operators. This is the first step in order to analyze BLG and ABJM supersymmetric theories from a non-perturbative point of view.
Petrova, L. I.
2008-01-01
Historically, the concept of "conservation laws" has acquired different meanings in the branches of physics connected with field theory and in the physics of material systems (continuous media). In field theory, "conservation laws" are those that assert the existence of conserved physical quantities or objects. These are conservation laws for physical fields. In contrast, in the physics (and mechanics) of material systems the concept of "conservation laws" relates to co...
International Nuclear Information System (INIS)
A method is proposed that makes it possible to determine whether a timelike singularity corresponds to a point, linear, or other type of gravitational field source. It is shown that in the general theory of relativity it is also possible to have sources of a quite different type with no analogs in a space of finite curvature. An analysis is made of some well-known solutions containing timelike singularities whose type varies depending on the signs of the functions that occur in the solutions. The form of the solution near simple linear sources [W. Israel, Phys. Rev. D15, 935 (1977)] and generalized anisotropic solutions [S. L. Parnovsky, Physica (Utrecht) 104A, 210 (1980); E. M. Lifshitz and I. M. Khalatnikov, Sov. Phys. Usp. 6, 359 (1963)] is determined more accurately; the space-time described by the γ metric (3) is completely investigated; and the form of the metric near the ends and singular points of linear Weyl singularities is found
Cairns, Iver H.; Robinson, P. A.
1998-01-01
Existing, competing theories for coronal and interplanetary type III solar radio bursts appeal to one or more of modulational instability, electrostatic (ES) decay processes, or stochastic growth physics to preserve the electron beam, limit the levels of Langmuir-like waves driven by the beam, and produce wave spectra capable of coupling nonlinearly to generate the observed radio emission. Theoretical constraints exist on the wavenumbers and relative sizes of the wave bandwidth and nonlinear growth rate for which Langmuir waves are subject to modulational instability and the parametric and random phase versions of ES decay. A constraint also exists on whether stochastic growth theory (SGT) is appropriate. These constraints are evaluated here using the beam, plasma, and wave properties (1) observed in specific interplanetary type III sources, (2) predicted nominally for the corona, and (3) predicted at heliocentric distances greater than a few solar radii by power-law models based on interplanetary observations. It is found that the Langmuir waves driven directly by the beam have wavenumbers that are almost always too large for modulational instability but are appropriate to ES decay. Even for waves scattered to lower wavenumbers (by ES decay, for instance), the wave bandwidths are predicted to be too large and the nonlinear growth rates too small for modulational instability to occur for the specific interplanetary events studied or the great majority of Langmuir wave packets in type III sources at arbitrary heliocentric distances. Possible exceptions are for very rare, unusually intense, narrowband wave packets, predominantly close to the Sun, and for the front portion of very fast beams traveling through unusually dilute, cold solar wind plasmas. Similar arguments demonstrate that the ES decay should proceed almost always as a random phase process rather than a parametric process, with similar exceptions. 
These results imply that it is extremely rare for modulational instability or parametric decay to proceed in type III sources at any heliocentric distance: theories for type III bursts based on modulational instability or parametric decay are therefore not viable in general. In contrast, the constraint on SGT can be satisfied and random phase ES decay can proceed at all heliocentric distances under almost all circumstances. (The contrary circumstances involve unusually slow, broad beams moving through unusually hot regions of the Corona.) The analyses presented here strongly justify extending the existing SGT-based model for interplanetary type III bursts (which includes SGT physics, random phase ES decay, and specific electromagnetic emission mechanisms) into a general theory for type III bursts from the corona to beyond 1 AU. This extended theory enjoys strong theoretical support, explains the characteristics of specific interplanetary type III bursts very well, and can account for the detailed dynamic spectra of type III bursts from the lower corona and solar wind.
On the error term in a Parseval type formula in the theory of Ramanujan expansions
Murty, M. Ram; Saha, Biswajyoti
2015-01-01
Given two arithmetical functions $f,g$ we derive, under suitable conditions, asymptotic formulas with error terms for the convolution sums $\sum_{n \le N} f(n) g(n+h)$, building on an earlier work of Gadiyar, Murty and Padma. A key role in our method is played by the theory of Ramanujan expansions for arithmetical functions.
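Ramanujan expansions are built from the Ramanujan sums $c_q(n)$, which can be computed either as exponential sums over residues coprime to $q$ or in closed form via Hölder's formula $c_q(n) = \mu(q/g)\,\varphi(q)/\varphi(q/g)$ with $g=\gcd(n,q)$. A sketch checking the two agree (background material, not the paper's method):

```python
import cmath
from math import gcd

def c_exp(q, n):
    """Ramanujan sum c_q(n) as an exponential sum over residues coprime to q."""
    s = sum(cmath.exp(2j * cmath.pi * a * n / q)
            for a in range(1, q + 1) if gcd(a, q) == 1)
    return round(s.real)   # the sum is a (real) integer

def phi(m):
    """Euler totient, by direct count (fine for small m)."""
    return sum(1 for a in range(1, m + 1) if gcd(a, m) == 1)

def mobius(m):
    """Moebius function via trial division."""
    if m == 1:
        return 1
    res, p = 1, 2
    while p * p <= m:
        if m % p == 0:
            m //= p
            if m % p == 0:
                return 0       # squared prime factor
            res = -res
        p += 1
    if m > 1:
        res = -res
    return res

def c_holder(q, n):
    """Hoelder's evaluation: c_q(n) = mu(q/g) * phi(q) / phi(q/g), g = gcd(n, q)."""
    g = gcd(n, q)
    return mobius(q // g) * phi(q) // phi(q // g)
```

The closed form is what makes these sums usable as an orthogonal-like basis in expansions of arithmetical functions.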
International Nuclear Information System (INIS)
A generalization of the renormalization group equations to theories with arbitrary Lagrangians, including nonrenormalizable ones, is presented. In the framework of dimensional regularization, these equations enable us to determine the coefficient functions of the higher poles starting from those of the simple pole, i.e. the generalized β-functions.
Kase, Ryotaro; Gergely, László Á.; Tsujikawa, Shinji
2014-12-01
We consider perturbations of a static and spherically symmetric background endowed with a metric tensor and a scalar field in the framework of the effective field theory of modified gravity. We employ the previously developed 2+1+1 canonical formalism of a double Arnowitt-Deser-Misner (ADM) decomposition of space-time, which singles out both the time and radial directions. Our building block is a general gravitational action that depends on scalar quantities constructed from the 2+1+1 canonical variables and the lapse. Variation of the action up to first order in perturbations gives rise to three independent background equations of motion, as expected from spherical symmetry. The dynamical equations of linear perturbations follow from the second-order Lagrangian after a suitable gauge fixing. We derive conditions for the avoidance of ghosts and Laplacian instabilities for the odd-type perturbations. We show that our results not only incorporate those derived in the most general scalar-tensor theories with second-order equations of motion (the Horndeski theories) but they can be applied to more generic theories beyond Horndeski.
Przytycki, J H
1995-01-01
We describe in this talk three methods of constructing different links with the same Jones-type invariant. All three can be thought of as generalizations of mutation. The first combines the satellite construction with mutation. The second uses the notion of a rotant, taken from graph theory; the third, invented by Jones, transplants into knot theory the idea of the Yang-Baxter equation with the spectral parameter (an idea employed by Baxter in the theory of solvable models in statistical mechanics). We extend the Jones result and relate it to Traczyk's work on rotors of links. We also show further applications of the Jones idea, e.g. to 3-string links in the solid torus. We stress the fact that ideas coming from various areas of mathematics (and theoretical physics) have been fruitfully used in knot theory, and vice versa. (This is the detailed version of the talk given at the Banach Center Colloquium, Warsaw, Poland, March 24, 1994: ``W poszukiwaniu nietrywialnego wezla z trywialnym wielomianem Jonesa: grafy i me...
Session Types = Intersection Types + Union Types
Padovani, Luca
2011-01-01
We propose a semantically grounded theory of session types which relies on intersection and union types. We argue that intersection and union types are natural candidates for modeling branching points in session types and we show that the resulting theory overcomes some important defects of related behavioral theories. In particular, intersections and unions provide a native solution to the problem of computing joins and meets of session types. Also, the subtyping relation turns out to be a pre-congruence, while this is not always the case in related behavioral theories.
Yao, David; Krempl, Erhard
1988-01-01
The isotropic theory of viscoplasticity based on overstress does not use a yield surface or a loading and unloading criterion. The inelastic strain rate depends on the overstress, the difference between the stress and the equilibrium stress, and is assumed to be rate dependent. Special attention is paid to the modeling of elastic regions. For the modeling of cyclic hardening, such as observed in annealed Type 304 stainless steel, an additional growth law for a scalar quantity which represents the rate-independent asymptotic value of the equilibrium stress is added. It is made to increase with inelastic deformation using a new scalar measure which differentiates between nonproportional and proportional loading. The theory is applied to correlate uniaxial data under two-step amplitude loading, including the effect of further hardening at the high amplitude, and proportional and nonproportional cyclic loadings. Results are compared with corresponding experiments.
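The defining feature described here, inelastic flow driven by overstress with no yield surface, can be sketched in one dimension with an explicit time integration. This is a deliberately simplified stand-in (linear equilibrium stress, invented parameter values), not the paper's full constitutive model:

```python
# Minimal 1D overstress-type viscoplastic model under constant strain rate.
# Inelastic strain rate = (stress - equilibrium stress) / eta, with no yield
# surface or loading/unloading criterion. Units and values are assumptions.
E, Et, eta = 200e3, 5e3, 1e6      # elastic modulus, hardening modulus (MPa), viscosity (MPa*s)
rate, dt, n = 1e-4, 0.1, 2000     # applied strain rate (1/s), time step (s), steps

eps = sigma = 0.0
history = []
for _ in range(n):
    g = Et * eps                   # simplified equilibrium stress (linear in strain)
    eps_in_rate = (sigma - g) / eta   # overstress drives inelastic flow at all stresses
    sigma += E * (rate - eps_in_rate) * dt   # elastic response to the remaining strain rate
    eps += rate * dt
    history.append(sigma)
```

The stress rises elastically at first, then bends over as the overstress saturates, mimicking rate-dependent hardening without any yield criterion.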
Uncertainty Quantification in Climate Modeling
Sargsyan, K.; Safta, C.; Berry, R.; Debusschere, B.; Najm, H.
2011-12-01
We address challenges that sensitivity analysis and uncertainty quantification methods face when dealing with complex computational models. In particular, climate models are computationally expensive and typically depend on a large number of input parameters. We consider the Community Land Model (CLM), which consists of a nested computational grid hierarchy designed to represent the spatial heterogeneity of the land surface. Each computational cell can be composed of multiple land types, and each land type can incorporate one or more sub-models describing the spatial and depth variability. Even for simulations at a regional scale, the computational cost of a single run is quite high and the number of parameters that control the model behavior is very large. Therefore, the parameter sensitivity analysis and uncertainty propagation face significant difficulties for climate models. This work employs several algorithmic avenues to address some of the challenges encountered by classical uncertainty quantification methodologies when dealing with expensive computational models, specifically focusing on the CLM as a primary application. First of all, since the available climate model predictions are extremely sparse due to the high computational cost of model runs, we adopt a Bayesian framework that effectively incorporates this lack-of-knowledge as a source of uncertainty, and produces robust predictions with quantified uncertainty even if the model runs are extremely sparse. In particular, we infer Polynomial Chaos spectral expansions that effectively encode the uncertain input-output relationship and allow efficient propagation of all sources of input uncertainties to outputs of interest. Secondly, the predictability analysis of climate models strongly suffers from the curse of dimensionality, i.e. the large number of input parameters. 
While single-parameter perturbation studies can be efficiently performed in a parallel fashion, the multivariate uncertainty analysis requires a large number of training runs, as well as an output parameterization with respect to a fast-growing spectral basis set. To alleviate this issue, we adopt the Bayesian view of compressive sensing, well-known in the image recognition community. The technique efficiently finds a sparse representation of the model output with respect to a large number of input variables, effectively obtaining a reduced order surrogate model for the input-output relationship. The methodology is preceded by a sampling strategy that takes into account input parameter constraints by an initial mapping of the constrained domain to a hypercube via the Rosenblatt transformation, which preserves probabilities. Furthermore, a sparse quadrature sampling, specifically tailored for the reduced basis, is employed in the unconstrained domain to obtain accurate representations. The work is supported by the U.S. Department of Energy's CSSEF (Climate Science for a Sustainable Energy Future) program. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
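The Polynomial Chaos expansions described above can be illustrated in one dimension: project model outputs onto probabilists' Hermite polynomials of a standard-normal input via least squares. This is a minimal regression sketch (a toy model and low order, not the CLM setup or the sparse Bayesian compressive-sensing fit the abstract describes):

```python
import numpy as np
from numpy.polynomial import hermite_e as He

# Least-squares polynomial chaos surrogate for a scalar "model" of one
# standard-normal input xi.
rng = np.random.default_rng(42)
xi = rng.standard_normal(2000)        # input samples
y = xi**2 + 0.1 * xi                  # model evaluations (toy model)

degree = 3
V = He.hermevander(xi, degree)        # basis He_0..He_3 evaluated at the samples
coeffs, *_ = np.linalg.lstsq(V, y, rcond=None)
# Since x^2 = He_2(x) + 1, the exact expansion is coeffs = [1, 0.1, 1, 0],
# and coeffs[0] is the surrogate's mean prediction.
```

Once the coefficients are known, moments of the output follow analytically from the orthogonality of the basis, which is what makes propagation of input uncertainty cheap.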
Smirnov-Rueda, R
2005-01-01
A close look into the mathematical and conceptual structure of classical field theories reveals serious inconsistencies in their common basis. In other words, we claim in this work to have come across two severe mathematical blunders in the very foundations of theoretical hydrodynamics. One defect concerns the traditional treatment of time derivatives in the Eulerian hydrodynamic description. The other resides in the conventional demonstration of the so-called Convection Theorem. Both approaches are thought to be necessary for cross-verification of the standard differential form of the continuity equation. Any revision of these fundamental results might have important implications for all classical field theories. A rigorous reconsideration of time derivatives in the Eulerian description shows that it evokes a Minkowski metric for any flow-field domain without any prior postulation. The mathematical approach is developed within the framework of congruences for a general 4-dimensional differentiable manifold and the final...
Probabilistic bounding analysis in the Quantification of Margins and Uncertainties
International Nuclear Information System (INIS)
The current challenge of nuclear weapon stockpile certification is to assess the reliability of complex, high-consequence, and aging systems without the benefit of full-system test data. In the absence of full-system testing, disparate kinds of information are used to inform certification assessments such as archival data, experimental data on partial systems, data on related or similar systems, computer models and simulations, and expert knowledge. In some instances, data can be scarce and information incomplete. The challenge of Quantification of Margins and Uncertainties (QMU) is to develop a methodology to support decision-making in this informational context. Given the difficulty presented by mixed and incomplete information, we contend that the uncertainty representation for the QMU methodology should be expanded to include more general characterizations that reflect imperfect information. One type of generalized uncertainty representation, known as probability bounds analysis, constitutes the union of probability theory and interval analysis where a class of distributions is defined by two bounding distributions. This has the advantage of rigorously bounding the uncertainty when inputs are imperfectly known. We argue for the inclusion of probability bounds analysis as one of many tools that are relevant for QMU and demonstrate its usefulness as compared to other methods in a reliability example with imperfect input information.
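A minimal sketch of the probability-bounds idea: when only an interval is known for the mean of a normally distributed input, the p-box is the envelope of all member CDFs, and any event probability is rigorously bracketed by the two bounding distributions. The numbers below are invented for illustration.

```python
from math import erf, sqrt

def normal_cdf(x, mu, sigma):
    """CDF of a normal distribution via the error function."""
    return 0.5 * (1.0 + erf((x - mu) / (sigma * sqrt(2.0))))

# Imperfect knowledge: the mean is only known to lie in [9, 11]; sigma = 2.
mu_lo, mu_hi, sigma = 9.0, 11.0, 2.0

# Rigorous bounds on a failure probability P(X < 6): the extreme members
# of the family give the lower and upper envelope of the p-box there.
threshold = 6.0
p_low = normal_cdf(threshold, mu_hi, sigma)   # right-most member
p_high = normal_cdf(threshold, mu_lo, sigma)  # left-most member
print(f"P(X < {threshold}) lies in [{p_low:.4f}, {p_high:.4f}]")
```

A point-valued probability would hide the fact that the available information only supports an interval; the p-box makes that epistemic gap explicit.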
Adaptation of learning resources based on the MBTI theory of psychological types
Amel Behaz; Mahieddine Djoudi
2012-01-01
Today, the resources available on the web are increasing significantly. The motivation to disseminate knowledge, and for learners to acquire it, is central to learning. However, learners differ in the ways of learning that suit them best. The objective of the work presented in this paper is to study how models from cognitive theories can be integrated with ontologies for the adaptation of educational resources. The goal is to provide the system capabilities to c...
Electrostatic field in superconductors IV: theory of Ginzburg-Landau type.
Czech Academy of Sciences Publication Activity Database
Lipavský, P.; Koláček, Jan
Singapore : World Scientific Publ. Co, 2010 - (Kusmartsev, F.), s. 581-587 ISBN 978-981-4289-14-6. - (24). [International Workshop on Condensed Matter Theories /32./. Loughborough (GB), 12.08.2008-19.08.2008] Institutional research plan: CEZ:AV0Z10100521 Keywords : superconductor * electric field Subject RIV: BM - Solid Matter Physics ; Magnetism http://eproceedings.worldscinet.com/9789814289153/9789814289153.shtml
Electrostatic field in superconductors IV: theory of Ginzburg-Landau type.
Czech Academy of Sciences Publication Activity Database
Lipavský, Pavel; Koláček, Jan
2009-01-01
Roč. 23, 20-21 (2009), s. 4505-4511. ISSN 0217-9792 R&D Projects: GA ČR GA202/04/0585; GA ČR GA202/05/0173; GA AV ČR IAA1010312 Institutional research plan: CEZ:AV0Z10100521 Keywords : superconductivity * Ginzburg-Landau theory Subject RIV: BM - Solid Matter Physics ; Magnetism Impact factor: 0.408, year: 2009
Classical Morse theory revisited I -- Backward $\\lambda$-Lemma and homotopy type
Weber, Joa
2014-01-01
We introduce a tool, dynamical thickening, which overcomes the infamous discontinuity of the gradient flow endpoint map near non-degenerate critical points. More precisely, we interpret the stable foliations of certain Conley pairs $(N,L)$, established in [4], as a \\emph{dynamical thickening of the stable manifold}. As a first application and to illustrate efficiency of the concept we reprove a fundamental theorem of classical Morse theory, Milnor's homotopical cell attachme...
The double Mellin-Barnes type integrals and their applications to convolution theory
Hai, Nguyen Thanh
1992-01-01
This book presents new results in the theory of the double Mellin-Barnes integrals, popularly known as the general H-function of two variables. A general integral convolution is constructed by the authors; it contains the Laplace convolution as a particular case and possesses a factorization property for the one-dimensional H-transform. Many examples of convolutions for classical integral transforms are obtained, and they can be applied to the evaluation of series and integrals.
International Nuclear Information System (INIS)
A Gaussian-type quadrature formula is derived for Lebesgue-Stieltjes integrals pertaining to neutron transport theory (one-velocity, isotropic scattering, plane geometry). The quadrature formula originates from an orthogonality property satisfied by the well-known g_n(c, ·) functions which appear in the solution by Legendre expansion of the transport equation. The quadrature formula thus obtained reduces to the classical Gaussian one in the case of a purely capturing medium. An application to the Milne problem is given. Examples of numerical quadratures are carried out in the appendix
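The classical Gaussian rule that the formula reduces to can be illustrated directly. A sketch using NumPy's Gauss-Legendre nodes, exploiting the standard fact that an n-point rule integrates polynomials of degree up to 2n-1 exactly on [-1, 1]:

```python
import numpy as np

# 5-point Gauss-Legendre rule: exact for polynomials up to degree 9.
nodes, weights = np.polynomial.legendre.leggauss(5)

# Integrate x^8 on [-1, 1]; the exact value is 2/9.
approx = np.sum(weights * nodes ** 8)
exact = 2.0 / 9.0
print(abs(approx - exact))
```

The transport-theory rule of the abstract plays the same role, but with nodes and weights tied to the orthogonality of the g_n functions rather than of the Legendre polynomials.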
Modeling the size dependent pull-in instability of beam-type NEMS using strain gradient theory
Scientific Electronic Library Online (English)
Ali, Koochi; Hamid M., Sedighi; Mohamadreza, Abadyan.
Full Text Available It is well recognized that the size dependency of material characteristics, i.e. the size effect, often plays a significant role in the performance of nano-structures. Herein, strain gradient continuum theory is employed to investigate the size dependent pull-in instability of beam-type nano-electromechanical systems (NEMS). The two most common types of NEMS, i.e. the nano-bridge and the nano-cantilever, are considered. The effects of the electrostatic field and dispersion forces, i.e. Casimir and van der Waals (vdW) attractions, have been considered in the nonlinear governing equations of the systems. Two different solution methods, numerical and Rayleigh-Ritz, have been employed to solve the constitutive differential equations of the system. The effect of dispersion forces, the size dependency and the importance of the coupling between them on the instability performance are discussed.
AdS3 ×w (S3 × S3 × S1) solutions of type IIB string theory
International Nuclear Information System (INIS)
We analyse a recently constructed class of local solutions of type IIB supergravity that consist of a warped product of AdS3 with a seven-dimensional internal space. In one duality frame the only other nonvanishing fields are the NS three-form and the dilaton. We analyse in detail how these local solutions can be extended to globally well-defined solutions of type IIB string theory, with the internal space having topology S3 x S3 x S1 and with properly quantised three-form flux. We show that many of the dual (0,2) SCFTs are exactly marginal deformations of the (0,2) SCFTs whose holographic duals are warped products of AdS3 with seven-dimensional manifolds of topology S3 x S2 x T2. (orig.)
International Nuclear Information System (INIS)
We study T^(11-D-q) × T^q/Z_n orbifold compactifications of eleven-dimensional supergravity and M-theory using a purely algebraic method. Given the description of maximal supergravities reduced on square tori as non-linear coset σ-models, we exploit the mapping between scalar fields of the reduced theory and directions in the tangent space over the coset to construct the orbifold action as a non-Cartan preserving finite order inner automorphism of the complexified U-duality algebra. Focusing on the exceptional series of Cremmer-Julia groups, we compute the residual U-duality symmetry after orbifold projection and determine the reality properties of their corresponding Lie algebras. We carry out this analysis as far as the hyperbolic e10 algebra, conjectured to be a symmetry of M-theory. In this case the residual subalgebras are shown to be described by a special class of Borcherds and Kac-Moody algebras, modded out by their centres and derivations. Furthermore, we construct an alternative description of the orbifold action in terms of equivalence classes of shift vectors, and, in D = 1, we show that a root of e10 can always be chosen as the class representative. Then, in the framework of the E10(10)/K(E10(10)) effective σ-model approach to M-theory near a spacelike singularity, we identify these roots with brane configurations stabilizing the corresponding orbifolds. In the particular case of Z2 orbifolds of M-theory descending to type 0' orientifolds, we argue that these roots can be interpreted as pairs of magnetized D9- and D9'-branes, carrying the lower-dimensional brane charges required for tadpole cancellation. More generally, we provide a classification of all such roots generating Z_n product orbifolds for n ≤ 6, and hint at their possible interpretation
Dimitrov, Bogdan G
2009-01-01
On the basis of the distinction between covariant and contravariant metric tensor components, a new (multivariable) cubic algebraic equation for reparametrization invariance of the gravitational Lagrangian has been derived and parametrized with complicated non-elliptic functions, depending on the (elliptic) Weierstrass function and its derivative. This is different from standard algebraic geometry, where only two-dimensional cubic equations are parametrized with elliptic functions and not multivariable ones. Physical applications of the approach have been considered in reference to theories with extra dimensions. The so-called "length function" l(x) has been introduced and found as a solution of quasilinear partial differential equations for two different cases of "compactification + rescaling" and "rescaling + compactification". New physically important relations (inequalities) between the parameters in the action are established, which cannot be derived in the case $l=1$ of the standard gravitati...
Fixed point theory for compact absorbing contractions in extension type spaces
Donal O'Regan
2010-01-01
Several new fixed point results for self-maps in extension type spaces are presented in this paper. In particular we discuss compact absorbing contractions.
Warped anti-de Sitter spaces from brane intersections in type II string theory
Orlando, Domenico
2010-01-01
We consider explicit type II string constructions of backgrounds containing warped and squashed anti-de Sitter spaces. These are obtained via Hopf T-duality from brane intersections, including dyonic black strings, plane waves and monopoles. We also study the supersymmetry of these solutions and discuss special values of the deformation parameters.
Weyl Group Multiple Dirichlet Series Type A Combinatorial Theory (AM-175)
Brubaker, Ben; Friedberg, Solomon
2011-01-01
Weyl group multiple Dirichlet series are generalizations of the Riemann zeta function. Like the Riemann zeta function, they are Dirichlet series with analytic continuation and functional equations, having applications to analytic number theory. By contrast, these Weyl group multiple Dirichlet series may be functions of several complex variables and their groups of functional equations may be arbitrary finite Weyl groups. Furthermore, their coefficients are multiplicative up to roots of unity, generalizing the notion of Euler products. This book proves foundational results about these series an
International Nuclear Information System (INIS)
An attempt has been made to obtain a strategy coherent with the available instruments that could be implemented with future developments. A calculation methodology was developed for fuel reloads in PWR reactors, which involves cell calculation with the HAMMER-TECHNION code and neutronics calculation with the CITATION code. The management strategy adopted consists of changing fuel element positions at the beginning of each reactor cycle in order to decrease the radial peak factor. Two-dimensional, two-group first-order perturbation theory was used for the mathematical modeling. (L.C.J.A.)
Algebraic Signal Processing Theory: Cooley-Tukey Type Algorithms for DCTs and DSTs
Pueschel, M; Pueschel, Markus; Moura, Jose M. F.
2007-01-01
This paper presents a systematic methodology based on the algebraic theory of signal processing to classify and derive fast algorithms for linear transforms. Instead of manipulating the entries of transform matrices, our approach derives the algorithms by stepwise decomposition of the associated signal models, or polynomial algebras. This decomposition is based on two generic methods or algebraic principles that generalize the well-known Cooley-Tukey FFT and make the algorithms' derivations concise and transparent. Application to the 16 discrete cosine and sine transforms yields a large class of fast algorithms, many of which have not been found before.
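The Cooley-Tukey principle that the paper generalizes can be sketched in its familiar radix-2 decimation-in-time form. This is the generic textbook recursion, not the authors' algebraic derivation: a length-n transform splits into two length-n/2 transforms combined by twiddle factors.

```python
import cmath

def fft(x):
    """Radix-2 decimation-in-time Cooley-Tukey FFT; len(x) must be a power of 2."""
    n = len(x)
    if n == 1:
        return list(x)
    even = fft(x[0::2])   # DFT of even-indexed samples
    odd = fft(x[1::2])    # DFT of odd-indexed samples
    out = [0j] * n
    for k in range(n // 2):
        twiddle = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + twiddle
        out[k + n // 2] = even[k] - twiddle
    return out

signal = [1.0, 2.0, 3.0, 4.0]
print(fft(signal))  # → [10, -2+2j, -2, -2-2j] (as complex numbers)
```

In the paper's language, the splitting step corresponds to a stepwise decomposition of the underlying polynomial algebra, which is what lets the same idea cover the 16 DCT/DST variants.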
MAMA Software Features: Quantification Verification Documentation-1
Energy Technology Data Exchange (ETDEWEB)
Ruggiero, Christy E. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Porter, Reid B. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2014-05-21
This document reviews the verification of the basic shape quantification attributes in the MAMA software against hand calculations in order to show that the calculations are implemented mathematically correctly and give the expected quantification results.
Nonlinear Spinor Fields in LRS Bianchi type-I spacetime: Theory and observation
Saha, Bijan
2015-01-01
Within the scope of a LRS Bianchi type-I cosmological model we study the role of the nonlinear spinor field in the evolution of the Universe. In doing so we consider a polynomial type of nonlinearity that describes different stages of the evolution. Finally we also use observational data to fix the problem parameters that best match the real picture of the evolution. Assessing the age of the Universe, we find 15 billion years in the case of a soft beginning of expansion (initial expansion speed at the singularity equal to zero), whereas in the case of a hard beginning (nontrivial initial speed) the Universe is found to be 13.7 billion years old.
Noether-type theory for discrete mechanico-electrical dynamical systems with nonregular lattices
Fu, Jingli; Chen, Liqun; Chen, Benyong
2010-09-01
We investigate Noether symmetries and conservation laws of the discrete mechanico-electrical systems with nonregular lattices. The operators of discrete transformation and discrete differentiation to the right and left are introduced for the systems. Based on the invariance of discrete Hamilton action on nonregular lattices of the systems with the dissipation forces under the infinitesimal transformations with respect to the time, generalized coordinates and generalized charge quantities, we work out the discrete analog of the generalized variational formula. From this formula we derive the discrete analog of generalized Noether-type identity, and then we present the generalized quasi-extremal equations and properties of these equations for the systems. We also obtain the discrete analog of Noether-type conserved laws and the discrete analog of generalized Noether theorems for the systems. Finally we use an example to illustrate these results.
BOUSSINESQ SYSTEMS OF BONA-SMITH TYPE ON PLANE DOMAINS: THEORY AND NUMERICAL ANALYSIS
Dougalis, Vassilios; Mitsotakis, Dimitrios; Saut, Jean-Claude
2009-01-01
We consider a class of Boussinesq systems of Bona-Smith type in two space dimensions approximating surface wave flows modelled by the three-dimensional Euler equations. We show that various initial-boundary-value problems for these systems, posed on a bounded plane domain, are well posed locally in time. In the case of reflective boundary conditions, the systems are discretized by a modified Galerkin method which is proved to converge in $L^2$ at an optimal rate. Numerical ex...
A constrained theory of non-BCS type superconductivity in gapped Graphene
Vyas, Vivek M; Panigrahi, Prasanta. K.
2011-01-01
We show that gapped Graphene, with a local constraint that the currents arising from the two valley fermions are exactly equal, exhibits a non-BCS type of superconductivity. Unlike conventional mechanisms, this superconductivity phenomenon does not require any pairing. We estimate the critical temperature for the superconducting-to-normal transition via the Berezinskii-Kosterlitz-Thouless mechanism, and find that it is proportional to the gap.
To the theory of $q$-ary Steiner and other-type trades
Krotov, Denis; Mogilnykh, Ivan; Potapov, Vladimir
2014-01-01
We introduce the concept of a clique bitrade, which generalizes several known types of bitrades, including latin bitrades, Steiner $T(k-1,k,v)$ bitrades, extended $1$-perfect bitrades. For a distance-transitive graph $G$, we show a one-to-one correspondence between the clique bitrades that meet the weight-distribution lower bound on the cardinality and the bipartite isometric subgraphs that are distance-regular with certain parameters. As an application of the results, we fi...
Theory of the normal modes of vibrations in the lanthanide type crystals
Energy Technology Data Exchange (ETDEWEB)
Acevedo, Roberto [Instituto de Ciencias Basicas. Facultad de Ingenieria, Universidad Diego Portales, Avenida Ejercito 441, Santiago (Chile); Soto-Bubert, Andres, E-mail: roberto.acevedo@umayor.cl
2008-11-01
For the lanthanide type crystals, a vast and rich, though incomplete, amount of experimental data has been accumulated from linear and non-linear optics during the last decades. The main goal of the current research work is to report a new methodology and strategy to put forward a more representative approach to account for the normal modes of vibration of a complex N-body system. For illustrative purposes, the chloride lanthanide type crystals Cs{sub 2}NaLnCl{sub 6} have been chosen, and we develop new convergence tests as well as a criterion to deal with the details of the F-matrix (potential energy matrix). A novel and useful concept of natural potential energy distributions (NPED) is introduced and examined throughout the course of this work. The diagonal and non-diagonal contributions to these NPED values are evaluated explicitly for a series of these crystals. Our model is based upon a total of seventy-two internal coordinates and ninety-eight internal Hooke-type force constants. An optimization procedure is applied with reference to the series of chloride lanthanide crystals, and it is shown that the strategy and model adopted are sound from both chemical and physical viewpoints. We can argue that the current model is able to accommodate a number of interactions and to provide us with very useful physical insight. The limitations and advantages of the current model and the most likely sources for improvements are discussed in detail.
Chern-Simons and Born-Infeld gravity theories and Maxwell algebras type
Concha, P K; Rodríguez, E K; Salgado, P
2014-01-01
Recently it was shown that standard odd- and even-dimensional General Relativity can be obtained from a $(2n+1)$-dimensional Chern-Simons Lagrangian invariant under the $B_{2n+1}$ algebra and from a $(2n)$-dimensional Born-Infeld Lagrangian invariant under a subalgebra $\\cal{L}^{B_{2n+1}}$, respectively. Very recently, it was shown that the generalized In\\"on\\"u-Wigner contraction of the generalized AdS-Maxwell algebras provides Maxwell algebras types $\\cal{M}_{m}$ which correspond to the so-called $B_{m}$ Lie algebras. In this article we report on a simple model that suggests a mechanism by which standard odd-dimensional General Relativity may emerge as a weak coupling constant limit of a $(2p+1)$-dimensional Chern-Simons Lagrangian invariant under the Maxwell algebra type $\\cal{M}_{2m+1}$, if and only if $m\\geq p$. Similarly, we show that standard even-dimensional General Relativity emerges as a weak coupling constant limit of a $(2p)$-dimensional Born-Infeld type Lagrangian invariant under a subalgebra $\\cal{L}...
Recent progress on Kubas-type hydrogen-storage nanomaterials: from theories to experiments
Chung, ChiHye; Ihm, Jisoon; Lee, Hoonkyung
2015-06-01
Transition-metal (TM) atoms are known to form TM-H2 complexes, which are collectively called Kubas dihydrogen complexes. The TM-H2 complexes are formed through the hybridization of the TM d orbitals with the H2 σ and σ* orbitals. The adsorption energy of H2 molecules in the TM-H2 complexes is usually within the range of energy required for reversible H2 storage at room temperature and ambient pressure (-0.4 to -0.2 eV/H2). Thus, TM-H2 complexes have been investigated as potential Kubas-type hydrogen-storage materials. Recently, TM-decorated nanomaterials have attracted much attention because of their promising high capacity and reversibility as Kubas-type hydrogen-storage materials. The hydrogen-storage capacity of TM-decorated nanomaterials is expected to be as large as ~9 wt%, which is suitable for certain vehicular applications. However, in TM-decorated nanostructures, the TM atoms prefer to form clusters because of their large cohesive energy (approximately 4 eV), which leads to a significant reduction in the hydrogen-storage capacity. On the other hand, Ca atoms can form complexes with H2 molecules via Kubas-like interactions. Ca atoms attached to nanomaterials have been reported to adsorb as many H2 molecules as TM atoms. Ca atoms tend to cluster less because of the small cohesive energy of bulk Ca (1.83 eV), which is much smaller than those of bulk TMs. These observations suggest that Kubas interactions can occur in d-orbital-free elements, thereby making Ca a more suitable element for attracting H2 in hydrogen-storage materials. Recently, Kubas-type TM-based hydrogen-storage materials were experimentally synthesized, and the Kubas-type interactions were measured to be stronger than van der Waals interactions. In this review, the recent progress of Kubas-type hydrogen-storage materials will be discussed from both theoretical and experimental viewpoints.
Statistical image quantification toward optimal scan fusion and change quantification
Potesil, Vaclav; Zhou, Xiang Sean
2007-03-01
Recent advance of imaging technology has brought new challenges and opportunities for automatic and quantitative analysis of medical images. With broader accessibility of more imaging modalities for more patients, fusion of modalities/scans from one time point and longitudinal analysis of changes across time points have become the two most critical differentiators to support more informed, more reliable and more reproducible diagnosis and therapy decisions. Unfortunately, scan fusion and longitudinal analysis are both inherently plagued with increased levels of statistical errors. A lack of comprehensive analysis by imaging scientists and a lack of full awareness by physicians pose potential risks in clinical practice. In this paper, we discuss several key error factors affecting imaging quantification, studying their interactions, and introducing a simulation strategy to establish general error bounds for change quantification across time. We quantitatively show that image resolution, voxel anisotropy, lesion size, eccentricity, and orientation are all contributing factors to quantification error; and there is an intricate relationship between voxel anisotropy and lesion shape in affecting quantification error. Specifically, when two or more scans are to be fused at feature level, optimal linear fusion analysis reveals that scans with voxel anisotropy aligned with lesion elongation should receive a higher weight than other scans. As a result of such optimal linear fusion, we will achieve a lower variance than naïve averaging. Simulated experiments are used to validate theoretical predictions. Future work based on the proposed simulation methods may lead to general guidelines and error lower bounds for quantitative image analysis and change detection.
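The optimal linear fusion argument can be made concrete with a two-scan toy example, assuming independent measurement errors with known variances (the numbers here are illustrative, not taken from the paper): the minimum-variance linear combination weights each scan inversely to its variance, and the fused variance is strictly below that of naive averaging.

```python
# Two scans measure the same quantity with independent errors.
var_a, var_b = 1.0, 4.0  # scan A is more precise than scan B

# Optimal linear fusion: weights proportional to inverse variance.
w_a = (1.0 / var_a) / (1.0 / var_a + 1.0 / var_b)
w_b = 1.0 - w_a

# Variance of the optimally fused estimate vs. the plain average.
fused_var = 1.0 / (1.0 / var_a + 1.0 / var_b)
naive_var = (var_a + var_b) / 4.0  # Var((A + B) / 2) for independent errors

print(w_a, w_b, fused_var, naive_var)
```

The same reasoning underlies the paper's recommendation that scans whose voxel anisotropy aligns with lesion elongation, being effectively lower-variance for that lesion, should receive a higher weight.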
A New Survey of types of Uncertainties in Nonlinear System with Fuzzy Theory
Directory of Open Access Journals (Sweden)
Fereshteh Mohammadi
2013-03-01
Full Text Available This paper is an attempt to introduce a new framework to handle both uncertainty and time in the spatial domain. The application of the fuzzy temporal constraint network (FTCN) method is proposed for the representation of and reasoning about uncertain temporal data. A brief introduction to fuzzy set theory is followed by a description of the FTCN method with its main algorithms. The paper then discusses the issues of incorporating the fuzzy approach into the current spatio-temporal processing framework. The general temporal data model is extended to accommodate uncertainties in temporal data and relationships among events. A theoretical FTCN process of fuzzy transition for imprecise information is introduced with an example. A summary of the paper is given, together with an outline of its contributions and future research directions.
Extension Theory and Krein-type Resolvent Formulas for Nonsmooth Boundary Value Problems
DEFF Research Database (Denmark)
Abels, Helmut; Grubb, Gerd
2014-01-01
The theory of selfadjoint extensions of symmetric operators, and more generally the theory of extensions of dual pairs, was implemented some years ago for boundary value problems for elliptic operators on smooth bounded domains. Recently, the questions have been taken up again for nonsmooth domains. In the present work we show that pseudodifferential methods can be used to obtain a full characterization, including Kreĭn resolvent formulas, of the realizations of nonselfadjoint second-order operators on C^{3/2+ε} domains; more precisely, we treat domains with B^{3/2}_{p,2}-smoothness and operators with H^1_q-coefficients, for suitable p > 2(n-1) and q > n. The advantage of the pseudodifferential boundary operator calculus is that the operators are represented by a principal part and a lower-order remainder, leading to regularity results; in particular we analyze resolvents, Poisson solution operators and Dirichlet-to-Neumann operators in this way, also in Sobolev spaces of negative order.
Uncertainty Quantification in Hybrid Dynamical Systems
Sahai, Tuhin
2011-01-01
Uncertainty quantification (UQ) techniques are frequently used to ascertain output variability in systems with parametric uncertainty. Traditional algorithms for UQ are either system-agnostic and slow (such as Monte Carlo) or fast with stringent assumptions on smoothness (such as polynomial chaos and Quasi-Monte Carlo). In this work, we develop a fast UQ approach for hybrid dynamical systems by extending the polynomial chaos methodology to these systems. To capture discontinuities, we use a wavelet-based Wiener-Haar expansion. We develop a boundary layer approach to propagate uncertainty through separable reset conditions. We also introduce a transport theory based approach for propagating uncertainty through hybrid dynamical systems. Here the expansion yields a set of hyperbolic equations that are solved by integrating along characteristics. The solution of the partial differential equation along the characteristics allows one to quantify uncertainty in hybrid or switching dynamical systems. The above method...
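The advantage of a Haar-type basis for discontinuous responses can be seen in a toy example (a sketch, not the authors' Wiener-Haar implementation): a step response of a uniform input is captured exactly by a two-term Haar expansion, whereas a smooth polynomial basis would exhibit Gibbs-type oscillations at the discontinuity.

```python
import numpy as np

def haar_mother(x):
    """Mother Haar wavelet on [0, 1]: +1 on [0, 0.5), -1 on [0.5, 1]."""
    return np.where(x < 0.5, 1.0, -1.0)

def step_response(x):
    """Discontinuous model output, like a hybrid-system state after a reset."""
    return np.where(x < 0.5, 0.0, 1.0)

# Midpoint samples of a uniform input on [0, 1].
n = 1000
xs = (np.arange(n) + 0.5) / n
f = step_response(xs)

# Project onto the orthonormal Haar basis {1, psi}: coefficients are averages.
c0 = f.mean()                       # mean of the response
c1 = (f * haar_mother(xs)).mean()   # first Haar coefficient

recon = c0 + c1 * haar_mother(xs)
err = np.max(np.abs(recon - f))
print(c0, c1, err)  # the two-term expansion reproduces the step exactly
```

Because the basis shares the discontinuity structure of the response, the coefficients converge fast, which is the same reason the Wiener-Haar expansion behaves well across the switching surfaces of hybrid systems.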
On the theory of type I solar radio bursts. Pt. 2
International Nuclear Information System (INIS)
It is shown that, if at some altitude in the corona (1) coronal electrons are accelerated perpendicularly to the magnetic field, and (2) A = ω²_pe/ω²_He decreases in the corona just above the acceleration region, one may explain the formation of electron beams with nonzero pitch angle and small momentum dispersion. These electron beams become unstable at some height above the acceleration region and produce a burst of radiation, the characteristics of which agree very well with the observed characteristics of type I bursts. (orig./BJ)
A recipe for EFT uncertainty quantification in nuclear physics
Furnstahl, R. J.; Phillips, D. R.; Wesolowski, S.
2014-01-01
The application of effective field theory (EFT) methods to nuclear systems provides the opportunity to rigorously estimate the uncertainties originating in the nuclear Hamiltonian. Yet this is just one source of uncertainty in the observables predicted by calculations based on nuclear EFTs. We discuss the goals of uncertainty quantification in such calculations and outline a recipe to obtain statistically meaningful error bars for their predictions. We argue that the differe...
Confinement in the Abelian-Higgs-type theories: string picture and field correlators
International Nuclear Information System (INIS)
Field correlators and the string representation are used as two complementary approaches for the description of confinement in the SU(N)-inspired dual Abelian-Higgs-type model. In the London limit of the simplest, SU(2)-inspired, model, bilocal electric field-strength correlators have been derived with accounting for the contributions to these averages produced by closed dual strings. The Debye screening in the plasma of such strings yields a novel long-range interaction between points lying on the contour of the Wilson loop. This interaction generates a Luescher-type term, even when one restricts oneself to the minimal surface, as it is usually done in the bilocal approximation to the stochastic vacuum model. Beyond the London limit, it has been shown that a modified interaction appears, which becomes reduced to the standard Yukawa one in the London limit. Finally, a string representation of the SU(N)-inspired model with the θ term, in the London limit, can be constructed
Six-Dimensional Superconformal Theories and their Compactifications from Type IIA Supergravity
Apruzzi, Fabio; Fazzi, Marco; Passias, Achilleas; Rota, Andrea; Tomasiello, Alessandro
2015-08-01
We describe three analytic classes of infinitely many AdS_d supersymmetric solutions of massive IIA supergravity, for d = 7, 5, 4. The three classes are related by simple universal maps. For example, the AdS7 × M3 solutions (where M3 is topologically S3) are mapped to AdS5 × Σ2 × M3', where Σ2 is a Riemann surface of genus g ≥ 2 and the metric on M3' is obtained by distorting M3 in a certain way. The solutions can have localized D6 or O6 sources, as well as an arbitrary number of D8-branes. The AdS7 case (previously known only numerically) is conjecturally dual to an NS5-D6-D8 system. The field theories in three and four dimensions are not known, but their number of degrees of freedom can be computed in the supergravity approximation. The AdS4 solutions have numerical "attractor" generalizations that might be useful for flux compactification purposes.
Secret symmetries of type IIB superstring theory on AdS3 × S3 × M4
Pittelli, Antonio; Torrielli, Alessandro; Wolf, Martin
2014-11-01
We establish features of so-called Yangian secret symmetries for AdS3 type IIB superstring backgrounds, thus verifying the persistence of such symmetries to this new instance of the AdS/CFT correspondence. Specifically, we find two a priori different classes of secret symmetry generators. One class of generators, anticipated from the previous literature, is more naturally embedded in the algebra governing the integrable scattering problem. The other class of generators is more elusive and somewhat closer in its form to its higher-dimensional AdS5 counterpart. All of these symmetries respect left-right crossing. In addition, by considering the interplay between left and right representations, we gain a new perspective on the AdS5 case. We also study the RTT-realisation of the Yangian in AdS3 backgrounds, thus establishing a new incarnation of the Beisert–de Leeuw construction.
Understanding microwave heating effects in single mode type cavities-theory and experiment.
Robinson, John; Kingman, Sam; Irvine, Derek; Licence, Peter; Smith, Alastair; Dimitrakis, Georgios; Obermayer, David; Kappe, C Oliver
2010-05-14
This paper explains the phenomena which occur in commercially available laboratory microwave equipment, and highlights several situations where experimental observations are often misinterpreted as a 'microwave effect'. Electromagnetic simulations and heating experiments were used to show the quantitative effects of solvent type, solvent volume, vessel material, vessel internals and stirring rate on the distribution of the electric field, the power density and the rate of heating. The simulations and experiments show how significant temperature gradients can exist within the heated materials, and that very different results can be obtained depending on the method used to measure temperature. The overall energy balance is shown for a number of different solvents, and the interpretation and implications of using the results from commercially available microwave equipment are discussed. PMID:20428555
XPS quantification of the hetero-junction interface energy
Energy Technology Data Exchange (ETDEWEB)
Ma, Z.S. [Key Laboratory of Low-Dimensional Materials and Application Technology of Ministry of Education, Institute for Quantum Engineering and Micro-Nano Energy Technology and Faculty of Materials, Optoelectronics and Physics, Xiangtan University, Hunan 411105 (China); Wang Yan [Key Laboratory of Low-Dimensional Materials and Application Technology of Ministry of Education, Institute for Quantum Engineering and Micro-Nano Energy Technology and Faculty of Materials, Optoelectronics and Physics, Xiangtan University, Hunan 411105 (China); School of Information and Electronic Engineering, Hunan University of Science and Technology, Hunan 411201 (China); Huang, Y.L.; Zhou, Z.F. [Key Laboratory of Low-Dimensional Materials and Application Technology of Ministry of Education, Institute for Quantum Engineering and Micro-Nano Energy Technology and Faculty of Materials, Optoelectronics and Physics, Xiangtan University, Hunan 411105 (China); Zhou, Y.C., E-mail: zhouyc@xtu.edu.cn [Key Laboratory of Low-Dimensional Materials and Application Technology of Ministry of Education, Institute for Quantum Engineering and Micro-Nano Energy Technology and Faculty of Materials, Optoelectronics and Physics, Xiangtan University, Hunan 411105 (China); Zheng Weitao [School of Materials Science, Jilin University, Changchun 130012 (China); Sun, Chang Q. [Key Laboratory of Low-Dimensional Materials and Application Technology of Ministry of Education, Institute for Quantum Engineering and Micro-Nano Energy Technology and Faculty of Materials, Optoelectronics and Physics, Xiangtan University, Hunan 411105 (China); School of Materials Science, Jilin University, Changchun 130012 (China); School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore 639798 (Singapore)
2013-01-15
Highlights: • Quantum entrapment or polarization dictates the performance of dopants, impurities, interfaces, alloys and compounds. • Interface bond energy, energy density, and atomic cohesive energy can be determined using XPS and our BOLS theory. • Presents a new and reliable method for catalyst design and identification. • Entrapment makes CuPd a p-type catalyst, while polarization makes AgPd an n-type catalyst. - Abstract: We present an approach for quantifying the heterogeneous interface bond energy using X-ray photoelectron spectroscopy (XPS). Firstly, from analyzing the XPS core-level shifts of the elemental surfaces we obtained the energy levels of an isolated atom and their bulk shifts for the constituent elements as a reference; then we measured the shifts of the specific energy levels upon interface alloy formation. Subtracting the referential spectrum from that collected from the alloy, we can distil the interface effect on the binding energy. Calibrated against the energy levels and their bulk shifts derived from the elemental surfaces, we can derive the bond energy, energy density, atomic cohesive energy, and free energy in the interface region. This approach has enabled us to clarify the dominance of quantum entrapment at the CuPd interface and the dominance of polarization at the AgPd and BeW interfaces as the origin of the interface energy change. The developed approach not only enhances the power of XPS but also enables quantification of the interface energy at the atomic scale, which has long been a challenge.
Development of flow network analysis code for block type VHTR core by linear theory method
International Nuclear Information System (INIS)
VHTR (Very High Temperature Reactor) is a high-efficiency nuclear reactor capable of generating hydrogen with a high coolant temperature. A PMR (Prismatic Modular Reactor) type reactor consists of hexagonal prismatic fuel blocks and reflector blocks. The flow paths in the prismatic VHTR core consist of coolant holes, bypass gaps and cross gaps. Complicated flow paths are formed in the core since the coolant holes and bypass gaps are connected by the cross gaps. The distributed coolant is mixed in the core through the cross gaps, so the flow characteristics cannot be modeled as a simple parallel pipe system. Analyzing the core flow with CFD requires a lot of effort and takes a very long time. Hence, it is important to develop a code for VHTR core flow which can predict the core flow distribution quickly and accurately. In this study, a steady-state flow network analysis code is developed using a flow network algorithm. The developed flow network analysis code was named FLASH, and it was validated against experimental data and CFD simulation results. (authors)
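The linear theory method named in this abstract solves a network's nonlinear head-loss relations by repeated linearization. The sketch below illustrates the idea for the simplest case, a two-pipe parallel network; the function name, the quadratic loss law h = rQ², and the iterate-averaging damping are assumptions for illustration, not details of the FLASH code.

```python
def solve_parallel_pipes(resistances, q_total, tol=1e-10, max_iter=200):
    """Linear theory iteration: the nonlinear head loss h = r*Q**2 is
    linearized about the previous iterate Q_k as h ~ (r*|Q_k|)*Q, the
    resulting linear network is solved, and successive iterates are
    averaged to damp the oscillation typical of this scheme."""
    n = len(resistances)
    q = [q_total / n] * n                 # initial guess: equal split
    for _ in range(max_iter):
        # Linearized conductances: Q_i = h / (r_i * |Q_i|)
        g = [1.0 / (r * abs(qi)) for r, qi in zip(resistances, q)]
        h = q_total / sum(g)              # common head loss across pipes
        q_new = [(gi * h + qi) / 2.0 for gi, qi in zip(g, q)]
        if max(abs(a - b) for a, b in zip(q_new, q)) < tol:
            return q_new, h
        q = q_new
    return q, h

# With r = [2, 8] and total flow 3, h = 2*q1^2 = 8*q2^2 forces q1 = 2*q2,
# so the converged split is q1 = 2, q2 = 1 with head loss 8.
flows, head = solve_parallel_pipes([2.0, 8.0], q_total=3.0)
```

A real prismatic-core network adds cross-gap links between the parallel paths, turning the diagonal system above into a sparse linear system re-solved at each iteration.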
White, Katherine M; Terry, Deborah J; Troup, Carolyn; Rempel, Lynn A; Norman, Paul; Mummery, Kerry; Riley, Malcolm; Posner, Natasha; Kenardy, Justin
2012-07-01
A randomized controlled trial evaluated the effectiveness of a 4-wk extended theory of planned behavior (TPB) intervention to promote regular physical activity and healthy eating among older adults diagnosed with Type 2 diabetes or cardiovascular disease (N = 183). Participants completed TPB measures of attitude, subjective norm, perceived behavioral control, and intention, as well as planning and behavior, at preintervention and 1 wk and 6 wk postintervention for each behavior. No significant time-by-condition effects emerged for healthy eating. For physical activity, significant time-by-condition effects were found for behavior, intention, planning, perceived behavioral control, and subjective norm. In particular, compared with control participants, the intervention group showed short-term improvements in physical activity and planning, with further analyses indicating that the effect of the intervention on behavior was mediated by planning. The results indicate that TPB-based interventions including planning strategies may encourage physical activity among older people with diabetes and cardiovascular disease. PMID:22190336
Automated quantification and analysis of mandibular asymmetry
DEFF Research Database (Denmark)
Darvann, T. A.; Hermann, N. V.
2010-01-01
We present an automated method of spatially detailed 3D asymmetry quantification in mandibles extracted from CT and apply it to a population of infants with unilateral coronal synostosis (UCS). An atlas-based method employing non-rigid registration of surfaces is used for determining deformation fields, thereby establishing detailed anatomical point correspondence between subjects as well as between points on the left and right side of the mid-sagittal plane (MSP). Asymmetry is defined in terms of the vector between a point and the corresponding anatomical point on the opposite side of the MSP after mirroring the mandible across the MSP. A principal components analysis of asymmetry characterizes the major types of asymmetry in the population, and successfully separates the asymmetric UCS mandibles from a number of less asymmetric mandibles from a control population.
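The asymmetry definition used here (the vector between a point and its corresponding anatomical point after mirroring across the MSP) can be sketched in a few lines. The coordinates, the choice of x = 0 as the mid-sagittal plane, and the function name below are hypothetical illustrations, not the paper's implementation.

```python
# Hypothetical sketch: local asymmetry at one mandibular surface point.
# The mid-sagittal plane is assumed to be x = 0; in practice it must be
# estimated per subject, and point correspondence comes from the
# non-rigid surface registration described in the abstract.
def asymmetry_vector(point, corresponding_point):
    mirrored = (-point[0], point[1], point[2])  # reflect across x = 0
    return tuple(b - a for a, b in zip(mirrored, corresponding_point))

v = asymmetry_vector((12.0, 5.0, 3.0), (-11.0, 6.5, 3.0))
magnitude = sum(c * c for c in v) ** 0.5  # scalar asymmetry at this point
```

Stacking such vectors over all surface points gives the per-subject asymmetry field on which the principal components analysis operates.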
El Naschie's ε(∞) space-time, hydrodynamic model of scale relativity theory and some applications
International Nuclear Information System (INIS)
A generalization of Nottale's scale relativity theory is elaborated: the generalized Schrödinger equation results as an irrotational movement of Navier-Stokes-type fluids having an imaginary viscosity coefficient. Then ψ simultaneously becomes wave function and speed potential. In the hydrodynamic formulation of scale relativity theory, some implications in the gravitational morphogenesis of structures are analyzed: planetary motion quantizations, Saturn's rings motion quantizations, redshift quantization in binary galaxies, global redshift quantization, etc. The correspondence with El Naschie's ε(∞) space-time implies a special type of superconductivity (El Naschie's superconductivity) and Cantorian-fractal sequences in the quantification of the Universe.
Ovchinnikov, Igor V
2011-01-01
Here we propose a scenario according to which a generic self-organized critical (SOC) system can be looked upon as a Witten-type topological field theory (W-TFT) with spontaneously broken BRST symmetry. One of the conditions for SOC is the slow external driving, which unambiguously suggests the Stratonovich interpretation of noise in the corresponding stochastic differential equation (SDE). This necessitates the use of the Parisi-Wu quantization of the SDE, leading to a model with a BRST-exact action, i.e., to a W-TFT. For a general SDE with a mixed-type drift term (Langevin + Hamilton parts), the BRST symmetry is spontaneously broken and there is a Goldstone mode of Faddeev-Popov ghosts. In the low-energy/long-wavelength limit, the ghosts represent instanton/avalanche moduli and, being gapless, are responsible for the critical distribution of avalanches. The above arguments are robust against a moderate variation of the SDE's parameters and the criticality is "self-tuned". Our proposition suggests tha...
Barutello, Vivina; Jadanza, Riccardo D.; Portaluri, Alessandro
2015-07-01
It is well known that the linear stability of the Lagrangian elliptic solutions in the classical planar three-body problem depends on a mass parameter β and on the eccentricity e of the orbit. We consider only the circular case (e = 0) but under the action of a broader family of singular potentials: α-homogeneous potentials, for α in (0, 2), and the logarithmic one. It turns out indeed that the Lagrangian circular orbit persists also in this more general setting. We discover a region of linear stability expressed in terms of the homogeneity parameter α and the mass parameter β, then we compute the Morse index of this orbit and of its iterates and we find that the boundary of the stability region is the envelope of a family of curves on which the Morse indices of the iterates jump. In order to conduct our analysis we rely on a Maslov-type index theory devised and developed by Y. Long, X. Hu and S. Sun; a key role is played by an appropriate index theorem and by some precise computations of suitable Maslov-type indices.
Quantification of atmospheric water soluble inorganic and organic nitrogen
Benítez, Juan Manuel González
2010-01-01
The key aims of this project were: (i) investigation of atmospheric nitrogen deposition, focused on discrimination between bulk, wet and dry deposition, and between particulate matter and gas phase, (ii) accurate quantification of the contributions of dissolved organic and inorganic nitrogen to each type of deposition, and (iii) exploration of the origin and potential sources of atmospheric water soluble organic nitrogen (WSON). This project was particularly focused on the WSON fraction becau...
MODEL VALIDATION AND UNCERTAINTY QUANTIFICATION.
Energy Technology Data Exchange (ETDEWEB)
Hemez, F.M.; Doebling, S.W.
2000-10-01
This session offers an open forum to discuss issues and directions of research in the areas of model updating, predictive quality of computer simulations, model validation and uncertainty quantification. Technical presentations review the state of the art in nonlinear dynamics and model validation for structural dynamics. A panel discussion introduces the discussion on technology needs, future trends and challenges ahead, with an emphasis placed on soliciting participation of the audience. One of the goals is to show, through invited contributions, how other scientific communities are approaching and solving difficulties similar to those encountered in structural dynamics. The session also serves the purpose of presenting the on-going organization of technical meetings sponsored by the U.S. Department of Energy and dedicated to health monitoring, damage prognosis, model validation and uncertainty quantification in engineering applications. The session is part of the SD-2000 Forum, a forum to identify research trends and funding opportunities and to discuss the future of structural dynamics.
Inverse problems and uncertainty quantification
Litvinenko, Alexander; Matthies, Hermann G.
2013-01-01
In a Bayesian setting, inverse problems and uncertainty quantification (UQ) - the propagation of uncertainty through a computational (forward) model - are strongly connected. In the form of conditional expectation the Bayesian update becomes computationally attractive. This is especially the case as together with a functional or spectral approach for the forward UQ there is no need for time-consuming and slowly convergent Monte Carlo sampling. The developed sampling-free non...
International Nuclear Information System (INIS)
In this paper we classify Bianchi type VIII and IX space-times according to their teleparallel Killing vector fields in the teleparallel theory of gravitation by using a direct integration technique. It turns out that the dimensions of the teleparallel Killing vector fields are either 4 or 5. From the above study we have shown that the Killing vector fields for Bianchi type VIII and IX space-times in the context of teleparallel theory are different from those in general relativity. (general)
Borkar, M. S.; Ameen, A.
2015-01-01
In this paper, Bianchi type VI0 magnetized anisotropic dark energy models with constant deceleration parameter have been studied by solving Rosen's field equations in the Bimetric theory of gravitation. The models corresponding to power-law expansion and exponential-law expansion have been evaluated and their nature studied geometrically and physically. It is seen that real visible matter (baryonic matter) appears only for a small interval of time 0.7 ≤ t < 0.7598, and for the remaining range of time t there is dark energy matter in the universe. Our investigations support the observational fact that the usual matter described by known particle theory is about 4% and that dark energy causes the accelerating expansion of the universe; several high-precision observational experiments, especially the Wilkinson Microwave Anisotropy Probe (WMAP) satellite experiment (see [C. L. Bennett et al., Astrophys. J. Suppl. Ser. 148 (2003) 1; WMAP Collab. (D. N. Spergel et al.), Astrophys. J. Suppl. Ser. 148 (2003) 175; D. N. Spergel et al., Astrophys. J. Suppl. 170 (2007) 377; WMAP Collab. (E. Komatsu et al.), Astrophys. J. Suppl. 180 (2009) 330; WMAP Collab. (G. Hinshaw et al.), Astrophys. J. Suppl. 208 (2013) 19; Planck Collab. (P. A. R. Ade), arXiv:1303.5076; arXiv:1303.5082]), conclude that dark energy occupies about 73% of the energy of the universe and dark matter about 23%. In the exponential law of expansion, our model is fully occupied by real visible matter and there is no dark energy or dark matter.
Karunamuni Nandini; Trinh Linda; Courneya Kerry S; Plotnikoff Ronald C; Sigal Ronald J
2008-01-01
Abstract Background Aerobic physical activity (PA) and resistance training are paramount in the treatment and management of type 2 diabetes (T2D), but few studies have examined the determinants of both types of exercise in the same sample. Objective The primary purpose was to investigate the utility of the Theory of Planned Behavior (TPB) in explaining aerobic PA and resistance training in a population sample of T2D adults. Methods A total of 244 individuals were recruited through a random na...
An uncertainty inventory demonstration - a primary step in uncertainty quantification
Energy Technology Data Exchange (ETDEWEB)
Langenbrunner, James R. [Los Alamos National Laboratory; Booker, Jane M [Los Alamos National Laboratory; Hemez, Francois M [Los Alamos National Laboratory; Salazar, Issac F [Los Alamos National Laboratory; Ross, Timothy J [UNM
2009-01-01
Tools, methods, and theories for assessing and quantifying uncertainties vary by application. Uncertainty quantification tasks have unique desiderata and circumstances. To realistically assess uncertainty requires the engineer/scientist to specify mathematical models, the physical phenomena of interest, and the theory or framework for assessments. For example, Probabilistic Risk Assessment (PRA) specifically identifies uncertainties using probability theory, and therefore PRAs lack formal procedures for quantifying uncertainties that are not probabilistic. The Phenomena Identification and Ranking Technique (PIRT) proceeds by ranking phenomena using scoring criteria that result in linguistic descriptors, such as importance ranked with the words 'High/Medium/Low.' The use of words makes PIRT flexible, but the analysis may then be difficult to combine with other uncertainty theories. We propose that a necessary step in the development of a procedure or protocol for uncertainty quantification (UQ) is the application of an Uncertainty Inventory. An Uncertainty Inventory should be considered and performed in the earliest stages of UQ.
Scaling theory in Tomonaga-Luttinger liquid with $1/r^\\beta$ type long-range interactions
Inoue, H
2001-01-01
We discuss effects of $1/r^\beta$ type long-range (LR) interactions in a tight-binding model by utilizing the bosonization technique, the renormalization group and conformal field theory (CFT). We obtain the low-energy action known for Kibble's model, which generates the mass gap in three dimensions when $\beta = 1$, the Coulomb-force case. In one dimension, the dispersion relations predict that the system remains gapless even for $\beta = 1$ and that a Tomonaga-Luttinger liquid (TLL) exists when $\beta > 1$. When $\beta = 1$, the LR interactions break the TLL in the long-wavelength limit, even if their strength is very small. We make more precise arguments from the standpoint of the renormalization group and CFT. Finally we derive the accurate finite-size scaling of energies and thermodynamics. Moreover, we proceed to numerical calculations, considering the LR umklapp-process terms. We conclude that the TLL phase becomes wider in the interaction-strength space as the power $\beta$ approaches 1.
Quantification of Information in a One-Way Plant-to-Animal Communication System
Directory of Open Access Journals (Sweden)
Laurance R. Doyle
2009-08-01
In order to demonstrate possible broader applications of information theory to the quantification of non-human communication systems, we apply calculations of information entropy to a simple chemical communication from the cotton plant (Gossypium hirsutum) to the wasp (Cardiochiles nigriceps) studied by DeMoraes et al. The purpose of this chemical communication from cotton plants to wasps is presumed to be to allow the predatory wasp to more easily obtain the location of its preferred prey, one of two types of parasitic herbivores feeding on the cotton plants. Specification by the cotton plants of the plant-eating herbivore feeding on them allows preferential attraction of the wasps to those individual plants. We interpret the emission of nine chemicals by the plants as individual signal differences (depending on the herbivore type) to be detected by the wasps, constituting a nine-signal one-way communication system across kingdoms (from the kingdom Plantae to the kingdom Animalia). We use fractional differences in the chemical abundances emitted in response to the two herbivore types to calculate the Shannon information entropic measures (marginal, joint, and mutual entropies, as well as the ambiguity, etc.) of the transmitted message. We then compare these results with the subsequent behavior of the wasps, calculating the equivocation in the message reception, for possible insights into the history and actual working of this one-way communication system.
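The entropic measures named in this abstract (marginal, joint, and mutual entropies, and the equivocation) follow from a joint distribution over sender state and received signal. The sketch below uses an invented 2x2 joint distribution, not the study's nine-signal chemical data.

```python
import math

def entropy(probs):
    """Shannon entropy in bits of a probability list."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Hypothetical joint distribution p(herbivore, signal):
# rows = two herbivore types, columns = two signal classes.
joint = [[0.4, 0.1],
         [0.1, 0.4]]

px = [sum(row) for row in joint]                   # marginal over herbivores
py = [sum(col) for col in zip(*joint)]             # marginal over signals
h_x, h_y = entropy(px), entropy(py)
h_xy = entropy([p for row in joint for p in row])  # joint entropy
mutual_info = h_x + h_y - h_xy                     # I(X;Y), bits transmitted
equivocation = h_x - mutual_info                   # H(X|Y), reception ambiguity
```

With the distribution above the channel carries about 0.28 bits per signal of the 1 bit needed to identify the herbivore, leaving an equivocation of about 0.72 bits.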
WATSON, GRAEME WILLIAM; SCANLON, DAVID
2009-01-01
CuCrO2 is the most promising Cu-based delafossite for p-type optoelectronic devices. Despite this, little is known about the p-type conduction mechanism of this material, with both CuI/CuII and CrIII/CrIV hole mechanisms being proposed. In this article we examine the electronic structure, thermodynamic stability and the p-type defect chemistry of this ternary compound using density functional theory with three different approaches to the exchange and correlation; the generalized-gradient-appr...
Quantification of radiation transformation frequencies
International Nuclear Information System (INIS)
The occurrence of late lethal mutations in many of the progeny of cells which survive ionizing radiation seriously affects the quantification of transformation frequencies following irradiation. Lethal mutations are particularly relevant where focal assays are used or where transformations are scored following serial passaging of survivors. This paper examines the influence of lethal mutations on the radiation transformation dose response curve for two typical assays, viz. a C3H 10T1/2 focal assay and a primary thyroid serial subculture assay. (author)
Monleau, Marjorie; Montavon, Céline; Laurent, Christian; Segondy, Michel; Montes, Brigitte; DELAPORTE, ERIC; Boillot, François; Peeters, Martine
2009-01-01
The development and validation of dried sample spots as a method of specimen collection are urgently needed in developing countries for monitoring of human immunodeficiency virus (HIV) infection. Our aim was to test some crucial steps in the use of dried spots, i.e., viral recovery and storage over time. Moreover, we investigated whether dried plasma and blood spots (DPS and DBS, respectively) give comparable viral load (VL) results. Four manual RNA extraction methods from commercial HIV type...
Hölzl, Gabriele; Stöcher, Markus; Leb, Victoria; Stekel, Herbert; Berg, Jörg
2003-01-01
The ultrasensitive COBAS AMPLICOR HIV-1 Monitor test was complemented with automated RNA purification on the MagNA Pure LC instrument. This enabled entirely automated ultrasensitive assessment of viral loads in human immunodeficiency virus type 1 (HIV-1)-infected individuals. The detection limit of the fully automated assay and the viral load measurements in 80 clinical samples were found to be in good agreement with those of the conventional ultrasensitive COBAS AMPLICOR HIV-1 Monitor test. ...
Scientific Electronic Library Online (English)
Cristóbal, Serrano García; Francisco, Barrera; Pilar, Labbé; Jessica, Liberona; Marco, Arrese; Pablo, Irarrázabal; Cristián, Tejos; Sergio, Uribe.
2012-12-01
Background: Visceral fat accumulation is associated with the development of metabolic diseases. Anthropometry is one of the methods used to quantify it. Aim: To evaluate the relationship between visceral adipose tissue volume (VAT), measured with magnetic resonance imaging (MRI), and anthropometric indexes, such as body mass index (BMI) and waist circumference (WC), in type 2 diabetic patients (DM2). Patients and Methods: Twenty-four type 2 diabetic patients aged 55 to 78 years (15 females) and weighing 61.5 to 97 kg were included. The patients underwent MRI examination on a Philips Intera® 1.5T MR scanner. The MRI protocol included a spectral excitation sequence centered at the fat peak. The field of view extended from L4-L5 to the diaphragmatic border. VAT was measured using the software Image J®. Weight, height, BMI, WC and body fat percentage (BF%), derived from the measurement of four skinfolds with the equation of Durnin and Womersley, were also measured. The association between the MRI VAT measurement and anthropometry was evaluated using Pearson's correlation coefficient. Results: Mean VAT was 2478 ± 758 ml, mean BMI was 29.5 ± 4.7 kg/m², and mean WC was 100 ± 9.7 cm. There was a poor correlation between VAT and BMI (r = 0.18) and WC (r = 0.56). Conclusions: BMI and WC are inaccurate predictors of VAT volume in type 2 diabetic patients.
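The study's association measure, Pearson's correlation coefficient, is a one-function computation. The sketch below shows a minimal implementation; the sample data are invented for illustration and are not the patients' measurements.

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Invented illustrative data (not the study's values):
vat = [1800.0, 2100.0, 2600.0, 2400.0, 3100.0, 2900.0]  # ml
bmi = [26.0, 31.5, 28.0, 33.0, 29.5, 30.0]              # kg/m^2
r_bmi = pearson_r(bmi, vat)  # |r| near 0 means BMI predicts VAT poorly
```

An r of 0.18, as reported for BMI versus VAT, explains only r² ≈ 3% of the variance, which is why the authors call BMI an inaccurate predictor.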
Quantification of wastewater sludge dewatering.
Skinner, Samuel J; Studer, Lindsay J; Dixon, David R; Hillis, Peter; Rees, Catherine A; Wall, Rachael C; Cavalida, Raul G; Usher, Shane P; Stickland, Anthony D; Scales, Peter J
2015-10-01
Quantification and comparison of the dewatering characteristics of fifteen sewage sludges from a range of digestion scenarios are described. The method proposed uses laboratory dewatering measurements and integrity analysis of the extracted material properties. These properties were used as inputs into a model of filtration, the output of which provides the dewatering comparison. This method is shown to be necessary for quantification and comparison of dewaterability, as the permeability and compressibility of the sludges vary by up to ten orders of magnitude in the range of solids concentration of interest to industry. This causes a high sensitivity of the dewaterability comparison to the starting concentration of laboratory tests; thus, simple dewaterability comparison based on parameters such as the specific resistance to filtration is difficult. The new approach is demonstrated to be robust relative to traditional methods such as specific resistance to filtration analysis and has an in-built integrity check. Comparison of the quantified dewaterability of the fifteen sludges to the relative volatile solids content showed a very strong correlation in the volatile solids range from 40 to 80%. The data indicate that the volatile solids parameter is a strong indicator of the dewatering behaviour of sewage sludges. PMID:26003332
Liu, Xiandong; Lu, Xiancai; Wang, Rucheng; Zhou, Huiqun; Xu, Shijin
2008-12-01
To explore the complexation mechanisms of carboxylate on phyllosilicate edge surfaces, we simulate acetate complexes on the (0 1 0) type edge of pyrophyllite by using the density functional theory method. We take into account the intrinsic long-range order and all the possible complex sets under common environments. This study discloses that H-bonding interactions occur widely and play important roles in both inner-sphere and outer-sphere fashions. In inner-sphere complexes, one acetate C-O bond elongates to form a covalent bond with a surface Al atom; the other C-O either forms a covalent bond with Al or interacts with surface hydroxyls via H-bonds. In outer-sphere complexes, the acetate can capture a proton from the surface groups to form an acid molecule. For the groups of both substrate and ligand, the variations in geometrical parameters caused by H-bonding interactions depend on the role played (i.e., proton donor or acceptor). By comparing the edge structures before and after interaction, we found that carboxylate binding can modify the surface structures. In the inner-sphere complexes, the exposed Al atom can be stabilized by a single acetate ion through either monodentate or bidentate schemes, whereas the Al atoms complexing both an acetate and a hydroxyl may significantly deviate outwards from the bulk equilibrium positions. In the outer-sphere complexes, some H-bonds are strong enough to polarize the metal-oxygen bonds and therefore distort the local coordination structure of the metal in the substrate, which may make the metal susceptible to release.
An EPGPT-based approach for uncertainty quantification
International Nuclear Information System (INIS)
Generalized Perturbation Theory (GPT) has been widely used by many scientific disciplines to perform sensitivity analysis and uncertainty quantification. This manuscript employs recent developments in GPT theory, collectively referred to as Exact-to-Precision Generalized Perturbation Theory (EPGPT), to enable uncertainty quantification for computationally challenging models, e.g. nonlinear models associated with many input parameters and many output responses and with general non-Gaussian parameters distributions. The core difference between EPGPT and existing GPT is in the way the problem is formulated. GPT formulates an adjoint problem that is dependent on the response of interest. It tries to capture via the adjoint solution the relationship between the response of interest and the constraints on the state variations. EPGPT recasts the problem in terms of a smaller set of what is referred to as the 'active' responses which are solely dependent on the physics model and the boundary and initial conditions rather than on the responses of interest. The objective of this work is to apply an EPGPT methodology to propagate cross-sections variations in typical reactor design calculations. The goal is to illustrate its use and the associated impact for situations where the typical Gaussian assumption for parameters uncertainties is not valid and when nonlinear behavior must be considered. To allow this demonstration, exaggerated variations will be employed to stimulate nonlinear behavior in simple prototypical neutronics models. (authors)
Energy Technology Data Exchange (ETDEWEB)
Svigler, J. [Westboehmische Univ. Pilsen (Czech Republic). Fak. fuer Angewandte Wissenschaften
2001-07-01
The paper deals with designing screw machine rotor gearings and provides the general and particular theory of the design of a tool for gearing production. This tool is a generating cutter or a grinding or planing tool. The theory is based on kinematic principles which, in contrast to the geometrical method, provide a simple and concrete process for the production of cutters. The concrete and rather simple theory of tool production makes possible a rapid realisation of necessary modifications to the tool, which guarantees the correct gearing changes. (orig.) [German original, translated: The present contribution deals with the design of the rotor gearing and contains a general and very precise theory of the generation and manufacture of a tool for this gearing. This tool is a hobbing cutter or possibly a grinding tool. The theory is founded on kinematic principles which, in contrast to the geometrical approach, allow a simple and clear procedure in designing the cutter. The clear and transparent representation of the tool permits a rapid realisation of the necessary tool modifications, which guarantees that the required changes to the gearing are achieved.]
DEFF Research Database (Denmark)
McCloskey, Douglas; Gangoiti, Jon A.
2015-01-01
Comprehensive knowledge of intracellular biochemistry is needed to accurately understand, model, and manipulate metabolism for industrial and therapeutic applications. Quantitative metabolomics has been driven by advances in analytical instrumentation and can add valuable knowledge to the understanding of intracellular metabolism. Liquid chromatography coupled to mass spectrometry (LC–MS and LC–MS/MS) has become a reliable means with which to quantify a multitude of intracellular metabolites in parallel with great specificity and accuracy. This work details a method that builds and extends upon existing reverse-phase ion-pairing liquid chromatography methods for the separation and detection of polar and anionic compounds that comprise key nodes of intracellular metabolism, by optimizing pH and solvent composition. In addition, the presented method utilizes multiple scan types provided by hybrid instrumentation to improve confidence in compound identification. The developed method was validated for a broad coverage of polar and anionic metabolites of intracellular metabolism
Development of Quantification Method for Bioluminescence Imaging
International Nuclear Information System (INIS)
Optical molecular luminescence imaging is widely used for the detection and imaging of bio-photons emitted upon luminescent luciferase activation. The photons measured with this method indicate the degree of molecular alteration or the cell number, with the advantage of a high signal-to-noise ratio. To extract useful information from the measured results, analysis based on a proper quantification method is necessary. In this research, we propose a quantification method that yields a linear response of the measured light signal to measurement time. We detected the luminescence signal using lab-made optical imaging equipment, the animal light imaging system (ALIS), and two different kinds of light sources: three bacterial light-emitting sources containing different numbers of bacteria, and three different non-bacterial sources emitting very weak light. Using the concepts of the candela and the flux, we derived a simplified linear quantification formula. After experimentally measuring the light intensity, the data were processed with the proposed quantification function. We obtained a linear response of photon counts to measurement time by applying the pre-determined quantification function. The ratio of the re-calculated photon counts to measurement time remains constant even when different light sources are applied. The quantification function for linear response is applicable to the standard quantification process. The proposed method can be used for exact quantitative analysis in various light imaging equipment, as constant light-emitting sources show a linear response to measurement time
Development of Quantification Method for Bioluminescence Imaging
Energy Technology Data Exchange (ETDEWEB)
Kim, Hyeon Sik; Min, Jung Joon; Lee, Byeong Il [Chonnam National University Hospital, Hwasun (Korea, Republic of); Choi, Eun Seo [Chosun University, Gwangju (Korea, Republic of); Tak, Yoon O; Choi, Heung Kook; Lee, Ju Young [Inje University, Kimhae (Korea, Republic of)
2009-10-15
Optical molecular luminescence imaging is widely used for the detection and imaging of bio-photons emitted upon luminescent luciferase activation. The photons measured with this method indicate the degree of molecular alteration or the cell number, with the advantage of a high signal-to-noise ratio. To extract useful information from the measured results, analysis based on a proper quantification method is necessary. In this research, we propose a quantification method that yields a linear response of the measured light signal to measurement time. We detected the luminescence signal using lab-made optical imaging equipment, the animal light imaging system (ALIS), and two different kinds of light sources: three bacterial light-emitting sources containing different numbers of bacteria, and three different non-bacterial sources emitting very weak light. Using the concepts of the candela and the flux, we derived a simplified linear quantification formula. After experimentally measuring the light intensity, the data were processed with the proposed quantification function. We obtained a linear response of photon counts to measurement time by applying the pre-determined quantification function. The ratio of the re-calculated photon counts to measurement time remains constant even when different light sources are applied. The quantification function for linear response is applicable to the standard quantification process. The proposed method can be used for exact quantitative analysis in various light imaging equipment, as constant light-emitting sources show a linear response to measurement time
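The linearity criterion described above can be sketched in a few lines (the actual ALIS quantification function is not given in the abstract, so the numbers here are hypothetical): after applying a quantification function, cumulative photon counts should grow linearly with measurement time, so counts/time is constant for a steady source.

```python
# Fit counts ≈ k * time with a least-squares line through the origin, then
# check that the counts/time ratio is constant across exposures.
def slope_through_origin(times, counts):
    """Least-squares slope k for counts ≈ k * time (line through the origin)."""
    return sum(t * c for t, c in zip(times, counts)) / sum(t * t for t in times)

times = [10, 20, 30, 60, 120]              # s, hypothetical exposure times
counts = [1030, 1985, 3010, 6040, 11980]   # hypothetical photon counts

k = slope_through_origin(times, counts)
ratios = [c / t for c, t in zip(counts, times)]
spread = (max(ratios) - min(ratios)) / k   # relative spread of counts/time

print(f"counts/time ≈ {k:.1f} photons/s, relative spread {spread:.1%}")
```

A small relative spread of the ratio is the linear-response behavior the abstract uses to validate the quantification function.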
Scientific Electronic Library Online (English)
Carlos, Martel; Lianka, Cairampoma.
2012-08-01
Full Text Available The Peruvian Amazon basin is characterized by the presence of multiple vegetation types, which are increasingly impacted by human activities such as mining and logging. All this, coupled with global climate change, creates uncertainty about the future of the forests. Identifying the levels of carbon storage in forested areas, and specifically in each vegetation type, would allow better management of conservation areas and help identify potential areas that could serve to finance carbon sequestration and other environmental services. This study was conducted at the Biological Station of the Rio Los Amigos Research and Training Center (CICRA, Spanish acronym). At the station, three main vegetation formations were identified: alluvial terrace forests, flood terrace forests, and Mauritia swamps. The alluvial terrace forests cover the most extensive area and hold the highest amount of stored carbon. As a result, the vegetation present at CICRA was valued at approximately 11 million U.S. dollars. Entry into the carbon credit market could promote Amazon forest conservation.
Accessible quantification of multiparticle entanglement
Cianciaruso, Marco; Adesso, Gerardo
2015-01-01
Entanglement is a key ingredient for quantum technologies and a fundamental signature of quantumness in a broad range of phenomena encompassing many-body physics, thermodynamics, cosmology, and the life sciences. For arbitrary multiparticle systems, the quantification of entanglement typically involves hard optimisation problems and requires demanding tomographic techniques. In this paper we show that such difficulties can be overcome by developing an experimentally friendly method to evaluate measures of multiparticle entanglement via a geometric approach. The method provides exact analytical results for a relevant class of mixed states of $N$ qubits, and computable lower bounds to entanglement for any general state. For practical purposes, the entanglement determination requires local measurements in just three settings for any $N$. We demonstrate the power of our approach to quantify multiparticle entanglement in $N$-qubit bound entangled states and other states recently engineered in the laboratory using quant...
Iodine quantification by analytical ion microscopy (AIM)
International Nuclear Information System (INIS)
AIM allows imaging and quantification of all elements on a biological tissue section. However, quantification is limited by physical parameters which are difficult to assess and which intervene, for a given element, in the relation between its concentration and the measured signal. Thus, to quantify iodine in thyroid follicles, an iodine standard was prepared. The homogeneity of its atomic iodine distribution was tested by surface and depth analyses. Then, a standard curve was generated to be used as a reference for the quantification of iodine on histological preparations by AIM
Nemeth, Michael P.
2013-01-01
A detailed exposition of a refined nonlinear shell theory suitable for nonlinear buckling analyses of laminated-composite shell structures is presented. This shell theory includes the classical nonlinear shell theory attributed to Leonard, Sanders, Koiter, and Budiansky as an explicit proper subset. This approach is used in order to leverage the existing experience base and to make the theory attractive to industry. In addition, the formalism of general tensors is avoided in order to expose the details needed to fully understand and use the theory. The shell theory is based on "small" strains and "moderate" rotations, and no shell-thinness approximations are used. As a result, the strain-displacement relations are exact within the presumptions of "small" strains and "moderate" rotations. The effects of transverse-shearing deformations are included in the theory by using analyst-defined functions to describe the through-the-thickness distributions of transverse-shearing strains. Constitutive equations for laminated-composite shells are derived without using any shell-thinness approximations, and simplified forms and special cases are presented.
Johnston Marie; Dijkstra Rob; Bosch Marije; Francis Jill J; Eccles Martin P; Hrisos Susan; Grol Richard; Kaner Eileen FS; Steen Ian N
2009-01-01
Abstract Background Long term management of patients with Type 2 diabetes is well established within Primary Care. However, despite extensive efforts to implement high quality care both service provision and patient health outcomes remain sub-optimal. Several recent studies suggest that psychological theories about individuals' behaviour can provide a valuable framework for understanding generalisable factors underlying health professionals' clinical behaviour. In the context of the team mana...
Directory of Open Access Journals (Sweden)
D. C. de Morais Filho
1996-01-01
Full Text Available In this paper we employ the Monotone Iteration Method and the Leray-Schauder Degree Theory to study an ℝ²-parametrized system of elliptic equations. We obtain a curve dividing the plane into two regions. Depending on which region the parameter is in, the system will or will not have solutions. This is an Ambrosetti-Prodi-type problem for a system of equations.
International Nuclear Information System (INIS)
A homogeneous model which simulates the stationary behavior of steam generators of PWR-type reactors and uses the differential formalism of perturbation theory for analysing the sensitivity of linear and non-linear responses is presented. The PERGEVAP computer code, which calculates the temperature distribution in the steam generator and the associated importance function, is developed. The code also evaluates the effects of thermohydraulic parameter variations on selected functionals. The obtained results are compared with those obtained with the GEVAP computer code. (M.C.K.)
On a singular Fredholm-type integral equation arising in N=2 super-Yang-Mills theories
Energy Technology Data Exchange (ETDEWEB)
Ferrari, Franco, E-mail: ferrari@fermi.fiz.univ.szczecin.pl [Institute of Physics and CASA, University of Szczecin, Wielkopolska 15, 70451 Szczecin (Poland); Piatek, Marcin, E-mail: piatek@fermi.fiz.univ.szczecin.pl [Institute of Physics and CASA, University of Szczecin, Wielkopolska 15, 70451 Szczecin (Poland); Bogoliubov Laboratory of Theoretical Physics, Joint Institute for Nuclear Research, 141980 Dubna (Russian Federation)
2013-01-08
In this work we study the Nekrasov-Shatashvili limit of the Nekrasov instanton partition function of Yang-Mills field theories with N=2 supersymmetry and gauge group SU(N_c). The theories are coupled with N_f flavors of fundamental matter. The equation that determines the density of eigenvalues at the leading order in the saddle-point approximation is exactly solved when N_f=2N_c. The dominating contribution to the instanton free energy is computed. The requirement that this energy is finite imposes quantization conditions on the parameters of the theory that are in agreement with analogous conditions that have been derived in previous works. The instanton energy and thus the instanton contribution to the prepotential of the gauge theory is computed in closed form.
On a singular Fredholm-type integral equation arising in N=2 super-Yang–Mills theories
International Nuclear Information System (INIS)
In this work we study the Nekrasov–Shatashvili limit of the Nekrasov instanton partition function of Yang–Mills field theories with N=2 supersymmetry and gauge group SU(Nc). The theories are coupled with Nf flavors of fundamental matter. The equation that determines the density of eigenvalues at the leading order in the saddle-point approximation is exactly solved when Nf=2Nc. The dominating contribution to the instanton free energy is computed. The requirement that this energy is finite imposes quantization conditions on the parameters of the theory that are in agreement with analogous conditions that have been derived in previous works. The instanton energy and thus the instanton contribution to the prepotential of the gauge theory is computed in closed form.
Joseph, Robert M.; Tager-Flusberg, Helen
2004-01-01
Although neurocognitive impairments in theory of mind and in executive functions have both been hypothesized to play a causal role in autism, there has been little research investigating the explanatory power of these impairments with regard to autistic symptomatology. The present study examined the degree to which individual differences in theory of mind and executive functions could explain variations in the severity of autism symptoms. Participants included 31 verbal, school-aged children ...
Czech Academy of Sciences Publication Activity Database
Čársky, Petr
2015-01-01
Roč. 191, č. 2015 (2015), s. 191-192. ISSN 1551-7616 R&D Projects: GA MŠk OC09079; GA MŠk(CZ) OC10046; GA ČR GA202/08/0631 Grant ostatní: COST(XE) CM0805; COST(XE) CM0601 Institutional support: RVO:61388955 Keywords: electron scattering * calculation of cross sections * second-order perturbation theory Subject RIV: CF - Physical; Theoretical Chemistry
The NASA Langley Multidisciplinary Uncertainty Quantification Challenge
Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.
2014-01-01
This paper presents the formulation of an uncertainty quantification challenge problem consisting of five subproblems. These problems focus on key aspects of uncertainty characterization, sensitivity analysis, uncertainty propagation, extreme-case analysis, and robust design.
Uncertainty Quantification in Aerodynamics Simulations Project
National Aeronautics and Space Administration — The objective of the proposed work (Phases I and II) is to develop uncertainty quantification methodologies and software suitable for use in CFD simulations of...
Quantification of nanowire uptake by live cells
Margineanu, Michael B.
2015-05-01
Nanostructures fabricated by different methods have become increasingly important for various applications at the cellular level. In order to understand how these nanostructures "behave" and to study their internalization kinetics, several attempts have been made at tagging them and investigating their interaction with living cells. In this study, magnetic iron nanowires with an iron oxide layer are coated with (3-aminopropyl)triethoxysilane (APTES) and subsequently labeled with the fluorogenic pH-dependent dye pHrodo™ Red, covalently bound to the aminosilane surface. Time-lapse live imaging of human colon carcinoma HCT 116 cells interacting with the labeled iron nanowires is performed for 24 hours. As the pHrodo™ Red conjugated nanowires are non-fluorescent outside the cells but fluoresce brightly inside, internalized nanowires are distinguished from non-internalized ones and their behavior inside the cells can be tracked for the respective time span. A machine-learning-based computational framework dedicated to automatic analysis of live cell imaging data, Cell Cognition, is adapted and used to classify cells with internalized and non-internalized nanowires and subsequently determine the uptake percentage at different time points. An uptake of 85% by HCT 116 cells is observed after 24 hours of incubation at a NW-to-cell ratio of 200. While the approach of using pHrodo™ Red for internalization studies is not novel in the literature, this study reports for the first time the use of a machine-learning-based, time-resolved automatic analysis pipeline for the quantification of nanowire uptake by cells. This pipeline has also been used for comparison studies with nickel nanowires coated with APTES and labeled with pHrodo™ Red, and with another cell line, HeLa, derived from a cervix carcinoma. It thus has the potential to be used for studying the interaction of different types of nanostructures with potentially any live cell type.
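The final quantification step of the pipeline described above reduces to a simple fraction (the Cell Cognition classifier itself is not reproduced here; the per-time-point counts below are hypothetical): the uptake percentage is the internalized share of all classified cells at each time point.

```python
# Uptake percentage from per-time-point classification counts:
# (cells classified as having internalized nanowires) / (all classified cells).
def uptake_percent(internalized, total):
    if total == 0:
        raise ValueError("no cells classified at this time point")
    return 100.0 * internalized / total

# Hypothetical counts at selected time points (hours): (internalized, total)
counts = {1: (12, 240), 6: (98, 235), 24: (204, 240)}
timecourse = {t: uptake_percent(i, n) for t, (i, n) in counts.items()}
print(timecourse)
```

With these illustrative numbers the 24-hour entry comes out at 85%, the uptake level the abstract reports.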
Torba, S
2008-01-01
An arbitrary operator A on a Banach space X which is the generator of a C_0-group with a certain growth condition at infinity is considered. The relationship between its exponential type entire vectors and its spectral subspaces is found. Inverse theorems on the connection between the degree of smoothness of a vector $x\in X$ with respect to the operator A, the rate of convergence to zero of the best approximation of x by exponential type entire vectors for the operator A, and the k-modulus of continuity are established. Also, a generalization of the Bernstein-type inequality is obtained. The results allow one to obtain Bernstein-type inequalities in weighted L_p spaces.
Sakamoto, Atsushi; Matsumaru, Takehisa; Yamamura, Norio; Suzuki, Shinobu; Uchida, Yasuo; Tachikawa, Masanori; Terasaki, Tetsuya
2015-09-01
Understanding the mechanisms of drug transport in the human lung is an important issue in pulmonary drug discovery and development. For this purpose, there is an increasing interest in immortalized lung cell lines as alternatives to primary cultured lung cells. We recently reported the protein expression in human lung tissues and pulmonary epithelial cells in primary culture (Sakamoto A, Matsumaru T, Yamamura N, Uchida Y, Tachikawa M, Ohtsuki S, Terasaki T. 2013. J Pharm Sci 102(9):3395-3406), whereas comprehensive quantification of protein expression in immortalized lung cell lines is sparse. Therefore, the aim of the present study was to clarify the drug transporter protein expression of five commercially available immortalized lung cell lines derived from tracheobronchial cells (Calu-3 and BEAS2-B), bronchiolar-alveolar cells (NCI-H292 and NCI-H441), and alveolar type II cells (A549), by liquid chromatography-tandem mass spectrometry-based approaches. Among the transporters detected, breast cancer-resistance protein in Calu-3, NCI-H292, NCI-H441, and A549 and OCTN2 in BEAS2-B showed the highest protein expression. Compared with data from our previous study (Sakamoto A, Matsumaru T, Yamamura N, Uchida Y, Tachikawa M, Ohtsuki S, Terasaki T. 2013. J Pharm Sci 102(9):3395-3406), NCI-H441 was the most similar to primary lung cells from all regions in terms of protein expression of organic cation/carnitine transporter 1 (OCTN1). In conclusion, the protein expression profiles of transporters in five immortalized lung cell lines were determined, and these findings may contribute to a better understanding of drug transport in immortalized lung cell lines. © 2015 Wiley Periodicals, Inc. and the American Pharmacists Association J Pharm Sci 104:3029-3038, 2015. PMID:25690838
DEFF Research Database (Denmark)
Chouin, Nicolas; Lindegren, Sture
2013-01-01
Targeted α-therapy (TAT) appears to be an ideal therapeutic technique for eliminating malignant circulating, minimal residual, or micrometastatic cells. These types of malignancies are typically subclinical, complicating the evaluation of potential treatments. This study presents a method of ex vivo activity quantification with an α-camera device, allowing measurement of the activity taken up by tumor cells in biologic structures a few tens of microns in size.
Quantification and Genotyping of Human Sapoviruses in the Llobregat River Catchment, Spain
Sano, Daisuke; Pérez-Sautu, Unai; Guix, Susana; Pintó, Rosa Maria; Miura, Takayuki; Okabe, Satoshi; Bosch, Albert
2010-01-01
Human sapoviruses (SaVs) were quantified and characterized in an 18-month survey conducted along the Llobregat river catchment area in Spain. Sample types included freshwater, untreated and treated wastewater, and drinking water. All genogroups were recovered, and a seasonal distribution was observed. This is the first report of SaV quantification and genotyping in the environment outside Japan.
Finite-temperature time-dependent effective theory for the Goldstone field in a BCS-type superfluid
International Nuclear Information System (INIS)
We extend to finite temperature the time-dependent effective theory for the Goldstone field (the phase of the pair field) θ which is appropriate for a superfluid containing one species of fermions with s-wave interactions, described by the BCS Lagrangian. We show that, when Landau damping is neglected, the effective theory can be written as a local time-dependent nonlinear Schrödinger Lagrangian which preserves the Galilean invariance of the zero-temperature effective theory and is identified with the superfluid component. We then calculate the relevant Landau terms, which are nonlocal and destroy the Galilean invariance. We show that the retarded θ propagator (in momentum space) can be well represented by two poles in the lower-half frequency plane, describing damping with a predicted temperature, frequency, and momentum dependence. It is argued that the real parts of the Landau terms can be approximately interpreted as contributing to the normal fluid component
Damage quantification of shear buildings using deflections obtained by modal flexibility
International Nuclear Information System (INIS)
This paper presents a damage quantification method for shear buildings using the damage-induced inter-storey deflections (DI-IDs) estimated from the modal flexibilities obtained from ambient vibration measurements. This study intends to provide a basis for the damage quantification problem of more complex building structures by investigating a rather idealized type of structure, the shear building. Damage in a structure, represented by a loss of stiffness, generally induces additional deflection, which may contain essential information about the damage. From an analytical investigation, a general equation for damage quantification by the damage-induced deflection is proposed, together with its special case for shear buildings based on the damage-induced inter-storey deflection. The proposed damage quantification method is advantageous compared to conventional FE-updating approaches since the number of variables in the optimization problem depends only on the complexity of the damage parametrization, not on the complexity of the structure. For this reason, the damage quantification for shear buildings is simplified to a form that does not require any FE updating. Numerical and experimental studies on a five-storey shear building were carried out for two damage scenarios with 10% column EI reductions. From the numerical study, it was found that the lowest four natural frequencies and mode shapes were enough to keep the errors in the deflection estimation and the damage quantification below 1%. From the experimental study, deflections estimated from the modal flexibilities were found to agree well with the deflections obtained from static push-over tests. Damage quantifications by the proposed method were also found to agree well with the true amounts of damage obtained from static push-over tests
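The modal-flexibility idea behind the method above can be sketched on a simplified, hypothetical 3-storey shear building with unit floor masses (the paper's 5-storey test structure and its exact formulation are not reproduced here): with mass-normalized mode shapes φᵢ and frequencies ωᵢ, the flexibility matrix is F = Σᵢ φᵢφᵢᵀ/ωᵢ², and a storey stiffness loss shows up as extra inter-storey deflection under a static load.

```python
import numpy as np

def shear_building_K(ks):
    """Stiffness matrix of a shear building from bottom-up storey stiffnesses."""
    n = len(ks)
    K = np.zeros((n, n))
    for i in range(n):
        K[i, i] += ks[i]
        if i + 1 < n:
            K[i, i] += ks[i + 1]
            K[i, i + 1] = K[i + 1, i] = -ks[i + 1]
    return K

def modal_flexibility(K):
    """F = sum_i phi_i phi_i^T / omega_i^2. With unit masses, eigh gives
    mass-normalized modes; retaining all modes makes F equal inv(K) exactly."""
    lam, Phi = np.linalg.eigh(K)            # lam = omega_i^2
    return Phi @ np.diag(1.0 / lam) @ Phi.T

F_intact = modal_flexibility(shear_building_K([100.0, 100.0, 100.0]))
F_damaged = modal_flexibility(shear_building_K([100.0, 90.0, 100.0]))  # 10% loss, storey 2

load = np.ones(3)                           # unit lateral load at every floor
di_defl = (F_damaged - F_intact) @ load     # damage-induced deflection
drift = np.diff(np.concatenate(([0.0], di_defl)))  # damage-induced inter-storey deflection

print("damage-induced inter-storey deflection:", np.round(drift, 5))
```

The damage-induced inter-storey deflection is nonzero only at the softened storey, which is exactly what lets the method localize and quantify damage without any FE updating.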
Stirling Convertor Fasteners Reliability Quantification
Shah, Ashwin R.; Korovaichuk, Igor; Kovacevich, Tiodor; Schreiber, Jeffrey G.
2006-01-01
Onboard Radioisotope Power Systems (RPS) being developed for NASA's deep-space science and exploration missions require reliable operation for up to 14 years and beyond. Stirling power conversion is a candidate for use in an RPS because it offers a multifold increase in the conversion efficiency of heat to electric power and a reduced inventory of radioactive material. Structural fasteners are responsible for maintaining the structural integrity of the Stirling power convertor, which is critical to ensure reliable performance during the entire mission. The design of fasteners involves variables related to fabrication, manufacturing, the behavior of the fastener and joined-part materials, the structural geometry of the joined components, the size and spacing of fasteners, mission loads, boundary conditions, etc. These variables have inherent uncertainties, which need to be accounted for in the reliability assessment. This paper describes these uncertainties along with a methodology to quantify reliability, and provides results of the analysis in terms of quantified reliability and the sensitivity of Stirling power conversion reliability to the design variables. Quantification of the reliability includes both structural and functional aspects of the joined components. Based on the results, the paper also describes guidelines to improve reliability and verification testing.
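A hedged sketch of the kind of uncertainty-based reliability calculation the abstract describes (the actual Stirling fastener variables and distributions are not given, so the stress and strength numbers below are hypothetical): failure occurs when the applied stress exceeds the joint strength, with both treated as independent normal random variables.

```python
import math
import random

# Stress-strength interference model, Monte Carlo vs closed form.
random.seed(42)

mu_s, sd_s = 300.0, 20.0   # applied stress, MPa (hypothetical)
mu_r, sd_r = 400.0, 30.0   # fastener strength, MPa (hypothetical)

n = 500_000
failures = sum(
    1 for _ in range(n)
    if random.gauss(mu_s, sd_s) > random.gauss(mu_r, sd_r)
)
p_fail_mc = failures / n

# Closed form for the normal stress-strength problem:
# P(S > R) = Phi(-beta), beta = (mu_r - mu_s) / sqrt(sd_s^2 + sd_r^2)
beta = (mu_r - mu_s) / math.hypot(sd_s, sd_r)
p_fail_exact = 0.5 * math.erfc(beta / math.sqrt(2))

print(f"MC: {p_fail_mc:.5f}  exact: {p_fail_exact:.5f}  (reliability index beta = {beta:.2f})")
```

Sensitivity to a design variable can then be probed by re-running the same calculation with that variable's distribution perturbed, which is the spirit of the sensitivity results the paper reports.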
Directory of Open Access Journals (Sweden)
Gerald Jarre
2014-11-01
Full Text Available Nanodiamonds functionalized with different organic moieties carrying terminal amino groups have been synthesized. These include conjugates generated by Diels–Alder reactions of ortho-quinodimethanes formed in situ from pyrazine and 5,6-dihydrocyclobuta[d]pyrimidine derivatives. For the quantification of primary amino groups a modified photometric assay based on the Kaiser test has been developed and validated for different types of aminated nanodiamond. The results correspond well to values obtained by thermogravimetry. The method represents an alternative wet-chemical quantification method in cases where other techniques like elemental analysis fail due to unfavourable combustion behaviour of the analyte or other impediments.
Jarre, Gerald; Heyer, Steffen; Memmel, Elisabeth; Meinhardt, Thomas; Krueger, Anke
2014-01-01
Nanodiamonds functionalized with different organic moieties carrying terminal amino groups have been synthesized. These include conjugates generated by Diels-Alder reactions of ortho-quinodimethanes formed in situ from pyrazine and 5,6-dihydrocyclobuta[d]pyrimidine derivatives. For the quantification of primary amino groups a modified photometric assay based on the Kaiser test has been developed and validated for different types of aminated nanodiamond. The results correspond well to values obtained by thermogravimetry. The method represents an alternative wet-chemical quantification method in cases where other techniques like elemental analysis fail due to unfavourable combustion behaviour of the analyte or other impediments. PMID:25550737
Directory of Open Access Journals (Sweden)
Mourad Boughedaoui
2012-01-01
Full Text Available In addition to a handful of words used by the language to express entities or mass, there is a plethora of phrases characterized by the use of a quantity word or phrase linked by 'of' to the following noun group in a partitive construction, symbolized by [Q of the NP]. This type of quantification comprises a variety of forms depending on whether they refer to number or mass on the one hand, or to objective quantifications (accurate and easily checked measures) or subjective ones (left to the speaker's appraisal of number or mass) on the other. Much of this paper concentrates on the of-partitive construction by analyzing a corpus of both generalist and specialist texts. This has allowed us not only to investigate the way these quantifiers are organized and behave as we move from general to particular domains, but also to lay stress on their morphological and syntactic features.
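The tally a corpus study of this construction needs can be sketched with a regular expression (the paper's own corpora are not available here, so the sample sentences and the pattern are illustrative only; a serious study would use a tagged corpus rather than a bare regex):

```python
import re
from collections import Counter

# Rough [Q of the NP] matcher: an optional small determiner/numeral, the
# candidate quantity noun, then "of the" and the head of the following NP.
PARTITIVE = re.compile(
    r"\b(?:a|an|one|two|three)?\s*(\w+)\s+of\s+the\s+(\w+)",
    re.IGNORECASE,
)

corpus = (
    "A number of the samples were lost. The majority of the patients improved. "
    "A handful of the words express mass. Two thirds of the budget was spent."
)

quantifiers = Counter(q.lower() for q, np_head in PARTITIVE.findall(corpus))
print(quantifiers.most_common())
```

Running the same counter over a generalist corpus and a specialist corpus, and comparing the resulting distributions, is the quantitative comparison the abstract describes.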
International Nuclear Information System (INIS)
Motivated by studies on 4d black holes and q-deformed 2d Yang-Mills theory, and borrowing ideas from compact geometry of the blowing up of affine ADE singularities, we build a class of local Calabi-Yau threefolds (CY3) extending the local 2-torus model O(m)+O(-m)->T2 considered in [C. Gomez, S. Montanez, A comment on quantum distribution functions and the OSV conjecture, hep-th/0608162] to test the OSV conjecture. We first study toric realizations of T2 and then build a toric representation of X3 using intersections of local Calabi-Yau threefolds O(m)+O(-m-2)->P1. We develop the 2d N=2 linear σ-model for this class of toric CY3s. Then we use these local backgrounds to study the partition function of 4d black holes in type IIA string theory and the underlying q-deformed 2d quiver gauge theories. We also make comments on 4d black holes obtained from D-branes wrapping cycles in O(m)+O(-m-2)->Bk with m=(m1,...,mk) a k-dim integer vector and Bk a compact complex one-dimensional base consisting of the intersection of k 2-spheres Si2 with generic intersection matrix Iij. We give as well the explicit expression of the q-deformed path integral measure of the partition function of the 2d quiver gauge theory in terms of Iij. A comment on the link between our analysis and the construction of [N. Caporaso, M. Cirafici, L. Griguolo, S. Pasquetti, D. Seminara, R.J. Szabo, Topological strings, two-dimensional Yang-Mills theory and Chern-Simons theory on torus bundles, hep-th/0609129] is also given
Quantification of nerolidol in mouse plasma using gas chromatography-mass spectrometry.
Saito, Alexandre Yukio; Sussmann, Rodrigo Antonio Ceschini; Kimura, Emilia Akemi; Cassera, Maria Belen; Katzin, Alejandro Miguel
2015-07-10
Nerolidol is a naturally occurring sesquiterpene found in the essential oils of many types of flowers and plants. It is frequently used in cosmetics, as a food flavoring agent, and in cleaning products. In addition, nerolidol is used as a skin penetration enhancer for transdermal delivery of therapeutic drugs. However, nerolidol is hemolytic at low concentrations. A simple and fast GC-MS method was developed for preliminary quantification and assessment of biological interferences of nerolidol in mouse plasma after oral dosing. Calibration curves were linear in the concentration range of 0.010-5 μg/mL nerolidol in mouse plasma with correlation coefficients (r) greater than 0.99. Limits of detection and quantification were 0.0017 and 0.0035 μg/mL, respectively. The optimized method was successfully applied to the quantification of nerolidol in mouse plasma. PMID:25880240
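The calibration statistics reported above can be reproduced on synthetic numbers (the paper's raw calibration data are not given, so the concentrations and responses below are hypothetical): fit a linear calibration curve, check r > 0.99, and estimate LOD/LOQ from the residual standard deviation and slope, using the common 3.3·s/slope and 10·s/slope convention.

```python
import statistics

conc = [0.010, 0.050, 0.10, 0.50, 1.0, 5.0]          # ug/mL, spiked plasma (hypothetical)
area = [0.021, 0.101, 0.198, 1.010, 1.990, 10.05]    # hypothetical peak-area ratios

n = len(conc)
mx, my = statistics.fmean(conc), statistics.fmean(area)
sxx = sum((x - mx) ** 2 for x in conc)
syy = sum((y - my) ** 2 for y in area)
sxy = sum((x - mx) * (y - my) for x, y in zip(conc, area))

slope = sxy / sxx
intercept = my - slope * mx
r = sxy / (sxx * syy) ** 0.5                          # correlation coefficient

residuals = [y - (slope * x + intercept) for x, y in zip(conc, area)]
sd_res = (sum(e * e for e in residuals) / (n - 2)) ** 0.5

lod = 3.3 * sd_res / slope                            # limit of detection
loq = 10 * sd_res / slope                             # limit of quantification
print(f"slope={slope:.3f}, r={r:.4f}, LOD={lod:.4f}, LOQ={loq:.4f} ug/mL")
```

Back-calculating unknowns with (area - intercept) / slope then gives concentrations, valid only inside the calibrated range.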
Quantification of brain lipids by FTIR spectroscopy and partial least squares regression
Dreissig, Isabell; Machill, Susanne; Salzer, Reiner; Krafft, Christoph
2009-01-01
Brain tissue is characterized by a high lipid content. This content decreases and the lipid composition changes during the transformation from normal brain tissue to tumors. Therefore, the analysis of brain lipids might complement existing diagnostic tools to determine the tumor type and grade. The objective of this work is to extract lipids from the gray matter and white matter of porcine brain tissue, record infrared (IR) spectra of these extracts, and develop a quantification model for the main lipids based on partial least squares (PLS) regression. IR spectra of the pure lipids cholesterol, cholesterol ester, phosphatidic acid, phosphatidylcholine, phosphatidylethanolamine, phosphatidylserine, phosphatidylinositol, sphingomyelin, galactocerebroside and sulfatide were used as references. Two lipid mixtures were prepared for training and validation of the quantification model. The composition of lipid extracts predicted by PLS regression of the IR spectra was compared with lipid quantification by thin-layer chromatography.
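The study's PLS regression model is not reproduced here; as a simplified stand-in, ordinary least-squares unmixing of a mixture spectrum against pure-component reference spectra illustrates the same linear-mixture assumption that makes such quantification possible. All spectra and weights below are synthetic.

```python
import numpy as np

# Toy "reference spectra" for three lipids over 50 wavenumber points.
# In the study these would be measured IR spectra of pure cholesterol,
# phosphatidylcholine, sphingomyelin, etc.; here they are synthetic peaks.
rng = np.random.default_rng(0)
x = np.linspace(0, 1, 50)

def peak(center, width):
    return np.exp(-((x - center) / width) ** 2)

refs = np.stack([peak(0.2, 0.05), peak(0.5, 0.08), peak(0.8, 0.04)])  # (3, 50)

# A mixture spectrum is (approximately) a weighted sum of the references.
true_w = np.array([0.5, 0.3, 0.2])
mixture = true_w @ refs + rng.normal(0, 0.001, size=50)  # small noise

# Least-squares unmixing: solve refs.T @ w ~= mixture for the weights.
w_hat, *_ = np.linalg.lstsq(refs.T, mixture, rcond=None)
```

PLS goes further by building latent components that handle collinear, noisy reference spectra, which plain least squares handles poorly; the sketch only shows the underlying linear model.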
International Nuclear Information System (INIS)
Sensitivity calculations are very important in design and safety of nuclear reactor cores. Large codes with a great number of physical considerations have been used to perform sensitivity studies. However, these codes need long computation time involving high costs. The perturbation theory has constituted an efficient and economical method to perform sensitivity analysis. The present work is an application of the perturbation theory (matricial formalism) to a simplified model of DNB (Departure from Nucleate Boiling) analysis to perform sensitivity calculations in PWR cores. Expressions to calculate the sensitivity coefficients of enthalpy and coolant velocity with respect to coolant density and hot channel area were developed from the proposed model. The CASNUR.FOR code to evaluate these sensitivity coefficients was written in Fortran. The comparison between results obtained from the matricial formalism of perturbation theory with those obtained directly from the proposed model makes evident the efficiency and potentiality of this perturbation method for nuclear reactor cores sensitivity calculations (author). 23 refs, 4 figs, 7 tabs
Grushka, Ya
2007-01-01
For an arbitrary operator A on a Banach space X that generates a C_0-group satisfying a certain growth condition at infinity, we give direct theorems on the connection between the smoothness degree of a vector $x\in X$ with respect to the operator A, the order of convergence to zero of the best approximation of x by entire vectors of exponential type for the operator A, and the k-modulus of continuity. The results obtained allow one to derive Jackson-type inequalities in many classical spaces of periodic functions and weighted $L_p$ spaces.
International Nuclear Information System (INIS)
Starting from an analysis of the metallogenetic epochs and spatial distribution of typical interlayer-oxidation-zone sandstone-type uranium deposits in China and abroad, and of their relation to basin evolution, the authors propose that the last unconformity mainly controls the metallogenetic epoch, while the strength of tectonic activity after the last unconformity determines the deposit location. An exploration strategy whose kernel is to work from the newest events back to the oldest is put forward. The means and methods of using SAR technology to identify key ore-controlling factors are discussed, and an application study in the Eastern Jungar Basin is presented.
Directory of Open Access Journals (Sweden)
Li Song
2010-04-01
Background: Quantitative proteomics technologies have been developed to comprehensively identify and quantify proteins in two or more complex samples. Quantitative proteomics based on differential stable isotope labeling is one such quantification technology. Mass spectrometric data generated for peptide quantification are often noisy, and peak detection and definition require various smoothing filters to remove noise in order to achieve accurate peptide quantification. Many traditional smoothing filters, such as the moving-average filter, Savitzky-Golay filter and Gaussian filter, have been used to reduce noise in MS peaks. However, limitations of these filtering approaches often result in inaccurate peptide quantification. Here we present the WaveletQuant program, based on wavelet theory, for improved MS-based proteomic quantification. Results: We developed a novel discrete wavelet transform (DWT) and a 'Spatial Adaptive Algorithm' to remove noise and to identify true peaks. We programmed and compiled WaveletQuant using Visual C++ 2005 Express Edition. We then incorporated the WaveletQuant program into the Trans-Proteomic Pipeline (TPP), a commonly used open-source proteomics analysis pipeline. Conclusions: We showed that WaveletQuant was able to quantify more proteins, and to quantify them more accurately, than ASAPRatio, a program that performs quantification in the TPP pipeline, first using known mixed ratios of yeast extracts and then using a data set from ovarian cancer cell lysates. The program and its documentation can be downloaded from our website at http://systemsbiozju.org/data/WaveletQuant.
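The principle behind wavelet denoising, transform the signal, suppress small detail coefficients, then invert, can be illustrated with a minimal one-level Haar transform. This is not the WaveletQuant DWT or its Spatial Adaptive Algorithm, just the underlying idea applied to a synthetic peak.

```python
import numpy as np

def haar_dwt(signal):
    """One-level Haar transform: returns (approximation, detail) coefficients."""
    s = signal.reshape(-1, 2)                      # requires even length
    approx = (s[:, 0] + s[:, 1]) / np.sqrt(2)
    detail = (s[:, 0] - s[:, 1]) / np.sqrt(2)
    return approx, detail

def haar_idwt(approx, detail):
    """Inverse one-level Haar transform (exact reconstruction)."""
    even = (approx + detail) / np.sqrt(2)
    odd = (approx - detail) / np.sqrt(2)
    return np.stack([even, odd], axis=1).ravel()

def denoise(signal, threshold):
    """Soft-threshold the detail coefficients, then reconstruct."""
    approx, detail = haar_dwt(signal)
    detail = np.sign(detail) * np.maximum(np.abs(detail) - threshold, 0.0)
    return haar_idwt(approx, detail)

# Noisy synthetic "MS peak": Gaussian peak plus white noise.
rng = np.random.default_rng(1)
t = np.linspace(-1, 1, 256)
clean = np.exp(-(t / 0.2) ** 2)
noisy = clean + rng.normal(0, 0.05, size=t.size)
smoothed = denoise(noisy, threshold=0.1)
```

Real pipelines use multi-level transforms with smoother wavelets and data-driven thresholds; the Haar case just makes the mechanics visible.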
International Nuclear Information System (INIS)
Frequency conversion (FC) and type-II parametric down-conversion (PDC) processes serve as basic building blocks for the implementation of quantum optical experiments: type-II PDC enables the efficient creation of quantum states such as photon-number states and Einstein–Podolsky–Rosen (EPR)-states. FC gives rise to technologies enabling efficient atom–photon coupling, ultrafast pulse gates and enhanced detection schemes. However, despite their widespread deployment, their theoretical treatment remains challenging. Especially the multi-photon components in the high-gain regime as well as the explicit time-dependence of the involved Hamiltonians hamper an efficient theoretical description of these nonlinear optical processes. In this paper, we investigate these effects and put forward two models that enable a full description of FC and type-II PDC in the high-gain regime. We present a rigorous numerical model relying on the solution of coupled integro-differential equations that covers the complete dynamics of the process. As an alternative, we develop a simplified model that, at the expense of neglecting time-ordering effects, enables an analytical solution. While the simplified model approximates the correct solution with high fidelity in a broad parameter range, sufficient for many experimental situations, such as FC with low efficiency, entangled photon-pair generation and the heralding of single photons from type-II PDC, our investigations reveal that the rigorous model predicts a decreased performance for FC processes in quantum pulse gate applications and an enhanced EPR-state generation rate during type-II PDC, when EPR squeezing values above 12 dB are considered. (paper)
Quantifications and Modeling of Human Failure Events in a Fire PSA
International Nuclear Information System (INIS)
USNRC and EPRI developed guidance, 'Fire Human Reliability Analysis Guidelines, NUREG-1921', for estimating human error probabilities (HEPs) for HFEs under fire conditions. NUREG-1921 classifies HFEs into four types associated with the following human actions: - Type 1: new and existing Main Control Room (MCR) actions - Type 2: new and existing ex-MCR actions - Type 3: actions associated with using alternate shutdown means (ASD) - Type 4: actions relating to errors of commission (EOCs) or errors of omission (EOOs) as a result of incorrect indications (SPI). In this paper, approaches for the quantification and modeling of HFEs related to Type 1, 2 and 3 human actions are introduced, and the human reliability analysis process for a fire PSA of Hanul Unit 3 is described. A multiplier of 10 was used to re-estimate the HEPs for the pre-existing internal human actions. The HEPs for all ex-MCR actions were assumed to be one. New MCR human actions were quantified using the scoping analysis method of NUREG-1921. If a quantified human action was identified to be risk-significant, detailed approaches (modeling and quantification) were used to incorporate fire situations into it. Multiple HFEs for a single human action were defined and separately quantified to incorporate the specific fire situations. From this study, we confirm that the modeling, as well as the quantification, of human actions is very important for treating them appropriately in PSA logic structures.
Quantifications and Modeling of Human Failure Events in a Fire PSA
Energy Technology Data Exchange (ETDEWEB)
Kang, Dae Il; Kim, Kilyoo; Jang, Seung-Cheol [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)
2014-10-15
USNRC and EPRI developed guidance, 'Fire Human Reliability Analysis Guidelines, NUREG-1921', for estimating human error probabilities (HEPs) for HFEs under fire conditions. NUREG-1921 classifies HFEs into four types associated with the following human actions: - Type 1: new and existing Main Control Room (MCR) actions - Type 2: new and existing ex-MCR actions - Type 3: actions associated with using alternate shutdown means (ASD) - Type 4: actions relating to errors of commission (EOCs) or errors of omission (EOOs) as a result of incorrect indications (SPI). In this paper, approaches for the quantification and modeling of HFEs related to Type 1, 2 and 3 human actions are introduced, and the human reliability analysis process for a fire PSA of Hanul Unit 3 is described. A multiplier of 10 was used to re-estimate the HEPs for the pre-existing internal human actions. The HEPs for all ex-MCR actions were assumed to be one. New MCR human actions were quantified using the scoping analysis method of NUREG-1921. If a quantified human action was identified to be risk-significant, detailed approaches (modeling and quantification) were used to incorporate fire situations into it. Multiple HFEs for a single human action were defined and separately quantified to incorporate the specific fire situations. From this study, we confirm that the modeling, as well as the quantification, of human actions is very important for treating them appropriately in PSA logic structures.
The Types of Axisymmetric Exact Solutions Closely Related to n-SOLITONS for Yang-Mills Theory
Zhong, Zai Zhe
In this letter, we point out that if a symmetric 2×2 real matrix M(ρ,z) obeys the Belinsky-Zakharov equation and |det(M)|=1, then an axisymmetric Bogomol'nyi field exact solution for the Yang-Mills-Higgs theory can be given. By using the inverse scattering technique, some special Bogomol'nyi field exact solutions, which are closely related to the true solitons, are generated. In particular, the Schwarzschild-like solution is a two-soliton-like solution.
Interpretation of eRBS for H quantification at surfaces
International Nuclear Information System (INIS)
Complete text of publication follows. The quantification of H at the surface is a subject of key importance. However, direct quantification of this element at the surface region (2 samples have to be made with care.
Improving the quantification of Brownian motion
Catipovic, Marco A.; Tyler, Paul M.; Trapani, Josef G.; Carter, Ashley R.
2013-07-01
Brownian motion experiments have become a staple of the undergraduate advanced laboratory, yet quantification of these experiments is difficult, typically producing errors of 10%-15% or more. Here, we discuss the individual sources of error in the experiment: sampling error, uncertainty in the diffusion coefficient, tracking error, vibration, and microscope drift. We model each source of error using theoretical and computational methods and compare the model to our experimental data. Finally, we describe various ways to reduce each source of error to less than 1%, improving the quantification of Brownian motion.
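The estimate at the heart of such experiments, the diffusion coefficient from the mean squared displacement (MSD = 4·D·τ in two dimensions), together with its sampling error, can be checked on simulated tracks. All parameter values below are hypothetical illustration values.

```python
import numpy as np

# Simulate 2D Brownian trajectories with a known diffusion coefficient.
rng = np.random.default_rng(2)
D_true, dt = 0.4, 0.1              # um^2/s and s: made-up values
n_particles, n_steps = 200, 500

# Each step component is Gaussian with variance 2*D*dt per axis.
steps = rng.normal(0.0, np.sqrt(2 * D_true * dt),
                   size=(n_particles, n_steps, 2))
tracks = np.cumsum(steps, axis=1)  # positions, as a tracking camera would see

# Estimate D from the mean squared displacement at lag dt:
# in 2D, MSD(dt) = 4*D*dt, so D_hat = MSD / (4*dt).
sq_disp = np.sum(steps ** 2, axis=2)       # squared step lengths
D_hat = sq_disp.mean() / (4 * dt)

# Sampling error of the estimate shrinks as 1/sqrt(number of steps used).
stderr = sq_disp.std(ddof=1) / np.sqrt(sq_disp.size) / (4 * dt)
```

With 200 tracks of 500 steps the statistical error is already well below 1%, which mirrors the paper's point that sampling error is controllable; tracking error, vibration, and drift are separate terms not modeled here.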
Chiu, Cheng-Chau; Vogt, Thomas; Zhao, Lili; Genest, Alexander; Rösch, Notker
2015-07-28
In this review we address recent efforts on the experimental and theoretical sides to study MoVO-type mixed metal oxides (MMOs) and their properties. We illustrate how the structures of MMOs have been evaluated using a large variety of experimental techniques, such as electron microscopy, neutron diffraction, and X-ray diffraction. Furthermore, we discuss the current view on structure-catalysis correlations derived from recent experiments. In the second part, we examine useful tools of theoretical chemistry for exploring MoVO-type systems. We discuss the need for hybrid DFT methods and analyze how, in the context of MMO studies, semi-local DFT approximations can encounter problems due to a notable self-interaction error when describing oxidic species and reactions on them. In addition, we discuss various aspects of modeling that are important when attempting to map complex MMO systems. PMID:26126874
Random data Cauchy theory for nonlinear wave equations of power-type on $\mathbb{R}^3$
Luhrmann, Jonas; Mendelson, Dana
2013-01-01
We consider the defocusing nonlinear wave equation of power-type on $\mathbb{R}^3$. We establish an almost sure global existence result with respect to a suitable randomization of the initial data. In particular, this provides examples of initial data of super-critical regularity which lead to global solutions. The proof is based upon Bourgain's high-low frequency decomposition and improved averaging effects for the free evolution of the randomized initial data.
Directory of Open Access Journals (Sweden)
Marc M. Van Hulle
2011-05-01
The damage caused by corrosion in chemical process installations can lead to unexpected plant shutdowns and the leakage of potentially toxic chemicals into the environment. When subjected to corrosion, structural changes in the material occur, leading to energy releases as acoustic waves. This acoustic activity can in turn be used for corrosion monitoring, and even for predicting the type of corrosion. Here we apply wavelet packet decomposition to extract features from acoustic emission signals. We then use the extracted wavelet packet coefficients for distinguishing between the most important types of corrosion processes in the chemical process industry: uniform corrosion, pitting and stress corrosion cracking. The local discriminant basis selection algorithm can be considered as a standard for the selection of the most discriminative wavelet coefficients. However, it does not take the statistical dependencies between wavelet coefficients into account. We show that, when these dependencies are ignored, a lower accuracy is obtained in predicting the corrosion type. We compare several mutual information filters to take these dependencies into account in order to arrive at a more accurate prediction.
International Nuclear Information System (INIS)
The dynamic phase transitions (DPTs) and dynamic phase diagrams of the kinetic spin-1/2 bilayer system in the presence of a time-dependent oscillating external magnetic field are studied by using Glauber-type stochastic dynamics based on the effective-field theory with correlations for the ferromagnetic/ferromagnetic (FM/FM), antiferromagnetic/ferromagnetic (AFM/FM) and antiferromagnetic/antiferromagnetic (AFM/AFM) interactions. The time variations of the average magnetizations and the temperature dependence of the dynamic magnetizations are investigated. The dynamic phase diagrams for the amplitude of the oscillating field versus temperature are presented. The results are compared with those for the same system within Glauber-type stochastic dynamics based on the mean-field theory. - Highlights: • The Ising bilayer system is investigated within Glauber dynamics based on EFT. • The time variations of the average order parameters are studied to identify the phases. • The dynamic phase diagrams are found for the different interaction parameters. • The system displays critical points as well as re-entrant behavior.
Schwabe, O.; Shehab, E.; Erkoyuncu, J.
2015-08-01
The lack of defensible methods for quantifying cost estimate uncertainty over the whole product life cycle of aerospace innovations such as propulsion systems or airframes poses a significant challenge to the creation of accurate and defensible cost estimates. Based on the axiomatic definition of uncertainty as the actual prediction error of the cost estimate, this paper provides a comprehensive overview of metrics used for the uncertainty quantification of cost estimates, based on a literature review, an evaluation of publicly funded projects such as those in the CORDIS or Horizon 2020 programs, and an analysis of established approaches used by organizations such as NASA, the U.S. Department of Defense, the ESA, and various commercial companies. The metrics are categorized based on their foundational character (foundations), their use in practice (state-of-practice), their availability for practice (state-of-art) and those suggested for future exploration (state-of-future). Insights gained were that a variety of uncertainty quantification metrics exist whose suitability depends on the volatility of available relevant information, as defined by technical and cost readiness level, and the number of whole product life cycle phases the estimate is intended to be valid for. Information volatility and the number of whole product life cycle phases can hereby be considered as defining multi-dimensional probability fields admitting various uncertainty quantification metric families with identifiable thresholds for transitioning between them. The key research gaps identified were the lack of guidance grounded in theory for the selection of uncertainty quantification metrics and the lack of practical alternatives to metrics based on the Central Limit Theorem.
An innovative uncertainty quantification framework, consisting of a set-theory-based typology, a data library, a classification system, and a corresponding input-output model, is put forward to address this research gap as the basis for future work in this field.
Quantification of Information in a One-Way Plant-to-Animal Communication System
Doyle, Laurance R.
2009-01-01
In order to demonstrate possible broader applications of information theory to the quantification of non-human communication systems, we apply calculations of information entropy to a simple chemical communication from the cotton plant (Gossypium hirsutum) to the wasp (Cardiochiles nigriceps) studied by DeMoraes et al. The purpose of this chemical communication from cotton plants to wasps is presumed to be to allow the predatory wasp to more easily obtain the location of its preferred prey—on...
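The information-entropy calculation underlying such an analysis is straightforward to sketch. The symbol sequences below are invented stand-ins (symbols representing distinct chemical signals) and do not reproduce the paper's plant-volatile data.

```python
from collections import Counter
from math import log2

def shannon_entropy(sequence):
    """First-order Shannon entropy H = -sum(p_i * log2(p_i)) in bits/symbol."""
    counts = Counter(sequence)
    total = sum(counts.values())
    return -sum((c / total) * log2(c / total) for c in counts.values())

# Hypothetical signal repertoires: each letter stands for a distinct
# volatile compound emitted by the plant. Illustration data only.
uniform = "ABCD" * 25                            # four signals, equally likely
skewed = "A" * 85 + "B" * 10 + "C" * 4 + "D"     # one signal dominates
```

A uniform four-symbol repertoire carries the maximum 2 bits/symbol; a skewed repertoire carries less, which is the kind of comparison entropy-based communication studies rest on.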
Entanglement quantification by local unitaries
Monras, A; Adesso, G.; Giampaolo, S. M.; Gualdi, G.; Davies, G. B.; Illuminati, F.
2011-01-01
Invariance under local unitary operations is a fundamental property that must be obeyed by every proper measure of quantum entanglement. However, this is not the only aspect of entanglement theory where local unitaries play a relevant role. In the present work we show that the application of suitable local unitary operations defines a family of bipartite entanglement monotones, collectively referred to as "mirror entanglement". They are constructed by first considering the (...
Advancing agricultural greenhouse gas quantification*
Olander, Lydia; Wollenberg, Eva; Tubiello, Francesco; Herold, Martin
2013-03-01
1. Introduction Better information on greenhouse gas (GHG) emissions and mitigation potential in the agricultural sector is necessary to manage these emissions and identify responses that are consistent with the food security and economic development priorities of countries. Critical activity data (what crops or livestock are managed in what way) are poor or lacking for many agricultural systems, especially in developing countries. In addition, the currently available methods for quantifying emissions and mitigation are often too expensive or complex or not sufficiently user friendly for widespread use. The purpose of this focus issue is to capture the state of the art in quantifying greenhouse gases from agricultural systems, with the goal of better understanding our current capabilities and near-term potential for improvement, with particular attention to quantification issues relevant to smallholders in developing countries. This work is timely in light of international discussions and negotiations around how agriculture should be included in efforts to reduce and adapt to climate change impacts, and considering that significant climate financing to developing countries in post-2012 agreements may be linked to their increased ability to identify and report GHG emissions (Murphy et al 2010, CCAFS 2011, FAO 2011). 2. Agriculture and climate change mitigation The main agricultural GHGs—methane and nitrous oxide—account for 10%-12% of anthropogenic emissions globally (Smith et al 2008), or around 50% and 60% of total anthropogenic methane and nitrous oxide emissions, respectively, in 2005. Net carbon dioxide fluxes between agricultural land and the atmosphere linked to food production are relatively small, although significant carbon emissions are associated with degradation of organic soils for plantations in tropical regions (Smith et al 2007, FAO 2012). 
Population growth and shifts in dietary patterns toward more meat and dairy consumption will lead to increased emissions unless we improve production efficiencies and management. Developing countries currently account for about three-quarters of direct emissions and are expected to be the most rapidly growing emission sources in the future (FAO 2011). Reducing agricultural emissions and increasing carbon sequestration in the soil and biomass has the potential to reduce agriculture's contribution to climate change by 5.5-6.0 gigatons (Gt) of carbon dioxide equivalent (CO2eq)/year. Economic potentials, which take into account costs of implementation, range from 1.5 to 4.3 GT CO2eq/year, depending on marginal abatement costs assumed and financial resources committed, with most of this potential in developing countries (Smith et al 2007). The opportunity for mitigation in agriculture is thus significant, and, if realized, would contribute to making this sector carbon neutral. Yet it is only through a robust and shared understanding of how much carbon can be stored or how much CO2 is reduced from mitigation practices that informed decisions can be made about how to identify, implement, and balance a suite of mitigation practices as diverse as enhancing soil organic matter, increasing the digestibility of feed for cattle, and increasing the efficiency of nitrogen fertilizer applications. Only by selecting a portfolio of options adapted to regional characteristics and goals can mitigation needs be best matched to also serve rural development goals, including food security and increased resilience to climate change. Expansion of agricultural land also remains a major contributor of greenhouse gases, with deforestation, largely linked to clearing of land for cultivation or pasture, generating 80% of emissions from developing countries (Hosonuma et al 2012). 
There are clear opportunities for these countries to address mitigation strategies from the forest and agriculture sector, recognizing that agriculture plays a large role in economic and development potential. In this context, multiple development goals can be reinforced by specific climate funding granted on the basis of
Application of "Uncertainty Quantification" to Railway Dynamics Problems
DEFF Research Database (Denmark)
Bigoni, Daniele; Engsig-Karup, Allan Peter
2013-01-01
The paper describes the results of the application of "Uncertainty Quantification" methods in railway vehicle dynamics. The system parameters are given by probability distributions. The results of the application of the Monte-Carlo and generalized Polynomial Chaos methods to a simple bogie model will be discussed.
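The Monte-Carlo side of such a study can be sketched with a made-up closed-form response standing in for the actual multibody bogie simulation; every parameter value and the response function below are hypothetical.

```python
import numpy as np

# Toy response function standing in for a bogie-dynamics simulation:
# a (made-up) critical speed as a function of suspension stiffness k
# and damping c. The real model would be a multibody ODE integration.
def critical_speed(k, c):
    return 50.0 + 0.3 * np.sqrt(k) + 2.0 * c - 0.01 * c ** 2

rng = np.random.default_rng(3)
n = 100_000

# System parameters given by probability distributions, as in the paper.
k = rng.normal(4.0e4, 2.0e3, n)     # stiffness: Gaussian, hypothetical units
c = rng.uniform(8.0, 12.0, n)       # damping: uniform, hypothetical units

# Monte-Carlo propagation: push parameter samples through the model and
# read statistics off the resulting response distribution.
v = critical_speed(k, c)
mean, std = v.mean(), v.std(ddof=1)
p5, p95 = np.percentile(v, [5, 95])
```

Generalized Polynomial Chaos replaces the brute-force sampling with a spectral expansion in the random parameters, converging far faster when the response is smooth; the Monte-Carlo sketch is the baseline both methods are compared against.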
Recurrence quantification analysis in Liu's attractor
International Nuclear Information System (INIS)
Recurrence Quantification Analysis is used to detect transitions from chaos to periodic states, or from chaos to chaos, in a new dynamical system proposed by Liu et al. This system contains a control parameter in its second equation and was originally introduced to investigate the forming mechanism of the compound structure of the chaotic attractor, which exists when the control parameter is zero.
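The basic recurrence-plot quantities can be sketched directly. The example below uses the one-dimensional logistic map rather than Liu's three-dimensional system, simply because its periodic/chaotic contrast is easy to generate; the recurrence-rate drop at the transition is the kind of signature RQA detects.

```python
import numpy as np

def recurrence_matrix(x, eps):
    """R[i, j] = 1 when states i and j are closer than the threshold eps."""
    d = np.abs(x[:, None] - x[None, :])
    return (d < eps).astype(int)

def recurrence_rate(R):
    """Fraction of recurrent point pairs, excluding the trivial diagonal."""
    n = R.shape[0]
    return (R.sum() - n) / (n * (n - 1))

# Logistic map x -> r*x*(1-x): periodic at r = 3.5, chaotic at r = 4.0.
def logistic_series(r, n=400, x0=0.4, burn=100):
    x, out = x0, []
    for i in range(n + burn):
        x = r * x * (1 - x)
        if i >= burn:                 # discard the transient
            out.append(x)
    return np.array(out)

periodic = logistic_series(3.5)
chaotic = logistic_series(4.0)
rr_periodic = recurrence_rate(recurrence_matrix(periodic, 0.01))
rr_chaotic = recurrence_rate(recurrence_matrix(chaotic, 0.01))
```

For flows like Liu's system one would first embed the trajectory and use a norm in state space; further RQA measures (determinism, laminarity) count diagonal and vertical line structures in the same matrix.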
Energy Technology Data Exchange (ETDEWEB)
Salmani, E. [LMPHE, associe au CNRST (URAC 12), Faculte des Sciences, Universite Mohammed V-Agdal, Rabat (Morocco); Mounkachi, O. [Institute of Nanomaterials and Nanotechnology, MAScIR, Rabat (Morocco); Ez-Zahraouy, H., E-mail: ezahamid@fsr.ac.ma [LMPHE, associe au CNRST (URAC 12), Faculte des Sciences, Universite Mohammed V-Agdal, Rabat (Morocco); El Kenz, A. [LMPHE, associe au CNRST (URAC 12), Faculte des Sciences, Universite Mohammed V-Agdal, Rabat (Morocco); Hamedoun, M. [Institute of Nanomaterials and Nanotechnology, MAScIR, Rabat (Morocco); Benyoussef, A. [LMPHE, associe au CNRST (URAC 12), Faculte des Sciences, Universite Mohammed V-Agdal, Rabat (Morocco); Institute of Nanomaterials and Nanotechnology, MAScIR, Rabat (Morocco); Hassan II Academy of Science and Technology, Rabat (Morocco)
2013-03-15
Based on first-principles spin-density functional calculations, using the Korringa-Kohn-Rostoker method combined with the coherent potential approximation, we investigated the half-metallic ferromagnetic behavior of (Ga, Fe)N co-doped with carbon within the self-interaction-corrected local density approximation. The mechanism of hybridization and interaction between magnetic ions in p-type (Ga, Fe)N is investigated. The stability energy of the ferromagnetic and disordered-local-moment states was calculated for different carbon concentrations. The local density and the self-interaction-corrected approximations have been used to explain the strong ferromagnetic interaction observed and the mechanism that stabilizes this state. The transition temperature to the ferromagnetic state has been calculated within the effective field theory, with a Honmura-Kaneyoshi differential operator technique. - Highlights: • The paper focuses on the magnetic properties and electronic structure of p-type (Ga, Fe)N within the LDA and SIC approximations. • These methods explain the strong ferromagnetic interaction observed, the mechanism of its stability, and the mechanism of hybridization and interaction between magnetic ions in p-type (Ga, Fe)N. • The results obtained are interesting and can serve as a reference in the field of diluted magnetic semiconductors.
International Nuclear Information System (INIS)
Based on first-principles spin-density functional calculations, using the Korringa–Kohn–Rostoker method combined with the coherent potential approximation, we investigated the half-metallic ferromagnetic behavior of (Ga, Fe)N co-doped with carbon within the self-interaction-corrected local density approximation. The mechanism of hybridization and interaction between magnetic ions in p-type (Ga, Fe)N is investigated. The stability energy of the ferromagnetic and disordered-local-moment states was calculated for different carbon concentrations. The local density and the self-interaction-corrected approximations have been used to explain the strong ferromagnetic interaction observed and the mechanism that stabilizes this state. The transition temperature to the ferromagnetic state has been calculated within the effective field theory, with a Honmura–Kaneyoshi differential operator technique. - Highlights: • The paper focuses on the magnetic properties and electronic structure of p-type (Ga, Fe)N within the LDA and SIC approximations. • These methods explain the strong ferromagnetic interaction observed, the mechanism of its stability, and the mechanism of hybridization and interaction between magnetic ions in p-type (Ga, Fe)N. • The results obtained are interesting and can serve as a reference in the field of diluted magnetic semiconductors.
Structure and dynamics of Xn-type clusters (n = 3, 4, 6) from spontaneous symmetry breaking theory
International Nuclear Information System (INIS)
On the basis of three symmetries of nature, homogeneity and isotropy of space and indistinguishability of identical particles, we have found a group of coordinate transformations that leaves invariant the electronic energy and the potential energy of nuclei in every molecule subjected to no external fields. From these transformations we derived the formula for the dynamical representation and proved that every molecule has at least one Raman-active, totally symmetric normal mode of vibration. As an example, we studied stable configurations and dynamics of Xn-type molecules (clusters), n = 3, 4, 6, within symmetry-adapted, second-order expansion of the electronic energy with respect to nuclear coordinates, around the united atom. Within this approximation, for a positive coefficient in the expansion, a homonuclear three- (four-, six-) atomic cluster has a stable configuration of D3h (Td, Oh) symmetry. Our calculated mutual ratios of vibrational frequencies for clusters with these geometries are in reasonable agreement with experiment. (paper)
Collapse Theories as Beable Theories
Bacciagaluppi, Guido
2010-01-01
I discuss the interpretation of spontaneous collapse theories, with particular reference to Bell's suggestion that the stochastic jumps in the evolution of the wave function should be considered as local beables of the theory. I develop this analogy in some detail for the case of non-relativistic GRW-type theories, using a generalisation of Bell's notion of beables to POV measures. In the context of CSL-type theories, this strategy appears to fail, and I discuss instead Ghirardi and co-worker...
Wolpert, David H.
2005-01-01
Probability theory governs the outcome of a game; there is a distribution over mixed strategies, not a single "equilibrium". To predict a single mixed strategy one must use a loss function external to the game's players. This provides a quantification of any strategy's rationality. We prove that rationality falls as the cost of computation rises (for players who have not previously interacted). All of this extends to games with varying numbers of players.
International Nuclear Information System (INIS)
It is shown that the effective Hamiltonian representation, as formulated in the author's papers, serves as a basis for distinguishing, in a broadband environment of an open quantum system, independent noise sources that determine, in terms of the stationary quantum Wiener and Poisson processes in the Markov approximation, the effective Hamiltonian and the equation for the evolution operator of the open system and its environment. General stochastic differential equations of generalized Langevin (non-Wiener) type for the evolution operator and the kinetic equation for the density matrix of an open system are obtained, which allow one to analyze the dynamics of a wide class of localized open systems in the Markov approximation. The main distinctive features of the dynamics of open quantum systems described in this way are the stabilization of excited states with respect to collective processes and an additional frequency shift of the spectrum of the open system. As an illustration of the general approach developed, the photon dynamics in a single-mode cavity without losses on the mirrors is considered, which contains identical intracavity atoms coupled to the external vacuum electromagnetic field. For some atomic densities, the photons of the cavity mode are "locked" inside the cavity, thus exhibiting a new phenomenon of radiation trapping and non-Wiener dynamics.
Austin, Stéphanie; Senécal, Caroline; Guay, Frédéric; Nouwen, Arie
2011-09-01
This study tests a model derived from Self-Determination Theory (SDT) (Deci and Ryan, 2000) to explain the mechanisms by which non-modifiable factors influence dietary self-care in adolescents with type 1 diabetes (n = 289). SEM analyses adjusted for HbA1c levels revealed that longer diabetes duration and female gender were indicative of poorer dietary self-care. This effect was mediated by contextual and motivational factors as posited by SDT. Poorer autonomy support from practitioners was predominant in girls with longer diabetes duration. Perceived autonomous motivation and self-efficacy were indicative of greater autonomy support, and led to better dietary self-care. PMID:21430132
Entanglement quantification by local unitaries
Monras, A; Giampaolo, S M; Gualdi, G; Davies, G B; Illuminati, F
2011-01-01
Invariance under local unitary operations is a fundamental property that must be obeyed by every proper measure of quantum entanglement. However, this is not the only aspect of entanglement theory where local unitaries play a relevant role. In the present work we show that the application of suitable local unitary operations defines a family of bipartite entanglement monotones, collectively referred to as "shield entanglement". They are constructed by first considering the (squared) Hilbert-Schmidt distance of the state from the set of states obtained by applying to it a given local unitary. To the action of each different local unitary there corresponds a different distance. We then minimize these distances over the sets of local unitaries with different spectra, obtaining an entire family of different entanglement monotones. We show that these shield entanglement monotones are organized in a hierarchical structure, and we establish the conditions that need to be imposed on the spectrum of a local unitary f...
Sharma, Leigh; Markon, Kristian E; Clark, Lee Anna
2014-03-01
Impulsivity is considered a personality trait affecting behavior in many life domains, from recreational activities to important decision making. When extreme, it is associated with mental health problems, such as substance use disorders, as well as with interpersonal and social difficulties, including juvenile delinquency and criminality. Yet, trait impulsivity may not be a unitary construct. We review commonly used self-report measures of personality trait impulsivity and related constructs (e.g., sensation seeking), plus the opposite pole, control or constraint. A meta-analytic principal-components factor analysis demonstrated that these scales comprise 3 distinct factors, each of which aligns with a broad, higher order personality factor-Neuroticism/Negative Emotionality, Disinhibition versus Constraint/Conscientiousness, and Extraversion/Positive Emotionality/Sensation Seeking. Moreover, Disinhibition versus Constraint/Conscientiousness comprise 2 correlated but distinct subfactors: Disinhibition versus Constraint and Conscientiousness/Will versus Resourcelessness. We also review laboratory tasks that purport to measure a construct similar to trait impulsivity. A meta-analytic principal-components factor analysis demonstrated that these tasks constitute 4 factors (Inattention, Inhibition, Impulsive Decision-Making, and Shifting). Although relations between these 2 measurement models are consistently low to very low, relations between both trait scales and laboratory behavioral tasks and daily-life impulsive behaviors are moderate. That is, both independently predict problematic daily-life impulsive behaviors, such as substance use, gambling, and delinquency; their joint use has incremental predictive power over the use of either type of measure alone and furthers our understanding of these important, problematic behaviors. 
Future use of confirmatory methods should help to ascertain with greater precision the number of and relations between impulsivity-related components. PMID:24099400
International Nuclear Information System (INIS)
Anelastic modal equations are used to examine thermal convection occurring over many density scale heights in the entire outer envelope of an A-type star, encompassing both the hydrogen and helium convectively unstable zones. Single-mode anelastic solutions for such compressible convection display strong overshooting of the motions into adjacent radiative zones. Such mixing would preclude diffusive separation of elements in the supposedly quiescent region between the two unstable zones. Indeed, the anelastic solutions reveal that the two zones of convective instability are dynamically coupled by the overshooting motions. The nonlinear single-mode equations admit two solutions for the same horizontal wavelength, and these are distinguished by the sense of the vertical velocity at the center of the three-dimensional cell. The upward directed flows experience large pressure effects when they penetrate into regions where the vertical scale height has become small compared to their horizontal scale. The fluctuating pressure can modify the density fluctuations so that the sense of the buoyancy force is changed, with buoyancy braking actually achieved near the top of the convection zone, even though the mean stratification is still superadiabatic. The pressure and buoyancy work there serves to decelerate the vertical motions and deflect them laterally, leading to strong horizontal shearing motions. Thus the shallow but highly unstable hydrogen ionization zone may serve to prevent convection with a horizontal scale comparable to supergranulation from getting through into the atmosphere with any significant portion of its original momentum.
This suggests that strong horizontal shear flows should be present just below the surface of the star, and similarly that the large-scale motions extending into the stable atmosphere would appear mainly as horizontal flows.
Forest Carbon Leakage Quantification Methods and Their Suitability for Assessing Leakage in REDD
Directory of Open Access Journals (Sweden)
Sabine Henders
2012-01-01
Full Text Available This paper assesses quantification methods for carbon leakage from forestry activities for their suitability in leakage accounting in a future Reducing Emissions from Deforestation and Forest Degradation (REDD) mechanism. To that end, we first conducted a literature review to identify specific pre-requisites for leakage assessment in REDD. We then analyzed a total of 34 quantification methods for leakage emissions from the Clean Development Mechanism (CDM), the Verified Carbon Standard (VCS), the Climate Action Reserve (CAR), the CarbonFix Standard (CFS), and from scientific literature sources. We screened these methods for the leakage aspects they address in terms of leakage type, tools used for quantification and the geographical scale covered. Results show that leakage methods can be grouped into nine main methodological approaches, six of which could fulfill the recommended REDD leakage requirements if approaches for primary and secondary leakage are combined. The majority of methods assessed address either primary or secondary leakage; the former mostly on a local or regional scale and the latter on a national scale. The VCS is found to be the only carbon accounting standard at present to fulfill all leakage quantification requisites in REDD. However, a lack of accounting methods was identified for international leakage, which was addressed by only two methods, both from scientific literature.
International Nuclear Information System (INIS)
After noting some advantages of using perturbation theory, some of the various types are related on a chart and described, including many-body nonlinear summations, a quartic force-field fit for geometry, fourth-order correlation approximations, and a survey of some recent work. Alternative initial approximations in perturbation theory are also discussed. 25 references
Directory of Open Access Journals (Sweden)
Karunamuni Nandini
2008-12-01
Full Text Available Abstract Background Aerobic physical activity (PA) and resistance training are paramount in the treatment and management of type 2 diabetes (T2D), but few studies have examined the determinants of both types of exercise in the same sample. Objective The primary purpose was to investigate the utility of the Theory of Planned Behavior (TPB) in explaining aerobic PA and resistance training in a population sample of T2D adults. Methods A total of 244 individuals were recruited through a random national sample which was created by generating a random list of household phone numbers. The list was proportionate to the actual number of household telephone numbers for each Canadian province (with the exception of Quebec). These individuals completed self-report TPB constructs of attitude, subjective norm, perceived behavioral control and intention, and a 3-month follow-up that assessed aerobic PA and resistance training. Results TPB explained 10% and 8% of the variance respectively for aerobic PA and resistance training; and accounted for 39% and 45% of the variance respectively for aerobic PA and resistance training intentions. Conclusion These results may guide the development of appropriate PA interventions for aerobic PA and resistance training based on the TPB.
QuasR: quantification and annotation of short reads in R
Gaidatzis, Dimos; Lerch, Anita; Hahne, Florian; Stadler, Michael B.
2014-01-01
Summary: QuasR is a package for the integrated analysis of high-throughput sequencing data in R, covering all steps from read preprocessing, alignment and quality control to quantification. QuasR supports different experiment types (including RNA-seq, ChIP-seq and Bis-seq) and analysis variants (e.g. paired-end, stranded, spliced and allele-specific), and is integrated in Bioconductor so that its output can be directly processed for statistical analysis and visualization.
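QuasR itself is an R/Bioconductor package, but the core "quantification" step it performs after alignment, counting reads that overlap annotated features, can be sketched in a few lines. The sketch below is a hypothetical toy (gene intervals and read alignments are made up), not QuasR's actual implementation:

```python
def count_overlaps(genes, reads):
    """genes: {name: (start, end)}; reads: list of (start, end) alignments.
    Returns the number of reads overlapping each gene (a read overlapping
    several genes counts for each of them)."""
    counts = {name: 0 for name in genes}
    for r_start, r_end in reads:
        for name, (g_start, g_end) in genes.items():
            if r_start < g_end and r_end > g_start:  # half-open interval overlap
                counts[name] += 1
    return counts

# Hypothetical annotation and alignments on one chromosome:
genes = {"geneA": (100, 200), "geneB": (300, 400)}
reads = [(90, 120), (150, 180), (390, 450), (500, 550)]
counts = count_overlaps(genes, reads)
```

Real pipelines additionally resolve multi-mapping reads, strandedness and spliced alignments, which is exactly the bookkeeping QuasR automates.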
Jarre, Gerald; Heyer, Steffen; Memmel, Elisabeth; Meinhardt, Thomas; Krueger, Anke
2014-01-01
Nanodiamonds functionalized with different organic moieties carrying terminal amino groups have been synthesized. These include conjugates generated by Diels–Alder reactions of ortho-quinodimethanes formed in situ from pyrazine and 5,6-dihydrocyclobuta[d]pyrimidine derivatives. For the quantification of primary amino groups a modified photometric assay based on the Kaiser test has been developed and validated for different types of aminated nanodiamond. The results correspond well to values o...
Marangon, Iris; Boggetto, Nicole; Ménard-Moyon, Cécilia; Luciani, Nathalie; Wilhelm, Claire; Bianco, Alberto; Gazeau, Florence
2013-01-01
Carbon-based nanomaterials, like carbon nanotubes (CNTs), belong to this type of nanoparticles which are very difficult to discriminate from carbon-rich cell structures and de facto there is still no quantitative method to assess their distribution at cell and tissue levels. What we propose here is an innovative method allowing the detection and quantification of CNTs in cells using a multispectral imaging flow cytometer (ImageStream, Amnis). This newly developed device integrates both a high...
Energy Technology Data Exchange (ETDEWEB)
Gray, George T., III [Los Alamos National Laboratory; Livescu, Veronica [Los Alamos National Laboratory; Cerreta, Ellen K [Los Alamos National Laboratory
2010-03-18
Orientation-imaging microscopy offers unique capabilities to quantify the defects and damage evolution occurring in metals following dynamic and shock loading. Examples of the quantification of the types of deformation twins activated, volume fraction of twinning, and damage evolution as a function of shock loading in Ta are presented. Electron back-scatter diffraction (EBSD) examination of the damage evolution in sweeping-detonation-wave shock loading to study spallation in Cu is also presented.
SPECT quantification of regional radionuclide distributions
International Nuclear Information System (INIS)
SPECT quantification of regional radionuclide activities within the human body is affected by several physical and instrumental factors including attenuation of photons within the patient, Compton scattered events, the system's finite spatial resolution and object size, finite number of detected events, partial volume effects, the radiopharmaceutical biokinetics, and patient and/or organ motion. Furthermore, other instrumentation factors such as calibration of the center-of-rotation, sampling, and detector nonuniformities will affect the SPECT measurement process. These factors are described, together with examples of compensation methods that are currently available for improving SPECT quantification. SPECT offers the potential to improve in vivo estimates of absorbed dose, provided the acquisition, reconstruction, and compensation procedures are adequately implemented and utilized. 53 references, 2 figures
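Among the attenuation-compensation methods alluded to above, a first-order (Chang-type) correction is the classic example: average the attenuation factor over all projection angles for each voxel and divide it out. The sketch below assumes a uniform attenuator and a broad-beam attenuation coefficient of 0.15/cm (typical for 140 keV in soft tissue); both numbers are illustrative, not from the source:

```python
import math

def chang_correction_factor(depths_cm, mu_per_cm=0.15):
    """First-order (Chang-type) attenuation correction for one voxel:
    average the transmission exp(-mu * d) over all projection angles,
    then invert it to get a multiplicative correction factor.
    depths_cm: attenuating path length toward the detector at each angle."""
    mean_transmission = sum(math.exp(-mu_per_cm * d)
                            for d in depths_cm) / len(depths_cm)
    return 1.0 / mean_transmission

# A voxel lying 5 cm deep from every direction in a uniform attenuator:
factor = chang_correction_factor([5.0] * 64)
```

For a voxel at uniform depth d the factor reduces to exp(mu * d), roughly 2.1x here, which gives a feel for how large the attenuation effect is in quantitative SPECT.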
Uncertainty quantification for porous media flows
International Nuclear Information System (INIS)
Uncertainty quantification is an increasingly important aspect of many areas of computational science, where the challenge is to make reliable predictions about the performance of complex physical systems in the absence of complete or reliable data. Predicting flows of oil and water through oil reservoirs is an example of a complex system where accuracy in prediction is needed primarily for financial reasons. Simulation of fluid flow in oil reservoirs is usually carried out using large commercially written finite difference simulators solving conservation equations describing the multi-phase flow through the porous reservoir rocks. This paper examines a Bayesian Framework for uncertainty quantification in porous media flows that uses a stochastic sampling algorithm to generate models that match observed data. Machine learning algorithms are used to speed up the identification of regions in parameter space where good matches to observed data can be found
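The "stochastic sampling algorithm" used to generate models that match observed data is, in its simplest form, a random-walk Metropolis sampler. The sketch below is a 1-D toy (a single scalar parameter and a Gaussian misfit to one observation, both hypothetical), not the reservoir-scale machinery the paper describes:

```python
import math
import random

def metropolis(log_post, x0, steps=5000, scale=0.5, seed=1):
    """Random-walk Metropolis: draws samples from a posterior known only
    up to a normalizing constant, as in Bayesian history matching."""
    random.seed(seed)
    x, lp = x0, log_post(x0)
    samples = []
    for _ in range(steps):
        x_new = x + random.gauss(0.0, scale)
        lp_new = log_post(x_new)
        if math.log(random.random()) < lp_new - lp:  # accept/reject
            x, lp = x_new, lp_new
        samples.append(x)
    return samples

# Toy "match to observed data": Gaussian likelihood around observation 2.0
observed, sigma = 2.0, 0.3
samples = metropolis(lambda k: -0.5 * ((k - observed) / sigma) ** 2, x0=0.0)
posterior_mean = sum(samples[1000:]) / len(samples[1000:])  # discard burn-in
```

The machine-learning speed-up mentioned in the abstract amounts to replacing the expensive `log_post` evaluation (a full flow simulation) with a cheap surrogate in promising regions of parameter space.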
Uncertainty Quantification in Hybrid Dynamical Systems
Sahai, Tuhin; Pasini, Jose Miguel
2011-01-01
Uncertainty quantification (UQ) techniques are frequently used to ascertain output variability in systems with parametric uncertainty. Traditional algorithms for UQ are either system-agnostic and slow (such as Monte Carlo) or fast with stringent assumptions on smoothness (such as polynomial chaos and Quasi-Monte Carlo). In this work, we develop a fast UQ approach for hybrid dynamical systems by extending the polynomial chaos methodology to these systems. To capture discontin...
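The "system-agnostic and slow" baseline the abstract contrasts with, plain Monte Carlo propagation, can be sketched directly (the model and input distribution below are hypothetical toys):

```python
import random
import statistics

def propagate_mc(model, sampler, n=20000, seed=7):
    """System-agnostic Monte Carlo UQ: push random parameter draws through
    the model and summarize the output distribution. Slow to converge
    (error ~ 1/sqrt(n)) but makes no smoothness assumptions, which is why
    it remains the fallback for non-smooth hybrid systems."""
    random.seed(seed)
    outputs = [model(sampler()) for _ in range(n)]
    return statistics.fmean(outputs), statistics.pstdev(outputs)

# Toy model y = p^2 with p ~ N(0, 1); exact answers: E[y] = 1, Var[y] = 2
mean, std = propagate_mc(lambda p: p * p, lambda: random.gauss(0, 1))
```

Polynomial chaos replaces the random draws with a spectral expansion in the uncertain parameter, which converges much faster, but only while the model's output stays smooth; the discontinuous mode switches of hybrid systems are precisely what breaks that assumption.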
Tarde’s idea of quantification
Latour, Bruno
2010-01-01
Even though Tarde is said to have had a literary view of social science, he himself was deeply involved in statistics (especially criminal statistics) and took an essentially quantitative view of social phenomena. What is so paradoxical in his view of quantification is that it relies not only on the aggregates but also on the individual element. The paper reviews this paradox and the reason why Tarde was so intent on finding a quantitative grasp for establishing the social sciences, and relates ...
Quantification of Permafrost Creep by Remote Sensing
Roer, I.; Kaeaeb, A.
2008-12-01
Rockglaciers and frozen talus slopes are distinct landforms representing the occurrence of permafrost conditions in high mountain environments. The interpretation of ongoing permafrost creep and its reaction times is still limited due to the complex setting of interrelating processes within the system. Therefore, a detailed monitoring of rockglaciers and frozen talus slopes seems advisable to better understand the system as well as to assess possible consequences like rockfall hazards or debris-flow starting zones. In this context, remote sensing techniques are increasingly important. High accuracy techniques and data with high spatial and temporal resolution are required for the quantification of rockglacier movement. Digital Terrain Models (DTMs) derived from optical stereo, synthetic aperture radar (SAR) or laser scanning data are the most important data sets for the quantification of permafrost-related mass movements. Correlation image analysis of multitemporal orthophotos allows for the quantification of horizontal displacements, while vertical changes in landform geometry are computed by DTM comparisons. In the European Alps the movement of rockglaciers is monitored over a period of several decades by the combined application of remote sensing and geodetic methods. The resulting kinematics (horizontal and vertical displacements) as well as spatio-temporal variations thereof are considered in terms of rheology. The distinct changes in process rates or landform failures - probably related to permafrost degradation - are analysed in combination with data on surface and subsurface temperatures and internal structures (e.g., ice content, unfrozen water content).
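The correlation image analysis mentioned above reduces, in its simplest 1-D form, to finding the integer shift that maximizes the correlation between two intensity profiles taken at different epochs. The sketch below uses a hypothetical brightness profile; operational tools work on 2-D patches with subpixel refinement:

```python
def best_shift(ref, moved, max_shift=10):
    """Estimate displacement between two image rows by maximizing the
    cross-correlation over integer pixel shifts - a 1-D toy version of
    multitemporal orthophoto matching."""
    def corr(shift):
        pairs = [(ref[i], moved[i + shift]) for i in range(len(ref))
                 if 0 <= i + shift < len(moved)]
        return sum(a * b for a, b in pairs)
    return max(range(-max_shift, max_shift + 1), key=corr)

# Hypothetical brightness feature that has crept 3 pixels downslope:
ref   = [0, 0, 1, 5, 9, 5, 1, 0, 0, 0, 0, 0]
moved = [0, 0, 0, 0, 0, 1, 5, 9, 5, 1, 0, 0]
shift = best_shift(ref, moved)
```

Multiplying the recovered pixel shift by the orthophoto ground resolution and dividing by the time between acquisitions yields the horizontal creep velocity.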
Automated quantification of synapses by fluorescence microscopy.
Schätzle, Philipp; Wuttke, René; Ziegler, Urs; Sonderegger, Peter
2012-02-15
The quantification of synapses in neuronal cultures is essential in studies of the molecular mechanisms underlying synaptogenesis and synaptic plasticity. Conventional counting of synapses based on morphological or immunocytochemical criteria is extremely work-intensive. We developed a fully automated method which quantifies synaptic elements and complete synapses based on immunocytochemistry. Pre- and postsynaptic elements are detected by their corresponding fluorescence signals and their proximity to dendrites. Synapses are defined as the combination of a pre- and postsynaptic element within a given distance. The analysis is performed in three dimensions and all parameters required for quantification can be easily adjusted by a graphical user interface. The integrated batch processing enables the analysis of large datasets without any further user interaction and is therefore efficient and timesaving. The potential of this method was demonstrated by an extensive quantification of synapses in neuronal cultures from DIV 7 to DIV 21. The method can be applied to all datasets containing a pre- and postsynaptic labeling plus a dendritic or cell surface marker. PMID:22108140
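The central definition above, a synapse is a pre- and a postsynaptic element within a given 3-D distance, can be sketched as a greedy one-to-one matching of puncta centroids. Coordinates and the distance threshold below are hypothetical; the published method additionally requires proximity to a dendrite marker:

```python
def count_synapses(pre, post, max_dist=0.5):
    """Count synapses as pre/post puncta pairs closer than max_dist
    (e.g. micrometres) in 3-D, with each postsynaptic punctum used
    at most once (greedy nearest-neighbour matching)."""
    used = set()
    n = 0
    for px, py, pz in pre:
        best = None
        for j, (qx, qy, qz) in enumerate(post):
            if j in used:
                continue
            d2 = (px - qx) ** 2 + (py - qy) ** 2 + (pz - qz) ** 2
            if d2 <= max_dist ** 2 and (best is None or d2 < best[0]):
                best = (d2, j)
        if best is not None:
            used.add(best[1])
            n += 1
    return n

# Hypothetical centroids (um) from pre- and postsynaptic channels:
pre  = [(0, 0, 0), (5, 5, 5), (9, 0, 0)]
post = [(0.2, 0, 0), (5.1, 5.0, 5.3), (20, 20, 20)]
n_synapses = count_synapses(pre, post)
```

Batch processing then amounts to running this pairing over every image stack with the same, GUI-adjustable parameters.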
Extending Existential Quantification in Conjunctions of BDDs
Directory of Open Access Journals (Sweden)
Sean A. Weaver
2006-06-01
Full Text Available We introduce new approaches intended to speed up determining the satisfiability of a given Boolean formula φ expressed as a conjunction of Boolean functions. A common practice in such cases, when using constraint-oriented methods, is to represent the functions as BDDs, then repeatedly cluster BDDs containing one or more variables, and finally existentially quantify those variables away from the cluster. Clustering is essential because, in general, existential quantification cannot be applied unless the variables occur in only a single BDD. But, clustering incurs significant overhead and may result in BDDs that are too big to allow the process to complete in a reasonable amount of time. There are two significant contributions in this paper. First, we identify elementary conditions under which the existential quantification of a subset of variables V may be distributed over all BDDs without clustering. We show that when these conditions are satisfied, safe assignments to the variables of V are automatically generated. This is significant because these assignments can be applied, as though they were inferences, to simplify φ. Second, some efficient operations based on these conditions are introduced and can be integrated into existing frameworks of both search-oriented and constraint-oriented methods of satisfiability. All of these operations are relaxations in the use of existential quantification and therefore may fail to find one or more existing safe assignments. Finally, we compare and contrast the relationship of these operations to autarkies and present some preliminary results.
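The quantification operation at issue is Shannon expansion: ∃x.f = f|x=0 ∨ f|x=1. The sketch below illustrates the simplest "no clustering needed" case, a variable occurring in only one conjunct, using truth-table functions rather than BDDs (the conjuncts are a made-up example, not from the paper):

```python
from itertools import product

def exists(f, var, nvars):
    """Existential quantification via Shannon expansion:
    (exists x_var. f)(bits) = f(bits with x_var=0) OR f(bits with x_var=1)."""
    def g(bits):
        b0 = bits[:var] + (0,) + bits[var + 1:]
        b1 = bits[:var] + (1,) + bits[var + 1:]
        return f(b0) or f(b1)
    return g

# f = (x0 OR x1) AND (NOT x1 OR x2). Variable x2 occurs only in the
# second conjunct, so quantifying it there alone (no clustering) must
# give the same function as quantifying the whole conjunction.
c1 = lambda b: b[0] or b[1]
c2 = lambda b: (not b[1]) or b[2]
f = lambda b: c1(b) and c2(b)

whole = exists(f, 2, 3)                              # quantify the conjunction
local = lambda b: c1(b) and exists(c2, 2, 3)(b)      # quantify one conjunct
agree = all(whole(b) == local(b) for b in product((0, 1), repeat=3))
```

When a variable is shared across conjuncts this distribution is unsound in general; the paper's contribution is identifying the weaker conditions under which it remains safe.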
Quantification of gastrointestinal sodium channelopathy.
Poh, Yong Cheng; Beyder, Arthur; Strege, Peter R; Farrugia, Gianrico; Buist, Martin L
2012-01-21
Na(v)1.5 sodium channels, encoded by SCN5A, have been identified in human gastrointestinal interstitial cells of Cajal (ICC) and smooth muscle cells (SMC). A recent study found a novel, rare missense R76C mutation of the sodium channel interacting protein telethonin in a patient with primary intestinal pseudo-obstruction. The presence of a mutation in a patient with a motility disorder, however, does not automatically imply a cause-effect relationship between the two. Patch clamp experiments on HEK-293 cells previously established that the R76C mutation altered Na(v)1.5 channel function. Here the process through which these data were quantified to create stationary Markov state models of wild-type and R76C channel function is described. The resulting channel descriptions were included in whole cell ICC and SMC computational models and simulations were performed to assess the cellular effects of the R76C mutation. The simulated ICC slow wave was decreased in duration and the resting membrane potential in the SMC was depolarized. Thus, the R76C mutation was sufficient to alter ICC and SMC cell electrophysiology. However, the cause-effect relationship between R76C and intestinal pseudo-obstruction remains an open question. PMID:21959314
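The paper's stationary Markov state models of channel gating have many states; the sketch below simulates a hypothetical two-state (closed/open) channel just to illustrate the steady-state logic, with made-up per-step transition probabilities:

```python
import random

def simulate_two_state(k_open=0.2, k_close=0.8, steps=200000, seed=3):
    """Toy stationary Markov model of an ion channel with one closed and
    one open state. Each time step the channel flips with the given
    per-step probability. Returns the fraction of time spent open."""
    random.seed(seed)
    is_open = False
    time_open = 0
    for _ in range(steps):
        if is_open:
            if random.random() < k_close:
                is_open = False
        else:
            if random.random() < k_open:
                is_open = True
        time_open += is_open
    return time_open / steps

p_open = simulate_two_state()
# Analytic steady state: k_open / (k_open + k_close) = 0.2
```

A mutation like R76C enters such a model as altered rate constants fitted from patch-clamp data; the shifted open probability then feeds into the whole-cell ICC and SMC simulations described above.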
Characterization and LC-MS/MS based quantification of hydroxylated fullerenes
Chao, Tzu-Chiao; Song, Guixue; Hansmeier, Nicole; Westerhoff, Paul; Herckes, Pierre; Halden, Rolf U.
2011-01-01
Highly water-soluble hydroxylated fullerene derivatives are being investigated for a wide range of commercial products as well as for potential cytotoxicity. However, no analytical methods are currently available for their quantification at sub-ppm concentrations in environmental matrices. Here, we report on the development and comparison of liquid chromatography-ultraviolet/visible spectroscopy (LC-UV/vis) and mass spectrometry (LC-MS) based detection and quantification methods for commercial fullerols. We achieved good separation efficiency using an amide-type hydrophilic interaction liquid chromatography (HILIC) column (plate number >2000) under isocratic conditions with 90% acetonitrile as the mobile phase. The method detection limits (MDLs) ranged from 42.8 ng/mL (UV detection) to 0.19 pg/mL (using MS with multiple reaction monitoring, MRM). Other MS measurement modes achieved MDLs of 125 pg/mL (single quad scan, Q1) and 1.5 pg/mL (multiple ion monitoring, MI). Each detection method exhibited a good linear response over several orders of magnitude. Moreover, we tested the robustness of these methods in the presence of Suwannee River fulvic acids (SRFA) as an example of organic matter commonly found in environmental water samples. While SRFA significantly interfered with UV- and Q1-based quantifications, the interference was relatively low using MI or MRM (relative error in presence of SRFA: 8.6% and 2.5%, respectively). This first report of a robust MS-based quantification method for modified fullerenes dissolved in water suggests the feasibility of implementing MS techniques more broadly for identification and quantification of fullerols and other water-soluble fullerene derivatives in environmental samples. PMID:21294534
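The abstract does not state how its MDLs were derived, but a common convention (the US EPA procedure) computes MDL as the standard deviation of low-level replicate measurements times the one-sided 99% Student-t value. A sketch under that assumption, with hypothetical replicate readings:

```python
import statistics

def method_detection_limit(replicates, t_99=3.143):
    """EPA-style method detection limit: sample standard deviation of
    low-concentration replicate measurements multiplied by the one-sided
    99% Student-t value (3.143 for the customary 7 replicates).
    Result is in the same units as the replicates."""
    return t_99 * statistics.stdev(replicates)

# Hypothetical 7 replicate fullerol readings near the detection limit (ng/mL):
reps = [41.0, 44.2, 43.1, 42.5, 40.8, 43.9, 42.3]
mdl = method_detection_limit(reps)
```

Note the MDL reflects measurement precision at low concentration, not the calibration slope, which is why MRM's low chemical noise yields pg/mL limits while UV detection stays in the ng/mL range.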
Protocol for Quantification of Defects in Natural Fibres for Composites
DEFF Research Database (Denmark)
Mortensen, Ulrich Andreas; Madsen, Bo
2014-01-01
Natural bast-type plant fibres are attracting increasing interest for being used for structural composite applications where high quality fibres with good mechanical properties are required. A protocol for the quantification of defects in natural fibres is presented. The protocol is based on the experimental method of optical microscopy and the image analysis algorithms of the seeded region growing method and Otsu’s method. The use of the protocol is demonstrated by examining two types of differently processed flax fibres to give mean defect contents of 6.9 and 3.9%, a difference which is tested to be statistically significant. The protocol is evaluated with respect to the selection of image analysis algorithms, and Otsu’s method is found to be a more appropriate method than the alternative coefficient of variation method. The traditional way of defining defect size by area is compared to the definition of defect size by width, and it is shown that both definitions can be used to give unbiased findings for the comparison between fibre types. Finally, considerations are given with respect to true measures of defect content, number of determinations, and number of significant figures used for the descriptive statistics.
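Otsu's method, the thresholding algorithm the protocol adopts, picks the gray level that maximizes the between-class variance of the image histogram. The sketch below implements it on a hypothetical bimodal pixel distribution (dark defects, bright fiber):

```python
def otsu_threshold(pixels, levels=256):
    """Otsu's method: return the threshold t maximizing the between-class
    variance w0*w1*(mu0-mu1)^2 of the grayscale histogram, separating
    'background' (<= t) from 'foreground' (> t)."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    sum_all = sum(i * h for i, h in enumerate(hist))
    best_t, best_var, w0, sum0 = 0, -1.0, 0, 0.0
    for t in range(levels):
        w0 += hist[t]
        if w0 == 0:
            continue
        w1 = total - w0
        if w1 == 0:
            break
        sum0 += t * hist[t]
        mu0, mu1 = sum0 / w0, (sum_all - sum0) / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Hypothetical bimodal image: defect pixels near 10, fiber pixels near 200
pixels = [10] * 50 + [12] * 40 + [200] * 100 + [198] * 60
t = otsu_threshold(pixels)
```

Once thresholded, the defect content in the protocol is simply the thresholded defect area (or width) expressed as a percentage of the fiber area.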
International Nuclear Information System (INIS)
A scattering theory for the wave equation with compactly supported perturbations was developed by Lax-Phillips in 1967. Using the Enss approach, Phillips developed a Lax-Phillips scattering theory with short-range perturbations of the type V(x) = o(1/|x|^α), α > 2. In this paper we develop a scattering theory for more general perturbations, i.e. for V(x) = κ(x)/|x|^α, where α = 2 − n/s, κ ∈ L^s(R^n), s > 2 and s ≥ n/2. Refs
Directional biases in phylogenetic structure quantification: a Mediterranean case study.
Molina-Venegas, Rafael; Roquet, Cristina
2014-06-01
Recent years have seen an increasing effort to incorporate phylogenetic hypotheses to the study of community assembly processes. The incorporation of such evolutionary information has been eased by the emergence of specialized software for the automatic estimation of partially resolved supertrees based on published phylogenies. Despite this growing interest in the use of phylogenies in ecological research, very few studies have attempted to quantify the potential biases related to the use of partially resolved phylogenies and to branch length accuracy, and no work has examined how tree shape may affect inference of community phylogenetic metrics. In this study, using a large plant community and elevational dataset, we tested the influence of phylogenetic resolution and branch length information on the quantification of phylogenetic structure; and also explored the impact of tree shape (stemminess) on the loss of accuracy in phylogenetic structure quantification due to phylogenetic resolution. For this purpose, we used 9 sets of phylogenetic hypotheses of varying resolution and branch lengths to calculate three indices of phylogenetic structure: the mean phylogenetic distance (NRI), the mean nearest taxon distance (NTI) and phylogenetic diversity (stdPD) metrics. The NRI metric was the less sensitive to phylogenetic resolution, stdPD showed an intermediate sensitivity, and NTI was the most sensitive one; NRI was also less sensitive to branch length accuracy than NTI and stdPD, the degree of sensitivity being strongly dependent on the dating method and the sample size. Directional biases were generally towards type II errors. Interestingly, we detected that tree shape influenced the accuracy loss derived from the lack of phylogenetic resolution, particularly for NRI and stdPD. 
We conclude that well-resolved molecular phylogenies with accurate branch length information are needed to identify the underlying phylogenetic structure of communities, and also that sensitivity of phylogenetic structure measures to low phylogenetic resolution can strongly differ depending on phylogenetic tree shape. PMID:25076812
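The NRI metric discussed above is, in the usual convention, the standardized effect size of the community's mean pairwise phylogenetic distance (MPD) against a null of random draws from the species pool. A minimal sketch with a hypothetical 4-taxon distance matrix (two close relatives, two distant ones):

```python
import random
import statistics

def nri(community, dist, pool, n_null=2000, seed=5):
    """Net relatedness index: -(MPD_obs - mean(MPD_null)) / sd(MPD_null),
    where the null draws equal-sized random communities from the pool.
    Positive NRI indicates phylogenetic clustering."""
    def mpd(taxa):
        pairs = [(a, b) for i, a in enumerate(taxa) for b in taxa[i + 1:]]
        return statistics.fmean(dist[a][b] for a, b in pairs)
    random.seed(seed)
    obs = mpd(community)
    null = [mpd(random.sample(pool, len(community))) for _ in range(n_null)]
    return -(obs - statistics.fmean(null)) / statistics.pstdev(null)

# Hypothetical pairwise phylogenetic distances: a,b are sisters; c,d too
dist = {
    "a": {"b": 1, "c": 8, "d": 8},
    "b": {"a": 1, "c": 8, "d": 8},
    "c": {"a": 8, "b": 8, "d": 1},
    "d": {"a": 8, "b": 8, "c": 1},
}
score = nri(["a", "b"], dist, pool=["a", "b", "c", "d"])
```

Because NRI averages over all pairwise distances, unresolved nodes deep in the tree perturb it less than NTI, which depends only on nearest-taxon distances, consistent with the sensitivity ranking the study reports.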
Efficient Quantification of Uncertainties in Complex Computer Code Results Project
National Aeronautics and Space Administration — Propagation of parameter uncertainties through large computer models can be very resource intensive. Frameworks and tools for uncertainty quantification are...
Aerodynamic Modeling with Heterogeneous Data Assimilation and Uncertainty Quantification Project
National Aeronautics and Space Administration — Clear Science Corp. proposes to develop an aerodynamic modeling tool that assimilates data from different sources and facilitates uncertainty quantification. The...
Development of a VHH-Based Erythropoietin Quantification Assay
DEFF Research Database (Denmark)
Kol, Stefan; Beuchert Kallehauge, Thomas
2015-01-01
Erythropoietin (EPO) quantification during cell line selection and bioreactor cultivation has traditionally been performed with ELISA or HPLC. As these techniques suffer from several drawbacks, we developed a novel EPO quantification assay. A camelid single-domain antibody fragment directed against human EPO was evaluated as a capturing antibody in a label-free biolayer interferometry-based quantification assay. Human recombinant EPO can be specifically detected in Chinese hamster ovary cell supernatants in a sensitive and pH-dependent manner. This method enables rapid and robust quantification of EPO in a high-throughput setting.
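Label-free biolayer interferometry assays of this kind are typically quantified against a standard curve: the binding-rate signal is measured for standards of known concentration, a curve is fitted, and unknowns are interpolated. The sketch below assumes a linear calibration; all signal values and concentrations are hypothetical:

```python
def fit_line(x, y):
    """Least-squares line through standard-curve points; returns (slope,
    intercept) of signal = slope * concentration + intercept."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = sum((a - mx) ** 2 for a in x)
    slope = num / den
    return slope, my - slope * mx

# Hypothetical calibration: BLI binding rate (nm/s) vs EPO standard (ug/mL)
conc   = [0.0, 0.5, 1.0, 2.0, 4.0]
signal = [0.01, 0.26, 0.51, 1.01, 2.01]
slope, intercept = fit_line(conc, signal)

# Back-calculate an unknown supernatant from its measured signal:
unknown_signal = 0.76
estimated_conc = (unknown_signal - intercept) / slope
```

In a high-throughput setting the same fitted curve is reused across a plate of supernatants, which is what makes the assay faster than ELISA or HPLC per sample.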
Efficient Quantification of Uncertainties in Complex Computer Code Results Project
National Aeronautics and Space Administration — This proposal addresses methods for efficient quantification of margins and uncertainties (QMU) for models that couple multiple, large-scale commercial or...
Quantification of Uncertainties in Integrated Spacecraft System Models Project
National Aeronautics and Space Administration — The objective for the Phase II effort will be to develop a comprehensive, efficient, and flexible uncertainty quantification (UQ) framework implemented within a...
Quantification of Uncertainties in Integrated Spacecraft System Models Project
National Aeronautics and Space Administration — The proposed effort is to investigate a novel uncertainty quantification (UQ) approach based on non-intrusive polynomial chaos (NIPC) for computationally efficient...
Directory of Open Access Journals (Sweden)
Sarin Hemant
2010-08-01
Full Text Available Abstract Background Much of our current understanding of microvascular permeability is based on the findings of classic experimental studies of blood capillary permeability to various-sized lipid-insoluble endogenous and non-endogenous macromolecules. According to the classic small pore theory of microvascular permeability, which was formulated on the basis of the findings of studies on the transcapillary flow rates of various-sized systemically or regionally perfused endogenous macromolecules, transcapillary exchange across the capillary wall takes place through a single population of small pores that are approximately 6 nm in diameter; whereas, according to the dual pore theory of microvascular permeability, which was formulated on the basis of the findings of studies on the accumulation of various-sized systemically or regionally perfused non-endogenous macromolecules in the locoregional tissue lymphatic drainages, transcapillary exchange across the capillary wall also takes place through a separate population of large pores, or capillary leaks, that are between 24 and 60 nm in diameter. The classification of blood capillary types on the basis of differences in the physiologic upper limits of pore size to transvascular flow highlights the differences in the transcapillary exchange routes for the transvascular transport of endogenous and non-endogenous macromolecules across the capillary walls of different blood capillary types. Methods The findings and published data of studies on capillary wall ultrastructure and capillary microvascular permeability to lipid-insoluble endogenous and non-endogenous molecules from the 1950s to date were reviewed. In this study, the blood capillary types in different tissues and organs were classified on the basis of the physiologic upper limits of pore size to the transvascular flow of lipid-insoluble molecules. 
Blood capillaries were classified as non-sinusoidal or sinusoidal on the basis of capillary wall basement membrane layer continuity or lack thereof. Non-sinusoidal blood capillaries were further sub-classified as non-fenestrated or fenestrated based on the absence or presence of endothelial cells with fenestrations. The sinusoidal blood capillaries of the liver, myeloid tissue (red bone marrow) and spleen were sub-classified as reticuloendothelial or non-reticuloendothelial based on the phago-endocytic capacity of the endothelial cells. Results The physiologic upper limit of pore size for transvascular flow across capillary walls of non-sinusoidal non-fenestrated blood capillaries is less than 1 nm for those with interendothelial cell clefts lined with zona occludens junctions (i.e. brain and spinal cord), and approximately 5 nm for those with clefts lined with macula occludens junctions (i.e. skeletal muscle). The physiologic upper limit of pore size for transvascular flow across the capillary walls of non-sinusoidal fenestrated blood capillaries with diaphragmed fenestrae ranges between 6 and 12 nm (i.e. exocrine and endocrine glands); whereas, the physiologic upper limit of pore size for transvascular flow across the capillary walls of non-sinusoidal fenestrated capillaries with open 'non-diaphragmed' fenestrae is approximately 15 nm (kidney glomerulus). In the case of the sinusoidal reticuloendothelial blood capillaries of myeloid bone marrow, the transvascular transport of non-endogenous macromolecules larger than 5 nm into the bone marrow interstitial space takes place via reticuloendothelial cell-mediated phago-endocytosis and transvascular release, which is the case for systemic bone marrow imaging agents as large as 60 nm in diameter.
Conclusions The physiologic upper limit of pore size in the capillary walls of most non-sinusoidal blood capillaries to the transcapillary passage of lipid-insoluble endogenous and non-endogenous macromolecules ranges between 5 and 12 nm. Therefore, macromolecules larger than the physiologic upper limits of pore size in the non-sinusoidal blood capillary types generally do not accumulate within the respective tissue interstitial spaces.
Directory of Open Access Journals (Sweden)
Ebrahim Hajizadeh
2011-11-01
Full Text Available Background and Aim: Many studies show that the only way to control diabetes and prevent its debilitating complications is continuous self-care. This study aimed to determine factors affecting self-care behavior of diabetic women in Khoy City, Iran, based on the extended theory of reasoned action (ETRA). Materials and Methods: A sample of 352 women with type 2 diabetes referring to a Diabetes Clinic in Khoy City in West Azarbaijan Province, Iran, participated in the study. Appropriate instruments were designed to measure the relevant variables (diabetes knowledge, personal beliefs, subjective norm, self-efficacy, behavioral intention, and self-care behavior) based on ETRA. Reliability and validity of the instruments were determined prior to the study. Statistical analysis of the data was done using the SPSS version 16 software. Results: Based on the data obtained, the proposed model could predict and explain 41% and 26.2% of the variance of behavioral intention and self-care, respectively, in women with type-2 diabetes. The data also indicated that, among the constructs of the model, perceived self-efficacy was the strongest predictor of intention for self-care behavior. This construct affected self-care behavior both directly and indirectly. The next strongest predictors were attitudes, social pressures, social norms, and intervals between visits to patients by the treating team. Conclusion: The proposed model can predict self-care behavior very well. Thus, it may form the basis for educational interventions aiming at promoting self-care and, ultimately, controlling diabetes.
Caramello, Olivia
2013-01-01
We introduce an abstract topos-theoretic framework for building Galois-type theories in a variety of different mathematical contexts; such theories are obtained from representations of certain atomic two-valued toposes as toposes of continuous actions of a topological group. Our framework subsumes in particular Grothendieck's Galois theory and allows one to build Galois-type equivalences in new contexts, such as graph theory and finite group theory.
International Nuclear Information System (INIS)
Highlights: • The thermodynamic characteristics of TMB2 have been studied for the first time using the QHA method. • WB2 and TaB2 are good candidates for structural applications at high temperature. • Most of the early-transition-metal diborides cannot be easily machined. • The correlations between elastic constants and VECs of TMB2 have been discussed. - Abstract: The thermodynamic, electronic and elastic properties of a class of early-transition-metal diborides (TMB2, TM = Sc, Ti, V, Cr, Y, Zr, Nb, Mo, Hf, Ta, W) with AlB2-type structure have been investigated using the quasi-harmonic Debye model and ab initio calculations based on density functional theory, respectively. According to the character of the temperature-dependent bulk modulus and coefficient of thermal expansion, the TMB2 compounds can be divided into three groups. The results also indicate that 4d- and 5d-TMB2 compounds are good high-temperature structural materials. The five independent stiffness coefficients and the bulk and shear moduli of the diborides are obtained and are in good agreement with the available experimental and theoretical data. The correlations between elastic properties and electronic structure are discussed in detail. Due to their high hardness values, the VIB-transition-metal diborides with relatively high B/G and B/C44 ratios are still difficult to machine with usual methods.
International Nuclear Information System (INIS)
Recent computational and experimental studies have confirmed that high energy cascades produce clustered defects of both vacancy- and interstitial-types as well as isolated point defects. However, the production probability, configuration, stability and other characteristics of the cascade clusters are not well understood in spite of the fact that clustered defect production would substantially affect the irradiation-induced microstructures and the consequent property changes in a certain range of temperatures and displacement rates. In this work, a model of point defect and cluster evolution in irradiated materials under cascade damage conditions was developed by combining the conventional reaction rate theory and the results from the latest molecular dynamics simulation studies. This paper provides a description of the model and a model-based fundamental investigation of the influence of configuration, production efficiency and the initial size distribution of cascade-produced vacancy clusters. In addition, using the model, issues on characterizing cascade-induced defect production by microstructural analysis will be discussed. In particular, the determination of cascade vacancy cluster configuration, surviving defect production efficiency and cascade-interaction volume is attempted by analyzing the temperature dependence of swelling rate and loop growth rate in austenitic steels and model alloys. (author)
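The combined rate-theory approach described above can be illustrated with a minimal mean-field sketch. All parameter values below are illustrative placeholders, not data from this study: freely migrating vacancies and interstitials are generated at a rate reduced by in-cascade clustering, annihilate by mutual recombination, and are absorbed at fixed sinks.

```python
# Minimal mean-field rate-theory sketch for point defects under cascade
# damage. All parameter values are illustrative placeholders.

def evolve(dose_rate=1e-6,        # displacement rate (dpa/s)
           cluster_fraction=0.3,  # fraction of defects tied up in cascade clusters
           recomb=1e8,            # mutual recombination coefficient
           sink_v=1e2,            # vacancy loss rate to fixed sinks
           sink_i=1e3,            # interstitial loss rate to fixed sinks
           dt=1e-4, steps=20_000):
    """Forward-Euler integration of two coupled point-defect rate equations."""
    g = dose_rate * (1.0 - cluster_fraction)  # freely migrating defect source
    cv = ci = 0.0
    for _ in range(steps):
        rec = recomb * cv * ci                # mutual recombination loss
        cv += dt * (g - rec - sink_v * cv)
        ci += dt * (g - rec - sink_i * ci)
    return cv, ci

cv, ci = evolve()   # quasi-steady-state concentrations
```

Note how the in-cascade clustered fraction never enters the mobile population: the surviving defect production efficiency appears directly in the generation term, which is why it can be probed through swelling and loop growth rates as the paper attempts.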
Rastegarzadeh, M.; Tavassoly, M. K.
2015-02-01
In this article, by using the perturbation theory, we analytically solve the eigenvalue problem for the Hamiltonian describing the interaction of a Λ-type three-level atom with a single-mode radiation field without the rotating wave approximation (RWA). For this purpose, the atom–field interaction Hamiltonian, which contains the counter-rotating terms (CRTs), is transformed to an analytically solvable Hamiltonian by applying three successive unitary transformations. According to our calculations, the contribution of CRTs within the transformed Hamiltonian is in fact replaced by transforming the ‘constant detuning’ with the ‘intensity-dependent detuning’ in the first order, and the ‘constant atom–field coupling’ with the intensity-dependent coupling in the second order of the perturbation parameters. Then, by solving the eigenvalue problem for the transformed Hamiltonian, the eigenvector of the considered atom–field Hamiltonian is obtained analytically. Finally, after obtaining the state vector of the atom–field system at an arbitrary time, a few nonclassical properties of the system state are investigated numerically. Meanwhile, we compare our results with those obtained in the presence of the RWA, from which the role of the CRTs is established.
Van Eeckhaut, Ann; Mangelings, Debby
2015-09-10
Peptide-based biopharmaceuticals represent one of the fastest growing classes of new drug molecules. New reaction types have been incorporated into synthesis strategies to reduce the rapid metabolism of peptides and, together with new formulation and delivery technologies, have led to the increased marketing of peptide drug products. In this regard, the development of analytical methods for the quantification of peptides in pharmaceutical and biological samples is of utmost importance. From the sample preparation step to analysis by chromatographic or electrophoretic methods, many difficulties must be tackled. Recent developments in analytical techniques place more and more emphasis on green analytical techniques. This review discusses the progress in, and challenges observed during, green analytical method development for the quantification of peptides in pharmaceutical and biological samples. PMID:25864956
Interference microscopes for tribology and corrosion quantification
Novak, Erik; Blewett, Nelson; Stout, Tom
2007-06-01
Interference microscopes remain one of the most accurate, repeatable, and versatile metrology systems for precision surface measurements. Such systems successfully measure materials in both research labs and production lines in the micro-optics, MEMS, data storage, medical device, and precision machining industries to sub-nanometer vertical resolution. Increasingly, however, these systems are finding uses outside of traditional surface-measurement applications, including film thickness determination, environmental responses of materials, and determination of behavior under actuation. Most recently, these systems are enabling users to examine the behavior of materials over varying time-scales as they are used in cutting or grinding operations, or where the material is merely in continual contact with another, such as in medical implants. In particular, quantification of the wear of surfaces with varying coatings and under different conditions is of increasing value as tolerances decrease and consistency in final products becomes more valuable. Also, the response of materials in corrosive environments allows users to weigh the gains of varying surface treatments against the cost of those treatments. Such quantification requires novel hardware and software to ensure results are fast, accurate, and relevant. In this paper we explore three typical applications in tribology and corrosion. Deterioration of the cutting surfaces on a multi-blade razor is explored, with quantification of key surface features. Next, the wear of several differently coated drill bits under similar use conditions is examined. Thirdly, in situ measurement of the corrosion of several metal surfaces in harsh environmental conditions is performed. These case studies highlight how standard interference microscopes are evolving to serve novel industrial applications.
Tutorial examples for uncertainty quantification methods.
Energy Technology Data Exchange (ETDEWEB)
De Bord, Sarah [Univ. of California, Davis, CA (United States)
2015-08-01
This report details the work accomplished during my 2015 SULI summer internship at Sandia National Laboratories in Livermore, CA. During this internship, I worked on multiple tasks with the common goal of making uncertainty quantification (UQ) methods more accessible to the general scientific community. As part of my work, I created a comprehensive numerical integration example to incorporate into the user manual of a UQ software package. Further, I developed examples involving heat transfer through a window to incorporate into tutorial lectures that serve as an introduction to UQ methods.
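A heat-transfer-through-a-window example of the kind mentioned above can be sketched as a simple Monte Carlo uncertainty propagation. The model (steady conduction, Q = kAΔT/L) and every number below are illustrative assumptions, not the tutorial's actual example:

```python
import random
import statistics

def heat_loss(k, thickness, area=1.0, dT=20.0):
    """Steady 1D conduction through a glass pane: Q = k*A*dT/L (watts)."""
    return k * area * dT / thickness

# Propagate input uncertainty by random sampling of the uncertain inputs.
random.seed(0)
samples = [heat_loss(k=random.gauss(0.96, 0.05),          # conductivity, W/(m*K)
                     thickness=random.gauss(4e-3, 2e-4))  # pane thickness, m
           for _ in range(50_000)]

mean = statistics.fmean(samples)   # expected heat loss
std = statistics.stdev(samples)    # its uncertainty
```

Even this toy case shows the UQ workflow: characterize input distributions, push samples through the forward model, and summarize the output distribution.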
Quantification of Ammonium Sulphate Nitrate (ASN) fertilizers
Montejo-Bernardo, J. M.; García-Granda, S.; Fernández-González, A.
2011-01-01
This paper shows a simple procedure for the quantification of industrial and laboratory samples of Ammonium Sulphate-Nitrate fertilizers (ASN) based on a Rietveld fit of X-ray powder diffraction profiles. The Rietveld fit is performed by means of the structural models of the double salts 2NH4NO3·(NH4)2SO4 and 3NH4NO3·(NH4)2SO4, previously reported by the authors. The proposed method proved to be highly accurate even when medium-low quality X-ray powder diffraction profiles are used....
Czech Academy of Sciences Publication Activity Database
Wötzel, U.; Mäder, H.; Harder, H.; Pracna, Petr; Sarka, K.
780-781, - (2007), s. 206-221. ISSN 0022-2860 R&D Projects: GA AV ČR 1ET400400410 Institutional research plan: CEZ:AV0Z40400503 Keywords: symmetric top * Fourier transform microwave spectroscopy * direct l-type resonance and rotational spectrum * theory of reduction Subject RIV: CF - Physical; Theoretical Chemistry Impact factor: 1.486, year: 2007
An improved competitive inhibition enzymatic immunoassay method for tetrodotoxin quantification
Directory of Open Access Journals (Sweden)
Stokes Amber N
2012-03-01
Full Text Available Abstract Quantifying tetrodotoxin (TTX) has been a challenge in both ecological and medical research due to the cost, time and training required by most quantification techniques. Here we present a modified Competitive Inhibition Enzymatic Immunoassay for the quantification of TTX, intended to aid researchers in optimizing this technique for widespread use with a high degree of accuracy and repeatability.
Directory of Open Access Journals (Sweden)
Johnston Marie
2009-08-01
Full Text Available Abstract Background Long-term management of patients with Type 2 diabetes is well established within Primary Care. However, despite extensive efforts to implement high quality care, both service provision and patient health outcomes remain sub-optimal. Several recent studies suggest that psychological theories about individuals' behaviour can provide a valuable framework for understanding generalisable factors underlying health professionals' clinical behaviour. In the context of the team management of chronic disease such as diabetes, however, the application of such models is less well established. The aim of this study was to identify motivational factors underlying health professional teams' clinical management of diabetes using a psychological model of human behaviour. Methods A predictive questionnaire based on the Theory of Planned Behaviour (TPB) investigated health professionals' (HPs') cognitions (e.g., beliefs, attitudes and intentions) about the provision of two aspects of care for patients with diabetes: prescribing statins and inspecting feet. General practitioners and practice nurses in England and the Netherlands completed parallel questionnaires, cross-validated for equivalence in English and Dutch. Behavioural data were practice-level patient-reported rates of foot examination and use of statin medication. Relationships between the cognitive antecedents of behaviour proposed by the TPB and healthcare teams' clinical behaviour were explored using multiple regression. Results In both countries, attitude and subjective norm were important predictors of health professionals' intention to inspect feet (Attitude: beta = .40; Subjective Norm: beta = .28; Adjusted R2 = .34) and to prescribe statins (Adjusted R2 = .40). Conclusion Using the TPB, we identified modifiable factors underlying health professionals' intentions to perform two clinical behaviours, providing a rationale for the development of targeted interventions.
However, we did not observe a relationship between health professionals' intentions and our proxy measure of team behaviour. Significant methodological issues were highlighted concerning the use of models of individual behaviour to explain behaviours performed by teams. In order to investigate clinical behaviours performed by teams it may be necessary to develop measures that reflect the collective cognitions of the members of the team to facilitate the application of these theoretical models to team behaviours.
Energy Technology Data Exchange (ETDEWEB)
Agop, M. [Department of Physics, University of Athens, Athens 15771 (Greece) and Department of Physics, Technical Gh., Asachi University, Iasi 700050 (Romania)]. E-mail: magop@phys.tuiasi.ro; Nica, P. [Department of Physics, University of Athens, Athens 15771 (Greece); Department of Physics, Technical Gh., Asachi University, Iasi 700050 (Romania); Ioannou, P.D. [Department of Physics, University of Athens, Athens 15771 (Greece); Malandraki, Olga [Department of Physics, University of Athens, Athens 15771 (Greece); Gavanas-Pahomi, I. [Department of Physics, Technical Gh., Asachi University, Iasi 700050 (Romania)
2007-12-15
A generalization of Nottale's scale relativity theory is elaborated: the generalized Schroedinger equation results as an irrotational movement of Navier-Stokes-type fluids having an imaginary viscosity coefficient. Then ψ simultaneously becomes a wave function and a speed potential. In the hydrodynamic formulation of scale relativity theory, some implications for the gravitational morphogenesis of structures are analyzed: planetary motion quantizations, Saturn's ring motion quantizations, redshift quantization in binary galaxies, global redshift quantization etc. The correspondence with El Naschie's ε(∞) space-time implies a special type of superconductivity (El Naschie's superconductivity) and Cantorian-fractal sequences in the quantification of the Universe.
International Nuclear Information System (INIS)
The dynamical aspects of a cylindrical Ising nanotube in the presence of a time-varying magnetic field are investigated within the effective-field theory with correlations and a Glauber-type stochastic approach. The temperature dependence of the dynamic magnetizations, dynamic total magnetization, hysteresis loop areas and correlations is investigated in order to characterize the nature of the dynamic transitions as well as to obtain the dynamic phase transition temperatures and compensation behaviors. Some characteristic phenomena are found depending on the ratio of the physical parameters in the surface shell and core; i.e., five different types of compensation behaviors in the Néel classification nomenclature exist in the system. -- Highlights: • Kinetic cylindrical Ising nanotube is investigated using the effective-field theory. • The dynamic magnetizations, hysteresis loop areas and correlations are calculated. • The effects of the exchange interactions have been studied in detail. • Five different types of compensation behaviors have been found. • Some characteristic phenomena are found depending on the ratio of physical parameters.
In vivo MRS metabolite quantification using genetic optimization
Papakostas, G. A.; Karras, D. A.; Mertzios, B. G.; van Ormondt, D.; Graveron-Demilly, D.
2011-11-01
The in vivo quantification of metabolite concentrations, revealed in magnetic resonance spectroscopy (MRS) spectra, constitutes the main subject under investigation in this work. Significant contributions based on artificial intelligence tools, such as neural networks (NNs), have lately been presented with good results, but they show several drawbacks regarding quantification accuracy under difficult conditions. A general framework that treats the quantification procedure as an optimization problem, solved using a genetic algorithm (GA), is proposed in this paper. Two different lineshape models are examined, while two GA configurations are applied on artificial data. Moreover, the introduced quantification technique deals with overlapping metabolite peaks, a considerably difficult situation occurring under real conditions. Experiments on artificial MRS data have proved the efficiency of the introduced methodology, establishing it as a generic metabolite quantification procedure.
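As a rough sketch of the framework's idea, quantification can be posed as minimizing the misfit between a parametric lineshape model and the observed spectrum with a simple GA. The two-Lorentzian model, the fixed peak positions and widths, and all numeric settings below are illustrative assumptions, not the authors' configuration:

```python
import random

def lorentzian(x, amp, center, width):
    """Single Lorentzian lineshape."""
    return amp * width**2 / ((x - center)**2 + width**2)

def spectrum(x, params):
    """Two overlapping peaks; only the amplitudes are free genes here."""
    a1, a2 = params
    return lorentzian(x, a1, 4.7, 0.3) + lorentzian(x, a2, 5.1, 0.3)

xs = [i * 0.02 for i in range(400)]          # synthetic ppm axis, 0..8
true = (2.0, 1.2)                            # ground-truth amplitudes
target = [spectrum(x, true) for x in xs]     # noiseless "observed" spectrum

def fitness(params):
    """Negative sum of squared residuals (higher is better)."""
    return -sum((spectrum(x, params) - t)**2 for x, t in zip(xs, target))

random.seed(1)
pop = [(random.uniform(0, 5), random.uniform(0, 5)) for _ in range(40)]
for gen in range(60):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]                       # elitist selection
    children = []
    while len(children) < 30:
        a, b = random.sample(parents, 2)
        # averaging crossover plus small Gaussian mutation
        child = tuple((pa + pb) / 2 + random.gauss(0, 0.05)
                      for pa, pb in zip(a, b))
        children.append(child)
    pop = parents + children

best = max(pop, key=fitness)                 # estimated amplitudes
```

Because this toy model is linear in the amplitudes the misfit surface is convex; the value of a GA shows up when nonlinear lineshape parameters and overlapping peaks make gradient-based fitting unreliable, which is the situation the paper targets.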
Uncertainty Quantification for Airfoil Icing using Polynomial Chaos Expansions
DeGennaro, Anthony M; Martinelli, Luigi
2014-01-01
The formation and accretion of ice on the leading edge of a wing can be detrimental to airplane performance. Complicating this reality is the fact that even a small amount of uncertainty in the shape of the accreted ice may result in a large amount of uncertainty in aerodynamic performance metrics (e.g., stall angle of attack). The main focus of this work concerns using the techniques of Polynomial Chaos Expansions (PCE) to quantify icing uncertainty much more quickly than traditional methods (e.g., Monte Carlo). First, we present a brief survey of the literature concerning the physics of wing icing, with the intention of giving a certain amount of intuition for the physical process. Next, we give a brief overview of the background theory of PCE. Finally, we compare the results of Monte Carlo simulations to PCE-based uncertainty quantification for several different airfoil icing scenarios. The results are in good agreement and confirm that PCE methods are much more efficient for the canonical airfoil icing un...
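The contrast the authors draw can be reproduced on a toy problem. The sketch below is illustrative, with exp(ξ) standing in for an aerodynamic metric driven by one Gaussian-distributed ice-shape parameter; it builds a 1D Hermite PCE by non-intrusive spectral projection and checks its mean and variance against Monte Carlo:

```python
import math
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval

def qoi(xi):
    """Stand-in quantity of interest; a real study would wrap a flow solver."""
    return np.exp(xi)

order, nquad = 8, 20
x, w = hermegauss(nquad)            # nodes/weights for weight exp(-x^2/2)
w = w / math.sqrt(2 * math.pi)      # normalize against the N(0,1) density

# Non-intrusive spectral projection: c_k = E[qoi * He_k] / k!
coeffs = []
for k in range(order + 1):
    basis = hermeval(x, [0] * k + [1])           # He_k at quadrature nodes
    coeffs.append(np.sum(w * qoi(x) * basis) / math.factorial(k))

mean = coeffs[0]                                  # PCE mean
var = sum(c**2 * math.factorial(k)                # PCE variance
          for k, c in enumerate(coeffs[1:], 1))

# Monte Carlo reference
rng = np.random.default_rng(0)
mc = qoi(rng.standard_normal(200_000))
```

For this smooth quantity of interest, nine PCE coefficients obtained from 20 model evaluations match statistics that Monte Carlo needs hundreds of thousands of samples to resolve, which is the efficiency gap the paper reports for airfoil icing.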
Mesh refinement for uncertainty quantification through model reduction
Energy Technology Data Exchange (ETDEWEB)
Li, Jing, E-mail: lixxx873@umn.edu; Stinis, Panos, E-mail: stinis@umn.edu
2015-01-01
We present a novel way of deciding when and where to refine a mesh in probability space in order to facilitate uncertainty quantification in the presence of discontinuities in random space. A discontinuity in random space makes the application of generalized polynomial chaos expansion techniques prohibitively expensive. The reason is that for discontinuous problems, the expansion converges very slowly. An alternative to using higher terms in the expansion is to divide the random space into smaller elements where a lower degree polynomial is adequate to describe the randomness. In general, the partition of the random space is a dynamic process since some areas of the random space, particularly around the discontinuity, need more refinement than others as time evolves. In the current work we propose a way to decide when and where to refine the random space mesh based on the use of a reduced model. The idea is that a good reduced model can monitor accurately, within a random space element, the cascade of activity to higher degree terms in the chaos expansion. In turn, this facilitates the efficient allocation of computational resources to the areas of random space where they are most needed. For the Kraichnan–Orszag system, the prototypical system to study discontinuities in random space, we present theoretical results which show why the proposed method is sound and numerical results which corroborate the theory.
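A crude 1D sketch of the refinement criterion looks like this. It uses the decay of local expansion coefficients directly as the monitor, rather than the paper's reduced model, and an illustrative discontinuous response; both are assumptions for the sake of the example:

```python
import numpy as np
from numpy.polynomial.legendre import leggauss, legval

def f(xi):
    """Illustrative response with a jump at xi = 0.3."""
    return np.where(xi < 0.3, np.sin(4 * xi), 2.0 + xi)

def legendre_coeffs(a, b, deg=3, nquad=8):
    """Project f onto Legendre polynomials on the element [a, b]."""
    x, w = leggauss(nquad)                      # nodes on [-1, 1]
    xm = 0.5 * (b - a) * x + 0.5 * (a + b)      # map nodes into [a, b]
    coeffs = []
    for k in range(deg + 1):
        pk = legval(x, [0] * k + [1])           # P_k at the nodes
        # orthogonality: integral of P_k^2 over [-1, 1] is 2/(2k+1)
        coeffs.append((2 * k + 1) / 2 * np.sum(w * f(xm) * pk))
    return coeffs

def refine(a, b, tol=1e-3, depth=0, max_depth=8):
    """Bisect an element while its top-degree coefficient stays large,
    a stand-in for the paper's reduced-model refinement monitor."""
    c = legendre_coeffs(a, b)
    if abs(c[-1]) < tol or depth >= max_depth:
        return [(a, b)]
    m = 0.5 * (a + b)
    return refine(a, m, tol, depth + 1) + refine(m, b, tol, depth + 1)

elements = refine(-1.0, 1.0)
```

Elements straddling the jump at ξ = 0.3 keep a large top-degree coefficient and are bisected down to the depth limit, while smooth regions stop early; that concentration of effort near the discontinuity is exactly what the reduced-model monitor is designed to achieve.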
Directory of Open Access Journals (Sweden)
Birkett Nicholas
2010-01-01
Full Text Available Abstract Background The primary aim of this study was to compare the efficacy of three physical activity (PA) behavioural intervention strategies in a sample of adults with type 2 diabetes. Method/Design Participants (N = 287) were randomly assigned to one of three groups consisting of the following intervention strategies: (1) standard printed PA educational materials provided by the Canadian Diabetes Association [i.e., Group 1/control group]; (2) standard printed PA educational materials as in Group 1, pedometers, a log book and printed PA information matched to individuals' PA stage of readiness, provided every 3 months (i.e., Group 2); and (3) a PA telephone counseling protocol matched to PA stage of readiness and tailored to personal characteristics, in addition to the materials provided in Groups 1 and 2 (i.e., Group 3). PA behaviour measured by the Godin Leisure Time Exercise Questionnaire and related social-cognitive measures were assessed at baseline, 3, 6, 9, 12 and 18 months (i.e., 6-month follow-up). Clinical (biomarker) and health-related quality of life assessments were conducted at baseline, 12 months, and 18 months. Linear Mixed Model (LMM) analyses will be used to examine time-dependent changes from baseline across study time points for Groups 2 and 3 relative to Group 1. Discussion ADAPT will determine whether tailored but low-cost interventions can lead to sustainable increases in PA behaviours. The results may have implications for practitioners in designing and implementing theory-based physical activity promotion programs for this population. Clinical Trials Registration ClinicalTrials.gov identifier: NCT00221234
Energy Technology Data Exchange (ETDEWEB)
Mizutani, U; Inukai, M; Sato, H; Zijlstra, E S; Lin, Q
2014-05-16
There are three key electronic parameters in elucidating the physics behind the Hume–Rothery electron concentration rule: the square of the Fermi diameter (2kF)², the square of the critical reciprocal lattice vector, and the electron concentration parameter, or the number of itinerant electrons per atom e/a. We have reliably determined these three parameters for 10 Rhombic-Triacontahedron-type 2/1–2/1–2/1 (N = 680) and 1/1–1/1–1/1 (N = 160–162) approximants by making full use of full-potential linearized augmented plane wave-Fourier band calculations based on all-electron density-functional theory. We revealed that the 2/1–2/1–2/1 approximants Al13Mg27Zn45 and Na27Au27Ga31 belong to two different sub-groups, classified in terms of the square of the critical reciprocal lattice vector being equal to 126 and 109, and could explain why they take different e/a values of 2.13 and 1.76, respectively. Among the eight 1/1–1/1–1/1 approximants Al3Mg4Zn3, Al9Mg8Ag3, Al21Li13Cu6, Ga21Li13Cu6, Na26Au24Ga30, Na26Au37Ge18, Na26Au37Sn18 and Na26Cd40Pb6, the first two, the second two and the last four compounds were classified into three sub-groups with this quantity equal to 50, 46 and 42, and were claimed to obey the e/a = 2.30, 2.10–2.15 and 1.70–1.80 rules, respectively.
Librescu, L.; Khdeir, A. A.; Frederick, D.
1989-01-01
This paper deals with the substantiation of a shear deformable theory of cross-ply laminated composite shallow shells. While the developed theory preserves all the advantages of the first order transverse shear deformation theory it succeeds in eliminating some of its basic shortcomings. The theory is further employed in the analysis of the eigenvibration and static buckling problems of doubly curved shallow panels. In this context, the state space concept is used in conjunction with the Levy method, allowing one to analyze these problems in a unified manner, for a variety of boundary conditions. Numerical results are presented and some pertinent conclusions are formulated.
Survey and Evaluate Uncertainty Quantification Methodologies
Energy Technology Data Exchange (ETDEWEB)
Lin, Guang; Engel, David W.; Eslinger, Paul W.
2012-02-01
The Carbon Capture Simulation Initiative (CCSI) is a partnership among national laboratories, industry and academic institutions that will develop and deploy state-of-the-art computational modeling and simulation tools to accelerate the commercialization of carbon capture technologies from discovery to development, demonstration, and ultimately the widespread deployment to hundreds of power plants. The CCSI Toolset will provide end users in industry with a comprehensive, integrated suite of scientifically validated models with uncertainty quantification, optimization, risk analysis and decision making capabilities. The CCSI Toolset will incorporate commercial and open-source software currently in use by industry and will also develop new software tools as necessary to fill technology gaps identified during execution of the project. The CCSI Toolset will (1) enable promising concepts to be more quickly identified through rapid computational screening of devices and processes; (2) reduce the time to design and troubleshoot new devices and processes; (3) quantify the technical risk in taking technology from laboratory-scale to commercial-scale; and (4) stabilize deployment costs more quickly by replacing some of the physical operational tests with virtual power plant simulations. The goal of CCSI is to deliver a toolset that can simulate the scale-up of a broad set of new carbon capture technologies from laboratory scale to full commercial scale. To provide a framework around which the toolset can be developed and demonstrated, we will focus on three Industrial Challenge Problems (ICPs) related to carbon capture technologies relevant to U.S. pulverized coal (PC) power plants. Post combustion capture by solid sorbents is the technology focus of the initial ICP (referred to as ICP A). 
The goal of the uncertainty quantification (UQ) task (Task 6) is to provide a set of capabilities to the user community for the quantification of uncertainties associated with the carbon capture processes. As such, we will develop, as needed and beyond existing capabilities, a suite of robust and efficient computational tools for UQ to be integrated into a CCSI UQ software framework.
Directory of Open Access Journals (Sweden)
Mohammad Osama
2014-06-01
Full Text Available Pleurotus ostreatus, a white rot fungus, is capable of bioremediating a wide range of organic contaminants, including Polycyclic Aromatic Hydrocarbons (PAHs). Ergosterol is produced by living fungal biomass and is used as a measure of fungal biomass. The first part of this work deals with the extraction and quantification of PAHs from contaminated sediments by the Lipid Extraction Method (LEM). The second part consists of the development of a novel extraction method, the Ergosterol Extraction Method (EEM), together with quantification and bioremediation. The novelty of this method is the simultaneous extraction and quantification of two different types of compounds, a sterol (ergosterol) and PAHs, and it is more efficient than LEM. EEM has been successful in extracting ergosterol from the fungus grown on barley at concentrations of 17.5-39.94 µg g-1, and the PAHs quantified are much greater in number and amount compared to LEM. In addition, cholesterol, usually found in animals, has also been detected in the fungus P. ostreatus at easily detectable levels.
Subnuclear foci quantification using high-throughput 3D image cytometry
Wadduwage, Dushan N.; Parrish, Marcus; Choi, Heejin; Engelward, Bevin P.; Matsudaira, Paul; So, Peter T. C.
2015-07-01
Ionising radiation causes various types of DNA damage, including double strand breaks (DSBs). DSBs are often recognized by the DNA repair protein ATM, which forms gamma-H2AX foci at the sites of the DSBs that can be visualized using immunohistochemistry. However, most such experiments are of low throughput in terms of imaging and image analysis techniques. Most studies still use manual counting or classification. Hence they are limited to counting a low number of foci per cell (about 5 foci per nucleus), as the quantification process is extremely labour intensive. Therefore we have developed a high-throughput instrumentation and computational pipeline specialized for gamma-H2AX foci quantification. A population of cells with highly clustered foci inside nuclei was imaged, in 3D with submicron resolution, using an in-house developed high-throughput image cytometer. Imaging speeds as high as 800 cells/second in 3D were achieved by using HiLo wide-field depth-resolved imaging and a remote z-scanning technique. The number of foci per cell nucleus was then quantified using a 3D extended-maxima-transform-based algorithm. Our results suggest that, while most other 2D imaging and manual quantification studies can count only up to about 5 foci per nucleus, our method is capable of counting more than 100. Moreover, we show that 3D analysis is significantly superior to 2D techniques.
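The counting step can be illustrated with a simplified stand-in: instead of the full 3D extended maxima transform, the sketch below thresholds a synthetic volume and counts 26-connected bright components. The volume, threshold and blob positions are all invented for illustration, not the paper's actual pipeline:

```python
from collections import deque

def count_foci(volume, threshold):
    """Count 26-connected bright components in a 3D intensity volume,
    a simplified stand-in for the extended-maxima-transform step."""
    nz, ny, nx = len(volume), len(volume[0]), len(volume[0][0])
    seen = set()
    count = 0
    for z in range(nz):
        for y in range(ny):
            for x in range(nx):
                if volume[z][y][x] < threshold or (z, y, x) in seen:
                    continue
                count += 1                      # new focus found
                q = deque([(z, y, x)])
                seen.add((z, y, x))
                while q:                        # flood-fill the component
                    cz, cy, cx = q.popleft()
                    for dz in (-1, 0, 1):
                        for dy in (-1, 0, 1):
                            for dx in (-1, 0, 1):
                                n = (cz + dz, cy + dy, cx + dx)
                                if (0 <= n[0] < nz and 0 <= n[1] < ny
                                        and 0 <= n[2] < nx
                                        and n not in seen
                                        and volume[n[0]][n[1]][n[2]] >= threshold):
                                    seen.add(n)
                                    q.append(n)
    return count

# Synthetic 16^3 volume containing two well-separated bright blobs
vol = [[[0.0] * 16 for _ in range(16)] for _ in range(16)]
for cz, cy, cx in [(4, 4, 4), (11, 11, 11)]:
    for dz in range(-1, 2):
        for dy in range(-1, 2):
            for dx in range(-1, 2):
                vol[cz + dz][cy + dy][cx + dx] = 1.0

n = count_foci(vol, threshold=0.5)   # -> 2
```

The real pipeline first applies an h-maxima suppression so that touching foci within one bright cluster split into separate regional maxima; plain thresholding as above would merge them, which is why the extended maxima transform matters for densely clustered foci.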
Directory of Open Access Journals (Sweden)
Benjamin Burkhard
2014-06-01
Full Text Available The high variety of ecosystem service categorisation systems, assessment frameworks, indicators, quantification methods and spatial localisation approaches allows scientists and decision makers to harness experience, data, methods and tools. On the other hand, this variety of concepts and disagreements among scientists hampers the integration of ecosystem services into contemporary environmental management and decision making. In this article, the current state of the art of ecosystem service science regarding spatial localisation, indication and quantification of multiple ecosystem service supply and demand is reviewed and discussed. Concepts and tables for regulating, provisioning and cultural ecosystem service definitions, distinguishing between ecosystem service potential supply (stocks), flows (real supply) and demands, as well as related indicators for quantification, are provided. Furthermore, spatial concepts of service providing units, benefitting areas, spatial relations, rivalry, spatial and temporal scales are elaborated. Finally, matrices linking CORINE land cover types to ecosystem service potentials, flows, demands and budget estimates are provided. The matrices show that ecosystem service potentials of landscapes differ from flows, especially for provisioning ecosystem services.
Quantification Methods of Management Skills in Shipping
Directory of Open Access Journals (Sweden)
Riana Iren RADU
2012-04-01
Full Text Available Romania cannot overcome the financial crisis without business growth, without finding opportunities for economic development and without attracting investment into the country. Successful managers find ways to overcome situations of uncertainty. The purpose of this paper is to determine the managerial skills developed by the Romanian fluvial shipping company NAVROM (hereinafter CNFR NAVROM SA), compared with ten other major competitors in the same domain, using the financial information of these companies during the years 2005-2010. To carry out the work, quantification methods for managerial skills are applied to CNFR NAVROM SA Galati, Romania, as an example, including the analysis of financial performance management based on profitability ratios, net profit margin, supplier management and turnover.
On Uncertainty Quantification in Particle Accelerators Modelling
Adelmann, Andreas
2015-01-01
Using a cyclotron-based model problem, we demonstrate for the first time the applicability and usefulness of an uncertainty quantification (UQ) approach to construct surrogate models for quantities such as emittance, energy spread and the halo parameter, and to perform a global sensitivity analysis together with error propagation and $L_{2}$ error analysis. The model problem is selected so that it represents a template for general high-intensity particle accelerator modelling tasks. The presented physics problem is hypothetical, with the aim of demonstrating the usefulness and applicability of the presented UQ approach rather than solving a particular problem. The proposed UQ approach is based on sparse polynomial chaos expansions and relies on a small number of high-fidelity particle accelerator simulations. Within this UQ framework, the identification of the most important uncertainty sources is achieved by performing a global sensitivity analysis via computing the so-called Sobol' ...
Quantification of cell cycle-arresting proteins.
Kepp, Oliver; Martins, Isabelle; Menger, Laurie; Michaud, Mickaël; Adjemian, Sandy; Sukkurwala, Abdul Qader; Galluzzi, Lorenzo; Kroemer, Guido
2013-01-01
Cellular senescence, which can be defined as a stress response preventing the propagation of cells that have accumulated potentially oncogenic alterations, is invariably associated with a permanent cell cycle arrest. Such an irreversible blockage is mainly mediated by the persistent upregulation of one or more cyclin-dependent kinase inhibitors (CKIs), including (though not limited to) p16(INK4A), p21(CIP1) and p27(KIP1). CKIs operate by binding to cyclin-dependent kinases (CDKs), de facto inhibiting their enzymatic activity. Here, we provide an immunoblotting-based method for the detection and quantification of CKIs in vitro and ex vivo, together with a set of guidelines for the interpretation of results. PMID:23296654
Homogeneity of Inorganic Glasses : Quantification and Ranking
DEFF Research Database (Denmark)
Jensen, Martin; Zhang, L.
2011-01-01
Homogeneity of glasses is a key factor determining their physical and chemical properties and overall quality. However, quantification of the homogeneity of a variety of glasses is still a challenge for glass scientists and technologists. Here, we show a simple approach by which the homogeneity of different glass products can be quantified and ranked. This approach is based on determination of both the optical intensity and dimension of the striations in glasses. These two characteristic values are obtained using the image processing method established recently. The logarithmic ratio between the dimension and the intensity is used to quantify and rank the homogeneity of glass products. Compared with the refractive index method, the image processing method has a wider detection range and a lower statistical uncertainty.
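The ranking idea can be sketched in a few lines: score each glass by the logarithmic ratio of striation dimension to optical intensity, then sort. The function name, the toy values and the sign convention are assumptions for illustration, not the paper's exact definitions.

```python
# Illustrative sketch: rank glasses by the logarithmic ratio between
# striation dimension and optical intensity. Names, units and the sign
# convention are assumed for illustration only.
import math

def homogeneity_score(dimension, intensity):
    # log10 ratio of striation dimension to optical intensity
    return math.log10(dimension / intensity)

# Two hypothetical glasses: (striation dimension, optical intensity).
glasses = {"glass_A": (120.0, 0.4), "glass_B": (80.0, 2.0)}
ranking = sorted(glasses, key=lambda g: homogeneity_score(*glasses[g]),
                 reverse=True)
```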
Multispectral image analysis for algal biomass quantification.
Murphy, Thomas E; Macon, Keith; Berberoglu, Halil
2013-01-01
This article reports a novel multispectral image processing technique for rapid, noninvasive quantification of biomass concentration in attached and suspended algae cultures. Monitoring the biomass concentration is critical for efficient production of biofuel feedstocks, food supplements, and bioactive chemicals. Particularly, noninvasive and rapid detection techniques can significantly aid in providing delay-free process control feedback in large-scale cultivation platforms. In this technique, three-band spectral images of Anabaena variabilis cultures were acquired and separated into their red, green, and blue components. A correlation between the magnitude of the green component and the areal biomass concentration was generated. The correlation predicted the biomass concentrations of independently prepared attached and suspended cultures with errors of 7 and 15%, respectively, and the effects of varying lighting conditions and background color were investigated. This method can provide the necessary feedback for dilution and harvesting strategies to maximize photosynthetic conversion efficiency in large-scale operation. PMID:23554374
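The calibration step described above can be sketched as a linear regression of biomass on the mean green-channel magnitude, then used to predict an unseen culture. All numbers below are synthetic; the real correlation is fitted to measured cultures.

```python
# Sketch of the calibration: regress areal biomass on the mean green-channel
# magnitude of an RGB image, then predict an unseen culture. All data here
# are synthetic stand-ins for the measured calibration cultures.
import numpy as np

green = np.array([30.0, 60.0, 90.0, 120.0])      # mean green values
biomass = np.array([1.0, 2.0, 3.0, 4.0])         # measured biomass (a.u.)
slope, intercept = np.polyfit(green, biomass, 1) # linear calibration

def predict_biomass(rgb_image):
    g = rgb_image[..., 1].mean()                 # green component, HxWx3 array
    return slope * g + intercept

img = np.full((4, 4, 3), 75.0)                   # uniform test image
pred = predict_biomass(img)
```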
Common cause failures: identification and quantification
International Nuclear Information System (INIS)
In the context of Probabilistic Safety Analysis (PSA), treatment of Common Cause Failures (CCFs) may have critical influence on the credibility of the studies, on the question of completeness and on interpretation of results. A Nordic project 'Risk analysis', initiated in 1985, has among its main objectives to perform in-depth studies of dependent failures and human interactions, and generally to investigate assumptions and limitations of current PSAs. During the first phase of the project the activities concentrated on performing a Benchmark Exercise (BE) concerning CCF-data. Preliminary results of the exercise are presented in this report. The main findings concern both procedures for search for CCFs, use of classification systems, and quantification of CCF-contributions by means of direct assessment and use of parametric models
Recurrence quantification analysis of global stock markets
Bastos, João A.; Caiado, Jorge
2011-04-01
This study investigates the presence of deterministic dependencies in international stock markets using recurrence plots and recurrence quantification analysis (RQA). The results are based on a large set of free float-adjusted market capitalization stock indices, covering a period of 15 years. The statistical tests suggest that the dynamics of stock prices in emerging markets is characterized by higher values of RQA measures when compared to their developed counterparts. The behavior of stock markets during critical financial events, such as the burst of the technology bubble, the Asian currency crisis, and the recent subprime mortgage crisis, is analyzed by performing RQA in sliding windows. It is shown that during these events stock markets exhibit a distinctive behavior that is characterized by temporary decreases in the fraction of recurrence points contained in diagonal and vertical structures.
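The core RQA construction can be sketched directly: build a recurrence matrix from a scalar series and compute the recurrence rate (RR), its simplest quantifier. The toy series and threshold are illustrative; real RQA also measures diagonal and vertical line structures, which this sketch omits.

```python
# Sketch: recurrence matrix of a scalar series and the recurrence rate (RR),
# the simplest RQA measure. Toy series and threshold are illustrative.
import numpy as np

def recurrence_matrix(x, eps):
    d = np.abs(x[:, None] - x[None, :])   # pairwise distances
    return (d <= eps).astype(int)

def recurrence_rate(R):
    return R.sum() / R.size               # fraction of recurrent point pairs

x = np.array([0.0, 1.0, 0.0, 1.0])        # perfectly periodic toy series
R = recurrence_matrix(x, eps=0.1)
rr = recurrence_rate(R)                   # half of all pairs recur here
```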
Quantification of bronchial dimensions at MDCT using dedicated software
International Nuclear Information System (INIS)
This study aimed to assess the feasibility of quantification of bronchial dimensions at MDCT using dedicated software (BronCare). We evaluated the reliability of the software to segment the airways and defined criteria ensuring accurate measurements. BronCare was applied on two successive examinations in 10 mild asthmatic patients. Acquisitions were performed at pneumotachographically controlled lung volume (65% TLC), with reconstructions focused on the right lung base. Five validation criteria were imposed: (1) bronchus type: segmental and subsegmental; (2) lumen area (LA)>4 mm2; (3) bronchus length (Lg) > 7 mm; (4) confidence index - giving the percentage of the bronchus not abutted by a vessel - (CI) >55% for validation of wall area (WA) and (5) a minimum of 10 contiguous cross-sectional images fulfilling the criteria. A complete segmentation procedure on both acquisitions made possible an evaluation of LA and WA in 174/223 (78%) and 171/174 (98%) of bronchi, respectively. The validation criteria were met for 56/69 (81%) and for 16/69 (23%) of segmental bronchi and for 73/102 (72%) and 58/102 (57%) of subsegmental bronchi, for LA and WA, respectively. In conclusion, BronCare is reliable to segment the airways in clinical practice. The proposed criteria seem appropriate to select bronchi candidates for measurement. (orig.)
VESGEN Software for Mapping and Quantification of Vascular Regulators
Parsons-Wingerter, Patricia A.; Vickerman, Mary B.; Keith, Patricia A.
2012-01-01
VESsel GENeration (VESGEN) Analysis is an automated software that maps and quantifies effects of vascular regulators on vascular morphology by analyzing important vessel parameters. Quantification parameters include vessel diameter, length, branch points, density, and fractal dimension. For vascular trees, measurements are reported as dependent functions of vessel branching generation. VESGEN maps and quantifies vascular morphological events according to fractal-based vascular branching generation. It also relies on careful imaging of branching and networked vascular form. It was developed as a plug-in for ImageJ (National Institutes of Health, USA). VESGEN uses image-processing concepts of 8-neighbor pixel connectivity, skeleton, and distance map to analyze 2D, black-and-white (binary) images of vascular trees, networks, and tree-network composites. VESGEN maps typically 5 to 12 (or more) generations of vascular branching, starting from a single parent vessel. These generations are tracked and measured for critical vascular parameters that include vessel diameter, length, density and number, and tortuosity per branching generation. The effects of vascular therapeutics and regulators on vascular morphology and branching tested in human clinical or laboratory animal experimental studies are quantified by comparing vascular parameters with control groups. VESGEN provides a user interface to both guide and allow control over the user's vascular analysis process. An option is provided to select a morphological tissue type of vascular trees, networks, or tree-network composites, which determines the general collections of algorithms, intermediate images, and output images and measurements that will be produced.
In vivo cell tracking and quantification method in adult zebrafish
Zhang, Li; Alt, Clemens; Li, Pulin; White, Richard M.; Zon, Leonard I.; Wei, Xunbin; Lin, Charles P.
2012-03-01
Zebrafish have become a powerful vertebrate model organism for drug discovery, cancer and stem cell research. A recently developed transparent adult zebrafish, a double pigmentation mutant called casper, provides unparalleled imaging power in in vivo longitudinal analysis of biological processes at an anatomic resolution not readily achievable in murine or other systems. In this paper we introduce an optical method for simultaneous visualization and cell quantification, which combines laser scanning confocal microscopy (LSCM) and in vivo flow cytometry (IVFC). The system is designed specifically for non-invasive tracking of both stationary and circulating cells in adult casper zebrafish, under physiological conditions in the same fish over time. The confocal imaging part of this system serves the dual purposes of imaging fish tissue microstructure and acting as a 3D navigation tool to locate a suitable vessel for circulating cell counting. The multi-color, multi-channel instrument allows the detection of multiple cell populations or different tissues or organs simultaneously. We demonstrate initial testing of this novel instrument by imaging vasculature and tracking circulating cells in CD41:GFP/Gata1:DsRed transgenic casper fish whose thrombocytes/erythrocytes express the green and red fluorescent proteins. Circulating fluorescent cell incidents were recorded and counted repeatedly over time and in different types of vessels. Great application opportunities in cancer and stem cell research are discussed.
Méthodes de quantification optimale avec applications à la finance.
Sagna, Abass
2008-01-01
This thesis is devoted to quantization, with applications to finance. Chapter 1 recalls the foundations of quantization and the methods for finding optimal quantizers. In Chapter 2 we study the asymptotic behaviour, in L^s, of the quantization error associated with a linear transformation of a sequence of quantizers that is optimal in L^r. We show that such a transformation makes the transformed sequence rate-optimal in L^s for every s, for a large family of p...
Energy Technology Data Exchange (ETDEWEB)
Dash, K. [Department of Physics, Gopalpur College, Gopalpur 761002, Orissa (India); Tripathi, G.S., E-mail: gs_tripathi@hotmail.com [Department of Physics, Berhampur University, Berhampur 760007, Orissa (India)
2012-02-15
We derive a theory of magnetization for the diluted magnetic semiconductor p-type Sn$_{1-x}$Gd$_{x}$Te, including the contributions from Gd$^{3+}$ local moments, carrier-local moment hybridization and lattice diamagnetism as a function of temperature and magnetic field. The local moment contribution $M_{local}$ is a sum of three contributions: $M_{local}=M_{s}+M_{p}+M_{t}$, where $M_{s}$ is the dominant single-spin contribution, and $M_{p}$ and $M_{t}$ are the contributions from clusters of two and three spins, respectively. We have also calculated the contribution due to spin-polarized holes for carrier densities of order 10$^{20}$ cm$^{-3}$, using a $\vec{k}\cdot\vec{\pi}$ model, where $\vec{\pi}$ is the momentum operator in the presence of the spin-orbit interaction and $\vec{k}$ is the hole wave vector. This contribution includes the carrier-local moment hybridization. We have also included a diamagnetic lattice contribution, which comes from inter-band orbital and spin-orbit contributions; in this contribution, the symbol $\vec{k}$ is used for the electronic wave vector. The local moment contribution is dominant and primarily comes from the isolated spins. However, the two- and three-spin contributions increase with increasing magnetic impurity concentration. The magnitude of the hole-spin polarization is about two orders of magnitude less than the local moment contribution even at a field strength of 25 T. However, the magnetization due to carrier spin-density has intrinsic importance due to its role in possible spintronics applications. The lattice diamagnetism shows considerable anisotropy. The total magnetization is calculated from all three contributions $M_{local}$, $M_{c}$ (due to carriers, here holes) and $M_{dia}$. We have compared our results with experiment wherever available and the agreement is fairly good. - Highlights: > Local moment contributions include single spins, paired spins and triads.
> The single-spin contribution is found to be dominant, but the other terms become important at higher magnetic impurity concentrations. > Diamagnetic and carrier contributions are small but have intrinsic importance. > Results are compared with experimental results wherever available and the agreement in most cases is reasonable.
A logical model for quantification of occupational risk
International Nuclear Information System (INIS)
Functional block diagrams (FBDs) and their equivalent event trees are introduced as logical models in the quantification of occupational risks. Although an FBD is similar to an influence diagram or a belief network, it provides a framework for introducing, in a compact form, the logic of the model through the partition of the paths of the equivalent event tree. This is achieved by consideration of an overall event which has as outcomes the outmost consequences defining the risk under analysis. This event is decomposed into simpler events, the outcome space of which is partitioned into subsets corresponding to the outcomes of the initial joint event. The simpler events can be further decomposed into simpler events, creating a hierarchy where the events in a given level (parents) are decomposed into a number of simpler events (children) in the next level of the hierarchy. The partitioning of the outcome space is transferred from level to level through logical relationships corresponding to the logic of the model. Occupational risk is modeled through a general FBD where the undesirable health consequence is decomposed into 'dose' and 'dose/response'; 'dose' is decomposed into 'center event' and 'mitigation'; 'center event' is decomposed into 'initiating event' and 'prevention'. This generic FBD can be transformed into activity-specific FBDs which, together with their equivalent event trees, are used to delineate the various accident sequences that might lead to injury or death consequences. The methodology and the associated algorithms have been computerized in a program with a graphical user interface (GUI) which allows the user to input the functional relationships between parent and children events and the corresponding probabilities for events of the lowest level, and to obtain the corresponding quantified simplified event tree. The methodology is demonstrated with an application to the risk of falling from a mobile ladder.
This type of accident has been analyzed as part of the Workgroup Occupational Risk Model (WORM) project in the Netherlands, aiming at the development and quantification of models for a full range of potential risks from accidents in the workplace
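The generic decomposition above (initiating event, prevention, mitigation, dose/response) can be quantified numerically along one event-tree path by multiplying conditional probabilities. All probability values below are invented for illustration.

```python
# Minimal numeric sketch of one accident sequence in the generic FBD
# decomposition: initiating event -> prevention -> mitigation -> dose/response.
# All probabilities are invented for illustration.
p_initiating = 0.1        # e.g. slip while climbing the mobile ladder
p_prevention_fails = 0.5  # prevention barrier does not stop the event
p_mitigation_fails = 0.2  # mitigation does not reduce the dose
p_dose_response = 0.3     # received dose actually produces the injury

# Probability of the injury outcome along this single event-tree path:
p_injury = (p_initiating * p_prevention_fails
            * p_mitigation_fails * p_dose_response)
```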
Melissa Ang-Simões Lasaro; Juliana Falcão Rodrigues; Joaquim Cabrera-Crespo; Maria Elisabete Sbrogio-Almeida; Marcio de Oliveira Lasaro; Luís Carlos de Souza Ferreira
2007-01-01
The heat-labile toxin (LT) is a key virulence factor associated with the non-invasive secretory diarrhea caused by enterotoxigenic Escherichia coli (ETEC) strains in either humans or domestic animals. Several LT detection methods have been reported, but quantification of the toxin produced by wild-type ETEC strains is usually performed by the GM1 ganglioside enzyme-linked immunosorbent assay (GM1 ELISA). In this study we conducted the optimization of an alternative LT-quantification...
Sakai, Osamu
2010-01-01
Band calculations for Ce compounds with the AuCu$_{3}$-type crystal structure were carried out on the basis of dynamical mean field theory (DMFT). The auxiliary impurity problem was solved by a method named NCA$f^{2}$vc (noncrossing approximation including the $f^{2}$ state as a vertex correction). The calculations take into account the crystal-field splitting, the spin-orbit interaction, and the correct exchange process of the $f^{1} \\rightarrow f^{0},f^{2}$ virtual excitat...
Heimrich, M; Bönsch, M; Nickl, H; Simat, T J
2012-01-01
Cyclic oligomers are the major substances migrating from polyamide (PA) food contact materials. However, no commercial standards are available for the quantification of these substances. For the first time the quantification of cyclic oligomers was carried out by HPLC coupled with a chemiluminescence nitrogen detector (CLND) and single-substance calibration. The cyclic monomer (MW = 226 Da) and dimer (MW = 452 Da) of PA66 were synthesised, and the equimolar N response of the CLND to the synthesised oligomers, caprolactam, 6-aminohexanoic acid (monomers of PA6) and caffeine (a typical nitrogen calibrant) was proven. Relative response factors (UVD at 210 nm) referring to caprolactam were determined for cyclic PA6 oligomers from dimer to nonamer, using HPLC-CLND in combination with a UVD. A method for quantification of the cyclic oligomer content in PA materials was introduced using HPLC-CLND analysis and caffeine as a single nitrogen calibrant. The method was applied to the quantification of cyclic PA oligomers in several PA granulates. For two PA6 granulates from different manufacturers, markedly different oligomer contents were found (19.5 versus 13.4 g kg-1). The elution pattern of cyclic oligomers offers the possibility of identifying the PA type and differentiating between PA copolymers and blends. PMID:22329416
Scientific Electronic Library Online (English)
Ira Amira, Rosti; Ramakrishnan Nagasundara, Ramanan; Tau Chuan, Ling; Arbakariya B, Ariff.
2013-11-15
Full Text Available Background: A method for the selection of a suitable molecular recognition element (MRE) for the quantification of human epidermal growth factor (hEGF) using surface plasmon resonance (SPR) is presented. Two types of hEGF antibody, monoclonal and polyclonal, were immobilized on the surface of the chip and [...] validated for their characteristics and performance in the quantification of hEGF. Validation of this analytical procedure was to demonstrate the stability and suitability of the antibody for the quantification of the target protein. Results: Specificity, accuracy and precision for all samples were within acceptable limits for both antibodies. The affinity and kinetic constants of antibody-hEGF binding were evaluated using a 1:1 Langmuir interaction model. The model fitted well to all binding responses simultaneously. The polyclonal antibody (pAb) has better affinity (KD = 7.39e-10 M) than the monoclonal antibody (mAb) (KD = 9.54e-9 M). Further evaluation of the kinetic constants demonstrated that pAb has a faster reaction rate during sample injection, a slower dissociation rate during buffer injection and a higher level of saturation state than mAb. In addition, pAb has a longer shelf life and a greater number of cycle runs. Conclusions: Thus, pAb was more suitable to be used as a stable MRE for further quantification work from the consideration of kinetics, binding rate and shelf life assessment.
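The 1:1 Langmuir interaction model used for the fit has a simple closed form: dR/dt = ka·C·(Rmax − R) − kd·R, with KD = kd/ka. The rate constants below are invented so that KD matches the pAb value quoted in the abstract; Rmax is an arbitrary placeholder.

```python
# Sketch of the 1:1 Langmuir interaction model used to fit SPR sensorgrams:
# dR/dt = ka*C*(Rmax - R) - kd*R, with KD = kd/ka. Rate constants are
# invented so KD matches the quoted pAb value; Rmax is a placeholder.
ka = 1.0e5          # association rate constant, 1/(M*s) (assumed)
kd = 7.39e-5        # dissociation rate constant, 1/s (assumed)
KD = kd / ka        # equilibrium dissociation constant, M

def dR_dt(R, C, Rmax=100.0):
    # net binding rate at analyte concentration C and current response R
    return ka * C * (Rmax - R) - kd * R
```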
Rédei, Miklós; Summers, Stephen Jeffrey
2006-01-01
The mathematics of classical probability theory was subsumed into classical measure theory by Kolmogorov in 1933. Quantum theory as nonclassical probability theory was incorporated into the beginnings of noncommutative measure theory by von Neumann in the early thirties, as well. To precisely this end, von Neumann initiated the study of what are now called von Neumann algebras and, with Murray, made a first classification of such algebras into three types. The nonrelativisti...
Higher Order Quasi Monte-Carlo Integration in Uncertainty Quantification
Dick, Josef; Gia, Quoc Thong Le; Schwab, Christoph
2014-01-01
We review recent results on dimension-robust higher order convergence rates of Quasi-Monte Carlo Petrov-Galerkin approximations for response functionals of infinite-dimensional, parametric operator equations which arise in computational uncertainty quantification.
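A minimal illustration of the quasi-Monte-Carlo idea is a rank-1 lattice rule, one of the simplest QMC constructions, applied here to a 2-D integrand with known integral 1 over the unit square. The generating vector is an arbitrary choice for illustration and is not taken from the paper.

```python
# Sketch: a rank-1 lattice quasi-Monte-Carlo rule on [0,1]^2. The generating
# vector z is an arbitrary illustrative choice, not from the paper.
import numpy as np

def f(u):
    return u[:, 0] + u[:, 1]            # integral over the unit square is 1

n = 1024
z = np.array([1, 433])                  # generating vector (assumed)
i = np.arange(n)[:, None]
points = (i * z / n) % 1.0              # lattice points x_i = frac(i*z/n)
qmc_estimate = f(points).mean()
```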
Uncertainty Quantification for Production Navier-Stokes Solvers Project
National Aeronautics and Space Administration — The uncertainty quantification methods developed under this program are designed for use with current state-of-the-art flow solvers developed by and in use at NASA....
Watzinger, Franz; Hörth, Elfriede; Lion, Thomas
2001-01-01
Despite the recent introduction of real-time PCR methods, competitive PCR techniques continue to play an important role in nucleic acid quantification because of the significantly lower cost of equipment and consumables. Here we describe a shifted restriction-site competitive PCR (SRS-cPCR) assay based on a modified type of competitor. The competitor fragments are designed to contain a recognition site for a restriction endonuclease that is also present in the target ...
Shi, Tujin; Gao, Yuqian; Quek, Sue Ing; Fillmore, Thomas L.; Nicora, Carrie D; Su, Dian; Zhao, Rui; Kagan, Jacob; Srivastava, Sudhir; Rodland, Karin D; Tao LIU; Smith*, Richard D.; Chan, Daniel W.; Camp, David G.; Liu, Alvin Y
2013-01-01
Anterior gradient 2 (AGR2) is a secreted, cancer-associated protein in many types of epithelial cancer cells. We developed a highly sensitive targeted mass spectrometric assay for quantification of AGR2 in urine and serum. Digested peptides from clinical samples were processed by PRISM (high pressure and high resolution separations coupled with intelligent selection and multiplexing), which incorporates high pH reversed-phase LC separations to fractionate and select target fractions for follo...
Quantification is Incapable of Directly Enhancing Life Quality through Healthcare
Peter A. Moskovitz
2013-01-01
Quantification, the measurement and representational modeling of objects, events and relationships, cannot enhance life quality, not directly. Illustrative is Sydenham’s model of disease (Sydenham, 1848-1850) and its spawn: the checklist quantification that is contained in the DSM (Diagnostic and Statistical Manual of Mental Disorders, now in its fifth edition) and ICD (International Classification of Diseases, now in its ninth edition). The use of these diagnostic catalogs is incapable of di...
Quantification Model for Estimating Temperature Field Distributions of Apple Fruit
ZHANG Min; Yang, Le; Zhao, Huizhong; Zhang, Leijie; Zhong, Zhiyou; Liu, Yanling; Chen, Jianhua
2010-01-01
A quantification model of transient heat conduction was provided to simulate apple fruit temperature distribution in the cooling process. The model was based on the energy variation of different points of the apple fruit. It took into account heat exchange of the representative elemental volume, metabolic heat and external heat. The following conclusions could be obtained: first, the quantification model can satisfactorily describe the tendency of apple fruit temperature distribution in the cooling...
FRANX. Application for analysis and quantification of the fire PSA
International Nuclear Information System (INIS)
The FRANX application has been developed by EPRI within the Risk and Reliability User Group in order to facilitate the process of quantification and updating of the fire PSA (it also covers floods and earthquakes). Using the application, fire scenarios are quantified at the plant, integrating the tasks performed during the fire PSA. This paper describes the main features of the program that allow quantification of a fire PSA. (Author)
Uncertainty Quantification with Applications to Engineering Problems
DEFF Research Database (Denmark)
Bigoni, Daniele
2015-01-01
The systematic quantification of the uncertainties affecting dynamical systems and the characterization of the uncertainty of their outcomes is critical for engineering design and analysis, where risks must be reduced as much as possible. Uncertainties stem naturally from our limitations in measurements, predictions and manufacturing, and we can say that any dynamical system used in engineering is subject to some of these uncertainties. The first part of this work presents an overview of the mathematical framework used in Uncertainty Quantification (UQ) analysis and introduces the spectral tensor-train (STT) decomposition, a novel high-order method for the effective propagation of uncertainties which aims at providing an exponential convergence rate while tackling the curse of dimensionality. The curse of dimensionality is a problem that afflicts many methods based on meta-models, for which the computational cost increases exponentially with the number of inputs of the approximated function – which we will call dimension in the following. The STT-decomposition is based on the Polynomial Chaos (PC) approximation and the low-rank decomposition of the function describing the Quantity of Interest of the considered problem. The low-rank decomposition is obtained through the discrete tensor-train decomposition, which is constructed using an optimization algorithm for the selection of the relevant points on which the function needs to be evaluated. The selection of these points is informed by the approximated function and thus it is able to adapt to its features. The number of function evaluations needed for the construction grows only linearly with the dimension and quadratically with the rank. In this work we will present and use the functional counterpart of this low-rank decomposition and, after proving some auxiliary properties, we will apply PC on it, obtaining the STT-decomposition. 
This will allow the decoupling of each dimension, leading to a much cheaper construction of the PC surrogate. In the associated paper, the capabilities of the STT-decomposition are checked on commonly used test functions and on an elliptic problem with random inputs. This work will also present three active research directions aimed at improving the efficiency of the STT-decomposition. In this context, we propose three new strategies for solving the ordering problem suffered by the tensor-train decomposition, for computing better estimates with respect to the norms usually employed in UQ and for the anisotropic adaptivity of the method. The second part of this work presents engineering applications of the UQ framework. Both applications are characterized by functions whose evaluation is computationally expensive, and thus the UQ analysis of the associated systems will benefit greatly from the application of methods which require few function evaluations. We first consider the propagation of the uncertainty and the sensitivity analysis of the non-linear dynamics of railway vehicles with suspension components whose characteristics are uncertain. These analyses are carried out using mostly PC methods, resorting to random sampling methods for comparison and when strictly necessary. The second application of the UQ framework is on the propagation of the uncertainties entering a fully non-linear and dispersive model of water waves. This computationally challenging task is tackled with the adoption of state-of-the-art software for its numerical solution and of efficient PC methods. The aim of this study is the construction of stochastic benchmarks on which to test UQ methodologies before they are applied to full-scale problems, where efficient methods are necessary with today’s computational resources. The outcome of this work was also the creation of several freely available Python modules for Uncertainty Quantification, which are listed and described in the appendix.
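The Polynomial Chaos building block that the STT-decomposition combines with low-rank tensor formats can be sketched in one dimension: fit a surrogate in probabilists' Hermite polynomials by least squares on a handful of model evaluations. The model function below is an invented stand-in for an expensive simulation.

```python
# One-dimensional sketch of the Polynomial Chaos building block: a surrogate
# in probabilists' Hermite polynomials fitted by least squares. The model is
# an invented stand-in for an expensive simulation.
import numpy as np
from numpy.polynomial import hermite_e as H

def model(x):                 # "expensive" model: exactly degree 2 here
    return x ** 2 + x

nodes = np.linspace(-3.0, 3.0, 21)              # training evaluations
coeffs = H.hermefit(nodes, model(nodes), deg=3) # PC coefficients

def surrogate(x):
    return H.hermeval(x, coeffs)

err = abs(surrogate(0.5) - model(0.5))          # essentially zero here
```

Because the stand-in model is itself a degree-2 polynomial, the degree-3 least-squares fit reproduces it exactly up to floating-point error; for a genuinely expensive model the residual measures surrogate quality.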
Volumetric motion quantification by 3D tissue phase mapped CMR
Directory of Open Access Journals (Sweden)
Lutz Anja
2012-10-01
Full Text Available Abstract Background The objective of this study was the quantification of myocardial motion from 3D tissue phase mapped (TPM) CMR. Recent work on myocardial motion quantification by TPM has focussed on multi-slice 2D acquisitions, thus excluding motion information from large regions of the left ventricle. Volumetric motion assessment appears an important next step towards the understanding of volumetric myocardial motion and hence may further improve diagnosis and treatment in patients with myocardial motion abnormalities. Methods Volumetric motion quantification of the complete left ventricle was performed in 12 healthy volunteers and two patients applying a black-blood 3D TPM sequence. The resulting motion field was analysed regarding motion pattern differences between apical and basal locations as well as for asynchronous motion patterns between different myocardial segments in one or more slices. Motion quantification included velocity, torsion, rotation angle and strain-derived parameters. Results All investigated motion quantification parameters could be calculated from the 3D-TPM data. Parameters quantifying hypokinetic or asynchronous motion demonstrated differences between motion-impaired and healthy myocardium. Conclusions 3D-TPM enables the gapless volumetric quantification of motion abnormalities of the left ventricle, which can be applied in future applications as additional information to provide a more detailed analysis of left ventricular function.
Matrix Theory on Non-Orientable Surfaces
Zwart, Gysbert
1997-01-01
We construct the Matrix theory descriptions of M-theory on the Möbius strip and the Klein bottle. In a limit, these provide the matrix string theories for the CHL string and an orbifold of type IIA string theory.
Yeung, Brendan; Ng, Tuck Wah; Tan, Han Yen; Liew, Oi Wah
2012-01-01
The use of different types of stains in the quantification of proteins separated on gels using electrophoresis offers the capability of deriving good outcomes in terms of linear dynamic range, sensitivity, and compatibility with specific proteins. An inexpensive, simple, and versatile lighting system based on liquid crystal display backlighting is…
Identification and Quantification of Carbonate Species Using Rock-Eval Pyrolysis
Directory of Open Access Journals (Sweden)
Pillot D.
2013-03-01
This paper presents a new, reliable and rapid method to characterise and quantify carbonates in solid samples, based on monitoring the CO2 flux emitted by the progressive thermal decomposition of carbonates during programmed heating. The different destabilisation peaks allow the identification of the different types of carbonates present in the analysed sample, and the quantification of each peak gives the respective proportions of these carbonate types in the sample. In addition to the procedure itself, carried out with a standard Rock-Eval 6 pyrolyser, characteristic calibration profiles are presented for the carbonates most common in nature. This method should find applications across different disciplines, both academic and industrial.
High-Order Metrics for Model Uncertainty Quantification and Validation
International Nuclear Information System (INIS)
It is well known that the true values of measured and computed data are impossible to know exactly because of the various uncontrollable errors and uncertainties arising in the data measurement and reduction processes. Hence, all inferences, predictions, engineering computations, and other applications of measured and/or computed data are necessarily based on weighted averages over the possibly true values, with weights indicating the degree of plausibility of each value. Furthermore, combination of data from different sources involves a weighted propagation (e.g., via sensitivities) of all uncertainties, requiring reasoning from incomplete information and using probability theory for extracting optimal values together with 'best-estimate' uncertainties from often sparse, incomplete, error-afflicted, and occasionally discrepant data. The current state-of-the-art data assimilation/model calibration methodologies for large-scale nonlinear systems cannot take into account uncertainties of higher than second order (i.e., covariances), thereby failing to quantify fully the deviations of the problem under consideration from a normal (Gaussian) multivariate distribution. Such deviations would be quantified by the third- and fourth-order moments (skewness and kurtosis) of the model's predicted results (responses). These higher-order moments would be constructed by combining modeling and experimental uncertainties (which also incorporate the corresponding skewness and kurtosis information), using derivatives of the model responses with respect to the model's parameters. This paper presents explicit expressions for the skewness and kurtosis of computed responses, thereby permitting quantification of the deviations of the computed response uncertainties from multivariate normality.
In addition, this paper presents a new and highly efficient procedure for computing the second-order response derivatives with respect to model parameters using the adjoint sensitivity analysis procedure (ASAP).
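The role of the third and fourth moments as non-Gaussianity diagnostics can be illustrated with a short sampling sketch (a Monte Carlo stand-in for the sensitivity-based propagation the paper derives); the squared-Gaussian response below is a hypothetical example with known positive skewness:

```python
import numpy as np

def higher_moments(samples):
    """Sample skewness and excess kurtosis; deviations from 0 flag non-Gaussianity."""
    s = np.asarray(samples, dtype=float)
    z = (s - s.mean()) / s.std()
    skew = np.mean(z**3)            # third standardized moment (0 for a Gaussian)
    ex_kurt = np.mean(z**4) - 3.0   # fourth standardized moment minus Gaussian value
    return skew, ex_kurt

rng = np.random.default_rng(0)
x = rng.standard_normal(200_000)
g_skew, g_kurt = higher_moments(x)      # near 0, 0 for a Gaussian response
r_skew, r_kurt = higher_moments(x**2)   # chi-square(1): skewness sqrt(8) ~ 2.83
```

A second-moment (covariance-only) calibration would treat both responses as Gaussian; the third and fourth moments are what distinguish them.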
Quantification of biological aging in young adults
Belsky, Daniel W.; Caspi, Avshalom; Houts, Renate; Cohen, Harvey J.; Corcoran, David L.; Danese, Andrea; Harrington, HonaLee; Israel, Salomon; Levine, Morgan E.; Schaefer, Jonathan D.; Sugden, Karen; Williams, Ben; Yashin, Anatoli I.; Poulton, Richie; Moffitt, Terrie E.
2015-01-01
Antiaging therapies show promise in model organism research. Translation to humans is needed to address the challenges of an aging global population. Interventions to slow human aging will need to be applied to still-young individuals. However, most human aging research examines older adults, many with chronic disease. As a result, little is known about aging in young humans. We studied aging in 954 young humans, the Dunedin Study birth cohort, tracking multiple biomarkers across three time points spanning their third and fourth decades of life. We developed and validated two methods by which aging can be measured in young adults, one cross-sectional and one longitudinal. Our longitudinal measure allows quantification of the pace of coordinated physiological deterioration across multiple organ systems (e.g., pulmonary, periodontal, cardiovascular, renal, hepatic, and immune function). We applied these methods to assess biological aging in young humans who had not yet developed age-related diseases. Young individuals of the same chronological age varied in their “biological aging” (declining integrity of multiple organ systems). Already, before midlife, individuals who were aging more rapidly were less physically able, showed cognitive decline and brain aging, self-reported worse health, and looked older. Measured biological aging in young adults can be used to identify causes of aging and evaluate rejuvenation therapies. PMID:26150497
Quantification of Zolpidem in Canine Plasma
Directory of Open Access Journals (Sweden)
Mario Giorgi
2012-01-01
Problem statement: Zolpidem (ZP) is a non-benzodiazepine hypnotic agent currently used in human medicine. In contrast to benzodiazepines, zolpidem binds preferentially to the ω1 receptors of the GABAA complex while interacting only poorly with the other ω receptor complexes. Recent studies have suggested that ZP may be used to initiate sedation and diminish severe anxiety responses in dogs. The aim of the present study was to develop and validate a new HPLC-FL method to quantify zolpidem in canine plasma. Approach: Several parameters of both the extraction and the detection method were evaluated. The applicability of the method was determined by administering zolpidem to one dog. Results: The final mobile phase was acetonitrile:KH2PO4 (15 mM; pH 6.0) 40:60 v/v, with a flow rate of 1 mL min-1 and excitation and emission wavelengths of 254 and 400 nm, respectively. The best extraction solvent was CH2Cl2:Et2O (3:7 v/v), which gave recoveries ranging from 83-95%. The limit of quantification was 1 ng mL-1. The chromatographic runs were specific, with no interfering peaks at the retention times of the analyte. The other validation parameters were in agreement with EMEA guidelines. Conclusion/Recommendations: This method (extraction, separation and applied techniques) is simple and effective, and may have applications in pharmacokinetic or toxicological studies.
Machine accurate quantification in magnetic resonance spectroscopy
International Nuclear Information System (INIS)
A powerful and invaluable complement to anatomical diagnostics, Magnetic Resonance Spectroscopy (MRS) provides biochemical information about the viability and overall functionality of the scanned tissue. This information is not available directly from the time signals encoded from patients via MRS; rather, the time signals need to be spectrally analysed by solving the quantification problem. By solving this problem, one can reconstruct the fundamental frequencies and the corresponding amplitudes, as well as their total number. These parameters yield directly the diagnostically most relevant quantities: the concentrations of the identified metabolites. All these key spectral parameters can be unequivocally reconstructed from the input time signal by using the fast Pade transform (FPT). The present computations demonstrate that the FPT can achieve 'spectral convergence', i.e. an exponential convergence rate as a function of the signal length for a fixed bandwidth. This is illustrated to within machine accuracy by the exact reconstruction of all the parameters of every physical resonance from a synthesised noiseless time signal with 25 complex damped exponentials, including those for tightly overlapped and nearly degenerate resonances. Overlapped resonances are abundant in spectra from in vivo MRS and, moreover, they are often of utmost relevance for diagnostics, especially in clinical oncology
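The parametric quantification problem described here, recovering the poles and amplitudes of a sum of complex damped exponentials, can be sketched with classical Prony linear prediction standing in for the fast Pade transform (purely as an illustration; the FPT itself is considerably more robust). All signal parameters below are hypothetical:

```python
import numpy as np

def prony(signal, k):
    """Recover poles z_j and amplitudes d_j of c_n = sum_j d_j z_j**n (noiseless).

    Linear prediction: c_n = a_1 c_{n-1} + ... + a_k c_{n-k}; the poles are the
    roots of the prediction polynomial, the amplitudes follow by least squares.
    """
    c = np.asarray(signal, dtype=complex)
    A = np.column_stack([c[k - j - 1 : len(c) - j - 1] for j in range(k)])
    a, *_ = np.linalg.lstsq(A, c[k:], rcond=None)
    z = np.roots(np.concatenate(([1.0], -a)))          # poles
    V = np.vander(z, N=len(c), increasing=True).T      # V[n, j] = z_j**n
    d, *_ = np.linalg.lstsq(V, c, rcond=None)          # amplitudes
    return z, d

# Synthesize a signal with two complex damped exponentials, then recover them.
z_true = np.exp(np.array([-0.10 + 2j * np.pi * 0.20, -0.05 + 2j * np.pi * 0.35]))
d_true = np.array([1.0 + 0j, 0.5 + 0j])
n = np.arange(16)
c = (d_true[:, None] * z_true[:, None] ** n).sum(axis=0)
z_est, d_est = prony(c, 2)
```

For noiseless data this recovers the poles and amplitudes to machine precision, echoing the "machine accurate" reconstruction the abstract describes for the FPT.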
A regularized method for peptide quantification.
Yang, Chao; Yang, Can; Yu, Weichuan
2010-05-01
Peptide abundance estimation is generally the first step in protein quantification. Two challenges in peptide abundance estimation are peptide overlapping and peak intensity variation. The main objective of this paper is to estimate peptide abundance by taking advantage of the peptide isotopic distribution and the smoothness of the peptide elution profile. Our method solves the peptide overlapping problem and provides a way to control the variance of the estimate. We compare our method with a commonly used method on simulated data sets and on two real data sets of standard protein mixtures. The results show that our method achieves more accurate estimation of peptide abundance on different samples. Our method includes a variance-related parameter; given the well-known trade-off between the variance and the bias of estimation, one should not focus solely on reducing the variance in real applications. A suggestion for parameter selection is given based on a discussion of variance and bias. Matlab source codes and detailed experimental results are available at http://bioinformatics.ust.hk/PeptideQuant/peptidequant.htm. PMID:20201590
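The regularized estimation idea, trading a little bias for lower variance when isotopic envelopes overlap, can be sketched with plain Tikhonov (ridge) least squares; the Gaussian templates and the regularization weight below are illustrative stand-ins, not the paper's actual model:

```python
import numpy as np

def ridge_abundance(templates, spectrum, lam):
    """Tikhonov-regularized least squares: min ||T a - y||^2 + lam ||a||^2.

    The penalty stabilizes the solution when overlapping templates make the
    normal equations ill-conditioned; lam controls the variance/bias trade-off.
    """
    T = np.asarray(templates)
    G = T.T @ T + lam * np.eye(T.shape[1])
    return np.linalg.solve(G, T.T @ spectrum)

mz = np.linspace(0.0, 10.0, 200)
# Two heavily overlapping (hypothetical) peptide envelopes as Gaussian templates
t1 = np.exp(-0.5 * ((mz - 4.5) / 0.8) ** 2)
t2 = np.exp(-0.5 * ((mz - 5.5) / 0.8) ** 2)
T = np.column_stack([t1, t2])
rng = np.random.default_rng(1)
y = T @ np.array([3.0, 1.0]) + 0.01 * rng.standard_normal(mz.size)  # true abundances 3, 1
a = ridge_abundance(T, y, lam=1e-3)
```

Increasing `lam` shrinks the estimates (more bias, less variance), which is the parameter-selection trade-off the abstract discusses.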
Cross recurrence quantification for cover song identification
International Nuclear Information System (INIS)
There is growing evidence that nonlinear time series analysis techniques can be used to successfully characterize, classify, or process signals derived from real-world dynamics even though these are not necessarily deterministic and stationary. In the present study, we proceed in this direction by addressing an important problem our modern society is facing, the automatic classification of digital information. In particular, we address the automatic identification of cover songs, i.e. alternative renditions of a previously recorded musical piece. For this purpose, we propose a recurrence quantification analysis measure that allows the tracking of potentially curved and disrupted traces in cross recurrence plots (CRPs). We apply this measure to CRPs constructed from the state space representation of musical descriptor time series extracted from the raw audio signal. We show that our method identifies cover songs with higher accuracy than previously published techniques. Beyond the particular application proposed here, we discuss how our approach can be useful for the characterization of a variety of signals from different scientific disciplines. We study coupled Rössler dynamics with stochastically modulated mean frequencies as one concrete example to illustrate this point.
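A cross recurrence plot in its simplest form (scalar series, no delay embedding, which full RQA would normally include) is just a thresholded distance matrix between two time series; a minimal sketch:

```python
import numpy as np

def cross_recurrence(x, y, eps):
    """Binary cross recurrence matrix: CR[i, j] = 1 iff |x_i - y_j| <= eps."""
    d = np.abs(np.subtract.outer(np.asarray(x, float), np.asarray(y, float)))
    return (d <= eps).astype(int)

t = np.linspace(0.0, 4.0 * np.pi, 200)
x = np.sin(t)
crp_self = cross_recurrence(x, x, eps=0.05)             # series against itself
crp_shift = cross_recurrence(x, np.sin(t + 0.5), 0.05)  # against a phase-shifted copy
rr = crp_shift.mean()                                    # cross recurrence rate
```

Against itself the main diagonal is fully recurrent; against a shifted rendition the recurrent traces move off and bend, and tracking such curved traces is exactly what the proposed measure quantifies.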
Quantification of moving target cyber defenses
Farris, Katheryn A.; Cybenko, George
2015-05-01
Current network and information systems are static, making it simple for attackers to maintain an advantage. Adaptive defenses, such as Moving Target Defenses (MTD), have been developed as potential "game-changers" in an effort to increase the attacker's workload. With many new methods being developed, it is difficult to accurately quantify and compare their overall costs and effectiveness. This paper compares the tradeoffs between current approaches to the quantification of MTDs. We present results from an expert opinion survey on quantifying the overall effectiveness and the upfront and operating costs of a select set of MTD techniques. We find that gathering informed scientific opinions can be advantageous for evaluating such new technologies, as it offers a more comprehensive assessment. We end by presenting a coarse ordering of a set of MTD techniques from most to least dominant. We found that seven out of 23 methods rank as the more dominant techniques, five of which are techniques of either address space layout randomization or instruction set randomization. The remaining two techniques are applicable to software and computer platforms. Among the techniques that performed the worst are those primarily aimed at network randomization.
Tissue quantification for development of pediatric phantom
International Nuclear Information System (INIS)
The optimization of the risk-benefit ratio is a major concern in pediatric radiology, owing to the greater vulnerability of children to the late somatic and genetic effects of radiation exposure compared with adults. In Brazil, head trauma is estimated to account for 18% of deaths in the 1-5 year age group, and radiography is the primary diagnostic test for the detection of skull fracture. Since image quality is essential to ensure the identification of anatomical structures and to minimize errors of diagnostic interpretation, this paper proposes the development and construction of homogeneous skull phantoms for the 1-5 year age group. The homogeneous phantoms were constructed based on the classification and quantification of the tissues present in the skulls of pediatric patients. Computational algorithms implemented in Matlab were used to quantify the distinct biological tissues present in the anatomical regions studied, using retrospective CT images. Preliminary measurements show that, between the ages of 1 and 5 years, an average anteroposterior skull diameter of 145.73 ± 2.97 mm can be represented by 92.34 ± 5.22 mm of Lucite and 1.75 ± 0.21 mm of aluminium plates in a PEP (patient-equivalent phantom) arrangement. After construction, the phantoms will be used for image and dose optimization in pediatric computed radiography examination protocols.
Mouse Polyomavirus: Propagation, Purification, Quantification, and Storage.
Horníková, Lenka; Žíla, Vojtěch; Španielová, Hana; Forstová, Jitka
2015-01-01
Mouse polyomavirus (MPyV) is a member of the Polyomaviridae family, which comprises non-enveloped tumorigenic viruses infecting various vertebrates including humans and causing different pathogenic responses in the infected organisms. Despite the variations in host tropism and pathogenicity, the structure of the virions of these viruses is similar. The capsid, with icosahedral symmetry (ø 45 nm, T = 7d), is composed of a shell of 72 capsomeres of structural proteins, arranged around the nucleocore containing approximately 5-kbp-long circular dsDNA in complex with cellular histones. MPyV has been one of the most studied polyomaviruses and serves as a model virus for studies of the mechanisms of cell transformation and virus trafficking, and for use in nanotechnology. It can be propagated in primary mouse cells (e.g., in whole mouse embryo cells) or in mouse epithelial or fibroblast cell lines. In this unit, propagation, purification, quantification, and storage of MPyV virions are presented. © 2015 by John Wiley & Sons, Inc. PMID:26237106
Expert elicitation approach for performing ATHEANA quantification
International Nuclear Information System (INIS)
An expert elicitation approach has been developed to estimate probabilities for unsafe human actions (UAs) based on error-forcing contexts (EFCs) identified through the ATHEANA (A Technique for Human Event Analysis) search process. The expert elicitation approach integrates the knowledge of informed analysts to quantify UAs and treat uncertainty ('quantification-including-uncertainty'). The analysis focuses on (a) the probabilistic risk assessment (PRA) sequence EFCs for which the UAs are being assessed, (b) the knowledge and experience of analysts (who should include trainers, operations staff, and PRA/human reliability analysis experts), and (c) facilitated translation of information into probabilities useful for PRA purposes. Rather than simply asking the analysts their opinion about failure probabilities, the approach emphasizes asking the analysts what experience and information they have that is relevant to the probability of failure. The facilitator then leads the group in combining the different kinds of information into a consensus probability distribution. This paper describes the expert elicitation process, presents its technical basis, and discusses the controls that are exercised to use it appropriately. The paper also points out the strengths and weaknesses of the approach and how it can be improved. Specifically, it describes how generalized contextually anchored probabilities (GCAPs) can be developed to serve as reference points for estimates of the likelihood of UAs and their distributions
Jeong, Hyun Cheol; Hong, Hee-Do; Kim, Young-Chan; Rhee, Young Kyoung; Choi, Sang Yoon; Kim, Kyung-Tack; Kim, Sung Soo; Lee, Young-Chul; Cho, Chang-Won
2015-01-01
Background: Maltol, a type of phenolic compound, is produced by the browning reaction during the high-temperature treatment of ginseng. Thus, maltol can be used as a marker for the quality control of various ginseng products manufactured by high-temperature treatment, including red ginseng. For the quantification of maltol in Korean ginseng products, an effective high-performance liquid chromatography-diode array detector (HPLC-DAD) method was developed. Materials and Methods: The HPLC-DAD method for maltol quantification, coupled with a liquid-liquid extraction (LLE) method, was developed and validated in terms of linearity, precision, and accuracy. The HPLC separation was performed on a C18 column. Results: The LLE methods and HPLC running conditions for maltol quantification were optimized. The calibration curve of maltol exhibited good linearity (R2 = 1.00). The limit of detection value of maltol was 0.26 µg/mL, and the limit of quantification value was 0.79 µg/mL. The relative standard deviations (RSDs) of the data of the intra- and inter-day experiments were <1.27% and 0.61%, respectively. The results of the recovery test were 101.35–101.75% with an RSD value of 0.21–1.65%. The developed method was applied successfully to quantify the maltol in three ginseng products manufactured by different methods. Conclusion: The results of validation demonstrated that the proposed HPLC-DAD method was useful for the quantification of maltol in various ginseng products.
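The limit-of-detection and limit-of-quantification figures quoted above are conventionally derived from the calibration fit as 3.3s/S and 10s/S (ICH-style), where S is the slope and s the residual standard deviation; a sketch with hypothetical calibration data:

```python
import numpy as np

def lod_loq(conc, response):
    """LOD = 3.3*s/S and LOQ = 10*s/S from a linear calibration fit,
    where S is the slope and s the residual standard deviation."""
    slope, intercept = np.polyfit(conc, response, 1)
    resid = response - (slope * conc + intercept)
    s = resid.std(ddof=2)            # residual SD, 2 fitted parameters
    return 3.3 * s / slope, 10.0 * s / slope

# Hypothetical calibration points (concentration in µg/mL vs. detector area)
conc = np.array([0.5, 1.0, 2.0, 5.0, 10.0, 20.0])
area = np.array([25.2, 49.8, 101.1, 249.7, 500.9, 999.8])
lod, loq = lod_loq(conc, area)
```

By construction LOQ/LOD = 10/3.3, so the two limits scale together with calibration noise.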
Akram, Muhammad Farooq Bin
The management of technology portfolios is an important element of aerospace system design. New technologies are often applied to new product designs to ensure their competitiveness at the time they are introduced to market. The future performance of yet-to-be-designed components is inherently uncertain, necessitating subject matter expert knowledge, statistical methods and financial forecasting. Estimates of the appropriate parameter settings often come from disciplinary experts, who may disagree with each other because of varying experience and background. Due to the inherently uncertain nature of expert elicitation in the technology valuation process, appropriate uncertainty quantification and propagation is critical. The uncertainty in defining the impact of an input on the performance parameters of a system makes it difficult to use traditional probability theory. Often the available information is not enough to assign appropriate probability distributions to uncertain inputs. Another problem faced during technology elicitation pertains to technology interactions in a portfolio. When multiple technologies are applied simultaneously to a system, their cumulative impact is often non-linear. Current methods assume that technologies are either incompatible or linearly independent. It is observed that in case of lack of knowledge about the problem, epistemic uncertainty is the most suitable representation of the process. It reduces the number of assumptions during the elicitation process, when experts are otherwise forced to assign probability distributions to their opinions without sufficient knowledge. Epistemic uncertainty can be quantified by many techniques. In the present research it is proposed that interval analysis and the Dempster-Shafer theory of evidence are better suited for the quantification of epistemic uncertainty in the technology valuation process.
The proposed technique seeks to offset some of the problems faced when using deterministic or traditional probabilistic approaches for uncertainty propagation. Non-linear behavior in technology interactions is captured through expert-elicitation-based technology synergy matrices (TSMs). The proposed TSMs increase the fidelity of current technology forecasting methods by including higher-order technology interactions. A test case for the quantification of epistemic uncertainty was selected: a large-scale problem of a combined cycle power generation system. A detailed multidisciplinary modeling and simulation environment was adopted for this problem. Results have shown that the evidence-theory-based technique provides more insight into the uncertainties arising from incomplete information or lack of knowledge than deterministic or probability theory methods. Margin analysis was also carried out for both techniques. A detailed description of TSMs and their usage in conjunction with technology impact matrices and technology compatibility matrices is discussed. Various combination methods are also proposed for higher-order interactions, which can be applied according to expert opinion or historical data. The introduction of the technology synergy matrix enabled capturing the higher-order technology interactions and improved the prediction of system performance.
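Dempster's rule of combination, the core operation when evidence-theory mass assignments from several experts are merged, can be sketched in a few lines; the frame of discernment and the mass values below are hypothetical:

```python
def dempster_combine(m1, m2):
    """Dempster's rule: combine two mass functions over frozenset focal elements.

    Products of masses go to the intersection of focal elements; mass landing on
    the empty set is the conflict K, and the rest is renormalized by 1 - K.
    """
    combined, conflict = {}, 0.0
    for a, w1 in m1.items():
        for b, w2 in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + w1 * w2
            else:
                conflict += w1 * w2
    return {k: v / (1.0 - conflict) for k, v in combined.items()}, conflict

theta = frozenset({"high", "low"})          # frame: a technology's impact is high or low
m_expert1 = {frozenset({"high"}): 0.6, theta: 0.4}
m_expert2 = {frozenset({"high"}): 0.5, theta: 0.5}
m, k = dempster_combine(m_expert1, m_expert2)
```

Here both experts lean the same way, so the conflict K is zero and the combined belief in "high" strengthens to 0.8, with 0.2 of the mass still uncommitted on the full frame.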
Type I background fields in terms of type IIB ones
B. Nikolic; Sazdovic, B.
2008-01-01
We choose boundary conditions for open IIB superstring theory which preserve N=1 SUSY. The explicit solution of the boundary conditions yields an effective theory which is symmetric under the world-sheet parity transformation $\Omega:\sigma\to-\sigma$. We recognize the effective theory as closed type I superstring theory. Its background fields, besides the known $\Omega$-even fields of the initial IIB theory, contain improvements quadratic in the $\Omega$-odd ones.
Superspace conformal field theory
International Nuclear Information System (INIS)
Conformal sigma models and WZW models on coset superspaces provide important examples of logarithmic conformal field theories. They possess many applications to problems in string and condensed matter theory. We review recent results and developments, including the general construction of WZW models on type I supergroups, the classification of conformal sigma models and their embedding into string theory.
Superspace conformal field theory
Energy Technology Data Exchange (ETDEWEB)
Quella, Thomas [Koeln Univ. (Germany). Inst. fuer Theoretische Physik; Schomerus, Volker [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany)
2013-07-15
Conformal sigma models and WZW models on coset superspaces provide important examples of logarithmic conformal field theories. They possess many applications to problems in string and condensed matter theory. We review recent results and developments, including the general construction of WZW models on type I supergroups, the classification of conformal sigma models and their embedding into string theory.
Methodological considerations in quantification of oncological FDG PET studies
Energy Technology Data Exchange (ETDEWEB)
Vriens, Dennis; Visser, Eric P.; Geus-Oei, Lioe-Fee de; Oyen, Wim J.G. [Radboud University Nijmegen Medical Centre, Department of Nuclear Medicine, Nijmegen (Netherlands)
2010-07-15
This review aims to provide insight into the factors that influence quantification of glucose metabolism by FDG PET images in oncology, as well as their influence on repeated measures studies (i.e. treatment response assessment), offering improved understanding both for clinical practice and research. Structured PubMed searches have been performed for the many factors affecting quantification of glucose metabolism by FDG PET. Review articles and reference lists have been used to supplement the search findings. Biological factors such as fasting blood glucose level, FDG uptake period, FDG distribution and clearance, patient motion (breathing) and patient discomfort (stress) all influence quantification. Acquisition parameters should be adjusted to maximize the signal-to-noise ratio without exposing the patient to a higher than strictly necessary radiation dose. This is especially challenging in pharmacokinetic analysis, where the temporal resolution is of significant importance. The literature is reviewed on the influence of attenuation correction on parameters for glucose metabolism, and on the effect of motion, metal artefacts and contrast agents on quantification of CT attenuation-corrected images. Reconstruction settings (analytical versus iterative reconstruction, post-reconstruction filtering and image matrix size) all potentially influence quantification due to artefacts, noise levels and lesion size dependency. Many region of interest definitions are available, but increased complexity does not necessarily result in improved performance. Different methods for the quantification of the tissue of interest can introduce systematic and random inaccuracy. This review provides an up-to-date overview of the many factors that influence quantification of glucose metabolism by FDG PET. (orig.)
Methodological considerations in quantification of oncological FDG PET studies
International Nuclear Information System (INIS)
This review aims to provide insight into the factors that influence quantification of glucose metabolism by FDG PET images in oncology, as well as their influence on repeated measures studies (i.e. treatment response assessment), offering improved understanding both for clinical practice and research. Structured PubMed searches have been performed for the many factors affecting quantification of glucose metabolism by FDG PET. Review articles and reference lists have been used to supplement the search findings. Biological factors such as fasting blood glucose level, FDG uptake period, FDG distribution and clearance, patient motion (breathing) and patient discomfort (stress) all influence quantification. Acquisition parameters should be adjusted to maximize the signal-to-noise ratio without exposing the patient to a higher than strictly necessary radiation dose. This is especially challenging in pharmacokinetic analysis, where the temporal resolution is of significant importance. The literature is reviewed on the influence of attenuation correction on parameters for glucose metabolism, and on the effect of motion, metal artefacts and contrast agents on quantification of CT attenuation-corrected images. Reconstruction settings (analytical versus iterative reconstruction, post-reconstruction filtering and image matrix size) all potentially influence quantification due to artefacts, noise levels and lesion size dependency. Many region of interest definitions are available, but increased complexity does not necessarily result in improved performance. Different methods for the quantification of the tissue of interest can introduce systematic and random inaccuracy. This review provides an up-to-date overview of the many factors that influence quantification of glucose metabolism by FDG PET. (orig.)
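A staple of quantitative FDG PET practice touched on in such reviews is the body-weight standardized uptake value (SUV); a sketch of its computation with decay correction of the injected dose to scan time (all input values are hypothetical):

```python
import numpy as np

def suv_bw(activity_kbq_per_ml, injected_mbq, weight_kg,
           minutes_since_injection, half_life_min=109.77):
    """Body-weight SUV with decay correction of the injected F-18 dose.

    SUV = tissue activity concentration / (decayed dose / body weight),
    assuming a tissue density of 1 g/mL so that grams and millilitres cancel.
    """
    decayed_dose_kbq = (injected_mbq * 1000.0
                        * 0.5 ** (minutes_since_injection / half_life_min))
    return activity_kbq_per_ml / (decayed_dose_kbq / (weight_kg * 1000.0))

# Hypothetical lesion: 12 kBq/mL at 60 min after a 350 MBq injection, 75 kg patient
suv = suv_bw(activity_kbq_per_ml=12.0, injected_mbq=350.0,
             weight_kg=75.0, minutes_since_injection=60.0)
```

Omitting the decay correction changes the denominator and hence the SUV, one concrete example of how a methodological choice shifts the quantitative result between repeated scans.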
Uncertainty quantification of bacterial aerosol neutralization in shock heated gases
Schulz, J. C.; Gottiparthi, K. C.; Menon, S.
2015-01-01
A potential method for the neutralization of bacterial endospores is the use of explosive charges, since the high thermal and mechanical stresses in the post-detonation flow are thought to be sufficient to reduce endospore survivability to levels that pose no significant health threat. While several experiments have attempted to quantify endospore survivability by emulating such environments in shock tube configurations, numerical simulations are necessary to provide information in scenarios where experimental data are difficult to obtain. Since such numerical predictions require complex, multi-physics models, significant uncertainties could be present. This work investigates the uncertainty in determining endospore survivability from a reduced-order model based on a critical endospore temperature. Understanding the uncertainty in such a model is necessary for quantifying the variability in predictions from large-scale, realistic simulations of bacterial endospore neutralization by explosive charges. This work extends the analysis of previous large-scale simulations of endospore neutralization [Gottiparthi et al., Shock Waves, 2014, doi:10.1007/s00193-014-0504-9] by focusing on the uncertainty quantification of predicting endospore neutralization. For a given initial mass distribution of the bacterial endospore aerosol, predictions of the intact endospore percentage using nominal values of the input parameters match the experimental data well. The uncertainty in these predictions is then investigated using the Dempster-Shafer theory of evidence and polynomial chaos expansion. The studies show that endospore survivability is governed largely by the endospores' mass distribution and their exposure or residence time at the elevated temperatures and pressures. Deviations from the nominal predictions can be as much as 20-30% in the intermediate temperature ranges.
At high temperatures, i.e. strong shocks, which are of the most interest, the residence time is observed to be a dominant parameter; coupled with the analysis resulting from the Dempster-Shafer theory of evidence, this indicates that confident predictions of less than 1% endospore viability can only be achieved by extending the residence time of the fluid-particle interaction.
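The dominance of residence time at elevated temperature is easy to see in a toy critical-temperature model; the sketch below assumes first-order thermal inactivation with an Arrhenius rate, with both the pre-exponential factor and the activation energy chosen purely for illustration (this is not the authors' reduced-order model):

```python
import numpy as np

def surviving_fraction(temp_k, residence_s, a=1e12, ea_over_r=18000.0):
    """First-order thermal inactivation, N/N0 = exp(-k(T) t), with an
    Arrhenius rate k(T) = A exp(-Ea/(R T)). A and Ea/R are illustrative."""
    k = a * np.exp(-ea_over_r / temp_k)
    return np.exp(-k * residence_s)

s_short = surviving_fraction(temp_k=700.0, residence_s=1e-4)  # brief exposure
s_long = surviving_fraction(temp_k=700.0, residence_s=1e-3)   # 10x residence time
s_hot = surviving_fraction(temp_k=900.0, residence_s=1e-4)    # hotter post-shock gas
```

Because the rate enters the exponent multiplied by time, extending the residence time (or raising the temperature) drives the surviving fraction down monotonically, the qualitative behavior the uncertainty analysis identifies as dominant.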
An architectural model for software reliability quantification: sources of data
International Nuclear Information System (INIS)
Software reliability assessment models in use today treat software as a monolithic block. An aversion towards 'atomic' models seems to exist. These models appear to add complexity to the modeling and to the data collection, and seem intrinsically difficult to generalize. In 1997, we introduced an architecturally based software reliability model called FASRE. The model is based on an architecture derived from the requirements, which captures both functional and nonfunctional requirements, and on a generic classification of functions, attributes and failure modes. The model focuses on evaluation of failure mode probabilities and uses a Bayesian quantification framework. Failure mode probabilities of functions and attributes are propagated to the system level using fault trees. It can incorporate any type of prior information, such as results of developers' testing or historical information on a specific functionality and its attributes, and is ideally suited for reusable software. By building an architecture and deriving its potential failure modes, the model forces early appraisal and understanding of the weaknesses of the software, allows reliability analysis of the structure of the system, and provides assessments at a functional as well as a system level. In order to quantify the probability of failure (or the probability of success) of a specific element of our architecture, data are needed. The term 'element of the architecture' is used here in its broadest sense to mean a single failure mode or a higher level of abstraction such as a function. The paper surveys the potential sources of software reliability data available during software development, then identifies the mechanisms for incorporating these sources of relevant data into the FASRE model.
GPU-accelerated voxelwise hepatic perfusion quantification
International Nuclear Information System (INIS)
Voxelwise quantification of hepatic perfusion parameters from dynamic contrast enhanced (DCE) imaging greatly contributes to assessment of liver function in response to radiation therapy. However, the efficiency of estimating hepatic perfusion parameters voxel-by-voxel in the whole liver using a dual-input single-compartment model requires substantial improvement for routine clinical applications. In this paper, we utilize the parallel computation power of a graphics processing unit (GPU) to accelerate the computation while maintaining the same accuracy as the conventional method. Using compute unified device architecture (CUDA), the hepatic perfusion computations over multiple voxels are run across the GPU blocks concurrently but independently. At each voxel, nonlinear least-squares fitting of the time series of the liver DCE data to the compartmental model is distributed to multiple threads in a block, and the computations of different time points are performed simultaneously and synchronously. An efficient fast Fourier transform in a block is also developed for the convolution computation in the model. The GPU computations of the voxel-by-voxel hepatic perfusion images are compared with those by the CPU using simulated DCE data and experimental DCE MR images from patients. The computation speed is improved by 30 times using an NVIDIA Tesla C2050 GPU compared to a 2.67 GHz Intel Xeon CPU. To obtain liver perfusion maps with 626 400 voxels in a patient's liver, it takes 0.9 min with the GPU-accelerated voxelwise computation, compared to 110 min with the CPU, while the perfusion parameters from the two methods differ by less than 10(-6). The method will be useful for generating liver perfusion images in clinical settings. (paper)
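The per-voxel fitting that the GPU parallelizes can be sketched on the CPU. The dual-input model is simplified here to a single synthetic input function, and all parameter names and data are illustrative assumptions, not the paper's implementation:

```python
import numpy as np
from scipy.optimize import curve_fit

# Illustrative single-compartment model (an assumption, not the paper's exact
# dual-input formulation): tissue curve = k1 * (input convolved with exp(-k2*t)).
t = np.linspace(0, 5, 50)                    # frame times, minutes
dt = t[1] - t[0]
aif = np.exp(-((t - 0.5) ** 2) / 0.05)       # synthetic input function

def tissue_curve(t, k1, k2):
    irf = np.exp(-k2 * t)                    # compartment impulse response
    return k1 * np.convolve(aif, irf)[: len(t)] * dt

# Simulate a few "voxels" and fit each independently -- the step the GPU
# runs concurrently, one voxel per block.
rng = np.random.default_rng(0)
for k1, k2 in [(0.8, 1.5), (0.5, 0.7)]:
    data = tissue_curve(t, k1, k2) + rng.normal(0, 1e-3, len(t))
    (k1_est, k2_est), _ = curve_fit(tissue_curve, t, data, p0=(0.1, 0.1))
    print(round(k1_est, 2), round(k2_est, 2))
```

Each voxel's fit is independent, which is exactly why the problem maps well onto GPU blocks.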
Quantification of asphaltene precipitation by scaling equation
Janier, Josefina Barnachea; Jalil, Mohamad Afzal B. Abd.; Samin, Mohamad Izhar B. Mohd; Karim, Samsul Ariffin B. A.
2015-02-01
Asphaltene precipitation from crude oil is one of the issues for the oil industry. The deposition of asphaltene occurs during production, transportation and separation processes. The injection of carbon dioxide (CO2) during enhanced oil recovery (EOR) is believed to contribute much to the precipitation of asphaltene. Precipitation can be affected by changes in temperature and pressure on the crude oil; however, reduction in pressure contributes more to the instability of asphaltene than temperature does. This paper discusses the quantification of precipitated asphaltene in crude oil at different high pressures and at constant temperature. The derived scaling equation was based on reservoir conditions, with variation in the amount of carbon dioxide (CO2) mixed with Dulang, a light crude oil sample used in the experiment, towards the stability of asphaltene. A FluidEval PVT cell with a Solid Detection System (SDS) was the instrument used to gain experimental knowledge of the behavior of the fluid at reservoir conditions. Two conditions were followed in the conduct of the experiment. First, a 45 cc light crude oil sample was mixed with 18 cc (40%) of CO2; second, the same amount of crude oil was mixed with 27 cc (60%) of CO2. Results showed that the 45 cc crude oil sample combined with 18 cc (40%) of CO2 gas had a saturation pressure of 1498.37 psi and an asphaltene onset point of 1620 psi. For the same amount of crude oil combined with 27 cc (60%) of CO2, the saturation pressure was 2046.502 psi and the asphaltene onset point was 2230 psi. The derivation of the scaling equation considered reservoir temperature, pressure, bubble point pressure, mole percent of the precipitant (the injected CO2 gas), and the gas molecular weight. The scaling equation reduced to a third-order polynomial that can be used to quantify the amount of asphaltene in crude oil.
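The final step described, a third-order polynomial in the scaled variable, can be sketched with an ordinary least-squares fit; the (x, y) pairs below are invented for illustration and are not the Dulang data:

```python
import numpy as np

# Hypothetical (scaled variable x, wt% asphaltene precipitated y) pairs --
# invented for illustration only.
x = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0])
y = np.array([0.2, 0.9, 2.1, 4.0, 6.8, 10.5])

coeffs = np.polyfit(x, y, deg=3)   # third-order polynomial, as in the paper
poly = np.poly1d(coeffs)
print(poly(2.2))                   # predicted wt% at a new scaled value
```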
Quantification of water in hydrous ringwoodite
Thomas, Sylvia-Monique; Jacobsen, Steven; Bina, Craig; Reichart, Patrick; Moser, Marcus; Hauri, Erik; Koch-Müller, Monika; Smyth, Joseph; Dollinger, Günther
2014-12-01
Ringwoodite, γ-(Mg,Fe)2SiO4, in the lower 150 km of Earth's mantle transition zone (410-660 km depth) can incorporate up to 1.5-2 wt% H2O as hydroxyl defects. We present a mineral-specific IR calibration for the absolute water content in hydrous ringwoodite by combining results from Raman spectroscopy, secondary ion mass spectrometry (SIMS) and proton-proton (pp)-scattering on a suite of synthetic Mg- and Fe-bearing hydrous ringwoodites. H2O concentrations in the crystals studied here range from 0.46 to 1.7 wt% H2O (absolute methods), with the maximum H2O in the same sample giving 2.5 wt% by SIMS calibration. Anchoring our spectroscopic results to absolute H-atom concentrations from pp-scattering measurements, we report frequency-dependent integrated IR-absorption coefficients for water in ringwoodite ranging from 78180 to 158880 L mol(-1) cm(-2), depending upon the frequency of the OH absorption. We further report a linear wavenumber IR calibration for H2O quantification in hydrous ringwoodite across the Mg2SiO4-Fe2SiO4 solid solution, which will lead to more accurate estimations of the water content in both laboratory-grown and naturally occurring ringwoodites. Re-evaluation of the IR spectrum for a natural hydrous ringwoodite inclusion in diamond from the study of Pearson et al. (2014) indicates the crystal contains 1.43 ± 0.27 wt% H2O, thus confirming near-maximum amounts of H2O for this sample from the transition zone.
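Converting an integrated OH absorbance into a water content with such an integrated absorption coefficient follows the Beer-Lambert form A_int = eps_int * c * d. In the sketch below only eps_int is taken from the reported range; the absorbance, thickness, and density are illustrative assumptions:

```python
# Beer-Lambert with an integrated molar absorption coefficient:
#   A_int = eps_int * c * d   =>   c = A_int / (eps_int * d)
eps_int = 100000.0   # L mol^-1 cm^-2, within the reported 78180-158880 range
d = 0.01             # sample thickness in cm (100 um, assumed)
A_int = 2000.0       # integrated absorbance in cm^-1 (assumed measurement)

c = A_int / (eps_int * d)            # mol H2O per litre of crystal
rho = 3.65                           # g cm^-3, approximate ringwoodite density
wt_percent = 100 * c * 18.015 / (rho * 1000)   # convert mol/L to wt% H2O
print(round(wt_percent, 2))          # -> 0.99
```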
Automated lobar quantification of emphysema in patients with severe COPD
International Nuclear Information System (INIS)
Automated lobar quantification of emphysema has not yet been evaluated. Unenhanced 64-slice MDCT was performed in 47 patients evaluated before bronchoscopic lung-volume reduction. CT images reconstructed with a standard (B20) and high-frequency (B50) kernel were analyzed using a dedicated prototype software (MevisPULMO) allowing lobar quantification of emphysema extent. Lobar quantification was obtained following (a) a fully automatic delineation of the lobar limits by the software and (b) a semiautomatic delineation with manual correction of the lobar limits when necessary, and was compared with the visual scoring of emphysema severity per lobe. No statistically significant difference existed between automated and semiautomated lobar quantification (p>0.05 in the five lobes), with differences ranging from 0.4 to 3.9%. The agreement between the two methods (intraclass correlation coefficient, ICC) was excellent for the left upper lobe (ICC=0.94), left lower lobe (ICC=0.98), and right lower lobe (ICC=0.80). The agreement was good for the right upper lobe (ICC=0.68) and moderate for the middle lobe (ICC=0.53). The Bland and Altman plots confirmed these results. A good agreement was observed between the software and visually assessed lobar predominance of emphysema (kappa 0.78; 95% CI 0.64-0.92). Automated and semiautomated lobar quantifications of emphysema are concordant and show good agreement with visual scoring. (orig.)
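The Bland-Altman comparison mentioned above reduces to a bias and limits of agreement computed from paired differences; the lobar scores below are invented for illustration:

```python
import numpy as np

# Hypothetical lobar emphysema scores (%) from the automated and
# semiautomated delineations -- values invented for illustration.
auto = np.array([22.0, 35.5, 41.2, 18.7, 29.9])
semi = np.array([21.5, 36.0, 40.1, 19.2, 30.4])

diff = auto - semi
bias = diff.mean()                     # mean difference between methods
loa = 1.96 * diff.std(ddof=1)          # half-width of the limits of agreement
print(round(bias, 2), round(bias - loa, 2), round(bias + loa, 2))
```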
GMO quantification: valuable experience and insights for the future.
Milavec, Mojca; Dobnik, David; Yang, Litao; Zhang, Dabing; Gruden, Kristina; Zel, Jana
2014-10-01
Cultivation and marketing of genetically modified organisms (GMOs) have been unevenly adopted worldwide. To facilitate international trade and to provide information to consumers, labelling requirements have been set up in many countries. Quantitative real-time polymerase chain reaction (qPCR) is currently the method of choice for detection, identification and quantification of GMOs. This has been critically assessed and the requirements for method performance have been set. Nevertheless, there are challenges that should still be highlighted, such as measuring the quantity and quality of DNA, and determining the qPCR efficiency, possible sequence mismatches, characteristics of taxon-specific genes and appropriate units of measurement, as these remain potential sources of measurement uncertainty. To overcome these problems and to cope with the continuous increase in the number and variety of GMOs, new approaches are needed. Statistical strategies of quantification have already been proposed and expanded with the development of digital PCR. The first attempts have been made to use next-generation sequencing also for quantitative purposes, although accurate quantification of the contents of GMOs using this technology is still a challenge for the future, especially for mixed samples. New approaches are also needed for the quantification of stacks, and for potential quantification of organisms produced by new plant breeding techniques. PMID:25182968
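Quantification by qPCR typically rests on a standard curve Ct = slope * log10(copies) + intercept; a minimal sketch with hypothetical slope and intercept values (not taken from the paper):

```python
# qPCR standard-curve quantification (hypothetical calibration values):
#   Ct = slope * log10(copies) + intercept
slope, intercept = -3.32, 38.0   # slope near -3.32 corresponds to ~100% efficiency

def copies_from_ct(ct):
    # Invert the standard curve to estimate the starting copy number.
    return 10 ** ((ct - intercept) / slope)

# Amplification efficiency from the slope: E = 10^(-1/slope) - 1
efficiency = 10 ** (-1 / slope) - 1
print(round(copies_from_ct(24.72), 0), round(efficiency, 3))
```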
He, Jingjing; Wang, Dengjiang; Zhang, Weifang
2015-03-01
This study presents an experimental and modeling study for damage detection and quantification in riveted lap joints. Embedded lead zirconate titanate piezoelectric (PZT) ceramic wafer-type sensors are employed to perform in-situ non-destructive testing during fatigue cyclical loading. A multi-feature integration method is developed to quantify the crack size using the signal features of correlation coefficient, amplitude change, and phase change. In addition, a probability of detection (POD) model is constructed to quantify the reliability of the developed sizing method. Using the developed crack size quantification method and the resulting POD curve, probabilistic fatigue life prediction can be performed to provide comprehensive information for decision-making. The effectiveness of the overall methodology is demonstrated and validated using several aircraft lap joint specimens from different manufacturers and under different loading conditions.
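A common way to express a POD curve is the log-normal (log-odds) model POD(a) = Phi((ln a - mu)/sigma). This is an assumption here, since the abstract does not give the paper's exact formulation, and the parameters below are hypothetical:

```python
from math import log, sqrt, erf

# Log-normal POD model (assumed form): POD(a) = Phi((ln a - mu) / sigma),
# where a is the crack size. Parameters are hypothetical: 50% POD at 2 mm.
mu, sigma = log(2.0), 0.5

def pod(a):
    z = (log(a) - mu) / sigma
    return 0.5 * (1 + erf(z / sqrt(2)))   # standard normal CDF

print(round(pod(2.0), 2))   # 0.5 by construction at a = 2 mm
print(pod(4.0) > pod(1.0))  # POD grows with crack size
```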
Szulcek, Robert; Bogaard, Harm Jan; van Nieuw Amerongen, Geerten P
2014-01-01
Electric Cell-substrate Impedance Sensing (ECIS) is an in vitro impedance measuring system to quantify the behavior of cells within adherent cell layers. To this end, cells are grown in special culture chambers on top of opposing, circular gold electrodes. A constant small alternating current is applied between the electrodes and the potential across them is measured. The insulating properties of the cell membrane create a resistance towards the electrical current flow, resulting in an increased electrical potential between the electrodes. Measuring cellular impedance in this manner allows the automated study of cell attachment, growth, morphology, function, and motility. Although the ECIS measurement itself is straightforward and easy to learn, the underlying theory is complex, and selection of the right settings and correct analysis and interpretation of the data are not self-evident. Yet, a clear protocol describing the individual steps from the experimental design to preparation, realization, and analysis of the experiment is not available. In this article the basic measurement principle as well as possible applications, experimental considerations, advantages and limitations of the ECIS system are discussed. A guide is provided for the study of cell attachment, spreading and proliferation; quantification of cell behavior in a confluent layer, with regard to barrier function, cell motility, and quality of cell-cell and cell-substrate adhesions; and quantification of wound healing and cellular responses to vasoactive stimuli. Representative results are discussed based on human microvascular endothelial cells (MVEC) and human umbilical vein endothelial cells (HUVEC), but are applicable to all adherently growing cells. PMID:24747269
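The ECIS readout itself is Ohm's law with complex quantities: a known constant current is applied and the measured potential gives the impedance. A minimal sketch with assumed values:

```python
import cmath

# Assumed measurement values, for illustration only: a 1 uA constant current
# and a complex measured potential (amplitude and phase from lock-in detection).
i_applied = 1e-6                          # applied current amplitude, A
v_measured = 2e-3 * cmath.exp(1j * 0.3)   # measured potential, V

z = v_measured / i_applied                # complex impedance, ohms
print(round(abs(z), 1), round(cmath.phase(z), 2))
```

As cells attach and spread over the electrode, the measured |Z| rises, which is the quantity ECIS tracks over time.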
Sakai, Osamu; Harima, Hisatomo
2012-01-01
Band calculations for Ce compounds with the AuCu$_{3}$-type crystal structure were carried out on the basis of dynamical mean field theory (DMFT). The results of applying the calculation to CeIn$_{3}$ and CeSn$_{3}$ are presented as the second in a series of papers. The Kondo temperature and crystal-field splitting are obtained, respectively, as 190 and 390 K (CeSn$_{3}$), 8 and 160 K (CeIn$_{3}$ under ambient pressure), and 30 and 240 K (CeIn$_{3}$ at a pressure of 2.75 GPa...
Gamma camera based Positron Emission Tomography: a study of the viability on quantification
International Nuclear Information System (INIS)
Positron Emission Tomography (PET) is a Nuclear Medicine imaging modality for diagnostic purposes. Pharmaceuticals labeled with positron emitters are used, and images which represent the in vivo biochemical process within tissues can be obtained. The positron/electron annihilation photons are detected in coincidence and this information is used for object reconstruction. Presently, there are two types of systems available for this imaging modality: dedicated systems and those based on gamma camera technology. In this work, we utilized PET/SPECT systems, which also allow for traditional Nuclear Medicine studies based on single photon emitters. There are inherent difficulties which affect quantification of activity and other indices. They are related to the Poisson nature of radioactivity, to radiation interactions with the patient body and detector, to noise due to the statistical nature of these interactions and of all the detection processes, as well as to the patient acquisition protocols. Corrections are described in the literature and not all of them are implemented by the manufacturers: scatter, attenuation, random, decay, dead time, spatial resolution, and others related to the properties of each piece of equipment. The goal of this work was to assess these methods as adopted by two manufacturers, as well as the influence of some technical characteristics of PET/SPECT systems on the estimation of SUV. Data from a set of phantoms were collected in 3D mode by one camera and in 2D by the other. We concluded that quantification is viable in PET/SPECT systems, including the estimation of SUVs. This is only possible if, apart from the above-mentioned corrections, the camera is well tuned and coefficients for sensitivity normalization and partial volume corrections are applied. We also verified that the shapes of the sources used for obtaining these factors play a role in the final results and should be dealt with carefully in clinical quantification.
Finally, the choice of the region of interest is critical and it should be the same used to calculate the correction factors. (author)
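The SUV that the phantom study estimates is the standard body-weight-normalized ratio; a sketch with illustrative numbers, assuming a tissue density of 1 g/mL:

```python
# Standard body-weight SUV (illustrative numbers, tissue density assumed 1 g/mL):
#   SUV = tissue concentration [kBq/mL] / (injected dose [kBq] / body mass [g])
c_tissue_kbq_ml = 12.0    # decay-corrected activity concentration in the ROI
dose_mbq = 370.0          # injected dose, MBq
weight_kg = 70.0          # patient body weight

suv = c_tissue_kbq_ml / (dose_mbq * 1000 / (weight_kg * 1000))
print(round(suv, 2))      # -> 2.27
```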
Estimation of the uncertainty of the quantification limit
International Nuclear Information System (INIS)
A method to compute the standard deviation of detection and quantification limits based on a simple equation was proposed. The results were compared to those coming from the rigorously theoretical approach proposed in the literature, which requires quite complex computations. The results were in excellent agreement with the theoretical ones. The proposed equation represents an easy tool to establish whether an instrumental technique furnishes equivalent results in different operating conditions and to compare a limit of quantification to a legal limit. The application to an experimental calibration of some elements by the ICP-MS technique demonstrated its ease of use for the interpretation of the results. - Highlights: • We proposed an approximate formula for the standard deviation of the detection limit. • We calculated the standard deviation of the quantification limit. • We successfully compared them with the modified noncentral t distribution ones. • We applied them to easily interpret calibrations of some elements by ICP-MS. • They allowed an easy comparison of different operating conditions
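The detection and quantification limits themselves are conventionally taken from a calibration slope and the blank's standard deviation (LOD = 3.3 s/slope, LOQ = 10 s/slope); the calibration data below are invented for illustration:

```python
import numpy as np

# Hypothetical ICP-MS calibration (concentration in ug/L vs. counts) --
# invented data illustrating the usual calibration-based limits.
conc = np.array([0.0, 1.0, 2.0, 5.0, 10.0])
counts = np.array([12.0, 1015.0, 2020.0, 5010.0, 10005.0])

slope, intercept = np.polyfit(conc, counts, 1)   # linear calibration
s_blank = 10.0                                   # assumed std. dev. of blank

lod = 3.3 * s_blank / slope                      # limit of detection, ug/L
loq = 10 * s_blank / slope                       # limit of quantification, ug/L
print(round(lod, 3), round(loq, 3))
```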
Iron overload in the liver diagnostic and quantification.
Alústiza, Jose M; Castiella, Agustin; De Juan, Maria D; Emparanza, Jose I; Artetxe, Jose; Uranga, Maite
2007-03-01
Hereditary Hemochromatosis is the most frequent modality of iron overload. Since 1996, genetic tests have significantly facilitated the non-invasive diagnosis of the disease. There are, however, many cases of negative genetic tests that require confirmation by hepatic iron quantification, which is traditionally performed by hepatic biopsy. Many studies have demonstrated the possibility of performing hepatic iron quantification with Magnetic Resonance. However, a consensus has not yet been reached regarding the technique or the reproducibility of the same calculation method on different machines. This article reviews the state of the art of the question and delineates possible future lines to standardise this non-invasive method of hepatic iron quantification. PMID:17166681
Iron overload in the liver diagnostic and quantification
International Nuclear Information System (INIS)
Hereditary Hemochromatosis is the most frequent modality of iron overload. Since 1996, genetic tests have significantly facilitated the non-invasive diagnosis of the disease. There are, however, many cases of negative genetic tests that require confirmation by hepatic iron quantification, which is traditionally performed by hepatic biopsy. Many studies have demonstrated the possibility of performing hepatic iron quantification with Magnetic Resonance. However, a consensus has not yet been reached regarding the technique or the reproducibility of the same calculation method on different machines. This article reviews the state of the art of the question and delineates possible future lines to standardise this non-invasive method of hepatic iron quantification
Iron overload in the liver diagnostic and quantification
Energy Technology Data Exchange (ETDEWEB)
Alustiza, Jose M. [Osatek SA, P Dr. Beguiristain 109, 20014, San Sebastian, Guipuzcoa (Spain)]. E-mail: jmalustiza@osatek.es; Castiella, Agustin [Osatek SA, P Dr. Beguiristain 109, 20014, San Sebastian, Guipuzcoa (Spain); Juan, Maria D. de [Osatek SA, P Dr. Beguiristain 109, 20014, San Sebastian, Guipuzcoa (Spain); Emparanza, Jose I. [Osatek SA, P Dr. Beguiristain 109, 20014, San Sebastian, Guipuzcoa (Spain); Artetxe, Jose [Osatek SA, P Dr. Beguiristain 109, 20014, San Sebastian, Guipuzcoa (Spain); Uranga, Maite [Osatek SA, P Dr. Beguiristain 109, 20014, San Sebastian, Guipuzcoa (Spain)
2007-03-15
Hereditary Hemochromatosis is the most frequent modality of iron overload. Since 1996, genetic tests have significantly facilitated the non-invasive diagnosis of the disease. There are, however, many cases of negative genetic tests that require confirmation by hepatic iron quantification, which is traditionally performed by hepatic biopsy. Many studies have demonstrated the possibility of performing hepatic iron quantification with Magnetic Resonance. However, a consensus has not yet been reached regarding the technique or the reproducibility of the same calculation method on different machines. This article reviews the state of the art of the question and delineates possible future lines to standardise this non-invasive method of hepatic iron quantification.
Directory of Open Access Journals (Sweden)
Ravi P. Agarwal
2005-10-01
Full Text Available New Leray-Schauder alternatives are presented for Mönch-type maps defined between Fréchet spaces. The proof relies on viewing a Fréchet space as the projective limit of a sequence of Banach spaces.
Absolute Quantification of Somatic DNA Alterations in Human Cancer - Scott Carter, TCGA Scientific Symposium 2011
Light element quantification by lithium elastic scattering
Energy Technology Data Exchange (ETDEWEB)
Portillo, F.E. [Departamento de Física, Universidad Simón Bolívar, Caracas (Venezuela, Bolivarian Republic of); Liendo, J.A., E-mail: jliendo@usb.ve [Departamento de Física, Universidad Simón Bolívar, Caracas (Venezuela, Bolivarian Republic of); González, A.C. [Centro de Física, Instituto Venezolano de Investigaciones Científicas, Caracas (Venezuela, Bolivarian Republic of); Caussyn, D.D.; Fletcher, N.R.; Momotyuk, O.A.; Roeder, B.T.; Wiedenhoever, I.; Kemper, K.W.; Barber, P. [Physics Department, The Florida State University, Tallahassee, FL (United States); Sajo-Bohus, L. [Departamento de Física, Universidad Simón Bolívar, Caracas (Venezuela, Bolivarian Republic of)
2013-06-15
Accurate differential cross sections have been measured at specific beam energies and angles to be used in a method proposed previously for the simultaneous quantification of light elements (Z<11) present in evaporated liquid biological samples. Targets containing {sup 1}H, {sup 7}Li, {sup 12}C, {sup 16}O, {sup 19}F, {sup 28}Si and {sup 197}Au have been bombarded with 13 MeV {sup 6}Li{sup 3+} and 20 MeV {sup 16}O{sup 5+} beams. The {sup 16}O + {sup 1}H, {sup 16}O + {sup 12}C, {sup 16}O + {sup 16}O, {sup 16}O + {sup 19}F, {sup 16}O + {sup 28}Si and {sup 16}O + {sup 197}Au cross sections, shown to be consistent with the Rutherford formula predictions at 15° and 20°, have been used to determine cross sections for the {sup 6}Li + {sup 1}H, {sup 6}Li + {sup 12}C, {sup 6}Li + {sup 16}O, {sup 6}Li + {sup 19}F, {sup 6}Li + {sup 28}Si and {sup 6}Li + {sup 197}Au scatterings respectively at 17.5°, 24°, 25°, 26°, 28° and 30°. Although {sup 6}Li + {sup 7}Li cross sections have not been obtained from {sup 16}O + {sup 7} Li cross sections, they have been determined from measured {sup 6}Li + {sup 19}F cross sections and, in addition, used to obtain {sup 16}O + {sup 7}Li cross sections at 15° and 20°. The reliability of the new cross sections determined in this investigation for the {sup 6}Li + {sup 1}H, {sup 6}Li + {sup 7}Li and {sup 6}Li + {sup 19}F scatterings is based on the Rutherford behavior of the measured {sup 6}Li + {sup 197}Au scattering data as expected and the consistency observed between the {sup 6}Li + {sup 12}C, {sup 6}Li + {sup 16}O and {sup 6}Li + {sup 28}Si cross sections obtained in this work and previously reported values. This research has important implications in applied physics.
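The Rutherford formula used as the consistency check for the heavy-target data has a closed form; a sketch in the centre-of-mass frame, treating the 6Li + 197Au lab angle as approximately the CM angle for the heavy target:

```python
from math import sin, radians

# Rutherford differential cross section (centre-of-mass frame):
#   dsigma/dOmega = (Z1*Z2*e^2 / (4*E))^2 / sin^4(theta/2),  e^2 = 1.44 MeV fm
def rutherford_mb_sr(z1, z2, e_mev, theta_deg):
    a = z1 * z2 * 1.44 / (4 * e_mev)                       # fm
    return 10 * a**2 / sin(radians(theta_deg / 2)) ** 4    # 1 fm^2/sr = 10 mb/sr

# e.g. 13 MeV 6Li on 197Au at 30 degrees (lab angle ~ CM angle for a heavy target)
print(rutherford_mb_sr(3, 79, 13.0, 30.0))
```

The steep sin^-4(theta/2) dependence is why the heavy-target (197Au) data provide a sensitive check of the measured cross sections.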
Uncertainty Quantification Techniques of SCALE/TSUNAMI
Energy Technology Data Exchange (ETDEWEB)
Rearden, Bradley T [ORNL; Mueller, Don [ORNL
2011-01-01
The Standardized Computer Analysis for Licensing Evaluation (SCALE) code system developed at Oak Ridge National Laboratory (ORNL) includes Tools for Sensitivity and Uncertainty Analysis Methodology Implementation (TSUNAMI). The TSUNAMI code suite can quantify the predicted change in system responses, such as k{sub eff}, reactivity differences, or ratios of fluxes or reaction rates, due to changes in the energy-dependent, nuclide-reaction-specific cross-section data. Where uncertainties in the neutron cross-section data are available, the sensitivity of the system to the cross-section data can be applied to propagate the uncertainties in the cross-section data to an uncertainty in the system response. Uncertainty quantification is useful for identifying potential sources of computational biases and highlighting parameters important to code validation. Traditional validation techniques often examine one or more average physical parameters to characterize a system and identify applicable benchmark experiments. However, with TSUNAMI correlation coefficients are developed by propagating the uncertainties in neutron cross-section data to uncertainties in the computed responses for experiments and safety applications through sensitivity coefficients. The bias in the experiments, as a function of their correlation coefficient with the intended application, is extrapolated to predict the bias and bias uncertainty in the application through trending analysis or generalized linear least squares techniques, often referred to as 'data adjustment.' Even with advanced tools to identify benchmark experiments, analysts occasionally find that the application models include some feature or material for which adequately similar benchmark experiments do not exist to support validation. For example, a criticality safety analyst may want to take credit for the presence of fission products in spent nuclear fuel. 
In such cases, analysts sometimes rely on 'expert judgment' to select an additional administrative margin to account for gaps in the validation data or to conclude that the impact on the calculated bias and bias uncertainty is negligible. As a result of advances in computer programs and the evolution of cross-section covariance data, analysts can use the sensitivity and uncertainty analysis tools in the TSUNAMI codes to estimate the potential impact on the application-specific bias and bias uncertainty resulting from nuclides not represented in available benchmark experiments. This paper presents the application of methods described in a companion paper.
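The propagation step described above is the standard 'sandwich rule', var(k) = S C S^T, with S the sensitivity coefficients and C the cross-section covariance matrix; the matrices below are toy values, not evaluated nuclear data:

```python
import numpy as np

# Sandwich rule for uncertainty propagation (toy values, not evaluated data):
#   var(k_eff) = S C S^T
S = np.array([[0.3, -0.1, 0.05]])          # sensitivities dk/k per dx/x
C = np.array([[4.0e-4, 1.0e-4, 0.0],
              [1.0e-4, 9.0e-4, 0.0],
              [0.0,    0.0,    1.0e-4]])   # relative covariance matrix

var = (S @ C @ S.T)[0, 0]
print(round(np.sqrt(var) * 100, 3))        # % uncertainty in k_eff
```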
Uncertainty Quantification Techniques of SCALE/TSUNAMI
International Nuclear Information System (INIS)
The Standardized Computer Analysis for Licensing Evaluation (SCALE) code system developed at Oak Ridge National Laboratory (ORNL) includes Tools for Sensitivity and Uncertainty Analysis Methodology Implementation (TSUNAMI). The TSUNAMI code suite can quantify the predicted change in system responses, such as keff, reactivity differences, or ratios of fluxes or reaction rates, due to changes in the energy-dependent, nuclide-reaction-specific cross-section data. Where uncertainties in the neutron cross-section data are available, the sensitivity of the system to the cross-section data can be applied to propagate the uncertainties in the cross-section data to an uncertainty in the system response. Uncertainty quantification is useful for identifying potential sources of computational biases and highlighting parameters important to code validation. Traditional validation techniques often examine one or more average physical parameters to characterize a system and identify applicable benchmark experiments. However, with TSUNAMI correlation coefficients are developed by propagating the uncertainties in neutron cross-section data to uncertainties in the computed responses for experiments and safety applications through sensitivity coefficients. The bias in the experiments, as a function of their correlation coefficient with the intended application, is extrapolated to predict the bias and bias uncertainty in the application through trending analysis or generalized linear least squares techniques, often referred to as 'data adjustment.' Even with advanced tools to identify benchmark experiments, analysts occasionally find that the application models include some feature or material for which adequately similar benchmark experiments do not exist to support validation. For example, a criticality safety analyst may want to take credit for the presence of fission products in spent nuclear fuel.
In such cases, analysts sometimes rely on 'expert judgment' to select an additional administrative margin to account for gaps in the validation data or to conclude that the impact on the calculated bias and bias uncertainty is negligible. As a result of advances in computer programs and the evolution of cross-section covariance data, analysts can use the sensitivity and uncertainty analysis tools in the TSUNAMI codes to estimate the potential impact on the application-specific bias and bias uncertainty resulting from nuclides not represented in available benchmark experiments. This paper presents the application of methods described in a companion paper.
Rapid and portable electrochemical quantification of phosphorus.
Kolliopoulos, Athanasios V; Kampouris, Dimitrios K; Banks, Craig E
2015-04-21
Phosphorus is one of the key indicators of eutrophication levels in natural waters, where it exists mainly as dissolved phosphorus. Various analytical protocols exist to provide offsite analysis, but a point-of-site analysis is required. The current standard method recommended by the Environmental Protection Agency (EPA) for the detection of total phosphorus is colorimetric and based upon the color of a phosphomolybdate complex formed as a result of the reaction between orthophosphate and molybdate ions, where ascorbic acid and antimony potassium tartrate are added and serve as reducing agents. Prior to the measurements, all forms of phosphorus are converted into orthophosphates via sample digestion (heating and acidifying). The work presented here details an electrochemical adaptation of this EPA-recommended colorimetric approach for the measurement of dissolved phosphorus in water samples using screen-printed graphite macroelectrodes for the first time. This novel indirect electrochemical sensing protocol allows the determination of orthophosphates over the range from 0.5 to 20 μg L(-1) in ideal pH 1 solutions utilizing cyclic voltammetry, with a limit of detection (3σ) found to correspond to 0.3 μg L(-1) of phosphorus. The reaction time and the influence of foreign ions (potential interferents) upon this electroanalytical protocol were also investigated, where it was found that a reaction time of 5 min, which is essential in the standard colorimetric approach, is not required in the new proposed electrochemically adapted protocol. The proposed electrochemical method was independently validated through the quantification of orthophosphates and total dissolved phosphorus in polluted water samples (canal water samples) with ion chromatography and ICP-OES, respectively. This novel electrochemical protocol exhibits advantages over the established EPA-recommended colorimetric determination of total phosphorus, with lower detection limits and shorter experimental times.
Additionally this electrochemical adaptation allows the determination of dissolved phosphorus without the use of ascorbic acid and antimony potassium tartrate as reducing agents (as used in the colorimetric method). The potential portability of this protocol is demonstrated in the development of the PhosQuant electrochemical device and provides a portable device for the rapid electrochemical detection of dissolved phosphorus using screen-printed electrodes. PMID:25856498
International Nuclear Information System (INIS)
Fourth-order spatial interference of entangled photon pairs generated in the process of spontaneous parametric down-conversion pumped by a femtosecond pulse laser has been performed for the first time. A theory that takes into account the transverse correlation between the two photons is used to calculate the visibility of the interference pattern obtained in Young's double-slit experiment. In the experiment, a short-focal-length lens and two narrow-band interference filters were adopted to eliminate the effects of the broadband pump laser and improve the visibility of the interference pattern under the condition of nearly collinear and degenerate phase matching
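The visibility being calculated is the standard fringe contrast V = (I_max - I_min)/(I_max + I_min); a one-line sketch with made-up intensities:

```python
# Standard fringe visibility (intensity values are made up for illustration):
#   V = (I_max - I_min) / (I_max + I_min)
i_max, i_min = 950.0, 50.0        # counts at a fringe maximum and minimum
visibility = (i_max - i_min) / (i_max + i_min)
print(visibility)                 # -> 0.9
```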
Chang, Guo-En; Chang, Shu-Wei; Chuang, Shun Lien
2009-07-01
We propose and develop a theoretical gain model for an n-doped, tensile-strained Ge-Si(x)Ge(y)Sn(1-x-y) quantum-well laser. Tensile strain and n doping in Ge active layers can help achieve population inversion in the direct conduction band and provide optical gain. We show our theoretical model for the bandgap structure, the polarization-dependent optical gain spectrum, and the free-carrier absorption of the n-type doped, tensile-strained Ge quantum-well laser. Despite the free-carrier absorption due to the n-type doping, a significant net gain can be obtained from the direct transition. We also present our waveguide design and calculate the optical confinement factors to estimate the modal gain and predict the threshold carrier density. PMID:19582037
Quantization via Linear homotopy types
Schreiber, Urs
2014-01-01
In the foundational logical framework of homotopy-type theory we discuss a natural formalization of secondary integral transforms in stable geometric homotopy theory. We observe that this yields a process of non-perturbative cohomological quantization of local pre-quantum field theory; and show that quantum anomaly cancellation amounts to realizing this as the boundary of a field theory that is given by genuine (primary) integral transforms, hence by linear polynomial functors. Recalling that traditional linear logic has semantics in symmetric monoidal categories and serves to formalize quantum mechanics, what we consider is its refinement to linear homotopy-type theory with semantics in stable infinity-categories of bundles of stable homotopy types (generalized cohomology theories) formalizing Lagrangian quantum field theory, following Nuiten and closely related to recent work by Haugseng and Hopkins-Lurie. For the reader interested in technical problems of quantization we provide non-perturbative quantizati...
Ljung, Sofia; Olsson, Cecilia; Rask, Merith; Lindahl, Bernt
2013-01-01
BACKGROUND: Cardiovascular disease and type 2 diabetes are two of the most common public health diseases, and up to 80 % of the cases may be prevented by lifestyle modification. The physiological effects of lifestyle-focused treatment are relatively well studied, but how patients actually experience such treatments is still rather unclear. PURPOSE: The aim of this study was to explore how patients experience lifestyle-focused group treatment in primary and secondary prevention of cardiovascul...
Multiparty Symmetric Sum Types
Directory of Open Access Journals (Sweden)
Lasse Nielsen
2010-11-01
This paper introduces a new theory of multiparty session types based on symmetric sum types, by which we can type non-deterministic orchestration choice behaviours. While the original branching type in session types can represent a choice made by a single participant and accepted by others, determining how the session proceeds, the symmetric sum type represents a choice made by agreement among all the participants of a session. Such behaviour can be found in many practical systems, including collaborative workflow in healthcare systems for clinical practice guidelines (CPGs). Processes using the symmetric sums can be embedded into the original branching types using conductor processes. We show that this type-driven embedding preserves typability, satisfies semantic soundness and completeness, and meets the encodability criteria adapted to the typed setting. The theory leads to an efficient implementation of a prototypical tool for CPGs which automatically translates the original CPG specifications from a representation called the Process Matrix to symmetric sum types, type-checks programs, and executes them.
Multiparty Symmetric Sum Types
DEFF Research Database (Denmark)
Nielsen, Lasse; Yoshida, Nobuko
2010-01-01
This paper introduces a new theory of multiparty session types based on symmetric sum types, by which we can type non-deterministic orchestration choice behaviours. While the original branching type in session types can represent a choice made by a single participant and accepted by others determining how the session proceeds, the symmetric sum type represents a choice made by agreement among all the participants of a session. Such behaviour can be found in many practical systems, including collaborative workflow in healthcare systems for clinical practice guidelines (CPGs). Processes with the symmetric sums can be embedded into the original branching types using conductor processes. We show that this type-driven embedding preserves typability, satisfies semantic soundness and completeness, and meets the encodability criteria adapted to the typed setting. The theory leads to an efficient implementation of a prototypical tool for CPGs which automatically translates the original CPG specifications from a representation called the Process Matrix to symmetric sum types, type checks programs and executes them.
Analyzing Social Interactions: Promises and Challenges of Cross Recurrence Quantification Analysis
DEFF Research Database (Denmark)
Fusaroli, Riccardo; Konvalinka, Ivana
2014-01-01
The scientific investigation of social interactions presents substantial challenges: interacting agents engage each other at many different levels and timescales (motor and physiological coordination, joint attention, linguistic exchanges, etc.), often making their behaviors interdependent in non-linear ways. In this paper we review the current use of Cross Recurrence Quantification Analysis (CRQA) in the analysis of social interactions, and assess its potential and challenges. We argue that the method can sensitively grasp the dynamics of human interactions, and that it has started producing valuable knowledge about them. However, much work is still necessary: more systematic analyses and interpretation of the recurrence indexes and more consistent reporting of the results, more emphasis on theory-driven studies, exploring interactions involving more than 2 agents and multiple aspects of coordination, and assessing and quantifying complementary coordinative mechanisms. These challenges are discussed and operationalized in recommendations to further develop the field.
Energy Technology Data Exchange (ETDEWEB)
Naranjo, Alberto R.; Otero, Maria Elena M.; Poveda, Aylin G. [Higher Institute of Technologies and Applied Sciences, Havana City (Cuba)]. E-mails: rolo@instec.cu; mmontesi@instec.cu; Guerra, Alexeis C. [University of Informatic Sciences, Havana City (Cuba)]. E-mail: alexeis@uci.cu
2007-07-01
The application of non-linear dynamic methods in many scientific fields has demonstrated their great potential for the early detection of significant dynamic singularities. The introduction of these methods for the surveillance of anomalies and failures in nuclear reactors and their fundamental equipment has been demonstrated in recent years. Specifically, the Recurrence Plot and its Quantification Analysis are methods currently used in many scientific fields. The paper focuses on the estimation of Recurrence Plots and their Quantification Analysis applied to signal samples obtained from different types of reactors: the research reactor TRIGA MARK-III, a BWR/5 and a PHWR. Different behaviors are compared in order to look for a pattern for the characterization of power instability events in nuclear reactors. These outputs are of great importance for application in surveillance and monitoring systems in Nuclear Power Plants. For its introduction into a real-time monitoring system, the authors propose some useful approaches. The results indicate the potential of the method for implementation in a surveillance and monitoring system in Nuclear Power Plants. All the calculations were performed with two computational tools developed by Marwan: the Cross Recurrence Plot Toolbox for Matlab (Version 5.7, Release 22) and Visual Recurrence Analysis (Version 4.8). (author)
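The recurrence-plot machinery used in the study above reduces, in its simplest form, to thresholding pairwise distances between samples. Below is a minimal numpy sketch of a recurrence matrix and the recurrence rate (the most basic quantification measure), assuming a plain 1-D signal with no time-delay embedding; it is an illustration, not the Marwan toolboxes the authors actually used:

```python
import numpy as np

def recurrence_matrix(x, eps):
    """Binary recurrence matrix: R[i, j] = 1 when |x_i - x_j| <= eps."""
    x = np.asarray(x, dtype=float)
    dist = np.abs(x[:, None] - x[None, :])  # all pairwise distances
    return (dist <= eps).astype(int)

def recurrence_rate(R):
    """Fraction of recurrent points, the simplest RQA measure."""
    return R.mean()

# A periodic signal revisits its own states, so off-diagonal
# recurrence structure appears (hypothetical toy data).
t = np.linspace(0.0, 8.0 * np.pi, 200)
R = recurrence_matrix(np.sin(t), eps=0.1)
rr = recurrence_rate(R)  # strictly between 0 and 1 here; the diagonal is always recurrent
```

Full RQA additionally derives measures such as determinism and laminarity from the diagonal and vertical line structures of R, and reactor time series would first be embedded in phase space.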
Marangon, Iris; Boggetto, Nicole; Ménard-Moyon, Cécilia; Luciani, Nathalie; Wilhelm, Claire; Bianco, Alberto; Gazeau, Florence
2013-01-01
Carbon-based nanomaterials, like carbon nanotubes (CNTs), belong to a class of nanoparticles that are very difficult to discriminate from carbon-rich cell structures, and in fact there is still no quantitative method to assess their distribution at the cell and tissue levels. What we propose here is an innovative method allowing the detection and quantification of CNTs in cells using a multispectral imaging flow cytometer (ImageStream, Amnis). This newly developed device combines high cell throughput with high-resolution imaging, providing an image of each cell directly in flow and therefore statistically relevant image analysis. Each cell image is acquired on bright-field (BF), dark-field (DF), and fluorescence channels, giving access respectively to the level and distribution of light absorption, light scattering and fluorescence for each cell. The analysis then consists of a pixel-by-pixel comparison of the images of the 7,000-10,000 cells acquired for each condition of the experiment. Localization and quantification of CNTs are made possible by particular intrinsic properties of CNTs, namely strong light absorbance and scattering: CNTs appear as strongly absorbing dark spots on BF and bright spots on DF, with precise colocalization. This methodology could have a considerable impact on studies of interactions between nanomaterials and cells, given that the protocol is applicable to a large range of nanomaterials, insofar as they absorb and/or scatter light strongly enough. PMID:24378540
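The pixel-by-pixel BF/DF comparison described above amounts to intersecting two threshold masks. A toy sketch follows; the images, thresholds and function name are hypothetical and do not reproduce the Amnis analysis pipeline:

```python
import numpy as np

def cnt_colocalization(bf, df, bf_thresh, df_thresh):
    """Flag pixels that are dark on bright-field AND bright on dark-field,
    the CNT signature described in the abstract."""
    dark_on_bf = bf < bf_thresh
    bright_on_df = df > df_thresh
    coloc = dark_on_bf & bright_on_df
    return coloc, int(coloc.sum())

# Toy 4x4 "cell images": one pixel carries the CNT signature.
bf = np.full((4, 4), 200); bf[1, 2] = 20    # dark spot on bright-field
df = np.full((4, 4), 10);  df[1, 2] = 250   # bright spot on dark-field
mask, n_pixels = cnt_colocalization(bf, df, bf_thresh=100, df_thresh=100)
```

A per-cell CNT load could then be scored from the number (or integrated intensity) of colocalized pixels across the thousands of cell images acquired in flow.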
Suitability of Tedlar gas sampling bags for siloxane quantification in landfill gas.
Ajhar, M; Wens, B; Stollenwerk, K H; Spalding, G; Yüce, S; Melin, T
2010-06-30
Landfill or digester gas can contain man-made volatile methylsiloxanes (VMS), usually in the range of a few milligrams per normal cubic metre (Nm(3)). To date, no standard method for siloxane quantification exists, and there is controversy with respect to which sampling procedure is most suitable. This paper presents an analytical and a sampling procedure for the quantification of common VMS in biogas via GC-MS and polyvinyl fluoride (Tedlar) bags. Two commercially available Tedlar bag models are studied: one is equipped with a polypropylene valve with an integrated septum, the other with a dual-port fitting made from stainless steel. Siloxane recovery in landfill gas samples is investigated as a function of storage time, temperature, surface-to-volume ratio and background gas. Recovery was found to depend on the type of fitting employed. The siloxanes sampled in the bag with the polypropylene valve show high and stable recovery, even after more than 30 days. Sufficiently low detection limits below 10 microg Nm(-3) and good reproducibility can be achieved. The method is therefore well suited to biogas, greatly facilitating sampling in comparison with other common techniques involving siloxane enrichment on sorption media. PMID:20685441
DEFF Research Database (Denmark)
Rantanen, Jukka; Wikström, Håkan
2005-01-01
Different spectroscopic approaches have proved to be excellent analytical tools for monitoring process-induced transformations of active pharmaceutical ingredients during pharmaceutical unit operations. In order to use these tools effectively, it is necessary to build calibration models that describe the relationship between the amount of each solid-state form of interest and the spectroscopic signal. In this study, near-infrared (NIR) and Raman spectroscopic methods have been evaluated for the quantification of hydrate and anhydrate forms in pharmaceutical powders. Process-type spectrometers were used to collect the data and the role of the sampling procedure was examined. Multivariate regression models were compared with traditional univariate calibrations and special emphasis was placed on data treatment prior to multivariate modeling by partial least squares (PLS). It was found that the measured sample volume greatly affected the performance of the model, whereby the calibrations were significantly improved by utilizing a larger sampling area. In addition, multivariate regression did not always improve the predictability of the data compared to univariate analysis. The data treatment prior to multivariate modeling had a significant influence on the quality of predictions, with standard normal variate transformation generally proving to be the best preprocessing method. When the appropriate sampling techniques and data analysis methods were utilized, both NIR and Raman spectroscopy were found to be suitable methods for the quantification of anhydrate/hydrate in powder systems, and thus the method of choice will depend on the conditions in the process under investigation.
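The preprocessing step this abstract singles out, standard normal variate (SNV) transformation, centers and scales each spectrum individually, removing additive baseline and multiplicative scatter effects before PLS modeling. A short numpy sketch on hypothetical data, not the authors' calibration code:

```python
import numpy as np

def snv(spectra):
    """Standard normal variate: subtract each spectrum's own mean and
    divide by its own standard deviation (row-wise)."""
    spectra = np.asarray(spectra, dtype=float)
    mean = spectra.mean(axis=1, keepdims=True)
    std = spectra.std(axis=1, keepdims=True)
    return (spectra - mean) / std

# Two toy "spectra" sharing one peak shape but with different baseline
# offsets and scaling, mimicking scatter effects.
base = np.sin(np.linspace(0.0, np.pi, 50))
spectra = np.vstack([2.0 * base + 5.0,
                     0.5 * base - 1.0])
corrected = snv(spectra)
```

Because both toy spectra are positive-scale affine transforms of the same shape, SNV maps them onto identical curves, which is exactly why it helps calibration models see composition rather than scatter.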
Fee, James A.; Case, David A.; Noodleman, Louis
2008-01-01
A mechanism for proton pumping by the B-type cytochrome c oxidases is presented in which one proton is pumped in conjunction with the weakly exergonic, two-electron reduction of Fe-bound O2 to the Fe-Cu bridging peroxodianion, and three protons are pumped in conjunction with the highly exergonic, two-electron reduction of Fe(III)-O-O-Cu(II) to form water and the active oxidized enzyme, Fe(III)-OH, Cu(II). The scheme is based on the active site structure of cytochrome ba3 from Thermus therm...
Reliability quantification and visualization for electric microgrids
Panwar, Mayank
The electric grid in the United States is undergoing modernization from the aging infrastructure of the past to a more robust and reliable power system of the future. The primary efforts in this direction have come from the federal government through the American Recovery and Reinvestment Act of 2009 (Recovery Act). This provided the U.S. Department of Energy (DOE) with $4.5 billion to develop and implement programs through DOE's Office of Electricity Delivery and Energy Reliability (OE) over a period of 5 years (2008-2012). This was initially part of Title XIII of the Energy Independence and Security Act of 2007 (EISA), which was later modified by the Recovery Act. As part of DOE's Smart Grid Programs, the Smart Grid Investment Grants (SGIG) and Smart Grid Demonstration Projects (SGDP) were developed as two of the largest programs, with federal grants of $3.4 billion and $600 million respectively. The Renewable and Distributed Systems Integration (RDSI) demonstration projects were launched in 2008 with the aim of reducing peak electricity demand by 15 percent at distribution feeders. Nine such projects, located around the nation, were competitively selected. The City of Fort Collins, in cooperative partnership with other federal and commercial entities, was identified to research, develop and demonstrate a 3.5 MW integrated mix of heterogeneous distributed energy resources (DER) to reduce peak load on two feeders by 20-30 percent. This project was called FortZED RDSI and provided an opportunity to demonstrate the integrated operation of a group of assets, including demand response (DR), as a single controllable entity, which is often called a microgrid.
As per IEEE Standard 1547.4-2011 (IEEE Guide for Design, Operation, and Integration of Distributed Resource Island Systems with Electric Power Systems), a microgrid can be defined as an electric power system with the following characteristics: (1) DR and load are present, (2) it has the ability to disconnect from and parallel with the area Electric Power System (EPS), (3) it includes the local EPS and may include portions of the area EPS, and (4) it is intentionally planned. A more reliable electric power grid requires microgrids to operate in tandem with the EPS. Reliability can be quantified through various performance metrics; in North America this is done through North American Electric Reliability Corporation (NERC) metrics. A microgrid differs significantly from the traditional EPS, especially at the asset level, due to heterogeneity in assets, so its performance cannot be quantified by the same metrics used for the EPS. Some of the NERC metrics are calculated and interpreted in this work to quantify performance for a single asset and for a group of assets in a microgrid. Two more metrics are introduced for system-level performance quantification. The next step is a better representation of the large amount of data generated by the microgrid. Visualization is one such form of representation; it is explored in detail, and a graphical user interface (GUI) is developed as a deliverable tool for the operator for informative decision making and planning. Electronic Appendices I and II contain data and MATLAB program codes for the analysis and visualization in this work.
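As an illustration of metric-based reliability quantification, the classic distribution indices SAIFI and SAIDI (defined in IEEE Std 1366, related to but distinct from the NERC metrics the thesis adapts) can be computed from outage records; the event data below are made up:

```python
# Classic distribution reliability indices (IEEE Std 1366), shown only as
# an illustration of metric-based performance quantification.

def saifi(interruptions, total_customers):
    """System Average Interruption Frequency Index:
    total customer interruptions / total customers served."""
    return sum(n for n, _ in interruptions) / total_customers

def saidi(interruptions, total_customers):
    """System Average Interruption Duration Index:
    total customer-minutes interrupted / total customers served."""
    return sum(n * minutes for n, minutes in interruptions) / total_customers

# Hypothetical year of outage events: (customers affected, outage minutes).
events = [(500, 90), (1200, 30), (300, 240)]
customers = 10_000
freq = saifi(events, customers)  # interruptions per customer per year
dur = saidi(events, customers)   # minutes per customer per year
```

For these numbers, SAIFI works out to 0.2 interruptions per customer and SAIDI to 15.3 minutes per customer per year; asset-level and microgrid-level metrics in the thesis follow the same ratio-of-aggregates pattern.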
Standardless quantification by parameter optimization in electron probe microanalysis
International Nuclear Information System (INIS)
A method for standardless quantification by parameter optimization in electron probe microanalysis is presented. The method consists in minimizing the quadratic differences between an experimental spectrum and an analytical function proposed to describe it, by optimizing the parameters involved in the analytical prediction. This algorithm, implemented in the software POEMA (Parameter Optimization in Electron Probe Microanalysis), allows the determination of the elemental concentrations along with their uncertainties. The method was tested on a set of 159 elemental constituents corresponding to 36 spectra of standards (mostly minerals) that include trace elements. The results were compared with those obtained with the commercial software GENESIS Spectrum® for standardless quantification. The quantifications performed with the method proposed here are better in 74% of the cases studied. In addition, the performance of the proposed method is compared with the first-principles standardless analysis procedure DTSA for a different data set, which excludes trace elements. The relative deviations with respect to the nominal concentrations are lower than 0.04, 0.08 and 0.35 for 66% of the cases for POEMA, GENESIS and DTSA, respectively. - Highlights: • A method for standardless quantification in EPMA is presented. • It gives better results than the commercial software GENESIS Spectrum. • It gives better results than the software DTSA. • It allows the determination of the conductive coating thickness. • It gives an estimation of the concentration uncertainties.
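The fitting idea behind this approach, minimizing the quadratic difference between a measured spectrum and an analytical model, can be sketched in the linear special case where peak positions and widths are known and only the line intensities (standing in for concentrations) are free. This is a hypothetical two-line toy, not POEMA itself, which also optimizes nonlinear parameters such as background and peak shape:

```python
import numpy as np

def gaussian(e, center, width):
    """Normalized-shape Gaussian characteristic line."""
    return np.exp(-0.5 * ((e - center) / width) ** 2)

# Hypothetical spectrum built from two characteristic lines of known
# position and width; the unknowns are the two line intensities.
energy = np.linspace(0.0, 10.0, 500)
lines = np.column_stack([gaussian(energy, 3.0, 0.2),
                         gaussian(energy, 7.0, 0.2)])
true_intensities = np.array([5.0, 2.0])
spectrum = lines @ true_intensities  # noise-free "measurement"

# Minimize the quadratic difference ||lines @ p - spectrum||^2 over p.
fitted, *_ = np.linalg.lstsq(lines, spectrum, rcond=None)
```

With noise-free data the least-squares solution recovers the true intensities exactly; in the real method the quadratic objective is minimized over all model parameters, which is what also yields uncertainty estimates.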
Current Issues in the Quantification of Federal Reserved Water Rights
Brookshire, David S.; Watts, Gary L.; Merrill, James L.
1985-11-01
This paper examines the quantification of federal reserved water rights from legal, institutional, and economic perspectives. Special attention is directed toward Indian reserved water rights and the concept of practicably irrigable acreage. We conclude by examining current trends and exploring alternative approaches to the dilemma of quantifying Indian reserved water rights.
Study on crack leak quantification of pressure pipeline
International Nuclear Information System (INIS)
Based on an analysis of the attenuation characteristics of leak acoustic emission signals, the attenuation constant and its correlation with the leak rate were obtained from pipeline crack-leak experiments. A method for calculating the crack leak rate of a pressure pipeline is presented. The experiments show that the method is effective for crack-leak quantification in pressure pipelines. (authors)
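Estimating an attenuation constant of the kind mentioned above is commonly done with an exponential model A(x) = A0 * exp(-alpha * x) for acoustic-emission amplitude versus propagation distance, fitted log-linearly. The model form and all values below are assumptions for illustration, not the experimental law established in the paper:

```python
import numpy as np

# Synthetic acoustic-emission amplitudes at sensors along a pipe,
# generated from the assumed law A(x) = A0 * exp(-alpha * x).
alpha_true, a0_true = 0.15, 2.0               # 1/m and volts, made-up values
x = np.array([0.5, 1.0, 2.0, 4.0, 6.0])       # sensor distances, m
amplitude = a0_true * np.exp(-alpha_true * x)

# Log-linear least squares: ln A = ln A0 - alpha * x.
slope, intercept = np.polyfit(x, np.log(amplitude), 1)
alpha_est = -slope
a0_est = np.exp(intercept)
```

Once alpha is known, a measured amplitude at a known sensor distance can be back-projected to the source amplitude, which is the quantity one would correlate with leak rate.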
Leishmania parasite detection and quantification using PCR-ELISA.
Czech Academy of Sciences Publication Activity Database
Kobets, Tetyana; Badalová, Jana; Grekov, Igor; Havelková, Helena; Lipoldová, Marie
2010-01-01
Roč. 5, č. 6 (2010), s. 1074-1080. ISSN 1754-2189 R&D Projects: GA ČR GA310/08/1697; GA MŠk(CZ) LC06009 Institutional research plan: CEZ:AV0Z50520514 Keywords: polymerase chain reaction * Leishmania major infection * parasite quantification Subject RIV: EB - Genetics; Molecular Biology Impact factor: 8.362, year: 2010