WorldWideScience

Sample records for rate theory model

  1. Rate theory

    International Nuclear Information System (INIS)

    Maillard, S.; Skorek, R.; Maugis, P.; Dumont, M.

    2015-01-01

This chapter presents the basic principles of cluster dynamics as a particular case of the mesoscopic rate theory models developed to investigate fuel behaviour under irradiation, such as in UO2. Because this method simulates the evolution of the concentration of every type of point or aggregated defect in a grain of material, it produces rich information that sheds light on mechanisms of microstructure evolution and gas behaviour that are not accessible through conventional models, yet can provide for improvements in those models. Cluster dynamics parameters are mainly the energy values governing the basic evolution mechanisms of the material (diffusion, trapping and thermal resolution). In this sense, the model is applicable to very different operational situations (irradiation, ion-beam implantation, annealing), provided that they rely on the same basic mechanisms, without requiring the additional data fitting demanded by more empirical conventional models. When applied to krypton-implanted and annealed samples, this technique yields a precise interpretation of the release curves and helps assess migration mechanisms and the krypton diffusion coefficient, for which data are very difficult to obtain owing to the low solubility of the gas. (authors)
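As an illustration of the cluster-dynamics idea described in this record, the sketch below is a toy Python model (made-up rate constants and a size-independent absorption coefficient, not the authors' code): it tracks vacancy-cluster concentrations under a constant point-defect production rate, with single vacancies aggregating onto existing clusters. Real models use size- and temperature-dependent rates derived from diffusion and binding energies.

```python
import numpy as np

N = 50            # largest cluster size tracked
G = 1e-6          # production rate of single vacancies (per site per s)
k_abs = 1e-3      # assumed absorption rate coefficient (size-independent)
dt, steps = 0.1, 10000

c = np.zeros(N + 1)   # c[n]: concentration of size-n clusters (c[0] unused)
for _ in range(steps):
    dc = np.zeros_like(c)
    dc[1] += G                       # single vacancies produced by irradiation
    # growth: a size-n cluster absorbs a single vacancy -> size n+1
    growth = k_abs * c[1] * c[1:N]   # growth[0] is dimerization (n = 1)
    dc[1] -= growth.sum()            # one monomer consumed per event
    dc[1:N] -= growth                # absorbing cluster leaves size class n
    dc[2:N + 1] += growth            # ...and enters size class n+1
    c += dt * dc

print(c[1], c[2])   # monomer and di-vacancy concentrations
```

A useful sanity check on such a scheme is vacancy conservation: reactions only move vacancies between size classes, so the total vacancy content must equal the integrated production G·t.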

  2. Situated learning theory: adding rate and complexity effects via Kauffman's NK model.

    Science.gov (United States)

    Yuan, Yu; McKelvey, Bill

    2004-01-01

For many firms, producing information, knowledge, and enhancing learning capability have become the primary basis of competitive advantage. A review of organizational learning theory identifies two approaches: (1) those that treat symbolic information processing as fundamental to learning, and (2) those that view the situated nature of cognition as fundamental. After noting that the former is inadequate because it focuses primarily on behavioral and cognitive aspects of individual learning, this paper argues for the importance of studying learning as interactions among people in the context of their environment. It contributes to organizational learning in three ways. First, it argues that situated learning theory is to be preferred over traditional behavioral and cognitive learning theories, because it treats organizations as complex adaptive systems rather than mere information processors. Second, it adds rate and nonlinear learning effects. Third, following model-centered epistemology, it uses an agent-based computational model, in particular a "humanized" version of Kauffman's NK model, to study the situated nature of learning. Using simulation results, we test eight hypotheses extending situated learning theory in new directions. The paper ends with a discussion of possible extensions of the current study to better address key issues in situated learning.
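Kauffman's NK model mentioned in this record is straightforward to sketch. The toy Python version below (illustrative sizes N = 6, K = 2; a plain NK landscape, not the authors' "humanized" variant) builds the random fitness landscape and runs a simple adaptive walk on it:

```python
import random

random.seed(0)
N, K = 6, 2   # N binary traits, each interacting with K others (toy sizes)

# The fitness contribution of locus i depends on its own state plus the
# states of K randomly chosen other loci; contributions are cached random
# draws, as in Kauffman's original construction.
neighbors = [random.sample([j for j in range(N) if j != i], K) for i in range(N)]
table = {}

def fitness(genome):
    total = 0.0
    for i in range(N):
        key = (i, genome[i]) + tuple(genome[j] for j in neighbors[i])
        if key not in table:
            table[key] = random.random()
        total += table[key]
    return total / N    # mean of per-locus contributions, so 0 <= fitness <= 1

# adaptive walk: flip one random bit at a time, keep the flip if fitness improves
genome = [random.randint(0, 1) for _ in range(N)]
f = fitness(genome)
for _ in range(200):
    i = random.randrange(N)
    trial = genome[:]
    trial[i] ^= 1
    ft = fitness(trial)
    if ft > f:
        genome, f = trial, ft
print(f)
```

Increasing K makes the landscape more rugged (more local optima), which is how the NK family lets one tune the complexity effects the paper studies.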

  3. Model theory

    CERN Document Server

    Chang, CC

    2012-01-01

Model theory is a branch of mathematical logic that studies the connections between a formal language and its interpretations, or models. This is the first and most successful textbook in logical model theory. Extensively updated and corrected in 1990 to accommodate developments in model theoretic methods - including classification theory and nonstandard analysis - the third edition added entirely new sections, exercises, and references. Each chapter introduces an individual method and discusses specific applications. Basic methods of constructing models include constants, elementary chains, Sko

  4. Rate Theory Modeling and Simulation of Silicide Fuel at LWR Conditions

    Energy Technology Data Exchange (ETDEWEB)

    Miao, Yinbin [Argonne National Lab. (ANL), Argonne, IL (United States). Nuclear Engineering Division; Ye, Bei [Argonne National Lab. (ANL), Argonne, IL (United States). Nuclear Engineering Division; Hofman, Gerard [Argonne National Lab. (ANL), Argonne, IL (United States). Nuclear Engineering Division; Yacout, Abdellatif [Argonne National Lab. (ANL), Argonne, IL (United States). Nuclear Engineering Division; Gamble, Kyle [Idaho National Lab. (INL), Idaho Falls, ID (United States). Fuel Modeling and Simulation; Mei, Zhi-Gang [Argonne National Lab. (ANL), Argonne, IL (United States). Nuclear Engineering Division

    2016-08-29

As a promising candidate for the accident tolerant fuel (ATF) used in light water reactors (LWRs), uranium silicide (U3Si2) needs its fuel performance at LWR conditions to be well understood. In this report, a rate theory model was developed based on existing experimental data and density functional theory (DFT) calculations so as to predict the fission gas behavior in U3Si2 at LWR conditions. The fission gas behavior of U3Si2 can be divided into three temperature regimes. During steady-state operation, the majority of the fission gas stays in intragranular bubbles; intergranular bubbles become dominant, and fission gas release occurs, only beyond 1000 K. The steady-state rate theory model was also used as a reference to establish a gaseous swelling correlation of U3Si2 for the BISON code. Meanwhile, an overpressurized bubble model was developed so that the fission gas behavior during a loss-of-coolant accident (LOCA) can be simulated. LOCA simulation showed that intragranular bubbles remain dominant after a 70-second LOCA, resulting in controllable gaseous swelling. The fission gas behavior of U3Si2 at LWR conditions is benign according to the rate theory predictions at both steady-state and LOCA conditions, which provides an important reference for the qualification of U3Si2 as an LWR fuel material with excellent fuel performance and enhanced accident tolerance.

  5. Basic Exchange Rate Theories

    NARCIS (Netherlands)

    J.G.M. van Marrewijk (Charles)

    2005-01-01

This four-chapter overview of basic exchange rate theories discusses (i) the elasticity and absorption approach, (ii) the (long-run) implications of the monetary approach, (iii) the short-run effects of monetary and fiscal policy under various economic conditions, and (iv) the transition

  6. Rate Theory Modeling and Simulations of Silicide Fuel at LWR Conditions

    Energy Technology Data Exchange (ETDEWEB)

    Miao, Yinbin [Argonne National Lab. (ANL), Argonne, IL (United States); Ye, Bei [Argonne National Lab. (ANL), Argonne, IL (United States); Mei, Zhigang [Argonne National Lab. (ANL), Argonne, IL (United States); Hofman, Gerard [Argonne National Lab. (ANL), Argonne, IL (United States); Yacout, Abdellatif [Argonne National Lab. (ANL), Argonne, IL (United States)

    2015-12-10

Uranium silicide (U3Si2) fuel has higher thermal conductivity and higher uranium density, making it a promising candidate for the accident-tolerant fuel (ATF) used in light water reactors (LWRs). However, previous studies on the fuel performance of U3Si2, both experimental and computational, have focused on the irradiation conditions in research reactors, which usually involve low operating temperatures and high fuel burnups. Thus, it is important to examine the fuel performance of U3Si2 at typical LWR conditions so as to evaluate the feasibility of replacing conventional uranium dioxide fuel with this silicide fuel material. As in-reactor irradiation experiments involve significant time and financial cost, it is appropriate at this early development stage to use modeling tools to estimate the behavior of U3Si2 in LWRs, based on the available research reactor experimental references and state-of-the-art density functional theory (DFT) calculation capabilities. Hence, this report introduces a comprehensive investigation of the fission gas swelling behavior of U3Si2 at LWR conditions. The modeling efforts described in this report were based on the rate theory (RT) model of fission gas bubble evolution, which has been successfully applied to a variety of fuel materials at various reactor conditions. Both existing experimental data and DFT-calculated results were used to optimize the parameters adopted by the RT model. Meanwhile, the fuel-cladding interaction was captured by coupling the RT model with simplified mechanical correlations. The swelling behavior of U3Si2 fuel and its consequent interaction with cladding in LWRs was thus predicted by the rate theory modeling, providing valuable information for the development of U3Si2 fuel as an accident

  7. Model theory

    CERN Document Server

    Hodges, Wilfrid

    1993-01-01

    An up-to-date and integrated introduction to model theory, designed to be used for graduate courses (for students who are familiar with first-order logic), and as a reference for more experienced logicians and mathematicians.

  8. Comparison of rate theory based modeling calculations with the surveillance test results of Korean light water reactors

    International Nuclear Information System (INIS)

    Lee, Gyeong Geun; Lee, Yong Bok; Kim, Min Chul; Kwon, Junh Yun

    2012-01-01

Neutron irradiation of reactor pressure vessel (RPV) steels causes a decrease in fracture toughness and an increase in yield strength while in service. It is generally accepted that the growth of point defect clusters (PDC) and copper-rich precipitates (CRP) drives the radiation hardening of RPV steels. A number of models have been proposed to account for the embrittlement of RPV steels. Rate theory based modeling mathematically describes the evolution of radiation-induced microstructures of ferritic steels under neutron irradiation. In this work, we compared rate theory based modeling calculations with the surveillance test results of Korean Light Water Reactors (LWRs)

  9. Modeling of mitochondria bioenergetics using a composable chemiosmotic energy transduction rate law: theory and experimental validation.

    Directory of Open Access Journals (Sweden)

    Ivan Chang

Mitochondrial bioenergetic processes are central to the production of cellular energy, and a decrease in the expression or activity of enzyme complexes responsible for these processes can result in energetic deficit that correlates with many metabolic diseases and aging. Unfortunately, existing computational models of mitochondrial bioenergetics either lack relevant kinetic descriptions of the enzyme complexes, or incorporate mechanisms too specific to a particular mitochondrial system and are thus incapable of capturing the heterogeneity associated with these complexes across different systems and system states. Here we introduce a new composable rate equation, the chemiosmotic rate law, that expresses the flux of a prototypical energy transduction complex as a function of: the saturation kinetics of the electron donor and acceptor substrates; the redox transfer potential between the complex and the substrates; and the steady-state thermodynamic force-to-flux relationship of the overall electro-chemical reaction. Modeling of bioenergetics with this rate law has several advantages: (1) it minimizes the use of arbitrary free parameters while featuring biochemically relevant parameters that can be obtained through progress curves of common enzyme kinetics protocols; (2) it is modular and can adapt to various enzyme complex arrangements for both in vivo and in vitro systems via transformation of its rate and equilibrium constants; (3) it provides a clear association between the sensitivity of the parameters of the individual complexes and the sensitivity of the system's steady-state. To validate our approach, we conduct in vitro measurements of ETC complex I, III, and IV activities using rat heart homogenates, and construct an estimation procedure for the parameter values directly from these measurements. In addition, we show the theoretical connections of our approach to the existing models, and compare the predictive accuracy of the rate law with

  10. Modeling of mitochondria bioenergetics using a composable chemiosmotic energy transduction rate law: theory and experimental validation.

    Science.gov (United States)

    Chang, Ivan; Heiske, Margit; Letellier, Thierry; Wallace, Douglas; Baldi, Pierre

    2011-01-01

Mitochondrial bioenergetic processes are central to the production of cellular energy, and a decrease in the expression or activity of enzyme complexes responsible for these processes can result in energetic deficit that correlates with many metabolic diseases and aging. Unfortunately, existing computational models of mitochondrial bioenergetics either lack relevant kinetic descriptions of the enzyme complexes, or incorporate mechanisms too specific to a particular mitochondrial system and are thus incapable of capturing the heterogeneity associated with these complexes across different systems and system states. Here we introduce a new composable rate equation, the chemiosmotic rate law, that expresses the flux of a prototypical energy transduction complex as a function of: the saturation kinetics of the electron donor and acceptor substrates; the redox transfer potential between the complex and the substrates; and the steady-state thermodynamic force-to-flux relationship of the overall electro-chemical reaction. Modeling of bioenergetics with this rate law has several advantages: (1) it minimizes the use of arbitrary free parameters while featuring biochemically relevant parameters that can be obtained through progress curves of common enzyme kinetics protocols; (2) it is modular and can adapt to various enzyme complex arrangements for both in vivo and in vitro systems via transformation of its rate and equilibrium constants; (3) it provides a clear association between the sensitivity of the parameters of the individual complexes and the sensitivity of the system's steady-state. To validate our approach, we conduct in vitro measurements of ETC complex I, III, and IV activities using rat heart homogenates, and construct an estimation procedure for the parameter values directly from these measurements. In addition, we show the theoretical connections of our approach to the existing models, and compare the predictive accuracy of the rate law with our experimentally
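A flux law of the general shape this record describes (saturation terms times a thermodynamic force-to-flux factor) can be sketched as below. This is our simplified reading of the idea, not the authors' exact equation; all parameter values are hypothetical.

```python
def chemiosmotic_flux(v_max, donor, K_d, acceptor, K_a, gamma, K_eq):
    """Toy composable flux law: maximal rate scaled by donor/acceptor
    saturation (Michaelis-Menten style) and by a thermodynamic factor
    that drives the net flux to zero at equilibrium."""
    saturation = (donor / (K_d + donor)) * (acceptor / (K_a + acceptor))
    thermo = 1.0 - gamma / K_eq      # gamma: mass-action ratio; K_eq: equilibrium constant
    return v_max * saturation * thermo

# at equilibrium (gamma == K_eq) the net flux vanishes
print(chemiosmotic_flux(10.0, 1.0, 0.5, 2.0, 1.0, 4.0, 4.0))
```

The composability claim in the abstract corresponds to chaining such functions: the product of one complex's reaction feeds the saturation term of the next.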

  11. A dual theory of price and value in a meso-scale economic model with stochastic profit rate

    Science.gov (United States)

    Greenblatt, R. E.

    2014-12-01

    The problem of commodity price determination in a market-based, capitalist economy has a long and contentious history. Neoclassical microeconomic theories are based typically on marginal utility assumptions, while classical macroeconomic theories tend to be value-based. In the current work, I study a simplified meso-scale model of a commodity capitalist economy. The production/exchange model is represented by a network whose nodes are firms, workers, capitalists, and markets, and whose directed edges represent physical or monetary flows. A pair of multivariate linear equations with stochastic input parameters represent physical (supply/demand) and monetary (income/expense) balance. The input parameters yield a non-degenerate profit rate distribution across firms. Labor time and price are found to be eigenvector solutions to the respective balance equations. A simple relation is derived relating the expected value of commodity price to commodity labor content. Results of Monte Carlo simulations are consistent with the stochastic price/labor content relation.
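The eigenvector character of the price solution described above can be illustrated with a two-commodity toy example (hypothetical input coefficients, and a uniform rather than stochastic profit rate for simplicity): in a linear production economy with input-coefficient matrix A and uniform profit rate r, prices satisfy p = (1 + r) Aᵀp, so p is the Perron eigenvector of Aᵀ and 1/(1 + r) its eigenvalue.

```python
import numpy as np

A = np.array([[0.2, 0.3],
              [0.4, 0.1]])        # hypothetical input coefficients

eigvals, eigvecs = np.linalg.eig(A.T)
k = np.argmax(eigvals.real)       # Perron root gives strictly positive prices
lam = eigvals.real[k]
p = np.abs(eigvecs[:, k].real)    # price vector (defined up to scale)
r = 1.0 / lam - 1.0               # implied uniform profit rate

# check the monetary balance equation p = (1 + r) A^T p
assert np.allclose(p, (1 + r) * A.T @ p)
print(r, p / p.sum())             # profit rate and normalized relative prices
```

In the paper's meso-scale setting the profit rate varies stochastically across firms, but the balance equations retain this eigenproblem structure.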

  12. Combination of poroelasticity theory and constant strain rate test in modelling land subsidence due to groundwater extraction

    Science.gov (United States)

    Pham, Tien Hung; Rühaak, Wolfram; Sass, Ingo

    2017-04-01

Extensive groundwater extraction leads to a drawdown of the groundwater table. Consequently, soil effective stress increases and can cause land subsidence. Analysis of land subsidence generally requires a numerical model based on poroelasticity theory, first proposed by Biot (1941). In their review of regional land subsidence accompanying groundwater extraction, Galloway and Burbey (2011) stated that more research and application is needed in coupling the stress-dependent land subsidence process. In the geotechnical field, the constant rate of strain (CRS) test was first introduced in 1969 (Smith and Wahls 1969) and was standardized in 1982 through designation D4186-82 of the American Society for Testing and Materials. From the measured values of CRS tests, the stress-dependent parameters of the poroelasticity model can be calculated. So far, no research has linked poroelasticity theory with CRS tests in modelling land subsidence due to groundwater extraction. One-dimensional CRS tests using a conventional compression cell and three-dimensional CRS tests using a Rowe cell were performed. The tests were also modelled using the finite element method with mixed elements. A back-analysis technique is used to find suitable values of hydraulic conductivity and bulk modulus that depend on the stress or void ratio. Finally, the obtained results are used in land subsidence models. References: Biot, M. A. (1941). "General theory of three-dimensional consolidation." Journal of Applied Physics 12(2): 155-164. Galloway, D. L. and T. J. Burbey (2011). "Review: Regional land subsidence accompanying groundwater extraction." Hydrogeology Journal 19(8): 1459-1486. Smith, R. E. and H. E. Wahls (1969). "Consolidation under constant rates of strain." Journal of Soil Mechanics & Foundations Div.

  13. Rate Change Big Bang Theory

    Science.gov (United States)

    Strickland, Ken

    2013-04-01

    The Rate Change Big Bang Theory redefines the birth of the universe with a dramatic shift in energy direction and a new vision of the first moments. With rate change graph technology (RCGT) we can look back 13.7 billion years and experience every step of the big bang through geometrical intersection technology. The analysis of the Big Bang includes a visualization of the first objects, their properties, the astounding event that created space and time as well as a solution to the mystery of anti-matter.

  14. A Theory of Interest Rate Stepping : Inflation Targeting in a Dynamic Menu Cost Model

    NARCIS (Netherlands)

    Eijffinger, S.C.W.; Schaling, E.; Verhagen, W.H.

    1999-01-01

A stylised fact of monetary policy making is that central banks do not immediately respond to new information but rather seem to prefer to wait until sufficient ‘evidence’ to warrant a change has accumulated. However, theoretical models of inflation targeting imply that an optimising

  15. Rate-distortion theory and human perception.

    Science.gov (United States)

    Sims, Chris R

    2016-07-01

    The fundamental goal of perception is to aid in the achievement of behavioral objectives. This requires extracting and communicating useful information from noisy and uncertain sensory signals. At the same time, given the complexity of sensory information and the limitations of biological information processing, it is necessary that some information must be lost or discarded in the act of perception. Under these circumstances, what constitutes an 'optimal' perceptual system? This paper describes the mathematical framework of rate-distortion theory as the optimal solution to the problem of minimizing the costs of perceptual error subject to strong constraints on the ability to communicate or transmit information. Rate-distortion theory offers a general and principled theoretical framework for developing computational-level models of human perception (Marr, 1982). Models developed in this framework are capable of producing quantitatively precise explanations for human perceptual performance, while yielding new insights regarding the nature and goals of perception. This paper demonstrates the application of rate-distortion theory to two benchmark domains where capacity limits are especially salient in human perception: discrete categorization of stimuli (also known as absolute identification) and visual working memory. A software package written for the R statistical programming language is described that aids in the development of models based on rate-distortion theory. Copyright © 2016 The Author. Published by Elsevier B.V. All rights reserved.
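The rate-distortion optimum that such perceptual models build on is classically computed with the Blahut-Arimoto algorithm. Below is a minimal sketch (the standard textbook construction, not the R package this record describes) for a binary source with Hamming distortion; beta is the Lagrange multiplier trading off rate against distortion.

```python
import numpy as np

def blahut_arimoto(p_x, d, beta, iters=500):
    """Iterate toward the channel q(xhat|x) minimizing I(X;Xhat) + beta*E[d].
    p_x: source distribution, shape (n,); d: distortion matrix, shape (n, m)."""
    n, m = d.shape
    q_y = np.full(m, 1.0 / m)                 # output marginal q(xhat)
    for _ in range(iters):
        q_xy = q_y * np.exp(-beta * d)        # optimal conditional, up to normalization
        q_xy /= q_xy.sum(axis=1, keepdims=True)
        q_y = p_x @ q_xy                      # updated output marginal
    rate = np.sum(p_x[:, None] * q_xy * np.log2(q_xy / q_y))   # I(X; Xhat) in bits
    distortion = np.sum(p_x[:, None] * q_xy * d)               # expected distortion
    return rate, distortion

# uniform binary source with Hamming distortion
p_x = np.array([0.5, 0.5])
d = np.array([[0.0, 1.0],
              [1.0, 0.0]])
rate, distortion = blahut_arimoto(p_x, d, beta=2.0)
print(rate, distortion)
```

Sweeping beta traces out the rate-distortion curve; in the perceptual reading, beta encodes how costly perceptual error is relative to channel capacity.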

  16. A Theory-Based Model for Understanding Faculty Intention to Use Students Ratings to Improve Teaching in a Health Sciences Institution in Puerto Rico

    Science.gov (United States)

    Collazo, Andrés A.

    2018-01-01

    A model derived from the theory of planned behavior was empirically assessed for understanding faculty intention to use student ratings for teaching improvement. A sample of 175 professors participated in the study. The model was statistically significant and had a very large explanatory power. Instrumental attitude, affective attitude, perceived…

  17. Model theory and modules

    CERN Document Server

    Prest, M

    1988-01-01

    In recent years the interplay between model theory and other branches of mathematics has led to many deep and intriguing results. In this, the first book on the topic, the theme is the interplay between model theory and the theory of modules. The book is intended to be a self-contained introduction to the subject and introduces the requisite model theory and module theory as it is needed. Dr Prest develops the basic ideas concerning what can be said about modules using the information which may be expressed in a first-order language. Later chapters discuss stability-theoretic aspects of module

  18. Theory of nanolaser devices: Rate equation analysis versus microscopic theory

    DEFF Research Database (Denmark)

    Lorke, Michael; Skovgård, Troels Suhr; Gregersen, Niels

    2013-01-01

    A rate equation theory for quantum-dot-based nanolaser devices is developed. We show that these rate equations are capable of reproducing results of a microscopic semiconductor theory, making them an appropriate starting point for complex device simulations of nanolasers. The input...

  19. Information theory and rate distortion theory for communications and compression

    CERN Document Server

    Gibson, Jerry

    2013-01-01

    This book is very specifically targeted to problems in communications and compression by providing the fundamental principles and results in information theory and rate distortion theory for these applications and presenting methods that have proved and will prove useful in analyzing and designing real systems. The chapters contain treatments of entropy, mutual information, lossless source coding, channel capacity, and rate distortion theory; however, it is the selection, ordering, and presentation of the topics within these broad categories that is unique to this concise book. While the cover

  20. Item Response Theory Modeling and Categorical Regression Analyses of the Five-Factor Model Rating Form: A Study on Italian Community-Dwelling Adolescent Participants and Adult Participants.

    Science.gov (United States)

    Fossati, Andrea; Widiger, Thomas A; Borroni, Serena; Maffei, Cesare; Somma, Antonella

    2017-06-01

    To extend the evidence on the reliability and construct validity of the Five-Factor Model Rating Form (FFMRF) in its self-report version, two independent samples of Italian participants, which were composed of 510 adolescent high school students and 457 community-dwelling adults, respectively, were administered the FFMRF in its Italian translation. Adolescent participants were also administered the Italian translation of the Borderline Personality Features Scale for Children-11 (BPFSC-11), whereas adult participants were administered the Italian translation of the Triarchic Psychopathy Measure (TriPM). Cronbach α values were consistent with previous findings; in both samples, average interitem r values indicated acceptable internal consistency for all FFMRF scales. A multidimensional graded item response theory model indicated that the majority of FFMRF items had adequate discrimination parameters; information indices supported the reliability of the FFMRF scales. Both categorical (i.e., item-level) and scale-level regression analyses suggested that the FFMRF scores may predict a nonnegligible amount of variance in the BPFSC-11 total score in adolescent participants, and in the TriPM scale scores in adult participants.
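The graded response model used in these analyses has a compact closed form: for an item with ordered categories 0..m, the probability of scoring in category k is a difference of cumulative logistic curves. The sketch below uses hypothetical discrimination and threshold values, not estimates from the FFMRF data.

```python
import numpy as np

def grm_probs(theta, a, b):
    """Samejima's graded response model for one item.
    theta: latent trait; a: discrimination; b: ordered thresholds b_1 < ... < b_m."""
    # cumulative probabilities P(X >= k), padded with P(X >= 0) = 1 and P(X >= m+1) = 0
    p_ge = [1.0] + [1.0 / (1.0 + np.exp(-a * (theta - bk))) for bk in b] + [0.0]
    # category probabilities are adjacent differences of the cumulative curves
    return np.array([p_ge[k] - p_ge[k + 1] for k in range(len(b) + 1)])

probs = grm_probs(theta=0.5, a=1.4, b=[-1.0, 0.2, 1.1])
print(probs)    # category probabilities for a respondent at theta = 0.5
```

Ordered thresholds guarantee the differences are positive, which is the property the multidimensional GRM in the study relies on when reporting discrimination parameters per scale.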

  1. Lapse rate modeling

    DEFF Research Database (Denmark)

    De Giovanni, Domenico

    2010-01-01

    prepayment models for mortgage backed securities, this paper builds a Rational Expectation (RE) model describing the policyholders' behavior in lapsing the contract. A market model with stochastic interest rates is considered, and the pricing is carried out through numerical approximation...

  2. Lapse Rate Modeling

    DEFF Research Database (Denmark)

    De Giovanni, Domenico

    prepayment models for mortgage backed securities, this paper builds a Rational Expectation (RE) model describing the policyholders' behavior in lapsing the contract. A market model with stochastic interest rates is considered, and the pricing is carried out through numerical approximation...

  3. Theory and modeling group

    Science.gov (United States)

    Holman, Gordon D.

    1989-01-01

    The primary purpose of the Theory and Modeling Group meeting was to identify scientists engaged or interested in theoretical work pertinent to the Max '91 program, and to encourage theorists to pursue modeling which is directly relevant to data which can be expected to result from the program. A list of participants and their institutions is presented. Two solar flare paradigms were discussed during the meeting -- the importance of magnetic reconnection in flares and the applicability of numerical simulation results to solar flare studies.

  4. Modeling emerald ash borer dispersal using percolation theory: estimating the rate of range expansion in a fragmented landscape

    Science.gov (United States)

    Robin A. J. Taylor; Daniel A. Herms; Louis R. Iverson

    2008-01-01

    The dispersal of organisms is rarely random, although diffusion processes can be useful models for movement in approximately homogeneous environments. However, the environments through which all organisms disperse are far from uniform at all scales. The emerald ash borer (EAB), Agrilus planipennis, is obligate on ash (Fraxinus spp...

  5. On the theory of interest rate policy

    Directory of Open Access Journals (Sweden)

    Heinz-Peter Spahn

    2001-12-01

A new consensus in the theory of monetary policy has been reached pointing to the pivotal role of interest rates that are set in accordance with central banks' reaction functions. The decisive criterion of assessing the Taylor rule, inflation and monetary targeting is not the macrotheoretic foundation of these concepts. They serve as "languages" coordinating heterogeneous beliefs among policy makers and private agents, and should also allow rule-based discretionary policies when markets are in need of leadership. Contrary to the ECB dogma, the Fed is right to have an eye on the risks of inflation and unemployment.

  6. Chartist Trading in Exchange Rate Theory

    OpenAIRE

    Selander, Carina

    2006-01-01

This thesis consists of four papers, of which papers 1 and 4 are co-written with Mikael Bask. Paper [1] implements chartist trading in a sticky-price monetary model for determining the exchange rate. It is demonstrated that chartists cause the exchange rate to "overshoot the overshooting equilibrium" of a sticky-price monetary model. Chartists base their trading on a short-long moving average. The importance of technical trading depends inversely on the time horizon in currency trade. The exc...

  7. AN EDUCATIONAL THEORY MODEL--(SIGGS), AN INTEGRATION OF SET THEORY, INFORMATION THEORY, AND GRAPH THEORY WITH GENERAL SYSTEMS THEORY.

    Science.gov (United States)

    MACCIA, ELIZABETH S.; AND OTHERS

An annotated bibliography of 20 items and a discussion of its significance were presented to describe current utilization of subject theories in the construction of an educational theory. Also, a theory model was used to demonstrate construction of a scientific educational theory. The theory model incorporated set theory (S), information theory…

  8. Lectures on algebraic model theory

    CERN Document Server

    Hart, Bradd

    2001-01-01

    In recent years, model theory has had remarkable success in solving important problems as well as in shedding new light on our understanding of them. The three lectures collected here present recent developments in three such areas: Anand Pillay on differential fields, Patrick Speissegger on o-minimality and Matthias Clasen and Matthew Valeriote on tame congruence theory.

  9. Model integration and a theory of models

    OpenAIRE

    Dolk, Daniel R.; Kottemann, Jeffrey E.

    1993-01-01

    Model integration extends the scope of model management to include the dimension of manipulation as well. This invariably leads to comparisons with database theory. Model integration is viewed from four perspectives: Organizational, definitional, procedural, and implementational. Strategic modeling is discussed as the organizational motivation for model integration. Schema and process integration are examined as the logical and manipulation counterparts of model integr...

  10. Warped models in string theory

    International Nuclear Information System (INIS)

    Acharya, B.S.; Benini, F.; Valandro, R.

    2006-12-01

    Warped models, originating with the ideas of Randall and Sundrum, provide a fascinating extension of the standard model with interesting consequences for the LHC. We investigate in detail how string theory realises such models, with emphasis on fermion localisation and the computation of Yukawa couplings. We find, in contrast to the 5d models, that fermions can be localised anywhere in the extra dimension, and that there are new mechanisms to generate exponential hierarchies amongst the Yukawa couplings. We also suggest a way to distinguish these string theory models with data from the LHC. (author)

  11. Growth rate, population entropy, and perturbation theory.

    OpenAIRE

    Demetrius, L.

    1989-01-01

    This paper is concerned with the connection between two classes of population variables: measures of population growth rate—the Malthusian parameter, the net reproduction rate, the gross reproduction rate, and the mean life expectancy; and measures of demographic heterogeneity—population entropy. It is shown that the entropy functions predict the response of the growth rate parameters to perturbations in the age-specific fecundity and mortality schedule. These results are invoked to introduce...
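The growth-rate variables this record lists can be made concrete with a small Leslie-matrix example (hypothetical age-specific schedule, not Demetrius' data): the Malthusian parameter is the log of the dominant eigenvalue, and the net reproduction rate follows directly from the survivorship and fecundity schedules.

```python
import numpy as np

# 3-age-class Leslie matrix: top row is age-specific fecundity,
# subdiagonal is survival between consecutive age classes
L = np.array([[0.0, 1.2, 0.8],
              [0.6, 0.0, 0.0],
              [0.0, 0.4, 0.0]])

eigvals, eigvecs = np.linalg.eig(L)
k = np.argmax(eigvals.real)
lam = eigvals.real[k]        # dominant eigenvalue (asymptotic growth factor)
r = np.log(lam)              # Malthusian parameter

# net reproduction rate R0 = sum over ages of survivorship l(x) * fecundity m(x)
l = np.array([1.0, 0.6, 0.6 * 0.4])   # probability of surviving to each age
m = np.array([0.0, 1.2, 0.8])         # fecundity at each age
R0 = np.sum(l * m)
print(lam, r, R0)
```

Note the consistency check built into the demography: the population grows (lam > 1) exactly when R0 > 1. The paper's point is that population entropy then predicts how r responds to perturbations of the l and m schedules.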

  12. Model Theory in Algebra, Analysis and Arithmetic

    CERN Document Server

    Dries, Lou; Macpherson, H Dugald; Pillay, Anand; Toffalori, Carlo; Wilkie, Alex J

    2014-01-01

    Presenting recent developments and applications, the book focuses on four main topics in current model theory: 1) the model theory of valued fields; 2) undecidability in arithmetic; 3) NIP theories; and 4) the model theory of real and complex exponentiation. Young researchers in model theory will particularly benefit from the book, as will more senior researchers in other branches of mathematics.

  13. Expectancy Theory Modeling

    Science.gov (United States)

    1982-08-01

    accomplish the task, (2) the instrumentality of task performance for job outcomes, and (3) the instrumentality of outcomes for need satisfaction . We...in this discussion: effort, performance , outcomes, and needs. In order to present briefly the conventional approach to the Vroom models, another...Presumably, this is the final event in the sequence of effort, performance , outcome, and need satisfaction . The actual research reported in expectancy

  14. Minisuperspace models in histories theory

    International Nuclear Information System (INIS)

    Anastopoulos, Charis; Savvidou, Ntina

    2005-01-01

    We study the Robertson-Walker minisuperspace model in histories theory, motivated by the results that emerged from the histories approach to general relativity. We examine, in particular, the issue of time reparametrization in such systems. The model is quantized using an adaptation of reduced state space quantization. We finally discuss the classical limit, the implementation of initial cosmological conditions and the estimation of probabilities in the histories context.

  15. Item Response Theory Analyses of the Parent and Teacher Ratings of the DSM-IV ADHD Rating Scale

    Science.gov (United States)

    Gomez, Rapson

    2008-01-01

    The graded response model (GRM), which is based on item response theory (IRT), was used to evaluate the psychometric properties of the inattention and hyperactivity/impulsivity symptoms in an ADHD rating scale. To accomplish this, parents and teachers completed the DSM-IV ADHD Rating Scale (DARS; Gomez et al., "Journal of Child Psychology and…

  16. Relaxed Poisson cure rate models.

    Science.gov (United States)

    Rodrigues, Josemar; Cordeiro, Gauss M; Cancho, Vicente G; Balakrishnan, N

    2016-03-01

    The purpose of this article is to make the standard promotion cure rate model (Yakovlev and Tsodikov) more flexible by assuming that the number of lesions or altered cells after a treatment follows a fractional Poisson distribution (Laskin). It is proved that the well-known Mittag-Leffler relaxation function (Berberan-Santos) is a simple way to obtain a new cure rate model that is a compromise between the promotion and geometric cure rate models allowing for superdispersion. The relaxed cure rate model developed here can therefore be considered a natural and less restrictive extension of the popular Poisson cure rate model, at the cost of an additional parameter, and a competitor to negative-binomial cure rate models (Rodrigues et al.). Some mathematical properties of a proper relaxed Poisson density are explored. A simulation study and an illustration of the proposed cure rate model from the Bayesian point of view are finally presented. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
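
    For orientation, the standard promotion cure rate model that this article relaxes has a simple closed form: with θ the mean number of latent lesions and F the promotion-time cdf, the population survival is S_pop(t) = exp(−θF(t)), so the cured fraction is exp(−θ). A minimal sketch in Python (the exponential cdf and all parameter values are illustrative assumptions, not choices from the article):

```python
import math

def promotion_survival(t, theta, F):
    """Standard promotion cure rate model: S_pop(t) = exp(-theta * F(t)),
    with theta the mean number of latent lesions and F the cdf of each
    lesion's promotion time."""
    return math.exp(-theta * F(t))

def F_exp(t, lam=0.5):
    """Exponential promotion-time cdf, used here only as a stand-in."""
    return 1.0 - math.exp(-lam * t)

theta = 2.0
cure_fraction = math.exp(-theta)   # the survival plateau as t -> infinity
print(round(promotion_survival(50.0, theta, F_exp), 4))   # → 0.1353
print(round(cure_fraction, 4))                            # → 0.1353
```

    At large t the population survival flattens at the cure fraction exp(−θ), which is the defining feature of cure rate models.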

  17. Foundations of compositional model theory

    Czech Academy of Sciences Publication Activity Database

    Jiroušek, Radim

    2011-01-01

    Roč. 40, č. 6 (2011), s. 623-678 ISSN 0308-1079 R&D Projects: GA MŠk 1M0572; GA ČR GA201/09/1891; GA ČR GEICC/08/E010 Institutional research plan: CEZ:AV0Z10750506 Keywords : multidimensional probability distribution * conditional independence * graphical Markov model * composition of distributions Subject RIV: IN - Informatics, Computer Science Impact factor: 0.667, year: 2011 http://library.utia.cas.cz/separaty/2011/MTR/jirousek-foundations of compositional model theory.pdf

  18. Random walk theory and exchange rate dynamics in transition economies

    Directory of Open Access Journals (Sweden)

    Gradojević Nikola

    2010-01-01

    This paper investigates the validity of the random walk theory in the Euro-Serbian dinar exchange rate market. We apply Andrew Lo and Archie MacKinlay's (1988) conventional variance ratio test and Jonathan Wright's (2000) non-parametric ranks and signs based variance ratio tests to the daily Euro/Serbian dinar exchange rate returns using data from January 2005 to December 2008. Both types of variance ratio tests overwhelmingly reject the random walk hypothesis over the data span. To assess the robustness of our findings, we examine the forecasting performance of a non-linear, non-parametric model in the spirit of Francis Diebold and James Nason (1990) and find that it is able to significantly improve upon the random walk model, thus confirming the existence of foreign exchange market imperfections in a small transition economy such as Serbia. In the last part of the paper, we conduct a comparative study of how our results relate to those of other transition economies in the region.
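
    For readers unfamiliar with the test, the core of the Lo-MacKinlay statistic is that for a random walk the variance of q-period returns is q times the variance of 1-period returns, so their ratio VR(q) should be near 1. A rough sketch (it omits the small-sample bias corrections and the heteroskedasticity-robust statistic of the original papers, and uses simulated rather than dinar data):

```python
import numpy as np

def variance_ratio(returns, q):
    """Simplified Lo-MacKinlay variance ratio VR(q): the variance of
    overlapping q-period returns over q times the 1-period variance.
    Under a random walk, VR(q) is close to 1."""
    r = np.asarray(returns, dtype=float)
    n = len(r)
    mu = r.mean()
    var1 = np.sum((r - mu) ** 2) / n
    # overlapping q-period sums
    q_sums = np.array([r[i:i + q].sum() for i in range(n - q + 1)])
    varq = np.sum((q_sums - q * mu) ** 2) / (q * (n - q + 1))
    return varq / var1

rng = np.random.default_rng(0)
rw_returns = rng.normal(0.0, 1.0, 5000)   # i.i.d. increments: a pure random walk
print(f"VR(5) = {variance_ratio(rw_returns, 5):.2f}")
```

    Rejection of the random walk hypothesis in the paper corresponds to VR(q) deviating significantly from 1 on the dinar return series.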

  19. The single-process biochemical reaction of Rubisco: a unified theory and model with the effects of irradiance, CO₂ and rate-limiting step on the kinetics of C₃ and C₄ photosynthesis from gas exchange.

    Science.gov (United States)

    Farazdaghi, Hadi

    2011-02-01

    Photosynthesis is the origin of oxygenic life on the planet, and its models are the core of all models of plant biology, agriculture, environmental quality and global climate change. A theory is presented here, based on the single-process biochemical reaction of Rubisco, recognizing that: In the light, Rubisco activase helps separate Rubisco from the stored ribulose-1,5-bisphosphate (RuBP), activates Rubisco with carbamylation and addition of Mg²⁺, and then produces two products, in two steps: (Step 1) Reaction of Rubisco with RuBP produces a Rubisco-enediol complex, which is the carboxylase-oxygenase enzyme (Enco), and (Step 2) Enco captures CO₂ and/or O₂ and produces intermediate products leading to production and release of 3-phosphoglycerate (PGA) and Rubisco. PGA interactively controls (1) the carboxylation-oxygenation, (2) electron transport, and (3) the triosephosphate pathway of the Calvin-Benson cycle that leads to the release of glucose and regeneration of RuBP. Initially, the total enzyme participates in the two steps of the reaction transitionally and its rate follows Michaelis-Menten kinetics. But, for a continuous steady state, Rubisco must be divided into two concurrently active segments for the two steps. This causes a deviation of the steady state from the transitional rate. Kinetic models are developed that integrate the transitional and the steady state reactions. They are tested and successfully validated with verifiable experimental data. The single-process theory is compared to the widely used two-process theory of Farquhar et al. (1980. Planta 149, 78-90), which assumes that the carboxylation rate is either Rubisco-limited at low CO₂ levels such as the CO₂ compensation point, or RuBP-regeneration-limited at high CO₂. Since the photosynthesis rate cannot increase beyond the two-process theory's Rubisco limit at the CO₂ compensation point, net photosynthesis cannot increase above zero in daylight, and since there is always respiration at …
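
    As a reference point for the transitional kinetics mentioned above, the Michaelis-Menten rate law can be sketched in one line (the parameter values are arbitrary illustrations, not fitted Rubisco constants):

```python
def michaelis_menten(S, Vmax, Km):
    """Michaelis-Menten rate law: v = Vmax * S / (Km + S)."""
    return Vmax * S / (Km + S)

Vmax, Km = 100.0, 10.0   # illustrative values only
print(michaelis_menten(Km, Vmax, Km))   # at S = Km the rate is Vmax / 2 → 50.0
```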

  20. Superfield theory and supermatrix model

    International Nuclear Information System (INIS)

    Park, Jeong-Hyuck

    2003-01-01

    We study the noncommutative superspace of arbitrary dimensions in a systematic way. Superfield theories on a noncommutative superspace can be formulated in two folds, through the star product formalism and in terms of the supermatrices. We elaborate the duality between them by constructing the isomorphism explicitly and relating the superspace integrations of the star product lagrangian or the superpotential to the traces of the supermatrices. We show there exists an interesting fine tuned commutative limit where the duality can be still maintained. Namely on the commutative superspace too, there exists a supermatrix model description for the superfield theory. We interpret the result in the context of the wave particle duality. The dual particles for the superfields in even and odd spacetime dimensions are D-instantons and D0-branes respectively to be consistent with the T-duality. (author)

  1. Rate Theory for Correlated Processes: Double Jumps in Adatom Diffusion

    DEFF Research Database (Denmark)

    Jacobsen, J.; Jacobsen, Karsten Wedel; Sethna, J.

    1997-01-01

    We study the rate of activated motion over multiple barriers, in particular the correlated double jump of an adatom diffusing on a missing-row reconstructed platinum (110) surface. We develop a transition path theory, showing that the activation energy is given by the minimum-energy trajectory which succeeds in the double jump. We explicitly calculate this trajectory within an effective-medium molecular dynamics simulation. A cusp in the acceptance region leads to a √T prefactor for the activated rate of double jumps. Theory and numerical results agree.

  2. A Membrane Model from Implicit Elasticity Theory

    Science.gov (United States)

    Freed, A. D.; Liao, J.; Einstein, D. R.

    2014-01-01

    A Fungean solid is derived for membranous materials as a body defined by isotropic response functions whose mathematical structure is that of a Hookean solid where the elastic constants are replaced by functions of state derived from an implicit, thermodynamic, internal-energy function. The theory utilizes Biot’s (1939) definitions for stress and strain that, in 1-dimension, are the stress/strain measures adopted by Fung (1967) when he postulated what is now known as Fung’s law. Our Fungean membrane model is parameterized against a biaxial data set acquired from a porcine pleural membrane subjected to three, sequential, proportional, planar extensions. These data support an isotropic/deviatoric split in the stress and strain-rate hypothesized by our theory. These data also demonstrate that the material response is highly non-linear but, otherwise, mechanically isotropic. These data are described reasonably well by our otherwise simple, four-parameter, material model. PMID:24282079

  3. Inflation Rate Modelling in Indonesia

    Directory of Open Access Journals (Sweden)

    Rezzy Eko Caraka

    2016-10-01

    The purposes of this research were to analyse: (i) modelling the inflation rate in Indonesia with parametric regression; (ii) modelling the inflation rate in Indonesia using non-parametric multivariable spline regression; (iii) determining the best model of the inflation rate in Indonesia; (iv) explaining the relationship between the parametric and the non-parametric multivariable spline regression models of inflation. Based on the analysis using the two methods mentioned, the coefficient of determination (R²) of the parametric regression is 65.1%, while that of the non-parametric model amounts to 99.39%. The money supply (money stock), crude oil prices and the rupiah exchange rate against the dollar are significant for the rate of inflation. The stability of inflation is essential to support sustainable economic development and improve people's welfare. In conclusion, unstable inflation complicates the planning of business activities, both in production and investment as well as in the pricing of goods and services produced. DOI: 10.15408/etk.v15i2.3260
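
    The parametric-versus-spline comparison can be illustrated in miniature. The sketch below fits a straight line and a one-knot linear regression spline (univariate truncated power basis) to a synthetic kinked series and compares R²; the data and knot location are assumptions for illustration only, since the article's model is multivariable:

```python
import numpy as np

def spline_design(x, knots, degree=1):
    """Truncated power basis for a univariate regression spline."""
    cols = [x ** p for p in range(degree + 1)]
    cols += [np.clip(x - k, 0.0, None) ** degree for k in knots]
    return np.column_stack(cols)

def r_squared(y, yhat):
    return 1.0 - np.sum((y - yhat) ** 2) / np.sum((y - y.mean()) ** 2)

rng = np.random.default_rng(2)
x = np.linspace(0.0, 10.0, 200)
# synthetic "inflation" series with a kink at x = 5
y = np.where(x < 5.0, x, 5.0 - 0.5 * (x - 5.0)) + rng.normal(0.0, 0.2, x.size)

X_lin = spline_design(x, [])        # plain linear (parametric) fit
X_spl = spline_design(x, [5.0])     # one-knot linear spline fit
beta_lin = np.linalg.lstsq(X_lin, y, rcond=None)[0]
beta_spl = np.linalg.lstsq(X_spl, y, rcond=None)[0]
print(r_squared(y, X_spl @ beta_spl) > r_squared(y, X_lin @ beta_lin))   # → True
```

    The spline's extra basis column lets it bend at the knot, which is why its R² dominates the straight-line fit on kinked data, mirroring the 99.39% versus 65.1% gap reported above.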

  4. Models in cooperative game theory

    CERN Document Server

    Branzei, Rodica; Tijs, Stef

    2008-01-01

    This book investigates models in cooperative game theory in which the players have the possibility to cooperate partially. In a crisp game the agents are either fully involved or not involved at all in cooperation with some other agents, while in a fuzzy game players are allowed to cooperate at infinitely many different participation levels, varying from non-cooperation to full cooperation. A multi-choice game describes the intermediate case in which each player may have a fixed number of activity levels. Different set and one-point solution concepts for these games are presented. The properties of these solution concepts and their interrelations on several classes of crisp, fuzzy, and multi-choice games are studied. Applications of the investigated models to many economic situations are indicated as well. The second edition is substantially enlarged and contains new results and additional sections in the different chapters as well as one new chapter.

  5. Field theory and the Standard Model

    Energy Technology Data Exchange (ETDEWEB)

    Dudas, E [Orsay, LPT (France)]

    2014-07-01

    This brief introduction to Quantum Field Theory and the Standard Model contains the basic building blocks of perturbation theory in quantum field theory, an elementary introduction to gauge theories and the basic classical and quantum features of the electroweak sector of the Standard Model. Some details are given for the theoretical bias concerning the Higgs mass limits, as well as on obscure features of the Standard Model which motivate new physics constructions.

  6. Lattice models and conformal field theories

    International Nuclear Information System (INIS)

    Saleur, H.

    1988-01-01

    Theoretical studies concerning the connection between critical physical systems and conformal theories are reviewed. The conformal theory associated to a critical (integrable) lattice model is derived. The derivation of the central charge, critical exponents and torus partition function, using renormalization group arguments, is shown. The quantum group structure in the integrable lattice models and the theory of Virasoro algebra representations are discussed. The relations between off-critical integrable models and conformal theories, in finite geometries, are studied.

  7. Mechanism of Strain Rate Effect Based on Dislocation Theory

    International Nuclear Information System (INIS)

    Kun, Qin; Shi-Sheng, Hu; Li-Ming, Yang

    2009-01-01

    Based on dislocation theory, we investigate the mechanism of the strain rate effect. The strain rate effect and dislocation motion are bridged by Orowan's relationship, and the stress dependence of dislocation velocity is taken as the dynamics relationship of dislocation motion. The mechanism of the strain rate effect is then investigated qualitatively using these two relationships, although the kinematics relationship of dislocation motion is absent due to the complicated styles of dislocation motion. The process of the strain rate effect is interpreted and some of its details are discussed in depth. The present analyses agree with existing experimental results. Based on the analyses, we propose that strain rate criteria, rather than stress criteria, should be satisfied when a metal is fully yielded at a given strain rate. (condensed matter: structure, mechanical and thermal properties)

  8. Halo modelling in chameleon theories

    Energy Technology Data Exchange (ETDEWEB)

    Lombriser, Lucas; Koyama, Kazuya [Institute of Cosmology and Gravitation, University of Portsmouth, Dennis Sciama Building, Burnaby Road, Portsmouth, PO1 3FX (United Kingdom); Li, Baojiu, E-mail: lucas.lombriser@port.ac.uk, E-mail: kazuya.koyama@port.ac.uk, E-mail: baojiu.li@durham.ac.uk [Institute for Computational Cosmology, Ogden Centre for Fundamental Physics, Department of Physics, University of Durham, Science Laboratories, South Road, Durham, DH1 3LE (United Kingdom)

    2014-03-01

    We analyse modelling techniques for the large-scale structure formed in scalar-tensor theories of constant Brans-Dicke parameter which match the concordance model background expansion history and produce a chameleon suppression of the gravitational modification in high-density regions. Thereby, we use a mass and environment dependent chameleon spherical collapse model, the Sheth-Tormen halo mass function and linear halo bias, the Navarro-Frenk-White halo density profile, and the halo model. Furthermore, using the spherical collapse model, we extrapolate a chameleon mass-concentration scaling relation from a ΛCDM prescription calibrated to N-body simulations. We also provide constraints on the model parameters to ensure viability on local scales. We test our description of the halo mass function and nonlinear matter power spectrum against the respective observables extracted from large-volume and high-resolution N-body simulations in the limiting case of f(R) gravity, corresponding to a vanishing Brans-Dicke parameter. We find good agreement between the two; the halo model provides a good qualitative description of the shape of the relative enhancement of the f(R) matter power spectrum with respect to ΛCDM caused by the extra attractive gravitational force but fails to recover the correct amplitude. Introducing an effective linear power spectrum in the computation of the two-halo term to account for an underestimation of the chameleon suppression at intermediate scales in our approach, we accurately reproduce the measurements from the N-body simulations.

  9. Halo modelling in chameleon theories

    International Nuclear Information System (INIS)

    Lombriser, Lucas; Koyama, Kazuya; Li, Baojiu

    2014-01-01

    We analyse modelling techniques for the large-scale structure formed in scalar-tensor theories of constant Brans-Dicke parameter which match the concordance model background expansion history and produce a chameleon suppression of the gravitational modification in high-density regions. Thereby, we use a mass and environment dependent chameleon spherical collapse model, the Sheth-Tormen halo mass function and linear halo bias, the Navarro-Frenk-White halo density profile, and the halo model. Furthermore, using the spherical collapse model, we extrapolate a chameleon mass-concentration scaling relation from a ΛCDM prescription calibrated to N-body simulations. We also provide constraints on the model parameters to ensure viability on local scales. We test our description of the halo mass function and nonlinear matter power spectrum against the respective observables extracted from large-volume and high-resolution N-body simulations in the limiting case of f(R) gravity, corresponding to a vanishing Brans-Dicke parameter. We find good agreement between the two; the halo model provides a good qualitative description of the shape of the relative enhancement of the f(R) matter power spectrum with respect to ΛCDM caused by the extra attractive gravitational force but fails to recover the correct amplitude. Introducing an effective linear power spectrum in the computation of the two-halo term to account for an underestimation of the chameleon suppression at intermediate scales in our approach, we accurately reproduce the measurements from the N-body simulations

  10. Stochastic models: theory and simulation.

    Energy Technology Data Exchange (ETDEWEB)

    Field, Richard V., Jr.

    2008-03-01

    Many problems in applied science and engineering involve physical phenomena that behave randomly in time and/or space. Examples are diverse and include turbulent flow over an aircraft wing, Earth climatology, material microstructure, and the financial markets. Mathematical models for these random phenomena are referred to as stochastic processes and/or random fields, and Monte Carlo simulation is the only general-purpose tool for solving problems of this type. The use of Monte Carlo simulation requires methods and algorithms to generate samples of the appropriate stochastic model; these samples then become inputs and/or boundary conditions to established deterministic simulation codes. While numerous algorithms and tools currently exist to generate samples of simple random variables and vectors, no cohesive simulation tool yet exists for generating samples of stochastic processes and/or random fields. There are two objectives of this report. First, we provide some theoretical background on stochastic processes and random fields that can be used to model phenomena that are random in space and/or time. Second, we provide simple algorithms that can be used to generate independent samples of general stochastic models. The theory and simulation of random variables and vectors is also reviewed for completeness.
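
    As a one-dimensional illustration of generating independent samples of a stochastic process, the sketch below draws Ornstein-Uhlenbeck paths with an Euler-Maruyama scheme; both the process and the discretization are stand-ins chosen here, not necessarily the algorithms of the report:

```python
import numpy as np

def sample_ou_path(n_steps, dt, theta, sigma, rng):
    """One Euler-Maruyama sample path of an Ornstein-Uhlenbeck process:
    dX = -theta * X * dt + sigma * dW."""
    x = np.zeros(n_steps)
    for i in range(1, n_steps):
        x[i] = x[i - 1] - theta * x[i - 1] * dt + sigma * np.sqrt(dt) * rng.normal()
    return x

rng = np.random.default_rng(3)
# 200 independent sample paths, each of length 1000; these would feed a
# deterministic simulation code as random inputs or boundary conditions
paths = np.array([sample_ou_path(1000, 0.01, 1.0, 0.5, rng) for _ in range(200)])
print(paths.shape)   # → (200, 1000)
```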

  11. Divided Saddle Theory: A New Idea for Rate Constant Calculation.

    Science.gov (United States)

    Daru, János; Stirling, András

    2014-03-11

    We present a theory of rare events and derive an algorithm to obtain rates from postprocessing the numerical data of a free energy calculation and the corresponding committor analysis. The formalism is based on the division of the saddle region of the free energy profile of the rare event into two adjacent segments called saddle domains. The method is built on sampling the dynamics within these regions: auxiliary rate constants are defined for the saddle domains and the absolute forward and backward rates are obtained by proper reweighting. We call our approach divided saddle theory (DST). An important advantage of our approach is that it requires only standard computational techniques which are available in most molecular dynamics codes. We demonstrate the potential of DST numerically on two examples: rearrangement of alanine-dipeptide (CH3CO-Ala-NHCH3) conformers and the intramolecular Cope reaction of the fluxional barbaralane molecule.

  12. Quiver gauge theories and integrable lattice models

    International Nuclear Information System (INIS)

    Yagi, Junya

    2015-01-01

    We discuss connections between certain classes of supersymmetric quiver gauge theories and integrable lattice models from the point of view of topological quantum field theories (TQFTs). The relevant classes include 4d N=1 theories known as brane box and brane tiling models, 3d N=2 and 2d N=(2,2) theories obtained from them by compactification, and 2d N=(0,2) theories closely related to these theories. We argue that their supersymmetric indices carry structures of TQFTs equipped with line operators, and as a consequence, are equal to the partition functions of lattice models. The integrability of these models follows from the existence of an extra dimension in the TQFTs, which emerges after the theories are embedded in M-theory. The Yang-Baxter equation expresses the invariance of supersymmetric indices under Seiberg duality and its lower-dimensional analogs.

  13. A numerical basis for strain-gradient plasticity theory: Rate-independent and rate-dependent formulations

    DEFF Research Database (Denmark)

    Nielsen, Kim Lau; Niordson, Christian Frithiof

    2014-01-01

    A numerical model formulation of the higher order flow theory (rate-independent) by Fleck and Willis [2009. A mathematical basis for strain-gradient plasticity theory – part II: tensorial plastic multiplier. Journal of the Mechanics and Physics of Solids 57, 1045-1057.], that allows for elastic–plastic loading/unloading and the interaction of multiple plastic zones, is proposed. The predicted model response is compared to the corresponding rate-dependent version of visco-plastic origin, and coinciding results are obtained in the limit of small strain-rate sensitivity. First, (i) the evolution of a single plastic zone is analyzed to illustrate the agreement with earlier published results, whereafter examples of (ii) multiple plastic zone interaction and (iii) elastic–plastic loading/unloading are presented. Here, the simple shear problem of an infinite slab constrained between rigid plates is considered.

  14. Kinetic aspects of the embedded clusters: Reaction - Rate Theory

    International Nuclear Information System (INIS)

    Despa, F.; Apostol, M.

    1995-07-01

    The main stages of the cluster growth process are reviewed using Reaction-Rate Theory. The precipitation stage is shown to be a relaxation of the solute towards a cluster state of higher stability. The kinetics of the late stage of phase separation, the coarsening process, is analyzed via an off-centre diffusion mechanism. The theoretical results are compared to experimental ones. (author). 37 refs, 6 figs

  15. Benchmark calculations of thermal reaction rates. I - Quantal scattering theory

    Science.gov (United States)

    Chatfield, David C.; Truhlar, Donald G.; Schwenke, David W.

    1991-01-01

    The thermal rate coefficient for the prototype reaction H + H2 yields H2 + H with zero total angular momentum is calculated by summing, averaging, and numerically integrating state-to-state reaction probabilities calculated by time-independent quantum-mechanical scattering theory. The results are very carefully converged with respect to all numerical parameters in order to provide high-precision benchmark results for confirming the accuracy of new methods and testing their efficiency.
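
    The summing-and-averaging step has a compact form: the thermal rate is a Boltzmann average of the cumulative reaction probability N(E), k(T) = (1/hQ_r) ∫ N(E) e^(−E/k_B·T) dE. The sketch below integrates a schematic smooth-step N(E), not the converged H + H2 probabilities of the paper:

```python
import numpy as np

kB = 3.1668e-6   # Boltzmann constant in hartree/K
h = 2.0 * np.pi  # Planck constant in atomic units (hbar = 1)

def rate_coefficient(T, E, N_E, Q_reactants):
    """k(T) = (1 / (h * Q_r)) * Integral N(E) exp(-E / kB T) dE,
    i.e. a Boltzmann average of the cumulative reaction probability N(E)."""
    integrand = N_E * np.exp(-E / (kB * T))
    # trapezoidal rule over the energy grid
    integral = float(np.sum((integrand[1:] + integrand[:-1]) * np.diff(E)) / 2.0)
    return integral / (h * Q_reactants)

# Schematic N(E): smooth step opening at an assumed 0.015 hartree barrier
# (illustrative only -- not the converged H + H2 probabilities of the paper)
E = np.linspace(0.0, 0.1, 2000)
N = 1.0 / (1.0 + np.exp(-(E - 0.015) / 0.002))
k300 = rate_coefficient(300.0, E, N, Q_reactants=1.0)
k500 = rate_coefficient(500.0, E, N, Q_reactants=1.0)
print(k500 > k300)   # → True: the thermal rate grows with temperature
```

    Convergence in the paper amounts to refining the energy grid and the state-to-state probabilities behind N(E) until k(T) stops changing to the quoted precision.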

  16. New Pathways between Group Theory and Model Theory

    CERN Document Server

    Fuchs, László; Goldsmith, Brendan; Strüngmann, Lutz

    2017-01-01

    This volume focuses on group theory and model theory with a particular emphasis on the interplay of the two areas. The survey papers provide an overview of the developments across group, module, and model theory while the research papers present the most recent study in those same areas. With introductory sections that make the topics easily accessible to students, the papers in this volume will appeal to beginning graduate students and experienced researchers alike. As a whole, this book offers a cross-section view of the areas in group, module, and model theory, covering topics such as DP-minimal groups, Abelian groups, countable 1-transitive trees, and module approximations. The papers in this book are the proceedings of the conference “New Pathways between Group Theory and Model Theory,” which took place February 1-4, 2016, in Mülheim an der Ruhr, Germany, in honor of the editors’ colleague Rüdiger Göbel. This publication is dedicated to Professor Göbel, who passed away in 2014. He was one of th...

  17. Galaxy Alignments: Theory, Modelling & Simulations

    Science.gov (United States)

    Kiessling, Alina; Cacciato, Marcello; Joachimi, Benjamin; Kirk, Donnacha; Kitching, Thomas D.; Leonard, Adrienne; Mandelbaum, Rachel; Schäfer, Björn Malte; Sifón, Cristóbal; Brown, Michael L.; Rassat, Anais

    2015-11-01

    The shapes of galaxies are not randomly oriented on the sky. During the galaxy formation and evolution process, environment has a strong influence, as tidal gravitational fields in the large-scale structure tend to align nearby galaxies. Additionally, events such as galaxy mergers affect the relative alignments of both the shapes and angular momenta of galaxies throughout their history. These "intrinsic galaxy alignments" are known to exist, but are still poorly understood. This review will offer a pedagogical introduction to the current theories that describe intrinsic galaxy alignments, including the apparent difference in intrinsic alignment between early- and late-type galaxies and the latest efforts to model them analytically. It will then describe the ongoing efforts to simulate intrinsic alignments using both N-body and hydrodynamic simulations. Due to the relative youth of this field, there is still much to be done to understand intrinsic galaxy alignments and this review summarises the current state of the field, providing a solid basis for future work.

  18. Applications of model theory to functional analysis

    CERN Document Server

    Iovino, Jose

    2014-01-01

    During the last two decades, methods that originated within mathematical logic have exhibited powerful applications to Banach space theory, particularly set theory and model theory. This volume constitutes the first self-contained introduction to techniques of model theory in Banach space theory. The area of research has grown rapidly since this monograph's first appearance, but much of this material is still not readily available elsewhere. For instance, this volume offers a unified presentation of Krivine's theorem and the Krivine-Maurey theorem on stable Banach spaces, with emphasis on the

  19. Death Rates in the Calorie Model

    Directory of Open Access Journals (Sweden)

    Martin Machay

    2016-01-01

    The Calorie model unifies the Classical demand and the supply in the food market, and hence solves the major problem of the Classical stationary state. It is, hence, a formalization of the Classical theory of population. The model does not reflect the imperfections of reality mentioned by Malthus himself. The aim of this brief paper is to relax some of the strong assumptions of the Calorie model to make it more realistic. As the results show, the political economists were correct: death resulting from malnutrition can occur well before the stationary state itself. Moreover, progressive and retrograde movements can easily be described by the death rate derived in the paper. JEL Classification: J11, Q11, Q15, Q21, Y90.

  20. Attaining the rate-independent limit of a rate-dependent strain gradient plasticity theory

    DEFF Research Database (Denmark)

    El-Naaman, Salim Abdallah; Nielsen, Kim Lau; Niordson, Christian Frithiof

    2016-01-01

    The existence of characteristic strain rates in rate-dependent material models, corresponding to rate-independent model behavior, is studied within a back stress based rate-dependent higher order strain gradient crystal plasticity model. Such characteristic rates have recently been observed for steady-state processes, and the present study aims to demonstrate that the observations in fact unearth a more widespread phenomenon. In this work, two newly proposed back stress formulations are adopted to account for the strain gradient effects in the single slip simple shear case, and characteristic rates for a selected quantity are identified through numerical analysis. Evidently, the concept of a characteristic rate, within the rate-dependent material models, may help unlock an otherwise inaccessible parameter space.

  1. Theories, Models and Methodology in Writing Research

    NARCIS (Netherlands)

    Rijlaarsdam, Gert; Bergh, van den Huub; Couzijn, Michel

    1996-01-01

    Theories, Models and Methodology in Writing Research describes the current state of the art in research on written text production. The chapters in the first part offer contributions to the creation of new theories and models for writing processes. The second part examines specific elements of the

  2. The Friction Theory for Viscosity Modeling

    DEFF Research Database (Denmark)

    Cisneros, Sergio; Zeberg-Mikkelsen, Claus Kjær; Stenby, Erling Halfdan

    2001-01-01

    In this work the one-parameter friction theory (f-theory) general models have been extended to the viscosity prediction and modeling of characterized oils. It is demonstrated that these simple models, which take advantage of the repulsive and attractive pressure terms of cubic equations of state such as the SRK, PR and PRSV, can provide accurate viscosity prediction and modeling of characterized oils. In the case of light reservoir oils, whose properties are close to those of normal alkanes, the one-parameter f-theory general models can predict the viscosity of these fluids with good accuracy. Yet, in the case when experimental information is available, a more accurate modeling can be obtained by means of a simple tuning procedure. A tuned f-theory general model can deliver highly accurate viscosity modeling above the saturation pressure and good prediction of the liquid-phase viscosity at pressures…

  3. What determines crime rates? An empirical test of integrated economic and sociological theories of criminal behavior

    NARCIS (Netherlands)

    Engelen, Peter Jan; Lander, Michel W.; van Essen, Marc

    Research on crime has by no means reached a definitive conclusion on which factors are related to crime rates. We contribute to the crime literature by providing an integrated empirical model of economic and sociological theories of criminal behavior and by using a very comprehensive set of

  4. A quantitative theory of solid tumor growth, metabolic rate and vascularization.

    Directory of Open Access Journals (Sweden)

    Alexander B Herman

    The relationships between cellular, structural and dynamical properties of tumors have traditionally been studied separately. Here, we construct a quantitative, predictive theory of solid tumor growth, metabolic rate, vascularization and necrosis that integrates the relationships between these properties. To accomplish this, we develop a comprehensive theory that describes the interface and integration of the tumor vascular network and resource supply with the cardiovascular system of the host. Our theory enables a quantitative understanding of how cells, tissues, and vascular networks act together across multiple scales by building on recent theoretical advances in modeling both healthy vasculature and the detailed processes of angiogenesis and tumor growth. The theory explicitly relates tumor vascularization and growth to metabolic rate, and yields extensive predictions for tumor properties, including growth rates, metabolic rates, degree of necrosis, blood flow rates and vessel sizes. Besides these quantitative predictions, we explain how growth rates depend on capillary density and metabolic rate, and why similar tumors grow slower and occur less frequently in larger animals, shedding light on Peto's paradox. Various implications for potential therapeutic strategies and further research are discussed.

  5. Targeting the Real Exchange Rate; Theory and Evidence

    OpenAIRE

    Carlos A. Végh Gramont; Guillermo Calvo; Carmen Reinhart

    1994-01-01

    This paper presents a theoretical and empirical analysis of policies aimed at setting a more depreciated level of the real exchange rate. An intertemporal optimizing model suggests that, in the absence of changes in fiscal policy, a more depreciated level of the real exchange can only be attained temporarily. This can be achieved by means of higher inflation and/or higher real interest rates, depending on the degree of capital mobility. Evidence for Brazil, Chile, and Colombia supports the mo...

  6. Crisis in Context Theory: An Ecological Model

    Science.gov (United States)

    Myer, Rick A.; Moore, Holly B.

    2006-01-01

    This article outlines a theory for understanding the impact of a crisis on individuals and organizations. Crisis in context theory (CCT) is grounded in an ecological model and based on literature in the field of crisis intervention and on personal experiences of the authors. A graphic representation denotes key components and premises of CCT,…

  7. Efficiency and credit ratings: a permutation-information-theory analysis

    International Nuclear Information System (INIS)

    Bariviera, Aurelio Fernandez; Martinez, Lisana B; Zunino, Luciano; Belén Guercio, M; Rosso, Osvaldo A

    2013-01-01

The role of credit rating agencies has been under severe scrutiny after the subprime crisis. In this paper we explore the relationship between credit ratings and the informational efficiency of a sample of thirty-nine corporate bonds of US oil and energy companies from April 2008 to November 2012. For this purpose we use a powerful statistical tool, relatively new in the financial literature: the complexity–entropy causality plane. This representation space allows us to graphically classify the different bonds according to their degree of informational efficiency. We find that this classification agrees with the credit ratings assigned by Moody’s. In particular, we detect the formation of two clusters, which correspond to the global categories of investment and speculative grades. Regarding the latter cluster, two subgroups reflect distinct levels of efficiency. Additionally, we also find an intriguing absence of correlation between informational efficiency and firm characteristics. This allows us to conclude that the proposed permutation-information-theory approach provides an alternative practical way to justify bond classification. (paper)
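The entropy coordinate of the complexity–entropy causality plane is built from Bandt–Pompe ordinal patterns. The following is a generic sketch of normalized permutation entropy (standard definition, not the paper's exact pipeline or data):

```python
import math
from collections import Counter

def permutation_entropy(series, d=3):
    """Normalized Bandt-Pompe permutation entropy: 0 for a fully ordered
    series, 1 for a fully random one. Generic sketch, embedding delay 1."""
    # Map each length-d window to its ordinal pattern (argsort of values).
    patterns = Counter(
        tuple(sorted(range(d), key=lambda k: series[i + k]))
        for i in range(len(series) - d + 1)
    )
    n = sum(patterns.values())
    h = -sum((c / n) * math.log(c / n) for c in patterns.values())
    return h / math.log(math.factorial(d))  # normalize by log(d!)

# Alternating 0,1,0,1,... yields exactly two equiprobable patterns,
# so the normalized entropy is ln(2)/ln(6) ~= 0.387.
print(permutation_entropy([i % 2 for i in range(100)]))
```

Efficient (random-walk-like) return series push this entropy toward 1, which is what places bonds near the "efficient" region of the plane.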

  8. Constraint theory multidimensional mathematical model management

    CERN Document Server

    Friedman, George J

    2017-01-01

    Packed with new material and research, this second edition of George Friedman’s bestselling Constraint Theory remains an invaluable reference for all engineers, mathematicians, and managers concerned with modeling. As in the first edition, this text analyzes the way Constraint Theory employs bipartite graphs and presents the process of locating the “kernel of constraint” trillions of times faster than brute-force approaches, determining model consistency and computational allowability. Unique in its abundance of topological pictures of the material, this book balances left- and right-brain perceptions to provide a thorough explanation of multidimensional mathematical models. Much of the extended material in this new edition also comes from Phan Phan’s PhD dissertation in 2011, titled “Expanding Constraint Theory to Determine Well-Posedness of Large Mathematical Models.” Praise for the first edition: "Dr. George Friedman is indisputably the father of the very powerful methods of constraint theory...

  9. Staircase Models from Affine Toda Field Theory

    CERN Document Server

    Dorey, P; Dorey, Patrick; Ravanini, Francesco

    1993-01-01

    We propose a class of purely elastic scattering theories generalising the staircase model of Al. B. Zamolodchikov, based on the affine Toda field theories for simply-laced Lie algebras g=A,D,E at suitable complex values of their coupling constants. Considering their Thermodynamic Bethe Ansatz equations, we give analytic arguments in support of a conjectured renormalisation group flow visiting the neighbourhood of each W_g minimal model in turn.

  10. Reconstructing bidimensional scalar field theory models

    International Nuclear Information System (INIS)

    Flores, Gabriel H.; Svaiter, N.F.

    2001-07-01

In this paper we review how to reconstruct scalar field theories in two-dimensional spacetime starting from solvable Schrödinger equations. Three different Schrödinger potentials are analyzed. We obtain two new models starting from the Morse and Scarf II hyperbolic potentials: the U(θ) = θ²ln²(θ²) model and the U(θ) = θ²cos²(ln(θ²)) model, respectively. (author)

  11. Short-run Exchange-Rate Dynamics: Theory and Evidence

    DEFF Research Database (Denmark)

    Carlson, John A.; Dahl, Christian Møller; Osler, Carol L.

    Recent research has revealed a wealth of information about the microeconomics of currency markets and thus the determination of exchange rates at short horizons. This information is valuable to us as scientists since, like evidence of macroeconomic regularities, it can provide critical guidance...... of currency markets, it accurately reflects the constraints and objectives faced by the major participants, and it fits key stylized facts concerning returns and order flow. With respect to macroeconomics, the model is consistent with most of the major puzzles that have emerged under floating rates....

  12. A course on basic model theory

    CERN Document Server

    Sarbadhikari, Haimanti

    2017-01-01

This self-contained book is an exposition of the fundamental ideas of model theory. It presents the necessary background from logic, set theory and other topics of mathematics. Only some degree of mathematical maturity and willingness to assimilate ideas from diverse areas are required. The book can be used for both teaching and self-study, ideally over two semesters. It is primarily aimed at graduate students in mathematical logic who want to specialise in model theory. However, the first two chapters constitute the first introduction to the subject and can be covered in a one-semester course for senior undergraduate students in mathematical logic. The book is also suitable for researchers who wish to use model theory in their work.

  13. Gauge theories and integrable lattice models

    International Nuclear Information System (INIS)

    Witten, E.

    1989-01-01

    Investigations of new knot polynomials discovered in the last few years have shown them to be intimately connected with soluble models of two dimensional lattice statistical mechanics. In this paper, these results, which in time may illuminate the whole question of why integrable lattice models exist, are reconsidered from the point of view of three dimensional gauge theory. Expectation values of Wilson lines in three dimensional Chern-Simons gauge theories can be computed by evaluating the partition functions of certain lattice models on finite graphs obtained by projecting the Wilson lines to the plane. The models in question - previously considered in both the knot theory and statistical mechanics literature - are IRF models in which the local Boltzmann weights are the matrix elements of braiding matrices in rational conformal field theories. These matrix elements, in turn, can be represented in three dimensional gauge theory in terms of the expectation value of a certain tetrahedral configuration of Wilson lines. This representation makes manifest a surprising symmetry of the braiding matrix elements in conformal field theory. (orig.)

  14. Theory and modeling of active brazing.

    Energy Technology Data Exchange (ETDEWEB)

    van Swol, Frank B.; Miller, James Edward; Lechman, Jeremy B.; Givler, Richard C.

    2013-09-01

Active brazes have been used for many years to produce bonds between metal and ceramic objects. By including a relatively small amount of a reactive additive in the braze, one seeks to improve the wetting and spreading behavior of the braze. The additive modifies the substrate, either by a chemical surface reaction or possibly by alloying. By its nature, the joining process with active brazes is a complex nonequilibrium, non-steady-state process that couples chemical reaction and reactant and product diffusion to the rheology and wetting behavior of the braze. Most of these subprocesses take place in the interfacial region, and most are difficult to access by experiment. To improve control over the brazing process, one requires a better understanding of the melting of the active braze, the rate of the chemical reaction, reactant and product diffusion rates, and the nonequilibrium composition-dependent surface tension as well as the viscosity. This report identifies ways in which modeling and theory can assist in improving our understanding.

  15. Quantum theory of enhanced unimolecular reaction rates below the ergodicity threshold

    International Nuclear Information System (INIS)

    Leitner, David M.; Wolynes, Peter G.

    2006-01-01

    A variety of unimolecular reactions exhibit measured rates that exceed Rice-Ramsperger-Kassel-Marcus (RRKM) predictions. We show using the local random matrix theory (LRMT) of vibrational energy flow how the quantum localization of the vibrational states of a molecule, by violating the ergodicity assumption, can give rise to such an enhancement of the apparent reaction rate. We present an illustrative calculation using LRMT for a model 12-vibrational mode organic molecule to show that below the ergodicity threshold the reaction rate may exceed many times the RRKM prediction due to quantum localization of vibrational states
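For context, the RRKM rate that the LRMT-corrected rates are compared against has the standard microcanonical form (textbook expression, not taken from this record):

```latex
k_{\mathrm{RRKM}}(E) \;=\; \frac{N^{\ddagger}(E - E_0)}{h\,\rho(E)},
```

where N‡(E − E₀) is the sum of states of the transition state, ρ(E) is the reactant density of vibrational states, and h is Planck's constant. Loosely, when quantum localization prevents full ergodic exploration of the vibrational states counted in ρ(E), the effective density of states feeding the reaction coordinate is smaller, so the apparent rate can exceed this RRKM prediction.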

  16. Adult Attachment Ratings (AAR): an item response theory analysis.

    Science.gov (United States)

    Pilkonis, Paul A; Kim, Yookyung; Yu, Lan; Morse, Jennifer Q

    2014-01-01

The Adult Attachment Ratings (AAR) include 3 scales for anxious, ambivalent attachment (excessive dependency, interpersonal ambivalence, and compulsive care-giving), 3 for avoidant attachment (rigid self-control, defensive separation, and emotional detachment), and 1 for secure attachment. The scales include items (ranging from 6 to 16 in their original form) scored by raters using a 3-point format (0 = absent, 1 = present, and 2 = strongly present) and summed to produce a total score. Item response theory (IRT) analyses were conducted with data from 414 participants recruited from psychiatric outpatient, medical, and community settings to identify the most informative items from each scale. The IRT results allowed us to shorten the scales to 5-item versions that are more precise and easier to rate because of their brevity. In general, the effective range of measurement for the scales was 0 to +2 SDs for each of the attachment constructs; that is, from average to high levels of attachment problems. Evidence for convergent and discriminant validity of the scales was investigated by comparing them with the Experiences of Close Relationships-Revised (ECR-R) scale and the Kobak Attachment Q-sort. The best consensus among self-reports on the ECR-R, informant ratings on the ECR-R, and expert judgments on the Q-sort and the AAR emerged for anxious, ambivalent attachment. Given the good psychometric characteristics of the scale for secure attachment, however, this measure alone might provide a simple alternative to more elaborate procedures for some measurement purposes. Conversion tables are provided for the 7 scales to facilitate transformation from raw scores to IRT-calibrated (theta) scores.
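The "effective range of measurement" idea above comes from item information functions. A minimal sketch using the generic 2PL model (a simplification; the AAR items are actually 3-category, so a graded-response model would be the closer fit). Parameter values are hypothetical:

```python
import math

def p_2pl(theta, a, b):
    """2PL probability of endorsing an item with discrimination a and
    difficulty b, at latent trait level theta (generic IRT formula)."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def item_information(theta, a, b):
    # Fisher information of a 2PL item: a^2 * P * (1 - P), maximized at
    # theta = b -- items with b between 0 and +2 SD measure "average to
    # high" levels of the trait most precisely.
    p = p_2pl(theta, a, b)
    return a * a * p * (1.0 - p)

print(item_information(1.0, a=1.5, b=1.0))  # peak information a^2/4 = 0.5625
```

Summing item information across a scale and keeping the most informative items is essentially how the 5-item short forms are selected.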

  17. Cluster model in reaction theory

    International Nuclear Information System (INIS)

    Adhikari, S.K.

    1979-01-01

    A recent work by Rosenberg on cluster states in reaction theory is reexamined and generalized to include energies above the threshold for breakup into four composite fragments. The problem of elastic scattering between two interacting composite fragments is reduced to an equivalent two-particle problem with an effective potential to be determined by extremum principles. For energies above the threshold for breakup into three or four composite fragments effective few-particle potentials are introduced and the problem is reduced to effective three- and four-particle problems. The equivalent three-particle equation contains effective two- and three-particle potentials. The effective potential in the equivalent four-particle equation has two-, three-, and four-body connected parts and a piece which has two independent two-body connected parts. In the equivalent three-particle problem we show how to include the effect of a weak three-body potential perturbatively. In the equivalent four-body problem an approximate simple calculational scheme is given when one neglects the four-particle potential the effect of which is presumably very small

  18. Martingale Regressions for a Continuous Time Model of Exchange Rates

    OpenAIRE

    Guo, Zi-Yi

    2017-01-01

    One of the daunting problems in international finance is the weak explanatory power of existing theories of the nominal exchange rates, the so-called “foreign exchange rate determination puzzle”. We propose a continuous-time model to study the impact of order flow on foreign exchange rates. The model is estimated by a newly developed econometric tool based on a time-change sampling from calendar to volatility time. The estimation results indicate that the effect of order flow on exchange rate...

  19. Economic Modelling in Institutional Economic Theory

    Directory of Open Access Journals (Sweden)

    Wadim Strielkowski

    2017-06-01

Full Text Available Our paper is centered on the formation of a theory of institutional modelling that includes principles and ideas reflecting the laws of societal development within the framework of institutional economic theory. We scrutinize and discuss the scientific principles of this institutional modelling that are increasingly postulated by the classics of institutional theory and find their way into the basics of institutional economics. We propose scientific ideas concerning new innovative approaches to institutional modelling. These ideas have been devised and developed on the basis of the results of our own original design, as well as on the formalisation and measurement of economic institutions, their functioning and evolution. Moreover, we consider the applied aspects of the institutional theory of modelling and employ them in our research for formalizing our results and maximising the practical outcome of our paper. Our results and findings might be useful for researchers and stakeholders searching for a systematic and comprehensive description of institutional-level modelling, the principles involved in this process and the main provisions of the institutional theory of economic modelling.

  20. Randomized Item Response Theory Models

    NARCIS (Netherlands)

    Fox, Gerardus J.A.

    2005-01-01

The randomized response (RR) technique is often used to obtain answers to sensitive questions. A new method is developed to measure latent variables using the RR technique, because direct questioning leads to biased results. Within the RR technique, the probability of the true response is modeled by

  1. Graphical Model Theory for Wireless Sensor Networks

    International Nuclear Information System (INIS)

    Davis, William B.

    2002-01-01

    Information processing in sensor networks, with many small processors, demands a theory of computation that allows the minimization of processing effort, and the distribution of this effort throughout the network. Graphical model theory provides a probabilistic theory of computation that explicitly addresses complexity and decentralization for optimizing network computation. The junction tree algorithm, for decentralized inference on graphical probability models, can be instantiated in a variety of applications useful for wireless sensor networks, including: sensor validation and fusion; data compression and channel coding; expert systems, with decentralized data structures, and efficient local queries; pattern classification, and machine learning. Graphical models for these applications are sketched, and a model of dynamic sensor validation and fusion is presented in more depth, to illustrate the junction tree algorithm

  2. Topological quantum theories and integrable models

    International Nuclear Information System (INIS)

    Keski-Vakkuri, E.; Niemi, A.J.; Semenoff, G.; Tirkkonen, O.

    1991-01-01

    The path-integral generalization of the Duistermaat-Heckman integration formula is investigated for integrable models. It is shown that for models with periodic classical trajectories the path integral reduces to a form similar to the finite-dimensional Duistermaat-Heckman integration formula. This provides a relation between exactness of the stationary-phase approximation and Morse theory. It is also argued that certain integrable models can be related to topological quantum theories. Finally, it is found that in general the stationary-phase approximation presumes that the initial and final configurations are in different polarizations. This is exemplified by the quantization of the SU(2) coadjoint orbit

  3. Self Modeling: Expanding the Theories of Learning

    Science.gov (United States)

    Dowrick, Peter W.

    2012-01-01

    Self modeling (SM) offers a unique expansion of learning theory. For several decades, a steady trickle of empirical studies has reported consistent evidence for the efficacy of SM as a procedure for positive behavior change across physical, social, educational, and diagnostic variations. SM became accepted as an extreme case of model similarity;…

  4. Using SAS PROC MCMC for Item Response Theory Models

    Science.gov (United States)

    Ames, Allison J.; Samonte, Kelli

    2015-01-01

    Interest in using Bayesian methods for estimating item response theory models has grown at a remarkable rate in recent years. This attentiveness to Bayesian estimation has also inspired a growth in available software such as WinBUGS, R packages, BMIRT, MPLUS, and SAS PROC MCMC. This article intends to provide an accessible overview of Bayesian…

  5. Glass Durability Modeling, Activated Complex Theory (ACT)

    International Nuclear Information System (INIS)

    CAROL, JANTZEN

    2005-01-01

The most important requirement for high-level waste glass acceptance for disposal in a geological repository is its chemical durability, expressed as a glass dissolution rate. During the early stages of glass dissolution in near-static conditions that represent a repository disposal environment, a gel layer resembling a membrane forms on the glass surface, through which ions exchange between the glass and the leachant. The hydrated gel layer exhibits acid/base properties, which are manifested as the pH dependence of the thickness and nature of the gel layer. The gel layer has been found to age into either clay mineral assemblages or zeolite mineral assemblages. The formation of one phase preferentially over the other has been experimentally related to changes in the pH of the leachant and to the relative amounts of Al3+ and Fe3+ in a glass. The formation of clay mineral assemblages on the leached glass surface layers (lower-pH, Fe3+-rich glasses) causes the dissolution rate to slow to a long-term steady-state rate. The formation of zeolite mineral assemblages (higher-pH, Al3+-rich glasses) on leached glass surface layers causes the dissolution rate to increase and return to the initial high forward rate. The return to the forward dissolution rate is undesirable for long-term performance of glass in a disposal environment. An investigation into the role of glass stoichiometry, in terms of the quasi-crystalline mineral species in a glass, has shown that the chemistry and structure of the parent glass appear to control the activated surface complexes that form in the leached layers, and these mineral complexes (some Fe3+-rich and some Al3+-rich) play a role in whether clays or zeolites are the dominant species formed on the leached glass surface. The chemistry and structure, in terms of Q distributions of the parent glass, are well represented by the atomic ratios of the glass-forming components. Thus, glass dissolution modeling using simple

  6. Failure and Redemption of Statistical and Nonstatistical Rate Theories in the Hydroboration of Alkenes.

    Science.gov (United States)

    Bailey, Johnathan O; Singleton, Daniel A

    2017-11-08

Our previous work found that canonical forms of transition state theory incorrectly predict the regioselectivity of the hydroboration of propene with BH3 in solution. In response, it has been suggested that alternative statistical and nonstatistical rate theories can adequately account for the selectivity. This paper uses a combination of experimental and theoretical studies to critically evaluate the ability of these rate theories, as well as dynamic trajectories and newly developed localized statistical models, to predict quantitative selectivities and qualitative trends in hydroborations on a broader scale. The hydroboration of a series of terminally substituted alkenes with BH3 was examined experimentally, and a classically unexpected trend is that the selectivity increases as the alkyl chain is lengthened far from the reactive centers. Conventional and variational transition state theories can predict neither the selectivities nor the trends. The canonical competitive nonstatistical model makes somewhat better predictions for some alkenes but fails to predict trends, and it performs poorly with an alkene chosen to test a specific prediction of the model. Added nonstatistical corrections to this model make the predictions worse. Parametrized Rice-Ramsperger-Kassel-Marcus (RRKM)-master equation calculations correctly predict the direction of the trend in selectivity versus alkene size but overpredict its magnitude, and the selectivity with large alkenes remains unpredictable with any parametrization. Trajectory studies in explicit solvent can predict selectivities without parametrization but are impractical for predicting small changes in selectivity. From a lifetime and energy analysis of the trajectories, "localized RRKM-ME" and "competitive localized noncanonical" rate models are suggested as steps toward a general model. These provide the best predictions of the experimental observations and insight into the selectivities.

  7. Security Theorems via Model Theory

    Directory of Open Access Journals (Sweden)

    Joshua Guttman

    2009-11-01

Full Text Available A model-theoretic approach can establish security theorems for cryptographic protocols. Formulas expressing authentication and non-disclosure properties of protocols have a special form: they are quantified implications, for all xs. (phi implies for some ys. psi). Models (interpretations) for these formulas are *skeletons*, partially ordered structures consisting of a number of local protocol behaviors. *Realized* skeletons contain enough local sessions to explain all the behavior, when combined with some possible adversary behaviors. We show two results. (1) If phi is the antecedent of a security goal, then there is a skeleton A_phi such that, for every skeleton B, phi is satisfied in B iff there is a homomorphism from A_phi to B. (2) A protocol enforces for all xs. (phi implies for some ys. psi) iff every realized homomorphic image of A_phi satisfies psi. Hence, to verify a security goal, one can use the Cryptographic Protocol Shapes Analyzer CPSA (TACAS, 2007) to identify minimal realized skeletons, or "shapes," that are homomorphic images of A_phi. If psi holds in each of these shapes, then the goal holds.

  8. Vacation queueing models theory and applications

    CERN Document Server

    Tian, Naishuo

    2006-01-01

    A classical queueing model consists of three parts - arrival process, service process, and queue discipline. However, a vacation queueing model has an additional part - the vacation process which is governed by a vacation policy - that can be characterized by three aspects: 1) vacation start-up rule; 2) vacation termination rule, and 3) vacation duration distribution. Hence, vacation queueing models are an extension of classical queueing theory. Vacation Queueing Models: Theory and Applications discusses systematically and in detail the many variations of vacation policy. By allowing servers to take vacations makes the queueing models more realistic and flexible in studying real-world waiting line systems. Integrated in the book's discussion are a variety of typical vacation model applications that include call centers with multi-task employees, customized manufacturing, telecommunication networks, maintenance activities, etc. Finally, contents are presented in a "theorem and proof" format and it is invaluabl...
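The effect of a vacation policy on waiting times is captured by the classical decomposition result for the M/G/1 queue with multiple vacations (standard queueing-theory formula, not code from the book): the mean wait is the ordinary Pollaczek–Khinchine wait plus the mean residual vacation time.

```python
# Mean waiting time in an M/G/1 queue with multiple (exhaustive) vacations,
# via the classical decomposition: E[W] = lam*E[S^2]/(2*(1-rho)) + E[V^2]/(2*E[V]).
def mg1_vacation_wait(lam, es, es2, ev, ev2):
    """lam: arrival rate; es, es2: first/second moments of service time;
    ev, ev2: first/second moments of vacation duration."""
    rho = lam * es
    assert rho < 1.0, "queue must be stable (rho < 1)"
    w_mg1 = lam * es2 / (2.0 * (1.0 - rho))  # Pollaczek-Khinchine wait
    w_vac = ev2 / (2.0 * ev)                 # mean residual vacation time
    return w_mg1 + w_vac

# Exponential service (mean 1) and exponential vacations (mean 2):
# E[S^2] = 2*1^2 and E[V^2] = 2*2^2.
print(mg1_vacation_wait(lam=0.5, es=1.0, es2=2.0, ev=2.0, ev2=8.0))  # 3.0
```

The vacation term depends only on the vacation-duration distribution, which is why the three aspects of the vacation policy listed above matter independently of the arrival and service processes.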

  9. RATING CREATION FOR PROFESSIONAL EDUCATIONAL ORGANIZATIONS BASED ON THE ITEM RESPONSE THEORY

    Directory of Open Access Journals (Sweden)

    N. E. Erganova

    2016-01-01

Full Text Available The aim of the investigation is to provide theoretical justification for, and to describe the approbation of, measuring the level of educational service provision, the quality of education, and the rating of vocational educational organizations. Methods. The methodological basis of the research comprises the provisions of the system approach, research on the schematization and modeling of pedagogical objects, and the theory of measurement of latent variables. The main research methods applied are analysis, synthesis, comparative analysis, and statistical processing of research results. Results. The paper gives a short comparative analysis of the potential of qualitative approaches and the strong points of the theory of latent variables in evaluating the quality of education and the rating of the investigated object. A technique for measuring the level of educational service provision when constructing a rating of professional educational organizations is stated. Scientific novelty. The pedagogical possibilities of the theory of measurement of latent variables are investigated, and the principles of constructing ratings of professional educational organizations are designated. Practical significance. An operational construct of the latent variable «quality of education» for secondary professional education (SPE), validated in the Perm Territory, is developed; it can form the basis of similar constructs for constructing ratings of professional educational organizations in other regions.

  10. Stochastic interest rates model in compounding | Galadima ...

    African Journals Online (AJOL)

    Stochastic interest rates model in compounding. ... in finance, real estate, insurance, accounting and other areas of business administration. The assumption that future rates are fixed and known with certainty at the beginning of an investment, ...
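The contrast between fixed and stochastic rates can be sketched with a small Monte Carlo of accumulated value under i.i.d. random annual rates (the uniform rate model and all parameter values here are illustrative assumptions, not the paper's model):

```python
import random

# Monte Carlo accumulated value of 1 unit under i.i.d. random annual rates,
# versus the classical assumption that future rates are fixed and known.
def accumulated_value(years, n_paths=100_000, seed=42):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_paths):
        v = 1.0
        for _ in range(years):
            v *= 1.0 + rng.uniform(0.03, 0.07)  # random yearly rate around 5%
        total += v
    return total / n_paths

# By independence, the mean accumulation equals (1.05)**years ~= 1.63 for
# years = 10, matching the deterministic 5% case in expectation -- but
# individual paths spread around it, which is the point of stochastic models.
print(accumulated_value(10))
```

Under convexity (Jensen-type) effects, higher moments of the rate distribution matter for pricing even when the mean matches the deterministic rate.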

  11. Quantum field theory and the standard model

    CERN Document Server

    Schwartz, Matthew D

    2014-01-01

    Providing a comprehensive introduction to quantum field theory, this textbook covers the development of particle physics from its foundations to the discovery of the Higgs boson. Its combination of clear physical explanations, with direct connections to experimental data, and mathematical rigor make the subject accessible to students with a wide variety of backgrounds and interests. Assuming only an undergraduate-level understanding of quantum mechanics, the book steadily develops the Standard Model and state-of-the-art calculation techniques. It includes multiple derivations of many important results, with modern methods such as effective field theory and the renormalization group playing a prominent role. Numerous worked examples and end-of-chapter problems enable students to reproduce classic results and to master quantum field theory as it is used today. Based on a course taught by the author over many years, this book is ideal for an introductory to advanced quantum field theory sequence or for independe...

  12. On the algebraic theory of kink sectors: Application to quantum field theory models and collision theory

    International Nuclear Information System (INIS)

    Schlingemann, D.

    1996-10-01

Several two-dimensional quantum field theory models have more than one vacuum state. An investigation of superselection sectors in two dimensions from an axiomatic point of view suggests that there should also be states, called soliton or kink states, which interpolate different vacua. Familiar quantum field theory models for which the existence of kink states has been proven are the Sine-Gordon and the φ⁴₂-model. In order to establish the existence of kink states for a larger class of models, we investigate the following question: Which sufficient conditions must a pair of vacuum states fulfill so that an interpolating kink state can be constructed? We discuss the problem in the framework of algebraic quantum field theory, which includes, for example, the P(φ)₂-models. We identify a large class of vacuum states, including the vacua of the P(φ)₂-models, the Yukawa₂-like models and special types of Wess-Zumino models, for which there is a natural way to construct an interpolating kink state. In two space-time dimensions, massive particle states are kink states. We apply the Haag-Ruelle collision theory to kink sectors in order to analyze the asymptotic scattering states. We show that for special configurations of n kinks the scattering states describe n freely moving non-interacting particles. (orig.)

  13. Photoionization cross sections and Auger rates calculated by many-body perturbation theory

    International Nuclear Information System (INIS)

    Kelly, H.P.

    1976-01-01

Methods for applying many-body perturbation theory to atomic calculations are discussed, with particular emphasis on the calculation of photoionization cross sections and Auger rates. Topics covered include: Rayleigh–Schrödinger theory; many-body perturbation theory; calculations of photoionization cross sections; and Auger rates.
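As a reminder of the formalism named in this record, the Rayleigh–Schrödinger expansion of a perturbed energy level reads (standard result, not specific to this paper):

```latex
E_n \;=\; E_n^{(0)} + \langle n|V|n\rangle
      + \sum_{m \neq n} \frac{\left|\langle m|V|n\rangle\right|^{2}}{E_n^{(0)} - E_m^{(0)}} + \cdots ,
```

where |n⟩ and E_n^{(0)} are the unperturbed eigenstates and eigenvalues and V is the perturbation. Many-body perturbation theory organizes the corresponding corrections to transition amplitudes (and hence cross sections and Auger rates) diagrammatically.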

  14. Joint Machine Learning and Game Theory for Rate Control in High Efficiency Video Coding.

    Science.gov (United States)

    Gao, Wei; Kwong, Sam; Jia, Yuheng

    2017-08-25

    In this paper, a joint machine learning and game theory modeling (MLGT) framework is proposed for inter frame coding tree unit (CTU) level bit allocation and rate control (RC) optimization in High Efficiency Video Coding (HEVC). First, a support vector machine (SVM) based multi-classification scheme is proposed to improve the prediction accuracy of CTU-level Rate-Distortion (R-D) model. The legacy "chicken-and-egg" dilemma in video coding is proposed to be overcome by the learning-based R-D model. Second, a mixed R-D model based cooperative bargaining game theory is proposed for bit allocation optimization, where the convexity of the mixed R-D model based utility function is proved, and Nash bargaining solution (NBS) is achieved by the proposed iterative solution search method. The minimum utility is adjusted by the reference coding distortion and frame-level Quantization parameter (QP) change. Lastly, intra frame QP and inter frame adaptive bit ratios are adjusted to make inter frames have more bit resources to maintain smooth quality and bit consumption in the bargaining game optimization. Experimental results demonstrate that the proposed MLGT based RC method can achieve much better R-D performances, quality smoothness, bit rate accuracy, buffer control results and subjective visual quality than the other state-of-the-art one-pass RC methods, and the achieved R-D performances are very close to the performance limits from the FixedQP method.

  15. Introduction to zeolite theory and modelling

    NARCIS (Netherlands)

    Santen, van R.A.; Graaf, van de B.; Smit, B.; Bekkum, van H.

    2001-01-01

    A review. Some of the recent advances in zeolite theory and modeling are presented. In particular, the current status of computational chemistry in Brønsted acid zeolite catalysis, molecular dynamics simulations of molecules adsorbed in zeolites, and novel Monte Carlo techniques are discussed to simulate the…

  16. Prospect Theory in the Heterogeneous Agent Model

    Czech Academy of Sciences Publication Activity Database

    Polach, J.; Kukačka, Jiří

    (2018) ISSN 1860-711X R&D Projects: GA ČR(CZ) GBP402/12/G097 Institutional support: RVO:67985556 Keywords: Heterogeneous Agent Model * Prospect Theory * Behavioral finance * Stylized facts Subject RIV: AH - Economics OBOR OECD: Finance Impact factor: 0.931, year: 2016 http://library.utia.cas.cz/separaty/2018/E/kukacka-0488438.pdf

  17. Recursive renormalization group theory based subgrid modeling

    Science.gov (United States)

    Zhou, YE

    1991-01-01

    Advancing the knowledge and understanding of turbulence theory is addressed. Specific problems to be addressed will include studies of subgrid models to understand the effects of unresolved small scale dynamics on the large scale motion which, if successful, might substantially reduce the number of degrees of freedom that need to be computed in turbulence simulation.

  18. Diagrammatic group theory in quark models

    International Nuclear Information System (INIS)

    Canning, G.P.

    1977-05-01

    A simple and systematic diagrammatic method is presented for calculating the numerical factors arising from group theory in quark models: dimensions, Casimir invariants, vector coupling coefficients and especially recoupling coefficients. Some coefficients for the coupling of 3-quark objects are listed for SU(n) and SU(2n). (orig.)

  19. Aligning Grammatical Theories and Language Processing Models

    Science.gov (United States)

    Lewis, Shevaun; Phillips, Colin

    2015-01-01

    We address two important questions about the relationship between theoretical linguistics and psycholinguistics. First, do grammatical theories and language processing models describe separate cognitive systems, or are they accounts of different aspects of the same system? We argue that most evidence is consistent with the one-system view. Second,…

  20. Effects of turbulence on the geometric collision rate of sedimenting droplets. Part 2. Theory and parameterization

    International Nuclear Information System (INIS)

    Ayala, Orlando; Rosa, Bogdan; Wang Lianping

    2008-01-01

    The effect of air turbulence on the geometric collision kernel of cloud droplets can be predicted if the effects of air turbulence on two kinematic pair statistics can be modeled. The first is the average radial relative velocity and the second is the radial distribution function (RDF). A survey of the literature shows that no theory is available for predicting the radial relative velocity of finite-inertia sedimenting droplets in a turbulent flow. In this paper, a theory for the radial relative velocity is developed, using a statistical approach assuming that gravitational sedimentation dominates the relative motion of droplets before collision. In the weak-inertia limit, the theory reveals a new term making a positive contribution to the radial relative velocity resulting from a coupling between sedimentation and air turbulence on the motion of finite-inertia droplets. The theory is compared to the direct numerical simulations (DNS) results in part 1, showing a reasonable agreement with the DNS data for bidisperse cloud droplets. For droplets larger than 30 μm in radius, a nonlinear drag (NLD) can also be included in the theory in terms of an effective inertial response time and an effective terminal velocity. In addition, an empirical model is developed to quantify the RDF. This, together with the theory for radial relative velocity, provides a parameterization for the turbulent geometric collision kernel. Using this integrated model, we find that turbulence could triple the geometric collision kernel, relative to the stagnant air case, for a droplet pair of 10 and 20 μm sedimenting through a cumulus cloud at R_λ = 2×10⁴ and ε = 600 cm² s⁻³. For the self-collisions of 20 μm droplets, the collision kernel depends sensitively on the flow dissipation rate.

  1. Model Uncertainty and Exchange Rate Forecasting

    NARCIS (Netherlands)

    Kouwenberg, R.; Markiewicz, A.; Verhoeks, R.; Zwinkels, R.C.J.

    2017-01-01

    Exchange rate models with uncertain and incomplete information predict that investors focus on a small set of fundamentals that changes frequently over time. We design a model selection rule that captures the current set of fundamentals that best predicts the exchange rate. Out-of-sample tests show

  2. Locating the rate-limiting step for the interaction of hydrogen with Mg(0001) using density-functional theory calculations and rate theory

    DEFF Research Database (Denmark)

    Vegge, Tejs

    2004-01-01

    The dissociation of molecular hydrogen on a Mg(0001) surface and the subsequent diffusion of atomic hydrogen into the magnesium substrate is investigated using Density Functional Theory (DFT) calculations and rate theory. The minimum energy path and corresponding transition states are located using … to be rate-limiting for the ab- and desorption of hydrogen, respectively. Zero-point energy contributions are found to be substantial for the diffusion of atomic hydrogen, but classical rates are still found to be within an order of magnitude at room temperature.
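    The rate-theory step described above can be sketched with a harmonic transition-state-theory rate whose barrier carries a zero-point-energy correction; the prefactor, barrier height, and ZPE shift below are hypothetical placeholders, not values from the paper:

    ```python
    import math

    KB = 8.617333262e-5  # Boltzmann constant in eV/K

    def htst_rate(prefactor_hz, barrier_ev, zpe_shift_ev, temperature_k):
        """Harmonic transition-state-theory rate with a zero-point-energy
        corrected barrier: k = nu * exp(-(E_b + dZPE) / (kB * T))."""
        return prefactor_hz * math.exp(-(barrier_ev + zpe_shift_ev)
                                       / (KB * temperature_k))

    # Hypothetical numbers: 0.5 eV barrier, 10 THz attempt frequency
    k_classical = htst_rate(1e13, 0.50, 0.00, 300.0)
    k_quantum = htst_rate(1e13, 0.50, 0.03, 300.0)  # ZPE raises the barrier
    ratio = k_classical / k_quantum
    ```

    At 300 K a 0.03 eV ZPE shift changes the rate only by a factor of a few, consistent with the abstract's remark that classical rates stay within an order of magnitude at room temperature.
    
    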

  3. A dynamical theory for the Rishon model

    International Nuclear Information System (INIS)

    Harari, H.; Seiberg, N.

    1980-09-01

    We propose a composite model for quarks and leptons based on an exact SU(3)_C×SU(3)_H gauge theory and two fundamental J=1/2 fermions: a charged T-rishon and a neutral V-rishon. Quarks, leptons and W-bosons are SU(3)_H-singlet composites of rishons. A dynamically broken effective SU(3)_C×SU(2)_L×SU(2)_R×U(1)_(B-L) gauge theory emerges at the composite level. The theory is "natural", anomaly-free, has no fundamental scalar particles, and describes at least three generations of quarks and leptons. Several "technicolor" mechanisms are automatically present. (Author)

  4. Polyacetylene and relativistic field-theory models

    International Nuclear Information System (INIS)

    Bishop, A.R.; Campbell, D.K.; Fesser, K.

    1981-01-01

    Connections between continuum, mean-field, adiabatic Peierls-Froehlich theory in the half-filled band limit and known field theory results are discussed. Particular attention is given to the phi^4 model and to the solvable N = 2 Gross-Neveu model. The latter is equivalent to the Peierls system at a static, semi-classical level. Based on this equivalence we note the prediction of both kink and polaron solitons in models of trans-(CH)_x. Polarons in cis-(CH)_x are compared with those in the trans isomer. Optical absorption from polarons is described, and general experimental consequences of polarons in (CH)_x and other conjugated polymers are discussed.

  5. Micromechanical modeling of rate-dependent behavior of Connective tissues.

    Science.gov (United States)

    Fallah, A; Ahmadian, M T; Firozbakhsh, K; Aghdam, M M

    2017-03-07

    In this paper, a constitutive and micromechanical model for prediction of the rate-dependent behavior of connective tissues (CTs) is presented. Connective tissues are considered as nonlinear viscoelastic materials. The rate-dependent behavior of CTs is incorporated into the model using the well-known quasi-linear viscoelasticity (QLV) theory. A planar wavy representative volume element (RVE) is considered based on histological evidence of the tissue microstructure. The model parameters are identified from experiments available in the literature. The constitutive model is introduced into ABAQUS by means of a UMAT subroutine. Results show that the monotonic uniaxial test predictions of the model at different strain rates for rat tail tendon (RTT) and human patellar tendon (HPT) are in good agreement with experimental data. Results of an incremental stress-relaxation test are also presented to investigate both the instantaneous and viscoelastic behavior of connective tissues. Copyright © 2017 Elsevier Ltd. All rights reserved.
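    The QLV theory mentioned above writes stress as a hereditary integral, sigma(t) = ∫ G(t-s) d(sigma_e)/ds ds, with a reduced relaxation function G and an instantaneous elastic response sigma_e. A minimal numerical sketch, using a one-term Prony series and an exponential elastic law with entirely hypothetical parameter values (not the paper's fitted constants), is:

    ```python
    import math

    def qlv_stress(times, strain, A, B, g_inf, g1, tau):
        """Discretized QLV hereditary integral with a one-term Prony
        reduced relaxation function G(t) = g_inf + g1*exp(-t/tau) and
        exponential elastic response sigma_e(eps) = A*(exp(B*eps) - 1)."""
        sigma_e = [A * (math.exp(B * e) - 1.0) for e in strain]
        stress = []
        for i, t in enumerate(times):
            s = 0.0
            for j in range(1, i + 1):
                d_sigma_e = sigma_e[j] - sigma_e[j - 1]   # elastic increment
                s_mid = 0.5 * (times[j] + times[j - 1])   # midpoint rule
                s += (g_inf + g1 * math.exp(-(t - s_mid) / tau)) * d_sigma_e
            stress.append(s)
        return stress

    # Constant-rate ramp to 10% strain over 1 s, then hold (stress relaxation)
    dt = 0.01
    times = [i * dt for i in range(501)]
    strain = [min(0.10, 0.10 * t) for t in times]
    stress = qlv_stress(times, strain, A=1.0, B=10.0, g_inf=0.5, g1=0.5, tau=0.5)
    ```

    During the hold, the stress relaxes toward g_inf times the instantaneous elastic stress, the long-time equilibrium implied by the Prony kernel.
    
    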

  6. Modeling inflation rates and exchange rates in Ghana: application of multivariate GARCH models.

    Science.gov (United States)

    Nortey, Ezekiel Nn; Ngoh, Delali D; Doku-Amponsah, Kwabena; Ofori-Boateng, Kenneth

    2015-01-01

    This paper investigates the volatility and conditional relationships among inflation rates, exchange rates and interest rates, and constructs models using multivariate GARCH DCC and BEKK specifications on Ghanaian data from January 1990 to December 2013. The study revealed that the cumulative depreciation of the cedi against the US dollar from 1990 to 2013 is 7,010.2% and the yearly weighted depreciation of the cedi against the US dollar for the period is 20.4%. There was evidence that a stable inflation rate does not imply that exchange rates and interest rates will be stable. Rather, when the cedi performs well on the forex market, inflation rates and interest rates react positively and become stable in the long run. The BEKK model is robust for modelling and forecasting the volatility of inflation rates, exchange rates and interest rates. The DCC model is robust for modelling the conditional and unconditional correlations among inflation rates, exchange rates and interest rates. The BEKK model, which forecasted high exchange rate volatility for the year 2014, is very robust for modelling the exchange rates in Ghana. The mean equation of the DCC model is also robust for forecasting inflation rates in Ghana.
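    The building block of the DCC and BEKK specifications above is the univariate GARCH(1,1) recursion for conditional variance. A stdlib-only simulation sketch (illustrative parameters, not estimates from the Ghana data) shows the volatility clustering these models capture:

    ```python
    import math
    import random

    def simulate_garch(n, omega, alpha, beta, seed=0):
        """Simulate a GARCH(1,1) return series:
        sigma_t^2 = omega + alpha * r_{t-1}^2 + beta * sigma_{t-1}^2,
        r_t = sigma_t * z_t with z_t ~ N(0, 1)."""
        rng = random.Random(seed)
        var = omega / (1.0 - alpha - beta)  # start at unconditional variance
        returns = []
        for _ in range(n):
            r = math.sqrt(var) * rng.gauss(0.0, 1.0)
            returns.append(r)
            var = omega + alpha * r * r + beta * var
        return returns

    rets = simulate_garch(5000, omega=0.05, alpha=0.10, beta=0.85, seed=42)
    sample_var = sum(r * r for r in rets) / len(rets)
    # Unconditional variance is omega / (1 - alpha - beta) = 1.0 here
    ```

    The multivariate DCC and BEKK models in the paper extend this recursion to a full conditional covariance matrix; estimating them in practice requires a dedicated package rather than this sketch.
    
    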

  7. Generalized Rate Theory for Void and Bubble Swelling and its Application to Plutonium Metal Alloys

    Energy Technology Data Exchange (ETDEWEB)

    Allen, P. G. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Wolfer, W. G. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2015-10-16

    In the classical rate theory for void swelling, vacancies and self-interstitials are produced by radiation in equal numbers; in addition, thermal vacancies are also generated at the sinks, primarily at edge dislocations, at voids, and at grain boundaries. In contrast, due to the high formation energy of self-interstitials in normal metals and alloys, their thermal generation is negligible, as pointed out by Bullough and Perrin. However, recent DFT calculations of the formation energy of self-interstitial atoms in bcc metals have revealed that the sum of the formation and migration energies for self-interstitial atoms (SIA) is of the same order of magnitude as for vacancies. The ratio of the activation energies for thermal generation of SIA and vacancies is presented. For fcc metals, this ratio is around three, but for bcc metals it is around 1.5. Reviewing theoretical predictions of point defect properties in δ-Pu, this ratio could possibly be less than one. As a result, thermal generation of SIA in bcc metals and in plutonium must be taken into consideration when modeling the growth of voids and of helium bubbles, and the classical rate theory (CRT) for void and bubble swelling must be extended to a generalized rate theory (GRT).
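    The ratio the abstract refers to is simply the sum of formation and migration energies for self-interstitials divided by the same sum for vacancies. A one-function sketch, with hypothetical bcc-like energies chosen only to land near the ~1.5 value quoted above (not DFT values from the report):

    ```python
    def thermal_generation_ratio(e_f_sia, e_m_sia, e_f_vac, e_m_vac):
        """Ratio of activation energies for thermal generation of
        self-interstitial atoms (SIA) versus vacancies:
        (Ef_SIA + Em_SIA) / (Ef_vac + Em_vac)."""
        return (e_f_sia + e_m_sia) / (e_f_vac + e_m_vac)

    # Hypothetical energies in eV, for illustration only
    ratio_bcc = thermal_generation_ratio(2.5, 0.5, 1.3, 0.7)  # -> 1.5
    ```

    A ratio near one, as suggested for δ-Pu, means thermal SIA generation competes with thermal vacancy generation and can no longer be dropped from the rate equations.
    
    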

  8. Gross domestic product growth rates as confined Lévy flights: Towards a unifying theory of economic growth rate fluctuations

    Science.gov (United States)

    Lera, Sandro Claudio; Sornette, Didier

    2018-01-01

    A model that combines economic growth rate fluctuations at the microscopic and macroscopic levels is presented. At the microscopic level, firms are growing at different rates while also being exposed to idiosyncratic shocks at the firm and sector levels. We describe such fluctuations as independent Lévy-stable fluctuations, varying over multiple orders of magnitude. These fluctuations are aggregated and measured at the macroscopic level in averaged economic output quantities such as GDP. A fundamental question is thereby to what extent individual firm size fluctuations can have a noticeable impact on the overall economy. We argue that this question can be answered by considering the Lévy fluctuations as embedded in a steep confining potential well, ensuring nonlinear mean-reversal behavior, without having to rely on microscopic details of the system. The steepness of the potential well directly controls the extent to which idiosyncratic shocks to firms and sectors are damped at the level of the economy. Additionally, the theory naturally accounts for business cycles, represented in terms of a bimodal economic output distribution and thus connects two so far unrelated fields in economics. By analyzing 200 years of U.S. gross domestic product growth rates, we find that the model is in good agreement with the data.
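    The central mechanism above, heavy-tailed shocks confined by a steep potential well, can be sketched with a stochastic simulation: symmetric alpha-stable noise (sampled with the Chambers-Mallows-Stuck algorithm) driving a particle in a quartic well. All parameters are illustrative; the tamed-Euler drift is a standard numerical stabilization for the superlinear restoring force, not part of the paper's model:

    ```python
    import math
    import random

    def stable_sample(alpha, rng):
        """Chambers-Mallows-Stuck sampler for a symmetric alpha-stable variate."""
        v = rng.uniform(-math.pi / 2, math.pi / 2)
        w = rng.expovariate(1.0)
        return (math.sin(alpha * v) / math.cos(v) ** (1.0 / alpha)
                * (math.cos(v - alpha * v) / w) ** ((1.0 - alpha) / alpha))

    def confined_levy_path(n, alpha, dt, seed=0):
        """Levy flights in a steep quartic well U(x) = x^4 / 4:
        dx = -x^3 dt + dL_alpha. The tamed drift keeps the explicit
        scheme stable when a heavy-tailed jump lands far from the origin."""
        rng = random.Random(seed)
        x, path = 0.0, []
        for _ in range(n):
            drift = -x ** 3 / (1.0 + dt * abs(x) ** 3)  # tamed superlinear drift
            x += drift * dt + dt ** (1.0 / alpha) * stable_sample(alpha, rng)
            path.append(x)
        return path

    path = confined_levy_path(20000, alpha=1.5, dt=0.01, seed=1)
    ```

    Despite individual jumps of arbitrary size, the steep well pulls the process back nonlinearly, so the bulk of the trajectory stays near the origin, the damping of idiosyncratic shocks the paper describes.
    
    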

  9. Working memory: theories, models, and controversies.

    Science.gov (United States)

    Baddeley, Alan

    2012-01-01

    I present an account of the origins and development of the multicomponent approach to working memory, making a distinction between the overall theoretical framework, which has remained relatively stable, and the attempts to build more specific models within this framework. I follow this with a brief discussion of alternative models and their relationship to the framework. I conclude with speculations on further developments and a comment on the value of attempting to apply models and theories beyond the laboratory studies on which they are typically based.

  10. Effective field theory and the quark model

    International Nuclear Information System (INIS)

    Durand, Loyal; Ha, Phuoc; Jaczko, Gregory

    2001-01-01

    We analyze the connections between the quark model (QM) and the description of hadrons in the low-momentum limit of heavy-baryon effective field theory in QCD. By using a three-flavor-index representation for the effective baryon fields, we show that the 'nonrelativistic' constituent QM for baryon masses and moments is completely equivalent through O(m_s) to a parametrization of the relativistic field theory in a general spin-flavor basis. The flavor and spin variables can be identified with those of effective valence quarks. Conversely, the spin-flavor description clarifies the structure and dynamical interpretation of the chiral expansion in effective field theory, and provides a direct connection between the field theory and the semirelativistic models for hadrons used in successful dynamical calculations. This allows dynamical information to be incorporated directly into the chiral expansion. We find, for example, that the striking success of the additive QM for baryon magnetic moments is a consequence of the relative smallness of the non-additive spin-dependent corrections.

  11. Investigating dislocation motion through a field of solutes with atomistic simulations and reaction rate theory

    International Nuclear Information System (INIS)

    Saroukhani, S.; Warner, D.H.

    2017-01-01

    The rate of thermally activated dislocation motion across a field of solutes is studied using traditional and modern atomistically informed rate theories. First, the accuracy of popular variants of the Harmonic Transition State Theory, as the most common approach, is examined by comparing predictions to direct MD simulations. It is shown that HTST predictions are grossly inaccurate due to the anharmonic effect of thermal softening. Next, the utility of the Transition Interface Sampling was examined as the method was recently shown to be effective for predicting the rate of dislocation-precipitate interactions. For dislocation-solute interactions studied here, TIS is found to be accurate only when the dislocation overcomes multiple obstacles at a time, i.e. jerky motion, and it is inaccurate in the unpinning regime where the energy barrier is of diffusive nature. It is then shown that the Partial Path TIS method - designed for diffusive barriers - provides accurate predictions in the unpinning regime. The two methods are then used to study the temperature and load dependence of the rate. It is shown that Meyer-Neldel (MN) rule prediction of the entropy barrier is not as accurate as it is in the case of dislocation-precipitate interactions. In response, an alternative model is proposed that provides an accurate prediction of the entropy barrier. This model can be combined with TST to offer an attractively simple rate prediction approach. Lastly, (PP)TIS is used to predict the Strain Rate Sensitivity (SRS) factor at experimental strain rates and the predictions are compared to experimental values.

  12. Topos models for physics and topos theory

    International Nuclear Information System (INIS)

    Wolters, Sander

    2014-01-01

    What is the role of topos theory in the topos models for quantum theory as used by Isham, Butterfield, Döring, Heunen, Landsman, Spitters, and others? In other words, what is the interplay between physical motivation for the models and the mathematical framework used in these models? Concretely, we show that the presheaf topos model of Butterfield, Isham, and Döring resembles classical physics when viewed from the internal language of the presheaf topos, similar to the copresheaf topos model of Heunen, Landsman, and Spitters. Both the presheaf and copresheaf models provide a “quantum logic” in the form of a complete Heyting algebra. Although these algebras are natural from a topos theoretic stance, we seek a physical interpretation for the logical operations. Finally, we investigate dynamics. In particular, we describe how an automorphism on the operator algebra induces a homeomorphism (or isomorphism of locales) on the associated state spaces of the topos models, and how elementary propositions and truth values transform under the action of this homeomorphism. Also with dynamics the focus is on the internal perspective of the topos

  13. Prospects for advanced RF theory and modeling

    International Nuclear Information System (INIS)

    Batchelor, D. B.

    1999-01-01

    This paper represents an attempt to express in print the contents of a rather philosophical review talk. The charge for the talk was not to summarize the present status of the field and what we can do, but to assess what we will need to do in the future and where the gaps are in fulfilling these needs. The objective was to be complete, covering all aspects of theory and modeling in all frequency regimes, although in the end the talk mainly focussed on the ion cyclotron range of frequencies (ICRF). In choosing which areas to develop, it is important to keep in mind who the customers for RF modeling are likely to be and what sorts of tasks they will need for RF to do. This occupies the first part of the paper. Then we examine each of the elements of a complete RF theory and try to identify the kinds of advances needed. (c) 1999 American Institute of Physics

  14. Attribution models and the Cooperative Game Theory

    OpenAIRE

    Cano Berlanga, Sebastian; Vilella, Cori

    2017-01-01

    The current paper studies the attribution model used by Google Analytics. Precisely, we use the Cooperative Game Theory to propose a fair distribution of the revenues among the considered channels, in order to facilitate the cooperation and to guarantee stability. We define a transferable utility convex cooperative game from the observed frequencies and we use the Shapley value to allocate the revenues among the different channels. Furthermore, we evaluate the impact of an advertising...
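    The Shapley allocation used above averages each channel's marginal revenue contribution over all orderings of the channels. A self-contained sketch with a hypothetical two-channel revenue function (not Google Analytics data):

    ```python
    from itertools import permutations
    from math import factorial

    def shapley_values(players, v):
        """Exact Shapley value: average each player's marginal contribution
        v(S + p) - v(S) over all orderings of the players."""
        phi = {p: 0.0 for p in players}
        for order in permutations(players):
            coalition = frozenset()
            for p in order:
                phi[p] += v(coalition | {p}) - v(coalition)
                coalition = coalition | {p}
        n_orderings = factorial(len(players))
        return {p: total / n_orderings for p, total in phi.items()}

    # Hypothetical channel revenues: display alone earns nothing, search
    # alone earns 60, together they earn 100 (a synergy of 40).
    def revenue(coalition):
        table = {frozenset(): 0, frozenset({"search"}): 60,
                 frozenset({"display"}): 0,
                 frozenset({"search", "display"}): 100}
        return table[coalition]

    phi = shapley_values(["search", "display"], revenue)
    # phi["search"] = 80.0, phi["display"] = 20.0
    ```

    The synergy of 40 is split evenly between the two channels, and the allocations sum exactly to the grand-coalition revenue, the efficiency property that makes the Shapley value a natural attribution rule.
    
    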

  15. MODELS AND THE DYNAMICS OF THEORIES

    Directory of Open Access Journals (Sweden)

    Paulo Abrantes

    2007-12-01

    Full Text Available Abstract: This paper gives a historical overview of the ways various trends in the philosophy of science dealt with models and their relationship with the topics of heuristics and theoretical dynamics. First of all, N. Campbell’s account of analogies as components of scientific theories is presented. Next, the notion of ‘model’ in the reconstruction of the structure of scientific theories proposed by logical empiricists is examined. This overview finishes with M. Hesse’s attempts to develop Campbell’s early ideas in terms of an analogical inference. The final part of the paper points to contemporary developments on these issues which adopt a cognitivist perspective. It is indicated how discussions in the cognitive sciences might help to flesh out some of the insights philosophers of science had concerning the role models and analogies play in actual scientific theorizing. Key words: models, analogical reasoning, metaphors in science, the structure of scientific theories, theoretical dynamics, heuristics, scientific discovery.

  16. Conceptual Models and Theory-Embedded Principles on Effective Schooling.

    Science.gov (United States)

    Scheerens, Jaap

    1997-01-01

    Reviews models and theories on effective schooling. Discusses four rationality-based organization theories and a fifth perspective, chaos theory, as applied to organizational functioning. Discusses theory-embedded principles flowing from these theories: proactive structuring, fit, market mechanisms, cybernetics, and self-organization. The…

  17. Multistate cohort models with proportional transfer rates

    DEFF Research Database (Denmark)

    Schoen, Robert; Canudas-Romo, Vladimir

    2006-01-01

    We present a new, broadly applicable approach to summarizing the behavior of a cohort as it moves through a variety of statuses (or states). The approach is based on the assumption that all rates of transfer maintain a constant ratio to one another over age. We present closed-form expressions … of transfer rates. The two living state case and hierarchical multistate models with any number of living states are analyzed in detail. Applying our approach to 1997 U.S. fertility data, we find that observed rates of parity progression are roughly proportional over age. Our proportional transfer rate approach provides trajectories by parity state and facilitates analyses of the implications of changes in parity rate levels and patterns. More women complete childbearing at parity 2 than at any other parity, and parity 2 would be the modal parity in models with total fertility rates (TFRs) of 1.40 to 2…
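    The proportionality assumption can be sketched with a small hierarchical cohort projection: one age pattern of transfer rates shared by all parity states, scaled by a single level parameter. The age pattern, level, and three-state truncation are hypothetical, not the paper's fitted model:

    ```python
    import math

    def cohort_parity_distribution(base_rates, level, ages):
        """Project a cohort through hierarchical parity states 0 -> 1 -> 2.
        Every transfer rate is level * base_rates[age]: the proportional
        transfer rate assumption (one age pattern, one scale factor)."""
        parity = [1.0, 0.0, 0.0]  # everyone starts childless
        for a in range(ages):
            m = level * base_rates[a]
            p = 1.0 - math.exp(-m)      # probability of a birth this year
            flow0 = parity[0] * p       # parity 0 -> 1
            flow1 = parity[1] * p       # parity 1 -> 2
            parity[0] -= flow0
            parity[1] += flow0 - flow1
            parity[2] += flow1
        return parity

    # Hypothetical bell-shaped age pattern over 30 reproductive years
    base = [math.exp(-((a - 15) / 7.0) ** 2) for a in range(30)]
    dist = cohort_parity_distribution(base, level=0.25, ages=30)
    ```

    Raising or lowering `level` shifts the whole parity distribution without changing the age pattern, which is exactly the kind of level-versus-pattern analysis the abstract describes; with these illustrative numbers parity 2 comes out modal.
    
    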

  18. Modeling Real Exchange Rate Persistence in Chile

    Directory of Open Access Journals (Sweden)

    Leonardo Salazar

    2017-07-01

    Full Text Available The long and persistent swings in the real exchange rate have for a long time puzzled economists. Recent models built on imperfect knowledge economics seem to provide a theoretical explanation for this persistence. Empirical results, based on a cointegrated vector autoregressive (CVAR model, provide evidence of error-increasing behavior in prices and interest rates, which is consistent with the persistence observed in the data. The movements in the real exchange rate are compensated by movements in the interest rate spread, which restores the equilibrium in the product market when the real exchange rate moves away from its long-run benchmark value. Fluctuations in the copper price also explain the deviations of the real exchange rate from its long-run equilibrium value.

  19. Finite Unification: Theory, Models and Predictions

    CERN Document Server

    Heinemeyer, S; Zoupanos, G

    2011-01-01

    All-loop Finite Unified Theories (FUTs) are very interesting N=1 supersymmetric Grand Unified Theories (GUTs) realising an old field theory dream, and moreover have a remarkable predictive power due to the required reduction of couplings. The reduction of the dimensionless couplings in N=1 GUTs is achieved by searching for renormalization group invariant (RGI) relations among them holding beyond the unification scale. Finiteness results from the fact that there exist RGI relations among dimensional couplings that guarantee the vanishing of all beta-functions in certain N=1 GUTs even to all orders. Furthermore developments in the soft supersymmetry breaking sector of N=1 GUTs and FUTs lead to exact RGI relations, i.e. reduction of couplings, in this dimensionful sector of the theory, too. Based on the above theoretical framework phenomenologically consistent FUTs have been constructed. Here we review FUT models based on the SU(5) and SU(3)^3 gauge groups and their predictions. Of particular interest is the Hig...

  20. Absence of saturation of void growth in rate theory with anisotropic diffusion

    CERN Document Server

    Hudson, T S; Sutton, A P

    2002-01-01

    We present a first attempt at solving the problem of the growth of a single void in the presence of anisotropically diffusing radiation-induced self-interstitial atom (SIA) clusters. In order to treat a distribution of voids we perform ensemble averaging over the positions of the void centres using a mean-field approximation. In this way we are able to model physical situations in between the Standard Rate Theory (SRT) treatment of swelling (isotropic diffusion) and the purely 1-dimensional diffusion of clusters in the Production Bias Model. The background absorption by dislocations is however treated isotropically, with a bias for interstitial cluster absorption assumed similar to that of individual SIAs. We find that for moderate anisotropy, unsaturated void growth is characteristic of this anisotropic diffusion of clusters. In addition we obtain a higher initial void swelling rate than predicted by SRT whenever the diffusion is anisotropic.

  1. Factors influencing variation in physician adenoma detection rates: a theory-based approach for performance improvement.

    Science.gov (United States)

    Atkins, Louise; Hunkeler, Enid M; Jensen, Christopher D; Michie, Susan; Lee, Jeffrey K; Doubeni, Chyke A; Zauber, Ann G; Levin, Theodore R; Quinn, Virginia P; Corley, Douglas A

    2016-03-01

    Interventions to improve physician adenoma detection rates for colonoscopy have generally not been successful, and there is little data on the factors contributing to variation that may be appropriate targets for intervention. We sought to identify factors that may influence variation in detection rates by using theory-based tools for understanding behavior. We separately studied gastroenterologists and endoscopy nurses at 3 Kaiser Permanente Northern California medical centers to identify potentially modifiable factors relevant to physician adenoma detection rate variability by using structured group interviews (focus groups) and theory-based tools for understanding behavior and eliciting behavior change: the Capability, Opportunity, and Motivation behavior model; the Theoretical Domains Framework; and the Behavior Change Wheel. Nine factors potentially associated with adenoma detection rate variability were identified, including 6 related to capability (uncertainty about which types of polyps to remove, style of endoscopy team leadership, compromised ability to focus during an examination due to distractions, examination technique during withdrawal, difficulty detecting certain types of adenomas, and examiner fatigue and pain), 2 related to opportunity (perceived pressure due to the number of examinations expected per shift and social pressure to finish examinations before scheduled breaks or the end of a shift), and 1 related to motivation (valuing a meticulous examination as the top priority). Examples of potential intervention strategies are provided. By using theory-based tools, this study identified several novel and potentially modifiable factors relating to capability, opportunity, and motivation that may contribute to adenoma detection rate variability and be appropriate targets for future intervention trials. Copyright © 2016 American Society for Gastrointestinal Endoscopy. Published by Elsevier Inc. All rights reserved.

  2. Extension of a Kinetic-Theory Approach for Computing Chemical-Reaction Rates to Reactions with Charged Particles

    Science.gov (United States)

    Liechty, Derek S.; Lewis, Mark J.

    2010-01-01

    Recently introduced molecular-level chemistry models that predict equilibrium and nonequilibrium reaction rates using only kinetic theory and fundamental molecular properties (i.e., no macroscopic reaction rate information) are extended to include reactions involving charged particles and electronic energy levels. The proposed extensions include ionization reactions, exothermic associative ionization reactions, endothermic and exothermic charge exchange reactions, and other exchange reactions involving ionized species. The extensions are shown to agree favorably with the measured Arrhenius rates for near-equilibrium conditions.

  3. Theory, modeling and simulation: Annual report 1993

    Energy Technology Data Exchange (ETDEWEB)

    Dunning, T.H. Jr.; Garrett, B.C.

    1994-07-01

    Developing the knowledge base needed to address the environmental restoration issues of the US Department of Energy requires a fundamental understanding of molecules and their interactions in isolation and in liquids, on surfaces, and at interfaces. To meet these needs, the PNL has established the Environmental and Molecular Sciences Laboratory (EMSL) and will soon begin construction of a new, collaborative research facility devoted to advancing the understanding of environmental molecular science. Research in the Theory, Modeling, and Simulation program (TMS), which is one of seven research directorates in the EMSL, will play a critical role in understanding molecular processes important in restoring DOE's research, development and production sites, including understanding the migration and reactions of contaminants in soils and groundwater, the development of separation processes for isolation of pollutants, the development of improved materials for waste storage, understanding the enzymatic reactions involved in the biodegradation of contaminants, and understanding the interaction of hazardous chemicals with living organisms. The research objectives of the TMS program are to apply available techniques to study fundamental molecular processes involved in natural and contaminated systems; to extend current techniques to treat molecular systems of future importance and to develop techniques for addressing problems that are computationally intractable at present; to apply molecular modeling techniques to simulate molecular processes occurring in the multispecies, multiphase systems characteristic of natural and polluted environments; and to extend current molecular modeling techniques to treat complex molecular systems and to improve the reliability and accuracy of such simulations. The program contains three research activities: Molecular Theory/Modeling, Solid State Theory, and Biomolecular Modeling/Simulation. Extended abstracts are presented for 89 studies.

  4. Theory, modeling and simulation: Annual report 1993

    International Nuclear Information System (INIS)

    Dunning, T.H. Jr.; Garrett, B.C.

    1994-07-01

    Developing the knowledge base needed to address the environmental restoration issues of the US Department of Energy requires a fundamental understanding of molecules and their interactions in isolation and in liquids, on surfaces, and at interfaces. To meet these needs, the PNL has established the Environmental and Molecular Sciences Laboratory (EMSL) and will soon begin construction of a new, collaborative research facility devoted to advancing the understanding of environmental molecular science. Research in the Theory, Modeling, and Simulation program (TMS), which is one of seven research directorates in the EMSL, will play a critical role in understanding molecular processes important in restoring DOE's research, development and production sites, including understanding the migration and reactions of contaminants in soils and groundwater, the development of separation processes for isolation of pollutants, the development of improved materials for waste storage, understanding the enzymatic reactions involved in the biodegradation of contaminants, and understanding the interaction of hazardous chemicals with living organisms. The research objectives of the TMS program are to apply available techniques to study fundamental molecular processes involved in natural and contaminated systems; to extend current techniques to treat molecular systems of future importance and to develop techniques for addressing problems that are computationally intractable at present; to apply molecular modeling techniques to simulate molecular processes occurring in the multispecies, multiphase systems characteristic of natural and polluted environments; and to extend current molecular modeling techniques to treat complex molecular systems and to improve the reliability and accuracy of such simulations. The program contains three research activities: Molecular Theory/Modeling, Solid State Theory, and Biomolecular Modeling/Simulation. Extended abstracts are presented for 89 studies.

  5. Exchange rate determination and the flaws of mainstream monetary theory

    Directory of Open Access Journals (Sweden)

    HEINER FLASSBECK

    2018-03-01

    Full Text Available ABSTRACT Developing countries in general need flexibility and a sufficient number of instruments to prevent excessive volatility. Evidence does not support the orthodox belief that, with free floating, international financial markets will perform that role by smoothly adjusting exchange rates to their “equilibrium” level. In reality, exchange rates under a floating regime have proved to be highly unstable, leading to long spells of misalignment. The experience with hard pegs has not been satisfactory either: the exchange rate could not be corrected in cases of external shocks or misalignment. Given this experience, “intermediate” regimes are preferable when there is instability in international financial markets.

  6. Putting Reaction Rates and Collision Theory in the Hands of Your Students.

    Science.gov (United States)

    Evenson, Andy

    2002-01-01

    Describes a simulation that can be used to give concrete analogies of collision theory and the factors that affect reaction rates including temperature, concentration, catalyst, and molecular orientation. The simulation works best if done as an introduction to the concepts to help prevent misconceptions about reaction rates and collision theory.…
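The temperature effect the simulation illustrates follows directly from the Boltzmann factor of collision theory: only collisions with energy above the activation energy can react. A minimal, self-contained sketch (the activation energy and temperatures below are illustrative, not taken from the article):

```python
import math

def boltzmann_fraction(ea_j_per_mol: float, temp_k: float) -> float:
    """Approximate fraction of collisions with energy >= Ea (the Arrhenius factor)."""
    R = 8.314  # gas constant, J/(mol*K)
    return math.exp(-ea_j_per_mol / (R * temp_k))

# For a modest activation energy of 50 kJ/mol, raising the temperature
# from 300 K to 310 K roughly doubles the fraction of sufficiently
# energetic collisions -- the classic "10 degrees doubles the rate" rule.
f300 = boltzmann_fraction(50_000, 300.0)
f310 = boltzmann_fraction(50_000, 310.0)
print(f310 / f300)  # ~1.9
```

Concentration and orientation enter multiplicatively through the collision frequency and a steric factor, which is why the game can treat each factor separately.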

  7. THE EVOLUTION OF CURRENCY RELATIONS IN THE LIGHT OF MAJOR EXCHANGE RATE ADJUSTMENT THEORIES

    Directory of Open Access Journals (Sweden)

    Sergiy TKACH

    2014-07-01

    Full Text Available This paper examines the impact of major exchange rate adjustment theories on the global monetary system. The reasons for the collapse of earlier forms of monetary organization at the global level are identified. The main achievements and failures of the major exchange rate theories are described.

  8. σ-models and string theories

    International Nuclear Information System (INIS)

    Randjbar-Daemi, S.

    1987-01-01

    The propagation of closed bosonic strings interacting with background gravitational and dilaton fields is reviewed. The string is treated as a quantum field theory on a compact 2-dimensional manifold. The question is posed as to how the conditions for the vanishing trace anomaly and the ensuing background field equations may depend on global features of the manifold. It is shown that to the leading order in σ-model perturbation theory the string loop effects do not modify the gravitational and the dilaton field equations. However for the purely bosonic strings new terms involving the modular parameter of the world sheet are induced by quantum effects which can be absorbed into a re-definition of the background fields. The authors also discuss some aspects of several regularization schemes such as dimensional, Pauli-Villars and the proper-time cut off in an appendix

  9. A kinetic-theory approach for computing chemical-reaction rates in upper-atmosphere hypersonic flows.

    Science.gov (United States)

    Gallis, Michael A; Bond, Ryan B; Torczynski, John R

    2009-09-28

    Recently proposed molecular-level chemistry models that predict equilibrium and nonequilibrium reaction rates using only kinetic theory and fundamental molecular properties (i.e., no macroscopic reaction-rate information) are investigated for chemical reactions occurring in upper-atmosphere hypersonic flows. The new models are in good agreement with the measured Arrhenius rates for near-equilibrium conditions and with both measured rates and other theoretical models for far-from-equilibrium conditions. Additionally, the new models are applied to representative combustion and ionization reactions and are in good agreement with available measurements and theoretical models. Thus, molecular-level chemistry modeling provides an accurate method for predicting equilibrium and nonequilibrium chemical-reaction rates in gases.
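The kinetic-theory construction the abstract describes can be caricatured by a simple line-of-centers (hard-sphere) rate constant, built only from a collision cross-section, a reduced mass, and an activation energy. The numerical values below are hypothetical placeholders, not parameters from the paper, and real DSMC chemistry models are considerably more elaborate:

```python
import math

KB = 1.380649e-23   # Boltzmann constant, J/K
NA = 6.02214076e23  # Avogadro constant, 1/mol

def hard_sphere_rate(temp_k: float, sigma_m2: float, mu_kg: float, ea_j: float) -> float:
    """Line-of-centers rate constant k(T) = NA * sigma * <v_rel> * exp(-Ea / kB T),
    in m^3 mol^-1 s^-1: collision frequency from kinetic theory times the
    fraction of collisions energetic enough to react."""
    v_rel = math.sqrt(8.0 * KB * temp_k / (math.pi * mu_kg))  # mean relative speed
    return NA * sigma_m2 * v_rel * math.exp(-ea_j / (KB * temp_k))

# Hypothetical cross-section, reduced mass, and activation energy:
k_5000 = hard_sphere_rate(5000.0, 1e-19, 2.3e-26, 8e-19)
k_10000 = hard_sphere_rate(10000.0, 1e-19, 2.3e-26, 8e-19)
print(k_10000 > k_5000)  # the rate rises steeply with temperature
```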

  10. An Application of Durkheim's Theory of Suicide to Prison Suicide Rates in the United States

    Science.gov (United States)

    Tartaro, Christine; Lester, David

    2005-01-01

    E. Durkheim (1897) suggested that the societal rate of suicide might be explained by societal factors, such as marriage, divorce, and birth rates. The current study examined male prison suicide rates and suicide rates for men in the total population in the United States and found that variables based on Durkheim's theory of suicide explained…

  11. Bridging Economic Theory Models and the Cointegrated Vector Autoregressive Model

    DEFF Research Database (Denmark)

    Møller, Niels Framroze

    2008-01-01

    Examples of simple economic theory models are analyzed as restrictions on the Cointegrated VAR (CVAR). This establishes a correspondence between basic economic concepts and the econometric concepts of the CVAR: the economic relations correspond to cointegrating vectors and exogeneity to its econometric counterpart, while other parameters of the CVAR are related to expectations formation, market clearing, nominal rigidities, etc. Finally, the general-partial equilibrium distinction is analyzed.

  12. Bridging Economic Theory Models and the Cointegrated Vector Autoregressive Model

    DEFF Research Database (Denmark)

    Møller, Niels Framroze

    2008-01-01

    Examples of simple economic theory models are analyzed as restrictions on the Cointegrated VAR (CVAR). This establishes a correspondence between basic economic concepts and the econometric concepts of the CVAR: the economic relations correspond to cointegrating vectors and exogeneity to its econometric counterpart. The parameters of the CVAR are shown to be interpretable in terms of expectations formation, market clearing, nominal rigidities, etc. The general-partial equilibrium distinction is also discussed.

  13. Quantum integrable models of field theory

    International Nuclear Information System (INIS)

    Faddeev, L.D.

    1979-01-01

    Fundamental features of the classical method of the inverse problem have been formulated in a form convenient for its quantum reformulation. Typical examples are studied which may help to formulate the quantum method of the inverse problem. Examples are considered for interaction with both attraction and repulsion at finite density. The sine-Gordon model and the XYZ model from the quantum theory of magnetics are examined briefly. It is noted that all the achievements of one-dimensional mathematical physics as applied to exactly solvable quantum models can, to a large extent, be placed within the framework of the quantum method of the inverse problem. Unsolved questions are enumerated and perspectives for applying the inverse problem method are outlined.

  14. Theory and Model for Martensitic Transformations

    DEFF Research Database (Denmark)

    Lindgård, Per-Anker; Mouritsen, Ole G.

    1986-01-01

    Martensitic transformations are shown to be driven by the interplay between two fluctuating strain components. No soft mode is needed, but a central peak occurs representing the dynamics of strain clusters. A two-dimensional magnetic-analog model with the martensitic-transition symmetry is constructed and analyzed by computer simulation and by a theory which accounts for correlation effects. Dramatic precursor effects at the first-order transition are demonstrated. The model is also of relevance for surface reconstruction transitions.

  15. Economic contract theory tests models of mutualism.

    Science.gov (United States)

    Weyl, E Glen; Frederickson, Megan E; Yu, Douglas W; Pierce, Naomi E

    2010-09-07

    Although mutualisms are common in all ecological communities and have played key roles in the diversification of life, our current understanding of the evolution of cooperation applies mostly to social behavior within a species. A central question is whether mutualisms persist because hosts have evolved costly punishment of cheaters. Here, we use the economic theory of employment contracts to formulate and distinguish between two mechanisms that have been proposed to prevent cheating in host-symbiont mutualisms, partner fidelity feedback (PFF) and host sanctions (HS). Under PFF, positive feedback between host fitness and symbiont fitness is sufficient to prevent cheating; in contrast, HS posits the necessity of costly punishment to maintain mutualism. A coevolutionary model of mutualism finds that HS are unlikely to evolve de novo, and published data on legume-rhizobia and yucca-moth mutualisms are consistent with PFF and not with HS. Thus, in systems considered to be textbook cases of HS, we find poor support for the theory that hosts have evolved to punish cheating symbionts; instead, we show that even horizontally transmitted mutualisms can be stabilized via PFF. PFF theory may place previously underappreciated constraints on the evolution of mutualism and explain why punishment is far from ubiquitous in nature.

  16. Magnetic flux tube models in superstring theory

    CERN Document Server

    Russo, Jorge G

    1996-01-01

    Superstring models describing curved 4-dimensional magnetic flux tube backgrounds are exactly solvable in terms of free fields. We consider the simplest model of this type (corresponding to `Kaluza-Klein' Melvin background). Its 2d action has a flat but topologically non-trivial 10-dimensional target space (there is a mixing of angular coordinate of the 2-plane with an internal compact coordinate). We demonstrate that this theory has broken supersymmetry but is perturbatively stable if the radius R of the internal coordinate is larger than R_0 = \sqrt{2\alpha'}. In the Green-Schwarz formulation the supersymmetry breaking is a consequence of the presence of a flat but non-trivial connection in the fermionic terms in the action. For R < R_0 there appear instabilities corresponding to tachyonic winding states. The torus partition function Z(q,R) is finite for R > R_0 (and vanishes for qR=2n, n=integer). At the special points qR=2n (2n+1) the model is equivalent to the free superstring theory compactified on a circle...

  17. Group theory for unified model building

    International Nuclear Information System (INIS)

    Slansky, R.

    1981-01-01

    The results gathered here on simple Lie algebras have been selected with attention to the needs of unified model builders who study Yang-Mills theories based on simple, local-symmetry groups that contain as a subgroup the SU(2)_w x U(1)_w x SU(3)_c symmetry of the standard theory of electromagnetic, weak, and strong interactions. The major topics include, after a brief review of the standard model and its unification into a simple group, the use of Dynkin diagrams to analyze the structure of the group generators and to keep track of the weights (quantum numbers) of the representation vectors; an analysis of the subgroup structure of simple groups, including explicit coordinatizations of the projections in weight space; lists of representations, tensor products and branching rules for a number of simple groups; and other details about groups and their representations that are often helpful for surveying unified models, including vector-coupling coefficient calculations. Tabulations of representations, tensor products, and branching rules for E6, SO10, SU6, F4, SO9, SO5, SO8, SO7, SU4, E7, E8, SU8, SO14, SO18, SO22, and, for completeness, SU3 are included. (These tables may have other applications.) Group-theoretical techniques for analyzing symmetry breaking are described in detail and many examples are reviewed, including explicit parameterizations of mass matrices. (orig.)

  18. A matrix model from string field theory

    Directory of Open Access Journals (Sweden)

    Syoji Zeze

    2016-09-01

    Full Text Available We demonstrate that a Hermitian matrix model can be derived from level-truncated open string field theory with Chan-Paton factors. The Hermitian matrix is coupled with a scalar and U(N) vectors which are responsible for the D-brane at the tachyon vacuum. The effective potential for the scalar is evaluated both for finite and large N. An increase of the potential height is observed in both cases. The large N matrix integral is identified with a system of N ZZ branes and a ghost FZZT brane.

  19. Calculation of variations in reaction rates using generalized perturbation theory

    International Nuclear Information System (INIS)

    Silva, F.C. da.

    1981-02-01

    A perturbation expression for calculating the variations in the rates of integral parameters of a reactor (such as reaction rates) was developed using Time-Independent Generalized Perturbation Theory. This theory makes use of the concepts of neutron generation and neutron importance with respect to a given process occurring in a system. Time-Dependent Generalized Perturbation Theory is then applied to the calculation of burnup, using the expressions derived by A. Gandini together with the perturbation expression obtained from the time-independent theory. (Author) [pt
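The adjoint-based bookkeeping behind generalized perturbation theory can be illustrated on a toy fixed-source problem: solve the forward and adjoint (importance) equations once, then estimate the change in a reaction-rate-like response for a perturbed operator without re-solving. A minimal sketch with made-up numbers, not the thesis's formulation:

```python
def solve2(a, b):
    """Solve a 2x2 linear system a x = b by Cramer's rule."""
    det = a[0][0] * a[1][1] - a[0][1] * a[1][0]
    return [(b[0] * a[1][1] - a[0][1] * b[1]) / det,
            (a[0][0] * b[1] - b[0] * a[1][0]) / det]

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

# Toy fixed-source problem A*phi = s with response R = <sigma, phi>
A = [[4.0, -1.0], [-1.0, 3.0]]
s = [1.0, 2.0]
sigma = [0.5, 1.5]  # "reaction-rate" weighting

phi = solve2(A, s)
# Adjoint (importance) equation A^T phi* = sigma; A is symmetric here,
# so the same solver applies.
phi_adj = solve2(A, sigma)

# First-order perturbation estimate for a change dA in the operator:
# dR ~ -<phi*, dA phi>
dA = [[0.01, 0.0], [0.0, -0.02]]
dA_phi = [dot(dA[0], phi), dot(dA[1], phi)]
dR_pert = -dot(phi_adj, dA_phi)

# Direct recomputation for comparison
A1 = [[A[i][j] + dA[i][j] for j in range(2)] for i in range(2)]
dR_exact = dot(sigma, solve2(A1, s)) - dot(sigma, phi)
print(dR_pert, dR_exact)  # the first-order estimate tracks the exact change
```

The payoff of the adjoint formulation is that one importance solve prices out arbitrarily many small perturbations, which is exactly what makes it attractive for burnup sensitivity chains.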

  20. A model for hot electron phenomena: Theory and general results

    International Nuclear Information System (INIS)

    Carrillo, J.L.; Rodriquez, M.A.

    1988-10-01

    We propose a model for the description of hot-electron phenomena in semiconductors. Based on this model we are able to reproduce accurately the main characteristics observed in experiments on electric-field transport, optical absorption, steady-state photoluminescence and relaxation processes. Our theory contains no free or adjustable parameters, it is computationally fast, and it incorporates the main collision mechanisms, including screening and phonon-heating effects. Our description is based on a set of nonlinear rate equations in which the interactions are represented by coupling coefficients or effective frequencies. We calculate the three coefficients from the characteristic constants and the band structure of the material. (author). 22 refs, 5 figs, 1 tab

  1. On low rank classical groups in string theory, gauge theory and matrix models

    International Nuclear Information System (INIS)

    Intriligator, Ken; Kraus, Per; Ryzhov, Anton V.; Shigemori, Masaki; Vafa, Cumrun

    2004-01-01

    We consider N=1 supersymmetric U(N), SO(N), and Sp(N) gauge theories, with two-index tensor matter and added tree-level superpotential, for general breaking patterns of the gauge group. By considering the string theory realization and geometric transitions, we clarify when glueball superfields should be included and extremized, or rather set to zero; this issue arises for unbroken group factors of low rank. The string theory results, which are equivalent to those of the matrix model, refer to a particular UV completion of the gauge theory, which could differ from conventional gauge theory results by residual instanton effects. Often, however, these effects exhibit miraculous cancellations, and the string theory or matrix model results end up agreeing with standard gauge theory. In particular, these string theory considerations explain and remove some apparent discrepancies between gauge theories and matrix models in the literature

  2. Affinity functions for modeling glass dissolution rates

    Energy Technology Data Exchange (ETDEWEB)

    Bourcier, W.L. [Lawrence Livermore National Lab., CA (United States)

    1997-07-01

    Glass dissolution rates decrease dramatically as the glass approaches "saturation" with respect to the leachate solution. Most repository sites are chosen where water fluxes are minimal, and therefore the waste glass is most likely to dissolve under conditions close to saturation. The key term in the rate expression used to predict glass dissolution rates close to saturation is the affinity term, which accounts for saturation effects on dissolution rates. Interpretations of recent experimental data on the dissolution behaviour of silicate glasses and silicate minerals indicate the following: 1) simple affinity control does not explain the observed dissolution rates of silicate minerals or glasses; 2) dissolution rates can be significantly modified by dissolved cations even under conditions far from saturation, where the affinity term is near unity; 3) the effects of dissolved species such as Al and Si on the dissolution rate vary with pH, temperature, and saturation state; and 4) as temperature is increased, the effects of both pH and temperature on glass and mineral dissolution rates decrease, which strongly suggests a switch in rate control from surface-reaction-based to diffusion control. Borosilicate glass dissolution models need to be upgraded to account for these recent experimental observations. (A.C.)
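For reference, the "simple affinity control" that point 1) argues against is usually written r = k(1 - Q/K), where Q is the ion-activity product of the solution and K the equilibrium constant: the rate is ~k far from saturation and vanishes at saturation. A minimal sketch of that baseline law, with illustrative values only:

```python
def dissolution_rate(k_forward: float, q: float, k_eq: float) -> float:
    """Classic affinity-based rate law r = k * (1 - Q/K): near its forward
    value k when Q << K, zero at saturation (Q = K). The abstract's point is
    precisely that real glasses deviate from this simple form."""
    return k_forward * (1.0 - q / k_eq)

# Rate falls from k toward zero as the leachate approaches saturation.
rates = [dissolution_rate(1.0, q, 1.0) for q in (0.0, 0.5, 0.99, 1.0)]
print(rates)
```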

  3. Classical nucleation theory in the phase-field crystal model.

    Science.gov (United States)

    Jreidini, Paul; Kocher, Gabriel; Provatas, Nikolas

    2018-04-01

    A full understanding of polycrystalline materials requires studying the process of nucleation, a thermally activated phase transition that typically occurs at atomistic scales. The numerical modeling of this process is problematic for traditional numerical techniques: commonly used phase-field methods' resolution does not extend to the atomic scales at which nucleation takes place, while atomistic methods such as molecular dynamics are incapable of scaling to the mesoscale regime where late-stage growth and structure formation take place following earlier nucleation. Consequently, it is of interest to examine nucleation in the more recently proposed phase-field crystal (PFC) model, which attempts to bridge the atomic and mesoscale regimes in microstructure simulations. In this work, we numerically calculate homogeneous liquid-to-solid nucleation rates and incubation times in the simplest version of the PFC model, for various parameter choices. We show that the model naturally exhibits qualitative agreement with the predictions of classical nucleation theory (CNT) despite a lack of some explicit atomistic features presumed in CNT. We also examine the early appearance of lattice structure in nucleating grains, finding disagreement with some basic assumptions of CNT. We then argue that a quantitatively correct nucleation theory for the PFC model would require extending CNT to a multivariable theory.
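The CNT predictions being compared against are compact: a spherical nucleus costs DeltaG* = 16*pi*gamma^3 / (3*dg^2) to form, and the steady-state rate is J = J0 * exp(-DeltaG*/kT). A dimensionless sketch of these two formulas (the values are illustrative, not the paper's PFC parameters):

```python
import math

def cnt_barrier(gamma: float, dg_v: float) -> float:
    """CNT barrier for a spherical nucleus: DeltaG* = 16*pi*gamma^3 / (3*dg_v^2),
    with gamma the interface energy and dg_v the (positive) bulk
    free-energy gain per unit volume."""
    return 16.0 * math.pi * gamma**3 / (3.0 * dg_v**2)

def cnt_rate(j0: float, gamma: float, dg_v: float, kT: float) -> float:
    """Steady-state nucleation rate J = J0 * exp(-DeltaG*/kT)."""
    return j0 * math.exp(-cnt_barrier(gamma, dg_v) / kT)

# Dimensionless illustration: deeper undercooling (larger dg_v) lowers the
# barrier and sharply accelerates nucleation.
shallow = cnt_rate(1.0, gamma=0.1, dg_v=0.5, kT=0.01)
deep = cnt_rate(1.0, gamma=0.1, dg_v=1.0, kT=0.01)
print(deep > shallow)  # True
```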

  5. Application of Chaos Theory to Psychological Models

    Science.gov (United States)

    Blackerby, Rae Fortunato

    This dissertation shows that an alternative theoretical approach from physics--chaos theory--offers a viable basis for improved understanding of human beings and their behavior. Chaos theory provides achievable frameworks for potential identification, assessment, and adjustment of human behavior patterns. Most current psychological models fail to address the metaphysical conditions inherent in the human system, thus bringing deep errors to psychological practice and empirical research. Freudian, Jungian and behavioristic perspectives are inadequate psychological models because they assume, either implicitly or explicitly, that the human psychological system is a closed, linear system. On the other hand, Adlerian models that require open systems are likely to be empirically tenable. Logically, models will hold only if the model's assumptions hold. The innovative application of chaotic dynamics to psychological behavior is a promising theoretical development because the application asserts that human systems are open, nonlinear and self-organizing. Chaotic dynamics use nonlinear mathematical relationships among factors that influence human systems. This dissertation explores these mathematical relationships in the context of a sample model of moral behavior using simulated data. Mathematical equations with nonlinear feedback loops describe chaotic systems. Feedback loops govern the equations' value in subsequent calculation iterations. For example, changes in moral behavior are affected by an individual's own self-centeredness, family and community influences, and previous moral behavior choices that feed back to influence future choices. When applying these factors to the chaos equations, the model behaves like other chaotic systems. For example, changes in moral behavior fluctuate in regular patterns, as determined by the values of the individual, family and community factors. 
In some cases, these fluctuations converge to one value; in other cases, they diverge in
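The convergent-versus-divergent behavior of a nonlinear feedback equation is conveniently illustrated by the logistic map, the textbook example of such a system (this example is generic, not taken from the dissertation's moral-behavior model):

```python
def logistic_iterates(r: float, x0: float, n: int) -> list:
    """Iterate the logistic map x -> r*x*(1-x), a canonical one-variable
    nonlinear feedback equation: each value feeds back into the next."""
    xs = [x0]
    for _ in range(n):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

# r = 2.8: the trajectory converges to the fixed point 1 - 1/r.
converging = logistic_iterates(2.8, 0.2, 200)
# r = 3.9: the trajectory stays bounded in [0, 1] but never settles (chaos).
chaotic = logistic_iterates(3.9, 0.2, 200)
print(abs(converging[-1] - (1 - 1 / 2.8)) < 1e-6)
```

The single parameter r plays the role of the combined individual, family, and community influences: small changes in it move the system between regular and chaotic regimes.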

  6. Effective dynamics along given reaction coordinates, and reaction rate theory.

    Science.gov (United States)

    Zhang, Wei; Hartmann, Carsten; Schütte, Christof

    2016-12-22

    In molecular dynamics and related fields one considers dynamical descriptions of complex systems in full (atomic) detail. In order to reduce the overwhelming complexity of realistic systems (high dimension, large timescale spread, limited computational resources) the projection of the full dynamics onto some reaction coordinates is examined in order to extract statistical information like free energies or reaction rates. In this context, the effective dynamics that is induced by the full dynamics on the reaction coordinate space has attracted considerable attention in the literature. In this article, we contribute to this discussion: we first show that if we start with an ergodic diffusion process whose invariant measure is unique then these properties are inherited by the effective dynamics. Then, we give equations for the effective dynamics, discuss whether the dominant timescales and reaction rates inferred from the effective dynamics are accurate approximations of such quantities for the full dynamics, and compare our findings to results from approaches like Mori-Zwanzig, averaging, or homogenization. Finally, by discussing the algorithmic realization of the effective dynamics, we demonstrate that recent algorithmic techniques like the "equation-free" approach and the "heterogeneous multiscale method" can be seen as special cases of our approach.
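A crude numerical counterpart of rate estimation from a one-dimensional coordinate: simulate an overdamped Langevin particle in a double well and count well-to-well transitions. This is a generic sketch under standard Euler-Maruyama discretization, not the authors' effective-dynamics construction:

```python
import math
import random

def count_transitions(beta: float, dt: float, n_steps: int, seed: int = 0) -> int:
    """Overdamped Langevin dynamics dX = -V'(X) dt + sqrt(2/beta) dW on the
    double well V(x) = (x^2 - 1)^2, integrated by Euler-Maruyama. Returns the
    number of full well-to-well transitions; dividing by the simulated time
    gives a crude reaction-rate estimate."""
    rng = random.Random(seed)
    x, well, transitions = -1.0, -1, 0
    noise = math.sqrt(2.0 * dt / beta)
    for _ in range(n_steps):
        force = -4.0 * x * (x * x - 1.0)  # -V'(x)
        x += force * dt + noise * rng.gauss(0.0, 1.0)
        # Count only complete crossings from one well minimum to the other,
        # which suppresses spurious recrossings near the barrier top.
        if well < 0 and x > 1.0:
            well, transitions = 1, transitions + 1
        elif well > 0 and x < -1.0:
            well, transitions = -1, transitions + 1
    return transitions

# Higher temperature (smaller beta) relative to the barrier means more
# transitions per unit time, as Kramers' rate theory predicts.
hot = count_transitions(beta=2.0, dt=1e-3, n_steps=200_000)
cold = count_transitions(beta=6.0, dt=1e-3, n_steps=200_000)
print(hot > cold)
```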

  7. PARFUME Theory and Model basis Report

    Energy Technology Data Exchange (ETDEWEB)

    Darrell L. Knudson; Gregory K Miller; G.K. Miller; D.A. Petti; J.T. Maki; D.L. Knudson

    2009-09-01

    The success of gas reactors depends upon the safety and quality of the coated particle fuel. The fuel performance modeling code PARFUME simulates the mechanical, thermal and physico-chemical behavior of fuel particles during irradiation. This report documents the theory and material properties behind various capabilities of the code, which include: 1) various options for calculating CO production and fission product gas release, 2) an analytical solution for stresses in the coating layers that accounts for irradiation-induced creep and swelling of the pyrocarbon layers, 3) a thermal model that calculates a time-dependent temperature profile through a pebble bed sphere or a prismatic block core, as well as through the layers of each analyzed particle, 4) simulation of multi-dimensional particle behavior associated with cracking in the IPyC layer, partial debonding of the IPyC from the SiC, particle asphericity, and kernel migration (or amoeba effect), 5) two independent methods for determining particle failure probabilities, 6) a model for calculating release-to-birth (R/B) ratios of gaseous fission products that accounts for particle failures and uranium contamination in the fuel matrix, and 7) the evaluation of an accident condition, where a particle experiences a sudden change in temperature following a period of normal irradiation. The accident condition entails diffusion of fission products through the particle coating layers and through the fuel matrix to the coolant boundary. This document represents the initial version of the PARFUME Theory and Model Basis Report. More detailed descriptions will be provided in future revisions.

  8. Stochastic linear programming models, theory, and computation

    CERN Document Server

    Kall, Peter

    2011-01-01

    This new edition of Stochastic Linear Programming: Models, Theory and Computation has been brought completely up to date, either dealing with or at least referring to new material on models and methods, including DEA with stochastic outputs modeled via constraints on special risk functions (generalizing chance constraints, ICC’s and CVaR constraints), material on Sharpe-ratio, and Asset Liability Management models involving CVaR in a multi-stage setup. To facilitate use as a text, exercises are included throughout the book, and web access is provided to a student version of the authors’ SLP-IOR software. Additionally, the authors have updated the Guide to Available Software, and they have included newer algorithms and modeling systems for SLP. The book is thus suitable as a text for advanced courses in stochastic optimization, and as a reference to the field. From Reviews of the First Edition: "The book presents a comprehensive study of stochastic linear optimization problems and their applications. … T...

  9. Decay rates of quarkonia and potential models

    International Nuclear Information System (INIS)

    Rai, Ajay Kumar; Pandya, J N; Vinodkumar, P C

    2005-01-01

    The decay rates of cc-bar and bb-bar mesons have been studied with contributions from different correction terms. The corrections, based on hard processes involved in the decays, are quantitatively studied in the framework of different phenomenological potential models

  10. Theory and theory-based models for the pedestal, edge stability and ELMs in tokamaks

    International Nuclear Information System (INIS)

    Guzdar, P.N.; Mahajan, S.M.; Yoshida, Z.; Dorland, W.; Rogers, B.N.; Bateman, G.; Kritz, A.H.; Pankin, A.; Voitsekhovitch, I.; Onjun, T.; Snyder, S.

    2005-01-01

    Theories for equilibrium and stability of H-modes, and models for use within integrated modeling codes with the objective of predicting the height, width and shape of the pedestal at the edge of H-mode plasmas in tokamaks, as well as the onset and frequency of Edge Localized Modes (ELMs), are developed. A theory model for relaxed plasma states with flow, which uses two-fluid Hall-MHD equations, predicts that the natural scale length of the pedestal is the ion skin depth and the pedestal width is larger than the ion poloidal gyro-radius, in agreement with experimental observations. Computations with the GS2 code are used to identify micro-instabilities, such as electron drift waves, that survive the strong flow shear, diamagnetic flows, and magnetic shear that are characteristic of the pedestal. Other instabilities on the pedestal and gyro-radius scale, such as the Kelvin-Helmholtz instability, are also investigated. Time-dependent integrated modeling simulations are used to follow the transition from L-mode to H-mode and the subsequent evolution of ELMs as the heating power is increased. The flow shear stabilization that produces the transport barrier at the edge of the plasma reduces different modes of anomalous transport and, consequently, different channels of transport at different rates. ELM crashes are triggered in the model by pressure-driven ballooning modes or by current-driven peeling modes. (author)

  11. "Depletion": A Game with Natural Rules for Teaching Reaction Rate Theory.

    Science.gov (United States)

    Olbris, Donald J.; Herzfeld, Judith

    2002-01-01

    Depletion is a game that reinforces central concepts of reaction rate theory through simulation. Presents the game with a set of follow-up questions suitable for either a quiz or discussion. Also describes student reaction to the game. (MM)

  12. Pipe fracture evaluations for leak-rate detection: Probabilistic models

    International Nuclear Information System (INIS)

    Rahman, S.; Wilkowski, G.; Ghadiali, N.

    1993-01-01

    This is the second in a series of three papers generated from studies on nuclear pipe fracture evaluations for leak-rate detection. This paper focuses on the development of novel probabilistic models for stochastic performance evaluation of degraded nuclear piping systems. This was accomplished in three distinct stages. First, a statistical analysis was conducted to characterize various input variables for thermo-hydraulic analysis and elastic-plastic fracture mechanics, such as material properties of pipe, crack morphology variables, and locations of cracks found in nuclear piping. Second, a new stochastic model was developed to evaluate performance of degraded piping systems. It is based on accurate deterministic models for thermo-hydraulic and fracture mechanics analyses described in the first paper, statistical characterization of various input variables, and state-of-the-art methods of modern structural reliability theory. From this model, the conditional probability of failure as a function of the leak-rate detection capability of the piping systems can be predicted. Third, a numerical example was presented to illustrate the proposed model for piping reliability analyses. Results clearly showed that the model provides satisfactory estimates of conditional failure probability with much less computational effort when compared with those obtained from Monte Carlo simulation. The probabilistic model developed in this paper will be applied to various piping in boiling water reactor and pressurized water reactor plants for leak-rate detection applications.
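The flavor of such a conditional failure probability can be conveyed by a toy Monte Carlo: sample crack sizes and pipe capacities, keep only the cracks whose leak rate stays below the detection limit, and compute the failure fraction among them. All distributions and thresholds below are hypothetical placeholders, not the paper's statistical characterizations:

```python
import random

def conditional_failure_probability(n_samples: int = 100_000, seed: int = 1) -> float:
    """Crude leak-before-break sketch: a pipe 'fails' when the applied load
    exceeds its capacity, and we condition on cracks whose leak rate is
    below the plant's detection threshold (i.e., cracks that go unnoticed)."""
    rng = random.Random(seed)
    undetected = failed = 0
    for _ in range(n_samples):
        crack = rng.lognormvariate(0.0, 0.5)    # crack length (arbitrary units)
        capacity = rng.normalvariate(5.0, 1.0)  # load-carrying capacity
        leak_rate = 0.5 * crack                 # leak rate grows with crack size
        if leak_rate < 1.0:                     # below detection threshold
            undetected += 1
            if 2.0 * crack > capacity:          # applied load exceeds capacity
                failed += 1
    return failed / undetected

p = conditional_failure_probability()
print(0.0 <= p <= 1.0)
```

The paper's point is that a first- or second-order reliability method can reproduce such estimates at a fraction of the sampling cost.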

  13. Modeling and Optimization : Theory and Applications Conference

    CERN Document Server

    Terlaky, Tamás

    2017-01-01

    This volume contains a selection of contributions that were presented at the Modeling and Optimization: Theory and Applications Conference (MOPTA) held at Lehigh University in Bethlehem, Pennsylvania, USA on August 17-19, 2016. The conference brought together a diverse group of researchers and practitioners, working on both theoretical and practical aspects of continuous or discrete optimization. Topics presented included algorithms for solving convex, network, mixed-integer, nonlinear, and global optimization problems, and addressed the application of deterministic and stochastic optimization techniques in energy, finance, logistics, analytics, health, and other important fields. The contributions contained in this volume represent a sample of these topics and applications and illustrate the broad diversity of ideas discussed at the meeting.

  14. Theory and modelling of nanocarbon phase stability.

    Energy Technology Data Exchange (ETDEWEB)

    Barnard, A. S.

    2006-01-01

    The transformation of nanodiamonds into carbon-onions (and vice versa) has been observed experimentally and has been modeled computationally at various levels of sophistication. Also, several analytical theories have been derived to describe the size, temperature and pressure dependence of this phase transition. However, in most cases a pure carbon-onion or nanodiamond is not the final product. More often than not an intermediary is formed, known as a bucky-diamond, with a diamond-like core encased in an onion-like shell. This has prompted a number of studies investigating the relative stability of nanodiamonds, bucky-diamonds, carbon-onions and fullerenes, in various size regimes. Presented here is a review outlining results of numerous theoretical studies examining the phase diagrams and phase stability of carbon nanoparticles, to clarify the complicated relationship between fullerenic and diamond structures at the nanoscale.

  15. Modeling and Optimization : Theory and Applications Conference

    CERN Document Server

    Terlaky, Tamás

    2015-01-01

    This volume contains a selection of contributions that were presented at the Modeling and Optimization: Theory and Applications Conference (MOPTA) held at Lehigh University in Bethlehem, Pennsylvania, USA on August 13-15, 2014. The conference brought together a diverse group of researchers and practitioners, working on both theoretical and practical aspects of continuous or discrete optimization. Topics presented included algorithms for solving convex, network, mixed-integer, nonlinear, and global optimization problems, and addressed the application of deterministic and stochastic optimization techniques in energy, finance, logistics, analytics, healthcare, and other important fields. The contributions contained in this volume represent a sample of these topics and applications and illustrate the broad diversity of ideas discussed at the meeting.

  16. Leak rate models and leak detection

    International Nuclear Information System (INIS)

    1992-01-01

    Leak detection may be carried out by a number of detection systems, but selection of the systems must be carefully adapted to the fluid state and the location of the leak in the reactor coolant system. Computer programs for the calculation of leak rates contain different models to take into account the fluid state before its entrance into the crack, and they have to be verified by experiments; agreement between experiments and calculations is generally not satisfactory for very small leak rates resulting from narrow cracks or from a closing bending moment

  17. Game Theory and its Relationship with Linear Programming Models ...

    African Journals Online (AJOL)

    Game Theory and its Relationship with Linear Programming Models. ... This paper shows that game theory and linear programming problem are closely related subjects since any computing method devised for ...

  18. Hosotani model in closed string theory

    International Nuclear Information System (INIS)

    Shiraishi, Kiyoshi.

    1988-11-01

    The Hosotani mechanism in closed string theory with current algebra symmetry is described by the (old covariant) operator method. We compare the gauge symmetry breaking mechanism in a string theory with SU(2) symmetry to the one in an equivalent compactified closed string theory. We also investigate the difference between the Hosotani mechanism and the Higgs mechanism in closed string theories by calculating a four-point amplitude of 'Higgs' bosons at tree level. (author)

  19. The Properties of Model Selection when Retaining Theory Variables

    DEFF Research Database (Denmark)

    Hendry, David F.; Johansen, Søren

    Economic theories are often fitted directly to data to avoid possible model selection biases. We show that embedding a theory model that specifies the correct set of m relevant exogenous variables, x{t}, within the larger set of m+k candidate variables, (x{t},w{t}), then selection over the second set by their statistical significance can be undertaken without affecting the estimator distribution of the theory parameters. This strategy returns the theory-parameter estimates when the theory is correct, yet protects against the theory being under-specified because some w{t} are relevant.

  20. Visceral obesity and psychosocial stress: a generalised control theory model

    Science.gov (United States)

    Wallace, Rodrick

    2016-07-01

    The linking of control theory and information theory via the Data Rate Theorem and its generalisations allows construction of necessary-conditions statistical models of body mass regulation in the context of interaction with a complex dynamic environment. By focusing on the stress-related induction of central obesity via failure of HPA axis regulation, we explore implications for strategies of prevention and treatment. It rapidly becomes evident that individual-centred biomedical reductionism is an inadequate paradigm. Without mitigation of HPA axis or related dysfunctions arising from social pathologies of power imbalance, economic insecurity, and so on, it is unlikely that permanent changes in visceral obesity for individuals can be maintained without constant therapeutic effort, an expensive - and likely unsustainable - public policy.

  1. System Dynamics as Model-Based Theory Building

    OpenAIRE

    Schwaninger, Markus; Grösser, Stefan N.

    2008-01-01

    This paper introduces model-based theory building as a feature of system dynamics (SD) with large potential. It presents a systemic approach to actualizing that potential, thereby opening up a new perspective on theory building in the social sciences. The question addressed is if and how SD enables the construction of high-quality theories. This contribution is based on field experiment type projects which have been focused on model-based theory building, specifically the construction of a mi...

  2. A Realizability Model for Impredicative Hoare Type Theory

    DEFF Research Database (Denmark)

    Petersen, Rasmus Lerchedal; Birkedal, Lars; Nanevski, Alexandar

    2008-01-01

    We present a denotational model of impredicative Hoare Type Theory, a very expressive dependent type theory in which one can specify and reason about mutable abstract data types. The model ensures soundness of the extension of Hoare Type Theory with impredicative polymorphism; makes the connections to separation logic clear; and provides a basis for investigation of further sound extensions of the theory, in particular equations between computations and types.

  3. Irreducible integrable theories form tensor products of conformal models

    International Nuclear Information System (INIS)

    Mathur, S.D.; Warner, N.P.

    1991-01-01

    By using Toda field theories we show that there are perturbations of direct products of conformal theories that lead to irreducible integrable field theories. The same affine Toda theory can be truncated to different quantum integrable models for different choices of the charge at infinity and the coupling. The classification of integrable models that can be obtained in this fashion follows the classification of symmetric spaces of type G/H with rank H = rank G. (orig.)

  4. Prediction on corrosion rate of pipe in nuclear power system based on optimized grey theory

    International Nuclear Information System (INIS)

    Chen Yonghong; Zhang Dafa; Chen Dengke; Jiang Wei

    2007-01-01

    For the prediction of the corrosion rate of pipe in nuclear power systems, the prediction error from classical grey theory is large, so a new method, optimized grey theory, is presented in this paper. A comparison of predicted results from the present method and other methods was carried out, and it is seen that optimized grey theory is correct and effective for the prediction of the corrosion rate of pipe in nuclear power systems, and it provides a fundamental basis for pipe maintenance in nuclear power systems. (authors)
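
    The classical GM(1,1) grey model that such an optimization presumably starts from is compact enough to sketch. The corrosion-rate series below is invented for illustration, and the paper's optimization step is not reproduced:

```python
import numpy as np

def gm11_forecast(x0, steps=1):
    """Classical GM(1,1) grey-model fit and forecast."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                      # accumulated generating sequence
    z1 = 0.5 * (x1[1:] + x1[:-1])           # mean generating sequence
    B = np.column_stack([-z1, np.ones_like(z1)])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]  # develop/grey coefficients
    k = np.arange(len(x0) + steps)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a  # time response function
    return np.diff(x1_hat, prepend=0.0)     # back to the original sequence

# Hypothetical annual corrosion-rate measurements (mm/yr) for a pipe section
rates = [0.48, 0.52, 0.57, 0.61, 0.67]
pred = gm11_forecast(rates, steps=2)
print(pred[-2:])   # fitted continuation for the next two years
```

    GM(1,1) fits a single exponential trend to the accumulated series, which is why small, near-monotonic samples such as corrosion records are its typical use case.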

  5. Gaussian Mixture Model of Heart Rate Variability

    Science.gov (United States)

    Costa, Tommaso; Boccignone, Giuseppe; Ferraro, Mario

    2012-01-01

    Heart rate variability (HRV) is an important measure of sympathetic and parasympathetic functions of the autonomic nervous system and a key indicator of cardiovascular condition. This paper proposes a novel method to investigate HRV, namely by modelling it as a linear combination of Gaussians. Results show that three Gaussians are enough to describe the stationary statistics of heart variability and to provide a straightforward interpretation of the HRV power spectrum. Comparisons have been made also with synthetic data generated from different physiologically based models showing the plausibility of the Gaussian mixture parameters. PMID:22666386
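
    A minimal sketch of the approach: fit a three-component one-dimensional Gaussian mixture by expectation-maximization. The synthetic RR-interval data, seed, and component parameters below are invented stand-ins for the paper's HRV recordings, and the EM loop is deliberately bare-bones:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic RR-interval data (seconds): three Gaussian regimes, loosely
# mimicking the paper's three-component description of HRV.
data = np.concatenate([
    rng.normal(0.80, 0.02, 600),
    rng.normal(0.90, 0.03, 300),
    rng.normal(1.00, 0.04, 100),
])

def em_gmm_1d(x, k=3, iters=200):
    """Minimal EM for a 1-D Gaussian mixture (illustrative, not robust)."""
    mu = np.quantile(x, np.linspace(0.1, 0.9, k))   # spread initial means
    var = np.full(k, x.var())
    w = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: responsibility of each component for each sample
        dens = w * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) \
               / np.sqrt(2 * np.pi * var)
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, and variances
        nk = resp.sum(axis=0)
        w, mu = nk / len(x), (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    return w, mu, var

w, mu, var = em_gmm_1d(data)
print(np.round(np.sort(mu), 3))
```

    A production analysis would add a convergence criterion and variance floors; the point here is only that a handful of Gaussians can summarize the stationary RR-interval statistics.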

  6. Predicting extinction rates in stochastic epidemic models

    International Nuclear Information System (INIS)

    Schwartz, Ira B; Billings, Lora; Dykman, Mark; Landsman, Alexandra

    2009-01-01

    We investigate the stochastic extinction processes in a class of epidemic models. Motivated by the process of natural disease extinction in epidemics, we examine the rate of extinction as a function of disease spread. We show that the effective entropic barrier for extinction in a susceptible–infected–susceptible epidemic model displays scaling with the distance to the bifurcation point, with an unusual critical exponent. We make a direct comparison between predictions and numerical simulations. We also consider the effect of non-Gaussian vaccine schedules, and show numerically how the extinction process may be enhanced when the vaccine schedules are Poisson distributed
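
    The extinction process in a stochastic SIS model can be illustrated with a direct Gillespie simulation. The parameters below are illustrative; the paper's results rest on entropic-barrier scaling arguments rather than brute-force simulation:

```python
import numpy as np

rng = np.random.default_rng(1)

def sis_extinction_time(N=50, beta=1.5, gamma=1.0, t_max=1e4):
    """Gillespie simulation of a stochastic SIS model; returns the time
    at which the infection dies out (capped at t_max)."""
    I, t = N // 10, 0.0
    while I > 0 and t < t_max:
        infect = beta * I * (N - I) / N    # S -> I event rate
        recover = gamma * I                # I -> S event rate
        total = infect + recover
        t += rng.exponential(1.0 / total)  # waiting time to next event
        if rng.random() < infect / total:
            I += 1
        else:
            I -= 1
    return t

times = [sis_extinction_time() for _ in range(20)]
print(f"mean extinction time over 20 runs: {np.mean(times):.1f}")
```

    With the reproduction number beta/gamma only modestly above 1 and a small population, fluctuations drive the infected count to zero in finite time, which is the rare-event rate the abstract analyzes.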

  7. Modeling the Volatility of Exchange Rates: GARCH Models

    Directory of Open Access Journals (Sweden)

    Fahima Charef

    2017-03-01

    Full Text Available The modeling of the dynamics of exchange rates over long horizons remains a central topic of financial and economic research. In our research we study the relationship between the evolution of exchange rates and macroeconomic fundamentals. Our empirical study is based on series of exchange rates for the Tunisian dinar against three currencies of major trading partners (dollar, euro, yen) and fundamentals (the terms of trade, the inflation rate, the interest rate differential), using monthly data from January 2000 to December 2014 for the case of Tunisia. We adopt models of conditional heteroscedasticity (ARCH, GARCH, EGARCH, TGARCH). The results indicate that there is a partial relationship between the evolution of Tunisian dinar exchange rates and macroeconomic variables.
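
    The volatility clustering that motivates ARCH-family models can be illustrated by simulating a GARCH(1,1) process. The coefficients below are generic illustrative values, not the paper's estimates for the Tunisian dinar:

```python
import numpy as np

rng = np.random.default_rng(7)

def simulate_garch11(n, omega=1e-6, alpha=0.08, beta=0.90):
    """Simulate GARCH(1,1) returns:
    sigma2_t = omega + alpha * r_{t-1}^2 + beta * sigma2_{t-1}."""
    r = np.empty(n)
    sigma2 = omega / (1 - alpha - beta)    # start at unconditional variance
    for t in range(n):
        r[t] = np.sqrt(sigma2) * rng.standard_normal()
        sigma2 = omega + alpha * r[t] ** 2 + beta * sigma2
    return r

returns = simulate_garch11(5000)

# Volatility clustering: squared returns are autocorrelated even though
# the returns themselves are (nearly) uncorrelated.
ac = lambda x: np.corrcoef(x[:-1], x[1:])[0, 1]
print(f"lag-1 autocorrelation of r:   {ac(returns):+.3f}")
print(f"lag-1 autocorrelation of r^2: {ac(returns ** 2):+.3f}")
```

    EGARCH and TGARCH extend this recursion with asymmetric (leverage) terms, which is why they appear alongside GARCH in studies of exchange-rate volatility.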

  8. Rate theory scenarios study on fission gas behavior of U 3 Si 2 under LOCA conditions in LWRs

    Energy Technology Data Exchange (ETDEWEB)

    Miao, Yinbin; Gamble, Kyle A.; Andersson, David; Mei, Zhi-Gang; Yacout, Abdellatif M.

    2018-01-01

    Fission gas behavior of U3Si2 under various loss-of-coolant accident (LOCA) conditions in light water reactors (LWRs) was simulated using rate theory. A rate theory model for U3Si2 that covers both steady-state operation and power transients was developed for the GRASS-SST code based on existing research reactor/ion irradiation experimental data and theoretical predictions of density functional theory (DFT) calculations. The steady-state and LOCA condition parameters were either directly provided or inspired by BISON simulations. Due to the absence of in-pile experiment data for U3Si2's fuel performance under LWR conditions at this stage of accident tolerant fuel (ATF) development, a variety of LOCA scenarios were taken into consideration to comprehensively and conservatively evaluate the fission gas behavior of U3Si2 during a LOCA.

  9. From Landau's hydrodynamical model to field theory models of multiparticle production: a tribute to Peter

    International Nuclear Information System (INIS)

    Cooper, F.

    1996-01-01

    We review the assumptions and domain of applicability of Landau's Hydrodynamical Model. By considering two models of particle production, pair production from strong electric fields and particle production in the linear σ model, we demonstrate that many of Landau's ideas are verified in explicit field theory calculations

  10. Modelling of rate effects at multiple scales

    DEFF Research Database (Denmark)

    Pedersen, R.R.; Simone, A.; Sluys, L. J.

    2008-01-01

    At the macro- and meso-scales a rate dependent constitutive model is used in which visco-elasticity is coupled to visco-plasticity and damage. A viscous length scale effect is introduced to control the size of the fracture process zone. By comparison of the widths of the fracture process zone, the length scale in the meso-model and the macro-model can be coupled. In this fashion, a bridging of length scales can be established. A computational analysis of a Split Hopkinson bar test at medium and high impact load is carried out at macro-scale and meso-scale, including information from the micro-scale.

  11. Applications of the absolute reaction rate theory to biological responses in electric and magnetic fields

    International Nuclear Information System (INIS)

    Brannen, J.P.; Wayland, J.R.

    1976-01-01

    This paper develops a theoretical foundation for the study of biological responses to electric and magnetic fields. The basis of the development is the absolute reaction rate theory and the effects of fields on reaction rates. A simple application to the response of Bacillus subtilis var. niger in a microwave field is made. Potential areas of application are discussed.

  12. Chaos Theory as a Model for Managing Issues and Crises.

    Science.gov (United States)

    Murphy, Priscilla

    1996-01-01

    Uses chaos theory to model public relations situations in which the salient feature is volatility of public perceptions. Discusses the premises of chaos theory and applies them to issues management, the evolution of interest groups, crises, and rumors. Concludes that chaos theory is useful as an analogy to structure image problems and to raise…

  13. Catastrophe Theory: A Unified Model for Educational Change.

    Science.gov (United States)

    Cryer, Patricia; Elton, Lewis

    1990-01-01

    Catastrophe Theory and Herzberg's theory of motivation at work was used to create a model of change that unifies and extends Lewin's two separate stage and force field models. This new model is used to analyze the behavior of academics as they adapt to the changing university environment. (Author/MLW)

  14. A Leadership Identity Development Model: Applications from a Grounded Theory

    Science.gov (United States)

    Komives, Susan R.; Mainella, Felicia C.; Longerbeam, Susan D.; Osteen, Laura; Owen, Julie E.

    2006-01-01

    This article describes a stage-based model of leadership identity development (LID) that resulted from a grounded theory study on developing a leadership identity (Komives, Owen, Longerbeam, Mainella, & Osteen, 2005). The LID model expands on the leadership identity stages, integrates the categories of the grounded theory into the LID model, and…

  15. Reconstructing Constructivism: Causal Models, Bayesian Learning Mechanisms, and the Theory Theory

    Science.gov (United States)

    Gopnik, Alison; Wellman, Henry M.

    2012-01-01

    We propose a new version of the "theory theory" grounded in the computational framework of probabilistic causal models and Bayesian learning. Probabilistic models allow a constructivist but rigorous and detailed approach to cognitive development. They also explain the learning of both more specific causal hypotheses and more abstract framework…

  16. Reconsideration of r/K Selection Theory Using Stochastic Control Theory and Nonlinear Structured Population Models.

    Directory of Open Access Journals (Sweden)

    Ryo Oizumi

    Full Text Available Despite the fact that density effects and individual differences in life history are considered to be important for evolution, these factors lead to several difficulties in understanding the evolution of life history, especially when population sizes reach the carrying capacity. r/K selection theory explains what types of life strategies evolve in the presence of density effects and individual differences. However, the relationship between the life schedules of individuals and population size is still unclear, even if the theory can classify life strategies appropriately. To address this issue, we propose a few equations on adaptive life strategies in r/K selection where density effects are absent or present. The equations detail not only the adaptive life history but also the population dynamics. Furthermore, the equations can incorporate temporal individual differences, which are referred to as internal stochasticity. Our framework reveals that maximizing density effects is an evolutionarily stable strategy related to the carrying capacity. A significant consequence of our analysis is that adaptive strategies in both selections maximize an identical function, providing both population growth rate and carrying capacity. We apply our method to an optimal foraging problem in a semelparous species model and demonstrate that the adaptive strategy yields a lower intrinsic growth rate as well as a lower basic reproductive number than those obtained with other strategies. This study proposes that the diversity of life strategies arises due to the effects of density and internal stochasticity.
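
    The r-versus-K trade-off underlying the theory can be illustrated with the logistic equation dN/dt = rN(1 - N/K), in which r sets the intrinsic growth rate and K the carrying capacity. All parameters below are invented for illustration:

```python
import numpy as np

def logistic(n0, r, K, steps, dt=0.01):
    """Forward-Euler integration of dN/dt = r * N * (1 - N/K)."""
    n, traj = n0, [n0]
    for _ in range(steps):
        n += dt * r * n * (1 - n / K)
        traj.append(n)
    return np.array(traj)

# Illustrative strategies: an r-strategist grows fast but saturates at a
# low carrying capacity; a K-strategist grows slowly toward a high one.
r_strat = logistic(n0=1.0, r=2.0, K=100.0, steps=2000)
k_strat = logistic(n0=1.0, r=0.5, K=1000.0, steps=2000)

print(f"early populations:    {r_strat[200]:.0f} vs {k_strat[200]:.0f}")
print(f"long-run populations: {r_strat[-1]:.0f} vs {k_strat[-1]:.0f}")
```

    The r-strategist dominates early while the K-strategist wins in the long run; the abstract's point is that under density effects and internal stochasticity both strategies can be seen as maximizing one function combining growth rate and carrying capacity.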

  17. Reconsideration of r/K Selection Theory Using Stochastic Control Theory and Nonlinear Structured Population Models.

    Science.gov (United States)

    Oizumi, Ryo; Kuniya, Toshikazu; Enatsu, Yoichi

    2016-01-01

    Despite the fact that density effects and individual differences in life history are considered to be important for evolution, these factors lead to several difficulties in understanding the evolution of life history, especially when population sizes reach the carrying capacity. r/K selection theory explains what types of life strategies evolve in the presence of density effects and individual differences. However, the relationship between the life schedules of individuals and population size is still unclear, even if the theory can classify life strategies appropriately. To address this issue, we propose a few equations on adaptive life strategies in r/K selection where density effects are absent or present. The equations detail not only the adaptive life history but also the population dynamics. Furthermore, the equations can incorporate temporal individual differences, which are referred to as internal stochasticity. Our framework reveals that maximizing density effects is an evolutionarily stable strategy related to the carrying capacity. A significant consequence of our analysis is that adaptive strategies in both selections maximize an identical function, providing both population growth rate and carrying capacity. We apply our method to an optimal foraging problem in a semelparous species model and demonstrate that the adaptive strategy yields a lower intrinsic growth rate as well as a lower basic reproductive number than those obtained with other strategies. This study proposes that the diversity of life strategies arises due to the effects of density and internal stochasticity.

  18. Domain Theory, Its Models and Concepts

    DEFF Research Database (Denmark)

    Andreasen, Mogens Myrup; Howard, Thomas J.; Bruun, Hans Peter Lomholt

    2014-01-01

    Domain Theory is a systems approach for the analysis and synthesis of products. Its basic idea is to view a product as systems of activities, organs and parts and to define structure, elements, behaviour and function in these domains. The theory is a basis for a long line of research contributions.

  19. Phenomenological rate process theory for the storage of atomic H in solid H2*

    International Nuclear Information System (INIS)

    Rosen, G.

    1976-01-01

    A phenomenological rate process theory is developed for the storage and rapid recombination of atomic hydrogen fuel radicals in a crystalline molecular hydrogen solid at temperatures in the range 0.1 K ≤ T ≤ … K. It is shown that such a theory can account quantitatively for the recently observed dependence of the storage time on the storage temperature, for the maximum concentration of trapped H atoms, and for the time duration of the energy release in the tritium decay experiments of Webeler.

  20. Using NIF to Test Theories of High-Pressure, High-Rate Plastic Flow in Metals

    Science.gov (United States)

    Rudd, Robert E.; Arsenlis, A.; Cavallo, R. M.; Huntington, C. M.; McNaney, J. M.; Park, H. S.; Powell, P.; Prisbrey, S. T.; Remington, B. A.; Swift, D.; Wehrenberg, C. E.; Yang, L.

    2017-10-01

    Precisely controlled plasmas are playing key roles both as pump and probe in experiments to understand the strength of solid metals at high energy density (HED) conditions. In concert with theoretical advances, these experiments have enabled a predictive capability to model material strength at Mbar pressures and high strain rates. Here we describe multiscale strength models developed for tantalum starting with atomic bonding and extending up through the mobility of individual dislocations, the evolution of dislocation networks and so on until the ultimate material response at the scale of an experiment. Experiments at the National Ignition Facility (NIF) probe strength in metals ramp compressed to 1-8 Mbar. The model is able to predict 1 Mbar experiments without adjustable parameters. The combination of experiment and theory has shown that solid metals can behave significantly differently at HED conditions. We also describe recent studies of lead compressed to 3-5 Mbar. Work performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under contract DE-AC52-07NA273.

  1. General Theory of Decoy-State Quantum Cryptography with Dark Count Rate Fluctuation

    International Nuclear Information System (INIS)

    Xiang, Gao; Shi-Hai, Sun; Lin-Mei, Liang

    2009-01-01

    The existing theory of decoy-state quantum cryptography assumes that the dark count rate is a constant, but in practice there exists fluctuation. We develop a new scheme of the decoy state, achieve a more practical key generation rate in the presence of fluctuation of the dark count rate, and compare the result with the result of the decoy-state without fluctuation. It is found that the key generation rate and maximal secure distance will be decreased under the influence of the fluctuation of the dark count rate

  2. Annonaceae substitution rates: a codon model perspective

    Directory of Open Access Journals (Sweden)

    Lars Willem Chatrou

    2014-01-01

    Full Text Available The Annonaceae includes cultivated species of economic interest and represents an important source of information for better understanding the evolution of tropical rainforests. In phylogenetic analyses of DNA sequence data that are used to address evolutionary questions, it is imperative to use appropriate statistical models. Annonaceae are cases in point: two sister clades, the subfamilies Annonoideae and Malmeoideae, contain the majority of Annonaceae species diversity. The Annonoideae generally show a greater degree of sequence divergence compared to the Malmeoideae, resulting in stark differences in branch lengths in phylogenetic trees. Uncertainty in how to interpret and analyse these differences has led to inconsistent results when estimating the ages of clades in Annonaceae using molecular dating techniques. We ask whether these differences may be attributed to inappropriate modelling assumptions in the phylogenetic analyses. Specifically, we test for (clade-specific) differences in rates of non-synonymous and synonymous substitutions. A high ratio of non-synonymous to synonymous substitutions may lead to similarity of DNA sequences due to convergence instead of common ancestry, and as a result confound phylogenetic analyses. We use a dataset of three chloroplast genes (rbcL, matK, ndhF) for 129 species representative of the family. We find that differences in branch lengths between major clades are not attributable to different rates of non-synonymous and synonymous substitutions. The differences in evolutionary rate between the major clades of Annonaceae pose a challenge for current molecular dating techniques that should be seen as a warning for the interpretation of such results in other organisms.
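
    The distinction between synonymous and non-synonymous substitutions that codon models formalize can be shown with a toy difference counter. The codon-to-amino-acid table below is only the small subset needed for this example, not the full genetic code, and real dN/dS estimation additionally normalizes by the numbers of synonymous and non-synonymous sites:

```python
# Partial codon table (assumed subset of the standard genetic code).
CODON = {
    "AAA": "K", "AAG": "K",   # lysine: third-position change is synonymous
    "GAT": "D", "GAA": "E",   # aspartate vs glutamate
    "TTT": "F", "TTC": "F",   # phenylalanine
    "ATG": "M", "ATA": "I",   # methionine vs isoleucine
}

def classify_differences(seq1, seq2):
    """Count synonymous vs non-synonymous codon differences between two
    aligned, gap-free coding sequences."""
    syn = nonsyn = 0
    for i in range(0, len(seq1), 3):
        c1, c2 = seq1[i:i + 3], seq2[i:i + 3]
        if c1 == c2:
            continue
        if CODON[c1] == CODON[c2]:
            syn += 1      # nucleotide change, same amino acid
        else:
            nonsyn += 1   # nucleotide change alters the protein
    return syn, nonsyn

syn, nonsyn = classify_differences("AAAGATTTTATG", "AAGGAATTCATA")
print(f"synonymous: {syn}, non-synonymous: {nonsyn}")
```

    A clade in which the non-synonymous count dominates may show sequence similarity driven by selection (convergence) rather than shared ancestry, which is the confounding effect the abstract tests for.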

  3. Big bang models in string theory

    Energy Technology Data Exchange (ETDEWEB)

    Craps, Ben [Theoretische Natuurkunde, Vrije Universiteit Brussel and The International Solvay Institutes Pleinlaan 2, B-1050 Brussels (Belgium)

    2006-11-07

    These proceedings are based on lectures delivered at the 'RTN Winter School on Strings, Supergravity and Gauge Theories', CERN, 16-20 January 2006. The school was mainly aimed at PhD students and young postdocs. The lectures start with a brief introduction to spacetime singularities and the string theory resolution of certain static singularities. Then they discuss attempts to resolve cosmological singularities in string theory, mainly focusing on two specific examples: the Milne orbifold and the matrix big bang.

  4. A CVAR scenario for a standard monetary model using theory-consistent expectations

    DEFF Research Database (Denmark)

    Juselius, Katarina

    2017-01-01

    A theory-consistent CVAR scenario describes a set of testable regularities capturing basic assumptions of the theoretical model. Using this concept, the paper considers a standard model for exchange rate determination and shows that all assumptions about the model's shock structure and steady...

  5. The Standard Model is Natural as Magnetic Gauge Theory

    DEFF Research Database (Denmark)

    Sannino, Francesco

    2011-01-01

    We suggest that the Standard Model can be viewed as the magnetic dual of a gauge theory featuring only fermionic matter content. We show this by first introducing a Pati-Salam like extension of the Standard Model and then relating it to a possible dual electric theory featuring only fermionic matter. The absence of scalars in the electric theory indicates that the associated magnetic theory is free from quadratic divergences. Our novel solution to the Standard Model hierarchy problem leads also to a new insight on the mystery of the observed number of fundamental fermion generations.

  6. Chiral gauged Wess-Zumino-Witten theories and coset models in conformal field theory

    International Nuclear Information System (INIS)

    Chung, S.; Tye, S.H.

    1993-01-01

    The Wess-Zumino-Witten (WZW) theory has a global symmetry denoted by G_L ⊗ G_R. In the standard gauged WZW theory, vector gauge fields (i.e., with vector gauge couplings) are in the adjoint representation of the subgroup H ⊂ G. In this paper, we show that, in the conformal limit in two dimensions, there is a gauged WZW theory where the gauge fields are chiral and belong to the subgroups H_L and H_R, where H_L and H_R can be different groups. In the special case where H_L = H_R, the theory is equivalent to the vector gauged WZW theory. For general groups H_L and H_R, an examination of the correlation functions (or more precisely, conformal blocks) shows that the chiral gauged WZW theory is equivalent to (G/H_L)_L ⊗ (G/H_R)_R coset models in conformal field theory

  7. Spatial data modelling and maximum entropy theory

    Czech Academy of Sciences Publication Activity Database

    Klimešová, Dana; Ocelíková, E.

    2005-01-01

    Roč. 51, č. 2 (2005), s. 80-83 ISSN 0139-570X Institutional research plan: CEZ:AV0Z10750506 Keywords : spatial data classification * distribution function * error distribution Subject RIV: BD - Theory of Information

  8. Electroweak theory and the Standard Model

    CERN Multimedia

    CERN. Geneva; Giudice, Gian Francesco

    2004-01-01

    There is a natural splitting in four sectors of the theory of the ElectroWeak (EW) Interactions, at pretty different levels of development/test. Accordingly, the 5 lectures are organized as follows, with an eye to the future: Lecture 1: The basic structure of the theory; Lecture 2: The gauge sector; Lecture 3: The flavor sector; Lecture 4: The neutrino sector; Lecture 5: The EW symmetry breaking sector.

  9. Statistical Learning Theory: Models, Concepts, and Results

    OpenAIRE

    von Luxburg, Ulrike; Schoelkopf, Bernhard

    2008-01-01

    Statistical learning theory provides the theoretical basis for many of today's machine learning algorithms. In this article we attempt to give a gentle, non-technical overview over the key ideas and insights of statistical learning theory. We target at a broad audience, not necessarily machine learning researchers. This paper can serve as a starting point for people who want to get an overview on the field before diving into technical details.

  10. Solid modeling and applications rapid prototyping, CAD and CAE theory

    CERN Document Server

    Um, Dugan

    2016-01-01

    The lessons in this fundamental text equip students with the theory of Computer Assisted Design (CAD), Computer Assisted Engineering (CAE), the essentials of Rapid Prototyping, as well as practical skills needed to apply this understanding in real world design and manufacturing settings. The book includes three main areas: CAD, CAE, and Rapid Prototyping, each enriched with numerous examples and exercises. In the CAD section, Professor Um outlines the basic concept of geometric modeling, Hermite and Bezier spline curve theory, and 3-dimensional surface theories as well as rendering theory. The CAE section explores mesh generation theory, matrix notation for FEM, the stiffness method, and truss equations. And in Rapid Prototyping, the author illustrates stereolithography theory and introduces popular modern RP technologies. Solid Modeling and Applications: Rapid Prototyping, CAD and CAE Theory is ideal for university students in various engineering disciplines as well as design engineers involved in product...

  11. The logical foundations of scientific theories languages, structures, and models

    CERN Document Server

    Krause, Decio

    2016-01-01

    This book addresses the logical aspects of the foundations of scientific theories. Even though the relevance of formal methods in the study of scientific theories is now widely recognized and regaining prominence, the issues covered here are still not generally discussed in philosophy of science. The authors focus mainly on the role played by the underlying formal apparatuses employed in the construction of the models of scientific theories, relating the discussion with the so-called semantic approach to scientific theories. The book describes the role played by this metamathematical framework in three main aspects: considerations of formal languages employed to axiomatize scientific theories, the role of the axiomatic method itself, and the way set-theoretical structures, which play the role of the models of theories, are developed. The authors also discuss the differences and philosophical relevance of the two basic ways of axiomatizing a scientific theory, namely Patrick Suppes' set theoretical predicate...

  12. Supersymmetry and String Theory: Beyond the Standard Model

    International Nuclear Information System (INIS)

    Rocek, Martin

    2007-01-01

    When I was asked to review Michael Dine's new book, 'Supersymmetry and String Theory', I was pleased to have a chance to read a book by such an established authority on how string theory might become testable. The book is most useful as a list of current topics of interest in modern theoretical physics. It gives a succinct summary of a huge variety of subjects, including the standard model, symmetry, Yang-Mills theory, quantization of gauge theories, the phenomenology of the standard model, the renormalization group, lattice gauge theory, effective field theories, anomalies, instantons, solitons, monopoles, dualities, technicolor, supersymmetry, the minimal supersymmetric standard model, dynamical supersymmetry breaking, extended supersymmetry, Seiberg-Witten theory, general relativity, cosmology, inflation, bosonic string theory, the superstring, the heterotic string, string compactifications, the quintic, string dualities, large extra dimensions, and, in the appendices, Goldstone's theorem, path integrals, and exact beta-functions in supersymmetric gauge theories. Its breadth is both its strength and its weakness: it is not (and could not possibly be) either a definitive reference for experts, where the details of thorny technical issues are carefully explored, or a textbook for graduate students, with detailed pedagogical expositions. As such, it complements rather than replaces the much narrower and more focussed String Theory I and II volumes by Polchinski, with their deep insights, as well as the two older volumes by Green, Schwarz, and Witten, which develop string theory pedagogically. (book review)

  13. Introduction to gauge theories and the Standard Model

    CERN Document Server

    de Wit, Bernard

    1995-01-01

    The conceptual basis of gauge theories is introduced to enable the construction of generic models. Spontaneous symmetry breaking is discussed and its relevance for the renormalization of theories with massive vector fields is explained. Subsequently the standard model is described. When time permits we will address more practical questions that arise in the evaluation of quantum corrections.

  14. A 'theory of everything'? [Extending the Standard Model

    International Nuclear Information System (INIS)

    Ross, G.G.

    1993-01-01

    The Standard Model provides us with an amazingly successful theory of the strong, weak and electromagnetic interactions. Despite this, many physicists believe it represents only a step towards understanding the ultimate ''theory of everything''. In this article we describe why the Standard Model is thought to be incomplete and some of the suggestions for its extension. (Author)

  15. Neutron Star Models in Alternative Theories of Gravity

    Science.gov (United States)

    Manolidis, Dimitrios

    We study the structure of neutron stars in a broad class of alternative theories of gravity. In particular, we focus on Scalar-Tensor theories and f(R) theories of gravity. We construct static and slowly rotating numerical star models for a set of equations of state, including a polytropic model and more realistic equations of state motivated by nuclear physics. Observable quantities such as masses, radii, etc. are calculated for a set of parameters of the theories. Specifically for Scalar-Tensor theories, we also calculate the sensitivities of the mass and moment of inertia of the models to variations in the asymptotic value of the scalar field at infinity. These quantities enter post-Newtonian equations of motion and gravitational waveforms of two-body systems that are used for gravitational-wave parameter estimation, in order to test these theories against observations. The construction of numerical models of neutron stars in f(R) theories of gravity has been difficult in the past. Using a new formalism by Jaime, Patino and Salgado we were able to construct models with high interior pressure, namely pc > ρc/3, both for constant density models and models with a polytropic equation of state. Thus, we have shown that earlier objections to f(R) theories on the basis of the inability to construct viable neutron star models are unfounded.

  16. Generalized algebra-valued models of set theory

    NARCIS (Netherlands)

    Löwe, B.; Tarafder, S.

    2015-01-01

    We generalize the construction of lattice-valued models of set theory due to Takeuti, Titani, Kozawa and Ozawa to a wider class of algebras and show that this yields a model of a paraconsistent logic that validates all axioms of the negation-free fragment of Zermelo-Fraenkel set theory.

  17. A QCD Model Using Generalized Yang-Mills Theory

    International Nuclear Information System (INIS)

    Wang Dianfu; Song Heshan; Kou Lina

    2007-01-01

    Generalized Yang-Mills theory has a covariant derivative, which contains both vector and scalar gauge bosons. Based on this theory, we construct a strong interaction model by using the group U(4). By using this U(4) generalized Yang-Mills model, we also obtain a gauge potential solution, which can be used to explain the asymptotic behavior and color confinement.

  18. A review of organizational buyer behaviour models and theories ...

    African Journals Online (AJOL)

    Over the years, models have been developed, and theories propounded, to explain the behavior of industrial buyers on the one hand and the nature of the dyadic relationship between organizational buyers and sellers on the other hand. This paper is an attempt at a review of the major models and theories in extant ...

  19. Rate theory of solvent exchange and kinetics of Li(+) - BF4 (-)/PF6 (-) ion pairs in acetonitrile.

    Science.gov (United States)

    Dang, Liem X; Chang, Tsun-Mei

    2016-09-07

    In this paper, we describe our efforts to apply rate theories in studies of solvent exchange around Li(+) and the kinetics of ion pairings in lithium-ion batteries (LIBs). We report one of the first computer simulations of the exchange dynamics around solvated Li(+) in acetonitrile (ACN), which is a common solvent used in LIBs. We also provide details of the ion-pairing kinetics of Li(+)-[BF4] and Li(+)-[PF6] in ACN. Using our polarizable force-field models and employing classical rate theories of chemical reactions, we examine the ACN exchange process between the first and second solvation shells around Li(+). We calculate exchange rates using transition state theory and weight them with transmission coefficients determined by the reactive flux method; the Impey, Madden, and McDonald approach; and Grote-Hynes theory. We found the relaxation times changed from 180 ps to 4600 ps and from 30 ps to 280 ps for Li(+)-[BF4] and Li(+)-[PF6] ion pairs, respectively. These results confirm that the solvent response to the kinetics of ion pairing is significant. Our results also show that, in addition to affecting the free energy of solvation into ACN, the anion type should also significantly influence the kinetics of ion pairing. These results will increase our understanding of the thermodynamic and kinetic properties of LIB systems.
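The relaxation times quoted above follow from scaling the transition-state-theory rate by a transmission coefficient, k = κ·k_TST, with τ = 1/k. A minimal sketch of that relationship (the function name and numbers are illustrative assumptions, loosely echoing the 30 ps → 280 ps change reported for the Li(+)-[PF6] pair; this is not the paper's code):

```python
def corrected_rate(k_tst, kappa):
    """Transition-state-theory rate scaled by a transmission coefficient kappa <= 1."""
    return kappa * k_tst

# Hypothetical values: a bare TST relaxation time of 30 ps, and a
# transmission coefficient chosen so the corrected time becomes 280 ps.
k_tst = 1.0 / 30.0       # ps^-1
kappa = 30.0 / 280.0     # dimensionless, < 1: barrier recrossings slow the rate
tau = 1.0 / corrected_rate(k_tst, kappa)
print(tau)  # ≈ 280 ps
```

The point of the sketch is only that κ < 1 lengthens the effective relaxation time relative to the bare TST estimate.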

  20. The Birth of Model Theory Lowenheim's Theorem in the Frame of the Theory of Relatives

    CERN Document Server

    Badesa, Calixto

    2008-01-01

    Löwenheim's theorem reflects a critical point in the history of mathematical logic, for it marks the birth of model theory--that is, the part of logic that concerns the relationship between formal theories and their models. However, while the original proofs of other, comparably significant theorems are well understood, this is not the case with Löwenheim's theorem. For example, the very result that scholars attribute to Löwenheim today is not the one that Skolem--a logician raised in the algebraic tradition, like Löwenheim--appears to have attributed to him. In The Birth of Model Theory, Cali

  1. Application of decision-making theory to the regulation of muscular work rate during self-paced competitive endurance activity.

    Science.gov (United States)

    Renfree, Andrew; Martin, Louise; Micklewright, Dominic; St Clair Gibson, Alan

    2014-02-01

    Successful participation in competitive endurance activities requires continual regulation of muscular work rate in order to maximise physiological performance capacities, meaning that individuals must make numerous decisions with regards to the muscular work rate selected at any point in time. Decisions relating to the setting of appropriate goals and the overall strategic approach to be utilised are made prior to the commencement of an event, whereas tactical decisions are made during the event itself. This review examines current theories of decision-making in an attempt to explain the manner in which regulation of muscular work is achieved during athletic activity. We describe rational and heuristic theories, and relate these to current models of regulatory processes during self-paced exercise in an attempt to explain observations made in both laboratory and competitive environments. Additionally, we use rational and heuristic theories in an attempt to explain the influence of the presence of direct competitors on the quality of the decisions made during these activities. We hypothesise that although both rational and heuristic models can plausibly explain many observed behaviours in competitive endurance activities, the complexity of the environment in which such activities occur would imply that effective rational decision-making is unlikely. However, at present, many proposed models of the regulatory process share similarities with rational models. We suggest enhanced understanding of the decision-making process during self-paced activities is crucial in order to improve the ability to understand regulation of performance and performance outcomes during athletic activity.

  2. Tantalum strength model incorporating temperature, strain rate and pressure

    Science.gov (United States)

    Lim, Hojun; Battaile, Corbett; Brown, Justin; Lane, Matt

    Tantalum is a body-centered-cubic (BCC) refractory metal that is widely used in many applications in high temperature, strain rate and pressure environments. In this work, we propose a physically-based strength model for tantalum that incorporates effects of temperature, strain rate and pressure. A constitutive model for single crystal tantalum is developed based on dislocation kink-pair theory, and calibrated to measurements on single crystal specimens. The model is then used to predict deformations of single- and polycrystalline tantalum. In addition, the proposed strength model is implemented into Sandia's ALEGRA solid dynamics code to predict plastic deformations of tantalum in engineering-scale applications at extreme conditions, e.g. Taylor impact tests and Z machine's high pressure ramp compression tests, and the results are compared with available experimental data. Sandia National Laboratories is a multi program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.

  3. To Save or to Consume: Linking Growth Theory with the Keynesian Model

    Science.gov (United States)

    Kwok, Yun-kwong

    2007-01-01

    In the neoclassical growth theory, higher saving rate gives rise to higher output per capita. However, in the Keynesian model, higher saving rate causes lower consumption, which may lead to a recession. Students may ask, "Should we save or should we consume?" In most of the macroeconomics textbooks, economic growth and Keynesian economics are in…

  4. Non-linear σ-models and string theories

    International Nuclear Information System (INIS)

    Sen, A.

    1986-10-01

    The connection between σ-models and string theories is discussed, as well as how the σ-models can be used as tools to prove various results in string theories. Closed bosonic string theory in the light cone gauge is very briefly introduced. Then, closed bosonic string theory in the presence of massless background fields is discussed. The light cone gauge is used, and it is shown that in order to obtain a Lorentz invariant theory, the string theory in the presence of background fields must be described by a two-dimensional conformally invariant theory. The resulting constraints on the background fields are found to be the equations of motion of the string theory. The analysis is extended to the case of the heterotic string theory and the superstring theory in the presence of the massless background fields. It is then shown how to use these results to obtain nontrivial solutions to the string field equations. Another application of these results is shown, namely to prove that the effective cosmological constant after compactification vanishes as a consequence of the classical equations of motion of the string theory. 34 refs

  5. Tissue Acoustoelectric Effect Modeling From Solid Mechanics Theory.

    Science.gov (United States)

    Song, Xizi; Qin, Yexian; Xu, Yanbin; Ingram, Pier; Witte, Russell S; Dong, Feng

    2017-10-01

    The acoustoelectric (AE) effect is a basic physical phenomenon in which the application of focused ultrasound changes the conductivity of a medium. Recently, based on the AE effect, several biomedical imaging techniques have been widely studied, such as ultrasound-modulated electrical impedance tomography and ultrasound current source density imaging. To further investigate the mechanism of the AE effect in tissue and to provide guidance for such techniques, we have modeled the tissue AE effect using the theory of solid mechanics. Both bulk compression and thermal expansion of tissue are considered and discussed. Computational simulation shows that the muscle AE effect result, the conductivity change rate, is 3.26×10⁻³ at 4.3-MPa peak pressure, satisfying the theoretical value. Bulk compression plays the main role in the muscle AE effect, while thermal expansion makes almost no contribution to it. In addition, the AE signals of porcine muscle are measured at different focal positions. With the same magnitude order and the same change trend, the experimental result confirms that the simulation result is effective. Both simulation and experimental results validate that tissue AE effect modeling using solid mechanics theory is feasible, which is of significance for the further development of related biomedical imaging techniques.

  6. Toric Methods in F-Theory Model Building

    Directory of Open Access Journals (Sweden)

    Johanna Knapp

    2011-01-01

    We discuss recent constructions of global F-theory GUT models and explain how to make use of toric geometry to do calculations within this framework. After introducing the basic properties of global F-theory GUTs, we give a self-contained review of toric geometry and introduce all the tools that are necessary to construct and analyze global F-theory models. We will explain how to systematically obtain a large class of compact Calabi-Yau fourfolds which can support F-theory GUTs by using the software package PALP.

  7. Dark matter relics and the expansion rate in scalar-tensor theories

    Energy Technology Data Exchange (ETDEWEB)

    Dutta, Bhaskar; Jimenez, Esteban [Mitchell Institute for Fundamental Physics and Astronomy, Department of Physics and Astronomy, Texas A and M University, College Station, TX 77843 (United States); Zavala, Ivonne, E-mail: dutta@physics.tamu.edu, E-mail: este1985@physics.tamu.edu, E-mail: e.i.zavalacarrasco@swansea.ac.uk [Department of Physics, Swansea University, Singleton Park, Swansea, SA2 8PP (United Kingdom)

    2017-06-01

    We study the impact of a modified expansion rate on the dark matter relic abundance in a class of scalar-tensor theories. The scalar-tensor theories we consider are motivated from string theory constructions, which have conformal as well as disformally coupled matter to the scalar. We investigate the effects of such a conformal coupling to the dark matter relic abundance for a wide range of initial conditions, masses and cross-sections. We find that exploiting all possible initial conditions, the annihilation cross-section required to satisfy the dark matter content can differ from the thermal average cross-section in the standard case. We also study the expansion rate in the disformal case and find that physically relevant solutions require a nontrivial relation between the conformal and disformal functions. We study the effects of the disformal coupling in an explicit example where the disformal function is quadratic.

  8. Quantum Link Models and Quantum Simulation of Gauge Theories

    International Nuclear Information System (INIS)

    Wiese, U.J.

    2015-01-01

    This lecture is about Quantum Link Models and Quantum Simulation of Gauge Theories. The lecture consists of four parts. The first part gives a brief history of Computing, and Pioneers of Quantum Computing and Quantum Simulations of Quantum Spin Systems are introduced. The second part is about High-Temperature Superconductors versus QCD, Wilson's Lattice QCD and Abelian Quantum Link Models. The third part deals with Quantum Simulators for Abelian Lattice Gauge Theories and Non-Abelian Quantum Link Models. The last part of the lecture discusses Quantum Simulators mimicking 'Nuclear' physics and the continuum limit of D-theory models. (nowak)

  9. Generalization of the Activated Complex Theory of Reaction Rates. II. Classical Mechanical Treatment

    Science.gov (United States)

    Marcus, R. A.

    1964-01-01

    In its usual classical form activated complex theory assumes a particular expression for the kinetic energy of the reacting system -- one associated with a rectilinear motion along the reaction coordinate. The derivation of the rate expression given in the present paper is based on the general kinetic energy expression.

  10. Explaining transgression in respiratory rate observation methods in the emergency department: A classic grounded theory analysis.

    Science.gov (United States)

    Flenady, Tracy; Dwyer, Trudy; Applegarth, Judith

    2017-09-01

    Abnormal respiratory rates are one of the first indicators of clinical deterioration in emergency department (ED) patients. Despite the importance of respiratory rate observations, this vital sign is often inaccurately recorded on ED observation charts, compromising patient safety. Concurrently, there is a paucity of research reporting why this phenomenon occurs. To develop a substantive theory explaining ED registered nurses' reasoning when they miss or misreport respiratory rate observations. This research project employed a classic grounded theory analysis of qualitative data. Seventy-nine registered nurses currently working in EDs within Australia. Data collected included detailed responses from individual interviews and open-ended responses from an online questionnaire. Classic grounded theory (CGT) research methods were utilised; therefore coding was central to the abstraction of data and its reintegration as theory. Constant comparison, synonymous with CGT methods, was employed to code data. This approach facilitated the identification of the main concern of the participants and aided in the generation of theory explaining how the participants processed this issue. The main concern identified is that ED registered nurses do not believe that collecting an accurate respiratory rate for ALL patients at EVERY round of observations is a requirement, and yet organisational requirements often dictate that a value for the respiratory rate be included each time vital signs are collected. The theory 'Rationalising Transgression' explains how participants continually resolve this problem. The study found that despite feeling professionally conflicted, nurses often erroneously record respiratory rate observations, and then rationalise this behaviour by employing strategies that adjust the significance of the organisational requirement. These strategies include: Compensating, when nurses believe they are compensating for errant behaviour by enhancing the patient's outcome

  11. Testing linear growth rate formulas of non-scale endogenous growth models

    NARCIS (Netherlands)

    Ziesemer, Thomas

    2017-01-01

    Endogenous growth theory has produced formulas for steady-state growth rates of income per capita which are linear in the growth rate of the population. Depending on the details of the models, slopes and intercepts are positive, zero or negative. Empirical tests have taken over the assumption of

  12. Bayes estimation of the general hazard rate model

    International Nuclear Information System (INIS)

    Sarhan, A.

    1999-01-01

    In reliability theory and life testing models, the lifetime distributions are often specified by choosing a relevant hazard rate function. Here a general hazard rate function h(t) = a + bt^(c-1), where c, a, b are constants greater than zero, is considered. The parameter c is assumed to be known. The Bayes estimators of (a,b) based on the data of type II/item-censored testing without replacement are obtained. A large simulation study using the Monte Carlo method is done to compare the performance of Bayes with regression estimators of (a,b). The criterion for comparison is based on the Bayes risk associated with the respective estimator. Also, the influence of the number of failed items on the accuracy of the estimators (Bayes and regression) is investigated. Estimations for the parameters (a,b) of the linearly increasing hazard rate model h(t) = a + bt, where a, b are greater than zero, can be obtained as the special case, letting c=2.
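For concreteness, the hazard above implies the survival function S(t) = exp(−H(t)) with cumulative hazard H(t) = a·t + (b/c)·t^c. A minimal sketch with made-up parameter values (the function names are illustrative, not from the paper); the special case c = 2 recovers the linearly increasing hazard h(t) = a + bt:

```python
import math

def hazard(t, a, b, c):
    """General hazard rate h(t) = a + b * t**(c - 1), with a, b, c > 0."""
    return a + b * t ** (c - 1)

def survival(t, a, b, c):
    """Survival S(t) = exp(-H(t)), where H(t) = a*t + (b/c)*t**c."""
    return math.exp(-(a * t + (b / c) * t ** c))

# Special case c = 2: the linearly increasing hazard h(t) = a + b*t
print(hazard(2.0, 0.5, 0.25, 2))    # 0.5 + 0.25*2 = 1.0
print(survival(0.0, 0.5, 0.25, 2))  # S(0) = 1.0
```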

  13. Macromolecular Rate Theory (MMRT) Provides a Thermodynamics Rationale to Underpin the Convergent Temperature Response in Plant Leaf Respiration

    Science.gov (United States)

    Liang, L. L.; Arcus, V. L.; Heskel, M.; O'Sullivan, O. S.; Weerasinghe, L. K.; Creek, D.; Egerton, J. J. G.; Tjoelker, M. G.; Atkin, O. K.; Schipper, L. A.

    2017-12-01

    Temperature is a crucial factor in determining the rates of ecosystem processes such as leaf respiration (R) - the flux of plant respired carbon dioxide (CO2) from leaves to the atmosphere. Generally, respiration rate increases exponentially with temperature as modelled by the Arrhenius equation, but a recent study (Heskel et al., 2016) showed a universally convergent temperature response of R using an empirical exponential/polynomial model whereby the exponent in the Arrhenius model is replaced by a quadratic function of temperature. The exponential/polynomial model has been used elsewhere to describe shoot respiration and plant respiration. What are the principles that underlie these empirical observations? Here, we demonstrate that macromolecular rate theory (MMRT), based on transition state theory for chemical kinetics, is equivalent to the exponential/polynomial model. We re-analyse the data from Heskel et al. 2016 using MMRT to show this equivalence and thus provide an explanation, based on thermodynamics, for the convergent temperature response of R. Using statistical tools, we also show the equivalent explanatory power of MMRT when compared to the exponential/polynomial model, and the superiority of both of these models over the Arrhenius function. Three meaningful parameters emerge from MMRT analysis: the temperature at which the rate of respiration is maximum (the so-called optimum temperature, Topt), the temperature at which the respiration rate is most sensitive to changes in temperature (the inflection temperature, Tinf) and the overall curvature of the log(rate) versus temperature plot (the so-called change in heat capacity for the system). The latter term originates from the change in heat capacity between an enzyme-substrate complex and an enzyme transition state complex in enzyme-catalysed metabolic reactions. From MMRT, we find the average Topt and Tinf of R are 67.0±1.2 °C and 41.4±0.7 °C across global sites. The average curvature (average
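The exponential/polynomial model referred to above writes ln R(T) as a quadratic in temperature, so Topt is simply where the derivative of the log-rate vanishes. A hedged sketch with invented coefficients (chosen only so the peak lands near the reported mean Topt of about 67 °C; not fitted to the study's data):

```python
# Hypothetical coefficients for the exponential/polynomial model
# ln R(T) = a + b*T + c*T**2, with T in °C and c < 0 (curvature).
a, b, c = -2.0, 0.067, -0.0005

def log_rate(T):
    """Log of respiration rate under the quadratic (exponential/polynomial) model."""
    return a + b * T + c * T ** 2

# The rate is maximal where d(ln R)/dT = b + 2*c*T = 0:
T_opt = -b / (2 * c)
print(T_opt)  # ≈ 67 °C with these illustrative coefficients
```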

  14. Reconstructing constructivism: Causal models, Bayesian learning mechanisms and the theory theory

    OpenAIRE

    Gopnik, Alison; Wellman, Henry M.

    2012-01-01

    We propose a new version of the “theory theory” grounded in the computational framework of probabilistic causal models and Bayesian learning. Probabilistic models allow a constructivist but rigorous and detailed approach to cognitive development. They also explain the learning of both more specific causal hypotheses and more abstract framework theories. We outline the new theoretical ideas, explain the computational framework in an intuitive and non-technical way, and review an extensive but ...

  15. Time-dependent perturbation theory for nonequilibrium lattice models

    International Nuclear Information System (INIS)

    Jensen, I.; Dickman, R.

    1993-01-01

    The authors develop a time-dependent perturbation theory for nonequilibrium interacting particle systems. They focus on models such as the contact process which evolve via destruction and autocatalytic creation of particles. At a critical value of the destruction rate there is a continuous phase transition between an active steady state and the vacuum state, which is absorbing. They present several methods for deriving series for the evolution starting from a single seed particle, including expansions for the ultimate survival probability in the super- and subcritical regions, expansions for the average number of particles in the subcritical region, and short-time expansions. Algorithms for computer generation of the various expansions are presented. Rather long series (24 terms or more) and precise estimates of critical parameters are presented. 45 refs., 4 figs., 9 tabs

  16. The Self-Perception Theory vs. a Dynamic Learning Model

    OpenAIRE

    Swank, Otto H.

    2006-01-01

    Several economists have directed our attention to a finding in the social psychological literature that extrinsic motivation may undermine intrinsic motivation. The self-perception (SP) theory developed by Bem (1972) explains this finding. The crux of this theory is that people remember their past decisions and the extrinsic rewards they received, but they do not recall their intrinsic motives. In this paper I show that the SP theory can be modeled as a variant of a conventional dynamic learn...

  17. A new safety assessment model for shallow land burial of LLW based on multicomponent sorption theory

    International Nuclear Information System (INIS)

    Katoh, N.; Asano, T.; Tasaka, H.

    1984-01-01

    A new model on the radionuclide migration in underground environment is developed based on ''multicomponent sorption theory''. The model is capable of predicting the behaviors of the coexisting materials in the soil-ground water system as ''multicomponent sorption phenomena'' and also predicting the radionuclide migration affected by the changes of concentrations of coexisting materials. The model is not a ''statistical model'' but a ''chemical model'' based on the ''ion exchange theory'' and ''adsorption theory''. Additionally, the model is a ''kinetic model'' capable of estimating the effect of ''rate of sorption'' on the radionuclide migration. The validity of the model was checked by the results of column experiments for sorption. Finally, sample calculations on the radionuclide migration in a reference shallow land burial site were carried out for demonstration

  18. Modeling Equity for Alternative Water Rate Structures

    Science.gov (United States)

    Griffin, R.; Mjelde, J.

    2011-12-01

    The rising popularity of increasing block rates for urban water runs counter to mainstream economic recommendations, yet decision makers in rate design forums are attracted to the notion of higher prices for larger users. Among economists, it is widely appreciated that uniform rates have stronger efficiency properties than increasing block rates, especially when volumetric prices incorporate intrinsic water value. Yet, except for regions where water market purchases have forced urban authorities to include water value in water rates, economic arguments have only weakly penetrated policy. In this presentation, recent evidence will be reviewed regarding long-term trends in urban rate structures while observing economic principles pertaining to these choices. The main objective is to investigate the equity of increasing block rates as contrasted with uniform rates for a representative city. Using data from four Texas cities, household water demand is established as a function of marginal price, income, weather, number of residents, and property characteristics. Two alternative rate proposals are designed on the basis of recent experiences for both water and wastewater rates. After specifying a reasonable number (~200) of diverse households populating the city and parameterizing each household's characteristics, every household's consumption selections are simulated for twelve months. This procedure is repeated for both rate systems. Monthly water and wastewater bills are also computed for each household. Most importantly, while balancing the budget of the city utility, we compute the effect of switching rate structures on the welfares of households of differing types. Some of the empirical findings are as follows. Under conditions of absent water scarcity, households of opposing characters such as low versus high income do not have strong preferences regarding rate structure selection. This changes as water scarcity rises and as water's opportunity costs are allowed to

  19. Theory and model use in social marketing health interventions.

    Science.gov (United States)

    Luca, Nadina Raluca; Suggs, L Suzanne

    2013-01-01

    The existing literature suggests that theories and models can serve as valuable frameworks for the design and evaluation of health interventions. However, evidence on the use of theories and models in social marketing interventions is sparse. The purpose of this systematic review is to identify to what extent papers about social marketing health interventions report using theory, which theories are most commonly used, and how theory was used. A systematic search was conducted for articles that reported social marketing interventions for the prevention or management of cancer, diabetes, heart disease, HIV, STDs, and tobacco use, and behaviors related to reproductive health, physical activity, nutrition, and smoking cessation. Articles were published in English, after 1990, reported an evaluation, and met the six social marketing benchmark criteria (behavior change, consumer research, segmentation and targeting, exchange, competition, and marketing mix). Twenty-four articles, describing 17 interventions, met the inclusion criteria. Of these 17 interventions, 8 reported using theory and 7 stated how it was used. The transtheoretical model/stages of change was used more often than other theories. Findings highlight an ongoing lack of use or underreporting of the use of theory in social marketing campaigns and reinforce the call to action for applying and reporting theory to guide and evaluate interventions.

  20. Measurement Models for Reasoned Action Theory

    OpenAIRE

    Hennessy, Michael; Bleakley, Amy; Fishbein, Martin

    2012-01-01

    Quantitative researchers distinguish between causal and effect indicators. What are the analytic problems when both types of measures are present in a quantitative reasoned action analysis? To answer this question, we use data from a longitudinal study to estimate the association between two constructs central to reasoned action theory: behavioral beliefs and attitudes toward the behavior. The belief items are causal indicators that define a latent variable index while the attitude items are ...

  1. Modeling Routinization in Games: An Information Theory Approach

    DEFF Research Database (Denmark)

    Wallner, Simon; Pichlmair, Martin; Hecher, Michael

    2015-01-01

    Routinization is the result of practicing until an action stops being a goal-directed process. This paper formulates a definition of routinization in games based on prior research in the fields of activity theory and practice theory. Routinization is analyzed using the formal model of discrete-time, discrete-space Markov chains and information theory to measure the actual error between the dynamically trained models and the player interaction. Preliminary research supports the hypothesis that Markov chains can be effectively used to model routinization in games. A full study design is presented...
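
    As a rough illustration of the approach this record describes (not the authors' actual study design), the sketch below trains a first-order discrete-time Markov chain on one action sequence and scores sequences by cross-entropy, i.e. average surprise in bits per action; the action traces and the probability floor `eps` are invented for illustration.

```python
from collections import Counter, defaultdict
import math

def train_markov(seq):
    """Estimate first-order transition probabilities from an action sequence."""
    counts = defaultdict(Counter)
    for a, b in zip(seq, seq[1:]):
        counts[a][b] += 1
    return {a: {b: n / sum(c.values()) for b, n in c.items()}
            for a, c in counts.items()}

def cross_entropy(model, seq, eps=1e-9):
    """Average surprise (bits/action) of the model on a sequence.
    Lower values suggest more routinized (predictable) play."""
    bits = [-math.log2(model.get(a, {}).get(b, eps))
            for a, b in zip(seq, seq[1:])]
    return sum(bits) / len(bits)

# Hypothetical action traces: 'practice' is highly repetitive (routinized).
practice = list("ABABABABABABABAB")
explore  = list("ABCADBCACDABDCBA")

m = train_markov(practice)
print(cross_entropy(m, practice))  # 0.0: fully predictable routine
print(cross_entropy(m, explore))   # much higher: off-routine actions
```

    The gap between the two scores is the kind of "error between the dynamically trained model and the player interaction" the abstract refers to.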

  2. Internal Universes in Models of Homotopy Type Theory

    DEFF Research Database (Denmark)

    Licata, Daniel R.; Orton, Ian; Pitts, Andrew M.

    2018-01-01

    We show that universes of fibrations in various models of homotopy type theory have an essentially global character: they cannot be described in the internal language of the presheaf topos from which the model is constructed. We get around this problem by extending the internal language with a mo... that the interval in cubical sets does indeed have. This leads to a completely internal development of models of homotopy type theory within what we call crisp type theory.

  3. Theory, modeling, and simulation annual report, 1992

    Energy Technology Data Exchange (ETDEWEB)

    1993-05-01

    This report briefly discusses research on the following topics: development of electronic structure methods; modeling molecular processes in clusters; modeling molecular processes in solution; modeling molecular processes in separations chemistry; modeling interfacial molecular processes; modeling molecular processes in the atmosphere; methods for periodic calculations on solids; chemistry and physics of minerals; graphical user interfaces for computational chemistry codes; visualization and analysis of molecular simulations; integrated computational chemistry environment; and benchmark computations.

  4. Theories of conduct disorder: a causal modelling analysis

    NARCIS (Netherlands)

    Krol, N.P.C.M.; Morton, J.; Bruyn, E.E.J. De

    2004-01-01

    Background: If a clinician has to make decisions on diagnosis and treatment, he or she is confronted with a variety of causal theories. In order to compare these theories a neutral terminology and notational system is needed. The Causal Modelling framework involving three levels of description –

  5. Models of Regge behaviour in an asymptotically free theory

    International Nuclear Information System (INIS)

    Polkinghorne, J.C.

    1976-01-01

    Two simple Feynman integral models are presented which reproduce the features expected to be of physical importance in the Regge behaviour of asymptotically free theories. Analysis confirms the result, expected on general grounds, that φ^3 in six dimensions has an essential singularity at l=-1. The extension to gauge theories is discussed. (Auth.)

  6. Theory analysis of the Dental Hygiene Human Needs Conceptual Model.

    Science.gov (United States)

    MacDonald, L; Bowen, D M

    2017-11-01

    Theories provide a structural knowing about concept relationships, practice intricacies, and intuitions and thus shape the distinct body of the profession. Capturing ways of knowing and being is essential to any professions' practice, education and research. This process defines the phenomenon of the profession - its existence or experience. Theory evaluation is a systematic criterion-based assessment of a specific theory. This study presents a theory analysis of the Dental Hygiene Human Needs Conceptual Model (DH HNCM). Using the Walker and Avant Theory Analysis, a seven-step process, the DH HNCM, was analysed and evaluated for its meaningfulness and contribution to dental hygiene. The steps include the following: (i) investigate the origins; (ii) examine relationships of the theory's concepts; (iii) assess the logic of the theory's structure; (iv) consider the usefulness to practice; (v) judge the generalizability; (vi) evaluate the parsimony; and (vii) appraise the testability of the theory. Human needs theory in nursing and Maslow's Hierarchy of Need Theory prompted this theory's development. The DH HNCM depicts four concepts based on the paradigm concepts of the profession: client, health/oral health, environment and dental hygiene actions, and includes validated eleven human needs that evolved overtime to eight. It is logical, simplistic, allows scientific predictions and testing, and provides a unique lens for the dental hygiene practitioner. With this model, dental hygienists have entered practice, knowing they enable clients to meet their human needs. For the DH HNCM, theory analysis affirmed that the model is reasonable and insightful and adds to the dental hygiene professions' epistemology and ontology. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  7. Rate dependent inelastic behavior of polycrystalline solids using a dislocation model

    International Nuclear Information System (INIS)

    Werne, R.W.; Kelly, J.M.

    1980-01-01

    A rate dependent theory of polycrystalline plasticity is presented in which the solid is modeled as an isotropic continuum with internal variables. The rate of plastic deformation is shown to be a function of the deviatoric portion of the Cauchy stress tensor as well as two scalar internal variables. The scalar internal variables, which are the dislocation density and mobile fraction, are governed by rate equations which reflect the evolution of microstructural processes. The model has been incorporated into a two-dimensional finite element code and several example multidimensional problems are presented which exhibit the rate dependence of the material model.

  8. Extended Nambu models: Their relation to gauge theories

    Science.gov (United States)

    Escobar, C. A.; Urrutia, L. F.

    2017-05-01

    Yang-Mills theories supplemented by an additional coordinate constraint, which is solved and substituted in the original Lagrangian, provide examples of the so-called Nambu models, in the case where such constraints arise from spontaneous Lorentz symmetry breaking. Some explicit calculations have shown that, after additional conditions are imposed, Nambu models are capable of reproducing the original gauge theories, thus making Lorentz violation unobservable and allowing the interpretation of the corresponding massless gauge bosons as the Goldstone bosons arising from the spontaneous symmetry breaking. A natural question posed by this approach in the realm of gauge theories is to determine under which conditions the recovery of an arbitrary gauge theory from the corresponding Nambu model, defined by a general constraint over the coordinates, becomes possible. We refer to these theories as extended Nambu models (ENM) and emphasize the fact that the defining coordinate constraint is not treated as a standard gauge fixing term. At this level, the mechanism for generating the constraint is irrelevant and the case of spontaneous Lorentz symmetry breaking is taken only as a motivation, which naturally brings this problem under consideration. Using a nonperturbative Hamiltonian analysis we prove that the ENM yields the original gauge theory after we demand current conservation for all time, together with the imposition of the Gauss law constraints as initial conditions upon the dynamics of the ENM. The Nambu models yielding electrodynamics, Yang-Mills theories and linearized gravity are particular examples of our general approach.

  9. Linear control theory for gene network modeling.

    Science.gov (United States)

    Shin, Yong-Jun; Bleris, Leonidas

    2010-09-16

    Systems biology is an interdisciplinary field that aims at understanding complex interactions in cells. Here we demonstrate that linear control theory can provide valuable insight and practical tools for the characterization of complex biological networks. We provide the foundation for such analyses through the study of several case studies including cascade and parallel forms, feedback and feedforward loops. We reproduce experimental results and provide rational analysis of the observed behavior. We demonstrate that methods such as the transfer function (frequency domain) and linear state-space (time domain) can be used to predict reliably the properties and transient behavior of complex network topologies and point to specific design strategies for synthetic networks.
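
    As a rough illustration of the state-space method this record applies (the specific network and all rate constants below are invented, not taken from the paper), the sketch simulates a two-gene cascade as a linear system dx/dt = Ax + Bu and checks its step response against the analytic steady state:

```python
import numpy as np

# Minimal sketch: a two-gene cascade where gene 1 activates gene 2,
# written as the linear state-space system dx/dt = A x + B u.
# All rate constants are invented for illustration.
k1, k2 = 1.0, 0.8        # production gains
d1, d2 = 0.5, 0.5        # first-order degradation rates

A = np.array([[-d1, 0.0],
              [ k2, -d2]])
B = np.array([[k1],
              [0.0]])

def step_response(A, B, t_end=20.0, dt=0.01):
    """Forward-Euler simulation of the response to a unit step input u = 1."""
    x = np.zeros((A.shape[0], 1))
    for _ in range(int(t_end / dt)):
        x = x + dt * (A @ x + B * 1.0)
    return x.ravel()

x_ss = step_response(A, B)
# The steady state solves A x = -B u, i.e. x1 = k1/d1 and x2 = k2*x1/d2.
print(x_ss)  # approaches [2.0, 3.2]
```

    The same A and B matrices can be fed to a control toolbox to obtain the transfer function (frequency-domain) view the abstract mentions.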

  10. Polling models : from theory to traffic intersections

    NARCIS (Netherlands)

    Boon, M.A.A.

    2011-01-01

    The subject of the present monograph is the study of polling models, which are queueing models consisting of multiple queues, cyclically attended by one server. Polling models originated in the late 1950s, but did not receive much attention until the 1980s when an abundance of new applications arose

  11. Development of a dynamic computational model of social cognitive theory.

    Science.gov (United States)

    Riley, William T; Martin, Cesar A; Rivera, Daniel E; Hekler, Eric B; Adams, Marc A; Buman, Matthew P; Pavel, Misha; King, Abby C

    2016-12-01

    Social cognitive theory (SCT) is among the most influential theories of behavior change and has been used as the conceptual basis of health behavior interventions for smoking cessation, weight management, and other health behaviors. SCT and other behavior theories were developed primarily to explain differences between individuals, but explanatory theories of within-person behavioral variability are increasingly needed as new technologies allow for intensive longitudinal measures and interventions adapted from these inputs. These within-person explanatory theoretical applications can be modeled as dynamical systems. SCT constructs, such as reciprocal determinism, are inherently dynamical in nature, but SCT has not been modeled as a dynamical system. This paper describes the development of a dynamical system model of SCT using fluid analogies and control systems principles drawn from engineering. Simulations of this model were performed to assess if the model performed as predicted based on theory and empirical studies of SCT. This initial model generates precise and testable quantitative predictions for future intensive longitudinal research. Dynamic modeling approaches provide a rigorous method for advancing health behavior theory development and refinement and for guiding the development of more potent and efficient interventions.
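
    To make the fluid-analogy idea concrete (this is a one-construct toy, not the authors' full SCT model; the time constant, gain, and input signal are all invented), the sketch below treats "self-efficacy" as a reservoir that integrates an inflow of mastery experiences and drains with a first-order time constant:

```python
import numpy as np

tau, gain = 5.0, 1.2     # time constant (days) and input gain (invented)
days = 60
mastery = np.zeros(days)
mastery[10:40] = 1.0     # a 30-day intervention delivering daily experiences

self_efficacy = np.zeros(days)
for t in range(1, days):
    # first-order "fluid" dynamics: d(level)/dt = (gain*inflow - level)/tau
    self_efficacy[t] = self_efficacy[t-1] + \
        (gain * mastery[t] - self_efficacy[t-1]) / tau

print(round(self_efficacy[39], 3))  # near the plateau gain*1.0
print(round(self_efficacy[-1], 3))  # decays after the intervention ends
```

    Reciprocal determinism would couple several such reservoirs so that each level feeds back into the others' inflows.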

  12. Contribution to the study of conformal theories and integrable models

    International Nuclear Information System (INIS)

    Sochen, N.

    1992-05-01

    The purpose of this thesis is the study of 2-D physics. The main tool is conformal field theory with Kac-Moody and W algebras. This theory describes 2-D models that have translation, rotation and dilatation symmetries at their critical point. Extended conformal theories describe models with a larger symmetry than conformal symmetry. After a review of conformal theory methods, the author carries out a detailed study of the form of singular vectors in the sl(2) affine algebra. With this important form, correlation functions can be calculated. The classical W algebra is studied and the relations between the classical and quantum W algebras are specified. The bosonization method is presented and the sl(2)/sl(2) topological model is studied. The bosonization of the partition function of different models is described. A program for classifying rational theories is described, linking rational conformal theories to integrable spin models, and interesting relations between the Boltzmann weights of different models have been found. With these relations, the integrability of models is proved by direct calculation of their Boltzmann weights.

  13. Three level constraints on conformal field theories and string models

    International Nuclear Information System (INIS)

    Lewellen, D.C.

    1989-05-01

    Simple tree level constraints for conformal field theories which follow from the requirement of crossing symmetry of four-point amplitudes are presented, and their utility for probing general properties of string models is briefly illustrated and discussed. 9 refs

  14. Model Uncertainty and Exchange Rate Forecasting

    NARCIS (Netherlands)

    R.R.P. Kouwenberg (Roy); A. Markiewicz (Agnieszka); R. Verhoeks (Ralph); R.C.J. Zwinkels (Remco)

    2013-01-01

    textabstractWe propose a theoretical framework of exchange rate behavior where investors focus on a subset of economic fundamentals. We find that any adjustment in the set of predictors used by investors leads to changes in the relation between the exchange rate and fundamentals. We test the

  15. Nematic elastomers: from a microscopic model to macroscopic elasticity theory.

    Science.gov (United States)

    Xing, Xiangjun; Pfahl, Stephan; Mukhopadhyay, Swagatam; Goldbart, Paul M; Zippelius, Annette

    2008-05-01

    A Landau theory is constructed for the gelation transition in cross-linked polymer systems possessing spontaneous nematic ordering, based on symmetry principles and the concept of an order parameter for the amorphous solid state. This theory is substantiated with help of a simple microscopic model of cross-linked dimers. Minimization of the Landau free energy in the presence of nematic order yields the neoclassical theory of the elasticity of nematic elastomers and, in the isotropic limit, the classical theory of isotropic elasticity. These phenomenological theories of elasticity are thereby derived from a microscopic model, and it is furthermore demonstrated that they are universal mean-field descriptions of the elasticity for all chemical gels and vulcanized media.

  16. Soliton excitations in a class of nonlinear field theory models

    International Nuclear Information System (INIS)

    Makhan'kov, V.G.; Fedyanin, V.K.

    1985-01-01

    Investigation results are described for a class of nonlinear field-theory models defined by a Lagrangian. The class includes models both with a stable zero vacuum (ε = 1) and with a condensate (ε = -1, broken symmetry). Conditions for the existence of particle-like solutions (PLS) and the stability of these solutions are investigated. Soliton dynamics is studied, and PLS form factors are calculated. A statistical mechanics of solitons is constructed and their dynamic structure factors are calculated.

  17. Two-matrix models and c =1 string theory

    International Nuclear Information System (INIS)

    Bonora, L.; Xiong Chuansheng

    1994-05-01

    We show that the most general two-matrix model with bilinear coupling underlies c = 1 string theory. More precisely, we prove that the W_{1+∞} constraints, a subset of the correlation functions and the integrable hierarchy characterizing such a two-matrix model correspond exactly to the W_{1+∞} constraints, the discrete tachyon correlation functions and the integrable hierarchy of the c = 1 string theory. (orig.)

  18. Planar N = 4 gauge theory and the Hubbard model

    International Nuclear Information System (INIS)

    Rej, Adam; Serban, Didina; Staudacher, Matthias

    2006-01-01

    Recently it was established that a certain integrable long-range spin chain describes the dilatation operator of N = 4 gauge theory in the su(2) sector to at least three-loop order, while exhibiting BMN scaling to all orders in perturbation theory. Here we identify this spin chain as an approximation to an integrable short-ranged model of strongly correlated electrons: The Hubbard model

  19. Hidden Fermi liquid, scattering rate saturation, and Nernst effect: a dynamical mean-field theory perspective.

    Science.gov (United States)

    Xu, Wenhu; Haule, Kristjan; Kotliar, Gabriel

    2013-07-19

    We investigate the transport properties of a correlated metal within dynamical mean-field theory. Canonical Fermi liquid behavior emerges only below a very low temperature scale T(FL). Surprisingly the quasiparticle scattering rate follows a quadratic temperature dependence up to much higher temperatures and crosses over to saturated behavior around a temperature scale T(sat). We identify these quasiparticles as constituents of the hidden Fermi liquid. The non-Fermi-liquid transport above T(FL), in particular the linear-in-T resistivity, is shown to be a result of a strongly temperature dependent band dispersion. We derive simple expressions for the resistivity, Hall angle, thermoelectric power and Nernst coefficient in terms of a temperature dependent renormalized band structure and the quasiparticle scattering rate. We discuss possible tests of the dynamical mean-field theory picture of transport using ac measurements.

  20. Scattering and short-distance properties in field theory models

    International Nuclear Information System (INIS)

    Iagolnitzer, D.

    1987-01-01

    The aim of constructive field theory is not only to define models but also to establish their general properties of physical interest. We here review recent works on scattering and on short-distance properties for weakly coupled theories with a mass gap, such as typically P(φ) in dimension 2, φ^4 in dimension 3 and the (renormalizable, asymptotically free) massive Gross-Neveu (GN) model in dimension 2. Many of the ideas would apply similarly to other (possibly non-renormalizable) theories that might be defined in a similar way via phase-space analysis.

  1. The monster sporadic group and a theory underlying superstring models

    International Nuclear Information System (INIS)

    Chapline, G.

    1996-09-01

    The pattern of duality symmetries acting on the states of compactified superstring models reinforces an earlier suggestion that the Monster sporadic group is a hidden symmetry for superstring models. This in turn points to a supersymmetric theory of self-dual and anti-self-dual K3 manifolds joined by Dirac strings and evolving in a 13 dimensional spacetime as the fundamental theory. In addition to the usual graviton and dilaton this theory contains matter-like degrees of freedom resembling the massless states of the heterotic string, thus providing a completely geometric interpretation for ordinary matter. 25 refs

  2. Consumer preference models: fuzzy theory approach

    Science.gov (United States)

    Turksen, I. B.; Wilson, I. A.

    1993-12-01

    Consumer preference models are widely used in new product design, marketing management, pricing and market segmentation. The purpose of this article is to develop and test a fuzzy set preference model which can represent linguistic variables in individual-level models implemented in parallel with existing conjoint models. The potential improvements in market share prediction and predictive validity can substantially improve management decisions about what to make (product design), for whom to make it (market segmentation) and how much to make (market share prediction).

  3. Narrative theories as computational models: reader-oriented theory and artificial intelligence

    Energy Technology Data Exchange (ETDEWEB)

    Galloway, P.

    1983-12-01

    In view of the rapid development of reader-oriented theory and its interest in dynamic models of narrative, the author speculates in a serious way about what such models might look like in computational terms. Researchers in artificial intelligence (AI) have already begun to develop models of story understanding as the emphasis in AI research has shifted toward natural language understanding and as AI has allied itself with cognitive psychology and linguistics to become cognitive science. Research in AI and in narrative theory share many common interests and problems, and both studies might benefit from an exchange of ideas. 11 references.

  4. Eye growth and myopia development: Unifying theory and Matlab model.

    Science.gov (United States)

    Hung, George K; Mahadas, Kausalendra; Mohammad, Faisal

    2016-03-01

    The aim of this article is to present an updated unifying theory of the mechanisms underlying eye growth and myopia development. A series of model simulation programs were developed to illustrate the mechanism of eye growth regulation and myopia development. Two fundamental processes are presumed to govern the relationship between physiological optics and eye growth: genetically pre-programmed signaling and blur feedback. Cornea/lens is considered to have only a genetically pre-programmed component, whereas eye growth is considered to have both a genetically pre-programmed and a blur feedback component. Moreover, based on the Incremental Retinal-Defocus Theory (IRDT), the rate of change of blur size provides the direction for blur-driven regulation. The various factors affecting eye growth are shown in 5 simulations: (1 - unregulated eye growth): blur feedback is rendered ineffective, as in the case of form deprivation, so there is only genetically pre-programmed eye growth, generally resulting in myopia; (2 - regulated eye growth): blur feedback regulation demonstrates the emmetropization process, with abnormally excessive or reduced eye growth leading to myopia and hyperopia, respectively; (3 - repeated near-far viewing): simulation of large-to-small change in blur size as seen in the accommodative stimulus/response function, and via IRDT as well as nearwork-induced transient myopia (NITM), leading to the development of myopia; (4 - neurochemical bulk flow and diffusion): release of dopamine from the inner plexiform layer of the retina, and the subsequent diffusion and relay of neurochemical cascade show that a decrease in dopamine results in a reduction of proteoglycan synthesis rate, which leads to myopia; (5 - Simulink model): model of genetically pre-programmed signaling and blur feedback components that allows for different input functions to simulate experimental manipulations that result in hyperopia, emmetropia, and myopia. These model simulation programs ...
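
    A minimal sketch of the regulated-growth case (simulation 2 above): blur feedback drives axial growth toward the focal plane set by the cornea/lens, on top of a genetically pre-programmed growth term. The gains, step count, and starting values are invented for illustration, not taken from the authors' Matlab/Simulink model.

```python
axial = 22.0       # axial length (mm), starts hyperopic (eye too short)
FOCAL = 24.0       # focal length fixed by cornea/lens (mm)
G_GENETIC = 0.05   # genetically pre-programmed growth per step (mm)
K_BLUR = 0.4       # gain of the blur-feedback component

for _ in range(200):
    defocus = FOCAL - axial                          # hyperopic blur speeds growth
    growth = max(0.0, G_GENETIC + K_BLUR * defocus)  # the eye cannot shrink
    axial += growth

print(f"final axial length: {axial:.3f} mm")
```

    The loop settles slightly past the focal plane (at FOCAL + G_GENETIC/K_BLUR), illustrating how a pre-programmed growth term biases the emmetropized eye toward myopia.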

  5. A Dynamic Systems Theory Model of Visual Perception Development

    Science.gov (United States)

    Coté, Carol A.

    2015-01-01

    This article presents a model for understanding the development of visual perception from a dynamic systems theory perspective. It contrasts to a hierarchical or reductionist model that is often found in the occupational therapy literature. In this proposed model vision and ocular motor abilities are not foundational to perception, they are seen…

  6. Membrane models and generalized Z2 gauge theories

    International Nuclear Information System (INIS)

    Lowe, M.J.; Wallace, D.J.

    1980-01-01

    We consider models of (d-n)-dimensional membranes fluctuating in a d-dimensional space under the action of surface tension. We investigate the renormalization properties of these models perturbatively and in a 1/n expansion. The potential relationships of these models to generalized Z2 gauge theories are indicated. (orig.)

  7. Theories and Frameworks for Online Education: Seeking an Integrated Model

    Science.gov (United States)

    Picciano, Anthony G.

    2017-01-01

    This article examines theoretical frameworks and models that focus on the pedagogical aspects of online education. After a review of learning theory as applied to online education, a proposal for an integrated "Multimodal Model for Online Education" is provided based on pedagogical purpose. The model attempts to integrate the work of…

  8. Generalization of the Activated Complex Theory of Reaction Rates. I. Quantum Mechanical Treatment

    Science.gov (United States)

    Marcus, R. A.

    1964-01-01

    In its usual form activated complex theory assumes a quasi-equilibrium between reactants and activated complex, a separable reaction coordinate, a Cartesian reaction coordinate, and an absence of interaction of rotation with internal motion in the complex. In the present paper a rate expression is derived without introducing the Cartesian assumption. The expression bears a formal resemblance to the usual one and reduces to it when the added assumptions of the latter are introduced.
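
    For reference, the conventional activated-complex (transition-state) rate expression that Marcus's paper generalizes can be evaluated numerically; the sketch below uses the standard form k = (kB·T/h)·exp(-ΔG‡/RT) with an illustrative barrier height (the 80 kJ/mol value is invented, not from the paper):

```python
import math

kB = 1.380649e-23    # Boltzmann constant, J/K
h  = 6.62607015e-34  # Planck constant, J*s
R  = 8.314462618     # gas constant, J/(mol*K)

def tst_rate(dG_act, T=298.15):
    """First-order rate constant (1/s) from conventional transition-state
    theory, given an activation free energy in J/mol."""
    return (kB * T / h) * math.exp(-dG_act / (R * T))

# Illustrative barrier of 80 kJ/mol at room temperature.
print(f"{tst_rate(80e3):.3e} /s")
```

    The prefactor kB·T/h is about 6.2e12 /s at 298 K; the exponential carries all the barrier dependence that the quasi-equilibrium assumption introduces.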

  9. Linear control theory for gene network modeling.

    Directory of Open Access Journals (Sweden)

    Yong-Jun Shin

    Full Text Available Systems biology is an interdisciplinary field that aims at understanding complex interactions in cells. Here we demonstrate that linear control theory can provide valuable insight and practical tools for the characterization of complex biological networks. We provide the foundation for such analyses through the study of several case studies including cascade and parallel forms, feedback and feedforward loops. We reproduce experimental results and provide rational analysis of the observed behavior. We demonstrate that methods such as the transfer function (frequency domain) and linear state-space (time domain) can be used to predict reliably the properties and transient behavior of complex network topologies and point to specific design strategies for synthetic networks.

  10. Measurement Models for Reasoned Action Theory.

    Science.gov (United States)

    Hennessy, Michael; Bleakley, Amy; Fishbein, Martin

    2012-03-01

    Quantitative researchers distinguish between causal and effect indicators. What are the analytic problems when both types of measures are present in a quantitative reasoned action analysis? To answer this question, we use data from a longitudinal study to estimate the association between two constructs central to reasoned action theory: behavioral beliefs and attitudes toward the behavior. The belief items are causal indicators that define a latent variable index while the attitude items are effect indicators that reflect the operation of a latent variable scale. We identify the issues when effect and causal indicators are present in a single analysis and conclude that both types of indicators can be incorporated in the analysis of data based on the reasoned action approach.

  11. Modeling acquaintance networks based on balance theory

    Directory of Open Access Journals (Sweden)

    Vukašinović Vida

    2014-09-01

    Full Text Available An acquaintance network is a social structure made up of a set of actors and the ties between them. These ties change dynamically as a consequence of incessant interactions between the actors. In this paper we introduce a social network model called the Interaction-Based (IB) model that involves well-known sociological principles. The connections between the actors and the strength of the connections are influenced by the continuous positive and negative interactions between the actors and, vice versa, the future interactions are more likely to happen between the actors that are connected with stronger ties. The model is also inspired by the social behavior of animal species, particularly that of ants in their colony. A model evaluation showed that the IB model turned out to be sparse. The model has a small diameter and an average path length that grows in proportion to the logarithm of the number of vertices. The clustering coefficient is relatively high, and its value stabilizes in larger networks. The degree distributions are slightly right-skewed. In the mature phase of the IB model, i.e., when the number of edges does not change significantly, most of the network properties do not change significantly either. The IB model was found to be the best of all the compared models in simulating the e-mail URV (University Rovira i Virgili of Tarragona) network because the properties of the IB model more closely matched those of the e-mail URV network than the other models.
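
    A rough sketch of the interaction-based mechanism described above (not the paper's exact IB model; the population size, probabilities, and update rule are invented): positive interactions strengthen ties, negative ones weaken them, and stronger ties make future interactions between that pair more likely.

```python
import random

random.seed(1)
N = 30
w = {}  # tie strengths, keyed by unordered actor pair

def pick_pair():
    """Mostly reinforce an existing tie; sometimes meet a stranger."""
    strong = [(p, s) for p, s in w.items() if s > 0]
    if strong and random.random() < 0.7:
        pairs, weights = zip(*strong)
        return random.choices(pairs, weights=weights)[0]
    return tuple(sorted(random.sample(range(N), 2)))

for _ in range(2000):
    pair = pick_pair()
    delta = 1 if random.random() < 0.8 else -1  # positive or negative interaction
    w[pair] = max(0, w.get(pair, 0) + delta)

edges = [p for p, s in w.items() if s > 0]
print(len(edges), "ties among", N, "actors")  # a sparse network
```

    Because reinforcement concentrates interactions on existing ties, the resulting network stays sparse, echoing the evaluation result reported in the abstract.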

  12. Modeling in applied sciences a kinetic theory approach

    CERN Document Server

    Pulvirenti, Mario

    2000-01-01

    Modeling complex biological, chemical, and physical systems, in the context of spatially heterogeneous media, is a challenging task for scientists and engineers using traditional methods of analysis. Modeling in Applied Sciences is a comprehensive survey of modeling large systems using kinetic equations, and in particular the Boltzmann equation and its generalizations. An interdisciplinary group of leading authorities carefully develop the foundations of kinetic models and discuss the connections and interactions between model theories, qualitative and computational analysis and real-world applications. This book provides a thoroughly accessible and lucid overview of the different aspects, models, computations, and methodology for the kinetic-theory modeling process. Topics and Features: * Integrated modeling perspective utilized in all chapters * Fluid dynamics of reacting gases * Self-contained introduction to kinetic models * Becker–Döring equations * Nonlinear kinetic models with chemical reactions * Kinet...

  13. Baldrige Theory into Practice: A Generic Model

    Science.gov (United States)

    Arif, Mohammed

    2007-01-01

    Purpose: The education system globally has moved from a push-based or producer-centric system to a pull-based or customer centric system. Malcolm Baldrige Quality Award (MBQA) model happens to be one of the latest additions to the pull based models. The purpose of this paper is to develop a generic framework for MBQA that can be used by…

  14. Optimal transportation networks models and theory

    CERN Document Server

    Bernot, Marc; Morel, Jean-Michel

    2009-01-01

    The transportation problem can be formalized as the problem of finding the optimal way to transport a given measure into another with the same mass. In contrast to the Monge-Kantorovitch problem, recent approaches model the branched structure of such supply networks as minima of an energy functional whose essential feature is to favour wide roads. Such a branched structure is observable in ground transportation networks, in draining and irrigation systems, in electrical power supply systems and in natural counterparts such as blood vessels or the branches of trees. These lectures provide mathematical proof of several existence, structure and regularity properties empirically observed in transportation networks. The link with previous discrete physical models of irrigation and erosion models in geomorphology and with discrete telecommunication and transportation models is discussed. It will be mathematically proven that the majority fit in the simple model sketched in this volume.

  15. Derivation of the chemical-equilibrium rate coefficient using scattering theory

    Science.gov (United States)

    Mickens, R. E.

    1977-01-01

    Scattering theory is applied to derive the equilibrium rate coefficient for a general homogeneous chemical reaction involving ideal gases. The reaction rate is expressed in terms of the product of a number of normalized momentum distribution functions, the product of the number of molecules with a given internal energy state, and the spin-averaged T-matrix elements. An expression for momentum distribution at equilibrium for an arbitrary molecule is presented, and the number of molecules with a given internal-energy state is represented by an expression which includes the partition function.

  16. The Relevance of Using Mathematical Models in Macroeconomic Policies Theory

    Directory of Open Access Journals (Sweden)

    Nora Mihail

    2006-11-01

    Full Text Available The article presents an overview of the principal mathematical models – starting with the work of Theil, Hansen and Tinbergen – and their results as used in the analysis and design of macroeconomic policies. In the modeling field, developments are rapid, both in the theoretical treatment of the many problems of macroeconomic policy and in the practical elaboration of different policy models. The article points out the problems of static and dynamic theory used in macro-policy modeling.

  17. The Relevance of Using Mathematical Models in Macroeconomic Policies Theory

    Directory of Open Access Journals (Sweden)

    Nora Mihail

    2006-09-01

    Full Text Available The article presents an overview of the principal mathematical models – starting with the work of Theil, Hansen and Tinbergen – and of their results as used to analyse and design macroeconomic policies. In the modeling field, changes are rapid, both in the theoretical treatment of the many problems of macroeconomic policy and in the practical elaboration of different policy models. The article points out the problems of static and dynamic theory used in macro-policy modeling.

  18. Fire and Heat Spreading Model Based on Cellular Automata Theory

    Science.gov (United States)

    Samartsev, A. A.; Rezchikov, A. F.; Kushnikov, V. A.; Ivashchenko, V. A.; Bogomolov, A. S.; Filimonyuk, L. Yu; Dolinina, O. N.; Kushnikov, O. V.; Shulga, T. E.; Tverdokhlebov, V. A.; Fominykh, D. S.

    2018-05-01

    The distinctive feature of the proposed fire and heat spreading model in premises is the reduction of the computational complexity due to the use of the theory of cellular automata with probability rules of behavior. The possibilities and prospects of using this model in practice are noted. The proposed model has a simple mechanism of integration with agent-based evacuation models. The joint use of these models could improve floor plans and reduce the time of evacuation from premises during fires.

  19. Matrix model as a mirror of Chern-Simons theory

    International Nuclear Information System (INIS)

    Aganagic, Mina; Klemm, Albrecht; Marino, Marcos; Vafa, Cumrun

    2004-01-01

    Using mirror symmetry, we show that Chern-Simons theory on certain manifolds such as lens spaces reduces to a novel class of Hermitian matrix models, where the measure is that of unitary matrix models. We show that this agrees with the more conventional canonical quantization of Chern-Simons theory. Moreover, large N dualities in this context lead to computation of all genus A-model topological amplitudes on toric Calabi-Yau manifolds in terms of matrix integrals. In the context of type IIA superstring compactifications on these Calabi-Yau manifolds with wrapped D6 branes (which are dual to M-theory on G2 manifolds) this leads to engineering and solving F-terms for N=1 supersymmetric gauge theories with superpotentials involving certain multi-trace operators. (author)

  20. Mixed models theory and applications with R

    CERN Document Server

    Demidenko, Eugene

    2013-01-01

    Mixed modeling is one of the most promising and exciting areas of statistical analysis, enabling the analysis of nontraditional, clustered data that may come in the form of shapes or images. This book provides in-depth mathematical coverage of mixed models' statistical properties and numerical algorithms, as well as applications such as the analysis of tumor regrowth, shape, and image. The new edition includes significant updating, over 300 exercises, stimulating chapter projects and model simulations, inclusion of R subroutines, and a revised text format. The target audience continues to be g

  1. Nonconvex Model of Material Growth: Mathematical Theory

    Science.gov (United States)

    Ganghoffer, J. F.; Plotnikov, P. I.; Sokolowski, J.

    2018-06-01

    The model of volumetric material growth is introduced in the framework of finite elasticity. The new results obtained for the model are presented with complete proofs. The state variables include the deformations, temperature and the growth factor matrix function. The existence of global in time solutions for the quasistatic deformations boundary value problem coupled with the energy balance and the evolution of the growth factor is shown. The mathematical results can be applied to a wide class of growth models in mechanics and biology.

  2. Computing decay rates for new physics theories with FEYNRULES and MADGRAPH 5_AMC@NLO

    Science.gov (United States)

    Alwall, Johan; Duhr, Claude; Fuks, Benjamin; Mattelaer, Olivier; Öztürk, Deniz Gizem; Shen, Chia-Hsien

    2015-12-01

    We present new features of the FEYNRULES and MADGRAPH 5_AMC@NLO programs for the automatic computation of decay widths that consistently include channels of arbitrary final-state multiplicity. The implementations are generic enough so that they can be used in the framework of any quantum field theory, possibly including higher-dimensional operators. We extend at the same time the conventions of the Universal FEYNRULES Output (or UFO) format to include decay tables and information on the total widths. We finally provide a set of representative examples of the usage of the new functions of the different codes in the framework of the Standard Model, the Higgs Effective Field Theory, the Strongly Interacting Light Higgs model and the Minimal Supersymmetric Standard Model and compare the results to available literature and programs for validation purposes.

  3. Solid mechanics theory, modeling, and problems

    CERN Document Server

    Bertram, Albrecht

    2015-01-01

    This textbook offers an introduction to modeling the mechanical behavior of solids within continuum mechanics and thermodynamics. To illustrate the fundamental principles, the book starts with an overview of the most important models in one dimension. Tensor calculus, which is called for in three-dimensional modeling, is concisely presented in the second part of the book. Once the reader is equipped with these essential mathematical tools, the third part of the book develops the foundations of continuum mechanics right from the beginning. Lastly, the book’s fourth part focuses on modeling the mechanics of materials and in particular elasticity, viscoelasticity and plasticity. Intended as an introductory textbook for students and for professionals interested in self-study, it also features numerous worked-out examples to aid in understanding.

  4. Modeling workplace bullying using catastrophe theory.

    Science.gov (United States)

    Escartin, J; Ceja, L; Navarro, J; Zapf, D

    2013-10-01

    Workplace bullying is defined as negative behaviors directed at organizational members or their work context that occur regularly and repeatedly over a period of time. Employees' perceptions of psychosocial safety climate, workplace bullying victimization, and workplace bullying perpetration were assessed within a sample of nearly 5,000 workers. Linear and nonlinear approaches were applied in order to model both continuous and sudden changes in workplace bullying. More specifically, the present study examines whether a nonlinear dynamical systems model (i.e., a cusp catastrophe model) is superior to the linear combination of variables for predicting the effect of psychosocial safety climate and workplace bullying victimization on workplace bullying perpetration. According to the AICc and BIC indices, the linear regression model fits the data better than the cusp catastrophe model. The study concludes that some phenomena, especially unhealthy behaviors at work (like workplace bullying), may be better studied using linear approaches as opposed to nonlinear dynamical systems models. This can be explained through the healthy variability hypothesis, which argues that positive organizational behavior is likely to present nonlinear behavior, while a decrease in such variability may indicate the occurrence of negative behaviors at work.
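    The model comparison above turns on information criteria. The following sketch (synthetic data and made-up coefficients, not the study's sample) shows how AICc and BIC would be computed for the ordinary-least-squares side of such a comparison.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for the study's variables (illustrative only):
# psychosocial safety climate (psc), victimization (vict), perpetration (perp).
n = 500
psc = rng.normal(size=n)
vict = rng.normal(size=n)
perp = 0.4 * vict - 0.3 * psc + rng.normal(scale=0.5, size=n)

# Ordinary least squares: perp ~ psc + vict
X = np.column_stack([np.ones(n), psc, vict])
beta, *_ = np.linalg.lstsq(X, perp, rcond=None)
resid = perp - X @ beta
k = X.shape[1] + 1                                           # incl. error variance
ll = -0.5 * n * (np.log(2 * np.pi * np.mean(resid**2)) + 1)  # max log-likelihood

aic = 2 * k - 2 * ll
aicc = aic + 2 * k * (k + 1) / (n - k - 1)   # small-sample correction
bic = k * np.log(n) - 2 * ll
print(round(aicc, 2), round(bic, 2))
```

Lower values indicate better fit after penalizing parameters; the study applied the same logic to compare the linear model against the cusp catastrophe model.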

  5. Further Results on Dynamic Additive Hazard Rate Model

    Directory of Open Access Journals (Sweden)

    Zhengcheng Zhang

    2014-01-01

    Full Text Available In the past, the proportional and additive hazard rate models have been investigated in the literature. Nanda and Das (2011) introduced and studied the dynamic proportional (reversed) hazard rate model. In this paper we study the dynamic additive hazard rate model and investigate its aging properties for different aging classes. The closure of the model under some stochastic orders has also been investigated. Some examples are also given to illustrate different aging properties and stochastic comparisons of the model.
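    To make the additive structure concrete: under an additive hazard model the total hazard is the sum of a baseline and an additive component, so the survival function factors into a product. A small numerical sketch (with arbitrary illustrative hazards, not taken from the paper):

```python
import numpy as np

# Additive hazard rate: lambda(t) = lambda0(t) + a(t)
# Survival: S(t) = exp(-int_0^t lambda(u) du) = S0(t) * Sa(t)
t = np.linspace(0.0, 5.0, 5001)
lam0 = 0.2 * np.ones_like(t)   # constant baseline hazard (assumed)
add = 0.1 * t                  # linearly increasing additive term (assumed)

def survival(hazard, t):
    """Survival function from a hazard curve via cumulative trapezoids."""
    dt = t[1] - t[0]
    cumhaz = np.concatenate([[0.0],
                             np.cumsum(0.5 * (hazard[1:] + hazard[:-1]) * dt)])
    return np.exp(-cumhaz)

S_total = survival(lam0 + add, t)
S_factored = survival(lam0, t) * survival(add, t)
print(np.allclose(S_total, S_factored))   # additive hazards factorize: True
```

This factorization is what makes aging properties of the combined model tractable in terms of its components.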

  6. On a Corporate Bond Pricing Model with Credit Rating Migration Risks and Stochastic Interest Rate

    Directory of Open Access Journals (Sweden)

    Jin Liang

    2017-10-01

    Full Text Available In this paper we study a corporate bond-pricing model with credit rating migration and a stochastic interest rate. The volatility of the bond price in the model strongly depends on potential credit rating migration and stochastic change of the interest rate. This new model improves the previous existing models in which the interest rate is considered to be a constant. The existence, uniqueness and regularity of the solution for the model are established. Moreover, some properties including the smoothness of the free boundary are obtained. Furthermore, some numerical computations are presented to illustrate the theoretical results.

  7. Latency-Rate servers & Dataflow models

    NARCIS (Netherlands)

    Wiggers, M.H.; Bekooij, Marco; Bekooij, Marco Jan Gerrit

    2006-01-01

    In the signal processing domain, dataflow graphs [2] [10] and their associated analysis techniques are a well-accepted modeling paradigm. The vertices of a dataflow graph represent functionality and are called actors, while the edges model which actors communicate with each other. Traditionally,

  8. Spatial interaction models facility location using game theory

    CERN Document Server

    D'Amato, Egidio; Pardalos, Panos

    2017-01-01

    Facility location theory develops the idea of locating one or more facilities by optimizing suitable criteria such as minimizing transportation cost, or capturing the largest market share. The contributions in this book focus on an approach to facility location theory through game theoretical tools highlighting situations where a location decision is faced by several decision makers and leading to a game theoretical framework in non-cooperative and cooperative methods. Models and methods regarding the facility location via game theory are explored and applications are illustrated through economics, engineering, and physics. Mathematicians, engineers, economists and computer scientists working in theory, applications and computational aspects of facility location problems using game theory will find this book useful.

  9. MONETARY MODELS AND EXCHANGE RATE DETERMINATION ...

    African Journals Online (AJOL)

    Purchasing Power Parity [PPP], based on the law of one price, asserts that the change in the exchange rate between .... exchange in international economic transactions has made it vitally evident that the management of ... One lesson from this episode is to ...

  10. Electrorheological fluids modeling and mathematical theory

    CERN Document Server

    Růžička, Michael

    2000-01-01

    This is the first book to present a model, based on rational mechanics of electrorheological fluids, that takes into account the complex interactions between the electromagnetic fields and the moving liquid. Several constitutive relations for the Cauchy stress tensor are discussed. The main part of the book is devoted to a mathematical investigation of a model possessing shear-dependent viscosities, proving the existence and uniqueness of weak and strong solutions for the steady and the unsteady case. The PDS systems investigated possess so-called non-standard growth conditions. Existence results for elliptic systems with non-standard growth conditions and with a nontrivial nonlinear r.h.s. and the first ever results for parabolic systems with a non-standard growth conditions are given for the first time. Written for advanced graduate students, as well as for researchers in the field, the discussion of both the modeling and the mathematics is self-contained.

  11. Density functional theory and multiscale materials modeling

    Indian Academy of Sciences (India)

    One of the vital ingredients in the theoretical tools useful in materials modeling at all the length scales of interest is the concept of density. In the microscopic length scale, it is the electron density that has played a major role in providing a deeper understanding of chemical binding in atoms, molecules and solids.

  12. Characteristics of highly rated leadership in nursing homes using item response theory.

    Science.gov (United States)

    Backman, Annica; Sjögren, Karin; Lindkvist, Marie; Lövheim, Hugo; Edvardsson, David

    2017-12-01

    To identify characteristics of highly rated leadership in nursing homes. An ageing population entails fundamental social, economic and organizational challenges for future aged care. Knowledge is limited of both specific leadership behaviours and organizational and managerial characteristics which have an impact on the leadership of contemporary nursing home care. Cross-sectional. From 290 municipalities, 60 were randomly selected and 35 agreed to participate, providing a sample of 3605 direct-care staff employed in 169 Swedish nursing homes. The staff assessed their managers' (n = 191) leadership behaviours using the Leadership Behaviour Questionnaire. Data were collected from November 2013 - September 2014, and the study was completed in November 2016. A two-parameter item response theory approach and regression analyses were used to identify specific characteristics of highly rated leadership. Five specific behaviours of highly rated nursing home leadership were identified; that the manager: experiments with new ideas; controls work closely; relies on subordinates; coaches and gives direct feedback; and handles conflicts constructively. The regression analyses revealed that managers with social work backgrounds and privately run homes were significantly associated with higher leadership ratings. This study highlights the five most important leadership behaviours that characterize those nursing home managers rated highest in terms of leadership. Managers in privately run nursing homes and managers with social work backgrounds were associated with higher leadership ratings. Further work is needed to explore these behaviours and factors predictive of higher leadership ratings. © 2017 John Wiley & Sons Ltd.
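    Under the two-parameter item response approach used in the study, the probability of endorsing a leadership behaviour depends on a discrimination parameter a and a location (difficulty) parameter b on the latent leadership scale. A generic sketch of the two-parameter logistic model (parameter values here are illustrative, not the study's estimates):

```python
import math

def p_2pl(theta, a, b):
    """Two-parameter logistic IRT model: P(endorsement | latent trait theta)."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# Illustrative parameters: a high discrimination a makes the item
# separate respondents sharply around its location b.
a, b = 2.0, 0.5
print(round(p_2pl(b, a, b), 3))    # P = 0.5 exactly at theta = b
print(round(p_2pl(1.5, a, b), 3))  # higher latent leadership -> higher P
```

Items with high discrimination at high trait levels are the ones that characterize highly rated leadership, which is the logic behind the five behaviours identified.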

  13. Toda theories, W-algebras, and minimal models

    International Nuclear Information System (INIS)

    Mansfield, P.; Spence, B.

    1991-01-01

    We discuss the classical W-algebra symmetries of Toda field theories in terms of the pseudo-differential Lax operator associated with the Toda Lax pair. We then show how the W-algebra transformations can be understood as the non-abelian gauge transformations which preserve the form of the Lax pair. This provides a new understanding of the W-algebras, and we discuss their closure and co-cycle structure using this approach. The quantum Lax operator is investigated, and we show that this operator, which generates the quantum W-algebra currents, is conserved in the conformally extended Toda theories. The W-algebra minimal model primary fields are shown to arise naturally in these theories, leading to the conjecture that the conformally extended Toda theories provide a lagrangian formulation of the W-algebra minimal models. (orig.)

  14. Automated Physico-Chemical Cell Model Development through Information Theory

    Energy Technology Data Exchange (ETDEWEB)

    Peter J. Ortoleva

    2005-11-29

    The objective of this project was to develop predictive models of the chemical responses of microbial cells to variations in their surroundings. The application of these models is the optimization of environmental remediation and energy-producing biotechnical processes. The principles on which our project is based are as follows: chemical thermodynamics and kinetics; automation of calibration through information theory; integration of multiplex data (e.g. cDNA microarrays, NMR, proteomics), cell modeling, and bifurcation theory to overcome cellular complexity; and the use of multiplex data and information theory to calibrate and run an incomplete model. In this report we review four papers summarizing key findings and a web-enabled, multiple-module workflow we have implemented that consists of a set of interoperable systems biology computational modules.

  15. Computational hemodynamics theory, modelling and applications

    CERN Document Server

    Tu, Jiyuan; Wong, Kelvin Kian Loong

    2015-01-01

    This book discusses geometric and mathematical models that can be used to study fluid and structural mechanics in the cardiovascular system.  Where traditional research methodologies in the human cardiovascular system are challenging due to its invasive nature, several recent advances in medical imaging and computational fluid and solid mechanics modelling now provide new and exciting research opportunities. This emerging field of study is multi-disciplinary, involving numerical methods, computational science, fluid and structural mechanics, and biomedical engineering. Certainly any new student or researcher in this field may feel overwhelmed by the wide range of disciplines that need to be understood. This unique book is one of the first to bring together knowledge from multiple disciplines, providing a starting point to each of the individual disciplines involved, attempting to ease the steep learning curve. This book presents elementary knowledge on the physiology of the cardiovascular system; basic knowl...

  16. Fuzzy Stochastic Optimization Theory, Models and Applications

    CERN Document Server

    Wang, Shuming

    2012-01-01

    Covering in detail both theoretical and practical perspectives, this book is a self-contained and systematic depiction of current fuzzy stochastic optimization that deploys the fuzzy random variable as a core mathematical tool to model the integrated fuzzy random uncertainty. It proceeds in an orderly fashion from the requisite theoretical aspects of the fuzzy random variable to fuzzy stochastic optimization models and their real-life case studies.   The volume reflects the fact that randomness and fuzziness (or vagueness) are two major sources of uncertainty in the real world, with significant implications in a number of settings. In industrial engineering, management and economics, the chances are high that decision makers will be confronted with information that is simultaneously probabilistically uncertain and fuzzily imprecise, and optimization in the form of a decision must be made in an environment that is doubly uncertain, characterized by a co-occurrence of randomness and fuzziness. This book begins...

  17. Nonlinear model predictive control theory and algorithms

    CERN Document Server

    Grüne, Lars

    2017-01-01

    This book offers readers a thorough and rigorous introduction to nonlinear model predictive control (NMPC) for discrete-time and sampled-data systems. NMPC schemes with and without stabilizing terminal constraints are detailed, and intuitive examples illustrate the performance of different NMPC variants. NMPC is interpreted as an approximation of infinite-horizon optimal control so that important properties like closed-loop stability, inverse optimality and suboptimality can be derived in a uniform manner. These results are complemented by discussions of feasibility and robustness. An introduction to nonlinear optimal control algorithms yields essential insights into how the nonlinear optimization routine—the core of any nonlinear model predictive controller—works. Accompanying software in MATLAB® and C++ (downloadable from extras.springer.com/), together with an explanatory appendix in the book itself, enables readers to perform computer experiments exploring the possibilities and limitations of NMPC. T...

  18. An Ar threesome: Matrix models, 2d conformal field theories, and 4d N=2 gauge theories

    International Nuclear Information System (INIS)

    Schiappa, Ricardo; Wyllard, Niclas

    2010-01-01

    We explore the connections between three classes of theories: A_r quiver matrix models, d=2 conformal A_r Toda field theories, and d=4 N=2 supersymmetric conformal A_r quiver gauge theories. In particular, we analyze the quiver matrix models recently introduced by Dijkgraaf and Vafa (unpublished) and make detailed comparisons with the corresponding quantities in the Toda field theories and the N=2 quiver gauge theories. We also make a speculative proposal for how the matrix models should be modified in order for them to reproduce the instanton partition functions in quiver gauge theories in five dimensions.

  19. Exchange rate misalignment, capital accumulation and income distribution: Theory and evidence from the case of Brazil

    Directory of Open Access Journals (Sweden)

    Oreiro José Luis

    2013-01-01

    Full Text Available This article analyzes the relationship between economic growth, income distribution and real exchange rate within the neo-Kaleckian literature, through the construction of a nonlinear macrodynamic model for an open economy in which investment in fixed capital is assumed to be a quadratic function of the real exchange rate. The model demonstrates that the prevailing regime of accumulation in a given economy depends on the type of currency misalignment, so if the real exchange rate is overvalued, then the regime of accumulation will be profit-led, but if the exchange rate is undervalued, then the accumulation regime is wage-led. Subsequently, the adherence of the theoretical model to data is tested for Brazil in the period 1994/Q3-2008/Q4. The econometric results are consistent with the theoretical non-linear specification of the investment function used in the model, so that we can define the existence of a real exchange rate that maximizes the rate of capital accumulation for the Brazilian economy. From the estimate of this optimal rate we show that the real exchange rate is overvalued in 1994/Q3-2001/Q1 and 2005/Q4-2008/Q4 and undervalued in the period 2001/Q2-2005/Q3. As a direct corollary of this result, it follows that the prevailing regime of accumulation in the Brazilian economy after the last quarter of 2005 is profit-led.
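    The quadratic investment function at the heart of the model implies an accumulation-maximizing real exchange rate at the vertex of the parabola. A minimal sketch (coefficients are made up for illustration, not the paper's Brazilian estimates):

```python
import numpy as np

# Investment as a quadratic function of the real exchange rate e:
#   g(e) = alpha + beta*e - gamma*e**2, with gamma > 0,
# which is maximized at e* = beta / (2*gamma).
alpha, beta, gamma = 0.02, 0.08, 0.04   # illustrative coefficients only

def g(e):
    return alpha + beta * e - gamma * e**2

e_star = beta / (2.0 * gamma)           # accumulation-maximizing exchange rate
grid = np.linspace(0.0, 3.0, 3001)
print(e_star, round(g(e_star), 4))
# The regime of accumulation differs on either side of e*: deviations in
# either direction (over- or undervaluation) lower the accumulation rate.
```

Estimating such an e* from data is what lets the paper classify sub-periods of the Brazilian economy as over- or undervalued.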

  20. Lenses on reading an introduction to theories and models

    CERN Document Server

    Tracey, Diane H

    2017-01-01

    Widely adopted as an ideal introduction to the major models of reading, this text guides students to understand and facilitate children's literacy development. Coverage encompasses the full range of theories that have informed reading instruction and research, from classical thinking to cutting-edge cognitive, social learning, physiological, and affective perspectives. Readers learn how theory shapes instructional decision making and how to critically evaluate the assumptions and beliefs that underlie their own teaching. Pedagogical features include framing and discussion questions, learning a

  1. Perturbation theory instead of large scale shell model calculations

    International Nuclear Information System (INIS)

    Feldmeier, H.; Mankos, P.

    1977-01-01

    Results of large scale shell model calculations for (sd)-shell nuclei are compared with perturbation theory. It is found that perturbation theory provides an excellent approximation when the SU(3) basis is used as a starting point. The results indicate that a perturbation theory treatment in an SU(3) basis including 2ℏω excitations should be preferable to a full diagonalization within the (sd)-shell. (orig.) [de]

  2. Scaling theory of depinning in the Sneppen model

    International Nuclear Information System (INIS)

    Maslov, S.; Paczuski, M.

    1994-01-01

    We develop a scaling theory for the critical depinning behavior of the Sneppen interface model [Phys. Rev. Lett. 69, 3539 (1992)]. This theory is based on a ''gap'' equation that describes the self-organization process to a critical state of the depinning transition. All of the critical exponents can be expressed in terms of two independent exponents, ν_∥(d) and ν_⊥(d), characterizing the divergence of the parallel and perpendicular correlation lengths as the interface approaches its dynamical attractor.
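    The "gap" that drives the self-organization can be illustrated with a minimal extremal-dynamics simulation. The sketch below uses the closely related Bak-Sneppen update rule as a stand-in (an assumption for illustration, not the interface model itself): at each step the smallest random barrier and its neighbours are refreshed, and the gap, the largest minimum barrier seen so far, ratchets upward toward a critical threshold.

```python
import numpy as np

rng = np.random.default_rng(42)
L = 200                          # number of sites
f = rng.random(L)                # initial random barriers in [0, 1)

gap = 0.0
gap_history = []
for step in range(20000):
    i = int(np.argmin(f))        # extremal site: smallest barrier
    gap = max(gap, f[i])         # the gap can only grow
    gap_history.append(gap)
    # refresh the extremal site and its nearest neighbours (periodic chain)
    for j in (i - 1, i, (i + 1) % L):
        f[j] = rng.random()

print(round(gap, 2))
```

As the gap approaches the critical threshold, avalanches of activity below it grow without bound, which is the self-organized critical state the gap equation describes.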

  3. Thermodynamic Models from Fluctuation Solution Theory Analysis of Molecular Simulations

    DEFF Research Database (Denmark)

    Christensen, Steen; Peters, Günther H.j.; Hansen, Flemming Yssing

    2007-01-01

    Fluctuation solution theory (FST) is employed to analyze results of molecular dynamics (MD) simulations of liquid mixtures. The objective is to generate parameters for macroscopic GE-models, here the modified Margules model. We present a strategy for choosing the number of parameters included...

  4. The Use of Modelling for Theory Building in Qualitative Analysis

    Science.gov (United States)

    Briggs, Ann R. J.

    2007-01-01

    The purpose of this article is to exemplify and enhance the place of modelling as a qualitative process in educational research. Modelling is widely used in quantitative research as a tool for analysis, theory building and prediction. Statistical data lend themselves to graphical representation of values, interrelationships and operational…

  5. Goodness-of-Fit Assessment of Item Response Theory Models

    Science.gov (United States)

    Maydeu-Olivares, Alberto

    2013-01-01

    The article provides an overview of goodness-of-fit assessment methods for item response theory (IRT) models. It is now possible to obtain accurate "p"-values of the overall fit of the model if bivariate information statistics are used. Several alternative approaches are described. As the validity of inferences drawn on the fitted model…

  6. Mean field theory for a balanced hypercolumn model of orientation selectivity in primary visual cortex

    DEFF Research Database (Denmark)

    Lerchner, Alexander; Sterner, G.; Hertz, J.

    2006-01-01

    We present a complete mean field theory for a balanced state of a simple model of an orientation hypercolumn, with a numerical procedure for solving the mean-field equations quantitatively. With our treatment, one can determine self-consistently both the firing rates and the firing correlations...

  7. Analysis of family-wise error rates in statistical parametric mapping using random field theory.

    Science.gov (United States)

    Flandin, Guillaume; Friston, Karl J

    2017-11-01

    This technical report revisits the analysis of family-wise error rates in statistical parametric mapping, using random field theory, reported in (Eklund et al. []: arXiv 1511.01863). Contrary to the understandable spin that these sorts of analyses attract, a review of their results suggests that they endorse the use of parametric assumptions, and random field theory, in the analysis of functional neuroimaging data. We briefly rehearse the advantages parametric analyses offer over nonparametric alternatives and then unpack the implications of (Eklund et al. []: arXiv 1511.01863) for parametric procedures. Hum Brain Mapp, 2017. © 2017 The Authors. Human Brain Mapping published by Wiley Periodicals, Inc.
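    As a generic illustration of what family-wise error control means (a simulation sketch with independent tests, deliberately simpler than the correlated-field setting random field theory handles), the code below estimates the family-wise error rate of uncorrected versus Bonferroni-corrected thresholds over many simulated null data sets.

```python
import numpy as np

rng = np.random.default_rng(1)
m = 1000          # tests per simulated null data set
reps = 2000       # simulated data sets
alpha = 0.05

fwe_uncorr = 0
fwe_bonf = 0
for _ in range(reps):
    p = rng.random(m)                  # under the null, p-values are uniform
    fwe_uncorr += p.min() < alpha      # any test significant, uncorrected
    fwe_bonf += p.min() < alpha / m    # Bonferroni-corrected threshold

print(fwe_uncorr / reps, fwe_bonf / reps)
```

Uncorrected, essentially every null data set yields at least one false positive, while the correction keeps the family-wise rate near alpha; random field theory achieves the same control less conservatively by exploiting the smoothness of the statistical map.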

  8. Optimal velocity difference model for a car-following theory

    International Nuclear Information System (INIS)

    Peng, G.H.; Cai, X.H.; Liu, C.Q.; Cao, B.F.; Tuo, M.X.

    2011-01-01

    In this Letter, we present a new optimal velocity difference model (OVDM) for a car-following theory based on the full velocity difference model. The linear stability condition of the new model is obtained by using linear stability theory. The unrealistically high deceleration does not appear in the OVDM. Numerical simulation of traffic dynamics shows that the new model can avoid the disadvantage of negative velocity that occurs at small sensitivity coefficient λ in the full velocity difference model by adjusting the coefficient of the optimal velocity difference, so that collisions can disappear in the improved model. -- Highlights: → A new optimal velocity difference car-following model is proposed. → The effects of the optimal velocity difference on the stability of traffic flow have been explored. → The starting and braking processes were carried out through simulation. → The effects of the optimal velocity difference can avoid the disadvantage of negative velocity.
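    To see the role of the velocity-difference term in this family of models, here is a small Euler-integration sketch of a full-velocity-difference-type car-following model (the tanh optimal-velocity function and all parameter values are illustrative assumptions, not those of the Letter):

```python
import math

def V(dx):
    """Optimal velocity as a function of headway dx (illustrative tanh form)."""
    return 0.5 * (math.tanh(dx - 2.0) + math.tanh(2.0))

# dv_n/dt = a * (V(dx_n) - v_n) + lam * (v_{n-1} - v_n)
a, lam, dt = 1.0, 0.5, 0.01
N = 5                                      # cars behind a leader
x = [float(-2 * i) for i in range(N + 1)]  # positions, leader first, headway 2
v = [V(2.0)] + [0.0] * N                   # leader cruises; followers at rest

for _ in range(5000):
    dv = [0.0] * (N + 1)
    for n in range(1, N + 1):
        dx = x[n - 1] - x[n]
        dv[n] = a * (V(dx) - v[n]) + lam * (v[n - 1] - v[n])
    for n in range(N + 1):
        v[n] = max(0.0, v[n] + dv[n] * dt)  # clamp: no negative velocities
        x[n] += v[n] * dt

print(round(v[-1], 3))   # last car's speed after the transient
```

With the velocity-difference term (lam > 0) the platoon relaxes smoothly to the leader's speed; setting lam = 0 recovers the plain optimal velocity model, which is less stable for the same sensitivity a.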

  9. Advances in cognitive theory and therapy: the generic cognitive model.

    Science.gov (United States)

    Beck, Aaron T; Haigh, Emily A P

    2014-01-01

    For over 50 years, Beck's cognitive model has provided an evidence-based way to conceptualize and treat psychological disorders. The generic cognitive model represents a set of common principles that can be applied across the spectrum of psychological disorders. The updated theoretical model provides a framework for addressing significant questions regarding the phenomenology of disorders not explained in previous iterations of the original model. New additions to the theory include continuity of adaptive and maladaptive function, dual information processing, energizing of schemas, and attentional focus. The model includes a theory of modes, an organization of schemas relevant to expectancies, self-evaluations, rules, and memories. A description of the new theoretical model is followed by a presentation of the corresponding applied model, which provides a template for conceptualizing a specific disorder and formulating a case. The focus on beliefs differentiates disorders and provides a target for treatment. A variety of interventions are described.

  10. Modeling and Predicting the EUR/USD Exchange Rate: The Role of Nonlinear Adjustments to Purchasing Power Parity

    OpenAIRE

    Jesús Crespo Cuaresma; Anna Orthofer

    2010-01-01

    Reliable medium-term forecasts are essential for forward-looking monetary policy decision-making. Traditionally, predictions of the exchange rate tend to be linked to the equilibrium concept implied by the purchasing power parity (PPP) theory. In particular, the traditional benchmark for exchange rate models is based on a linear adjustment of the exchange rate to the level implied by PPP. In the presence of aggregation effects, transaction costs or uncertainty, however, economic theory predicts...

  11. A Range-Based Multivariate Model for Exchange Rate Volatility

    NARCIS (Netherlands)

    B. Tims (Ben); R.J. Mahieu (Ronald)

    2003-01-01

    In this paper we present a parsimonious multivariate model for exchange rate volatilities based on logarithmic high-low ranges of daily exchange rates. The multivariate stochastic volatility model divides the log range of each exchange rate into two independent latent factors, which are
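    The log high-low range connects to volatility through the classical Parkinson estimator: for a driftless diffusion, the squared log range divided by 4 ln 2 estimates the daily variance. A minimal sketch (the highs and lows are synthetic, purely illustrative):

```python
import numpy as np

def parkinson_vol(high, low):
    """Parkinson daily volatility estimate from high-low ranges."""
    log_range = np.log(np.asarray(high) / np.asarray(low))
    return float(np.sqrt(np.mean(log_range**2) / (4.0 * np.log(2.0))))

# Synthetic intraday extremes for an exchange rate (illustrative numbers)
high = [1.1050, 1.1080, 1.1020, 1.1100, 1.1065]
low  = [1.0980, 1.1010, 1.0960, 1.1005, 1.1000]
sigma = parkinson_vol(high, low)
print(round(sigma, 4))   # daily volatility on the log scale
```

Range-based estimators of this kind are far less noisy than squared close-to-close returns, which is what makes the log range an attractive observable for a multivariate volatility model.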

  12. Universal Rate Model Selector: A Method to Quickly Find the Best-Fit Kinetic Rate Model for an Experimental Rate Profile

    Science.gov (United States)

    2017-08-01

    Kinetic rate models range from pure chemical reactions to mass transfer, including the Avrami and intraparticle diffusion rate equations, to name a few. A single fitting algorithm (kinetic rate model) for a reaction does not suit every experimental rate profile; in the report's worked example, the rate model that best fits the experimental data is a first-order or homogeneous catalytic reaction.
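    As a hedged sketch of what such a rate-model selector does at its core (illustrative code, not the report's algorithm), a candidate first-order rate law C(t) = C0*exp(-k*t) can be fitted to a rate profile by linear regression on log concentrations; the best-fitting candidate across several such laws would then be selected by its residual error.

```python
import numpy as np

# Synthetic first-order decay data: C(t) = C0 * exp(-k*t) (illustrative)
t = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
C = 2.0 * np.exp(-0.7 * t)

# Linearize: ln C = ln C0 - k*t, then fit by least squares.
A = np.column_stack([np.ones_like(t), t])
coef, *_ = np.linalg.lstsq(A, np.log(C), rcond=None)
C0_fit, k_fit = float(np.exp(coef[0])), float(-coef[1])
print(round(C0_fit, 3), round(k_fit, 3))   # -> 2.0 0.7
```

Repeating the fit for each candidate rate equation and comparing residuals is the essence of a "universal rate model selector".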

  13. Applications of generalizability theory and their relations to classical test theory and structural equation modeling.

    Science.gov (United States)

    Vispoel, Walter P; Morris, Carrie A; Kilinc, Murat

    2018-03-01

    Although widely recognized as a comprehensive framework for representing score reliability, generalizability theory (G-theory), despite its potential benefits, has been used sparingly in reporting of results for measures of individual differences. In this article, we highlight many valuable ways that G-theory can be used to quantify, evaluate, and improve psychometric properties of scores. Our illustrations encompass assessment of overall reliability, percentages of score variation accounted for by individual sources of measurement error, dependability of cut-scores for decision making, estimation of reliability and dependability for changes made to measurement procedures, disattenuation of validity coefficients for measurement error, and linkages of G-theory with classical test theory and structural equation modeling. We also identify computer packages for performing G-theory analyses, most of which can be obtained free of charge, and describe how they compare with regard to data input requirements, ease of use, complexity of designs supported, and output produced. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
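    One of the uses listed above, estimating dependability when the measurement procedure changes, can be sketched for the simplest one-facet persons-by-raters design. The variance components below are illustrative assumptions, not estimates from any real instrument.

```python
# Generalizability coefficient for a one-facet persons x raters (p x r) design:
#   E(rho^2) = var_p / (var_p + var_pr_error / n_raters)
var_p = 0.50          # universe-score (person) variance (assumed)
var_pr_error = 0.30   # rater-by-person interaction + residual error (assumed)

def g_coefficient(n_raters):
    """Generalizability coefficient when scores are averaged over n raters."""
    return var_p / (var_p + var_pr_error / n_raters)

for n in (1, 2, 4):
    print(n, round(g_coefficient(n), 3))
# Averaging over more raters shrinks the error variance, raising dependability,
# which is exactly the kind of "what if we changed the procedure" question
# G-theory answers and classical test theory cannot.
```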

  14. USE OF ROUGH SETS AND SPECTRAL DATA FOR BUILDING PREDICTIVE MODELS OF REACTION RATE CONSTANTS

    Science.gov (United States)

    A model for predicting the log of the rate constants for alkaline hydrolysis of organic esters has been developed with the use of gas-phase mid-infrared library spectra and a rule-building software system based on the mathematical theory of rough sets. A diverse set of 41 esters ...

  15. Putting "Organizations" into an Organization Theory Course: A Hybrid CAO Model for Teaching Organization Theory

    Science.gov (United States)

    Hannah, David R.; Venkatachary, Ranga

    2010-01-01

    In this article, the authors present a retrospective analysis of an instructor's multiyear redesign of a course on organization theory into what is called a hybrid Classroom-as-Organization model. It is suggested that this new course design served to apprentice students to function in quasi-real organizational structures. The authors further argue…

  16. Reconstructing constructivism: causal models, Bayesian learning mechanisms, and the theory theory.

    Science.gov (United States)

    Gopnik, Alison; Wellman, Henry M

    2012-11-01

    We propose a new version of the "theory theory" grounded in the computational framework of probabilistic causal models and Bayesian learning. Probabilistic models allow a constructivist but rigorous and detailed approach to cognitive development. They also explain the learning of both more specific causal hypotheses and more abstract framework theories. We outline the new theoretical ideas, explain the computational framework in an intuitive and nontechnical way, and review an extensive but relatively recent body of empirical results that supports these ideas. These include new studies of the mechanisms of learning. Children infer causal structure from statistical information, through their own actions on the world and through observations of the actions of others. Studies demonstrate these learning mechanisms in children from 16 months to 4 years old and include research on causal statistical learning, informal experimentation through play, and imitation and informal pedagogy. They also include studies of the variability and progressive character of intuitive theory change, particularly theory of mind. These studies investigate both the physical and the psychological and social domains. We conclude with suggestions for further collaborative projects between developmental and computational cognitive scientists.
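    The Bayesian learning mechanism at the heart of the "theory theory" can be sketched in a few lines. The scenario and numbers below are illustrative, not from the article:

```python
# Illustrative sketch: Bayesian updating over two candidate causal
# hypotheses, H1 "the object activates the machine" vs H0 "no causal
# link", given repeated observations.

def posterior(prior_h1, likelihood_h1, likelihood_h0):
    """Bayes' rule for a two-hypothesis comparison."""
    p1 = prior_h1 * likelihood_h1
    p0 = (1 - prior_h1) * likelihood_h0
    return p1 / (p1 + p0)

# Data: machine lit 3 times in 3 trials with the object present.
# Under H1 it lights with prob 0.9 per trial; under H0 only 0.1 (noise).
like_h1, like_h0 = 0.9 ** 3, 0.1 ** 3
print(round(posterior(0.5, like_h1, like_h0), 4))  # → 0.9986
```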

  17. M-Theory Model-Building and Proton Stability

    CERN Document Server

    Ellis, Jonathan Richard; Nanopoulos, Dimitri V; Ellis, John; Faraggi, Alon E.

    1998-01-01

    We study the problem of baryon stability in M theory, starting from realistic four-dimensional string models constructed using the free-fermion formulation of the weakly-coupled heterotic string. Suitable variants of these models manifest an enhanced custodial gauge symmetry that forbids to all orders the appearance of dangerous dimension-five baryon-decay operators. We exhibit the underlying geometric (bosonic) interpretation of these models, which have a $Z_2 \times Z_2$ orbifold structure similar, but not identical, to the class of Calabi-Yau threefold compactifications of M and F theory investigated by Voisin and Borcea. A related generalization of their work may provide a solution to the problem of proton stability in M theory.

  18. M-theory model-building and proton stability

    International Nuclear Information System (INIS)

    Ellis, J.; Faraggi, A.E.; Nanopoulos, D.V.; Houston Advanced Research Center, The Woodlands, TX; Academy of Athens

    1997-09-01

    The authors study the problem of baryon stability in M theory, starting from realistic four-dimensional string models constructed using the free-fermion formulation of the weakly-coupled heterotic string. Suitable variants of these models manifest an enhanced custodial gauge symmetry that forbids to all orders the appearance of dangerous dimension-five baryon-decay operators. The authors exhibit the underlying geometric (bosonic) interpretation of these models, which have a Z_2 x Z_2 orbifold structure similar, but not identical, to the class of Calabi-Yau threefold compactifications of M and F theory investigated by Voisin and Borcea. A related generalization of their work may provide a solution to the problem of proton stability in M theory.

  19. Variational transition state theory for multidimensional activated rate processes in the presence of anisotropic friction

    Science.gov (United States)

    Berezhkovskii, Alexander M.; Frishman, Anatoli M.; Pollak, Eli

    1994-09-01

    Variational transition state theory (VTST) is applied to the study of the activated escape of a particle trapped in a multidimensional potential well and coupled to a heat bath. Special attention is given to the dependence of the rate constant on the friction coefficients in the case of anisotropic friction. It is demonstrated explicitly that both the traditional as well as the nontraditional scenarios for the particle escape are recovered uniformly within the framework of VTST. Effects such as saddle point avoidance and friction dependence of the activation energy are derived from VTST using optimized planar dividing surfaces.
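    The friction dependence of the escape rate discussed above can be illustrated with the standard Kramers result (a hedged illustration of the qualitative behavior, not the VTST derivation in the abstract; all parameter values are arbitrary):

```python
import math

# Kramers' intermediate-to-high friction escape rate: in the
# spatial-diffusion regime, increasing friction suppresses escape.

def kramers_rate(omega_well, omega_barrier, gamma, barrier_height, kT):
    prefactor = math.sqrt(omega_barrier ** 2 + gamma ** 2 / 4) - gamma / 2
    return (omega_well / (2 * math.pi)) * (prefactor / omega_barrier) \
        * math.exp(-barrier_height / kT)

low_friction = kramers_rate(1.0, 1.0, 0.1, 5.0, 1.0)
high_friction = kramers_rate(1.0, 1.0, 10.0, 5.0, 1.0)
print(low_friction > high_friction)  # True: friction suppresses escape
```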

  20. Algebraic computability and enumeration models recursion theory and descriptive complexity

    CERN Document Server

    Nourani, Cyrus F

    2016-01-01

    This book, Algebraic Computability and Enumeration Models: Recursion Theory and Descriptive Complexity, presents new techniques with functorial models to address important areas on pure mathematics and computability theory from the algebraic viewpoint. The reader is first introduced to categories and functorial models, with Kleene algebra examples for languages. Functorial models for Peano arithmetic are described toward important computational complexity areas on a Hilbert program, leading to computability with initial models. Infinite language categories are also introduced to explain descriptive complexity with recursive computability with admissible sets and urelements. Algebraic and categorical realizability is staged on several levels, addressing new computability questions with omitting types realizably. Further applications to computing with ultrafilters on sets and Turing degree computability are examined. Functorial models computability is presented with algebraic trees realizing intuitionistic type...

  1. Cross sectional efficient estimation of stochastic volatility short rate models

    NARCIS (Netherlands)

    Danilov, Dmitri; Mandal, Pranab K.

    2001-01-01

    We consider the problem of estimation of term structure of interest rates. Filtering theory approach is very natural here with the underlying setup being non-linear and non-Gaussian. Earlier works make use of Extended Kalman Filter (EKF). However, as indicated by de Jong (2000), the EKF in this

  2. Cross sectional efficient estimation of stochastic volatility short rate models

    NARCIS (Netherlands)

    Danilov, Dmitri; Mandal, Pranab K.

    2002-01-01

    We consider the problem of estimation of term structure of interest rates. Filtering theory approach is very natural here with the underlying setup being non-linear and non-Gaussian. Earlier works make use of Extended Kalman Filter (EKF). However, the EKF in this situation leads to inconsistent

  3. Theory to practice: the humanbecoming leading-following model.

    Science.gov (United States)

    Ursel, Karen L

    2015-01-01

    Guided by the humanbecoming leading-following model, the author designed a nursing theories course with the intention of creating a meaningful nursing theory to practice link. The author perceived that with the implementation of Situation-Background-Assessment-Recommendations (SBAR) communication, nursing staff had drifted away from using the Kardex™ in shift-to-shift reporting. Nursing students, faculty, and staff members supported the creation of a theories project which would engage nursing students in the pursuit of clinical excellence. The project chosen was to revise the existing Kardex™ (predominant nursing communication tool). In the project, guided by a nursing theory, nursing students focused on the unique patient's experience, depicting the specific role of nursing knowledge and the contributions of the registered nurse to the patient's healthcare journey. The emphasis of this theoretical learning was the application of a nursing theory to real-life clinical challenges with communication of relevant, timely, and accurate patient information, recognizing that real problems are often complex and require multi-perspective approaches. This project created learning opportunities where a nursing theory would be chosen by the nursing student clinical group and applied in their clinical specialty area. This practice activity served to broaden student understandings of the role of nursing knowledge and nursing theories in their professional practice. © The Author(s) 2014.

  4. Theory of Time beyond the standard model

    International Nuclear Information System (INIS)

    Poliakov, Eugene S.

    2008-01-01

    A frame of non-uniform time is discussed. A concept of 'flow of time' is presented. The principle of time relativity, in analogy with the Galilean principle of relativity, is set. An equivalence principle is set to state that the outcome of non-uniform time in an inertial frame of reference is equivalent to the outcome of a fictitious gravity force external to the frame of reference. Thus it is the flow of time that causes gravity rather than mass. The latter is compared to experimental data, achieving precision of up to 0.0003%. It is shown that the law of energy conservation is inapplicable to frames of non-uniform time. A theoretical model of a physical entity (point mass, photon) travelling in the field of non-uniform time is considered. A generalized law that allows the flow of time to replace classical energy conservation is introduced on the basis of the experiment of Pound and Rebka. It is shown that linear dependence of the flow of time on the spatial coordinate conforms to the inverse square law of universal gravitation and Keplerian mechanics. Momentum is shown to still be conserved.

  5. Standard Model theory calculations and experimental tests

    International Nuclear Information System (INIS)

    Cacciari, M.; Hamel de Monchenault, G.

    2015-01-01

    To present knowledge, all the physics at the Large Hadron Collider (LHC) can be described in the framework of the Standard Model (SM) of particle physics. Indeed the newly discovered Higgs boson with a mass close to 125 GeV seems to confirm the predictions of the SM. Thus, besides looking for direct manifestations of the physics beyond the SM, one of the primary missions of the LHC is to perform ever more stringent tests of the SM. This requires not only improved theoretical developments to produce testable predictions and provide experiments with reliable event generators, but also sophisticated analysis techniques to overcome the formidable experimental environment of the LHC and perform precision measurements. In the first section, we describe the state of the art of the theoretical tools and event generators that are used to provide predictions for the production cross sections of the processes of interest. In section 2, inclusive cross section measurements with jets, leptons and vector bosons are presented. Examples of differential cross sections, charge asymmetries and the study of lepton pairs are proposed in section 3. Finally, in section 4, we report studies on the multiple production of gauge bosons and constraints on anomalous gauge couplings

  6. Monetary models and exchange rate determination: The Nigerian ...

    African Journals Online (AJOL)

    Monetary models and exchange rate determination: The Nigerian evidence. ... income levels and real interest rate differentials provide better forecasts of the ... partner can expect to suffer depreciation in the external value of her currency.

  7. Models with oscillator terms in noncommutative quantum field theory

    International Nuclear Information System (INIS)

    Kronberger, E.

    2010-01-01

    The main focus of this Ph.D. thesis is on noncommutative models involving oscillator terms in the action. The first one historically is the successful Grosse-Wulkenhaar (G.W.) model which has already been proven to be renormalizable to all orders of perturbation theory. Remarkably it is furthermore capable of solving the Landau ghost problem. In a first step, we have generalized the G.W. model to gauge theories in a very straightforward way, where the action is BRS invariant and exhibits the good damping properties of the scalar theory by using the same propagator, the so-called Mehler kernel. To be able to handle some more involved one-loop graphs we have programmed a powerful Mathematica package, which is capable of analytically computing Feynman graphs with many terms. The result of those investigations is that new terms originally not present in the action arise, which led us to the conclusion that we should better start from a theory where those terms are already built in. Fortunately there is an action containing this complete set of terms. It can be obtained by coupling a gauge field to the scalar field of the G.W. model, integrating out the latter, and thus 'inducing' a gauge theory. Hence the model is called Induced Gauge Theory. Despite the advantage that it is by construction completely gauge invariant, it contains also some unphysical terms linear in the gauge field. Advantageously we could get rid of these terms using a special gauge dedicated to this purpose. Within this gauge we could again establish the Mehler kernel as gauge field propagator. Furthermore we were able to calculate the ghost propagator, which turned out to be very involved. Thus we were able to start with the first few loop computations showing the expected behavior. The next step is to show renormalizability of the model, where some hints towards this direction will also be given. (author)

  8. Implications of Information Theory for Computational Modeling of Schizophrenia.

    Science.gov (United States)

    Silverstein, Steven M; Wibral, Michael; Phillips, William A

    2017-10-01

    Information theory provides a formal framework within which information processing and its disorders can be described. However, information theory has rarely been applied to modeling aspects of the cognitive neuroscience of schizophrenia. The goal of this article is to highlight the benefits of an approach based on information theory, including its recent extensions, for understanding several disrupted neural goal functions as well as related cognitive and symptomatic phenomena in schizophrenia. We begin by demonstrating that foundational concepts from information theory-such as Shannon information, entropy, data compression, block coding, and strategies to increase the signal-to-noise ratio-can be used to provide novel understandings of cognitive impairments in schizophrenia and metrics to evaluate their integrity. We then describe more recent developments in information theory, including the concepts of infomax, coherent infomax, and coding with synergy, to demonstrate how these can be used to develop computational models of schizophrenia-related failures in the tuning of sensory neurons, gain control, perceptual organization, thought organization, selective attention, context processing, predictive coding, and cognitive control. Throughout, we demonstrate how disordered mechanisms may explain both perceptual/cognitive changes and symptom emergence in schizophrenia. Finally, we demonstrate that there is consistency between some information-theoretic concepts and recent discoveries in neurobiology, especially involving the existence of distinct sites for the accumulation of driving input and contextual information prior to their interaction. This convergence can be used to guide future theory, experiment, and treatment development.
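    One of the foundational concepts the article builds on, Shannon entropy, is easy to make concrete (a minimal stdlib sketch, not tied to the article's models):

```python
import math

# Shannon entropy of a discrete distribution: H = -sum p * log2(p).
# Higher entropy means more uncertainty in the signal.

def shannon_entropy(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(shannon_entropy([0.5, 0.5]))   # 1.0 bit: maximally uncertain coin
print(shannon_entropy([1.0]))        # -0.0 bits: no uncertainty
```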

  9. A spatial Mankiw-Romer-Weil model: Theory and evidence

    OpenAIRE

    Fischer, Manfred M.

    2009-01-01

    This paper presents a theoretical growth model that extends the Mankiw-Romer-Weil [MRW] model by accounting for technological interdependence among regional economies. Interdependence is assumed to work through spatial externalities caused by disembodied knowledge diffusion. The transition from theory to econometrics leads to a reduced-form empirical spatial Durbin model specification that explains the variation in regional levels of per worker output at steady state. A system ...

  10. Reservoir theory, groundwater transit time distributions, and lumped parameter models

    International Nuclear Information System (INIS)

    Etcheverry, D.; Perrochet, P.

    1999-01-01

    The relation between groundwater residence times and transit times is given by the reservoir theory. It allows theoretical transit time distributions to be calculated in a deterministic way, analytically, or on numerical models. Two analytical solutions validate the piston flow and the exponential model for simple conceptual flow systems. A numerical solution of a hypothetical regional groundwater flow shows that lumped parameter models could be applied in some cases to large-scale, heterogeneous aquifers. (author)
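    The two lumped-parameter models the abstract validates can be sketched as follows (a hedged illustration; the mean transit time is an arbitrary example value):

```python
import math

# Piston flow: all water shares a single transit time T (the input signal
# is simply delayed). Exponential model: transit times are exponentially
# distributed with mean T, as in a well-mixed reservoir.

def exponential_model_pdf(t, mean_transit_time):
    return math.exp(-t / mean_transit_time) / mean_transit_time

def piston_flow_output(t_input, delay):
    """Piston flow shifts the input signal by the transit time."""
    return t_input + delay

# Under the exponential model, the fraction of water younger than the
# mean transit time is 1 - e^-1, regardless of T.
T = 50.0  # years, illustrative
frac = 1 - math.exp(-T / T)
print(round(frac, 3))  # → 0.632
```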

  11. Theory of compressive modeling and simulation

    Science.gov (United States)

    Szu, Harold; Cha, Jae; Espinola, Richard L.; Krapels, Keith

    2013-05-01

    Modeling and Simulation (M&S) has been evolving along two general directions: (i) a data-rich approach suffering the curse of dimensionality and (ii) an equation-rich approach suffering from computing power and turnaround time. We suggest a third approach. We call it (iii) compressive M&S (CM&S); because the basic Minimum Free-Helmholtz Energy (MFE) facilitating CM&S can reproduce and generalize the Candes, Romberg, Tao & Donoho (CRT&D) Compressive Sensing (CS) paradigm as a linear Lagrange Constraint Neural network (LCNN) algorithm. CM&S based MFE can generalize LCNN to 2nd order as Nonlinear augmented LCNN. For example, during the sunset, we can avoid a reddish bias of sunlight illumination due to long-range Rayleigh scattering over the horizon. With CM&S we can take, instead of a day camera, a night vision camera. We decomposed the long wave infrared (LWIR) band with a filter into 2 vector components (8~10μm and 10~12μm) and used LCNN to find, pixel by pixel, the map of Emissive-Equivalent Planck Radiation Sources (EPRS). Then, we up-shifted consistently, according to the de-mixed sources map, to the sub-micron RGB color image. Moreover, the night vision imaging can also be down-shifted to Passive Millimeter Wave (PMMW) imaging, suffering less blur owing to dusty smoke scattering and enjoying apparent smoothness of surface reflectivity of man-made objects under the Rayleigh resolution. One loses three orders of magnitude in the spatial Rayleigh resolution; but gains two orders of magnitude in the reflectivity, and gains another two orders in the propagation without obscuring smog. Since CM&S can generate missing data and hard-to-get dynamic transients, CM&S can reduce unnecessary measurements and their associated cost and computing in the sense of super-saving CS: measuring one & getting one's neighborhood free.

  12. Consistent constraints on the Standard Model Effective Field Theory

    International Nuclear Information System (INIS)

    Berthier, Laure; Trott, Michael

    2016-01-01

    We develop the global constraint picture in the (linear) effective field theory generalisation of the Standard Model, incorporating data from detectors that operated at PEP, PETRA, TRISTAN, SpS, Tevatron, SLAC, LEP I and LEP II, as well as low energy precision data. We fit one hundred and three observables. We develop a theory error metric for this effective field theory, which is required when constraints on parameters at leading order in the power counting are to be pushed to the percent level, or beyond, unless the cut off scale is assumed to be large, Λ ≳ 3 TeV. We more consistently incorporate theoretical errors in this work, avoiding this assumption, and as a direct consequence bounds on some leading parameters are relaxed. We show how an S,T analysis is modified by the theory errors we include as an illustrative example.

  13. The nearly neutral and selection theories of molecular evolution under the fisher geometrical framework: substitution rate, population size, and complexity.

    Science.gov (United States)

    Razeto-Barry, Pablo; Díaz, Javier; Vásquez, Rodrigo A

    2012-06-01

    The general theories of molecular evolution depend on relatively arbitrary assumptions about the relative distribution and rate of advantageous, deleterious, neutral, and nearly neutral mutations. The Fisher geometrical model (FGM) has been used to make distributions of mutations biologically interpretable. We explored an FGM-based molecular model to represent molecular evolutionary processes typically studied by nearly neutral and selection models, but in which distributions and relative rates of mutations with different selection coefficients are a consequence of biologically interpretable parameters, such as the average size of the phenotypic effect of mutations and the number of traits (complexity) of organisms. A variant of the FGM-based model that we called the static regime (SR) represents evolution as a nearly neutral process in which substitution rates are determined by a dynamic substitution process in which the population's phenotype remains around a suboptimum equilibrium fitness produced by a balance between slightly deleterious and slightly advantageous compensatory substitutions. As in previous nearly neutral models, the SR predicts a negative relationship between molecular evolutionary rate and population size; however, SR does not have the unrealistic properties of previous nearly neutral models such as the narrow window of selection strengths in which they work. In addition, the SR suggests that compensatory mutations cannot explain the high rate of fixations driven by positive selection currently found in DNA sequences, contrary to what has been previously suggested. We also developed a generalization of SR in which the optimum phenotype can change stochastically due to environmental or physiological shifts, which we called the variable regime (VR). VR models evolution as an interplay between adaptive processes and nearly neutral steady-state processes. When strong environmental fluctuations are incorporated, the process becomes a selection model.
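    The FGM ingredients the abstract builds on can be sketched in a few lines (an illustrative sketch, not the authors' model; the trait count, fitness function, and mutation size below are assumptions):

```python
import math
import random

# Fisher geometrical model sketch: a phenotype is a point in n-dimensional
# trait space (n = organismal "complexity"), fitness decays with distance
# from the optimum, and a mutation's selection coefficient follows from
# how it moves the phenotype.

def fitness(phenotype, optimum):
    d2 = sum((p - o) ** 2 for p, o in zip(phenotype, optimum))
    return math.exp(-d2 / 2)

def selection_coefficient(phenotype, mutated, optimum):
    w0, w1 = fitness(phenotype, optimum), fitness(mutated, optimum)
    return w1 / w0 - 1  # > 0 advantageous, < 0 deleterious

n = 3                                    # number of traits (complexity)
opt = [0.0] * n
pheno = [0.5] * n                        # suboptimal resident phenotype
random.seed(1)
mut = [p + random.gauss(0, 0.1) for p in pheno]   # small random mutation
print(selection_coefficient(pheno, mut, opt))
```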

  14. Effective potential in Lorentz-breaking field theory models

    Energy Technology Data Exchange (ETDEWEB)

    Baeta Scarpelli, A.P. [Centro Federal de Educacao Tecnologica, Nova Gameleira Belo Horizonte, MG (Brazil); Setor Tecnico-Cientifico, Departamento de Policia Federal, Belo Horizonte, MG (Brazil); Brito, L.C.T. [Universidade Federal de Lavras, Departamento de Fisica, Lavras, MG (Brazil); Felipe, J.C.C. [Universidade Federal de Lavras, Departamento de Fisica, Lavras, MG (Brazil); Universidade Federal dos Vales do Jequitinhonha e Mucuri, Instituto de Engenharia, Ciencia e Tecnologia, Veredas, Janauba, MG (Brazil); Nascimento, J.R.; Petrov, A.Yu. [Universidade Federal da Paraiba, Departamento de Fisica, Joao Pessoa, Paraiba (Brazil)

    2017-12-15

    We calculate explicitly the one-loop effective potential in different Lorentz-breaking field theory models. First, we consider a Yukawa-like theory and some examples of Lorentz-violating extensions of scalar QED. We observe, for the extended QED models, that the resulting effective potential converges to the known result in the limit in which Lorentz symmetry is restored. Besides, the one-loop corrections to the effective potential in all the cases we study depend on the background tensors responsible for the Lorentz-symmetry violation. This has consequences for physical quantities like, for example, in the induced mass due to the Coleman-Weinberg mechanism. (orig.)

  15. Lenses on Reading An Introduction to Theories and Models

    CERN Document Server

    Tracey, Diane H

    2012-01-01

    This widely adopted text explores key theories and models that frame reading instruction and research. Readers learn why theory matters in designing and implementing high-quality instruction and research; how to critically evaluate the assumptions and beliefs that guide their own work; and what can be gained by looking at reading through multiple theoretical lenses. For each theoretical model, classroom applications are brought to life with engaging vignettes and teacher reflections. Research applications are discussed and illustrated with descriptions of exemplary studies. New to This Edition

  16. Effective potential in Lorentz-breaking field theory models

    International Nuclear Information System (INIS)

    Baeta Scarpelli, A.P.; Brito, L.C.T.; Felipe, J.C.C.; Nascimento, J.R.; Petrov, A.Yu.

    2017-01-01

    We calculate explicitly the one-loop effective potential in different Lorentz-breaking field theory models. First, we consider a Yukawa-like theory and some examples of Lorentz-violating extensions of scalar QED. We observe, for the extended QED models, that the resulting effective potential converges to the known result in the limit in which Lorentz symmetry is restored. Besides, the one-loop corrections to the effective potential in all the cases we study depend on the background tensors responsible for the Lorentz-symmetry violation. This has consequences for physical quantities like, for example, in the induced mass due to the Coleman-Weinberg mechanism. (orig.)

  17. Biomimetic model systems of rigid hair beds: Part I - Theory

    Science.gov (United States)

    Hood, Kaitlyn; Jammalamadaka, Mani S. S.; Hosoi, Anette

    2017-11-01

    Crustaceans - such as lobsters, crabs, and stomatopods - have hairy appendages that they use to recognize and track odorants in the surrounding fluid. An array of rigid hairs impedes flow at different rates depending on the spacing between hairs and the Reynolds number, Re. At larger Reynolds numbers (Re > 1), fluid travels through the hairs rather than around them, a phenomenon called leakiness. Crustaceans flick their appendages at different speeds in order to manipulate the leakiness between the hairs, allowing the hairs to either detect odors in a sample of fluid or collect a new sample. A single hair can be represented as a slender body attached at one end to a wall. Using both slender body theory and numerical methods, we observe that there is a region of flow around the hair that speeds up relative to the unobstructed flow. As the Reynolds number increases, this fast flow region moves closer to the hair. Using this model, we predict that an array of hairs can be engineered to have a desired leakiness profile.

  18. A Range-Based Multivariate Model for Exchange Rate Volatility

    OpenAIRE

    Tims, Ben; Mahieu, Ronald

    2003-01-01

    In this paper we present a parsimonious multivariate model for exchange rate volatilities based on logarithmic high-low ranges of daily exchange rates. The multivariate stochastic volatility model divides the log range of each exchange rate into two independent latent factors, which are interpreted as the underlying currency specific components. Due to the normality of logarithmic volatilities the model can be estimated conveniently with standard Kalman filter techniques. Our resu...
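    Because log volatilities are treated as Gaussian, a latent volatility factor can be tracked from log-range observations with a standard Kalman filter. A hedged sketch of the idea (scalar local-level model; this is not the authors' two-factor implementation, and all numbers are illustrative):

```python
# Scalar Kalman filter for a local-level state-space model:
#   x_t = x_{t-1} + w_t,  w_t ~ N(0, q)   (latent log volatility)
#   y_t = x_t + v_t,      v_t ~ N(0, r)   (observed log range)

def kalman_filter(observations, q, r, x0=0.0, p0=1.0):
    x, p = x0, p0
    estimates = []
    for y in observations:
        p = p + q                      # predict
        k = p / (p + r)                # Kalman gain
        x = x + k * (y - x)            # update with log-range innovation
        p = (1 - k) * p
        estimates.append(x)
    return estimates

log_ranges = [-4.1, -3.9, -4.0, -3.5, -3.6]  # illustrative daily values
est = kalman_filter(log_ranges, q=0.01, r=0.1)
print(round(est[-1], 3))  # filtered estimate of the latent log volatility
```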

  19. Modelling plastic deformation of metals over a wide range of strain rates using irreversible thermodynamics

    International Nuclear Information System (INIS)

    Huang Mingxin; Rivera-Diaz-del-Castillo, Pedro E J; Zwaag, Sybrand van der; Bouaziz, Olivier

    2009-01-01

    Based on the theory of irreversible thermodynamics, the present work proposes a dislocation-based model to describe the plastic deformation of FCC metals over wide ranges of strain rates. The stress-strain behaviour and the evolution of the average dislocation density are derived. It is found that there is a transitional strain rate (∼10^4 s^-1) over which the phonon drag effects appear, resulting in a significant increase in the flow stress and the average dislocation density. The model is applied to pure Cu deformed at room temperature and at strain rates ranging from 10^-5 to 10^6 s^-1, showing good agreement with experimental results.
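    The qualitative behavior described above can be sketched with a Taylor-relation flow stress plus a drag term above the transitional strain rate. This is an illustrative sketch only: the functional form and all parameter values are assumptions, not the authors' thermodynamic model.

```python
import math

# Flow stress from the Taylor relation (alpha * mu * b * sqrt(rho)) plus a
# linear phonon-drag contribution that activates above a transitional
# strain rate of ~1e4 /s. Values loosely chosen for Cu; illustrative only.

def flow_stress(dislocation_density, strain_rate,
                alpha=0.3, mu=48e9, b=2.56e-10,   # shear modulus, Burgers vector
                drag_coeff=1e3, transition=1e4):
    taylor = alpha * mu * b * math.sqrt(dislocation_density)
    drag = drag_coeff * (strain_rate - transition) if strain_rate > transition else 0.0
    return taylor + drag

low = flow_stress(1e14, 1e-3)   # quasi-static: Taylor term only
high = flow_stress(1e14, 1e6)   # shock regime: drag raises the stress
print(high > low)  # True: flow stress jumps above the transitional rate
```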

  20. Students' motivational processes and their relationship to teacher ratings in school physical education: a self-determination theory approach.

    Science.gov (United States)

    Standage, Martyn; Duda, Joan L; Ntoumanis, Nikos

    2006-03-01

    In the present study, we used a model of motivation grounded in self-determination theory (Deci & Ryan, 1985, 1991; Ryan & Deci, 2000a, 2000b, 2002) to examine the relationship between physical education (PE) students' motivational processes and ratings of their effort and persistence as provided by their PE teacher. Data were obtained from 394 British secondary school students (204 boys, 189 girls, 1 gender not specified; M age = 11.97 years; SD = .89; range = 11-14 years) who responded to a multisection inventory (tapping autonomy-support, autonomy, competence, relatedness, and self-determined motivation). The students' respective PE teachers subsequently provided ratings reflecting the effort and persistence each student exhibited in their PE classes. The hypothesized relationships among the study variables were examined via structural equation modeling analysis using latent factors. Results of maximum likelihood analysis using the bootstrapping method revealed the proposed model demonstrated a good fit to the data, χ2(292) = 632.68, p < .001. Student-reported levels of self-determined motivation positively predicted teacher ratings of effort and persistence in PE. The findings are discussed with regard to enhancing student motivation in PE settings.

  1. Integrable models in 1+1 dimensional quantum field theory

    International Nuclear Information System (INIS)

    Faddeev, Ludvig.

    1982-09-01

    The goal of this lecture is to present a unifying view of the exactly soluble models. Several reasons argue in favor of the 1+1 dimensional models: every exact solution of a field-theoretical model can teach us about the ability of quantum field theory to describe spectrum and scattering, and some 1+1 d models have physical applications in solid state theory. There are several ways to become acquainted with the methods of exactly soluble models: via classical statistical mechanics, via the Bethe Ansatz, or via the inverse scattering method. The fundamental Poisson bracket relation (FPR) and/or the fundamental commutation relations (FCR) play a fundamental role. A general classification of FPRs is given, with promising generalizations to FCRs.

  2. A model of PCF in guarded type theory

    DEFF Research Database (Denmark)

    Paviotti, Marco; Møgelberg, Rasmus Ejlers; Birkedal, Lars

    2015-01-01

    Guarded recursion is a form of recursion where recursive calls are guarded by delay modalities. Previous work has shown how guarded recursion is useful for constructing logics for reasoning about programming languages with advanced features, as well as for constructing and reasoning about elements of coinductive types. In this paper we investigate how type theory with guarded recursion can be used as a metalanguage for denotational semantics useful both for constructing models and for proving properties of these. We do this by constructing a fairly intensional model of PCF and proving it computationally adequate. The model construction is related to Escardo's metric model for PCF, but here everything is carried out entirely in type theory with guarded recursion, including the formulation of the operational semantics, the model construction and the proof of adequacy.

  4. An introduction to queueing theory modeling and analysis in applications

    CERN Document Server

    Bhat, U Narayan

    2015-01-01

    This introductory textbook is designed for a one-semester course on queueing theory that does not require a course on stochastic processes as a prerequisite. By integrating the necessary background on stochastic processes with the analysis of models, the work provides a sound foundational introduction to the modeling and analysis of queueing systems for a wide interdisciplinary audience of students in mathematics, statistics, and applied disciplines such as computer science, operations research, and engineering. This edition includes additional topics in methodology and applications. Key features: • An introductory chapter including a historical account of the growth of queueing theory in more than 100 years. • A modeling-based approach with emphasis on identification of models. • Rigorous treatment of the foundations of basic models commonly used in applications with appropriate references for advanced topics. • Applications in manufacturing, and computer and communication systems. • A chapter on ...
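As a minimal illustration of the kind of model such a course builds up from, the steady-state metrics of the single-server M/M/1 queue have closed forms. This is a sketch of the standard textbook formulas, not code from the book:

```python
def mm1_metrics(lam: float, mu: float) -> dict:
    """Steady-state metrics for an M/M/1 queue: Poisson arrivals at rate lam,
    exponential service at rate mu; requires lam < mu for stability."""
    if lam >= mu:
        raise ValueError("unstable queue: arrival rate must be below service rate")
    rho = lam / mu  # server utilization
    return {
        "utilization": rho,
        "mean_number_in_system": rho / (1 - rho),  # L
        "mean_time_in_system": 1 / (mu - lam),     # W; Little's law gives L = lam * W
    }

# Example: 2 arrivals/min against a server handling 5/min.
m = mm1_metrics(lam=2.0, mu=5.0)
```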

  5. Beta-decay rate and beta-delayed neutron emission probability of improved gross theory

    Science.gov (United States)

    Koura, Hiroyuki

    2014-09-01

    A theoretical study has been carried out on the beta-decay rate and the beta-delayed neutron emission probability. The gross theory of beta decay is based on the idea of a sum rule for the beta-decay strength function, and has succeeded in describing beta-decay half-lives of nuclei over the whole nuclear mass region. The gross theory includes not only the allowed transitions (Fermi and Gamow-Teller), but also the first-forbidden transitions. In this work, some improvements are introduced: the nuclear shell correction on nuclear level densities and the nuclear deformation for nuclear strength functions, effects that were not included in the original gross theory. The shell energy and the nuclear deformation for unmeasured nuclei are adopted from the KTUY nuclear mass formula, which is based on the spherical-basis method. Considering the properties of the integrated Fermi function, we can roughly categorize the excitation-energy region of a daughter nucleus into three regions: a highly excited region, which fully affects the delayed neutron probability; a middle region, which is estimated to contribute to the decay heat; and a region neighboring the ground state, which determines the beta-decay rate. Some results will be given in the presentation.

  6. Rate theory of ion pairing at the water liquid-vapor interface: A case of sodium iodide

    Science.gov (United States)

    Dang, Liem X.; Schenter, Gregory K.

    2018-06-01

    Studies on ion pairing at interfaces have been intensified recently because of their importance in many chemical reactive phenomena, such as ion-ion interactions that are affected by interfaces and their influence on kinetic processes. In this study, we performed simulations to examine the thermodynamics and kinetics of small polarizable sodium iodide ions in the bulk and near the water liquid-vapor interface. Using classical transition state theory, we calculated the dissociation rates and corrected them with transmission coefficients obtained from the reactive flux formalism and Grote-Hynes theory. Our results show that in addition to affecting the free energy of ions in solution, the interfacial environments significantly influence the kinetics of ion pairing. The results on the relaxation time obtained using the reactive flux formalism and Grote-Hynes theory present an unequivocal picture that the interface suppresses ion dissociation. The effects of the use of molecular models on the ion interactions as well as the ion-pair configurations at the interface are also quantified and discussed.
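The correction scheme described above, a classical transition state theory rate scaled by a transmission coefficient, can be sketched generically. The barrier height and κ below are illustrative placeholders, not values from the study:

```python
import math

KB = 1.380649e-23    # Boltzmann constant, J/K
H = 6.62607015e-34   # Planck constant, J*s
R = 8.314462618      # gas constant, J/(mol*K)

def tst_rate(delta_g_kjmol: float, temp: float, kappa: float = 1.0) -> float:
    """Rate constant k = kappa * (kB*T/h) * exp(-dG‡ / RT).

    kappa is the transmission coefficient (e.g. from the reactive flux
    formalism or Grote-Hynes theory); kappa = 1 recovers classical TST."""
    prefactor = KB * temp / H
    return kappa * prefactor * math.exp(-delta_g_kjmol * 1e3 / (R * temp))

# Hypothetical 30 kJ/mol dissociation barrier at 298.15 K.
k_tst = tst_rate(30.0, 298.15)              # uncorrected TST rate
k_corr = tst_rate(30.0, 298.15, kappa=0.4)  # recrossing-corrected rate
```

Barrier recrossing always gives κ ≤ 1, which is why the corrected rate is suppressed relative to bare TST, mirroring the interfacial suppression of dissociation discussed above.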

  7. Using Omega and NIF to Advance Theories of High-Pressure, High-Strain-Rate Tantalum Plastic Flow

    Science.gov (United States)

    Rudd, R. E.; Arsenlis, A.; Barton, N. R.; Cavallo, R. M.; Huntington, C. M.; McNaney, J. M.; Orlikowski, D. A.; Park, H.-S.; Prisbrey, S. T.; Remington, B. A.; Wehrenberg, C. E.

    2015-11-01

    Precisely controlled plasmas are playing an important role as both pump and probe in experiments to understand the strength of solid metals at high energy density (HED) conditions. In concert with theory, these experiments have enabled a predictive capability to model material strength at Mbar pressures and high strain rates. Here we describe multiscale strength models developed for tantalum and vanadium starting with atomic bonding and extending up through the mobility of individual dislocations, the evolution of dislocation networks and so on up to full scale. High-energy laser platforms such as the NIF and the Omega laser probe ramp-compressed strength to 1-5 Mbar. The predictions of the multiscale model agree well with the 1 Mbar experiments without tuning. The combination of experiment and theory has shown that solid metals can behave significantly differently at HED conditions; for example, the familiar strengthening of metals as the grain size is reduced has been shown not to occur in the high pressure experiments. Work performed under the auspices of the U.S. Dept. of Energy by Lawrence Livermore National Lab under contract DE-AC52-07NA273.

  8. Rate constants of chemical reactions from semiclassical transition state theory in full and one dimension

    Energy Technology Data Exchange (ETDEWEB)

    Greene, Samuel M., E-mail: samuel.greene@chem.ox.ac.uk; Shan, Xiao, E-mail: xiao.shan@chem.ox.ac.uk; Clary, David C., E-mail: david.clary@chem.ox.ac.u [Physical and Theoretical Chemistry Laboratory, Department of Chemistry, University of Oxford, South Parks Road, Oxford OX1 3QZ (United Kingdom)

    2016-06-28

    Semiclassical Transition State Theory (SCTST), a method for calculating rate constants of chemical reactions, offers gains in computational efficiency relative to more accurate quantum scattering methods. In full-dimensional (FD) SCTST, reaction probabilities are calculated from third and fourth potential derivatives along all vibrational degrees of freedom. However, the computational cost of FD SCTST scales unfavorably with system size, which prohibits its application to larger systems. In this study, the accuracy and efficiency of 1-D SCTST, in which only third and fourth derivatives along the reaction mode are used, are investigated in comparison to those of FD SCTST. Potential derivatives are obtained from numerical ab initio Hessian matrix calculations at the MP2/cc-pVTZ level of theory, and Richardson extrapolation is applied to improve the accuracy of these derivatives. Reaction barriers are calculated at the CCSD(T)/cc-pVTZ level. Results from FD SCTST agree with results from previous theoretical and experimental studies when Richardson extrapolation is applied. Results from our implementation of 1-D SCTST, which uses only 4 single-point MP2/cc-pVTZ energy calculations in addition to those for conventional TST, agree with FD results to within a factor of 5 at 250 K. This degree of agreement and the efficiency of the 1-D method suggest its potential as a means of approximating rate constants for systems too large for existing quantum scattering methods.
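Richardson extrapolation as applied above to numerical derivatives can be illustrated in one dimension. This is a generic sketch of the technique, not the authors' implementation:

```python
import math

def central_second_derivative(f, x: float, h: float) -> float:
    """O(h^2) central-difference estimate of f''(x)."""
    return (f(x + h) - 2 * f(x) + f(x - h)) / h**2

def richardson_second_derivative(f, x: float, h: float) -> float:
    """Richardson extrapolation: combine step sizes h and h/2 to cancel the
    leading O(h^2) error term, leaving an O(h^4) estimate:
        D = (4*D(h/2) - D(h)) / 3
    """
    d_h = central_second_derivative(f, x, h)
    d_h2 = central_second_derivative(f, x, h / 2)
    return (4 * d_h2 - d_h) / 3

exact = -math.sin(1.0)  # d^2/dx^2 sin(x) = -sin(x)
plain = central_second_derivative(math.sin, 1.0, h=0.1)
extrapolated = richardson_second_derivative(math.sin, 1.0, h=0.1)
```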

  9. Generalized Rate Theory for Void and Bubble Swelling and its Application to Delta-Plutonium

    Energy Technology Data Exchange (ETDEWEB)

    Allen, P. G. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Wall, M. A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Wolfer, W. G. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2016-10-04

    A rate theory for void and bubble swelling is derived that allows both vacancies and self-interstitial atoms to be generated by thermal activation at all sinks. In addition, they can also be produced by displacement damage from external and internal radiation. This generalized rate theory (GRT) is applied to swelling of gallium-stabilized δ-plutonium in which α-decay causes the displacement damage. Since the helium atoms produced also become trapped in vacancies, a distinction is made between empty and occupied vacancies. The growth of helium bubbles observed by transmission electron microscopy (TEM) in weapons-grade and in material enriched with Pu238 is analyzed, using different values for the formation energy of self-interstitial atoms (SIA) and two different sets of relaxation volumes for the vacancy and for the SIA. One set allows preferential capture of SIA at dislocations, while the other set gives equal preference to both vacancy and SIA. It is found that the helium bubble diameters observed are in better agreement with GRT predictions if no preferential capture occurs at dislocations. Therefore, helium bubbles in δ-plutonium will not evolve into voids. The helium density within the bubbles remains sufficiently high to cause thermal emission of SIA. Based on a helium density between two to three helium atoms per vacant site, the sum of formation and migration energies must be around 2.0 eV for SIA in δ-plutonium.
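The mean-field structure underlying such rate theories can be sketched with a toy two-species model: a damage source generating vacancies and self-interstitials in pairs, mutual recombination, and loss to sinks. All parameter values are arbitrary illustrations, not δ-plutonium data, and this omits the thermal emission and helium-trapping terms of the GRT:

```python
def evolve_point_defects(g=1e-6, r=1e3, kv=1.0, ki=1.2, dt=1e-2, steps=5000):
    """Toy mean-field rate theory: vacancy (cv) and self-interstitial (ci)
    concentrations under a damage source g, mutual recombination r*cv*ci,
    and sink losses kv*cv and ki*ci. Forward-Euler integration.
    """
    cv = ci = 0.0
    for _ in range(steps):
        recomb = r * cv * ci
        cv += dt * (g - recomb - kv * cv)
        ci += dt * (g - recomb - ki * ci)
    return cv, ci

cv, ci = evolve_point_defects()
```

With these parameters the system settles near cv ≈ g/kv and ci ≈ g/ki; the sink-strength asymmetry (ki > kv here) is the kind of bias whose presence or absence the paper shows controls whether bubbles can grow into voids.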

  10. A macro-physics model of depreciation rate in economic exchange

    Science.gov (United States)

    Marmont Lobo, Rui F.; de Sousa, Miguel Rocha

    2014-02-01

    This article aims at a new approach to a known fundamental result: barter or trade increases economic value. It successfully bridges the gap between the theory of value and the exchange process attached to the transition from endowments to the equilibrium in the core and contract curve. First, we summarise the theory of value; in Section 2, we present the Edgeworth (1881) box and an axiomatic approach; and in Section 3, we apply our pure exchange model. Finally, in Section 4, using our open econophysics pure barter (EPB) model, we derive an improvement in value, which means that pure barter leads to a decline in the depreciation rate.
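The core claim, that pure barter raises both parties' value, can be checked with a toy Edgeworth-box example. The Cobb-Douglas utilities and endowments below are hypothetical illustrations, not the authors' EPB model:

```python
def cobb_douglas(x: float, y: float, alpha: float = 0.5) -> float:
    """Cobb-Douglas utility u(x, y) = x^alpha * y^(1-alpha)."""
    return x**alpha * y ** (1 - alpha)

# Two-good Edgeworth box: agent A is endowed with (8, 2), agent B with (2, 8).
uA_before = cobb_douglas(8, 2)   # sqrt(16) = 4
uB_before = cobb_douglas(2, 8)   # sqrt(16) = 4

# Pure barter: A trades 3 units of good 1 for 3 units of B's good 2.
uA_after = cobb_douglas(5, 5)    # 5
uB_after = cobb_douglas(5, 5)    # 5

assert uA_after > uA_before and uB_after > uB_before  # trade raises both utilities
```

Both agents move from utility 4 to utility 5, landing on the contract curve of this symmetric box, which is the value improvement the abstract refers to.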

  11. Traffic Games: Modeling Freeway Traffic with Game Theory.

    Science.gov (United States)

    Cortés-Berrueco, Luis E; Gershenson, Carlos; Stephens, Christopher R

    2016-01-01

    We apply game theory to a vehicular traffic model to study the effect of driver strategies on traffic flow. The resulting model inherits the realistic dynamics achieved by a two-lane traffic model and aims to incorporate phenomena caused by driver-driver interactions. To achieve this goal, a game-theoretic description of driver interaction was developed. This game-theoretic formalization allows one to model different lane-changing behaviors and to keep track of mobility performance. We simulate the evolution of cooperation, traffic flow, and mobility performance for different modeled behaviors. The analysis of these results indicates a mobility optimization process achieved by drivers' interactions.

  12. Comparison of potential models through heavy quark effective theory

    International Nuclear Information System (INIS)

    Amundson, J.F.

    1995-01-01

    I calculate heavy-light decay constants in a nonrelativistic potential model. The resulting estimate of heavy quark symmetry breaking conflicts with similar estimates from lattice QCD. I show that a semirelativistic potential model eliminates the conflict. Using the results of heavy quark effective theory allows me to identify and compensate for shortcomings in the model calculations in addition to isolating the source of the differences in the two models. The results lead to a rule as to where the nonrelativistic quark model gives misleading predictions

  13. Model building with a dynamical volume element in gravity, particle theory and theories of extended object

    International Nuclear Information System (INIS)

    Guendelman, E.

    2004-01-01

    Full Text: The volume element of space-time can be considered as a geometrical object which can be independent of the metric. The use in the action of a volume element which is metric independent leads to the appearance of a measure of integration which is metric independent. This can be applied to all known generally coordinate invariant theories; we will discuss three very important cases: 1. 4-D theories describing gravity and matter fields, 2. parametrization invariant theories of extended objects, and 3. higher dimensional theories including gravity and matter fields. In case 1, a large number of new effects appear: (i) spontaneous breaking of scale invariance associated with integration over degrees of freedom related to the measure; (ii) under normal particle physics laboratory conditions fermions split into three families, but when matter is highly diluted, neutrinos increase their mass and become suitable candidates for dark matter; (iii) the cosmic coincidence between dark energy and dark matter is natural; (iv) quintessence scenarios with automatic decoupling of the quintessence scalar from ordinary matter, but not from dark matter, are obtained. In case 2, for theories of extended objects, the use of a measure of integration independent of the metric leads to (i) dynamical tension, (ii) string models of non-abelian confinement, and (iii) the possibility of new Weyl-invariant light-like branes (WILL branes). These WILL branes dynamically adjust themselves to sit at black hole horizons and, in the context of higher dimensional theories, can provide examples of massless 4-D particles with nontrivial Kaluza-Klein quantum numbers. In case 3, in brane and Kaluza-Klein scenarios, the use of a measure independent of the metric makes it possible to construct naturally models where only the extra dimensions get curved and the 4-D observable space-time remains flat.

  14. Theory of positive disintegration as a model of adolescent development.

    Science.gov (United States)

    Laycraft, Krystyna

    2011-01-01

    This article introduces a conceptual model of adolescent development based on the theory of positive disintegration combined with the theory of self-organization. Dabrowski's theory of positive disintegration, which was created almost half a century ago, still attracts psychologists' and educators' attention, and is extensively applied in studies of gifted and talented people. Positive disintegration is mental development described by the process of transition from lower to higher levels of mental life, stimulated by tension, inner conflict, and anxiety. This process can be modeled by a sequence of patterns of organization (attractors) as a developmental potential (a control parameter) changes. Three levels of disintegration (unilevel disintegration, spontaneous multilevel disintegration, and organized multilevel disintegration) are analyzed in detail, and it is proposed that they represent the behaviour of the early, middle and late periods of adolescence. In the discussion, recent research on adolescent brain development is included.

  15. Aspect-Aware Latent Factor Model: Rating Prediction with Ratings and Reviews

    OpenAIRE

    Cheng, Zhiyong; Ding, Ying; Zhu, Lei; Kankanhalli, Mohan

    2018-01-01

    Although latent factor models (e.g., matrix factorization) achieve good accuracy in rating prediction, they suffer from several problems, including cold-start, non-transparency, and suboptimal recommendations for local users or items. In this paper, we employ textual review information together with ratings to tackle these limitations. First, we apply the proposed aspect-aware topic model (ATM) to the review text to model user preferences and item features from different aspects, and estimate the aspect...
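The plain latent-factor baseline that this paper extends can be sketched as matrix factorization fitted by stochastic gradient descent. This is the generic baseline only, not the aspect-aware ATM model of the paper; all data and hyperparameters are illustrative:

```python
import random

def factorize(ratings, n_users, n_items, k=2, lr=0.05, reg=0.01, epochs=1500):
    """Latent factor model r_ui ≈ p_u · q_i fitted by SGD.

    `ratings` is a list of (user, item, rating) triples. L2 regularization
    with weight `reg` shrinks the factors slightly toward zero."""
    rng = random.Random(0)
    p = [[rng.gauss(0, 0.1) for _ in range(k)] for _ in range(n_users)]
    q = [[rng.gauss(0, 0.1) for _ in range(k)] for _ in range(n_items)]
    for _ in range(epochs):
        for u, i, r in ratings:
            pred = sum(p[u][f] * q[i][f] for f in range(k))
            err = r - pred
            for f in range(k):
                pu, qi = p[u][f], q[i][f]   # cache for simultaneous update
                p[u][f] += lr * (err * qi - reg * pu)
                q[i][f] += lr * (err * pu - reg * qi)
    return p, q

# Tiny hypothetical 2-user, 2-item rating matrix.
data = [(0, 0, 5.0), (0, 1, 1.0), (1, 0, 4.0), (1, 1, 2.0)]
p, q = factorize(data, n_users=2, n_items=2)
```

The cold-start and non-transparency problems mentioned above stem from this structure: the factors are learned only from observed ratings and carry no interpretable meaning, which is the gap the review-based aspect model is meant to fill.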

  16. Should Unemployment Insurance Vary with the Unemployment Rate? Theory and Evidence

    OpenAIRE

    Kroft, Kory; Notowidigdo, Matthew J.

    2012-01-01

    We study how optimal unemployment insurance (UI) benefits vary over the business cycle by estimating how the moral hazard cost and the consumption smoothing benefit of UI vary with the unemployment rate. We find that the moral hazard cost is procyclical, greater when the unemployment rate is relatively low. By contrast, our evidence suggests that the consumption smoothing benefit of UI is acyclical. Using these estimates to calibrate our job search model, we find that a one standard deviation...

  17. Integrating social capital theory, social cognitive theory, and the technology acceptance model to explore a behavioral model of telehealth systems.

    Science.gov (United States)

    Tsai, Chung-Hung

    2014-05-07

    Telehealth has become an increasingly applied solution to delivering health care to rural and underserved areas by remote health care professionals. This study integrated social capital theory, social cognitive theory, and the technology acceptance model (TAM) to develop a comprehensive behavioral model for analyzing the relationships among social capital factors (social capital theory), technological factors (TAM), and system self-efficacy (social cognitive theory) in telehealth. The proposed framework was validated with 365 respondents from Nantou County, located in Central Taiwan. Structural equation modeling (SEM) was used to assess the causal relationships that were hypothesized in the proposed model. The findings indicate that elderly residents generally reported positive perceptions toward the telehealth system. Generally, the findings show that social capital factors (social trust, institutional trust, and social participation) significantly and positively affect the technological factors (perceived ease of use and perceived usefulness, respectively), which in turn influence usage intention. This study also confirmed that system self-efficacy was the salient antecedent of perceived ease of use. In addition, the proposed model fitted the sample data considerably well. The proposed integrative psychosocial-technological model may serve as a theoretical basis for future research and can also offer empirical foresight to practitioners and researchers in the health departments of governments, hospitals, and rural communities.

  19. Modelling machine ensembles with discrete event dynamical system theory

    Science.gov (United States)

    Hunter, Dan

    1990-01-01

    Discrete Event Dynamical System (DEDS) theory can be utilized as a control strategy for future complex machine ensembles that will be required for in-space construction. The control strategy involves orchestrating a set of interactive submachines to perform a set of tasks under a given set of constraints such as minimum time, minimum energy, or maximum machine utilization. Machine ensembles can be hierarchically modeled as a global model that combines the operations of the individual submachines. These submachines are represented in the global model as local models. Local models, from the perspective of DEDS theory, are described by the following: a set of system and transition states, an event alphabet that portrays actions that take a submachine from one state to another, an initial system state, a partial function that maps the current state and event alphabet to the next state, and the time required for the event to occur. Each submachine in the machine ensemble is represented by a unique local model. The global model combines the local models such that the local models can operate in parallel under the additional logistic and physical constraints due to submachine interactions. The global model is constructed from the states, events, event functions, and timing requirements of the local models. Supervisory control can be implemented in the global model by various methods such as task scheduling (open-loop control) or implementing a feedback DEDS controller (closed-loop control).
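A local model as described, a set of states, an event alphabet, a partial transition function, and per-event durations, can be sketched directly. The two-state submachine below is a hypothetical example, not one of the paper's machine ensembles:

```python
class LocalModel:
    """Minimal DEDS local model: states, event alphabet, partial transition
    function, initial state, and the time each event takes."""

    def __init__(self, initial, transitions, durations):
        self.state = initial
        self.transitions = transitions  # partial map: (state, event) -> next state
        self.durations = durations      # map: event -> time required
        self.clock = 0.0                # accumulated time of fired events

    def fire(self, event):
        """Fire an event if it is enabled in the current state."""
        key = (self.state, event)
        if key not in self.transitions:
            raise ValueError(f"event {event!r} not enabled in state {self.state!r}")
        self.state = self.transitions[key]
        self.clock += self.durations[event]

# Hypothetical submachine: 'start' takes it from idle to busy, 'done' back to idle.
m = LocalModel(
    initial="idle",
    transitions={("idle", "start"): "busy", ("busy", "done"): "idle"},
    durations={"start": 1.0, "done": 3.5},
)
m.fire("start")
m.fire("done")
```

A global model would compose several such objects and restrict which events may fire concurrently, which is where the supervisory-control layer described above attaches.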

  20. Theory, modeling, and integrated studies in the Arase (ERG) project

    Science.gov (United States)

    Seki, Kanako; Miyoshi, Yoshizumi; Ebihara, Yusuke; Katoh, Yuto; Amano, Takanobu; Saito, Shinji; Shoji, Masafumi; Nakamizo, Aoi; Keika, Kunihiro; Hori, Tomoaki; Nakano, Shin'ya; Watanabe, Shigeto; Kamiya, Kei; Takahashi, Naoko; Omura, Yoshiharu; Nose, Masahito; Fok, Mei-Ching; Tanaka, Takashi; Ieda, Akimasa; Yoshikawa, Akimasa

    2018-02-01

    Understanding of underlying mechanisms of drastic variations of the near-Earth space (geospace) is one of the current focuses of the magnetospheric physics. The science target of the geospace research project Exploration of energization and Radiation in Geospace (ERG) is to understand the geospace variations with a focus on the relativistic electron acceleration and loss processes. In order to achieve the goal, the ERG project consists of the three parts: the Arase (ERG) satellite, ground-based observations, and theory/modeling/integrated studies. The role of theory/modeling/integrated studies part is to promote relevant theoretical and simulation studies as well as integrated data analysis to combine different kinds of observations and modeling. Here we provide technical reports on simulation and empirical models related to the ERG project together with their roles in the integrated studies of dynamic geospace variations. The simulation and empirical models covered include the radial diffusion model of the radiation belt electrons, GEMSIS-RB and RBW models, CIMI model with global MHD simulation REPPU, GEMSIS-RC model, plasmasphere thermosphere model, self-consistent wave-particle interaction simulations (electron hybrid code and ion hybrid code), the ionospheric electric potential (GEMSIS-POT) model, and SuperDARN electric field models with data assimilation. ERG (Arase) science center tools to support integrated studies with various kinds of data are also briefly introduced.

  1. Spectral and scattering theory for translation invariant models in quantum field theory

    DEFF Research Database (Denmark)

    Rasmussen, Morten Grud

    This thesis is concerned with a large class of massive translation invariant models in quantum field theory, including the Nelson model and the Fröhlich polaron. The models in the class describe a matter particle, e.g. a nucleon or an electron, linearly coupled to a second quantised massive scalar field... by the physically relevant choices. The translation invariance implies that the Hamiltonian may be decomposed into a direct integral over the space of total momentum, where the fixed-momentum fiber Hamiltonians are given by H(ξ) = Ω(ξ − dΓ(k)) + dΓ(ω) + φ(v), where ξ denotes the total momentum and φ(v) is the Segal field operator. The fiber Hamiltonians...

  2. Cohomological gauge theory, quiver matrix models and Donaldson-Thomas theory

    NARCIS (Netherlands)

    Cirafici, M.; Sinkovics, A.; Szabo, R.J.

    2009-01-01

    We study the relation between Donaldson–Thomas theory of Calabi–Yau threefolds and a six-dimensional topological Yang–Mills theory. Our main example is the topological U(N) gauge theory on flat space in its Coulomb branch. To evaluate its partition function we use equivariant localization techniques.

  3. Linking Complexity and Sustainability Theories: Implications for Modeling Sustainability Transitions

    Directory of Open Access Journals (Sweden)

    Camaren Peter

    2014-03-01

    In this paper, we deploy complexity theory as the foundation for integrating different theoretical approaches to sustainability, and develop a rationale for a complexity-based framework for modeling transitions to sustainability. We propose a framework based on a comparison of the complex-systems properties that characterize the different theories dealing with transitions to sustainability. We argue that adopting a complexity-theory-based approach for modeling transitions requires going beyond deterministic frameworks, by adopting a probabilistic, integrative, inclusive and adaptive approach that can support transitions. We also illustrate how this complexity-based modeling framework can be implemented, i.e., how it can be used to select modeling techniques that address particular properties of complex systems that we need to understand in order to model transitions to sustainability. In doing so, we establish a complexity-based approach towards modeling sustainability transitions that caters for the broad range of complex-systems properties required to model transitions to sustainability.

  4. Excellence in Physics Education Award: Modeling Theory for Physics Instruction

    Science.gov (United States)

    Hestenes, David

    2014-03-01

    All humans create mental models to plan and guide their interactions with the physical world. Science has greatly refined and extended this ability by creating and validating formal scientific models of physical things and processes. Research in physics education has found that mental models created from everyday experience are largely incompatible with scientific models. This suggests that the fundamental problem in learning and understanding science is coordinating mental models with scientific models. Modeling Theory has drawn on resources of cognitive science to work out extensive implications of this suggestion and guide development of an approach to science pedagogy and curriculum design called Modeling Instruction. Modeling Instruction has been widely applied to high school physics and, more recently, to chemistry and biology, with noteworthy results.

  5. A Model of Statistics Performance Based on Achievement Goal Theory.

    Science.gov (United States)

    Bandalos, Deborah L.; Finney, Sara J.; Geske, Jenenne A.

    2003-01-01

    Tests a model of statistics performance based on achievement goal theory. Both learning and performance goals affected achievement indirectly through study strategies, self-efficacy, and test anxiety. Implications of these findings for teaching and learning statistics are discussed. (Contains 47 references, 3 tables, 3 figures, and 1 appendix.)…

  6. Anisotropic cosmological models and generalized scalar tensor theory

    Indian Academy of Sciences (India)

    Abstract. In this paper generalized scalar tensor theory has been considered in the background of anisotropic cosmological models, namely, axially symmetric Bianchi-I, Bianchi-III and Kantowski–Sachs space-time. For bulk viscous fluid, both exponential and power-law solutions have been studied and some assumptions ...

  8. Two-dimensional models in statistical mechanics and field theory

    International Nuclear Information System (INIS)

    Koberle, R.

    1980-01-01

    Several features of two-dimensional models in statistical mechanics and field theory, such as lattice quantum chromodynamics, Z(N), Gross-Neveu and CP^{N-1} models, are discussed. The problems of confinement and dynamical mass generation are also analyzed. (L.C.)

  9. The early years of string theory: The dual resonance model

    International Nuclear Information System (INIS)

    Ramond, P.

    1987-10-01

    This paper reviews the past quantum mechanical history of the dual resonance model which is an early string theory. The content of this paper is listed as follows: historical review, the Veneziano amplitude, the operator formalism, the ghost story, and the string story
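The Veneziano amplitude listed among the topics above has a compact standard closed form, Euler's beta function evaluated on linear Regge trajectories, reproduced here for reference:

```latex
A(s,t) \;=\; \frac{\Gamma(-\alpha(s))\,\Gamma(-\alpha(t))}{\Gamma(-\alpha(s)-\alpha(t))}
\;=\; B\bigl(-\alpha(s),\,-\alpha(t)\bigr),
\qquad \alpha(s) \;=\; \alpha(0) + \alpha' s .
```

The expression is manifestly symmetric under s ↔ t (duality), with poles at α(s) = 0, 1, 2, … corresponding to the resonance spectrum of the dual resonance model.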

  10. Interacting bosons model and relation with BCS theory

    International Nuclear Information System (INIS)

    Diniz, R.

    1990-01-01

    The Nambu mechanism for BCS theory is extended with the inclusion of quadrupole pairing in addition to the usual monopole pairing. An effective Hamiltonian is constructed and its relation to the IBM is discussed. The difficulties encountered and a possible generalization of this model are discussed. (author)

  11. Symmetry-guided large-scale shell-model theory

    Czech Academy of Sciences Publication Activity Database

    Launey, K. D.; Dytrych, Tomáš; Draayer, J. P.

    2016-01-01

    Roč. 89, JUL (2016), s. 101-136 ISSN 0146-6410 R&D Projects: GA ČR GA16-16772S Institutional support: RVO:61389005 Keywords: Ab initio shell-model theory * Symplectic symmetry * Collectivity * Clusters * Hoyle state * Orderly patterns in nuclei from first principles Subject RIV: BE - Theoretical Physics Impact factor: 11.229, year: 2016

  12. The Five-Factor Model and Self-Determination Theory

    DEFF Research Database (Denmark)

    Olesen, Martin Hammershøj; Thomsen, Dorthe Kirkegaard; Schnieber, Anette

    This study investigates conceptual overlap vs. distinction between individual differences in personality traits, i.e. the Five-Factor Model; and Self-determination Theory, i.e. general causality orientations. Twelve-hundred-and-eighty-seven freshmen (mean age 21.71; 64% women) completed electronic...

  13. A Proposed Model of Jazz Theory Knowledge Acquisition

    Science.gov (United States)

    Ciorba, Charles R.; Russell, Brian E.

    2014-01-01

    The purpose of this study was to test a hypothesized model that proposes a causal relationship of motivation and academic achievement to the acquisition of jazz theory knowledge. Reliability estimates for the latent variables ranged from 0.92 to 0.94. Confirmatory factor analyses of the motivation (standardized root mean square residual…

  14. S matrix theory of the massive Thirring model

    International Nuclear Information System (INIS)

    Berg, B.

    1980-01-01

    The S matrix theory of the massive Thirring model, describing the exact quantum scattering of solitons and their bound states, is reviewed. Treated are: factorization equations and their solution, bound states, generalized Jost functions and Levinson's theorem, scattering of bound states, 'virtual' and anomalous thresholds. (orig.)

  15. Multilevel Higher-Order Item Response Theory Models

    Science.gov (United States)

    Huang, Hung-Yu; Wang, Wen-Chung

    2014-01-01

    In the social sciences, latent traits often have a hierarchical structure, and data can be sampled from multiple levels. Both hierarchical latent traits and multilevel data can occur simultaneously. In this study, we developed a general class of item response theory models to accommodate both hierarchical latent traits and multilevel data. The…

  16. Item Response Theory Models for Performance Decline during Testing

    Science.gov (United States)

    Jin, Kuan-Yu; Wang, Wen-Chung

    2014-01-01

    Sometimes, test-takers may not be able to attempt all items to the best of their ability (with full effort) due to personal factors (e.g., low motivation) or testing conditions (e.g., time limit), resulting in poor performances on certain items, especially those located toward the end of a test. Standard item response theory (IRT) models fail to…

  17. Item Response Theory Modeling of the Philadelphia Naming Test

    Science.gov (United States)

    Fergadiotis, Gerasimos; Kellough, Stacey; Hula, William D.

    2015-01-01

    Purpose: In this study, we investigated the fit of the Philadelphia Naming Test (PNT; Roach, Schwartz, Martin, Grewal, & Brecher, 1996) to an item-response-theory measurement model, estimated the precision of the resulting scores and item parameters, and provided a theoretical rationale for the interpretation of PNT overall scores by relating…

  18. An NCME Instructional Module on Polytomous Item Response Theory Models

    Science.gov (United States)

    Penfield, Randall David

    2014-01-01

    A polytomous item is one for which the responses are scored according to three or more categories. Given the increasing use of polytomous items in assessment practices, item response theory (IRT) models specialized for polytomous items are becoming increasingly common. The purpose of this ITEMS module is to provide an accessible overview of…

  19. Profiles in Leadership: Enhancing Learning through Model and Theory Building.

    Science.gov (United States)

    Mello, Jeffrey A.

    2003-01-01

    A class assignment was designed to present factors affecting leadership dynamics, allow practice in model and theory building, and examine leadership from multicultural perspectives. Students developed a profile of a fictional or real leader and analyzed qualities, motivations, context, and effectiveness in written and oral presentations.…

  20. Compositional models and conditional independence in evidence theory

    Czech Academy of Sciences Publication Activity Database

    Jiroušek, Radim; Vejnarová, Jiřina

    2011-01-01

    Roč. 52, č. 3 (2011), s. 316-334 ISSN 0888-613X Institutional research plan: CEZ:AV0Z10750506 Keywords : Evidence theory * Conditional independence * multidimensional models Subject RIV: BA - General Mathematics Impact factor: 1.948, year: 2011 http://library.utia.cas.cz/separaty/2012/MTR/jirousek-0370515.pdf

  1. Evaluating hydrological model performance using information theory-based metrics

    Science.gov (United States)

    The accuracy-based model performance metrics do not necessarily reflect the qualitative correspondence between simulated and measured streamflow time series. The objective of this work was to use information theory-based metrics to see whether they can be used as a complementary tool for hydrologic m...
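    As a minimal illustration of an information theory-based comparison of simulated and measured series (not the study's actual metrics), the sketch below bins two flow series on shared edges and computes their Kullback-Leibler divergence; the series, bin count, and smoothing constant are hypothetical.

```python
import numpy as np

def kl_divergence(p_counts, q_counts, eps=1e-12):
    """D(P || Q) between two discrete distributions given as histogram counts.
    A small eps keeps empty bins from producing log(0)."""
    p = np.asarray(p_counts, dtype=float) + eps
    q = np.asarray(q_counts, dtype=float) + eps
    p /= p.sum()
    q /= q.sum()
    return float(np.sum(p * np.log(p / q)))

# Hypothetical measured vs. simulated streamflow, binned on common edges
measured = np.array([3.1, 2.8, 5.0, 9.7, 6.2, 4.1, 3.3, 2.9])
simulated = np.array([3.0, 3.1, 4.6, 8.9, 6.8, 4.4, 3.1, 3.0])
edges = np.histogram_bin_edges(np.concatenate([measured, simulated]), bins=4)
d = kl_divergence(np.histogram(measured, edges)[0],
                  np.histogram(simulated, edges)[0])  # 0 only if histograms match
```

    A divergence near zero says the simulated flow-duration behaviour matches the measured one even when pointwise accuracy metrics disagree.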

  2. Stochastic models in risk theory and management accounting

    NARCIS (Netherlands)

    Brekelmans, R.C.M.

    2000-01-01

    This thesis deals with stochastic models in two fields: risk theory and management accounting. Firstly, two extensions of the classical risk process are analyzed. A method is developed that computes bounds on the probability of ruin for the classical risk process extended with a constant interest

  3. Conformal field theories, Coulomb gas picture and integrable models

    International Nuclear Information System (INIS)

    Zuber, J.B.

    1988-01-01

    The aim of the study is to present the links between some results of conformal field theory, the conventional Coulomb gas picture in statistical mechanics and the approach of integrable models. It is shown that families of conformal theories, related by the coset construction to the SU(2) Kac-Moody algebra, may be regarded as obtained from some free field, and modified by the coupling of its winding numbers to floating charges. This representation reflects the procedure of restriction of the corresponding integrable lattice models. The work may be generalized to models based on the coset construction with higher rank algebras. The corresponding integrable models are identified. In the conformal field description, generalized parafermions appear, and are coupled to free fields living on a higher-dimensional torus. The analysis is not as exhaustive as in the SU(2) case: all the various restrictions have not been identified, nor the modular invariants completely classified

  4. Route Choice Model Based on Game Theory for Commuters

    Directory of Open Access Journals (Sweden)

    Licai Yang

    2016-06-01

    Full Text Available The traffic behaviours of commuters may cause traffic congestion during peak hours. An Advanced Traffic Information System can provide dynamic information to travellers, but owing to its lack of timeliness and comprehensiveness, the provided information cannot fully satisfy travellers' needs. Since the assumptions of the traditional route choice model based on Expected Utility Theory conflict with the actual situation, this paper proposes a route choice model based on Game Theory to provide reliable route choices to commuters in actual situations. The proposed model treats the alternative routes as game players and utilizes the precision of predicted information and familiarity with traffic conditions to build a game. The optimal route can be generated by solving the route choice game for a Nash Equilibrium. Simulations and experimental analysis show that the proposed model can describe commuters' routine route choice decisions exactly and that the provided route is reliable.
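    To make the game-theoretic framing concrete, here is a hedged sketch (the routes and payoffs are invented, not the paper's model): two commuters choose between two routes, congestion lowers the payoff when they pick the same one, and pure-strategy Nash equilibria are found by checking mutual best responses.

```python
import itertools

def pure_nash(payoff_a, payoff_b):
    """Return all pure-strategy Nash equilibria (i, j) of a bimatrix game."""
    rows, cols = len(payoff_a), len(payoff_a[0])
    equilibria = []
    for i, j in itertools.product(range(rows), range(cols)):
        a_best = all(payoff_a[i][j] >= payoff_a[k][j] for k in range(rows))
        b_best = all(payoff_b[i][j] >= payoff_b[i][l] for l in range(cols))
        if a_best and b_best:
            equilibria.append((i, j))
    return equilibria

# Hypothetical congestion game: payoff 1 if both commuters share a route, 3 otherwise
pay = [[1, 3], [3, 1]]
eqs = pure_nash(pay, pay)  # the two "split up" outcomes are the equilibria
```

    In equilibrium the commuters spread over the routes, which is the kind of stable routing a game-based advisory system aims for.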

  5. Modeling Composite Assessment Data Using Item Response Theory

    Science.gov (United States)

    Ueckert, Sebastian

    2018-01-01

    Composite assessments aim to combine different aspects of a disease in a single score and are utilized in a variety of therapeutic areas. The data arising from these evaluations are inherently discrete with distinct statistical properties. This tutorial presents the framework of the item response theory (IRT) for the analysis of this data type in a pharmacometric context. The article considers both conceptual (terms and assumptions) and practical questions (modeling software, data requirements, and model building). PMID:29493119
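    The core building block of such an IRT analysis is the item characteristic curve. The sketch below implements the common two-parameter logistic (2PL) model as one illustrative choice; the tutorial itself may use other parameterizations.

```python
import math

def p_correct_2pl(theta, a, b):
    """2PL item response model: probability that a subject with latent trait
    theta responds positively to an item with discrimination a and difficulty b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# A subject whose latent trait equals the item difficulty succeeds half the time
p_mid = p_correct_2pl(theta=0.0, a=1.5, b=0.0)
p_hi = p_correct_2pl(theta=2.0, a=1.5, b=0.0)   # higher trait, higher probability
```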

  6. Constitutive relationships and models in continuum theories of multiphase flows

    International Nuclear Information System (INIS)

    Decker, R.

    1989-09-01

    In April, 1989, a workshop on constitutive relationships and models in continuum theories of multiphase flows was held at NASA's Marshall Space Flight Center. Topics of constitutive relationships for the partial or per phase stresses, including the concept of solid phase pressure are discussed. Models used for the exchange of mass, momentum, and energy between the phases in a multiphase flow are also discussed. The program, abstracts, and texts of the presentations from the workshop are included

  7. Perturbation theory around the Wess-Zumino-Witten model

    International Nuclear Information System (INIS)

    Hasseln, H. v.

    1991-05-01

    We consider a perturbation of the Wess-Zumino-Witten model in 2D by a current-current interaction. The β-function is computed to third order in the coupling constant and a nontrivial fixed point is found. By non-abelian bosonization, this perturbed WZW model is shown to have the same β-function (at least to order g²) as the fermionic theory with a four-fermion interaction. (orig.) [de

  8. A SIPA-based theory of irradiation creep in the low swelling rate regime

    International Nuclear Information System (INIS)

    Garner, F.A.; Woo, C.H.

    1991-11-01

    A model is presented which describes the major facets of the relationships between irradiation creep, void swelling and applied stress. The increasing anisotropy of the distribution of dislocation Burgers vectors with stress level plays a major role in this model. Although bcc metals are known to creep and swell at lower rates than fcc metals, it is predicted that the creep-swelling coupling coefficient is actually larger.

  9. A model for C-14 tracer evaporative rate analysis (ERA)

    International Nuclear Information System (INIS)

    Gardner, R.P.; Verghese, K.

    1993-01-01

    A simple model has been derived and tested for the C-14 tracer evaporative rate analysis (ERA) method. It allows the accurate determination of the evaporative rate coefficient of the C-14 tracer detector in the presence of variable evaporation rates of the detector solvent and variable background counting rates. The evaporation rate coefficient should be the most fundamental parameter available in this analysis method and, therefore, its measurements with the proposed model should allow the most direct correlations to be made with the system properties of interest such as surface cleanliness. (author)

  10. A general-model-space diagrammatic perturbation theory

    International Nuclear Information System (INIS)

    Hose, G.; Kaldor, U.

    1980-01-01

    A diagrammatic many-body perturbation theory applicable to arbitrary model spaces is presented. The necessity of having a complete model space (all possible occupancies of the partially-filled shells) is avoided. This requirement may be troublesome for systems with several well-spaced open shells, such as most atomic and molecular excited states, as a complete model space spans a very broad energy range and leaves out states within that range, leading to poor or no convergence of the perturbation series. The method presented here would be particularly useful for such states. The solution of a model problem (the He₂ excited Σ_g^+ states) is demonstrated. (Auth.)

  11. A critique of recent models for human error rate assessment

    International Nuclear Information System (INIS)

    Apostolakis, G.E.

    1988-01-01

    This paper critically reviews two groups of models for assessing human error rates under accident conditions. The first group, which includes the US Nuclear Regulatory Commission (NRC) handbook model and the human cognitive reliability (HCR) model, considers as fundamental the time that is available to the operators to act. The second group, which is represented by the success likelihood index methodology multiattribute utility decomposition (SLIM-MAUD) model, relies on ratings of the human actions with respect to certain qualitative factors and the subsequent derivation of error rates. These models are evaluated with respect to two criteria: the treatment of uncertainties and the internal coherence of the models. In other words, this evaluation focuses primarily on normative aspects of these models. The principal findings are as follows: (1) Both of the time-related models provide human error rates as a function of the available time for action and the prevailing conditions. However, the HCR model ignores the important issue of state-of-knowledge uncertainties, dealing exclusively with stochastic uncertainty, whereas the model presented in the NRC handbook handles both types of uncertainty. (2) SLIM-MAUD provides a highly structured approach for the derivation of human error rates under given conditions. However, the treatment of the weights and ratings in this model is internally inconsistent. (author)
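    To illustrate the SLIM-style derivation the review refers to, here is a minimal sketch: a success likelihood index (SLI) is formed as a weighted sum of performance-shaping-factor ratings and mapped to a human error probability through a log-linear calibration. The factors, weights, ratings, and calibration constants below are all hypothetical.

```python
def success_likelihood_index(weights, ratings):
    """Weighted average of factor ratings (each rating scaled to [0, 1])."""
    total = sum(weights)
    return sum(w * r for w, r in zip(weights, ratings)) / total

def human_error_probability(sli, a=-3.0, b=-1.0):
    """Log-linear calibration log10(HEP) = a * SLI + b (constants are made up)."""
    return 10.0 ** (a * sli + b)

# Three hypothetical performance-shaping factors: stress, training, interface
sli = success_likelihood_index(weights=[0.5, 0.3, 0.2], ratings=[0.8, 0.6, 0.9])
hep = human_error_probability(sli)   # a lower SLI gives a higher error probability
```

    The internal-consistency critique in the abstract concerns precisely how such weights and ratings are elicited and combined.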

  12. Theory-based Bayesian models of inductive learning and reasoning.

    Science.gov (United States)

    Tenenbaum, Joshua B; Griffiths, Thomas L; Kemp, Charles

    2006-07-01

    Inductive inference allows humans to make powerful generalizations from sparse data when learning about word meanings, unobserved properties, causal relationships, and many other aspects of the world. Traditional accounts of induction emphasize either the power of statistical learning, or the importance of strong constraints from structured domain knowledge, intuitive theories or schemas. We argue that both components are necessary to explain the nature, use and acquisition of human knowledge, and we introduce a theory-based Bayesian framework for modeling inductive learning and reasoning as statistical inferences over structured knowledge representations.
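    At its simplest, the theory-based Bayesian framework scores structured hypotheses by Bayes' rule; smaller, more specific hypotheses assign higher likelihood to each consistent example. The toy sketch below (hypothesis space and numbers invented for illustration) updates a prior over two candidate concepts after seeing three examples.

```python
def posterior(priors, likelihoods):
    """Bayes' rule over a discrete hypothesis space."""
    unnormalized = [p * l for p, l in zip(priors, likelihoods)]
    evidence = sum(unnormalized)
    return [u / evidence for u in unnormalized]

# Hypothetical concepts for the observed examples {2, 4, 8}:
# H0 = "powers of two"  (6 members below 100 -> likelihood (1/6) per example),
# H1 = "even numbers"   (50 members below 100 -> likelihood (1/50) per example)
post = posterior(priors=[0.3, 0.7], likelihoods=[(1 / 6) ** 3, (1 / 50) ** 3])
```

    Despite the lower prior, the sharper hypothesis dominates after only three examples, which is the kind of strong generalization from sparse data the abstract describes.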

  13. Fluid analog model for boundary effects in field theory

    International Nuclear Information System (INIS)

    Ford, L. H.; Svaiter, N. F.

    2009-01-01

    Quantum fluctuations in the density of a fluid with a linear phonon dispersion relation are studied. In particular, we treat the changes in these fluctuations due to nonclassical states of phonons and to the presence of boundaries. These effects are analogous to similar effects in relativistic quantum field theory, and we argue that the case of the fluid is a useful analog model for effects in field theory. We further argue that the changes in the mean squared density are, in principle, observable by light scattering experiments.

  14. Chern-Simons Theory, Matrix Models, and Topological Strings

    International Nuclear Information System (INIS)

    Walcher, J

    2006-01-01

    This book is a find. Marino meets the challenge of filling in less than 200 pages the need for an accessible review of topological gauge/gravity duality. He is one of the pioneers of the subject and a clear expositor. It is no surprise that reading this book is a great pleasure. The existence of dualities between gauge theories and theories of gravity remains one of the most surprising recent discoveries in mathematical physics. While it is probably fair to say that we do not yet understand the full reach of such a relation, the impressive amount of evidence that has accumulated over the past years can be regarded as a substitute for a proof, and will certainly help to delineate the question of what is the most fundamental quantum mechanical theory. Here is a brief summary of the book. The journey begins with matrix models and an introduction to various techniques for the computation of integrals including perturbative expansion, large-N approximation, saddle point analysis, and the method of orthogonal polynomials. The second chapter, on Chern-Simons theory, is the longest and probably the most complete one in the book. Starting from the action we meet Wilson loop observables, the associated perturbative 3-manifold invariants, Witten's exact solution via the canonical duality to WZW models, the framing ambiguity, as well as a collection of results on knot invariants that can be derived from Chern-Simons theory and the combinatorics of U (∞) representation theory. The chapter also contains a careful derivation of the large-N expansion of the Chern-Simons partition function, which forms the cornerstone of its interpretation as a closed string theory. Finally, we learn that Chern-Simons theory can sometimes also be represented as a matrix model. The story then turns to the gravity side, with an introduction to topological sigma models (chapter 3) and topological string theory (chapter 4). 
While this presentation is necessarily rather condensed (and the beginner may

  15. Finite-size scaling theory and quantum hamiltonian Field theory: the transverse Ising model

    International Nuclear Information System (INIS)

    Hamer, C.J.; Barber, M.N.

    1979-01-01

    Exact results for the mass gap, specific heat and susceptibility of the one-dimensional transverse Ising model on a finite lattice are generated by constructing a finite matrix representation of the Hamiltonian using strong-coupling eigenstates. The critical behaviour of the limiting infinite chain is analysed using finite-size scaling theory. In this way, excellent estimates (to within 1/2% accuracy) are found for the critical coupling and the exponents α, ν and γ
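    As a hedged sketch of the finite-lattice construction (brute-force dense diagonalization rather than the paper's strong-coupling basis), the code below builds the open-chain transverse-field Ising Hamiltonian for a few spins and returns the mass gap; the sizes and couplings are illustrative.

```python
import numpy as np

def transverse_ising_gap(n, gamma, j=1.0):
    """Mass gap E1 - E0 of H = -j * sum sz_i sz_{i+1} - gamma * sum sx_i
    on an open chain of n spins, by dense diagonalization."""
    sx = np.array([[0.0, 1.0], [1.0, 0.0]])
    sz = np.array([[1.0, 0.0], [0.0, -1.0]])

    def site_op(op, site):
        mats = [op if k == site else np.eye(2) for k in range(n)]
        full = mats[0]
        for m in mats[1:]:
            full = np.kron(full, m)
        return full

    h = np.zeros((2 ** n, 2 ** n))
    for i in range(n - 1):
        h -= j * site_op(sz, i) @ site_op(sz, i + 1)
    for i in range(n):
        h -= gamma * site_op(sx, i)
    energies = np.linalg.eigvalsh(h)
    return energies[1] - energies[0]

# Deep in the ordered phase the two lowest states are nearly degenerate,
# while a strong transverse field opens a large gap
gap_ordered = transverse_ising_gap(4, gamma=0.1)
gap_disordered = transverse_ising_gap(4, gamma=2.0)
```

    Finite-size scaling then extrapolates sequences of such finite-chain gaps to the infinite-chain critical point.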

  16. A General Framework for Portfolio Theory. Part I: theory and various models

    OpenAIRE

    Maier-Paape, Stanislaus; Zhu, Qiji Jim

    2017-01-01

    Utility and risk are two often competing measurements of investment success. We show that the efficient trade-off between these two measurements for investment portfolios happens, in general, on a convex curve in the two-dimensional space of utility and risk. This is a rather general pattern. The modern portfolio theory of Markowitz [H. Markowitz, Portfolio Selection, 1959] and its natural generalization, the capital market pricing model [W. F. Sharpe, Mutual fund performance, 1966], are spe...

  17. Angle-dependent strong-field molecular ionization rates with tuned range-separated time-dependent density functional theory

    Energy Technology Data Exchange (ETDEWEB)

    Sissay, Adonay [Department of Chemistry, Louisiana State University, Baton Rouge, Louisiana 70803 (United States); Abanador, Paul; Mauger, François; Gaarde, Mette; Schafer, Kenneth J. [Department of Physics and Astronomy, Louisiana State University, Baton Rouge, Louisiana 70803 (United States); Lopata, Kenneth, E-mail: klopata@lsu.edu [Department of Chemistry, Louisiana State University, Baton Rouge, Louisiana 70803 (United States); Center for Computation and Technology, Louisiana State University, Baton Rouge, Louisiana 70803 (United States)

    2016-09-07

    Strong-field ionization and the resulting electronic dynamics are important for a range of processes such as high harmonic generation, photodamage, charge resonance enhanced ionization, and ionization-triggered charge migration. Modeling ionization dynamics in molecular systems from first-principles can be challenging due to the large spatial extent of the wavefunction which stresses the accuracy of basis sets, and the intense fields which require non-perturbative time-dependent electronic structure methods. In this paper, we develop a time-dependent density functional theory approach which uses a Gaussian-type orbital (GTO) basis set to capture strong-field ionization rates and dynamics in atoms and small molecules. This involves propagating the electronic density matrix in time with a time-dependent laser potential and a spatial non-Hermitian complex absorbing potential which is projected onto an atom-centered basis set to remove ionized charge from the simulation. For the density functional theory (DFT) functional we use a tuned range-separated functional LC-PBE*, which has the correct asymptotic 1/r form of the potential and a reduced delocalization error compared to traditional DFT functionals. Ionization rates are computed for hydrogen, molecular nitrogen, and iodoacetylene under various field frequencies, intensities, and polarizations (angle-dependent ionization), and the results are shown to quantitatively agree with time-dependent Schrödinger equation and strong-field approximation calculations. This tuned DFT with GTO method opens the door to predictive all-electron time-dependent density functional theory simulations of ionization and ionization-triggered dynamics in molecular systems using tuned range-separated hybrid functionals.

  19. Data analysis using the Binomial Failure Rate common cause model

    International Nuclear Information System (INIS)

    Atwood, C.L.

    1983-09-01

    This report explains how to use the Binomial Failure Rate (BFR) method to estimate common cause failure rates. The entire method is described, beginning with the conceptual model, and covering practical issues of data preparation, treatment of variation in the failure rates, Bayesian estimation of the quantities of interest, checking the model assumptions for lack of fit to the data, and the ultimate application of the answers
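    The conceptual model can be sketched as follows (a minimal reconstruction of the standard BFR formulation, not the report's full Bayesian treatment): common-cause shocks arrive at rate mu, each of m components fails independently with probability p per shock, so the rate of events failing exactly k components is binomial in k, and independent single failures add a separate rate. All numbers below are hypothetical.

```python
from math import comb

def bfr_event_rate(k, m, mu, p, lam=0.0):
    """Rate of failure events involving exactly k of m components under the
    Binomial Failure Rate model: shocks at rate mu, per-component failure
    probability p per shock, plus an independent single-failure rate lam."""
    rate = mu * comb(m, k) * p ** k * (1.0 - p) ** (m - k)
    if k == 1:
        rate += m * lam   # independent failures only ever take out one unit
    return rate

# Hypothetical data: 4 redundant pumps, shocks at 0.01/yr, p = 0.2
rates = [bfr_event_rate(k, 4, mu=0.01, p=0.2, lam=0.001) for k in range(5)]
```

    Summed over k, the shock contributions recover the total shock rate mu, which is one internal consistency check on fitted parameters.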

  20. An analytical model of nonproportional scintillator light yield in terms of recombination rates

    International Nuclear Information System (INIS)

    Bizarri, G.; Moses, W. W.; Singh, J.; Vasil'ev, A. N.; Williams, R. T.

    2009-01-01

    Analytical expressions for the local light yield as a function of the local deposited energy (-dE/dx) and total scintillation yield integrated over the track of an electron of initial energy E are derived from radiative and/or nonradiative rates of first through third order in density of electronic excitations. The model is formulated in terms of rate constants, some of which can be determined independently from time-resolved spectroscopy and others estimated from measured light yield efficiency as a constraint assumed to apply in each kinetic order. The rates and parameters are used in the theory to calculate scintillation yield versus primary electron energy for comparison to published experimental results on four scintillators. Influence of the track radius on the yield is also discussed. Results are found to be qualitatively consistent with the observed scintillation light yield. The theory can be applied to any scintillator if the rates of the radiative and nonradiative processes are known
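    A hedged sketch of the kinetic-order competition described above (rate constants invented; the paper's actual expressions also integrate over the track): the local light yield is the radiative first-order rate divided by the sum of first-, second-, and third-order loss channels, with the higher orders growing with excitation density n, which itself tracks -dE/dx.

```python
def local_light_yield(n, k1r=1.0, k1nr=0.2, k2=0.05, k3=0.002):
    """Fraction of excitations decaying radiatively when first-order (k1r + k1nr),
    second-order (k2 * n) and third-order (k3 * n**2) channels compete at
    excitation density n. All rate constants here are hypothetical."""
    return k1r / (k1r + k1nr + k2 * n + k3 * n ** 2)

# Yield falls as the local deposited energy (-dE/dx), and hence n, rises:
low_density = local_light_yield(1.0)
high_density = local_light_yield(50.0)
```

    This density dependence is what produces the nonproportionality of total yield with primary electron energy.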

  1. Prediction of pipeline corrosion rate based on grey Markov models

    International Nuclear Information System (INIS)

    Chen Yonghong; Zhang Dafa; Peng Guichu; Wang Yuemin

    2009-01-01

    Based on a model combining the grey model and the Markov model, the prediction of the corrosion rate of nuclear power pipelines was studied. The grey model was improved, yielding an optimized unbiased grey model. This new model was used to predict the trend of the corrosion rate, while the Markov model was used to predict the residual errors. To improve the prediction precision, a rolling operation method was used in these prediction processes. The results indicate that the improvement to the grey model is effective, that the prediction precision of the model combining the optimized unbiased grey model and the Markov model is better, and that the rolling operation method may improve the prediction precision further. (authors)
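    A minimal sketch of the grey half of such a combined model: the classic GM(1,1) forecaster below fits the accumulated series with a first-order grey differential equation and extrapolates. The paper's unbiased optimization and the Markov residual correction are not reproduced, and the data are invented.

```python
import numpy as np

def gm11_forecast(series, steps=1):
    """GM(1,1) grey model: fit dx1/dt + a*x1 = b on the accumulated series x1
    and return `steps` forecasts of the original series."""
    x = np.asarray(series, dtype=float)
    n = len(x)
    x1 = np.cumsum(x)                    # accumulated generating operation
    z = 0.5 * (x1[1:] + x1[:-1])         # background (mean) sequence
    B = np.column_stack([-z, np.ones(n - 1)])
    a, b = np.linalg.lstsq(B, x[1:], rcond=None)[0]

    def x1_hat(k):                       # fitted accumulated value at index k
        return (x[0] - b / a) * np.exp(-a * k) + b / a

    return [x1_hat(k) - x1_hat(k - 1) for k in range(n, n + steps)]

# Hypothetical corrosion-rate history (arbitrary units), roughly geometric growth
pred = gm11_forecast([2.0, 4.0, 8.0, 16.0], steps=1)
```

    In the combined scheme, a Markov chain classifies the sign and size of past residuals and corrects each GM(1,1) forecast accordingly; rolling the fit window forward re-estimates (a, b) as new measurements arrive.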

  2. Real Exchange Rate and Productivity in an OLG Model

    OpenAIRE

    Thi Hong Thinh DOAN; Karine GENTE

    2013-01-01

    This article develops an overlapping generations model to show how demography and savings affect the relationship between the real exchange rate (RER) and productivity. In high-saving (low-saving) countries and/or low-population-growth-rate countries, a rise in productivity leads to a real depreciation (appreciation), whereas the RER may appreciate or depreciate in high-population-growth-rate countries. Using panel data, we conclude that a rise in productivity generally causes a real exchange rate appreciati...

  3. Exchange rate predictability and state-of-the-art models

    OpenAIRE

    Yeșin, Pınar

    2016-01-01

    This paper empirically evaluates the predictive performance of the International Monetary Fund's (IMF) exchange rate assessments with respect to future exchange rate movements. The assessments of real trade-weighted exchange rates were conducted from 2006 to 2011, and were based on three state-of-the-art exchange rate models with a medium-term focus which were developed by the IMF. The empirical analysis using 26 advanced and emerging market economy currencies reveals that the "diagnosis" of ...

  4. Crossover behavior of the thermal conductance and Kramers’ transition rate theory

    Science.gov (United States)

    Velizhanin, Kirill A.; Sahu, Subin; Chien, Chih-Chun; Dubi, Yonatan; Zwolak, Michael

    2015-12-01

    Kramers’ theory frames chemical reaction rates in solution as reactants overcoming a barrier in the presence of friction and noise. For weak coupling to the solution, the reaction rate is limited by the rate at which the solution can restore equilibrium after a subset of reactants have surmounted the barrier to become products. For strong coupling, there are always sufficiently energetic reactants. However, the solution returns many of the intermediate states back to the reactants before the product fully forms. Here, we demonstrate that the thermal conductance displays an analogous physical response to the friction and noise that drive the heat current through a material or structure. A crossover behavior emerges where the thermal reservoirs dominate the conductance at the extremes and only in the intermediate region are the intrinsic properties of the lattice manifest. Not only does this shed new light on Kramers’ classic turnover problem, this result is significant for the design of devices for thermal management and other applications, as well as the proper simulation of transport at the nanoscale.
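    The turnover the abstract appeals to can be sketched numerically (parameter values and the crude low-/high-friction interpolation are illustrative, not the paper's model): the escape rate grows linearly with friction in the energy-diffusion regime, falls with the Kramers spatial-diffusion factor at strong friction, and taking the smaller branch reproduces a maximum at intermediate coupling.

```python
import math

def kramers_rate(gamma, omega0=1.0, omega_b=1.0, barrier=5.0, kT=1.0):
    """Crude Kramers turnover sketch: minimum of the weak-coupling
    (energy-diffusion) branch and the spatial-diffusion branch."""
    arrhenius = math.exp(-barrier / kT)
    k_high = (omega0 / (2.0 * math.pi)) * (
        (math.sqrt(gamma ** 2 / 4.0 + omega_b ** 2) - gamma / 2.0) / omega_b
    ) * arrhenius
    k_low = gamma * (barrier / kT) * arrhenius
    return min(k_low, k_high)

# Rate rises with friction, peaks, then falls: the Kramers turnover
weak, mid, strong = (kramers_rate(g) for g in (0.001, 0.5, 10.0))
```

    The thermal-conductance analogy replaces the reaction rate with the heat current and the solvent friction with the coupling to the thermal reservoirs, giving the same non-monotonic profile.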

  5. Using effort-reward imbalance theory to understand high rates of depression and anxiety among clergy.

    Science.gov (United States)

    Proeschold-Bell, Rae Jean; Miles, Andrew; Toth, Matthew; Adams, Christopher; Smith, Bruce W; Toole, David

    2013-12-01

    The clergy occupation is unique in its combination of role strains and higher calling, putting clergy mental health at risk. We surveyed all United Methodist clergy in North Carolina, and 95% (n = 1,726) responded, with 38% responding via phone interview. We compared clergy phone interview depression rates, assessed using the Patient Health Questionnaire (PHQ-9), to those of in-person interviews in a representative United States sample that also used the PHQ-9. The clergy depression prevalence was 8.7%, significantly higher than the 5.5% rate of the national sample. We used logistic regression to explain depression, and also anxiety, assessed using the Hospital Anxiety and Depression Scale. As hypothesized by effort-reward imbalance theory, several extrinsic demands (job stress, life unpredictability) and intrinsic demands (guilt about not doing enough work, doubting one's call to ministry) significantly predicted depression and anxiety, as did rewards such as ministry satisfaction and lack of financial stress. The high rate of clergy depression signals the need for preventive policies and programs for clergy. The extrinsic and intrinsic demands and rewards suggest specific actions to improve clergy mental health.

  6. Should the model for risk-informed regulation be game theory rather than decision theory?

    Science.gov (United States)

    Bier, Vicki M; Lin, Shi-Woei

    2013-02-01

    deception), to identify optimal regulatory strategies. Therefore, we believe that the types of regulatory interactions analyzed in this article are better modeled using game theory rather than decision theory. In particular, the goals of this article are to review the relevant literature in game theory and regulatory economics (to stimulate interest in this area among risk analysts), and to present illustrative results showing how the application of game theory can provide useful insights into the theory and practice of risk-informed regulation. © 2012 Society for Risk Analysis.

  7. sigma model approach to the heterotic string theory

    International Nuclear Information System (INIS)

    Sen, A.

    1985-09-01

    Relation between the equations of motion for the massless fields in the heterotic string theory, and the conformal invariance of the sigma model describing the propagation of the heterotic string in arbitrary background massless fields is discussed. It is emphasized that this sigma model contains complete information about the string theory. Finally, we discuss the extension of the Hull-Witten proof of local gauge and Lorentz invariance of the sigma-model to higher order in α', and the modification of the transformation laws of the antisymmetric tensor field under these symmetries. Presence of anomaly in the naive N = 1/2 supersymmetry transformation is also pointed out in this context. 12 refs

  8. Integrable lambda models and Chern-Simons theories

    International Nuclear Information System (INIS)

    Schmidtt, David M.

    2017-01-01

    In this note we reveal a connection between the phase space of lambda models on S¹×ℝ and the phase space of double Chern-Simons theories on D×ℝ and explain in the process the origin of the non-ultralocality of the Maillet bracket, which emerges as a boundary algebra. In particular, this means that the (classical) AdS₅×S⁵ lambda model can be understood as a double Chern-Simons theory defined on the Lie superalgebra psu(2,2|4) after a proper dependence of the spectral parameter is introduced. This offers a possibility for avoiding the use of the problematic non-ultralocal Poisson algebras that preclude the introduction of lattice regularizations and the application of the QISM to string sigma models. The utility of the equivalence at the quantum level is, however, still to be explored.

  9. Models for probability and statistical inference theory and applications

    CERN Document Server

    Stapleton, James H

    2007-01-01

This concise, yet thorough, book is enhanced with simulations and graphs to build the intuition of readers. Models for Probability and Statistical Inference was written over a five-year period and serves as a comprehensive treatment of the fundamentals of probability and statistical inference. With detailed theoretical coverage found throughout the book, readers acquire the fundamentals needed to advance to more specialized topics, such as sampling, linear models, design of experiments, statistical computing, survival analysis, and bootstrapping. Ideal as a textbook for a two-semester sequence on probability and statistical inference, early chapters provide coverage on probability and include discussions of: discrete models and random variables; discrete distributions including binomial, hypergeometric, geometric, and Poisson; continuous, normal, gamma, and conditional distributions; and limit theory. Since limit theory is usually the most difficult topic for readers to master, the author thoroughly discusses mo...

  10. Integrable lambda models and Chern-Simons theories

    Energy Technology Data Exchange (ETDEWEB)

    Schmidtt, David M. [Departamento de Física, Universidade Federal de São Carlos,Caixa Postal 676, CEP 13565-905, São Carlos-SP (Brazil)

    2017-05-03

In this note we reveal a connection between the phase space of lambda models on S¹×ℝ and the phase space of double Chern-Simons theories on D×ℝ and explain in the process the origin of the non-ultralocality of the Maillet bracket, which emerges as a boundary algebra. In particular, this means that the (classical) AdS₅×S⁵ lambda model can be understood as a double Chern-Simons theory defined on the Lie superalgebra psu(2,2|4) after a proper dependence of the spectral parameter is introduced. This offers a possibility for avoiding the use of the problematic non-ultralocal Poisson algebras that preclude the introduction of lattice regularizations and the application of the QISM to string sigma models. The utility of the equivalence at the quantum level is, however, still to be explored.

  11. Matrix models and stochastic growth in Donaldson-Thomas theory

    Energy Technology Data Exchange (ETDEWEB)

    Szabo, Richard J. [Department of Mathematics, Heriot-Watt University, Colin Maclaurin Building, Riccarton, Edinburgh EH14 4AS, United Kingdom and Maxwell Institute for Mathematical Sciences, Edinburgh (United Kingdom); Tierz, Miguel [Grupo de Fisica Matematica, Complexo Interdisciplinar da Universidade de Lisboa, Av. Prof. Gama Pinto, 2, PT-1649-003 Lisboa (Portugal); Departamento de Analisis Matematico, Facultad de Ciencias Matematicas, Universidad Complutense de Madrid, Plaza de Ciencias 3, 28040 Madrid (Spain)

    2012-10-15

We show that the partition functions which enumerate Donaldson-Thomas invariants of local toric Calabi-Yau threefolds without compact divisors can be expressed in terms of specializations of the Schur measure. We also discuss the relevance of the Hall-Littlewood and Jack measures in the context of BPS state counting and study the partition functions at arbitrary points of the Kähler moduli space. This rewriting in terms of symmetric functions leads to a unitary one-matrix model representation for Donaldson-Thomas theory. We describe explicitly how this result is related to the unitary matrix model description of Chern-Simons gauge theory. This representation is used to show that the generating functions for Donaldson-Thomas invariants are related to tau-functions of the integrable Toda and Toeplitz lattice hierarchies. The matrix model also leads to an interpretation of Donaldson-Thomas theory in terms of non-intersecting paths in the lock-step model of vicious walkers. We further show that these generating functions can be interpreted as normalization constants of a corner growth/last-passage stochastic model.

  12. Matrix models and stochastic growth in Donaldson-Thomas theory

    International Nuclear Information System (INIS)

    Szabo, Richard J.; Tierz, Miguel

    2012-01-01

    We show that the partition functions which enumerate Donaldson-Thomas invariants of local toric Calabi-Yau threefolds without compact divisors can be expressed in terms of specializations of the Schur measure. We also discuss the relevance of the Hall-Littlewood and Jack measures in the context of BPS state counting and study the partition functions at arbitrary points of the Kähler moduli space. This rewriting in terms of symmetric functions leads to a unitary one-matrix model representation for Donaldson-Thomas theory. We describe explicitly how this result is related to the unitary matrix model description of Chern-Simons gauge theory. This representation is used to show that the generating functions for Donaldson-Thomas invariants are related to tau-functions of the integrable Toda and Toeplitz lattice hierarchies. The matrix model also leads to an interpretation of Donaldson-Thomas theory in terms of non-intersecting paths in the lock-step model of vicious walkers. We further show that these generating functions can be interpreted as normalization constants of a corner growth/last-passage stochastic model.

  13. Forewarning model for water pollution risk based on Bayes theory.

    Science.gov (United States)

    Zhao, Jun; Jin, Juliang; Guo, Qizhong; Chen, Yaqian; Lu, Mengxiong; Tinoco, Luis

    2014-02-01

In order to reduce losses from water pollution, a forewarning model for water pollution risk based on Bayes theory was studied. The model is built upon risk indexes in complex systems, proceeding from the whole structure and its components. In this study, principal components analysis is used to screen the index systems. A hydrological model is employed to simulate index values according to the prediction principle. Bayes theory is adopted to obtain the posterior distribution from the prior distribution and sample information, so that the sample features better reflect and represent the population. The forewarning level is judged by the maximum-probability rule, and local conditions are then used to propose management strategies that transform heavy warnings into lesser ones. This study takes Taihu Basin as an example. After application and verification of the forewarning model for water pollution risk against actual and simulated data from 2000 to 2009, the forewarning level in 2010 is given as a severe warning, which coincides well with the logistic curve. The model is shown to be rigorous in theory yet flexible in method, reasonable in result yet simple in structure, with strong logical superiority and regional adaptability, providing a new way to warn of water pollution risk.
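The Bayes step described above can be sketched with made-up numbers: combine a prior over warning levels with the likelihood of the observed index under each level, then pick the level with the maximum posterior probability. The levels, prior, and likelihood values below are purely illustrative, not taken from the Taihu Basin study.

```python
# Hypothetical sketch of the Bayes update and the maximum-probability rule.

def posterior(prior, likelihood):
    """Posterior over warning levels via Bayes' theorem."""
    joint = [p * l for p, l in zip(prior, likelihood)]
    total = sum(joint)
    return [j / total for j in joint]

levels = ["light", "moderate", "severe"]
prior = [0.5, 0.3, 0.2]        # illustrative prior distribution over levels
likelihood = [0.1, 0.3, 0.9]   # illustrative P(observed index | level)

post = posterior(prior, likelihood)
warning = levels[max(range(len(post)), key=post.__getitem__)]
print(warning)  # the maximum-probability rule picks the most probable level
```

Even with a modest prior on the severe level, a sufficiently high likelihood for the observed index can make it the maximum-posterior choice.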

  14. Soliton excitations in polyacetylene and relativistic field theory models

    International Nuclear Information System (INIS)

    Campbell, D.K.; Bishop, A.R.; Los Alamos Scientific Lab., NM

    1982-01-01

A continuum model of a Peierls-dimerized chain, as described generally by Brazovskii and discussed for the case of polyacetylene by Takayama, Lin-Liu and Maki (TLM), is considered. The continuum (Bogoliubov-de Gennes) equations arising in this model of interacting electrons and phonons are shown to be equivalent to the static, semiclassical equations for a solvable model field theory of self-coupled fermions - the N = 2 Gross-Neveu model. Based on this equivalence we note the existence of soliton defect states in polyacetylene that are additional to, and qualitatively different from, the amplitude kinks commonly discussed. The new solutions do not have the topological stability of kinks but are essentially conventional strong-coupling polarons in the dimerized chain. They carry spin 1/2 and charge ±e. In addition, we discuss further areas in which known field theory results may apply to a Peierls-dimerized chain, including relations between phenomenological φ⁴ and continuum electron-phonon models, and the structure of the fully quantum versus mean field theories. (orig.)

  15. A permutation information theory tour through different interest rate maturities: the Libor case.

    Science.gov (United States)

    Bariviera, Aurelio Fernández; Guercio, María Belén; Martinez, Lisana B; Rosso, Osvaldo A

    2015-12-13

    This paper analyses Libor interest rates for seven different maturities and referred to operations in British pounds, euros, Swiss francs and Japanese yen, during the period 2001-2015. The analysis is performed by means of two quantifiers derived from information theory: the permutation Shannon entropy and the permutation Fisher information measure. An anomalous behaviour in the Libor is detected in all currencies except euros during the years 2006-2012. The stochastic switch is more severe in one, two and three months maturities. Given the special mechanism of Libor setting, we conjecture that the behaviour could have been produced by the manipulation that was uncovered by financial authorities. We argue that our methodology is pertinent as a market overseeing instrument. © 2015 The Author(s).
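As a rough illustration of the first quantifier used above, a minimal permutation (Bandt-Pompe) Shannon entropy can be computed as below. The embedding order and the toy series are arbitrary choices for the sketch, not the paper's settings.

```python
from collections import Counter
from math import factorial, log

def permutation_entropy(series, order=3):
    """Normalized permutation Shannon entropy over Bandt-Pompe ordinal patterns."""
    counts = Counter()
    for i in range(len(series) - order + 1):
        window = series[i:i + order]
        # ordinal pattern: the argsort of the window (ties broken by position)
        pattern = tuple(sorted(range(order), key=window.__getitem__))
        counts[pattern] += 1
    total = sum(counts.values())
    h = -sum((c / total) * log(c / total) for c in counts.values())
    return h / log(factorial(order))  # normalize to [0, 1] by log(order!)

# A monotone series exhibits a single ordinal pattern, so its entropy is 0;
# irregular series spread over more patterns and approach 1.
print(permutation_entropy([1, 2, 3, 4, 5, 6]))  # monotone data -> entropy 0
```

Low values of this quantifier on a rate series are exactly the kind of anomalously regular behaviour the paper flags in the Libor data.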

  16. Surface hopping, transition state theory, and decoherence. II. Thermal rate constants and detailed balance

    Energy Technology Data Exchange (ETDEWEB)

    Jain, Amber; Subotnik, Joseph E., E-mail: subotnik@sas.upenn.edu [Department of Chemistry, University of Pennsylvania, 231 South 34th Street, Philadelphia, Pennsylvania 19104 (United States)

    2015-10-07

    We investigate a simple approach to compute a non-adiabatic thermal rate constant using the fewest switches surface hopping (FSSH) dynamics. We study the effects of both decoherence (using our augmented-FSSH (A-FSSH) algorithm) and forbidden hops over a large range of parameters, including high and low friction regimes, and weak and strong electronic coupling regimes. Furthermore, when possible, we benchmark our results against exact hierarchy equations of motion results, where we usually find a maximum error of roughly a factor of two (at reasonably large temperatures). In agreement with Hammes-Schiffer and Tully, we find that a merger of transition state theory and surface hopping can be both accurate and efficient when performed correctly. We further show that detailed balance is followed approximately by A-FSSH dynamics.

  17. Modeling Atmospheric Turbulence via Rapid Distortion Theory: Spectral Tensor of Velocity and Buoyancy

    DEFF Research Database (Denmark)

    Chougule, Abhijit S.; Mann, Jakob; Kelly, Mark C.

    2017-01-01

A spectral tensor model is presented for turbulent fluctuations of wind velocity components and temperature, assuming uniform vertical gradients in mean temperature and mean wind speed. The model is built upon rapid distortion theory (RDT) following studies by Mann and by Hanazaki and Hunt, using the eddy lifetime parameterization of Mann to make the model stationary. The buoyant spectral tensor model is driven via five parameters: the viscous dissipation rate epsilon, length scale of energy-containing eddies L, a turbulence anisotropy parameter Gamma, gradient Richardson number (Ri) representing...

  18. Hypersurface Homogeneous Cosmological Model in Modified Theory of Gravitation

    Science.gov (United States)

    Katore, S. D.; Hatkar, S. P.; Baxi, R. J.

    2016-12-01

We study a hypersurface homogeneous space-time in the framework of the f(R, T) theory of gravitation in the presence of a perfect fluid. Exact solutions of the field equations are obtained for exponential and power-law volumetric expansions. We also solve the field equations by assuming a proportionality relation between the shear scalar (σ) and the expansion scalar (θ). It is observed that in the exponential model the universe approaches isotropy at large time (late universe). The investigated model is notably accelerating and expanding. The physical and geometrical properties of the investigated model are also discussed.

  19. Categories of relations as models of quantum theory

    Directory of Open Access Journals (Sweden)

    Chris Heunen

    2015-11-01

Categories of relations over a regular category form a family of models of quantum theory. Using regular logic, many properties of relations over sets lift to these models, including the correspondence between Frobenius structures and internal groupoids. Over compact Hausdorff spaces, this lifting gives continuous symmetric encryption. Over a regular Mal'cev category, this correspondence gives a characterization of categories of completely positive maps, enabling the formulation of quantum features. These models are closer to Hilbert spaces than relations over sets in several respects: Heisenberg uncertainty, impossibility of broadcasting, and behavedness of rank one morphisms.

  20. Massive mu pair production in a vector field theory model

    CERN Document Server

    Halliday, I G

    1976-01-01

    Massive electrodynamics is treated as a model for the production of massive mu pairs in high-energy hadronic collisions. The dominant diagrams in perturbation theory are identified and analyzed. These graphs have an eikonal structure which leads to enormous cancellations in the two-particle inclusive cross section but not in the n-particle production cross sections. Under the assumption that these cancellations are complete, a Drell-Yan structure appears in the inclusive cross section but the particles accompanying the mu pairs have a very different structure compared to the parton model. The pionization region is no longer empty of particles as in single parton models. (10 refs).

  1. Supersymmetric sigma models and composite Yang-Mills theory

    International Nuclear Information System (INIS)

    Lukierski, J.

    1980-04-01

We describe two types of supersymmetric sigma models: with field values in a supercoset space, and with superfields. The notion of a Riemannian symmetric pair (H, G/H) is generalized to supergroups. Using the supercoset approach, the superconformal-invariant model of composite U(n) Yang-Mills fields is introduced. In the framework of the superfield approach we present in some detail two versions of the composite N=1 supersymmetric Yang-Mills theory in four dimensions with U(n) and U(m) x U(n) local invariance. We argue that the superfield sigma models in particular can be used for the description of pre-QCD supersymmetric dynamics. (author)

  2. Approximate models for broken clouds in stochastic radiative transfer theory

    International Nuclear Information System (INIS)

    Doicu, Adrian; Efremenko, Dmitry S.; Loyola, Diego; Trautmann, Thomas

    2014-01-01

    This paper presents approximate models in stochastic radiative transfer theory. The independent column approximation and its modified version with a solar source computed in a full three-dimensional atmosphere are formulated in a stochastic framework and for arbitrary cloud statistics. The nth-order stochastic models describing the independent column approximations are equivalent to the nth-order stochastic models for the original radiance fields in which the gradient vectors are neglected. Fast approximate models are further derived on the basis of zeroth-order stochastic models and the independent column approximation. The so-called “internal mixing” models assume a combination of the optical properties of the cloud and the clear sky, while the “external mixing” models assume a combination of the radiances corresponding to completely overcast and clear skies. A consistent treatment of internal and external mixing models is provided, and a new parameterization of the closure coefficient in the effective thickness approximation is given. An efficient computation of the closure coefficient for internal mixing models, using a previously derived vector stochastic model as a reference, is also presented. Equipped with appropriate look-up tables for the closure coefficient, these models can easily be integrated into operational trace gas retrieval systems that exploit absorption features in the near-IR solar spectrum. - Highlights: • Independent column approximation in a stochastic setting. • Fast internal and external mixing models for total and diffuse radiances. • Efficient optimization of internal mixing models to match reference models

  3. Li+ solvation and kinetics of Li+-BF4-/PF6- ion pairs in ethylene carbonate. A molecular dynamics study with classical rate theories

    Science.gov (United States)

    Chang, Tsun-Mei; Dang, Liem X.

    2017-10-01

Using our polarizable force-field models and employing classical rate theories of chemical reactions, we examine the ethylene carbonate (EC) exchange process between the first and second solvation shells around Li+ and the dissociation kinetics of the ion pairs Li+-[BF4]- and Li+-[PF6]- in this solvent. We calculate the exchange rates using transition state theory and correct them with transmission coefficients computed by the reactive flux approach, the Impey-Madden-McDonald approach, and Grote-Hynes theory. We found that the residence times of EC around Li+ ions varied from 60 to 450 ps, depending on the correction method used. We found that the relaxation times changed significantly from Li+-[BF4]- to Li+-[PF6]- ion pairs in EC. Our results also show that, in addition to affecting the free energy of dissociation in EC, the anion type also significantly influences the dissociation kinetics of ion pairing.
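The correction scheme mentioned above amounts, in generic form, to multiplying a transition state theory estimate by a transmission coefficient. A minimal sketch using the Eyring expression with illustrative numbers follows; the barrier height and the coefficient below are assumptions for the sketch, not the paper's computed values.

```python
from math import exp

# Physical constants (SI units)
KB = 1.380649e-23   # Boltzmann constant, J/K
H = 6.62607015e-34  # Planck constant, J*s
R = 8.314462618     # gas constant, J/(mol*K)

def eyring_rate(dG_act, T, kappa=1.0):
    """TST (Eyring) rate constant corrected by a transmission coefficient kappa.

    dG_act is the activation free energy in J/mol; kappa < 1 accounts for
    barrier recrossing, as estimated e.g. by reactive-flux or Grote-Hynes
    corrections.
    """
    return kappa * (KB * T / H) * exp(-dG_act / (R * T))

k_tst = eyring_rate(40e3, 300.0)               # uncorrected TST estimate
k_corr = eyring_rate(40e3, 300.0, kappa=0.4)   # recrossing-corrected rate
print(k_corr / k_tst)  # kappa simply rescales the TST estimate
```

Different correction methods yield different kappa values, which is how the residence times quoted above can vary from 60 to 450 ps for the same underlying dynamics.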

  4. Aggressive behavior: an alternative model of resting heart rate and sensation seeking.

    Science.gov (United States)

    Wilson, Laura C; Scarpa, Angela

    2014-01-01

    Low resting heart rate is a well-replicated biological correlate of aggression, and sensation seeking is frequently cited as the underlying causal explanation. However, little empirical evidence supports this mediating relationship. Furthermore, the biosocial model of violence and social push theory suggest sensation seeking may moderate the relationship between heart rate and aggression. In a sample of 128 college students (82.0% White; 73.4% female), the current study tested a moderation model as an alternative relationship between resting heart rate and sensation seeking in regard to aggression. Overall, the findings partially supported an interaction effect, whereby the relationship between heart rate and aggression was moderated by sensation seeking. Specifically, the oft-noted relationship between low resting heart rate and increased aggression was found, but only for individuals with low levels of sensation seeking. If replication supports this finding, the results may better inform prevention and intervention work. © 2013 Wiley Periodicals, Inc.
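A moderation model of this kind is typically tested with an interaction term in a regression. The sketch below uses synthetic data with an assumed interaction effect, not the study's sample; the coefficients are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 128  # same sample size as the study, but fully synthetic data

# Synthetic illustration: aggression depends on the heart-rate x
# sensation-seeking interaction, as in a moderation model.
heart_rate = rng.normal(70, 10, n)
sensation = rng.normal(0, 1, n)
aggression = (10 - 0.05 * heart_rate + 0.5 * sensation
              + 0.04 * heart_rate * sensation + rng.normal(0, 0.5, n))

# Design matrix: intercept, main effects, and the interaction term
X = np.column_stack([np.ones(n), heart_rate, sensation,
                     heart_rate * sensation])
beta, *_ = np.linalg.lstsq(X, aggression, rcond=None)
print(beta[3])  # a nonzero interaction coefficient indicates moderation
```

In practice the interaction coefficient would be tested for significance, and simple slopes at low and high sensation seeking would be probed to interpret the moderation.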

  5. EXCHANGE-RATES FORECASTING: EXPONENTIAL SMOOTHING TECHNIQUES AND ARIMA MODELS

    Directory of Open Access Journals (Sweden)

    Dezsi Eva

    2011-07-01

Exchange rate forecasting is, and has been, a challenging task in finance. Statistical and econometric models are widely used in the analysis and forecasting of foreign exchange rates. This paper investigates the behavior of daily exchange rates of the Romanian Leu against the Euro, United States Dollar, British Pound, Japanese Yen, Chinese Renminbi and the Russian Ruble. Smoothing techniques are generated and compared with each other. These models include the Simple Exponential Smoothing technique, the Double Exponential Smoothing technique, the Simple Holt-Winters and the Additive Holt-Winters techniques, as well as the Autoregressive Integrated Moving Average model.
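Of the techniques listed, simple exponential smoothing is the easiest to sketch: the smoothed level is an exponentially weighted average of past observations, and the last level serves as the one-step-ahead forecast. The rates and the smoothing constant below are made-up numbers, not actual Leu quotes or the paper's fitted parameters.

```python
def simple_exponential_smoothing(series, alpha):
    """One-step-ahead forecast via s_t = alpha*y_t + (1 - alpha)*s_{t-1}.

    alpha in (0, 1] controls how quickly older observations are discounted;
    the final smoothed level is the forecast for the next period.
    """
    s = series[0]  # a common initialization choice
    for y in series[1:]:
        s = alpha * y + (1 - alpha) * s
    return s

# Illustrative daily exchange-rate levels (hypothetical numbers)
rates = [4.10, 4.12, 4.11, 4.15, 4.14, 4.18]
forecast = simple_exponential_smoothing(rates, alpha=0.3)
print(round(forecast, 4))  # ≈ 4.1422
```

Double exponential smoothing adds a trend equation on top of this level equation, and the Holt-Winters variants add a seasonal component.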

  6. Symmetry Breaking, Unification, and Theories Beyond the Standard Model

    Energy Technology Data Exchange (ETDEWEB)

    Nomura, Yasunori

    2009-07-31

A model was constructed in which the supersymmetric fine-tuning problem is solved without extending the Higgs sector at the weak scale. We have demonstrated that the model can satisfy all the phenomenological constraints while avoiding excessive fine-tuning. We have also studied the implications of the model for dark matter physics and collider physics. I have proposed an extremely simple construction for models of gauge mediation. We found that the {mu} problem can be simply and elegantly solved in a class of models where the Higgs fields couple directly to the supersymmetry-breaking sector. We proposed a new way of addressing the flavor problem of supersymmetric theories. We have proposed a new framework for constructing theories of grand unification. We constructed a simple and elegant model of dark matter which explains the excess flux of electrons/positrons. We constructed a model of dark energy in which evolving quintessence-type dark energy is naturally obtained. We studied whether we can find evidence of the multiverse.

  7. Decision-Making Theories and Models: A Discussion of Rational and Psychological Decision-Making Theories and Models: The Search for a Cultural-Ethical Decision-Making Model

    OpenAIRE

    Oliveira, Arnaldo

    2007-01-01

    This paper examines rational and psychological decision-making models. Descriptive and normative methodologies such as attribution theory, schema theory, prospect theory, ambiguity model, game theory, and expected utility theory are discussed. The definition of culture is reviewed, and the relationship between culture and decision making is also highlighted as many organizations use a cultural-ethical decision-making model.

  8. Consumption-based macroeconomic models of asset pricing theory

    Directory of Open Access Journals (Sweden)

    Đorđević Marija

    2016-01-01

The family of consumption-based asset pricing models yields a stochastic discount factor proportional to the marginal rate of intertemporal substitution of consumption. In examining the empirical performance of this class of models, several puzzles are discovered. In this literature review we present the canonical model, the corresponding empirical tests, and different extensions to this model that propose a resolution of these puzzles.

  9. Evaporation of Liquid Droplet in Nano and Micro Scales from Statistical Rate Theory.

    Science.gov (United States)

    Duan, Fei; He, Bin; Wei, Tao

    2015-04-01

Statistical rate theory (SRT) is applied to predict the average evaporation flux of a liquid droplet after the approach is validated in sessile droplet experiments with water and heavy water. The steady-state experiments show a temperature discontinuity at the evaporating interface. The average evaporation flux is evaluated by individually changing the measurements at the liquid-vapor interface, including the interfacial liquid temperature, the interfacial vapor temperature, the vapor-phase pressure, and the droplet size. The parameter study shows that a higher temperature jump would reduce the average evaporation flux. The average evaporation flux can be significantly influenced by the interfacial liquid temperature and the vapor-phase pressure; the variation can switch the evaporation into condensation. The evaporation flux is found to remain relatively constant if the droplet is larger than the micro scale, while smaller, nano-scale diameters can produce a much higher evaporation flux. In addition, droplets of smaller diameter at the same liquid volume present a larger surface area. It is suggested that the evaporation rate increases dramatically as the droplet shrinks to nano size.

  10. Nonlinear structural mechanics theory, dynamical phenomena and modeling

    CERN Document Server

    Lacarbonara, Walter

    2013-01-01

    Nonlinear Structural Mechanics: Theory, Dynamical Phenomena and Modeling offers a concise, coherent presentation of the theoretical framework of nonlinear structural mechanics, computational methods, applications, parametric investigations of nonlinear phenomena and their mechanical interpretation towards design. The theoretical and computational tools that enable the formulation, solution, and interpretation of nonlinear structures are presented in a systematic fashion so as to gradually attain an increasing level of complexity of structural behaviors, under the prevailing assumptions on the geometry of deformation, the constitutive aspects and the loading scenarios. Readers will find a treatment of the foundations of nonlinear structural mechanics towards advanced reduced models, unified with modern computational tools in the framework of the prominent nonlinear structural dynamic phenomena while tackling both the mathematical and applied sciences. Nonlinear Structural Mechanics: Theory, Dynamical Phenomena...

  11. Profile-likelihood Confidence Intervals in Item Response Theory Models.

    Science.gov (United States)

    Chalmers, R Philip; Pek, Jolynn; Liu, Yang

    2017-01-01

    Confidence intervals (CIs) are fundamental inferential devices which quantify the sampling variability of parameter estimates. In item response theory, CIs have been primarily obtained from large-sample Wald-type approaches based on standard error estimates, derived from the observed or expected information matrix, after parameters have been estimated via maximum likelihood. An alternative approach to constructing CIs is to quantify sampling variability directly from the likelihood function with a technique known as profile-likelihood confidence intervals (PL CIs). In this article, we introduce PL CIs for item response theory models, compare PL CIs to classical large-sample Wald-type CIs, and demonstrate important distinctions among these CIs. CIs are then constructed for parameters directly estimated in the specified model and for transformed parameters which are often obtained post-estimation. Monte Carlo simulation results suggest that PL CIs perform consistently better than Wald-type CIs for both non-transformed and transformed parameters.
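A one-parameter toy case shows the idea behind PL CIs: for a binomial proportion the profile likelihood coincides with the ordinary likelihood, and the 95% interval collects all parameter values whose likelihood-ratio statistic stays under the chi-square(1) critical value. The counts below are illustrative, not from the article's simulations.

```python
from math import log

def binom_loglik(p, k, n):
    """Binomial log-likelihood (up to an additive constant)."""
    return k * log(p) + (n - k) * log(1 - p)

def profile_likelihood_ci(k, n, crit=3.841, grid=10000):
    """95% profile-likelihood CI for a binomial proportion by grid search.

    Keeps every p whose likelihood-ratio statistic 2*(l(p_hat) - l(p))
    stays below the chi-square(1) critical value crit.
    """
    p_hat = k / n
    l_max = binom_loglik(p_hat, k, n)
    inside = [i / grid for i in range(1, grid)
              if 2 * (l_max - binom_loglik(i / grid, k, n)) <= crit]
    return inside[0], inside[-1]

lo, hi = profile_likelihood_ci(7, 20)
print(lo, hi)  # asymmetric around p_hat = 0.35, unlike a Wald interval
```

The resulting interval is asymmetric around the estimate, which is precisely the behavior that distinguishes PL CIs from symmetric Wald-type CIs, especially for bounded or transformed parameters.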

  12. The QCD model of hadron cores of the meson theory

    International Nuclear Information System (INIS)

    Pokrovskii, Y.E.

    1985-01-01

It was shown that in the previously proposed QCD model of hadron cores, the exchange and self-energy contributions of the virtual quark-antiquark-gluon cloud outside a bag whose radius coincides with the hadron core radius of the meson theory (∼ 0.4 fm) have been taken into account at the phenomenological level. Simulation of this cloud by the meson field results in realistic estimates of the nucleon's electroweak properties, the momentum fractions carried by gluons, quarks and antiquarks, and hadron-hadron interaction cross sections within a wide range of energies. The authors note that the QCD hadron core model proposed earlier not only realistically reproduces the hadron masses, but also self-consistently reflects the main elements of the structure and interaction of hadrons at the quark-gluon bag radius (R ∼ 0.4 fm), close to the meson theory core radius

  13. Synthetic Domain Theory and Models of Linear Abadi & Plotkin Logic

    DEFF Research Database (Denmark)

    Møgelberg, Rasmus Ejlers; Birkedal, Lars; Rosolini, Guiseppe

    2008-01-01

Plotkin suggested using a polymorphic dual intuitionistic/linear type theory (PILLY) as a metalanguage for parametric polymorphism and recursion. In recent work the first two authors and R.L. Petersen have defined a notion of parametric LAPL-structure, which are models of PILLY, in which one can reason using parametricity and, for example, solve a large class of domain equations, as suggested by Plotkin. In this paper, we show how an interpretation of a strict version of Bierman, Pitts and Russo's language Lily into synthetic domain theory presented by Simpson and Rosolini gives rise to a parametric LAPL-structure. This adds to the evidence that the notion of LAPL-structure is a general notion, suitable for treating many different parametric models, and it provides formal proofs of consequences of parametricity expected to hold for the interpretation. Finally, we show how these results...

  14. Lepton number violation in theories with a large number of standard model copies

    International Nuclear Information System (INIS)

    Kovalenko, Sergey; Schmidt, Ivan; Paes, Heinrich

    2011-01-01

We examine lepton number violation (LNV) in theories with a saturated black hole bound on a large number of species. Such theories have been advocated recently as a possible solution to the hierarchy problem and an explanation of the smallness of neutrino masses. On the other hand, violation of lepton number can be a potential phenomenological problem of this N-copy extension of the standard model, since, due to the low quantum gravity scale, black holes may induce TeV-scale LNV operators generating unacceptably large rates of LNV processes. We show, however, that this issue can be avoided by introducing a spontaneously broken U(1)_{B-L}. Then, due to the existence of a specific compensation mechanism between contributions of different Majorana neutrino states, LNV processes in the standard model copy become extremely suppressed, with rates far beyond experimental reach.

  15. SIMP model at NNLO in chiral perturbation theory

    DEFF Research Database (Denmark)

    Hansen, Martin Rasmus Lundquist; Langaeble, K.; Sannino, F.

    2015-01-01

    We investigate the phenomenological viability of a recently proposed class of composite dark matter models where the relic density is determined by 3 to 2 number-changing processes in the dark sector. Here the pions of the strongly interacting field theory constitute the dark matter particles...... with phenomenological constraints challenging the viability of the simplest realisation of the strongly interacting massive particle (SIMP) paradigm....

  16. Regression modeling methods, theory, and computation with SAS

    CERN Document Server

    Panik, Michael

    2009-01-01

    Regression Modeling: Methods, Theory, and Computation with SAS provides an introduction to a diverse assortment of regression techniques using SAS to solve a wide variety of regression problems. The author fully documents the SAS programs and thoroughly explains the output produced by the programs.The text presents the popular ordinary least squares (OLS) approach before introducing many alternative regression methods. It covers nonparametric regression, logistic regression (including Poisson regression), Bayesian regression, robust regression, fuzzy regression, random coefficients regression,

  17. A model theory for tachyons in two dimensions

    International Nuclear Information System (INIS)

    Recami, E.; Rodrigues, W.A.

    1985-01-01

The paper is divided into two parts, the first having nothing to do with tachyons. In fact, to prepare the ground, in part one (sect. 2) it is shown that special relativity, even without tachyons, can be given a form such as to describe both particles and antiparticles. The plan of part two is confined to a model theory in two dimensions only, for the reasons stated in sect. 3

  18. A realistic model for quantum theory with a locality property

    International Nuclear Information System (INIS)

    Eberhard, P.H.

    1987-04-01

    A model reproducing the predictions of relativistic quantum theory to any desired degree of accuracy is described in this paper. It involves quantities that are independent of the observer's knowledge, and therefore can be called real, and which are defined at each point in space, and therefore can be called local in a rudimentary sense. It involves faster-than-light, but not instantaneous, action at distance

  19. Theory, Modeling and Simulation Annual Report 2000; FINAL

    International Nuclear Information System (INIS)

    Dixon, David A; Garrett, Bruce C; Straatsma, TP; Jones, Donald R; Studham, Scott; Harrison, Robert J; Nichols, Jeffrey A

    2001-01-01

    This annual report describes the 2000 research accomplishments for the Theory, Modeling, and Simulation (TM and S) directorate, one of the six research organizations in the William R. Wiley Environmental Molecular Sciences Laboratory (EMSL) at Pacific Northwest National Laboratory (PNNL). EMSL is a U.S. Department of Energy (DOE) national scientific user facility and is the centerpiece of the DOE commitment to providing world-class experimental, theoretical, and computational capabilities for solving the nation's environmental problems

  20. Properties of lattice gauge theory models at low temperatures

    International Nuclear Information System (INIS)

    Mack, G.

    1980-01-01

    The Z(N) theory of quark confinement is discussed and how fluctuations of Z(N) gauge fields may continue to be important in the continuum limit. Existence of a model in four dimensions is pointed out in which confinement of (scalar) quarks can be shown to persist in the continuum limit. This article is based on the author's Cargese lectures 1979. Some of its results are published here for the first time. (orig.) 891 HSI/orig. 892 MKO

  1. Field theory of large amplitude collective motion. A schematic model

    International Nuclear Information System (INIS)

    Reinhardt, H.

    1978-01-01

    By using path integral methods the equation for large amplitude collective motion for a schematic two-level model is derived. The original fermion theory is reformulated in terms of a collective (Bose) field. The classical equation of motion for the collective field coincides with the time-dependent Hartree-Fock equation. Its classical solution is quantized by means of the field-theoretical generalization of the WKB method. (author)

  2. Stability Analysis for Car Following Model Based on Control Theory

    International Nuclear Information System (INIS)

    Meng Xiang-Pei; Li Zhi-Peng; Ge Hong-Xia

    2014-01-01

Stability analysis is one of the key issues in car-following theory. The stability analysis with Lyapunov function for the two velocity difference car-following model (for short, TVDM) is conducted and the control method to suppress traffic congestion is introduced. Numerical simulations are given and the results are consistent with the theoretical analysis.

  3. Analytical theory of Doppler reflectometry in slab plasma model

    Energy Technology Data Exchange (ETDEWEB)

    Gusakov, E.Z.; Surkov, A.V. [Ioffe Institute, Politekhnicheskaya 26, St. Petersburg (Russian Federation)

    2004-07-01

Doppler reflectometry is considered in a slab plasma model within the framework of analytical theory. The locality of the diagnostics is analyzed for both regimes: linear and nonlinear in turbulence amplitude. Toroidal antenna focusing of the probing beam onto the cut-off is proposed and discussed as a method to increase the spatial resolution of the diagnostics. It is shown that even in the case of a nonlinear regime of multiple scattering, the diagnostics can be used for an estimation (with certain accuracy) of the plasma poloidal rotation profile. (authors)

  4. Spherically symmetric star model in the gravitational gauge theory

    Energy Technology Data Exchange (ETDEWEB)

Tsou, C. [Peking Observatory, China]; Ch'en, S.; Ho, T.; Kuo, H.

    1976-12-01

    It is shown that a star model, which is black hole-free and singularity-free, can be obtained naturally in the gravitational gauge theory, provided the space-time is torsion-free and the matter is spinless. The conclusion in a sense shows that the discussions about the black hole and the singularity based on general relativity may not describe nature correctly.

  5. Building Better Ecological Machines: Complexity Theory and Alternative Economic Models

    Directory of Open Access Journals (Sweden)

    Jess Bier

    2016-12-01

Computer models of the economy are regularly used to predict economic phenomena and set financial policy. However, the conventional macroeconomic models are currently being reimagined after they failed to foresee the current economic crisis, the outlines of which began to be understood only in 2007-2008. In this article we analyze the most prominent of these reimaginings: agent-based models (ABMs). ABMs are an influential alternative to standard economic models, and they are one focus of complexity theory, a discipline that is a more open successor to the conventional chaos and fractal modeling of the 1990s. The modelers who create ABMs claim that their models depict markets as ecologies, and that they are more responsive than conventional models that depict markets as machines. We challenge this presentation, arguing instead that recent modeling efforts amount to the creation of models as ecological machines. Our paper aims to contribute to an understanding of the organizing metaphors of macroeconomic models, which we argue is relevant conceptually and politically, e.g., when models are used for regulatory purposes.

  6. Crossing rate of labelled Poisson cluster processes and their application in the reliability theory

    International Nuclear Information System (INIS)

    Schrupp, K.

    1986-01-01

A load process is modelled within a given interdependency system and the failure probability of a structure is estimated using the crossing rate method. The term 'labelled cluster process' is formally introduced. An approximation is given by the expected value of the point process of crossings from the safe range to the failure range. This expected value is explicitly calculated for the nonstationary cluster process, the stationary borderline process, and for various types of superpositions (clustering) of such processes.

  7. Ductility prediction of substrate-supported metal layers based on rate-independent crystal plasticity theory

    Directory of Open Access Journals (Sweden)

    Akpama Holanyo K.

    2016-01-01

In this paper, both the bifurcation theory and the initial imperfection approach are used to predict localized necking in substrate-supported metal layers. The self-consistent scale-transition scheme is used to derive the mechanical behavior of a representative volume element of the metal layer from the behavior of its microscopic constituents (the single crystals). The mechanical behavior of the elastomer substrate follows the neo-Hookean hyperelastic model. The adherence between the two layers is assumed to be perfect. Through numerical results, it is shown that the limit strains predicted by the initial imperfection approach tend towards the bifurcation predictions when the size of the geometric imperfection in the metal layer vanishes. Also, it is shown that the addition of an elastomer layer to a metal layer enhances ductility.

  8. Noncommutative gauge theory and symmetry breaking in matrix models

    International Nuclear Information System (INIS)

    Grosse, Harald; Steinacker, Harold; Lizzi, Fedele

    2010-01-01

We show how the fields and particles of the standard model can be naturally realized in noncommutative gauge theory. Starting with a Yang-Mills matrix model in more than four dimensions, an SU(n) gauge theory on a Moyal-Weyl space arises with all matter and fields in the adjoint of the gauge group. We show how this gauge symmetry can be broken spontaneously down to SU(3)_c × SU(2)_L × U(1)_Q [resp. SU(3)_c × U(1)_Q], which couples appropriately to all fields in the standard model. An additional U(1)_B gauge group arises which is anomalous at low energies, while the trace-U(1) sector is understood in terms of emergent gravity. A number of additional fields arise, which we assume to be massive, in a pattern that is reminiscent of supersymmetry. The symmetry breaking might arise via spontaneously generated fuzzy spheres, in which case the mechanism is similar to brane constructions in string theory.

  9. Can producer currency pricing models generate volatile real exchange rates?

    OpenAIRE

    Povoledo, L.

    2012-01-01

    If the elasticities of substitution between traded and nontraded and between Home and Foreign traded goods are sufficiently low, then the real exchange rate generated by a model with full producer currency pricing is as volatile as in the data.

  10. Improved air ventilation rate estimation based on a statistical model

    International Nuclear Information System (INIS)

    Brabec, M.; Jilek, K.

    2004-01-01

    A new approach to air ventilation rate estimation from CO measurement data is presented. The approach is based on a state-space dynamic statistical model, allowing for quick and efficient estimation. Underlying computations are based on Kalman filtering, whose practical software implementation is rather easy. The key property is the flexibility of the model, allowing various artificial regimens of CO level manipulation to be treated. The model is semi-parametric in nature and can efficiently handle time-varying ventilation rate. This is a major advantage, compared to some of the methods which are currently in practical use. After a formal introduction of the statistical model, its performance is demonstrated on real data from routine measurements. It is shown how the approach can be utilized in a more complex situation of major practical relevance, when time-varying air ventilation rate and radon entry rate are to be estimated simultaneously from concurrent radon and CO measurements
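The Kalman-filter idea described in this record can be sketched in a few lines; the observation model, noise variances, and function name below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def estimate_ventilation_rate(co, dt, q=1e-6, r=1e-3):
    """Scalar Kalman filter tracking a slowly time-varying air
    exchange rate lam (1/h) from tracer-gas decay measurements.

    Observation model: ln C[k+1] - ln C[k] = -lam[k]*dt + noise,
    which follows from the decay equation dC/dt = -lam*C.
    State model: lam evolves as a random walk (process noise q).
    """
    y = -np.diff(np.log(co)) / dt        # noisy instantaneous rates
    lam, p = y[0], 1.0                   # initial state and variance
    est = []
    for obs in y:
        p += q                           # predict: random-walk state
        k = p / (p + r)                  # Kalman gain
        lam += k * (obs - lam)           # update with new observation
        p *= 1.0 - k
        est.append(lam)
    return np.array(est)

# synthetic decay at lam = 0.5/h, sampled every minute for 3 hours
t = np.arange(0, 3, 1 / 60)
co = 1000.0 * np.exp(-0.5 * t)
rates = estimate_ventilation_rate(co, dt=1 / 60)
```

Because the state is a random walk rather than a constant, the filter can follow a ventilation rate that drifts over the measurement period, which is the flexibility the abstract emphasizes.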

  11. Lepton asymmetry rate from quantum field theory: NLO in the hierarchical limit

    Energy Technology Data Exchange (ETDEWEB)

    Bödeker, D.; Sangel, M., E-mail: bodeker@physik.uni-bielefeld.de, E-mail: msangel@physik.uni-bielefeld.de [Fakultät für Physik, Universität Bielefeld, 33501 Bielefeld (Germany)

    2017-06-01

The rates for generating a matter-antimatter asymmetry in extensions of the Standard Model (SM) containing right-handed neutrinos are the most interesting and least trivial coefficients in the rate equations for baryogenesis through thermal leptogenesis. We obtain a relation of these rates to finite-temperature real-time correlation functions, similar to the Kubo formulas for transport coefficients. Then we consider the case of hierarchical masses for the sterile neutrinos. At leading order in their Yukawa couplings we find a simple master formula which relates the rates to a single finite-temperature three-point spectral function. It is valid to all orders in g, where g denotes a SM gauge or quark Yukawa coupling. We use it to compute the rate for generating a matter-antimatter asymmetry at next-to-leading order in g in the non-relativistic regime. The corrections are of order g^2, and they amount to 4% or less.

  12. Measuring and modeling salience with the theory of visual attention.

    Science.gov (United States)

    Krüger, Alexander; Tünnermann, Jan; Scharlau, Ingrid

    2017-08-01

For almost three decades, the theory of visual attention (TVA) has been successful in mathematically describing and explaining a wide variety of phenomena in visual selection and recognition with high quantitative precision. Interestingly, the influence of feature contrast on attention has been included in TVA only recently, although it has been extensively studied outside the TVA framework. The present approach further develops this extension of TVA's scope by measuring and modeling salience. An empirical measure of salience is achieved by linking different (orientation and luminance) contrasts to a TVA parameter. In the modeling part, the function relating feature contrasts to salience is described mathematically and tested against alternatives by Bayesian model comparison. This model comparison reveals that the power function is an appropriate model of salience growth in the dimensions of orientation and luminance contrast. Furthermore, if contrasts from the two dimensions are combined, the resulting saliences combine additively.
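The power-function salience model favored by the Bayesian comparison can be illustrated with a toy fit; the data values and parameters below are synthetic, not taken from the study:

```python
import numpy as np

# Hypothetical salience readings (in TVA-parameter terms) at five
# orientation-contrast levels, generated from s = a * c**b with
# a = 0.3 and b = 0.5.
contrast = np.array([5.0, 10.0, 20.0, 40.0, 80.0])   # contrast levels
salience = 0.3 * contrast**0.5                        # synthetic data

# A power function is linear in log-log space, so ordinary least
# squares on the logs recovers (a, b).
b, log_a = np.polyfit(np.log(contrast), np.log(salience), 1)
a = np.exp(log_a)
print(a, b)   # recovers a ≈ 0.3, b ≈ 0.5 on noise-free data
```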

  13. Mathematical finance theory review and exercises from binomial model to risk measures

    CERN Document Server

    Gianin, Emanuela Rosazza

    2013-01-01

    The book collects over 120 exercises on different subjects of Mathematical Finance, including Option Pricing, Risk Theory, and Interest Rate Models. Many of the exercises are solved, while others are only proposed. Every chapter contains an introductory section illustrating the main theoretical results necessary to solve the exercises. The book is intended as an exercise textbook to accompany graduate courses in mathematical finance offered at many universities as part of degree programs in Applied and Industrial Mathematics, Mathematical Engineering, and Quantitative Finance.

  14. Modelling Exchange Rate Volatility by Macroeconomic Fundamentals in Pakistan

    OpenAIRE

    Munazza Jabeen; Saud Ahmad Khan

    2014-01-01

What drives volatility in the foreign exchange market in Pakistan? This paper undertakes an analysis of modelling exchange rate volatility in Pakistan by potential macroeconomic fundamentals well-known in the economic literature. For this, monthly data on Pak Rupee exchange rates in terms of major currencies (US Dollar, British Pound, Canadian Dollar and Japanese Yen) and macroeconomic fundamentals is taken from April, 1982 to November, 2011. The results show that the PKR-USD exchange rate vo...

  15. Growth rate in the dynamical dark energy models

    International Nuclear Information System (INIS)

    Avsajanishvili, Olga; Arkhipova, Natalia A.; Samushia, Lado; Kahniashvili, Tina

    2014-01-01

    Dark energy models with a slowly rolling cosmological scalar field provide a popular alternative to the standard, time-independent cosmological constant model. We study the simultaneous evolution of background expansion and growth in the scalar field model with the Ratra-Peebles self-interaction potential. We use recent measurements of the linear growth rate and the baryon acoustic oscillation peak positions to constrain the model parameter α that describes the steepness of the scalar field potential. (orig.)

  16. Growth rate in the dynamical dark energy models.

    Science.gov (United States)

    Avsajanishvili, Olga; Arkhipova, Natalia A; Samushia, Lado; Kahniashvili, Tina

Dark energy models with a slowly rolling cosmological scalar field provide a popular alternative to the standard, time-independent cosmological constant model. We study the simultaneous evolution of background expansion and growth in the scalar field model with the Ratra-Peebles self-interaction potential. We use recent measurements of the linear growth rate and the baryon acoustic oscillation peak positions to constrain the model parameter α that describes the steepness of the scalar field potential.

  17. Origins of Discrepancies Between Kinetic Rate Law Theory and Experiments in the Na2O-B2O3-SiO2 System

    International Nuclear Information System (INIS)

McGrail, B. Peter; Icenhower, Jonathan P.; Rodriguez, Elsa A.; Cragnolino, G.A.

    2002-01-01

    Discrepancies between classical kinetic rate law theory and experiment were quantitatively assessed and found to correlate with macromolecular amorphous separation in the sodium borosilicate glass system. A quantitative reinterpretation of static corrosion data and new SPFT data shows that a recently advanced protective surface layer theory fails to describe the observed dissolution behavior of simple and complex silicate glasses under carefully controlled experimental conditions. The hypothesis is shown to be self-inconsistent in contrast with a phase separation model that is in quantitative agreement with experiments

  18. Analysis of sensory ratings data with cumulative link models

    DEFF Research Database (Denmark)

    Christensen, Rune Haubo Bojesen; Brockhoff, Per B.

    2013-01-01

    Examples of categorical rating scales include discrete preference, liking and hedonic rating scales. Data obtained on these scales are often analyzed with normal linear regression methods or with omnibus Pearson chi2 tests. In this paper we propose to use cumulative link models that allow for reg...

  19. Application of the evolution theory in modelling of innovation diffusion

    Directory of Open Access Journals (Sweden)

    Krstić Milan

    2016-01-01

The theory of evolution has found numerous analogies and applications in scientific disciplines other than biology. In this sense, the so-called 'memetic evolution' is now widely accepted. Memes represent a complex adaptive system, where one 'meme' is an evolutionary cultural element, i.e. the smallest unit of information which can be identified and used to explain the evolution process. Among others, the field of innovation has proved to be a suitable area where the theory of evolution can also be successfully applied. In this work the authors start from the assumption that it is possible to apply the theory of evolution to modelling the process of innovation diffusion. Based on the theoretical research conducted, the authors conclude that the process of innovation diffusion, in the 'meme' interpretation, is actually the process of imitation of the innovation 'meme'. Since certain 'memes' replicate more successfully than others, this eventually leads to their natural selection. For the survival of innovation 'memes', their longevity, fruitfulness and faithfulness of replication are of key importance. The results of the research categorically confirm the assumption that the theory of evolution can be applied to innovation diffusion via innovation 'memes', which opens up perspectives for new research on the subject.

  20. Rate-control algorithms testing by using video source model

    DEFF Research Database (Denmark)

    Belyaev, Evgeny; Turlikov, Andrey; Ukhanova, Anna

    2008-01-01

In this paper a method for testing rate-control algorithms by use of a video source model is suggested. The proposed method allows significant improvement of algorithm testing over a big test set.

  1. A MODEL OF RATING FOR BANKS IN ROMANIA

    Directory of Open Access Journals (Sweden)

    POPA ANAMARIA

    2012-07-01

In the paper the authors present a rating model for the banking system. We took into account the records of 11 banks in Romania, based on their annual financial reports. The model classified the banks into seven categories in accordance with the grades used by the Standard & Poor's and Moody's rating agencies.

  2. Revisiting a model of ontogenetic growth: estimating model parameters from theory and data.

    Science.gov (United States)

    Moses, Melanie E; Hou, Chen; Woodruff, William H; West, Geoffrey B; Nekola, Jeffery C; Zuo, Wenyun; Brown, James H

    2008-05-01

The ontogenetic growth model (OGM) of West et al. provides a general description of how metabolic energy is allocated between production of new biomass and maintenance of existing biomass during ontogeny. Here, we reexamine the OGM, make some minor modifications and corrections, and further evaluate its ability to account for empirical variation in rates of metabolism and biomass in vertebrates both during ontogeny and across species of varying adult body size. We show that the updated version of the model is internally consistent and is consistent with other predictions of metabolic scaling theory and empirical data. The OGM predicts not only the near universal sigmoidal form of growth curves but also the M^(1/4) scaling of the characteristic times of ontogenetic stages in addition to the curvilinear decline in growth efficiency described by Brody. Additionally, the OGM relates the M^(3/4) scaling across adults of different species to the scaling of metabolic rate across ontogeny within species. In providing a simple, quantitative description of how energy is allocated to growth, the OGM calls attention to unexplained variation, unanswered questions, and opportunities for future research.
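The OGM's allocation balance can be sketched numerically; the integration scheme and parameter values below are illustrative, not fitted to any species:

```python
import numpy as np

def ogm_trajectory(m0, a, b, t_end, dt=0.01):
    """Euler integration of the OGM balance dm/dt = a*m**(3/4) - b*m:
    metabolic intake scales as m^(3/4), maintenance cost scales as m,
    and growth is the difference between the two."""
    ts = np.arange(0.0, t_end, dt)
    m = np.empty_like(ts)
    m[0] = m0
    for i in range(1, len(ts)):
        m[i] = m[i - 1] + dt * (a * m[i - 1]**0.75 - b * m[i - 1])
    return ts, m

ts, m = ogm_trajectory(m0=1.0, a=1.0, b=0.25, t_end=60.0)
# growth stops where intake balances maintenance, so mass approaches
# the asymptotic (adult) value (a/b)**4 = 256 as t grows
```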

  3. A nonparametric mixture model for cure rate estimation.

    Science.gov (United States)

    Peng, Y; Dear, K B

    2000-03-01

    Nonparametric methods have attracted less attention than their parametric counterparts for cure rate analysis. In this paper, we study a general nonparametric mixture model. The proportional hazards assumption is employed in modeling the effect of covariates on the failure time of patients who are not cured. The EM algorithm, the marginal likelihood approach, and multiple imputations are employed to estimate parameters of interest in the model. This model extends models and improves estimation methods proposed by other researchers. It also extends Cox's proportional hazards regression model by allowing a proportion of event-free patients and investigating covariate effects on that proportion. The model and its estimation method are investigated by simulations. An application to breast cancer data, including comparisons with previous analyses using a parametric model and an existing nonparametric model by other researchers, confirms the conclusions from the parametric model but not those from the existing nonparametric model.
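The mixture structure underlying cure rate models can be sketched as follows; the exponential latency distribution is an illustrative stand-in for the paper's proportional hazards specification:

```python
import math

def population_survival(t, cure_prob, hazard):
    """Mixture cure model: a fraction `cure_prob` of patients is
    event-free forever; the rest follow a latency survival function,
    here an illustrative exponential S_u(t) = exp(-hazard*t)."""
    return cure_prob + (1.0 - cure_prob) * math.exp(-hazard * t)

# survival starts at 1 and plateaus at the cure fraction as t grows,
# which is the signature that distinguishes cure models from Cox's
# ordinary proportional hazards model
print(population_survival(0.0, 0.25, 0.1))     # 1.0
print(population_survival(100.0, 0.25, 0.1))   # ≈ 0.25 (the plateau)
```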

  4. H_3^+ WZNW model from Liouville field theory

    International Nuclear Information System (INIS)

    Hikida, Yasuaki; Schomerus, Volker

    2007-01-01

There exists an intriguing relation between genus zero correlation functions in the H_3^+ WZNW model and in Liouville field theory. We provide a path integral derivation of the correspondence and then use our new approach to generalize the relation to surfaces of arbitrary genus g. In particular we determine the correlation functions of N primary fields in the WZNW model explicitly through Liouville correlators with N+2g-2 additional insertions of certain degenerate fields. The paper concludes with a list of interesting further extensions and a few comments on the relation to the geometric Langlands program.

  5. A possibilistic uncertainty model in classical reliability theory

    International Nuclear Information System (INIS)

    De Cooman, G.; Capelle, B.

    1994-01-01

The authors argue that a possibilistic uncertainty model can be used to represent linguistic uncertainty about the states of a system and of its components. Furthermore, the basic properties of the application of this model to classical reliability theory are studied. The notion of the possibilistic reliability of a system or a component is defined. Based on the concept of a binary structure function, the important notion of a possibilistic function is introduced. It allows one to calculate the possibilistic reliability of a system in terms of the possibilistic reliabilities of its components.

  6. Theory and Circuit Model for Lossy Coaxial Transmission Line

    Energy Technology Data Exchange (ETDEWEB)

    Genoni, T. C.; Anderson, C. N.; Clark, R. E.; Gansz-Torres, J.; Rose, D. V.; Welch, Dale Robert

    2017-04-01

The theory of signal propagation in lossy coaxial transmission lines is revisited and new approximate analytic formulas for the line impedance and attenuation are derived. The accuracy of these formulas from DC to 100 GHz is demonstrated by comparison to numerical solutions of the exact field equations. Based on this analysis, a new circuit model is described which accurately reproduces the line response over the entire frequency range. Circuit model calculations are in excellent agreement with the numerical and analytic results, and with finite-difference time-domain simulations which resolve the skin depths of the conducting walls.

  7. Models and applications of chaos theory in modern sciences

    CERN Document Server

    Zeraoulia, Elhadj

    2011-01-01

This book presents a select group of papers that provide a comprehensive view of the models and applications of chaos theory in medicine, biology, ecology, economics, electronics, mechanics, and the human sciences. Covering both the experimental and theoretical aspects of the subject, it examines a range of current topics of interest. It considers the problems arising in the study of discrete and continuous time chaotic dynamical systems modeling the several phenomena in nature and society, highlighting powerful techniques being developed to meet these challenges that stem from the area of nonli

  8. A model of clearance rate regulation in mussels

    Science.gov (United States)

    Fréchette, Marcel

    2012-10-01

    Clearance rate regulation has been modelled as an instantaneous response to food availability, independent of the internal state of the animals. This view is incompatible with latent effects during ontogeny and phenotypic flexibility in clearance rate. Internal-state regulation of clearance rate is required to account for these patterns. Here I develop a model of internal-state based regulation of clearance rate. External factors such as suspended sediments are included in the model. To assess the relative merits of instantaneous regulation and internal-state regulation, I modelled blue mussel clearance rate and growth using a DEB model. In the usual standard feeding module, feeding is governed by a Holling's Type II response to food concentration. In the internal-state feeding module, gill ciliary activity and thus clearance rate are driven by internal reserve level. Factors such as suspended sediments were not included in the simulations. The two feeding modules were compared on the basis of their ability to capture the impact of latent effects, of environmental heterogeneity in food abundance and of physiological flexibility on clearance rate and individual growth. The Holling feeding module was unable to capture the effect of any of these sources of variability. In contrast, the internal-state feeding module did so without any modification or ad hoc calibration. Latent effects, however, appeared transient. With simple annual variability in temperature and food concentration, the relationship between clearance rate and food availability predicted by the internal-state feeding module was quite similar to that observed in Norwegian fjords. I conclude that in contrast with the usual Holling feeding module, internal-state regulation of clearance rate is consistent with well-documented growth and clearance rate patterns.
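The contrast between the two feeding modules can be sketched as follows; both functional forms are illustrative (the linear reserve dependence in particular is an assumption, not the paper's calibrated rule):

```python
def holling_clearance(food, cr_max, k):
    """Instantaneous regulation: clearance/ingestion follows a Holling
    Type II response to food concentration alone, with half-saturation
    constant k."""
    return cr_max * food / (k + food)

def internal_state_clearance(reserve, cr_max, reserve_max):
    """Internal-state regulation (sketch): gill ciliary activity, and
    hence clearance rate, falls as the reserve compartment fills,
    independently of the instantaneous food level."""
    return cr_max * (1.0 - reserve / reserve_max)

# the Holling module responds only to food; the internal-state module
# responds only to the animal's reserves, so it can carry latent
# effects of past feeding history into present clearance rate
print(holling_clearance(5.0, 2.0, 5.0))          # 1.0 (half-saturation)
print(internal_state_clearance(10.0, 2.0, 10.0)) # 0.0 (reserves full)
```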

  9. Refined pipe theory for mechanistic modeling of wood development.

    Science.gov (United States)

    Deckmyn, Gaby; Evans, Sam P; Randle, Tim J

    2006-06-01

    We present a mechanistic model of wood tissue development in response to changes in competition, management and climate. The model is based on a refinement of the pipe theory, where the constant ratio between sapwood and leaf area (pipe theory) is replaced by a ratio between pipe conductivity and leaf area. Simulated pipe conductivity changes with age, stand density and climate in response to changes in allocation or pipe radius, or both. The central equation of the model, which calculates the ratio of carbon (C) allocated to leaves and pipes, can be parameterized to describe the contrasting stem conductivity behavior of different tree species: from constant stem conductivity (functional homeostasis hypothesis) to height-related reduction in stem conductivity with age (hydraulic limitation hypothesis). The model simulates the daily growth of pipes (vessels or tracheids), fibers and parenchyma as well as vessel size and simulates the wood density profile and the earlywood to latewood ratio from these data. Initial runs indicate the model yields realistic seasonal changes in pipe radius (decreasing pipe radius from spring to autumn) and wood density, as well as realistic differences associated with the competitive status of trees (denser wood in suppressed trees).

  10. Equivalence of interest rate models and lattice gases.

    Science.gov (United States)

    Pirjol, Dan

    2012-04-01

We consider the class of short rate interest rate models for which the short rate is proportional to the exponential of a Gaussian Markov process x(t) in the terminal measure, r(t) = a(t)exp[x(t)]. These models include the Black-Derman-Toy and Black-Karasinski models in the terminal measure. We show that such interest rate models are equivalent to lattice gases with attractive two-body interaction, V(t_1, t_2) = -Cov[x(t_1), x(t_2)]. We consider in some detail the Black-Karasinski model with x(t) as an Ornstein-Uhlenbeck process, and show that it is similar to a lattice gas model considered by Kac and Helfand, with attractive long-range two-body interactions, V(x, y) = -α(e^(-γ|x-y|) - e^(-γ(x+y))). An explicit solution for the model is given as a sum over the states of the lattice gas, which is used to show that the model has a phase transition similar to that found previously in the Black-Derman-Toy model in the terminal measure.
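The covariance structure that plays the role of the lattice-gas potential can be checked with a quick simulation; the discretization, seed, and parameter values below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_ou(n_paths, n_steps, dt, gamma, sigma):
    """Exact-discretization paths of an Ornstein-Uhlenbeck process
    dx = -gamma*x*dt + sigma*dW, started at x(0) = 0, standing in for
    the x(t) of the short rate r(t) = a(t)*exp(x(t))."""
    x = np.zeros((n_paths, n_steps + 1))
    decay = np.exp(-gamma * dt)
    sd = sigma * np.sqrt((1.0 - decay**2) / (2.0 * gamma))
    for k in range(n_steps):
        x[:, k + 1] = decay * x[:, k] + sd * rng.standard_normal(n_paths)
    return x

def ou_cov(t1, t2, gamma, sigma):
    """Closed-form Cov[x(t1), x(t2)] for the zero-start OU process;
    its negative is the two-body potential quoted in the abstract,
    with alpha = sigma**2 / (2*gamma)."""
    alpha = sigma**2 / (2.0 * gamma)
    return alpha * (np.exp(-gamma * abs(t1 - t2)) - np.exp(-gamma * (t1 + t2)))

x = simulate_ou(n_paths=100_000, n_steps=100, dt=0.01, gamma=1.0, sigma=0.5)
emp = np.mean(x[:, 50] * x[:, 100])       # empirical Cov at t1=0.5, t2=1.0
print(emp, ou_cov(0.5, 1.0, 1.0, 0.5))    # the two should agree closely
```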

  11. Toward a General Research Process for Using Dubin's Theory Building Model

    Science.gov (United States)

    Holton, Elwood F.; Lowe, Janis S.

    2007-01-01

    Dubin developed a widely used methodology for theory building, which describes the components of the theory building process. Unfortunately, he does not define a research process for implementing his theory building model. This article proposes a seven-step general research process for implementing Dubin's theory building model. An example of a…

  12. Item response theory and structural equation modelling for ordinal data: Describing the relationship between KIDSCREEN and Life-H.

    Science.gov (United States)

    Titman, Andrew C; Lancaster, Gillian A; Colver, Allan F

    2016-10-01

Both item response theory and structural equation models are useful in the analysis of ordered categorical responses from health assessment questionnaires. We highlight the advantages and disadvantages of the item response theory and structural equation modelling approaches to modelling ordinal data, from within a community health setting. Using data from the SPARCLE project focussing on children with cerebral palsy, this paper investigates the relationship between two ordinal rating scales, the KIDSCREEN, which measures quality-of-life, and Life-H, which measures participation. Practical issues relating to fitting models, such as non-positive definite observed or fitted correlation matrices, and approaches to assessing model fit are discussed. Item response theory models allow properties such as the conditional independence of particular domains of a measurement instrument to be assessed. When, as with the SPARCLE data, the latent traits are multidimensional, structural equation models generally provide a much more convenient modelling framework. © The Author(s) 2013.

  13. On rate-state and Coulomb failure models

    Science.gov (United States)

    Gomberg, J.; Beeler, N.; Blanpied, M.

    2000-01-01

    We examine the predictions of Coulomb failure stress and rate-state frictional models. We study the change in failure time (clock advance) Δt due to stress step perturbations (i.e., coseismic static stress increases) added to "background" stressing at a constant rate (i.e., tectonic loading) at time t0. The predictability of Δt implies a predictable change in seismicity rate r(t)/r0, testable using earthquake catalogs, where r0 is the constant rate resulting from tectonic stressing. Models of r(t)/r0, consistent with general properties of aftershock sequences, must predict an Omori law seismicity decay rate, a sequence duration that is less than a few percent of the mainshock cycle time and a return directly to the background rate. A Coulomb model requires that a fault remains locked during loading, that failure occur instantaneously, and that Δt is independent of t0. These characteristics imply an instantaneous infinite seismicity rate increase of zero duration. Numerical calculations of r(t)/r0 for different state evolution laws show that aftershocks occur on faults extremely close to failure at the mainshock origin time, that these faults must be "Coulomb-like," and that the slip evolution law can be precluded. Real aftershock population characteristics also may constrain rate-state constitutive parameters; a may be lower than laboratory values, the stiffness may be high, and/or normal stress may be lower than lithostatic. We also compare Coulomb and rate-state models theoretically. Rate-state model fault behavior becomes more Coulomb-like as constitutive parameter a decreases relative to parameter b. This is because the slip initially decelerates, representing an initial healing of fault contacts. The deceleration is more pronounced for smaller a, more closely simulating a locked fault. Even when the rate-state Δt has Coulomb characteristics, its magnitude may differ by some constant dependent on b. In this case, a rate-state model behaves like a modified
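The Coulomb clock-advance property described above (Δt independent of t0) can be sketched in a couple of lines; the units and example numbers are illustrative:

```python
def coulomb_clock_advance(stress_step, stressing_rate):
    """Coulomb failure: the fault fails when stress reaches a fixed
    threshold under constant-rate loading, so a static stress step
    advances the failure time by step/rate, independent of the time
    t0 at which the step is applied (hence no t0 argument)."""
    return stress_step / stressing_rate

# a 0.5 MPa coseismic stress step under 0.25 MPa/yr tectonic loading
print(coulomb_clock_advance(0.5, 0.25))   # 2.0 years, whatever t0 was
```

This t0-independence is exactly the property that a rate-state fault only approximates, and only in the limit where the constitutive parameter a is small relative to b.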

  14. Two problems from the theory of semiotic control models. I. Representations of semiotic models

    Energy Technology Data Exchange (ETDEWEB)

    Osipov, G S

    1981-11-01

    Two problems from the theory of semiotic control models are stated: the representation of models and their semantic analysis. Algebraic representation of semiotic models, coverings of representations, and their reduction and equivalence are discussed. The interrelations between the functional and structural characteristics of semiotic models are investigated. 20 references.

  15. General topology meets model theory, on p and t.

    Science.gov (United States)

    Malliaris, Maryanthe; Shelah, Saharon

    2013-08-13

    Cantor proved in 1874 [Cantor G (1874) J Reine Angew Math 77:258-262] that the continuum is uncountable, and Hilbert's first problem asks whether it is the smallest uncountable cardinal. A program arose to study cardinal invariants of the continuum, which measure the size of the continuum in various ways. By Gödel [Gödel K (1939) Proc Natl Acad Sci USA 25(4):220-224] and Cohen [Cohen P (1963) Proc Natl Acad Sci USA 50(6):1143-1148], Hilbert's first problem is independent of ZFC (Zermelo-Fraenkel set theory with the axiom of choice). Much work both before and since has been done on inequalities between these cardinal invariants, but some basic questions have remained open despite Cohen's introduction of forcing. The oldest and perhaps most famous of these is whether " p = t," which was proved in a special case by Rothberger [Rothberger F (1948) Fund Math 35:29-46], building on Hausdorff [Hausdorff (1936) Fund Math 26:241-255]. In this paper we explain how our work on the structure of Keisler's order, a large-scale classification problem in model theory, led to the solution of this problem in ZFC as well as of an a priori unrelated open question in model theory.

  16. Item level diagnostics and model - data fit in item response theory ...

    African Journals Online (AJOL)

    Item response theory (IRT) is a framework for modeling and analyzing item response data. Item-level modeling gives IRT advantages over classical test theory. The fit of an item score pattern to an item response theory (IRT) model is a necessary condition that must be assessed before further use of the items and the models that best fit ...

  17. Inflation, Exchange Rates and Interest Rates in Ghana: an Autoregressive Distributed Lag Model

    Directory of Open Access Journals (Sweden)

    Dennis Nchor

    2015-01-01

    Full Text Available This paper investigates the impact of exchange rate movement and the nominal interest rate on inflation in Ghana. It also looks at the presence of the Fisher Effect and the International Fisher Effect scenarios. It makes use of an autoregressive distributed lag model and an unrestricted error correction model. Ordinary least squares regression methods were also employed to determine the presence of the Fisher Effect and the International Fisher Effect. The results from the study show that in the short run a percentage point increase in the level of depreciation of the Ghana cedi leads to an increase in the rate of inflation by 0.20%. A percentage point increase in the level of nominal interest rates however results in a decrease in inflation by 0.98%. Inflation increases by 1.33% for every percentage point increase in the nominal interest rate in the long run. An increase in inflation on the other hand increases the nominal interest rate by 0.51%, which demonstrates the partial Fisher Effect. A 1% increase in the interest rate differential leads to a depreciation of the Ghana cedi by approximately 1%, which indicates the full International Fisher Effect.
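
    The quoted short-run elasticities come from regression estimation. A minimal ordinary least squares sketch on synthetic data constructed to embed the quoted coefficients (0.20 for depreciation, -0.98 for the nominal interest rate); the regressor values and sample are invented for illustration, not Ghanaian data, and a full ARDL/error-correction fit would additionally include lagged terms:

```python
def ols(X, y):
    """Ordinary least squares via the normal equations, solved by
    Gaussian elimination with partial pivoting (pure stdlib)."""
    k = len(X[0])
    A = [[sum(r[i] * r[j] for r in X) for j in range(k)] for i in range(k)]
    b = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(k)]
    M = [A[i] + [b[i]] for i in range(k)]
    for c in range(k):
        p = max(range(c, k), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(k):
            if r != c:
                f = M[r][c] / M[c][c]
                M[r] = [M[r][j] - f * M[c][j] for j in range(k + 1)]
    return [M[i][k] / M[i][i] for i in range(k)]

# synthetic data: inflation = 5.0 + 0.20*depreciation - 0.98*interest (illustrative)
dep = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
rate = [10.0, 9.0, 11.0, 12.0, 8.0, 10.5]
X = [[1.0, d, r] for d, r in zip(dep, rate)]
y = [5.0 + 0.20 * d - 0.98 * r for d, r in zip(dep, rate)]
beta = ols(X, y)  # recovers [5.0, 0.20, -0.98]
```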

  18. A Model of Exchange-Rate-Based Stabilization for Turkey

    OpenAIRE

    Ozlem Aytac

    2008-01-01

    The literature on exchange-rate-based stabilization has focused almost exclusively on Latin America. Many other countries, however, such as Egypt, Lebanon and Turkey, have undertaken this sort of program in the last 10-15 years. I depart from the existing literature by developing a model specifically for the 2000-2001 heterodox exchange-rate-based stabilization program in Turkey: When the government lowers the rate of crawl, the rate of domestic credit creation is set equal to the lower r...

  19. Modeling baroreflex regulation of heart rate during orthostatic stress

    DEFF Research Database (Denmark)

    Olufsen, Mette; Tran, Hien T.; Ottesen, Johnny T.

    2006-01-01

    During orthostatic stress, arterial and cardiopulmonary baroreflexes play a key role in maintaining arterial pressure by regulating heart rate. This study presents a mathematical model that can predict the dynamics of heart rate regulation in response to postural change from sitting to standing. The model uses blood pressure measured in the finger as an input to model heart rate dynamics in response to changes in baroreceptor nerve firing rate, sympathetic and parasympathetic responses, vestibulo-sympathetic reflex, and concentrations of norepinephrine and acetylcholine. We formulate an inverse... In healthy and hypertensive elderly people the hysteresis loop shifts to higher blood pressure values and its area is diminished. Finally, for hypertensive elderly people the hysteresis loop is generally not closed, indicating that during postural change from sitting to standing, the blood pressure resettles...

  20. A model-theory for Tachyons in two dimensions

    International Nuclear Information System (INIS)

    Recami, E.; Rodriques, W.A. Jr.

    1986-01-01

    The subject of Tachyons, even if still speculative, may deserve some attention for reasons that can be divided into a few categories, two of which are as follows. First, the larger scheme to be built up in order to incorporate space-like objects into relativistic theories allows a better understanding of many aspects of ordinary relativistic physics, even if Tachyons did not exist in our cosmos as ''asymptotically free'' objects. Second, superluminal classical objects can have a role in elementary particle interactions (perhaps even in astrophysics), and one may verify the reproduction of quantum-like behaviour at a classical level when taking into account the possible existence of faster-than-light classical particles. This paper shows that Special Relativity - even without Tachyons - can be given a form which describes both particles and anti-particles. The paper is confined to a ''model theory'' of Tachyons in two dimensions.

  1. DsixTools: the standard model effective field theory toolkit

    Energy Technology Data Exchange (ETDEWEB)

    Celis, Alejandro [Ludwig-Maximilians-Universitaet Muenchen, Fakultaet fuer Physik, Arnold Sommerfeld Center for Theoretical Physics, Munich (Germany); Fuentes-Martin, Javier; Vicente, Avelino [Universitat de Valencia-CSIC, Instituto de Fisica Corpuscular, Valencia (Spain); Virto, Javier [University of Bern, Albert Einstein Center for Fundamental Physics, Institute for Theoretical Physics, Bern (Switzerland)

    2017-06-15

    We present DsixTools, a Mathematica package for the handling of the dimension-six standard model effective field theory. Among other features, DsixTools allows the user to perform the full one-loop renormalization group evolution of the Wilson coefficients in the Warsaw basis. This is achieved thanks to the SMEFTrunner module, which implements the full one-loop anomalous dimension matrix previously derived in the literature. In addition, DsixTools also contains modules devoted to the matching to the ΔB = ΔS = 1, 2 and ΔB = ΔC = 1 operators of the Weak Effective Theory at the electroweak scale, and their QCD and QED Renormalization group evolution below the electroweak scale. (orig.)

  2. Effective-field theory on the kinetic Ising model

    International Nuclear Information System (INIS)

    Shi Xiaoling; Wei Guozhu; Li Lin

    2008-01-01

    As an analytical method, the effective-field theory (EFT) is used to study the dynamical response of the kinetic Ising model in the presence of a sinusoidal oscillating field. The effective-field equations of motion of the average magnetization are given for the square lattice (Z=4) and the simple cubic lattice (Z=6), respectively. The dynamic order parameter, the hysteresis loop area and the dynamic correlation are calculated. In the field amplitude h_0/ZJ versus temperature T/ZJ plane, the phase boundary separating the dynamically ordered and disordered phases has been drawn, and the dynamical tricritical point has been observed. We also compare the results of the EFT with those given by the mean-field theory (MFT).
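
    The mean-field benchmark the authors compare against can be integrated directly: dm/dt = -m + tanh[(ZJm + h0·cos(ωt))/T], with the dynamic order parameter Q defined as the period-averaged magnetization. A sketch with illustrative parameter values (drive period, relaxation prefactor, and the quoted temperatures are all assumptions, not the paper's):

```python
import math

def dynamic_order_parameter(T, h0, ZJ=1.0, period=100.0, dt=0.01, n_periods=20):
    """Euler-integrate the mean-field kinetic Ising equation of motion
    dm/dt = -m + tanh((ZJ*m + h0*cos(w*t))/T) and return the dynamic
    order parameter Q = average of m over the last drive cycle."""
    w = 2.0 * math.pi / period
    m, t = 1.0, 0.0
    q, count = 0.0, 0
    steps = int(n_periods * period / dt)
    last = steps - int(period / dt)
    for i in range(steps):
        m += dt * (-m + math.tanh((ZJ * m + h0 * math.cos(w * t)) / T))
        t += dt
        if i >= last:
            q += m
            count += 1
    return q / count
```

    At low temperature and weak field the magnetization oscillates around a nonzero value (Q ≠ 0, dynamically ordered phase); at high temperature it oscillates symmetrically about zero (Q ≈ 0, disordered phase), which is the transition the EFT phase boundary tracks.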

  3. Adapting the Theory of Visual Attention (TVA) to model auditory attention

    DEFF Research Database (Denmark)

    Roberts, Katherine L.; Andersen, Tobias; Kyllingsbæk, Søren

    Mathematical and computational models have provided useful insights into normal and impaired visual attention, but less progress has been made in modelling auditory attention. We are developing a Theory of Auditory Attention (TAA), based on an influential visual model, the Theory of Visual Attention (TVA). We report that TVA provides a good fit to auditory data when the stimuli are closely matched to those used in visual studies. In the basic visual TVA task, participants view a brief display of letters and are asked to report either all of the letters (whole report) or a subset of letters (e... the auditory data, producing good estimates of the rate at which information is encoded (C), the minimum exposure duration required for processing to begin (t0), and the relative attentional weight of targets versus distractors (α). Future work will address the issue of target-distractor confusion, and extend...

  4. A study of the logical model of capital market complexity theories

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    Analyzes the shortcomings of the classic capital market theories based on the EMH and discloses the complexity essence of the capital market. Considering the capital market a complicated, interactive and adaptable dynamic system, with complexity science as the method for researching the operation law of the capital market, this paper constructs a nonlinear logical model to analyze the applied realm, focal point and interrelationship of such theories as dissipative structure theory, chaos theory, fractal theory, synergetics theory, catastrophe theory and scale theory, and summarizes and discusses the achievements and problems of each theory. Based on this research, the paper predicts the future direction of complexity science in capital market research.

  5. Chern-Simons matrix models, two-dimensional Yang-Mills theory and the Sutherland model

    International Nuclear Information System (INIS)

    Szabo, Richard J; Tierz, Miguel

    2010-01-01

    We derive some new relationships between matrix models of Chern-Simons gauge theory and of two-dimensional Yang-Mills theory. We show that q-integration of the Stieltjes-Wigert matrix model is the discrete matrix model that describes q-deformed Yang-Mills theory on S^2. We demonstrate that the semiclassical limit of the Chern-Simons matrix model is equivalent to the Gross-Witten model in the weak-coupling phase. We study the strong-coupling limit of the unitary Chern-Simons matrix model and show that it too induces the Gross-Witten model, but as a first-order deformation of Dyson's circular ensemble. We show that the Sutherland model is intimately related to Chern-Simons gauge theory on S^3, and hence to q-deformed Yang-Mills theory on S^2. In particular, the ground-state wavefunction of the Sutherland model in its classical equilibrium configuration describes the Chern-Simons free energy. The correspondence is extended to Wilson line observables and to arbitrary simply laced gauge groups.

  6. An Adjusted Discount Rate Model for Fuel Cycle Cost Estimation

    Energy Technology Data Exchange (ETDEWEB)

    Kim, S. K.; Kang, G. B.; Ko, W. I. [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2013-10-15

    Owing to the diverse nuclear fuel cycle options available, including direct disposal, it is necessary to select the optimum nuclear fuel cycle in consideration of the political and social environment as well as the technical stability and economic efficiency of each country. Economic efficiency is therefore one of the significant evaluation standards. In particular, because the nuclear fuel cycle cost may vary in each country, and the estimated cost usually prevails over the real cost, any existing uncertainty needs to be removed where possible to produce reliable cost information when evaluating economic efficiency. Many countries still do not have reprocessing facilities, and no globally commercialized HLW (high-level waste) repository is available. A nuclear fuel cycle cost estimation model is therefore inevitably subject to uncertainty. This paper analyzes the uncertainty arising in a nuclear fuel cycle cost evaluation from the viewpoint of the cost estimation model. Compared with the same discount rate model, the nuclear fuel cycle cost of the different discount rate model is reduced, because the generation quantity in the denominator of the equation has been discounted. Namely, if the discount rate is reduced in the back-end process of the nuclear fuel cycle, the nuclear fuel cycle cost is also reduced. Further, it was found that the cost of the same discount rate model is overestimated compared with the different discount rate model as a whole.
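
    The discounting mechanics behind the comparison can be made concrete with a levelized unit cost, in which both the cost stream and the generation stream in the denominator are discounted. A sketch with invented numbers (not KAERI's data); the paper's same- vs different-discount-rate models apply separate rates to the front-end and back-end phases, whereas this toy uses a single rate to show only the basic mechanics:

```python
def levelized_unit_cost(costs, energies, rate):
    """Levelized unit cost: sum_t C_t/(1+r)^t divided by sum_t E_t/(1+r)^t."""
    num = sum(c / (1.0 + rate) ** t for t, c in enumerate(costs))
    den = sum(e / (1.0 + rate) ** t for t, e in enumerate(energies))
    return num / den

# back-end-heavy example: the disposal cost arrives after generation has ended
costs = [50.0, 0.0, 0.0, 100.0]   # front-end cost, then a late back-end cost
energies = [10.0, 10.0, 10.0, 0.0]
```

    With a zero rate the unit cost is simply total cost over total energy; in this toy stream a positive rate shrinks the late back-end cost faster than the earlier generation, so the unit cost falls, illustrating why the discount rate chosen for the back-end phase drives the comparison in the abstract.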

  7. An Adjusted Discount Rate Model for Fuel Cycle Cost Estimation

    International Nuclear Information System (INIS)

    Kim, S. K.; Kang, G. B.; Ko, W. I.

    2013-01-01

    Owing to the diverse nuclear fuel cycle options available, including direct disposal, it is necessary to select the optimum nuclear fuel cycle in consideration of the political and social environment as well as the technical stability and economic efficiency of each country. Economic efficiency is therefore one of the significant evaluation standards. In particular, because the nuclear fuel cycle cost may vary in each country, and the estimated cost usually prevails over the real cost, any existing uncertainty needs to be removed where possible to produce reliable cost information when evaluating economic efficiency. Many countries still do not have reprocessing facilities, and no globally commercialized HLW (high-level waste) repository is available. A nuclear fuel cycle cost estimation model is therefore inevitably subject to uncertainty. This paper analyzes the uncertainty arising in a nuclear fuel cycle cost evaluation from the viewpoint of the cost estimation model. Compared with the same discount rate model, the nuclear fuel cycle cost of the different discount rate model is reduced, because the generation quantity in the denominator of the equation has been discounted. Namely, if the discount rate is reduced in the back-end process of the nuclear fuel cycle, the nuclear fuel cycle cost is also reduced. Further, it was found that the cost of the same discount rate model is overestimated compared with the different discount rate model as a whole.

  8. Multi-Agent Market Modeling of Foreign Exchange Rates

    Science.gov (United States)

    Zimmermann, Georg; Neuneier, Ralph; Grothmann, Ralph

    A market mechanism is basically driven by a superposition of decisions of many agents optimizing their profit. The economic price dynamics are a consequence of the cumulated excess demand/supply created at this micro level. The behavior of a small number of agents is well understood through game theory. In the case of a large number of agents, one may use the limiting case that an individual agent does not have an influence on the market, which allows the aggregation of agents by statistical methods. In contrast to this restriction, we can omit the assumption of an atomic market structure if we model the market through a multi-agent approach. The contribution of the mathematical theory of neural networks to market price formation is mostly seen on the econometric side: neural networks allow the fitting of high-dimensional nonlinear dynamic models. Furthermore, in our opinion, there is a close relationship between economics and the modeling ability of neural networks, because a neuron can be interpreted as a simple model of decision making. With this in mind, a neural network models the interaction of many decisions and, hence, can be interpreted as the price formation mechanism of a market.
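
    The price-formation mechanism described (price adjusting to cumulated excess demand, each agent's decision a neuron-like squashing function) can be caricatured in a few lines. Everything here is an illustrative toy, not the authors' network: each agent demands in proportion to tanh of the gap between its private valuation and the current price, and the price follows the aggregate excess demand:

```python
import math

def clear_market(valuations, p0=0.0, eta=0.05, steps=5000):
    """Iterate p <- p + eta * sum_i tanh(v_i - p): the price is driven by
    the cumulated excess demand of tanh-'neuron' agents (positive demand
    below an agent's valuation, negative above it)."""
    p = p0
    for _ in range(steps):
        p += eta * sum(math.tanh(v - p) for v in valuations)
    return p

price = clear_market([0.9, 1.0, 1.1])
```

    For symmetric valuations the iteration converges to the price at which aggregate excess demand vanishes, here the common center 1.0, which is the micro-to-macro aggregation step the abstract identifies with a neural network layer.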

  9. Plane answers to complex questions the theory of linear models

    CERN Document Server

    Christensen, Ronald

    1987-01-01

    This book was written to rigorously illustrate the practical application of the projective approach to linear models. To some, this may seem contradictory. I contend that it is possible to be both rigorous and illustrative and that it is possible to use the projective approach in practical applications. Therefore, unlike many other books on linear models, the use of projections and subspaces does not stop after the general theory. They are used wherever I could figure out how to do it. Solving normal equations and using calculus (outside of maximum likelihood theory) are anathema to me. This is because I do not believe that they contribute to the understanding of linear models. I have similar feelings about the use of side conditions. Such topics are mentioned when appropriate and thenceforward avoided like the plague. On the other side of the coin, I just as strenuously reject teaching linear models with a coordinate free approach. Although Joe Eaton assures me that the issues in complicated problems freq...

  10. Theory of thermoluminescence gamma dose response: The unified interaction model

    International Nuclear Information System (INIS)

    Horowitz, Y.S.

    2001-01-01

    We describe the development of a comprehensive theory of thermoluminescence (TL) dose response, the unified interaction model (UNIM). The UNIM is based on both radiation absorption stage and recombination stage mechanisms and can describe dose response for heavy charged particles (in the framework of the extended track interaction model - ETIM) as well as for isotropically ionising gamma rays and electrons (in the framework of the TC/LC geminate recombination model) in a unified and self-consistent conceptual and mathematical formalism. A theory of optical absorption dose response is also incorporated in the UNIM to describe the radiation absorption stage. The UNIM is applied to the dose response supralinearity characteristics of LiF:Mg,Ti and is especially and uniquely successful in explaining the ionisation density dependence of the supralinearity of composite peak 5 in TLD-100. The UNIM is demonstrated to be capable of explaining either qualitatively or quantitatively all of the major features of TL dose response, with many of the variable parameters of the model strongly constrained by ancillary optical absorption and sensitisation measurements

  11. Dose rate modelled for the outdoors of a gamma irradiation

    International Nuclear Information System (INIS)

    Mangussi, J

    2012-01-01

    A model for calculating the absorbed dose rate in the surroundings of a gamma irradiation plant is developed. In such plants, a part of the radiation emitted upwards reaches the outdoors. The Compton scattering on the walls of the exhaust pipes through the plant roof and on the outdoor air is modelled. The absorbed dose rate generated by the scattered radiation as far as 200 m away is calculated. The results of the models, to be used for irradiation plant design and for environmental studies, are shown in graphs. (author)

  12. Variable selection for mixture and promotion time cure rate models.

    Science.gov (United States)

    Masud, Abdullah; Tu, Wanzhu; Yu, Zhangsheng

    2016-11-16

    Failure-time data with cured patients are common in clinical studies. Data from these studies are typically analyzed with cure rate models. Variable selection methods have not been well developed for cure rate models. In this research, we propose two least absolute shrinkage and selection operator (LASSO)-based methods for variable selection in mixture and promotion time cure models with parametric or nonparametric baseline hazards. We conduct an extensive simulation study to assess the operating characteristics of the proposed methods. We illustrate the use of the methods using data from a study of childhood wheezing. © The Author(s) 2016.

  13. CONTINUOUS MODELING OF FOREIGN EXCHANGE RATE OF USD VERSUS TRY

    Directory of Open Access Journals (Sweden)

    Yakup Arı

    2011-01-01

    Full Text Available This study aims to construct a continuous-time autoregressive (CAR) model and a continuous-time GARCH (COGARCH) model from discrete-time data on the foreign exchange rate of the United States Dollar (USD) versus the Turkish Lira (TRY). These processes are solutions of Lévy-driven stochastic differential equations. We have shown that CAR(1) and COGARCH(1,1) processes are proper models to represent the USD/TRY exchange rate over different periods between February 2002 and June 2010.
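
    In the Brownian special case of the Lévy driver, a CAR(1) process is the mean-reverting Ornstein-Uhlenbeck process, and a discrete-time path can be sketched with an Euler-Maruyama scheme. Parameter values below are illustrative, not the fitted USD/TRY estimates:

```python
import math
import random

def simulate_car1(theta, mu, sigma, x0, dt, n, seed=0):
    """Euler-Maruyama path of dX_t = theta*(mu - X_t) dt + sigma dW_t,
    i.e. CAR(1) with a Brownian driver; a general Levy driver would
    replace the Gaussian increments."""
    rng = random.Random(seed)
    x, path = x0, [x0]
    for _ in range(n):
        x += theta * (mu - x) * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        path.append(x)
    return path

path = simulate_car1(theta=2.0, mu=1.5, sigma=0.01, x0=3.0, dt=0.01, n=2000)
```

    With small noise the path relaxes from the starting value toward the long-run level mu, the mean-reversion behaviour the CAR(1) fit captures in the exchange-rate data.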

  14. Adapting Structuration Theory as a Comprehensive Theory for Distance Education: The ASTIDE Model

    Science.gov (United States)

    Aktaruzzaman, Md; Plunkett, Margaret

    2016-01-01

    Distance Education (DE) theorists have argued about the requirement for a theory to be comprehensive in a way that can explicate many of the activities associated with DE. Currently, Transactional Distance Theory (TDT) (Moore, 1993) and the Theory of Instructional Dialogue (IDT) (Caspi & Gorsky, 2006) are the most prominent theories, yet they…

  15. Molecular evolutionary rates are not correlated with temperature and latitude in Squamata: an exception to the metabolic theory of ecology?

    Science.gov (United States)

    Rolland, Jonathan; Loiseau, Oriane; Romiguier, Jonathan; Salamin, Nicolas

    2016-05-20

    The metabolic theory of ecology stipulates that molecular evolutionary rates should correlate with temperature and latitude in ectothermic organisms. Previous studies have shown that most groups of vertebrates, such as amphibians, turtles and even endothermic mammals, have higher molecular evolutionary rates in regions where temperature is high. However, the association between molecular evolutionary rates and temperature or latitude has never been tested in Squamata. We used a large dataset including the spatial distributions and environmental variables for 1,651 species of Squamata and compared the contrast of the rates of molecular evolution with the contrast of temperature and latitude between sister species. Using major axis regressions and a new algorithm to choose independent sister species pairs, we found that temperature and absolute latitude were not associated with molecular evolutionary rates. This absence of association in such a diverse ectothermic group questions the mechanisms explaining current pattern of species diversity in Squamata and challenges the presupposed universality of the metabolic theory of ecology.
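
    The sister-species contrasts are compared with major axis regression, a model II regression appropriate when both variables carry error. A minimal sketch of the slope computation from the sample covariance matrix (the first principal axis); the data below are toy values, not the squamate contrasts, and the helper assumes a nonzero covariance:

```python
import math

def major_axis_slope(x, y):
    """Slope of the major axis: the direction of the first principal axis
    of the 2x2 sample covariance matrix. Assumes cov(x, y) != 0."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x) / (n - 1)
    syy = sum((yi - my) ** 2 for yi in y) / (n - 1)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / (n - 1)
    d = syy - sxx
    return (d + math.sqrt(d * d + 4.0 * sxy * sxy)) / (2.0 * sxy)
```

    Unlike ordinary least squares, the major axis slope treats x and y symmetrically, which is why it suits contrast-versus-contrast comparisons between sister pairs.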

  16. Modelling non-ignorable missing data mechanisms with item response theory models

    NARCIS (Netherlands)

    Holman, Rebecca; Glas, Cornelis A.W.

    2005-01-01

    A model-based procedure for assessing the extent to which missing data can be ignored and handling non-ignorable missing data is presented. The procedure is based on item response theory modelling. As an example, the approach is worked out in detail in conjunction with item response data modelled

  17. Modelling non-ignorable missing-data mechanisms with item response theory models

    NARCIS (Netherlands)

    Holman, Rebecca; Glas, Cees A. W.

    2005-01-01

    A model-based procedure for assessing the extent to which missing data can be ignored and handling non-ignorable missing data is presented. The procedure is based on item response theory modelling. As an example, the approach is worked out in detail in conjunction with item response data modelled

  18. The pipe model theory half a century on: a review.

    Science.gov (United States)

    Lehnebach, Romain; Beyer, Robert; Letort, Véronique; Heuret, Patrick

    2018-01-23

    More than a half century ago, Shinozaki et al. (Shinozaki K, Yoda K, Hozumi K, Kira T. 1964b. A quantitative analysis of plant form - the pipe model theory. II. Further evidence of the theory and its application in forest ecology. Japanese Journal of Ecology14: 133-139) proposed an elegant conceptual framework, the pipe model theory (PMT), to interpret the observed linear relationship between the amount of stem tissue and corresponding supported leaves. The PMT brought a satisfactory answer to two vividly debated problems that were unresolved at the moment of its publication: (1) What determines tree form and which rules drive biomass allocation to the foliar versus stem compartments in plants? (2) How can foliar area or mass in an individual plant, in a stand or at even larger scales be estimated? Since its initial formulation, the PMT has been reinterpreted and used in applications, and has undoubtedly become an important milestone in the mathematical interpretation of plant form and functioning. This article aims to review the PMT by going back to its initial formulation, stating its explicit and implicit properties and discussing them in the light of current biological knowledge and experimental evidence in order to identify the validity and range of applicability of the theory. We also discuss the use of the theory in tree biomechanics and hydraulics as well as in functional-structural plant modelling. Scrutinizing the PMT in the light of modern biological knowledge revealed that most of its properties are not valid as a general rule. The hydraulic framework derived from the PMT has attracted much more attention than its mechanical counterpart and implies that only the conductive portion of a stem cross-section should be proportional to the supported foliage amount rather than the whole of it. The facts that this conductive portion is experimentally difficult to measure and varies with environmental conditions and tree ontogeny might cause the commonly

  19. Continued development of modeling tools and theory for RF heating

    International Nuclear Information System (INIS)

    1998-01-01

    Mission Research Corporation (MRC) is pleased to present the Department of Energy (DOE) with its renewal proposal to the Continued Development of Modeling Tools and Theory for RF Heating program. The objective of the program is to continue and extend the earlier work done by the proposed principal investigator in the field of modeling radio frequency (RF) heating experiments in large tokamak fusion experiments, particularly the Tokamak Fusion Test Reactor (TFTR) device located at Princeton Plasma Physics Laboratory (PPPL). An integral part of this work is the investigation and, in some cases, resolution of theoretical issues which pertain to accurate modeling. MRC is nearing the successful completion of the specified tasks of the Continued Development of Modeling Tools and Theory for RF Heating project. The following tasks are either completed or nearing completion: (1) anisotropic temperature and rotation upgrades; (2) modeling for relativistic ECRH; (3) further documentation of SHOOT and SPRUCE. As a result of the progress achieved under this project, MRC has been urged to continue this effort. Specifically, during the performance of this project two topics were identified by PPPL personnel as new applications of the existing RF modeling tools. These two topics concern (a) future fast-wave current drive experiments on the large tokamaks, including TFTR, and (b) the interpretation of existing and future RF probe data from TFTR. To address each of these topics requires some modification or enhancement of the existing modeling tools, and the first topic requires resolution of certain theoretical issues to produce self-consistent results. This work falls within the scope of the original project and is more suited to the project's renewal than to the initiation of a new project

  20. A Logistic Regression Based Auto Insurance Rate-Making Model Designed for the Insurance Rate Reform

    Directory of Open Access Journals (Sweden)

    Zhengmin Duan

    2018-02-01

    Full Text Available Using a generalized linear model to determine the claim frequency of auto insurance is a key ingredient in non-life insurance research. Among auto insurance rate-making models, there are very few considering auto types. Therefore, in this paper we propose a model that takes auto types into account by making an innovative use of the auto burden index. Based on this model and data from a Chinese insurance company, we built a clustering model that classifies auto insurance rates into three risk levels. The claim frequency and the claim costs are fitted to select a better loss distribution. Then the logistic regression model is employed to fit the claim frequency, with the auto burden index considered. Three key findings can be concluded from our study. First, more than 80% of the autos with an auto burden index of 20 or higher belong to the highest risk level. Secondly, the claim frequency is better fitted using the Poisson distribution, while the claim cost is better fitted using the Gamma distribution. Lastly, based on the AIC criterion, the claim frequency is more adequately represented by models that consider the auto burden index than by those that do not. It is believed that insurance policy recommendations based on generalized linear models (GLMs) can benefit from our findings.
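
    The claim-frequency step is a GLM fit; since the abstract finds the Poisson distribution fits claim frequency best, the sketch below is a log-link Poisson GLM with a single covariate standing in for the auto burden index. The data and coefficients are synthetic, not the insurer's, and the fitter is a damped Newton (IRLS-style) iteration:

```python
import math

def loglik(b0, b1, x, y):
    """Poisson log-likelihood (up to a constant) for E[y] = exp(b0 + b1*x)."""
    return sum(yi * (b0 + b1 * xi) - math.exp(b0 + b1 * xi)
               for xi, yi in zip(x, y))

def fit_poisson(x, y, iters=100):
    """Newton's method with step halving for a log-link Poisson GLM."""
    n = len(x)
    b0, b1 = math.log(sum(y) / n), 0.0   # start at the intercept-only MLE
    for _ in range(iters):
        mu = [math.exp(b0 + b1 * xi) for xi in x]
        g0 = sum(yi - mi for yi, mi in zip(y, mu))           # score
        g1 = sum((yi - mi) * xi for yi, mi, xi in zip(y, mu, x))
        h00 = sum(mu)                                        # Fisher information
        h01 = sum(mi * xi for mi, xi in zip(mu, x))
        h11 = sum(mi * xi * xi for mi, xi in zip(mu, x))
        det = h00 * h11 - h01 * h01
        d0 = (h11 * g0 - h01 * g1) / det
        d1 = (h00 * g1 - h01 * g0) / det
        step = 1.0
        while loglik(b0 + step * d0, b1 + step * d1, x, y) < loglik(b0, b1, x, y):
            step *= 0.5          # halve until the likelihood improves
        b0, b1 = b0 + step * d0, b1 + step * d1
    return b0, b1

# synthetic data with known coefficients (illustrative only)
x = [i / 10.0 for i in range(20)]
y = [math.exp(0.2 + 1.0 * xi) for xi in x]
b0, b1 = fit_poisson(x, y)  # recovers (0.2, 1.0)
```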

  1. Prediction of interest rate using CKLS model with stochastic parameters

    International Nuclear Information System (INIS)

    Ying, Khor Chia; Hin, Pooi Ah

    2014-01-01

    The Chan, Karolyi, Longstaff and Sanders (CKLS) model is a popular one-factor model for describing spot interest rates. In this paper, the four parameters in the CKLS model are regarded as stochastic. The parameter vector φ^(j) of four parameters at the (j+n)-th time point is estimated by the j-th window, which is defined as the set consisting of the observed interest rates at the j′-th time points where j≤j′≤j+n. To model the variation of φ^(j), we assume that φ^(j) depends on φ^(j−m), φ^(j−m+1), …, φ^(j−1) and the interest rate r_(j+n) at the (j+n)-th time point via a four-dimensional conditional distribution which is derived from a [4(m+1)+1]-dimensional power-normal distribution. Treating the (j+n)-th time point as the present time point, we find a prediction interval for the future value r_(j+n+1) of the interest rate at the next time point when the value r_(j+n) of the interest rate is given. From the above four-dimensional conditional distribution, we also find a prediction interval for the future interest rate r_(j+n+d) at the next d-th (d≥2) time point. The prediction intervals based on the CKLS model with stochastic parameters are found to cover the observed future interest rates better than those based on the model with fixed parameters.
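
    The underlying CKLS dynamics dr = (α + βr)dt + σ·r^γ dW can be simulated with an Euler-Maruyama scheme. A sketch with illustrative parameter values (not estimates from data); the rate is floored at zero inside the diffusion term to keep the fractional power well defined:

```python
import math
import random

def simulate_ckls(alpha, beta, sigma, gamma, r0, dt, n, seed=1):
    """Euler-Maruyama path of the CKLS short-rate SDE
    dr = (alpha + beta*r) dt + sigma * r^gamma dW."""
    rng = random.Random(seed)
    r, path = r0, [r0]
    for _ in range(n):
        dw = math.sqrt(dt) * rng.gauss(0.0, 1.0)
        r += (alpha + beta * r) * dt + sigma * max(r, 0.0) ** gamma * dw
        path.append(r)
    return path

# with sigma = 0 the path is the deterministic mean-reverting drift,
# relaxing toward the long-run level -alpha/beta = 0.1
path = simulate_ckls(alpha=0.05, beta=-0.5, sigma=0.0, gamma=0.5,
                     r0=0.02, dt=0.01, n=2000)
```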

  2. Prediction of interest rate using CKLS model with stochastic parameters

    Energy Technology Data Exchange (ETDEWEB)

    Ying, Khor Chia [Faculty of Computing and Informatics, Multimedia University, Jalan Multimedia, 63100 Cyberjaya, Selangor (Malaysia); Hin, Pooi Ah [Sunway University Business School, No. 5, Jalan Universiti, Bandar Sunway, 47500 Subang Jaya, Selangor (Malaysia)

    2014-06-19

    The Chan, Karolyi, Longstaff and Sanders (CKLS) model is a popular one-factor model for describing spot interest rates. In this paper, the four parameters of the CKLS model are regarded as stochastic. The parameter vector φ^(j) of four parameters at the (j+n)-th time point is estimated from the j-th window, defined as the set of observed interest rates at the j′-th time points where j ≤ j′ ≤ j+n. To model the variation of φ^(j), we assume that φ^(j) depends on φ^(j−m), φ^(j−m+1), …, φ^(j−1) and the interest rate r_(j+n) at the (j+n)-th time point via a four-dimensional conditional distribution which is derived from a [4(m+1)+1]-dimensional power-normal distribution. Treating the (j+n)-th time point as the present time point, we find a prediction interval for the future value r_(j+n+1) of the interest rate at the next time point when the value r_(j+n) of the interest rate is given. From the same four-dimensional conditional distribution, we also find a prediction interval for the future interest rate r_(j+n+d) at the next d-th (d ≥ 2) time point. The prediction intervals based on the CKLS model with stochastic parameters are found to cover the observed future interest rates better than those based on the model with fixed parameters.
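
The base CKLS dynamics, dr = (α + βr)dt + σ r^γ dW, can be simulated with a simple Euler-Maruyama scheme, sketched below. The parameter values are hypothetical, and the parameters are held fixed: this illustrates only the underlying one-factor model, not the paper's stochastic-parameter extension or its power-normal prediction intervals.

```python
import numpy as np

def simulate_ckls(r0, alpha, beta, sigma, gamma, dt, n_steps, rng):
    """Euler-Maruyama path of dr = (alpha + beta*r) dt + sigma * r^gamma dW."""
    r = np.empty(n_steps + 1)
    r[0] = r0
    for t in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(dt))
        drift = (alpha + beta * r[t]) * dt
        diffusion = sigma * max(r[t], 0.0) ** gamma * dW  # clamp to avoid r<0 ** gamma
        r[t + 1] = r[t] + drift + diffusion
    return r

rng = np.random.default_rng(1)
# Hypothetical parameters: mean reversion toward alpha / (-beta) = 5%
path = simulate_ckls(r0=0.03, alpha=0.01, beta=-0.2, sigma=0.05, gamma=0.5,
                     dt=1 / 252, n_steps=2520, rng=rng)
print(path[-1])
```

With γ = 0.5 and these coefficients the scheme reduces to a CIR-type square-root process; changing γ recovers the other nested short-rate models the CKLS family contains.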

  3. Diffusion theory model for optimization calculations of cold neutron sources

    International Nuclear Information System (INIS)

    Azmy, Y.Y.

    1987-01-01

    Cold neutron sources are becoming increasingly important and common experimental facilities at many research reactors around the world, owing to the high utility of cold neutrons in scattering experiments. The authors describe a simple two-group diffusion model of an infinite-slab liquid-deuterium (LD₂) cold source. The simplicity of the model permits an analytical solution from which one can deduce the reason for the optimum thickness based solely on diffusion-type phenomena. A second, more sophisticated model is also described and its results compared to a deterministic transport calculation. The good (particularly qualitative) agreement between the results suggests that diffusion theory methods can be used in parametric and optimization studies to avoid the generally more expensive transport calculations.
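
That a diffusion-only description can exhibit an optimum moderator thickness can be reproduced with a deliberately crude toy: a one-group cold flux driven by an exponentially attenuating fast-neutron source in a slab, solved by finite differences. The single-group simplification and every parameter value below are assumptions for illustration, not the two-group model of the paper.

```python
import numpy as np

def cold_current_out(L, D=1.0, sig_a=0.05, sig_r=0.3, phi0=1.0, n=400):
    """Solve D*phi'' - sig_a*phi + sig_r*phi0*exp(-sig_r*x) = 0 on [0, L]
    with phi(0) = phi(L) = 0; return the cold current out of the far face,
    J = -D * phi'(L), via a finite-difference tridiagonal solve."""
    x = np.linspace(0.0, L, n)
    h = x[1] - x[0]
    q = sig_r * phi0 * np.exp(-sig_r * x)      # cold-source density
    main = -2.0 * D / h**2 - sig_a             # interior-node diagonal
    A = (np.diag(np.full(n - 2, main))
         + np.diag(np.full(n - 3, D / h**2), 1)
         + np.diag(np.full(n - 3, D / h**2), -1))
    phi = np.zeros(n)
    phi[1:-1] = np.linalg.solve(A, -q[1:-1])
    return -D * (phi[-1] - phi[-2]) / h

thicknesses = np.linspace(2.0, 40.0, 39)
currents = [cold_current_out(L) for L in thicknesses]
L_opt = thicknesses[int(np.argmax(currents))]
print(L_opt)  # interior optimum: thin slabs produce too little, thick slabs reabsorb
```

The competition is visible in the two length scales: production decays over 1/sig_r while cold neutrons survive diffusion only over sqrt(D/sig_a), so the outgoing current peaks at an intermediate thickness.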

  4. Lattice Gauge Theories Within and Beyond the Standard Model

    Energy Technology Data Exchange (ETDEWEB)

    Gelzer, Zechariah John [Iowa U.

    2017-01-01

    The Standard Model of particle physics has been very successful in describing fundamental interactions up to the highest energies currently probed in particle accelerator experiments. However, the Standard Model is incomplete and currently exhibits tension with experimental data for interactions involving $B$~mesons. Consequently, $B$-meson physics is of great interest to both experimentalists and theorists. Experimentalists worldwide are studying the decay and mixing processes of $B$~mesons in particle accelerators. Theorists are working to understand the data by employing lattice gauge theories within and beyond the Standard Model. This work addresses the theoretical effort and is divided into two main parts. In the first part, I present a lattice-QCD calculation of form factors for exclusive semileptonic decays of $B$~mesons that are mediated by both charged currents ($B \\to \\pi \\ell \

  5. A queueing theory based model for business continuity in hospitals.

    Science.gov (United States)

    Miniati, R; Cecconi, G; Dori, F; Frosini, F; Iadanza, E; Biffi Gentili, G; Niccolini, F; Gusinu, R

    2013-01-01

    Clinical activities can be seen as the result of a precise and defined succession of events, where every single phase is characterized by a waiting time that includes working duration and possible delay. Technology is part of this process. For proper business continuity management, planning the minimum number of devices according to the working load alone is not enough. A risk analysis of the whole process should be carried out in order to define which interventions and extra purchases have to be made. Markov models and reliability engineering approaches can be used for evaluating the possible interventions and protecting the whole system from technology failures. This paper reports a case study on the application of the proposed integrated model, comprising a risk analysis approach and a queueing theory model, for defining the proper number of devices essential to guarantee medical activity and comply with business continuity management requirements in hospitals.
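
A minimal queueing-theory calculation of the kind the abstract describes is sizing a pool of identical devices as an M/M/c queue: given an arrival rate λ, a service rate μ and a target probability of waiting, the Erlang-C formula yields the smallest adequate c. The workload figures below are hypothetical, and this sketch omits the paper's risk-analysis component.

```python
import math

def erlang_c(c, a):
    """Probability that an arrival must wait in an M/M/c queue
    with offered load a = lam / mu (Erlang-C formula)."""
    if a >= c:
        return 1.0  # unstable regime: queue grows without bound
    s = sum(a**k / math.factorial(k) for k in range(c))
    top = a**c / math.factorial(c) * c / (c - a)
    return top / (s + top)

def min_devices(lam, mu, p_wait_max):
    """Smallest number of identical devices keeping P(wait) below target."""
    a = lam / mu
    c = max(1, math.ceil(a))
    while erlang_c(c, a) > p_wait_max:
        c += 1
    return c

# Hypothetical workload: 12 requests/hour, each occupying a device 30 min on average
print(min_devices(lam=12, mu=2, p_wait_max=0.05))
```

Note how the answer exceeds the naive load-based count of ceil(λ/μ) = 6 devices: the extra units are exactly the continuity margin that planning "according to the working load only" would miss.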

  6. Density Functional Theory and Materials Modeling at Atomistic Length Scales

    Directory of Open Access Journals (Sweden)

    Swapan K. Ghosh

    2002-04-01

    We discuss the basic concepts of density functional theory (DFT) as applied to materials modeling at the microscopic, mesoscopic and macroscopic length scales. The picture that emerges is that of a single unified framework for the study of both quantum and classical systems. While for quantum DFT the central equation is a one-particle Schrödinger-like Kohn-Sham equation, classical DFT consists of Boltzmann-type distributions, both corresponding to a system of noninteracting particles in the field of a density-dependent effective potential, the exact functional form of which is unknown. One therefore approximates the exchange-correlation potential for quantum systems, and the excess free energy density functional or the direct correlation functions for classical systems. Illustrative applications of quantum DFT to microscopic modeling of molecular interactions, and of classical DFT to mesoscopic modeling of soft condensed matter systems, are highlighted.

  7. Criticism of the Classical Theory of Macroeconomic Modeling

    Directory of Open Access Journals (Sweden)

    Konstantin K. Kumehov

    2015-01-01

    Current approaches to and methods of modeling macroeconomic systems do not generate research ideas that can be used in applications. This is largely because the dominant economic schools and research directions build their theories on misconceptions about the economic system as an object of modeling, and share no common methodological approach to the design of macroeconomic models. All of them focus on building models aimed at establishing equilibrium parameters of supply and demand, production and consumption, while underlying factors such as resource potential and society's needs for material and other benefits are not considered. In addition, there is no unity in the choice of elements and of the mechanisms of interaction between them: it has not been established what the criteria are for determining the elements of a model (institutions, industries, the population, banks, classes, etc.). From a methodological point of view, all the best-known authors design their new models by extrapolating from past states or past events. As a result, by the time a model is ready the situation has changed, the past parameters underlying the model have lost relevance, and at best the researcher is left interpreting events and parameters that will not recur in the future. In this paper, based on an analysis of the works of well-known authors belonging to different schools and areas, the weaknesses of their proposed macroeconomic models that prevent their use in solving applied problems of economic development are revealed. Fundamentally new approaches and methods are proposed by which macroeconomic models can be constructed that take into account both the theoretical and applied aspects of modeling, and the basic methodological requirements are formulated.

  8. ECONOMETRIC APPROACH TO DIFFERENCE EQUATIONS MODELING OF EXCHANGE RATES CHANGES

    Directory of Open Access Journals (Sweden)

    Josip Arnerić

    2010-12-01

    Time series models commonly used in econometric modeling are autoregressive stochastic linear models (AR) and moving-average models (MA). By their structure, these models are in fact stochastic difference equations. The objective of this paper is therefore to estimate difference equations containing a stochastic (random) component. The estimated time series models are then used to forecast the observed data into the future; the solutions of the difference equations are closely related to the stationarity conditions of time series models. Given that volatility is time-varying in high-frequency data and that periods of high volatility tend to cluster, the most successful and popular models of time-varying volatility are GARCH-type models and their variants. However, GARCH models are not analyzed here, because the purpose of this research is to predict the value of the exchange rate in levels within the conditional-mean equation and to determine whether the observed variable has a stable or explosive time path. Based on the estimated difference equation, it is examined whether Croatia is implementing a stable exchange rate policy.
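
A minimal sketch of the estimation task described above: fit the first-order stochastic difference equation y_t = c + φ·y_{t-1} + e_t by OLS and inspect the root φ, with |φ| < 1 indicating a stable time path and |φ| ≥ 1 an explosive one. The series here is simulated with an arbitrary seed and coefficients, not Croatian exchange-rate data.

```python
import numpy as np

def fit_ar1(y):
    """OLS estimate of y_t = c + phi * y_{t-1} + e_t."""
    X = np.column_stack([np.ones(len(y) - 1), y[:-1]])
    c, phi = np.linalg.lstsq(X, y[1:], rcond=None)[0]
    return c, phi

rng = np.random.default_rng(42)
# Simulate a stable exchange-rate-like series with root phi = 0.9
y = np.empty(1000)
y[0] = 7.5
for t in range(1, 1000):
    y[t] = 0.75 + 0.9 * y[t - 1] + rng.normal(0, 0.02)

c, phi = fit_ar1(y)
print(phi, abs(phi) < 1)  # estimated root and stability verdict
```

The homogeneous solution of the fitted difference equation behaves like φ^t, which is exactly why the estimated root, rather than the fit itself, answers the stable-versus-explosive question.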

  9. The Gaussian streaming model and convolution Lagrangian effective field theory

    Energy Technology Data Exchange (ETDEWEB)

    Vlah, Zvonimir [Stanford Institute for Theoretical Physics and Department of Physics, Stanford University, Stanford, CA 94306 (United States); Castorina, Emanuele; White, Martin, E-mail: zvlah@stanford.edu, E-mail: ecastorina@berkeley.edu, E-mail: mwhite@berkeley.edu [Department of Physics, University of California, Berkeley, CA 94720 (United States)

    2016-12-01

    We update the ingredients of the Gaussian streaming model (GSM) for the redshift-space clustering of biased tracers using the techniques of Lagrangian perturbation theory, effective field theory (EFT) and a generalized Lagrangian bias expansion. After relating the GSM to the cumulant expansion, we present new results for the real-space correlation function, mean pairwise velocity and pairwise velocity dispersion including counter terms from EFT and bias terms through third order in the linear density, its leading derivatives and its shear up to second order. We discuss the connection to the Gaussian peaks formalism. We compare the ingredients of the GSM to a suite of large N-body simulations, and show the performance of the theory on the low order multipoles of the redshift-space correlation function and power spectrum. We highlight the importance of a general biasing scheme, which we find to be as important as higher-order corrections due to non-linear evolution for the halos we consider on the scales of interest to us.

  10. Non local theory of excitations applied to the Hubbard model

    International Nuclear Information System (INIS)

    Kakehashi, Y; Nakamura, T; Fulde, P

    2010-01-01

    We propose a nonlocal theory of single-particle excitations. It is based on an off-diagonal effective medium and the projection operator method for treating the retarded Green function. The theory determines the nonlocal effective-medium matrix elements by requiring that they be consistent with those of the self-energy of the Green function. This allows for a description of long-range intersite correlations with high resolution in momentum space. A numerical study of the half-filled Hubbard model on the simple cubic lattice demonstrates that the theory is applicable to the strong-correlation regime as well as the intermediate regime of Coulomb interaction strength. Furthermore, the results show that nonlocal excitations cause sub-bands in the strong Coulomb interaction regime due to strong antiferromagnetic correlations, decrease the quasi-particle peak at the Fermi level with increasing Coulomb interaction, and shift the critical Coulomb interaction U_C2 for the divergence of the effective mass towards higher energies, at least by a factor of two compared with the single-site approximation.

  11. Multiagent model and mean field theory of complex auction dynamics

    Science.gov (United States)

    Chen, Qinghua; Huang, Zi-Gang; Wang, Yougui; Lai, Ying-Cheng

    2015-09-01

    Recent years have witnessed a growing interest in analyzing a variety of socio-economic phenomena using methods from statistical and nonlinear physics. We study a class of complex systems arising from economics, the lowest unique bid auction (LUBA) systems, which is a recently emerged class of online auction game systems. Through analyzing large, empirical data sets of LUBA, we identify a general feature of the bid price distribution: an inverted J-shaped function with exponential decay in the large bid price region. To account for the distribution, we propose a multi-agent model in which each agent bids stochastically in the field of winner’s attractiveness, and develop a theoretical framework to obtain analytic solutions of the model based on mean field analysis. The theory produces bid-price distributions that are in excellent agreement with those from the real data. Our model and theory capture the essential features of human behaviors in the competitive environment as exemplified by LUBA, and may provide significant quantitative insights into complex socio-economic phenomena.

  12. Multiagent model and mean field theory of complex auction dynamics

    International Nuclear Information System (INIS)

    Chen, Qinghua; Wang, Yougui; Huang, Zi-Gang; Lai, Ying-Cheng

    2015-01-01

    Recent years have witnessed a growing interest in analyzing a variety of socio-economic phenomena using methods from statistical and nonlinear physics. We study a class of complex systems arising from economics, the lowest unique bid auction (LUBA) systems, which is a recently emerged class of online auction game systems. Through analyzing large, empirical data sets of LUBA, we identify a general feature of the bid price distribution: an inverted J-shaped function with exponential decay in the large bid price region. To account for the distribution, we propose a multi-agent model in which each agent bids stochastically in the field of winner’s attractiveness, and develop a theoretical framework to obtain analytic solutions of the model based on mean field analysis. The theory produces bid-price distributions that are in excellent agreement with those from the real data. Our model and theory capture the essential features of human behaviors in the competitive environment as exemplified by LUBA, and may provide significant quantitative insights into complex socio-economic phenomena. (paper)
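
A toy agent-based LUBA simulation in the spirit of the model described in the two records above (though not their mean-field treatment): each agent draws a bid from an exponentially decaying propensity over a price grid, and the lowest price chosen by exactly one agent wins. The agent count, price grid and decay rate are arbitrary assumptions.

```python
import numpy as np

def run_luba(n_agents, max_bid, beta, n_auctions, rng):
    """Repeated lowest-unique-bid auctions with stochastic agents.
    Bid propensity decays as exp(-beta * price); the winner of each
    auction is the lowest price placed by exactly one agent."""
    prices = np.arange(1, max_bid + 1)
    p = np.exp(-beta * prices)
    p /= p.sum()
    winners = []
    for _ in range(n_auctions):
        bids = rng.choice(prices, size=n_agents, p=p)
        counts = np.bincount(bids, minlength=max_bid + 1)
        unique = np.where(counts == 1)[0]
        if unique.size:                    # some auctions have no unique bid
            winners.append(unique[0])
    return np.array(winners)

rng = np.random.default_rng(7)
wins = run_luba(n_agents=50, max_bid=100, beta=0.05, n_auctions=2000, rng=rng)
print(wins.mean())
```

Histogramming `wins` and the agents' raw bids from such runs is the natural way to compare a stochastic bidding rule against the inverted-J price distribution reported in the empirical data.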

  13. The SMART Theory and Modeling Team: An Integrated Element of Mission Development and Science Analysis

    Science.gov (United States)

    Hesse, Michael; Birn, J.; Denton, Richard E.; Drake, J.; Gombosi, T.; Hoshino, M.; Matthaeus, B.; Sibeck, D.

    2005-01-01

    When targeting physical understanding of space plasmas, our focus is gradually shifting away from discovery-type investigations to missions and studies that address our basic understanding of processes we know to be important. For these studies, theory and models provide physical predictions that need to be verified or falsified by empirical evidence. Within this paradigm, a tight integration between theory, modeling, and space flight mission design and execution is essential. NASA's Magnetospheric MultiScale (MMS) mission is a pathfinder in this new era of space research. The prime objective of MMS is to understand magnetic reconnection, arguably the most fundamental of plasma processes. In particular, MMS targets the microphysical processes which permit magnetic reconnection to operate in the collisionless plasmas that permeate space and astrophysical systems. More specifically, MMS will provide closure to such elemental questions as how particles become demagnetized in the reconnection diffusion region, which effects determine the reconnection rate, and how reconnection is coupled to environmental conditions such as magnetic shear angles. Solutions to these problems have remained elusive in past and present spacecraft missions, primarily due to instrumental limitations; yet they are fundamental to the large-scale dynamics of collisionless plasmas. Owing to the lack of measurements, most of our present knowledge of these processes is based on results from modern theory and modeling studies of the reconnection process. Proper design and execution of a mission targeting magnetic reconnection should include this knowledge and must ensure that all relevant scales and effects can be resolved by mission measurements. The SMART mission has responded to this need through a tight integration between instrument teams and theory and modeling teams. Input from theory and modeling is fed into all aspects of science mission design, and theory and modeling activities are tailored accordingly.

  14. A model for reaction rates in turbulent reacting flows

    Science.gov (United States)

    Chinitz, W.; Evans, J. S.

    1984-01-01

    To account for turbulent temperature and species-concentration fluctuations, a model is presented for their effects on chemical reaction rates in computer analyses of turbulent reacting flows. The model yields two parameters which multiply the terms in the reaction-rate equations. For these two parameters, graphs are presented as functions of the mean values and intensity of the turbulent fluctuations of the temperature and species concentrations. These graphs facilitate incorporation of the model into existing computer programs that describe turbulent reacting flows. When the model was used in a two-dimensional parabolic-flow computer code to predict the behavior of an experimental supersonic hydrogen jet burning in air, some improvement in agreement with the experimental data was obtained in the far field, in the region near the jet centerline. Recommendations are included for further improvement of the model and for additional comparisons with experimental data.
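
The core effect such a model captures, that temperature fluctuations change the mean reaction rate relative to the rate evaluated at the mean temperature, can be demonstrated with a Monte Carlo sketch over an Arrhenius rate law. The activation temperature, fluctuation intensity and Gaussian fluctuation model below are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

def arrhenius(T, A=1.0, Ta=15000.0):
    """Arrhenius rate constant k(T) = A * exp(-Ta / T)."""
    return A * np.exp(-Ta / T)

rng = np.random.default_rng(3)
T_mean, T_rms = 1500.0, 150.0          # mean temperature and fluctuation intensity (K)
T = rng.normal(T_mean, T_rms, 200_000)
T = T[T > 500.0]                       # discard unphysically cold samples

# Ratio of the fluctuation-averaged rate to the rate at the mean temperature;
# this is the kind of multiplicative correction factor the model tabulates.
amplification = arrhenius(T).mean() / arrhenius(T_mean)
print(amplification)
```

Because exp(-Ta/T) is strongly convex in T at these conditions, the averaged rate exceeds the rate at the mean temperature, which is precisely why a laminar-chemistry code needs a fluctuation correction factor.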

  15. Irreducible gauge theory of a consolidated Salam-Weinberg model

    International Nuclear Information System (INIS)

    Ne'eman, Y.

    1979-01-01

    The Salam-Weinberg model is derived by gauging an internal simple supergroup SU(2/1). The theory uniquely assigns the correct SU(2)_L × U(1) eigenvalues for all leptons, fixes θ_W = 30°, generates the W±_σ, Z⁰_σ and A_σ together with the Higgs-Goldstone I_L = 1/2 scalar multiplets as gauge fields, and imposes the standard spontaneous breakdown of SU(2)_L × U(1). The masses of intermediate bosons and fermions are directly generated by SU(2/1) universality, which also fixes the Higgs field coupling. (Auth.)

  16. Ferromagnetism in the Hubbard model: a modified perturbation theory

    International Nuclear Information System (INIS)

    Gangadhar Reddy, G.; Ramakanth, A.; Nolting, W.

    2005-01-01

    We study the possibility of ferromagnetism in the Hubbard model using modified perturbation theory. In this approach an ansatz is made for the self-energy of the electron which contains the second-order contribution developed around the Hartree-Fock solution and two parameters. The parameters are fixed using a moment method. This self-energy satisfies several known exact limiting cases. Using this self-energy, the Curie temperature T_c as a function of band filling n is investigated. It is found that T_c falls off abruptly as n approaches half filling. The results are in qualitative agreement with earlier calculations using other approximation schemes. (author)

  17. Mean-field theory and self-consistent dynamo modeling

    International Nuclear Information System (INIS)

    Yoshizawa, Akira; Yokoi, Nobumitsu

    2001-12-01

    Mean-field theory of dynamo is discussed with emphasis on the statistical formulation of turbulence effects on the magnetohydrodynamic equations and the construction of a self-consistent dynamo model. The dynamo mechanism is sought in the combination of the turbulent residual-helicity and cross-helicity effects. On the basis of this mechanism, discussions are made on the generation of planetary magnetic fields such as geomagnetic field and sunspots and on the occurrence of flow by magnetic fields in planetary and fusion phenomena. (author)

  18. Morphing the Shell Model into an Effective Theory

    International Nuclear Information System (INIS)

    Haxton, W. C.; Song, C.-L.

    2000-01-01

    We describe a strategy for attacking the canonical nuclear structure problem (bound-state properties of a system of point nucleons interacting via a two-body potential) which involves an expansion in the number of particles scattering at high momenta, but is otherwise exact. The required self-consistent solutions of the Bloch-Horowitz equation for effective interactions and operators are obtained by an efficient Green's function method based on the Lanczos algorithm. We carry out this program for the simplest nuclei, d and ³He, in order to explore the consequences of reformulating the shell model as a controlled effective theory. (c) 2000 The American Physical Society
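
The Green's function machinery mentioned above rests on the standard Lanczos tridiagonalization, sketched here on a random symmetric matrix standing in for a Hamiltonian: a few dozen iterations already reproduce the extreme (ground-state) eigenvalue of a 200×200 matrix. This is a generic illustration of the algorithm, not the authors' Bloch-Horowitz implementation.

```python
import numpy as np

def lanczos(A, v0, m):
    """m-step Lanczos tridiagonalization of symmetric A started from v0.
    Eigenvalues of the returned tridiagonal matrix T approximate the
    extreme eigenvalues of A (no reorthogonalization, for brevity)."""
    n = len(v0)
    v = v0 / np.linalg.norm(v0)
    v_prev = np.zeros(n)
    alphas, betas = [], []
    beta = 0.0
    for _ in range(m):
        w = A @ v - beta * v_prev
        alpha = v @ w
        w -= alpha * v
        beta = np.linalg.norm(w)
        alphas.append(alpha)
        betas.append(beta)
        if beta < 1e-12:       # invariant subspace found; stop early
            break
        v_prev, v = v, w / beta
    T = np.diag(alphas) + np.diag(betas[:-1], 1) + np.diag(betas[:-1], -1)
    return T

rng = np.random.default_rng(5)
M = rng.normal(size=(200, 200))
H = (M + M.T) / 2                       # symmetric stand-in "Hamiltonian"
T = lanczos(H, rng.normal(size=200), 40)
ground_lanczos = np.linalg.eigvalsh(T)[0]
ground_exact = np.linalg.eigvalsh(H)[0]
print(ground_lanczos, ground_exact)
```

The appeal for shell-model-sized problems is that the loop touches H only through matrix-vector products, so H never needs to be stored or diagonalized densely.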

  19. Lagrangian model of conformal invariant interacting quantum field theory

    International Nuclear Information System (INIS)

    Lukierski, J.

    1976-01-01

    A Lagrangian model of conformal-invariant interacting quantum field theory is presented. The interacting and free Lagrangians are derived by replacing the canonical field φ with the field operator Φ_d^c and introducing the conformal-invariant interaction Lagrangian. It is suggested that in the conformal-invariant QFT with the dimensionality α_B obtained from the bootstrap equation, the normalization constant c of the propagator and the coupling parameter y do not necessarily need to satisfy the relation x_B = φ²c³.

  20. New Trends in Model Coupling Theory, Numerics and Applications

    International Nuclear Information System (INIS)

    Coquel, F.; Godlewski, E.; Herard, J. M.; Segre, J.

    2010-01-01

    This special issue comprises selected papers from the workshop New Trends in Model Coupling, Theory, Numerics and Applications (NTMC'09), which took place in Paris, September 2-4, 2009. The search for optimal technological solutions in a large number of industrial systems requires numerical simulations of complex phenomena which are often characterized by the coupling of models related to various space and/or time scales. Thus, so-called multi-scale modelling has been a thriving scientific activity connecting applied mathematics with other disciplines such as physics, chemistry, biology or even the social sciences. To illustrate the variety of fields in which model coupling naturally occurs, we may quote: meteorology, where several turbulence scales and the interaction between oceans and atmosphere must be taken into account, as well as regional models within a global description; solid mechanics, where a thorough understanding of complex phenomena such as crack propagation requires coupling models from the atomistic to the macroscopic level; plasma physics for fusion energy, where dense plasmas and collisionless plasmas coexist; multiphase fluid dynamics, where several types of flow corresponding to several types of models are present simultaneously in complex circuits; and social behaviour analysis, with interaction between individual actions and collective behaviour. (authors)