Modeling irradiation creep of graphite using rate theory
Sarkar, Apu [Department of Nuclear Engineering, North Carolina State University, Raleigh, NC 27695 (United States); Eapen, Jacob, E-mail: jacob.eapen@ncsu.edu [Department of Nuclear Engineering, North Carolina State University, Raleigh, NC 27695 (United States); Raj, Anant; Murty, K.L. [Department of Nuclear Engineering, North Carolina State University, Raleigh, NC 27695 (United States); Burchell, T.D. [Fusion Materials & Nuclear Structures, Oak Ridge National Laboratory, Oak Ridge, TN 37831 (United States)
2016-05-15
We have examined irradiation-induced creep of graphite in the framework of transition state rate theory. Experimental data for two grades of nuclear graphite (H-337 and AGOT) have been analyzed to determine the stress exponent (n) and activation energy (Q) for plastic flow under irradiation. We show that the mean activation energy lies between 0.14 and 0.32 eV, with a mean stress exponent of 1.0 ± 0.2. A stress exponent of unity and the unusually low activation energies strongly indicate a diffusive defect transport mechanism for neutron doses in the range of 3-4 × 10^22 n/cm^2.
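The stress exponent and activation energy quoted in this abstract are typically extracted by log-linear regression of steady-state creep rate against stress and inverse temperature. A minimal sketch of such a fit, with entirely hypothetical data chosen to be consistent with the quoted ranges (not the paper's H-337/AGOT measurements):

```python
import numpy as np

# Hypothetical steady-state irradiation creep rates (1/s) at several
# stresses (MPa) and temperatures (K); illustrative values only.
stress = np.array([10.0, 15.0, 20.0, 10.0, 15.0, 20.0])
temp   = np.array([600.0, 600.0, 600.0, 700.0, 700.0, 700.0])
rate   = np.array([2.1e-10, 3.2e-10, 4.3e-10, 4.0e-10, 6.1e-10, 8.0e-10])

k_B = 8.617e-5  # Boltzmann constant, eV/K

# Fit ln(rate) = ln(A) + n*ln(stress) - Q/(k_B*T) by linear least squares.
X = np.column_stack([np.ones_like(rate), np.log(stress), -1.0 / (k_B * temp)])
coeffs, *_ = np.linalg.lstsq(X, np.log(rate), rcond=None)
lnA, n, Q = coeffs
print(f"stress exponent n = {n:.2f}, activation energy Q = {Q:.2f} eV")
```

With data of this shape, n near unity and a sub-electronvolt Q fall out of the same regression, mirroring the analysis the abstract describes.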
Hedging LIBOR Derivatives in a Field Theory Model of Interest Rates
Baaquie, B E; Warachka, M C; Baaquie, Belal E.; Liang, Cui; Warachka, Mitch C.
2005-01-01
We investigate LIBOR-based derivatives using a parsimonious field theory interest rate model capable of instilling imperfect correlation between different maturities. Delta and Gamma hedge parameters are derived for LIBOR Caps and Floors against fluctuations in underlying forward rates. An empirical illustration of our methodology is also conducted to demonstrate the influence of correlation on the hedging of interest rate risk.
The stochastic string model as a unifying theory of the term structure of interest rates
Bueno-Guerrero, Alberto; Moreno, Manuel; Navas, Javier F.
2016-11-01
We present the stochastic string model of Santa-Clara and Sornette (2001), as reformulated by Bueno-Guerrero et al. (2015), as a unifying theory of the continuous-time modeling of the term structure of interest rates. We provide several new results, such as: (a) an orthogonality condition for the volatilities in the Heath, Jarrow, and Morton (1992) (HJM) model, (b) the interpretation of multi-factor HJM models as approximations to a full infinite-dimensional model, (c) a result of consistency based on Hilbert spaces, and (d) a theorem for option valuation.
Situated learning theory: adding rate and complexity effects via Kauffman's NK model.
Yuan, Yu; McKelvey, Bill
2004-01-01
For many firms, producing information, knowledge, and enhancing learning capability have become the primary basis of competitive advantage. A review of organizational learning theory identifies two approaches: (1) those that treat symbolic information processing as fundamental to learning, and (2) those that view the situated nature of cognition as fundamental. After noting that the former is inadequate because it focuses primarily on behavioral and cognitive aspects of individual learning, this paper argues the importance of studying learning as interactions among people in the context of their environment. It contributes to organizational learning in three ways. First, it argues that situated learning theory is to be preferred over traditional behavioral and cognitive learning theories, because it treats organizations as complex adaptive systems rather than mere information processors. Second, it adds rate and nonlinear learning effects. Third, following model-centered epistemology, it uses an agent-based computational model, in particular a "humanized" version of Kauffman's NK model, to study the situated nature of learning. Using simulation results, we test eight hypotheses extending situated learning theory in new directions. The paper ends with a discussion of possible extensions of the current study to better address key issues in situated learning.
Rate Theory Modeling and Simulation of Silicide Fuel at LWR Conditions
Miao, Yinbin [Argonne National Lab. (ANL), Argonne, IL (United States). Nuclear Engineering Division; Ye, Bei [Argonne National Lab. (ANL), Argonne, IL (United States). Nuclear Engineering Division; Hofman, Gerard [Argonne National Lab. (ANL), Argonne, IL (United States). Nuclear Engineering Division; Yacout, Abdellatif [Argonne National Lab. (ANL), Argonne, IL (United States). Nuclear Engineering Division; Gamble, Kyle [Idaho National Lab. (INL), Idaho Falls, ID (United States). Fuel Modeling and Simulation; Mei, Zhi-Gang [Argonne National Lab. (ANL), Argonne, IL (United States). Nuclear Engineering Division
2016-08-29
As a promising candidate for accident tolerant fuel (ATF) in light water reactors (LWRs), the fuel performance of uranium silicide (U_{3}Si_{2}) at LWR conditions needs to be well understood. In this report, a rate theory model was developed based on existing experimental data and density functional theory (DFT) calculations to predict fission gas behavior in U_{3}Si_{2} at LWR conditions. The fission gas behavior of U_{3}Si_{2} can be divided into three temperature regimes. During steady-state operation, the majority of the fission gas stays in intragranular bubbles, whereas intergranular bubbles dominate and fission gas release occurs only beyond 1000 K. The steady-state rate theory model was also used as a reference to establish a gaseous swelling correlation of U_{3}Si_{2} for the BISON code. In addition, an overpressurized bubble model was developed so that fission gas behavior during a loss-of-coolant accident (LOCA) can be simulated. The LOCA simulation showed that intragranular bubbles remain dominant after a 70-second LOCA, resulting in controllable gaseous swelling. According to the rate theory predictions at both steady-state and LOCA conditions, the fission gas behavior of U_{3}Si_{2} at LWR conditions is benign, which provides important references for the qualification of U_{3}Si_{2} as an LWR fuel material with excellent fuel performance and enhanced accident tolerance.
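The balance between intragranular and intergranular gas that this abstract describes is the kind of result a rate theory model produces. A toy sketch of a two-reservoir intragranular gas balance (generation, trapping into bubbles, irradiation-induced re-solution); the rate constants are invented for illustration and are not U_{3}Si_{2} parameters:

```python
# Minimal intragranular fission-gas balance of the kind used in rate
# theory models: gas atoms are generated at rate beta, trapped into
# bubbles at rate g, and returned to the matrix by irradiation-induced
# re-solution at rate b.  All constants are illustrative only.
beta = 1.0e19   # gas generation rate, atoms/(m^3 s)
g    = 1.0e-4   # trapping rate into bubbles, 1/s
b    = 2.0e-5   # re-solution rate from bubbles, 1/s

dt, steps = 1.0, 200_000           # explicit Euler, 2e5 s of operation
c_matrix, c_bubble = 0.0, 0.0      # gas in matrix / in bubbles, atoms/m^3
for _ in range(steps):
    trap = g * c_matrix
    resol = b * c_bubble
    c_matrix += dt * (beta - trap + resol)
    c_bubble += dt * (trap - resol)

frac_in_bubbles = c_bubble / (c_matrix + c_bubble)
print(f"fraction of gas in bubbles after {steps*dt:.0e} s: {frac_in_bubbles:.2f}")
```

At long times the bubble fraction approaches g/(g+b), showing how the competition between trapping and re-solution sets the steady-state partition that a swelling correlation would then be built on.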
Chang, CC
2012-01-01
Model theory deals with a branch of mathematical logic showing connections between a formal language and its interpretations or models. This is the first and most successful textbook in logical model theory. Extensively updated and corrected in 1990 to accommodate developments in model theoretic methods - including classification theory and nonstandard analysis - the third edition added entirely new sections, exercises, and references. Each chapter introduces an individual method and discusses specific applications. Basic methods of constructing models include constants, elementary chains, Sko
Rate Theory Modeling and Simulations of Silicide Fuel at LWR Conditions
Miao, Yinbin [Argonne National Lab. (ANL), Argonne, IL (United States); Ye, Bei [Argonne National Lab. (ANL), Argonne, IL (United States); Mei, Zhigang [Argonne National Lab. (ANL), Argonne, IL (United States); Hofman, Gerard [Argonne National Lab. (ANL), Argonne, IL (United States); Yacout, Abdellatif [Argonne National Lab. (ANL), Argonne, IL (United States)
2015-12-10
Uranium silicide (U_{3}Si_{2}) fuel has higher thermal conductivity and higher uranium density, making it a promising candidate for the accident-tolerant fuel (ATF) used in light water reactors (LWRs). However, previous studies on the fuel performance of U_{3}Si_{2}, both experimental and computational, have focused on the irradiation conditions in research reactors, which usually involve low operating temperatures and high fuel burnups. Thus, it is important to examine the fuel performance of U_{3}Si_{2} at typical LWR conditions so as to evaluate the feasibility of replacing conventional uranium dioxide fuel with this silicide fuel material. As in-reactor irradiation experiments involve significant time and financial cost, it is appropriate at this early development stage to use modeling tools to estimate the behavior of U_{3}Si_{2} in LWRs, based on the available research reactor experimental references and state-of-the-art density functional theory (DFT) calculation capabilities. Hence, this report presents a comprehensive investigation of the fission gas swelling behavior of U_{3}Si_{2} at LWR conditions. The modeling efforts described here were based on the rate theory (RT) model of fission gas bubble evolution that has been successfully applied to a variety of fuel materials at various reactor conditions. Both existing experimental data and DFT-calculated results were used to optimize the parameters adopted by the RT model. Meanwhile, the fuel-cladding interaction was captured by coupling the RT model with simplified mechanical correlations. The swelling behavior of U_{3}Si_{2} fuel and its consequent interaction with cladding in LWRs was thereby predicted by rate theory modeling, providing valuable information for the development of U_{3}Si_{2} fuel as an accident
Watanabe, Y.; Morishita, K.; Nakasuji, T.; Ando, M.; Tanigawa, H.
2015-06-01
Reaction rate theory analysis has been conducted to investigate helium effects on the formation kinetics of interstitial-type dislocation loops (I-loops) and helium bubbles in reduced-activation ferritic/martensitic steel during irradiation, focusing on the nucleation and growth processes of the defect clusters. The rate theory model employs the size and chemical composition dependence of thermal dissociation of point defects from defect clusters. In the calculations, the temperature and the production rate of Frenkel pairs are fixed at T = 723 K and P_V = 10^-6 dpa/s, respectively, while the production rate of helium atoms takes three values: P_He = 0, 10^-7, and 10^-5 appm He/s. The calculation results show that the helium effect on I-loop formation differs markedly from that on bubble formation. I-loop formation hardly depends on the presence of helium: the number density of I-loops is almost the same for the three values of P_He. This is because helium atoms trapped in vacancies are easily emitted into the matrix by recombination between the vacancies and SIAs, which induces no pronounced increase or decrease of vacancies and SIAs in the matrix, and hence no remarkable impact on I-loop nucleation. Bubble formation, on the other hand, depends strongly on the presence of helium: the number density of bubbles for P_He = 10^-7 and 10^-5 appm He/s is much higher than that for P_He = 0. This is because helium atoms trapped in a bubble increase the vacancy binding energy and suppress vacancy dissociation from the bubble, promoting bubble nucleation. This promotion of bubble nucleation by helium is very strong even when the number of helium atoms in a bubble is not large.
Hodges, Wilfrid
1993-01-01
An up-to-date and integrated introduction to model theory, designed to be used for graduate courses (for students who are familiar with first-order logic), and as a reference for more experienced logicians and mathematicians.
Somers, Kieran P; Simmie, John M; Metcalfe, Wayne K; Curran, Henry J
2014-03-21
Due to the rapidly growing interest in the use of biomass derived furanic compounds as potential platform chemicals and fossil fuel replacements, there is a simultaneous need to understand the pyrolysis and combustion properties of such molecules. To this end, the potential energy surfaces for the pyrolysis relevant reactions of the biofuel candidate 2-methylfuran have been characterized using quantum chemical methods (CBS-QB3, CBS-APNO and G3). Canonical transition state theory is employed to determine the high-pressure limiting kinetics, k(T), of elementary reactions. Rice-Ramsperger-Kassel-Marcus theory with an energy grained master equation is used to compute pressure-dependent rate constants, k(T,p), and product branching fractions for the multiple-well, multiple-channel reaction pathways which typify the pyrolysis reactions of the title species. The unimolecular decomposition of 2-methylfuran is shown to proceed via hydrogen atom transfer reactions through singlet carbene intermediates which readily undergo ring opening to form collisionally stabilised acyclic C_{5}H_{6}O isomers before further decomposition to C_{1}-C_{4} species. Rate constants for abstraction by the hydrogen atom and methyl radical are reported, with abstraction from the alkyl side chain calculated to dominate. The fate of the primary abstraction product, 2-furanylmethyl radical, is shown to be thermal decomposition to the n-butadienyl radical and carbon monoxide through a series of ring opening and hydrogen atom transfer reactions. The dominant bimolecular products of hydrogen atom addition reactions are found to be furan and methyl radical, 1-butene-1-yl radical and carbon monoxide, and vinyl ketene and methyl radical. A kinetic mechanism is assembled, with computer simulations in good agreement with shock tube speciation profiles taken from the literature. The kinetic mechanism developed herein can be used in future chemical kinetic modelling studies on the pyrolysis and oxidation of 2-methylfuran
Convergence rates for rank-based models with applications to portfolio theory
Ichiba, Tomoyuki; Shkolnikov, Mykhaylo
2011-01-01
We determine rates of convergence of rank-based interacting diffusions and semimartingale reflecting Brownian motions to equilibrium. The convergence rate for the total variation metric is derived using Lyapunov functions. Sharp fluctuations of additive functionals are obtained using Transportation Cost-Information inequalities for Markov processes. We work out various applications to the rank-based abstract equity markets used in Stochastic Portfolio Theory. For example, we produce quantitative bounds, including constants, for fluctuations of market weights and occupation times of various ranks for individual coordinates. Another important application is the comparison of performance between symmetric functionally generated portfolios and the market portfolio. This produces estimates of probabilities of "beating the market".
Using a Theory-Consistent CVAR Scenario to Test an Exchange Rate Model Based on Imperfect Knowledge
Katarina Juselius
2017-07-01
A theory-consistent CVAR scenario describes a set of testable regularities one should expect to see in the data if the basic assumptions of the theoretical model are empirically valid. Using this method, the paper demonstrates that all basic assumptions about the shock structure and steady-state behavior of an imperfect-knowledge-based model for exchange rate determination can be formulated as testable hypotheses on common stochastic trends and cointegration. The model obtains remarkable support for almost every testable hypothesis and is able to adequately account for the long persistent swings in the real exchange rate.
Yogurtcu, Osman N; Johnson, Margaret E
2015-08-28
The dynamics of association between diffusing and reacting molecular species are routinely quantified using simple rate-equation kinetics that assume both well-mixed concentrations of species and a single rate constant for parameterizing the binding rate. In two dimensions (2D), however, even when systems are well-mixed, the assumption of a single characteristic rate constant for describing association is not generally accurate, due to the properties of diffusional searching in dimensions d ≤ 2. Establishing rigorous bounds for discriminating between 2D reactive systems that will be accurately described by rate equations with a single rate constant, and those that will not, is critical for both modeling and experimentally parameterizing binding reactions restricted to surfaces such as cellular membranes. We show here that in regimes of intrinsic reaction rate (ka) and diffusion (D) parameters with ka/D > 0.05, a single rate constant cannot be fit to the dynamics of concentrations of associating species independently of the initial conditions. Instead, a more sophisticated multi-parametric description than rate equations is necessary to robustly characterize bimolecular reactions from experiment. Our quantitative bounds derive from our new analysis of the 2D rate behavior predicted by Smoluchowski theory. Using a recently developed single-particle reaction-diffusion algorithm that we extend here to 2D, we are able to test and validate the predictions of Smoluchowski theory and several other theories of reversible reaction dynamics in 2D for the first time. Finally, our results also mean that simulations of reactive systems in 2D using rate equations must be undertaken with caution when reactions have ka/D > 0.05, regardless of the simulation volume. We introduce here a simple formula for an adaptive, concentration-dependent rate constant for these chemical kinetics simulations, which improves on existing formulas to better capture non-equilibrium reaction dynamics from dilute
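The dimensionless criterion quoted in this abstract is directly checkable: in 2D the intrinsic rate ka carries units of area/time, so ka/D is a pure number. A small helper illustrating the bound (the example parameter values are hypothetical, not from the paper):

```python
def single_rate_constant_ok(ka, D, threshold=0.05):
    """Return True if a 2D association reaction with intrinsic rate ka
    and diffusion constant D is expected, per the bound quoted in the
    abstract, to be well described by a single rate constant.

    ka and D must share length/time units so that ka/D is dimensionless
    in 2D (e.g. ka in um^2/s per molecule, D in um^2/s).
    """
    return ka / D < threshold

# Illustrative values (not from the paper): a membrane species with
# D = 1.0 um^2/s and intrinsic association rate ka = 0.01 um^2/s.
print(single_rate_constant_ok(0.01, 1.0))   # -> True: rate equations OK
print(single_rate_constant_ok(0.5, 1.0))    # -> False: multi-parametric model
```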
Basic theories for strain localization analysis of porous media with rate dependent model
ZHANG Hongwu; QIN Jianmin
2005-01-01
This paper analyzes the interaction between two kinds of internal length scales when rate-dependent plasticity is introduced into a multiphase material model to study the dynamic strain localization phenomenon of saturated and partially saturated porous media. The stability analysis demonstrates that the enhanced porous media model preserves the well-posedness of the initial value problem for both axial and shear waves, because an internal length scale parameter is introduced in the visco-plasticity model. On the other hand, this length scale interacts with the one naturally contained in the governing equations of the fully and partially saturated model. A basic method is presented to investigate the internal length scale of the multiphase porous media under the interaction of these two kinds of length scale parameters. Material stability analysis is carried out for a given permeability, from which the wave-number domains with real wave speeds are distinguished. A one-dimensional example is given to illustrate the theoretical findings.
Chang, Ivan; Heiske, Margit; Letellier, Thierry; Wallace, Douglas; Baldi, Pierre
2011-01-01
Mitochondrial bioenergetic processes are central to the production of cellular energy, and a decrease in the expression or activity of enzyme complexes responsible for these processes can result in energetic deficit that correlates with many metabolic diseases and aging. Unfortunately, existing computational models of mitochondrial bioenergetics either lack relevant kinetic descriptions of the enzyme complexes, or incorporate mechanisms too specific to a particular mitochondrial system and are thus incapable of capturing the heterogeneity associated with these complexes across different systems and system states. Here we introduce a new composable rate equation, the chemiosmotic rate law, that expresses the flux of a prototypical energy transduction complex as a function of: the saturation kinetics of the electron donor and acceptor substrates; the redox transfer potential between the complex and the substrates; and the steady-state thermodynamic force-to-flux relationship of the overall electro-chemical reaction. Modeling of bioenergetics with this rate law has several advantages: (1) it minimizes the use of arbitrary free parameters while featuring biochemically relevant parameters that can be obtained through progress curves of common enzyme kinetics protocols; (2) it is modular and can adapt to various enzyme complex arrangements for both in vivo and in vitro systems via transformation of its rate and equilibrium constants; (3) it provides a clear association between the sensitivity of the parameters of the individual complexes and the sensitivity of the system's steady-state. To validate our approach, we conduct in vitro measurements of ETC complex I, III, and IV activities using rat heart homogenates, and construct an estimation procedure for the parameter values directly from these measurements. In addition, we show the theoretical connections of our approach to the existing models, and compare the predictive accuracy of the rate law with our experimentally
A dual theory of price and value in a meso-scale economic model with stochastic profit rate
Greenblatt, R. E.
2014-12-01
The problem of commodity price determination in a market-based, capitalist economy has a long and contentious history. Neoclassical microeconomic theories are typically based on marginal utility assumptions, while classical macroeconomic theories tend to be value-based. In the current work, I study a simplified meso-scale model of a commodity capitalist economy. The production/exchange model is represented by a network whose nodes are firms, workers, capitalists, and markets, and whose directed edges represent physical or monetary flows. A pair of multivariate linear equations with stochastic input parameters represents physical (supply/demand) and monetary (income/expense) balance. The input parameters yield a non-degenerate profit rate distribution across firms. Labor time and price are found to be eigenvector solutions to the respective balance equations. A simple relation is derived relating the expected value of commodity price to commodity labor content. Results of Monte Carlo simulations are consistent with the stochastic price/labor content relation.
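The price-as-eigenvector result mentioned in this abstract can be illustrated with a classical (Sraffa-style) deterministic toy, not the paper's stochastic network model: with input matrix A, labor vector l, and wage bundle b, prices satisfy p = (1 + r) pM for the augmented matrix M = A + outer(b, l), so p is a left Perron eigenvector of M. All numbers below are invented for illustration:

```python
import numpy as np

# A[i, j] = amount of commodity i used per unit output of commodity j.
A = np.array([[0.2, 0.3],
              [0.1, 0.4]])
l = np.array([0.5, 0.6])          # labor per unit output
b = np.array([0.3, 0.2])          # wage bundle per hour of labor
M = A + np.outer(b, l)            # augmented input matrix

eigvals, eigvecs = np.linalg.eig(M.T)   # columns: left eigenvectors of M
k = np.argmax(eigvals.real)             # Perron root (largest eigenvalue)
p = np.abs(eigvecs[:, k].real)          # Perron vector has one sign
p /= p[0]                               # commodity 0 as numeraire
r = 1.0 / eigvals[k].real - 1.0         # uniform profit rate
print("relative prices:", p, " profit rate:", round(r, 3))
```

The Perron root lies below one for a productive economy, which is what makes the implied uniform profit rate r positive; in the paper's stochastic setting the profit rate is instead distributed across firms.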
Subjective Information Measure and Rate Fidelity Theory
Lu, Chenguang
2007-01-01
Using a fish-covering model, this paper intuitively explains how to extend Hartley's information formula, step by step, to a generalized information formula for measuring subjective information: metrical information (such as conveyed by thermometers), sensory information (such as conveyed by color vision), and semantic information (such as conveyed by weather forecasts). The pivotal step is to differentiate the conditional probability and the logical conditional probability of a message. The paper illustrates the rationality of the formula and discusses the coherence of the generalized information formula with Popper's knowledge evolution theory. For optimizing data compression, the paper discusses the rate of limiting errors and its similarity to complexity-distortion based on Kolmogorov's complexity theory, and improves the rate-distortion theory into a rate-fidelity theory by replacing Shannon's distortion with subjective mutual information. It is proved that both the rate-distortion function and the rate-fidelity function ar...
Pham, Tien Hung; Rühaak, Wolfram; Sass, Ingo
2017-04-01
Extensive groundwater extraction leads to a drawdown of the groundwater table. Consequently, the effective stress in the soil increases and can cause land subsidence. Analysis of land subsidence generally requires a numerical model based on poroelasticity theory, first proposed by Biot (1941). In their review of regional land subsidence accompanying groundwater extraction, Galloway and Burbey (2011) stated that more research and application is needed in coupling the stress-dependent land subsidence process. In the geotechnical field, the constant rate of strain (CRS) test was first introduced in 1969 (Smith and Wahls 1969) and was standardized in 1982 through designation D4186-82 of the American Society for Testing and Materials. From the readings of CRS tests, the stress-dependent parameters of the poroelasticity model can be calculated. So far, no research has linked poroelasticity theory with CRS tests in modelling land subsidence due to groundwater extraction. One-dimensional CRS tests using a conventional compression cell and three-dimensional CRS tests using a Rowe cell were performed. The tests were also modelled using the finite element method with mixed elements. A back-analysis technique is used to find suitable values of the hydraulic conductivity and bulk modulus, which depend on the stress or void ratio. Finally, the obtained results are used in land subsidence models. Biot, M. A. (1941). "General theory of three-dimensional consolidation." Journal of Applied Physics 12(2): 155-164. Galloway, D. L. and T. J. Burbey (2011). "Review: Regional land subsidence accompanying groundwater extraction." Hydrogeology Journal 19(8): 1459-1486. Smith, R. E. and H. E. Wahls (1969). "Consolidation under constant rates of strain." Journal of Soil Mechanics & Foundations Div.
Mangani, P
2011-01-01
This title includes: Lectures - G.E. Sacks - Model theory and applications, and H.J. Keisler - Constructions in model theory; and, Seminars - M. Servi - SH formulas and generalized exponential, and J.A. Makowski - Topological model theory.
ZHU Zhenyang; QIANG Sheng; CHEN Weimin
2014-01-01
Recent concrete hydration exothermic models based on the Arrhenius equation have improved the computational accuracy of the mass concrete temperature field. However, the properties of the ratio of activation energy to the gas constant (Ea/R) have not been well studied. The latest experiments show that Ea/R changes markedly with the hydration degree, with no fixed form. In this paper, the relationship between hydration degree and Ea/R is studied and a new hydration exothermic model is proposed. With these results, the mass concrete temperature field with arbitrary boundary conditions can be calculated more precisely.
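A hydration-degree-dependent Ea/R plugs naturally into the standard Arrhenius equivalent-age (maturity) computation. A sketch of that coupling, in which the linear form of ea_over_r and every constant are hypothetical, not the model proposed in the paper:

```python
import numpy as np

def ea_over_r(alpha):
    """Hypothetical hydration-degree-dependent activation term Ea/R, in K."""
    return 5000.0 - 1500.0 * alpha

T_ref = 293.15            # reference temperature, K (20 C)
dt = 3600.0               # time step, s (1 hour)
temps = 273.15 + np.linspace(20.0, 45.0, 48)   # hourly curing temperatures, K

alpha, t_eq = 0.0, 0.0
for T in temps:
    # Arrhenius rate factor relative to the reference temperature,
    # evaluated with the current hydration degree.
    factor = np.exp(ea_over_r(alpha) * (1.0 / T_ref - 1.0 / T))
    t_eq += factor * dt
    # Toy hydration kinetics: alpha approaches 1 with equivalent age.
    alpha = 1.0 - np.exp(-t_eq / (7.0 * 24 * 3600.0))

print(f"equivalent age after 48 h: {t_eq/3600.0:.1f} h, alpha = {alpha:.2f}")
```

Because the curing temperatures exceed the reference, the equivalent age accumulates faster than clock time; letting Ea/R fall with alpha damps that acceleration as hydration proceeds.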
A Theory of Interest Rate Stepping : Inflation Targeting in a Dynamic Menu Cost Model
Eijffinger, S.C.W.; Schaling, E.; Verhagen, W.H.
1999-01-01
A stylised fact of monetary policy making is that central banks do not immediately respond to new information but rather seem to prefer to wait until sufficient 'evidence' to warrant a change has accumulated. However, theoretical models of inflation targeting imply that an optimising
Rate-distortion theory and human perception.
Sims, Chris R
2016-07-01
The fundamental goal of perception is to aid in the achievement of behavioral objectives. This requires extracting and communicating useful information from noisy and uncertain sensory signals. At the same time, given the complexity of sensory information and the limitations of biological information processing, it is necessary that some information must be lost or discarded in the act of perception. Under these circumstances, what constitutes an 'optimal' perceptual system? This paper describes the mathematical framework of rate-distortion theory as the optimal solution to the problem of minimizing the costs of perceptual error subject to strong constraints on the ability to communicate or transmit information. Rate-distortion theory offers a general and principled theoretical framework for developing computational-level models of human perception (Marr, 1982). Models developed in this framework are capable of producing quantitatively precise explanations for human perceptual performance, while yielding new insights regarding the nature and goals of perception. This paper demonstrates the application of rate-distortion theory to two benchmark domains where capacity limits are especially salient in human perception: discrete categorization of stimuli (also known as absolute identification) and visual working memory. A software package written for the R statistical programming language is described that aids in the development of models based on rate-distortion theory.
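The rate-distortion functions underlying such models are computed numerically with the classic Blahut-Arimoto iteration. A self-contained sketch for a binary source with Hamming distortion; the toy source and distortion matrix are illustrative and are not taken from the paper or its R package:

```python
import numpy as np

def blahut_arimoto(p_x, dist, beta, n_iter=500):
    """Blahut-Arimoto iteration at fixed trade-off parameter beta.

    p_x: source probabilities, shape (n,); dist: distortion matrix,
    shape (n, m).  Returns (rate in bits, expected distortion), one
    point on the rate-distortion curve.
    """
    n, m = dist.shape
    q_y = np.full(m, 1.0 / m)                    # marginal over reconstructions
    for _ in range(n_iter):
        # Optimal channel given the current output marginal.
        w = q_y[None, :] * np.exp(-beta * dist)  # unnormalized q(y|x)
        q_y_given_x = w / w.sum(axis=1, keepdims=True)
        q_y = p_x @ q_y_given_x                  # updated output marginal
    rate = np.sum(p_x[:, None] * q_y_given_x *
                  np.log2(q_y_given_x / q_y[None, :] + 1e-300))
    distortion = np.sum(p_x[:, None] * q_y_given_x * dist)
    return rate, distortion

p_x = np.array([0.5, 0.5])
dist = 1.0 - np.eye(2)           # Hamming distortion on a binary source
rate, d = blahut_arimoto(p_x, dist, beta=3.0)
print(f"R = {rate:.3f} bits at D = {d:.3f}")
```

Sweeping beta traces out the full R(D) curve; for this symmetric binary case the iterate recovers the known closed form R(D) = 1 - H(D).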
De Giovanni, Domenico
2010-01-01
The surrender option embedded in many life insurance products is a clause that allows policyholders to terminate the contract early. Pricing techniques based on American Contingent Claim (ACC) theory are often used, though the actual policyholders' behavior is far from optimal. Inspired by the many prepayment models for mortgage backed securities, this paper builds a Rational Expectation (RE) model describing the policyholders' behavior in lapsing the contract. A market model with stochastic interest rates is considered, and the pricing is carried out through numerical approximation of the corresponding two-space-dimensional parabolic partial differential equation. Extensive numerical experiments show the differences in terms of pricing and interest rate elasticity between the ACC and RE approaches, as well as the sensitivity of the contract price with respect to changes in the policyholders' behavior.
The Game Theory Model Analysis of Microfinance High Interest Rate
段霞
2015-01-01
In this paper, the high interest rates of microfinance are analyzed using a game-theory model of the behavior of both the supply and demand sides of microfinance and a game-theory model of high rates. The result shows that current microfinance should adopt high interest rates.
Belegradek, OV
1999-01-01
This volume is a collection of papers on model theory and its applications. The longest paper, "Model Theory of Unitriangular Groups" by O. V. Belegradek, forms a subtle general theory behind Mal'tsev's famous correspondence between rings and groups. This is the first published paper on the topic. Given the present model-theoretic interest in algebraic groups, Belegradek's work is of particular interest to logicians and algebraists. The rest of the collection consists of papers on various questions of model theory, mainly on stability theory. Contributors are leading Russian researchers in the
Rate-independent systems theory and application
Mielke, Alexander
2015-01-01
This monograph provides both an introduction to and a thorough exposition of the theory of rate-independent systems, which the authors have worked on with a number of collaborators over many years. The focus is mostly on fully rate-independent systems, first on an abstract level with or without a linear structure, discussing various concepts of solutions with full mathematical rigor. The usefulness of the abstract concepts is then demonstrated on the level of various applications primarily in continuum mechanics of solids, including suitable approximation strategies with guaranteed numerical stability and convergence. Particular applications concern inelastic processes such as plasticity, damage, phase transformations, or adhesive-type contacts both at small strains and at finite strains. Other physical systems such as magnetic or ferroelectric materials, and couplings to rate-dependent thermodynamic models are also considered. Selected applications are accompanied by numerical simulations illustrating both t...
Theory of nanolaser devices: Rate equation analysis versus microscopic theory
Lorke, Michael; Skovgård, Troels Suhr; Gregersen, Niels;
2013-01-01
A rate equation theory for quantum-dot-based nanolaser devices is developed. We show that these rate equations are capable of reproducing results of a microscopic semiconductor theory, making them an appropriate starting point for complex device simulations of nanolasers. The input...
Prest, M
1988-01-01
In recent years the interplay between model theory and other branches of mathematics has led to many deep and intriguing results. In this, the first book on the topic, the theme is the interplay between model theory and the theory of modules. The book is intended to be a self-contained introduction to the subject and introduces the requisite model theory and module theory as it is needed. Dr Prest develops the basic ideas concerning what can be said about modules using the information which may be expressed in a first-order language. Later chapters discuss stability-theoretic aspects of module
2005-02-01
asexual. In asexual reproduction, one parent divides into two or more offspring. In sexual reproduction, two parents must mate to produce one or more offspring, n. In terms of rates, asexual reproduction produces n offspring, where n may be the expected value of some random variable, so we have a rate of ... which gets us closer to the form of the Lotka-Volterra equations, especially for the asexual reproduction forms
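The Lotka-Volterra form that the excerpt above builds toward can be sketched numerically; the coefficients and initial populations below are invented for illustration only.

```python
def lotka_volterra(prey0, pred0, t_end=10.0, dt=0.001,
                   a=1.0, b=0.5, delta=0.2, gamma=0.5):
    """Forward-Euler sketch of the Lotka-Volterra pair
    dx/dt = a*x - b*x*y (prey) and dy/dt = delta*x*y - gamma*y
    (predator). Coefficients and initial populations are invented."""
    x, y = prey0, pred0
    for _ in range(int(t_end / dt)):
        dx = (a - b * y) * x
        dy = (delta * x - gamma) * y
        x += dx * dt
        y += dy * dt
    return x, y

# Start near the coexistence equilibrium (gamma/delta, a/b) = (2.5, 2)
prey, pred = lotka_volterra(2.0, 2.0)
```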
Common features of extraordinary rate theories.
Peters, Baron
2015-05-28
We examine the capabilities and foundations of three landmark rate theories: harmonic transition state theory, classical nucleation theory, and the Marcus theory of electron transfer. Each of the three classic rate theories is widely used to predict rates and trends. They are also used "in reverse" to interpret experimental data with no computation at all. Their common foundations include a quasi-equilibrium assumption and dimensionality reduction to a physically meaningful, one-dimensional, and broadly applicable reaction coordinate. Many applications lie beyond the scope of the classic theories, so rare events research has pursued trajectory-based methods that efficiently predict accurate rate constants even when the reaction coordinate and mechanistic details are unknown. Trajectory based rare events methods achieved these ambitious goals, but (by construction) they provide rates rather than mechanistic understanding. We briefly discuss recent efforts to identify reaction coordinates, including methods which provide abstract statistically defined coordinates and those which identify physical collective variables. Finally, we note some natural synergies between existing simulation methods which might help discover simple and powerful quasi-equilibrium theories for the many applications that fall beyond the scope of the classic rate theories.
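The quasi-equilibrium structure shared by these classic theories is easiest to see in the Eyring form of harmonic transition state theory; a minimal sketch, with a hypothetical 0.5 eV barrier that is not taken from the paper:

```python
import math

KB = 1.380649e-23     # Boltzmann constant, J/K
H = 6.62607015e-34    # Planck constant, J*s
EV = 1.602176634e-19  # 1 eV in joules

def eyring_rate(barrier_joule, temperature):
    """Harmonic TST in Eyring form: k = (kB*T/h) * exp(-dG/(kB*T)),
    with the free-energy barrier given per molecule in joules."""
    return (KB * temperature / H) * math.exp(
        -barrier_joule / (KB * temperature))

# Hypothetical 0.5 eV free-energy barrier at 300 K
k = eyring_rate(0.5 * EV, 300.0)
```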
Theory Modeling and Simulation
Shlachter, Jack [Los Alamos National Laboratory
2012-08-23
Los Alamos has a long history in theory, modeling and simulation. We focus on multidisciplinary teams that tackle complex problems. Theory, modeling and simulation are tools to solve problems just like an NMR spectrometer, a gas chromatograph or an electron microscope. Problems should be used to define the theoretical tools needed and not the other way around. Best results occur when theory and experiments are working together in a team.
Information theory and rate distortion theory for communications and compression
Gibson, Jerry
2013-01-01
This book is very specifically targeted to problems in communications and compression by providing the fundamental principles and results in information theory and rate distortion theory for these applications and presenting methods that have proved and will prove useful in analyzing and designing real systems. The chapters contain treatments of entropy, mutual information, lossless source coding, channel capacity, and rate distortion theory; however, it is the selection, ordering, and presentation of the topics within these broad categories that is unique to this concise book. While the cover
Modelling heart rate kinetics.
Zakynthinaki, Maria S
2015-01-01
The objective of the present study was to formulate a simple and at the same time effective mathematical model of heart rate kinetics in response to movement (exercise). Based on an existing model, a system of two coupled differential equations which give the rate of change of heart rate and the rate of change of exercise intensity is used. The modifications introduced to the existing model are justified and discussed in detail, while models of blood lactate accumulation in respect to time and exercise intensity are also presented. The main modification is that the proposed model has now only one parameter which reflects the overall cardiovascular condition of the individual. The time elapsed after the beginning of the exercise, the intensity of the exercise, as well as blood lactate are also taken into account. Application of the model provides information regarding the individual's cardiovascular condition and is able to detect possible changes in it, across the data recording periods. To demonstrate examples of successful numerical fit of the model, constant intensity experimental heart rate data sets of two individuals have been selected and numerical optimization was implemented. In addition, numerical simulations provided predictions for various exercise intensities and various cardiovascular condition levels. The proposed model can serve as a powerful tool for a complete means of heart rate analysis, not only in exercise physiology (for efficiently designing training sessions for healthy subjects) but also in the areas of cardiovascular health and rehabilitation (including application in population groups for which direct heart rate recordings at intense exercises are not possible or not allowed, such as elderly or pregnant women).
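A minimal numerical sketch of such a coupled system (with invented parameter values, not those fitted in the study): heart rate relaxes toward a demand set by exercise intensity, with a single constant `c` standing in for the overall cardiovascular-condition parameter.

```python
def simulate_heart_rate(hr0, target_hr, minutes, dt=0.01, c=0.8):
    """Toy coupled system: exercise intensity u ramps toward 1 while
    heart rate relaxes, at a speed set by the single condition
    parameter c, toward the demand implied by u. Illustrative only."""
    hr, u, t = hr0, 0.0, 0.0
    while t < minutes:
        demand = u * target_hr + (1.0 - u) * hr0
        hr += c * (demand - hr) * dt      # rate of change of heart rate
        u += 0.5 * (1.0 - u) * dt         # rate of change of intensity
        t += dt
    return hr

hr_end = simulate_heart_rate(hr0=70.0, target_hr=160.0, minutes=10.0)
```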
Robin A. J. Taylor; Daniel A. Herms; Louis R. Iverson
2008-01-01
The dispersal of organisms is rarely random, although diffusion processes can be useful models for movement in approximately homogeneous environments. However, the environments through which all organisms disperse are far from uniform at all scales. The emerald ash borer (EAB), Agrilus planipennis, is obligate on ash (Fraxinus spp...
Vasilkoski, Zlatko
2008-01-01
Under externally applied electric fields, lipid membranes tend to become permeable and change their electrical resistance by the combined processes of pore creation and pore evolution (expansion or contraction). This study is focused on the pore creation process, represented by an empirical expression currently used in electroporation (EP) models, for which an alternative theoretically based expression was provided. The choice of this expression was motivated by the role DLVO (disjoining) pressures may play in the process of EP. The electrostatic energy effects on each side of a lipid membrane were evaluated in terms of the electrostatic component of the disjoining pressure. Thus the pore creation energy considerations in current EP models, associated with the necessity of an idealized non-conducting circular pre-pore, were avoided. As a result, a new expression for the onset of electroporation was proposed. It was found that this new theoretically determined expression is in good agreement with the...
On the theory of interest rate policy
Heinz-Peter Spahn
2001-12-01
A new consensus in the theory of monetary policy has been reached, pointing to the pivotal role of interest rates that are set in accordance with central banks' reaction functions. The decisive criterion for assessing the Taylor rule, inflation targeting, and monetary targeting is not the macrotheoretic foundation of these concepts. They serve as "languages" coordinating heterogeneous beliefs among policy makers and private agents, and should also allow rule-based discretionary policies when markets are in need of leadership. Contrary to the ECB dogma, the Fed is right to have an eye on the risks of both inflation and unemployment.
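The Taylor rule mentioned above has a standard algebraic form; a sketch with the conventional 0.5 weights from Taylor (1993), with hypothetical inputs rather than the paper's calibration:

```python
def taylor_rate(inflation, output_gap, neutral_real=2.0,
                target_inflation=2.0, w_pi=0.5, w_gap=0.5):
    """Classic Taylor (1993) rule, in percent: nominal policy rate =
    neutral real rate + inflation + w_pi*(inflation - target)
    + w_gap*output_gap."""
    return (neutral_real + inflation
            + w_pi * (inflation - target_inflation) + w_gap * output_gap)

# 3% inflation and a +1% output gap imply a 6% policy rate
rate = taylor_rate(inflation=3.0, output_gap=1.0)
```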
Noncommutative Gauge Theories: Model for Hodge theory
Upadhyay, Sudhaker
2013-01-01
The nilpotent BRST, anti-BRST, dual-BRST and anti-dual-BRST symmetry transformations are constructed in the context of noncommutative (NC) 1-form as well as 2-form gauge theories. The corresponding Noether's charges for these symmetries on the Moyal plane are shown to satisfy the same algebra as by the de Rham cohomological operators of differential geometry. The Hodge decomposition theorem on compact manifold is also studied. We show that noncommutative gauge theories are field theoretic models for Hodge theory.
Quantum theory of chemical reaction rates
Miller, W.H. [Univ. of California, Berkeley, CA (United States). Dept. of Chemistry]|[Lawrence Berkeley Lab., CA (United States). Chemical Sciences Div.
1994-10-01
If one wishes to describe a chemical reaction at the most detailed level possible, i.e., its state-to-state differential scattering cross section, then it is necessary to solve the Schroedinger equation to obtain the S-matrix as a function of total energy E and total angular momentum J, in terms of which the cross sections can be calculated as given by equation (1) in the paper. All other physically observable attributes of the reaction can be derived from the cross sections. Often, in fact, one is primarily interested in the least detailed quantity which characterizes the reaction, namely its thermal rate constant, which is obtained by integrating Eq. (1) over all scattering angles, summing over all product quantum states, and Boltzmann-averaging over all initial quantum states of reactants. With the proper weighting factors, all of these averages are conveniently contained in the cumulative reaction probability (CRP), which is defined by equation (2) and in terms of which the thermal rate constant is given by equation (3). Thus, having carried out a full state-to-state scattering calculation to obtain the S-matrix, one can obtain the CRP from Eq. (2), and then the rate constant from Eq. (3), but this seems like "overkill"; i.e., if one only wants the rate constant, it would clearly be desirable to have a theory that allows one to calculate it, or the CRP, more directly than via Eq. (2), yet also correctly, i.e., without inherent approximations. Such a theory is the subject of this paper.
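The Boltzmann average taking the CRP to a thermal rate constant, as in Eq. (3), can be sketched numerically; the one-dimensional parabolic-barrier transmission probability standing in for N(E) below, and the reduced-unit parameters, are illustrative assumptions, not the paper's.

```python
import math

def rate_from_crp(beta, barrier, hbar_omega, e_max=50.0, n=20000):
    """Boltzmann-average a model cumulative reaction probability,
    k(T) proportional to the integral of N(E)*exp(-beta*E) dE, with the
    1D parabolic-barrier CRP N(E) = 1/(1 + exp(-2*pi*(E-V0)/(hbar*omega))).
    Reduced units; the reactant partition-function prefactor is omitted."""
    de = e_max / n
    total = 0.0
    for i in range(n):
        e = (i + 0.5) * de  # midpoint rule
        crp = 1.0 / (1.0 + math.exp(-2.0 * math.pi * (e - barrier)
                                    / hbar_omega))
        total += crp * math.exp(-beta * e) * de
    return total

k_low_t = rate_from_crp(beta=2.0, barrier=5.0, hbar_omega=1.0)   # colder
k_high_t = rate_from_crp(beta=1.0, barrier=5.0, hbar_omega=1.0)  # hotter
```

For a high barrier this integral has the closed form exp(-beta*V0) * (hbar*omega/2) / sin(beta*hbar*omega/2), which the numerical result should reproduce.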
Modified Rate-Theory Predictions in Comparison to Microstructural Data
Surh, M P; Okita, T; Wolfer, W G
2003-11-03
Standard rate theory methods have recently been combined with experimental microstructures to successfully reproduce measured swelling behavior in ternary steels around 400 C. Fit parameters have reasonable values except possibly for the recombination radius, R{sub c}, which can be larger than expected. Numerical simulations of void nucleation and growth reveal the importance of additional recombination processes at unstable clusters. Such extra recombination may reduce the range of possible values for R{sub c}. A modified rate theory is presented here that includes the effect of these undetectably small defect clusters. The fit values for R{sub c} are not appreciably altered, as the modification has little effect on the model behavior in the late steady state. It slightly improves the predictions for early transient times, when the sink strength of stable voids and dislocations is relatively small. Standard rate theory successfully explains steady swelling behavior in high purity stainless steel.
Modeling helicity dissipation-rate equation
Yokoi, Nobumitsu
2016-01-01
Transport equation of the dissipation rate of turbulent helicity is derived with the aid of a statistical analytical closure theory of inhomogeneous turbulence. It is shown that an assumption on the helicity scaling with an algebraic relationship between the helicity and its dissipation rate leads to the transport equation of the turbulent helicity dissipation rate without resorting to a heuristic modeling.
Reaction rate theory of radiation exposure: Effects of the dose rate on mutation frequencies
Manabe, Yuichiro; Nakamura, Issei
2014-01-01
We develop a kinetic reaction model for cells whose DNA molecules have been damaged by ionizing radiation exposure. Our theory simultaneously accounts for the time-dependent reactions of DNA damage, DNA mutation, DNA repair, and the proliferation and apoptosis of cells in a tissue with a minimal set of model parameters. In contrast to existing theories of radiation exposure, we do not assume a relationship between the total dose and the induced mutation frequency. We show good agreement between theory and experiment. Importantly, our result offers a new perspective: the key ingredient in the study of irradiated cells is the set of rate constants that depend on the dose rate. Moreover, we discuss the universal scaling function for mutation frequencies due to irradiation at low dose rates.
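The paper's model is not reproduced here; as a generic illustration of why mutation frequency can depend on the dose rate and not only on the total dose, the toy kinetics below combine first-order repair with a two-hit (quadratic-in-damage) mutation term. All rate constants are invented.

```python
def mutation_yield(dose_rate, total_dose, k_damage=1.0, k_repair=2.0,
                   k_mutate=0.05, dt=1e-3):
    """Toy kinetics: damage is created at k_damage*dose_rate and removed
    by first-order repair; mutations accumulate through a two-hit
    (quadratic-in-damage) term. Returns the yield after total_dose."""
    steps = int(total_dose / dose_rate / dt)
    damage = mut = 0.0
    for _ in range(steps):
        mut += k_mutate * damage * damage * dt
        damage += (k_damage * dose_rate - k_repair * damage) * dt
    return mut

m_acute = mutation_yield(dose_rate=10.0, total_dose=10.0)   # delivered quickly
m_chronic = mutation_yield(dose_rate=0.1, total_dose=10.0)  # delivered slowly
```

With the same total dose, the acute exposure sustains a much higher damage level, so the quadratic term makes its mutation yield far larger.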
Probability state modeling theory.
Bagwell, C Bruce; Hunsberger, Benjamin C; Herbert, Donald J; Munson, Mark E; Hill, Beth L; Bray, Chris M; Preffer, Frederic I
2015-07-01
As the technology of cytometry matures, there is mounting pressure to address two major issues with data analyses. The first issue is to develop new analysis methods for high-dimensional data that can directly reveal and quantify important characteristics associated with complex cellular biology. The other issue is to replace subjective and inaccurate gating with automated methods that objectively define subpopulations and account for population overlap due to measurement uncertainty. Probability state modeling (PSM) is a technique that addresses both of these issues. The theory and important algorithms associated with PSM are presented along with simple examples and general strategies for autonomous analyses. PSM is leveraged to better understand B-cell ontogeny in bone marrow in a companion Cytometry Part B manuscript. Three short relevant videos are available in the online supporting information for both of these papers. PSM avoids the dimensionality barrier normally associated with high-dimensionality modeling by using broadened quantile functions instead of frequency functions to represent the modulation of cellular epitopes as cells differentiate. Since modeling programs ultimately minimize or maximize one or more objective functions, they are particularly amenable to automation and, therefore, represent a viable alternative to subjective and inaccurate gating approaches.
Testing Theories of Transfer Using Error Rate Learning Curves.
Koedinger, Kenneth R; Yudelson, Michael V; Pavlik, Philip I
2016-07-01
We analyze naturally occurring datasets from student use of educational technologies to explore a long-standing question of the scope of transfer of learning. We contrast a faculty theory of broad transfer with a component theory of more constrained transfer. To test these theories, we develop statistical models of them. These models use latent variables to represent mental functions that are changed while learning to cause a reduction in error rates for new tasks. Strong versions of these models provide a common explanation for the variance in task difficulty and transfer. Weak versions decouple difficulty and transfer explanations by describing task difficulty with parameters for each unique task. We evaluate these models in terms of both their prediction accuracy on held-out data and their power in explaining task difficulty and learning transfer. In comparisons across eight datasets, we find that the component models provide both better predictions and better explanations than the faculty models. Weak model variations tend to improve generalization across students, but hurt generalization across items and make a sacrifice to explanatory power. More generally, the approach could be used to identify malleable components of cognitive functions, such as spatial reasoning or executive functions. Copyright © 2016 Cognitive Science Society, Inc.
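A minimal sketch of the latent-variable idea in a component (additive-factors-style) model: the log-odds of success on a task sum over the skill components it exercises, so practice on a shared component transfers to a new task. Component names and parameter values are invented, not estimated from the datasets.

```python
import math

def error_rate(practice, components, easiness, learn_rate):
    """Additive-factors-style sketch: log-odds of success on a task is
    the sum, over the skill components the task exercises, of an
    easiness term plus a learning rate times prior practice on that
    component. Returns the predicted error rate."""
    logit = sum(easiness[c] + learn_rate[c] * practice[c]
                for c in components)
    return 1.0 - 1.0 / (1.0 + math.exp(-logit))

# Invented components and parameters for a hypothetical graphing task
EASINESS = {"slope": 0.2, "intercept": 0.5}
LEARN_RATE = {"slope": 0.3, "intercept": 0.3}

# Prior practice on "slope" from earlier tasks transfers to a new task
p_err_practiced = error_rate({"slope": 3, "intercept": 0},
                             ["slope", "intercept"], EASINESS, LEARN_RATE)
p_err_novice = error_rate({"slope": 0, "intercept": 0},
                          ["slope", "intercept"], EASINESS, LEARN_RATE)
```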
Evaluation Theory, Models, and Applications
Stufflebeam, Daniel L.; Shinkfield, Anthony J.
2007-01-01
"Evaluation Theory, Models, and Applications" is designed for evaluators and students who need to develop a commanding knowledge of the evaluation field: its history, theory and standards, models and approaches, procedures, and inclusion of personnel as well as program evaluation. This important book shows how to choose from a growing…
Swanson, Patricia E.
2015-01-01
Elementary school mathematics is increasingly recognized for its crucial role in developing the foundational skills and understandings for algebra. In this article, the author uses a lesson to introduce the concept of "rates"--comparing two different types and units of measure--and how to graph them. Described is the lesson and shared…
Model Theory in Algebra, Analysis and Arithmetic
Dries, Lou; Macpherson, H Dugald; Pillay, Anand; Toffalori, Carlo; Wilkie, Alex J
2014-01-01
Presenting recent developments and applications, the book focuses on four main topics in current model theory: 1) the model theory of valued fields; 2) undecidability in arithmetic; 3) NIP theories; and 4) the model theory of real and complex exponentiation. Young researchers in model theory will particularly benefit from the book, as will more senior researchers in other branches of mathematics.
Grey-theory based intrusion detection model
Qin Boping; Zhou Xianwei; Yang Jun; Song Cunyi
2006-01-01
To solve the problem that current intrusion detection models need large-scale data for model formulation in real-time use, an intrusion detection system model based on grey theory (GTIDS) is presented. Grey theory has the merits of fewer requirements on the original data scale, less limitation on the distribution pattern, and a simpler modeling algorithm. With these merits GTIDS constructs its model from a partial time sequence for rapid detection of intrusive acts in a secure system. In this detection model the false drop and false retrieval rates are effectively reduced through twice modeling and repeated detection on the target data. Furthermore, the GTIDS framework and the specific modeling algorithm are presented. The effectiveness of GTIDS is demonstrated through emulated experiments comparing it with Snort and the next-generation intrusion detection expert system (NIDES) of SRI International.
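The grey-model flavor referenced here can be illustrated with the classic GM(1,1) forecaster, which works from a short data sequence by fitting a first-order equation to the accumulated series; the monitoring sequence below is invented toy data, not from the paper.

```python
import math

def gm11_predict(series, ahead=1):
    """GM(1,1) grey forecaster: accumulate the series (AGO), fit
    dx/dt + a*x = b by least squares against the mean-generated
    background values, and forecast `ahead` steps past the end."""
    n = len(series)
    agg = [series[0]]
    for v in series[1:]:
        agg.append(agg[-1] + v)
    # background values: means of consecutive accumulated points
    z = [0.5 * (agg[k] + agg[k + 1]) for k in range(n - 1)]
    y = series[1:]
    m = n - 1
    sz, sy = sum(z), sum(y)
    szz = sum(zi * zi for zi in z)
    szy = sum(zi * yi for zi, yi in zip(z, y))
    a = (sz * sy - m * szy) / (m * szz - sz * sz)
    b = (sy + a * sz) / m
    c0 = series[0] - b / a
    x_hat = lambda j: c0 * math.exp(-a * j) + b / a
    k = n - 1 + ahead
    return x_hat(k) - x_hat(k - 1)  # inverse AGO restores original scale

# Invented short monitoring sequence (e.g. alert counts per interval)
data = [10.0, 10.5, 11.1, 11.6, 12.2]
forecast = gm11_predict(data, ahead=1)
```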
Modelling Australia's Retail Mortgage Rate
Abbas Valadkhani; Sajid Anwar
2012-01-01
There is an ongoing controversy over whether banks' mortgage rates rise more readily than they fall due to their asymmetric responses to changes in the cash rate. This paper examines the dynamic interplay between the cash rate and the variable mortgage rate using monthly data in the post-1989 era. Unlike previous studies for Australia, our proposed threshold and asymmetric error-correction models account for both the amount and adjustment asymmetries. We found that rate rises have much larger ...
Lee, Gyeong-Geun, E-mail: gglee@kaeri.re.kr; Jin, Hyung-Ha; Lee, Yong-Bok; Kwon, Junhyun
2014-06-01
Radiation-induced segregation (RIS) is the phenomenon of compositional change at point defect sinks in alloys irradiated at a moderate temperature. Owing to the potential relevance of RIS by way of the susceptibility of structural materials to irradiation-assisted stress corrosion cracking, basic research on austenitic stainless steels used in nuclear reactors has been carried out in recent years. In this work, commercial stainless steel 316 specimens were irradiated with Fe ions, and the resulting changes in Cr and Ni compositions were characterized using transmission electron microscopy and energy-dispersive X-ray spectroscopy. The samples with various grain boundary orientations, including the special Σ3 orientation, were analyzed. The ledges of a few special Σ3 twin boundaries showed significantly higher RIS compared to the coherent regions. The RIS behavior of a parallel twin pair was observed, and two profiles of RIS were found in them. The inner twins in multi-twins showed considerably lower RIS compared to the outer twins. For the calculation of RIS, time-dependent differential equations based on the rate theory were established and numerically integrated. An additional variable, representing the sink strength of the grain boundary, was introduced in the differential equations, and the concentration profiles of the Σ3 twins were calculated. The calculated results were in good agreement with the experimental results.
Causal Rate Distortion Function and Relations to Filtering Theory
Charalambous, Charalambos D; Kourtellaris, Christos K
2011-01-01
A causal rate distortion function is defined, its solution is described, and its relation to filtering theory is discussed. The relation to filtering is obtained via a causal constraint imposed on the reconstruction kernel to be realizable.
Item Response Theory Analyses of the Parent and Teacher Ratings of the DSM-IV ADHD Rating Scale
Gomez, Rapson
2008-01-01
The graded response model (GRM), which is based on item response theory (IRT), was used to evaluate the psychometric properties of the inattention and hyperactivity/impulsivity symptoms in an ADHD rating scale. To accomplish this, parents and teachers completed the DSM-IV ADHD Rating Scale (DARS; Gomez et al., "Journal of Child Psychology and…
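The graded response model's category probabilities follow from differences of cumulative two-parameter logistic curves; a sketch with invented item parameters (not those estimated in the study):

```python
import math

def grm_category_probs(theta, discrimination, thresholds):
    """Samejima's graded response model: P(response >= category k) is a
    two-parameter logistic curve at each threshold, and category
    probabilities are differences of adjacent curves. The thresholds
    must be increasing."""
    cum = [1.0]
    for b in thresholds:
        cum.append(1.0 / (1.0 + math.exp(-discrimination * (theta - b))))
    cum.append(0.0)
    return [cum[k] - cum[k + 1] for k in range(len(cum) - 1)]

# A four-category symptom rating (never / sometimes / often / very often)
probs = grm_category_probs(theta=0.5, discrimination=1.5,
                           thresholds=[-1.0, 0.0, 1.2])
```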
Stochastic Climate Theory and Modelling
Franzke, Christian L E; Berner, Judith; Williams, Paul D; Lucarini, Valerio
2014-01-01
Stochastic methods are a crucial area in contemporary climate research and are increasingly being used in comprehensive weather and climate prediction models as well as reduced order climate models. Stochastic methods are used as subgrid-scale parameterizations as well as for model error representation, uncertainty quantification, data assimilation and ensemble prediction. The need to use stochastic approaches in weather and climate models arises because we still cannot resolve all necessary processes and scales in comprehensive numerical weather and climate prediction models. In many practical applications one is mainly interested in the largest and potentially predictable scales and not necessarily in the small and fast scales. For instance, reduced order models can simulate and predict large scale modes. Statistical mechanics and dynamical systems theory suggest that in reduced order models the impact of unresolved degrees of freedom can be represented by suitable combinations of deterministic and stochast...
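The deterministic-plus-stochastic reduction described above can be caricatured by a single resolved variable whose unresolved fast scales are replaced by white noise, giving an Ornstein-Uhlenbeck process; all coefficients below are invented.

```python
import math
import random

def reduced_model_variance(steps=100000, dt=0.002, damping=1.0,
                           sigma=0.5, seed=1):
    """Toy reduced-order model: the unresolved fast degrees of freedom
    are represented by white noise, dx = -damping*x*dt + sigma*dW.
    Returns the sample variance; theory gives sigma**2 / (2*damping)."""
    rng = random.Random(seed)
    x = 0.0
    total = total_sq = 0.0
    for _ in range(steps):
        x += -damping * x * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        total += x
        total_sq += x * x
    mean = total / steps
    return total_sq / steps - mean * mean

var_est = reduced_model_variance()  # should approach 0.5**2 / 2 = 0.125
```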
Model companions of theories with an automorphism
Kikyo, Hirotaka
1998-01-01
For a theory $T$ in $L$, $T_\sigma$ is the theory of the models of $T$ with an automorphism $\sigma$. If $T$ is an unstable model complete theory without the independence property, then $T_\sigma$ has no model companion. If $T$ is an unstable model complete theory and $T_\sigma$ has the amalgamation property, then $T_\sigma$ has no model companion. If $T$ is model complete and has the fcp, then $T_\sigma$ has no model completion.
Random walk theory and exchange rate dynamics in transition economies
Gradojević Nikola
2010-01-01
This paper investigates the validity of the random walk theory in the Euro-Serbian dinar exchange rate market. We apply Andrew Lo and Archie MacKinlay's (1988) conventional variance ratio test and Jonathan Wright's (2000) non-parametric ranks and signs based variance ratio tests to the daily Euro/Serbian dinar exchange rate returns using data from January 2005 - December 2008. Both types of variance ratio tests overwhelmingly reject the random walk hypothesis over the data span. To assess the robustness of our findings, we examine the forecasting performance of a non-linear, non-parametric model in the spirit of Francis Diebold and James Nason (1990) and find that it is able to significantly improve upon the random walk model, thus confirming the existence of foreign exchange market imperfections in a small transition economy such as Serbia. In the last part of the paper, we conduct a comparative study of how our results relate to those of other transition economies in the region.
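The Lo-MacKinlay statistic referenced above compares the variance of q-period returns with q times the one-period variance; a minimal sketch on simulated data (not the dinar series used in the paper):

```python
import random

def variance_ratio(returns, q):
    """Lo-MacKinlay variance ratio: variance of overlapping q-period
    return sums divided by q times the one-period variance; close to 1
    for a random walk, above 1 for positively autocorrelated returns."""
    n = len(returns)
    mu = sum(returns) / n
    var1 = sum((r - mu) ** 2 for r in returns) / (n - 1)
    sums = [sum(returns[i:i + q]) for i in range(n - q + 1)]
    varq = sum((s - q * mu) ** 2 for s in sums) / (len(sums) - 1)
    return varq / (q * var1)

random.seed(7)
rw_returns = [random.gauss(0.0, 1.0) for _ in range(5000)]
vr = variance_ratio(rw_returns, q=5)  # near 1 under the random walk null
```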
Derivation of instanton rate theory from first principles
Richardson, Jeremy O
2015-01-01
Instanton rate theory is used to study tunneling events in a wide range of systems including low-temperature chemical reactions. Despite many successful applications, the method has never been obtained from first principles, relying instead on the "ImF" premise. In this paper, the same expression for the rate of barrier penetration at finite temperature is rederived from quantum scattering theory [W. H. Miller, S. D. Schwartz, and J. W. Tromp, J. Chem. Phys. 79, 4889 (1983)] using a semiclassical Green's function formalism. This justifies the instanton approach and provides a route to deriving the rate of other processes.
Rate theory on water exchange in aqueous uranyl ion
Dang, Liem X.; Vo, Quynh N.; Nilsson, Mikael; Nguyen, Hung D.
2017-03-01
We report a classical rate theory approach to predict the exchange mechanism that occurs between water and aqueous uranyl ion. Using our water and ion-water polarizable force field and molecular dynamics techniques, we computed the potentials of mean force for the uranyl ion-water pair as a function of different pressures at ambient temperature. These potentials of mean force were used to calculate rate constants using transition state theory; the transmission coefficients also were examined using the reactive flux method and Grote-Hynes approach. The computed activation volumes are positive; thus, the mechanism of this particular water-exchange is a dissociative process.
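The step from a potential of mean force to a transition-state-theory rate constant can be sketched in one dimension. The grid, barrier height, and reduced units below are illustrative assumptions, not the paper's polarizable uranyl-water system:

```python
import math

def tst_rate_1d(W, r, beta, mass, i_ts):
    """Minimal 1-D transition-state-theory rate from a tabulated potential
    of mean force W on grid r (reduced units):
        k_TST = sqrt(1/(2*pi*beta*mass)) * exp(-beta*W[i_ts]) / Z_reactant
    where Z_reactant integrates exp(-beta*W) over the reactant well."""
    dr = r[1] - r[0]
    z_reactant = sum(math.exp(-beta * w) for w in W[:i_ts]) * dr
    flux_factor = math.sqrt(1.0 / (2.0 * math.pi * beta * mass))  # <|v|>/2
    return flux_factor * math.exp(-beta * W[i_ts]) / z_reactant

# illustrative PMF: reactant well at r = 0, barrier of height 4 at r = 0.5
r = [0.005 * i for i in range(101)]
W = [4.0 * math.sin(math.pi * x) ** 2 for x in r]
i_ts = 100  # barrier index (r = 0.5)
k_hot = tst_rate_1d(W, r, beta=1.0, mass=1.0, i_ts=i_ts)
k_cold = tst_rate_1d(W, r, beta=2.0, mass=1.0, i_ts=i_ts)
```

The rate falls with increasing beta (lower temperature), the expected Arrhenius behavior; in the paper this bare TST rate is further corrected by a transmission coefficient from reactive-flux or Grote-Hynes analysis.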
Derivation of instanton rate theory from first principles
Richardson, Jeremy O.
2016-03-01
Instanton rate theory is used to study tunneling events in a wide range of systems including low-temperature chemical reactions. Despite many successful applications, the method has never been obtained from first principles, relying instead on the "Im F" premise. In this paper, the same expression for the rate of barrier penetration at finite temperature is rederived from quantum scattering theory [W. H. Miller, S. D. Schwartz, and J. W. Tromp, J. Chem. Phys. 79, 4889 (1983)] using a semiclassical Green's function formalism. This justifies the instanton approach and provides a route to deriving the rate of other processes.
Farazdaghi, Hadi
2011-02-01
Photosynthesis is the origin of oxygenic life on the planet, and its models are the core of all models of plant biology, agriculture, environmental quality and global climate change. A theory is presented here, based on single process biochemical reactions of Rubisco, recognizing that: In the light, Rubisco activase helps separate Rubisco from the stored ribulose-1,5-bisphosphate (RuBP), activates Rubisco with carbamylation and addition of Mg²⁺, and then produces two products, in two steps: (Step 1) Reaction of Rubisco with RuBP produces a Rubisco-enediol complex, which is the carboxylase-oxygenase enzyme (Enco) and (Step 2) Enco captures CO₂ and/or O₂ and produces intermediate products leading to production and release of 3-phosphoglycerate (PGA) and Rubisco. PGA interactively controls (1) the carboxylation-oxygenation, (2) electron transport, and (3) triosephosphate pathway of the Calvin-Benson cycle that leads to the release of glucose and regeneration of RuBP. Initially, the total enzyme participates in the two steps of the reaction transitionally and its rate follows Michaelis-Menten kinetics. But, for a continuous steady state, Rubisco must be divided into two concurrently active segments for the two steps. This causes a deviation of the steady state from the transitional rate. Kinetic models are developed that integrate the transitional and the steady state reactions. They are tested and successfully validated with verifiable experimental data. The single-process theory is compared to the widely used two-process theory of Farquhar et al. (1980. Planta 149, 78-90), which assumes that the carboxylation rate is either Rubisco-limited at low CO₂ levels such as CO₂ compensation point, or RuBP regeneration-limited at high CO₂. Since the photosynthesis rate cannot increase beyond the two-process theory's Rubisco limit at the CO₂ compensation point, net photosynthesis cannot increase above zero in daylight, and since there is always respiration at
A Theory of Rate Coding Control by Intrinsic Plasticity Effects
Naudé, J.; Paz, J. T.; Berry, H.; Delord, B.
2012-01-01
Intrinsic plasticity (IP) is a ubiquitous activity-dependent process regulating neuronal excitability and a cellular correlate of behavioral learning and neuronal homeostasis. Because IP is induced rapidly and maintained long-term, it likely represents a major determinant of adaptive collective neuronal dynamics. However, assessing the exact impact of IP has remained elusive. Indeed, it is extremely difficult to disentangle the complex non-linear interaction between IP effects, by which conductance changes alter neuronal activity, and IP rules, whereby activity modifies conductances via signaling pathways. Moreover, the mechanisms underlying the two major IP effects on firing rate, threshold and gain modulation, remain unknown. Here, using extensive simulations and sensitivity analysis of Hodgkin-Huxley models, we show that threshold and gain modulation are accounted for by maximal conductance plasticity of conductances situated in two separate domains of the parameter space, corresponding to sub- and supra-threshold conductances (i.e. activating below or above the spike-onset threshold potential). Analyzing equivalent integrate-and-fire models, we provide formal expressions of sensitivities relating to conductance parameters, unraveling previously uncharacterized mechanisms governing IP effects. Our results generalize to the IP of other conductance parameters and allow strong inference for calcium-gated conductances, yielding a general picture that accounts for a large repertoire of experimental observations. The expressions we provide can be combined with IP rules in rate or spiking models, offering a general framework to systematically assess the computational consequences of IP of pharmacologically identified conductances with both fine-grained description and mathematical tractability. We provide an example of such an IP loop model addressing the important issue of the homeostatic regulation of spontaneous discharge. Because we do not formulate any assumptions on modification rules
Rate Theory for Correlated Processes: Double Jumps in Adatom Diffusion
Jacobsen, J.; Jacobsen, Karsten Wedel; Sethna, J.
1997-01-01
We study the rate of activated motion over multiple barriers, in particular the correlated double jump of an adatom diffusing on a missing-row reconstructed platinum (110) surface. We develop a transition path theory, showing that the activation energy is given by the minimum-energy trajectory which succeeds in the double jump. We explicitly calculate this trajectory within an effective-medium molecular dynamics simulation. A cusp in the acceptance region leads to a √T prefactor for the activated rate of double jumps. Theory and numerical results agree.
Models in theory building: the case of early string theory
Castellani, Elena [Department of Philosophy, Florence (Italy)
2013-07-01
The history of the origins and first steps of string theory, from Veneziano's formulation of his famous scattering amplitude in 1968 to the 'first string revolution' in 1984, provides rich material for discussing traditional issues in the philosophy of science. This paper focuses on the initial phase of this history, that is, the making of early string theory out of the 'dual theory of strong interactions', motivated by the aim of finding a viable theory of hadrons in the framework of the so-called S-matrix theory of the Sixties: from the first two models proposed (the Dual Resonance Model and the Shapiro-Virasoro Model) to all the subsequent endeavours to extend and complete the theory, including its string interpretation. As this paper aims to show, by providing an exemplary illustration of how a scientific theory is built out of tentative and partial models, this is a particularly fruitful case study for the current philosophical discussion on how to characterize a scientific model, a scientific theory, and the relation between models and theories.
A Membrane Model from Implicit Elasticity Theory
Freed, A. D.; Liao, J.; Einstein, D. R.
2014-01-01
A Fungean solid is derived for membranous materials as a body defined by isotropic response functions whose mathematical structure is that of a Hookean solid where the elastic constants are replaced by functions of state derived from an implicit, thermodynamic, internal-energy function. The theory utilizes Biot’s (1939) definitions for stress and strain that, in 1-dimension, are the stress/strain measures adopted by Fung (1967) when he postulated what is now known as Fung’s law. Our Fungean membrane model is parameterized against a biaxial data set acquired from a porcine pleural membrane subjected to three, sequential, proportional, planar extensions. These data support an isotropic/deviatoric split in the stress and strain-rate hypothesized by our theory. These data also demonstrate that the material response is highly non-linear but, otherwise, mechanically isotropic. These data are described reasonably well by our otherwise simple, four-parameter, material model.
Application of semiclassical methods to reaction rate theory
Hernandez, R.
1993-11-01
This work is concerned with the development of approximate methods to describe relatively large chemical systems. This effort has been divided into two primary directions: First, we have extended and applied a semiclassical transition state theory (SCTST) originally proposed by Miller to obtain microcanonical and canonical (thermal) rates for chemical reactions described by a nonseparable Hamiltonian, i.e. most reactions. Second, we have developed a method to describe the fluctuations of decay rates of individual energy states from the average RRKM rate in systems where the direct calculation of individual rates would be impossible. Combined with the semiclassical theory this latter effort has provided a direct comparison to the experimental results of Moore and coworkers. In SCTST, the Hamiltonian is expanded about the barrier and the "good" action-angle variables are obtained perturbatively; a WKB analysis of the effectively one-dimensional reactive direction then provides the transmission probabilities. The advantages of this local approximate treatment are that it includes tunneling effects and anharmonicity, and it systematically provides a multi-dimensional dividing surface in phase space. The SCTST thermal rate expression has been reformulated providing increased numerical efficiency (as compared to a naive Boltzmann average), an appealing link to conventional transition state theory (involving a "prereactive" partition function depending on the action of the reactive mode), and the ability to go beyond the perturbative approximation.
Models in cooperative game theory
Branzei, Rodica; Tijs, Stef
2008-01-01
This book investigates models in cooperative game theory in which the players have the possibility to cooperate partially. In a crisp game the agents are either fully involved or not involved at all in cooperation with some other agents, while in a fuzzy game players are allowed to cooperate with infinite many different participation levels, varying from non-cooperation to full cooperation. A multi-choice game describes the intermediate case in which each player may have a fixed number of activity levels. Different set and one-point solution concepts for these games are presented. The properties of these solution concepts and their interrelations on several classes of crisp, fuzzy, and multi-choice games are studied. Applications of the investigated models to many economic situations are indicated as well. The second edition is highly enlarged and contains new results and additional sections in the different chapters as well as one new chapter.
AGGREGATE RATING MODEL IN THE TOURISM INDUSTRY
Maris Angela
2014-07-01
In this paper the authors present an aggregate rating model based on credit-scoring models, banking models, and their own rating model. The multi-criteria approach and the aggregate model better capture the business risk of a company.
Stochastic models: theory and simulation.
Field, Richard V., Jr.
2008-03-01
Many problems in applied science and engineering involve physical phenomena that behave randomly in time and/or space. Examples are diverse and include turbulent flow over an aircraft wing, Earth climatology, material microstructure, and the financial markets. Mathematical models for these random phenomena are referred to as stochastic processes and/or random fields, and Monte Carlo simulation is the only general-purpose tool for solving problems of this type. The use of Monte Carlo simulation requires methods and algorithms to generate samples of the appropriate stochastic model; these samples then become inputs and/or boundary conditions to established deterministic simulation codes. While numerous algorithms and tools currently exist to generate samples of simple random variables and vectors, no cohesive simulation tool yet exists for generating samples of stochastic processes and/or random fields. There are two objectives of this report. First, we provide some theoretical background on stochastic processes and random fields that can be used to model phenomena that are random in space and/or time. Second, we provide simple algorithms that can be used to generate independent samples of general stochastic models. The theory and simulation of random variables and vectors is also reviewed for completeness.
Quiver gauge theories and integrable lattice models
Yagi, Junya
2015-01-01
We discuss connections between certain classes of supersymmetric quiver gauge theories and integrable lattice models from the point of view of topological quantum field theories (TQFTs). The relevant classes include 4d $\mathcal{N} = 1$ theories known as brane box and brane tiling models, 3d $\mathcal{N} = 2$ and 2d $\mathcal{N} = (2,2)$ theories obtained from them by compactification, and 2d $\mathcal{N} = (0,2)$ theories closely related to these theories. We argue that their supersymmetric indices carry structures of TQFTs equipped with line operators, and as a consequence, are equal to the partition functions of lattice models. The integrability of these models follows from the existence of an extra dimension in the TQFTs, which emerges after the theories are embedded in M-theory. The Yang-Baxter equation expresses the invariance of supersymmetric indices under Seiberg duality and its lower-dimensional analogs.
Quiver gauge theories and integrable lattice models
Yagi, Junya [International School for Advanced Studies (SISSA),via Bonomea 265, 34136 Trieste (Italy); INFN - Sezione di Trieste,via Valerio 2, 34149 Trieste (Italy)
2015-10-09
We discuss connections between certain classes of supersymmetric quiver gauge theories and integrable lattice models from the point of view of topological quantum field theories (TQFTs). The relevant classes include 4d N=1 theories known as brane box and brane tiling models, 3d N=2 and 2d N=(2,2) theories obtained from them by compactification, and 2d N=(0,2) theories closely related to these theories. We argue that their supersymmetric indices carry structures of TQFTs equipped with line operators, and as a consequence, are equal to the partition functions of lattice models. The integrability of these models follows from the existence of an extra dimension in the TQFTs, which emerges after the theories are embedded in M-theory. The Yang-Baxter equation expresses the invariance of supersymmetric indices under Seiberg duality and its lower-dimensional analogs.
Enterprise Modelling supported by Manufacturing Systems Theory
Myklebust, Odd
2002-01-01
There exist today a large number of enterprise models or enterprise modelling approaches. In a study of standards and project developed models there are two approaches: CIMOSA “The Open Systems Architecture for CIM” and GERAM, “Generalised Enterprise Reference Architecture”, which show a system orientation that can be further followed as interesting research topics for a system theory oriented approach for enterprise models. In the selection of system theories, manufacturing system theory...
Short-run Exchange-Rate Dynamics: Theory and Evidence
Carlson, John A.; Dahl, Christian Møller; Osler, Carol L.
Recent research has revealed a wealth of information about the microeconomics of currency markets and thus the determination of exchange rates at short horizons. This information is valuable to us as scientists since, like evidence of macroeconomic regularities, it can provide critical guidance for designing exchange-rate models. This paper presents an optimizing model of short-run exchange-rate dynamics consistent with both the micro evidence and the macro evidence, the first such model of which we are aware. With respect to microeconomics, the model is consistent with the institutional structure of currency markets, it accurately reflects the constraints and objectives faced by the major participants, and it fits key stylized facts concerning returns and order flow. With respect to macroeconomics, the model is consistent with most of the major puzzles that have emerged under floating rates.
Towards a model for protein production rates
Dong, J J; Zia, R K P
2007-01-01
In the process of translation, ribosomes read the genetic code on an mRNA and assemble the corresponding polypeptide chain. The ribosomes perform discrete directed motion which is well modeled by a totally asymmetric simple exclusion process (TASEP) with open boundaries. Using Monte Carlo simulations and a simple mean-field theory, we discuss the effect of one or two "bottlenecks" (i.e., slow codons) on the production rate of the final protein. Confirming and extending previous work by Chou and Lakatos, we find that the location and spacing of the slow codons can affect the production rate quite dramatically. In particular, we observe a novel "edge" effect, i.e., an interaction of a single slow codon with the system boundary. We focus in detail on ribosome density profiles and provide a simple explanation for the length scale which controls the range of these interactions.
Towards a Model for Protein Production Rates
Dong, J. J.; Schmittmann, B.; Zia, R. K. P.
2007-07-01
In the process of translation, ribosomes read the genetic code on an mRNA and assemble the corresponding polypeptide chain. The ribosomes perform discrete directed motion which is well modeled by a totally asymmetric simple exclusion process (TASEP) with open boundaries. Using Monte Carlo simulations and a simple mean-field theory, we discuss the effect of one or two "bottlenecks" (i.e., slow codons) on the production rate of the final protein. Confirming and extending previous work by Chou and Lakatos, we find that the location and spacing of the slow codons can affect the production rate quite dramatically. In particular, we observe a novel "edge" effect, i.e., an interaction of a single slow codon with the system boundary. We focus in detail on ribosome density profiles and provide a simple explanation for the length scale which controls the range of these interactions.
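The bottleneck effect discussed above can be sketched with a small Monte Carlo simulation of the open-boundary TASEP. The lattice size, rates, and random-sequential update scheme below are illustrative choices, not the authors' exact simulation setup:

```python
import random

def tasep_current(L, alpha, beta, slow_site, p_slow, steps, seed=1):
    """Monte Carlo for an open-boundary TASEP with one slow site
    ('bottleneck'). alpha: entry rate, beta: exit rate, p_slow: hop
    rate at slow_site. Returns the average exit current per sweep."""
    random.seed(seed)
    lattice = [0] * L
    exits = 0
    for _ in range(steps):
        i = random.randrange(L + 1)      # pick entry, exit, or a bulk bond
        if i == 0:                       # particle injection at the left
            if lattice[0] == 0 and random.random() < alpha:
                lattice[0] = 1
        elif i == L:                     # particle extraction at the right
            if lattice[L - 1] == 1 and random.random() < beta:
                lattice[L - 1] = 0
                exits += 1
        else:                            # bulk hop from site i-1 to site i
            rate = p_slow if i - 1 == slow_site else 1.0
            if lattice[i - 1] == 1 and lattice[i] == 0 and random.random() < rate:
                lattice[i - 1] = 0
                lattice[i] = 1
    return exits * (L + 1) / steps       # one sweep = L+1 attempted moves

# a strong mid-chain bottleneck should cut the protein production rate
j_fast = tasep_current(50, alpha=0.8, beta=0.8, slow_site=25, p_slow=1.0, steps=400000)
j_slow = tasep_current(50, alpha=0.8, beta=0.8, slow_site=25, p_slow=0.1, steps=400000)
```

Placing the slow codon near a boundary instead of mid-chain, or adding a second slow codon at varying spacing, reproduces the location and spacing effects the abstract describes.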
Queuing theory models for computer networks
Galant, David C.
1989-01-01
A set of simple queuing theory models that can model the average response of a network of computers to a given traffic load has been implemented using a spreadsheet. Because the models omit fine detail about the network traffic rates, traffic patterns, and the hardware used to implement the networks, the impact of variations in traffic patterns and intensities, channel capacities, and message protocols can be assessed readily. A sample use of the models applied to a realistic problem is included in appendix A. Appendix B provides a glossary of terms used in this paper. The Ames Research Center computer communication network is an evolving network of local area networks (LANs) connected via gateways and high-speed backbone communication channels. Intelligent planning of expansion and improvement requires understanding the behavior of the individual LANs as well as the collection of networks as a whole.
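The kind of average-response model described above can be sketched with the standard M/M/1 formula T = 1/(μ − λ). Composing hops as independent M/M/1 channels is a simplifying assumption of this sketch, not the spreadsheet's actual protocol model:

```python
def mm1_response_time(arrival_rate, service_rate):
    """Average response time of an M/M/1 queue, the basic building
    block of simple network performance models: T = 1 / (mu - lambda)."""
    if arrival_rate >= service_rate:
        raise ValueError("unstable queue: utilization >= 1")
    return 1.0 / (service_rate - arrival_rate)

def chain_response_time(hops):
    """End-to-end average response across a chain of independent M/M/1
    channels, each given as (arrival_rate, service_rate) in msgs/s."""
    return sum(mm1_response_time(lam, mu) for lam, mu in hops)

# a fast backbone channel followed by a heavily loaded LAN segment
t = chain_response_time([(50.0, 200.0), (80.0, 100.0)])
```

Raising the LAN segment's arrival rate toward its service rate makes its term dominate the sum, which is how such models flag the channel that needs a capacity upgrade.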
戴天民
2001-01-01
The aim of this paper is to establish new principles of power and energy rate of incremental type in generalized continuum mechanics. By combining new principles of virtual velocity and virtual angular velocity, as well as of virtual stress and virtual couple stress, with cross terms of incremental rate type, a new principle of power and energy rate of incremental rate type with cross terms for micropolar continuum field theories is presented. From it, all corresponding equations of motion and boundary conditions, as well as power and energy rate equations of incremental rate type, for micropolar and nonlocal micropolar continua are derived with the help of generalized Piola's theorems, without any additional requirements. Complete results for micromorphic continua could be derived similarly. The results derived in the present paper are believed to be new. They could be used to establish corresponding finite element methods of incremental rate type for generalized continuum mechanics.
Attaining the rate-independent limit of a rate-dependent strain gradient plasticity theory
El-Naaman, Salim Abdallah; Nielsen, Kim Lau; Niordson, Christian Frithiof
2016-01-01
The existence of characteristic strain rates in rate-dependent material models, corresponding to rate-independent model behavior, is studied within a back stress based rate-dependent higher order strain gradient crystal plasticity model. Such characteristic rates have recently been observed for steady-state processes, and the present study aims to demonstrate that the observations in fact unearth a more widespread phenomenon. In this work, two newly proposed back stress formulations are adopted to account for the strain gradient effects in the single slip simple shear case, and characteristic...
On Dimer Models and Closed String Theories
Sarkar, Tapobrata
2007-01-01
We study some aspects of the recently discovered connection between dimer models and D-brane gauge theories. We argue that dimer models are also naturally related to closed string theories on non-compact orbifolds of $\mathbb{C}^2$ and $\mathbb{C}^3$, via their twisted sector R charges, and show that perfect matchings in dimer models correspond to twisted sector states in the closed string theory. We also use this formalism to study the combinatorics of some unstable orbifolds of $\mathbb{C}^2$.
New Pathways between Group Theory and Model Theory
Fuchs, László; Goldsmith, Brendan; Strüngmann, Lutz
2017-01-01
This volume focuses on group theory and model theory with a particular emphasis on the interplay of the two areas. The survey papers provide an overview of the developments across group, module, and model theory while the research papers present the most recent study in those same areas. With introductory sections that make the topics easily accessible to students, the papers in this volume will appeal to beginning graduate students and experienced researchers alike. As a whole, this book offers a cross-section view of the areas in group, module, and model theory, covering topics such as DP-minimal groups, Abelian groups, countable 1-transitive trees, and module approximations. The papers in this book are the proceedings of the conference “New Pathways between Group Theory and Model Theory,” which took place February 1-4, 2016, in Mülheim an der Ruhr, Germany, in honor of the editors’ colleague Rüdiger Göbel. This publication is dedicated to Professor Göbel, who passed away in 2014. He was one of th...
Applications of model theory to functional analysis
Iovino, Jose
2014-01-01
During the last two decades, methods that originated within mathematical logic, particularly set theory and model theory, have exhibited powerful applications to Banach space theory. This volume constitutes the first self-contained introduction to techniques of model theory in Banach space theory. The area of research has grown rapidly since this monograph's first appearance, but much of this material is still not readily available elsewhere. For instance, this volume offers a unified presentation of Krivine's theorem and the Krivine-Maurey theorem on stable Banach spaces, with emphasis on the
Domain Theory, Its Models and Concepts
Andreasen, Mogens Myrup; Howard, Thomas J.; Bruun, Hans Peter Lomholt
2014-01-01
Domain Theory is a systems approach for the analysis and synthesis of products. Its basic idea is to view a product as systems of activities, organs and parts and to define structure, elements, behaviour and function in these domains. The theory is a basis for a long line of research contributions and industrial applications, especially for the DFX areas (not reported here) and for product modelling. The theory therefore contains a rich ontology of interrelated concepts. The Domain Theory does not aim to create normative methods but rather a collection of concepts related to design phenomena, which can support design work and form elements of designers' mindsets and thereby their practice. The theory is a model-based theory, which means it is composed of concepts and models that explain certain design phenomena. Many similar theories are described in the literature with differences...
Quantum field theory competitive models
Tolksdorf, Jürgen; Zeidler, Eberhard
2009-01-01
For more than 70 years, quantum field theory (QFT) has been a driving force in the development of theoretical physics. Equally fascinating is the fruitful impact which QFT has had in rather remote areas of mathematics. The present book features some of the different approaches, physical viewpoints, and techniques used to make the notion of quantum field theory more precise. For example, the present book contains a discussion including general considerations, stochastic methods, deformation theory and the holographic AdS/CFT correspondence. It also contains a discussion of more recent developments like the use of category theory and topos theoretic methods to describe QFT. The present volume emerged from the 3rd 'Blaubeuren Workshop: Recent Developments in Quantum Field Theory', held in July 2007 at the Max Planck Institute of Mathematics in the Sciences in Leipzig/Germany. All of the contributions are committed to the idea of this workshop series: 'To bring together outstanding experts working in...
Engelen, Peter Jan; Lander, Michel W.; van Essen, Marc
2016-01-01
Research on crime has by no means reached a definitive conclusion on which factors are related to crime rates. We contribute to the crime literature by providing an integrated empirical model of economic and sociological theories of criminal behavior and by using a very comprehensive set of economic
Solvent Exchange in Liquid Methanol and Rate Theory
Dang, Liem X.; Schenter, Gregory K.
2016-01-01
To enhance our understanding of the solvent exchange mechanism in liquid methanol, we report a systematic study of this process using molecular dynamics simulations. We use transition state theory, the Impey-Madden-McDonald method, the reactive flux method, and Grote-Hynes theory to compute the rate constants for this process. Solvent coupling was found to dominate, resulting in a significantly small transmission coefficient. We predict a positive activation volume for the methanol exchange process. The essential features of the dynamics of the system as well as the pressure dependence are recovered from a Generalized Langevin Equation description of the dynamics. We find that the dynamics and response to anharmonicity can be decomposed into two time regimes, one corresponding to short time response (< 0.1 ps) and long time response (> 5 ps). An effective characterization of the process results from launching dynamics from the planar hypersurface corresponding to Grote-Hynes theory. This results in improved numerical convergence of correlation functions. This work was supported by the U.S. Department of Energy, Office of Science, Office of Basic Energy Sciences, Division of Chemical Sciences, Geosciences, and Biosciences. The calculations were carried out using computer resources provided by the Office of Basic Energy Sciences.
Theories, Models and Methodology in Writing Research
Rijlaarsdam, Gert; Bergh, van den Huub; Couzijn, Michel
1996-01-01
Theories, Models and Methodology in Writing Research describes the current state of the art in research on written text production. The chapters in the first part offer contributions to the creation of new theories and models for writing processes. The second part examines specific elements of the writing process.
The Friction Theory for Viscosity Modeling
Cisneros, Sergio; Zeberg-Mikkelsen, Claus Kjær; Stenby, Erling Halfdan
2001-01-01
In this work the one-parameter friction theory (f-theory) general models have been extended to the viscosity prediction and modeling of characterized oils. It is demonstrated that these simple models, which take advantage of the repulsive and attractive pressure terms of cubic equations of state such as the SRK, PR and PRSV, can provide accurate viscosity prediction and modeling of characterized oils. In the case of light reservoir oils, whose properties are close to those of normal alkanes, the one-parameter f-theory general models can predict the viscosity of these fluids with good accuracy. Yet, in the case when experimental information is available, a more accurate modeling can be obtained by means of a simple tuning procedure. A tuned f-theory general model can deliver highly accurate viscosity modeling above the saturation pressure and good prediction of the liquid-phase viscosity at pressures...
Quantum-Dot Semiconductor Optical Amplifiers: State Space Model versus Rate Equation Model
Hussein Taleb
2013-01-01
A simple and accurate dynamic model for QD-SOAs is proposed. The proposed model is based on state space theory: by eliminating the distance dependence of the rate equation model of the QD-SOA, we derive a state space model for the device. A comparison is made between the rate equation model and the state space model under both steady-state and transient regimes. Simulation results demonstrate that the derived state space model is not only much simpler and faster than the rate equation model, but also just as accurate.
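The advantage of a state space update over brute-force integration of a rate equation can be illustrated on a generic scalar linear system. This is a toy sketch under that assumption, not the QD-SOA equations themselves:

```python
import math

def state_space_step(x, u, a, b, dt):
    """Exact discrete-time state space update for the scalar linear system
    dx/dt = -a*x + b*u, with the input u held constant over the step dt:
        x[k+1] = Ad*x[k] + Bd*u[k],  Ad = exp(-a*dt),  Bd = (b/a)*(1 - Ad)."""
    ad = math.exp(-a * dt)
    bd = (b / a) * (1.0 - ad)
    return ad * x + bd * u

def rate_equation_step(x, u, a, b, dt, substeps=1000):
    """Brute-force Euler integration of the same rate equation; many small
    substeps are needed to approach the accuracy of the state space update."""
    h = dt / substeps
    for _ in range(substeps):
        x += h * (-a * x + b * u)
    return x

# one coarse state space step matches a thousand fine Euler substeps
x_ss = state_space_step(0.0, 1.0, a=2.0, b=3.0, dt=0.5)
x_re = rate_equation_step(0.0, 1.0, a=2.0, b=3.0, dt=0.5)
```

The state space step costs one exponential per update regardless of dt, which is the speed advantage the abstract reports for the derived QD-SOA model.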
Power law distribution of seismic rates: theory and data
Saichev, A
2004-01-01
We report an empirical determination of the probability density functions P(r) of the number r of earthquakes in finite space-time windows for the California catalog, over fixed spatial boxes 5 x 5 km² and time intervals dt = 1, 10, 100 and 1000 days. We find a stable power law tail P(r) ~ 1/r^(1+μ) with exponent μ ≈ 1.6 for all time intervals. These observations are explained by a simple stochastic branching process previously studied by many authors, the ETAS (epidemic-type aftershock sequence) model, which assumes that each earthquake can trigger other earthquakes ("aftershocks"). An aftershock sequence results in this model from the cascade of aftershocks of each past earthquake. We develop the full theory in terms of generating functions for describing the space-time organization of earthquake sequences and develop several approximations to solve the equations. The calibration of the theory to the empirical observations shows that it is essential to augment the ETAS model by taking account of th...
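A tail exponent of the kind reported above can be estimated with a Hill estimator; the sketch below assumes an ideal synthetic Pareto sample and is not the authors' calibration of the ETAS model:

```python
import math
import random

def hill_exponent(data, k):
    """Hill estimator of the tail exponent mu for P(r) ~ 1/r^(1+mu),
    i.e. P(X > x) ~ x^(-mu), using the k largest observations."""
    x = sorted(data, reverse=True)[:k + 1]
    logs = [math.log(x[i] / x[k]) for i in range(k)]
    return k / sum(logs)

# synthetic Pareto sample with true tail exponent mu = 1.6
random.seed(42)
sample = [random.paretovariate(1.6) for _ in range(20000)]
mu_hat = hill_exponent(sample, k=2000)
```

On real earthquake counts the choice of k matters: too small gives noisy estimates, too large mixes in the non-power-law bulk of the distribution.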
Statistical rate theory and kinetic energy-resolved ion chemistry: theory and applications.
Armentrout, P B; Ervin, Kent M; Rodgers, M T
2008-10-16
Ion chemistry, first discovered 100 years ago, has profitably been coupled with statistical rate theories, developed about 80 years ago and refined since. In this overview, the application of statistical rate theory to the analysis of kinetic-energy-dependent collision-induced dissociation (CID) reactions is reviewed. This procedure accounts for and quantifies the kinetic shifts that are observed as systems increase in size. The statistical approach developed allows straightforward extension to systems undergoing competitive or sequential dissociations. Such methods can also be applied to the reverse of the CID process, association reactions, as well as to quantitative analysis of ligand exchange processes. Examples of each of these types of reactions are provided and the literature surveyed for successful applications of this statistical approach to provide quantitative thermochemical information. Such applications include metal-ligand complexes, metal clusters, proton-bound complexes, organic intermediates, biological systems, saturated organometallic complexes, and hydrated and solvated species.
The theory and crisis of free floating exchange rates
R. TAMBORINI
2013-12-01
Following the end of the Bretton Woods system, faith in a freely floating exchange rate regime, and in particular in the complete laissez-faire policy of the US after 1980, was supported by a large and influential literature - the so-called "stock theory of the exchange rate" - which has had almost absolute dominance in the field. The present work is primarily of a theoretical nature, calling attention to the recent drastic change of opinion regarding the efficiency of foreign exchange markets, of expectations, and of speculation. In contrast to the previously held belief, agents that anticipate the market are now accused of inefficient, if not irrational, behaviour; they are said to be misinterpreting or violating the "hard data" that should lead the market onto the path of equilibrium. According to the author, this new vein is challenging, yet the new "bad" deus ex machina is as unconvincing as its "good" predecessors. Thus, the focus is precisely on the fundamental factors of the dynamics of exchange rates in the current environment of integrated finance.
Application of arrangement theory to unfolding models
Kamiya, Hidehiko; Tokushige, Norihide
2010-01-01
Arrangement theory plays an essential role in the study of the unfolding model used in many fields. This paper describes how arrangement theory can be usefully employed in solving the problems of counting (i) the number of admissible rankings in an unfolding model and (ii) the number of ranking patterns generated by unfolding models. The paper is mostly expository but also contains some new results such as simple upper and lower bounds for the number of ranking patterns in the unidimensional case.
Precision decay rate calculations in quantum field theory
Andreassen, Anders; Frost, William; Schwartz, Matthew D
2016-01-01
Tunneling in quantum field theory is worth understanding properly, not least because it controls the long-term fate of our universe. There are, however, a number of features of tunneling rate calculations which lack a desirable transparency, such as the necessity of analytic continuation, the appropriateness of using an effective instead of a classical potential, and the sensitivity to short-distance physics. This paper attempts to review in pedagogical detail the physical origin of tunneling and its connection to the path integral. Both the traditional potential-deformation method and a recent more direct propagator-based method are discussed. Some new insights from using approximate semi-classical solutions are presented. In addition, we explore the sensitivity of the lifetime of our universe to short-distance physics, such as quantum gravity, emphasizing a number of important subtleties.
Scientific Theories, Models and the Semantic Approach
Décio Krause
2007-12-01
According to the semantic view, a theory is characterized by a class of models. In this paper, we examine critically some of the assumptions that underlie this approach. First, we recall that models are models of something. Thus we cannot leave completely aside the axiomatization of the theories under consideration, nor can we ignore the metamathematics used to elaborate these models, for changes in the metamathematics often impose restrictions on the resulting models. Second, based on a parallel between van Fraassen's modal interpretation of quantum mechanics and Skolem's relativism regarding set-theoretic concepts, we introduce a distinction between relative and absolute concepts in the context of the models of a scientific theory. And we discuss the significance of that distinction. Finally, by focusing on contemporary particle physics, we raise the question: since there is no generally accepted unification of the parts of the standard model (namely, QED and QCD), we have no theory, in the usual sense of the term. This poses a difficulty: if there is no theory, how can we speak of its models? What are the latter models of? We conclude by noting that it is unclear that the semantic view can be applied to contemporary physical theories.
Realizations of interest rate models
Nieuwenhuis, J.W.
2000-01-01
In this paper we comment on a recent paper by Björk and Gombani. In contrast to that paper, our starting point is not the Musiela equation but the forward rate dynamics. In our approach we do not need to talk about infinitesimal generators.
Multiplicative earthquake likelihood models incorporating strain rates
Rhoades, D. A.; Christophersen, A.; Gerstenberger, M. C.
2017-01-01
We examine the potential for strain-rate variables to improve long-term earthquake likelihood models. We derive a set of multiplicative hybrid earthquake likelihood models in which cell rates in a spatially uniform baseline model are scaled using combinations of covariates derived from earthquake catalogue data, fault data, and strain rates for the New Zealand region. Three components of the strain rate estimated from GPS data over the period 1991-2011 are considered: the shear, rotational and dilatational strain rates. The hybrid model parameters are optimised for earthquakes of M 5 and greater over the period 1987-2006 and tested on earthquakes from the period 2012-2015, which is independent of the strain rate estimates. The shear strain rate is overall the most informative individual covariate, as indicated by Molchan error diagrams as well as multiplicative modelling. Most models including strain rates are significantly more informative than the best models excluding strain rates in both the fitting and testing period. A hybrid that combines the shear and dilatational strain rates with a smoothed seismicity covariate is the most informative model in the fitting period, and a simpler model without the dilatational strain rate is the most informative in the testing period. These results have implications for probabilistic seismic hazard analysis and can be used to improve the background model component of medium-term and short-term earthquake forecasting models.
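The multiplicative-hybrid construction can be sketched in a few lines: a uniform baseline cell rate is scaled by covariate factors raised to fitted exponents and then renormalized so the total expected rate is preserved. The covariate values and exponent below are made up for illustration, not the fitted New Zealand parameters.

```python
# Sketch of a multiplicative hybrid rate model: baseline cell rates
# scaled by a covariate (e.g. a shear-strain-rate proxy) ^ beta,
# renormalized to keep the total expected rate fixed.

def hybrid_rates(baseline_total, covariates, beta):
    raw = [c ** beta for c in covariates]          # cell-wise multipliers
    s = sum(raw)
    return [baseline_total * r / s for r in raw]   # renormalize

shear_proxy = [0.5, 1.0, 2.0, 4.0]   # hypothetical per-cell covariate
rates = hybrid_rates(baseline_total=10.0, covariates=shear_proxy, beta=1.0)
print(rates, sum(rates))
```

With several covariates one simply multiplies the per-cell factors together before renormalizing; the exponents are then optimised against the fitting-period catalogue.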
Constraint theory multidimensional mathematical model management
Friedman, George J
2017-01-01
Packed with new material and research, this second edition of George Friedman’s bestselling Constraint Theory remains an invaluable reference for all engineers, mathematicians, and managers concerned with modeling. As in the first edition, this text analyzes the way Constraint Theory employs bipartite graphs and presents the process of locating the “kernel of constraint” trillions of times faster than brute-force approaches, determining model consistency and computational allowability. Unique in its abundance of topological pictures of the material, this book balances left- and right-brain perceptions to provide a thorough explanation of multidimensional mathematical models. Much of the extended material in this new edition also comes from Phan Phan’s PhD dissertation in 2011, titled “Expanding Constraint Theory to Determine Well-Posedness of Large Mathematical Models.” Praise for the first edition: "Dr. George Friedman is indisputably the father of the very powerful methods of constraint theory...
Biological evolution model with conditional mutation rates
Saakian, David B.; Ghazaryan, Makar; Bratus, Alexander; Hu, Chin-Kun
2017-05-01
We consider an evolution model in which the mutation rates depend on the structure of the population: the mutation rates from lower populated sequences to higher populated sequences are reduced. We have applied the Hamilton-Jacobi equation method to solve the model and calculate the mean fitness. We have found that the modulated mutation rates are directed to increase the mean fitness.
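A toy two-sequence discrete-time version of this idea can be written down directly: selection acts on frequencies, and the mutation flow out of a sequence is damped when the target sequence is more populated. The fitness values, damping factor, and mutation rate below are illustrative assumptions, not the paper's model.

```python
# Two-type selection-mutation iteration with population-dependent
# (conditional) mutation rates. All parameters are illustrative.

def step(x, f, mu=0.05):
    # selection: reweight frequencies by fitness
    w = [x[i] * f[i] for i in range(2)]
    W = sum(w)
    x = [wi / W for wi in w]
    # conditional mutation: flow i -> j is damped when x[j] > x[i]
    rate01 = mu * (0.5 if x[1] > x[0] else 1.0)
    rate10 = mu * (0.5 if x[0] > x[1] else 1.0)
    flow01, flow10 = rate01 * x[0], rate10 * x[1]
    return [x[0] - flow01 + flow10, x[1] + flow01 - flow10]

x = [0.99, 0.01]          # start dominated by the less fit type
f = [1.0, 1.2]            # type 1 is fitter
mean0 = x[0]*f[0] + x[1]*f[1]
for _ in range(200):
    x = step(x, f)
mean_fitness = x[0]*f[0] + x[1]*f[1]
print(mean0, mean_fitness)
```

The iteration drives the population toward the fitter sequence, so the mean fitness rises, consistent with the abstract's claim that the modulated rates act to increase mean fitness.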
An "Emergent Model" for Rate of Change
Herbert, Sandra; Pierce, Robyn
2008-01-01
Does speed provide a "model for" rate of change in other contexts? Does JavaMathWorlds (JMW), animated simulation software, assist in the development of the "model for" rate of change? This project investigates the transference of understandings of rate gained in a motion context to a non-motion context. Students were 27 14-15 year old students at…
Adult Attachment Ratings (AAR): an item response theory analysis.
Pilkonis, Paul A; Kim, Yookyung; Yu, Lan; Morse, Jennifer Q
2014-01-01
The Adult Attachment Ratings (AAR) include 3 scales for anxious, ambivalent attachment (excessive dependency, interpersonal ambivalence, and compulsive care-giving), 3 for avoidant attachment (rigid self-control, defensive separation, and emotional detachment), and 1 for secure attachment. The scales include items (ranging from 6-16 in their original form) scored by raters using a 3-point format (0 = absent, 1 = present, and 2 = strongly present) and summed to produce a total score. Item response theory (IRT) analyses were conducted with data from 414 participants recruited from psychiatric outpatient, medical, and community settings to identify the most informative items from each scale. The IRT results allowed us to shorten the scales to 5-item versions that are more precise and easier to rate because of their brevity. In general, the effective range of measurement for the scales was 0 to +2 SDs for each of the attachment constructs; that is, from average to high levels of attachment problems. Evidence for convergent and discriminant validity of the scales was investigated by comparing them with the Experiences of Close Relationships-Revised (ECR-R) scale and the Kobak Attachment Q-sort. The best consensus among self-reports on the ECR-R, informant ratings on the ECR-R, and expert judgments on the Q-sort and the AAR emerged for anxious, ambivalent attachment. Given the good psychometric characteristics of the scale for secure attachment, however, this measure alone might provide a simple alternative to more elaborate procedures for some measurement purposes. Conversion tables are provided for the 7 scales to facilitate transformation from raw scores to IRT-calibrated (theta) scores.
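The notion of "most informative items" used to shorten the AAR scales rests on Fisher information. The AAR items are 3-point graded items, but the idea is easiest to see for a dichotomous two-parameter logistic (2PL) item, where the information a²P(1-P) peaks at the item's difficulty. The parameter values below are illustrative only.

```python
import math

def p_2pl(theta, a, b):
    # two-parameter logistic item response function
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def info_2pl(theta, a, b):
    # Fisher information of a 2PL item: a^2 * P * (1 - P)
    p = p_2pl(theta, a, b)
    return a * a * p * (1.0 - p)

# An item targeting elevated attachment problems (illustrative a, b):
# information should peak near theta = b, i.e. in the 0 to +2 SD range
# where the AAR scales were found to measure well.
a, b = 1.5, 1.0
grid = [x / 10 for x in range(-30, 31)]
best_theta = max(grid, key=lambda t: info_2pl(t, a, b))
print(best_theta)
```

Selecting the five items whose information curves cover the targeted theta range is, in sketch form, how IRT-based scale shortening retains measurement precision.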
The Nomad Model: Theory, Developments and Applications
Campanella, M.; Hoogendoorn, S.P.; Daamen, W.
2014-01-01
This paper presents details of the developments of the Nomad model after being introduced more than 12 years ago. The model is derived from a normative theory of pedestrian behavior making it unique under microscopic models. Nomad has been successfully applied in several cases indicating that it ful
Integrable Models, SUSY Gauge Theories, and String Theory
Nam, S
1996-01-01
We consider the close relation between duality in N=2 SUSY gauge theories and integrable models. Various integrable models ranging from Toda lattices, Calogero models, spinning tops, and spin chains are related to the quantum moduli space of vacua of N=2 SUSY gauge theories. In particular, SU(3) gauge theories with two flavors of massless quarks in the fundamental representation can be related to the spectral curve of the Goryachev-Chaplygin top, which is a Nahm's equation in disguise. This can be generalized to the cases with massive quarks, and N_f = 0,1,2, where a system with seven dimensional phase space has the relevant hyperelliptic curve appear in the Painlevé test. To understand the stringy origin of the integrability of these theories we obtain the exact nonperturbative point particle limit of type II string compactified on a Calabi-Yau manifold, which gives the hyperelliptic curve of SU(2) QCD with N_f = 1 hypermultiplet.
Theory and modeling of active brazing.
van Swol, Frank B.; Miller, James Edward; Lechman, Jeremy B.; Givler, Richard C.
2013-09-01
Active brazes have been used for many years to produce bonds between metal and ceramic objects. By including a relatively small amount of a reactive additive in the braze one seeks to improve the wetting and spreading behavior of the braze. The additive modifies the substrate, either by a chemical surface reaction or possibly by alloying. By its nature, the joining process with active brazes is a complex nonequilibrium, non-steady-state process that couples chemical reaction and reactant and product diffusion to the rheology and wetting behavior of the braze. Most of these subprocesses take place in the interfacial region, and most are difficult to access by experiment. To improve control over the brazing process, one requires a better understanding of the melting of the active braze, the rate of the chemical reaction, reactant and product diffusion rates, and the nonequilibrium composition-dependent surface tension, as well as the viscosity. This report identifies ways in which modeling and theory can assist in improving our understanding.
Zhang, Yanchuan; Stecher, Thomas; Cvitaš, Marko T; Althorpe, Stuart C
2014-11-20
Quantum transition-state theory (QTST) and free-energy instanton theory (FEIT) are two closely related methods for estimating the quantum rate coefficient from the free-energy at the reaction barrier. In calculations on one-dimensional models, FEIT typically gives closer agreement than QTST with the exact quantum results at all temperatures below the crossover to deep tunneling, suggesting that FEIT is a better approximation than QTST in this regime. Here we show that this simple trend does not hold for systems of greater dimensionality. We report tests on several collinear and three-dimensional reactions, in which QTST outperforms FEIT over a range of temperatures below crossover, which can extend down to half the crossover temperature (below which FEIT outperforms QTST). This suggests that QTST-based methods such as ring-polymer molecular dynamics (RPMD) may often give closer agreement with the exact quantum results than FEIT.
A course on basic model theory
Sarbadhikari, Haimanti
2017-01-01
This self-contained book is an exposition of the fundamental ideas of model theory. It presents the necessary background from logic, set theory and other topics of mathematics. Only some degree of mathematical maturity and willingness to assimilate ideas from diverse areas are required. The book can be used for both teaching and self-study, ideally over two semesters. It is primarily aimed at graduate students in mathematical logic who want to specialise in model theory. However, the first two chapters constitute the first introduction to the subject and can be covered in a one-semester course for senior undergraduate students in mathematical logic. The book is also suitable for researchers who wish to use model theory in their work.
Lattice Gauge Theories and Spin Models
Mathur, Manu
2016-01-01
The Wegner $Z_2$ gauge theory-$Z_2$ Ising spin model duality in $(2+1)$ dimensions is revisited and derived through a series of canonical transformations. These $Z_2$ results are directly generalized to SU(N) lattice gauge theory in $(2+1)$ dimensions to obtain a dual SU(N) spin model in terms of the SU(N) magnetic fields and electric scalar potentials. The gauge-spin duality naturally leads to a new gauge invariant disorder operator for SU(N) lattice gauge theory. A variational ground state of the dual SU(2) spin model with only nearest neighbour interactions is constructed to analyze SU(2) lattice gauge theory.
Gauge theories and integrable lattice models
Witten, Edward
1989-08-01
Investigations of new knot polynomials discovered in the last few years have shown them to be intimately connected with soluble models of two dimensional lattice statistical mechanics. In this paper, these results, which in time may illuminate the whole question of why integrable lattice models exist, are reconsidered from the point of view of three dimensional gauge theory. Expectation values of Wilson lines in three dimensional Chern-Simons gauge theories can be computed by evaluating the partition functions of certain lattice models on finite graphs obtained by projecting the Wilson lines to the plane. The models in question — previously considered in both the knot theory and statistical mechanics — are IRF models in which the local Boltzmann weights are the matrix elements of braiding matrices in rational conformal field theories. These matrix elements, in turn, can be presented in three dimensional gauge theory in terms of the expectation value of a certain tetrahedral configuration of Wilson lines. This representation makes manifest a surprising symmetry of the braiding matrix elements in conformal field theory.
Modeling Techniques: Theory and Practice
Odd A. Asbjørnsen
1985-07-01
A survey is given of some crucial concepts in chemical process modeling. Those are the concepts of physical unit invariance, of reaction invariance and stoichiometry, the chromatographic effect in heterogeneous systems, the conservation and balance principles and the fundamental structures of cause and effect relationships. As an example, it is shown how the concept of reaction invariance may simplify the homogeneous reactor modeling to a large extent by an orthogonal decomposition of the process variables. This allows residence time distribution function parameters to be estimated with the reaction in situ, but without any correlation between the estimated residence time distribution parameters and the estimated reaction kinetic parameters. A general word of warning is given regarding the choice of a wrong mathematical model structure.
RATING CREATION FOR PROFESSIONAL EDUCATIONAL ORGANIZATIONS BASED ON THE ITEM RESPONSE THEORY
N. E. Erganova
2016-01-01
The aim of the investigation is to theoretically justify and describe the approbation of measuring the level of provision of educational services, the quality of education, and the rating of vocational educational organizations. Methods. The methodology of the research is based on the provisions of the system approach; research on the schematization and modeling of pedagogical objects; and the theory of measurement of latent variables. The main research methods applied are analysis, synthesis, comparative analysis, and statistical processing of research results. Results. The paper gives a short comparative analysis of the potential of the qualitative approach and the strong points of the theory of latent variables in evaluating the quality of education and the ratings of the investigated object. The technique of measuring the level of educational services provided when creating a rating of professional educational organizations is stated. Scientific novelty. The pedagogical possibilities of the theory of measurement of latent variables are investigated, and the principles of creating ratings of professional educational organizations are designated. Practical significance. The operational construct of the latent variable «quality of education» for secondary professional education (SPE), approved in the Perm Territory, is developed; it can form the basis of similar constructs for creating ratings of professional educational organizations in other regions.
Using SAS PROC MCMC for Item Response Theory Models
Ames, Allison J.; Samonte, Kelli
2015-01-01
Interest in using Bayesian methods for estimating item response theory models has grown at a remarkable rate in recent years. This attentiveness to Bayesian estimation has also inspired a growth in available software such as WinBUGS, R packages, BMIRT, MPLUS, and SAS PROC MCMC. This article intends to provide an accessible overview of Bayesian…
Graphical Model Theory for Wireless Sensor Networks
Davis, William B.
2002-12-08
Information processing in sensor networks, with many small processors, demands a theory of computation that allows the minimization of processing effort, and the distribution of this effort throughout the network. Graphical model theory provides a probabilistic theory of computation that explicitly addresses complexity and decentralization for optimizing network computation. The junction tree algorithm, for decentralized inference on graphical probability models, can be instantiated in a variety of applications useful for wireless sensor networks, including: sensor validation and fusion; data compression and channel coding; expert systems, with decentralized data structures, and efficient local queries; pattern classification, and machine learning. Graphical models for these applications are sketched, and a model of dynamic sensor validation and fusion is presented in more depth, to illustrate the junction tree algorithm.
F-theory and linear sigma models
Bershadsky, M; Greene, Brian R; Johansen, A; Lazaroiu, C I
1998-01-01
We present an explicit method for translating between the linear sigma model and the spectral cover description of SU(r) stable bundles over an elliptically fibered Calabi-Yau manifold. We use this to investigate the 4-dimensional duality between (0,2) heterotic and F-theory compactifications. We indirectly find that much interesting heterotic information must be contained in the `spectral bundle' and in its dual description as a gauge theory on multiple F-theory 7-branes. A by-product of these efforts is a method for analyzing semistability and the splitting type of vector bundles over an elliptic curve given as the sheaf cohomology of a monad.
Spreading Models in Banach Space Theory
Argyros, S A; Tyros, K
2010-01-01
We extend the classical Brunel-Sucheston definition of the spreading model by introducing the $\mathcal{F}$-sequences $(x_s)_{s\in\mathcal{F}}$ in a Banach space and the plegma families in $\mathcal{F}$, where $\mathcal{F}$ is a regular thin family. The new concept yields a transfinite increasing hierarchy of classes of 1-subsymmetric sequences. We explore the corresponding theory and we present examples establishing this hierarchy and illustrating the limitation of the theory.
Integrable Lattice Models From Gauge Theory
Witten, Edward
2016-01-01
These notes provide an introduction to recent work by Kevin Costello in which integrable lattice models of classical statistical mechanics in two dimensions are understood in terms of quantum gauge theory in four dimensions. This construction will be compared to the more familiar relationship between quantum knot invariants in three dimensions and Chern-Simons gauge theory. (Based on a Whittaker Colloquium at the University of Edinburgh and a lecture at Strings 2016 in Beijing.)
Joint Machine Learning and Game Theory for Rate Control in High Efficiency Video Coding.
Gao, Wei; Kwong, Sam; Jia, Yuheng
2017-08-25
In this paper, a joint machine learning and game theory modeling (MLGT) framework is proposed for inter-frame coding tree unit (CTU) level bit allocation and rate control (RC) optimization in High Efficiency Video Coding (HEVC). First, a support vector machine (SVM) based multi-classification scheme is proposed to improve the prediction accuracy of the CTU-level rate-distortion (R-D) model; the legacy "chicken-and-egg" dilemma in video coding is overcome by this learning-based R-D model. Second, a cooperative bargaining game based on the mixed R-D model is proposed for bit allocation optimization, where the convexity of the mixed R-D model based utility function is proved, and the Nash bargaining solution (NBS) is achieved by the proposed iterative solution search method. The minimum utility is adjusted by the reference coding distortion and the frame-level quantization parameter (QP) change. Lastly, the intra-frame QP and the inter-frame adaptive bit ratios are adjusted to give inter frames more bit resources, maintaining smooth quality and bit consumption in the bargaining game optimization. Experimental results demonstrate that the proposed MLGT-based RC method can achieve much better R-D performance, quality smoothness, bit rate accuracy, buffer control results and subjective visual quality than the other state-of-the-art one-pass RC methods, and the achieved R-D performances are very close to the performance limits from the FixedQP method.
Security Theorems via Model Theory
Joshua Guttman
2009-11-01
A model-theoretic approach can establish security theorems for cryptographic protocols. Formulas expressing authentication and non-disclosure properties of protocols have a special form: they are quantified implications, for all xs. (phi implies for some ys. psi). Models (interpretations) for these formulas are *skeletons*, partially ordered structures consisting of a number of local protocol behaviors. *Realized* skeletons contain enough local sessions to explain all the behavior, when combined with some possible adversary behaviors. We show two results. (1) If phi is the antecedent of a security goal, then there is a skeleton A_phi such that, for every skeleton B, phi is satisfied in B iff there is a homomorphism from A_phi to B. (2) A protocol enforces for all xs. (phi implies for some ys. psi) iff every realized homomorphic image of A_phi satisfies psi. Hence, to verify a security goal, one can use the Cryptographic Protocol Shapes Analyzer CPSA (TACAS, 2007) to identify minimal realized skeletons, or "shapes," that are homomorphic images of A_phi. If psi holds in each of these shapes, then the goal holds.
Vacation queueing models theory and applications
Tian, Naishuo
2006-01-01
A classical queueing model consists of three parts - arrival process, service process, and queue discipline. However, a vacation queueing model has an additional part - the vacation process which is governed by a vacation policy - that can be characterized by three aspects: 1) vacation start-up rule; 2) vacation termination rule, and 3) vacation duration distribution. Hence, vacation queueing models are an extension of classical queueing theory. Vacation Queueing Models: Theory and Applications discusses systematically and in detail the many variations of vacation policy. By allowing servers to take vacations makes the queueing models more realistic and flexible in studying real-world waiting line systems. Integrated in the book's discussion are a variety of typical vacation model applications that include call centers with multi-task employees, customized manufacturing, telecommunication networks, maintenance activities, etc. Finally, contents are presented in a "theorem and proof" format and it is invaluabl...
Some Remarks on the Model Theory of Epistemic Plausibility Models
Demey, Lorenz
2010-01-01
Classical logics of knowledge and belief are usually interpreted on Kripke models, for which a mathematically well-developed model theory is available. However, such models are inadequate to capture dynamic phenomena. Therefore, epistemic plausibility models have been introduced. Because these are much richer structures than Kripke models, they do not straightforwardly inherit the model-theoretical results of modal logic. Therefore, while epistemic plausibility structures are well-suited for modeling purposes, an extensive investigation of their model theory has been lacking so far. The aim of the present paper is to fill exactly this gap, by initiating a systematic exploration of the model theory of epistemic plausibility models. Like in 'ordinary' modal logic, the focus will be on the notion of bisimulation. We define various notions of bisimulations (parametrized by a language L) and show that L-bisimilarity implies L-equivalence. We prove a Hennessy-Milner type result, and also two undefinability results. ...
Modeling inflation rates and exchange rates in Ghana: application of multivariate GARCH models.
Nortey, Ezekiel Nn; Ngoh, Delali D; Doku-Amponsah, Kwabena; Ofori-Boateng, Kenneth
2015-01-01
This paper was aimed at investigating the volatility and conditional relationships among inflation rates, exchange rates and interest rates, and at constructing a model using multivariate GARCH DCC and BEKK models with Ghanaian data from January 1990 to December 2013. The study revealed that the cumulative depreciation of the cedi against the US dollar from 1990 to 2013 is 7,010.2% and the yearly weighted depreciation of the cedi against the US dollar for the period is 20.4%. There was evidence that the stability of inflation rates does not mean that exchange rates and interest rates are expected to be stable. Rather, when the cedi performs well on the forex market, inflation rates and interest rates react positively and become stable in the long run. The BEKK model is robust for modelling and forecasting the volatility of inflation rates, exchange rates and interest rates. The DCC model is robust for modelling the conditional and unconditional correlations among inflation rates, exchange rates and interest rates. The BEKK model, which forecasted high exchange rate volatility for the year 2014, is very robust for modelling the exchange rates in Ghana. The mean equation of the DCC model is also robust for forecasting inflation rates in Ghana.
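The notion of a time-varying conditional correlation that DCC models estimate can be illustrated with a much simpler exponentially weighted (EWMA) proxy. The return series below are synthetic draws with a built-in correlation of about 0.7, not Ghanaian macro data, and the decay factor is the common RiskMetrics-style choice, stated here as an assumption.

```python
# EWMA proxy for a time-varying conditional correlation, in the spirit
# of (but far simpler than) a DCC model. Synthetic data, illustrative.
import random

random.seed(0)
lam = 0.94          # EWMA decay factor (assumed, RiskMetrics-style)
rho_true = 0.7      # built-in correlation of the synthetic series

x, y = [], []
for _ in range(500):
    z1, z2 = random.gauss(0, 1), random.gauss(0, 1)
    x.append(z1)
    y.append(rho_true * z1 + (1 - rho_true**2) ** 0.5 * z2)

vx = vy = cxy = 1e-6   # small positive initialization
corr = []
for xi, yi in zip(x, y):
    vx = lam * vx + (1 - lam) * xi * xi       # conditional variances
    vy = lam * vy + (1 - lam) * yi * yi
    cxy = lam * cxy + (1 - lam) * xi * yi     # conditional covariance
    corr.append(cxy / (vx * vy) ** 0.5)

print(corr[-1])
```

A full DCC estimator replaces the fixed decay with parameters fitted by quasi-maximum likelihood on GARCH-standardized residuals, but the output has the same shape: a correlation path through time rather than a single number.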
Transfer Rate Models for Gnutella Signaling Traffic
2006-01-01
This paper reports on transfer rate models for the Gnutella signaling protocol. New results on message-level and IP-level rates are presented. The models are based on traffic captured at the Blekinge Institute of Technology (BTH) campus in Sweden and offer several levels of granularity: message type, application layer and network layer. The aim is to obtain parsimonious models suitable for analysis and simulation of P2P workload.
Micromechanical modeling of rate-dependent behavior of Connective tissues.
Fallah, A; Ahmadian, M T; Firozbakhsh, K; Aghdam, M M
2017-03-07
In this paper, a constitutive and micromechanical model for the prediction of rate-dependent behavior of connective tissues (CTs) is presented. Connective tissues are considered as nonlinear viscoelastic materials. The rate-dependent behavior of CTs is incorporated into the model using the well-known quasi-linear viscoelasticity (QLV) theory. A planar wavy representative volume element (RVE) is considered based on histological evidence of the tissue microstructure. The presented model parameters are identified based on the available experiments in the literature. The presented constitutive model is introduced into ABAQUS by means of a UMAT subroutine. Results show that monotonic uniaxial test predictions of the presented model at different strain rates for rat tail tendon (RTT) and human patellar tendon (HPT) are in good agreement with experimental data. Results of an incremental stress-relaxation test are also presented to investigate both the instantaneous and viscoelastic behavior of connective tissues.
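The QLV framework expresses stress as a hereditary integral, sigma(t) = ∫ G(t-s) d(sigma_e(eps(s))), with a reduced relaxation function G and an instantaneous elastic law sigma_e. A ramp-and-hold discretization of this integral reproduces the stress-relaxation behavior mentioned above. The one-term exponential G and the exponential elastic law below are common textbook choices with made-up parameter values, not the fitted tendon constants.

```python
import math

# Quasi-linear viscoelasticity (QLV) sketch with illustrative parameters.
g_inf, tau = 0.5, 1.0
def G(t):                       # reduced relaxation function
    return g_inf + (1 - g_inf) * math.exp(-t / tau)

A, B = 1.0, 10.0
def sigma_e(eps):               # instantaneous (nonlinear) elastic law
    return A * (math.exp(B * eps) - 1.0)

dt, T = 0.01, 5.0
times = [i * dt for i in range(int(T / dt) + 1)]
def eps(t):                     # ramp to 5% strain over 0.5 s, then hold
    return 0.05 * min(t / 0.5, 1.0)

# discrete hereditary (convolution) integral
stress = []
for t in times:
    s, prev = 0.0, sigma_e(eps(0.0))
    for u in times:
        if u > t:
            break
        cur = sigma_e(eps(u))
        s += G(t - u) * (cur - prev)   # G-weighted elastic increment
        prev = cur
    stress.append(s)

peak, final = max(stress), stress[-1]
print(peak, final)
```

The stress peaks at the end of the ramp and then relaxes toward g_inf times the elastic stress, which is the incremental stress-relaxation response the abstract refers to.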
Overnight Index Rate: Model, calibration and simulation
Olga Yashkir; Yuri Yashkir
2014-01-01
In this study, the extended Overnight Index Rate (OIR) model is presented. The fitting function for the probability distribution of the OIR daily returns is based on three different Gaussian distributions which provide modelling of the narrow central peak and the wide fat-tailed component. The calibration algorithm for the model is developed and investigated using the historical OIR data.
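The fitting idea described above can be sketched as a three-component Gaussian mixture: one narrow component for the central peak of the daily-return distribution and wider components for the fat tails. All parameter values below are hypothetical placeholders, not the calibrated values from the study.

```python
import numpy as np

# Sketch of the three-Gaussian fitting function for OIR daily returns: a
# narrow central-peak component plus wider fat-tail components. Weights,
# means and widths here are invented for illustration, not calibrated.

def gaussian(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

def oir_return_density(x, weights, mus, sigmas):
    """Weighted sum of three Gaussian densities; weights must sum to one."""
    return sum(w * gaussian(x, m, s) for w, m, s in zip(weights, mus, sigmas))

x = np.linspace(-2.0, 2.0, 4001)
pdf = oir_return_density(x, weights=[0.7, 0.2, 0.1],
                         mus=[0.0, 0.0, 0.0],
                         sigmas=[0.02, 0.10, 0.40])
mass = pdf.sum() * (x[1] - x[0])  # numerical check: total probability should be ~1
print(mass)
```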
Overnight Index Rate: Model, calibration and simulation
Olga Yashkir
2014-12-01
In this study, the extended Overnight Index Rate (OIR) model is presented. The fitting function for the probability distribution of the OIR daily returns is based on three different Gaussian distributions which provide modelling of the narrow central peak and the wide fat-tailed component. The calibration algorithm for the model is developed and investigated using the historical OIR data.
Overnight Index Rate: Model, Calibration, and Simulation
Olga Yashkir; Yuri Yashkir
2013-01-01
In this study, the extended Overnight Index Rate (OIR) model is presented. The fitting function for the probability distribution of the OIR daily returns is based on three different Gaussian distributions which provide modelling of the narrow central peak and the wide fat-tailed component. The calibration algorithm for the model is developed and investigated using the historical OIR data.
Supersymmetric Microscopic Theory of the Standard Model
Ter-Kazarian, G T
2000-01-01
We promote the microscopic theory of the standard model (MSM, hep-ph/0007077) into a supersymmetric framework in order to resolve its technical issues of vacuum zero-point energy and the hierarchy problem, and further attempt to develop a realistic, viable minimal SUSY extension. Beyond what the MSM already offers - a natural unification of geometry and field theory, a clarification of the physical conditions in which geometry and particles come into being, and, in a microscopic sense, insight into key problems of particle phenomenology and answers to some of its nagging questions - the present approach also leads to quite a new realization of SUSY, yielding a physically realistic particle spectrum. It stems from a special subquark algebra, from which the nilpotent supercharge operators are derived. The resulting theory makes plausible the following testable implications for current experiments at LEP2, at the Tevatron and at the LHC, drastically different from those of conventional MSSM models: 1. All t...
Adsorption Rate Models for Multicomponent Adsorption Systems
姚春才
2004-01-01
Three adsorption rate models are derived for multicomponent adsorption systems under either pore diffusion or surface diffusion control. The linear driving force (LDF) model is obtained by assuming a parabolic intraparticle concentration profile. Models I and II are obtained from the parabolic concentration layer approximation. Examples are presented to demonstrate the usage and accuracy of these models. It is shown that Model I is suitable for batch adsorption calculations and Model II provides a good approximation in fixed-bed adsorption processes, while the LDF model should not be used in batch adsorption and may be considered acceptable in fixed-bed adsorption where the parameter Ti is relatively large.
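The LDF approximation mentioned above can be sketched in a few lines: the average adsorbed-phase concentration q relaxes toward its equilibrium value q* at a rate proportional to the remaining driving force, dq/dt = k_LDF (q* - q). The parameter values below are illustrative, not taken from the paper.

```python
# Minimal sketch of the linear driving force (LDF) approximation:
# dq/dt = k_ldf * (q_star - q), i.e. uptake relaxes toward equilibrium at a
# rate proportional to the remaining driving force. Values are illustrative.

def ldf_uptake(q_star: float, k_ldf: float, t_end: float, dt: float = 1e-3) -> float:
    """Explicit-Euler integration of dq/dt = k_ldf * (q_star - q), q(0) = 0."""
    q, t = 0.0, 0.0
    while t < t_end:
        q += dt * k_ldf * (q_star - q)
        t += dt
    return q

# After many time constants the uptake approaches the equilibrium loading.
print(ldf_uptake(q_star=1.0, k_ldf=0.5, t_end=20.0))
```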
Application of fuzzy theory on earthquake damage rate estimation of buildings
邵扬威; 吴玉祥; 高士峰; 黄麒然; 张宽勇
2014-01-01
Variations between earthquakes give rise to many factors that influence post-earthquake building damage (e.g., ground motion parameters, building structure, site information, and quality of construction). Consequently, it is necessary to develop an appropriate building damage-rate estimation model. The building damage survey data recorded and compiled by the Architecture and Building Research Institute (ABRI), Taiwan for the 1999 Chi-Chi earthquake in the Nantou region served as a basis for developing a building damage rate estimation model by applying fuzzy theory to express the fragility curves of buildings as a membership function. Empirical verification was performed using post-earthquake building damage data from Taichung City, which suffered relatively severe damage. Results indicate that fuzzy theory can be applied to predict building damage rates and that the estimated results are similar to actual disaster figures. Prediction of disaster damage using building damage rates can provide a reference for immediate disaster response during earthquakes and for regular disaster prevention and rescue planning.
Engaging Theories and Models to Inform Practice
Kraus, Amanda
2012-01-01
Helping students prepare for the complex transition to life after graduation is an important responsibility shared by those in student affairs and others in higher education. This chapter explores theories and models that can inform student affairs practitioners and faculty in preparing students for life after college. The focus is on roles,…
Recursive renormalization group theory based subgrid modeling
Zhou, YE
1991-01-01
Advancing the knowledge and understanding of turbulence theory is addressed. Specific problems include studies of subgrid models to understand the effects of unresolved small-scale dynamics on the large-scale motion, which, if successful, might substantially reduce the number of degrees of freedom that need to be computed in turbulence simulations.
Aligning Grammatical Theories and Language Processing Models
Lewis, Shevaun; Phillips, Colin
2015-01-01
We address two important questions about the relationship between theoretical linguistics and psycholinguistics. First, do grammatical theories and language processing models describe separate cognitive systems, or are they accounts of different aspects of the same system? We argue that most evidence is consistent with the one-system view. Second,…
Generalized Rate Theory for Void and Bubble Swelling and its Application to Plutonium Metal Alloys
Allen, P. G. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Wolfer, W. G. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)
2015-10-16
In the classical rate theory for void swelling, vacancies and self-interstitials are produced by radiation in equal numbers; in addition, thermal vacancies are generated at the sinks, primarily at edge dislocations, voids, and grain boundaries. In contrast, due to the high formation energy of self-interstitials in normal metals and alloys, their thermal generation is negligible, as pointed out by Bullough and Perrin [1]. However, recent DFT calculations of the formation energy of self-interstitial atoms in bcc metals [2,3] have revealed that the sum of the formation and migration energies for self-interstitial atoms (SIA) is of the same order of magnitude as for vacancies. This is illustrated in Fig. 1, which shows the ratio of the activation energies for thermal generation of SIA and vacancies. For fcc metals this ratio is around three, but for bcc metals it is around 1.5. Reviewing theoretical predictions of point defect properties in δ-Pu [4], this ratio could possibly be less than one. As a result, thermal generation of SIA in bcc metals and in plutonium must be taken into consideration when modeling the growth of voids and of helium bubbles, and the classical rate theory (CRT) for void and bubble swelling must be extended to a generalized rate theory (GRT).
Term structure modeling and asymptotic long rate
Yao, Y.
1999-01-01
This paper examines the dynamics of the asymptotic long rate in three classes of term structure models. It shows that, in a frictionless and arbitrage-free market, the asymptotic long rate is a non-decreasing process. This gives an alternative proof of the same result of Dybvig et al. (Dybvig, P.H.,
Methods of modelling relative growth rate
Arne Pommerening; Anders Muszta
2015-01-01
Background: Analysing and modelling plant growth is an important interdisciplinary field of plant science. The use of relative growth rates, involving the analysis of plant growth relative to plant size, has more or less independently emerged in different research groups and at different times and has provided powerful tools for assessing the growth performance and growth efficiency of plants and plant populations. In this paper, we explore how these isolated methods can be combined to form a consistent methodology for modelling relative growth rates. Methods: We review and combine existing methods of analysing and modelling relative growth rates and apply a combination of methods to Sitka spruce (Picea sitchensis (Bong.) Carr.) stem-analysis data from North Wales (UK) and British Douglas fir (Pseudotsuga menziesii (Mirb.) Franco) yield table data. Results: The results indicate that, by combining the approaches of different plant-growth analysis laboratories and using them simultaneously, we can advance and standardise the concept of relative plant growth. In particular, the growth multiplier plays an important role in modelling relative growth rates. Another useful technique has been the recent introduction of size-standardised relative growth rates. Conclusions: Modelling relative growth rates mainly serves two purposes: 1) an improved analysis of growth performance and efficiency and 2) the prediction of future or past growth rates. This makes the concept of relative growth ideally suited to growth reconstruction as required in dendrochronology, climate change and forest decline research and for interdisciplinary research projects beyond the realm of plant science.
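The classical relative growth rate (RGR) and the growth multiplier mentioned in the abstract can be sketched as follows: the mean RGR between two measurements is the difference of log sizes over elapsed time, and the multiplier relates successive sizes multiplicatively. The stem volumes below are hypothetical, not taken from the Sitka spruce or Douglas fir data.

```python
import math

# Classical mean relative growth rate between two measurements:
# RGR = (ln W2 - ln W1) / (t2 - t1); the growth multiplier m satisfies
# W2 = m * W1, so m = exp(RGR * (t2 - t1)). Volumes below are hypothetical.

def mean_rgr(w1: float, w2: float, t1: float, t2: float) -> float:
    return (math.log(w2) - math.log(w1)) / (t2 - t1)

def growth_multiplier(w1: float, w2: float) -> float:
    return w2 / w1

# Hypothetical stem volumes (m^3) at ages 20 and 25 years:
r = mean_rgr(0.12, 0.20, 20.0, 25.0)
m = growth_multiplier(0.12, 0.20)
print(r, m)  # note that m equals exp(r * 5) by construction
```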
Methods of modelling relative growth rate
Arne Pommerening
2015-03-01
Background: Analysing and modelling plant growth is an important interdisciplinary field of plant science. The use of relative growth rates, involving the analysis of plant growth relative to plant size, has more or less independently emerged in different research groups and at different times and has provided powerful tools for assessing the growth performance and growth efficiency of plants and plant populations. In this paper, we explore how these isolated methods can be combined to form a consistent methodology for modelling relative growth rates. Methods: We review and combine existing methods of analysing and modelling relative growth rates and apply a combination of methods to Sitka spruce (Picea sitchensis (Bong.) Carr.) stem-analysis data from North Wales (UK) and British Douglas fir (Pseudotsuga menziesii (Mirb.) Franco) yield table data. Results: The results indicate that, by combining the approaches of different plant-growth analysis laboratories and using them simultaneously, we can advance and standardise the concept of relative plant growth. In particular, the growth multiplier plays an important role in modelling relative growth rates. Another useful technique has been the recent introduction of size-standardised relative growth rates. Conclusions: Modelling relative growth rates mainly serves two purposes: 1) an improved analysis of growth performance and efficiency and 2) the prediction of future or past growth rates. This makes the concept of relative growth ideally suited to growth reconstruction as required in dendrochronology, climate change and forest decline research and for interdisciplinary research projects beyond the realm of plant science.
Consistency problems for Heath-Jarrow-Morton interest rate models
Filipović, Damir
2001-01-01
The book is written for a reader with knowledge in mathematical finance (in particular interest rate theory) and elementary stochastic analysis, such as provided by Revuz and Yor (Continuous Martingales and Brownian Motion, Springer 1991). It gives a short introduction both to interest rate theory and to stochastic equations in infinite dimension. The main topic is the Heath-Jarrow-Morton (HJM) methodology for the modelling of interest rates. Experts in SDE in infinite dimension with interest in applications will find here the rigorous derivation of the popular "Musiela equation" (referred to in the book as HJMM equation). The convenient interpretation of the classical HJM set-up (with all the no-arbitrage considerations) within the semigroup framework of Da Prato and Zabczyk (Stochastic Equations in Infinite Dimensions) is provided. One of the principal objectives of the author is the characterization of finite-dimensional invariant manifolds, an issue that turns out to be vital for applications. Finally, ge...
Vegge, Tejs
2004-01-01
The dissociation of molecular hydrogen on a Mg(0001) surface and the subsequent diffusion of atomic hydrogen into the magnesium substrate is investigated using Density Functional Theory (DFT) calculations and rate theory. The minimum energy path and corresponding transition states are located using the nudged elastic band method, and rates of the activated processes are calculated within the harmonic approximation to transition state rate theory, using both classical and quantum partition functions based on atomic vibrational frequencies calculated by DFT. The dissociation/recombination of H2 is found to be rate-limiting for the absorption and desorption of hydrogen, respectively. Zero-point energy contributions are found to be substantial for the diffusion of atomic hydrogen, but classical rates are still found to be within an order of magnitude at room temperature.
Lattice gauge theories and spin models
Mathur, Manu; Sreeraj, T. P.
2016-10-01
The Wegner Z2 gauge theory-Z2 Ising spin model duality in (2+1) dimensions is revisited and derived through a series of canonical transformations. The Kramers-Wannier duality is similarly obtained. The Wegner Z2 gauge-spin duality is directly generalized to SU(N) lattice gauge theory in (2+1) dimensions to obtain the SU(N) spin model in terms of the SU(N) magnetic fields and their conjugate SU(N) electric scalar potentials. The exact and complete solutions of the Z2, U(1), SU(N) Gauss law constraints in terms of the corresponding spin or dual potential operators are given. The gauge-spin duality naturally leads to a new gauge invariant magnetic disorder operator for SU(N) lattice gauge theory which produces a magnetic vortex on the plaquette. A variational ground state of the SU(2) spin model with nearest neighbor interactions is constructed to analyze SU(2) gauge theory.
Black hole thermalization rate from brane anti-brane model
Lifschytz, G
2004-01-01
We develop the quasi-particle picture for Schwarzschild and far-from-extremal black holes. We show that the thermalization equations of the black hole are recovered from the model of branes and anti-branes. This can also be viewed as a field theory explanation of the relationship between area and entropy for these black holes. As a by-product, the annihilation rate of branes and anti-branes is computed.
Black hole thermalization rate from brane anti-brane model
Lifschytz, Gilad E-mail: giladl@research.haifa.ac.il
2004-08-01
We develop the quasi-particle picture for Schwarzschild and far-from-extremal black holes. We show that the thermalization equations of the black hole are recovered from the model of branes and anti-branes. This can also be viewed as a field theory explanation of the relationship between area and entropy for these black holes. As a by-product, the annihilation rate of branes and anti-branes is computed.
F-theory and linear sigma models
Bershadsky, M.; Johansen, A. [Harvard Univ., Cambridge, MA (United States). Lyman Lab. of Physics; Chiang, T.M. [Newman Laboratory of Nuclear Studies, Cornell University, Ithaca, NY 14850 (United States); Greene, B.R.; Lazaroiu, C.I. [Departments of Physics and Mathematics, Columbia University, New York, NY 10027 (United States)
1998-09-07
We present an explicit method for translating between the linear sigma model and the spectral cover description of SU(r) stable bundles over an elliptically fibered Calabi-Yau manifold. We use this to investigate the four-dimensional duality between (0,2) heterotic and F-theory compactifications. We indirectly find that much interesting heterotic information must be contained in the 'spectral bundle' and in its dual description as a gauge theory on multiple F-theory 7-branes. A by-product of these efforts is a method for analyzing semistability and the splitting type of vector bundles over an elliptic curve given as the sheaf cohomology of a monad.
Microscopic Theory of the Standard Model
Ter-Kazarian, G T
2000-01-01
The operator manifold formalism (part I) enables the unification of geometry and field theory, and yields the quantization of geometry. This is the mathematical framework for our physical outlook that geometry and fields, with the internal symmetries and all interactions, as well as the four major principles of relativity (special and general), quantum, gauge and colour confinement, are derivative, and come into being simultaneously in the stable system of the underlying "primordial structures". In part II we attempt to develop further the microscopic approach to the Standard Model of particle physics, which enables insight into the key problems of particle phenomenology. We suggest the microscopic theory of the unified electroweak interactions. The Higgs bosons arise in analogy with the Cooper pairs of superconductivity. Besides the microscopic interpretation of all physical parameters, the resulting theory also makes plausible the following testable implications for current experiments: 1...
Inference of emission rates from multiple sources using Bayesian probability theory.
Yee, Eugene; Flesch, Thomas K
2010-03-01
The determination of atmospheric emission rates from multiple sources using inversion (regularized least-squares or best-fit techniques) is known to be very susceptible to measurement and model errors in the problem, rendering the solution unusable. In this paper, a new perspective is offered for this problem: namely, it is argued that the problem should be addressed as one of inference rather than inversion. Towards this objective, Bayesian probability theory is used to estimate the emission rates from multiple sources. The posterior probability distribution for the emission rates is derived, accounting fully for the measurement errors in the concentration data and the model errors in the dispersion model used to interpret the data. The Bayesian inferential methodology for emission rate recovery is validated against real dispersion data, obtained from a field experiment involving various source-sensor geometries (scenarios) consisting of four synthetic area sources and eight concentration sensors. The recovery of discrete emission rates in three different scenarios, obtained using Bayesian inference and singular value decomposition inversion, is compared and contrasted.
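The inference viewpoint described above can be sketched for the linear-Gaussian case: with a linear source-receptor model c = A s + noise, a Gaussian likelihood, and a Gaussian prior on the source rates s, the posterior mean has a closed form. The 4-source, 8-sensor geometry mirrors the experiment, but the dispersion matrix A, the noise level, and the "true" rates below are invented for illustration.

```python
import numpy as np

# Linear-Gaussian sketch of Bayesian emission-rate inference:
# c = A @ s + noise, Gaussian prior on s => closed-form posterior mean.
# A, the noise level, and s_true are hypothetical, not the field-trial values.

rng = np.random.default_rng(0)
n_sensors, n_sources = 8, 4
A = rng.uniform(0.1, 1.0, size=(n_sensors, n_sources))  # dispersion coefficients
s_true = np.array([2.0, 0.5, 1.0, 3.0])                 # "true" emission rates
noise_sd, prior_sd = 0.001, 10.0
c = A @ s_true + rng.normal(0.0, noise_sd, size=n_sensors)

# Posterior mean: (A^T A / sd^2 + I / prior_sd^2)^-1 (A^T c / sd^2)
precision = A.T @ A / noise_sd**2 + np.eye(n_sources) / prior_sd**2
s_post = np.linalg.solve(precision, A.T @ c / noise_sd**2)
print(s_post)  # close to s_true when the measurement noise is small
```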
Crack propagation modeling using Peridynamic theory
Hafezi, M. H.; Alebrahim, R.; Kundu, T.
2016-04-01
Crack propagation and branching are modeled using nonlocal peridynamic theory. One major advantage of this nonlocal theory based analysis tool is its unifying approach to material behavior modeling, irrespective of whether a crack has formed in the material or not. No separate damage law is needed for crack initiation and propagation. This theory overcomes the weaknesses of existing continuum mechanics based numerical tools (e.g. FEM, XFEM, etc.) for identifying fracture modes and does not require any simplifying assumptions. Cracks grow autonomously and not necessarily along a prescribed path. However, in some special situations, such as ductile fracture, damage evolution and failure depend on parameters characterizing the local stress state rather than on the peridynamic damage modeling technique developed for brittle fracture. For brittle fracture modeling, the bond is simply broken when the failure criterion is satisfied. This simulation helps us to design a more reliable modeling tool for crack propagation and branching in both brittle and ductile materials. Peridynamic analysis has been found to be very demanding computationally, particularly for real-world structures (e.g. vehicles, aircraft, etc.). It also requires a very expensive visualization process. The goal of this paper is to make researchers aware of the impact of this cutting-edge simulation tool for a better understanding of the cracked material response. A computer code has been developed to implement the peridynamic theory based modeling tool for two-dimensional analysis. A good agreement between our predictions and previously published results is observed. Some interesting new results that have not been reported earlier by others are also obtained and presented in this paper. The final objective of this investigation is to increase the mechanics knowledge of self-similar and self-affine cracks.
Liechty, Derek S.; Lewis, Mark J.
2010-01-01
Recently introduced molecular-level chemistry models that predict equilibrium and nonequilibrium reaction rates using only kinetic theory and fundamental molecular properties (i.e., no macroscopic reaction rate information) are extended to include reactions involving charged particles and electronic energy levels. The proposed extensions include ionization reactions, exothermic associative ionization reactions, endothermic and exothermic charge exchange reactions, and other exchange reactions involving ionized species. The extensions are shown to agree favorably with the measured Arrhenius rates for near-equilibrium conditions.
An ETAS model with varying productivity rates
Harte, D. S.
2014-07-01
We present an epidemic type aftershock sequence (ETAS) model where the offspring rates vary both spatially and temporally. This is achieved by distinguishing between those space-time volumes where the interpoint space and time distances are small, and those where they are considerably larger. We also question the nature of the background component in the ETAS model. Is it simply a temporal boundary correction (t = 0) or does it represent an additional tectonic process not described by the aftershock component? The form of these stochastic models should not be considered to be fixed. As we accumulate larger and better earthquake catalogues, GPS data, strain rates, etc., we have the ability to ask more complex questions about the nature of the process. By fitting modified models consistent with such questions, we should gain better insight into the earthquake process. Hence, we consider a sequence of incrementally modified ETAS-type models rather than 'the' ETAS model.
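The baseline that such modified models start from is the standard temporal ETAS conditional intensity: a background rate mu plus Omori-Utsu aftershock contributions from past events. In the paper's variant the productivity varies in space-time; in this sketch the productivity K is held constant, and all parameter values are invented for illustration.

```python
import math

# Sketch of the standard temporal ETAS conditional intensity:
# lambda(t) = mu + sum_i K * exp(alpha*(m_i - m0)) / (t - t_i + c)**p.
# Here K is constant (the paper lets it vary); all values are illustrative.

def etas_intensity(t, history, mu=0.1, K=0.05, alpha=1.0, c=0.01, p=1.1, m0=3.0):
    """Conditional intensity at time t given past (time, magnitude) events."""
    rate = mu
    for ti, mi in history:
        if ti < t:
            rate += K * math.exp(alpha * (mi - m0)) / (t - ti + c) ** p
    return rate

history = [(0.0, 5.0), (1.0, 4.2)]     # (time in days, magnitude)
print(etas_intensity(2.0, history))    # elevated above the background rate mu
```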
QUANTUM THEORY FOR THE BINOMIAL MODEL IN FINANCE THEORY
CHEN Zeqian
2004-01-01
In this paper, a quantum model for the binomial market in finance is proposed. We show that its risk-neutral world exhibits an intriguing structure: a disk in the unit ball of R^3 whose radius is a function of the risk-free interest rate, with two thresholds which prevent arbitrage opportunities in this quantum market. Furthermore, from the quantum mechanical point of view, we re-derive the Cox-Ross-Rubinstein binomial option pricing formula by considering Maxwell-Boltzmann statistics of a system of N distinguishable particles.
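For reference, the Cox-Ross-Rubinstein formula that the abstract re-derives prices a European call by summing binomially weighted payoffs under the risk-neutral up-probability. This is the standard classical formula, not the paper's quantum derivation; the parameter values are illustrative.

```python
import math

# Standard Cox-Ross-Rubinstein binomial pricing of a European call:
# N steps, up factor u = exp(sigma*sqrt(dt)), risk-neutral probability p.
# This is the classical formula the abstract re-derives; values illustrative.

def crr_call(S0, K, r, sigma, T, N):
    dt = T / N
    u = math.exp(sigma * math.sqrt(dt))
    d = 1.0 / u
    p = (math.exp(r * dt) - d) / (u - d)   # risk-neutral up probability
    disc = math.exp(-r * T)
    price = 0.0
    for j in range(N + 1):
        prob = math.comb(N, j) * p**j * (1.0 - p)**(N - j)
        payoff = max(S0 * u**j * d**(N - j) - K, 0.0)
        price += prob * payoff
    return disc * price

print(crr_call(S0=100.0, K=100.0, r=0.05, sigma=0.2, T=1.0, N=200))
```

With these parameters the price converges toward the corresponding Black-Scholes value (about 10.45) as N grows.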
Gallis, Michael A; Bond, Ryan B; Torczynski, John R
2009-09-28
Recently proposed molecular-level chemistry models that predict equilibrium and nonequilibrium reaction rates using only kinetic theory and fundamental molecular properties (i.e., no macroscopic reaction-rate information) are investigated for chemical reactions occurring in upper-atmosphere hypersonic flows. The new models are in good agreement with the measured Arrhenius rates for near-equilibrium conditions and with both measured rates and other theoretical models for far-from-equilibrium conditions. Additionally, the new models are applied to representative combustion and ionization reactions and are in good agreement with available measurements and theoretical models. Thus, molecular-level chemistry modeling provides an accurate method for predicting equilibrium and nonequilibrium chemical-reaction rates in gases.
Topos models for physics and topos theory
Wolters, Sander
2014-08-01
What is the role of topos theory in the topos models for quantum theory as used by Isham, Butterfield, Döring, Heunen, Landsman, Spitters, and others? In other words, what is the interplay between physical motivation for the models and the mathematical framework used in these models? Concretely, we show that the presheaf topos model of Butterfield, Isham, and Döring resembles classical physics when viewed from the internal language of the presheaf topos, similar to the copresheaf topos model of Heunen, Landsman, and Spitters. Both the presheaf and copresheaf models provide a "quantum logic" in the form of a complete Heyting algebra. Although these algebras are natural from a topos theoretic stance, we seek a physical interpretation for the logical operations. Finally, we investigate dynamics. In particular, we describe how an automorphism on the operator algebra induces a homeomorphism (or isomorphism of locales) on the associated state spaces of the topos models, and how elementary propositions and truth values transform under the action of this homeomorphism. Also with dynamics the focus is on the internal perspective of the topos.
THE EVOLUTION OF CURRENCY RELATIONS IN THE LIGHT OF MAJOR EXCHANGE RATE ADJUSTMENT THEORIES
Sergiy TKACH
2014-07-01
This paper examines the impact of major exchange rate adjustment theories on the global monetary system. The reasons for the collapse of previous organizational forms of monetary relations at the global level are identified. The main achievements and failures of the major exchange rate theories are described.
ECONOMETRIC MODELS FOR DETERMINING THE EXCHANGE RATE
Mihaela BRATU
2012-05-01
The simple econometric models for the exchange rate, according to recent research, generate forecasts with the highest degree of accuracy. Models of this type (a Simultaneous Equations Model, an MA(1) procedure, and a model with lagged variables) are used to describe the evolution of the average exchange rate in Romania from January 1991 to March 2012 and to predict it over the short run. The best forecasts, in terms of accuracy, over the forecasting horizon April-May 2012 were those based on a Simultaneous Equations Model that takes into account Granger causality. A similarly high degree of accuracy was obtained by combining the predictions of the MA(1) model with those of the simultaneous equations model under the INV weighting scheme (the forecasts are weighted inversely to their relative mean squared forecast errors). The model with lagged variables produced the highest prediction errors. Knowing the best exchange rate forecasts matters for improving decision-making and building monetary policy.
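The INV weighting scheme mentioned above can be sketched directly: each model's forecast is weighted in proportion to the inverse of its mean squared forecast error (MSFE), so the more accurate model dominates the combination. The forecast and MSFE values below are invented for illustration.

```python
# Sketch of the INV forecast-combination scheme: weights proportional to
# 1/MSFE, normalised to sum to one. Forecast and MSFE values are invented.

def inv_weights(msfes):
    """Weights inversely proportional to mean squared forecast errors."""
    inv = [1.0 / m for m in msfes]
    total = sum(inv)
    return [v / total for v in inv]

def combine(forecasts, msfes):
    """INV-weighted combination of point forecasts."""
    return sum(w * f for w, f in zip(inv_weights(msfes), forecasts))

# Hypothetical MA(1) and simultaneous-equations forecasts of the exchange rate:
print(combine(forecasts=[4.45, 4.39], msfes=[0.08, 0.02]))
```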
Theory, Modeling and Simulation Annual Report 2000
Dixon, David A.; Garrett, Bruce C.; Straatsma, Tp; Jones, Donald R.; Studham, Ronald S.; Harrison, Robert J.; Nichols, Jeffrey A.
2001-11-01
This annual report describes the 2000 research accomplishments for the Theory, Modeling, and Simulation (TM&S) directorate, one of the six research organizations in the William R. Wiley Environmental Molecular Sciences Laboratory (EMSL) at Pacific Northwest National Laboratory (PNNL). EMSL is a U.S. Department of Energy (DOE) national scientific user facility and is the centerpiece of the DOE commitment to providing world-class experimental, theoretical, and computational capabilities for solving the nation's environmental problems.
Theory, Modeling and Simulation Annual Report 2000
Dixon, David A; Garrett, Bruce C; Straatsma, TP; Jones, Donald R; Studham, Scott; Harrison, Robert J; Nichols, Jeffrey A
2001-11-01
This annual report describes the 2000 research accomplishments for the Theory, Modeling, and Simulation (TM and S) directorate, one of the six research organizations in the William R. Wiley Environmental Molecular Sciences Laboratory (EMSL) at Pacific Northwest National Laboratory (PNNL). EMSL is a U.S. Department of Energy (DOE) national scientific user facility and is the centerpiece of the DOE commitment to providing world-class experimental, theoretical, and computational capabilities for solving the nation's environmental problems.
Dielectronic recombination rate in statistical model
Demura A.V.; Leontyev D.S.; Lisitsa V.S.; Shurigyn V.A.
2017-01-01
The dielectronic recombination rate of multielectron ions was calculated by means of the statistical approach. It is based on an idea of collective excitations of atomic electrons with the local plasma frequencies. These frequencies are expressed via the Thomas-Fermi model electron density distribution. The statistical approach provides fast computation of DR rates that are compared with the modern quantum mechanical calculations. The results are important for current studies of thermonuclear plasmas with the tungsten impurities.
Dielectronic recombination rate in statistical model
Demura A.V.
2017-01-01
The dielectronic recombination rate of multielectron ions was calculated by means of the statistical approach. It is based on an idea of collective excitations of atomic electrons with the local plasma frequencies. These frequencies are expressed via the Thomas-Fermi model electron density distribution. The statistical approach provides fast computation of DR rates that are compared with the modern quantum mechanical calculations. The results are important for current studies of thermonuclear plasmas with the tungsten impurities.
Dielectronic recombination rate in statistical model
Demura, A. V.; Leontyev, D. S.; Lisitsa, V. S.; Shurigyn, V. A.
2016-12-01
The dielectronic recombination rate of multielectron ions was calculated by means of the statistical approach. It is based on an idea of collective excitations of atomic electrons with the local plasma frequencies. These frequencies are expressed via the Thomas-Fermi model electron density distribution. The statistical approach provides fast computation of DR rates that are compared with the modern quantum mechanical calculations. The results are important for current studies of thermonuclear plasmas with the tungsten impurities.
Alster, Charlotte J.; Koyama, Akihiro; Johnson, Nels G.; Wallenstein, Matthew D.; Fischer, Joseph C.
2016-06-01
There is compelling evidence that microbial communities vary widely in their temperature sensitivity and may adapt to warming through time. To date, this sensitivity has been largely characterized using a range of models relying on versions of the Arrhenius equation, which predicts an exponential increase in reaction rate with temperature. However, there is growing evidence from laboratory and field studies that observe nonmonotonic responses of reaction rates to variation in temperature, indicating that Arrhenius is not an appropriate model for quantitatively characterizing temperature sensitivity. Recently, Hobbs et al. (2013) developed macromolecular rate theory (MMRT), which incorporates thermodynamic temperature optima as arising from heat capacity differences between isoenzymes. We applied MMRT to measurements of respiration from soils incubated at different temperatures. These soils were collected from three grassland sites across the U.S. Great Plains and reciprocally transplanted, allowing us to isolate the effects of microbial community type from edaphic factors. We found that microbial community type explained roughly 30% of the variation in the CO2 production rate from the labile C pool but that temperature and soil type were most important in explaining variation in labile and recalcitrant C pool size. For six out of the nine soil × inoculum combinations, MMRT was superior to Arrhenius. The MMRT analysis revealed that microbial communities have distinct heat capacity values and temperature sensitivities sometimes independent of soil type. These results challenge the current paradigm for modeling temperature sensitivity of soil C pools and understanding of microbial enzyme dynamics.
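The key difference between MMRT and Arrhenius can be sketched directly: a nonzero activation heat capacity dCp makes the activation enthalpy and entropy temperature dependent, which produces an interior temperature optimum that the monotonic Arrhenius model cannot. The thermodynamic parameter values below are illustrative, not fitted to the soils in the study.

```python
import math

# Sketch of macromolecular rate theory (MMRT, Hobbs et al. 2013):
# ln k(T) = ln(kB*T/h) - dH(T)/(R*T) + dS(T)/R, where a nonzero activation
# heat capacity dCp makes dH and dS temperature dependent and creates a
# temperature optimum. Parameter values here are illustrative only.

R = 8.314                # gas constant, J mol^-1 K^-1
KB_OVER_H = 2.08366e10   # Boltzmann/Planck constant ratio, K^-1 s^-1

def ln_rate_mmrt(T, dH0, dS0, dCp, T0=298.15):
    dH = dH0 + dCp * (T - T0)            # activation enthalpy at T
    dS = dS0 + dCp * math.log(T / T0)    # activation entropy at T
    return math.log(KB_OVER_H * T) - dH / (R * T) + dS / R

temps = [270.0 + 0.5 * i for i in range(201)]   # 270-370 K grid
lnk = [ln_rate_mmrt(T, dH0=5.0e4, dS0=-50.0, dCp=-2.0e3) for T in temps]
T_opt = temps[lnk.index(max(lnk))]
print(T_opt)  # interior optimum, unlike the monotonic Arrhenius prediction
```

For these parameters the analytic optimum T_opt = (dCp*T0 - dH0)/(dCp + R) lies near 324.5 K, which the grid search recovers.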
A Quantitative Theory Model of a Photobleaching Mechanism
陈同生; 曾绍群; 周炜; 骆清铭
2003-01-01
A photobleaching model comprising D-P (dye-photon interaction) and D-O (dye-oxygen oxidative reaction) photobleaching theory is proposed. The quantitative power dependences of photobleaching rates under both one- and two-photon excitation (1PE and TPE) are obtained. This photobleaching model can be used to elucidate our experimental results as well as those of others. Experimental studies of the photobleaching rates of rhodamine B under TPE in unsaturated conditions reveal that the power dependence of the photobleaching rate increases with increasing dye concentration, and that the photobleaching rate of a single molecule increases as the second power of the excitation intensity, in contrast to the high-order (> 3) nonlinear dependence of ensemble molecules.
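The power-law intensity dependence reported above can be illustrated with a short sketch; the rate constant and intensities below are hypothetical, and the order n is recovered as the slope of a log-log fit:

```python
import math

def photobleach_rate(I, k, n):
    """Power-law photobleaching rate R = k * I**n; n is the order of
    the intensity dependence (n = 2 is reported for a single molecule
    under two-photon excitation, > 3 for ensemble molecules)."""
    return k * I ** n

def fitted_order(intensities, rates):
    """Recover n as the least-squares slope of log R versus log I."""
    xs = [math.log(i) for i in intensities]
    ys = [math.log(r) for r in rates]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return num / sum((x - mx) ** 2 for x in xs)

# Hypothetical single-molecule data: doubling I quadruples R.
I = [1.0, 2.0, 4.0, 8.0]
R = [photobleach_rate(i, k=0.01, n=2.0) for i in I]
```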
Parametric Regression Models Using Reversed Hazard Rates
Asokan Mulayath Variyath
2014-01-01
Proportional hazard regression models are widely used in survival analysis to understand and exploit the relationship between survival time and covariates. For left-censored survival times, reversed hazard rate functions are more appropriate. In this paper, we develop a parametric proportional hazard rates model using an inverted Weibull distribution. The estimation and construction of confidence intervals for the parameters are discussed. We assess the performance of the proposed procedure through a large number of Monte Carlo simulations, and we illustrate the proposed method using a real case example.
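The reversed hazard rate of an inverted Weibull distribution has a simple closed form, since r(t) = f(t)/F(t) = d/dt log F(t). A minimal sketch follows; the parametrization F(t) = exp(-λ t^(-β)) is one common convention, and the paper's exact parametrization may differ:

```python
import math

def inv_weibull_cdf(t, lam, beta):
    """Inverse (inverted) Weibull CDF, F(t) = exp(-lam * t**(-beta)).
    (One common parametrization; the paper's may differ.)"""
    return math.exp(-lam * t ** (-beta))

def reversed_hazard(t, lam, beta):
    """Reversed hazard rate r(t) = f(t)/F(t) = d/dt log F(t); for the
    inverse Weibull this reduces to lam * beta * t**(-beta - 1)."""
    return lam * beta * t ** (-beta - 1)
```

Unlike the ordinary hazard f/(1-F), the reversed hazard conditions on the event having occurred by time t, which is why it is the natural object for left-censored data.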
An Application of Durkheim's Theory of Suicide to Prison Suicide Rates in the United States
Tartaro, Christine; Lester, David
2005-01-01
E. Durkheim (1897) suggested that the societal rate of suicide might be explained by societal factors, such as marriage, divorce, and birth rates. The current study examined male prison suicide rates and suicide rates for men in the total population in the United States and found that variables based on Durkheim's theory of suicide explained…
Utilizing Cognitive Dissonance Theory To Improve Student Ratings of College Faculty Members.
Carson, Rebecca Davis; Smith, Albert B.; Olivarez, Arturo, Jr.
This study examined the impact of mid-semester student ratings feedback on faculty members' end-of-semester student ratings. The positive direction of the end-of-semester ratings in the two mid-semester feedback groups lent support to the premise that cognitive dissonance theory and various forms of mid-semester student rating feedback can be used to…
Sparse modeling theory, algorithms, and applications
Rish, Irina
2014-01-01
"A comprehensive, clear, and well-articulated book on sparse modeling. This book will stand as a prime reference to the research community for many years to come." -Ricardo Vilalta, Department of Computer Science, University of Houston. "This book provides a modern introduction to sparse methods for machine learning and signal processing, with a comprehensive treatment of both theory and algorithms. Sparse Modeling is an ideal book for a first-year graduate course." -Francis Bach, INRIA - École Normale Supérieure, Paris
Theory, modeling and simulation: Annual report 1993
Dunning, T.H. Jr.; Garrett, B.C.
1994-07-01
Developing the knowledge base needed to address the environmental restoration issues of the US Department of Energy requires a fundamental understanding of molecules and their interactions in isolation and in liquids, on surfaces, and at interfaces. To meet these needs, PNL has established the Environmental and Molecular Sciences Laboratory (EMSL) and will soon begin construction of a new, collaborative research facility devoted to advancing the understanding of environmental molecular science. Research in the Theory, Modeling, and Simulation (TMS) program, one of seven research directorates in the EMSL, will play a critical role in understanding molecular processes important in restoring DOE's research, development, and production sites, including understanding the migration and reactions of contaminants in soils and groundwater, the development of separation processes for the isolation of pollutants, the development of improved materials for waste storage, understanding the enzymatic reactions involved in the biodegradation of contaminants, and understanding the interaction of hazardous chemicals with living organisms. The research objectives of the TMS program are to apply available techniques to study fundamental molecular processes involved in natural and contaminated systems; to extend current techniques to treat molecular systems of future importance and to develop techniques for addressing problems that are computationally intractable at present; to apply molecular modeling techniques to simulate molecular processes occurring in the multispecies, multiphase systems characteristic of natural and polluted environments; and to extend current molecular modeling techniques to treat complex molecular systems and to improve the reliability and accuracy of such simulations. The program contains three research activities: Molecular Theory/Modeling, Solid State Theory, and Biomolecular Modeling/Simulation. Extended abstracts are presented for 89 studies.
Theory and experiment on the cuprous-cupric electron transfer rate at a copper electrode.
Halley, J. W.; Smith, B. B.; Walbran, S.; Curtiss, L. A.; Rigney, R. O.; Sutjianto, A.; Hung, N. C.; Yonco, R. M.; Nagy, Z.; Univ. of Minnesota; NREL
1999-04-01
We describe results of experiment and theory of the cuprous-cupric electron transfer rate in an aqueous solution at a copper electrode. The methods are similar to those we reported earlier for the ferrous-ferric rate. The comparison strongly suggests that, in marked distinction to the ferrous-ferric case, the electron transfer reaction is adiabatic. The model shows that the activation barrier is dominated by the energy required for the ion to approach the electrode, rather than by the energy required for rearrangement of the solvation shell, also in sharp distinction to the case of the ferric-ferrous electron transfer at a gold electrode. Calculated activation barriers based on this image agree with the experimental results reported here.
Theory and experiment on the cuprous–cupric electron transfer rate at a copper electrode
Halley, J.W. [School of Physics and Astronomy, University of Minnesota, Minneapolis, Minnesota 55455 (United States); Smith, B.B. [National Renewable Energy Laboratory, Golden, Colorado (United States); Walbran, S. [School of Physics and Astronomy, University of Minnesota, Minneapolis, Minnesota 55455 (United States); Curtiss, L.A.; Rigney, R.O.; Sutjianto, A.; Hung, N.C.; Yonco, R.M.; Nagy, Z. [Argonne National Laboratory, Divisions of Materials Science, Chemistry and Chemical Technology, Argonne, Illinois 60439-4837 (United States)
1999-04-01
We describe results of experiment and theory of the cuprous–cupric electron transfer rate in an aqueous solution at a copper electrode. The methods are similar to those we reported earlier for the ferrous–ferric rate. The comparison strongly suggests that, in marked distinction to the ferrous–ferric case, the electron transfer reaction is adiabatic. The model shows that the activation barrier is dominated by the energy required for the ion to approach the electrode, rather than by the energy required for rearrangement of the solvation shell, also in sharp distinction to the case of the ferric–ferrous electron transfer at a gold electrode. Calculated activation barriers based on this image agree with the experimental results reported here. © 1999 American Institute of Physics.
An Optimization Model Based on Game Theory
Yang Shi
2014-04-01
Game theory has a wide range of applications in economics, but it is seldom used in computer science, especially in optimization algorithms. In this paper, we integrate game-theoretic thinking into optimization and propose a new optimization model that can be widely applied in optimization processing. The model is divided into two types, called "complete consistency" and "partial consistency"; the partial-consistency type adds a disturbance strategy on top of the complete-consistency type. When the model's consistency is satisfied, the Nash equilibrium of the optimization model is globally optimal; when consistency is not met, the presence of the perturbation strategy can broaden the applicability of the algorithm. Basic experiments suggest that this optimization model has broad applicability and good performance, and it offers a new approach to some intractable problems in the field of artificial intelligence.
Bridging Economic Theory Models and the Cointegrated Vector Autoregressive Model
Møller, Niels Framroze
2008-01-01
Examples of simple economic theory models are analyzed as restrictions on the Cointegrated VAR (CVAR). This establishes a correspondence between basic economic concepts and the econometric concepts of the CVAR: the economic relations correspond to cointegrating vectors and exogeneity… Parameters of the CVAR are shown to be interpretable in terms of expectations formation, market clearing, nominal rigidities, etc. The general-partial equilibrium distinction is also discussed.
Black hole accretion versus star formation rate: theory confronts observations
Volonteri, Marta; Netzer, Hagai; Bellovary, Jillian; Dotti, Massimo; Governato, Fabio
2015-01-01
We use a suite of hydrodynamical simulations of galaxy mergers to compare star formation rate (SFR) and black hole accretion rate (BHAR) for galaxies before the interaction (the 'stochastic' phase), during the 'merger' proper, lasting ~0.2-0.3 Gyr, and in the 'remnant' phase. We calculate the bi-variate distribution of SFR and BHAR and define the regions in the SFR-BHAR plane that the three phases occupy. No strong correlation between BHAR and galaxy-wide SFR is found. A possible exception is galaxies with the highest SFR and the highest BHAR. We also bin the data in the same way used in several observational studies, by either measuring the mean SFR for AGN in different luminosity bins, or the mean BHAR for galaxies in bins of SFR. We find that the apparent contradiction of SFR versus BHAR for observed samples of AGN and star-forming galaxies is actually caused by binning effects. The two types of samples use different projections of the full bi-variate distribution, and the full information would lead to unamb...
Integrating dynamic energy budget (DEB) theory with traditional bioenergetic models.
Nisbet, Roger M; Jusup, Marko; Klanjscek, Tin; Pecquerie, Laure
2012-03-15
Dynamic energy budget (DEB) theory offers a systematic, though abstract, way to describe how an organism acquires and uses energy and essential elements for physiological processes, in addition to how physiological performance is influenced by environmental variables such as food density and temperature. A 'standard' DEB model describes the performance (growth, development, reproduction, respiration, etc.) of all life stages of an animal (embryo to adult), and predicts both intraspecific and interspecific variation in physiological rates. This approach contrasts with a long tradition of more phenomenological and parameter-rich bioenergetic models that are used to make predictions from species-specific rate measurements. These less abstract models are widely used in fisheries studies; they are more readily interpretable than DEB models, but lack the generality of DEB models. We review the interconnections between the two approaches and present formulae relating the state variables and fluxes in the standard DEB model to measured bioenergetic rate processes. We illustrate this synthesis for two large fishes: Pacific bluefin tuna (Thunnus orientalis) and Pacific salmon (Oncorhynchus spp.). For each, we have a parameter-sparse, full-life-cycle DEB model that requires adding only a few species-specific features to the standard model. Both models allow powerful integration of knowledge derived from data restricted to certain life stages, processes and environments.
THE NEW CLASSICAL THEORY AND THE REAL BUSINESS CYCLE MODEL
Oana Simona HUDEA (CARAMAN)
2014-11-01
The present paper aims at describing some key elements of the model associated with new classical theory, namely the Real Business Cycle model, which describes the economy from the perspective of a perfectly competitive market characterised by price, wage, and interest rate flexibility. The rendered impulse-response functions, which reveal the capacity of the model variables to return to their steady state under the impact of a structural shock, be it technology- or monetary-policy-oriented, point to the neutrality of the monetary authority's decisions, thereby confirming the well-known classical dichotomy between the nominal and the real factors of the economy.
Economic contract theory tests models of mutualism.
Weyl, E Glen; Frederickson, Megan E; Yu, Douglas W; Pierce, Naomi E
2010-09-01
Although mutualisms are common in all ecological communities and have played key roles in the diversification of life, our current understanding of the evolution of cooperation applies mostly to social behavior within a species. A central question is whether mutualisms persist because hosts have evolved costly punishment of cheaters. Here, we use the economic theory of employment contracts to formulate and distinguish between two mechanisms that have been proposed to prevent cheating in host-symbiont mutualisms, partner fidelity feedback (PFF) and host sanctions (HS). Under PFF, positive feedback between host fitness and symbiont fitness is sufficient to prevent cheating; in contrast, HS posits the necessity of costly punishment to maintain mutualism. A coevolutionary model of mutualism finds that HS are unlikely to evolve de novo, and published data on legume-rhizobia and yucca-moth mutualisms are consistent with PFF and not with HS. Thus, in systems considered to be textbook cases of HS, we find poor support for the theory that hosts have evolved to punish cheating symbionts; instead, we show that even horizontally transmitted mutualisms can be stabilized via PFF. PFF theory may place previously underappreciated constraints on the evolution of mutualism and explain why punishment is far from ubiquitous in nature.
A general additive-multiplicative rates model for recurrent event data
Anonymous
2009-01-01
In this article, we propose a general additive-multiplicative rates model for recurrent event data. The proposed model includes the additive rates and multiplicative rates models as special cases. For inference on the model parameters, estimating equation approaches are developed, and asymptotic properties of the proposed estimators are established through modern empirical process theory. In addition, an illustration with multiple-infection data from a clinical study on chronic granulomatous disease is provided.
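The additive-multiplicative structure can be sketched as follows; the exact link functions in the paper may differ, and here the additive part is taken as linear and the multiplicative part as Cox-type:

```python
import math

def rate(t, x, z, beta, gamma, baseline):
    """General additive-multiplicative rate for recurrent events:
    lambda(t | X, Z) = X'beta + baseline(t) * exp(Z'gamma).
    Setting beta = 0 recovers the multiplicative (Cox-type) rates
    model; setting gamma = 0 recovers the additive rates model."""
    additive = sum(xi * bi for xi, bi in zip(x, beta))
    relative_risk = math.exp(sum(zi * gi for zi, gi in zip(z, gamma)))
    return additive + baseline(t) * relative_risk
```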
A matrix model from string field theory
Syoji Zeze
2016-09-01
We demonstrate that a Hermitian matrix model can be derived from level-truncated open string field theory with Chan-Paton factors. The Hermitian matrix is coupled with a scalar and U(N) vectors which are responsible for the D-brane at the tachyon vacuum. The effective potential for the scalar is evaluated both for finite and large N. An increase of the potential height is observed in both cases. The large N matrix integral is identified with a system of N ZZ branes and a ghost FZZT brane.
Polarimetric clutter modeling: Theory and application
Kong, J. A.; Lin, F. C.; Borgeaud, M.; Yueh, H. A.; Swartz, A. A.; Lim, H. H.; Shim, R. T.; Novak, L. M.
1988-01-01
The two-layer anisotropic random medium model is used to investigate fully polarimetric scattering properties of earth terrain media. The polarization covariance matrices for the untilted and tilted uniaxial random medium are evaluated using the strong fluctuation theory and distorted Born approximation. In order to account for the azimuthal randomness in the growth direction of leaves in tree and grass fields, an averaging scheme over the azimuthal direction is also applied. It is found that characteristics of terrain clutter can be identified through the analysis of each element of the covariance matrix. Theoretical results are illustrated by the comparison with experimental data provided by MIT Lincoln Laboratory for tree and grass fields.
A matrix model from string field theory
Zeze, Syoji
2016-09-01
We demonstrate that a Hermitian matrix model can be derived from level truncated open string field theory with Chan-Paton factors. The Hermitian matrix is coupled with a scalar and U(N) vectors which are responsible for the D-brane at the tachyon vacuum. Effective potential for the scalar is evaluated both for finite and large N. Increase of potential height is observed in both cases. The large N matrix integral is identified with a system of N ZZ branes and a ghost FZZT brane.
Ziegler, Sigurd; Pedersen, Mads L; Mowinckel, Athanasia M; Biele, Guido
2016-12-01
Attention deficit hyperactivity disorder (ADHD) is characterized by altered decision-making (DM) and reinforcement learning (RL), for which competing theories propose alternative explanations. Computational modelling contributes to understanding DM and RL by integrating behavioural and neurobiological findings, and could elucidate pathogenic mechanisms behind ADHD. This review of neurobiological theories of ADHD describes their predictions for the effects of ADHD on DM and RL as expressed by the drift-diffusion model of DM (DDM) and a basic RL model. Empirical studies employing these models are also reviewed. While theories often agree on how ADHD should be reflected in model parameters, each theory implies a unique combination of predictions. Empirical studies agree with the theories' assumptions of a lowered DDM drift rate in ADHD, while findings are less conclusive for boundary separation. The few studies employing RL models support a lower choice sensitivity in ADHD, but not an altered learning rate. The discussion outlines research areas for further theoretical refinement in the ADHD field.
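The drift-diffusion model referred to above can be simulated directly. The sketch below is a generic Euler-Maruyama implementation with illustrative parameters, not the fitting procedure used in the reviewed studies:

```python
import random

def ddm_trial(drift, boundary, dt=0.001, noise=1.0, rng=None):
    """One drift-diffusion trial (Euler-Maruyama): evidence starts at 0
    and accumulates with mean rate `drift` plus Gaussian noise until it
    crosses +/- `boundary`. A lower drift rate, as reported for ADHD,
    yields slower and less accurate responses. Parameters are illustrative."""
    rng = rng or random.Random(0)
    x, t = 0.0, 0.0
    while abs(x) < boundary:
        x += drift * dt + noise * rng.gauss(0.0, dt ** 0.5)
        t += dt
    return ('upper' if x > 0 else 'lower', t)
```

With a strong positive drift, most trials terminate at the upper (correct) boundary; lowering the drift shifts mass toward slow and erroneous responses.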
Quantum Model Theory (QMod): Modeling Contextual Emergent Entangled Interfering Entities
Aerts, Diederik
2012-01-01
In this paper we present 'Quantum Model Theory' (QMod), a theory we developed to model entities that entail the typical quantum effects of 'contextuality', 'superposition', 'interference', 'entanglement' and 'emergence'. The aim of QMod is to put forward a theoretical framework that has the technical power of standard quantum mechanics, namely it makes explicit use of the standard complex Hilbert space and its quantum mechanical calculus, but is also more general than standard quantum mechanics, in the sense that it only uses this quantum calculus locally, i.e. for each context corresponding to a measurement. In this sense, QMod is a generalization of quantum mechanics, similar to how the general relativity manifold formalism is a generalization of special relativity and classical physics. We prove by means of a representation theorem that QMod can be used for any entity entailing the typical quantum effects mentioned above. Some examples of application of QMod in concept theory and macroscopic...
Theory and Model of Agricultural Insurance Subsidy
Wan Kailiang; Long Wenjun
2007-01-01
The issue of agricultural insurance subsidy is discussed in this paper with the aim of making such subsidies more rational and scientific. The paper starts with the connection between agricultural insurance and financial subsidy. Implementing a financial subsidy is necessary and crucial owing to the poor operational performance of agricultural insurance, especially in developing countries; but the subsidy should be provided more rationally because it has many negative effects. A model of competitive insurance markets developed by Ahsan et al. (1982) and a farmers' decision model are developed to solve for the optimal subsidy rate, and an equation for calculating it is obtained. A quantitative subsidy rate is not given here, however, because the calculation requires certain restrictive conditions that are often absent in developing countries. The government should therefore provide some subsidy for ex ante research and preparation to obtain scientific probabilities and premium rates.
Forecasting Exchange Rates with Mixed Models
Laura Maria Badea
2013-06-01
Gaining accuracy in exchange rate forecasting provides real benefits for financial activities. Supported today by advancements in computing power, machine learning techniques provide good alternatives to traditional time series estimation methods. Artificial Neural Networks (ANNs), which offer robust results and allow flexible data manipulation, are widely used in time series forecasting. When integrating the "white-box" character of conventional methods with the complexity of machine learning techniques, forecasting models perform even better in terms of generated errors. In this study, input (independent) variables are selected using an ARIMA technique and are then employed in variously configured multilayer feed-forward neural networks, trained with the Broyden-Fletcher-Goldfarb-Shanno (BFGS) optimization algorithm, to predict the EUR/RON and CHF/RON exchange rates. Results in terms of mean squared error show that the mixed models perform well.
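The forward pass of the kind of network described above can be sketched as follows; the weights are placeholders, since in the study they are fit with the BFGS algorithm on ARIMA-selected inputs:

```python
import math

def forward(x, W1, b1, W2, b2):
    """Forward pass of a single-hidden-layer feed-forward network
    (tanh hidden units, linear output) of the kind used for the
    EUR/RON and CHF/RON predictions."""
    hidden = [math.tanh(sum(w * xi for w, xi in zip(row, x)) + b)
              for row, b in zip(W1, b1)]
    return sum(w * h for w, h in zip(W2, hidden)) + b2

def mse(pred, actual):
    """Mean squared error, the evaluation metric used in the study."""
    return sum((p - a) ** 2 for p, a in zip(pred, actual)) / len(pred)
```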
Gaussian mixture model of heart rate variability.
Tommaso Costa
Heart rate variability (HRV) is an important measure of the sympathetic and parasympathetic functions of the autonomic nervous system and a key indicator of cardiovascular condition. This paper proposes a novel method to investigate HRV, namely modelling it as a linear combination of Gaussians. Results show that three Gaussians are enough to describe the stationary statistics of heart variability and to provide a straightforward interpretation of the HRV power spectrum. Comparisons have also been made with synthetic data generated from different physiologically based models, showing the plausibility of the Gaussian mixture parameters.
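The proposed representation, a weighted sum of Gaussians, can be sketched directly; the three components below use illustrative parameters, not those estimated from HRV data:

```python
import math

def gaussian_pdf(x, mu, sigma):
    """Normal density with mean mu and standard deviation sigma."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def mixture_pdf(x, weights, mus, sigmas):
    """HRV density modelled as a weighted sum of Gaussians; the paper
    finds three components suffice. Weights must sum to 1."""
    return sum(w * gaussian_pdf(x, m, s)
               for w, m, s in zip(weights, mus, sigmas))
```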
Moral Distress Model Reconstructed Using Grounded Theory.
Ko, Hsun-Kuei; Chin, Chi-Chun; Hsu, Min-Tao
2016-12-29
The problems of nurse burnout and manpower shortage relate to moral distress. Thus, having a good understanding of moral distress is critical to developing strategies that effectively improve the clinical ethical climate and improve nursing retention in Taiwan. The aim of this study was to reconstruct the model of moral distress using grounded theory. Twenty-five staff nurses at work units that attend to the needs of adult, pediatric, acute, and critical disease or end-of-life-care patients were recruited as participants using theoretical sampling from three teaching hospitals in Taiwan. Data were collected using intensive, 2- to 3-hour interviews with each participant. Audio recordings of the interviews were made and then converted into transcripts. The data were analyzed using grounded theory. In the clinical setting, the perspective that nurses take toward clinical moral events reflects their moral values, which trigger moral cognition, provocation, and appraisal. The moral barriers that form when moral events occurring in clinical settings contradict personal moral values may later develop into moral distress. In handling moral barriers in the clinical environment, nurses make moral judgments and determine what is morally correct. Influenced by moral efficacy, the consequence may either be a moral action or an expression of personal emotion. Wasting National Health Insurance resources and Chinese culture are key sources of moral distress for nurses in Taiwan. The role of self-confidence in promoting moral efficacy and the role of heterodox skills in promoting moral actions represent findings that are unique to this study. The moral distress model was used in this study to facilitate the development of future nursing theories. On the basis of our findings, we suggested that nursing students be encouraged to use case studies to establish proper moral values, improve moral cognition and judgment capabilities, and promote moral actions to better handle the
Application of Chaos Theory to Psychological Models
Blackerby, Rae Fortunato
This dissertation shows that an alternative theoretical approach from physics--chaos theory--offers a viable basis for improved understanding of human beings and their behavior. Chaos theory provides achievable frameworks for potential identification, assessment, and adjustment of human behavior patterns. Most current psychological models fail to address the metaphysical conditions inherent in the human system, thus bringing deep errors to psychological practice and empirical research. Freudian, Jungian and behavioristic perspectives are inadequate psychological models because they assume, either implicitly or explicitly, that the human psychological system is a closed, linear system. On the other hand, Adlerian models that require open systems are likely to be empirically tenable. Logically, models will hold only if the model's assumptions hold. The innovative application of chaotic dynamics to psychological behavior is a promising theoretical development because the application asserts that human systems are open, nonlinear and self-organizing. Chaotic dynamics use nonlinear mathematical relationships among factors that influence human systems. This dissertation explores these mathematical relationships in the context of a sample model of moral behavior using simulated data. Mathematical equations with nonlinear feedback loops describe chaotic systems. Feedback loops govern the equations' value in subsequent calculation iterations. For example, changes in moral behavior are affected by an individual's own self-centeredness, family and community influences, and previous moral behavior choices that feed back to influence future choices. When applying these factors to the chaos equations, the model behaves like other chaotic systems. For example, changes in moral behavior fluctuate in regular patterns, as determined by the values of the individual, family and community factors. In some cases, these fluctuations converge to one value; in other cases, they diverge in
Nanofluid Drop Evaporation: Experiment, Theory, and Modeling
Gerken, William James
Nanofluids, stable colloidal suspensions of nanoparticles in a base fluid, have potential applications in the heat transfer, combustion and propulsion, manufacturing, and medical fields. Experiments were conducted to determine the evaporation rate of room temperature, millimeter-sized pendant drops of ethanol laden with varying amounts (0-3% by weight) of 40-60 nm aluminum nanoparticles (nAl). Time-resolved high-resolution drop images were collected for the determination of early-time evaporation rate (D²/D₀² > 0.75), shown to exhibit D-square law behavior, and surface tension. Results show an asymptotic decrease in pendant drop evaporation rate with increasing nAl loading. The evaporation rate decreases by approximately 15% at around 1% to 3% nAl loading relative to the evaporation rate of pure ethanol. Surface tension was observed to be unaffected by nAl loading up to 3% by weight. A model was developed to describe the evaporation of the nanofluid pendant drops based on D-square law analysis for the gas domain and a description of the reduction in liquid fraction available for evaporation due to nanoparticle agglomerate packing near the evaporating drop surface. Model predictions are in relatively good agreement with experiment, within a few percent of measured nanofluid pendant drop evaporation rate. The evaporation of pinned nanofluid sessile drops was also considered via modeling. It was found that the same mechanism for nanofluid evaporation rate reduction used to explain pendant drops could be used for sessile drops. That mechanism is a reduction in evaporation rate due to a reduction in available ethanol for evaporation at the drop surface caused by the packing of nanoparticle agglomerates near the drop surface. Comparisons of the present modeling predictions with sessile drop evaporation rate measurements reported for nAl/ethanol nanofluids by Sefiane and Bennacer [11] are in fairly good agreement. Portions of this abstract previously appeared as: W. J
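The D-square law invoked above has a simple closed form; the sketch below uses illustrative values for the initial diameter and evaporation-rate constant, not the measured ones:

```python
def d_squared(t, D0, K):
    """Classical D-square law for droplet evaporation:
    D(t)**2 = D0**2 - K*t, with K the evaporation-rate constant.
    The measurements above show K falling by roughly 15% as the
    nanoparticle loading rises toward 3% by weight."""
    return D0 ** 2 - K * t

def lifetime(D0, K):
    """Time for complete evaporation (D**2 reaching zero)."""
    return D0 ** 2 / K
```

Because the drop's squared diameter decays linearly, a reduced K translates directly into a proportionally longer drop lifetime.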
An Inflationary Model in String Theory
Iizuka, N; Iizuka, Norihiro; Trivedi, Sandip P.
2004-01-01
We construct a model of inflation in string theory after carefully taking into account moduli stabilization. The setting is a warped compactification of Type IIB string theory in the presence of D3 and anti-D3-branes. The inflaton is the position of a D3-brane in the internal space. By suitably adjusting fluxes and the location of symmetrically placed anti-D3-branes, we show that at a point of enhanced symmetry, the inflaton potential V can have a broad maximum, satisfying the condition V''/V << 1 in Planck units. On starting close to the top of this potential the slow-roll conditions can be met. Observational constraints impose significant restrictions. As a first pass we show that these can be satisfied and determine the important scales in the compactification to within an order of magnitude. One robust feature is that the scale of inflation is low, H = O(10^{10}) GeV. Removing the observational constraints makes it much easier to construct a slow-roll inflationary model. Generalizations and conseque...
Chemical Reaction Rates from Ring Polymer Molecular Dynamics: Theory and Practical Applications
Suleimanov, Yury V; Guo, Hua
2016-01-01
This Feature Article presents an overview of the current status of Ring Polymer Molecular Dynamics (RPMD) rate theory. We first analyze the theory and its connection to quantum transition state theory. We then focus on its practical application to prototypical chemical reactions in the gas phase, which demonstrate how accurate and reliable RPMD is for calculating thermal chemical reaction rates in multifarious cases. This review serves as an important checkpoint in RPMD rate theory development, which shows that RPMD is shifting from being just one of recent novel ideas to a well-established and validated alternative to conventional techniques for calculating thermal chemical rates. We also hope it will motivate further applications of RPMD to various chemical reactions.
"Depletion": A Game with Natural Rules for Teaching Reaction Rate Theory.
Olbris, Donald J.; Herzfeld, Judith
2002-01-01
Depletion is a game that reinforces central concepts of reaction rate theory through simulation. Presents the game with a set of follow-up questions suitable for either a quiz or discussion. Also describes student reaction to the game. (MM)
Modified perturbation theory for the Yukawa model
Poluektov, Yu M
2016-01-01
A new formulation of perturbation theory for a description of the Dirac and scalar fields (the Yukawa model) is suggested. As the main approximation the self-consistent field model is chosen, which to a certain degree accounts for the effects caused by the interaction of the fields. Such a choice of the main approximation leads to a normally ordered form of the interaction Hamiltonian. Generation of the fermion mass due to the interaction with exchange of the scalar boson is investigated. It is demonstrated that, for zero bare mass, the fermion can acquire mass only if the coupling constant exceeds a critical value determined by the boson mass. In this connection, the problem of the neutrino mass is discussed.
Modeling the Volatility of Exchange Rates: GARCH Models
Fahima Charef
2017-03-01
The modeling of exchange-rate dynamics has long remained a central topic of financial and economic research. In our research we tried to study the relationship between the evolution of exchange rates and macroeconomic fundamentals. Our empirical study is based on a series of exchange rates for the Tunisian dinar against three currencies of major trading partners (dollar, euro, yen) and fundamentals (terms of trade, the inflation rate, the interest rate differential), using monthly data from January 2000 to December 2014, for the case of Tunisia. We have adopted models of conditional heteroscedasticity (ARCH, GARCH, EGARCH, TGARCH). The results indicate that there is a partial relationship between the evolution of the Tunisian dinar exchange rates and macroeconomic variables.
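As a minimal illustration of the conditional-heteroscedasticity family named in this abstract, the sketch below simulates a GARCH(1,1) return series; the parameter values are arbitrary illustrations, not estimates from the Tunisian dinar data:

```python
import random

def simulate_garch11(n, omega, alpha, beta, seed=0):
    """Simulate returns r_t = sigma_t * z_t, z_t ~ N(0,1), with
    GARCH(1,1) variance: sigma2_t = omega + alpha*r_{t-1}^2 + beta*sigma2_{t-1}."""
    rng = random.Random(seed)
    sigma2 = omega / (1.0 - alpha - beta)  # start at the unconditional variance
    returns, variances = [], []
    for _ in range(n):
        r = (sigma2 ** 0.5) * rng.gauss(0.0, 1.0)
        returns.append(r)
        variances.append(sigma2)
        sigma2 = omega + alpha * r * r + beta * sigma2
    return returns, variances

# Illustrative parameters; alpha + beta < 1 keeps the process stationary.
r, v = simulate_garch11(1000, omega=0.05, alpha=0.10, beta=0.85)
print(f"sample variance of returns: {sum(x * x for x in r) / len(r):.3f}")
print(f"unconditional variance:     {0.05 / (1 - 0.10 - 0.85):.3f}")
```

Fitting such a model to real exchange-rate data (and its EGARCH/TGARCH asymmetric variants) is usually done with maximum likelihood rather than simulation; this sketch only shows the variance recursion itself.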
PARFUME Theory and Model basis Report
Darrell L. Knudson; Gregory K Miller; G.K. Miller; D.A. Petti; J.T. Maki; D.L. Knudson
2009-09-01
The success of gas reactors depends upon the safety and quality of the coated particle fuel. The fuel performance modeling code PARFUME simulates the mechanical, thermal and physico-chemical behavior of fuel particles during irradiation. This report documents the theory and material properties behind various capabilities of the code, which include: 1) various options for calculating CO production and fission product gas release, 2) an analytical solution for stresses in the coating layers that accounts for irradiation-induced creep and swelling of the pyrocarbon layers, 3) a thermal model that calculates a time-dependent temperature profile through a pebble bed sphere or a prismatic block core, as well as through the layers of each analyzed particle, 4) simulation of multi-dimensional particle behavior associated with cracking in the IPyC layer, partial debonding of the IPyC from the SiC, particle asphericity, and kernel migration (or amoeba effect), 5) two independent methods for determining particle failure probabilities, 6) a model for calculating release-to-birth (R/B) ratios of gaseous fission products that accounts for particle failures and uranium contamination in the fuel matrix, and 7) the evaluation of an accident condition, where a particle experiences a sudden change in temperature following a period of normal irradiation. The accident condition entails diffusion of fission products through the particle coating layers and through the fuel matrix to the coolant boundary. This document represents the initial version of the PARFUME Theory and Model Basis Report. More detailed descriptions will be provided in future revisions.
Stochastic linear programming models, theory, and computation
Kall, Peter
2011-01-01
This new edition of Stochastic Linear Programming: Models, Theory and Computation has been brought completely up to date, either dealing with or at least referring to new material on models and methods, including DEA with stochastic outputs modeled via constraints on special risk functions (generalizing chance constraints, ICC’s and CVaR constraints), material on Sharpe-ratio, and Asset Liability Management models involving CVaR in a multi-stage setup. To facilitate use as a text, exercises are included throughout the book, and web access is provided to a student version of the authors’ SLP-IOR software. Additionally, the authors have updated the Guide to Available Software, and they have included newer algorithms and modeling systems for SLP. The book is thus suitable as a text for advanced courses in stochastic optimization, and as a reference to the field. From Reviews of the First Edition: "The book presents a comprehensive study of stochastic linear optimization problems and their applications. … T...
Modelling of rate effects at multiple scales
Pedersen, R.R.; Simone, A.; Sluys, L. J.
2008-01-01
At the macro- and meso-scales a rate dependent constitutive model is used in which visco-elasticity is coupled to visco-plasticity and damage. A viscous length scale effect is introduced to control the size of the fracture process zone. By comparison of the widths of the fracture process zone, the length scale in the meso-model and the macro-model can be coupled. In this fashion, a bridging of length scales can be established. A computational analysis of a Split Hopkinson bar test at medium and high impact load is carried out at macro-scale and meso-scale including information from the micro-scale.
Queuing theory accurately models the need for critical care resources.
McManus, Michael L; Long, Michael C; Cooper, Abbot; Litvak, Eugene
2004-05-01
Allocation of scarce resources presents an increasing challenge to hospital administrators and health policy makers. Intensive care units can present bottlenecks within busy hospitals, but their expansion is costly and difficult to gauge. Although mathematical tools have been suggested for determining the proper number of intensive care beds necessary to serve a given demand, the performance of such models has not been prospectively evaluated over significant periods. The authors prospectively collected 2 years' admission, discharge, and turn-away data in a busy, urban intensive care unit. Using queuing theory, they then constructed a mathematical model of patient flow, compared predictions from the model to observed performance of the unit, and explored the sensitivity of the model to changes in unit size. The queuing model proved to be very accurate, with predicted admission turn-away rates correlating highly with those actually observed (correlation coefficient = 0.89). The model was useful in predicting both monthly responsiveness to changing demand (mean monthly difference between observed and predicted values, 0.4+/-2.3%; range, 0-13%) and the overall 2-yr turn-away rate for the unit (21%vs. 22%). Both in practice and in simulation, turn-away rates increased exponentially when utilization exceeded 80-85%. Sensitivity analysis using the model revealed rapid and severe degradation of system performance with even the small changes in bed availability that might result from sudden staffing shortages or admission of patients with very long stays. The stochastic nature of patient flow may falsely lead health planners to underestimate resource needs in busy intensive care units. Although the nature of arrivals for intensive care deserves further study, when demand is random, queuing theory provides an accurate means of determining the appropriate supply of beds.
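The turn-away behavior described in this abstract is what an Erlang loss (M/M/c/c) model predicts, and the sharp degradation above 80-85% utilization falls directly out of the standard Erlang B recursion. The bed count and loads below are illustrative, not the studied unit's actual figures:

```python
def erlang_b(offered_load, beds):
    """Blocking (turn-away) probability for an M/M/c/c loss system,
    computed with the numerically stable Erlang B recursion:
    B(0) = 1;  B(n) = a*B(n-1) / (n + a*B(n-1))."""
    b = 1.0
    for n in range(1, beds + 1):
        b = offered_load * b / (n + offered_load * b)
    return b

# Hypothetical 10-bed ICU; offered load = arrival rate x mean length of stay.
beds = 10
for utilization in (0.70, 0.80, 0.85, 0.90):
    load = utilization * beds
    print(f"utilization {utilization:.0%}: turn-away {erlang_b(load, beds):.1%}")
```

Because blocking grows steeply and nonlinearly with load, small reductions in effective bed availability (staffing shortages, long-stay patients) produce the disproportionate turn-away increases the authors observed.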
VR closure rates for two vocational models.
Fraser, Virginia V; Jones, Amanda M; Frounfelker, Rochelle; Harding, Brian; Hardin, Teresa; Bond, Gary R
2008-01-01
The Individual Placement and Support (IPS) model of supported employment is an evidence-based practice for individuals with psychiatric disabilities. To be financially viable, IPS programs require funding from the state-federal vocational rehabilitation (VR) system. However, some observers have questioned the compatibility of IPS and the VR system. Using a randomized controlled trial comparing IPS to a well-established vocational program called the Diversified Placement Approach (DPA), we examined rates of VR sponsorship and successful VR closures. We also describe the establishment of an active collaboration between a psychiatric rehabilitation agency and the state VR system to facilitate rapid VR sponsorship for IPS clients. Both IPS and DPA achieved a 44% rate of VR Status 26 closure when considering all clients entering the study. IPS and DPA averaged a similar amount of time to achieve VR sponsorship. Time from vocational program entry to Status 26 was 51 days longer on average for IPS. Even though several IPS principles seem to run counter to VR practices, such as zero exclusion and rapid job search, we found IPS closure rates comparable to those for DPA, a vocational model that screens for readiness, provides prevocational preparation, and extensively uses agency-run businesses.
Modeling and Optimization : Theory and Applications Conference
Terlaky, Tamás
2015-01-01
This volume contains a selection of contributions that were presented at the Modeling and Optimization: Theory and Applications Conference (MOPTA) held at Lehigh University in Bethlehem, Pennsylvania, USA on August 13-15, 2014. The conference brought together a diverse group of researchers and practitioners, working on both theoretical and practical aspects of continuous or discrete optimization. Topics presented included algorithms for solving convex, network, mixed-integer, nonlinear, and global optimization problems, and addressed the application of deterministic and stochastic optimization techniques in energy, finance, logistics, analytics, healthcare, and other important fields. The contributions contained in this volume represent a sample of these topics and applications and illustrate the broad diversity of ideas discussed at the meeting.
Theory and modelling of nanocarbon phase stability.
Barnard, A. S.
2006-01-01
The transformation of nanodiamonds into carbon-onions (and vice versa) has been observed experimentally and has been modeled computationally at various levels of sophistication. Also, several analytical theories have been derived to describe the size, temperature and pressure dependence of this phase transition. However, in most cases a pure carbon-onion or nanodiamond is not the final product. More often than not an intermediary is formed, known as a bucky-diamond, with a diamond-like core encased in an onion-like shell. This has prompted a number of studies investigating the relative stability of nanodiamonds, bucky-diamonds, carbon-onions and fullerenes, in various size regimes. Presented here is a review outlining results of numerous theoretical studies examining the phase diagrams and phase stability of carbon nanoparticles, to clarify the complicated relationship between fullerenic and diamond structures at the nanoscale.
Galaxy alignments: Theory, modelling and simulations
Kiessling, Alina; Joachimi, Benjamin; Kirk, Donnacha; Kitching, Thomas D; Leonard, Adrienne; Mandelbaum, Rachel; Schäfer, Björn Malte; Sifón, Cristóbal; Brown, Michael L; Rassat, Anais
2015-01-01
The shapes of galaxies are not randomly oriented on the sky. During the galaxy formation and evolution process, environment has a strong influence, as tidal gravitational fields in large-scale structure tend to align the shapes and angular momenta of nearby galaxies. Additionally, events such as galaxy mergers affect the relative alignments of galaxies throughout their history. These "intrinsic galaxy alignments" are known to exist, but are still poorly understood. This review will offer a pedagogical introduction to the current theories that describe intrinsic galaxy alignments, including the apparent difference in intrinsic alignment between early- and late-type galaxies and the latest efforts to model them analytically. It will then describe the ongoing efforts to simulate intrinsic alignments using both $N$-body and hydrodynamic simulations. Due to the relative youth of this field, there is still much to be done to understand intrinsic galaxy alignments and this review summarises the current state of the ...
Models of decoherence with negative dephasing rate
Pernice, Ansgar; Strunz, Walter T
2012-01-01
We determine the total state dynamics of a dephasing open quantum system using the standard environment of harmonic oscillators. Of particular interest are random unitary approaches to the same reduced dynamics and system-environment correlations in the full model. Concentrating on a model with an at times negative dephasing rate, the issue of "non-Markovianity" will also be addressed with the emphasis on information obtained from the dynamics of the total state of system and environment: making use of criteria that allow us to distinguish between classically correlated and entangled total states, we employ a simple measure for the correlations emerging from the increase of the two local entropies, and relate it to the nature of the correlations.
Visceral obesity and psychosocial stress: a generalised control theory model
Wallace, Rodrick
2016-07-01
The linking of control theory and information theory via the Data Rate Theorem and its generalisations allows for the construction of necessary-conditions statistical models of body-mass regulation in the context of interaction with a complex dynamic environment. By focusing on the stress-related induction of central obesity via failure of HPA axis regulation, we explore implications for strategies of prevention and treatment. It rapidly becomes evident that individual-centred biomedical reductionism is an inadequate paradigm. Without mitigation of HPA axis or related dysfunctions arising from social pathologies of power imbalance, economic insecurity, and so on, it is unlikely that permanent changes in visceral obesity for individuals can be maintained without constant therapeutic effort, an expensive, and likely unsustainable, public policy.
Modeling missing data in knowledge space theory.
de Chiusole, Debora; Stefanutti, Luca; Anselmi, Pasquale; Robusto, Egidio
2015-12-01
Missing data are a well known issue in statistical inference, because some responses may be missing, even when data are collected carefully. The problem that arises in these cases is how to deal with missing data. In this article, the missingness is analyzed in knowledge space theory, and in particular when the basic local independence model (BLIM) is applied to the data. Two extensions of the BLIM to missing data are proposed: The former, called ignorable missing BLIM (IMBLIM), assumes that missing data are missing completely at random; the latter, called missing BLIM (MissBLIM), introduces specific dependencies of the missing data on the knowledge states, thus assuming that the missing data are missing not at random. The IMBLIM and the MissBLIM modeled the missingness in a satisfactory way, in both a simulation study and an empirical application, depending on the process that generates the missingness: If the missing data-generating process is of type missing completely at random, then either IMBLIM or MissBLIM provide adequate fit to the data. However, if the pattern of missingness is functionally dependent upon unobservable features of the data (e.g., missing answers are more likely to be wrong), then only a correctly specified model of the missingness distribution provides an adequate fit to the data.
Modeling inflation rates and exchange rates in Ghana: application of multivariate GARCH models
Nortey, Ezekiel NN; Ngoh, Delali D; Doku-Amponsah, Kwabena; Ofori-Boateng, Kenneth
2015-01-01
This paper was aimed at investigating the volatility and conditional relationship among inflation rates, exchange rates and interest rates as well as to construct a model using multivariate GARCH DCC and BEKK models using Ghana data from January 1990 to December 2013. The study revealed that the cumulative depreciation of the cedi to the US dollar from 1990 to 2013 is 7,010.2% and the yearly weighted depreciation of the cedi to the US dollar for the period is 20.4%. There was evidence that, t...
A Mathematical Theory of the Gauged Linear Sigma Model
Fan, Huijun; Ruan, Yongbin
2015-01-01
We construct a rigorous mathematical theory of Witten's Gauged Linear Sigma Model (GLSM). Our theory applies to a wide range of examples, including many cases with non-Abelian gauge group. Both the Gromov-Witten theory of a Calabi-Yau complete intersection X and the Landau-Ginzburg dual (FJRW-theory) of X can be expressed as gauged linear sigma models. Furthermore, the Landau-Ginzburg/Calabi-Yau correspondence can be interpreted as a variation of the moment map or a deformation of GIT in the GLSM. This paper focuses primarily on the algebraic theory, while a companion article will treat the analytic theory.
The Properties of Model Selection when Retaining Theory Variables
Hendry, David F.; Johansen, Søren
Economic theories are often fitted directly to data to avoid possible model selection biases. We show that if a theory model specifying the correct set of m relevant exogenous variables, x{t}, is embedded within the larger set of m+k candidate variables, (x{t},w{t}), then selection over the second set by statistical significance can be undertaken without affecting the estimator distribution of the theory parameters. This strategy returns the theory-parameter estimates when the theory is correct, yet protects against the theory being under-specified because some w{t} are relevant.
Gravothermal Star Clusters - Theory and Computer Modelling
Spurzem, Rainer
2010-11-01
In the George Darwin Lecture, delivered to the Royal Astronomical Society in 1960, Viktor A. Ambartsumian wrote that the evolution of stellar systems can be described by the "dynamic evolution of a gravitating gas" complemented by "a statistical description of the changes in the physical states of stars". This talk will show how this physical concept has inspired theoretical modeling of star clusters in the following decades up to the present day. The application of principles of thermodynamics shows, as Ambartsumian argued in his 1960 lecture, that there is no stable state of equilibrium of a gravitating star cluster. The trend to local thermodynamic equilibrium is always disturbed by escaping stars (Ambartsumian), as well as by gravothermal and gravogyro instabilities, as was detected later. Here the state-of-the-art of modeling the evolution of dense stellar systems based on principles of thermodynamics and statistical mechanics (Fokker-Planck approximation) will be reviewed. Recent progress including rotation and internal correlations (primordial binaries) is presented. The models have also very successfully been used to study dense star clusters around massive black holes in galactic nuclei and even (in a few cases) relativistic supermassive dense objects in centres of galaxies (here again briefly touching one of the many research fields of V.A. Ambartsumian). In the modern era of high-speed supercomputing, where we are tackling direct N-body simulations of star clusters, we will show that such direct modeling supports and proves the concept of the statistical models based on the Fokker-Planck theory, and that both theoretical concepts and direct computer simulations are necessary to support each other and make scientific progress in the study of star cluster evolution.
A Realizability Model for Impredicative Hoare Type Theory
Petersen, Rasmus Lerchedal; Birkedal, Lars; Nanevski, Alexandar
2008-01-01
We present a denotational model of impredicative Hoare Type Theory, a very expressive dependent type theory in which one can specify and reason about mutable abstract data types. The model ensures soundness of the extension of Hoare Type Theory with impredicative polymorphism; makes the connections to separation logic clear; and provides a basis for investigation of further sound extensions of the theory, in particular equations between computations and types.
Venture Theory: A Model of Decision Weights.
1988-01-01
Restrictions are important in that nonadditive decision weights can be used to "explain" many anomalies of standard choice theory.
Metacommunity speciation models and their implications for diversification theory.
Hubert, Nicolas; Calcagno, Vincent; Etienne, Rampal S; Mouquet, Nicolas
2015-08-01
The emergence of new frameworks combining evolutionary and ecological dynamics in communities opens new perspectives on the study of speciation. By acknowledging the relative contribution of local and regional dynamics in shaping the complexity of ecological communities, metacommunity theory sheds a new light on the mechanisms underlying the emergence of species. Three integrative frameworks have been proposed, involving neutral dynamics, niche theory, and life history trade-offs respectively. Here, we review these frameworks of metacommunity theory to emphasise that: (1) studies on speciation and community ecology have converged towards similar general principles by acknowledging the central role of dispersal in metacommunity dynamics, (2) considering the conditions of emergence and maintenance of new species in communities has given rise to new models of speciation embedded in the metacommunity theory, (3) studies of diversification have shifted from relating phylogenetic patterns to landscapes' spatial and ecological characteristics towards integrative approaches that explicitly consider speciation in a mechanistic ecological framework. We highlight several challenges, in particular the need for a better integration of the eco-evolutionary consequences of dispersal and the need to increase our understanding of the relative rates of evolutionary and ecological changes in communities.
Annonaceae substitution rates: a codon model perspective
Lars Willem Chatrou
2014-01-01
The Annonaceae includes cultivated species of economic interest and represents an important source of information for better understanding the evolution of tropical rainforests. In phylogenetic analyses of DNA sequence data that are used to address evolutionary questions, it is imperative to use appropriate statistical models. Annonaceae are cases in point: Two sister clades, the subfamilies Annonoideae and Malmeoideae, contain the majority of Annonaceae species diversity. The Annonoideae generally show a greater degree of sequence divergence compared to the Malmeoideae, resulting in stark differences in branch lengths in phylogenetic trees. Uncertainty in how to interpret and analyse these differences has led to inconsistent results when estimating the ages of clades in Annonaceae using molecular dating techniques. We ask whether these differences may be attributed to inappropriate modelling assumptions in the phylogenetic analyses. Specifically, we test for (clade-specific) differences in rates of non-synonymous and synonymous substitutions. A high ratio of non-synonymous to synonymous substitutions may lead to similarity of DNA sequences due to convergence instead of common ancestry, and as a result confound phylogenetic analyses. We use a dataset of three chloroplast genes (rbcL, matK, ndhF) for 129 species representative of the family. We find that differences in branch lengths between major clades are not attributable to different rates of non-synonymous and synonymous substitutions. The differences in evolutionary rate between the major clades of Annonaceae pose a challenge for current molecular dating techniques that should be seen as a warning for the interpretation of such results in other organisms.
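The synonymous/non-synonymous distinction this abstract tests can be illustrated with a toy count over two aligned coding sequences and the standard genetic code. This is a raw-difference count only, not the site-normalized codon models (e.g. Nei-Gojobori or maximum-likelihood codon models) actually used in such analyses:

```python
# Toy count of synonymous vs non-synonymous codon differences.
# Real dN/dS estimation additionally normalizes by the numbers of
# synonymous and non-synonymous *sites*; this sketch skips that step.

BASES = "TCAG"
AMINO = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
CODON_TABLE = {
    a + b + c: AMINO[16 * i + 4 * j + k]
    for i, a in enumerate(BASES)
    for j, b in enumerate(BASES)
    for k, c in enumerate(BASES)
}

def count_subs(seq1, seq2):
    """Count synonymous and non-synonymous differences between aligned CDSs."""
    syn = nonsyn = 0
    for i in range(0, len(seq1), 3):
        c1, c2 = seq1[i:i + 3], seq2[i:i + 3]
        if c1 == c2:
            continue
        if CODON_TABLE[c1] == CODON_TABLE[c2]:
            syn += 1
        else:
            nonsyn += 1
    return syn, nonsyn

# TTT -> TTC is silent (both Phe); GAT -> GAA changes Asp to Glu.
print(count_subs("TTTGAT", "TTCGAA"))  # -> (1, 1)
```

A clade-specific excess of non-synonymous changes, as the authors test for, would show up here as an inflated nonsyn count relative to syn for that clade's sequence pairs.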
Chemical Reaction Rates from Ring Polymer Molecular Dynamics: Theory and Practical Applications
Suleimanov, Yury V; Aoiz, F Javier; Guo, Hua
2016-11-03
This Feature Article presents an overview of the current status of ring polymer molecular dynamics (RPMD) rate theory. We first analyze the RPMD approach and its connection to quantum transition-state theory. We then focus on its practical applications to prototypical chemical reactions in the gas phase, which demonstrate how accurate and reliable RPMD is for calculating thermal chemical reaction rate coefficients in multifarious cases. This review serves as an important checkpoint in RPMD rate theory development, which shows that RPMD is shifting from being just one of recent novel ideas to a well-established and validated alternative to conventional techniques for calculating thermal chemical rate coefficients. We also hope it will motivate further applications of RPMD to various chemical reactions.
Model of Polyakov duality: String field theory Hamiltonians from Yang-Mills theories
Periwal, Vipul
2000-08-01
Polyakov has conjectured that Yang-Mills theory should be equivalent to a noncritical string theory. I point out, based on the work of Marchesini, Ishibashi, Kawai and collaborators, and Jevicki and Rodrigues, that the loop operator of the Yang-Mills theory is the temporal gauge string field theory Hamiltonian of a noncritical string theory. The consistency condition of the string interpretation is the zig-zag symmetry emphasized by Polyakov. I explicitly show how this works for the one-plaquette model, providing a consistent direct string interpretation of the unitary matrix model for the first time.
Standard Model in multiscale theories and observational constraints
Calcagni, Gianluca; Nardelli, Giuseppe; Rodríguez-Fernández, David
2016-08-01
We construct and analyze the Standard Model of electroweak and strong interactions in multiscale spacetimes with (i) weighted derivatives and (ii) q-derivatives. Both theories can be formulated in two different frames, called fractional and integer picture. By definition, the fractional picture is where physical predictions should be made. (i) In the theory with weighted derivatives, it is shown that gauge invariance and the requirement of having constant masses in all reference frames make the Standard Model in the integer picture indistinguishable from the ordinary one. Experiments involving only weak and strong forces are insensitive to a change of spacetime dimensionality also in the fractional picture, and only the electromagnetic and gravitational sectors can break the degeneracy. For the simplest multiscale measures with only one characteristic time, length and energy scale t*, ℓ* and E*, we compute the Lamb shift in the hydrogen atom and constrain the multiscale correction to the ordinary result, getting the absolute bound E* > 28 TeV. Stronger bounds are obtained from the measurement of the fine-structure constant. (ii) In the theory with q-derivatives, considering the muon decay rate and the Lamb shift in light atoms, we obtain the independent absolute bound E* > 35 MeV. For α0 = 1/2, the Lamb shift alone yields E* > 450 GeV.
Glueball Decay Rates in the Witten-Sakai-Sugimoto Model
Brünner, Frederic; Rebhan, Anton
2015-01-01
We revisit and extend previous calculations of glueball decay rates in the Sakai-Sugimoto model, a holographic top-down approach for QCD with chiral quarks based on D8 probe branes in Witten's holographic model of nonsupersymmetric Yang-Mills theory. The rates for decays into two pions, two vector mesons, four pions, and the strongly suppressed decay into four pi0 are worked out quantitatively, using a range of the 't Hooft coupling which closely reproduces the decay rate of rho and omega mesons and also leads to a gluon condensate consistent with QCD sum rule calculations. The lowest holographic glueball, which arises from a rather exotic polarization of gravitons in the supergravity background, turns out to have a significantly lower mass and larger width than the two widely discussed glueball candidates f0(1500) and f0(1710). The lowest nonexotic and predominantly dilatonic scalar mode, which has a mass of 1487 MeV in the Witten-Sakai-Sugimoto model, instead provides a narrow glueball state, and we conject...
General Theory of Decoy-State Quantum Cryptography with Dark Count Rate Fluctuation
GAO Xiang; SUN Shi-Hai; LIANG Lin-Mei
2009-01-01
The existing theory of decoy-state quantum cryptography assumes that the dark count rate is a constant, but in practice it fluctuates. We develop a new decoy-state scheme, achieve a more practical key generation rate in the presence of dark-count-rate fluctuation, and compare the result with that of the decoy state without fluctuation. It is found that the key generation rate and maximal secure distance decrease under the influence of the fluctuation of the dark count rate.
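For context, the standard decoy-state secure key rate that such analyses bound (the generic GLLP/decoy-state form, not this paper's specific fluctuation-corrected expression) is

```latex
R \;\ge\; q \left\{ -\, Q_\mu\, f(E_\mu)\, H_2(E_\mu) \;+\; Q_1 \left[\, 1 - H_2(e_1)\, \right] \right\},
```

where $Q_\mu$ and $E_\mu$ are the measured gain and quantum bit error rate of the signal states, $Q_1$ and $e_1$ the single-photon gain and error rate estimated via the decoy states, $H_2$ the binary entropy function, $f$ the error-correction inefficiency, and $q$ a protocol-dependent sifting factor. Dark-count fluctuations propagate into the estimates of $Q_1$ and $e_1$, which is how they lower $R$ and the maximal secure distance.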
Density functional theory and multiscale materials modeling
Swapan K Ghosh
2003-01-01
One of the vital ingredients in the theoretical tools useful in materials modeling at all the length scales of interest is the concept of density. In the microscopic length scale, it is the electron density that has played a major role in providing a deeper understanding of chemical binding in atoms, molecules and solids. In the intermediate mesoscopic length scale, an appropriate picture of the equilibrium and dynamical processes has been obtained through the single particle number density of the constituent atoms or molecules. A wide class of problems involving nanomaterials, interfacial science and soft condensed matter has been addressed using the density based theoretical formalism as well as atomistic simulation in this regime. In the macroscopic length scale, however, matter is usually treated as a continuous medium and a description using local mass density, energy density and other related density functions has been found to be quite appropriate. A unique single unified theoretical framework that emerges through the density concept at these diverse length scales and is applicable to both quantum and classical systems is the so called density functional theory (DFT) which essentially provides a vehicle to project the many-particle picture to a single particle one. Thus, the central equation for quantum DFT is a one-particle Schrödinger-like Kohn–Sham equation, while the same for classical DFT consists of Boltzmann type distributions, both corresponding to a system of noninteracting particles in the field of a density-dependent effective potential. Selected illustrative applications of quantum DFT to microscopic modeling of intermolecular interaction and that of classical DFT to a mesoscopic modeling of soft condensed matter systems are presented.
Models of Particle Physics from Type IIB String Theory and F-theory: A Review
Maharana, Anshuman
2012-01-01
We review particle physics model building in type IIB string theory and F-theory. This is a region in the landscape where in principle many of the key ingredients required for a realistic model of particle physics can be combined successfully. We begin by reviewing moduli stabilisation within this framework and its implications for supersymmetry breaking. We then review model building tools and developments in the weakly coupled type IIB limit, for both local D3-branes at singularities and global models of intersecting D7-branes. Much of recent model building work has been in the strongly coupled regime of F-theory due to the presence of exceptional symmetries which allow for the construction of phenomenologically appealing Grand Unified Theories. We review both local and global F-theory model building starting from the fundamental concepts and tools regarding how the gauge group, matter sector and operators arise, and ranging to detailed phenomenological properties explored in the literature.
String-like dual models for scalar theories
Baadsgaard, Christian; Bjerrum-Bohr, N. E. J.; Bourjaily, Jacob; Damgaard, Poul H.
2016-12-01
We show that all tree-level amplitudes in φ^p scalar field theory can be represented as the α′ → 0 limit of an SL(2, ℝ)-invariant, string-theory-like dual model integral. These dual models are constructed according to constraints that admit families of solutions. We derive these dual models, and give closed formulae for all tree-level amplitudes of any φ^p scalar field theory.
String-Like Dual Models for Scalar Theories
Baadsgaard, Christian; Bourjaily, Jacob L; Damgaard, Poul H
2016-01-01
We show that all tree-level amplitudes in $\varphi^p$ scalar field theory can be represented as the $\alpha' \to 0$ limit of an $SL(2,R)$-invariant, string-theory-like dual model integral. These dual models are constructed according to constraints that admit families of solutions. We derive these dual models, and give closed formulae for all tree-level amplitudes of any $\varphi^p$ scalar field theory.
Catastrophe Theory: A Unified Model for Educational Change.
Cryer, Patricia; Elton, Lewis
1990-01-01
Catastrophe Theory and Herzberg's theory of motivation at work were used to create a model of change that unifies and extends Lewin's two separate stage and force-field models. This new model is used to analyze the behavior of academics as they adapt to the changing university environment. (Author/MLW)
Chaos Theory as a Model for Managing Issues and Crises.
Murphy, Priscilla
1996-01-01
Uses chaos theory to model public relations situations in which the salient feature is volatility of public perceptions. Discusses the premises of chaos theory and applies them to issues management, the evolution of interest groups, crises, and rumors. Concludes that chaos theory is useful as an analogy to structure image problems and to raise…
The Theory of Finite Models without Equal Sign
Li Bo LUO
2006-01-01
In this paper we suggest, for the first time, studying the model theory of all finite structures while putting the equal sign in the same situation as the other relations. Using formulas of infinite length we obtain new theorems for the preservation of model extensions, submodels, model homomorphisms and inverse homomorphisms. Theorems of these kinds were discussed systematically for general models in Chang and Keisler's Model Theory, but Gurevich obtained some different theorems in this direction for finite models. In our paper the old theorems manage to survive in finite model theory. There are also some differences between into-homomorphisms and onto-homomorphisms in the preservation theorems. We also study reduced models and minimum models. The characterization sentence of a model is given, which yields a general result for any theory T to be equivalent to a set of existential-universal sentences. Some results about completeness and model completeness are also given.
Ryo Oizumi
Despite the fact that density effects and individual differences in life history are considered to be important for evolution, these factors lead to several difficulties in understanding the evolution of life history, especially when population sizes reach the carrying capacity. r/K selection theory explains what types of life strategies evolve in the presence of density effects and individual differences. However, the relationship between the life schedules of individuals and population size is still unclear, even if the theory can classify life strategies appropriately. To address this issue, we propose equations for adaptive life strategies under r/K selection with density effects absent or present. The equations detail not only the adaptive life history but also the population dynamics. Furthermore, the equations can incorporate temporal individual differences, which are referred to as internal stochasticity. Our framework reveals that maximizing density effects is an evolutionarily stable strategy related to the carrying capacity. A significant consequence of our analysis is that adaptive strategies under both selections maximize an identical function, providing both the population growth rate and the carrying capacity. We apply our method to an optimal foraging problem in a semelparous species model and demonstrate that the adaptive strategy yields a lower intrinsic growth rate as well as a lower basic reproductive number than those obtained with other strategies. This study proposes that the diversity of life strategies arises due to the effects of density and internal stochasticity.
Making sense of implementation theories, models and frameworks
Nilsen, Per
2015-01-01
.... The aim of this article is to propose a taxonomy that distinguishes between different categories of theories, models and frameworks in implementation science, to facilitate appropriate selection...
Tantalum strength model incorporating temperature, strain rate and pressure
Lim, Hojun; Battaile, Corbett; Brown, Justin; Lane, Matt
Tantalum is a body-centered-cubic (BCC) refractory metal that is widely used in many applications in high-temperature, high-strain-rate and high-pressure environments. In this work, we propose a physically based strength model for tantalum that incorporates the effects of temperature, strain rate and pressure. A constitutive model for single-crystal tantalum is developed based on dislocation kink-pair theory and calibrated to measurements on single-crystal specimens. The model is then used to predict deformations of single- and polycrystalline tantalum. In addition, the proposed strength model is implemented into Sandia's ALEGRA solid dynamics code to predict plastic deformations of tantalum in engineering-scale applications at extreme conditions, e.g. Taylor impact tests and the Z machine's high-pressure ramp compression tests, and the results are compared with available experimental data. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
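The temperature and strain-rate dependence described above can be sketched with a generic thermally activated (kink-pair-style) flow rule of the Kocks-Mecking form. This is a minimal illustration of the functional shape only; the barrier parameters below are made up, not Sandia's calibrated tantalum values, and pressure dependence is omitted.

```python
import math

KB = 8.617e-5  # Boltzmann constant in eV/K

def flow_stress(T, strain_rate, dH0=1.5, tau0=1000.0,
                gamma0=1.0e7, p=0.5, q=1.5):
    """Thermally activated (kink-pair-style) flow stress in MPa.

    Inverts strain_rate = gamma0 * exp(-dH(tau)/kT) with the common
    barrier profile dH(tau) = dH0 * (1 - (tau/tau0)**p)**q.
    All parameter values here are illustrative.
    """
    dH = KB * T * math.log(gamma0 / strain_rate)  # activation enthalpy, eV
    if dH >= dH0:
        return 0.0  # thermal activation alone suffices (athermal limit)
    return tau0 * (1.0 - (dH / dH0) ** (1.0 / q)) ** (1.0 / p)
```

The form captures the two trends the abstract relies on: flow stress drops as temperature rises and grows as the imposed strain rate increases.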
Bottino, P.J.; Sparrow, A.H.; Schwemmer, S.S.; Thompson, K.H.
1975-01-01
Germinating seeds of barley were irradiated with ¹³⁷Cs gamma rays at various combinations of total exposure (400-3200 R) and exposure rate (30-24,000 R/hr). Seedling height was measured 5 days after the initiation of irradiation and the various levels of growth inhibition produced by each combination of treatments were determined. The results obtained ranged from no effect on growth to 100 percent growth inhibition. Growth inhibition curves based on both total exposure and exposure rate were constructed. The exposures required to produce 20 and 35 percent growth inhibition at each exposure rate were determined, 35 percent growth inhibition being the highest level that could be determined over the entire range of rates used (20 percent growth inhibition was used for comparative purposes). For both levels of growth inhibition, as exposure rate increased (or, concomitantly, as exposure time decreased), the total exposure required to produce the end point decreased (effectiveness increased) as a straight-line relationship on a double logarithmic plot between 30 and 1500 R/hr (0.03 to 0.3 hr exposure time). Above 1500 R/hr, further increases in exposure rate (or decreases in exposure time) increased the total exposure required for a given effect, i.e., effectiveness decreased. Conversion of exposure rate to exposure time demonstrates that this point of change in effectiveness occurs well within one mitotic cycle. These results are discussed with regard to current dose-rate theory and are at least partially consistent therewith. A straight-line dependency of the exposure rate producing maximum growth inhibition on total exposure is shown. The point at which the combinations of exposure and exposure rate for 35 percent growth inhibition occur is restricted to barley and may differ for other species. This may depend on chromosome size or DNA content and/or the mitotic cycle time characteristic of a species. (auth)
A CVAR scenario for a standard monetary model using theory-consistent expectations
Juselius, Katarina
2017-01-01
A theory-consistent CVAR scenario describes a set of testable regularities capturing basic assumptions of the theoretical model. Using this concept, the paper considers a standard model for exchange rate determination and shows that all assumptions about the model's shock structure and steady...
Information Theory: a Multifaceted Model of Information
Mark Burgin
2003-06-01
A contradictory and paradoxical situation that currently exists in information studies can be improved by the introduction of a new information approach, called the general theory of information. The main achievement of the general theory of information is the explication of a relevant and adequate definition of information. This theory is built as a system of two classes of principles (ontological and sociological) and their consequences. Axiological principles, which explain how to measure and evaluate information and information processes, are presented in the second section of this paper. These principles systematize and unify different approaches, existing as well as possible, to the construction and utilization of information measures. Examples of such measures are Shannon's quantity of information, the algorithmic quantity of information, and the volume of information. It is demonstrated that all other known directions of information theory may be treated within the general theory of information as particular cases.
Hypergame Theory: A Model for Conflict, Misperception, and Deception
Nicholas S. Kovach
2015-01-01
When dealing with conflicts, game theory and decision theory can be used to model the interactions of the decision-makers. To date, game theory and decision theory have received considerable modeling focus, while hypergame theory has not. A metagame, known as a hypergame, occurs when one player does not know or fully understand all the strategies of a game. Hypergame theory extends the advantages of game theory by allowing a player to outmaneuver an opponent and obtain a more preferred outcome with a higher utility. The ability to outmaneuver an opponent arises in a hypergame because the different views (perception or deception) of opponents are captured in the model, through the incorporation of information unknown to other players (misperception or intentional deception). The hypergame model provides more accurate solutions for complex theoretic modeling of conflicts than game theory and excels where perception or information differences exist between players. This paper explores the current research in hypergame theory and presents a broad overview of the historical literature on hypergame theory.
Rate theory of solvent exchange and kinetics of Li(+) - BF4 (-)/PF6 (-) ion pairs in acetonitrile.
Dang, Liem X; Chang, Tsun-Mei
2016-09-07
In this paper, we describe our efforts to apply rate theories in studies of solvent exchange around Li(+) and the kinetics of ion pairing in lithium-ion batteries (LIBs). We report one of the first computer simulations of the exchange dynamics around solvated Li(+) in acetonitrile (ACN), which is a common solvent used in LIBs. We also provide details of the ion-pairing kinetics of Li(+)-[BF4] and Li(+)-[PF6] in ACN. Using our polarizable force-field models and employing classical rate theories of chemical reactions, we examine the ACN exchange process between the first and second solvation shells around Li(+). We calculate exchange rates using transition state theory and weight them with transmission coefficients determined by the reactive flux method, the Impey-Madden-McDonald approach, and Grote-Hynes theory. We found the relaxation times changed from 180 ps to 4600 ps and from 30 ps to 280 ps for the Li(+)-[BF4] and Li(+)-[PF6] ion pairs, respectively. These results confirm that the solvent response to the kinetics of ion pairing is significant. Our results also show that, in addition to affecting the free energy of solvation in ACN, the anion type should also significantly influence the kinetics of ion pairing. These results will increase our understanding of the thermodynamic and kinetic properties of LIB systems.
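The weighting step described above can be sketched with an Eyring-form TST rate multiplied by a transmission coefficient. This is a schematic of the bookkeeping only; the barrier height and kappa value below are illustrative, not the paper's PMF-derived numbers.

```python
import math

KB = 1.380649e-23    # Boltzmann constant, J/K
H  = 6.62607015e-34  # Planck constant, J*s
R  = 8.314462        # gas constant, J/(mol*K)

def k_tst(dG_act, T=298.15):
    """Eyring transition-state-theory rate (1/s); dG_act in J/mol."""
    return (KB * T / H) * math.exp(-dG_act / (R * T))

def residence_time(dG_act, kappa=1.0, T=298.15):
    """Mean residence time when the TST rate is weighted by a transmission
    coefficient kappa (e.g. from reactive flux or Grote-Hynes theory),
    with kappa <= 1 accounting for barrier recrossings."""
    return 1.0 / (kappa * k_tst(dG_act, T))

# A ~17 kJ/mol barrier gives residence times of hundreds of picoseconds;
# kappa < 1 lengthens them, in the spirit of the weighting described above.
tau_tst = residence_time(17.4e3)             # kappa = 1
tau_gh  = residence_time(17.4e3, kappa=0.5)  # recrossings included
```

Because kappa only rescales the TST rate, halving it exactly doubles the predicted residence time; the physical content lies in how kappa is computed from the solvent dynamics.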
Empirical Tests of the Assumptions Underlying Models for Foreign Exchange Rates.
1984-03-01
Martinengo (1980) extends a model by Dornbusch (1976) in which market equilibrium is formalized in terms of interest rates, the level of prices, public... Dornbusch, R., "The Theory of Flexible Exchange Rate Regimes and Macroeconomic Policy", Scandinavian Journal of Economics, 78, 1976, pp. 255.
MULTI-FLEXIBLE SYSTEM DYNAMIC MODELING THEORY AND APPLICATION
仲昕; 周兵; 杨汝清
2001-01-01
The flexible-body modeling theory was demonstrated, with an example of modeling a kind of automobile front suspension as a multi-flexible system. Finally, it is shown that the simulation results of the multi-flexible dynamic model agree with road test data more closely than those of the multi-rigid dynamic model. This fully demonstrates that modeling with multi-flexible body theory is necessary and effective.
Mean-field theory of a recurrent epidemiological model.
Nagy, Viktor
2009-06-01
Our purpose is to provide a mean-field theory for the discrete time-step susceptible-infected-recovered-susceptible (SIRS) model on uncorrelated networks with arbitrary degree distributions. The effect of network structure, time delays, and infection rate on the stability of oscillating and fixed point solutions is examined through analysis of discrete time mean-field equations. Consideration of two scenarios for disease contagion demonstrates that the manner in which contagion is transmitted from an infected individual to a contacted susceptible individual is of primary importance. In particular, the manner of contagion transmission determines how the degree distribution affects model behavior. We find excellent agreement between our theoretical results and numerical simulations on networks with large average connectivity.
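A minimal homogeneous sketch of such a discrete-time SIRS mean-field iteration is shown below. This is a simplified memoryless variant (no explicit time delays or degree distribution, only an average degree), and the parameter values are illustrative, not from the paper.

```python
def sirs_step(s, i, r, beta, mu, gamma, k_avg):
    # A susceptible with k_avg contacts is infected with prob 1-(1-beta*i)^k_avg;
    # infected recover at rate mu; recovered lose immunity at rate gamma.
    p_inf = 1.0 - (1.0 - beta * i) ** k_avg
    return (s - p_inf * s + gamma * r,
            i + p_inf * s - mu * i,
            r + mu * i - gamma * r)

def run(beta, mu=0.2, gamma=0.05, k_avg=4, steps=2000):
    """Iterate the mean-field map and return the final infected fraction."""
    s, i, r = 0.99, 0.01, 0.0
    for _ in range(steps):
        s, i, r = sirs_step(s, i, r, beta, mu, gamma, k_avg)
    return i

endemic = run(beta=0.20)  # above threshold (roughly k_avg*beta/mu > 1)
extinct = run(beta=0.01)  # below threshold: infection dies out
```

The two runs illustrate the threshold behavior the mean-field analysis is after: above threshold the map settles into an endemic state, below it the infected fraction decays to zero.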
Renfree, Andrew; Martin, Louise; Micklewright, Dominic; St Clair Gibson, Alan
2014-02-01
Successful participation in competitive endurance activities requires continual regulation of muscular work rate in order to maximise physiological performance capacities, meaning that individuals must make numerous decisions with regards to the muscular work rate selected at any point in time. Decisions relating to the setting of appropriate goals and the overall strategic approach to be utilised are made prior to the commencement of an event, whereas tactical decisions are made during the event itself. This review examines current theories of decision-making in an attempt to explain the manner in which regulation of muscular work is achieved during athletic activity. We describe rational and heuristic theories, and relate these to current models of regulatory processes during self-paced exercise in an attempt to explain observations made in both laboratory and competitive environments. Additionally, we use rational and heuristic theories in an attempt to explain the influence of the presence of direct competitors on the quality of the decisions made during these activities. We hypothesise that although both rational and heuristic models can plausibly explain many observed behaviours in competitive endurance activities, the complexity of the environment in which such activities occur would imply that effective rational decision-making is unlikely. However, at present, many proposed models of the regulatory process share similarities with rational models. We suggest enhanced understanding of the decision-making process during self-paced activities is crucial in order to improve the ability to understand regulation of performance and performance outcomes during athletic activity.
Eye growth and myopia development: Unifying theory and Matlab model.
Hung, George K; Mahadas, Kausalendra; Mohammad, Faisal
2016-03-01
The aim of this article is to present an updated unifying theory of the mechanisms underlying eye growth and myopia development. A series of model simulation programs were developed to illustrate the mechanism of eye growth regulation and myopia development. Two fundamental processes are presumed to govern the relationship between physiological optics and eye growth: genetically pre-programmed signaling and blur feedback. The cornea/lens is considered to have only a genetically pre-programmed component, whereas eye growth is considered to have both a genetically pre-programmed and a blur feedback component. Moreover, based on the Incremental Retinal-Defocus Theory (IRDT), the rate of change of blur size provides the direction for blur-driven regulation. The various factors affecting eye growth are shown in five simulations: (1) unregulated eye growth: blur feedback is rendered ineffective, as in the case of form deprivation, so there is only genetically pre-programmed eye growth, generally resulting in myopia; (2) regulated eye growth: blur feedback regulation demonstrates the emmetropization process, with abnormally excessive or reduced eye growth leading to myopia and hyperopia, respectively; (3) repeated near-far viewing: simulation of the large-to-small change in blur size as seen in the accommodative stimulus/response function, and via IRDT as well as nearwork-induced transient myopia (NITM), leading to the development of myopia; (4) neurochemical bulk flow and diffusion: release of dopamine from the inner plexiform layer of the retina, and the subsequent diffusion and relay of the neurochemical cascade, show that a decrease in dopamine results in a reduction of the proteoglycan synthesis rate, which leads to myopia; (5) Simulink model: a model of genetically pre-programmed signaling and blur feedback components that allows for different input functions to simulate experimental manipulations that result in hyperopia, emmetropia, and myopia. Together these model simulation programs illustrate the key mechanisms underlying eye growth regulation and myopia development.
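The contrast between simulations (1) and (2) can be illustrated with a toy feedback loop: axial length grows with a pre-programmed term plus a blur-driven correction. This is only a caricature of the emmetropization idea, not the authors' Simulink model; every parameter value is hypothetical.

```python
def grow_eye(steps=200, genetic_rate=0.02, gain=0.1,
             target_length=24.0, length0=22.0):
    """Axial length update: pre-programmed growth plus blur-driven feedback.

    gain = 0 mimics form deprivation (blur feedback ineffective), letting
    the genetic component overshoot the focal plane -> myopia.
    """
    L = length0
    for _ in range(steps):
        defocus = target_length - L   # sign supplies the IRDT-style direction
        L += genetic_rate + gain * defocus
    return L

regulated = grow_eye()           # emmetropizes close to the focal plane
deprived  = grow_eye(gain=0.0)   # unregulated growth overshoots (myopia)
```

With feedback the length settles near the fixed point target_length + genetic_rate/gain (24.2 mm here); without it the genetic term alone drives the eye well past the 24 mm focal plane.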
The Standard Model is Natural as Magnetic Gauge Theory
Sannino, Francesco
2011-01-01
We suggest that the Standard Model can be viewed as the magnetic dual of a gauge theory featuring only fermionic matter content. We show this by first introducing a Pati-Salam-like extension of the Standard Model and then relating it to a possible dual electric theory featuring only fermionic matter. The absence of scalars in the electric theory indicates that the associated magnetic theory is free from quadratic divergences. Our novel solution to the Standard Model hierarchy problem also leads to a new insight into the mystery of the observed number of fundamental fermion generations.
Temperature Sensitivity as a Microbial Trait Using Parameters from Macromolecular Rate Theory
Charlotte Jean Alster
2016-11-01
The activity of soil microbial extracellular enzymes is strongly controlled by temperature, yet the degree to which temperature sensitivity varies by microbe and enzyme type is unclear. Such information would allow soil microbial enzymes to be incorporated in a traits-based framework to improve prediction of ecosystem response to global change. If temperature sensitivity varies for specific soil enzymes, then determining the underlying causes of variation in temperature sensitivity of these enzymes will provide fundamental insights for predicting nutrient dynamics belowground. In this study, we characterized how both microbial taxonomic variation and substrate type affect temperature sensitivity. We measured β-glucosidase, leucine aminopeptidase, and phosphatase activities at six temperatures: 4, 11, 25, 35, 45, and 60°C, for seven different soil microbial isolates. To calculate temperature sensitivity, we employed two models: Arrhenius, which predicts an exponential increase in reaction rate with temperature, and Macromolecular Rate Theory (MMRT), which predicts the rate to peak and then decline as temperature increases. We found MMRT provided a more accurate fit and allowed for more nuanced interpretation of temperature sensitivity in all of the enzyme × isolate combinations tested. Our results revealed that both the enzyme type and the soil isolate type explain variation in parameters associated with temperature sensitivity. Because we found temperature sensitivity to be an inherent and variable property of an enzyme, we argue that it can be incorporated as a microbial functional trait, but only when using the MMRT definition of temperature sensitivity. We show that the Arrhenius metrics of temperature sensitivity are overly sensitive to test conditions, with activation energy changing depending on the temperature range within which it was calculated. Thus, we propose the use of the MMRT definition of temperature sensitivity for accurate interpretation of temperature sensitivity.
Temperature Sensitivity as a Microbial Trait Using Parameters from Macromolecular Rate Theory.
Alster, Charlotte J; Baas, Peter; Wallenstein, Matthew D; Johnson, Nels G; von Fischer, Joseph C
2016-01-01
The activity of soil microbial extracellular enzymes is strongly controlled by temperature, yet the degree to which temperature sensitivity varies by microbe and enzyme type is unclear. Such information would allow soil microbial enzymes to be incorporated in a traits-based framework to improve prediction of ecosystem response to global change. If temperature sensitivity varies for specific soil enzymes, then determining the underlying causes of variation in temperature sensitivity of these enzymes will provide fundamental insights for predicting nutrient dynamics belowground. In this study, we characterized how both microbial taxonomic variation and substrate type affect temperature sensitivity. We measured β-glucosidase, leucine aminopeptidase, and phosphatase activities at six temperatures: 4, 11, 25, 35, 45, and 60°C, for seven different soil microbial isolates. To calculate temperature sensitivity, we employed two models: Arrhenius, which predicts an exponential increase in reaction rate with temperature, and Macromolecular Rate Theory (MMRT), which predicts the rate to peak and then decline as temperature increases. We found MMRT provided a more accurate fit and allowed for more nuanced interpretation of temperature sensitivity in all of the enzyme × isolate combinations tested. Our results revealed that both the enzyme type and the soil isolate type explain variation in parameters associated with temperature sensitivity. Because we found temperature sensitivity to be an inherent and variable property of an enzyme, we argue that it can be incorporated as a microbial functional trait, but only when using the MMRT definition of temperature sensitivity. We show that the Arrhenius metrics of temperature sensitivity are overly sensitive to test conditions, with activation energy changing depending on the temperature range within which it was calculated. Thus, we propose the use of the MMRT definition of temperature sensitivity for accurate interpretation of temperature sensitivity.
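The qualitative difference between the two models can be sketched directly from their standard functional forms: Arrhenius rises monotonically, while MMRT with a negative activation heat capacity curves over and peaks. The parameter values below are illustrative, not fitted to the paper's enzyme data.

```python
import math

R = 8.314e-3           # gas constant, kJ/(mol*K)
KB_OVER_H = 2.0837e10  # Boltzmann/Planck, 1/(K*s)

def arrhenius(T, A=1.0e12, Ea=50.0):
    """Arrhenius rate: monotonically increasing with T (Ea in kJ/mol)."""
    return A * math.exp(-Ea / (R * T))

def mmrt(T, dH0=50.0, dS0=0.0, dCp=-5.0, T0=298.15):
    """MMRT rate: a negative activation heat capacity dCp (kJ/mol/K)
    makes ln(k) curve over and peak at an optimum temperature."""
    dH = dH0 + dCp * (T - T0)
    dS = dS0 + dCp * math.log(T / T0)
    return KB_OVER_H * T * math.exp(-dH / (R * T) + dS / R)

temps = list(range(280, 341))
k_mmrt = [mmrt(T) for T in temps]
# Analytic optimum: T_opt = (dCp*T0 - dH0)/(R + dCp), about 308.7 K here.
```

Setting d(ln k)/dT = 0 for the MMRT expression gives the optimum quoted in the comment, so the interior peak of `k_mmrt` is a direct check on the algebra.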
Introducing AORN's new model for evidence rating.
Spruce, Lisa; Van Wicklin, Sharon A; Hicks, Rodney W; Conner, Ramona; Dunn, Debra
2014-02-01
Nurses today are expected to implement evidence-based practices in the perioperative setting to assess and implement practice changes. All evidence-based practice begins with a question, a practice problem to address, or a needed change that is identified. To assess the question, a literature search is performed and relevant literature is identified and appraised. The types of evidence used to inform practice can be scientific research (eg, randomized controlled trials, systematic reviews) or nonresearch evidence (eg, regulatory and accrediting agency requirements, professional association practice standards and guidelines, quality improvement project reports). The AORN recommended practices are a synthesis of related knowledge on a given topic, and the authorship process begins with a systematic review of the literature conducted in collaboration with a medical librarian. At least two appraisers independently evaluate the applicable literature for quality and strength by using the AORN Research Appraisal Tool and AORN Non-Research Appraisal Tool. To collectively appraise the evidence supporting particular practice recommendations, the AORN recommended practices authors have implemented a new evidence rating model that is appropriate for research and nonresearch literature and that is relevant to the perioperative setting.
Modeling Equity for Alternative Water Rate Structures
Griffin, R.; Mjelde, J.
2011-12-01
The rising popularity of increasing block rates for urban water runs counter to mainstream economic recommendations, yet decision makers in rate design forums are attracted to the notion of higher prices for larger users. Among economists, it is widely appreciated that uniform rates have stronger efficiency properties than increasing block rates, especially when volumetric prices incorporate intrinsic water value. Yet, except for regions where water market purchases have forced urban authorities to include water value in water rates, economic arguments have penetrated policy only weakly. In this presentation, recent evidence is reviewed regarding long-term trends in urban rate structures while observing the economic principles pertaining to these choices. The main objective is to investigate the equity of increasing block rates as contrasted with uniform rates for a representative city. Using data from four Texas cities, household water demand is established as a function of marginal price, income, weather, number of residents, and property characteristics. Two alternative rate proposals are designed on the basis of recent experiences for both water and wastewater rates. After specifying a reasonable number (~200) of diverse households populating the city and parameterizing each household's characteristics, every household's consumption selections are simulated for twelve months. This procedure is repeated for both rate systems. Monthly water and wastewater bills are also computed for each household. Most importantly, while balancing the budget of the city utility, we compute the effect of switching rate structures on the welfare of households of differing types. Some of the empirical findings are as follows. In the absence of water scarcity, households of opposing characters such as low versus high income do not have strong preferences regarding rate structure selection. This changes as water scarcity rises and as water's opportunity costs are allowed to enter volumetric rates.
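The billing mechanics behind the comparison can be sketched in a few lines: under increasing block rates the marginal price steps up with usage, so small users pay less and large users pay more than under an equivalent uniform rate. The tier sizes and prices below are hypothetical, not from the four-city data.

```python
def block_bill(usage, tiers):
    """Monthly bill under increasing block rates.

    tiers: list of (block_size, price_per_unit); the last block should be
    open-ended, i.e. use float('inf') for its size.
    """
    bill, remaining = 0.0, usage
    for size, price in tiers:
        used = min(remaining, size)
        bill += used * price
        remaining -= used
        if remaining <= 0.0:
            break
    return bill

tiers = [(10.0, 2.0), (10.0, 3.0), (float("inf"), 5.0)]  # hypothetical $/unit
uniform_price = 3.0

small_user = block_bill(8.0, tiers)    # stays within the first block
large_user = block_bill(30.0, tiers)   # reaches the top block
```

Comparing each bill with usage times the uniform price shows the redistribution at the heart of the equity question: the block structure shifts cost from small to large users.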
Study on landslide hazard zonation based on factor weighting-rating theory in Slanic Prahova
Maftei, R.-M.; Vina, G.; Filipciuc, C.
2012-04-01
Studying the risks caused by landslides is important in the context of forecasting their triggering. This study mainly integrates background data related to historical and environmental factors as well as current triggering factors. The theory of zoning hazard caused by landslides, Landslide Hazard Zonation (LHZ), appeared in the 1960s. In this period the U.S. and many European countries began to use other triggering factors, besides the slope factor, in achieving hazard zoning. This theory has progressed due to the development of remote sensing and GIS technology, which were used to develop and analyse methods and techniques consisting of combining data from different sources. The study of an area involves analysing the geographical position data, estimating the surface, the type of terrain and the altitude, identifying the landslides in the area, and some summary geological data. Data sources. The data used in this study are: · Landsat 7 satellite images at 30 m spatial resolution, from which the vegetation index is derived; · topographic maps at 1:25 000, from which the digital elevation model (DEM) is obtained (used to calculate the slope and the relative altitude of the terrain); · geological maps at 1:50 000. Studied factors. The main factors used and studied in achieving landslide hazard zoning are: the rate of displacement; the angle of slope; lithology; the vegetation index, i.e. ground coverage by vegetation (NDVI); the river network; and the structural factor. 1. The normalized vegetation index is calculated from Landsat ETM satellite images. This vegetation factor can be either a principal or a secondary trigger factor in landslides: in areas devoid of vegetation, landslides are triggered more often than where vegetation coverage is greater. 2. The factors derived from the numerical model are the slope and the relative altitude. This operation was made using the 1:25 000 topographic map, from which the contour lines were extracted by digitization.
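The factor weighting-rating idea reduces, per map cell, to an NDVI computation plus a weighted sum of factor class ratings. The factor names, rating scale, and weights below are hypothetical placeholders, not the study's calibrated values.

```python
def ndvi(nir, red):
    """Normalized difference vegetation index from NIR and red reflectance."""
    return (nir - red) / (nir + red)

def hazard_score(ratings, weights):
    """Weighted factor-rating score for one map cell.

    Each factor gets a class rating (e.g. 1 = stable ... 4 = highly
    unstable) and an importance weight; higher scores mean more hazard.
    """
    total_w = sum(weights.values())
    return sum(ratings[f] * weights[f] for f in ratings) / total_w

weights = {"slope": 0.30, "lithology": 0.25, "ndvi": 0.20,
           "drainage": 0.15, "structure": 0.10}
cell = {"slope": 4, "lithology": 3, "ndvi": 4, "drainage": 2, "structure": 2}
score = hazard_score(cell, weights)  # thresholded later into hazard zones
```

In a GIS workflow this score would be computed for every raster cell and then classified (e.g. low/medium/high) to produce the hazard zonation map.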
Summary of papers presented in the Theory and Modelling session
Lin-Liu Y.R.; Westerhof E.
2012-01-01
A total of 14 contributions were presented in the Theory and Modelling sessions at EC-17, and one Theory and Modelling paper was included in each of the ITER ECRH and ECE sessions. Three papers were in the area of nonlinear physics, discussing parametric processes accompanying ECRH. Eight papers were based on the quasi-linear theory of wave heating and current drive; three of these addressed the application of ECCD for NTM stabilization. Two papers considered scattering of EC waves by edge density fluctuations.
Modeling And Forecasting Exchange-Rate Shocks
Andreou, A. S.; Zombanakis, George A.; Likothanassis, S. D.; Georgakopoulos, E.
1998-01-01
This paper considers the extent to which neural network methodology can be used to forecast exchange-rate shocks. Four major foreign currency exchange rates against the Greek Drachma, as well as the overnight interest rate in the Greek market, are employed in an attempt to predict the extent to which the local currency may be suffering an attack. The forecasting is extended to the estimation of future exchange rates and interest rates. The MLP proved to be highly ...
Dark matter relics and the expansion rate in scalar-tensor theories
Dutta, Bhaskar; Jimenez, Esteban; Zavala, Ivonne
2017-06-01
We study the impact of a modified expansion rate on the dark matter relic abundance in a class of scalar-tensor theories. The scalar-tensor theories we consider are motivated by string theory constructions, which feature matter coupled to the scalar both conformally and disformally. We investigate the effects of such a conformal coupling on the dark matter relic abundance for a wide range of initial conditions, masses and cross-sections. We find that, exploiting all possible initial conditions, the annihilation cross-section required to satisfy the dark matter content can differ from the thermally averaged cross-section in the standard case. We also study the expansion rate in the disformal case and find that physically relevant solutions require a nontrivial relation between the conformal and disformal functions. We study the effects of the disformal coupling in an explicit example where the disformal function is quadratic.
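The mechanism by which a modified expansion rate changes the relic abundance can be sketched with the standard freeze-out equation for the comoving yield, dY/dx = -(λ/x²)(Y² - Y_eq²): a faster expansion effectively reduces λ, so freeze-out happens earlier and more relic survives. The prefactors and λ value below are schematic, not from the paper's scalar-tensor solutions.

```python
import math

def y_eq(x):
    # Equilibrium yield for a nonrelativistic relic (prefactor schematic)
    return 0.145 * x ** 1.5 * math.exp(-x)

def relic_yield(lam, speedup=1.0, x0=5.0, x_end=50.0, dx=1e-3):
    """Integrate dY/dx = -(lam_eff/x^2)*(Y^2 - Yeq^2), with x = m/T.

    lam ~ <sigma v> * s / H at x = 1 in standard cosmology; a modified
    expansion rate H -> speedup * H rescales lam_eff = lam / speedup.
    Simple forward-Euler integration, starting on the equilibrium track.
    """
    lam_eff = lam / speedup
    x, y = x0, y_eq(x0)
    while x < x_end:
        y -= (lam_eff / (x * x)) * (y * y - y_eq(x) ** 2) * dx
        x += dx
    return y

y_std  = relic_yield(1.0e5)              # standard expansion history
y_fast = relic_yield(1.0e5, speedup=5.0) # faster expansion: earlier freeze-out
```

Since the surviving yield scales roughly as x_f/λ_eff, the sped-up history leaves a larger relic abundance, which is why a given dark matter content can be matched with a different annihilation cross-section than in the standard case.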
The logical foundations of scientific theories languages, structures, and models
Krause, Decio
2016-01-01
This book addresses the logical aspects of the foundations of scientific theories. Even though the relevance of formal methods in the study of scientific theories is now widely recognized and regaining prominence, the issues covered here are still not generally discussed in philosophy of science. The authors focus mainly on the role played by the underlying formal apparatuses employed in the construction of the models of scientific theories, relating the discussion to the so-called semantic approach to scientific theories. The book describes the role played by this metamathematical framework in three main aspects: considerations of formal languages employed to axiomatize scientific theories, the role of the axiomatic method itself, and the way set-theoretical structures, which play the role of the models of theories, are developed. The authors also discuss the differences and philosophical relevance of the two basic ways of axiomatizing a scientific theory, namely Patrick Suppes’ set theoretical predicate...
Solid modeling and applications rapid prototyping, CAD and CAE theory
Um, Dugan
2016-01-01
The lessons in this fundamental text equip students with the theory of Computer Assisted Design (CAD), Computer Assisted Engineering (CAE), the essentials of Rapid Prototyping, as well as practical skills needed to apply this understanding in real world design and manufacturing settings. The book covers three main areas: CAD, CAE, and Rapid Prototyping, each enriched with numerous examples and exercises. In the CAD section, Professor Um outlines the basic concepts of geometric modeling, Hermite and Bezier spline curve theory, and 3-dimensional surface theories as well as rendering theory. The CAE section explores mesh generation theory, matrix notation for FEM, the stiffness method, and truss equations. And in Rapid Prototyping, the author illustrates stereolithography theory and introduces popular modern RP technologies. Solid Modeling and Applications: Rapid Prototyping, CAD and CAE Theory is ideal for university students in various engineering disciplines as well as design engineers involved in product...
Generalization of the Activated Complex Theory of Reaction Rates. II. Classical Mechanical Treatment
Marcus, R. A.
1964-01-01
In its usual classical form activated complex theory assumes a particular expression for the kinetic energy of the reacting system -- one associated with a rectilinear motion along the reaction coordinate. The derivation of the rate expression given in the present paper is based on the general kinetic energy expression.
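For orientation, the rate expression of conventional activated complex (transition-state) theory, which this paper generalizes by relaxing the rectilinear reaction-coordinate assumption, is the standard Eyring form (a textbook result, not quoted from the abstract):

```latex
k \;=\; \kappa\,\frac{k_{B}T}{h}\,\exp\!\left(-\frac{\Delta G^{\ddagger}}{k_{B}T}\right)
```

where \(\kappa\) is the transmission coefficient and \(\Delta G^{\ddagger}\) the free energy of activation.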
Matrix models, topological strings, and supersymmetric gauge theories
Dijkgraaf, Robbert; Vafa, Cumrun
2002-11-01
We show that B-model topological strings on local Calabi-Yau threefolds are large-N duals of matrix models, which in the planar limit naturally give rise to special geometry. These matrix models directly compute F-terms in an associated N=1 supersymmetric gauge theory, obtained by deforming N=2 theories by a superpotential term that can be directly identified with the potential of the matrix model. Moreover, by tuning some of the parameters of the geometry in a double scaling limit we recover (p, q) conformal minimal models coupled to 2d gravity, thereby relating non-critical string theories to type II superstrings on Calabi-Yau backgrounds.
Introduction to gauge theories and the Standard Model
de Wit, Bernard
1995-01-01
The conceptual basis of gauge theories is introduced to enable the construction of generic models. Spontaneous symmetry breaking is discussed and its relevance for the renormalization of theories with massive vector fields is explained. Subsequently the standard model is discussed. When time permits we will address more practical questions that arise in the evaluation of quantum corrections.
A Quantitative Causal Model Theory of Conditional Reasoning
Fernbach, Philip M.; Erb, Christopher D.
2013-01-01
The authors propose and test a causal model theory of reasoning about conditional arguments with causal content. According to the theory, the acceptability of modus ponens (MP) and affirming the consequent (AC) reflect the conditional likelihood of causes and effects based on a probabilistic causal model of the scenario being judged. Acceptability…
The Properties of Model Selection when Retaining Theory Variables
Hendry, David F.; Johansen, Søren
Economic theories are often fitted directly to data to avoid possible model selection biases. We show that embedding a theory model that specifies the correct set of m relevant exogenous variables, x{t}, within the larger set of m+k candidate variables, (x{t},w{t}), then selection over the second...
Funk, Alexander M; Harvey, Peter; Finney, Katie-Louise N A; Fox, Mark A; Kenwright, Alan M; Rogers, Nicola J; Senanayake, P Kanthi; Parker, David
2015-07-07
Measurements of the proton NMR paramagnetic relaxation rates for several series of isostructural lanthanide(III) complexes have been performed in aqueous solution over the field range 1.0 to 16.5 Tesla. The field dependence has been modeled using Bloch-Redfield-Wangsness theory, allowing values for the electronic relaxation time, T1e, and the magnetic susceptibility, μeff, to be estimated. Anomalous relaxation rate profiles were obtained, notably for erbium and thulium complexes of low symmetry 8-coordinate aza-phosphinate complexes. Such behaviour challenges accepted theory and can be interpreted in terms of changes in T1e values that are a function of the transient ligand field induced by solvent collision and vary considerably between Ln(3+) ions, along with magnetic susceptibilities that deviate significantly from free-ion values.
To Save or to Consume: Linking Growth Theory with the Keynesian Model
Kwok, Yun-kwong
2007-01-01
In the neoclassical growth theory, higher saving rate gives rise to higher output per capita. However, in the Keynesian model, higher saving rate causes lower consumption, which may lead to a recession. Students may ask, "Should we save or should we consume?" In most of the macroeconomics textbooks, economic growth and Keynesian economics are in…
Application of multidimensional item response theory models to longitudinal data
Marvelde, te Janneke M.; Glas, Cees A.W.; Van Landeghem, Georges; Van Damme, Jan
2006-01-01
The application of multidimensional item response theory (IRT) models to longitudinal educational surveys where students are repeatedly measured is discussed and exemplified. A marginal maximum likelihood (MML) method to estimate the parameters of a multidimensional generalized partial credit model
Item response theory modeling with nonignorable missing data
Pimentel, Jonald L.
2005-01-01
This thesis discusses methods to detect nonignorable missing data and methods to adjust for the bias caused by nonignorable missing data, both by introducing a model for the missing data indicator using item response theory (IRT) models.
Theory of stellar convection - II. First stellar models
Pasetto, S.; Chiosi, C.; Chiosi, E.; Cropper, M.; Weiss, A.
2016-07-01
We present here the first stellar models on the Hertzsprung-Russell diagram, in which convection is treated according to the new scale-free convection theory (SFC theory) by Pasetto et al. The aim is to compare the results of the new theory with those from the classical, calibrated mixing-length (ML) theory to examine differences and similarities. We integrate the equations describing the structure of the atmosphere from the stellar surface down to a few per cent of the stellar mass using both ML theory and SFC theory. The key temperature over pressure gradients, the energy fluxes, and the extension of the convective zones are compared in both theories. The analysis is first made for the Sun and then extended to other stars of different mass and evolutionary stage. The results are adequate: the SFC theory yields convective zones, temperature gradients ∇ and ∇e, and energy fluxes that are very similar to those derived from the `calibrated' MT theory for main-sequence stars. We conclude that the old scale dependent ML theory can now be replaced with a self-consistent scale-free theory able to predict correct results, as it is more physically grounded than the ML theory. Fundamentally, the SFC theory offers a deeper insight of the underlying physics than numerical simulations.
Prejudiced attitude measurement using the Rasch Rating Scale model.
Rojas Tejada, Antonio J; Lozano Rojas, Oscar M; Navas Luque, Marisol; Pérez Moreno, Pedro J
2011-10-01
There have been two basic approaches for the study of minority group prejudice against the majority: to adapt instruments from the majority group, and to use qualitative techniques by analyzing the content of the discourse of the groups involved. Neither of these procedures solves the problem of measuring intergroup attitudes of majorities and minorities in interaction. This study shows the result of a prejudice scale which was developed to measure the attitude of both the minority and majority groups. Prejudice is conceived as an attitude which requires the beliefs or opinions about the out-group, the emotions it elicits, and the behavior or intentional behavior toward it to be known for its evaluation. The innovation in this work is that the psychometric development of the scale was based on the item response theory, and more specifically, the rating scale model.
Large field inflation models from higher-dimensional gauge theories
Furuuchi, Kazuyuki; Koyama, Yoji
2015-02-01
Motivated by the recent detection of B-mode polarization of the CMB by BICEP2, which is possibly of primordial origin, we study large field inflation models which can be obtained from higher-dimensional gauge theories. The constraints from CMB observations on the gauge theory parameters are given, and their naturalness is discussed. Among the models analyzed, Dante's Inferno model turns out to be the most preferred model in this framework.
Large field inflation models from higher-dimensional gauge theories
Furuuchi, Kazuyuki [Manipal Centre for Natural Sciences, Manipal University, Manipal, Karnataka 576104 (India); Koyama, Yoji [Department of Physics, National Tsing-Hua University, Hsinchu 30013, Taiwan R.O.C. (China)
2015-02-23
Motivated by the recent detection of B-mode polarization of the CMB by BICEP2, which is possibly of primordial origin, we study large field inflation models which can be obtained from higher-dimensional gauge theories. The constraints from CMB observations on the gauge theory parameters are given, and their naturalness is discussed. Among the models analyzed, Dante’s Inferno model turns out to be the most preferred model in this framework.
Biplot models applied to cancer mortality rates.
Osmond, C
1985-01-01
"A graphical method developed by Gabriel to display the rows and columns of a matrix is applied to tables of age- and period-specific cancer mortality rates. It is particularly useful when the pattern of age-specific rates changes with time. Trends in age-specific rates and changes in the age distribution are identified as projections. Three examples [from England and Wales] are given."
Model Uncertainty and Exchange Rate Forecasting
Kouwenberg, Roy; Markiewicz, Agnieszka; Verhoeks, Ralph; Zwinkels, Remco
2013-01-01
We propose a theoretical framework of exchange rate behavior where investors focus on a subset of economic fundamentals. We find that any adjustment in the set of predictors used by investors leads to changes in the relation between the exchange rate and fundamentals. We test the validity of this framework via a backward elimination rule which captures the current set of fundamentals that best predicts the exchange rate. Out-of-sample forecasting tests show that the backward elimi...
Factor Model Forecasts of Exchange Rates
Charles Engel; Nelson C. Mark; Kenneth D. West
2012-01-01
We construct factors from a cross section of exchange rates and use the idiosyncratic deviations from the factors to forecast. In a stylized data generating process, we show that such forecasts can be effective even if there is essentially no serial correlation in the univariate exchange rate processes. We apply the technique to a panel of bilateral U.S. dollar rates against 17 OECD countries. We forecast using factors, and using factors combined with any of fundamentals suggested by Taylor r...
Quantum Quenches in Free Field Theory: Universal Scaling at Any Rate
Das, Sumit R; Myers, Robert C
2016-01-01
Quantum quenches display universal scaling in several regimes. For quenches which start from a gapped phase and cross a critical point, with a rate slow compared to the initial gap, many systems obey Kibble-Zurek scaling. More recently, a different scaling behaviour has been shown to occur when the quench rate is fast compared to all other physical scales, but still slow compared to the UV cutoff. We investigate the passage from fast to slow quenches in scalar and fermionic free field theories with time dependent masses for which the dynamics can be solved exactly for all quench rates. We find that renormalized one point functions smoothly cross over between the regimes.
Model Uncertainty and Exchange Rate Forecasting
R.R.P. Kouwenberg (Roy); A. Markiewicz (Agnieszka); R. Verhoeks (Ralph); R.C.J. Zwinkels (Remco)
2013-01-01
We propose a theoretical framework of exchange rate behavior where investors focus on a subset of economic fundamentals. We find that any adjustment in the set of predictors used by investors leads to changes in the relation between the exchange rate and fundamentals. We test the validit
The effect of modelling on drinking rate.
Garlington, W K; Dericco, D A
1977-01-01
Three male college seniors were asked to drink beer at their normal rate in a simulated tavern setting. Each was paired with a confederate, also a male college senior, in an ABACA single subject design. In the baseline conditions, the confederate matched the drinking rate of the subject. Baseline and all subsequent conditions were continued in 1-hr sessions until a stable drinking rate was achieved. In Condition B, the confederate drank either one third more or one third less than the subject's baseline rate. In Condition C, the direction was reversed. All three subjects closely matched the confederate's drinking rate, whether high or low. All subjects reported they were unaware of the true purpose of the study.
Theories, models and urban realities. From New York to Kathmandu
Román Rodríguez González
2004-12-01
At the beginning of the 21st century, there are various social theories that speak of global changes in the history of human civilization. Urban models have gone through obvious changes throughout the last century according to the important transformations proposed by previous general theories. Nevertheless, global diversity contradicts the generalization of these theories and models. From our own simple observations and reflections we arrive at conclusions that distance themselves from the prevailing theory of our civilized world. New York, Delhi, Salvador de Bahia, Bruges, Paris, Cartagena de Indias and Kathmandu still have more internal differences than similarities.
Theories, models and urban realities. From New York to Kathmandu
José Somoza Medina
2004-01-01
At the beginning of the 21st century, there are various social theories that speak of global changes in the history of human civilization. Urban models have gone through obvious changes throughout the last century according to the important transformations proposed by previous general theories. Nevertheless, global diversity contradicts the generalization of these theories and models. From our own simple observations and reflections we arrive at conclusions that distance themselves from the prevailing theory of our civilized world. New York, Delhi, Salvador de Bahia, Bruges, Paris, Cartagena de Indias and Kathmandu still have more internal differences than similarities.
Toric Methods in F-Theory Model Building
Johanna Knapp
2011-01-01
We discuss recent constructions of global F-theory GUT models and explain how to make use of toric geometry to do calculations within this framework. After introducing the basic properties of global F-theory GUTs, we give a self-contained review of toric geometry and introduce all the tools that are necessary to construct and analyze global F-theory models. We will explain how to systematically obtain a large class of compact Calabi-Yau fourfolds which can support F-theory GUTs by using the software package PALP.
General autocatalytic theory and simple model of financial markets
Thuy Anh, Chu; Lan, Nguyen Tri; Viet, Nguyen Ai
2015-06-01
The concept of autocatalytic theory has become a powerful tool for understanding evolutionary processes in complex systems. A generalization of autocatalytic theory is proposed in which the initial element follows some distribution rather than taking a constant value as in the traditional theory. This initial condition implies that the final element may follow a distribution as well. A simple physics model for financial markets is proposed using this general autocatalytic theory. Some general behaviours of the evolution process and the risk moment of a financial market are also investigated in the framework of this simple model.
Bayesian item fit analysis for unidimensional item response theory models.
Sinharay, Sandip
2006-11-01
Assessing item fit for unidimensional item response theory models for dichotomous items has always been an issue of enormous interest, but there exists no unanimously agreed item fit diagnostic for these models, and hence there is room for further investigation of the area. This paper employs the posterior predictive model-checking method, a popular Bayesian model-checking tool, to examine item fit for the above-mentioned models. An item fit plot, comparing the observed and predicted proportion-correct scores of examinees with different raw scores, is suggested. This paper also suggests how to obtain posterior predictive p-values (which are natural Bayesian p-values) for the item fit statistics of Orlando and Thissen that summarize numerically the information in the above-mentioned item fit plots. A number of simulation studies and a real data application demonstrate the effectiveness of the suggested item fit diagnostics. The suggested techniques seem to have adequate power and reasonable Type I error rate, and psychometricians will find them promising.
Petri nets extension to model state-varying failure rates
Lazarova-Molnar, Sanja
2013-01-01
One of the most common assumptions in reliability modeling is a constant failure rate. This has been increasingly changing lately, with significant research moving away from simulation results based on this assumption and deeming constant failure rates inadequate for modeling failures. We focus on state-varying failure rates and extend the formalism of Petri nets to model them. To illustrate our approach we provide an example model that features state-varying failure rates.
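The contrast between constant and state- or age-varying failure rates can be sketched numerically: an exponential lifetime has a constant hazard, while a Weibull lifetime with shape greater than 1 has a hazard that grows with age. This toy comparison (standard library only, parameters invented) illustrates the modeling issue, not the paper's Petri-net formalism:

```python
import random

random.seed(0)
N = 100_000

# Constant hazard lam = 1: exponential lifetimes (the memoryless case).
exp_times = [random.expovariate(1.0) for _ in range(N)]

# Age-dependent hazard h(t) = (k/s) * (t/s)**(k-1) with shape k = 2,
# scale s = 1: the hazard grows linearly with age (Weibull lifetimes).
wei_times = [random.weibullvariate(1.0, 2.0) for _ in range(N)]

mean_exp = sum(exp_times) / N   # theory: 1.0
mean_wei = sum(wei_times) / N   # theory: Gamma(1.5) ~= 0.886
print(round(mean_exp, 2), round(mean_wei, 2))
```

A model that assumes a constant rate would misestimate both the mean lifetime and, more importantly, the timing of failures for components whose hazard depends on their state or age.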
Zhu, Lin-Fa; Kim, Soo; Chattopadhyay, Aditi; Goldberg, Robert K.
2004-01-01
A numerical procedure has been developed to investigate the nonlinear and strain rate dependent deformation response of polymer matrix composite laminated plates under high strain rate impact loadings. A recently developed strength of materials based micromechanics model, incorporating a set of nonlinear, strain rate dependent constitutive equations for the polymer matrix, is extended to account for the transverse shear effects during impact. Four different assumptions of transverse shear deformation are investigated in order to improve the developed strain rate dependent micromechanics model. The validities of these assumptions are investigated using numerical and theoretical approaches. A method to determine through the thickness strain and transverse Poisson's ratio of the composite is developed. The revised micromechanics model is then implemented into a higher order laminated plate theory which is modified to include the effects of inelastic strains. Parametric studies are conducted to investigate the mechanical response of composite plates under high strain rate loadings. Results show the transverse shear stresses cannot be neglected in the impact problem. A significant level of strain rate dependency and material nonlinearity is found in the deformation response of representative composite specimens.
Vispoel, Walter P; Morris, Carrie A; Kilinc, Murat
2017-01-23
Although widely recognized as a comprehensive framework for representing score reliability, generalizability theory (G-theory), despite its potential benefits, has been used sparingly in reporting of results for measures of individual differences. In this article, we highlight many valuable ways that G-theory can be used to quantify, evaluate, and improve psychometric properties of scores. Our illustrations encompass assessment of overall reliability, percentages of score variation accounted for by individual sources of measurement error, dependability of cut-scores for decision making, estimation of reliability and dependability for changes made to measurement procedures, disattenuation of validity coefficients for measurement error, and linkages of G-theory with classical test theory and structural equation modeling. We also identify computer packages for performing G-theory analyses, most of which can be obtained free of charge, and describe how they compare with regard to data input requirements, ease of use, complexity of designs supported, and output produced.
Dimensional reduction of Markov state models from renormalization group theory
Orioli, S.; Faccioli, P.
2016-09-01
Renormalization Group (RG) theory provides the theoretical framework to define rigorous effective theories, i.e., systematic low-resolution approximations of arbitrary microscopic models. Markov state models are shown to be rigorous effective theories for Molecular Dynamics (MD). Based on this fact, we use real space RG to vary the resolution of the stochastic model and define an algorithm for clustering microstates into macrostates. The result is a lower dimensional stochastic model which, by construction, provides the optimal coarse-grained Markovian representation of the system's relaxation kinetics. To illustrate and validate our theory, we analyze a number of test systems of increasing complexity, ranging from synthetic toy models to two realistic applications, built from all-atom MD simulations. The computational cost of computing the low-dimensional model remains affordable on a desktop computer even for thousands of microstates.
Dimensional reduction of Markov state models from renormalization group theory.
Orioli, S; Faccioli, P
2016-09-28
Renormalization Group (RG) theory provides the theoretical framework to define rigorous effective theories, i.e., systematic low-resolution approximations of arbitrary microscopic models. Markov state models are shown to be rigorous effective theories for Molecular Dynamics (MD). Based on this fact, we use real space RG to vary the resolution of the stochastic model and define an algorithm for clustering microstates into macrostates. The result is a lower dimensional stochastic model which, by construction, provides the optimal coarse-grained Markovian representation of the system's relaxation kinetics. To illustrate and validate our theory, we analyze a number of test systems of increasing complexity, ranging from synthetic toy models to two realistic applications, built from all-atom MD simulations. The computational cost of computing the low-dimensional model remains affordable on a desktop computer even for thousands of microstates.
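A minimal sketch of the lumping step such coarse-graining involves: microstates are grouped into macrostates, and a coarse transition matrix is rebuilt by weighting microstate transitions with the stationary distribution. The 4-state chain and the clustering below are invented toys, not the authors' RG-based algorithm:

```python
import numpy as np

# Toy microstate transition matrix (rows sum to 1): two metastable pairs.
T = np.array([[0.90, 0.08, 0.01, 0.01],
              [0.08, 0.90, 0.01, 0.01],
              [0.01, 0.01, 0.90, 0.08],
              [0.01, 0.01, 0.08, 0.90]])

# Stationary distribution: left eigenvector of T with eigenvalue 1.
w, v = np.linalg.eig(T.T)
pi = np.real(v[:, np.argmax(np.real(w))])
pi /= pi.sum()

# Lump microstates {0,1} -> macrostate A, {2,3} -> macrostate B.
clusters = [[0, 1], [2, 3]]

# Coarse transition probabilities weighted by the stationary flux:
#   Tc[a,b] = sum_{i in a, j in b} pi_i T_ij / sum_{i in a} pi_i
Tc = np.zeros((2, 2))
for a, A in enumerate(clusters):
    for b, B in enumerate(clusters):
        Tc[a, b] = sum(pi[i] * T[i, j] for i in A for j in B) / pi[A].sum()

print(Tc)  # rows sum to 1; the slow A<->B exchange survives coarse-graining
```

The point of an RG-style construction is precisely to choose the clustering so that the coarse model preserves the slow relaxation kinetics, as the fast intra-cluster transitions are integrated out.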
Turbulent Boundary Layers - Experiments, Theory and Modelling
1980-01-01
AGARD (NATO Advisory Group for Aerospace Research and Development) Conference Proceedings No. 271: Turbulent Boundary Layers - Experiments, Theory and Modelling. [Scanned-text fragment; the only other recoverable content describes single-flash strobe photographs in Figures 21 and 22 yielding the instantaneous positions of the flow.]
Probing flame chemistry with MBMS, theory, and modeling
Westmoreland, P.R. [Univ. of Massachusetts, Amherst (United States)
1993-12-01
The objective is to establish kinetics of combustion and molecular-weight growth in C{sub 3} hydrocarbon flames as part of an ongoing study of flame chemistry. Specific reactions being studied are (1) the growth reactions of C{sub 3}H{sub 5} and C{sub 3}H{sub 3} with themselves and with unsaturated hydrocarbons and (2) the oxidation reactions of O and OH with C{sub 3}'s. This approach combines molecular-beam mass spectrometry (MBMS) experiments on low-pressure flat flames; theoretical predictions of rate constants by thermochemical kinetics, Bimolecular Quantum-RRK, RRKM, and master-equation theory; and whole-flame modeling using full mechanisms of elementary reactions.
Forecasting the Euro exchange rate using vector error correction models
Aarle, B. van; Bos, M.; Hlouskova, J.
2000-01-01
Forecasting the Euro Exchange Rate Using Vector Error Correction Models. — This paper presents an exchange rate model for the Euro exchange rates of four major currencies, namely the US dollar, the British pound, the Japanese yen and the Swiss franc. The model is based on the monetary approach of ex
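A single-equation error-correction sketch conveys the mechanism a vector error correction model exploits: the levels are cointegrated, so the lagged deviation from equilibrium helps predict the next change. The simulated data and coefficients below are invented for illustration; this is not the paper's four-currency specification:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500

# Simulate a cointegrated pair (invented data): x is a random walk and
# y tracks x with a mean-reverting deviation u, cointegrating vector (1, -1).
x = np.cumsum(rng.normal(size=n))
u = np.zeros(n)
for t in range(1, n):
    u[t] = 0.7 * u[t - 1] + rng.normal()
y = x + u

# Single-equation error-correction regression:
#   dy_t = c + alpha * (y_{t-1} - x_{t-1}) + eps_t
dy = np.diff(y)
ecm = (y - x)[:-1]                     # lagged deviation from equilibrium
A = np.column_stack([np.ones(n - 1), ecm])
c, alpha = np.linalg.lstsq(A, dy, rcond=None)[0]

print(round(alpha, 2))  # negative: y adjusts back toward equilibrium with x
```

The estimated adjustment speed `alpha` is what a VECM generalizes to a system of equations, and it is the term that gives such models forecasting content beyond a random walk.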
Reconstructing constructivism: Causal models, Bayesian learning mechanisms and the theory theory
Gopnik, Alison; Wellman, Henry M.
2012-01-01
We propose a new version of the “theory theory” grounded in the computational framework of probabilistic causal models and Bayesian learning. Probabilistic models allow a constructivist but rigorous and detailed approach to cognitive development. They also explain the learning of both more specific causal hypotheses and more abstract framework theories. We outline the new theoretical ideas, explain the computational framework in an intuitive and non-technical way, and review an extensive but ...
The mathematical theory of reduced MHD models for fusion plasmas
Guillard, Hervé
2015-01-01
The derivation of reduced MHD models for fusion plasmas is formulated here as a special instance of the general theory of singular limits of hyperbolic systems of PDEs with a large operator. This formulation allows us to use the general results of this theory and to prove rigorously that reduced MHD models are valid approximations of the full MHD equations. In particular, it is proven that the solutions of the full MHD system converge to the solutions of an appropriate reduced model.
Gutzwiller variational theory for the Hubbard model with attractive interaction.
Bünemann, Jörg; Gebhard, Florian; Radnóczi, Katalin; Fazekas, Patrik
2005-06-29
We investigate the electronic and superconducting properties of a negative-U Hubbard model. For this purpose we evaluate a recently introduced variational theory based on Gutzwiller-correlated BCS wavefunctions. We find significant differences between our approach and standard BCS theory, especially for the superconducting gap. For small values of |U|, we derive analytical expressions for the order parameter and the superconducting gap which we compare to exact results from perturbation theory.
Nonequilibrium Dynamical Mean-Field Theory for Bosonic Lattice Models
2015-01-01
We develop the nonequilibrium extension of bosonic dynamical mean-field theory and a Nambu real-time strong-coupling perturbative impurity solver. In contrast to Gutzwiller mean-field theory and strong-coupling perturbative approaches, nonequilibrium bosonic dynamical mean-field theory captures not only dynamical transitions but also damping and thermalization effects at finite temperature. We apply the formalism to quenches in the Bose-Hubbard model, starting from both the normal and the Bos...
Population changes: contemporary models and theories.
Sauvy, A
1981-01-01
In many developing countries rapid population growth has promoted a renewed interest in the study of the effect of population growth on economic development. This research takes either the macroeconomic viewpoint, where the nation is the framework, or the microeconomic perspective, where the family is the framework. For expository purposes, the macroeconomic viewpoint is assumed, and an example of such an investment is presented. Attention is directed to the following: a simplified model--housing; the lessons learned from experience (primitive populations, Spain in the 17th and 18th centuries, comparing development in Spain and Italy, 19th century Western Europe, and underdeveloped countries); the positive factors of population growth; and the concept of the optimal rate of growth. Housing is the typical investment that an individual makes. Hence, the housing per person (roughly 1/3 of the necessary amount of housing per family) is taken as a unit, and the calculations are made using averages. The conclusion is that growth is expensive. A population decrease might be advantageous, for this decrease would enable the entire population to benefit from past capital accumulation. It is also believed, "a priori," that population growth is more expensive for a developed than for a developing country. This belief may be attributable to the fact that the capital per person tends to be high in the developed countries. Any further increase in the population requires additional capital investments, driving this ratio even higher. Yet, investment is not the only factor inhibiting economic development. The literature describes factors regarding population growth, yet this writer prefers to emphasize 2 other factors that have been the subject of less study: a growing population's ease of adaptation and the human factor--behavior. A growing population adapts better to new conditions than does a stationary or declining population, and contrary to "a priori" belief, a growing
Further Results on Dynamic Additive Hazard Rate Model
Zhengcheng Zhang
2014-01-01
In the past, the proportional and additive hazard rate models have been investigated in the literature. Nanda and Das (2011) introduced and studied the dynamic proportional (reversed) hazard rate model. In this paper we study the dynamic additive hazard rate model and investigate its aging properties for different aging classes. The closure of the model under some stochastic orders has also been investigated. Some examples are given to illustrate different aging properties and stochastic comparisons of the model.
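In an additive hazard rate model the component hazards sum, h(t) = h1(t) + h2(t), so the cumulative hazards add and the survival function factorizes: S(t) = exp(-H1(t)) exp(-H2(t)). A small numerical check with invented component hazards (not taken from the paper):

```python
import math

# Illustrative components: a constant baseline hazard plus a linearly
# aging extra hazard. The total hazard is their sum (additive model).
def h1(t): return 0.5          # constant baseline hazard
def h2(t): return 0.2 * t      # hazard that grows with age

def cumulative_hazard(h, t, steps=10_000):
    """Trapezoidal integral of the hazard h over [0, t]."""
    dt = t / steps
    vals = [h(i * dt) for i in range(steps + 1)]
    return dt * (sum(vals) - 0.5 * (vals[0] + vals[-1]))

t = 2.0
S_joint = math.exp(-cumulative_hazard(lambda s: h1(s) + h2(s), t))
S_split = math.exp(-cumulative_hazard(h1, t)) * math.exp(-cumulative_hazard(h2, t))
print(round(S_joint, 6), round(S_split, 6))  # equal: survival factorizes
```

This factorization is what makes aging classes and stochastic-order comparisons tractable for additive models: each component's contribution to survival can be analyzed separately.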
Geometry model construction in infrared image theory simulation of buildings
谢鸣; 李玉秀; 徐辉; 谈和平
2004-01-01
Geometric model construction is the basis of infrared image theory simulation. Taking the construction of the geometric model of one building in Harbin as an example, this paper analyzes the theoretical groundings of simplification and principles of geometric model construction of buildings. It then discusses some particular treatment methods in calculating the radiation transfer coefficient in geometric model construction using the Monte Carlo Method.
Theory and model use in social marketing health interventions.
Luca, Nadina Raluca; Suggs, L Suzanne
2013-01-01
The existing literature suggests that theories and models can serve as valuable frameworks for the design and evaluation of health interventions. However, evidence on the use of theories and models in social marketing interventions is sparse. The purpose of this systematic review is to identify to what extent papers about social marketing health interventions report using theory, which theories are most commonly used, and how theory was used. A systematic search was conducted for articles that reported social marketing interventions for the prevention or management of cancer, diabetes, heart disease, HIV, STDs, and tobacco use, and behaviors related to reproductive health, physical activity, nutrition, and smoking cessation. Articles were published in English, after 1990, reported an evaluation, and met the 6 social marketing benchmarks criteria (behavior change, consumer research, segmentation and targeting, exchange, competition and marketing mix). Twenty-four articles, describing 17 interventions, met the inclusion criteria. Of these 17 interventions, 8 reported using theory and 7 stated how it was used. The transtheoretical model/stages of change was used more often than other theories. Findings highlight an ongoing lack of use or underreporting of the use of theory in social marketing campaigns and reinforce the call to action for applying and reporting theory to guide and evaluate interventions.
Large-scale dynamo growth rates from numerical simulations and implications for mean-field theories.
Park, Kiwan; Blackman, Eric G; Subramanian, Kandaswamy
2013-05-01
Understanding large-scale magnetic field growth in turbulent plasmas in the magnetohydrodynamic limit is a goal of magnetic dynamo theory. In particular, assessing how well large-scale helical field growth and saturation in simulations match those predicted by existing theories is important for progress. Using numerical simulations of isotropically forced turbulence without large-scale shear, we focus on several additional aspects of this comparison: (1) Leading mean-field dynamo theories which break the field into large and small scales predict that large-scale helical field growth rates are determined by the difference between kinetic helicity and current helicity with no dependence on the nonhelical energy in small-scale magnetic fields. Our simulations show that the growth rate of the large-scale field from fully helical forcing is indeed unaffected by the presence or absence of small-scale magnetic fields amplified in a precursor nonhelical dynamo. However, because the precursor nonhelical dynamo in our simulations produced fields that were strongly subequipartition with respect to the kinetic energy, we cannot yet rule out the potential influence of stronger nonhelical small-scale fields. (2) We have identified two features in our simulations which cannot be explained by the most minimalist versions of two-scale mean-field theory: (i) fully helical small-scale forcing produces significant nonhelical large-scale magnetic energy and (ii) the saturation of the large-scale field growth is time delayed with respect to what minimalist theory predicts. We comment on desirable generalizations to the theory in this context and future desired work.
Modeling Routinization in Games: An Information Theory Approach
Wallner, Simon; Pichlmair, Martin; Hecher, Michael
2015-01-01
Routinization is the result of practicing until an action stops being a goal-directed process. This paper formulates a definition of routinization in games based on prior research in the fields of activity theory and practice theory. Routinization is analyzed using the formal model of discrete-time, discrete-space Markov chains and information theory to measure the actual error between the dynamically trained models and the player interaction. Preliminary research supports the hypothesis that Markov chains can be effectively used to model routinization in games. A full study design is presented.
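The Markov-chain idea in this abstract can be sketched as follows: train a first-order chain on a player's action sequence, then score held-out actions by their average surprisal, a simple information-theoretic "error" between model and player. The function names, add-alpha smoothing, and toy action alphabet are illustrative assumptions, not the authors' implementation.

```python
import math
import random
from collections import Counter, defaultdict

def transition_counts(seq):
    """First-order Markov chain: count observed action-to-action transitions."""
    counts = defaultdict(Counter)
    for prev, nxt in zip(seq, seq[1:]):
        counts[prev][nxt] += 1
    return counts

def avg_surprisal(counts, seq, actions, alpha=1.0):
    """Mean -log2 P(next | prev) of `seq` under the trained chain, with
    add-alpha smoothing so unseen transitions keep nonzero probability.
    Low surprisal means the model predicts the player well (routinized play)."""
    k = len(actions)
    total = 0.0
    for prev, nxt in zip(seq, seq[1:]):
        c = counts[prev]
        p = (c[nxt] + alpha) / (sum(c.values()) + alpha * k)
        total += -math.log2(p)
    return total / (len(seq) - 1)

actions = "abcd"                       # toy action alphabet
routine = "abab" * 100                 # heavily routinized play
rng = random.Random(0)
varied = "".join(rng.choice(actions) for _ in range(400))  # unroutinized play

routine_err = avg_surprisal(transition_counts(routine[:200]), routine[200:], actions)
varied_err = avg_surprisal(transition_counts(varied[:200]), varied[200:], actions)
```

Routinized play yields low held-out surprisal; varied play stays near the log2(4) = 2 bits of a uniform guess.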
Behavioral momentum theory fails to account for the effects of reinforcement rate on resurgence.
Craig, Andrew R; Shahan, Timothy A
2016-05-01
The behavioral-momentum model of resurgence predicts that reinforcer rates within a resurgence preparation should have three effects on target behavior. First, higher reinforcer rates in baseline (Phase 1) produce more persistent target behavior during extinction plus alternative reinforcement. Second, higher-rate alternative reinforcement during Phase 2 generates greater disruption of target responding during extinction. Finally, higher rates of either reinforcement source should produce greater responding when alternative reinforcement is suspended in Phase 3. Recent empirical reports have produced mixed results in terms of these predictions. Thus, the present experiment further examined reinforcer-rate effects on persistence and resurgence. Rats pressed target levers for high-rate or low-rate variable-interval food during Phase 1. In Phase 2, target-lever pressing was extinguished, an alternative nose-poke became available, and nose-poking produced either high-rate variable-interval, low-rate variable-interval, or no (an extinction control) alternative reinforcement. Alternative reinforcement was suspended in Phase 3. For groups that received no alternative reinforcement, target-lever pressing was less persistent following high-rate than low-rate Phase-1 reinforcement. Target behavior was more persistent with low-rate alternative reinforcement than with high-rate alternative reinforcement or extinction alone. Finally, no differences in Phase-3 responding were observed for groups that received either high-rate or low-rate alternative reinforcement, and resurgence occurred only following high-rate alternative reinforcement. These findings are inconsistent with the momentum-based model of resurgence. We conclude this model mischaracterizes the effects of reinforcer rates on persistence and resurgence of operant behavior.
Theories and models of globalization ethicizing
Dritan Abazović
2016-05-01
Globalization as a phenomenon is under the magnifying glass of many philosophical discussions and theoretical deliberations. While most theorists deal with issues that are predominantly of economic or political character, this article has a different logic. The article presents six theories which in their own way explain the need for movement toward ethicizing globalization. Globalization is a process that affects everyone and as such it has become inevitable, but it is up to the people to determine its course and make it either functional or uncontrolled. The survival and development of any society is measured primarily by the quality of its moral and ethical foundation. Therefore, it is clear that global society can survive and be functional only if it finds a minimum consensus on ethical norms or, as said in theory, if it establishes an ethical system on which it can be built and developed.
The danger model: questioning an unconvincing theory.
Józefowski, Szczepan
2016-02-01
Janeway's pattern recognition theory holds that the immune system detects infection through a limited number of the so-called pattern recognition receptors (PRRs). These receptors bind specific chemical compounds expressed by entire groups of related pathogens, but not by host cells (pathogen-associated molecular patterns, PAMPs). In contrast, Matzinger's danger hypothesis postulates that products released from stressed or damaged cells have a more important role in the activation of the immune system than the recognition of nonself. These products, named by analogy to PAMPs as danger-associated molecular patterns (DAMPs), are proposed to act through the same receptors (PRRs) as PAMPs and, consequently, to stimulate largely similar responses. Herein, I review direct and indirect evidence that contradicts the widely accepted danger theory, and suggest that it may be false.
Wojcik, Mariusz; Tachiya, M
2009-03-14
This paper deals with the exact extension of the original Onsager theory of the escape probability to the case of a finite recombination rate at a nonzero reaction radius. The empirical theories built on the Eigen model and the Braun model, which are applicable in the absence and presence of an external electric field, respectively, rest on the incorrect assumption that both recombination and separation processes in geminate recombination follow exponential kinetics. The accuracies of the empirical theories are examined against the exact extension of the Onsager theory. The Eigen model gives an escape probability in the absence of an electric field that differs by a factor of 3 from the exact one. We have shown that this difference can be removed by operationally redefining the volume occupied by the dissociating partner before dissociation, which appears in the Eigen model as a parameter. The Braun model gives an escape probability in the presence of an electric field that differs significantly from the exact one over the whole range of electric fields. Appropriate modification of the original Braun model removes the discrepancy at zero or low electric fields, but it does not affect the discrepancy at high electric fields. In all the above theories it is assumed that recombination takes place only at the reaction radius. The escape probability in the case when recombination takes place over a range of distances is also calculated and compared with that in the case of recombination only at the reaction radius.
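For orientation, the textbook zero-field Onsager result that the paper extends can be computed directly. The permittivity and separations below are hypothetical values, and this sketch does not include the finite-recombination-rate extension the paper develops.

```python
import math

def onsager_radius(eps_r, T):
    """Coulomb capture radius r_c = e^2/(4*pi*eps0*eps_r*kB*T), in meters:
    the separation at which the pair's Coulomb energy equals kB*T."""
    e = 1.602176634e-19      # C
    eps0 = 8.8541878128e-12  # F/m
    kB = 1.380649e-23        # J/K
    return e ** 2 / (4.0 * math.pi * eps0 * eps_r * kB * T)

def escape_probability(r0, eps_r, T):
    """Zero-field Onsager escape probability for a geminate pair that
    starts at separation r0, with fully absorbing recombination at contact."""
    return math.exp(-onsager_radius(eps_r, T) / r0)
```

For eps_r = 4 at 300 K the capture radius is roughly 14 nm, so a pair born 1 nm apart almost never escapes without an assisting field.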
[Models of economic theory of population growth].
Von Zameck, W
1987-01-01
"The economic theory of population growth applies the opportunity cost approach to the fertility decision. Variations and differentials in fertility are caused by the available resources and relative prices or by the relative production costs of child services. Pure changes in real income raise the demand for children or the total amount spent on children. If relative prices or production costs and real income are affected together the effect on fertility requires separate consideration." (SUMMARY IN ENG)
Measurement Models for Reasoned Action Theory
Hennessy, Michael; Bleakley, Amy; Fishbein, Martin
2012-01-01
Quantitative researchers distinguish between causal and effect indicators. What are the analytic problems when both types of measures are present in a quantitative reasoned action analysis? To answer this question, we use data from a longitudinal study to estimate the association between two constructs central to reasoned action theory: behavioral beliefs and attitudes toward the behavior. The belief items are causal indicators that define a latent variable index while the attitude items are ...
Theory, modeling, and simulation annual report, 1992
1993-05-01
This report briefly discusses research on the following topics: development of electronic structure methods; modeling molecular processes in clusters; modeling molecular processes in solution; modeling molecular processes in separations chemistry; modeling interfacial molecular processes; modeling molecular processes in the atmosphere; methods for periodic calculations on solids; chemistry and physics of minerals; graphical user interfaces for computational chemistry codes; visualization and analysis of molecular simulations; integrated computational chemistry environment; and benchmark computations.
A Model of the Economic Theory of Regulation for Undergraduates.
Wilson, Brooks
1995-01-01
Presents a model of the economic theory of regulation and recommends its use in undergraduate economics classes. Describes the use of computer-assisted instruction to teach the theory. Maintains that the approach enables students to gain access to graphs and tables that they produce themselves. (CFR)
A continuum theory for modeling the dynamics of crystalline materials.
Xiong, Liming; Chen, Youping; Lee, James D
2009-02-01
This paper introduces a multiscale field theory for modeling and simulation of the dynamics of crystalline materials. The atomistic formulation of a multiscale field theory is briefly introduced and its applicability is discussed. A few application examples, including phonon dispersion relations of the ferroelectric material BiScO3 and an MgO nanodot under compression, are presented.
Nielsen, Kim Lau; Niordson, Christian Frithiof
2014-01-01
A model covering elastic–plastic loading/unloading and the interaction of multiple plastic zones is proposed. The predicted model response is compared to the corresponding rate-dependent version of visco-plastic origin, and coinciding results are obtained in the limit of small strain-rate sensitivity. First, (i) the evolution of a single plastic zone is analyzed to illustrate the agreement with earlier published results, whereafter examples of (ii) multiple plastic zone interaction and (iii) elastic–plastic loading/unloading are presented. Here, the simple shear problem of an infinite slab constrained between rigid plates is considered.
Theory analysis of the Dental Hygiene Human Needs Conceptual Model.
MacDonald, L; Bowen, D M
2016-11-09
Theories provide a structural knowing about concept relationships, practice intricacies, and intuitions and thus shape the distinct body of the profession. Capturing ways of knowing and being is essential to any profession's practice, education and research. This process defines the phenomenon of the profession - its existence or experience. Theory evaluation is a systematic criterion-based assessment of a specific theory. This study presents a theory analysis of the Dental Hygiene Human Needs Conceptual Model (DH HNCM). Using the Walker and Avant Theory Analysis, a seven-step process, the DH HNCM was analysed and evaluated for its meaningfulness and contribution to dental hygiene. The steps include the following: (i) investigate the origins; (ii) examine relationships of the theory's concepts; (iii) assess the logic of the theory's structure; (iv) consider the usefulness to practice; (v) judge the generalizability; (vi) evaluate the parsimony; and (vii) appraise the testability of the theory. Human needs theory in nursing and Maslow's Hierarchy of Need Theory prompted this theory's development. The DH HNCM depicts four concepts based on the paradigm concepts of the profession: client, health/oral health, environment and dental hygiene actions, and includes eleven validated human needs that evolved over time to eight. It is logical and simple, allows scientific prediction and testing, and provides a unique lens for the dental hygiene practitioner. With this model, dental hygienists have entered practice knowing they enable clients to meet their human needs. For the DH HNCM, theory analysis affirmed that the model is reasonable and insightful and adds to the dental hygiene profession's epistemology and ontology. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Schwinger Boson Formulation and Solution of the Crow-Kimura and Eigen Models of Quasispecies Theory
Park, Jeong-Man; Deem, Michael W.
2006-11-01
We express the Crow-Kimura and Eigen models of quasispecies theory in a functional integral representation. We formulate the spin coherent state functional integrals using the Schwinger Boson method. In this formulation, we are able to deduce the long-time behavior of these models for arbitrary replication and degradation functions. We discuss the phase transitions that occur in these models as a function of the mutation rate. We derive for these models the leading-order corrections to the infinite genome length limit.
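A minimal numerical sketch of the Crow-Kimura (parallel) model, under the simplifying assumptions of a single sharp fitness peak and a reduction to Hamming classes: the steady-state mean fitness is the leading eigenvalue of the mutation-selection matrix, found here by power iteration. All parameter values are illustrative, not from the paper.

```python
def leading_eigenvalue(N=20, A=10.0, mu=0.1, iters=400):
    """Steady-state mean fitness of the Crow-Kimura (parallel) quasispecies
    model on a single-peak landscape, reduced to Hamming classes k = 0..N.
    Returns the leading eigenvalue of the mutation-selection matrix H,
    computed by power iteration on a shifted (nonnegative) copy of H."""
    f = [A if k == 0 else 0.0 for k in range(N + 1)]  # sharp fitness peak at k = 0
    H = [[0.0] * (N + 1) for _ in range(N + 1)]
    for k in range(N + 1):
        H[k][k] = f[k] - mu * N                  # growth minus total mutation outflow
        if k > 0:
            H[k][k - 1] = mu * (N - k + 1)       # inflow from class k-1
        if k < N:
            H[k][k + 1] = mu * (k + 1)           # inflow from class k+1
    shift = mu * N                                # makes all entries nonnegative
    v = [1.0] * (N + 1)
    lam = 1.0
    for _ in range(iters):
        w = [sum(H[k][j] * v[j] for j in range(N + 1)) + shift * v[k]
             for k in range(N + 1)]
        lam = max(w)
        v = [x / lam for x in w]
    return lam - shift

mean_fitness = leading_eigenvalue()
```

Raising the mutation rate lowers the mean fitness, the numerical shadow of the phase transition the paper analyzes analytically.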
Modelling the filling rate of pit latrines
2012-09-18
Published in Water SA, Vol. 39, No. 4, July 2013 (ISSN 1816-7950, on-line). Keywords: pit latrine, filling rate, biodegradation, solid waste disposal.
Chougule, Abhijit S.; Mann, Jakob; Kelly, Mark C.
2017-01-01
A spectral tensor model is presented for turbulent fluctuations of wind velocity components and temperature, assuming uniform vertical gradients in mean temperature and mean wind speed. The model is built upon rapid distortion theory (RDT), following studies by Mann and by Hanazaki and Hunt, and uses the eddy lifetime parameterization of Mann to make the model stationary. The buoyant spectral tensor model is driven via five parameters: the viscous dissipation rate epsilon, the length scale of energy-containing eddies L, a turbulence anisotropy parameter Gamma, and the gradient Richardson number (Ri) representing … separation. Finally, it is shown that the RDT output can deviate from Monin-Obukhov similarity theory.
Generalization of the Activated Complex Theory of Reaction Rates. I. Quantum Mechanical Treatment
Marcus, R. A.
1964-01-01
In its usual form activated complex theory assumes a quasi-equilibrium between reactants and activated complex, a separable reaction coordinate, a Cartesian reaction coordinate, and an absence of interaction of rotation with internal motion in the complex. In the present paper a rate expression is derived without introducing the Cartesian assumption. The expression bears a formal resemblance to the usual one and reduces to it when the added assumptions of the latter are introduced.
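The "usual form" the paper refers to is the Eyring expression k = (kB*T/h) * exp(-dG/(R*T)). A direct sketch follows; the optional transmission coefficient kappa is a labeled extra, not something taken from the abstract.

```python
import math

def eyring_rate(dG_act, T, kappa=1.0):
    """Activated-complex (Eyring) rate constant in 1/s:
        k = kappa * (kB*T/h) * exp(-dG_act/(R*T))
    dG_act: Gibbs free energy of activation in J/mol; kappa is an
    optional transmission coefficient."""
    kB = 1.380649e-23       # J/K
    h = 6.62607015e-34      # J*s
    R = 8.314462618         # J/(mol*K)
    return kappa * (kB * T / h) * math.exp(-dG_act / (R * T))
```

At 298.15 K the prefactor kB*T/h is about 6.2e12 per second, the ceiling a barrierless reaction approaches.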
Modeling the Dynamics of Chinese Spot Interest Rates
Yongmiao Hong; Hai Lin; Shouyang Wang
2013-01-01
Understanding the dynamics of spot interest rates is important for derivatives pricing, risk management, interest rate liberalization, and macroeconomic control. Based on daily data of Chinese 7-day repo rates from July 22, 1996 to August 26, 2004, we estimate and test a variety of popular spot rate models, including single-factor diffusion, GARCH, Markov regime-switching and jump-diffusion models, to examine how well they can capture the dynamics of the Chinese spot rates and whether the d...
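One of the single-factor diffusions typically included in such comparisons is the Vasicek model, dr = kappa*(theta - r)dt + sigma dW. Below is a hedged Euler-Maruyama sketch; the parameter values are hypothetical, not estimates from the Chinese repo data.

```python
import math
import random

def simulate_vasicek(r0, kappa, theta, sigma, dt, n, seed=0):
    """Euler-Maruyama path of the Vasicek short rate:
        dr = kappa*(theta - r) dt + sigma dW."""
    rng = random.Random(seed)
    r, path = r0, [r0]
    for _ in range(n):
        r += kappa * (theta - r) * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        path.append(r)
    return path

# hypothetical parameters: start at 10%, revert toward a 3% long-run mean
path = simulate_vasicek(0.10, 5.0, 0.03, 0.001, 1.0 / 250.0, 2500)
```

Mean reversion pulls the simulated rate from its 10% start down toward theta; a CIR variant would simply scale the shock by sqrt(r).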
Brock L. Casselman
Since 2012 we have tracked general chemistry student success rates at the University of Utah. In efforts to improve those rates we have implemented math prerequisites, changed our discussion session format, installed some metacognitive exercises aimed at the lowest quartile of students, and instituted a flipped classroom model. Furthermore, using Item Response Theory we have identified which topics each individual student struggles with on practice tests. These steps have increased our success rates to ~76%. As well, student performance on nationally normed American Chemical Society final exams has improved to a median of the 86th percentile. Our lowest quartile of students in spring 2016 scored at the 51st percentile, above the national median.
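A minimal sketch of the Item Response Theory machinery mentioned here, assuming the common two-parameter logistic (2PL) form; the ability, discrimination and difficulty values are hypothetical.

```python
import math

def p_correct(theta, a, b):
    """Two-parameter logistic (2PL) IRT model: probability that a student
    of ability theta answers correctly an item with discrimination a and
    difficulty b (all on the same latent scale)."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))
```

Fitting a and b per practice-test item lets one flag, for each student, the items whose difficulty sits well above that student's estimated ability.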
Atomistic modeling at experimental strain rates and timescales
Yan, Xin; Cao, Penghui; Tao, Weiwei; Sharma, Pradeep; Park, Harold S.
2016-12-01
Modeling physical phenomena with atomistic fidelity and at laboratory timescales is one of the holy grails of computational materials science. Conventional molecular dynamics (MD) simulations enable the elucidation of an astonishing array of phenomena inherent in the mechanical and chemical behavior of materials. However, conventional MD, with our current computational modalities, is incapable of resolving timescales longer than microseconds (at best). In this short review article, we briefly review a recently proposed approach—the so-called autonomous basin climbing (ABC) method—that in certain instances can provide valuable information on slow timescale processes. We provide a general summary of the principles underlying the ABC approach, with emphasis on recent methodological developments enabling the study of mechanically-driven processes at slow (experimental) strain rates and timescales. Specifically, we show that by combining a strong physical understanding of the underlying phenomena, kinetic Monte Carlo, transition state theory and minimum energy pathway methods, the ABC method has been found to be useful in a variety of mechanically-driven problems ranging from the prediction of creep-behavior in metals, constitutive laws for grain boundary sliding, void nucleation rates, diffusion in amorphous materials to protein unfolding. Aside from reviewing the basic ideas underlying this approach, we emphasize some of the key challenges encountered in our own personal research work and suggest future research avenues for exploration.
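One ingredient the review names, kinetic Monte Carlo driven by transition-state-theory (Arrhenius) rates, can be sketched as a single rejection-free KMC step. The attempt frequency and barrier values are illustrative assumptions, not results from the ABC literature.

```python
import math
import random

def arrhenius(E_ev, T, nu=1e13):
    """Transition-state-theory rate k = nu*exp(-E/(kB*T)); E in eV, nu in 1/s."""
    kB_ev = 8.617333262e-5  # Boltzmann constant, eV/K
    return nu * math.exp(-E_ev / (kB_ev * T))

def kmc_step(barriers_ev, T, rng):
    """One rejection-free KMC step: choose an event with probability
    proportional to its rate, then advance the clock by an exponential
    waiting time with mean 1/(total rate)."""
    rates = [arrhenius(E, T) for E in barriers_ev]
    total = sum(rates)
    x = rng.random() * total
    acc, chosen = 0.0, len(rates) - 1
    for i, r in enumerate(rates):
        acc += r
        if x < acc:
            chosen = i
            break
    dt = -math.log(1.0 - rng.random()) / total
    return chosen, dt
```

Because the clock advances by 1/(total rate) rather than by a femtosecond MD timestep, chains of such steps can reach the slow timescales the review discusses.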
Extensions to DSD theory: Analysis of PBX 9502 rate stick data
Aslam, T.D.; Bdzil, J.B.; Hill, L.G.
1998-12-31
Recent extensions to DSD theory and modeling argue that the intrinsic front propagation law can depend on variables in addition to the total shock-front curvature. Here the authors outline this work and present results of high-resolution numerical simulations of 2D detonation that verify the theory on some points, but disagree with it on others. Chief among these is the verification of the extended propagation laws and the observation that the curvature is infinite at the HE boundary. The authors discuss how these results impact the analysis of PBX 9502.
Modeling Multivariate Volatility Processes: Theory and Evidence
Jelena Z. Minovic
2009-05-01
This article presents theoretical and empirical methodology for the estimation and modeling of multivariate volatility processes. It surveys the model specifications and the estimation methods. Multivariate GARCH models covered are VEC (initially due to Bollerslev, Engle and Wooldridge, 1988), diagonal VEC (DVEC), BEKK (named after Baba, Engle, Kraft and Kroner, 1995), the Constant Conditional Correlation model (CCC; Bollerslev, 1990), and the Dynamic Conditional Correlation (DCC) models of Tse and Tsui (2002) and Engle (2002). I illustrate the approach by applying it to daily data from the Belgrade stock exchange: I examine two pairs of daily log returns for stocks and an index, report the results obtained, and compare them with the restricted versions of the BEKK, DVEC and CCC representations. The methods used for parameter estimation are maximum log-likelihood (in the BEKK and DVEC models) and a two-step approach (in the CCC model).
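The univariate GARCH(1,1) recursion is the building block all of these multivariate specifications extend; a hedged simulation sketch with hypothetical coefficients (not the Belgrade-exchange estimates) follows.

```python
import math
import random

def simulate_garch11(omega, alpha, beta, n, seed=0):
    """Simulate r_t = sigma_t*z_t with sigma_t^2 = omega + alpha*r_{t-1}^2 + beta*sigma_{t-1}^2.
    Assumes alpha + beta < 1 so the unconditional variance omega/(1-alpha-beta) exists."""
    rng = random.Random(seed)
    var = omega / (1.0 - alpha - beta)   # start at the unconditional variance
    returns = []
    for _ in range(n):
        r = math.sqrt(var) * rng.gauss(0.0, 1.0)
        returns.append(r)
        var = omega + alpha * r * r + beta * var
    return returns

returns = simulate_garch11(0.1, 0.1, 0.8, 20000)
```

A CCC model would run one such recursion per asset and couple them through a constant correlation matrix.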
Extended Nambu models: Their relation to gauge theories
Escobar, C. A.; Urrutia, L. F.
2017-05-01
Yang-Mills theories supplemented by an additional coordinate constraint, which is solved and substituted in the original Lagrangian, provide examples of the so-called Nambu models, in the case where such constraints arise from spontaneous Lorentz symmetry breaking. Some explicit calculations have shown that, after additional conditions are imposed, Nambu models are capable of reproducing the original gauge theories, thus making Lorentz violation unobservable and allowing the interpretation of the corresponding massless gauge bosons as the Goldstone bosons arising from the spontaneous symmetry breaking. A natural question posed by this approach in the realm of gauge theories is to determine under which conditions the recovery of an arbitrary gauge theory from the corresponding Nambu model, defined by a general constraint over the coordinates, becomes possible. We refer to these theories as extended Nambu models (ENM) and emphasize the fact that the defining coordinate constraint is not treated as a standard gauge fixing term. At this level, the mechanism for generating the constraint is irrelevant and the case of spontaneous Lorentz symmetry breaking is taken only as a motivation, which naturally brings this problem under consideration. Using a nonperturbative Hamiltonian analysis we prove that the ENM yields the original gauge theory after we demand current conservation for all time, together with the imposition of the Gauss laws constraints as initial conditions upon the dynamics of the ENM. The Nambu models yielding electrodynamics, Yang-Mills theories and linearized gravity are particular examples of our general approach.
AN EOQ MODEL WITH CONTROLLABLE SELLING RATE
HORNG-JINH CHANG; PO-YU CHEN
2008-01-01
According to the marketing principle, a decision maker may control demand rate through selling price and the unit facility cost of promoting transaction. In fact, the upper bound of willing-to-pay price and the transaction cost probably depend upon the subjective judgment of individual consumer in purchasing merchandise. This study therefore attempts to construct a bivariate distribution function to simultaneously incorporate the willing-to-pay price and the transaction cost into the classica...
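For reference, the classical EOQ baseline that models of this kind generalize is Q* = sqrt(2*D*K/h); a minimal sketch with hypothetical demand, ordering and holding costs follows (the paper's bivariate extension with controllable selling rate is not reproduced here).

```python
import math

def eoq(demand_rate, order_cost, holding_cost):
    """Classical economic order quantity: Q* = sqrt(2*D*K/h), where D is the
    demand rate per period, K the fixed cost per order, and h the holding
    cost per unit per period."""
    return math.sqrt(2.0 * demand_rate * order_cost / holding_cost)
```

The square-root form implies that quadrupling demand only doubles the optimal order size, the basic lever a price-controlled demand rate acts upon.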
A Model of PCF in Guarded Type Theory
Paviotti, Marco; Møgelberg, Rasmus Ejlers; Birkedal, Lars
2015-01-01
In this paper we investigate how type theory with guarded recursion can be used as a metalanguage for denotational semantics, useful both for constructing models and for proving properties of these. We do this by constructing a fairly intensional model of PCF and proving it computationally adequate. The model construction is related to Escardo's metric model for PCF, but here everything is carried out entirely in type theory with guarded recursion, including the formulation of the operational semantics, the model construction and the proof of adequacy.
Classical conformality in the Standard Model from Coleman's theory
Kawana, Kiyoharu
2016-01-01
Classical conformality is one of the possible candidates for explaining the gauge hierarchy of the Standard Model. We show that it is naturally obtained from Coleman's theory of baby universes.
Linear control theory for gene network modeling.
Shin, Yong-Jun; Bleris, Leonidas
2010-09-16
Systems biology is an interdisciplinary field that aims at understanding complex interactions in cells. Here we demonstrate that linear control theory can provide valuable insight and practical tools for the characterization of complex biological networks. We provide the foundation for such analyses through the study of several case studies including cascade and parallel forms, feedback and feedforward loops. We reproduce experimental results and provide rational analysis of the observed behavior. We demonstrate that methods such as the transfer function (frequency domain) and linear state-space (time domain) can be used to predict reliably the properties and transient behavior of complex network topologies and point to specific design strategies for synthetic networks.
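A minimal time-domain sketch in the spirit of this abstract: a two-stage linear cascade (production and degradation) integrated with Euler's method, whose step response settles to the steady state predicted by the DC gain of the corresponding transfer function. The rate constants are hypothetical, not taken from the paper's case studies.

```python
def simulate_cascade(u, k1=1.0, d1=0.5, k2=1.0, d2=0.2, dt=0.01, t_end=100.0):
    """Euler integration of a two-stage linear cascade driven by constant input u:
        dx/dt = k1*u - d1*x      (e.g. mRNA produced from the input signal)
        dy/dt = k2*x - d2*y      (e.g. protein produced from x)
    The DC gain of the cascade transfer function is (k1*k2)/(d1*d2)."""
    x = y = 0.0
    for _ in range(int(t_end / dt)):
        x += (k1 * u - d1 * x) * dt
        y += (k2 * x - d2 * y) * dt
    return x, y

x_ss, y_ss = simulate_cascade(1.0)   # expect x -> k1/d1 = 2, y -> gain = 10
```

The same model written in state-space form, with matrices A = [[-d1, 0], [k2, -d2]] and B = [k1, 0], is what frequency-domain tools operate on.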
Computing decay rates for new physics theories with FEYNRULES and MADGRAPH 5_AMC@NLO
Alwall, Johan; Duhr, Claude; Fuks, Benjamin; Mattelaer, Olivier; Öztürk, Deniz Gizem; Shen, Chia-Hsien
2015-12-01
We present new features of the FEYNRULES and MADGRAPH 5_AMC@NLO programs for the automatic computation of decay widths that consistently include channels of arbitrary final-state multiplicity. The implementations are generic enough so that they can be used in the framework of any quantum field theory, possibly including higher-dimensional operators. We extend at the same time the conventions of the Universal FEYNRULES Output (or UFO) format to include decay tables and information on the total widths. We finally provide a set of representative examples of the usage of the new functions of the different codes in the framework of the Standard Model, the Higgs Effective Field Theory, the Strongly Interacting Light Higgs model and the Minimal Supersymmetric Standard Model and compare the results to available literature and programs for validation purposes.
A QCD Model Using Generalized Yang-Mills Theory
WANG Dian-Fu; SONG He-Shan; KOU Li-Na
2007-01-01
Generalized Yang-Mills theory has a covariant derivative, which contains both vector and scalar gauge bosons. Based on this theory, we construct a strong interaction model by using the group U(4). By using this U(4) generalized Yang-Mills model, we also obtain a gauge potential solution, which can be used to explain the asymptotic behavior and color confinement.
Matrix models vs. Seiberg-Witten/Whitham theories
Chekhov, L.; Mironov, A
2003-01-23
We discuss the relation between matrix models and the Seiberg-Witten type (SW) theories, recently proposed by Dijkgraaf and Vafa. In particular, we prove that the partition function of the Hermitian one-matrix model in the planar (large N) limit coincides with the prepotential of the corresponding SW theory. This partition function is the logarithm of a Whitham tau-function. The corresponding Whitham hierarchy is explicitly constructed. The double-point problem is solved.
Chubing Zhang
2013-01-01
We study the optimal investment strategies of DC pension, with stochastic interest rates (including the CIR model and the Vasicek model) and stochastic salary. In our model, the plan member is allowed to invest in a risk-free asset, a zero-coupon bond, and a single risky asset. By applying the Hamilton-Jacobi-Bellman equation, Legendre transform, and dual theory, we find the explicit solutions for the CRRA and CARA utility functions, respectively.
Marshall, David J; McQuaid, Christopher D
2011-01-22
The universal temperature-dependence model (UTD) of the metabolic theory of ecology (MTE) proposes that temperature controls mass-scaled, whole-animal resting metabolic rate according to the first principles of physics (Boltzmann kinetics). Controversy surrounds the model's implication of a mechanistic basis for metabolism that excludes the effects of adaptive regulation, and it is unclear how this would apply to organisms that live in fringe environments and typically show considerable metabolic adaptation. We explored thermal scaling of metabolism in a rocky-shore eulittoral-fringe snail (Echinolittorina malaccana) that experiences constrained energy gain and fluctuating high temperatures (between 25°C and approximately 50°C) during prolonged emersion (weeks). In contrast to the prediction of the UTD model, metabolic rate was often negatively related to temperature over a benign range (30-40°C), the relationship depending on (i) the temperature range, (ii) the degree of metabolic depression (related to the quiescent period), and (iii) whether snails were isolated within their shells. Apparent activation energies (E) varied between 0.05 and -0.43 eV, deviating excessively from the UTD's predicted range of between 0.6 and 0.7 eV. The lowering of metabolism when heated should improve energy conservation in a high-temperature environment and challenges both the theory's generality and its mechanistic basis.
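The apparent activation energies quoted here come from Arrhenius/Boltzmann scaling; a two-temperature sketch shows how E is recovered and why a rate that falls as temperature rises yields a negative E. The numbers are illustrative, not the snail data.

```python
import math

KB_EV = 8.617333262e-5  # Boltzmann constant in eV/K

def apparent_activation_energy(rate1, T1, rate2, T2):
    """Two-point Arrhenius estimate of the apparent activation energy (eV),
    assuming rate ~ exp(-E/(kB*T)). E comes out negative whenever the
    rate decreases with increasing temperature."""
    return KB_EV * math.log(rate2 / rate1) / (1.0 / T1 - 1.0 / T2)
```

Fitting metabolic rates that drop on warming, as in the snails here, therefore produces the negative E values the abstract reports, far outside the UTD's predicted 0.6 to 0.7 eV band.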
Non-oscillatory flux correlation functions for efficient nonadiabatic rate theory.
Richardson, Jeremy O; Thoss, Michael
2014-08-21
There is currently much interest in the development of improved trajectory-based methods for the simulation of nonadiabatic processes in complex systems. An important goal for such methods is the accurate calculation of the rate constant over a wide range of electronic coupling strengths, and it is often the nonadiabatic, weak-coupling limit which, being far from the Born-Oppenheimer regime, provides the greatest challenge to current methods. We show that in this limit there is an inherent sign problem impeding further development which originates from the use of the usual quantum flux correlation functions, which can be very oscillatory at short times. From linear response theory, we derive a modified flux correlation function for the calculation of nonadiabatic reaction rates, which still rigorously gives the correct result in the long-time limit regardless of electronic coupling strength, but unlike the usual formalism is not oscillatory in the weak-coupling regime. In particular, a trajectory simulation of the modified correlation function is naturally initialized in a region localized about the crossing of the potential energy surfaces. In the weak-coupling limit, a simple link can be found between the dynamics initialized from this transition-state region and a generalized quantum golden-rule transition-state theory, which is equivalent to Marcus theory in the classical harmonic limit. This new correlation function formalism thus provides a platform on which a wide variety of dynamical simulation methods can be built, aiding the development of accurate nonadiabatic rate theories applicable to complex systems.
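The classical Marcus expression that the golden-rule transition-state theory reduces to in the harmonic limit can be sketched directly; the coupling, reorganization energy and driving force below are hypothetical values.

```python
import math

def marcus_rate(V, lam, dG, T):
    """Classical Marcus golden-rule rate (1/s):
        k = (2*pi/hbar) * V^2 * (4*pi*lam*kB*T)^(-1/2) * exp(-(dG+lam)^2/(4*lam*kB*T))
    V: electronic coupling (eV), lam: reorganization energy (eV),
    dG: reaction free energy (eV), T: temperature (K)."""
    hbar = 6.582119569e-16   # eV*s
    kB = 8.617333262e-5      # eV/K
    prefactor = (2.0 * math.pi / hbar) * V ** 2 / math.sqrt(4.0 * math.pi * lam * kB * T)
    return prefactor * math.exp(-(dG + lam) ** 2 / (4.0 * lam * kB * T))
```

The rate peaks in the activationless case dG = -lam and falls again in the inverted region, and the V^2 dependence is exactly the weak-coupling scaling the paper's sign problem concerns.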
Bianchi class A models in Sàez-Ballester's theory
Socorro, J.; Espinoza-García, Abraham
2012-08-01
We apply the Sàez-Ballester (SB) theory to Bianchi class A models, with a barotropic perfect fluid in a stiff matter epoch. We obtain exact classical solutions à la Hamilton for the Bianchi type I, II and VI (h = -1) models. We also find exact quantum solutions to all Bianchi class A models employing a particular ansatz for the wave function of the universe.
A Dynamic Systems Theory Model of Visual Perception Development
Coté, Carol A.
2015-01-01
This article presents a model for understanding the development of visual perception from a dynamic systems theory perspective. It contrasts to a hierarchical or reductionist model that is often found in the occupational therapy literature. In this proposed model vision and ocular motor abilities are not foundational to perception, they are seen…
Mathematical System Theory and System Modeling
1980-01-01
Choosing models related effectively to the questions to be addressed is a central issue in the craft of systems analysis. Since the mathematical description the analyst chooses constrains the types of issues he can deal with, it is important for these models to be selected so as to yield limitations that are acceptable in view of the questions the systems analysis seeks to answer. In this paper, the author gives an overview of the central issues affecting the question of model choice. To ...
The Neuman Systems Model Institute: testing middle-range theories.
Gigliotti, Eileen
2003-07-01
The credibility of the Neuman systems model can only be established through the generation and testing of Neuman systems model-derived middle-range theories. However, due to the number and complexity of Neuman systems model concepts/concept interrelations and the diversity of middle-range theory concepts linked to these Neuman systems model concepts by researchers, no explicit middle-range theories have yet been derived from the Neuman systems model. This article describes the development of an organized program for the systematic study of the Neuman systems model. Preliminary work, already accomplished, is detailed, and a tentative plan for completing further preliminary work and beginning the actual research phase is proposed.
Consumer preference models: fuzzy theory approach
Turksen, I. B.; Wilson, I. A.
1993-12-01
Consumer preference models are widely used in new product design, marketing management, pricing and market segmentation. The purpose of this article is to develop and test a fuzzy set preference model which can represent linguistic variables in individual-level models implemented in parallel with existing conjoint models. The potential improvements in market share prediction and predictive validity can substantially improve management decisions about what to make (product design), for whom to make it (market segmentation) and how much to make (market share prediction).
Measurement-based load modeling: Theory and application
(no author listed)
2007-01-01
The load model is one of the most important elements in power system operation and control. However, owing to its complexity, load modeling is still an open and very difficult problem. Summarizing our work on measurement-based load modeling in China over more than twenty years, this paper systematically introduces the mathematical theory and applications of load modeling. The flow chart and algorithms for measurement-based load modeling are presented. A composite load model structure with 13 parameters is also proposed. Analysis results based on trajectory sensitivity theory indicate the importance of the load model parameters for identification. Case studies show the accuracy of the presented measurement-based load model. The load model thus built has been validated by field measurements all over China. Future working directions in measurement-based load modeling are also discussed.
Modeling in applied sciences a kinetic theory approach
Pulvirenti, Mario
2000-01-01
Modeling complex biological, chemical, and physical systems, in the context of spatially heterogeneous mediums, is a challenging task for scientists and engineers using traditional methods of analysis. Modeling in Applied Sciences is a comprehensive survey of modeling large systems using kinetic equations, and in particular the Boltzmann equation and its generalizations. An interdisciplinary group of leading authorities carefully develop the foundations of kinetic models and discuss the connections and interactions between model theories, qualitative and computational analysis and real-world applications. This book provides a thoroughly accessible and lucid overview of the different aspects, models, computations, and methodology for the kinetic-theory modeling process. Topics and Features: * Integrated modeling perspective utilized in all chapters * Fluid dynamics of reacting gases * Self-contained introduction to kinetic models * Becker–Döring equations * Nonlinear kinetic models with chemical reactions * Kinet...
The Family FIRO Model: The Integration of Group Theory and Family Theory.
Colangelo, Nicholas; Doherty, William J.
1988-01-01
Presents the Family Fundamental Interpersonal Relations Orientation (Family FIRO) Model, an integration of small-group theory and family therapy. The model is offered as a framework for organizing family issues. Discusses three fundamental issues of human relatedness and their applicability to group dynamics. (Author/NB)
Modeling acquaintance networks based on balance theory
Vukašinović Vida
2014-09-01
An acquaintance network is a social structure made up of a set of actors and the ties between them. These ties change dynamically as a consequence of incessant interactions between the actors. In this paper we introduce a social network model called the Interaction-Based (IB) model that involves well-known sociological principles. The connections between the actors and the strength of the connections are influenced by the continuous positive and negative interactions between the actors and, vice versa, future interactions are more likely to happen between actors that are connected with stronger ties. The model is also inspired by the social behavior of animal species, particularly that of ants in their colony. A model evaluation showed that the IB model turned out to be sparse. The model has a small diameter and an average path length that grows in proportion to the logarithm of the number of vertices. The clustering coefficient is relatively high, and its value stabilizes in larger networks. The degree distributions are slightly right-skewed. In the mature phase of the IB model, i.e., when the number of edges does not change significantly, most of the network properties do not change significantly either. The IB model was found to be the best of all the compared models in simulating the e-mail URV (University Rovira i Virgili of Tarragona) network because its properties more closely matched those of the e-mail URV network than did those of the other models.
Theory-based Practice: Comparing and Contrasting OT Models
Nielsen, Kristina Tomra; Berg, Brett
2012-01-01
The workshop will present a critical analysis of the major models of occupational therapy: A Model of Human Occupation, Enabling Occupation II, and the Occupational Therapy Intervention Process Model. Similarities and differences among the models will be discussed, including each model's limitations and unique contributions to the profession. The workshop format will include short lectures and group discussions.
Training evaluation models: Theory and applications
Carbone, V.; MORVILLO, A
2002-01-01
This chapter has the following aims: 1. Compare the various conceptual models for evaluation, identifying their strengths and weaknesses; 2. Define an evaluation model consistent with the aims and constraints of the fit project; 3. Describe, in critical fashion, operative tools for evaluating training which are reliable, flexible and analytical.
Baldrige Theory into Practice: A Generic Model
Arif, Mohammed
2007-01-01
Purpose: The education system globally has moved from a push-based or producer-centric system to a pull-based or customer-centric system. The Malcolm Baldrige Quality Award (MBQA) model is one of the latest additions to the pull-based models. The purpose of this paper is to develop a generic framework for MBQA that can be used by…
Oreiro José Luis
2013-01-01
This article analyzes the relationship between economic growth, income distribution and the real exchange rate within the neo-Kaleckian literature, through the construction of a nonlinear macrodynamic model for an open economy in which investment in fixed capital is assumed to be a quadratic function of the real exchange rate. The model demonstrates that the prevailing regime of accumulation in a given economy depends on the type of currency misalignment: if the real exchange rate is overvalued, the regime of accumulation will be profit-led, but if the exchange rate is undervalued, the accumulation regime is wage-led. Subsequently, the adherence of the theoretical model to the data is tested for Brazil in the period 1994/Q3-2008/Q4. The econometric results are consistent with the theoretical nonlinear specification of the investment function used in the model, so that we can establish the existence of a real exchange rate that maximizes the rate of capital accumulation for the Brazilian economy. From the estimate of this optimal rate we show that the real exchange rate was overvalued in 1994/Q3-2001/Q1 and 2005/Q4-2008/Q4 and undervalued in 2001/Q2-2005/Q3. As a direct corollary of this result, it follows that the prevailing regime of accumulation in the Brazilian economy after the last quarter of 2005 is profit-led.
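The quadratic investment function described above implies an interior optimum of the accumulation rate with respect to the real exchange rate. A toy sketch with made-up coefficients (the paper's Brazilian estimates are not reproduced here):

```python
def accumulation_rate(q, alpha, beta, gamma):
    """Rate of capital accumulation as a quadratic function of the
    real exchange rate q; gamma < 0 gives an interior maximum."""
    return alpha + beta * q + gamma * q * q

def optimal_exchange_rate(beta, gamma):
    """Stationary point of the quadratic: beta + 2*gamma*q = 0."""
    return -beta / (2.0 * gamma)

# Hypothetical coefficients, chosen only for illustration:
alpha, beta, gamma = 0.02, 0.08, -0.04
q_star = optimal_exchange_rate(beta, gamma)
print(q_star)  # -> 1.0
```

Comparing the observed real exchange rate with q_star is what lets the authors classify each subperiod as overvalued or undervalued relative to the accumulation-maximizing level.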
Measurement Models for Reasoned Action Theory.
Hennessy, Michael; Bleakley, Amy; Fishbein, Martin
2012-03-01
Quantitative researchers distinguish between causal and effect indicators. What are the analytic problems when both types of measures are present in a quantitative reasoned action analysis? To answer this question, we use data from a longitudinal study to estimate the association between two constructs central to reasoned action theory: behavioral beliefs and attitudes toward the behavior. The belief items are causal indicators that define a latent variable index while the attitude items are effect indicators that reflect the operation of a latent variable scale. We identify the issues when effect and causal indicators are present in a single analysis and conclude that both types of indicators can be incorporated in the analysis of data based on the reasoned action approach.
Optimal transportation networks models and theory
Bernot, Marc; Morel, Jean-Michel
2009-01-01
The transportation problem can be formalized as the problem of finding the optimal way to transport a given measure into another with the same mass. In contrast to the Monge-Kantorovitch problem, recent approaches model the branched structure of such supply networks as minima of an energy functional whose essential feature is to favour wide roads. Such a branched structure is observable in ground transportation networks, in draining and irrigation systems, in electrical power supply systems and in natural counterparts such as blood vessels or the branches of trees. These lectures provide mathematical proof of several existence, structure and regularity properties empirically observed in transportation networks. The link with previous discrete physical models of irrigation and erosion models in geomorphology and with discrete telecommunication and transportation models is discussed. It is mathematically proven that the majority of these models fit within the simple model sketched in this volume.
Monetary models and exchange rate determination: The Nigerian ...
Monetary models and exchange rate determination: The Nigerian evidence. ... income levels and real interest rate differentials provide better forecasts of the naira-US dollar ... in this regard is that monetary policy should be positively predicted.
Sustainable theory of a logistic model - Fisher information approach.
Al-Saffar, Avan; Kim, Eun-Jin
2017-03-01
Information theory provides a useful tool to understand the evolution of complex nonlinear systems and their sustainability. In particular, Fisher information has been invoked as a useful measure of sustainability and the variability of dynamical systems, including self-organising systems. By utilising Fisher information, we investigate the sustainability of the logistic model for different perturbations in the positive and/or negative feedback. Specifically, we consider different oscillatory modulations in the parameters for positive and negative feedback and investigate their effect on the evolution of the system and Probability Density Functions (PDFs). Depending on the relative time scale of the perturbation to the response time of the system (the linear growth rate), we demonstrate the maintenance of the initial condition for a long time, manifested by a broad bimodal PDF. We present the analysis of Fisher information in different cases and elucidate its implications for the sustainability of population dynamics. We also show that a purely oscillatory growth rate can lead to a finite amplitude solution while self-organisation of these systems can break down with an exponentially growing solution due to the periodic fluctuations in negative feedback. Copyright © 2017 Elsevier Inc. All rights reserved.
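A rough numerical sketch of the idea: integrate the logistic equation with an oscillatory growth-rate modulation for an ensemble of initial conditions, then estimate the Fisher information I = ∫ (dp/dx)²/p dx from the resulting PDF. All parameter values and the histogram estimator are illustrative, not those of the paper:

```python
import math
import random

def fisher_information(samples, bins=40):
    """Histogram estimate of I = integral (dp/dx)^2 / p dx."""
    lo, hi = min(samples), max(samples)
    width = (hi - lo) / bins or 1.0
    counts = [0] * bins
    for x in samples:
        counts[min(int((x - lo) / width), bins - 1)] += 1
    p = [c / (len(samples) * width) for c in counts]
    info = 0.0
    for i in range(bins - 1):
        if p[i] > 0:
            dp = (p[i + 1] - p[i]) / width      # finite-difference dp/dx
            info += dp * dp / p[i] * width
    return info

def logistic_ensemble(r0=1.0, amp=0.5, omega=2.0, n=2000, t_end=5.0, dt=0.01):
    """Euler-integrate x' = r(t) x (1 - x) with oscillatory growth
    rate r(t) = r0 (1 + amp*sin(omega*t)) from random initial states."""
    rng = random.Random(1)
    finals = []
    for _ in range(n):
        x, t = rng.uniform(0.05, 0.95), 0.0
        while t < t_end:
            x += dt * r0 * (1.0 + amp * math.sin(omega * t)) * x * (1.0 - x)
            t += dt
        finals.append(x)
    return finals

samples = logistic_ensemble()
print(fisher_information(samples) > 0.0)  # -> True
```

A narrower PDF (a system pinned near its equilibrium) yields a larger Fisher information, which is the sense in which it serves as a sustainability measure.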
Huang, M.; Rivera-Diaz-del-Castillo, P.E.J.; Bouaziz, O.; Van der Zwaag, S.
2009-01-01
Based on the theory of irreversible thermodynamics, the present work proposes a dislocation-based model to describe the plastic deformation of FCC metals over wide ranges of strain rates. The stress-strain behaviour and the evolution of the average dislocation density are derived. It is found that t
USE OF ROUGH SETS AND SPECTRAL DATA FOR BUILDING PREDICTIVE MODELS OF REACTION RATE CONSTANTS
A model for predicting the log of the rate constants for alkaline hydrolysis of organic esters has been developed with the use of gas-phase mid-infrared library spectra and a rule-building software system based on the mathematical theory of rough sets. A diverse set of 41 esters ...
Temperature-dependent rate models of vascular cambium cell mortality
Matthew B. Dickinson; Edward A. Johnson
2004-01-01
We use two rate-process models to describe cell mortality at elevated temperatures as a means of understanding vascular cambium cell death during surface fires. In the models, cell death is caused by irreversible damage to cellular molecules that occurs at rates that increase exponentially with temperature. The models differ in whether cells show cumulative effects of...
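A first-order rate-process model of this kind can be sketched as a survival fraction obeying dS/dt = -k(T)S with an Arrhenius rate k(T) = A exp(-E/RT) whose value rises exponentially with temperature; the parameter values below are illustrative, not the authors' fitted ones:

```python
import math

R_GAS = 8.314  # gas constant, J/(mol*K)

def survival(temps, dt, A=1.0e30, E=2.0e5):
    """Surviving fraction for a first-order rate process
    dS/dt = -k(T) S with Arrhenius rate k(T) = A*exp(-E/(R*T)).
    temps: temperature history (K) sampled every dt seconds.
    A (1/s) and E (J/mol) are illustrative, not fitted, values."""
    integral = sum(A * math.exp(-E / (R_GAS * T)) * dt for T in temps)
    return math.exp(-integral)

# A 60 s exposure: near-total survival at 310 K, heavy mortality at 330 K
s_cool = survival([310.0] * 60, 1.0)
s_hot = survival([330.0] * 60, 1.0)
print(round(s_cool, 3), round(s_hot, 3))
```

The steep sensitivity of survival to a 20 K difference illustrates why such models are useful for predicting cambium death during brief surface-fire heat pulses.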
Quantum Field Theory and the Electroweak Standard Model
Boos, E
2015-01-01
The Standard Model is one of the main intellectual achievements of the last 50 years or so, a result of many theoretical and experimental studies. In this lecture a brief introduction to the electroweak part of the Standard Model is given. Since the Standard Model is a quantum field theory, some aspects of the quantization of abelian and non-abelian gauge theories are also briefly discussed. It is demonstrated how well the electroweak Standard Model works in describing a large variety of precise experimental measurements at lepton and hadron colliders.
An equity-interest rate hybrid model with stochastic volatility and the interest rate smile
Grzelak, L.A.; Oosterlee, C.W.
2010-01-01
We define an equity-interest rate hybrid model in which the equity part is driven by the Heston stochastic volatility [Hes93], and the interest rate (IR) is generated by the displaced-diffusion stochastic volatility Libor Market Model [AA02]. We assume a non-zero correlation between the main
Sticker DNA computer model--Part I: Theory
XU Jin; DONG Yafei; WEI Xiaopeng
2004-01-01
The sticker model is one of the basic DNA computer models. It is coded with single- and double-stranded DNA molecules. Its advantages are that the operations require no strand extension and use no enzymes, and the materials are reusable; it has therefore attracted the attention and interest of scientists in many fields. In this paper, we systematically analyze the theory and applications of the model, summarize the contributions of other researchers in this field, and present our own results. This paper is the theoretical portion of the sticker model of DNA computing, and includes an introduction to the basic model of sticker computing. First, we systematically introduce the basic theories of the classic models of sticker computing; second, we discuss the sticker system, an abstract computing model based on the sticker model and formal languages; finally, we extend and refine the model, presenting two types of models that are broader in application and more complete in theory than the earlier ones: the so-called k-bit sticker model and the full-message sticker DNA computing model.
Theory of stellar convection II: first stellar models
Pasetto, S; Chiosi, E; Cropper, M; Weiss, A
2015-01-01
We present here the first stellar models on the Hertzsprung-Russell diagram (HRD) in which convection is treated according to the novel scale-free convection theory (SFC theory) by Pasetto et al. (2014). The aim is to compare the results of the new theory with those from the classical, calibrated mixing-length (ML) theory to examine differences and similarities. We integrate the equations describing the structure of the atmosphere from the stellar surface down to a few percent of the stellar mass using both the ML theory and the SFC theory. The key temperature-over-pressure gradients, the energy fluxes, and the extension of the convective zones are compared in both theories. The analysis is first made for the Sun and then extended to other stars of different mass and evolutionary stage. The results are encouraging: the SFC theory yields convective zones, temperature gradients of the ambient medium and of the convective element, and energy fluxes that are very similar to those derived from the "calibrated" ML theory for main s...
HIV Transmission Rate Modeling: A Primer, Review, and Extension
Pinkerton, Steven D.
2012-01-01
Several mathematical modeling studies based on the concept of “HIV transmission rates” have recently appeared in the literature. The transmission rate for a particular group of HIV-infected persons is defined as the mean number of secondary infections per member of the group per unit time. This article reviews the fundamental principles and mathematics of transmission rate models; explicates the relationship between these models, Bernoullian models of HIV transmission, and mathematical models...
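The transmission-rate definition above, together with the Bernoullian per-contact model it is related to, can be sketched as follows (all input values are purely illustrative):

```python
def per_partner_risk(alpha, n_acts):
    """Bernoullian model: probability of infecting one susceptible
    partner over n_acts independent acts, each with per-act
    transmission probability alpha."""
    return 1.0 - (1.0 - alpha) ** n_acts

def transmission_rate(alpha, n_acts, n_partners, years):
    """Mean number of secondary infections per infected person
    per unit time (the 'transmission rate' defined above)."""
    return n_partners * per_partner_risk(alpha, n_acts) / years

# Purely illustrative inputs: 0.1% per-act risk, 50 acts with each
# of 2 partners over one year
rate = transmission_rate(0.001, 50, 2, 1.0)
print(round(rate, 4))  # -> 0.0976
```

The per-partner risk compounds sub-linearly in the number of acts, which is why the Bernoullian link matters when aggregating to group-level transmission rates.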
王艳; 钱英; 冯文林; 刘若庄
2003-01-01
An implementation of the variational quantum RRKM program is presented that utilizes the direct ab initio dynamics approach for calculating k(E, J), k(E) and k(T) within the framework of the microcanonical transition state theory (μTST) and microcanonical variational TST (μVT). An algorithm including tunneling contributions in the Beyer-Swinehart method for calculating microcanonical rate constants is also proposed. An efficient piece-wise interpolation method is developed to evaluate the Boltzmann integral in the calculation of thermal rate constants. Calculations on several test reactions, namely the H(D)2CO→H(D)2 + CO, CH2CO→CH2 + CO and CH4 + H→CH3 + H2 reactions, show that the results are in good agreement with previous rate constant calculations. This approach requires far fewer computational resources.
Mixed models theory and applications with R
Demidenko, Eugene
2013-01-01
Mixed modeling is one of the most promising and exciting areas of statistical analysis, enabling the analysis of nontraditional, clustered data that may come in the form of shapes or images. This book provides in-depth mathematical coverage of mixed models' statistical properties and numerical algorithms, as well as applications such as the analysis of tumor regrowth, shape, and image. The new edition includes significant updating, over 300 exercises, stimulating chapter projects and model simulations, inclusion of R subroutines, and a revised text format. The target audience continues to be g
Single crystal plasticity by modeling dislocation density rate behavior
Hansen, Benjamin L [Los Alamos National Laboratory; Bronkhorst, Curt [Los Alamos National Laboratory; Beyerlein, Irene [Los Alamos National Laboratory; Cerreta, E. K. [Los Alamos National Laboratory; Dennis-Koller, Darcie [Los Alamos National Laboratory
2010-12-23
The goal of this work is to formulate a constitutive model for the deformation of metals over a wide range of strain rates. Damage and failure of materials frequently occur at a variety of deformation rates within the same sample. The present state of the art in single crystal constitutive models relies on thermally-activated models which are believed to become less reliable for problems exceeding strain rates of 10^4 s^-1. This talk presents work in which we extend the applicability of the single crystal model to the strain rate region where dislocation drag is believed to dominate. The elastic model includes effects from volumetric change and pressure sensitive moduli. The plastic model transitions from the low-rate thermally-activated regime to the high-rate drag-dominated regime. The direct use of dislocation density as a state parameter gives a measurable physical mechanism to strain hardening. Dislocation densities are separated according to type and given a systematic set of interaction rates adaptable by type. The form of the constitutive model is motivated by previously published dislocation dynamics work which articulated important behaviors unique to high-rate response in fcc systems. The proposed material model incorporates thermal coupling. The hardening model tracks the varying dislocation population with respect to each slip plane and computes the slip resistance based on those values. Comparisons can be made between the responses of single crystals and polycrystals at a variety of strain rates. The material model is fit to copper.
Solid mechanics theory, modeling, and problems
Bertram, Albrecht
2015-01-01
This textbook offers an introduction to modeling the mechanical behavior of solids within continuum mechanics and thermodynamics. To illustrate the fundamental principles, the book starts with an overview of the most important models in one dimension. Tensor calculus, which is called for in three-dimensional modeling, is concisely presented in the second part of the book. Once the reader is equipped with these essential mathematical tools, the third part of the book develops the foundations of continuum mechanics right from the beginning. Lastly, the book’s fourth part focuses on modeling the mechanics of materials and in particular elasticity, viscoelasticity and plasticity. Intended as an introductory textbook for students and for professionals interested in self-study, it also features numerous worked-out examples to aid in understanding.
Matrix Models, Topological Strings, and Supersymmetric Gauge Theories
Dijkgraaf, R; Dijkgraaf, Robbert; Vafa, Cumrun
2002-01-01
We show that B-model topological strings on local Calabi-Yau threefolds are large N duals of matrix models, which in the planar limit naturally give rise to special geometry. These matrix models directly compute F-terms in an associated N=1 supersymmetric gauge theory, obtained by deforming N=2 theories by a superpotential term that can be directly identified with the potential of the matrix model. Moreover by tuning some of the parameters of the geometry in a double scaling limit we recover (p,q) conformal minimal models coupled to 2d gravity, thereby relating non-critical string theories to type II superstrings on Calabi-Yau backgrounds.
Matrix models, topological strings, and supersymmetric gauge theories
Dijkgraaf, Robbert E-mail: rhd@science.uva.nl; Vafa, Cumrun
2002-11-11
We show that B-model topological strings on local Calabi-Yau threefolds are large-N duals of matrix models, which in the planar limit naturally give rise to special geometry. These matrix models directly compute F-terms in an associated N=1 supersymmetric gauge theory, obtained by deforming N=2 theories by a superpotential term that can be directly identified with the potential of the matrix model. Moreover by tuning some of the parameters of the geometry in a double scaling limit we recover (p,q) conformal minimal models coupled to 2d gravity, thereby relating non-critical string theories to type II superstrings on Calabi-Yau backgrounds.
Modeling workplace bullying using catastrophe theory.
Escartin, J; Ceja, L; Navarro, J; Zapf, D
2013-10-01
Workplace bullying is defined as negative behaviors directed at organizational members or their work context that occur regularly and repeatedly over a period of time. Employees' perceptions of psychosocial safety climate, workplace bullying victimization, and workplace bullying perpetration were assessed within a sample of nearly 5,000 workers. Linear and nonlinear approaches were applied in order to model both continuous and sudden changes in workplace bullying. More specifically, the present study examines whether a nonlinear dynamical systems model (i.e., a cusp catastrophe model) is superior to the linear combination of variables for predicting the effect of psychosocial safety climate and workplace bullying victimization on workplace bullying perpetration. According to the AICc and BIC indices, the linear regression model fits the data better than the cusp catastrophe model. The study concludes that some phenomena, especially unhealthy behaviors at work (like workplace bullying), may be better studied using linear approaches as opposed to nonlinear dynamical systems models. This can be explained through the healthy variability hypothesis, which argues that positive organizational behavior is likely to present nonlinear behavior, while a decrease in such variability may indicate the occurrence of negative behaviors at work.
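The model-comparison step described above (a linear model versus a more flexible nonlinear surface, judged by information criteria) can be illustrated on synthetic data; a plain cubic fit stands in here for the cusp surface, which is not reproduced:

```python
import numpy as np

def aic(y, yhat, k):
    """Gaussian AIC: n*ln(RSS/n) + 2k for a model with k parameters."""
    n = len(y)
    rss = float(np.sum((y - yhat) ** 2))
    return n * np.log(rss / n) + 2 * k

rng = np.random.default_rng(0)
x = np.linspace(-2.0, 2.0, 200)
y = 0.5 * x + rng.normal(0.0, 0.3, x.size)   # truly linear data

lin = np.polyval(np.polyfit(x, y, 1), x)     # 2 fitted parameters
cub = np.polyval(np.polyfit(x, y, 3), x)     # 4 parameters, a cusp-like stand-in

print(round(aic(y, lin, 2), 1), round(aic(y, cub, 4), 1))
```

On genuinely linear data the cubic's smaller residual sum of squares is typically outweighed by its penalty of 2 per extra parameter, mirroring the paper's finding that the linear model fit better.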
Strelioff, Christopher C; Crutchfield, James P; Hübler, Alfred W
2007-07-01
Markov chains are a natural and well-understood tool for describing one-dimensional patterns in time or space. We show how to infer kth order Markov chains, for arbitrary k, from finite data by applying Bayesian methods to both parameter estimation and model-order selection. Extending existing results for multinomial models of discrete data, we connect inference to statistical mechanics through information-theoretic (type theory) techniques. We establish a direct relationship between Bayesian evidence and the partition function which allows for straightforward calculation of the expectation and variance of the conditional relative entropy and the source entropy rate. Finally, we introduce a method that uses finite data-size scaling with model-order comparison to infer the structure of out-of-class processes.
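The core computation, comparing Bayesian evidence across candidate Markov orders, can be sketched as follows. This is a simplified version of the approach, assuming a flat Dirichlet(1) prior over each context's transition probabilities:

```python
from collections import Counter
from math import lgamma

def log_evidence(seq, k, alphabet):
    """Marginal log-likelihood (evidence) of an order-k Markov chain
    for the string seq, integrating out the transition probabilities
    under a symmetric Dirichlet(1) prior for each context."""
    A = len(alphabet)
    ctx_counts, trans_counts = Counter(), Counter()
    for i in range(k, len(seq)):
        ctx = seq[i - k:i]
        ctx_counts[ctx] += 1
        trans_counts[(ctx, seq[i])] += 1
    logev = 0.0
    for ctx, n in ctx_counts.items():
        logev += lgamma(A) - lgamma(A + n)      # Dirichlet normalization
        for s in alphabet:
            logev += lgamma(1 + trans_counts[(ctx, s)])
    return logev

# A first-order source (0 and 1 alternate): order 1 beats order 0
seq = "01" * 300
print(log_evidence(seq, 1, "01") > log_evidence(seq, 0, "01"))  # -> True
```

Because the evidence integrates the parameters out, higher orders pay an automatic penalty for their extra contexts, which is what makes order selection by maximum evidence meaningful.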
New model of propagation rates of long crack due to structure fatigue
Jian-tao LIU; Ping-an DU; Ming-jing HUANG; Qing ZHOU
2009-01-01
By comparing the characteristics of existing models for long fatigue crack propagation rates, a new model, called the generalized passivation-lancet model for long fatigue crack propagation rates (GPLFCPR), and a general formula for characterizing crack growth rates are proposed based on passivation-lancet theory. The GPLFCPR model overcomes the disadvantages of existing models and can effectively describe the entire fatigue crack growth process, from the cracking threshold to the critical fracturing point, with explicit physical meaning. It also reflects the influence of material characteristics such as strength parameters, fracture parameters and heat treatment. Experimental results obtained by testing LZ50 steel, AlZnMgCu0.5, 0.5Cr0.5Mo0.25V steel, etc., show good consistency with the new model. The GPLFCPR model is valuable in both theoretical research and practical applications.
Modelling predation as a capped rate stochastic process, with applications to fish recruitment
James, Alex; Baxter, Paul D; Pitchford, Jonathan W
2005-01-01
Many mathematical models use functions the value of which cannot exceed some physically or biologically imposed maximum value. A model can be described as ‘capped-rate’ when the rate of change of a variable cannot exceed a maximum value. This presents no problem when the models are deterministic but, in many applications, results from deterministic models are at best misleading. The need to account for stochasticity, both demographic and environmental, in models is therefore important but, as this paper shows, incorporating stochasticity into capped-rate models is not trivial. A method using queueing theory is presented, which allows randomness and spatial heterogeneity to be incorporated rigorously into capped rate models. The method is applied to the feeding and growth of fish larvae. PMID:16849207
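The central difficulty the paper addresses, that capping and averaging do not commute once the process is stochastic, can be illustrated with a toy Monte Carlo (this is not the paper's queueing-theory construction, only the motivating observation):

```python
import math
import random

def mean_capped_intake(lam, cap, steps=200_000, seed=42):
    """Monte Carlo mean of min(N, cap) with N ~ Poisson(lam): encounters
    arrive at mean rate lam per step, but at most cap can be processed."""
    rng = random.Random(seed)
    thresh = math.exp(-lam)
    total = 0
    for _ in range(steps):
        n, p = 0, 1.0
        while p > thresh:            # Knuth's Poisson sampler
            n += 1
            p *= rng.random()
        total += min(n - 1, cap)
    return total / steps

lam, cap = 5.0, 5
stochastic = mean_capped_intake(lam, cap)
deterministic = min(lam, cap)        # the naive deterministic answer
print(stochastic < deterministic)    # -> True: Jensen's inequality
```

The deterministic model predicts the full capped rate, while random fluctuations below the cap are never compensated by fluctuations above it, so the stochastic mean is strictly lower; handling this rigorously is where the queueing-theory method comes in.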
Rate equation modelling and investigation of quantum cascade detector characteristics
Saha, Sumit; Kumar, Jitendra
2016-10-01
A simple precise transport model has been proposed using rate equation approach for the characterization of a quantum cascade detector. The resonant tunneling transport is incorporated in the rate equation model through a resonant tunneling current density term. All the major scattering processes are included in the rate equation model. The effect of temperature on the quantum cascade detector characteristics has been examined considering the temperature dependent band parameters and the carrier scattering processes. Incorporation of the resonant tunneling process in the rate equation model improves the detector performance appreciably and reproduces the detector characteristics within experimental accuracy.
Theory and modeling of electron fishbones
Vlad, G.; Fusco, V.; Briguglio, S.; Fogaccia, G.; Zonca, F.; Wang, X.
2016-10-01
Internal kink instabilities exhibiting fishbone-like behavior have been observed in a variety of experiments where a high-energy electron population, generated by strong auxiliary heating and/or current drive systems, was present. After briefly reviewing the experimental evidence for energetic-electron-driven fishbones and the main results of the linear and nonlinear theory of electron fishbones, the results of global, self-consistent, nonlinear hybrid MHD-Gyrokinetic simulations will be presented. To this purpose, the extended/hybrid MHD-Gyrokinetic code XHMGC will be used. Linear dynamics analysis will highlight the effect of considering kinetic thermal ion compressibility and diamagnetic response, and kinetic thermal electron compressibility, in addition to the energetic electron contribution. Nonlinear saturation and energetic electron transport will also be addressed, making extensive use of Hamiltonian mapping techniques and discussing both centrally peaked and off-axis peaked energetic electron profiles. It will be shown that centrally peaked energetic electron profiles are characterized by resonant excitation and nonlinear response of deeply trapped energetic electrons. On the other hand, off-axis peaked energetic electron profiles are characterized by resonant excitation and nonlinear response of barely circulating energetic electrons which experience toroidal precession reversal of their motion.
Population growth, saving, interest rates and stagnation: Discussing the Eggertsson-Mehrotra model
Spahn, Peter
2016-01-01
Post Keynesian stagnation theory argues that slower population growth dampens consumption and investment. A New Keynesian OLG model derives an unemployment equilibrium due to a negative natural rate in a three-generations credit contract framework. Besides deleveraging and rising inequality, a shrinking population is also a triggering factor. In all cases, a saving surplus drives real interest rates down. In other OLG settings, however, with bonds as stores of value, slower population growth, o...
Spatial interaction models facility location using game theory
D'Amato, Egidio; Pardalos, Panos
2017-01-01
Facility location theory develops the idea of locating one or more facilities by optimizing suitable criteria such as minimizing transportation cost or capturing the largest market share. The contributions in this book approach facility location theory through game-theoretical tools, highlighting situations where a location decision is faced by several decision makers and leading to a game-theoretical framework with non-cooperative and cooperative methods. Models and methods regarding facility location via game theory are explored and applications are illustrated through economics, engineering, and physics. Mathematicians, engineers, economists and computer scientists working on theory, applications and computational aspects of facility location problems using game theory will find this book useful.
Electrorheological fluids: modeling and mathematical theory
Růžička, Michael
2000-01-01
This is the first book to present a model, based on the rational mechanics of electrorheological fluids, that takes into account the complex interactions between the electromagnetic fields and the moving liquid. Several constitutive relations for the Cauchy stress tensor are discussed. The main part of the book is devoted to a mathematical investigation of a model possessing shear-dependent viscosities, proving the existence and uniqueness of weak and strong solutions for the steady and the unsteady case. The PDE systems investigated possess so-called non-standard growth conditions. Existence results for elliptic systems with non-standard growth conditions and a nontrivial nonlinear right-hand side, as well as the first-ever results for parabolic systems with non-standard growth conditions, are given. Written for advanced graduate students, as well as for researchers in the field, the discussion of both the modeling and the mathematics is self-contained.
A catastrophe theory model of the conflict helix, with tests.
Rummel, R J
1987-10-01
Macro social field theory has undergone extensive development and testing since the 1960s. One such development has been the articulation of an appropriate conceptual micro model--called the conflict helix--for understanding the process from conflict to cooperation and vice versa. Conflict and cooperation are viewed as distinct equilibria of forces in a social field; the movement between these equilibria is a jump, energized by a gap between social expectations and power, and triggered by some minor event. Quite independently, there also has been much recent application of catastrophe theory to social behavior, but usually without a clear substantive theory and lacking empirical testing. This paper uses catastrophe theory--namely, the butterfly model--to structure the conflict helix mathematically. The social field framework and helix provide the substantive interpretation for the catastrophe theory, and catastrophe theory provides a suitable mathematical model for the conflict helix. The model is tested on the annual conflict and cooperation between India and Pakistan, 1948 to 1973. The results are generally positive and encouraging.
An emergency department patient flow model based on queueing theory principles.
Wiler, Jennifer L; Bolandifar, Ehsan; Griffey, Richard T; Poirier, Robert F; Olsen, Tava
2013-09-01
The objective was to derive and validate a novel queueing theory-based model that predicts the effect of various patient crowding scenarios on rates of patients who left without being seen (LWBS). Retrospective data were collected from all patient presentations to triage at an urban, academic, adult-only emergency department (ED) with 87,705 visits in calendar year 2008. Data from specific time windows during the day were divided into derivation and validation sets based on odd or even days. Patient records with incomplete time data were excluded. Starting from an established call center queueing model, input variables were modified to adapt the model to the ED setting while satisfying the underlying assumptions of queueing theory. The primary aim was the derivation and validation of an ED flow model. Chi-square and Student's t-tests were used for model derivation and validation. The secondary aim was estimating the effect of varying ED patient arrival and boarding scenarios on LWBS rates using this model. The assumption of stationarity of the model was validated for three time periods (peak arrival rate, 10:00 a.m. to 12:00 p.m.; moderate arrival rate, 8:00 a.m. to 10:00 a.m.; lowest arrival rate, 4:00 a.m. to 6:00 a.m.) and for different days of the week and month. Between 10:00 a.m. and 12:00 p.m., defined as the primary study period representing peak arrivals, 3.9% (n = 4,038) of patients left without being seen. Using the derived model, the predicted LWBS rate was 4%. LWBS rates increased as the rate of ED patient arrivals, treatment times, and ED boarding times increased. A 10% increase in hourly ED patient arrivals from the observed average arrival rate increased the predicted LWBS rate to 10.8%; a 10% decrease in hourly ED patient arrivals from the observed average arrival rate predicted a 1.6% LWBS rate. A 30-minute decrease in treatment time from the observed average treatment time predicted a 1.4% LWBS rate. A 1% increase in patient arrivals has the same effect on LWBS rates as a 1
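As a rough illustration of the queueing quantities such a model rests on (the paper's call-center-derived model itself is not reproduced here), an M/M/c Erlang-C calculation shows how the probability that an arriving patient must wait responds to a change in the arrival rate. All parameter values below are hypothetical, not the paper's.

```python
from math import factorial

def erlang_c(arrival_rate, service_rate, servers):
    """Probability an arriving customer must wait (Erlang-C formula)
    for an M/M/c queue; a generic stand-in, not the paper's model."""
    a = arrival_rate / service_rate          # offered load (Erlangs)
    rho = a / servers                        # server utilization
    if rho >= 1.0:
        return 1.0                           # unstable queue: everyone waits
    num = a**servers / factorial(servers) * (1 / (1 - rho))
    den = sum(a**k / factorial(k) for k in range(servers)) + num
    return num / den

# Illustrative numbers only: 12 arrivals/hr, each treatment
# bed clears 1 patient/hr, 15 beds available.
p_wait = erlang_c(12.0, 1.0, 15)
p_wait_surge = erlang_c(13.2, 1.0, 15)   # a 10% increase in arrivals
```

A 10% increase in arrivals raises the waiting probability, qualitatively mirroring the LWBS sensitivity reported above.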
Results of interbank exchange rates forecasting using state space model
Muhammad Kashif
2008-07-01
This study evaluates the performance of three alternative models for forecasting the daily interbank exchange rate of the U.S. dollar measured in Pak rupees. Simple ARIMA models and more complex models such as GARCH-type models and a state space model are discussed and compared. Four different measures are used to evaluate forecasting accuracy. The main result is that the state space model provides the best performance among all the models.
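The four accuracy measures are not named in the abstract; a minimal sketch of four commonly used ones (RMSE, MAE, MAPE, and Theil's U), applied to a hypothetical exchange-rate series, might look like:

```python
from math import sqrt

def accuracy_measures(actual, forecast):
    """Four common forecast-accuracy measures; the paper does not name
    its four, so these are illustrative choices."""
    n = len(actual)
    errors = [f - a for a, f in zip(actual, forecast)]
    rmse = sqrt(sum(e * e for e in errors) / n)
    mae = sum(abs(e) for e in errors) / n
    mape = 100.0 * sum(abs(e) / abs(a) for a, e in zip(actual, errors)) / n
    # Theil's U: scale-free, 0 = perfect forecast
    u = rmse / (sqrt(sum(a * a for a in actual) / n)
                + sqrt(sum(f * f for f in forecast) / n))
    return {"RMSE": rmse, "MAE": mae, "MAPE": mape, "TheilU": u}

rates = [60.1, 60.4, 60.3, 60.8, 61.0]   # hypothetical PKR/USD series
preds = [60.0, 60.5, 60.2, 60.7, 61.1]   # hypothetical one-day-ahead forecasts
m = accuracy_measures(rates, preds)
```

Comparing models then reduces to comparing these scores on a held-out sample.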
Mean field theory, topological field theory, and multi-matrix models
Dijkgraaf, R. (Princeton Univ., NJ (USA). Joseph Henry Labs.); Witten, E. (Institute for Advanced Study, Princeton, NJ (USA). School of Natural Sciences)
1990-10-08
We show that the genus zero correlation functions of an arbitrary topological field theory coupled to two-dimensional topological gravity are determined by an appropriate Landau-Ginzburg potential. We determine the potentials that arise for topological sigma models with CP{sup 1} or a Calabi-Yau manifold for target space. We present substantial evidence that the multi-matrix models that have been studied recently are equivalent to certain topological field theories coupled to topological gravity. We also describe a topological version of the general 'string equation'. (orig.).
Reaction Rate Theory in Coordination Number Space: An Application to Ion Solvation
Roy, Santanu; Baer, Marcel D.; Mundy, Christopher J.; Schenter, Gregory K.
2016-04-14
Understanding reaction mechanisms in many chemical and biological processes requires the application of rare-event theories. In these theories, an effective choice of reaction coordinate to describe a reaction pathway is essential. To this end, we study ion solvation in water using molecular dynamics simulations and explore the utility of the coordination number (n = number of water molecules in the first solvation shell) as the reaction coordinate. We compute the potential of mean force (W(n)) using umbrella sampling, predicting multiple metastable n-states for both cations and anions. We find that with increasing ionic size, these states become more stable and structured for cations than for anions. We have extended transition state theory (TST) to calculate transition rates between n-states. TST overestimates the rate constant because solvent-induced barrier recrossings are not accounted for. We correct the TST rates by calculating transmission coefficients using the reactive flux method. This approach enables a new way of understanding rare events involving coordination complexes.
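A minimal numerical sketch of the rate calculation described above, assuming a toy double-well PMF over the coordination number and an illustrative mean crossing speed and transmission coefficient (none of these values come from the paper):

```python
from math import exp

kB_T = 0.593  # kcal/mol at ~298 K

def tst_rate(W, n_grid, n_barrier, mean_speed):
    """One-dimensional TST rate along the coordination-number coordinate:
    k_TST = (<|dn/dt|>/2) * exp(-W(n*)/kT) / Z_reactant.
    Everything here (grid, PMF shape, mean_speed) is illustrative."""
    dn = n_grid[1] - n_grid[0]
    # reactant partition function: integrate exp(-W/kT) left of the barrier
    z_react = sum(exp(-w / kB_T) * dn
                  for n, w in zip(n_grid, W) if n < n_barrier)
    i_star = min(range(len(n_grid)), key=lambda i: abs(n_grid[i] - n_barrier))
    return 0.5 * mean_speed * exp(-W[i_star] / kB_T) / z_react

# Model double-well PMF with wells at n = 5 and n = 7, barrier at n* = 6
n_grid = [4.0 + 0.01 * i for i in range(401)]          # n in [4, 8]
W = [2.0 * ((n - 6.0)**2 - 1.0)**2 for n in n_grid]    # 2 kcal/mol barrier

k_TST = tst_rate(W, n_grid, 6.0, mean_speed=1.0)
kappa = 0.4             # transmission coefficient from reactive flux (illustrative)
k_corrected = kappa * k_TST   # recrossing-corrected rate, always <= k_TST
```

The transmission coefficient kappa, obtained in practice from reactive-flux trajectories, simply scales down the TST estimate to account for recrossings.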
Theory and Model for Martensitic Transformations
Lindgård, Per-Anker; Mouritsen, Ole G.
1986-01-01
Martensitic transformations are shown to be driven by the interplay between two fluctuating strain components. No soft mode is needed, but a central peak occurs representing the dynamics of strain clusters. A two-dimensional magnetic-analog model with the martensitic-transition symmetry...
Markov models of aging: theory and practice.
Steinsaltz, David; Mohan, Gurjinder; Kolb, Martin
2012-10-01
We review and structure some of the mathematical and statistical models that have been developed over the past half century to grapple with theoretical and experimental questions about the stochastic development of aging over the life course. We suggest that the mathematical models are in large part addressing the problem of partitioning the randomness in aging: How does aging vary between individuals, and within an individual over the lifecourse? How much of the variation is inherently related to some qualities of the individual, and how much is entirely random? How much of the randomness is cumulative, and how much is merely short-term flutter? We propose that recent lines of statistical inquiry in survival analysis could usefully grapple with these questions, all the more so if they were more explicitly linked to the relevant mathematical and biological models of aging. To this end, we describe points of contact among the various lines of mathematical and statistical research. We suggest some directions for future work, including the exploration of information-theoretic measures for evaluating components of stochastic models as the basis for analyzing experiments and anchoring theoretical discussions of aging.
Study on Strand Space Model Theory
JI QingGuang(季庆光); QING SiHan(卿斯汉); ZHOU YongBin(周永彬); FENG DengGuo(冯登国)
2003-01-01
The growing interest in the application of formal methods to cryptographic protocol analysis has led to the development of a number of different ways of analyzing protocols. In this paper, it is strictly proved that if for any strand there exists at least one bundle containing it, then an entity authentication protocol is secure in the strand space model (SSM) with some small extensions. Unfortunately, the results of the attack scenario demonstrate that this protocol, the Yahalom protocol, and its modification are de facto insecure. By analyzing the reasons for the failure of formal inference in the strand space model, some deficiencies in the original SSM are pointed out. In order to break through these limitations of the analytic capability of SSM, the generalized strand space model (GSSM) induced by some protocol is proposed. In this model, some new classes of strands, such as oracle strands and higher-order oracle strands, are developed, and some notions are formalized strictly in GSSM, such as protocol attacks, valid protocol runs and successful protocol runs. GSSM can then be used to further analyze the entity authentication protocol. This analysis sheds light on why this protocol is vulnerable, while illustrating that GSSM not only can prove security protocols correct, but also can be efficiently used to construct protocol attacks. It is also pointed out that using another protocol to attack a given protocol is essentially the same as the case of using most of the protocol itself.
Modeling Environmental Concern: Theory and Application.
Hackett, Paul M. W.
1993-01-01
Human concern for the quality and protection of the natural environment forms the basis of successful environmental conservation activities. Considers environmental concern research and proposes a model that incorporates the multiple dimensions of research through which environmental concern may be evaluated. (MDH)
L∞-algebra models and higher Chern-Simons theories
Ritter, Patricia; Sämann, Christian
2016-10-01
We continue our study of zero-dimensional field theories in which the fields take values in a strong homotopy Lie algebra. In the first part, we review in detail how higher Chern-Simons theories arise in the AKSZ-formalism. These theories form a universal starting point for the construction of L∞-algebra models. We then show how to describe superconformal field theories and how to perform dimensional reductions in this context. In the second part, we demonstrate that Nambu-Poisson and multisymplectic manifolds are closely related via their Heisenberg algebras. As a byproduct of our discussion, we find central Lie p-algebra extensions of 𝔰𝔬(p + 2). Finally, we study a number of L∞-algebra models which are physically interesting and which exhibit quantized multisymplectic manifolds as vacuum solutions.
Rock mechanics modeling based on soft granulation theory
Owladeghaffari, H
2008-01-01
This paper describes the application of information granulation theory to the design of rock engineering flowcharts. First, an overall flowchart based on information granulation theory is highlighted. Information granulation theory, in crisp (non-fuzzy) or fuzzy format, can take engineering experience (especially fuzzy, incomplete, or superfluous information) and engineering judgment into account in each step of the design procedure, while suitable modeling instruments are employed. In this manner, and as an extension of soft modeling instruments, crisp and fuzzy granules are obtained from monitored data sets using three combinations of Self Organizing Map (SOM), Neuro-Fuzzy Inference System (NFIS), and Rough Set Theory (RST). The core of our algorithms is the balancing of crisp (rough or non-fuzzy) granules and sub-fuzzy granules within non-fuzzy information (initial granulation) over the open-close iterations. Using different criteria on balancing best granules (information pock...
Applying learning theories and instructional design models for effective instruction.
Khalil, Mohammed K; Elkhider, Ihsan A
2016-06-01
Faculty members in higher education are involved in many instructional design activities without formal training in learning theories and the science of instruction. Learning theories provide the foundation for the selection of instructional strategies and allow for reliable prediction of their effectiveness. To achieve effective learning outcomes, the science of instruction and instructional design models are used to guide the development of instructional design strategies that elicit appropriate cognitive processes. Here, the major learning theories are discussed and selected examples of instructional design models are explained. The main objective of this article is to present the science of learning and instruction as theoretical evidence for the design and delivery of instructional materials. In addition, this article provides a practical framework for implementing those theories in the classroom and laboratory.
Perturbation theory in the catalytic rate constant of the Henri-Michaelis-Menten enzymatic reaction.
Bakalis, Evangelos; Kosmas, Marios; Papamichael, Emmanouel M
2012-11-01
The Henri-Michaelis-Menten (HMM) mechanism of enzymatic reaction is studied by means of perturbation theory in the reaction rate constant k_2 of product formation. We present analytical solutions that provide the concentrations of the enzyme (E), the substrate (S), the enzyme-substrate complex (C), and the product (P) as functions of time. For k_2 small compared to k_{-1}, we properly describe the entire enzymatic activity from the beginning of the reaction up to longer times without imposing extra conditions on the initial concentrations E_0 and S_0, which can be comparable or much different.
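The underlying mass-action kinetics can be sketched by direct numerical integration; the rate constants below are placeholders chosen so that k_2 is small compared with k_{-1}, the regime treated perturbatively in the paper:

```python
def hmm_step(state, dt, k1, km1, k2):
    """One RK4 step of the Henri-Michaelis-Menten mass-action ODEs
    E + S <-> C -> E + P. Rate constants are illustrative placeholders."""
    def deriv(s):
        E, S, C, P = s
        forward = k1 * E * S
        return (-forward + (km1 + k2) * C,   # dE/dt
                -forward + km1 * C,          # dS/dt
                forward - (km1 + k2) * C,    # dC/dt
                k2 * C)                      # dP/dt
    def axpy(s, d, h):
        return tuple(x + h * y for x, y in zip(s, d))
    a = deriv(state)
    b = deriv(axpy(state, a, dt / 2))
    c = deriv(axpy(state, b, dt / 2))
    d = deriv(axpy(state, c, dt))
    return tuple(x + dt / 6 * (p + 2 * q + 2 * r + s)
                 for x, p, q, r, s in zip(state, a, b, c, d))

E0, S0 = 1.0, 10.0
state = (E0, S0, 0.0, 0.0)                   # (E, S, C, P) at t = 0
for _ in range(10000):                       # integrate to t = 10
    state = hmm_step(state, 1e-3, k1=1.0, km1=1.0, k2=0.05)
E, S, C, P = state
# Mass conservation holds throughout: E + C = E0 and S + C + P = S0
```

The perturbative solutions in the paper approximate exactly this system analytically when k_2 << k_{-1}.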
Functional response models to estimate feeding rates of wading birds
Collazo, J.A.; Gilliam, J.F.; Miranda-Castro, L.
2010-01-01
Forager (predator) abundance may mediate feeding rates in wading birds. Yet, when modeled, feeding rates are typically derived from the purely prey-dependent Holling Type II (HoII) functional response model. Estimates of feeding rates are necessary to evaluate wading bird foraging strategies and their role in food webs; thus, models that incorporate predator dependence warrant consideration. Here, data collected in a mangrove swamp in Puerto Rico in 1994 were reanalyzed, reporting feeding rates for mixed-species flocks after comparing fits of the HoII model, as used in the original work, to the Beddington-DeAngelis (BD) and Crowley-Martin (CM) predator-dependent models. Model CM received most support (AICc wi = 0.44), but models BD and HoII were plausible alternatives (ΔAICc ≤ 2). Results suggested that feeding rates were constrained by predator abundance. Reductions in rates were attributed to interference, which was consistent with the independently observed increase in aggression as flock size increased (P rates. However, inferences derived from the HoII model, as used in the original work, were sound. While Holling's Type II and other purely prey-dependent models have fostered advances in wading bird foraging ecology, evaluating models that incorporate predator dependence could lead to a more adequate description of the data and of the processes of interest. The mechanistic bases used to derive the models employed here lead to biologically interpretable results and advance the understanding of wading bird foraging ecology.
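The three functional response models compared above can be sketched as follows; the parameter values are illustrative, not the fitted values from the study:

```python
def holling_II(N, a, h):
    """Purely prey-dependent Holling Type II feeding rate:
    N = prey density, a = attack rate, h = handling time."""
    return a * N / (1 + a * h * N)

def beddington_deangelis(N, Pred, a, h, c):
    """Beddington-DeAngelis: adds interference proportional to the
    number of competing predators (Pred - 1), strength c."""
    return a * N / (1 + a * h * N + c * (Pred - 1))

def crowley_martin(N, Pred, a, h, c):
    """Crowley-Martin: interference acts even while handling prey."""
    return a * N / ((1 + a * h * N) * (1 + c * (Pred - 1)))

# Illustrative parameters only; none come from the paper.
N, a, h, c = 50.0, 0.2, 0.05, 0.1
solo = holling_II(N, a, h)                       # lone forager
in_flock_bd = beddington_deangelis(N, 10, a, h, c)
in_flock_cm = crowley_martin(N, 10, a, h, c)
# Interference lowers the feeding rate: in_flock_* < solo
```

With a single predator both predator-dependent forms reduce to the Holling Type II rate, which is why HoII remains a plausible special case in the model comparison.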
Razeto-Barry, Pablo; Díaz, Javier; Vásquez, Rodrigo A
2012-06-01
The general theories of molecular evolution depend on relatively arbitrary assumptions about the relative distribution and rate of advantageous, deleterious, neutral, and nearly neutral mutations. The Fisher geometrical model (FGM) has been used to make distributions of mutations biologically interpretable. We explored an FGM-based molecular model to represent molecular evolutionary processes typically studied by nearly neutral and selection models, but in which distributions and relative rates of mutations with different selection coefficients are a consequence of biologically interpretable parameters, such as the average size of the phenotypic effect of mutations and the number of traits (complexity) of organisms. A variant of the FGM-based model that we called the static regime (SR) represents evolution as a nearly neutral process in which substitution rates are determined by a dynamic substitution process in which the population's phenotype remains around a suboptimum equilibrium fitness produced by a balance between slightly deleterious and slightly advantageous compensatory substitutions. As in previous nearly neutral models, the SR predicts a negative relationship between molecular evolutionary rate and population size; however, SR does not have the unrealistic properties of previous nearly neutral models such as the narrow window of selection strengths in which they work. In addition, the SR suggests that compensatory mutations cannot explain the high rate of fixations driven by positive selection currently found in DNA sequences, contrary to what has been previously suggested. We also developed a generalization of SR in which the optimum phenotype can change stochastically due to environmental or physiological shifts, which we called the variable regime (VR). VR models evolution as an interplay between adaptive processes and nearly neutral steady-state processes. When strong environmental fluctuations are incorporated, the process becomes a selection model
Automated Physico-Chemical Cell Model Development through Information Theory
Peter J. Ortoleva
2005-11-29
The objective of this project was to develop predictive models of the chemical responses of microbial cells to variations in their surroundings. The application of these models is optimization of environmental remediation and energy-producing biotechnical processes. The principles on which our project is based are as follows: chemical thermodynamics and kinetics; automation of calibration through information theory; integration of multiplex data (e.g. cDNA microarrays, NMR, proteomics), cell modeling, and bifurcation theory to overcome cellular complexity; and the use of multiplex data and information theory to calibrate and run an incomplete model. In this report we review four papers summarizing key findings and a web-enabled, multiple-module workflow we have implemented that consists of a set of interoperable systems biology computational modules.
From integrable models to gauge theories: Festschrift Matinyan (Sergei G)
Gurzadyan, V G
2002-01-01
This collection of twenty articles in honor of the noted physicist and mentor Sergei Matinyan focuses on topics that are of fundamental importance to high-energy physics, field theory and cosmology. The topics range from integrable quantum field theories, three-dimensional Ising models, parton models and tests of the Standard Model, to black holes in loop quantum gravity, the cosmological constant and magnetic fields in cosmology. A pedagogical essay by Lev Okun concentrates on the problem of fundamental units. The articles have been written by well-known experts and are addressed to graduate
A macro-physics model of depreciation rate in economic exchange
Marmont Lobo, Rui F.; de Sousa, Miguel Rocha
2014-02-01
This article takes a new approach to a known fundamental result: barter, or trade, increases economic value. It successfully bridges the gap between the theory of value and the exchange process attached to the transition from endowments to the equilibrium in the core and contract curve. First, we summarise the theory of value; in Section 2, we present the Edgeworth (1881) box and an axiomatic approach; and in Section 3, we apply our pure exchange model. Finally, in Section 4, using our open econo-physics pure barter (EPB) model, we derive an improvement in value, which means that pure barter leads to a decline in the depreciation rate.
A new conceptual model for aeolian transport rates on beaches
de Vries, S.; Stive, M.J.F.; van Rijn, L.; Ranasinghe, R.
2012-01-01
In this paper a new conceptual model for aeolian sediment transport rates is presented. Traditional sediment transport formulations have known limitations when applied to coastal beach situations. A linear model for sediment transport rates with respect to wind speed is proposed and supported by both data and numerical model simulations. The presented model does not solve complex wind fields and is therefore very easily applicable. Physical principles such as the presence of a threshold veloc...
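A minimal sketch of the proposed linear transport law alongside a traditional cubic (Bagnold-type) form, with purely illustrative coefficients and threshold (the paper's calibrated values are not reproduced here):

```python
def aeolian_flux_linear(u, u_threshold, c):
    """Transport rate linear in wind speed above a threshold, the form
    proposed in the paper; c and u_threshold are illustrative."""
    return c * max(0.0, u - u_threshold)

def aeolian_flux_cubic(u, u_threshold, c):
    """Traditional cubic (Bagnold-type) formulation, for contrast."""
    return c * max(0.0, u**3 - u_threshold**3)

u_t = 7.0   # m/s, illustrative threshold velocity
winds = (5.0, 8.0, 12.0)
rates = [aeolian_flux_linear(u, u_t, 0.05) for u in winds]
# Below the threshold the flux is zero; above it, flux grows linearly,
# avoiding the steep sensitivity of cubic laws to wind-speed errors.
```

The linear form's weaker sensitivity to wind speed is one reason it can be applied without resolving the complex wind field.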
Spectral and scattering theory for translation invariant models in quantum field theory
Rasmussen, Morten Grud
This thesis is concerned with a large class of massive translation invariant models in quantum field theory, including the Nelson model and the Fröhlich polaron. The models in the class describe a matter particle, e.g. a nucleon or an electron, linearly coupled to a second quantised massive scalar...... of the essential energy-momentum spectrum and either the two-body threshold, if there are no exited isolated mass shells, or the one-body threshold pertaining to the first exited isolated mass shell, if it exists. For the model restricted to the vacuum and one-particle sectors, the absence of singular continuous...... spectrum is proven to hold globally and scattering theory of the model is studied using time-dependent methods, of which the main result is asymptotic completeness....
Chiral field theories as models for hadron substructure
Kahana, S.H.
1987-03-01
A model for the nucleon as soliton of quarks interacting with classical meson fields is described. The theory, based on the linear sigma model, is renormalizable and capable of including sea quarks straightforwardly. Application to nuclear matter is made in a Wigner-Seitz approximation.
Pilot evaluation in TENCompetence: a theory-driven model
J. Schoonenboom; H. Sligte; A. Moghnieh; M. Specht; C. Glahn; K. Stefanov
2008-01-01
This paper describes a theory-driven evaluation model that is used in evaluating four pilots in which an infrastructure for lifelong competence development, which is currently being developed, is validated. The model makes visible the separate implementation steps that connect the envisaged infrastr
Reciprocal Ontological Models Show Indeterminism Comparable to Quantum Theory
Bandyopadhyay, Somshubhro; Banik, Manik; Bhattacharya, Some Sankar; Ghosh, Sibasish; Kar, Guruprasad; Mukherjee, Amit; Roy, Arup
2016-12-01
We show that within the class of ontological models due to Harrigan and Spekkens, those satisfying preparation-measurement reciprocity must allow indeterminism comparable to that in quantum theory. Our result implies that one can design a quantum random number generator for which it is impossible, even in principle, to construct a reciprocal deterministic model.
Kinetic theories for spin models for cooperative relaxation dynamics
Pitts, Steven Jerome
The facilitated kinetic Ising models with asymmetric spin flip constraints introduced by Jackle and co-workers [J. Jackle, S. Eisinger, Z. Phys. B 84, 115 (1991); J. Reiter, F. Mauch, J. Jackle, Physica A 184, 458 (1992)] exhibit complex relaxation behavior in their associated spin density time correlation functions. This includes the growth of relaxation times over many orders of magnitude when the thermodynamic control parameter is varied, and, in some cases, ergodic-nonergodic transitions. Relaxation equations for the time dependence of the spin density autocorrelation function for a set of these models are developed that relate this autocorrelation function to the irreducible memory function of Kawasaki [K. Kawasaki, Physica A 215, 61 (1995)] using a novel diagrammatic series approach. It is shown that the irreducible memory function in a theory of the relaxation of an autocorrelation function in a Markov model with detailed balance plays the same role as the part of the memory function approximated by a polynomial function of the autocorrelation function with positive coefficients in schematic simple mode coupling theories for supercooled liquids [W. Gotze, in Liquids, Freezing and the Glass Transition, D. Levesque, J. P. Hansen, J. Zinn-Justin eds., 287 (North Holland, New York, 1991)]. Sets of diagrams in the series for the irreducible memory function are summed which lead to approximations of this type. The behavior of these approximations is compared with known results from previous analytical calculations and from numerical simulations. For the simplest one dimensional model, relaxation equations that are closely related to schematic extended mode coupling theories [W. Gotze, ibid] are also derived using the diagrammatic series. Comparison of the results of these approximate theories with simulation data shows that these theories improve significantly on the results of the theories of the simple schematic mode coupling theory type. The potential
The origin of discrete symmetries in F-theory models
2015-01-01
While non-abelian groups are undoubtedly the cornerstone of Grand Unified Theories (GUTs), phenomenology shows that the role of abelian and discrete symmetries is equally important in model building. The latter are the appropriate tool to suppress undesired proton decay operators and various flavour violating interactions, to generate a hierarchical fermion mass spectrum, etc. In F-theory, GUT symmetries are linked to the singularities of the elliptically fibred K3 manifolds; they are of ADE ...
Lenses on reading: an introduction to theories and models
Tracey, Diane H
2017-01-01
Widely adopted as an ideal introduction to the major models of reading, this text guides students to understand and facilitate children's literacy development. Coverage encompasses the full range of theories that have informed reading instruction and research, from classical thinking to cutting-edge cognitive, social learning, physiological, and affective perspectives. Readers learn how theory shapes instructional decision making and how to critically evaluate the assumptions and beliefs that underlie their own teaching. Pedagogical features include framing and discussion questions, learning a
Fuzzy stochastic optimization: theory, models and applications
Wang, Shuming
2012-01-01
Covering in detail both theoretical and practical perspectives, this book is a self-contained and systematic depiction of current fuzzy stochastic optimization that deploys the fuzzy random variable as a core mathematical tool to model the integrated fuzzy random uncertainty. It proceeds in an orderly fashion from the requisite theoretical aspects of the fuzzy random variable to fuzzy stochastic optimization models and their real-life case studies. The volume reflects the fact that randomness and fuzziness (or vagueness) are two major sources of uncertainty in the real world, with significant implications in a number of settings. In industrial engineering, management and economics, the chances are high that decision makers will be confronted with information that is simultaneously probabilistically uncertain and fuzzily imprecise, and optimization in the form of a decision must be made in an environment that is doubly uncertain, characterized by a co-occurrence of randomness and fuzziness. This book begins...
Nonlinear model predictive control: theory and algorithms
Grüne, Lars
2017-01-01
This book offers readers a thorough and rigorous introduction to nonlinear model predictive control (NMPC) for discrete-time and sampled-data systems. NMPC schemes with and without stabilizing terminal constraints are detailed, and intuitive examples illustrate the performance of different NMPC variants. NMPC is interpreted as an approximation of infinite-horizon optimal control so that important properties like closed-loop stability, inverse optimality and suboptimality can be derived in a uniform manner. These results are complemented by discussions of feasibility and robustness. An introduction to nonlinear optimal control algorithms yields essential insights into how the nonlinear optimization routine—the core of any nonlinear model predictive controller—works. Accompanying software in MATLAB® and C++ (downloadable from extras.springer.com/), together with an explanatory appendix in the book itself, enables readers to perform computer experiments exploring the possibilities and limitations of NMPC. T...
Computational hemodynamics: theory, modelling and applications
Tu, Jiyuan; Wong, Kelvin Kian Loong
2015-01-01
This book discusses geometric and mathematical models that can be used to study fluid and structural mechanics in the cardiovascular system. Where traditional research methodologies in the human cardiovascular system are challenging due to its invasive nature, several recent advances in medical imaging and computational fluid and solid mechanics modelling now provide new and exciting research opportunities. This emerging field of study is multi-disciplinary, involving numerical methods, computational science, fluid and structural mechanics, and biomedical engineering. Certainly any new student or researcher in this field may feel overwhelmed by the wide range of disciplines that need to be understood. This unique book is one of the first to bring together knowledge from multiple disciplines, providing a starting point to each of the individual disciplines involved, attempting to ease the steep learning curve. This book presents elementary knowledge on the physiology of the cardiovascular system; basic knowl...
Network Data: Statistical Theory and New Models
2016-02-17
Debris Discs: Modeling/theory review
Thébault, P.
2012-03-01
An impressive amount of photometric, spectroscopic and imaging observations of circumstellar debris discs has been accumulated over the past three decades, revealing that they come in all shapes and flavours, from young post-planet-formation systems like Beta-Pic to much older ones like Vega. What we see in these systems are small grains, which are probably only the tip of the iceberg of a vast population of larger (undetectable) collisionally-eroding bodies, leftover from the planet-formation process. Understanding the spatial structure, physical properties, origin and evolution of this dust is of crucial importance, as it is our only window into what is going on in these systems. Dust can be used as a tracer of the distribution of their collisional progenitors and of possible hidden massive perturbers, and also makes it possible to derive valuable information about the disc's total mass, size distribution or chemical composition. I will review the state of the art in numerical models of debris discs, and present some important issues that are explored by current modelling efforts: planet-disc interactions, the link between cold (i.e. Herschel-observed) and hot discs, the effect of binarity, transient versus continuous processes, etc. I will finally present some possible perspectives for the development of future models.
Neurath, C.; Smith, R. B.
The growth of unstable structures was studied experimentally in layered wax models. The rheological properties of the two wax types were determined independently by a series of cylinder compression tests. Both waxes exhibited (1) a non-Newtonian stress vs strain-rate relationship, (2) strain softening and (3) temperature-dependent viscosity. The stress-strain-rate relationships approximated a power-law, with stress exponents of 5 for the microcrystalline wax and 1.8 for paraffin wax. Blocks of paraffin with a single embedded layer of microcrystalline wax were deformed in two-dimensional pure shear with the layer oriented either parallel to the compressive strain axis so that it shortened and folded, or perpendicular to that axis so that it would stretch and boudinage would form. The growth rates of tiny initial disturbances were measured. The growth rates for folding and boudinage were much higher than could be accounted for by theories assuming Newtonian material properties. Theories taking non-Newtonian behaviour into account (Smith, R. B. 1975. Bull. geol. Soc. Am. 86, 1601-1609; Fletcher, R. C. 1974. Am. J. Sci. 274, 1029-1043) better describe the folding growth rates. Boudinage, however, grew almost three times faster than would be predicted even by existing non-Newtonian theory. A possible reason for this discrepancy is that the waxes do not exhibit steady-state creep as assumed in the theory. We, therefore, extend the theory to include strain-softening. The crucial step in this theory is the use of a scalar measure of the deformation as a state variable in the constitutive law. In this way the isotropic manifestation of strain-softening can be taken into account. The analysis shows that strain-softening can lead to greatly increased boudinage growth rates while having little influence on the growth rates of folds, which is in agreement with the experiments.
A class of effective field theory models of cosmic acceleration
Bloomfield, Jolyon K.; Flanagan, Éanna É., E-mail: jkb84@cornell.edu, E-mail: eef3@cornell.edu [Center for Radiophysics and Space Research, Cornell University, Space Science Building, Ithaca, NY 14853 (United States)
2012-10-01
We explore a class of effective field theory models of cosmic acceleration involving a metric and a single scalar field. These models can be obtained by starting with a set of ultralight pseudo-Nambu-Goldstone bosons whose couplings to matter satisfy the weak equivalence principle, assuming that one boson is lighter than all the others, and integrating out the heavier fields. The result is a quintessence model with matter coupling, together with a series of correction terms in the action in a covariant derivative expansion, with specific scalings for the coefficients. After eliminating higher derivative terms and exploiting the field redefinition freedom, we show that the resulting theory contains nine independent free functions of the scalar field when truncated at four derivatives. This is in contrast to the four free functions found in similar theories of single-field inflation, where matter is not present. We discuss several different representations of the theory that can be obtained using the field redefinition freedom. For perturbations to the quintessence field today on subhorizon lengthscales larger than the Compton wavelength of the heavy fields, the theory is weakly coupled and natural in the sense of 't Hooft. The theory admits a regime where the perturbations become modestly nonlinear, but very strong nonlinearities lie outside its domain of validity.
Standage, Martyn; Duda, Joan L; Ntoumanis, Nikos
2006-03-01
In the present study, we used a model of motivation grounded in self-determination theory (Deci & Ryan, 1985, 1991; Ryan & Deci, 2000a, 2000b, 2002) to examine the relationship between physical education (PE) students' motivational processes and ratings of their effort and persistence as provided by their PE teacher. Data were obtained from 394 British secondary school students (204 boys, 189 girls, 1 gender not specified; M age = 11.97 years; SD = .89; range = 11-14 years) who responded to a multisection inventory (tapping autonomy-support, autonomy, competence, relatedness, and self-determined motivation). The students' respective PE teachers subsequently provided ratings reflecting the effort and persistence each student exhibited in their PE classes. The hypothesized relationships among the study variables were examined via structural equation modeling analysis using latent factors. Results of maximum likelihood analysis using the bootstrapping method revealed the proposed model demonstrated a good fit to the data, chi-squared (292) = 632.68, p motivation positively predicted teacher ratings of effort and persistence in PE. The findings are discussed with regard to enhancing student motivation in PE settings.
Rate Theory of Ion Pairing at the Water Liquid–Vapor Interface
Dang, Liem X.; Schenter, Gregory K.; Wick, Collin D.
2017-04-28
There is overwhelming evidence that certain ions are present near the vapor–liquid interface of aqueous salt solutions. Despite their importance in many chemical reactive phenomena, how ion–ion interactions are affected by interfaces, and their influence on kinetic processes, is not well understood. Molecular simulations were carried out to examine the thermodynamics and kinetics of small alkali halide ions in the bulk and near the water vapor–liquid interface. We calculated dissociation rates using classical transition state theory, and corrected them with transmission coefficients determined by the reactive flux method and Grote-Hynes theory. Our results show that, in addition to affecting the free energy of ions in solution, the interfacial environments significantly influence the kinetics of ion pairing. The results obtained from the reactive flux method and Grote-Hynes theory on the relaxation time present an unequivocal picture of the interface suppressing ion dissociation. This work was supported by the U.S. Department of Energy, Office of Science, Office of Basic Energy Sciences, Division of Chemical Sciences, Geosciences, and Biosciences. The calculations were carried out using computer resources provided by the Office of Basic Energy Sciences.
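As a minimal sketch of the correction scheme described here, classical TST gives an upper-bound rate that a transmission coefficient (from reactive flux or Grote-Hynes theory) then scales down. All numbers below are illustrative placeholders, not values from this work:

```python
import math

KB = 1.380649e-23    # Boltzmann constant, J/K
H = 6.62607015e-34   # Planck constant, J*s
NA = 6.02214076e23   # Avogadro constant, 1/mol

def tst_rate(dG_barrier_J, T):
    """Classical transition state theory rate (Eyring form), in 1/s."""
    return (KB * T / H) * math.exp(-dG_barrier_J / (KB * T))

def corrected_rate(dG_barrier_J, T, kappa):
    """TST rate scaled by a transmission coefficient kappa <= 1,
    e.g. from a reactive flux or Grote-Hynes calculation."""
    return kappa * tst_rate(dG_barrier_J, T)

# Invented numbers: a 30 kJ/mol dissociation barrier at 300 K, with
# interfacial recrossings reducing the rate by kappa = 0.4.
dG = 30e3 / NA                          # J per ion pair
k_tst = tst_rate(dG, 300.0)
k_corr = corrected_rate(dG, 300.0, 0.4)
print(k_corr / k_tst)                   # ratio equals kappa by construction
```

A smaller kappa near the interface than in the bulk is exactly the "suppressed dissociation" picture the abstract describes.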
Bukoski, A.; Blumling, D.; Harrison, I.
2003-01-01
A model of gas-surface reactivity is developed based on the ideas that (a) adsorbate chemistry is a local phenomenon, (b) the active system energy of an adsorbed molecule and a few immediately adjacent surface atoms suffices to fix microcanonical rate constants for surface kinetic processes such as desorption and dissociation, and (c) energy exchange between the local adsorbate-surface complexes and the surrounding substrate can be modeled via a Master equation to describe the system/heat reservoir coupling. The resulting microcanonical unimolecular rate theory (MURT) for analyzing and predicting both thermal equilibrium and nonequilibrium kinetics for surface reactions is applied to the dissociative chemisorption of methane on Pt(111). Energy exchange due to phonon-mediated energy transfer between the local adsorbate-surface complexes and the surface is explored and estimated to be insignificant for the reactive experimental conditions investigated here. Simulations of experimental molecular beam data indicate that the apparent threshold energy for CH4 dissociative chemisorption on Pt(111) is E0=0.61 eV (over a C-H stretch reaction coordinate), the local adsorbate-surface complex includes three surface oscillators, and the pooled energy from 16 active degrees of freedom is available to help surmount the dissociation barrier. For nonequilibrium molecular beam experiments, predictions are made for the initial methane dissociative sticking coefficient as a function of isotope, normal translational energy, molecular beam nozzle temperature, and surface temperature. MURT analysis of the thermal programmed desorption of CH4 physisorbed on Pt(111) finds the physisorption well depth is 0.16 eV. Thermal equilibrium dissociative sticking coefficients for methane on Pt(111) are predicted for the temperature range from 250-2000 K. Tolman relations for the activation energy under thermal equilibrium conditions and for a variety of "effective activation energies" under
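The microcanonical ingredient of such a treatment, a rate fixed by the sum of states at the transition state and the density of states of the local complex, can be sketched with a Beyer-Swinehart direct count. This is a generic RRKM-style illustration under our own simplifying assumptions, not the MURT master-equation model itself; any frequencies used with it are invented:

```python
C_CM = 2.99792458e10  # speed of light in cm/s

def bs_counts(freqs_cm, e_max_cm, grain=10.0):
    """Beyer-Swinehart direct count of harmonic-oscillator states,
    returned as the number of states in each grain-cm^-1 energy bin."""
    n = int(e_max_cm / grain) + 1
    w = [0.0] * n
    w[0] = 1.0  # ground state
    for nu in freqs_cm:
        step = max(1, int(round(nu / grain)))
        for i in range(step, n):
            w[i] += w[i - step]
    return w

def rrkm_rate(ts_freqs, reactant_freqs, e_cm, e0_cm, grain=10.0):
    """Microcanonical rate k(E) = N_ts(E - E0) / (h * rho(E)); with all
    energies in cm^-1 the prefactor reduces to c in cm/s."""
    if e_cm <= e0_cm:
        return 0.0
    n_ts = sum(bs_counts(ts_freqs, e_cm - e0_cm, grain))      # sum of states
    rho = bs_counts(reactant_freqs, e_cm, grain)[-1] / grain  # per cm^-1
    return C_CM * n_ts / rho
```

Below the threshold E0 the rate is zero, and it rises with the energy pooled in the active degrees of freedom, which is the qualitative behavior the abstract relies on.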
Automated Production Flow Line Failure Rate Mathematical Analysis with Probability Theory
Tan Chan Sin
2014-12-01
Automated lines are widely used in the industries, especially for mass production and product customization. The productivity of an automated line is a crucial indicator of the output and performance of the production. In real conditions, failure or breakdown of a station or mechanism commonly occurs in an automated line because of technological and technical problems, and this strongly affects productivity. The failure rates of automated lines are, however, rarely expressed or analysed in mathematical form. This paper presents a mathematical analysis, using probability theory, of failure conditions in an automated line. The resulting mathematical expressions for the failure rates can reproduce and forecast the productivity output accurately.
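One simple way station failure probabilities can enter a closed-form productivity expression for a serial line is sketched below. The single shared repair time and all numbers are our illustrative assumptions, not the paper's model:

```python
def line_productivity(cycle_time_s, failure_probs_per_cycle, mean_repair_s):
    """Expected output rate (parts/s) of a serial automated line.
    Each station i fails with probability failure_probs_per_cycle[i] per
    cycle; any failure stops the whole line for mean_repair_s on average,
    so the expected downtime per cycle adds to the cycle time."""
    expected_downtime = sum(failure_probs_per_cycle) * mean_repair_s
    return 1.0 / (cycle_time_s + expected_downtime)

# Toy line: 5 stations, 10 s cycle, each station fails on 1% of cycles,
# 120 s mean repair.
q = line_productivity(10.0, [0.01] * 5, 120.0)
print(round(q * 3600, 1))  # parts per hour
```

Raising any station's failure probability lowers the whole-line output, which is the coupling between failure rate and productivity the abstract analyses.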
Invisible 'glue' bosons in model field theory
Shirokov, M I
2002-01-01
Fermionic psi(x) and bosonic phi(x) fields with vector coupling are discussed. It is shown that the 'clothed' bosons of the model do not interact with fermions or with each other. If phi(x) does not interact with the other fields of particle physics, then the 'clothed' bosons have the properties of cosmological 'dark matter': they cannot be detected in Earth's laboratories. This cause of boson invisibility contrasts with the origin of the unobservability of isolated gluons in QCD, which is explained by colour confinement.
Marxist Crisis Theory and the Rate of Profit in the U.S. Economy during 1975-2008
Xie, Fusheng; Li, An; Zhu, Andong
2013-01-01
The cyclical fall in the rate of profit reveals the basic mechanism of cyclical fluctuations in the economy. A new synthesis of Marxist crisis theory requires calculating the rate of profit as well as considering factors such as capital-labor relations, realization of value, the organic composition of capital, and money and credit. Empirical studies suggest that the U.S. profit rate in the real economy showed no signs of effective recovery during 1975-2008. The shrinking profit share caused by the growing employment of non-production workers turns out to be the major factor contributing to the cyclical fall in the rate of profit, which in turn may be traced to the reorganization of the production process before the 1990s and the growing flexibility of employment relations after the 1990s. With the long-term stagnation of the rate of profit, a new, financialised model of accumulation that depends heavily on increasing liquidity in the economy took shape in the United States, making the U.S. economy more fragile. The current crisis is but a natural result of the intrinsic contradiction between the Fed's efforts to encourage financialised accumulation and to maintain the dollar as a legitimate quasi-international reserve currency.
Solutions of two-factor models with variable interest rates
Li, Jinglu; Clemons, C. B.; Young, G. W.; Zhu, J.
2008-12-01
The focus of this work is on numerical solutions to two-factor option pricing partial differential equations with variable interest rates. Two interest rate models, the Vasicek model and the Cox-Ingersoll-Ross model (CIR), are considered. Emphasis is placed on the definition and implementation of boundary conditions for different portfolio models, and on appropriate truncation of the computational domain. An exact solution to the Vasicek model and an exact solution for the price of bonds convertible to stock at expiration under a stochastic interest rate are derived. The exact solutions are used to evaluate the accuracy of the numerical simulation schemes. For the numerical simulations the pricing solution is analyzed as the market completeness decreases from the ideal complete level to one with higher volatility of the interest rate and a slower mean-reverting environment. Simulations indicate that the CIR model yields more reasonable results than the Vasicek model in a less complete market.
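A minimal sketch of the two short-rate models named here, assuming a basic Euler-Maruyama discretization and the standard Vasicek closed-form zero-coupon bond price (any parameter values used with it are arbitrary, and this is not the paper's numerical scheme):

```python
import math
import random

def simulate_short_rate(model, r0, kappa, theta, sigma, T, n, seed=0):
    """Euler-Maruyama path of a mean-reverting short rate.
    model='vasicek': dr = kappa*(theta - r)*dt + sigma*dW
    model='cir':     dr = kappa*(theta - r)*dt + sigma*sqrt(r)*dW
    (the CIR diffusion is truncated at r = 0 to keep sqrt real)."""
    rng = random.Random(seed)
    dt = T / n
    r, path = r0, [r0]
    for _ in range(n):
        dw = rng.gauss(0.0, math.sqrt(dt))
        drift = kappa * (theta - r) * dt
        if model == "vasicek":
            r += drift + sigma * dw
        else:  # 'cir'
            r += drift + sigma * math.sqrt(max(r, 0.0)) * dw
        path.append(r)
    return path

def vasicek_bond_price(r0, kappa, theta, sigma, T):
    """Closed-form Vasicek zero-coupon bond price P(0, T)."""
    B = (1.0 - math.exp(-kappa * T)) / kappa
    A = math.exp((theta - sigma ** 2 / (2.0 * kappa ** 2)) * (B - T)
                 - sigma ** 2 * B ** 2 / (4.0 * kappa))
    return A * math.exp(-B * r0)
```

Closed-form prices like this are exactly what the paper uses to benchmark its numerical PDE schemes; in the sigma = 0 limit the bond price reduces to deterministic discounting.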
Rate Modelling of Alkali Gelatinization at Low Conversions
Osoka Emmanuel CHIBUIKE
2010-12-01
The rate of starch gelatinisation under strongly alkaline conditions was modelled at low conversion (x < 0.4), with the degree of gelatinisation (conversion) defined in terms of sample viscosity. Experimental data at low conversion were fitted to eleven rate models based on the mechanism of the unreacted-core model, and the rate-controlling steps were determined. Film diffusion (Stokes regime) and product-layer diffusion together control the rate of reaction for all sodium hydroxide concentrations at low conversion (x < 0.4), with dominance shifting from film diffusion to product-layer diffusion as the sodium hydroxide concentration is increased.
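Fitting low-conversion data to unreacted-core rate expressions can be sketched as a one-parameter least-squares fit of t = tau * g(x) for each candidate controlling step. The g functions below are the textbook shrinking-core forms for a sphere, used here as an illustrative stand-in for the paper's eleven models; the data would be the user's own:

```python
def g_film(x):
    """Film diffusion control (Stokes regime): t/tau = x."""
    return x

def g_product_layer(x):
    """Product (ash) layer diffusion control, spherical particle."""
    return 1.0 - 3.0 * (1.0 - x) ** (2.0 / 3.0) + 2.0 * (1.0 - x)

def g_reaction(x):
    """Surface reaction control, spherical particle."""
    return 1.0 - (1.0 - x) ** (1.0 / 3.0)

def fit_tau(times, conversions, g):
    """Least-squares fit of t = tau * g(x) through the origin; returns
    (tau, R^2) so competing rate-controlling steps can be ranked."""
    gs = [g(x) for x in conversions]
    tau = sum(t * gi for t, gi in zip(times, gs)) / sum(gi * gi for gi in gs)
    mean_t = sum(times) / len(times)
    ss_res = sum((t - tau * gi) ** 2 for t, gi in zip(times, gs))
    ss_tot = sum((t - mean_t) ** 2 for t in times)
    return tau, 1.0 - ss_res / ss_tot
```

Ranking the R^2 of each candidate g is a simple way to decide which step (or combination of steps) controls the rate, which is the kind of discrimination the abstract reports.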
Modeling baroreflex regulation of heart rate during orthostatic stress
Olufsen, Mette; Tran, Hien T.; Ottesen, Johnny T.
2006-01-01
The model uses blood pressure measured in the finger as an input to model heart rate dynamics in response to changes in baroreceptor nerve firing rate, sympathetic and parasympathetic responses, the vestibulo-sympathetic reflex, and concentrations of norepinephrine and acetylcholine. We formulate an inverse...
A new conceptual model for aeolian transport rates on beaches
De Vries, S.; Stive, M.J.F.; Van Rijn, L.; Ranasinghe, R.
2012-01-01
In this paper a new conceptual model for aeolian sediment transport rates is presented. Traditional sediment transport formulations have known limitations when applied to coastal beach situations. A linear model for sediment transport rates with respect to wind speed is proposed and supported by
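The proposed linear dependence of transport on wind speed can be sketched next to a classic Bagnold-type cubic law for comparison. The coefficients and threshold below are illustrative placeholders, not the authors' calibrated formulation:

```python
def aeolian_flux_linear(u, u_threshold, c):
    """Linear conceptual model: transport grows linearly with wind speed
    above a threshold and is zero below it (coefficients illustrative)."""
    return c * max(u - u_threshold, 0.0)

def aeolian_flux_cubic(u, u_threshold, c):
    """Bagnold-type cubic dependence on wind speed, for comparison."""
    return c * u ** 3 if u > u_threshold else 0.0
```

The practical difference is that the linear model responds far less steeply to gusts above the threshold, which is one reason traditional (cubic) formulations can overpredict on beaches.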
Delineating the Average Rate of Change in Longitudinal Models
Kelley, Ken; Maxwell, Scott E.
2008-01-01
The average rate of change is a concept that has been misunderstood in the literature. This article attempts to clarify the concept and show unequivocally the mathematical definition and meaning of the average rate of change in longitudinal models. The slope from the straight-line change model has at times been interpreted as if it were always the…
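The distinction at issue can be made concrete in a few lines: the average rate of change over an interval is the endpoint difference divided by elapsed time, while the OLS straight-line slope is a variance-weighted quantity, and the two disagree for nonlinear trajectories. The example data are invented:

```python
def average_rate_of_change(times, values):
    """Average rate of change over the observed interval:
    (last value - first value) / elapsed time."""
    return (values[-1] - values[0]) / (times[-1] - times[0])

def ols_slope(times, values):
    """Slope of the straight-line (OLS) fit, sometimes mis-read
    as the average rate of change."""
    n = len(times)
    mt = sum(times) / n
    mv = sum(values) / n
    return (sum((t - mt) * (v - mv) for t, v in zip(times, values))
            / sum((t - mt) ** 2 for t in times))

ts = [0, 1, 2, 3, 4]
ys = [2 ** t for t in ts]  # an exponential trajectory
# average_rate_of_change = (16 - 1) / 4 = 3.75, while the OLS slope
# is 3.6: the straight-line slope is not, in general, the average
# rate of change.
```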
Lin, Zheng-Zhe
2013-01-01
By molecular dynamics simulations and free energy calculations based on Monte Carlo method, the detailed balance between Pt cluster isomers was investigated. For clusters of n50. Then, a statistical mechanical model was built to evaluate unimolecular isomerization rate and simplify the prediction of isomer formation probability. This model is simpler than transition state theory and can be easily applied on ab initio calculations to predict the lifetime of nanostructures.
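The shortcut described above amounts to turning a free-energy barrier into a unimolecular rate and a lifetime. A minimal Arrhenius-type sketch follows; the attempt frequency and barrier are invented, and this is deliberately far simpler than the paper's statistical mechanical model:

```python
import math

KB_EV = 8.617333262e-5  # Boltzmann constant, eV/K

def isomerization_rate(attempt_hz, dF_eV, T):
    """Unimolecular isomerization rate from a free-energy barrier,
    k = nu * exp(-dF / (kB * T)) (illustrative Arrhenius-type form)."""
    return attempt_hz * math.exp(-dF_eV / (KB_EV * T))

def lifetime_s(attempt_hz, dF_eV, T):
    """Mean lifetime of an isomer before it converts, tau = 1/k."""
    return 1.0 / isomerization_rate(attempt_hz, dF_eV, T)
```

Lowering the temperature (or raising the barrier) lengthens the predicted lifetime exponentially, which is the quantity of interest when judging nanostructure stability.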
Using Omega and NIF to Advance Theories of High-Pressure, High-Strain-Rate Tantalum Plastic Flow
Rudd, R. E.; Arsenlis, A.; Barton, N. R.; Cavallo, R. M.; Huntington, C. M.; McNaney, J. M.; Orlikowski, D. A.; Park, H.-S.; Prisbrey, S. T.; Remington, B. A.; Wehrenberg, C. E.
2015-11-01
Precisely controlled plasmas are playing an important role as both pump and probe in experiments to understand the strength of solid metals at high energy density (HED) conditions. In concert with theory, these experiments have enabled a predictive capability to model material strength at Mbar pressures and high strain rates. Here we describe multiscale strength models developed for tantalum and vanadium starting with atomic bonding and extending up through the mobility of individual dislocations, the evolution of dislocation networks and so on up to full scale. High-energy laser platforms such as the NIF and the Omega laser probe ramp-compressed strength to 1-5 Mbar. The predictions of the multiscale model agree well with the 1 Mbar experiments without tuning. The combination of experiment and theory has shown that solid metals can behave significantly differently at HED conditions; for example, the familiar strengthening of metals as the grain size is reduced has been shown not to occur in the high pressure experiments. Work performed under the auspices of the U.S. Dept. of Energy by Lawrence Livermore National Lab under contract DE-AC52-07NA273.
Beta-decay rate and beta-delayed neutron emission probability of improved gross theory
Koura, Hiroyuki
2014-09-01
A theoretical study has been carried out on the beta-decay rate and beta-delayed neutron emission probability. The gross theory of beta decay is based on the idea of a sum rule for the beta-decay strength function, and has succeeded in describing beta-decay half-lives of nuclei over the whole nuclear mass region. The gross theory includes not only the allowed transitions (Fermi and Gamow-Teller) but also the first-forbidden transitions. In this work, some improvements are introduced, namely a nuclear shell correction on the nuclear level densities and nuclear deformation in the nuclear strength functions; these effects were not included in the original gross theory. The shell energy and the nuclear deformation for unmeasured nuclei are adopted from the KTUY nuclear mass formula, which is based on the spherical-basis method. Considering the properties of the integrated Fermi function, we can roughly categorize the excitation-energy region of the daughter nucleus into three regions: a highly excited region, which fully affects the delayed neutron probability; a middle energy region, which is estimated to contribute to the decay heat; and a region neighboring the ground state, which determines the beta-decay rate. Some results will be given in the presentation.
Stoller, Roger E [ORNL; Golubov, Stanislav I [ORNL; Becquart, C. S. [Universite de Lille; Domain, C. [EDF R&D, Clamart, France
2006-09-01
The multiscale modeling scheme encompasses models from the atomistic to the continuum scale. Phenomena at the mesoscale are typically simulated using reaction rate theory (RT), Monte Carlo (MC), or phase field models. These mesoscale models are appropriate for application to problems that involve intermediate length scales (µm to >mm), and timescales from diffusion (~µs) to long-term microstructural evolution (~years). Phenomena at this scale have the most direct impact on mechanical properties in structural materials of interest to nuclear energy systems, and are also the most accessible to direct comparison between the results of simulations and experiments. Recent advances in computational power have substantially expanded the range of application for MC models. Although the RT and MC models can be used to simulate the same phenomena, many of the details are handled quite differently in the two approaches. A direct comparison of the RT and MC descriptions has been made in the domain of point defect cluster dynamics modeling, which is relevant to both the nucleation and evolution of radiation-induced defect structures. The relative merits and limitations of the two approaches are discussed, and the predictions of the two approaches are compared for specific irradiation conditions.
Mökkönen, Harri; Ala-Nissila, Tapio; Jónsson, Hannes
2016-09-07
The recrossing correction to the transition state theory estimate of a thermal rate can be difficult to calculate when the energy barrier is flat. This problem arises, for example, in polymer escape if the polymer is long enough to stretch between the initial and final state energy wells while the polymer beads undergo diffusive motion back and forth over the barrier. We present an efficient method for evaluating the correction factor by constructing a sequence of hyperplanes starting at the transition state and calculating the probability that the system advances from one hyperplane to another towards the product. This is analogous to what is done in forward flux sampling except that there the hyperplane sequence starts at the initial state. The method is applied to the escape of polymers with up to 64 beads from a potential well. For high temperature, the results are compared with direct Langevin dynamics simulations as well as forward flux sampling and excellent agreement between the three rate estimates is found. The use of a sequence of hyperplanes in the evaluation of the recrossing correction speeds up the calculation by an order of magnitude as compared with the traditional approach. As the temperature is lowered, the direct Langevin dynamics simulations as well as the forward flux simulations become computationally too demanding, while the harmonic transition state theory estimate corrected for recrossings can be calculated without significant increase in the computational effort.
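The hyperplane-sequence idea can be sketched in one dimension: the correction factor is a product of conditional advance probabilities, each estimated from short stochastic trajectories. The flat barrier top and unbiased Brownian steps below are our illustrative assumptions, not the paper's polymer model:

```python
import math
import random

def recrossing_factor(p_advance):
    """Correction factor as a product of conditional probabilities of
    advancing from one hyperplane to the next (forward-flux style, but
    with the sequence starting at the transition state)."""
    out = 1.0
    for p in p_advance:
        out *= p
    return out

def estimate_p_advance(x0, x_next, x_back, n_trials=20000, dt=0.01,
                       temp=1.0, seed=1):
    """Monte Carlo estimate of the probability that an overdamped walker
    started at x0 on a flat barrier top reaches x_next before retreating
    to x_back (toy 1-D Brownian dynamics).  By symmetry this is 0.5 when
    x0 lies midway between the two planes."""
    rng = random.Random(seed)
    sigma = math.sqrt(2.0 * temp * dt)  # step size of the Brownian walk
    hits = 0
    for _ in range(n_trials):
        x = x0
        while x_back < x < x_next:
            x += rng.gauss(0.0, sigma)
        if x >= x_next:
            hits += 1
    return hits / n_trials
```

Because each stage only needs trajectories spanning one interface gap, the product form avoids simulating full barrier crossings, which is the source of the reported order-of-magnitude speedup.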
Greene, Samuel M; Shan, Xiao; Clary, David C
2016-06-28
Semiclassical Transition State Theory (SCTST), a method for calculating rate constants of chemical reactions, offers gains in computational efficiency relative to more accurate quantum scattering methods. In full-dimensional (FD) SCTST, reaction probabilities are calculated from third and fourth potential derivatives along all vibrational degrees of freedom. However, the computational cost of FD SCTST scales unfavorably with system size, which prohibits its application to larger systems. In this study, the accuracy and efficiency of 1-D SCTST, in which only third and fourth derivatives along the reaction mode are used, are investigated in comparison to those of FD SCTST. Potential derivatives are obtained from numerical ab initio Hessian matrix calculations at the MP2/cc-pVTZ level of theory, and Richardson extrapolation is applied to improve the accuracy of these derivatives. Reaction barriers are calculated at the CCSD(T)/cc-pVTZ level. Results from FD SCTST agree with results from previous theoretical and experimental studies when Richardson extrapolation is applied. Results from our implementation of 1-D SCTST, which uses only 4 single-point MP2/cc-pVTZ energy calculations in addition to those for conventional TST, agree with FD results to within a factor of 5 at 250 K. This degree of agreement and the efficiency of the 1-D method suggest its potential as a means of approximating rate constants for systems too large for existing quantum scattering methods.
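Richardson extrapolation, used above to sharpen the numerical potential derivatives, can be illustrated on a plain central difference; the function and step size below are arbitrary:

```python
import math

def central_diff(f, x, h):
    """Second-order central-difference estimate of f'(x)."""
    return (f(x + h) - f(x - h)) / (2.0 * h)

def richardson(f, x, h):
    """Richardson extrapolation: combine estimates at h and h/2 to
    cancel the leading O(h^2) error term of the central difference."""
    return (4.0 * central_diff(f, x, h / 2.0) - central_diff(f, x, h)) / 3.0

# Illustrative check against an analytic derivative:
err_plain = abs(central_diff(math.sin, 0.5, 0.1) - math.cos(0.5))
err_rich = abs(richardson(math.sin, 0.5, 0.1) - math.cos(0.5))
```

The same cancellation applies to the finite-difference third and fourth potential derivatives in SCTST, at the cost of a second set of evaluations at the halved step.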
The Theory of Exchange Rates and the Value of the USD
Mansoor Maitah
2011-04-01
Ernesto M. Pernia (2002) stated that the theory of exchange rate determination can be regarded as the well-known purchasing power parity (PPP). The exchange rate is an important economic factor and variable in any country, as it is always consistent with the predictions of economic theory in terms of the balance of trade. The US dollar ($) is an international currency with a high reputation in domestic and international markets. The US current accounts are predicated on US government monetary policies based on the principle that a country cannot keep accumulating more of a single currency and thus increase its imports in order to generate more foreign exchange for a growing current account surplus. The US market is therefore largely dependent on imports of consumer products from Asian countries such as China, which in turn keeps the Chinese current account surplus growing, leaving China with excess US dollars that put continuous appreciation pressure on the Yuan against the US dollar. The paper was processed within the framework of the Research Project of MSM 6046070906 "The economics of Czech agricultural resources and their effective use within the framework of multifunctional agri-food systems".
Hannah, David R.; Venkatachary, Ranga
2010-01-01
In this article, the authors present a retrospective analysis of an instructor's multiyear redesign of a course on organization theory into what is called a hybrid Classroom-as-Organization model. It is suggested that this new course design served to apprentice students to function in quasi-real organizational structures. The authors further argue…
Bouncing Model in Brane World Theory
Maier, Rodrigo; Soares, Ivano Damião
2013-01-01
We examine the nonlinear dynamics of a closed Friedmann-Robertson-Walker universe in the framework of Brane World formalism with a timelike extra dimension. In this scenario, the Friedmann equations contain additional terms arising from the bulk-brane interaction which provide a concrete model for nonsingular bounces in the early phase of the Universe. We construct a nonsingular cosmological scenario sourced with dust, radiation and a cosmological constant. The structure of the phase space shows a nonsingular orbit with two accelerated phases, separated by a smooth transition corresponding to a decelerated expansion. Given observational parameters we connect such phases to a primordial accelerated phase, a soft transition to Friedmann (where the classical regime is valid), and a graceful exit to a de Sitter accelerated phase.
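In units where 8*pi*G/3 = 1, the bulk-brane correction responsible for the bounce can be sketched as a quadratic density term in the Friedmann equation. The sign convention (a timelike extra dimension makes the correction negative) and the dust-only source are our illustrative simplifications of the scenario described above:

```python
def hubble_sq(rho, rho_c):
    """Modified Friedmann equation, H^2 = rho * (1 - rho / rho_c), in
    units with 8*pi*G/3 = 1.  The quadratic term arises from the
    bulk-brane interaction; H vanishes at rho = rho_c, the bounce."""
    return rho * (1.0 - rho / rho_c)

def bounce_scale_factor(rho0, rho_c):
    """For dust, rho = rho0 / a^3, so the nonsingular bounce occurs at
    the minimum scale factor where rho reaches rho_c."""
    return (rho0 / rho_c) ** (1.0 / 3.0)
```

H^2 returning to zero at finite density, rather than rho diverging, is what replaces the big-bang singularity with a smooth bounce in this class of models.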
Scaling Theory and Modeling of DNA Evolution
Buldyrev, Sergey V.
1998-03-01
We present evidence supporting the possibility that the nucleotide sequence in noncoding DNA is power-law correlated. We do not find such long-range correlation in the coding regions of the gene, so we build a "coding sequence finder" to locate the coding regions of an unknown DNA sequence. We also propose a different coding sequence finding algorithm, based on the concept of mutual information (I. Große, S. V. Buldyrev, H. Herzel, and H. E. Stanley, preprint). We describe our recent work on quantification of DNA patchiness using long-range correlation measures (G. M. Viswanathan, S. V. Buldyrev, S. Havlin, and H. E. Stanley, Biophysical Journal 72, 866-875 (1997)). We also present our recent study of simple repeat length distributions. We find that the distributions of some simple repeats in noncoding DNA have long power-law tails, while in coding DNA all simple repeat distributions decay exponentially (N. V. Dokholyan, S. V. Buldyrev, S. Havlin, and H. E. Stanley, Phys. Rev. Lett., in press). We discuss several models based on insertion-deletion and mutation-duplication mechanisms that relate long-range correlations in noncoding DNA to DNA evolution. Specifically, we relate long-range correlations in non-coding DNA to simple repeat expansion, and propose an evolutionary model that reproduces the power-law distribution of simple repeat lengths. We argue that the absence of long-range correlations in protein coding sequences is related to their highly conserved primary structure, which is necessary to ensure protein folding.
An Abstraction Theory for Qualitative Models of Biological Systems
Banks, Richard; 10.4204/EPTCS.40.3
2010-01-01
Multi-valued network models are an important qualitative modelling approach used widely by the biological community. In this paper we consider developing an abstraction theory for multi-valued network models that allows the state space of a model to be reduced while preserving key properties of the model. This is important as it aids the analysis and comparison of multi-valued networks and in particular, helps address the well-known problem of state space explosion associated with such analysis. We also consider developing techniques for efficiently identifying abstractions and so provide a basis for the automation of this task. We illustrate the theory and techniques developed by investigating the identification of abstractions for two published MVN models of the lysis-lysogeny switch in the bacteriophage lambda.
Summary of papers presented in the Theory and Modelling session
Lin-Liu Y.R.
2012-09-01
A total of 14 contributions were presented in the Theory and Modelling sessions at EC-17, and one Theory and Modelling paper was included in each of the ITER ECRH and ECE sessions. Three papers were in the area of nonlinear physics, discussing parametric processes accompanying ECRH. Eight papers were based on the quasi-linear theory of wave heating and current drive; three of these addressed the application of ECCD for NTM stabilization. Two papers considered scattering of EC waves by edge density fluctuations and related phenomena. In this summary, we briefly describe the highlights of these contributions. Finally, the three papers concerning modelling of various aspects of ECE are reported in the ECE session.
Modeling Reusable and Interoperable Faceted Browsing Systems with Category Theory.
Harris, Daniel R
2015-08-01
Faceted browsing has become ubiquitous with modern digital libraries and online search engines, yet the process is still difficult to abstractly model in a manner that supports the development of interoperable and reusable interfaces. We propose category theory as a theoretical foundation for faceted browsing and demonstrate how the interactive process can be mathematically abstracted. Existing efforts in facet modeling are based upon set theory, formal concept analysis, and lightweight ontologies, but in many regards, they are implementations of faceted browsing rather than a specification of the basic, underlying structures and interactions. We will demonstrate that category theory allows us to specify faceted objects and study the relationships and interactions within a faceted browsing system. Implementations can then be constructed through a category-theoretic lens using these models, allowing abstract comparison and communication that naturally support interoperability and reuse.
Foundations of reusable and interoperable facet models using category theory.
Harris, Daniel R
2016-10-01
Faceted browsing has become ubiquitous with modern digital libraries and online search engines, yet the process is still difficult to abstractly model in a manner that supports the development of interoperable and reusable interfaces. We propose category theory as a theoretical foundation for faceted browsing and demonstrate how the interactive process can be mathematically abstracted. Existing efforts in facet modeling are based upon set theory, formal concept analysis, and light-weight ontologies, but in many regards, they are implementations of faceted browsing rather than a specification of the basic, underlying structures and interactions. We will demonstrate that category theory allows us to specify faceted objects and study the relationships and interactions within a faceted browsing system. Resulting implementations can then be constructed through a category-theoretic lens using these models, allowing abstract comparison and communication that naturally support interoperability and reuse.
M-theory model-building and proton stability
Ellis, J. [CERN, Geneva (Switzerland). Theory Div.; Faraggi, A.E. [Florida Univ., Gainesville, FL (United States). Inst. for Fundamental Theory; Nanopoulos, D.V. [Texas A and M Univ., College Station, TX (United States)]|[Houston Advanced Research Center, The Woodlands, TX (United States). Astroparticle Physics Group]|[Academy of Athens (Greece). Div. of Natural Sciences
1997-09-01
The authors study the problem of baryon stability in M theory, starting from realistic four-dimensional string models constructed using the free-fermion formulation of the weakly-coupled heterotic string. Suitable variants of these models manifest an enhanced custodial gauge symmetry that forbids to all orders the appearance of dangerous dimension-five baryon-decay operators. The authors exhibit the underlying geometric (bosonic) interpretation of these models, which have a Z{sub 2} x Z{sub 2} orbifold structure similar, but not identical, to the class of Calabi-Yau threefold compactifications of M and F theory investigated by Voisin and Borcea. A related generalization of their work may provide a solution to the problem of proton stability in M theory.
A Positive Theory of Fixed-Rate Funds-Supplying Operations in an Accommodative Financial Environment
Junnosuke Shino
2011-01-01
This paper studies bidding behaviors in fixed-rate funds-supplying auctions using a simple game-theoretic model. While the existing literature argues that such auction schemes are vulnerable to the overbidding problem, the bid-to-cover ratio for the Bank of Japan's current fixed-rate operations has remained stable. We modify the stylized repo game by incorporating the current framework of fixed-rate funds-supplying auctions operated by the Bank of Japan and the accommodative financial environ...
A model for the burning rates of composite propellants
Cohen, N. S.; Strand, L. D.
1980-01-01
An analytical model of the steady-state burning of composite solid propellants is presented. An improved burning rate model is achieved by incorporating an improved AP monopropellant model, a separate energy balance for the binder in which a portion of the diffusion flame is used to heat the binder, proper use of the binder regression rate in the model, and a model for the combustion of the energetic binder component of CMDB propellants. Also, an improved correlation and model of aluminum agglomeration is developed which properly describes compositional trends.
Reconstructing constructivism: causal models, Bayesian learning mechanisms, and the theory theory.
Gopnik, Alison; Wellman, Henry M
2012-11-01
We propose a new version of the "theory theory" grounded in the computational framework of probabilistic causal models and Bayesian learning. Probabilistic models allow a constructivist but rigorous and detailed approach to cognitive development. They also explain the learning of both more specific causal hypotheses and more abstract framework theories. We outline the new theoretical ideas, explain the computational framework in an intuitive and nontechnical way, and review an extensive but relatively recent body of empirical results that supports these ideas. These include new studies of the mechanisms of learning. Children infer causal structure from statistical information, through their own actions on the world and through observations of the actions of others. Studies demonstrate these learning mechanisms in children from 16 months to 4 years old and include research on causal statistical learning, informal experimentation through play, and imitation and informal pedagogy. They also include studies of the variability and progressive character of intuitive theory change, particularly theory of mind. These studies investigate both the physical and the psychological and social domains. We conclude with suggestions for further collaborative projects between developmental and computational cognitive scientists.
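The core learning mechanism the authors invoke, Bayesian updating over causal hypotheses, can be sketched in a few lines; the hypotheses and likelihoods below are illustrative, not taken from the studies reviewed:

```python
def bayes_update(priors, likelihoods, evidence):
    """Posterior over causal hypotheses after observing `evidence` (Bayes' rule)."""
    unnorm = {h: priors[h] * likelihoods[h][evidence] for h in priors}
    z = sum(unnorm.values())
    return {h: p / z for h, p in unnorm.items()}

# Hypothetical scenario: does pressing a button cause a light to come on?
priors = {"causal": 0.5, "not_causal": 0.5}
likelihoods = {
    "causal":     {"light_on": 0.9, "light_off": 0.1},
    "not_causal": {"light_on": 0.2, "light_off": 0.8},
}
posterior = bayes_update(priors, likelihoods, "light_on")  # causal ≈ 0.818
```

Repeated observations update the posterior cumulatively, which is the formal counterpart of children refining causal hypotheses from statistical evidence.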
The economic production lot size model with several production rates
Larsen, Christian
We study an extension of the economic production lot size model in which more than one production rate can be used during a cycle. The production rates and their corresponding runtimes are decision variables. We decompose the problem into two subproblems. First, we show that all production rates ... should be chosen in the interval between the demand rate and the production rate which minimizes unit production costs, and should be used in increasing order. Then, given the production rates, we derive closed form solutions for the optimal runtimes as well as the minimum average cost. Finally we ...
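For context, the classical single-rate economic production quantity (EPQ) model that this paper extends has a standard closed-form lot size; a sketch with illustrative parameter values:

```python
import math

def epq_lot_size(demand, production, setup_cost, holding_cost):
    """Optimal lot size in the classical single-rate economic production
    quantity (EPQ) model: sqrt(2*K*D / (h * (1 - D/P)))."""
    assert production > demand > 0, "production rate must exceed demand rate"
    return math.sqrt(2 * setup_cost * demand /
                     (holding_cost * (1 - demand / production)))

# Illustrative parameters: demand 1000/yr, production 4000/yr, K=50, h=2.
q = epq_lot_size(demand=1000, production=4000, setup_cost=50, holding_cost=2)
```

The multi-rate extension studied in the paper replaces the single rate P by a schedule of rates with their own runtimes, optimized as described above.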
Magnetized cosmological models in bimetric theory of gravitation
S D Katore; R S Rane
2006-08-01
A Bianchi type-III magnetized cosmological model in which the field of gravitation is governed by either a perfect fluid or a cosmic string is investigated in Rosen's [1] bimetric theory of gravitation. To obtain a completely determinate solution, a condition relating the metric potentials through a constant is used. We have assumed different equations of state for the cosmic string [2] to complete the solution of the model. Some physical and geometrical properties of the exhibited model are discussed and studied.
Hydrodynamics Research on Amphibious Vehicle Systems:Modeling Theory
JU Nai-jun
2006-01-01
To support hydrodynamics software development and engineering application research on amphibious vehicle systems, the hydrodynamic modeling theory of such systems is elaborated. This includes building dynamic system models of amphibious vehicle motion on water, gun tracking-aiming-firing, bullet hit and armored target checking, and gunner operating control, as well as the time-domain simulation model for random sea waves.
Dynamics in Nonlocal Cosmological Models Derived from String Field Theory
Joukovskaya, Liudmila
2007-01-01
A general class of nonlocal cosmological models is considered. A new method for solving nonlocal Friedmann equations is proposed, and solutions of the Friedmann equations with a nonlocal operator are presented. The cosmological properties of these solutions are discussed. Particular attention is given to the $p$-adic cosmological model, in which we have obtained a nonsingular bouncing solution, and the string field theory tachyon model, in which we have obtained the full solution of the nonlocal Friedmann equations with $w=...
A Model of Resurgence Based on Behavioral Momentum Theory
Shahan, Timothy A; Sweeney, Mary M.
2011-01-01
Resurgence is the reappearance of an extinguished behavior when an alternative behavior reinforced during extinction is subsequently placed on extinction. Resurgence is of particular interest because it may be a source of relapse to problem behavior following treatments involving alternative reinforcement. In this article we develop a quantitative model of resurgence based on the augmented model of extinction provided by behavioral momentum theory. The model suggests that alternative reinforc...
Fuzzy logic technology for modeling of greenhouse crop transpiration rate
Deng, Lujuan; Wang, Huaishan
2006-11-01
The objective of this paper was to present a reasonable greenhouse crop transpiration rate model for irrigation scheduling, thereby saving water and energy and improving crop growth. It was therefore essential to estimate the crop transpiration rate. Owing to the difficulty of obtaining accurate real-time transpiration data, the rate is commonly estimated from weather parameters, so a fuzzy logic model for estimating greenhouse crop transpiration rate was developed. The model is made up of five sub-systems and three layers, with nine input variables and one output variable. The comparison between measured values and the fuzzy model is encouraging: the squared correlation coefficient (r2) of the fuzzy model (r2 = 0.9302) is slightly higher than that of the FAO Penman-Monteith formula (r2 = 0.9213). The fuzzy logic crop transpiration rate model could easily be extended for irrigation decision-making.
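The r2 values the authors report are squared Pearson correlations between measured and modeled transpiration; a minimal sketch with hypothetical readings (not the paper's data):

```python
def r_squared(measured, modeled):
    """Squared Pearson correlation coefficient between two series."""
    n = len(measured)
    mx, my = sum(measured) / n, sum(modeled) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(measured, modeled))
    sxx = sum((x - mx) ** 2 for x in measured)
    syy = sum((y - my) ** 2 for y in modeled)
    return sxy * sxy / (sxx * syy)

# Hypothetical transpiration readings (mm/h); not the paper's data.
measured = [0.10, 0.22, 0.31, 0.45, 0.52]
modeled  = [0.12, 0.20, 0.33, 0.43, 0.55]
score = r_squared(measured, modeled)
```

A score near 1 indicates the model tracks the measurements closely, which is the basis of the comparison with the Penman-Monteith estimate.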
Optimal mutation rates in dynamic environments: The eigen model
Ancliff, Mark; Park, Jeong-Man
2011-03-01
We consider the Eigen quasispecies model with a dynamic environment. For an environment with sharp-peak fitness in which the most-fit sequence moves by k spin-flips each period T we find an asymptotic stationary state in which the quasispecies population changes regularly according to the regular environmental change. From this stationary state we estimate the maximum and the minimum mutation rates for a quasispecies to survive under the changing environment and calculate the optimum mutation rate that maximizes the population growth. Interestingly we find that the optimum mutation rate in the Eigen model is lower than that in the Crow-Kimura model, and at their optimum mutation rates the corresponding mean fitness in the Eigen model is lower than that in the Crow-Kimura model, suggesting that the mutation process which occurs in parallel to the replication process as in the Crow-Kimura model gives an adaptive advantage under changing environment.
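In the static sharp-peak Eigen model, the baseline for the dynamic case studied here, the maximum mutation rate for quasispecies survival follows from the error-threshold condition (1 - u)^L * sigma = 1; a sketch with illustrative values:

```python
def error_threshold(sigma, L):
    """Maximum per-site mutation rate u for quasispecies survival on a static
    sharp-peak landscape, from (1 - u)**L * sigma = 1."""
    return 1 - sigma ** (-1.0 / L)

# Illustrative values: selective advantage 10, sequence length 20.
u_max = error_threshold(sigma=10, L=20)  # about 0.109
```

The dynamic-environment thresholds and the optimum rate computed in the paper refine this static bound; the static formula is shown only as the familiar reference point.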
Perturbation theory for string sigma models
Bianchi, Lorenzo
2016-01-01
In this thesis we investigate quantum aspects of the Green-Schwarz superstring in various AdS backgrounds relevant for the AdS/CFT correspondence, providing several examples of perturbative computations in the corresponding integrable sigma-models. We start by reviewing in detail the supercoset construction of the superstring action in $AdS_5 \times S^5$, pointing out the limits of this procedure for $AdS_4$ and $AdS_3$ backgrounds. For the $AdS_4 \times CP^3$ case we give a thorough derivation of an alternative action, based on the double-dimensional reduction of eleven-dimensional super-membranes. We then consider the expansion about the BMN vacuum and the S-matrix for the scattering of worldsheet excitations in the decompactification limit. To evaluate its elements efficiently we describe a unitarity-based method resulting in a very compact formula yielding the cut-constructible part of any one-loop two-dimensional S-matrix. In the second part of this review we analyze the superstring action on $AdS_4 \ti...
Lenses on Reading An Introduction to Theories and Models
Tracey, Diane H
2012-01-01
This widely adopted text explores key theories and models that frame reading instruction and research. Readers learn why theory matters in designing and implementing high-quality instruction and research; how to critically evaluate the assumptions and beliefs that guide their own work; and what can be gained by looking at reading through multiple theoretical lenses. For each theoretical model, classroom applications are brought to life with engaging vignettes and teacher reflections. Research applications are discussed and illustrated with descriptions of exemplary studies. New to This Edition
Effective Field Theory and the No-Core Shell Model
Stetcua I.
2010-04-01
In a finite model space suitable for many-body calculations via the no-core shell model (NCSM), I illustrate the direct application of effective field theory (EFT) principles to solving the many-body Schrödinger equation. Two different avenues for fixing the low-energy constants naturally arising in an EFT approach are discussed. I review results for both nuclear and trapped atomic systems, using effective theories that are formally similar, albeit describing different underlying physics.
Consistent constraints on the Standard Model Effective Field Theory
Berthier, Laure
2015-01-01
We develop the global constraint picture in the (linear) effective field theory generalisation of the Standard Model, incorporating data from detectors that operated at PEP, PETRA, TRISTAN, SpS, Tevatron, SLAC, LEP I and LEP II, as well as low-energy precision data. We fit one hundred observables. We develop a theory error metric for this effective field theory, which is required when constraints on parameters at leading order in the power counting are to be pushed to the percent level or beyond, unless the cut-off scale is assumed to be large, $\Lambda \gtrsim 3\,{\rm TeV}$. We more consistently incorporate theoretical errors in this work, avoiding this assumption, and as a direct consequence bounds on some leading parameters are relaxed. We show how an $\rm S,T$ analysis is modified by the theory errors we include, as an illustrative example.
Theory of compressive modeling and simulation
Szu, Harold; Cha, Jae; Espinola, Richard L.; Krapels, Keith
2013-05-01
Modeling and Simulation (M&S) has been evolving along two general directions: (i) a data-rich approach suffering the curse of dimensionality and (ii) an equation-rich approach suffering computing power and turnaround time. We suggest a third approach, (iii) compressive M&S (CM&S), because the basic Minimum Free-Helmholtz Energy (MFE) facilitating CM&S can reproduce and generalize the Candes, Romberg, Tao & Donoho (CRT&D) Compressive Sensing (CS) paradigm as a linear Lagrange Constraint Neural Network (LCNN) algorithm. MFE-based CM&S can generalize LCNN to 2nd order as a nonlinear augmented LCNN. For example, during sunset we can avoid the reddish bias of sunlight illumination due to long-range Rayleigh scattering over the horizon: with CM&S we can use a night vision camera instead of a day camera. We decomposed the long wave infrared (LWIR) band with a filter into 2 vector components (8-10 μm and 10-12 μm) and used LCNN to find, pixel by pixel, the map of Emissive-Equivalent Planck Radiation Sources (EPRS). Then we up-shifted consistently, according to the de-mixed sources map, to the sub-micron RGB color image. Moreover, night vision imaging can also be down-shifted to Passive Millimeter Wave (PMMW) imaging, which suffers less blur owing to dusty smoke scattering and enjoys the apparent smoothness of surface reflectivity of man-made objects under the Rayleigh resolution. One loses three orders of magnitude in the spatial Rayleigh resolution, but gains two orders of magnitude in the reflectivity and another two orders in propagation without obscuring smog. Since CM&S can generate missing data and hard-to-get dynamic transients, it can reduce unnecessary measurements and their associated cost and computing in the sense of super-saving CS: measuring one and getting one's neighborhood free.
Real Exchange Rate and Commodity Prices in a Neoclassical Model
Reinhart, Carmen
1988-01-01
This paper presents a neoclassical model that explains the observed empirical relationship between government spending and world commodity supplies and the real exchange rate and real commodity prices. It is shown that fiscal expansion and increasing world commodity supplies simultaneously lead to an appreciation of the real exchange rate and a decline in relative commodity prices. The structural model is estimated and its forecasting performance is compared to a variety of models. We fin...
Theories beyond the standard model, one year before the LHC
Dimopoulos, Savas
2006-04-01
Next year the Large Hadron Collider at CERN will begin what may well be a new golden era of particle physics. I will discuss three theories that will be tested at the LHC. I will begin with the supersymmetric standard model, proposed with Howard Georgi in 1981. This theory made a precise quantitative prediction, the unification of couplings, that has been experimentally confirmed in 1991 by experiments at CERN and SLAC. This established it as the leading theory for physics beyond the standard model. Its main prediction, the existence of supersymmetric particles, will be tested at the large hadron collider. I will next overview theories with large new dimensions, proposed with Nima Arkani-Hamed and Gia Dvali in 1998. This links the weakness of gravity to the presence of sub-millimeter size dimensions, that are presently searched for in experiments looking for deviations from Newton's law at short distances. In this framework quantum gravity, string theory, and black holes may be experimentally investigated at the large hadron collider. I will end with the recent proposal of split supersymmetry with Nima Arkani-Hamed. This theory is motivated by the possible existence of an enormous number of ground states in the fundamental theory, as suggested by the cosmological constant problem and recent developments in string theory and cosmology. It can be tested at the large hadron collider and, if confirmed, it will lend support to the idea that our universe and its laws are not unique and that there is an enormous variety of universes each with its own distinct physical laws.
陆靖; 范康年
1999-01-01
A dynamical theory of spectroscopy with femtosecond pulse excitation is developed in Liouville space. Using the density matrix formalism, a transient rate equation is obtained that reduces to the classical KHD expression in the CW case. The theory is applied to the Raman excitation profile of IBr, and the results are in agreement with experiments.
Knoester, J.; Himbergen, J.E. Van
1987-01-01
The incorporation of the orientation factor occurring in the complete Förster rate of incoherent energy transfer into the theory of concentration self-quenching by statistical pairs of luminescent molecules is studied. Within Burshtein's theory of hopping transport, exact results for the steady
Higher-Rank Supersymmetric Models and Topological Field Theory
Kawai, T; Yang, S K; Kawai, Toshiya; Uchino, Taku; Yang, Sung-Kil
1993-01-01
In the first part of this paper we investigate the operator aspect of the higher-rank supersymmetric model, which is introduced as a Lie-theoretic extension of the $N=2$ minimal model, with the simplest case $su(2)$ corresponding to the $N=2$ minimal model. In particular we identify the analogs of chirality conditions and the chiral ring. In the second part we construct a class of topological conformal field theories starting from this higher-rank supersymmetric model. We show the BRST-exactness of the twisted stress-energy tensor, find the physical observables, and discuss how to compute their correlation functions. It is emphasized that in the case of $su(2)$ the topological field theory constructed in this paper is distinct from the one obtained by twisting the $N=2$ minimal model through the usual procedure.
Bridging emotion theory and neurobiology through dynamic systems modeling.
Lewis, Marc D
2005-04-01
Efforts to bridge emotion theory with neurobiology can be facilitated by dynamic systems (DS) modeling. DS principles stipulate higher-order wholes emerging from lower-order constituents through bidirectional causal processes--offering a common language for psychological and neurobiological models. After identifying some limitations of mainstream emotion theory, I apply DS principles to emotion-cognition relations. I then present a psychological model based on this reconceptualization, identifying trigger, self-amplification, and self-stabilization phases of emotion-appraisal states, leading to consolidating traits. The article goes on to describe neural structures and functions involved in appraisal and emotion, as well as DS mechanisms of integration by which they interact. These mechanisms include nested feedback interactions, global effects of neuromodulation, vertical integration, action-monitoring, and synaptic plasticity, and they are modeled in terms of both functional integration and temporal synchronization. I end by elaborating the psychological model of emotion-appraisal states with reference to neural processes.
A model of PCF in guarded type theory
Paviotti, Marco; Møgelberg, Rasmus Ejlers; Birkedal, Lars
2015-01-01
Guarded recursion is a form of recursion where recursive calls are guarded by delay modalities. Previous work has shown how guarded recursion is useful for constructing logics for reasoning about programming languages with advanced features, as well as for constructing and reasoning about elements of coinductive types. In this paper we investigate how type theory with guarded recursion can be used as a metalanguage for denotational semantics, useful both for constructing models and for proving properties of these. We do this by constructing a fairly intensional model of PCF and proving it computationally adequate. The model construction is related to Escardo's metric model for PCF, but here everything is carried out entirely in type theory with guarded recursion, including the formulation of the operational semantics, the model construction, and the proof of adequacy.
An introduction to queueing theory modeling and analysis in applications
Bhat, U Narayan
2015-01-01
This introductory textbook is designed for a one-semester course on queueing theory that does not require a course on stochastic processes as a prerequisite. By integrating the necessary background on stochastic processes with the analysis of models, the work provides a sound foundational introduction to the modeling and analysis of queueing systems for a wide interdisciplinary audience of students in mathematics, statistics, and applied disciplines such as computer science, operations research, and engineering. This edition includes additional topics in methodology and applications. Key features: • An introductory chapter including a historical account of the growth of queueing theory over more than 100 years. • A modeling-based approach with emphasis on the identification of models. • Rigorous treatment of the foundations of basic models commonly used in applications, with appropriate references for advanced topics. • Applications in manufacturing, and computer and communication systems. • A chapter on ...
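The canonical starting point of such a course, the M/M/1 queue, has closed-form steady-state metrics; a sketch with illustrative rates:

```python
def mm1_metrics(arrival_rate, service_rate):
    """Steady-state metrics of the M/M/1 queue, the canonical introductory model."""
    rho = arrival_rate / service_rate          # server utilization
    assert rho < 1, "queue is unstable when rho >= 1"
    L = rho / (1 - rho)                        # mean number in system
    W = 1 / (service_rate - arrival_rate)      # mean time in system
    return {"utilization": rho, "L": L, "W": W}

m = mm1_metrics(arrival_rate=2.0, service_rate=5.0)  # L = 2/3, W = 1/3
```

Little's law, L = λW, holds for these values and is a standard sanity check on any queueing computation.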
Cosmological Model Based on Gauge Theory of Gravity
WU Ning
2005-01-01
A cosmological model based on gauge theory of gravity is proposed in this paper. Combining cosmological principle and field equation of gravitational gauge field, dynamical equations of the scale factor R(t) of our universe can be obtained. This set of equations has three different solutions. A prediction of the present model is that, if the energy density of the universe is not zero and the universe is expanding, the universe must be space-flat, the total energy density must be the critical density ρc of the universe. For space-flat case, this model gives the same solution as that of the Friedmann model. In other words, though they have different dynamics of gravitational interactions, general relativity and gauge theory of gravity give the same cosmological model.
Structural properties of effective potential model by liquid state theories
Xiang Yuan-Tao; Andrej Jamnik; Yang Kai-Wei
2010-01-01
This paper investigates the structural properties of a model fluid governed by an effective inter-particle oscillatory potential using grand canonical ensemble Monte Carlo (GCEMC) simulation and classical liquid state theories. The chosen oscillatory potential incorporates basic interaction terms used in the modeling of various complex fluids composed of mesoscopic particles dispersed in a solvent bath. The studied structural properties include the radial distribution function in bulk and the inhomogeneous density distribution profile under the influence of several external fields. The GCEMC results are employed to test the validity of two recently proposed theoretical approaches in the field of atomic fluids: an Ornstein-Zernike integral equation theory approach, and a third-order + second-order perturbation density functional theory. Satisfactory agreement between the GCEMC simulation and the pure theories fully indicates the ready adaptability of the atomic fluid theories to effective model potentials in complex fluids, and classifies the proposed theoretical approaches as convenient tools for the investigation of complex fluids under the single-component macro-fluid approximation.
Twisted gauge theories in 3D Walker-Wang models
Wang, Zitao
2016-01-01
Three dimensional gauge theories with a discrete gauge group can emerge from spin models as a gapped topological phase with fractional point excitations (gauge charge) and loop excitations (gauge flux). It is known that 3D gauge theories can be "twisted", in the sense that the gauge flux loops can have nontrivial braiding statistics among themselves, and such twisted gauge theories are realized in models discovered by Dijkgraaf and Witten. A different framework to systematically construct three dimensional topological phases was proposed by Walker and Wang, and a series of examples have been studied. Can the Walker-Wang construction be used to realize the topological order in twisted gauge theories? This is not immediately clear because the Walker-Wang construction is based on a loop condensation picture while the Dijkgraaf-Witten theory is based on a membrane condensation picture. In this paper, we show that the answer to this question is yes, by presenting an explicit construction of the Walker-Wang models wh...
New model describing the dynamical behaviour of penetration rates
Tashiro, Tohru; Minagawa, Hiroe; Chiba, Michiko
2013-02-01
We propose a hierarchical logistic equation as a model to describe the dynamical behaviour of the penetration rate of a prevalent product. Unlike the logistic model, this model incorporates a memory: how many people already possessing the product a person who does not yet possess it has met. As an application, we apply this model to iPod sales data, and find that it approximates the data much better than the logistic equation.
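The baseline the hierarchical model extends is the ordinary logistic equation; a minimal Euler-integration sketch (the memory-augmented variant itself is specified in the paper and not reproduced here):

```python
def logistic_path(rate, capacity, p0, dt, steps):
    """Forward-Euler integration of dp/dt = rate * p * (1 - p/capacity)."""
    p, path = p0, [p0]
    for _ in range(steps):
        p += dt * rate * p * (1 - p / capacity)
        path.append(p)
    return path

# Illustrative parameters: 1% initial penetration, unit rate and capacity.
path = logistic_path(rate=1.0, capacity=1.0, p0=0.01, dt=0.1, steps=100)
```

The trajectory rises monotonically toward the capacity; the paper's memory term modifies this S-curve to fit the observed sales data more closely.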
Theory, modeling and simulation of superconducting qubits
Berman, Gennady P [Los Alamos National Laboratory; Kamenev, Dmitry I [Los Alamos National Laboratory; Chumak, Alexander [INSTIT OF PHYSICS, KIEV; Kinion, Carin [LLNL; Tsifrinovich, Vladimir [POLYTECHNIC INSTIT OF NYU
2011-01-13
We analyze the dynamics of a qubit-resonator system coupled with a thermal bath and external electromagnetic fields. Using the evolution equations for the set of Heisenberg operators that describe the whole system, we derive an expression for the resonator field that includes the resonator-drive, resonator-bath, and resonator-qubit interactions. The renormalization of the resonator frequency, caused by the qubit-resonator interaction, is accounted for. Using the solutions for the resonator field, we derive the equation that describes the qubit dynamics. The dependence of the qubit evolution during the measurement time on the fidelity of a single-shot measurement is studied. The relation between fidelity and measurement time is shown explicitly. We propose a novel adiabatic method for the phase qubit measurement. The method utilizes a low-frequency, quasi-classical resonator inductively coupled to the qubit. The resonator modulates the qubit energy, and the back reaction of the qubit causes a shift in the phase of the resonator. The resonator phase shift can be used to determine the qubit state. We have simulated this measurement taking into account the energy levels outside the phase qubit manifold. We have shown that, for qubit frequencies in the range of 8-12 GHz, a resonator frequency of 500 MHz and a measurement time of 100 ns, the phase difference between the two qubit states is greater than 0.2 rad. This phase difference exceeds the measurement uncertainty, and can be detected using a classical phase-meter. A fidelity of 0.9999 can be achieved for a relaxation time of 0.5 ms. We also model and simulate a microstrip-SQUID amplifier with a frequency of about 500 MHz, which could be used to amplify the resonator oscillations in the phase qubit adiabatic measurement. The voltage gain and the amplifier noise temperature are calculated. We simulate the preparation of a generalized Bell state and compute the relaxation times required for achieving high
Traffic Games: Modeling Freeway Traffic with Game Theory
Cortés-Berrueco, Luis E.; Gershenson, Carlos; Stephens, Christopher R.
2016-01-01
We apply game theory to a vehicular traffic model to study the effect of driver strategies on traffic flow. The resulting model inherits the realistic dynamics achieved by a two-lane traffic model and aims to incorporate phenomena caused by driver-driver interactions. To achieve this goal, a game-theoretic description of driver interaction was developed. This game-theoretic formalization allows one to model different lane-changing behaviors and to keep track of mobility performance. We simulate the evolution of cooperation, traffic flow, and mobility performance for different modeled behaviors. The analysis of these results indicates a mobility optimization process achieved by drivers’ interactions. PMID:27855176
A review of the microscopic modeling of the 5-dim. black hole of IIB string theory
Spenta R Wadia
2001-01-01
We review the theory of the microscopic modeling of the 5-dim. black hole of type IIB string theory in terms of the 1-5 brane system. A detailed discussion of the low energy effective Lagrangian of the brane system is presented and the black hole micro-states are identified. These considerations are valid in the strong coupling regime of supergravity due to the non-renormalization of the low energy dynamics in this model. Using Maldacena duality and standard statistical mechanics methods one can account for black hole thermodynamics and calculate the absorption cross section and the Hawking radiation rates. Hence, at least in the case of this model black hole, since we can account for black hole properties within a unitary theory, there is no information paradox.
Thimble regularization at work besides toy models: from Random Matrix Theory to Gauge Theories
Eruzzi, G
2015-01-01
Thimble regularization as a solution to the sign problem has been successfully put to work for a few toy models. Given the non-trivial nature of the method (also from the algorithmic point of view), it is compelling to provide evidence that it works for realistic models. A chiral random matrix theory has been studied in detail. The known analytical solution shows that the model is non-trivial with respect to the sign problem (in particular, phase-quenched results can be very far from the exact solution). This study gave us the chance to address a couple of key issues: how many thimbles contribute to the solution of a realistic problem? Can one devise algorithms which are robust in staying on the correct manifold? The obvious step forward consists of applications to gauge theories.
Application of Kalman Filter on modelling interest rates
Long H. Vo
2014-03-01
This study aims to test the feasibility of using a data set of 90-day bank bill forward rates from the Australian market to predict spot interest rates. To achieve this goal, I applied the Kalman filter in a state space model with a time-varying state variable. It is documented that, in the case of short-term interest rates, the state space model yields robust predictive power. In addition, this predictive power of the implied forward rate is heavily impacted by the existence of a time-varying risk premium in the term structure.
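The filtering recursion behind such a state space model can be sketched in a few lines. This is a generic scalar (local-level) Kalman filter run on synthetic data, not the study's actual specification or the Australian data set; the noise variances Q and R and the simulated "rates" are illustrative assumptions.

```python
import numpy as np

def kalman_filter_1d(obs, F=1.0, H=1.0, Q=1e-4, R=1e-2, x0=0.0, P0=1.0):
    """Scalar Kalman filter for a local-level state space model.

    State:       x_t = F * x_{t-1} + w_t,  w_t ~ N(0, Q)
    Observation: y_t = H * x_t     + v_t,  v_t ~ N(0, R)
    Returns the filtered state estimates.
    """
    x, P = x0, P0
    estimates = []
    for y in obs:
        # Predict step
        x_pred = F * x
        P_pred = F * P * F + Q
        # Update step with the Kalman gain
        K = P_pred * H / (H * P_pred * H + R)
        x = x_pred + K * (y - H * x_pred)
        P = (1.0 - K * H) * P_pred
        estimates.append(x)
    return np.array(estimates)

# Synthetic example: a slowly drifting "spot rate" observed with noise.
rng = np.random.default_rng(0)
true_rate = 0.05 + np.cumsum(rng.normal(0, 0.001, 200))
observed = true_rate + rng.normal(0, 0.01, 200)
filtered = kalman_filter_1d(observed, Q=1e-6, R=1e-4, x0=observed[0])
print(np.mean(np.abs(filtered - true_rate)) < np.mean(np.abs(observed - true_rate)))
```

With Q and R matched to the simulated noise levels, the filtered series tracks the latent rate more closely than the raw observations do.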
Sissay, Adonay; Abanador, Paul; Mauger, François; Gaarde, Mette; Schafer, Kenneth J; Lopata, Kenneth
2016-09-07
Strong-field ionization and the resulting electronic dynamics are important for a range of processes such as high harmonic generation, photodamage, charge resonance enhanced ionization, and ionization-triggered charge migration. Modeling ionization dynamics in molecular systems from first-principles can be challenging due to the large spatial extent of the wavefunction which stresses the accuracy of basis sets, and the intense fields which require non-perturbative time-dependent electronic structure methods. In this paper, we develop a time-dependent density functional theory approach which uses a Gaussian-type orbital (GTO) basis set to capture strong-field ionization rates and dynamics in atoms and small molecules. This involves propagating the electronic density matrix in time with a time-dependent laser potential and a spatial non-Hermitian complex absorbing potential which is projected onto an atom-centered basis set to remove ionized charge from the simulation. For the density functional theory (DFT) functional we use a tuned range-separated functional LC-PBE*, which has the correct asymptotic 1/r form of the potential and a reduced delocalization error compared to traditional DFT functionals. Ionization rates are computed for hydrogen, molecular nitrogen, and iodoacetylene under various field frequencies, intensities, and polarizations (angle-dependent ionization), and the results are shown to quantitatively agree with time-dependent Schrödinger equation and strong-field approximation calculations. This tuned DFT with GTO method opens the door to predictive all-electron time-dependent density functional theory simulations of ionization and ionization-triggered dynamics in molecular systems using tuned range-separated hybrid functionals.
A Modeling Perspective on Interpreting Rates of Change in Context
Ärlebäck, Jonas B.; Doerr, Helen M.; O'Neil, AnnMarie H.
2013-01-01
Functions provide powerful tools for describing change, but research has shown that students find difficulty in using functions to create and interpret models of changing phenomena. In this study, we drew on a models and modeling perspective to design an instructional approach to develop students' abilities to describe and interpret rates of…
Identification and Estimation of Exchange Rate Models with Unobservable Fundamentals
Chambers, M.J.; McCrorie, J.R.
2004-01-01
This paper is concerned with issues of model specification, identification, and estimation in exchange rate models with unobservable fundamentals. We show that the model estimated by Gardeazabal, Regúlez and Vázquez (International Economic Review, 1997) is not identified and demonstrate how to spec
Matrix Factorizations for Local F-Theory Models
Omer, Harun
2016-01-01
I use matrix factorizations to describe branes at simple singularities as they appear in elliptic fibrations of local F-theory models. Each node of the corresponding Dynkin diagrams of the ADE-type singularities is associated with one indecomposable matrix factorization which can be deformed into one or more factorizations of lower rank. Branes with internal fluxes arise naturally as bound states of the indecomposable factorizations. Describing branes in such a way avoids the need to resolve singularities and encodes information which is neglected in conventional F-theory treatments. This paper aims to show how branes arising in local F-theory models around simple singularities can be described in this framework.
Non-white noise and a multiple-rate Markovian closure theory for turbulence
Hammett, G W; Hammett, Gregory W.; Bowman, John C.
2002-01-01
Markovian models of turbulence can be derived from the renormalized statistical closure equations of the direct-interaction approximation (DIA). Various simplifications are often introduced, including an assumption that the two-time correlation function is proportional to the renormalized infinitesimal propagator (Green's function), i.e. the decorrelation rate for fluctuations is equal to the decay rate for perturbations. While this is a rigorous result of the fluctuation-dissipation theorem for thermal equilibrium, it does not necessarily apply to all types of turbulence. Building on previous work on realizable Markovian closures, we explore a way to allow the decorrelation and decay rates to differ (which in some cases affords a more accurate treatment of effects such as non-white noise), while retaining the computational advantages of a Markovian approximation. Some Markovian approximations differ only in the initial transient phase, but the multiple-rate Markovian closure (MRMC) presented here could modi...
Tsai, Chung-Hung
2014-05-07
Telehealth has become an increasingly applied solution for delivering health care to rural and underserved areas by remote health care professionals. This study integrated social capital theory, social cognitive theory, and the technology acceptance model (TAM) to develop a comprehensive behavioral model for analyzing the relationships among social capital factors (social capital theory), technological factors (TAM), and system self-efficacy (social cognitive theory) in telehealth. The proposed framework was validated with 365 respondents from Nantou County, located in Central Taiwan. Structural equation modeling (SEM) was used to assess the causal relationships that were hypothesized in the proposed model. The findings indicate that elderly residents generally reported positive perceptions toward the telehealth system. Generally, the findings show that social capital factors (social trust, institutional trust, and social participation) significantly and positively affect the technological factors (perceived ease of use and perceived usefulness, respectively), which in turn influence usage intention. This study also confirmed that system self-efficacy was the salient antecedent of perceived ease of use. In addition, the proposed model fit the sample data considerably well. The proposed integrative psychosocial-technological model may serve as a theoretical basis for future research and can also offer empirical foresight to practitioners and researchers in the health departments of governments, hospitals, and rural communities.
Turnover Rate of Popularity Charts in Neutral Models
Evans, T S
2011-01-01
It has been shown recently that in many different cultural phenomena the turnover rate of the most popular artefacts in a population exhibits some regularities. A very simple expression for this turnover rate has been proposed by Bentley et al., and its validity in two simple models of copying and innovation is investigated in this paper. It is found that Bentley's formula is an approximation of the real behaviour of the turnover rate in the Wright-Fisher model, while it is not valid in the Moran model.
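A minimal version of the copying-and-innovation (Wright-Fisher) model with a top-y popularity chart can be simulated directly. The parameters below (population size, innovation rate, chart length) are illustrative choices, not those of the paper, and the turnover statistic is simply the number of variants entering the chart each generation.

```python
import numpy as np
from collections import Counter

def simulate_turnover(N=1000, mu=0.01, y=10, generations=300, seed=1):
    """Wright-Fisher copying model with innovation rate mu.

    Each generation, every agent copies a uniformly random agent from the
    previous generation, except that with probability mu it invents a
    brand-new variant. Returns the mean per-generation turnover of the
    top-y chart (variants entering the chart each generation).
    """
    rng = np.random.default_rng(seed)
    pop = np.arange(N)              # start with all-distinct variants
    next_label = N
    prev_top = None
    turnovers = []
    for _ in range(generations):
        parents = rng.integers(0, N, size=N)
        pop = pop[parents]          # copying step
        innovators = rng.random(N) < mu
        n_new = int(innovators.sum())
        pop = pop.copy()
        pop[innovators] = np.arange(next_label, next_label + n_new)
        next_label += n_new
        top = {v for v, _ in Counter(pop).most_common(y)}
        if prev_top is not None:
            turnovers.append(len(top - prev_top))
        prev_top = top
    return float(np.mean(turnovers[50:]))   # discard the initial transient

z = simulate_turnover()
print(round(z, 2))
```

Sweeping mu and y in such a simulation is the kind of experiment against which an approximate turnover formula can be checked.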
Adam, J; Tater, M; Truhlik, E; Epelbaum, E; Machleidt, R; Ricci, P
2011-01-01
The doublet capture rate for negative muon capture in deuterium is calculated employing the nuclear wave functions generated from accurate nucleon-nucleon potentials constructed at next-to-next-to-next-to-leading order of heavy-baryon chiral perturbation theory and the weak meson exchange current operator derived within the same formalism. All but one of the low-energy constants that enter the calculation were fixed from pion-nucleon and nucleon-nucleon scattering data. The low-energy constant d^R (c_D), which cannot be determined from the purely two-nucleon data, was extracted recently from the triton beta-decay and the binding energies of the three-nucleon systems. The calculated values of the doublet capture rates show a rather large spread for the used values of the d^R. Precise measurement of the doublet capture rate in the future will not only help to constrain the value of d^R, but also provide a highly nontrivial test of the nuclear chiral EFT framework. Besides, the precise knowledge of the consta...
Proeschold-Bell, Rae Jean; Miles, Andrew; Toth, Matthew; Adams, Christopher; Smith, Bruce W; Toole, David
2013-12-01
The clergy occupation is unique in its combination of role strains and higher calling, putting clergy mental health at risk. We surveyed all United Methodist clergy in North Carolina, and 95% (n = 1,726) responded, with 38% responding via phone interview. We compared clergy phone interview depression rates, assessed using the Patient Health Questionnaire (PHQ-9), to those of in-person interviews in a representative United States sample that also used the PHQ-9. The clergy depression prevalence was 8.7%, significantly higher than the 5.5% rate of the national sample. We used logistic regression to explain depression, and also anxiety, assessed using the Hospital Anxiety and Depression Scale. As hypothesized by effort-reward imbalance theory, several extrinsic demands (job stress, life unpredictability) and intrinsic demands (guilt about not doing enough work, doubting one's call to ministry) significantly predicted depression and anxiety, as did rewards such as ministry satisfaction and lack of financial stress. The high rate of clergy depression signals the need for preventive policies and programs for clergy. The extrinsic and intrinsic demands and rewards suggest specific actions to improve clergy mental health.
Chuang, Yao-Yuan
2007-08-01
Variational transition state theory with multidimensional tunneling (VTST/MT) has been used for calculating the rate constants of reactions. Updated Hessians have been used to reduce the computational costs of both geometry optimization and trajectory-following procedures. In this paper, updated Hessians are used to reduce the computational costs of calculating rate constants with VTST/MT. We found that directly applying the updated Hessians does not generate good vibrational frequencies along the minimum energy path (MEP); however, we can either re-compute the full Hessian matrices at fixed intervals or calculate Block Hessians, which are constructed by numerical one-sided differences for the Hessian elements in the "critical" region and the Bofill updating scheme for the rest of the Hessian elements. Due to the numerical instability of the Bofill update method near the saddle point region, we suggest a simple strategy in which we follow the MEP with full Hessians computed until a certain percentage of the classical barrier height from the barrier top, and then perform the rate constant calculation with the extended MEP using Block Hessians. This strategy yields a mean unsigned percentage deviation (MUPD) of around 10%, with full Hessians computed up to the point at 80% of the classical barrier height, for the four reactions studied. The proposed strategy is attractive not only because it can be implemented as an automatic procedure but also because it speeds up the VTST/MT calculation via embarrassingly parallel execution on a personal computer cluster.
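The Bofill update mentioned above is a phi-weighted mix of the SR1 (Murtagh-Sargent) and PSB updates, both of which enforce the secant condition B_new dx = dg. A NumPy sketch on a toy quadratic (not an actual MEP; the matrix A and the step are invented for illustration):

```python
import numpy as np

def bofill_update(B, dx, dg):
    """Bofill Hessian update: phi-weighted mix of the SR1 and PSB updates,
    commonly used in transition-state searches.

    B  : current Hessian approximation (n x n, symmetric)
    dx : geometry step,    x_new - x_old
    dg : gradient change,  g_new - g_old
    """
    xi = dg - B @ dx                        # secant residual
    sx = float(xi @ dx)
    xx = float(dx @ dx)
    if np.isclose(float(xi @ xi), 0.0) or np.isclose(xx, 0.0):
        return B.copy()                     # secant condition already holds
    phi = sx**2 / (float(xi @ xi) * xx)     # Bofill mixing parameter in [0, 1]
    dB_sr1 = np.outer(xi, xi) / sx if not np.isclose(sx, 0.0) else 0.0
    dB_psb = (np.outer(xi, dx) + np.outer(dx, xi)) / xx \
             - sx * np.outer(dx, dx) / xx**2
    return B + phi * dB_sr1 + (1.0 - phi) * dB_psb

# Toy quadratic surface f(x) = 0.5 x^T A x, so dg = A dx exactly.
rng = np.random.default_rng(2)
A = np.array([[2.0, 0.5, 0.0],
              [0.5, 1.0, 0.3],
              [0.0, 0.3, 3.0]])
B = np.eye(3)                               # crude starting Hessian
dx = rng.normal(size=3)
dg = A @ dx
B = bofill_update(B, dx, dg)
print(np.allclose(B @ dx, dg))              # secant condition after the update
```

Both component updates satisfy the secant condition, so any phi-combination does; this is the property the check at the end verifies.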
Excellence in Physics Education Award: Modeling Theory for Physics Instruction
Hestenes, David
2014-03-01
All humans create mental models to plan and guide their interactions with the physical world. Science has greatly refined and extended this ability by creating and validating formal scientific models of physical things and processes. Research in physics education has found that mental models created from everyday experience are largely incompatible with scientific models. This suggests that the fundamental problem in learning and understanding science is coordinating mental models with scientific models. Modeling Theory has drawn on resources of cognitive science to work out extensive implications of this suggestion and guide development of an approach to science pedagogy and curriculum design called Modeling Instruction. Modeling Instruction has been widely applied to high school physics and, more recently, to chemistry and biology, with noteworthy results.
Correlations and total muon capture rates. [Primakoff effect, isospin, shell model
Mekjian, A.
1978-08-01
The total muon capture rate for s-wave muons can be accounted for by the Primakoff expression, which gives the dependence of this rate on the mass number A and the proton number Z of the absorbing nucleus. The expression is a simple three-parameter phenomenological formula which accurately describes these rates from light nuclei to heavy nuclei. These parameters relate to the isospin structure of the squared isovector operator which appears in a sum rule approach to such rates. A microscopic analysis of the parameters appearing in the capture rate expression is presented in the light of recent developments concerning photonuclear reactions. A shell model analysis is given, and it is found that the predictions of the unperturbed shell model and also Hartree-Fock theory are in complete disagreement with the data. Considerable improvement is obtained when long-range correlations are included in the ground state wave function of the absorbing nucleus. 21 references.
Model analysis of the link between interest rates and crashes
Broga, Kristijonas M.; Viegas, Eduardo; Jensen, Henrik Jeldtoft
2016-09-01
We analyse the effect of distinct levels of interest rates on the stability of the financial network under our modelling framework. We demonstrate that banking failures are likely to emerge early on under sustained high interest rates, and at a much later stage, with higher probability, under a sustained low interest rate scenario. Moreover, we demonstrate that those bank failures are of a different nature: high interest rates tend to result in significantly more bankruptcies associated with credit losses, whereas lack of liquidity tends to be the primary cause of failures under lower rates.
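The qualitative mechanism can be illustrated with a deliberately crude balance-sheet toy. This is not the authors' network model: there are no interbank links, and every parameter (margins, default sensitivity, withdrawal scale) is an invented stand-in chosen only to reproduce the two failure channels.

```python
import numpy as np

def simulate_banks(policy_rate, periods=40, n_banks=200, seed=5):
    """Toy illustration: each bank earns a spread on loans and pays the
    policy rate on deposits. Higher rates raise borrower defaults (credit
    losses against equity); lower rates thin the interest margin, so cash
    buffers rebuild slowly and random withdrawals can exhaust them.
    Returns (credit_failures, liquidity_failures)."""
    rng = np.random.default_rng(seed)
    equity = np.full(n_banks, 8.0)          # per 100 of loans
    cash = np.full(n_banks, 5.0)
    alive = np.ones(n_banks, dtype=bool)
    credit_fail = liquidity_fail = 0
    for _ in range(periods):
        margin = 2.0 + 0.1 * policy_rate    # net interest income
        # Credit losses grow with the policy rate (invented sensitivity).
        defaults = rng.normal(0.4 * policy_rate, 0.5, n_banks).clip(min=0.0)
        equity[alive] += margin - defaults[alive]
        # Random deposit withdrawals drain the cash buffer.
        cash[alive] += margin - rng.exponential(2.0, n_banks)[alive]
        newly_credit = alive & (equity <= 0.0)
        newly_liquid = alive & (cash <= 0.0) & ~newly_credit
        credit_fail += int(newly_credit.sum())
        liquidity_fail += int(newly_liquid.sum())
        alive &= ~(newly_credit | newly_liquid)
    return credit_fail, liquidity_fail

print(simulate_banks(8.0), simulate_banks(1.0))
```

Even in this caricature, the high-rate run is dominated by credit-loss failures and the low-rate run by liquidity failures, mirroring the qualitative split described in the abstract.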
Linking Complexity and Sustainability Theories: Implications for Modeling Sustainability Transitions
Camaren Peter
2014-03-01
In this paper, we deploy complexity theory as the foundation for the integration of different theoretical approaches to sustainability and develop a rationale for a complexity-based framework for modeling transitions to sustainability. We propose a framework based on a comparison of the complex systems’ properties that characterize the different theories that deal with transitions to sustainability. We argue that adopting a complexity-theory-based approach for modeling transitions requires going beyond deterministic frameworks, by adopting a probabilistic, integrative, inclusive and adaptive approach that can support transitions. We also illustrate how this complexity-based modeling framework can be implemented, i.e., how it can be used to select modeling techniques that address particular properties of complex systems that we need to understand in order to model transitions to sustainability. In doing so, we establish a complexity-based approach towards modeling sustainability transitions that caters for the broad range of complex systems’ properties that are required to model transitions to sustainability.
Rational choice theory and Becker's model of random behavior
Krstić Miloš
2015-01-01
According to rational choice theory, rational consumers tend to maximize utility under given budget constraints. This will be achieved if they choose a combination of goods that can satisfy their needs and provide the maximum level of utility. Gary Becker, on the other hand, imagines irrational consumers who choose a bundle on the budget line. As irrational consumers have an equal probability of choosing any bundle on the budget line, on average, we expect that they will pick the bundle lying at the midpoint of the line. The results of research in which artificial Becker's agents choose among more than two commodities and more than two budget/price situations show that the percentage of agents whose behavior violates rational choice theory is small. Adding some factors to Becker's model of random behavior, experimenters can minimize these minor violations. Therefore, rational choice theory is unfalsifiable. The results of our research have confirmed this theory. In addition, in the paper we discussed the explanatory value of rational choice theory in specific circumstances (positive substitution effect), and we concluded that the explanatory value of rational choice theory is significantly reduced in specific cases.
Variable bit rate video traffic modeling by multiplicative multifractal model
Huang Xiaodong; Zhou Yuanhua; Zhang Rongfu
2006-01-01
A multiplicative multifractal process can model video traffic well. The multiplier distributions in the multiplicative multifractal model for video traffic are investigated, and it is found that the Gaussian distribution is not suitable for describing the multipliers on small time scales. A new statistical distribution, the symmetric Pareto distribution, is introduced. It is applied instead of the Gaussian distribution for the multipliers on those scales. Based on that, the algorithm is updated so that the symmetric Pareto distribution and the Gaussian distribution are used to model video traffic on different time scales. The simulation results demonstrate that the algorithm can model video traffic more accurately.
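The cascade construction itself is simple: at each stage, every interval's traffic mass is split between two children by a random multiplier. The sketch below is generic, with a pluggable multiplier distribution; the symmetric Beta used here is only a stand-in for illustration, and the paper's symmetric Pareto law would replace it on the fine scales.

```python
import numpy as np

def multiplicative_cascade(total, depth, draw_multiplier, rng):
    """Binomial multiplicative cascade: at every stage each interval's
    mass is split between its two children by a random multiplier
    r in (0, 1) -- the left child gets r, the right child gets 1 - r."""
    masses = np.array([float(total)])
    for _ in range(depth):
        r = draw_multiplier(rng, masses.size)
        # Interleave the two children of each interval.
        masses = np.column_stack((masses * r, masses * (1.0 - r))).ravel()
    return masses

# Illustrative multiplier law: symmetric Beta(4, 4) on (0, 1).
draw = lambda rng, n: rng.beta(4.0, 4.0, size=n)
rng = np.random.default_rng(3)
series = multiplicative_cascade(1.0, 10, draw, rng)
print(series.size, round(float(series.sum()), 6))
```

Because every split conserves mass (r + (1 - r) = 1), the total traffic volume is preserved at every scale while the fine-scale series becomes increasingly bursty.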
Pilot evaluation in TENCompetence: a theory-driven model
Schoonenboom, Judith; Sligte, Henk; Moghnieh, Ayman; Specht, Marcus; Glahn, Christian; Stefanov, Krassen
2007-01-01
Schoonenboom, J., Sligte, H., Moghnieh, A., Specht, M., Glahn, C., & Stefanov, K. (2007). Pilot evaluation in TENCompetence: a theory-driven model. In T. Navarette, J. Blat & R. Koper (Eds.). Proceedings of the 3rd TENCompetence Open Workshop 'Current Research on IMS Learning Design and Lifelong Com
Anisotropic cosmological models and generalized scalar tensor theory
Subenoy Chakraborty; Batul Chandra Santra; Nabajit Chakravarty
2003-10-01
In this paper, generalized scalar tensor theory has been considered in the background of anisotropic cosmological models, namely, axially symmetric Bianchi-I, Bianchi-III and Kantowski–Sachs space-times. For bulk viscous fluid, both exponential and power-law solutions have been studied, and some assumptions among the physical parameters and solutions have been discussed.
Stochastic models in risk theory and management accounting
Brekelmans, R.C.M.
2000-01-01
This thesis deals with stochastic models in two fields: risk theory and management accounting. Firstly, two extensions of the classical risk process are analyzed. A method is developed that computes bounds of the probability of ruin for the classical risk rocess extended with a constant interest
Conceptualizations of Creativity: Comparing Theories and Models of Giftedness
Miller, Angie L.
2012-01-01
This article reviews seven different theories of giftedness that include creativity as a component, comparing and contrasting how each one conceptualizes creativity as a part of giftedness. The functions of creativity vary across the models, suggesting that while the field of gifted education often cites the importance of creativity, the…
Classical and Quantum Theory of Perturbations in Inflationary Universe Models
Brandenberger, R H; Mukhanov, V
1993-01-01
A brief introduction to the gauge invariant classical and quantum theory of cosmological perturbations is given. The formalism is applied to inflationary Universe models and yields a consistent and unified description of the generation and evolution of fluctuations. A general formula for the amplitude of cosmological perturbations in inflationary cosmology is derived.
Multilevel Higher-Order Item Response Theory Models
Huang, Hung-Yu; Wang, Wen-Chung
2014-01-01
In the social sciences, latent traits often have a hierarchical structure, and data can be sampled from multiple levels. Both hierarchical latent traits and multilevel data can occur simultaneously. In this study, we developed a general class of item response theory models to accommodate both hierarchical latent traits and multilevel data. The…
Evaluating hydrological model performance using information theory-based metrics
Accuracy-based model performance metrics do not necessarily reflect the qualitative correspondence between simulated and measured streamflow time series. The objective of this work was to use information theory-based metrics to see whether they can be used as a complementary tool for hydrologic m...
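One simple information-theoretic complement to accuracy metrics is the Kullback-Leibler divergence between the binned distributions of observed and simulated flows. The sketch below illustrates the idea on synthetic data; it is not the specific metric set used in the work, and the lognormal "flows" and bin count are assumptions.

```python
import numpy as np

def kl_divergence(obs, sim, bins=20):
    """Kullback-Leibler divergence D(p_obs || p_sim) between the binned
    marginal distributions of observed and simulated streamflow -- a
    distribution-oriented check alongside accuracy metrics such as RMSE
    or Nash-Sutcliffe efficiency."""
    lo = min(obs.min(), sim.min())
    hi = max(obs.max(), sim.max())
    edges = np.linspace(lo, hi, bins + 1)
    p, _ = np.histogram(obs, bins=edges)
    q, _ = np.histogram(sim, bins=edges)
    # Laplace smoothing keeps the divergence finite on empty bins.
    p = (p + 1) / (p + 1).sum()
    q = (q + 1) / (q + 1).sum()
    return float(np.sum(p * np.log(p / q)))

rng = np.random.default_rng(4)
observed = rng.lognormal(mean=1.0, sigma=0.5, size=2000)   # synthetic flows
good_model = observed * rng.normal(1.0, 0.05, size=2000)   # small random errors
biased_model = observed * 1.5                              # systematic bias
print(kl_divergence(observed, good_model) < kl_divergence(observed, biased_model))
```

A systematically biased simulation distorts the flow distribution and is penalized by the divergence even if its correlation with the observations remains high.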
A Proposed Model of Jazz Theory Knowledge Acquisition
Ciorba, Charles R.; Russell, Brian E.
2014-01-01
The purpose of this study was to test a hypothesized model that proposes a causal relationship between motivation and academic achievement on the acquisition of jazz theory knowledge. A reliability analysis of the latent variables ranged from 0.92 to 0.94. Confirmatory factor analyses of the motivation (standardized root mean square residual…
[General systems theory, analog models and essential arterial hypertension].
Indovina, I; Bonelli, M
1991-02-15
The application of the General System Theory to the fields of biology and particularly of medicine is fraught with many difficulties deriving from the mathematical complexities of application. The authors suggest that these difficulties can be overcome by applying analogical models, thus opening new prospects for the resolution of the manifold problems involved in connection with the study of arterial hypertension.
Application of Health Promotion Theories and Models for Environmental Health
Parker, Edith A.; Baldwin, Grant T.; Israel, Barbara; Salinas, Maria A.
2004-01-01
The field of environmental health promotion gained new prominence in recent years as awareness of physical environmental stressors and exposures increased in communities across the country and the world. Although many theories and conceptual models are used routinely to guide health promotion and health education interventions, they are rarely…
Using Conceptual Change Theories to Model Position Concepts in Astronomy
Yang, Chih-Chiang; Hung, Jeng-Fung
2012-01-01
The roles of conceptual change and model building in science education are very important and have a profound and wide effect on teaching science. This study examines the change in children's position concepts after instruction, based on different conceptual change theories. Three classes were chosen and divided into three groups, including a…
Spherical finite rate of innovation theory for the recovery of fiber orientations.
Deslauriers-Gauthier, Samuel; Marziliano, Pina
2012-01-01
In this paper, we investigate the reconstruction of a signal defined as the sum of K orientations from samples taken with a kernel defined on the 3D rotation group. A potential application is the recovery of fiber orientations in diffusion magnetic resonance imaging. We propose an exact reconstruction algorithm based on the finite rate of innovation theory that makes use of the spherical harmonics representation of the signal. The number of measurements needed for perfect recovery, which may be as low as 3K, depends only on the number of orientations and the bandwidth of the kernel used. Furthermore, the angular resolution of our method does not depend on the number of available measurements. We illustrate the performance of the algorithm using several simulations.
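The flat (1D) analogue of this recovery is the classical annihilating-filter (Prony) method: K Dirac locations are recovered exactly from 2K+1 consecutive Fourier samples. This sketch shows that analogue only; the spherical-harmonics version developed in the paper is structurally similar but more involved.

```python
import numpy as np

def annihilating_filter_locations(s_hat, K):
    """Recover K Dirac locations t_k in [0, 1) from Fourier coefficients
    s_hat[n] = sum_k a_k * exp(-2j*pi*(n-K)*t_k), n = 0..2K, via the
    annihilating filter: the filter's roots are exp(-2j*pi*t_k)."""
    M = len(s_hat)
    # Toeplitz system: the length-(K+1) filter h annihilates s_hat.
    T = np.array([[s_hat[i + K - j] for j in range(K + 1)]
                  for i in range(M - K)])
    # h spans the (numerical) null space of T.
    _, _, Vh = np.linalg.svd(T)
    h = Vh[-1].conj()
    roots = np.roots(h)                     # roots are exp(-2j*pi*t_k)
    return np.sort(np.angle(roots) / (-2 * np.pi) % 1.0)

# Two Diracs: 2K + 1 = 5 Fourier samples suffice for exact recovery.
K = 2
locs = np.array([0.2, 0.7])
amps = np.array([1.0, 0.5])
m = np.arange(-K, K + 1)
s_hat = (amps * np.exp(-2j * np.pi * np.outer(m, locs))).sum(axis=1)
recovered = annihilating_filter_locations(s_hat, K)
print(np.allclose(recovered, np.sort(locs), atol=1e-8))
```

As in the paper, the number of measurements needed depends only on the number of innovations K, not on any grid resolution.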