Sample records for theory model formulation

  1. Cascade Version 1: Theory and Model Formulation (United States)


    that provides this modeling framework, potentially allowing for an arbitrary number of scales. The coupling between coastal evolution at different...breakpoint. The two equations are written as follows: H_o^2 C_go cos(θ_o) = H_b^2 C_gb cos(θ_b) (7) and sin(θ_o)/C_o = sin(θ_b)/C_b (8), where H = wave height
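    A numerical sketch of how Eqs. (7) and (8) are typically used together to propagate offshore (subscript o) conditions to the breakpoint (subscript b). The closure assumptions here are ours, not necessarily the paper's: deep-water offshore celerity, shallow-water group velocity at the breakpoint, and a breaking criterion H_b = γ h_b.

```python
import math

def breaker_height(H_o, T, theta_o_deg, gamma=0.78, g=9.81):
    """Iterate Eq. (7) (energy-flux conservation) and Eq. (8) (Snell's law)
    with shallow-water C_b = C_gb = sqrt(g*h) and breaking at H_b = gamma*h."""
    C_o = g * T / (2 * math.pi)          # deep-water phase speed
    C_go = C_o / 2.0                     # deep-water group speed
    th_o = math.radians(theta_o_deg)
    h = H_o                              # initial guess for breaking depth
    for _ in range(100):
        C_b = math.sqrt(g * h)           # shallow water: phase speed = group speed
        th_b = math.asin(min(1.0, C_b * math.sin(th_o) / C_o))          # Eq. (8)
        H_b = H_o * math.sqrt(C_go * math.cos(th_o)
                              / (C_b * math.cos(th_b)))                 # Eq. (7)
        h_new = H_b / gamma              # breaking criterion closes the system
        if abs(h_new - h) < 1e-10:
            break
        h = h_new
    return H_b, math.degrees(th_b)

# hypothetical swell: 2 m offshore height, 10 s period, 20 degrees offshore angle
H_b, theta_b = breaker_height(2.0, 10.0, 20.0)
```

    The fixed-point iteration converges because the breaker height varies only weakly (roughly as h^(-1/4)) with the trial depth.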

  2. Economic model predictive control theory, formulations and chemical process applications

    CERN Document Server

    Ellis, Matthew; Christofides, Panagiotis D


    This book presents general methods for the design of economic model predictive control (EMPC) systems for broad classes of nonlinear systems that address key theoretical and practical considerations, including recursive feasibility, closed-loop stability, closed-loop performance, and computational efficiency. Specifically, the book proposes: Lyapunov-based EMPC methods for nonlinear systems; two-tier EMPC architectures that are highly computationally efficient; and EMPC schemes that explicitly handle uncertainty, time-varying cost functions, time delays and multiple-time-scale dynamics. The proposed methods employ a variety of tools ranging from nonlinear systems analysis, through Lyapunov-based control techniques, to nonlinear dynamic optimization. The applicability and performance of the proposed methods are demonstrated through a number of chemical process examples. The book presents state-of-the-art methods for the design of economic model predictive control systems for chemical processes. In addition to being...
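    A minimal receding-horizon sketch of the economic-MPC idea the book treats rigorously. The scalar linear plant and the economic stage cost below are invented for illustration and are not an example from the book; a real EMPC design would add the stability and feasibility machinery the abstract mentions.

```python
import numpy as np
from scipy.optimize import minimize

A, B, N = 0.9, 1.0, 5                   # scalar plant x+ = A x + B u, horizon N

def stage_cost(x, u):
    # hypothetical economic cost: reward a high state, penalize input effort
    return -x + 0.1 * u**2

def empc_input(x0):
    """Solve the finite-horizon economic optimal control problem; return the
    first input (receding-horizon implementation)."""
    def total_cost(u_seq):
        x, J = x0, 0.0
        for u in u_seq:
            J += stage_cost(x, u)
            x = A * x + B * u            # propagate the model
        return J
    res = minimize(total_cost, np.zeros(N), bounds=[(-1.0, 1.0)] * N)
    return float(res.x[0])

# closed-loop simulation: re-solve at every step, apply only the first input
x = 0.0
for _ in range(10):
    u = empc_input(x)
    x = A * x + B * u
```

    Unlike tracking MPC, the cost here directly encodes an economic objective rather than distance to a setpoint, which is the defining feature of EMPC.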

  3. Algebraic formulation of higher gauge theory (United States)

    Zucchini, Roberto


    In this paper, we present a purely algebraic formulation of higher gauge theory and gauged sigma models based on the abstract theory of graded commutative algebras and their morphisms. The formulation naturally incorporates Becchi-Rouet-Stora-Tyutin (BRST) symmetry and is also suitable for Alexandrov-Kontsevich-Schwartz-Zaboronsky (AKSZ) type constructions. It is also shown that for a full-fledged Batalin-Vilkovisky formulation including ghost degrees of freedom, higher gauge and gauged sigma model fields must be viewed as internal smooth functions on the shifted tangent bundle of a space-time manifold valued in a shifted L∞-algebroid encoding symmetry. The relationship to other formulations, where the L∞-algebroid arises from a higher Lie groupoid by Lie differentiation, is highlighted.

  4. Open-ended formulation of self-consistent field response theory with the polarizable continuum model for solvation. (United States)

    Di Remigio, Roberto; Beerepoot, Maarten T P; Cornaton, Yann; Ringholm, Magnus; Steindal, Arnfinn Hykkerud; Ruud, Kenneth; Frediani, Luca


    The study of high-order absorption properties of molecules is a field of growing importance. Quantum-chemical studies can help design chromophores with desirable characteristics. Given that most experiments are performed in solution, it is important to devise a cost-effective strategy to include solvation effects in quantum-chemical studies of these properties. We here present an open-ended formulation of self-consistent field (SCF) response theory for a molecular solute coupled to a polarizable continuum model (PCM) description of the solvent. Our formulation relies on the open-ended, density matrix-based quasienergy formulation of SCF response theory of Thorvaldsen et al. [J. Chem. Phys., 2008, 129, 214108] and the variational formulation of the PCM, as presented by Lipparini et al. [J. Chem. Phys., 2010, 133, 014106]. Within the PCM approach to solvation, the mutual solute-solvent polarization is represented by means of an apparent surface charge (ASC) spread over the molecular cavity defining the solute-solvent boundary. In the variational formulation, the ASC is an independent, variational degree of freedom. This allows us to formulate response theory for molecular solutes in the fixed-cavity approximation up to arbitrary order and with arbitrary perturbation operators. For electric dipole perturbations, pole and residue analyses of the response functions naturally lead to the identification of excitation energies and transition moments. We document the implementation of this approach in the Dalton program package using a recently developed open-ended response code and the PCMSolver libraries and present results for one-, two-, three-, four- and five-photon absorption processes of three small molecules in solution.

  5. Integrating theory and practice in conservatoires: formulating holistic models for teaching and learning improvisation


    Parsonage, Catherine; Fadnes, Petter Frost; Taylor, James


    Academic study has become a more significant part of a conservatoire education in recent times, but it has not always informed performance as effectively as it might. There is a need for further development of an academic curriculum that is specifically relevant to performers, in which the links between theory and practice are made explicit rather than expecting students to construct these for themselves. This article reports on research into the integration of theory and practice at Leeds Co...

  7. Analysis of Dynamic Fracture Compliance Based on Poroelastic Theory. Part I: Model Formulation and Analytical Expressions (United States)

    Wang, Ding; Qu, Shou-Li; Ding, Ping-Bo; Zhao, Qun


    The presence of bedding-parallel fractures at any scale in a rock will considerably add to its compliance and elastic anisotropy. Those properties will be more significantly affected when there is a relatively high degree of connectivity between the fractures and the corresponding interconnected pores. This contribution uses linear poroelasticity to reveal the characteristics of the full frequency-dependent compliance of an infinitely extended fracture model assuming the periodicity of the fractured structures. The fracture compliance tensor is complex-valued due to the wave-induced fluid flow between fractures and pores. The interaction between the adjacent fractures is considered under fluid mass conservation throughout the whole pore space. The quantitative effects of fracture (volume) density (the ratio between fracture thickness and spacing) and host rock porosity are analyzed by the diffusion equation for a relatively low-frequency band. The model in this paper is equivalent to the classical dry linear slip model when the bulk modulus of fluid in the fractures tends to zero. For the liquid-filled case, the model becomes the anisotropic Gassmann's model and the sealed saturated linear slip model at the low-frequency and high-frequency limits, respectively. Using the dynamic compliance definition, we can effectively distinguish the saturating fluids in the fractures with the same order of magnitude of bulk modulus (e.g., water and oil) using the compliance ratio method. Additionally, the modified dynamic model can be simplified into acceptable empirical formulas if the strain on the fractures induced by the incoming waves is small enough.

  8. The coevent formulation of quantum theory (United States)

    Wallden, Petros


    Understanding quantum theory has been a subject of debate from its birth. Many different formulations and interpretations have been proposed. Here we examine a recent novel formulation, namely the coevents formulation. It is a histories formulation and has as its starting point the Feynman path integral and the decoherence functional. The new ontology turns out to be that of a coarse-grained history. We start with a quantum measure defined on the space of histories, and the existence of zero covers rules out a single history as a potential reality (the Kochen-Specker theorem cast in histories form is a special case of a zero cover). We see that allowing coarse-grained histories as potential realities avoids the previous paradoxes, maintains deductive non-contextual logic (albeit non-Boolean) and gives rise to a unique classical domain. Moreover, we can recover the probabilistic predictions of quantum theory with the use of Cournot's principle. This formulation, being both a realist formulation and based on histories, is well suited conceptually for the purposes of quantum gravity and cosmology.

  9. A lattice formulation of chiral gauge theories

    Energy Technology Data Exchange (ETDEWEB)

    Bodwin, G.T. [Argonne National Lab., IL (United States). High Energy Physics Div.]


    The authors present a method for formulating gauge theories of chiral fermions in lattice field theory. The method makes use of a Wilson mass to remove doublers. Gauge invariance is then restored by modifying the theory in two ways: the magnitude of the fermion determinant is replaced with the square root of the determinant for a fermion with vector-like couplings to the gauge field; and a double limit is taken in which the lattice spacing associated with the fermion field is sent to zero before that associated with the gauge field. The method applies only to theories whose fermions are in an anomaly-free representation of the gauge group. The authors also present a related technique for computing matrix elements of operators involving fermion fields. Although the analyses of these methods are couched in weak-coupling perturbation theory, it is argued that the computational prescriptions are gauge invariant in the presence of a nonperturbative gauge-field configuration.

  10. Improved solar models constructed with a formulation of convection for stellar structure and evolution calculations without the mixing-length theory approximations (United States)

    Lydon, Thomas J.; Fox, Peter A.; Sofia, Sabatino


    We have updated a previous attempt to incorporate within a solar model a treatment of convection based upon numerical simulations of convection rather than mixing-length theory (MLT). We have modified our formulation of convection for a better treatment of the kinetic energy flux. Our solar model has been updated to include a complete range of OPAL opacities, the Debye-Hueckel correction to the equation of state, helium diffusion due to gravitational settling, and atmospheres by Kurucz. We construct a series of models using both MLT and our revised formulation of convection and compare the results to measurements of the solar radius, the solar luminosity, and the depth of the solar convection zone as inferred from helioseismology. We find X(solar) = 0.702 +/- 0.005, Y(solar) = 0.278 +/- 0.005, and Z(solar) = 0.0193 +/- 0.0005.
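    As a quick arithmetic sanity check (ours, not the authors'), the quoted hydrogen, helium, and metal mass fractions should sum to unity within the stated uncertainties:

```python
# quoted solar mass fractions and 1-sigma uncertainties from the abstract
X, Y, Z = 0.702, 0.278, 0.0193
dX, dY, dZ = 0.005, 0.005, 0.0005

total = X + Y + Z                          # mass fractions must sum to 1
sigma = (dX**2 + dY**2 + dZ**2) ** 0.5     # assuming independent errors
consistent = abs(total - 1.0) < sigma      # 0.9993 vs 1.0, within ~0.007
```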

  11. Model theory

    CERN Document Server

    Chang, CC


    Model theory deals with a branch of mathematical logic showing connections between a formal language and its interpretations or models. This is the first and most successful textbook in logical model theory. Extensively updated and corrected in 1990 to accommodate developments in model theoretic methods - including classification theory and nonstandard analysis - the third edition added entirely new sections, exercises, and references. Each chapter introduces an individual method and discusses specific applications. Basic methods of constructing models include constants, elementary chains, Sko

  12. Nonequilibrium formulation of abelian gauge theories

    Energy Technology Data Exchange (ETDEWEB)

    Zoeller, Thorsten


    This work concerns a formulation of abelian gauge theories out of equilibrium. In contrast to thermal equilibrium, systems out of equilibrium are not constant in time, and the interesting questions in such systems refer to time evolution problems. After a short introduction to quantum electrodynamics (QED), the two-particle irreducible (2PI) effective action is introduced as an essential technique for the study of quantum field theories out of equilibrium. The equations of motion (EOMs) for the propagators of the theory are then derived from it. There follows a discussion of the physical degrees of freedom (DOFs) of the theory, in particular with respect to the photons, since covariant formulations of gauge theories necessarily contain unphysical DOFs. After that, the EOMs for the photon propagator are examined more closely. It turns out that they are structurally complicated, and a reformulation of the equations is presented which, for the untruncated theory, leads to an essential structural simplification of the EOMs. After providing the initial conditions which are necessary in order to solve the EOMs, the free photon EOMs are solved with the help of the reformulated equations. It turns out that the solutions diverge in time, i.e. they are secular. This is a manifestation of the fact that gauge theories contain unphysical DOFs. It is reasoned that these secularities exist only in the free case and are therefore 'artificial'. It is emphasized, however, that while they may not be a problem in principle, they certainly are in practice, in particular for the numerical solution of the EOMs. Further, the origin of the secularities, for which there exists an illustrative explanation, is discussed in more detail. Another characteristic feature of 2PI formulations of gauge theories is the fact that quantities calculated from approximations of the 2PI effective action, which are gauge invariant in the exact theory as well as in an approximated theory at

  13. Formulation of lattice gauge theories for quantum simulations

    DEFF Research Database (Denmark)

    Zohar, Erez; Burrello, Michele


    We examine the Kogut-Susskind formulation of lattice gauge theories under the light of fermionic and bosonic degrees of freedom that provide a description useful to the development of quantum simulators of gauge-invariant models. We consider both discrete and continuous gauge groups and adopt a realistic multicomponent Fock space for the definition of matter degrees of freedom. In particular, we express the Hamiltonian of the gauge theory and the Gauss law in terms of Fock operators. The gauge fields are described in two different bases based on either group elements or group representations. This formulation allows for a natural scheme to achieve a consistent truncation of the Hilbert space for continuous groups, and provides helpful tools to study the connections of gauge theories with topological quantum double and string-net models for discrete groups. Several examples, including the case...
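    The idea of consistently truncating the Hilbert space of a continuous gauge group can be illustrated with a U(1) link truncated in the electric-field eigenbasis. This is a generic illustration of the technique, not code from the paper: the flux basis |m>, m = -L..L, keeps the canonical commutator exact while sacrificing unitarity of the link operator at the truncation boundary.

```python
import numpy as np

L = 3                                   # electric-field cutoff
dim = 2 * L + 1                         # truncated link Hilbert space |m>, m=-L..L
E = np.diag(np.arange(-L, L + 1).astype(float))   # electric field operator
U = np.diag(np.ones(dim - 1), k=-1)               # raises flux: |m> -> |m+1>

# The commutator [E, U] = U survives the truncation exactly, because U only
# connects neighboring flux eigenstates...
commutator = E @ U - U @ E

# ...but U is no longer unitary: it annihilates the highest-flux state.
gram = U.T @ U                          # diag(1, ..., 1, 0) instead of identity
```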

  14. Significance of Strain in Formulation in Theory of Solid Mechanics (United States)

    Patnaik, Surya N.; Coroneos, Rula M.; Hopkins, Dale A.


    The basic theory of solid mechanics was deemed complete circa 1860 when St. Venant provided the strain formulation, or the field compatibility condition. The strain formulation was incomplete. The missing portion has been formulated and identified as the boundary compatibility condition (BCC). The BCC, derived through a variational formulation, has been verified through an integral theorem and the solution of problems. The BCC, unlike its field counterpart, does not trivialize when expressed in displacements. Navier's method and the stiffness formulation have to account for the extra conditions, especially at the inter-element boundaries in a finite element model. Completion of the strain formulation has led to the revival of direct force calculation methods: the Integrated Force Method (IFM) and its dual (IFMD) for finite element analysis, and the completed Beltrami-Michell formulation (CBMF) in elasticity. The benefits from the new methods in elasticity, in finite element analysis, and in design optimization are discussed. Existing solutions and computer codes may have to be adjusted to comply with the new conditions. Complacency, because the discipline is over a century old and computer codes have been developed for half a century, can lead to stagnation of the discipline.

  15. Degradation theories of concrete and development of a new damage model in incremental tangent formulation: limit analysis applied to the case of anchor bolts embedded in concrete

    Energy Technology Data Exchange (ETDEWEB)

    Ung Quoc, H


    This research is carried out within the general framework of the study of concrete behaviour. Its objective is the development of a new behaviour model satisfying the particular requirements of industrial exploitation. After an analysis of different existing models, a first development concerned models based on the smeared crack theory. A new formulation of that theory made it possible to overcome the stress locking problem. However, the analysis showed that, in spite of this improvement, some limits inherent to the approach persist. An analysis of the physical mechanisms of concrete degradation was then carried out and led to the development of the new damage model MODEV. The general formulation of this model is based on the theory of thermodynamics and applies to heterogeneous and brittle materials. The MODEV model considers two damage mechanisms: extension and sliding. The model also considers that the relative tangential displacement between microcrack lips is responsible for the irreversibility of strain. Thus, the rate of inelastic strain becomes a function of the damage and the heterogeneity index of the material. The unilateral effect is taken into account as an elastic hardening or softening process according to the re-closing or re-opening of cracks. The model is written within the framework of non-standard generalised materials in incremental tangent formulation and implemented in the general finite element code SYMPHONIE. The model has been validated on the basis of several tests from the literature. The second part of this research concerned the development of the CHEVILAB software. This simulation tool, based on the limit analysis approach, permits the evaluation of the ultimate load capacity of anchor bolts. The kinematic approach of limit analysis has been adapted to the problem of anchors, considering several specific failure mechanisms. This approach has then been validated by comparison with the

  16. Model theory

    CERN Document Server

    Hodges, Wilfrid


    An up-to-date and integrated introduction to model theory, designed to be used for graduate courses (for students who are familiar with first-order logic), and as a reference for more experienced logicians and mathematicians.

  17. The operator tensor formulation of quantum theory. (United States)

    Hardy, Lucien


    In this paper, we provide what might be regarded as a manifestly covariant presentation of discrete quantum theory. A typical quantum experiment has a bunch of apparatuses placed so that quantum systems can pass between them. We regard each use of an apparatus, along with some given outcome on the apparatus (a certain detector click or a certain meter reading, for example), as an operation. An operation (e.g. B^{b2 a3}_{a1}) can have zero or more quantum systems inputted into it and zero or more quantum systems outputted from it. The operation B^{b2 a3}_{a1} has one system of type a inputted, and one system of type b and one system of type a outputted. We can wire together operations to form circuits, for example, A^{a1} B^{b2 a3}_{a1} C_{b2 a3}. Each repeated integer label here denotes a wire connecting an output to an input of the same type. As each operation in a circuit has an outcome associated with it, a circuit represents a set of outcomes that can happen in a run of the experiment. In the operator tensor formulation of quantum theory, each operation corresponds to an operator tensor. For example, the operation B^{b2 a3}_{a1} corresponds to the operator tensor B̂^{b2 a3}_{a1}. Further, the probability for a general circuit is given by replacing operations with the corresponding operator tensors, as in Prob(A^{a1} B^{b2 a3}_{a1} C_{b2 a3}) = Â^{a1} B̂^{b2 a3}_{a1} Ĉ_{b2 a3}. Repeated integer labels indicate that we multiply in the associated subspace and then take the partial trace over that subspace. Operator tensors must be physical (namely, they must have a positive input transpose and satisfy a certain normalization condition).
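    The contraction rule for repeated wire labels (multiply in the associated subspace, then take the partial trace) can be mimicked numerically. The single-qubit toy below is our illustration, not Hardy's full calculus: the preparation's operator tensor is taken as a density matrix and the measurement's as a POVM effect, so the circuit probability reduces to a trace contraction.

```python
import numpy as np

# preparation operation with one output wire a1  ->  density matrix rho
rho = np.array([[1.0, 0.0],
                [0.0, 0.0]])           # prepare |0><0|
# measurement operation with one input wire a1   ->  POVM effect E
E = np.array([[0.5, 0.5],
              [0.5, 0.5]])             # effect |+><+|

# contracting the repeated label a1: multiply in that subspace, then trace,
# i.e. prob = Tr(rho E), written here as an explicit index contraction
prob = np.einsum('ij,ji->', rho, E)
```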

  18. Multicomponent density functional theory embedding formulation. (United States)

    Culpitt, Tanner; Brorsen, Kurt R; Pak, Michael V; Hammes-Schiffer, Sharon


    Multicomponent density functional theory (DFT) methods have been developed to treat two types of particles, such as electrons and nuclei, quantum mechanically at the same level. In the nuclear-electronic orbital (NEO) approach, all electrons and select nuclei, typically key protons, are treated quantum mechanically. For multicomponent DFT methods developed within the NEO framework, electron-proton correlation functionals based on explicitly correlated wavefunctions have been designed and used in conjunction with well-established electronic exchange-correlation functionals. Herein a general theory for multicomponent embedded DFT is developed to enable the accurate treatment of larger systems. In the general theory, the total electronic density is separated into two subsystem densities, denoted as regular and special, and different electron-proton correlation functionals are used for these two electronic densities. In the specific implementation, the special electron density is defined in terms of spatially localized Kohn-Sham electronic orbitals, and electron-proton correlation is included only for the special electron density. The electron-proton correlation functional depends on only the special electron density and the proton density, whereas the electronic exchange-correlation functional depends on the total electronic density. This scheme includes the essential electron-proton correlation, which is a relatively local effect, as well as the electronic exchange-correlation for the entire system. This multicomponent DFT-in-DFT embedding theory is applied to the HCN and FHF(-) molecules in conjunction with two different electron-proton correlation functionals and three different electronic exchange-correlation functionals. The results illustrate that this approach provides qualitatively accurate nuclear densities in a computationally tractable manner. The general theory is also easily extended to other types of partitioning schemes for multicomponent systems.

  19. Formulations of classical and quantum dynamical theory

    CERN Document Server

    Rosen, Gerald


    In this book, we study theoretical and practical aspects of computing methods for mathematical modelling of nonlinear systems. A number of computing techniques are considered, such as methods of operator approximation with any given accuracy; operator interpolation techniques including a non-Lagrange interpolation; methods of system representation subject to constraints associated with concepts of causality, memory and stationarity; methods of system representation with an accuracy that is the best within a given class of models; methods of covariance matrix estimation; methods for low-rank

  20. [Psychoanalysis and epistemology: mental development and formulation of theories]. (United States)

    Zysman, Samuel


    This paper aims at studying from a psychoanalytical perspective the relationship between the acquisition of knowledge, the formulation of theories based on the generalization of such knowledge, and, what we consider to be an antecedent, the infantile sexual theories (IST). Psychoanalysis is also a psychology of normal psychic processes, among them creative activity which includes scientific thought. This is of interest to psychoanalysts and to epistemologists and paves the way to necessary interdisciplinary endeavors.

  1. Monopoles in the Plaquette Formulation of the 3D SU(2) Lattice Gauge Theory

    CERN Document Server

    Borisenko, O; Boháčik, J


    Using a plaquette formulation for lattice gauge models, we describe monopoles of the three-dimensional SU(2) theory which appear as configurations in the complete axial gauge and violate the continuum Bianchi identity. Furthermore, we derive a dual formulation for the Wilson loop in arbitrary representation and calculate the form of the interaction between the generated electric flux and monopoles in the weak-coupling region relevant for the continuum limit. The effective theory which controls the interaction is a sine-Gordon type model. The string tension is calculated within the semiclassical approximation.

  2. Fock space formulation of Chern-Simons theories

    Energy Technology Data Exchange (ETDEWEB)

    Lugo, A.R. (Dept. de Particulas Elementales, Univ. Santiago (Spain))


    A Fock space operator formulation of Chern-Simons-Witten theories is presented, and its connection with the usual holomorphic quantization is established. The conformal blocks are obtained as projections of suitable states on a Fock vacuum. We study in detail the abelian case and sketch its extension to non-abelian groups, where the power of the formalism in computing the corresponding WZW conformal blocks on arbitrary Riemann surfaces is emphasized. (orig.).

  3. Variational formulations of guiding-center Vlasov-Maxwell theory

    Energy Technology Data Exchange (ETDEWEB)

    Brizard, Alain J. [Department of Physics, Saint Michael's College, Colchester, Vermont 05439 (United States)]; Tronci, Cesare [Department of Mathematics, University of Surrey, Guildford GU2 7XH (United Kingdom)]


    The variational formulations of guiding-center Vlasov-Maxwell theory based on Lagrange, Euler, and Euler-Poincaré variational principles are presented. Each variational principle yields a different approach to introducing guiding-center polarization and magnetization effects into the guiding-center Maxwell equations. The conservation laws of energy, momentum, and angular momentum are also derived by the Noether method, where the guiding-center stress tensor is now shown to be explicitly symmetric.

  4. Combinatorial formulation of Ising model revisited


    Costa, G.A.T.F. da; Maciel, A.L.


    In 1952, Kac and Ward developed a combinatorial formulation for the two-dimensional Ising model which is another method of obtaining Onsager's famous formula for the free energy per site in the thermodynamic limit of the model. Feynman made an important contribution to this formulation by conjecturing a crucial mathematical relation which completed Kac and Ward's ideas. In this paper, the method of Kac, Ward and Feynman for the free-field Ising model in two dimensions is reviewed in a self-contained...
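    For orientation, the Onsager result the abstract refers to can be stated explicitly. For the isotropic square-lattice Ising model with coupling J at inverse temperature β, the free energy per site f satisfies (standard textbook form, quoted from general knowledge rather than from this paper):

```latex
-\beta f \;=\; \ln 2 \;+\; \frac{1}{8\pi^{2}}
\int_{0}^{2\pi}\!\!\int_{0}^{2\pi}
\ln\!\Bigl[\cosh^{2}(2\beta J)\;-\;\sinh(2\beta J)\,
\bigl(\cos\theta_{1}+\cos\theta_{2}\bigr)\Bigr]\,
d\theta_{1}\,d\theta_{2}.
```

    The Kac-Ward approach recovers this expression by evaluating a combinatorially defined determinant over lattice paths.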

  5. Matrix model formulation of four dimensional gravity

    Energy Technology Data Exchange (ETDEWEB)

    De Pietri, Roberto


    The attempt to extend the matrix model formulation of two-dimensional quantum gravity to higher dimensions leads to the consideration of higher rank tensor models. We discuss how these models relate to four-dimensional quantum gravity and the precise conditions allowing one to associate a four-dimensional simplicial manifold to the Feynman diagrams of a rank-four tensor model.

  6. A parcel formulation for Hamiltonian layer models

    NARCIS (Netherlands)

    Bokhove, Onno; Oliver, M.

    Starting from the three-dimensional hydrostatic primitive equations, we derive Hamiltonian N-layer models with isentropic tropospheric and isentropic or isothermal stratospheric layers. Our construction employs a new parcel Hamiltonian formulation which describes the fluid as a continuum of

  8. Model theory and applications

    CERN Document Server

    Belegradek, OV


    This volume is a collection of papers on model theory and its applications. The longest paper, "Model Theory of Unitriangular Groups" by O. V. Belegradek, forms a subtle general theory behind Mal'tsev's famous correspondence between rings and groups. This is the first published paper on the topic. Given the present model-theoretic interest in algebraic groups, Belegradek's work is of particular interest to logicians and algebraists. The rest of the collection consists of papers on various questions of model theory, mainly on stability theory. Contributors are leading Russian researchers in the

  9. A general field-covariant formulation of quantum field theory

    Energy Technology Data Exchange (ETDEWEB)

    Anselmi, Damiano [Universita di Pisa, Dipartimento di Fisica 'Enrico Fermi', Pisa (Italy)]


    In all nontrivial cases renormalization, as it is usually formulated, is not a change of integration variables in the functional integral, plus parameter redefinitions, but a set of replacements, of actions and/or field variables and parameters. Because of this, we cannot write simple identities relating bare and renormalized generating functionals, or generating functionals before and after nonlinear changes of field variables. In this paper we investigate this issue and work out a general field-covariant approach to quantum field theory, which allows us to treat all perturbative changes of field variables, including the relation between bare and renormalized fields, as true changes of variables in the functional integral, under which the functionals Z and W=lnZ behave as scalars. We investigate the relation between composite fields and changes of field variables, and we show that, if J are the sources coupled to the elementary fields, all changes of field variables can be expressed as J-dependent redefinitions of the sources L coupled to the composite fields. We also work out the relation between the renormalization of variable-changes and the renormalization of composite fields. Using our transformation rules it is possible to derive the renormalization of a theory in a new variable frame from the renormalization in the old variable frame, without having to calculate it anew. We define several approaches, useful for different purposes, in particular a linear approach where all variable changes are described as linear source redefinitions. We include a number of explicit examples. (orig.)

  10. Model theory and modules

    CERN Document Server

    Prest, M


    In recent years the interplay between model theory and other branches of mathematics has led to many deep and intriguing results. In this, the first book on the topic, the theme is the interplay between model theory and the theory of modules. The book is intended to be a self-contained introduction to the subject and introduces the requisite model theory and module theory as it is needed. Dr Prest develops the basic ideas concerning what can be said about modules using the information which may be expressed in a first-order language. Later chapters discuss stability-theoretic aspects of module


    AERMOD is an advanced plume model that incorporates updated treatments of boundary-layer theory and of turbulence and dispersion, and includes handling of terrain interactions. This paper presents an overview of AERMOD's features relative to ISCST3. AERM...

  12. From the Garbage Can Model to the Theory of Recycling: Reconsidering Policy Formulation as a Process of Struggling to Define a Policy Statement


    Zittoun, Philippe


    One of the main paradoxes produced by the garbage can model is the empirical observation that a proposal does not necessarily appear through problem solving but through a coupling process in which a proposal searches for a problem. This empirical observation looks like a paradox for those who consider the meaning of a policy proposal to be fundamental during the policy process. This article suggests a new way to combine the garbage can model with the argumentative turn by taking into account both o...

  13. An introduction to optimal power flow: Theory, formulation, and examples

    Energy Technology Data Exchange (ETDEWEB)

    Frank, Stephen; Rebennack, Steffen


    The set of optimization problems in electric power systems engineering known collectively as Optimal Power Flow (OPF) is one of the most practically important and well-researched subfields of constrained nonlinear optimization. OPF has enjoyed a rich history of research, innovation, and publication since its debut five decades ago. Nevertheless, entry into OPF research is a daunting task for the uninitiated--both because of the sheer volume of literature and because OPF's ubiquity within the electric power systems community has led authors to assume a great deal of prior knowledge that readers unfamiliar with electric power systems may not possess. This article provides an introduction to OPF from an operations research perspective; it describes a complete and concise basis of knowledge for beginning OPF research. The discussion is tailored for the operations researcher who has experience with nonlinear optimization but little knowledge of electrical engineering. Topics covered include power systems modeling, the power flow equations, typical OPF formulations, and common OPF extensions.
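    As a toy illustration of a typical OPF formulation (not taken from the article), the DC approximation on a hypothetical two-bus system with made-up data reduces to a linear program:

    ```python
    # Minimal DC-OPF sketch: minimize generation cost subject to a power
    # balance and a line limit. All data (costs, load, limits) are invented.
    from scipy.optimize import linprog

    cost = [10.0, 20.0]                   # $/MWh for g1 (bus 1) and g2 (bus 2)
    A_eq, b_eq = [[1.0, 1.0]], [100.0]    # balance: p1 + p2 = 100 MW load at bus 2
    A_ub, b_ub = [[1.0, 0.0]], [60.0]     # line 1->2 carries g1's export, cap 60 MW
    bounds = [(0.0, 80.0), (0.0, 100.0)]  # generator capacity limits

    res = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    p1, p2 = res.x
    print(f"g1 = {p1:.1f} MW, g2 = {p2:.1f} MW, cost = {res.fun:.0f} $")
    # cheap g1 runs up to the 60 MW line limit; g2 covers the remaining 40 MW
    ```

    Real (AC) OPF replaces the linear balance with the nonlinear power flow equations, which is where the article's nonlinear-optimization perspective comes in.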

  14. Nonlinear Time-Domain Strip Theory Formulation for Low-Speed Manoeuvering and Station-Keeping

    Directory of Open Access Journals (Sweden)

    Thor I. Fossen


    This paper presents a computationally efficient nonlinear time-domain strip theory formulation for dynamic positioning (DP) and low-speed manoeuvring. Strip theory, or 2D potential theory, where the ship is divided into 20 to 30 cross sections, can be used to compute the potential coefficients (added mass and potential damping) and the exciting wave loads (Froude-Krylov and diffraction forces). Commercially available programs include ShipX (VERES) by Marintek (Fathi, 2004) and SEAWAY by Amarcon (Journée & Adegeest, 2003), for instance. The proposed method can easily be extended to utilize other strip theory formulations or 3-D potential programs like WAMIT (2004). The frequency-dependent potential damping, which in classic theory results in a convolution integral not suited for real-time simulation, is compactly represented by using the state-space formulation of Kristiansen & Egeland (2003). The separation of the vessel model into a low-frequency model (represented by zero-frequency added mass and damping) and a wave-frequency model (represented by motion transfer functions or RAOs), which is commonly used for simulation, is hence made superfluous. Transformations of motions and coefficients between different coordinate systems and origins, i.e. data frame, hydrodynamic frame, body frame, inertial frame etc., are put into the rigid framework of Fossen (1994, 2002). The kinematic equations of motion are formulated in a compact nonlinear vector representation, and the classical kinematic assumption that the Euler angles are small is removed. This is important for computation of accurate control forces at higher roll and pitch angles. The hydrodynamic forces in the steadily translating hydrodynamic reference frame (equilibrium axes) are, however, assumed to be linear. Recipes for computation of retardation functions are presented and frequency-dependent viscous damping is included. Emphasis is placed on numerical computations and representation of the data from VERES and
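    The convolution-to-state-space replacement described above can be sketched with an invented scalar retardation kernel K(t) = e^(-at) cos(bt), which a two-state linear system realizes exactly (C e^(At) B = K(t)); the recursion then needs no stored history:

    ```python
    # Sketch of replacing a retardation (memory) convolution by a state-space
    # model that can be stepped in real time. Kernel and excitation are made up.
    import numpy as np

    a, b, dt, n = 1.0, 3.0, 1e-3, 5000
    A = np.array([[-a,  b], [-b, -a]])     # eigenvalues -a +/- ib
    B = np.array([1.0, 0.0])
    C = np.array([1.0, 0.0])               # C exp(A t) B = exp(-a t) cos(b t)

    t = np.arange(n) * dt
    u = np.sin(2.0 * t)                    # some excitation (e.g. a wave load)

    # direct convolution: y(t) = integral of K(t - tau) u(tau) dtau
    K = np.exp(-a * t) * np.cos(b * t)
    y_conv = np.convolve(K, u)[:n] * dt

    # state-space recursion (forward Euler): no history storage needed
    x, y_ss = np.zeros(2), np.zeros(n)
    for k in range(n):
        y_ss[k] = C @ x
        x = x + dt * (A @ x + B * u[k])

    print(np.max(np.abs(y_conv - y_ss)))   # small discretization mismatch
    ```

    In practice the state-space matrices are identified from the frequency-dependent damping data (e.g. from VERES) rather than written down analytically, but the simulation-time payoff is the same.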

  15. Lagrangian formulation of symmetric space sine-Gordon models

    CERN Document Server

    Bakas, Ioannis; Shin, H J; Park, Q Han


    The symmetric space sine-Gordon models arise by conformal reduction of ordinary two-dimensional sigma-models, and they are integrable, exhibiting a black-hole-type metric in target space. We provide a Lagrangian formulation of these systems by considering a triplet of Lie groups F ⊃ G ⊃ H. We show that for every symmetric space F/G, the generalized sine-Gordon models can be derived from the G/H WZW action, plus a potential term that is algebraically specified. Thus, the symmetric space sine-Gordon models describe certain integrable perturbations of coset conformal field theories at the classical level. We also briefly discuss their vacuum structure, Bäcklund transformations, and soliton solutions.
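    These models generalize the ordinary sine-Gordon equation; as a quick numerical sanity check (not from the paper), the standard moving kink φ = 4 arctan e^(γ(x - vt)) solves φ_tt - φ_xx + sin φ = 0:

    ```python
    # Verify the sine-Gordon kink by finite differences (illustrative check).
    import numpy as np

    v = 0.5
    g = 1.0 / np.sqrt(1.0 - v**2)              # Lorentz factor
    phi = lambda x, t: 4.0 * np.arctan(np.exp(g * (x - v * t)))

    # central differences for phi_tt and phi_xx at an arbitrary point
    x0, t0, h = 0.3, 0.1, 1e-4
    phi_tt = (phi(x0, t0 + h) - 2 * phi(x0, t0) + phi(x0, t0 - h)) / h**2
    phi_xx = (phi(x0 + h, t0) - 2 * phi(x0, t0) + phi(x0 - h, t0)) / h**2
    residual = phi_tt - phi_xx + np.sin(phi(x0, t0))
    print(abs(residual))   # zero up to discretization error
    ```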

  16. Studies on the formulation of thermodynamics and stochastic theory for systems far from equilibrium

    Energy Technology Data Exchange (ETDEWEB)

    Ross, J. [Stanford Univ., CA (United States)]


    We have been working for some time on the formulation of thermodynamics and the theory of fluctuations in systems far from equilibrium, and progress on several aspects of that development is reported here.

  17. State variable theories based on Hart's formulation

    Energy Technology Data Exchange (ETDEWEB)

    Korhonen, M.A.; Hannula, S.P.; Li, C.Y.


    In this paper a review of the development of a state variable theory for nonelastic deformation is given. The physical and phenomenological basis of the theory and the constitutive equations describing macroplastic, microplastic, anelastic and grain boundary sliding enhanced deformation are presented. The experimental and analytical evaluation of different parameters in the constitutive equations are described in detail followed by a review of the extensive experimental work on different materials. The technological aspects of the state variable approach are highlighted by examples of the simulative and predictive capabilities of the theory. Finally, a discussion of general capabilities, limitations and future developments of the theory and particularly the possible extensions to cover an even wider range of deformation or deformation-related phenomena is presented.

  18. Gauge Natural Formulation of Conformal Theory of Gravity

    CERN Document Server

    Campigotto, M


    We consider conformal gravity as a gauge natural theory. We study its conservation laws and superpotentials. We also consider the Mannheim and Kazanas spherically symmetric vacuum solution and discuss conserved quantities associated with conformal and diffeomorphism symmetries.

  19. Affine group formulation of the Standard Model coupled to gravity

    Energy Technology Data Exchange (ETDEWEB)

    Chou, Ching-Yi [Department of Physics, National Cheng Kung University, Taiwan (China)]; Ita, Eyo [Department of Physics, US Naval Academy, Annapolis, MD (United States)]; Soo, Chopin [Department of Physics, National Cheng Kung University, Taiwan (China)]


    In this work we apply the affine group formalism for four-dimensional gravity of Lorentzian signature, which is based on Klauder’s affine algebraic program, to the formulation of the Hamiltonian constraint of the interaction of matter and all forces, including gravity with non-vanishing cosmological constant Λ, as an affine Lie algebra. We use the hermitian action of fermions coupled to gravitation and Yang–Mills theory to find the density-weight-one fermionic super-Hamiltonian constraint. This term, combined with the Yang–Mills and Higgs energy densities, is composed with York’s integrated time functional. The result, when combined with the imaginary part of the Chern–Simons functional Q, forms the affine commutation relation with the volume element V(x). Affine algebraic quantization of gravitation and matter on an equal footing implies a fundamental uncertainty relation which is predicated upon a non-vanishing cosmological constant. -- Highlights: •Wheeler–DeWitt equation (WDW) quantized as affine algebra, realizing Klauder’s program. •WDW formulated for interaction of matter and all forces, including gravity, as affine algebra. •WDW features Hermitian generators in spite of fermionic content: Standard Model addressed. •Constructed a family of physical states for the full, coupled theory via affine coherent states. •Fundamental uncertainty relation, predicated on non-vanishing cosmological constant.


    Directory of Open Access Journals (Sweden)

    Sergey I. Zhavoronok


    Some variants of the generalized Hamiltonian formulation of the plate theory of I. N. Vekua – A. A. Amosov type are presented. The infinite-dimensional formulation with one evolution variable, or an “instantaneous” formalism, as well as the de Donder – Weyl one are considered, and their application to the numerical simulation of shell and plate dynamics is briefly discussed. The main conservation laws are formulated for the general plate theory of Nth order, and the possible motion integrals are introduced.

  1. Boost invariant formulation of the chiral kinetic theory (United States)

    Ebihara, Shu; Fukushima, Kenji; Pu, Shi


    We formulate the chiral kinetic equation with the longitudinal boost invariance. We particularly focus on the physical interpretation of the particle number conservation. There appear two terms associated with the expansion, which did not exist in the nonchiral kinetic equation. One is a contribution to the transverse current arising from the side-jump effect, and the other is a change in the density whose flow makes the longitudinal current. We point out a characteristic pattern in the transverse current driven by the expansion, which we call the chiral circular displacement.

  2. Formulated linear programming problems from game theory and its ...

    African Journals Online (AJOL)

    ... using the primal and symmetric dual linear programming problem and its numerical illustration of the super linear programming problem using TORA package. Keywords: Game theory, linear programming, zero sum game, TORA package, computer. International Journal of Natural and Applied Sciences, 6(4): 413 - 422, ...
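    The reduction of a zero-sum matrix game to its primal linear program, which the abstract applies with the TORA package, can be sketched with illustrative data (scipy stands in for TORA here):

    ```python
    # Hypothetical worked example: value and optimal mixed strategy of a 2x2
    # zero-sum game via the standard primal LP. Payoff matrix is made up.
    import numpy as np
    from scipy.optimize import linprog

    A = np.array([[3.0, -1.0],
                  [-2.0, 4.0]])    # row player's payoff matrix

    m, n = A.shape
    # variables z = (x_1, ..., x_m, v): maximize v  s.t.  (A^T x)_j >= v, sum x = 1
    c = np.zeros(m + 1); c[-1] = -1.0                  # minimize -v
    A_ub = np.hstack([-A.T, np.ones((n, 1))])          # v - (A^T x)_j <= 0
    b_ub = np.zeros(n)
    A_eq = np.append(np.ones(m), 0.0).reshape(1, -1)   # probabilities sum to 1
    b_eq = [1.0]
    bounds = [(0.0, None)] * m + [(None, None)]

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    x, value = res.x[:m], res.x[-1]
    print(x, value)   # optimal mix (0.6, 0.4), game value 1.0
    ```

    The column player's optimal strategy comes from the symmetric dual LP, as the entry notes.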

  3. Mimetic Theory for Cell-Centered Lagrangian Finite Volume Formulation on General Unstructured Grids

    Energy Technology Data Exchange (ETDEWEB)

    Sambasivan, Shiv Kumar [Los Alamos National Laboratory; Shashkov, Mikhail J. [Los Alamos National Laboratory; Burton, Donald E. [Los Alamos National Laboratory; Christon, Mark A. [Los Alamos National Laboratory


    A finite volume cell-centered Lagrangian scheme for solving large deformation problems is constructed based on the hypo-elastic model and using the mimetic theory. Rigorous analysis in the context of gas and solid dynamics, and arbitrary polygonal meshes, is presented to demonstrate the ability of cell-centered schemes in mimicking the continuum properties and principles at the discrete level. A new mimetic formulation based gradient evaluation technique and physics-based, frame independent and symmetry preserving slope limiters are proposed. Furthermore, a physically consistent dissipation model is employed which is both robust and inexpensive to implement. The cell-centered scheme along with these additional new features are applied to solve solids undergoing elasto-plastic deformation.

  4. Theory and modeling group (United States)

    Holman, Gordon D.


    The primary purpose of the Theory and Modeling Group meeting was to identify scientists engaged or interested in theoretical work pertinent to the Max '91 program, and to encourage theorists to pursue modeling which is directly relevant to data which can be expected to result from the program. A list of participants and their institutions is presented. Two solar flare paradigms were discussed during the meeting -- the importance of magnetic reconnection in flares and the applicability of numerical simulation results to solar flare studies.

  5. Formule.

    Directory of Open Access Journals (Sweden)

    Le Comité de Rédaction d'


    Full Text Available Vous découvrez aujourd’hui la nouvelle formule d’EspacesTemps. net. Ce basculement repose sur des changements techniques d’une certaine ampleur et nous vous demandons d’être indulgents si quelques imperfections subsistent dans les prochains jours. Il s’agit d’abord de la substitution du dispositif de mise en ligne : à partir de maintenant, nous utilisons le logiciel Lodel. Dans l’esprit de l’association, à laquelle EspacesTemps adhère, l’unification du ...

  6. Phase-Field Formulation for Quantitative Modeling of Alloy Solidification

    Energy Technology Data Exchange (ETDEWEB)

    Karma, Alain


    A phase-field formulation is introduced to simulate quantitatively microstructural pattern formation in alloys. The thin-interface limit of this formulation yields a much less stringent restriction on the choice of interface thickness than previous formulations and permits one to eliminate nonequilibrium effects at the interface. Dendrite growth simulations with vanishing solid diffusivity show that both the interface evolution and the solute profile in the solid are accurately modeled by this approach.
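    A schematic one-dimensional phase-field relaxation (a generic Allen-Cahn double well, not the quantitative alloy model of the record) shows the diffuse tanh-shaped interface whose thickness the formulation controls:

    ```python
    # Schematic 1D phase-field relaxation: a sharp interface relaxes to the
    # diffuse equilibrium profile tanh(x / (sqrt(2) W)). Parameters invented.
    import numpy as np

    N, dx, dt, W = 200, 0.1, 0.002, 1.0        # grid, time step, interface width
    x = (np.arange(N) - N / 2) * dx
    phi = np.sign(x)                            # sharp initial interface

    for _ in range(5000):
        lap = (np.roll(phi, 1) - 2 * phi + np.roll(phi, -1)) / dx**2
        phi += dt * (W**2 * lap + phi - phi**3)  # dphi/dt = -dF/dphi
        phi[0], phi[-1] = -1.0, 1.0              # pin far-field bulk values

    err = np.max(np.abs(phi - np.tanh(x / (np.sqrt(2) * W))))
    print(err)   # small (discretization-limited)
    ```

    In the quantitative alloy formulation, a coupled solute field and anti-trapping terms are added so that results become independent of the (numerically convenient) interface thickness.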


  8. Ambitwistor formulations of R² gravity and (DF)² gauge theories (United States)

    Azevedo, Thales; Engelund, Oluf Tang


    We consider D-dimensional amplitudes in R² gravities (conformal gravity in D = 4) and in the recently introduced (DF)² gauge theory, from the perspective of the CHY formulae and ambitwistor string theory. These theories are related through the BCJ double-copy construction, and the (DF)² gauge theory obeys color-kinematics duality. We work out the worldsheet details of these theories and show that they admit a formulation as integrals on the support of the scattering equations, or alternatively, as ambitwistor string theories. For gravity, this generalizes the work done by Berkovits and Witten on conformal gravity to D dimensions. The ambitwistor string is also interpreted as a D-dimensional generalization of Witten's twistor string (SYM + conformal supergravity). As part of our ambitwistor investigation, we discover another (DF)² gauge theory containing a photon that couples to Einstein gravity. This theory can provide an alternative KLT description of Einstein gravity, compared to the usual Yang-Mills squared.

  9. Learning the Game of Formulating and Testing Hypotheses and Theories (United States)

    Maloney, David P.; Masters, Mark F.


    Physics is not immune to questioning by supporters of nonscientific propositions such as "intelligent design" and "creationism." The supporters of these propositions use phrases such as "it's just a theory" to influence those unfamiliar with or even fearful of science, making it increasingly important that all students, and in particular science students (who are often assumed, against all evidence, to have an innate understanding of science), learn about the nature of science. Indeed, for at least a century one of the major objectives of science instruction has been to help students develop a sense of the nature of scientific investigation. In physics, laboratory experiences are often used as a method to teach the nature of the scientific endeavor. Unfortunately, all too often these experiences are simply directed demonstrations that do no more than teach students to follow directions. In this situation, the scientific processes involved are simply ignored, even among science majors, and this omission is exacerbated when dealing with non-science majors in general education science courses, where the students may have a fear or dislike of science.

  10. Multivector field formulation of Hamiltonian field theories: equations and symmetries

    Energy Technology Data Exchange (ETDEWEB)

    Echeverria-Enriquez, A.; Munoz-Lecanda, M.C.; Roman-Roy, N. [Departamento de Matematica Aplicada y Telematica, Edificio C-3, Campus Norte UPC, Barcelona (Spain)]


    We state the intrinsic form of the Hamiltonian equations of first-order classical field theories in three equivalent geometrical ways: using multivector fields, jet fields and connections. Thus, these equations are given in a form similar to that in which the Hamiltonian equations of mechanics are usually given. Then, using multivector fields, we study several aspects of these equations, such as the existence and non-uniqueness of solutions, and the integrability problem. In particular, these problems are analysed for the case of Hamiltonian systems defined in a submanifold of the multimomentum bundle. Furthermore, the existence of first integrals of these Hamiltonian equations is considered, and the relation between Cartan-Noether symmetries and general symmetries of the system is discussed. Noether's theorem is also stated in this context, both the 'classical' version and its generalization to include higher-order Cartan-Noether symmetries. Finally, the equivalence between the Lagrangian and Hamiltonian formalisms is also discussed. (author)

  11. k-Cosymplectic Classical Field Theories: Tulczyjew and Skinner-Rusk Formulations

    Energy Technology Data Exchange (ETDEWEB)

    Rey, Angel M. [Universidade de Santiago de Compostela, Departamento de Xeometria e Topoloxia, Facultade de Matematicas (Spain)]; Roman-Roy, Narciso [Technical University of Catalonia, Departamento de Matematica Aplicada IV (Spain)]; Salgado, Modesto [Universidade de Santiago de Compostela, Departamento de Xeometria e Topoloxia, Facultade de Matematicas (Spain)]; Vilarino, Silvia [Centro Universitario de La Defensa, Academia General Militar, Carretera de Huesca (Spain)]


    The k-cosymplectic Lagrangian and Hamiltonian formalisms of first-order classical field theories are reviewed and completed. In particular, they are stated for singular and almost-regular systems. Subsequently, several alternative formulations for k-cosymplectic first-order field theories are developed: First, generalizing the construction of Tulczyjew for mechanics, we give a new interpretation of the classical field equations. Second, the Lagrangian and Hamiltonian formalisms are unified by giving an extension of the Skinner-Rusk formulation on classical mechanics.


  13. Non-Periodic Finite-Element Formulation of Orbital-Free Density Functional Theory

    Energy Technology Data Exchange (ETDEWEB)

    Gavini, V; Knap, J; Bhattacharya, K; Ortiz, M


    We propose an approach to perform orbital-free density functional theory calculations in a non-periodic setting using the finite-element method. We consider this a step towards constructing a seamless multi-scale approach for studying defects like vacancies, dislocations and cracks that require quantum mechanical resolution at the core and are sensitive to long range continuum stresses. In this paper, we describe a local real space variational formulation for orbital-free density functional theory, including the electrostatic terms and prove existence results. We prove the convergence of the finite-element approximation including numerical quadratures for our variational formulation. Finally, we demonstrate our method using examples.
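    The simplest orbital-free ingredient, the Thomas-Fermi kinetic-energy functional T[ρ] = C_F ∫ ρ^(5/3) d³r, can be evaluated on a radial grid; the hydrogenic density below is an illustrative choice, and the electrostatic terms of the paper's variational formulation are omitted here:

    ```python
    # Sketch: Thomas-Fermi kinetic energy of the hydrogenic density
    # rho(r) = exp(-2r)/pi (atomic units, normalized to one electron).
    import numpy as np

    C_F = 0.3 * (3.0 * np.pi**2) ** (2.0 / 3.0)   # Thomas-Fermi constant

    dr = 1.5e-4
    r = np.arange(dr, 30.0, dr)
    rho = np.exp(-2.0 * r) / np.pi
    w = 4.0 * np.pi * r**2 * dr                   # radial volume weights

    norm = np.sum(rho * w)                        # should be ~ 1 electron
    T_TF = C_F * np.sum(rho ** (5.0 / 3.0) * w)

    print(norm, T_TF)   # norm ~ 1.0; T_TF ~ 0.289 hartree (exact KE is 0.5)
    ```

    The underestimate relative to the exact kinetic energy is a known limitation of the bare Thomas-Fermi term; finite-element discretizations like the one proposed replace this uniform radial grid with unstructured meshes suited to defects.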

  14. On the exact formulation of multi-configuration density-functional theory: electron density versus orbitals occupation

    CERN Document Server

    Fromager, Emmanuel


    The exact formulation of multi-configuration density-functional theory (DFT) is discussed in this work. As an alternative to range-separated methods, where electron correlation effects are split in the coordinate space, the combination of Configuration Interaction methods with orbital occupation functionals is explored at the formal level through the separation of correlation effects in the orbital space. When applied to model Hamiltonians, this approach leads to an exact Site-Occupation Embedding Theory (SOET). An adiabatic connection expression is derived for the complementary bath functional and a comparison with Density Matrix Embedding Theory (DMET) is made. Illustrative results are given for the simple two-site Hubbard model. SOET is then applied to a quantum chemical Hamiltonian, thus leading to an exact Complete Active Space Site-Occupation Functional Theory (CASSOFT) where active electrons are correlated explicitly within the CAS and the remaining contributions to the correlation energy are described...

  15. A numerical basis for strain-gradient plasticity theory: Rate-independent and rate-dependent formulations

    DEFF Research Database (Denmark)

    Nielsen, Kim Lau; Niordson, Christian Frithiof


    A numerical model formulation of the higher order flow theory (rate-independent) by Fleck and Willis [2009. A mathematical basis for strain-gradient plasticity theory – part II: tensorial plastic multiplier. Journal of the Mechanics and Physics of Solids 57, 1045-1057.], that allows for elastic...... of a single plastic zone is analyzed to illustrate the agreement with earlier published results, whereafter examples of (ii) multiple plastic zone interaction, and (iii) elastic–plastic loading/unloading are presented. Here, the simple shear problem of an infinite slab constrained between rigid plates...

  16. Model Theory for Process Algebra

    NARCIS (Netherlands)

    Bergstra, J.A.; Middelburg, C.A.


    We present a first-order extension of the algebraic theory about processes known as ACP and its main models. Useful predicates on processes, such as deadlock freedom and determinism, can be added to this theory through first-order definitional extensions. Model theory is used to analyse the

  17. Spectral theory and quantum mechanics mathematical foundations of quantum theories, symmetries and introduction to the algebraic formulation

    CERN Document Server

    Moretti, Valter


    This book discusses the mathematical foundations of quantum theories. It offers an introductory text on linear functional analysis with a focus on Hilbert spaces, highlighting the spectral theory features that are relevant in physics. After exploring physical phenomenology, it then turns its attention to the formal and logical aspects of the theory. Further, this Second Edition collects in one volume a number of useful rigorous results on the mathematical structure of quantum mechanics focusing in particular on von Neumann algebras, Superselection rules, the various notions of Quantum Symmetry and Symmetry Groups, and including a number of fundamental results on the algebraic formulation of quantum theories. Intended for Master's and PhD students, both in physics and mathematics, the material is designed to be self-contained: it includes a summary of point-set topology and abstract measure theory, together with an appendix on differential geometry. The book also benefits established researchers by organizing ...

  18. Hamiltonian Formulation of Palatini f(R) theories a la Brans-Dicke


    Olmo, Gonzalo J.; Sanchis-Alepuz, Helios


    We study the Hamiltonian formulation of f(R) theories of gravity both in metric and in Palatini formalism, using their classical equivalence with Brans-Dicke theories with a non-trivial potential. The Palatini case, which corresponds to the w=-3/2 Brans-Dicke theory, requires special attention because of new constraints associated with the scalar field, which is non-dynamical. We derive, compare, and discuss the constraints and evolution equations for the w=-3/2 and w




  20. Density functional theory formulation for fluid adsorption on correlated random surfaces (United States)

    Aslyamov, Timur; Khlyupin, Aleksey


    We provide a novel random surface density functional theory (RSDFT) formulation for geometrically heterogeneous surfaces of solid media, which is essential for the description of thermodynamic properties of confined fluids. The major difference of our theoretical approach from existing ones is a stochastic model of solid surfaces that takes into account the correlation properties of the geometry. The main building blocks are effective fluid-solid potentials developed in the work of Khlyupin and Aslyamov [J. Stat. Phys. 167, 1519 (2017)] and a geometry-based modification of the Helmholtz free energy for Lennard-Jones fluids. The efficiency of RSDFT is demonstrated in the calculation of argon and nitrogen low-temperature adsorption on real heterogeneous surfaces (BP280 carbon black). These results are in good agreement with experimental data published in the literature. Several models of corrugated materials are also developed in the framework of RSDFT. Numerical analysis demonstrates a strong influence of surface roughness characteristics on adsorption isotherms. Thus the developed formalism provides a connection between a rigorous description of the stochastic surface and confined-fluid thermodynamics.

  1. Differential geometry based solvation model II: Lagrangian formulation. (United States)

    Chen, Zhan; Baker, Nathan A; Wei, G W


    Solvation is an elementary process in nature and is of paramount importance to more sophisticated chemical, biological and biomolecular processes. The understanding of solvation is an essential prerequisite for the quantitative description and analysis of biomolecular systems. This work presents a Lagrangian formulation of our differential geometry based solvation models. The Lagrangian representation of biomolecular surfaces has a few utilities/advantages. First, it provides an essential basis for biomolecular visualization, surface electrostatic potential map and visual perception of biomolecules. Additionally, it is consistent with the conventional setting of implicit solvent theories and thus, many existing theoretical algorithms and computational software packages can be directly employed. Finally, the Lagrangian representation does not need to resort to artificially enlarged van der Waals radii as often required by the Eulerian representation in solvation analysis. The main goal of the present work is to analyze the connection, similarity and difference between the Eulerian and Lagrangian formalisms of the solvation model. Such analysis is important to the understanding of the differential geometry based solvation model. The present model extends the scaled particle theory of nonpolar solvation model with a solvent-solute interaction potential. The nonpolar solvation model is completed with a Poisson-Boltzmann (PB) theory based polar solvation model. The differential geometry theory of surfaces is employed to provide a natural description of solvent-solute interfaces. The optimization of the total free energy functional, which encompasses the polar and nonpolar contributions, leads to coupled potential driven geometric flow and PB equations. Due to the development of singularities and nonsmooth manifolds in the Lagrangian representation, the resulting potential-driven geometric flow equation is embedded into the Eulerian representation for the purpose of

  2. A cascade computer model for microbicide diffusivity from mucoadhesive formulations


    Lee, Yugyung; Khemka, Alok; Acharya, Gayathri; Giri, Namita; Lee, Chi H.


    Background The cascade computer model (CCM) was designed as a machine-learning feature platform for prediction of drug diffusivity from the mucoadhesive formulations. Three basic models (the statistical regression model, the K nearest neighbor model and the modified version of the back propagation neural network) in CCM operate sequentially in close collaboration with each other, employing the estimated value obtained from the afore-positioned base model as an input value to the next-position...
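    The sequential idea, in which each base model's estimate is fed to the next model as an input feature, can be sketched on synthetic data (this is a schematic cascade, not the authors' CCM; the two stages here are plain least squares and k-nearest neighbours):

    ```python
    # Schematic two-stage cascade: a linear regression's prediction becomes an
    # extra feature for a kNN stage. Data and target function are invented.
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.uniform(-1, 1, size=(200, 2))
    y = 2.0 * X[:, 0] + np.sin(3.0 * X[:, 1])        # synthetic "diffusivity"

    # Stage 1: linear regression via least squares
    A = np.hstack([X, np.ones((len(X), 1))])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    y1 = A @ coef

    # Stage 2: k-nearest-neighbour estimate on features augmented with y1
    Z = np.hstack([X, y1[:, None]])
    def knn_predict(z, k=5):
        d = np.linalg.norm(Z - z, axis=1)
        return y[np.argsort(d)[:k]].mean()

    y2 = np.array([knn_predict(z) for z in Z])
    print(np.mean((y1 - y) ** 2), np.mean((y2 - y) ** 2))  # cascade lowers MSE
    ```

    The abstract's third stage (a back-propagation network) would consume the stage-2 estimate in the same fashion.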

  3. Frame-Covariant Formulation of Inflation in Scalar-Curvature Theories

    CERN Document Server

    Burns, Daniel; Pilaftsis, Apostolos


    We develop a frame-covariant formulation of inflation in the slow-roll approximation by generalizing the inflationary attractor solution for scalar-curvature theories. Our formulation gives rise to new generalized forms for the potential slow-roll parameters, which enable us to examine the effect of conformal transformations and inflaton reparameterizations in scalar-curvature theories. We find that cosmological observables, such as the power spectrum, the spectral indices and their runnings, can be expressed in a concise manner in terms of the generalized potential slow-roll parameters which depend on the scalar-curvature coupling function, the inflaton wavefunction, and the inflaton potential. We show how the cosmological observables of inflation are frame-invariant in this generalized potential slow-roll formalism, as long as the end-of-inflation condition is appropriately extended to become frame-invariant as well. We then apply our formalism to specific scenarios, such as the induced gravity inflation, H...
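    For orientation, the standard Einstein-frame potential slow-roll parameters that the paper generalizes can be computed for a quadratic potential (illustrative numbers, reduced Planck units):

    ```python
    # Standard potential slow-roll parameters for V = m^2 phi^2 / 2 (m = 1;
    # the overall scale cancels in eps and eta). Field value is illustrative.
    M_p = 1.0                          # reduced Planck mass

    V   = lambda p: 0.5 * p**2
    dV  = lambda p: p
    d2V = lambda p: 1.0

    def slow_roll(p):
        eps = 0.5 * M_p**2 * (dV(p) / V(p)) ** 2
        eta = M_p**2 * d2V(p) / V(p)
        return eps, eta

    phi = 15.0 * M_p                   # roughly 56 e-folds before inflation ends
    eps, eta = slow_roll(phi)
    n_s = 1.0 - 6.0 * eps + 2.0 * eta  # spectral index
    r   = 16.0 * eps                   # tensor-to-scalar ratio
    print(n_s, r)                      # about 0.964 and 0.142
    ```

    The paper's frame-covariant generalization folds the scalar-curvature coupling and the inflaton wavefunction into these parameters so that the same observables come out in any conformal frame.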

  4. The application of contraction theory to an iterative formulation of electromagnetic scattering (United States)

    Brand, J. C.; Kauffman, J. F.


    Contraction theory is applied to an iterative formulation of electromagnetic scattering from periodic structures, and a computational method for ensuring convergence is developed. A short history of spectral (or k-space) formulation is presented, with an emphasis on application to periodic surfaces. To ensure a convergent solution of the iterative equation, a process called the contraction corrector method is developed. Convergence properties of previously presented iterative solutions to one-dimensional problems are examined utilizing contraction theory, and the general conditions for achieving a convergent solution are explored. The contraction corrector method is then applied to several scattering problems, including an infinite grating of thin wires, with the solution data compared to previous works.
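    The underlying contraction (Banach fixed-point) principle can be shown on an invented scalar map rather than the scattering operator: a Lipschitz constant q < 1 guarantees convergence and yields an a-posteriori stopping bound:

    ```python
    # Fixed-point iteration with a contraction constant q < 1: the bound
    # |x_k - x*| <= q/(1-q) |x_k - x_{k-1}| controls the stopping test.
    import math

    T = lambda x: 0.5 * math.cos(x)     # toy contraction: |T'| <= 0.5 < 1
    q = 0.5                             # contraction constant

    x_prev, x = 0.0, T(0.0)
    while q / (1 - q) * abs(x - x_prev) > 1e-12:   # a-posteriori error bound
        x_prev, x = x, T(x)

    print(x, abs(x - T(x)))   # fixed point of x = cos(x)/2, residual ~ 0
    ```

    The contraction corrector method of the paper plays an analogous role: it modifies the iteration operator so that its effective contraction constant drops below one.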

  5. Formulation of consumables management models, executive summary (United States)

    Torian, J. G.


    Future manned space programs with increased launch frequencies and reusable systems require new consumables and systems management techniques that reduce the workload of both operations support personnel and the flight crew. Analytical models and techniques were developed which consist of a Mission Planning Processor (MPP) with an appropriate consumables data base, methods of recognizing potential constraint violations in both the planning and flight operations functions, and flight data files for storage/retrieval of information over extended periods, interfacing with flight operations processors for monitoring of the actual flights. Consumables subsystems considered in the MPP were electrical power, environmental control and life support, propulsion, hydraulics, and auxiliary power.

  6. A cohesive finite element formulation for modelling fracture and ...

    Indian Academy of Sciences (India)

    Abstract. In recent years, cohesive zone models have been employed to simulate fracture and delamination in solids. This paper presents in detail the formulation for incorporating cohesive zone models within the framework of a large deformation finite element procedure. A special Ritz-finite element technique is employed ...

  8. Property Model-Based Chemical Substitution and Chemical Formulation Design

    DEFF Research Database (Denmark)

    Jhamb, Spardha Virendra; Liang, Xiaodong; Hukkerikar, Amol Shivajirao

    Chemical-based products including structured product formulations and single molecule products have proven to be a boon to mankind and have been a significant part of our economies. Our life and the changes around us cannot be imagined without the presence or involvement of chemicals. But like....... The goal therefore is to investigate comprehensively the uses and properties of the chemicals of concern; develop a systematic framework to identify, compare and select safer alternatives to these including their corresponding manufacturing processes; and finally design safe chemical product formulations...... or product formulations with improved product performance. The model-based approach makes use of validated property models to identify the chemicals which need to be substituted, that is, the chemicals that meet the desired physico-chemical properties but not the regulatory (EH&S: environmental, health...

  9. Lectures on algebraic model theory

    CERN Document Server

    Hart, Bradd


    In recent years, model theory has had remarkable success in solving important problems as well as in shedding new light on our understanding of them. The three lectures collected here present recent developments in three such areas: Anand Pillay on differential fields, Patrick Speissegger on o-minimality and Matthias Clasen and Matthew Valeriote on tame congruence theory.

  10. A Computational Theory of Modelling (United States)

    Rossberg, Axel G.


    A metatheory is developed which characterizes the relationship between a modelled system, which complies with some ``basic theory'', and a model, which does not, and yet reproduces important aspects of the modelled system. A model is represented by an (in a certain sense, s.b.) optimal algorithm which generates data that describe the model's state or evolution complying with a ``reduced theory''. Theories are represented by classes of (in a similar sense, s.b.) optimal algorithms that test if their input data comply with the theory. The metatheory does not prescribe the formalisms (data structure, language) to be used for the description of states or evolutions. Transitions to other formalisms and loss of accuracy, common to theory reduction, are explicitly accounted for. The basic assumption of the theory is that resources such as the code length (~ programming time) and the computation time for modelling and testing are costly, but the relative cost of each resource is unknown. Thus, if there is an algorithm a for which there is no other algorithm b solving the same problem but using less of each resource, then a is considered optimal. For tests (theories), the set X of wrongly admitted inputs is treated as another resource. It is assumed that X1 is cheaper than X2 when X1 ⊂ X2 (X1 ≠ X2). Depending on the problem, the algorithmic complexity of a reduced theory can be smaller or larger than that of the basic theory. The theory might help to distinguish actual properties of complex systems from mere mental constructs. An application to complex spatio-temporal patterns is discussed.
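    The optimality notion used in the abstract is Pareto optimality over resource vectors: an algorithm is optimal if no competitor uses no more of every resource while using strictly less of at least one. A minimal sketch of that dominance test (resource vectors and values are illustrative, not from the paper):

```python
def dominates(a, b):
    """True if resource vector a uses no more of every resource than b
    and strictly less of at least one (Pareto dominance)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_optimal(candidates):
    """Keep the algorithms not dominated by any other candidate."""
    return [a for i, a in enumerate(candidates)
            if not any(dominates(b, a)
                       for j, b in enumerate(candidates) if j != i)]

# Resource vectors: (code length, computation time) -- invented numbers
algs = [(100, 5.0), (80, 9.0), (120, 6.0), (80, 5.0)]
front = pareto_optimal(algs)
```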

  11. Mechanistic model and analysis of doxorubicin release from liposomal formulations. (United States)

    Fugit, Kyle D; Xiang, Tian-Xiang; Choi, Du H; Kangarlou, Sogol; Csuhai, Eva; Bummer, Paul M; Anderson, Bradley D


    Reliable and predictive models of drug release kinetics in vitro and in vivo are still lacking for liposomal formulations. Developing robust, predictive release models requires systematic, quantitative characterization of these complex drug delivery systems with respect to the physicochemical properties governing the driving force for release. These models must also incorporate changes in release due to the dissolution media and methods employed to monitor release. This paper demonstrates the successful development and application of a mathematical mechanistic model capable of predicting doxorubicin (DXR) release kinetics from liposomal formulations resembling the FDA-approved nanoformulation DOXIL® using dynamic dialysis. The model accounts for DXR equilibria (e.g. self-association, precipitation, ionization), the change in intravesicular pH due to ammonia release, and dialysis membrane transport of DXR. The model was tested using a Box-Behnken experimental design in which release conditions including extravesicular pH, ammonia concentration in the release medium, and the dilution of the formulation (i.e. suspension concentration) were varied. Mechanistic model predictions agreed with observed DXR release up to 19 h. The predictions were similar to a computer fit of the release data using an empirical model often employed for analyzing data generated from this type of experimental design. Unlike the empirical model, the mechanistic model was also able to provide reasonable predictions of release outside the tested design space. These results illustrate the usefulness of mechanistic modeling to predict drug release from liposomal formulations in vitro and its potential for future development of in vitro-in vivo correlations for complex nanoformulations. Copyright © 2015 Elsevier B.V. All rights reserved.
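    One of the equilibria such a model tracks, ionization, follows the Henderson-Hasselbalch relation: for a weak base, the un-ionized (membrane-permeable) fraction at a given pH is 1/(1 + 10^(pKa − pH)). A sketch of why an acidic vesicle interior traps the drug (the pKa and pH values below are placeholders for illustration, not DOXIL-specific data):

```python
def fraction_unionized_base(pH, pKa):
    """Henderson-Hasselbalch: fraction of a weak base in its neutral form."""
    return 1.0 / (1.0 + 10.0 ** (pKa - pH))

pKa = 8.5                   # placeholder value, for illustration only
inside, outside = 5.5, 7.4  # acidic intravesicular vs. extravesicular pH
f_in = fraction_unionized_base(inside, pKa)
f_out = fraction_unionized_base(outside, pKa)
# The low intravesicular pH keeps almost all of the base ionized, i.e. trapped,
# which is why the ammonia-driven change in intravesicular pH matters for release.
```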

  12. Technologies for conceptual modelling and intelligent query formulation

    CSIR Research Space (South Africa)

    Alberts, R


    Authors: R. Alberts (1), K. Britz (1), A. Gerber (1), K. Halland (1,2), T. Meyer (1), L. Pretorius (1,2). (1) Knowledge Systems Group, Meraka Institute, CSIR, Pretoria, Gauteng, South Africa; (2) School of Computing, University of South...

  13. Mechanistic Modelling of Biodiesel Production using a Liquid Lipase Formulation

    DEFF Research Database (Denmark)

    Price, Jason Anthony; Hofmann, Björn; Silva, Vanessa T. L.


    In this article, a kinetic model for the enzymatic transesterification of rapeseed oil with methanol using CalleraTM Trans L (a liquid formulation of a modified Thermomyces lanuginosus lipase) was developed from first principles. We base the model formulation on a Ping-Pong Bi-Bi mechanism....... Methanol inhibition, along with the interfacial and bulk concentrations of the enzyme, was also modeled. The model was developed to describe the effect of different oil compositions, as well as different water, enzyme, and methanol concentrations, which are relevant conditions needed for process evaluation......, with respect to the industrial production of biodiesel. The developed kinetic model, coupled with a mass balance of the system, was fitted to and validated on experimental results for the fed-batch transesterification of rapeseed oil. The confidence intervals of the parameter estimates, along...

  14. Cosmological evolution of the cosmological plasma with interpartial scalar interaction. II. Formulation of mathematical model

    CERN Document Server

    Ignat'ev, Yu G


    On the basis of relativistic kinetic theory, relativistic statistical systems of particles with scalar interaction are investigated. The self-consistent system of equations describing a self-gravitating plasma with interparticle scalar interaction is formulated, and the macroscopic conservation laws are derived. A closed system of equations is obtained describing cosmological models in which the matter is represented by a plasma with interparticle scalar interaction.

  15. Validation of neutron current formulations for the response matrix method based on the SP3 theory

    Energy Technology Data Exchange (ETDEWEB)

    Tada, Kenichi, E-mail: [Department of Materials, Physics and Energy Engineering, Nagoya University, Furo-cho, Chikusa-ku, Nagoya 464-8603 (Japan); Yamamoto, Akio; Yamane, Yoshihiro [Department of Materials, Physics and Energy Engineering, Nagoya University, Furo-cho, Chikusa-ku, Nagoya 464-8603 (Japan); Kosaka, Shinya; Hirano, Gou [TEPCOSYSTEMS CORPORATION, 2-37-28, Eitai, Koto-ku, Tokyo 135-0034 (Japan)


    The pin-by-pin fine-mesh BWR core analysis code SUBARU has been developed as a next-generation core analysis code. SUBARU is based on the SP3 theory, and the response matrix method is used for flux calculations. The SP3 theory involves the 0th- and 2nd-order neutron fluxes; therefore, relations among the 0th- and 2nd-order partial neutron currents and the fluxes are required to apply the response matrix method. In SUBARU, the relations among the partial neutron currents and the fluxes are approximated as being analogous to those of diffusion theory. Our previous study revealed that the prediction accuracy of SUBARU is much higher than that of conventional core analysis codes. However, the validity of the above approximation had not been directly investigated. Therefore, the relations among the partial neutron currents and the fluxes are theoretically derived, and calculation results with the rigorous and the conventional formulations are compared. The calculation results indicate that the approximation of the conventional formulation is appropriate for BWR core analysis.

  16. Safe Model Predictive Control Formulations Ensuring Process Operational Safety


    Albalawi, Fahad Ali


    Model predictive control (MPC) is an advanced control strategy widely used in the process industries and beyond. Therefore, industry is interested in the development of MPC formulations that can enhance safety, reliability, and economic profitability of chemical processes. Motivated by these considerations, this dissertation focuses on the development of methods for integrating process operational safety and process economics within model predictive control system designs. To accomplish these...

  17. Model Theory in Algebra, Analysis and Arithmetic

    CERN Document Server

    Dries, Lou; Macpherson, H Dugald; Pillay, Anand; Toffalori, Carlo; Wilkie, Alex J


    Presenting recent developments and applications, the book focuses on four main topics in current model theory: 1) the model theory of valued fields; 2) undecidability in arithmetic; 3) NIP theories; and 4) the model theory of real and complex exponentiation. Young researchers in model theory will particularly benefit from the book, as will more senior researchers in other branches of mathematics.

  18. On the formulation of a crystal plasticity model.

    Energy Technology Data Exchange (ETDEWEB)

    Marin, Esteban B.


    This report presents the formulation of a crystal elasto-viscoplastic model and the corresponding integration scheme. The model is suitable to represent the isothermal, anisotropic, large deformation of polycrystalline metals. The formulation is an extension of a rigid viscoplastic model to account for elasticity effects, and incorporates a number of changes with respect to a previous formulation [Marin & Dawson, 1998]. This extension is formally derived using the well-known multiplicative decomposition of the deformation gradient into elastic and plastic components, where the elastic part is additionally decomposed into the elastic stretch V^e and the proper orthogonal rotation R^e. The constitutive equations are written in the intermediate, stress-free configuration obtained by unloading the deformed crystal through the inverse elastic stretch V^{e-1}. The model is framed in a thermodynamic setting, and developed initially for large elastic strains. The crystal equations are then specialized to the case of small elastic strains, an assumption typically valid for metals. The developed integration scheme is implicit and proceeds by separating the spherical and deviatoric crystal responses. An "approximate" algorithmic material modulus is also derived for applications in implicit numerical codes. The model equations and their integration procedure have been implemented in both a material point simulator and a commercial finite element code. Both implementations are validated by solving a number of examples involving aggregates of either face-centered cubic (FCC) or hexagonal close-packed (HCP) crystals subjected to different loading paths.

  19. Modeling Routinization in Games: An Information Theory Approach

    DEFF Research Database (Denmark)

    Wallner, Simon; Pichlmair, Martin; Hecher, Michael


    Routinization is the result of practicing until an action stops being a goal-directed process. This paper formulates a definition of routinization in games based on prior research in the fields of activity theory and practice theory. Routinization is analyzed using the formal model of discrete-time, discrete-space Markov chains and information theory to measure the actual error between the dynamically trained models and the player interaction. Preliminary research supports the hypothesis that Markov chains can be effectively used to model routinization in games. A full study design is presented...
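    The abstract combines a dynamically trained discrete-time, discrete-space Markov chain with an information-theoretic error between model and player interaction. The paper's exact estimator and metric are not given, so the following is only a plausible sketch: train a first-order chain on an action trace and score a trace by its average cross-entropy in bits (all names and data invented):

```python
import math
from collections import Counter, defaultdict

def train_markov(seq, states, alpha=1.0):
    """First-order Markov transition model with additive (Laplace) smoothing."""
    counts = defaultdict(Counter)
    for a, b in zip(seq, seq[1:]):
        counts[a][b] += 1
    P = {}
    for a in states:
        total = sum(counts[a].values()) + alpha * len(states)
        P[a] = {b: (counts[a][b] + alpha) / total for b in states}
    return P

def cross_entropy_bits(P, seq):
    """Average -log2 P(next | current): the model's error on a trace, in bits."""
    logs = [-math.log2(P[a][b]) for a, b in zip(seq, seq[1:])]
    return sum(logs) / len(logs)

states = ["jump", "run", "shoot"]
routine = ["jump", "run", "shoot"] * 50        # a highly routinized action trace
P = train_markov(routine, states)
err_routine = cross_entropy_bits(P, routine)   # near 0 bits: predictable behavior
err_uniform = math.log2(len(states))           # baseline: uniform guessing
```

A routinized trace yields a per-step error far below the uniform baseline, which is one way an information-theoretic measure can separate routinized from exploratory play.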

  20. Deconfinement and universality in the 3D U(1) lattice gauge theory at finite temperature: study in the dual formulation

    Energy Technology Data Exchange (ETDEWEB)

    Borisenko, O.; Chelnokov, V. [Bogolyubov Institute for Theoretical Physics, National Academy of Sciences of Ukraine,UA-03680 Kiev (Ukraine); Gravina, M.; Papa, A. [Dipartimento di Fisica, Università della Calabria, and INFN - Gruppo collegato di Cosenza,I-87036 Arcavacata di Rende, Cosenza (Italy)


    We study analytically and numerically the three-dimensional U(1) lattice gauge theory at finite temperature in the dual formulation. For an appropriate disorder operator, we obtain the renormalization group equations describing the critical behavior of the model in the vicinity of the deconfinement phase transition. These equations are used to check the validity of the Svetitsky-Yaffe conjecture regarding the critical behavior of the lattice U(1) model. Furthermore, we perform numerical simulations of the model for N_t = 1, 2, 4, 8 and compute, by a cluster algorithm, the dual correlation functions and the corresponding second moment correlation length. In this way we locate the position of the critical point and calculate critical indices.
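    The second-moment correlation length mentioned above is conventionally extracted from a two-point correlator G(r) via ξ² = Σ_r r² G(r) / (2d Σ_r G(r)) in d dimensions. A minimal 1D sketch with a synthetic exponential correlator (purely illustrative; not the paper's cluster-algorithm estimator):

```python
import math

def second_moment_xi(G, d=1):
    """Second-moment correlation length from a radial correlator G[r]:
    xi^2 = sum_r r^2 G(r) / (2 d sum_r G(r))."""
    num = sum(r * r * g for r, g in enumerate(G))
    den = 2 * d * sum(G)
    return math.sqrt(num / den)

xi_true = 4.0
# Synthetic correlator with pure exponential decay, G(r) = exp(-r / xi_true)
G = [math.exp(-r / xi_true) for r in range(200)]
xi2nd = second_moment_xi(G)
# For pure exponential decay in 1D, xi2nd approximates xi_true
# (up to lattice-discreteness corrections).
```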

  1. Standard Model as a Double Field Theory. (United States)

    Choi, Kang-Sin; Park, Jeong-Hyuck


    We show that, without any extra physical degree of freedom introduced, the standard model can be readily reformulated as a double field theory. Consequently, the standard model can couple to an arbitrary stringy gravitational background in an O(4,4) T-duality covariant manner and manifest two independent local Lorentz symmetries, Spin(1,3)×Spin(3,1). While the diagonal gauge fixing of the twofold spin groups leads to the conventional formulation on the flat Minkowskian background, the enhanced symmetry makes the standard model more rigid, and also stringy, than it appeared. The CP violating θ term may no longer be allowed by the symmetry, and hence the strong CP problem can be solved. There are now stronger constraints imposed on the possible higher order corrections. We speculate that the quarks and the leptons may belong to the two different spin classes.

  2. Formulation of court interpreting models: A South African perspective

    Directory of Open Access Journals (Sweden)

    Samuel Joseph Lebese


    In South Africa there are no models of court interpreting to serve as a guide for court interpreters when performing their task, because there is no proper definition of the role of a court interpreter. Models of court interpreting define and describe the process by stating what court interpreters are actually doing when carrying out their task. The absence of these models presents challenges to South African court interpreters, as they are expected to follow international models which are formulated culturally, using English metaphorical language that differs from that of indigenous South African languages. As a result, the metaphorical language is likely to be misinterpreted by South African court interpreters, as English is not their first language. The application of international models is likely to cause challenges when applied in the South African context, hence the need to formulate models of court interpreting which can be applied to the South African linguistic context. The study follows a qualitative research approach and uses multifaceted theoretical frameworks, namely descriptive translation studies (DTS), cognitive process analysis, and content analysis, in collecting and analysing the data.

  3. Thermodynamic Model Formulations for Inhomogeneous Solids with Application to Non-isothermal Phase Field Modelling (United States)

    Gladkov, Svyatoslav; Kochmann, Julian; Reese, Stefanie; Hütter, Markus; Svendsen, Bob


    The purpose of the current work is the comparison of thermodynamic model formulations for chemically and structurally inhomogeneous solids at finite deformation based on "standard" non-equilibrium thermodynamics [SNET: e. g. S. de Groot and P. Mazur, Non-equilibrium Thermodynamics, North Holland, 1962] and the general equation for non-equilibrium reversible-irreversible coupling (GENERIC) [H. C. Öttinger, Beyond Equilibrium Thermodynamics, Wiley Interscience, 2005]. In the process, non-isothermal generalizations of standard isothermal conservative [e. g. J. W. Cahn and J. E. Hilliard, Free energy of a non-uniform system. I. Interfacial energy. J. Chem. Phys. 28 (1958), 258-267] and non-conservative [e. g. S. M. Allen and J. W. Cahn, A macroscopic theory for antiphase boundary motion and its application to antiphase domain coarsening. Acta Metall. 27 (1979), 1085-1095; A. G. Khachaturyan, Theory of Structural Transformations in Solids, Wiley, New York, 1983] diffuse interface or "phase-field" models [e. g. P. C. Hohenberg and B. I. Halperin, Theory of dynamic critical phenomena, Rev. Modern Phys. 49 (1977), 435-479; N. Provatas and K. Elder, Phase Field Methods in Material Science and Engineering, Wiley-VCH, 2010.] for solids are obtained. The current treatment is consistent with, and includes, previous works [e. g. O. Penrose and P. C. Fife, Thermodynamically consistent models of phase-field type for the kinetics of phase transitions, Phys. D 43 (1990), 44-62; O. Penrose and P. C. Fife, On the relation between the standard phase-field model and a "thermodynamically consistent" phase-field model. Phys. D 69 (1993), 107-113] on non-isothermal systems as a special case. In the context of no-flux boundary conditions, the SNET- and GENERIC-based approaches are shown to be completely consistent with each other and result in equivalent temperature evolution relations.
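    The GENERIC structure invoked above splits the evolution into reversible and irreversible parts with mutual degeneracy conditions; in its standard abstract form (as in Öttinger's book, not specialized to the phase-field models of this entry):

```latex
\frac{dx}{dt} \;=\; L(x)\,\frac{\delta E}{\delta x} \;+\; M(x)\,\frac{\delta S}{\delta x},
\qquad
L\,\frac{\delta S}{\delta x} = 0,
\qquad
M\,\frac{\delta E}{\delta x} = 0,
```

where E and S are the total energy and entropy functionals, L is antisymmetric (Poisson operator) and M is symmetric positive semi-definite (friction operator); the degeneracy conditions guarantee energy conservation and non-negative entropy production.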

  4. Stochastic Climate Theory and Modelling

    CERN Document Server

    Franzke, Christian L E; Berner, Judith; Williams, Paul D; Lucarini, Valerio


    Stochastic methods are a crucial area in contemporary climate research and are increasingly being used in comprehensive weather and climate prediction models as well as reduced order climate models. Stochastic methods are used as subgrid-scale parameterizations as well as for model error representation, uncertainty quantification, data assimilation and ensemble prediction. The need to use stochastic approaches in weather and climate models arises because we still cannot resolve all necessary processes and scales in comprehensive numerical weather and climate prediction models. In many practical applications one is mainly interested in the largest and potentially predictable scales and not necessarily in the small and fast scales. For instance, reduced order models can simulate and predict large scale modes. Statistical mechanics and dynamical systems theory suggest that in reduced order models the impact of unresolved degrees of freedom can be represented by suitable combinations of deterministic and stochast...

  5. Velocity potential formulations of highly accurate Boussinesq-type models

    DEFF Research Database (Denmark)

    Bingham, Harry B.; Madsen, Per A.; Fuhrman, David R.


    processes on the weather side of reflective structures. Coast. Eng. 53, 929-945). An exact infinite series solution for the potential is obtained via a Taylor expansion about an arbitrary vertical position z=(z) over cap. For practical implementation however, the solution is expanded based on a slow...... variation of (z) over cap and terms are retained to first-order. With shoaling enhancement, the new models obtain a comparable accuracy in linear shoaling to the original velocity formulation. General consistency relations are also derived which are convenient for verifying that the differential operators...

  6. Formulation of a Success Model in Pharmaceutical R&D

    Directory of Open Access Journals (Sweden)

    Hyunju Rachel Kim


    Recently, pharmaceutical R&D has been under pressure to increase productivity, in terms of both time efficiency and innovation. The industry faces discontinuous challenges such as globalized R&D competition, stricter regulation, the lengthy process of clinical trials, and so on. Considering the external changes, high competition, and discontinuities in the industry, it is a good time to redefine the concept of success in pharmaceutical R&D. Thus, this article attempts to formulate a new success model in pharmaceutical R&D by contextualizing the industry's success factors.

  7. A variational formulation of the polarizable continuum model. (United States)

    Lipparini, Filippo; Scalmani, Giovanni; Mennucci, Benedetta; Cancès, Eric; Caricato, Marco; Frisch, Michael J


    Continuum solvation models are widely used to accurately estimate solvent effects on energy, structural and spectroscopic properties of complex molecular systems. The polarizable continuum model (PCM) is one of the most versatile among the continuum models because of the variety of properties that can be computed and the diversity of methods that can be used to describe the solute, from molecular mechanics (MM) to sophisticated quantum mechanical (QM) post-self-consistent field methods or even hybrid QM/MM methods. In this contribution, we present a new formulation of PCM in terms of a free energy functional whose variational parameters include the continuum polarization (represented by the apparent surface charges), the solute's atomic coordinates and, possibly, its electronic density. The problem of finding the optimized geometry of the (polarized) solute, with the corresponding self-consistent reaction field, is recast as the minimization of this free energy functional, simultaneously with respect to all its variables. The numerous potential applications of this variational formulation of PCM are discussed, including simultaneous optimization of the solute's geometry and polarization charges and extended Lagrangian dynamics. In particular, we describe in detail the simultaneous optimization procedure and we include several numerical examples.
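    Schematically, minimizing a free-energy functional that is quadratic in the apparent surface charges, G(q) = ½ qᵀKq + qᵀv, amounts to solving the stationarity condition Kq = −v. A toy two-charge sketch in pure Python (K and v are invented for illustration; in a real PCM they come from the cavity discretization and the solute potential):

```python
def minimize_quadratic(K, v):
    """Minimize G(q) = 0.5 q^T K q + q^T v for a symmetric positive-definite
    2x2 K by solving the stationarity condition K q = -v (Cramer's rule)."""
    (a, b), (c, d) = K
    det = a * d - b * c
    q0 = (-v[0] * d + v[1] * b) / det
    q1 = (-a * v[1] + c * v[0]) / det
    return [q0, q1]

K = [[2.0, 0.5], [0.5, 1.0]]   # invented SPD "response" matrix
v = [1.0, -1.0]                # invented solute potential on the cavity surface
q = minimize_quadratic(K, v)

# At the minimum the gradient K q + v vanishes:
residual = [K[0][0] * q[0] + K[0][1] * q[1] + v[0],
            K[1][0] * q[0] + K[1][1] * q[1] + v[1]]
```

The variational point of the paper is that the same single functional can be minimized simultaneously over charges, geometry, and (possibly) density, rather than solving nested self-consistency loops.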

  9. Modeling and Formulation of a Novel Microoptoelectromechanical Gyroscope

    Directory of Open Access Journals (Sweden)

    Bohua Sun


    This paper proposes a novel design of a microgyroscope based on MEMS structures and an optical interferometric microdisplacement measurement technique. The gyroscope consists of a microvibrator and an interferometric readout. Using the Coriolis force, the vibrator converts the system rotation into a forced vibration; the induced vibration can be sensed by the interferometric microdisplacement measurement system. The optical measurement system has two mirrors which reflect two rays into a detector. Comprehensive studies on the formulation and analysis of the proposed gyroscope have been undertaken, and two key sensor equations have been derived for the first time: (1) the relation between rotation and the phase shift of light, Δφ = 4πl0/λ + (8π/λ)(xmax Qy/ωy) Ω(t) sin(ωd t); (2) the relation between rotation and the interferometric intensity of light, I(t) ≈ (8π/λ)(xmax Qy/ωy) Ω(t) sin(ωd t) sin(4πl0/λ). A comparison of the proposed gyroscope with the well-known Sagnac formulation has been carried out; it is shown that the proposed model is much better than the Sagnac one, as it finally eliminates the need for the very long fiber required in the Sagnac gyroscope. The innovative model offers a new hope of fabricating highly accurate and cheaper gyroscopes. To date, the proposed gyroscope is the most accurate gyroscope.
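    A minimal numerical sketch of the two sensor relations, Δφ(t) = 4πl0/λ + (8π/λ)(xmax Qy/ωy) Ω(t) sin(ωd t) and I(t) ≈ (8π/λ)(xmax Qy/ωy) Ω(t) sin(ωd t) sin(4πl0/λ) (the bracketing is my reading of the abstract's formulas, and every parameter value below is invented for illustration):

```python
import math

def phase_shift(t, Omega, lam, l0, x_max, Q_y, w_y, w_d):
    """Delta-phi(t) = 4*pi*l0/lam + (8*pi/lam)*(x_max*Q_y/w_y)*Omega(t)*sin(w_d*t)."""
    gain = (8 * math.pi / lam) * (x_max * Q_y / w_y)
    return 4 * math.pi * l0 / lam + gain * Omega(t) * math.sin(w_d * t)

def intensity(t, Omega, lam, l0, x_max, Q_y, w_y, w_d):
    """I(t): the rotation-dependent phase term modulated by sin(4*pi*l0/lam)."""
    gain = (8 * math.pi / lam) * (x_max * Q_y / w_y)
    return gain * Omega(t) * math.sin(w_d * t) * math.sin(4 * math.pi * l0 / lam)

# Invented parameters, for illustration only
lam, l0 = 633e-9, 1e-3                       # wavelength, static path offset [m]
x_max, Q_y = 1e-6, 100.0                     # vibration amplitude [m], quality factor
w_y, w_d = 2 * math.pi * 1e4, 2 * math.pi * 1e2  # sense / drive frequencies [rad/s]
Omega = lambda t: 0.01                       # constant rotation rate [rad/s]

dphi = phase_shift(2.5e-3, Omega, lam, l0, x_max, Q_y, w_y, w_d)
# With zero rotation the phase shift reduces to the static term 4*pi*l0/lam:
dphi0 = phase_shift(2.5e-3, lambda t: 0.0, lam, l0, x_max, Q_y, w_y, w_d)
```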

  10. Dynamics of HIV neutralization by a microbicide formulation layer: biophysical fundamentals and transport theory. (United States)

    Geonnotti, Anthony R; Katz, David F


    Topical microbicides are an emerging HIV/AIDS prevention modality. Microbicide biofunctionality requires creation of a chemical-physical barrier against HIV transmission. Barrier effectiveness derives from properties of the active compound and its delivery system, but little is known about how these properties translate into microbicide functionality. We developed a mathematical model simulating biologically relevant transport and HIV-neutralization processes occurring when semen-borne virus interacts with a microbicide delivery vehicle coating epithelium. The model enables analysis of how vehicle-related variables, and anti-HIV compound characteristics, affect microbicide performance. Results suggest HIV neutralization is achievable with postcoital coating thicknesses approximately 100 μm. Increased microbicide concentration and potency hasten viral neutralization and diminish penetration of infectious virus through the coating layer. Durable vehicle structures that restrict viral diffusion could provide significant protection. Our findings demonstrate the need to pair potent active ingredients with well-engineered formulation vehicles, and highlight the importance of the dosage form in microbicide effectiveness. Microbicide formulations can function not only as drug delivery vehicles, but also as physical barriers to viral penetration. Total viral neutralization with 100-μm-thin coating layers supports future microbicide use against HIV transmission. This model can be used as a tool to analyze diverse factors that govern microbicide functionality.
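    One standard reduced picture of such a coating layer is steady diffusion of virions (diffusivity D) through a layer of thickness L with first-order neutralization at rate k; the flux that gets through, relative to a purely diffusive layer, falls off with the Thiele modulus φ = L√(k/D) as φ/sinh(φ). This reduced model and all parameter values below are my illustration, not the paper's full transport model:

```python
import math

def transmission(k, D, L):
    """Steady-state flux through a first-order-reactive coating, relative to a
    purely diffusive layer of the same thickness: phi / sinh(phi),
    with Thiele modulus phi = L * sqrt(k / D)."""
    phi = L * math.sqrt(k / D)
    return phi / math.sinh(phi) if phi > 0 else 1.0

D = 1e-12   # virion diffusivity in the gel [m^2/s] -- invented value
k = 1e-3    # first-order neutralization rate [1/s] -- invented value
results = {L_um: transmission(k, D, L_um * 1e-6) for L_um in (10, 50, 100)}
# Thicker coatings transmit a smaller fraction of infectious virus,
# consistent with the thickness dependence described in the abstract.
```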

  11. Unitary group adapted state specific multireference perturbation theory: Formulation and pilot applications. (United States)

    Sen, Avijit; Sen, Sangita; Samanta, Pradipta Kumar; Mukherjee, Debashis


    We present here a comprehensive account of the formulation and pilot applications of the second-order perturbative analogue of the recently proposed unitary group adapted state-specific multireference coupled cluster theory (UGA-SSMRCC), which we call the UGA-SSMRPT2. We also discuss the essential similarities and differences between the UGA-SSMRPT2 and the allied SA-SSMRPT2. Our theory, like its parent UGA-SSMRCC formalism, is size-extensive. However, because of the noninvariance of the theory with respect to transformations among the active orbitals, it requires the use of localized orbitals to ensure size-consistency. We have demonstrated the performance of the formalism with a set of pilot applications, exploring (a) the accuracy of the potential energy surface (PES) of a set of small prototypical difficult molecules in their various low-lying states, using natural, pseudocanonical and localized orbitals, and comparing the respective nonparallelity errors (NPE) and mean average deviations (MAD) vis-a-vis the full CI results with the same basis; (b) the efficacy of localized active orbitals to ensure and demonstrate manifest size-consistency with respect to fragmentation. We found that natural orbitals lead to the best overall PES, as evidenced by the NPE and MAD values. The MRMP2 results for individual states and those of the MCQDPT2 for multiple states displaying avoided curve crossings are uniformly poorer as compared with the UGA-SSMRPT2 results. The striking aspect of the size-consistency check is the complete insensitivity of the sum of fragment energies with given fragment spin-multiplicities, which are obtained as the asymptotic limit of super-molecules with different coupled spins. © 2015 Wiley Periodicals, Inc.

  12. A weakly compressible formulation for modelling liquid-gas sloshing

    CSIR Research Space (South Africa)

    Heyns, Johan A


    The implementation of a weakly compressible formulation that accounts for variations in the gas density is presented. With the aim of ensuring a computationally efficient implementation of the proposed formulation, an implicit iterative GMRES solver with LU...

  13. Formulating state space models in R with focus on longitudinal regression models

    DEFF Research Database (Denmark)

    Dethlefsen, Claus; Lundbye-Christensen, Søren

    We provide a language for formulating a range of state space models. The described methodology is implemented in the R-package sspir available from . A state space model is specified similarly to a generalized linear model in R, by marking the time-varying terms in the formula. However, the model definition and the model fit are separated in different calls. The model definition creates an object with a number of associated functions. The model object may be edited to incorporate extra features before it is fitted to data. The formulation of models does not depend...

  14. Isogeometric shell formulation based on a classical shell model

    KAUST Repository

    Niemi, Antti


    This paper constitutes the first steps in our work concerning isogeometric shell analysis. An isogeometric shell model of the Reissner-Mindlin type is introduced and a study of its accuracy in the classical pinched cylinder benchmark problem presented. In contrast to earlier works [1,2,3,4], the formulation is based on a shell model where the displacement, strain and stress fields are defined in terms of a curvilinear coordinate system arising from the NURBS description of the shell middle surface. The isogeometric shell formulation is implemented using the PetIGA and igakit software packages developed by the authors. The igakit package is a Python package used to generate NURBS representations of geometries that can be utilised by the PetIGA finite element framework. The latter utilises data structures and routines of the portable, extensible toolkit for scientific computation (PETSc), [5,6]. The current shell implementation is valid for static, linear problems only, but the software package is well suited for future extensions to the geometrically and materially nonlinear regime as well as to dynamic problems. We assess the accuracy of the approach in the pinched cylinder benchmark problem and present comparisons against the h-version of the finite element method with bilinear elements. Quadratic, cubic and quartic NURBS discretizations are compared against the isoparametric bilinear discretization introduced in [7]. The results show that the quadratic and cubic NURBS approximations exhibit notably slower convergence under uniform mesh refinement as the thickness decreases, but the quartic approximation converges relatively quickly within the standard variational framework. The authors' future work is concerned with building an isogeometric finite element method for modelling the nonlinear structural response of thin-walled shells undergoing large rigid-body motions. The aim is to use the model in an aeroelastic framework for the simulation of flapping wings.

  15. Formulation and kinetic modeling of curcumin loaded intranasal mucoadhesive microemulsion

    Directory of Open Access Journals (Sweden)

    B Mikesh Patel


    It is a challenge to develop the optimum dosage form of poorly water-soluble drugs and to target them, owing to their limited bioavailability and intra- and inter-subject variability. In this investigation, a mucoadhesive microemulsion of curcumin was developed by the water titration method using biocompatible components for intranasal delivery, and was characterized. Nasal ciliotoxicity studies were carried out using excised sheep nasal mucosa. In vitro release studies of the formulations and PDS were performed. The Labrafil M 1944 CS-based microemulsion was transparent, stable and non-ciliotoxic to the nasal mucosa, with a particle size of 12.32 ± 0.81 nm (PdI = 0.223), and from kinetic modeling the release was found to follow Fickian diffusion for the mucoadhesive microemulsion.

  16. Communication: Towards first principles theory of relaxation in supercooled liquids formulated in terms of cooperative motion. (United States)

    Freed, Karl F


    A general theory of the long time, low temperature dynamics of glass-forming fluids remains elusive despite the almost 20 years since the famous pronouncement by the Nobel Laureate P. W. Anderson, "The deepest and most interesting unsolved problem in solid state theory is probably the theory of the nature of glass and the glass transition" [Science 267, 1615 (1995)]. While recent work indicates that Adam-Gibbs theory (AGT) provides a framework for computing the structural relaxation time of supercooled fluids and for analyzing the properties of the cooperatively rearranging dynamical strings observed in low temperature molecular dynamics simulations, the heuristic nature of AGT has impeded general acceptance due to the lack of a first principles derivation [G. Adam and J. H. Gibbs, J. Chem. Phys. 43, 139 (1965)]. This deficiency is rectified here by a statistical mechanical derivation of AGT that uses transition state theory and the assumption that the transition state is composed of elementary excitations of a string-like form. The strings are assumed to form in equilibrium with the mobile particles in the fluid. Hence, transition state theory requires the strings to be in mutual equilibrium and thus to have the size distribution of a self-assembling system, in accord with the simulations and analyses of Douglas and co-workers. The average relaxation rate is computed as a grand canonical ensemble average over all string sizes, and use of the previously determined relation between configurational entropy and the average cluster size in several model equilibrium self-associating systems produces the AGT expression in a manner enabling further extensions and more fundamental tests of the assumptions.
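
    The Adam-Gibbs relation that the paper places on a statistical-mechanical footing can be sketched numerically. A minimal illustration, assuming the textbook AGT form τ = τ₀ exp(A/(T·S_c)) with a hypothetical configurational entropy S_c(T) vanishing at a Kauzmann temperature T_K (all parameter values below are illustrative, not from the paper):

```python
from math import exp

def tau_adam_gibbs(T, tau0=1e-12, A=5000.0, S_inf=1.0, T_K=150.0):
    """Adam-Gibbs relaxation time tau = tau0 * exp(A / (T * S_c(T))).
    S_c is a hypothetical configurational entropy vanishing at T_K."""
    S_c = S_inf * (1.0 - T_K / T)
    return tau0 * exp(A / (T * S_c))

# Super-Arrhenius growth of the relaxation time on cooling
for T in (300.0, 250.0, 200.0):
    print(T, tau_adam_gibbs(T))
```

    The divergence of τ as T approaches T_K reproduces the qualitative super-Arrhenius behaviour that string-like cooperative rearrangement is invoked to explain.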

  17. Experimental Design of Formulations Utilizing High Dimensional Model Representation. (United States)

    Li, Genyuan; Bastian, Caleb; Welsh, William; Rabitz, Herschel


    Many applications involve formulations or mixtures where large numbers of components are possible to choose from, but a final composition with only a few components is sought. Finding suitable binary or ternary mixtures from all the permissible components often relies on simplex-lattice sampling in traditional design of experiments (DoE), which requires performing a large number of experiments even for just tens of permissible components. The required effort rises very rapidly with increasing numbers of components and can readily become impractical. This paper proposes constructing a single model for a mixture containing all permissible components from just a modest number of experiments. Yet the model is capable of satisfactorily predicting the performance of full as well as all possible binary and ternary component mixtures. To achieve this goal, we utilize biased random sampling combined with high dimensional model representation (HDMR) to replace DoE simplex-lattice design. Compared with DoE, the required number of experiments is significantly reduced, especially when the number of permissible components is large. This study is illustrated with a solubility model for solvent mixture screening.
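
    The growth in experiment count that motivates the HDMR alternative is easy to make concrete: a {q, m} simplex-lattice design contains C(q+m−1, m) mixture points. A quick sketch using standard DoE combinatorics (not code from the paper):

```python
from math import comb

def simplex_lattice_size(q: int, m: int) -> int:
    """Number of points in a {q, m} simplex-lattice design:
    all compositions with proportions drawn from {0, 1/m, ..., 1}."""
    return comb(q + m - 1, m)

# Experiments needed as the number of candidate components q grows (m = 3)
for q in (5, 10, 20, 40):
    print(q, simplex_lattice_size(q, 3))
```

    Already at 40 candidate components the lattice demands over ten thousand runs, which is the impracticality the biased-random-sampling approach is designed to avoid.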

  18. Spatial competition facility location models: definition, formulation and solution approach

    Energy Technology Data Exchange (ETDEWEB)

    Tobin, R.L.; Friesz, T.L.


    Models are presented for locating a firm's production facilities and determining production levels at these facilities so as to maximize the firm's profit. These models take into account the changes in prices at each of the spatially separated markets that would result from the increase in supply provided by the new facilities and also from the response of competing firms. Two different models of spatial competition are presented to represent the competitive market situation in which the firm's production facilities are being located. These models are formulated as variational inequalities; recent sensitivity analysis results for variational inequalities are used to develop derivatives of the prices at each of the spatially separated markets with respect to the production levels at each of the new facilities. These derivatives are used to develop a linear approximation of the implicit function relating prices to production levels. A heuristic solution procedure making use of this approximation is proposed.

  19. Modeling of active transmembrane transport in a mixture theory framework. (United States)

    Ateshian, Gerard A; Morrison, Barclay; Hung, Clark T


    This study formulates governing equations for active transport across semi-permeable membranes within the framework of the theory of mixtures. In mixture theory, which models the interactions of any number of fluid and solid constituents, a supply term appears in the conservation of linear momentum to describe momentum exchanges among the constituents. In past applications, this momentum supply was used to model frictional interactions only, thereby describing passive transport processes. In this study, it is shown that active transport processes, which impart momentum to solutes or solvent, may also be incorporated in this term. By projecting the equation of conservation of linear momentum along the normal to the membrane, a jump condition is formulated for the mechano-electrochemical potential of fluid constituents which is generally applicable to nonequilibrium processes involving active transport. The resulting relations are simple and easy to use, and address an important need in the membrane transport literature.

  20. The user-oriented evaluator's role in formulating a program theory: Using a theory-driven approach


    Christie, CA; Alkin, MC


    Program theory plays a prominent role in many evaluations, not only in theory-driven evaluations. This paper presents a case study of the process of developing and refining a program's theory within a user-oriented evaluation. In user-oriented (or utilization-focused) evaluations, primary users can play a role in defining their own program theory. This is different, however, from the typical process by which a program theory is developed when using a theory-driven evaluation framework. This cas...

  1. SEACAS Theory Manuals: Part 1. Problem Formulation in Nonlinear Solid Mechanics

    Energy Technology Data Exchange (ETDEWEB)

    Attaway, S.W.; Laursen, T.A.; Zadoks, R.I.


    This report gives an introduction to the basic concepts and principles involved in the formulation of nonlinear problems in solid mechanics. By way of motivation, the discussion begins with a survey of some of the important sources of nonlinearity in solid mechanics applications, using wherever possible simple one-dimensional idealizations to demonstrate the physical concepts. This discussion is then generalized by presenting generic statements of initial/boundary value problems in solid mechanics, using linear elasticity as a template and encompassing such ideas as strong and weak forms of boundary value problems, boundary and initial conditions, and dynamic and quasistatic idealizations. The notational framework used for the linearized problem is then extended to account for finite deformation of possibly inelastic solids, providing the context for the descriptions of nonlinear continuum mechanics, constitutive modeling, and finite element technology given in three companion reports.

  2. Models in cooperative game theory

    CERN Document Server

    Branzei, Rodica; Tijs, Stef


    This book investigates models in cooperative game theory in which the players have the possibility to cooperate partially. In a crisp game the agents are either fully involved or not involved at all in cooperation with some other agents, while in a fuzzy game players are allowed to cooperate with infinitely many different participation levels, varying from non-cooperation to full cooperation. A multi-choice game describes the intermediate case in which each player may have a fixed number of activity levels. Different set-valued and one-point solution concepts for these games are presented. The properties of these solution concepts and their interrelations on several classes of crisp, fuzzy, and multi-choice games are studied. Applications of the investigated models to many economic situations are indicated as well. The second edition is substantially enlarged and contains new results and additional sections in the different chapters, as well as one new chapter.
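
    For the crisp games described above, a standard one-point solution concept is the Shapley value, the average marginal contribution of a player over all orderings. A minimal sketch (the three-player "glove" game below is a hypothetical illustration, not an example taken from the book):

```python
from itertools import permutations

def shapley(players, v):
    """Shapley value: average marginal contribution over all player orderings.
    v maps a frozenset coalition to its worth (v(empty) must be 0)."""
    phi = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = frozenset()
        for p in order:
            phi[p] += v(coalition | {p}) - v(coalition)
            coalition = coalition | {p}
    return {p: phi[p] / len(orders) for p in phi}

# Hypothetical glove game: worth 1 if the coalition contains player 1
# together with at least one of players 2 and 3, otherwise 0.
v = lambda S: 1.0 if 1 in S and (2 in S or 3 in S) else 0.0
print(shapley([1, 2, 3], v))
```

    Player 1, holding the scarce "left glove", receives 2/3, while players 2 and 3 split the remainder; the values sum to the worth of the grand coalition.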

  3. Advances in cognitive theory and therapy: the generic cognitive model. (United States)

    Beck, Aaron T; Haigh, Emily A P


    For over 50 years, Beck's cognitive model has provided an evidence-based way to conceptualize and treat psychological disorders. The generic cognitive model represents a set of common principles that can be applied across the spectrum of psychological disorders. The updated theoretical model provides a framework for addressing significant questions regarding the phenomenology of disorders not explained in previous iterations of the original model. New additions to the theory include continuity of adaptive and maladaptive function, dual information processing, energizing of schemas, and attentional focus. The model includes a theory of modes, an organization of schemas relevant to expectancies, self-evaluations, rules, and memories. A description of the new theoretical model is followed by a presentation of the corresponding applied model, which provides a template for conceptualizing a specific disorder and formulating a case. The focus on beliefs differentiates disorders and provides a target for treatment. A variety of interventions are described.

  4. Halo modelling in chameleon theories

    Energy Technology Data Exchange (ETDEWEB)

    Lombriser, Lucas; Koyama, Kazuya [Institute of Cosmology and Gravitation, University of Portsmouth, Dennis Sciama Building, Burnaby Road, Portsmouth, PO1 3FX (United Kingdom); Li, Baojiu [Institute for Computational Cosmology, Ogden Centre for Fundamental Physics, Department of Physics, University of Durham, Science Laboratories, South Road, Durham, DH1 3LE (United Kingdom)


    We analyse modelling techniques for the large-scale structure formed in scalar-tensor theories of constant Brans-Dicke parameter which match the concordance model background expansion history and produce a chameleon suppression of the gravitational modification in high-density regions. Thereby, we use a mass and environment dependent chameleon spherical collapse model, the Sheth-Tormen halo mass function and linear halo bias, the Navarro-Frenk-White halo density profile, and the halo model. Furthermore, using the spherical collapse model, we extrapolate a chameleon mass-concentration scaling relation from a ΛCDM prescription calibrated to N-body simulations. We also provide constraints on the model parameters to ensure viability on local scales. We test our description of the halo mass function and nonlinear matter power spectrum against the respective observables extracted from large-volume and high-resolution N-body simulations in the limiting case of f(R) gravity, corresponding to a vanishing Brans-Dicke parameter. We find good agreement between the two; the halo model provides a good qualitative description of the shape of the relative enhancement of the f(R) matter power spectrum with respect to ΛCDM caused by the extra attractive gravitational force but fails to recover the correct amplitude. Introducing an effective linear power spectrum in the computation of the two-halo term to account for an underestimation of the chameleon suppression at intermediate scales in our approach, we accurately reproduce the measurements from the N-body simulations.

  5. A generalized formulation of the dynamic Smagorinsky model

    Directory of Open Access Journals (Sweden)

    Urs Schaefer-Rolffs


    A generalized formulation of the Dynamic Smagorinsky Model (DSM) is proposed as a versatile turbulent momentum diffusion scheme for Large-Eddy Simulations. The difference from previous versions of the DSM is a modified test filter range that can be chosen independently of the resolution scale, to separate the impact of the test filter on the simulated flow from the impact of the resolution. The generalized DSM (gDSM), in a two-dimensional version, is validated in a verification study as a horizontal momentum diffusion scheme with the Kühlungsborn Mechanistic General Circulation Model at high resolution (wavenumber 330) without hyperdiffusion. Three-day averaged results applying three different test filters in the macro-turbulent inertial range are presented and compared with analogous simulations where the standard DSM is used instead. The comparison of the different filters results in all cases in similar globally averaged Smagorinsky parameters c_S ≃ 0.35 and similar horizontal kinetic energy spectra. Hence, the basic assumption of scale invariance underlying the application of the gDSM to parameterize atmospheric turbulence is justified. In addition, the smallest resolved scales contain less energy when the gDSM is applied, thus increasing the stability of the simulation.
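
    The Smagorinsky closure underlying the scheme computes an eddy viscosity from the resolved strain rate. A minimal two-dimensional sketch using the globally averaged parameter c_S ≈ 0.35 quoted above (the strain values and unit grid spacing are hypothetical):

```python
from math import sqrt

def smagorinsky_viscosity(dudx, dudy, dvdx, dvdy, c_s=0.35, delta=1.0):
    """Smagorinsky eddy viscosity nu_t = (c_s * delta)**2 * |S|,
    with |S| = sqrt(2 * S_ij * S_ij) for the 2-D strain-rate tensor."""
    s11, s22 = dudx, dvdy
    s12 = 0.5 * (dudy + dvdx)
    s_mag = sqrt(2.0 * (s11**2 + s22**2 + 2.0 * s12**2))
    return (c_s * delta) ** 2 * s_mag

# Pure strain at unit rate: |S| = 2, so nu_t = 0.35**2 * 2 = 0.245
print(smagorinsky_viscosity(1.0, 0.0, 0.0, -1.0))
```

    The dynamic variants (DSM, gDSM) differ only in how c_s is determined at run time from a test filter rather than fixed a priori.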

  6. Stochastic models: theory and simulation.

    Energy Technology Data Exchange (ETDEWEB)

    Field, Richard V., Jr.


    Many problems in applied science and engineering involve physical phenomena that behave randomly in time and/or space. Examples are diverse and include turbulent flow over an aircraft wing, Earth climatology, material microstructure, and the financial markets. Mathematical models for these random phenomena are referred to as stochastic processes and/or random fields, and Monte Carlo simulation is the only general-purpose tool for solving problems of this type. The use of Monte Carlo simulation requires methods and algorithms to generate samples of the appropriate stochastic model; these samples then become inputs and/or boundary conditions to established deterministic simulation codes. While numerous algorithms and tools currently exist to generate samples of simple random variables and vectors, no cohesive simulation tool yet exists for generating samples of stochastic processes and/or random fields. There are two objectives of this report. First, we provide some theoretical background on stochastic processes and random fields that can be used to model phenomena that are random in space and/or time. Second, we provide simple algorithms that can be used to generate independent samples of general stochastic models. The theory and simulation of random variables and vectors is also reviewed for completeness.
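
    One widely used algorithm of the kind this report surveys is the spectral representation method for generating samples of a stationary Gaussian process. A minimal sketch with a hypothetical one-sided spectral density S(ω) = 1/(1+ω²):

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_process(n_samples, t, dw=0.1, nw=128):
    """Spectral representation: X(t) = sum_k sqrt(2*S(w_k)*dw) * cos(w_k*t + phi_k),
    with independent phases phi_k uniform on [0, 2*pi)."""
    w = dw * (np.arange(nw) + 0.5)          # midpoint frequency grid
    S = 1.0 / (1.0 + w**2)                  # hypothetical spectral density
    amp = np.sqrt(2.0 * S * dw)
    phi = rng.uniform(0.0, 2.0 * np.pi, size=(n_samples, nw))
    # (n_samples, len(t), nw) -> contract over frequencies
    return np.cos(np.outer(t, w)[None, :, :] + phi[:, None, :]) @ amp

t = np.linspace(0.0, 10.0, 100)
X = sample_process(500, t)
# Sample variance approximates the integral of S(w), about arctan(12.8) ~ 1.49
print(X.shape, X.var())
```

    Sample paths generated this way can then feed deterministic simulation codes as random inputs or boundary conditions, as the report describes.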

  7. Demonstration of Emulator-Based Bayesian Calibration of Safety Analysis Codes: Theory and Formulation

    Directory of Open Access Journals (Sweden)

    Joseph P. Yurko


    System codes for simulation of safety performance of nuclear plants may contain parameters whose values are not known very accurately. New information from tests or operating experience is incorporated into safety codes by a process known as calibration, which reduces uncertainty in the output of the code and thereby improves its support for decision-making. The work reported here implements several improvements on classic calibration techniques afforded by modern analysis techniques. The key innovation has come from the development of code surrogate model (or code emulator) construction and prediction algorithms. Use of a fast emulator makes the calibration process used here, based on Markov Chain Monte Carlo (MCMC) sampling, feasible. This work uses Gaussian Process (GP) based emulators, which have been used previously to emulate computer codes in the nuclear field. The present work describes the formulation of an emulator that incorporates GPs into a factor analysis-type or pattern recognition-type model. This "function factorization" Gaussian Process (FFGP) model allows overcoming limitations present in standard GP emulators, thereby improving both the accuracy and speed of the emulator-based calibration process. Calibration of a friction-factor example using the Method of Manufactured Solutions is performed to illustrate key properties of the FFGP-based process.
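
    The core idea of a code emulator can be shown in a few lines: a GP regressor trained on a handful of runs of an expensive code, here stood in for by sin(x). This is a generic GP-regression sketch, not the FFGP model itself:

```python
import numpy as np

def rbf(a, b, length=1.0, var=1.0):
    """Squared-exponential covariance kernel."""
    return var * np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length**2)

# A few "runs" of the expensive simulator (sin stands in for the code)
X_train = np.linspace(0.0, 2.0 * np.pi, 15)
y_train = np.sin(X_train)

K = rbf(X_train, X_train) + 1e-8 * np.eye(len(X_train))  # jitter for stability
alpha = np.linalg.solve(K, y_train)

def emulate(x):
    """GP posterior mean: a cheap surrogate usable inside an MCMC loop."""
    return rbf(np.atleast_1d(np.asarray(x, dtype=float)), X_train) @ alpha

print(float(emulate(1.0)))   # close to sin(1.0)
```

    In a calibration workflow, `emulate` replaces the full system code inside each MCMC iteration, which is what makes the Bayesian update computationally feasible.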

  8. Ising formulation of associative memory models and quantum annealing recall (United States)

    Santra, Siddhartha; Shehab, Omar; Balu, Radhakrishnan


    Associative memory models, in theoretical neuro- and computer sciences, can generally store at most a linear number of memories. Recalling memories in these models can be understood as retrieval of the energy-minimizing configuration of classical Ising spins, closest in Hamming distance to an imperfect input memory, where the energy landscape is determined by the set of stored memories. We present an Ising formulation for associative memory models and consider the problem of memory recall using quantum annealing. We show that allowing for input-dependent energy landscapes allows storage of up to an exponential number of memories (in terms of the number of neurons). Further, we show how quantum annealing may naturally be used for recall tasks in such input-dependent energy landscapes, although the recall time may increase with the number of stored memories. Theoretically, we obtain the radius of attractor basins R(N) and the capacity C(N) of such a scheme and their tradeoffs. Our calculations establish that for randomly chosen memories the capacity of our model using the Hebbian learning rule, as a function of problem size, can be expressed as C(N) = O(e^{C₁N}), C₁ ≥ 0, and that recall succeeds on randomly chosen memory sets with a probability of 1 − e^{−C₂N}, C₂ ≥ 0, with C₁ + C₂ = (0.5 − f)²/(1 − f), where f = R(N)/N, 0 ≤ f ≤ 0.5, is the radius of attraction in terms of the Hamming distance of an input probe from a stored memory as a fraction of the problem size. We demonstrate the application of this scheme on a programmable quantum annealing device, the D-Wave processor.
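
    The classical (input-independent) baseline of such models is the Hopfield network with the Hebbian learning rule, where recall descends an Ising energy landscape. A small sketch of that baseline (sizes and seed are arbitrary; this is not the quantum-annealing scheme itself):

```python
import numpy as np

rng = np.random.default_rng(0)
N, P = 64, 3                              # neurons, stored memories (P << N)
memories = rng.choice([-1, 1], size=(P, N))

# Hebbian learning rule: W_ij = (1/N) * sum_mu xi_i^mu * xi_j^mu, no self-coupling
W = (memories.T @ memories) / N
np.fill_diagonal(W, 0.0)

def recall(probe, sweeps=5):
    """Asynchronous spin updates; each accepted flip lowers the Ising energy."""
    s = probe.copy()
    for _ in range(sweeps):
        for i in rng.permutation(N):
            s[i] = 1 if W[i] @ s >= 0 else -1
    return s

# Corrupt 5 of 64 bits of the first memory, then recall it
probe = memories[0].copy()
probe[rng.choice(N, size=5, replace=False)] *= -1
print(np.array_equal(recall(probe), memories[0]))
```

    At this low load (P ≪ N) the corrupted probe sits well inside the attractor basin, so recall recovers the stored pattern; the paper's input-dependent landscapes push the capacity far beyond this linear regime.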

  9. Modified Spin-Wave Theory on Low-Dimensional Heisenberg Ferrimagnets: A New Robust Formulation (United States)

    Noriki, Yusaku; Yamamoto, Shoji


    We propose a new scheme for modifying conventional spin waves so as to precisely describe low-dimensional Heisenberg ferrimagnets at finite temperatures. What is called the modified spin-wave theory was initiated by Takahashi, who intended to calculate the low-temperature thermodynamics of low-dimensional Heisenberg ferromagnets, where Holstein-Primakoff bosons are constrained to keep the total uniform magnetization zero in a straightforward manner. If the concept of an ideal Bose gas with a fixed density is applied to antiferromagnets and ferrimagnets, the formulation is no longer trivial, having rich variety in the way how the conventional spin waves, especially those in ferrimagnets, are constrained and brought into interaction. Which magnetization should be kept zero, uniform, staggered, or both? One or more chemical potentials can be introduced so as to satisfy the relevant constraint condition either in diagonalizing the Hamiltonian or in minimizing the free energy, making the Bogoliubov transformation dependent on temperature or leaving it free from temperature dependence. We can bring the thus-modified spin waves into interaction on the basis of the Hartree-Fock approximation or through the use of Wick's theorem in an attempt to refine their descriptions. Comparing various modification schemes both numerically and analytically in one and two dimensions, we eventually find an excellent bosonic language capable of describing heterogeneous quantum magnets on a variety of lattices over the whole temperature range — Wick's-theorem-based interacting spin waves modified so as to keep every sublattice magnetization zero via the temperature-dependent Bogoliubov transformation.

  10. M-theory model-building and proton stability

    Energy Technology Data Exchange (ETDEWEB)

    Ellis, J. [CERN, Geneva (Switzerland). Theory Div.; Faraggi, A.E. [Florida Univ., Gainesville, FL (United States). Inst. for Fundamental Theory; Nanopoulos, D.V. [Texas A and M Univ., College Station, TX (United States)]|[Houston Advanced Research Center, The Woodlands, TX (United States). Astroparticle Physics Group]|[Academy of Athens (Greece). Div. of Natural Sciences


    The authors study the problem of baryon stability in M theory, starting from realistic four-dimensional string models constructed using the free-fermion formulation of the weakly-coupled heterotic string. Suitable variants of these models manifest an enhanced custodial gauge symmetry that forbids to all orders the appearance of dangerous dimension-five baryon-decay operators. The authors exhibit the underlying geometric (bosonic) interpretation of these models, which have a Z{sub 2} x Z{sub 2} orbifold structure similar, but not identical, to the class of Calabi-Yau threefold compactifications of M and F theory investigated by Voisin and Borcea. A related generalization of their work may provide a solution to the problem of proton stability in M theory.

  11. M-Theory Model-Building and Proton Stability

    CERN Document Server

    Ellis, Jonathan Richard; Nanopoulos, Dimitri V; Ellis, John; Faraggi, Alon E.


    We study the problem of baryon stability in M theory, starting from realistic four-dimensional string models constructed using the free-fermion formulation of the weakly-coupled heterotic string. Suitable variants of these models manifest an enhanced custodial gauge symmetry that forbids to all orders the appearance of dangerous dimension-five baryon-decay operators. We exhibit the underlying geometric (bosonic) interpretation of these models, which have a $Z_2 \\times Z_2$ orbifold structure similar, but not identical, to the class of Calabi-Yau threefold compactifications of M and F theory investigated by Voisin and Borcea. A related generalization of their work may provide a solution to the problem of proton stability in M theory.

  12. Economic Modelling in Institutional Economic Theory

    National Research Council Canada - National Science Library

    Wadim Strielkowski; Evgeny Popov


    Our paper is centered around the formation of theory of institutional modelling that includes principles and ideas reflecting the laws of societal development within the framework of institutional economic theory...

  13. On the approximations in formulation of the Vening Meinesz-Moritz theory of isostasy (United States)

    Eshagh, Mehdi


    Different approximations are used in Moho modelling based on isostatic theories. The best-known approximation considers a plate shell model for isostatic equilibrium, which is an oversimplified assumption for the Earth's crust. Considering a spherical shell model, as used in the Vening Meinesz-Moritz (VMM) theory, is a more realistic assumption, but it suffers from different types of mathematical approximations. In this paper, the idea is to investigate such approximations and present their magnitudes and locations all over the globe. Furthermore, we show that the mathematical model of Moho depth according to the VMM principle can be simplified to that of the plate shell model after four approximations. Linearisation of the binomial term involving the topographic/bathymetric heights is sufficient as long as their spherical harmonic expansion is limited to degree and order 180. The impact of the higher-order terms is less than 2 km. The Taylor expansion of the binomial term involving the Moho depth (T) up to second order, with the assumption T² ≈ T·T₀, where T₀ is the mean compensation depth, improves this approximation further by up to 4 km over continents. This approximation has a significant role in Moho modelling over continents; otherwise, loss of frequency occurs in the Moho solution. On the other hand, the linear approximation performs better over oceans, and considering higher-order terms creates unrealistic frequencies reaching a magnitude of 5 km in the Moho solution. Involving gravity data according to the VMM principle influences the Moho depth significantly, by up to 15 km in some areas.
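
    The size of binomial-term truncation errors of the kind discussed here can be illustrated numerically. A schematic check of how the linear and second-order truncations of a term like (1 − T/R)³ behave (R and the Moho depths below are representative values, not the paper's computations):

```python
R = 6371.0                        # mean Earth radius in km (representative)
for T in (10.0, 30.0, 60.0):      # representative Moho depths in km
    x = T / R
    exact = (1.0 - x) ** 3        # (1 - x)^3 = 1 - 3x + 3x^2 - x^3
    linear = 1.0 - 3.0 * x                    # first-order truncation
    second = 1.0 - 3.0 * x + 3.0 * x**2       # retain the second-order term
    print(T, exact - linear, exact - second)
```

    Because T/R is of order 10⁻³ to 10⁻², the first-order error scales as 3(T/R)² and the second-order error as (T/R)³, which is why truncation choices matter most where the Moho is deep, i.e. over continents.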

  14. Dissolution Model Development: Formulation Effects and Filter Complications

    DEFF Research Database (Denmark)

    Berthelsen, Ragna; Holm, Rene; Jacobsen, Jette


    This study describes various complications related to sample preparation (filtration) during development of a dissolution method intended to discriminate among different fenofibrate immediate-release formulations. Several dissolution apparatus and sample preparation techniques were tested. The flow … the mini paddle dissolution method demonstrates that sample preparation influenced the results. The investigations show that excipients from the formulations directly affected the drug–filter interaction, thereby affecting the dissolution profiles and the ability to predict the in vivo data. … With the tested drug–formulation combination, the best in vivo–in vitro correlation was found after filtration of the dissolution samples through 0.45-μm hydrophobic PTFE membrane filters.

  15. A simple exposure-time theory for all time-nonlocal transport formulations and beyond. (United States)

    Ginn, T. R.; Schreyer, L. G.


    Anomalous transport, or better put, anomalous non-transport, of solutes, flowing water, suspended colloids, bacteria, etc., has been the subject of intense analyses, with multiple formulations appearing in scientific literature from hydrology to geomorphology to chemical engineering, environmental microbiology and mathematical physics. Primary focus has recently been on time-nonlocal mass conservation formulations such as multirate mass transfer, fractional-time advection-dispersion, continuous-time random walks, and dual-porosity modeling approaches, which employ a convolution with a memory function to reflect respective conceptual models of delays in transport. These approaches are effective or "proxy" ones that do not always distinguish transport delays from immobilization delays, are generally without connection to measurable physicochemical properties, and variously involve fractional calculus, inverse Laplace or Fourier transformations, and/or complex stochastic notions, including assumptions of stationarity or ergodicity at the observation scale. Here we show a much simpler approach to time-nonlocal (non-)transport that is free of all these things and is based on expressing the memory function in terms of a rate of mobilization of immobilized mass that is a function of the contiguous time immobilized. Our approach treats mass transfer completely independently from the transport process, and it allows specification of actual immobilization mechanisms or delays. To our surprise, we found that for all practical purposes any memory function can be expressed this way, including all of those associated with the multirate mass transfer approaches: the original power law, different truncated power laws, the fractional derivative, etc. More intriguing is the fact that the exposure-time approach can be used to construct heretofore unseen memory functions, e.g., forms that generate oscillating tails of breakthrough curves such as may occur in sediment transport, forms for delay
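
    The time-nonlocal formulations being unified here can all be written as a convolution of the mobile-concentration history with a memory function g(t). A discretized sketch with a hypothetical truncated power-law memory (the exposure-time construction itself is not reproduced; all parameter values are illustrative):

```python
import numpy as np

dt, nt = 0.01, 1000
t = dt * np.arange(1, nt + 1)

# Hypothetical truncated power-law memory: g(t) ~ t^(-alpha) * exp(-t / t_cut)
alpha, t_cut = 0.5, 5.0
g = t ** (-alpha) * np.exp(-t / t_cut)
g /= g.sum() * dt                      # normalize so the integral of g is 1

def nonlocal_exchange(c_mobile):
    """Time-nonlocal term: convolution of the concentration history with g."""
    return dt * np.convolve(c_mobile, g)[: len(c_mobile)]

c = np.exp(-t)                         # some mobile-concentration history
out = nonlocal_exchange(c)
print(out.shape, float(out[-1]))
```

    Swapping in a different g(t), e.g. a sum of exponentials for multirate mass transfer, changes only the memory array, which is the flexibility the exposure-time formulation exploits.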

  16. Linking Simple Economic Theory Models and the Cointegrated Vector AutoRegressive Model

    DEFF Research Database (Denmark)

    Møller, Niels Framroze

    This paper attempts to clarify the connection between simple economic theory models and the approach of the Cointegrated Vector-Auto-Regressive model (CVAR). By considering (stylized) examples of simple static equilibrium models, it is illustrated in detail how the theoretical model and its … it is demonstrated how other controversial hypotheses such as Rational Expectations can be formulated directly as restrictions on the CVAR parameters. A simple example of a "Neoclassical synthetic" AS-AD model is also formulated. Finally, the partial/general equilibrium distinction is related to the CVAR as well. … Further fundamental extensions and advances to more sophisticated theory models, such as those related to dynamics and expectations (in the structural relations), are left for future papers.

  17. Formulating state space models in R with focus on longitudinal regression models

    DEFF Research Database (Denmark)

    Dethlefsen, Claus; Lundbye-Christensen, Søren


    … are marked in the formula. Special functions for specifying polynomial time trends, harmonic seasonal patterns, unstructured seasonal patterns and time-varying covariates can be used in the formula. The model is fitted to data using iterated extended Kalman filtering, but the formulation of models does …

  18. New Pathways between Group Theory and Model Theory

    CERN Document Server

    Fuchs, László; Goldsmith, Brendan; Strüngmann, Lutz


    This volume focuses on group theory and model theory with a particular emphasis on the interplay of the two areas. The survey papers provide an overview of the developments across group, module, and model theory while the research papers present the most recent study in those same areas. With introductory sections that make the topics easily accessible to students, the papers in this volume will appeal to beginning graduate students and experienced researchers alike. As a whole, this book offers a cross-section view of the areas in group, module, and model theory, covering topics such as DP-minimal groups, Abelian groups, countable 1-transitive trees, and module approximations. The papers in this book are the proceedings of the conference “New Pathways between Group Theory and Model Theory,” which took place February 1-4, 2016, in Mülheim an der Ruhr, Germany, in honor of the editors’ colleague Rüdiger Göbel. This publication is dedicated to Professor Göbel, who passed away in 2014. He was one of th...

  19. The cultural formulation: A model to combine nosology and patients' life context in psychiatric diagnostic practice. (United States)

    Bäärnhielm, Sofie; Scarpinati Rosso, Marco


    This article discusses the experience of adapting and applying the Outline for a Cultural Formulation in DSM-IV to the Swedish context. Findings from a research project on the Cultural Formulation highlight the value of combining psychiatric nosological categorization with an understanding of patients' cultural life context in order to increase the validity of categorization and to formulate individualized treatment plans. In clinical care practitioners need models and tools that help them take into account patients' cultural backgrounds, needs, and resources in psychiatric diagnostic practice. We present a summary of a Swedish manual for conducting a Cultural Formulation interview. The need for further development of the Cultural Formulation is also discussed.

  20. Comparison of numerical formulations for the modeling of tensile loaded suction buckets

    DEFF Research Database (Denmark)

    Sørensen, Emil Smed; Clausen, Johan Christian; Damkilde, Lars


    The tensile resistance of a suction bucket is investigated using three different numerical formulations. The first formulation utilizes the three-field u-p-U formulation, accounting for solid and fluid displacements, u and U, as well as the pore-fluid pressure, p. The two other formulations comprise...... the simpler u-p formulation in its dynamic and quasi-static form, accounting only for solid displacement and pore-fluid pressure. As a basis for comparison, the tensile resistance of a single suction bucket is investigated using a velocity-driven model for a wide range of velocities. It is found that the quasi...

  1. Evaluating model assumptions in item response theory

    NARCIS (Netherlands)

    Tijmstra, J.


    This dissertation deals with the evaluation of model assumptions in the context of item response theory. Item response theory, also known as modern test theory, provides a statistical framework for the measurement of psychological constructs that cannot be observed directly, such as intelligence or

  2. Formulation of a mathematical model to predict solar water disinfection. (United States)

    Salih, Fadhil M


    A mathematical model was formulated to facilitate the prediction of solar disinfection by analyzing the effect of sunlight exposure (x1) and the load of bacterial contamination (x2), as predictor variables, on the efficiency of solar disinfection (y). Aliquots of 0.1 ml containing average numbers of E. coli ranging between 1 and 5 x 10^3 cells/ml raw water were introduced into each of the 96 wells of polystyrene microtitre plates. Plates, with the lid on, were exposed to sunlight for varying exposures ranging between 1.04 x 10^3 and 8.40 x 10^3 kJ m^-2. Double-strength nutrient broth was then added. After 48 h of incubation, wells containing visible contamination were considered as containing one or more cells that survived the exposure. Data showed that disinfection depends both on the load of bacterial contamination and on sunlight exposure. This relationship is characterized by curves having shoulders followed by a steep decline and then tailing off in an asymptotic fashion. The shoulder size increased with the contamination load; the slope, however, remained the same. Statistical analysis indicates a positive correlation among the variables (R^2 = 0.893); the mathematical model y = 1 - (1 - e^(-k*x1))^x2 represents the relationship, with k being the solar inactivation constant. The exposure required to produce a given decontamination level can be predicted using the equation x1 = -(1/k) ln[1 - (1 - y)^(1/x2)] e^(-(mu/rho)(m/A)), where mu is the linear attenuation coefficient (m^-1), rho is the density, m is the mass and A is the area of the exposed part of the sample. The predictor variables (x1, x2) strongly influence the efficiency of solar disinfection, which can be predicted using the suggested mathematical model. The present data provide a means to predict the efficiency of solar disinfection as an approach to improving the quality of drinking water, mainly in developing countries with adequate sunshine all year round.
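
    The fitted model is simple enough to sketch numerically. The following is a minimal illustration (not the authors' implementation) of the forward model y = 1 - (1 - e^(-k*x1))^x2 and its inversion for the required exposure; the inactivation constant k used below is purely hypothetical, and the attenuation correction factor is omitted:

```python
import math

def contamination_probability(exposure, load, k):
    """Model y = 1 - (1 - e^(-k*x1))^x2: probability that at least one cell survives."""
    survive_one = math.exp(-k * exposure)       # chance a single cell survives the exposure
    all_killed = (1.0 - survive_one) ** load    # chance every one of the x2 cells is killed
    return 1.0 - all_killed

def required_exposure(y, load, k):
    """Invert the model for the exposure giving contamination probability y (attenuation ignored)."""
    return -math.log(1.0 - (1.0 - y) ** (1.0 / load)) / k
```

    Note that the required exposure scales exactly as 1/k: halving the inactivation constant doubles the exposure needed for the same decontamination level, and increasing the load x2 widens the shoulder of the survival curve.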

  3. Applications of model theory to functional analysis

    CERN Document Server

    Iovino, Jose


    During the last two decades, methods that originated within mathematical logic, particularly set theory and model theory, have exhibited powerful applications to Banach space theory. This volume constitutes the first self-contained introduction to techniques of model theory in Banach space theory. The area of research has grown rapidly since this monograph's first appearance, but much of this material is still not readily available elsewhere. For instance, this volume offers a unified presentation of Krivine's theorem and the Krivine-Maurey theorem on stable Banach spaces, with emphasis on the

  4. Quantum field theory competitive models

    CERN Document Server

    Tolksdorf, Jürgen; Zeidler, Eberhard


    For more than 70 years, quantum field theory (QFT) has been a driving force in the development of theoretical physics. Equally fascinating is the fruitful impact which QFT has had in rather remote areas of mathematics. The present book features some of the different approaches, different physical viewpoints and techniques used to make the notion of quantum field theory more precise. For example, the present book contains a discussion including general considerations, stochastic methods, deformation theory and the holographic AdS/CFT correspondence. It also contains a discussion of more recent developments like the use of category theory and topos-theoretic methods to describe QFT. The present volume emerged from the 3rd 'Blaubeuren Workshop: Recent Developments in Quantum Field Theory', held in July 2007 at the Max Planck Institute of Mathematics in the Sciences in Leipzig/Germany. All of the contributions are committed to the idea of this workshop series: 'To bring together outstanding experts working in...


    Directory of Open Access Journals (Sweden)

    I. V. Florinsky


    Geomorphometric modeling is widely used to study multiscale problems of the Earth and planetary sciences. Existing algorithms of geomorphometry can be applied to terrain models given by plane square grids or by spheroidal equal-angular grids on the surface of an ellipsoid of revolution or a sphere. Computations on spheroidal equal-angular grids are trivial for modeling the Earth, Mars, the Moon, Venus, and Mercury. This is because (a) the forms of the abovementioned celestial bodies can be described by an ellipsoid of revolution or a sphere; and (b) for these surfaces there is a well-developed theory, and there are computational algorithms, for solving direct and inverse geodetic problems, as well as for determining the areas of spheroidal trapezia. It is advisable to apply a triaxial ellipsoid to describe the forms of small moons and asteroids. However, there are no geomorphometric algorithms intended for such a surface. In this paper, we formulate the problem of geomorphometric modeling on the surface of a triaxial ellipsoid. Let a digital elevation model of a celestial body or its portion be given by a spheroidal equal-angular grid using geodetic or planetocentric coordinate systems of a triaxial ellipsoid. To derive models of local morphometric variables, one should (1) turn to the elliptical coordinate system, and (2) determine the linear sizes of spheroidal trapezoidal moving-window elements by the Jacobi solution. To derive models of nonlocal morphometric variables, one may determine the areas of spheroidal trapezoidal cells in a similar way. Related GIS software should be developed.

  6. Adjoint-consistent formulations of slip models for coupled electroosmotic flow systems

    KAUST Repository

    Garg, Vikram V


    Background: Models based on the Helmholtz 'slip' approximation are often used for the simulation of electroosmotic flows. The objectives of this paper are to construct adjoint-consistent formulations of such models, and to develop adjoint-based numerical tools for adaptive mesh refinement and parameter sensitivity analysis. Methods: We show that the direct formulation of the 'slip' model is adjoint inconsistent, and leads to an ill-posed adjoint problem. We propose a modified formulation of the coupled 'slip' model, which is shown to be well-posed, and therefore automatically adjoint-consistent. Results: Numerical examples are presented to illustrate the computation and use of the adjoint solution in two-dimensional microfluidics problems. Conclusions: An adjoint-consistent formulation for Helmholtz 'slip' models of electroosmotic flows has been proposed. This formulation provides adjoint solutions that can be reliably used for mesh refinement and sensitivity analysis.

  7. Theories, Models and Methodology in Writing Research

    NARCIS (Netherlands)

    Rijlaarsdam, Gert; Bergh, van den Huub; Couzijn, Michel


    Theories, Models and Methodology in Writing Research describes the current state of the art in research on written text production. The chapters in the first part offer contributions to the creation of new theories and models for writing processes. The second part examines specific elements of the

  8. The Friction Theory for Viscosity Modeling

    DEFF Research Database (Denmark)

    Cisneros, Sergio; Zeberg-Mikkelsen, Claus Kjær; Stenby, Erling Halfdan


    In this work the one-parameter friction theory (f-theory) general models have been extended to the viscosity prediction and modeling of characterized oils. It is demonstrated that these simple models, which take advantage of the repulsive and attractive pressure terms of cubic equations of state...... such as the SRK, PR and PRSV, can provide accurate viscosity prediction and modeling of characterized oils. In the case of light reservoir oils, whose properties are close to those of normal alkanes, the one-parameter f-theory general models can predict the viscosity of these fluids with good accuracy. Yet......, in the case when experimental information is available, a more accurate modeling can be obtained by means of a simple tuning procedure. A tuned f-theory general model can deliver highly accurate viscosity modeling above the saturation pressure and good prediction of the liquid-phase viscosity at pressures...
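
    The structure of the f-theory can be sketched as follows: the viscosity is a dilute-gas term plus friction contributions linear in the repulsive and attractive pressure terms of a cubic EOS. This is a minimal illustration under simplifying assumptions (a temperature-independent SRK attractive parameter, and hypothetical coefficients eta0, kappa_r, kappa_a; not the authors' calibrated models):

```python
from dataclasses import dataclass

R = 8.314462618  # gas constant, J/(mol K)

@dataclass
class SRKFluid:
    a: float  # attractive parameter (Pa m^6/mol^2), treated here as constant
    b: float  # covolume (m^3/mol)

def pressure_terms(fluid, T, v):
    """Split the SRK pressure p = RT/(v-b) - a/(v(v+b)) into repulsive and attractive parts."""
    p_rep = R * T / (v - fluid.b)
    p_att = -fluid.a / (v * (v + fluid.b))
    return p_rep, p_att

def f_theory_viscosity(eta0, kappa_r, kappa_a, p_rep, p_att):
    """f-theory form: dilute-gas viscosity plus friction terms linear in the pressure parts."""
    return eta0 + kappa_r * p_rep + kappa_a * p_att
```

    In the one-parameter variant, tuning against measured data reduces to adjusting a single fluid-specific parameter scaling the friction coefficients.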

  9. Domain Theory, Its Models and Concepts

    DEFF Research Database (Denmark)

    Andreasen, Mogens Myrup; Howard, Thomas J.; Bruun, Hans Peter Lomholt


    The Domain Theory can support design work and form elements of designers' mindsets and thereby their practice. The theory is a model-based theory, which means it is composed of concepts and models explaining certain design phenomena. Many similar theories are described in the literature with differences...... and industrial applications especially for the DFX areas (not reported here) and for product modelling. The theory therefore contains a rich ontology of interrelated concepts. The Domain Theory aims not at creating normative methods but at a collection of concepts related to design phenomena...... in the set of concepts but assumingly all valid. The Domain Theory cannot be falsified or proven; but its value may be seen spanning from its range and productivity as described in the article....

  10. Formulate, Formalize and Run! How Narrative Theories shape and are shaped by Interactive Digital Narrative


    Szilas, Nicolas


    What are the links between narrative theories and computing? Narrative works are countless in the digital world: narrative hypertext and hypermedia, interactive fiction, video games, blogs, location-based narrative, etc. They not only form new analytical objects for narrative theories, but also may extend existing narrative theories. One specific type of digital narratives, AI-based Interactive Digital Narrative (IDN), plays a special role in this landscape because it makes use of narrative t...

  11. Field-Strength Formulation and Duality Transformation of Non-Abelian Gauge Theory


    Kiyomi, ITABASHI; Department of Physics, Tohoku University


    Adopting the gauge A_μ^a x_μ = 0, the field-strength representation of the generating functional Z for non-Abelian gauge theory is constructed and the duality transformation of Z is performed. In the resulting dual theory, the "dual potential" B_μ^a again obeys the condition B_μ^a x_μ = 0. It is found that, in the weak-coupling (small-g) limit, the theory is approximately self-dual, whereas for the large-g region the resultant form of the dual theory is such that it makes a kind of strong-coupling ...

  12. Formulation and computational aspects of plasticity and damage models with application to quasi-brittle materials

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Z.; Schreyer, H.L. [New Mexico Engineering Research Institute, Albuquerque, NM (United States)


    The response of underground structures and transportation facilities under various external loadings and environments is critical for human safety as well as environmental protection. Since quasi-brittle materials such as concrete and rock are commonly used for underground construction, the constitutive modeling of these engineering materials, including post-limit behaviors, is one of the most important aspects in safety assessment. From experimental, theoretical, and computational points of view, this report considers the constitutive modeling of quasi-brittle materials in general and concentrates on concrete in particular. Based on the internal variable theory of thermodynamics, the general formulations of plasticity and damage models are given to simulate two distinct modes of microstructural changes, inelastic flow and degradation of material strength and stiffness, that identify the phenomenological nonlinear behaviors of quasi-brittle materials. The computational aspects of plasticity and damage models are explored with respect to their effects on structural analyses. Specific constitutive models are then developed in a systematic manner according to the degree of completeness. A comprehensive literature survey is made to provide the up-to-date information on prediction of structural failures, which can serve as a reference for future research.

  13. An Entropy-Assisted Shielding Function in DDES Formulation for the SST Turbulence Model

    Directory of Open Access Journals (Sweden)

    Ling Zhou


    The intent of shielding functions in delayed detached-eddy simulation (DDES) methods is to preserve the wall boundary layers in Reynolds-averaged Navier-Stokes (RANS) mode, avoiding possible modeled-stress depletion (MSD) or even unphysical separation due to grid refinement. An entropy function fs is introduced to construct a DDES formulation for the k-ω shear stress transport (SST) model, whose performance is extensively examined on a range of attached and separated flows (flat-plate flow, circular-cylinder flow, and supersonic cavity-ramp flow). Two more forms of shielding functions are included for comparison: one that uses the blending function F2 of SST, the other adopting the recalibrated shielding function fd_cor of the DDES version based on the Spalart-Allmaras (SA) model. In general, none of the shielding functions impairs the vortex in fully separated flows. However, for flows including an attached boundary layer, both F2 and the recalibrated fd_cor are found to be too conservative to resolve the unsteady flow content. In contrast, fs is based on the theory of energy dissipation and is independent of any particular turbulence model, properly balancing the need to preserve the RANS-modeled regions for wall boundary layers against generating unsteady turbulent structures in detached areas.

  14. The 'cumulative' formulation of (physiologically) structured population models

    NARCIS (Netherlands)

    O. Diekmann (Odo); M. Gyllenberg; J.A.J. Metz; H.R. Thieme; P.P. Clément (Philippe); G. Lumer (Günter)


    Bibliographical data to be processed. In: Evolution equations, control theory, and biomathematics (Han sur Lesse, 1991), pp. 145-154. Lecture Notes in Pure and Applied Mathematics, Vol. 155. Dekker, New York.

  15. A practitioner's guide to persuasion: an overview of 15 selected persuasion theories, models and frameworks. (United States)

    Cameron, Kenzie A


    To provide a brief overview of 15 selected persuasion theories and models, and to present examples of their use in health communication research. The theories are categorized as message effects models, attitude-behavior approaches, cognitive processing theories and models, consistency theories, inoculation theory, and functional approaches. As it is often the intent of a practitioner to shape, reinforce, or change a patient's behavior, familiarity with theories of persuasion may lead to the development of novel communication approaches with existing patients. This article serves as an introductory primer to theories of persuasion with applications to health communication research. Understanding key constructs and general formulations of persuasive theories may allow practitioners to employ useful theoretical frameworks when interacting with patients.

  16. Estimation of a four-parameter item response theory model. (United States)

    Loken, Eric; Rulison, Kelly L


    We explore the justification and formulation of a four-parameter item response theory model (4PM) and employ a Bayesian approach to recover successfully parameter estimates for items and respondents. For data generated using a 4PM item response model, overall fit is improved when using the 4PM rather than the 3PM or the 2PM. Furthermore, although estimated trait scores under the various models correlate almost perfectly, inferences at the high and low ends of the trait continuum are compromised, with poorer coverage of the confidence intervals when the wrong model is used. We also show in an empirical example that the 4PM can yield new insights into the properties of a widely used delinquency scale. We discuss the implications for building appropriate measurement models in education and psychology to model more accurately the underlying response process.
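
    The 4PM referenced above extends the three-parameter logistic with an upper asymptote d < 1, allowing even high-ability respondents to "slip" on an item. This is a minimal sketch of the item response function with illustrative, hypothetical parameter values:

```python
import math

def irt_4pm(theta, a, b, c, d):
    """Four-parameter logistic item response function.

    a: discrimination, b: difficulty, c: lower asymptote (guessing),
    d: upper asymptote (slipping). Returns P(correct | theta).
    """
    return c + (d - c) / (1.0 + math.exp(-a * (theta - b)))

# Illustrative item: moderate discrimination, some guessing, some slipping
p_mid = irt_4pm(0.0, a=1.0, b=0.0, c=0.2, d=0.9)   # at theta = b, halfway between c and d
```

    Setting d = 1 recovers the 3PM, and c = 0, d = 1 the 2PM, which is why fit comparisons among these nested models are natural.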

  17. Nonlinear structural mechanics theory, dynamical phenomena and modeling

    CERN Document Server

    Lacarbonara, Walter


    Nonlinear Structural Mechanics: Theory, Dynamical Phenomena and Modeling offers a concise, coherent presentation of the theoretical framework of nonlinear structural mechanics, computational methods, applications, parametric investigations of nonlinear phenomena and their mechanical interpretation towards design. The theoretical and computational tools that enable the formulation, solution, and interpretation of nonlinear structures are presented in a systematic fashion so as to gradually attain an increasing level of complexity of structural behaviors, under the prevailing assumptions on the geometry of deformation, the constitutive aspects and the loading scenarios. Readers will find a treatment of the foundations of nonlinear structural mechanics towards advanced reduced models, unified with modern computational tools in the framework of the prominent nonlinear structural dynamic phenomena while tackling both the mathematical and applied sciences. Nonlinear Structural Mechanics: Theory, Dynamical Phenomena...

  18. Mean-field theory and self-consistent dynamo modeling

    Energy Technology Data Exchange (ETDEWEB)

    Yoshizawa, Akira; Yokoi, Nobumitsu [Tokyo Univ. (Japan). Inst. of Industrial Science; Itoh, Sanae-I [Kyushu Univ., Fukuoka (Japan). Research Inst. for Applied Mechanics; Itoh, Kimitaka [National Inst. for Fusion Science, Toki, Gifu (Japan)


    Mean-field theory of dynamo is discussed with emphasis on the statistical formulation of turbulence effects on the magnetohydrodynamic equations and the construction of a self-consistent dynamo model. The dynamo mechanism is sought in the combination of the turbulent residual-helicity and cross-helicity effects. On the basis of this mechanism, discussions are made on the generation of planetary magnetic fields such as geomagnetic field and sunspots and on the occurrence of flow by magnetic fields in planetary and fusion phenomena. (author)

  19. Formulation of an elastodynamic theory of laminated shear-deformable flat panels (United States)

    Librescu, L.


    A higher order shear deformation theory (HSDT) is developed, designed to preserve all the characteristics of the first-order shear deformation theory (FSDT) with regard to the number of the unknowns and to the order of the associated governing equations. In contrast to the FSDT theory, the HSDT theory is characterized by a parabolic distribution of transverse shear stress and strain components throughout the laminate thickness thus eliminating the need of a shear correction factor, and by the elimination of the contradictory assumption of the FSDT concerning the simultaneous consideration of zero transverse normal stress and zero transverse normal strain. The governing equations of the HSDT are presented, and several special applications are discussed.

  20. Many Worlds, Many Theories, Many Rules: Formulating an Ethical System for the World to Come

    Directory of Open Access Journals (Sweden)

    Nicholas Onuf

    There are many ways to speak about the modern world, and many theories setting it apart. I focus on a world facing economic decline and a return to the status-ordering of traditional societies. With republican theory as a backdrop, I show that an updated virtue ethics constitutes an ethical system uniquely suited to any society that is significantly status-ordered.

  1. Projector Augmented-Wave formulation of response to strain and electric field perturbation within the density-functional perturbation theory (United States)

    Martin, Alexandre; Torrent, Marc; Caracas, Razvan


    A formulation of the response of a system to strain and electric field perturbations in the pseudopotential-based density functional perturbation theory (DFPT) has been proposed by D. R. Hamann and co-workers. It uses an elegant formalism based on the expression of the DFT total energy in reduced coordinates, the key quantity being the metric tensor and its first and second derivatives. We propose to extend this formulation to the Projector Augmented-Wave approach (PAW). In this context, we express the full elastic tensor including the clamped-atom tensor, the atomic-relaxation contributions (internal stresses) and the response to an electric field change (piezoelectric tensor and effective charges). With this we are able to compute the elastic tensor for all materials (metals and insulators) within a fully analytical formulation. The comparison with finite-difference calculations on simple systems shows an excellent agreement. This formalism has been implemented in the plane-wave based DFT code ABINIT. We apply it to the computation of elastic properties and seismic-wave velocities of iron with impurity elements. By analogy with the materials contained in meteorites, the tested impurities are light elements (H, O, C, S, Si).

  2. Flow Formulation-based Model for the Curriculum-based Course Timetabling Problem

    DEFF Research Database (Denmark)

    Bagger, Niels-Christian Fink; Kristiansen, Simon; Sørensen, Matias


    In this work we will present a new mixed integer programming formulation for the curriculum-based course timetabling problem. We show that the model contains an underlying network model by dividing the problem into two models and then connecting the two models back into one model using a maximum flow...... instance in the benchmark data set from the second international timetabling competition....
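
    The abstract does not give the network model itself, but its key primitive, a maximum-flow computation, can be illustrated generically. This is a self-contained Edmonds-Karp sketch on a hypothetical toy network, unrelated to the paper's timetabling instances:

```python
from collections import deque

def max_flow(capacity, source, sink):
    """Edmonds-Karp maximum flow; capacity[u][v] is the arc capacity from u to v."""
    # Build a residual graph (with zero-capacity reverse arcs); the input stays untouched.
    residual = {u: dict(nbrs) for u, nbrs in capacity.items()}
    for u, nbrs in capacity.items():
        for v in nbrs:
            residual.setdefault(v, {}).setdefault(u, 0)
    flow = 0
    while True:
        # BFS for a shortest augmenting path from source to sink
        parent = {source: None}
        queue = deque([source])
        while queue and sink not in parent:
            u = queue.popleft()
            for v, cap in residual[u].items():
                if cap > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if sink not in parent:
            return flow
        # Reconstruct the path, find its bottleneck, and push that much flow
        path, v = [], sink
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(residual[u][v] for u, v in path)
        for u, v in path:
            residual[u][v] -= bottleneck
            residual[v][u] += bottleneck
        flow += bottleneck
```

    In flow-based timetabling formulations, subproblems of roughly this shape typically route lectures to periods or rooms subject to capacity bounds.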

  3. Constraint theory multidimensional mathematical model management

    CERN Document Server

    Friedman, George J


    Packed with new material and research, this second edition of George Friedman’s bestselling Constraint Theory remains an invaluable reference for all engineers, mathematicians, and managers concerned with modeling. As in the first edition, this text analyzes the way Constraint Theory employs bipartite graphs and presents the process of locating the “kernel of constraint” trillions of times faster than brute-force approaches, determining model consistency and computational allowability. Unique in its abundance of topological pictures of the material, this book balances left- and right-brain perceptions to provide a thorough explanation of multidimensional mathematical models. Much of the extended material in this new edition also comes from Phan Phan’s PhD dissertation in 2011, titled “Expanding Constraint Theory to Determine Well-Posedness of Large Mathematical Models.” Praise for the first edition: "Dr. George Friedman is indisputably the father of the very powerful methods of constraint theory...

  4. Staircase Models from Affine Toda Field Theory

    CERN Document Server

    Dorey, P; Dorey, Patrick; Ravanini, Francesco


    We propose a class of purely elastic scattering theories generalising the staircase model of Al. B. Zamolodchikov, based on the affine Toda field theories for simply-laced Lie algebras g=A,D,E at suitable complex values of their coupling constants. Considering their Thermodynamic Bethe Ansatz equations, we give analytic arguments in support of a conjectured renormalisation group flow visiting the neighbourhood of each W_g minimal model in turn.

  5. A Theory-Based Computer Tutorial Model. (United States)

    Dixon, Robert C.; Clapp, Elizabeth J.

    Because of the need for models to illustrate some possible answers to practical courseware development questions, a specific, three-section model incorporating the Corrective Feedback Paradigm (CFP) is advanced for applying theory to courseware. The model is reconstructed feature-by-feature against a framework of a hypothetical, one-to-one,…

  6. A course on basic model theory

    CERN Document Server

    Sarbadhikari, Haimanti


    This self-contained book is an exposition of the fundamental ideas of model theory. It presents the necessary background from logic, set theory and other topics of mathematics. Only some degree of mathematical maturity and willingness to assimilate ideas from diverse areas are required. The book can be used for both teaching and self-study, ideally over two semesters. It is primarily aimed at graduate students in mathematical logic who want to specialise in model theory. However, the first two chapters constitute a first introduction to the subject and can be covered in a one-semester course for senior undergraduate students in mathematical logic. The book is also suitable for researchers who wish to use model theory in their work.

  7. Ether formulations of relativity

    Energy Technology Data Exchange (ETDEWEB)

    Duffy, M.C.


    Contemporary ether theories are surveyed and criticized, especially those formally identical to orthodox Relativity. The historical development of Relativity, Special and General, in terms of an ether, is briefly indicated. Classical interpretations of Generalized Relativity using ether are compared to Euclidean formulations using a background space. The history of a sub-group of theories, formulating a 'new' Relativity involving modified transforms, is outlined. According to the theory with which they agree, recent supposed detections of drift are classified and criticized. Cosmological evidence suggesting an ether is mentioned. Only ether theories formally identical to Relativity have been published in depth. They stand criticized as being contrary to the positivist spirit. The history of mechanical analogues is traced, from Hartley's representing gravitating matter as spherical standing waves, to recent suggestions that the vortex-sponge might model electromagnetic, quantum, uncertainty and faster-than-light phenomena. Contemporary theories are particular physical theories, themselves 'second interpretations' of a primary mathematical model. Mechanical analogues are auxiliary, not necessary, to ether theory, disclosing relationships between classical and non-classical descriptions of assemblies changing state. The ether-relativity polemic, part of a broader dispute about relativity, is founded on mistaken conceptions of the roles of mathematical and physical models, mechanical analogues; and a distorted view of history, which indicates that ether theories have become relativistic. 103 references.

  8. Non-critical string theory formulation of microtubule dynamics and quantum aspects of brain function

    CERN Document Server

    Mavromatos, Nikolaos E


    Microtubule (MT) networks, subneural paracrystalline cytoskeletal structures, seem to play a fundamental role in the neurons. We cast here the complicated MT dynamics in the form of a 1+1-dimensional non-critical string theory, thus enabling us to provide a consistent quantum treatment of MTs, including environmental friction effects. We suggest, thus, that the MTs are the microsites, in the brain, for the emergence of stable, macroscopic quantum coherent states, identifiable with the preconscious states. Quantum space-time effects, as described by non-critical string theory, trigger then an organized collapse of the coherent states down to a specific or conscious state. The whole process we estimate to take O(1 sec), in excellent agreement with a plethora of experimental/observational findings. The microscopic arrow of time, endemic in non-critical string theory, and apparent here in the self-collapse process, provides a satisfactory and simple resolution to the age...

  9. Chemists’ knowledge object. Formulation, modification and abandonment of iconic model

    Directory of Open Access Journals (Sweden)

    Rómulo Gallego Badillo


    This article presents an analysis of different perspectives regarding the scientific status of chemistry. The category of scientific model was used to characterize the proposal and development of the technological-iconic model. It was necessary to look at the time in which the introduction of analogical and symbolic models was indispensable to modify the initial model. The article also establishes the way in which the technological-iconic model can provide a didactic foundation to lead secondary students towards Chemistry as one of the natural sciences.

  10. Modelling population dynamics model formulation, fitting and assessment using state-space methods

    CERN Document Server

    Newman, K B; Morgan, B J T; King, R; Borchers, D L; Cole, D J; Besbeas, P; Gimenez, O; Thomas, L


    This book gives a unifying framework for estimating the abundance of open populations: populations subject to births, deaths and movement, given imperfect measurements or samples of the populations.  The focus is primarily on populations of vertebrates for which dynamics are typically modelled within the framework of an annual cycle, and for which stochastic variability in the demographic processes is usually modest. Discrete-time models are developed in which animals can be assigned to discrete states such as age class, gender, maturity,  population (within a metapopulation), or species (for multi-species models). The book goes well beyond estimation of abundance, allowing inference on underlying population processes such as birth or recruitment, survival and movement. This requires the formulation and fitting of population dynamics models.  The resulting fitted models yield both estimates of abundance and estimates of parameters characterizing the underlying processes.  
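
    The kind of discrete-time population process described above can be sketched in a few lines. This is a minimal stochastic projection with hypothetical survival and recruitment rates; it ignores the observation model and the discrete state assignments (age, gender, population) that the book's state-space framework adds:

```python
import numpy as np

def simulate_population(n0, survival, recruitment, years, rng):
    """Discrete-time stochastic projection: binomial survival plus Poisson recruitment."""
    counts = [n0]
    for _ in range(years):
        survivors = rng.binomial(counts[-1], survival)    # death process
        recruits = rng.poisson(recruitment * counts[-1])  # birth/recruitment process
        counts.append(survivors + recruits)
    return counts

# Hypothetical rates: 80% annual survival, 0.25 recruits per individual per year
rng = np.random.default_rng(42)
trajectory = simulate_population(100, survival=0.8, recruitment=0.25, years=10, rng=rng)
```

    Fitting such a model to imperfect counts, rather than merely simulating it, is exactly the state-space estimation problem the book addresses.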

  11. Economic Modelling in Institutional Economic Theory

    Directory of Open Access Journals (Sweden)

    Wadim Strielkowski


    Our paper centers on the formation of a theory of institutional modelling that includes principles and ideas reflecting the laws of societal development within the framework of institutional economic theory. We scrutinize and discuss the scientific principles of institutional modelling that are increasingly postulated by the classics of institutional theory and find their way into the basics of institutional economics. We propose scientific ideas concerning new innovative approaches to institutional modelling. These ideas have been devised and developed on the basis of the results of our own original design, as well as on the formalisation and measurement of economic institutions, their functioning and evolution. Moreover, we consider the applied aspects of the institutional theory of modelling and employ them in our research to formalize our results and maximise the practical outcome of our paper. Our results and findings might be useful for researchers and stakeholders searching for a systematic and comprehensive description of institutional-level modelling, the principles involved in this process and the main provisions of the institutional theory of economic modelling.

  12. Randomized Item Response Theory Models

    NARCIS (Netherlands)

    Fox, Gerardus J.A.


    The randomized response (RR) technique is often used to obtain answers to sensitive questions. Because direct questioning leads to biased results, a new method is developed to measure latent variables using the RR technique. Within the RR technique, the probability of the true response is modeled by…
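
    The probability model underlying the RR technique can be illustrated with Warner's classic design, which is the standard starting point (the record's own latent-variable method is a generalization of this idea). The parameter values below are illustrative.

```python
import random

def warner_survey(pi_true, p, n, rng):
    """Simulate Warner's randomized response design: with probability p the
    respondent answers the sensitive question truthfully; otherwise they
    answer the complementary question. Returns the observed 'yes' rate."""
    yes = 0
    for _ in range(n):
        sensitive = rng.random() < pi_true   # respondent's true status
        truth_q = rng.random() < p           # which question was drawn
        answer = sensitive if truth_q else not sensitive
        yes += answer
    return yes / n

def warner_estimate(lam_hat, p):
    """Moment estimator of the sensitive-trait prevalence pi:
    P(yes) = p*pi + (1-p)*(1-pi)  =>  pi = (lam - (1-p)) / (2p - 1)."""
    return (lam_hat - (1 - p)) / (2 * p - 1)

rng = random.Random(42)
lam = warner_survey(pi_true=0.30, p=0.7, n=200_000, rng=rng)
pi_hat = warner_estimate(lam, p=0.7)
```

    The randomization protects individual respondents while still allowing the population prevalence to be recovered from the aggregate 'yes' rate.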

  13. Supersymmetric SYK model and random matrix theory (United States)

    Li, Tianlin; Liu, Junyu; Xin, Yuan; Zhou, Yehao


    In this paper, we investigate the effect of supersymmetry on the symmetry classification of random matrix theory ensembles. We mainly consider the random matrix behaviors in the N=1 supersymmetric generalization of the Sachdev-Ye-Kitaev (SYK) model, a toy model for a two-dimensional quantum black hole with a supersymmetric constraint. Analytical arguments and numerical results are given to show that the statistics of the supersymmetric SYK model can be interpreted as random matrix theory ensembles, with an eight-fold classification different from that of the original SYK model and some new features. The time-dependent evolution of the spectral form factor is also investigated, where predictions from random matrix theory govern the late-time behavior of the chaotic Hamiltonian with supersymmetry.
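
    A standard diagnostic behind such random-matrix classifications is the adjacent-gap ratio of the eigenvalue spectrum, whose mean distinguishes RMT ensembles (roughly 0.60 for GUE versus 0.39 for uncorrelated Poisson levels). The sketch below is a generic GUE illustration, not the authors' SYK computation.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1000

# Draw a GUE matrix: H = (A + A^dagger)/2 with complex Gaussian entries.
A = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
H = (A + A.conj().T) / 2

E = np.linalg.eigvalsh(H)          # real eigenvalues of the Hermitian matrix
s = np.diff(E)                     # consecutive level spacings

# Adjacent-gap ratio r_n = min(s_n, s_{n+1}) / max(s_n, s_{n+1}); its mean
# requires no unfolding of the spectrum, unlike the spacing distribution.
r = np.minimum(s[:-1], s[1:]) / np.maximum(s[:-1], s[1:])
mean_r = r.mean()
```

    In SYK-type studies the same statistic is computed on the many-body spectrum of the Hamiltonian, sector by sector, to read off the symmetry class.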

  14. Graphical Model Theory for Wireless Sensor Networks

    Energy Technology Data Exchange (ETDEWEB)

    Davis, William B.


    Information processing in sensor networks, with many small processors, demands a theory of computation that allows the minimization of processing effort and the distribution of this effort throughout the network. Graphical model theory provides a probabilistic theory of computation that explicitly addresses complexity and decentralization for optimizing network computation. The junction tree algorithm, for decentralized inference on graphical probability models, can be instantiated in a variety of applications useful for wireless sensor networks, including: sensor validation and fusion; data compression and channel coding; expert systems with decentralized data structures and efficient local queries; pattern classification; and machine learning. Graphical models for these applications are sketched, and a model of dynamic sensor validation and fusion is presented in more depth to illustrate the junction tree algorithm.
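
    The sensor-fusion use case reduces, in its simplest form, to sum-product message passing on a star-shaped graph: one hidden state node and one factor per sensor. The sketch below (hypothetical sensor likelihoods, not the report's model) shows the degenerate junction-tree computation where each sensor's message is its likelihood and the posterior is their normalized product with the prior.

```python
import numpy as np

def fuse(prior, likelihoods):
    """Sum-product fusion on a star graph: one hidden binary state node,
    one factor per sensor. Each sensor sends the message P(z_i | x); the
    posterior is the normalized product of the prior and all messages."""
    post = np.array(prior, dtype=float)
    for lik in likelihoods:
        post *= lik
    return post / post.sum()

prior = [0.5, 0.5]        # P(x=0), P(x=1): uninformative prior
sensor_a = [0.2, 0.8]     # P(z_a | x): this reading favors x=1
sensor_b = [0.3, 0.7]
posterior = fuse(prior, [sensor_a, sensor_b])
```

    On richer graphs the junction tree algorithm generalizes this: cliques exchange marginalized products of their local factors, so each node only ever handles local tables.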

  15. Some Results in Dynamic Model Theory (United States)


    Kozen, Dexter. Science of Computer Programming 51 (2004) 3–22. …models. At the first-order level, we recall the definition of Tarskian frames over a first-order signature…

  16. Deuteron electromagnetic form factors in a renormalizable formulation of chiral effective field theory

    Energy Technology Data Exchange (ETDEWEB)

    Epelbaum, E. [Ruhr-Universitaet Bochum, Institut fuer Theoretische Physik II, Fakultaet fuer Physik und Astronomie, Bochum (Germany); Gasparyan, A.M. [Ruhr-Universitaet Bochum, Institut fuer Theoretische Physik II, Fakultaet fuer Physik und Astronomie, Bochum (Germany); SSC RF ITEP, Moscow (Russian Federation); Gegelia, J. [Ruhr-Universitaet Bochum, Institut fuer Theoretische Physik II, Fakultaet fuer Physik und Astronomie, Bochum (Germany); Tbilisi State University, Tbilisi (Georgia); Schindler, M.R. [University of South Carolina, Department of Physics and Astronomy, Columbia (United States)


    We calculate the deuteron electromagnetic form factors in a modified version of Weinberg's chiral effective field theory approach to the two-nucleon system. We derive renormalizable integral equations for the deuteron without partial wave decomposition. Deuteron form factors are extracted by applying the Lehmann-Symanzik-Zimmermann reduction formalism to the three-point correlation function of deuteron interpolating fields and the electromagnetic current operator. Numerical results of a leading-order calculation with removed cutoff regularization agree well with experimental data. (orig.)

  17. Formulation of court interpreting models: A South African perspective

    African Journals Online (AJOL)

    In South Africa there are no models of court interpreting to serve as a guide for court interpreters when performing their task. This is because there is no proper definition of the role of a court interpreter. Models of court interpreting define and describe the process by stating what court interpreters are actually doing when ...

  18. Application of recursive Gibbs-Appell formulation in deriving the equations of motion of N-viscoelastic robotic manipulators in 3D space using Timoshenko Beam Theory (United States)

    Korayem, M. H.; Shafei, A. M.


    The goal of this paper is to describe the application of the Gibbs-Appell (G-A) formulation and the assumed modes method to the mathematical modeling of N-viscoelastic-link manipulators. The paper's focus is on obtaining accurate and complete equations of motion which encompass the most relevant structural properties of lightweight elastic manipulators. In this study, two important damping mechanisms, namely the structural viscoelastic (Kelvin-Voigt) effect (as internal damping) and the viscous air effect (as external damping), have been considered. To include the effects of shear and rotational inertia, the assumptions of Timoshenko beam (TB) theory (TBT) have been applied. Gravity, torsion, and longitudinal elongation effects have also been included in the formulations. To systematically derive the equations of motion and improve computational efficiency, a recursive algorithm has been used in the modeling of the system. In this algorithm, all the mathematical operations are carried out using only 3×3 and 3×1 matrices. Finally, a computational simulation of a manipulator with two elastic links is performed in order to verify the proposed method.

  19. Selection and formulation of a numerical shallow water wave hindcast model

    NARCIS (Netherlands)

    Herbers, T.; Holthuijsen, L.H.; Booij, N.


    Formulate a numerical wave hindcast model which can be used to obtain realistic estimates of wave conditions in the Oosterschelde as input to a numerical geomorphological model. A directionally decoupled, parametric wave hindcast model is recommended that includes parameterized versions of…

  20. Security Theorems via Model Theory

    Directory of Open Access Journals (Sweden)

    Joshua Guttman


    Full Text Available A model-theoretic approach can establish security theorems for cryptographic protocols. Formulas expressing authentication and non-disclosure properties of protocols have a special form: they are quantified implications, for all xs. (phi implies for some ys. psi). Models (interpretations) for these formulas are *skeletons*, partially ordered structures consisting of a number of local protocol behaviors. *Realized* skeletons contain enough local sessions to explain all the behavior, when combined with some possible adversary behaviors. We show two results. (1) If phi is the antecedent of a security goal, then there is a skeleton A_phi such that, for every skeleton B, phi is satisfied in B iff there is a homomorphism from A_phi to B. (2) A protocol enforces for all xs. (phi implies for some ys. psi) iff every realized homomorphic image of A_phi satisfies psi. Hence, to verify a security goal, one can use the Cryptographic Protocol Shapes Analyzer CPSA (TACAS, 2007) to identify minimal realized skeletons, or "shapes," that are homomorphic images of A_phi. If psi holds in each of these shapes, then the goal holds.

  1. Probing for the Multiplicative Term in Modern Expectancy-Value Theory: A Latent Interaction Modeling Study (United States)

    Trautwein, Ulrich; Marsh, Herbert W.; Nagengast, Benjamin; Ludtke, Oliver; Nagy, Gabriel; Jonkmann, Kathrin


    In modern expectancy-value theory (EVT) in educational psychology, expectancy and value beliefs additively predict performance, persistence, and task choice. In contrast to earlier formulations of EVT, the multiplicative term Expectancy x Value in regression-type models typically plays no major role in educational psychology. The present study…
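
    The multiplicative Expectancy x Value term is, in a manifest-variable approximation, simply a product term added to a regression-type model. The sketch below simulates data under a hypothetical data-generating process with a small interaction and recovers it by ordinary least squares; the latent interaction modeling in the study itself is more elaborate, so this is only the basic idea.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000

expectancy = rng.normal(size=n)
value = rng.normal(size=n)

# Hypothetical data-generating process with a small multiplicative term.
b_e, b_v, b_ev = 0.5, 0.4, 0.15
performance = (b_e * expectancy + b_v * value
               + b_ev * expectancy * value
               + rng.normal(scale=1.0, size=n))

# Regression-type model including the Expectancy x Value product term.
X = np.column_stack([np.ones(n), expectancy, value, expectancy * value])
coef, *_ = np.linalg.lstsq(X, performance, rcond=None)
```

    A near-zero estimate for `coef[3]` on real data would mirror the finding that the multiplicative term typically plays no major role; latent-variable versions additionally correct the product term for measurement error.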

  2. Improved Stability of a Model IgG3 by DoE-Based Evaluation of Buffer Formulations

    National Research Council Canada - National Science Library

    Chavez, Brittany K; Agarabi, Cyrus D; Read, Erik K; Boyne II, Michael T; Khan, Mansoor A; Brorson, Kurt A


    .... Using a model murine IgG3 produced in a bioreactor system, multiple formulation compositions were systematically explored in a DoE design to optimize the stability of a challenging antibody formulation worst case...

  3. Vacation queueing models theory and applications

    CERN Document Server

    Tian, Naishuo


    A classical queueing model consists of three parts: the arrival process, the service process, and the queue discipline. A vacation queueing model has an additional part, the vacation process, which is governed by a vacation policy that can be characterized by three aspects: 1) the vacation start-up rule; 2) the vacation termination rule; and 3) the vacation duration distribution. Hence, vacation queueing models are an extension of classical queueing theory. Vacation Queueing Models: Theory and Applications discusses systematically and in detail the many variations of vacation policy. Allowing servers to take vacations makes queueing models more realistic and flexible for studying real-world waiting line systems. Integrated into the book's discussion are a variety of typical vacation model applications, including call centers with multi-task employees, customized manufacturing, telecommunication networks, maintenance activities, etc. Finally, the contents are presented in a "theorem and proof" format and it is invaluabl...
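
    The three vacation-policy aspects above can be made concrete with a small simulation of an M/M/1 queue under the multiple-vacations policy: the server starts vacationing whenever it empties the queue (start-up rule), returns to work only if it finds a waiting customer at a vacation's end (termination rule), and vacation lengths are i.i.d. exponential (duration distribution). The parameters are illustrative; the known decomposition result says the mean wait equals the M/M/1 wait plus the mean residual vacation.

```python
import random

def mm1_vacation_waits(lam, mu, theta, n, seed=7):
    """Simulate FIFO waiting times in an M/M/1 queue with multiple
    exponential vacations (rate theta)."""
    rng = random.Random(seed)
    t = 0.0            # arrival clock
    busy_until = 0.0   # time at which the server clears all accepted work
    waits = []
    for _ in range(n):
        t += rng.expovariate(lam)                 # next Poisson arrival
        if t < busy_until:
            start = busy_until                    # server busy: FIFO wait
        else:
            # The server went on vacation when it emptied the queue; it
            # keeps taking vacations and serves this customer at the end
            # of the first vacation that finishes after the arrival.
            v_end = busy_until
            while v_end < t:
                v_end += rng.expovariate(theta)
            start = v_end
        waits.append(start - t)
        busy_until = start + rng.expovariate(mu)  # exponential service
    return waits

waits = mm1_vacation_waits(lam=0.5, mu=1.0, theta=1.0, n=200_000)
mean_wait = sum(waits) / len(waits)

# Decomposition: E[Wq] = lam/(mu*(mu-lam)) + E[V^2]/(2*E[V]) = 1 + 1 = 2.
```

    Changing the start-up or termination rule (e.g. single vacation, N-policy) changes only the branch that computes `start`, which is what makes the vacation framework so flexible.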

  4. Quantum field theory and the standard model

    CERN Document Server

    Schwartz, Matthew D


    Providing a comprehensive introduction to quantum field theory, this textbook covers the development of particle physics from its foundations to the discovery of the Higgs boson. Its combination of clear physical explanations, direct connections to experimental data, and mathematical rigor makes the subject accessible to students with a wide variety of backgrounds and interests. Assuming only an undergraduate-level understanding of quantum mechanics, the book steadily develops the Standard Model and state-of-the-art calculation techniques. It includes multiple derivations of many important results, with modern methods such as effective field theory and the renormalization group playing a prominent role. Numerous worked examples and end-of-chapter problems enable students to reproduce classic results and to master quantum field theory as it is used today. Based on a course taught by the author over many years, this book is ideal for an introductory to advanced quantum field theory sequence or for independe...

  5. Affine group formulation of the Standard Model coupled to gravity

    CERN Document Server

    Chou, Ching-Yi; Soo, Chopin


    Using the affine group formalism, we perform a nonperturbative quantization leading to the construction of elements of a physical Hilbert space for full, Lorentzian quantum gravity coupled to the Standard Model in four spacetime dimensions. This paper constitutes a first step toward understanding the phenomenology of quantum gravitational effects stemming from a consistent treatment of minimal couplings to matter.

  6. Mathematical Formulation Requirements and Specifications for the Process Models

    Energy Technology Data Exchange (ETDEWEB)

    Steefel, C.; Moulton, D.; Pau, G.; Lipnikov, K.; Meza, J.; Lichtner, P.; Wolery, T.; Bacon, D.; Spycher, N.; Bell, J.; Moridis, G.; Yabusaki, S.; Sonnenthal, E.; Zyvoloski, G.; Andre, B.; Zheng, L.; Davis, J.


    The Advanced Simulation Capability for Environmental Management (ASCEM) is intended to be a state-of-the-art scientific tool and approach for understanding and predicting contaminant fate and transport in natural and engineered systems. The ASCEM program is aimed at addressing critical EM program needs to better understand and quantify flow and contaminant transport behavior in complex geological systems. It will also address the long-term performance of engineered components including cementitious materials in nuclear waste disposal facilities, in order to reduce uncertainties and risks associated with DOE EM's environmental cleanup and closure activities. Building upon national capabilities developed from decades of Research and Development in subsurface geosciences, computational and computer science, modeling and applied mathematics, and environmental remediation, the ASCEM initiative will develop an integrated, open-source, high-performance computer modeling system for multiphase, multicomponent, multiscale subsurface flow and contaminant transport. This integrated modeling system will incorporate capabilities for predicting releases from various waste forms, identifying exposure pathways and performing dose calculations, and conducting systematic uncertainty quantification. The ASCEM approach will be demonstrated on selected sites, and then applied to support the next generation of performance assessments of nuclear waste disposal and facility decommissioning across the EM complex. The Multi-Process High Performance Computing (HPC) Simulator is one of three thrust areas in ASCEM. The other two are the Platform and Integrated Toolsets (dubbed the Platform) and Site Applications. The primary objective of the HPC Simulator is to provide a flexible and extensible computational engine to simulate the coupled processes and flow scenarios described by the conceptual models developed using the ASCEM Platform. The graded and iterative approach to assessments

  7. Modeling surface tension using a ghost fluid technique within a volume of fluid formulation

    Energy Technology Data Exchange (ETDEWEB)

    Francois, M. M. (Marianne M.); Kothe, D. B. (Douglas B.); Cummins, S. J. (Sharen J.)


    Ghost fluid methods (GFM) are a viable approach for imposing sharp boundary conditions on interfaces that are arbitrarily embedded within the computational mesh. All GFM to date are formulated with an interface distance function that resides within a level-set (LS) framework. Recently we proposed a technique for reconstructing distance functions from volume fractions. This technique enables the exploitation of GFM within a volume of fluid (VOF) formulation for modeling interfacial phenomena like surface tension. Combining GFM with a VOF formulation is attractive because of the VOF method's superior mass conservation and because of the ability of GFM to maintain sharp jump conditions. The continuum surface tension force (CSF) method, however, has the propensity to produce smooth jumps. In the following, the combined VOF-GFM and the more classical VOF-CSF formulations are compared and contrasted. Static and dynamic numerical results are used to illustrate our findings and support our claims.

  8. Canonical transformations and loop formulation of SU(N) lattice gauge theories (United States)

    Mathur, Manu; Sreeraj, T. P.


    We construct canonical transformations to reformulate SU(N) Kogut-Susskind lattice gauge theory in terms of a set of fundamental loop and string flux operators along with their canonically conjugate loop and string electric fields. The canonical relations between the initial SU(N) link operators and the final SU(N) loop and string operators, consistent with SU(N) gauge transformations, are explicitly constructed over the entire lattice. We show that as a consequence of the SU(N) Gauss laws all SU(N) string degrees of freedom become cyclic and decouple from the physical Hilbert space Hp. The Kogut-Susskind Hamiltonian rewritten in terms of the fundamental physical loop operators has global SU(N) invariance. There are no gauge fields. We further show that the (1/g^2) magnetic field terms on plaquettes create and annihilate the fundamental plaquette loop fluxes while the (g^2) electric field terms describe all their interactions. In the weak coupling (g^2 → 0) continuum limit the SU(N) loop dynamics is described by an SU(N) spin Hamiltonian with nearest-neighbor interactions. In the simplest SU(2) case, where the canonical transformations map the SU(2) loop Hilbert space into the Hilbert spaces of hydrogen atoms, we analyze the special role of the hydrogen atom dynamical symmetry group SO(4,2) in the loop dynamics and the spectrum. A simple tensor network ansatz in the SU(2) gauge invariant hydrogen atom loop basis is discussed.

  9. Canonical Transformations and Loop Formulation of SU(N) Lattice Gauge Theories

    CERN Document Server

    Mathur, Manu


    We construct canonical transformations to reformulate SU(N) Kogut-Susskind lattice gauge theory in terms of a set of fundamental loop & string flux operators along with their canonically conjugate loop & string electric fields. We show that as a consequence of SU(N) Gauss laws all SU(N) string degrees of freedom become cyclic and decouple from the physical Hilbert space ${\\cal H}^p$. The canonical relations between the initial SU(N) link operators and the final SU(N) loop & string operators over the entire lattice are worked out in a self consistent manner. The Kogut-Susskind Hamiltonian rewritten in terms of the fundamental physical loop operators has global SU(N) invariance. There are no gauge fields. We further show that the $(1/g^2)$ magnetic field terms on plaquettes create and annihilate the fundamental plaquette loop fluxes while the $(g^2)$ electric field terms describe all their interactions. In the weak coupling ($g^2 \\rightarrow 0$) continuum limit the SU(N) loop dynamics is described b...

  10. Standard Model in multiscale theories and observational constraints (United States)

    Calcagni, Gianluca; Nardelli, Giuseppe; Rodríguez-Fernández, David


    We construct and analyze the Standard Model of electroweak and strong interactions in multiscale spacetimes with (i) weighted derivatives and (ii) q-derivatives. Both theories can be formulated in two different frames, called the fractional and integer picture. By definition, the fractional picture is where physical predictions should be made. (i) In the theory with weighted derivatives, it is shown that gauge invariance and the requirement of having constant masses in all reference frames make the Standard Model in the integer picture indistinguishable from the ordinary one. Experiments involving only the weak and strong forces are insensitive to a change of spacetime dimensionality also in the fractional picture, and only the electromagnetic and gravitational sectors can break the degeneracy. For the simplest multiscale measures with only one characteristic time, length and energy scale t*, ℓ* and E*, we compute the Lamb shift in the hydrogen atom and constrain the multiscale correction to the ordinary result, getting an absolute upper bound on t* corresponding to E* > 28 TeV. Stronger bounds are obtained from the measurement of the fine-structure constant. (ii) In the theory with q-derivatives, considering the muon decay rate and the Lamb shift in light atoms, we obtain independent absolute upper bounds on t* corresponding to E* > 35 MeV. For α0 = 1/2, the Lamb shift alone yields E* > 450 GeV.

  11. Gauge-Origin Independent Formulation and Implementation of Magneto-Optical Activity within Atomic-Orbital-Density Based Hartree-Fock and Kohn-Sham Response Theories

    DEFF Research Database (Denmark)

    Kjærgaard, Thomas; Jørgensen, Poul; Thorvaldsen, Andreas


    -orbital density-matrix based formulation of response theory and use London atomic orbitals to parametrize the magnetic field dependence. It yields a computational procedure which is both gauge-origin independent and suitable for linear-scaling at the level of time-dependent Hartree-Fock and density functional...... theory. The formulation includes a modified preconditioned conjugated gradient algorithm, which projects out the excited state component from the solution to the linear response equation. This is required when solving one of the response equations for the determination of the B term and divergence...

  12. Formulating a stand-growth model for mathematical programming problems in Appalachian forests (United States)

    Gary W. Miller; Jay Sullivan


    Some growth and yield simulators applicable to central hardwood forests can be formulated for use in mathematical programming models that are designed to optimize multi-stand, multi-resource management problems. Once in the required format, growth equations serve as model constraints, defining the dynamics of stand development brought about by harvesting decisions. In...

  13. Formulation of consumables management models. Volume 1: Mission planning (United States)

    Torian, J. G.; Zamora, M. A.


    Development of an STS (Space Transportation System) interactive computer program MPP (Mission Planning Processor) working model was conducted. A summary of the computer program development and those supporting tasks conducted is presented. Development of the MPP Computer Program is discussed. This development was supported by several parallel tasks. These tasks either directly supported the program development, or provided information for future application and/or modification to the program in relation to the flight planning and flight operations of the STS and advanced spacecraft. The supporting tasks also included development of a Space Station MPP to demonstrate the applicability of the analytical methods developed under this RTOP to more advanced spacecraft than the STS.

  14. Density functional theory and multiscale materials modeling*

    Indian Academy of Sciences (India)


    A wide class of problems involving nanomaterials, interfacial science and soft condensed matter has been addressed using density-based … Keywords: density functional theory; soft condensed matter; materials modeling. … the basic laws of quantum mechanics, their prediction through a direct ab initio …

  15. Aligning Grammatical Theories and Language Processing Models (United States)

    Lewis, Shevaun; Phillips, Colin


    We address two important questions about the relationship between theoretical linguistics and psycholinguistics. First, do grammatical theories and language processing models describe separate cognitive systems, or are they accounts of different aspects of the same system? We argue that most evidence is consistent with the one-system view. Second,…

  16. Recursive renormalization group theory based subgrid modeling (United States)

    Zhou, YE


    Advancing the knowledge and understanding of turbulence theory is addressed. Specific problems to be addressed will include studies of subgrid models to understand the effects of unresolved small scale dynamics on the large scale motion which, if successful, might substantially reduce the number of degrees of freedom that need to be computed in turbulence simulation.

  17. Finite strain formulation of viscoelastic damage model for simulation of fabric reinforced polymers under dynamic loading

    Directory of Open Access Journals (Sweden)

    Treutenaere S.


    Full Text Available The use of fabric-reinforced polymers in the automotive industry is growing significantly. The high specific stiffness and strength, the ease of shaping, and the excellent impact performance of these materials widely encourage their adoption. The present model increases the predictive power of explicit finite element analysis and pushes the boundaries of current phenomenological models. Carbon fibre composites made up of various preforms were tested under different mechanical loads up to dynamic loading. This experimental campaign highlighted the physical mechanisms affecting the initial mechanical properties, namely intra- and interlaminar matrix damage, viscoelasticity and fibre failure. The intralaminar behaviour model is based on an explicit formulation of the matrix damage model developed by ONERA, as this damage formulation correlates with the experimental observations. By coupling it with a Maxwell-Wiechert model, viscoelasticity is included without losing the direct explicit formulation. Additionally, the model is formulated under a total Lagrangian scheme in order to maintain consistency at finite strain. Material frame-indifference as well as anisotropy are thus ensured, which allows the reorientation of fibres to be taken into account, particularly for in-plane shear loading. Moreover, working within the total Lagrangian scheme greatly simplifies parameter identification, as it is based on the initial configuration. This intralaminar model thus relies upon a physical description of the behaviour of fabric composites, and the numerical simulations show good correlation with the experimental results.
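
    The Maxwell-Wiechert (generalized Maxwell) model mentioned above represents viscoelasticity as an equilibrium spring in parallel with several spring-dashpot branches, giving a Prony-series relaxation modulus. The sketch below uses hypothetical moduli and relaxation times, not the paper's identified parameters, to show stress relaxation under a step strain.

```python
import math

def relaxation_modulus(t, e_inf, branches):
    """Maxwell-Wiechert relaxation modulus as a Prony series:
    E(t) = E_inf + sum_i E_i * exp(-t / tau_i)."""
    return e_inf + sum(e_i * math.exp(-t / tau_i) for e_i, tau_i in branches)

# Hypothetical parameters: equilibrium spring plus two Maxwell branches.
E_INF = 1.0e9                              # Pa
BRANCHES = [(2.0e9, 0.01), (0.5e9, 1.0)]   # (E_i in Pa, tau_i in s)

def stress_relaxation(strain, times):
    """Stress under a constant step strain: sigma(t) = E(t) * strain."""
    return [relaxation_modulus(t, E_INF, BRANCHES) * strain for t in times]

times = [0.0, 0.01, 0.1, 1.0, 10.0]
sigma = stress_relaxation(0.02, times)    # 2% step strain
```

    Each branch contributes a separate exponential decay, which is why a handful of branches can fit relaxation behavior spanning several decades in time, including the rate effects seen under dynamic loading.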

  18. Removing Specification Errors from the Usual Formulation of Binary Choice Models

    Directory of Open Access Journals (Sweden)

    P.A.V.B. Swamy


    Full Text Available We develop a procedure for removing four major specification errors from the usual formulation of binary choice models. The model that results from this procedure is different from the conventional probit and logit models. This difference arises as a direct consequence of our relaxation of the usual assumption that omitted regressors constituting the error term of a latent linear regression model do not introduce omitted regressor biases into the coefficients of the included regressors.
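
    For reference, the conventional logit model that the record's corrected procedure departs from can be fit by Newton-Raphson (iteratively reweighted least squares). This is a generic sketch of the standard estimator on simulated data, not the authors' specification-error-free model.

```python
import numpy as np

def fit_logit(X, y, iters=25):
    """Maximum-likelihood logit via Newton-Raphson (IRLS)."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        W = p * (1 - p)                    # Fisher-information weights
        H = X.T @ (W[:, None] * X)         # Hessian of the log-likelihood
        beta = beta + np.linalg.solve(H, X.T @ (y - p))
    return beta

rng = np.random.default_rng(3)
n = 20_000
X = np.column_stack([np.ones(n), rng.normal(size=n)])
true_beta = np.array([-0.5, 1.0])
p = 1.0 / (1.0 + np.exp(-X @ true_beta))
y = (rng.random(n) < p).astype(float)

beta_hat = fit_logit(X, y)
```

    The record's point is that when omitted regressors correlate with included ones, this conventional estimator's coefficients absorb omitted-regressor biases; the code recovers the truth here only because the simulated error is independent of X.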

  19. Estimation of Soil Electrical Properties in a Multilayer Earth Model with Boundary Element Formulation


    Islam, T.; Chik, Z.; Mustafa, M. M.; Sanusi, H.


    This paper presents an efficient model for the estimation of soil electrical resistivity with depth and layer thickness in a multilayer earth structure. The model is an improvement of the conventional two-layer earth model, including Wenner resistivity formulations with boundary conditions. The two-layer soil model shows limitations in the specific soil characterization of different layers with the interrelationships between soil apparent electrical resistivity (ρ) and several soil physical or chemical p...
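
    The two-layer Wenner formulation that the record builds on has a standard image-series solution for the apparent resistivity as a function of electrode spacing. The sketch below implements that classical formula with illustrative layer parameters (it is the conventional baseline, not the paper's improved multilayer model): short spacings sense only the top layer, long spacings the underlying half-space.

```python
import math

def wenner_apparent_resistivity(a, rho1, rho2, h, n_terms=200):
    """Apparent resistivity of a Wenner array (electrode spacing a) over a
    two-layer earth: a top layer of resistivity rho1 and thickness h above
    a half-space of resistivity rho2 (classical image-series solution)."""
    k = (rho2 - rho1) / (rho2 + rho1)      # reflection coefficient
    s = 0.0
    for n in range(1, n_terms + 1):
        m = 2.0 * n * h / a
        s += k**n * (1.0 / math.sqrt(1.0 + m * m)
                     - 1.0 / math.sqrt(4.0 + m * m))
    return rho1 * (1.0 + 4.0 * s)

# Illustrative sounding: 100 ohm-m layer, 1 m thick, over 300 ohm-m rock.
shallow = wenner_apparent_resistivity(a=0.01, rho1=100.0, rho2=300.0, h=1.0)
deep = wenner_apparent_resistivity(a=1000.0, rho1=100.0, rho2=300.0, h=1.0)
```

    Inverting measured apparent-resistivity curves against this forward model is how layer resistivities and thicknesses are estimated in practice; the record's boundary element formulation generalizes the forward problem to more layers.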

  20. Lattice gauge theories and spin models (United States)

    Mathur, Manu; Sreeraj, T. P.


    The Wegner Z2 gauge theory-Z2 Ising spin model duality in (2 +1 ) dimensions is revisited and derived through a series of canonical transformations. The Kramers-Wannier duality is similarly obtained. The Wegner Z2 gauge-spin duality is directly generalized to SU(N) lattice gauge theory in (2 +1 ) dimensions to obtain the SU(N) spin model in terms of the SU(N) magnetic fields and their conjugate SU(N) electric scalar potentials. The exact and complete solutions of the Z2, U(1), SU(N) Gauss law constraints in terms of the corresponding spin or dual potential operators are given. The gauge-spin duality naturally leads to a new gauge invariant magnetic disorder operator for SU(N) lattice gauge theory which produces a magnetic vortex on the plaquette. A variational ground state of the SU(2) spin model with nearest neighbor interactions is constructed to analyze SU(2) gauge theory.

  1. Precision diet formulation to improve performance and profitability across various climates: Modeling the implications of increasing the formulation frequency of dairy cattle diets. (United States)

    White, Robin R; Capper, Judith L


    The objective of this study was to use a precision nutrition model to simulate the relationship between diet formulation frequency and dairy cattle performance across various climates. Agricultural Modeling and Training Systems (AMTS) CattlePro diet-balancing software (Cornell Research Foundation, Ithaca, NY) was used to compare 3 diet formulation frequencies (weekly, monthly, or seasonal) and 3 levels of climate variability (hot, cold, or variable). Predicted daily milk yield (MY), metabolizable energy (ME) balance, and dry matter intake (DMI) were recorded for each frequency-variability combination. An economic analysis was conducted to calculate the predicted revenue over feed and labor costs. Diet formulation frequency affected ME balance and MY but did not affect DMI. Climate variability affected ME balance and DMI but not MY. The interaction between climate variability and formulation frequency did not affect ME balance, MY, or DMI. Formulating diets more frequently increased MY, DMI, and ME balance. The economic analysis showed that formulating diets weekly rather than seasonally could improve returns over variable costs by $25,000 per year for a moderate-sized (300-cow) operation. To achieve this increase in returns, an entire feeding system margin of error of … © Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  2. Variational formulation of a quantitative phase-field model for nonisothermal solidification in a multicomponent alloy (United States)

    Ohno, Munekazu; Takaki, Tomohiro; Shibuta, Yasushi


    A variational formulation of a quantitative phase-field model is presented for nonisothermal solidification in a multicomponent alloy with two-sided asymmetric diffusion. The essential ingredient of this formulation is that the diffusion fluxes for conserved variables in both the liquid and solid are separately derived from functional derivatives of the total entropy and then these fluxes are related to each other on the basis of the local equilibrium conditions. In the present formulation, the cross-coupling terms between the phase-field and conserved variables naturally arise in the phase-field equation and diffusion equations, one of which corresponds to the antitrapping current, the phenomenological correction term in early nonvariational models. In addition, this formulation results in diffusivities of tensor form inside the interface. Asymptotic analysis demonstrates that this model can exactly reproduce the free-boundary problem in the thin-interface limit. The present model is widely applicable because approximations and simplifications are not formally introduced into the bulk's free energy densities and because off-diagonal elements of the diffusivity matrix are explicitly taken into account. Furthermore, we propose a nonvariational form of the present model to achieve high numerical performance. A numerical test of the nonvariational model is carried out for nonisothermal solidification in a binary alloy. It shows fast convergence of the results with decreasing interface thickness.
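
    The variational structure described above, in which evolution equations arise as gradient flows of a free-energy (or entropy) functional, can be illustrated in its simplest scalar form with a 1D Allen-Cahn phase field. This is a generic sketch of a variational phase-field computation, not the authors' multicomponent nonisothermal model: the discrete free energy must decrease monotonically along the flow.

```python
import numpy as np

# 1D Allen-Cahn equation phi_t = eps^2 phi_xx + phi - phi^3, the L2
# gradient flow of F[phi] = int eps^2/2 |phi_x|^2 + (1 - phi^2)^2/4 dx,
# with periodic boundaries and explicit Euler time stepping.
EPS, DX, DT, NX, STEPS = 0.1, 0.1, 0.005, 200, 2000

x = np.linspace(-NX * DX / 2, NX * DX / 2, NX)
phi = np.where(x < 0, -1.0, 1.0)          # sharp initial interface

def free_energy(phi):
    gx = (np.roll(phi, -1) - phi) / DX    # periodic forward difference
    return np.sum(0.5 * EPS**2 * gx**2 + 0.25 * (1 - phi**2) ** 2) * DX

def step(phi):
    lap = (np.roll(phi, -1) - 2 * phi + np.roll(phi, 1)) / DX**2
    return phi + DT * (EPS**2 * lap + phi - phi**3)

e0 = free_energy(phi)
for _ in range(STEPS):
    phi = step(phi)
e1 = free_energy(phi)
```

    In quantitative models the same variational bookkeeping is what produces the cross-coupling (antitrapping-like) terms automatically, instead of adding them as phenomenological corrections.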

  3. Improved Stability of a Model IgG3 by DoE-Based Evaluation of Buffer Formulations


    Chavez, Brittany K.; Agarabi, Cyrus D.; Read, Erik K.; Boyne II, Michael T.; Khan, Mansoor A.; Brorson, Kurt A.


    Formulating appropriate storage conditions for biopharmaceutical proteins is essential for ensuring their stability and thereby their purity, potency, and safety over their shelf-life. Using a model murine IgG3 produced in a bioreactor system, multiple formulation compositions were systematically explored in a DoE design to optimize the stability of a challenging antibody formulation worst case. The stability of the antibody in each buffer formulation was assessed by UV/VIS absorbance at 280 ...

  4. Density Functional Theory Models for Radiation Damage (United States)

    Dudarev, S. L.


    Density functional theory models developed over the past decade provide unique information about the structure of nanoscale defects produced by irradiation and about the nature of short-range interaction between radiation defects, clustering of defects, and their migration pathways. These ab initio models, involving no experimental input parameters, appear to be as quantitatively accurate and informative as the most advanced experimental techniques developed for the observation of radiation damage phenomena. Density functional theory models have effectively created a new paradigm for the scientific investigation and assessment of radiation damage effects, offering new insight into the origin of temperature- and dose-dependent response of materials to irradiation, a problem of pivotal significance for applications.

  5. Crack propagation modeling using Peridynamic theory (United States)

    Hafezi, M. H.; Alebrahim, R.; Kundu, T.


    Crack propagation and branching are modeled using nonlocal peridynamic theory. One major advantage of this nonlocal theory based analysis tool is its unifying approach to material behavior modeling, irrespective of whether a crack has formed in the material or not. No separate damage law is needed for crack initiation and propagation. The theory overcomes the weaknesses of existing continuum mechanics based numerical tools (e.g. FEM, XFEM) for identifying fracture modes and does not require any simplifying assumptions. Cracks grow autonomously, and not necessarily along a prescribed path. However, in some special situations, such as ductile fracture, damage evolution and failure depend on parameters characterizing the local stress state, unlike the peridynamic damage modeling technique developed for brittle fracture, in which a bond is simply broken when the failure criterion is satisfied. This simulation capability helps in designing a more reliable modeling tool for crack propagation and branching in both brittle and ductile materials. Peridynamic analysis has been found to be computationally very demanding, particularly for real-world structures (e.g. vehicles, aircraft), and it also requires an expensive visualization process. The goal of this paper is to make researchers aware of the impact of this cutting-edge simulation tool for a better understanding of the response of cracked materials. A computer code has been developed to implement peridynamic theory based modeling for two-dimensional analysis. Good agreement between our predictions and previously published results is observed. Some interesting new results that have not been reported earlier are also obtained and presented in this paper. The final objective of this investigation is to increase the mechanics knowledge of self-similar and self-affine cracks.
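The brittle bond-breaking rule can be sketched in one dimension: bonds connect all node pairs within a horizon, and a bond is removed once its stretch exceeds a critical value. The grid, horizon, and critical stretch below are illustrative assumptions, not values from the paper.

```python
# Minimal 1-D bond-based peridynamic sketch of the brittle-failure rule:
# a bond is simply removed once its stretch exceeds a critical value.
from itertools import combinations

N, dx = 21, 1.0
horizon = 3.0 * dx      # each node interacts with neighbours within this radius
s_crit = 0.05           # critical bond stretch for brittle failure

X = [i * dx for i in range(N)]          # reference positions
u = [0.02 * x for x in X]               # uniform 2% stretch field
u[-1] += 1.0                            # displacement jump: a "crack" at the end

# bonds = node pairs within the horizon
bonds = [(i, j) for i, j in combinations(range(N), 2) if X[j] - X[i] <= horizon]

def stretch(i, j):
    """Relative elongation of the bond (i, j)."""
    return (u[j] - u[i]) / (X[j] - X[i])

broken = [b for b in bonds if abs(stretch(*b)) > s_crit]

def damage(i):
    """Fraction of node i's bonds that have failed (0 = intact, 1 = fully damaged)."""
    mine = [b for b in bonds if i in b]
    return sum(1 for b in mine if b in broken) / len(mine)
```

Here the three bonds crossing the imposed jump fail while the uniformly stretched bonds survive, so damage localizes at the end node without any separate crack-tracking law.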

  6. Modeling of tethered satellite formations using graph theory

    DEFF Research Database (Denmark)

    Larsen, Martin Birkelund; Smith, Roy S; Blanke, Mogens


    Tethered satellite formations have recently gained increasing attention due to future mission proposals. Several different formations have been investigated for their dynamic properties, and control schemes have been suggested. Formulating the equations of motion and investigating which geometries could form stable formations in space are cumbersome when done on a case-by-case basis, and a common framework providing a basic model of the dynamics of tethered satellite formations can therefore be advantageous. This paper suggests the use of graph theoretical quantities to describe a tethered satellite formation and proposes a method to deduce the equations of motion for the attitude dynamics of the formation in a compact form. The use of graph theory and Lagrange mechanics together allows a broad class of formations to be described using the same framework. A method is stated for finding...
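As a toy illustration of the graph-theoretic description, the incidence matrix of a formation graph (the kind of quantity that enters such compact Lagrangian formulations) can be built directly. The four-satellite hub-and-spoke topology and names below are illustrative assumptions.

```python
# Build the incidence matrix of a tethered-formation graph:
# rows = satellites, columns = tethers; +1 at the tail node, -1 at the head.
satellites = ["hub", "s1", "s2", "s3"]
tethers = [("hub", "s1"), ("hub", "s2"), ("hub", "s3")]

def incidence_matrix(nodes, edges):
    idx = {n: k for k, n in enumerate(nodes)}
    B = [[0] * len(edges) for _ in nodes]
    for col, (tail, head) in enumerate(edges):
        B[idx[tail]][col] = 1
        B[idx[head]][col] = -1
    return B

B = incidence_matrix(satellites, tethers)
```

Because each tether has exactly one tail and one head, every column of B sums to zero, which is what lets tether constraint forces cancel pairwise in a compact formulation.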

  7. Partial differential equations in action from modelling to theory

    CERN Document Server

    Salsa, Sandro


    The book is intended as an advanced undergraduate or first-year graduate course for students from various disciplines, including applied mathematics, physics and engineering. It has evolved from courses offered on partial differential equations (PDEs) over the last several years at the Politecnico di Milano. These courses had a twofold purpose: on the one hand, to teach students to appreciate the interplay between theory and modeling in problems arising in the applied sciences, and on the other to provide them with a solid theoretical background in numerical methods, such as finite elements. Accordingly, this textbook is divided into two parts. The first part, chapters 2 to 5, is more elementary in nature and focuses on developing and studying basic problems from the macro-areas of diffusion, propagation and transport, waves and vibrations. In turn the second part, chapters 6 to 11, concentrates on the development of Hilbert spaces methods for the variational formulation and the analysis of (mainly) linear bo...

  9. Evaluation of soy-based surface active copolymers as surfactant ingredients in model shampoo formulations. (United States)

    Popadyuk, A; Kalita, H; Chisholm, B J; Voronov, A


    A new non-toxic soybean oil-based polymeric surfactant (SBPS) for personal-care products was developed and extensively characterized, including an evaluation of the polymeric surfactant performance in model shampoo formulations. To experimentally assure applicability of the soy-based macromolecules in shampoos, either in combination with common anionic surfactants (in this study, sodium lauryl sulfate, SLS) or as a single surface-active ingredient, the testing of SBPS physicochemical properties, performance and visual assessment of SBPS-based model shampoos was carried out. The results obtained, including foaming and cleaning ability of model formulations, were compared to those with only SLS as a surfactant as well as to SLS-free shampoos. Overall, the results show that the presence of SBPS improves cleaning, foaming, and conditioning of model formulations. SBPS-based formulations meet major requirements of multifunctional shampoos - mild detergency, foaming, good conditioning, and aesthetic appeal, which are comparable to commercially available shampoos. In addition, examination of SBPS/SLS mixtures in model shampoos showed that the presence of the SBPS enables the concentration of SLS to be significantly reduced without sacrificing shampoo performance. © 2014 Society of Cosmetic Scientists and the Société Française de Cosmétologie.

  10. Integrating gene set analysis and nonlinear predictive modeling of disease phenotypes using a Bayesian multitask formulation. (United States)

    Gönen, Mehmet


    Identifying molecular signatures of disease phenotypes is studied using two mainstream approaches: (i) Predictive modeling methods such as linear classification and regression algorithms are used to find signatures predictive of phenotypes from genomic data, which may not be robust due to limited sample size or highly correlated nature of genomic data. (ii) Gene set analysis methods are used to find gene sets on which phenotypes are linearly dependent by bringing prior biological knowledge into the analysis, which may not capture more complex nonlinear dependencies. Thus, formulating an integrated model of gene set analysis and nonlinear predictive modeling is of great practical importance. In this study, we propose a Bayesian binary classification framework to integrate gene set analysis and nonlinear predictive modeling. We then generalize this formulation to multitask learning setting to model multiple related datasets conjointly. Our main novelty is the probabilistic nonlinear formulation that enables us to robustly capture nonlinear dependencies between genomic data and phenotype even with small sample sizes. We demonstrate the performance of our algorithms using repeated random subsampling validation experiments on two cancer and two tuberculosis datasets by predicting important disease phenotypes from genome-wide gene expression data. We are able to obtain comparable or even better predictive performance than a baseline Bayesian nonlinear algorithm and to identify sparse sets of relevant genes and gene sets on all datasets. We also show that our multitask learning formulation enables us to further improve the generalization performance and to better understand biological processes behind disease phenotypes.

  11. The Body Model Theory of Somatosensory Cortex. (United States)

    Brecht, Michael


    I outline a microcircuit theory of somatosensory cortex as a body model serving both for body representation and "body simulation." A modular model of innervated and non-innervated body parts resides in somatosensory cortical layer 4. This body model is continuously updated and compares to an avatar (an animatable puppet) rather than a mere sensory map. Superficial layers provide context and store sensory memories, whereas layer 5 provides motor output and stores motor memories. I predict that layer-6-to-layer-4 inputs initiate body simulations allowing rehearsal and risk assessment of difficult actions, such as jumps. Copyright © 2017 Elsevier Inc. All rights reserved.

  12. Quantized gauge theory on the fuzzy sphere as random matrix model

    Energy Technology Data Exchange (ETDEWEB)

    Steinacker, Harold


    U(n) Yang-Mills theory on the fuzzy sphere S{sup 2}{sub N} is quantized using random matrix methods. The gauge theory is formulated as a matrix model for a single Hermitian matrix subject to a constraint, and a potential with two degenerate minima. This allows the path integral over the gauge fields to be reduced to an integral over eigenvalues, which can be evaluated for large N. The partition function of U(n) Yang-Mills theory on the classical sphere is recovered in the large N limit, as a sum over instanton contributions. The monopole solutions are found explicitly.

  13. Topos models for physics and topos theory

    Energy Technology Data Exchange (ETDEWEB)

    Wolters, Sander (Radboud Universiteit Nijmegen, Institute for Mathematics, Astrophysics, and Particle Physics, Netherlands)


    What is the role of topos theory in the topos models for quantum theory as used by Isham, Butterfield, Döring, Heunen, Landsman, Spitters, and others? In other words, what is the interplay between physical motivation for the models and the mathematical framework used in these models? Concretely, we show that the presheaf topos model of Butterfield, Isham, and Döring resembles classical physics when viewed from the internal language of the presheaf topos, similar to the copresheaf topos model of Heunen, Landsman, and Spitters. Both the presheaf and copresheaf models provide a “quantum logic” in the form of a complete Heyting algebra. Although these algebras are natural from a topos theoretic stance, we seek a physical interpretation for the logical operations. Finally, we investigate dynamics. In particular, we describe how an automorphism on the operator algebra induces a homeomorphism (or isomorphism of locales) on the associated state spaces of the topos models, and how elementary propositions and truth values transform under the action of this homeomorphism. Also with dynamics the focus is on the internal perspective of the topos.

  14. Colimitation of decomposition by substrate and decomposers – a comparison of model formulations

    Directory of Open Access Journals (Sweden)

    M. Reichstein


    Decomposition of soil organic matter (SOM) is limited by both the available substrate and the active decomposer community. The understanding of this colimitation strongly affects the understanding of feedbacks of soil carbon to global warming and its consequences. This study compares different formulations of SOM decomposition. We compiled formulations from the literature into groups according to the representation of decomposer biomass in the SOM decomposition rate: (a) non-explicit (substrate only), (b) linear, and (c) non-linear. By varying the SOM decomposition equation in a basic simplified decomposition model, we analyzed the following questions. Is the priming effect represented? Under which conditions is SOM accumulation limited? And how do steady state SOM stocks scale with the amount of fresh organic matter (FOM) litter inputs? While formulations (a) did not represent the priming effect, with formulations (b) steady state SOM stocks were independent of the amount of litter input. Further, with several formulations (c) there was an offset of SOM that was not decomposed when no fresh OM was supplied. The finding that a part of the SOM is not decomposed on exhaustion of the FOM supply supports the hypothesis of carbon stabilization in deep soil by the absence of energy-rich fresh organic matter. Different representations of the colimitation of decomposition by substrate and decomposers in SOM decomposition models resulted in qualitatively different long-term behaviour. A collaborative effort by modellers and experimentalists is required to identify formulations that are more or less suitable to represent the most important drivers of long-term carbon storage.
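The three groups of formulations can be caricatured with one rate equation each. The parameter values and the saturating (Michaelis-Menten) form chosen for group (c) are illustrative assumptions, not the formulations compiled in the study.

```python
# Toy versions of the three decomposer-biomass representations:
# (a) substrate only, (b) linear in biomass B, (c) non-linear.

def dC_substrate_only(C, B, k=0.02):
    return -k * C                          # (a) decomposers not explicit

def dC_linear(C, B, k=0.0005):
    return -k * B * C                      # (b) rate linear in biomass

def dC_nonlinear(C, B, vmax=0.05, Km=50.0):
    return -vmax * B * C / (Km + C)        # (c) rate saturates in substrate

def simulate(rate, C0=100.0, B=10.0, steps=1000, dt=1.0):
    # forward-Euler integration with constant biomass and no fresh inputs
    C = C0
    for _ in range(steps):
        C = max(C + rate(C, B) * dt, 0.0)
    return C
```

Even this crude experiment shows the qualitative divergence the study describes: starting from identical stocks with no fresh litter, the three formulations leave very different amounts of SOM after the same time.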

  15. Formulation of a mathematical model for the analysis of the emission ...

    African Journals Online (AJOL)

    Formulation of a mathematical model for the analysis of the emission of carbon dioxide from gaseous fuel using least square method. ... Government to take control measures in curtailing the emission of Carbon Dioxide in the country. Keywords: Gaseous fuel, Automobile, Fossil fuels, Pollutants, Carbon Dioxide, Emissions ...

  16. Variational formulation for Black-Scholes equations in stochastic volatility models (United States)

    Gyulov, Tihomir B.; Valkov, Radoslav L.


    In this note we prove existence and uniqueness of weak solutions to a boundary value problem arising from stochastic volatility models in financial mathematics. Our settings are variational in weighted Sobolev spaces. Nevertheless, as it will become apparent our variational formulation agrees well with the stochastic part of the problem.

  17. Formulation of dynamical theory of X-ray diffraction for perfect crystals in the Laue case using the Riemann surface. (United States)

    Saka, Takashi


    The dynamical theory for perfect crystals in the Laue case was reformulated using the Riemann surface, as used in complex analysis. In the two-beam approximation, each branch of the dispersion surface is specified by one sheet of the Riemann surface. The characteristic features of the dispersion surface are analytically revealed using four parameters, which are the real and imaginary parts of two quantities specifying the degree of departure from the exact Bragg condition and the reflection strength. By representing these parameters on complex planes, these characteristics can be graphically depicted on the Riemann surface. In the conventional case, the absorption is small and the real part of the reflection strength is large, so the formulation is the same as the traditional analysis. However, when the real part of the reflection strength is small or zero, the two branches of the dispersion surface cross, and the dispersion relationship becomes similar to that of the Bragg case. This is because the geometrical relationships among the parameters are similar in both cases. The present analytical method is generally applicable, irrespective of the magnitudes of the parameters. Furthermore, the present method analytically revealed many characteristic features of the dispersion surface and will be quite instructive for further numerical calculations of rocking curves.

  18. Formulation of probabilistic models of protein structure in atomic detail using the reference ratio method

    DEFF Research Database (Denmark)

    Valentin, Jan B.; Andreetta, Christian; Boomsma, Wouter


    We propose a method to formulate probabilistic models of protein structure in atomic detail, for a given amino acid sequence, based on Bayesian principles, while retaining a close link to physics. We start from two previously developed probabilistic models of protein structure on a local length scale, which concern the dihedral angles in main chain and side chains, respectively. Conceptually, this constitutes a probabilistic and continuous alternative to the use of discrete fragment and rotamer libraries. The local model is combined with a nonlocal model that involves a small number of energy terms according to a physical force field, and some information on the overall secondary structure content. In this initial study we focus on the formulation of the joint model and the evaluation of the use of an energy vector as a descriptor of a protein's nonlocal structure; hence, we derive...

  19. Reliability and validation of a behavioral model of clinical behavioral formulation

    Directory of Open Access Journals (Sweden)

    Amanda M Muñoz-Martínez


    The aim of this study was to determine the reliability and the content and predictive validity of a clinical case formulation model developed from a behavioral perspective. A mixed design integrating levels of descriptive analysis and an A-B case study with follow-up was used. The study established the reliability of the following descriptive and explanatory categories: (a) problem description, (b) predisposing factors, (c) precipitating factors, (d) acquisition and (e) inferred (maintenance) mechanism. The analysis was performed on cases from 2005 to 2008 formulated with the model derived from the current study. With regard to validity, expert judges considered that the model had content validity. The predictive validity was established through application of the model to three case studies. The discussion shows the importance of extending the investigation of the model to other populations and of establishing the clinical and concurrent validity of the model.

  20. Improved Formulations for Air-Surface Exchanges Related to National Security Needs: Dry Deposition Models

    Energy Technology Data Exchange (ETDEWEB)

    Droppo, James G.


    The Department of Homeland Security and others rely on results from atmospheric dispersion models for threat evaluation, event management, and post-event analyses. The ability to simulate dry deposition rates is a crucial part of our emergency preparedness capabilities. Deposited materials pose potential hazards from radioactive shine, inhalation, and ingestion pathways. A reliable characterization of these potential exposures is critical for management and mitigation of these hazards. A review of the current status of dry deposition formulations used in these atmospheric dispersion models was conducted. The formulations for dry deposition of particulate materials from an event such as a radiological attack involving a Radiological Dispersal Device (RDD) are considered. The results of this effort are applicable to current emergency preparedness capabilities such as those deployed in the Interagency Modeling and Atmospheric Assessment Center (IMAAC), other similar national/regional emergency response systems, and standalone emergency response models. The review concludes that dry deposition formulations need to consider the full range of particle sizes, including: 1) the accumulation mode range (0.1 to 1 micron diameter) and its minimum in deposition velocity, 2) smaller particles (less than 0.01 micron diameter) deposited mainly by molecular diffusion, 3) 10 to 50 micron diameter particles deposited mainly by impaction and gravitational settling, and 4) larger particles (greater than 100 micron diameter) deposited mainly by gravitational settling. The effects of the local turbulence intensity, particle characteristics, and surface element properties must also be addressed in the formulations.
Specific areas for improvements in the dry deposition formulations are 1) capability of simulating near-field dry deposition patterns, 2) capability of addressing the full range of potential particle properties, 3) incorporation of particle surface retention/rebound processes, and
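The size regimes above can be reproduced qualitatively with textbook physics: Stokes settling grows as d², Brownian (Stokes-Einstein) diffusion as 1/d, and their combination has the accumulation-mode minimum noted in point 1. Simply adding the two mechanisms across a fixed quasi-laminar layer is a deliberate simplification for illustration.

```python
# Crude dry-deposition velocity vs particle diameter, showing the
# accumulation-mode minimum. Constants are standard air/particle values;
# the additive combination of mechanisms is an illustrative simplification.
import math

g = 9.81           # gravitational acceleration, m/s^2
mu = 1.8e-5        # dynamic viscosity of air, Pa*s
rho_p = 1500.0     # particle density, kg/m^3
kB = 1.380649e-23  # Boltzmann constant, J/K
T = 293.0          # temperature, K

def settling_velocity(d):
    """Stokes terminal velocity for diameter d (m); grows as d**2."""
    return rho_p * g * d**2 / (18.0 * mu)

def brownian_diffusivity(d):
    """Stokes-Einstein diffusivity (m^2/s), slip correction neglected."""
    return kB * T / (3.0 * math.pi * mu * d)

def crude_vd(d, layer=1e-3):
    """Deposition velocity: settling plus diffusion across a thin layer (m)."""
    return settling_velocity(d) + brownian_diffusivity(d) / layer

diameters = [1e-8, 1e-7, 1e-6, 1e-5, 1e-4]   # 0.01 to 100 micron
vd = [crude_vd(d) for d in diameters]
```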

  1. Stochastic Finite Element Analysis of Non-Linear Structures Modelled by Plasticity Theory

    DEFF Research Database (Denmark)

    Frier, Christian; Sørensen, John Dalsgaard


    A Finite Element Reliability Method (FERM) is introduced to perform reliability analyses on two-dimensional structures in plane stress, modeled by non-linear plasticity theory. FERM is a coupling between the First Order Reliability Method (FORM) and the Finite Element Method (FEM). FERM can be used with sensitivities obtained by the Direct Differentiation Method (DDM), here adapted to work with a generally formulated plasticity based constitutive model. The approach is exemplified with a steel plate with a hole in bending, subjected to a displacement based limit state function.

  2. Stabilization of glycoprotein liquid formulation using arginine: a study with lactoferrin as a model protein. (United States)

    Kim, Hyun-Jung; Shin, Chang Hoon; Kim, Chan-Wha


    The formulation of new biotherapeutics without human serum albumin (HSA) could decrease the potential risk of blood-transmitted diseases and those caused by infectious viruses and other pathogens. In the present study, arginine was examined as a potential alternative to HSA, and bovine lactoferrin (bLf) was used as a representative model glycoprotein since bLf has potential immunomodulatory and antiviral activity. The optimal formulation for the mixture was determined to be 10 mM arginine, 15% (w/v) trehalose, and 0.02% (v/v) Tween 80, using a statistical analysis program, Minitab. Analyses were performed using reverse-phase high-performance liquid chromatography (HPLC) and SDS-PAGE. The HSA-free formulations lost only 12-20% of bLf compared with 46% for the control (without additives) after 28 d of storage. Based on long-term stability studies, the HSA-free formulation developed in this study had a stronger effect on the stability of bLf (1.4-fold) than the HSA formulation under various storage conditions over 6 months.

  3. Assessing the Problem Formulation in an Integrated Assessment Model: Implications for Climate Policy Decision-Support (United States)

    Garner, G. G.; Reed, P. M.; Keller, K.


    Integrated assessment models (IAMs) are often used with the intent to aid in climate change decision-making. Numerous studies have analyzed the effects of parametric and/or structural uncertainties in IAMs, but uncertainties regarding the problem formulation are often overlooked. Here we use the Dynamic Integrated model of Climate and the Economy (DICE) to analyze the effects of uncertainty surrounding the problem formulation. The standard DICE model adopts a single objective: to maximize a weighted sum of utilities of per-capita consumption. Decision-makers, however, may be concerned with a broader range of values and preferences that are not captured by this a priori definition of utility. We reformulate the problem by introducing three additional objectives that represent values such as (i) reliably limiting global average warming to two degrees Celsius and minimizing both (ii) the costs of abatement and (iii) the damages due to climate change. We derive a set of Pareto-optimal solutions over which decision-makers can trade off and assess performance criteria a posteriori. We illustrate the potential for myopia in the traditional problem formulation and discuss the capability of this multiobjective formulation to provide decision support.
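The a-posteriori idea can be sketched with a toy two-objective version: evaluate candidate abatement levels on cost and damages and keep only the non-dominated set. The objective functions below are illustrative stand-ins, not DICE equations.

```python
# Pareto filtering of candidate abatement policies under two toy objectives.
def objectives(a):
    cost = a ** 2                         # abatement cost rises nonlinearly
    damages = max(1.0 - a, 0.2) ** 2      # damages fall, then hit a floor
    return (cost, damages)                # both are to be minimized

def dominates(p, q):
    """p dominates q if it is no worse in every objective and better in one."""
    return all(x <= y for x, y in zip(p, q)) and any(x < y for x, y in zip(p, q))

candidates = [i / 10 for i in range(11)]          # abatement fractions 0..1
scored = {a: objectives(a) for a in candidates}
pareto = [a for a in candidates
          if not any(dominates(scored[b], scored[a]) for b in candidates)]
```

Once damages stop improving, further abatement is dominated and dropped automatically; decision-makers then trade off along the surviving set rather than committing to one utility weighting in advance.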

  4. Sparse modeling theory, algorithms, and applications

    CERN Document Server

    Rish, Irina


    ""A comprehensive, clear, and well-articulated book on sparse modeling. This book will stand as a prime reference to the research community for many years to come.""-Ricardo Vilalta, Department of Computer Science, University of Houston""This book provides a modern introduction to sparse methods for machine learning and signal processing, with a comprehensive treatment of both theory and algorithms. Sparse Modeling is an ideal book for a first-year graduate course.""-Francis Bach, INRIA - École Normale Supřieure, Paris

  5. Temperature characteristics modeling of Preisach theory

    Directory of Open Access Journals (Sweden)

    Chen Hao


    This paper proposes a method for modeling the temperature characteristics of the Preisach theory. On the basis of the classical Preisach hysteresis model, the Curie temperature, the critical exponent and the ambient temperature are introduced, after which the effect of temperature on the magnetic properties of ferromagnetic materials can be accurately reflected. A simulation analysis and a temperature characteristic experiment with silicon steel were carried out. The simulated and experimental results are in close agreement, which demonstrates the validity and accuracy of the method.
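A minimal scalar Preisach sketch conveys the idea: a grid of relay hysterons is driven by the input history, and the summed output is scaled by a Curie-law factor (1 - T/Tc)^beta so the loop collapses as T approaches Tc. The grid size, Tc and exponent below are illustrative assumptions, not the paper's silicon-steel parameters.

```python
# Scalar Preisach model with a simple temperature-dependent output scaling.
def preisach_magnetization(inputs, T, Tc=1043.0, beta=0.36, n=20):
    # hysterons (b, a): switch up when H >= a, down when H <= b, with b <= a
    grid = [(-1 + 2*i/n, -1 + 2*j/n) for i in range(n + 1) for j in range(i, n + 1)]
    state = {h: -1 for h in grid}          # start fully "down"
    for H in inputs:                       # replay the input (field) history
        for (b, a) in grid:
            if H >= a:
                state[(b, a)] = 1
            elif H <= b:
                state[(b, a)] = -1
    # Curie-law scaling: magnetization vanishes at and above Tc
    scale = (1.0 - T / Tc) ** beta if T < Tc else 0.0
    return scale * sum(state.values()) / len(grid)
```

History dependence shows up as remanence: after saturating at H = +1 and returning to H = 0 the output stays positive, after H = -1 it stays negative, and above Tc both collapse to zero.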

  6. Criticism of the Classical Theory of Macroeconomic Modeling

    Directory of Open Access Journals (Sweden)

    Konstantin K. Kumehov


    Current approaches to and methods of modeling macroeconomic systems do not generate research ideas that could be used in applications. This is largely because the dominant economic schools and research directions build their theories on misconceptions about the economic system as an object of modeling, and have no common methodological approaches to the design of macroeconomic models. All of them focus on building models aimed at establishing equilibrium parameters of supply and demand, production and consumption. At the same time, the resource potential and the needs of society for material and other benefits are not considered as underlying factors. In addition, there is no unity in the choice of elements and of the mechanisms of interaction between them: it is not settled whether the elements of the model should be institutions, industries, the population, banks, classes, etc. From a methodological point of view, all the most well-known authors design their models by extrapolating from past states or past events. As a result, by the time a model is ready the situation has changed, and the past parameters underlying the model have lost relevance, so at best the researcher can only interpret events and parameters that will not recur in the future. In this paper, based on an analysis of the works of famous authors belonging to different schools and areas, the weaknesses of their proposed macroeconomic models, which do not allow them to be used to solve applied problems of economic development, are revealed. Fundamentally new approaches and methods by which macroeconomic models can be constructed that take into account both theoretical and applied aspects of modeling are proposed, and the basic methodological requirements are formulated.

  7. Theory, modeling and simulation: Annual report 1993

    Energy Technology Data Exchange (ETDEWEB)

    Dunning, T.H. Jr.; Garrett, B.C.


    Developing the knowledge base needed to address the environmental restoration issues of the US Department of Energy requires a fundamental understanding of molecules and their interactions in isolation and in liquids, on surfaces, and at interfaces. To meet these needs, the PNL has established the Environmental and Molecular Sciences Laboratory (EMSL) and will soon begin construction of a new, collaborative research facility devoted to advancing the understanding of environmental molecular science. Research in the Theory, Modeling, and Simulation (TMS) program, which is one of seven research directorates in the EMSL, will play a critical role in understanding molecular processes important in restoring DOE's research, development and production sites, including understanding the migration and reactions of contaminants in soils and groundwater, the development of separation processes for the isolation of pollutants, the development of improved materials for waste storage, understanding the enzymatic reactions involved in the biodegradation of contaminants, and understanding the interaction of hazardous chemicals with living organisms. The research objectives of the TMS program are to apply available techniques to study fundamental molecular processes involved in natural and contaminated systems; to extend current techniques to treat molecular systems of future importance and to develop techniques for addressing problems that are computationally intractable at present; to apply molecular modeling techniques to simulate molecular processes occurring in the multispecies, multiphase systems characteristic of natural and polluted environments; and to extend current molecular modeling techniques to treat complex molecular systems and to improve the reliability and accuracy of such simulations. The program contains three research activities: Molecular Theory/Modeling, Solid State Theory, and Biomolecular Modeling/Simulation. Extended abstracts are presented for 89 studies.

  8. Modification of Concrete Damaged Plasticity model. Part II: Formulation and numerical tests

    Directory of Open Access Journals (Sweden)

    Kamińska Inez


    A refined model for elastoplastic damaged material is formulated based on the plastic potential introduced in Part I [1]. The considered model is an extension of the Concrete Damaged Plasticity material implemented in Abaqus [2]. In the paper the stiffness tensor for elastoplastic damaged behaviour is derived. In order to validate the model, computations for uniaxial tests are performed. The response of the model for various choices of parameters is shown and compared to the response of the CDP model.

  9. In vivo effects of traditional Ayurvedic formulations in Drosophila melanogaster model relate with therapeutic applications.

    Directory of Open Access Journals (Sweden)

    Vibha Dwivedi

    BACKGROUND: Ayurveda represents the traditional medicine system of India. Since mechanistic details of therapy in terms of current biology are not available in the Ayurvedic literature, modern scientific studies are necessary to understand its major concepts and procedures. It is necessary to examine the effects of whole Ayurvedic formulations rather than their "active" components, as is done in most current studies. METHODS: We tested two different categories of formulations, a Rasayana (Amalaki Rasayana or AR, an herbal derivative) and a Bhasma (Rasa-Sindoor or RS, an organo-metallic derivative of mercury), for effects on longevity, development, fecundity, stress-tolerance, and heterogeneous nuclear ribonucleoprotein (hnRNP) levels in Drosophila melanogaster, using at least 200 larvae or flies for each assay. RESULTS: A 0.5% (weight/volume) supplement of AR or RS affected life-history and other physiological traits in distinct ways. While the size of the salivary glands, hnRNP levels in larval tissues, and thermotolerance of larvae/adult flies improved significantly following feeding of either of the two formulations, the median life span and starvation resistance improved only with AR. Feeding on AR- or RS-supplemented food affected fecundity differently: feeding both larvae and adults with AR increased fecundity, while the same with RS had the opposite effect. On the contrary, feeding larvae on normal food and adults on the AR supplement had no effect on fecundity, but a comparable regime of feeding on RS-supplemented food improved fecundity. RS feeding did not cause heavy metal toxicity. CONCLUSIONS: The present study with two Ayurvedic formulations reveals formulation-specific effects on several parameters of the fly's life, which seem to generally agree with their recommended human usages in Ayurvedic practices.
Thus, Drosophila, with its very rich genetic tools and well-worked-out developmental pathways promises to be a very good model for examining the cellular

  10. Use of preclinical dog studies and absorption modeling to facilitate late stage formulation bridging for a BCS II drug candidate. (United States)

    Kesisoglou, Filippos


Formulation changes are common during drug development due to either clinical or manufacturing considerations. These changes, especially at later stages of drug development, often raise questions about the potential impact of a new formulation on bioavailability. In this work, the preclinical assessment of formulation bridging risk for a Biopharmaceutics Classification System II development compound is presented. Early clinical studies were conducted using a liquid-filled capsule (LFC). To assess the feasibility of a conventional solid dosage form, an initial analysis was conducted using absorption modeling, which indicated that a conventional formulation of micronized active pharmaceutical ingredient (API) could be a viable option. Subsequently, test formulations were prepared and tested in vivo in dogs. The solid formulations were able to match exposures of the LFC capsule in the dog model; in addition, a sensitivity to API particle size distribution (PSD) was observed, in line with the modeling predictions. When tested in the clinic, the conventional solid formulation resulted in exposures approximately 25% lower than the LFC on an equivalent dose basis; however, bridging with a small dose adjustment would be feasible. The outcome of the clinical study was better predicted by the modeling approach, while the dog model appeared to somewhat overestimate absorption. Through the use of preclinical tools and modeling and simulation, a risk assessment around formulation bridging can be conducted to inform formulation decisions or subsequent clinical study designs.

  11. Genetic model compensation: Theory and applications (United States)

    Cruickshank, David Raymond


    The adaptive filtering algorithm known as Genetic Model Compensation (GMC) was originally presented in the author's Master's Thesis. The current work extends this earlier work. GMC uses a genetic algorithm to optimize filter process noise parameters in parallel with the estimation of the state and based only on the observational information available to the filter. The original stochastic state model underlying GMC was inherited from the antecedent, non-adaptive Dynamic Model Compensation (DMC) algorithm. The current work develops the stochastic state model from a linear system viewpoint, avoiding the simplifications and approximations of the earlier development, and establishes Riemann sums as unbiased estimators of the stochastic integrals which describe the evolution of the random state components. These are significant developments which provide GMC with a solid theoretical foundation. Orbit determination is the area of application in this work, and two types of problems are studied: real-time autonomous filtering using absolute GPS measurements and precise post-processed filtering using differential GPS measurements. The first type is studied in a satellite navigation simulation in which pseudorange and pseudorange rate measurements are processed by an Extended Kalman Filter which incorporates both DMC and GMC. Both estimators are initialized by a geometric point solution algorithm. Using measurements corrupted by simulated Selective Availability errors, GMC reduces mean RSS position error by 6.4 percent, reduces mean clock bias error by 46 percent, and displays a marked improvement in covariance consistency relative to DMC. To study the second type of problem, GMC is integrated with NASA Jet Propulsion Laboratory's Gipsy/Oasis-II (GOA-II) precision orbit determination program creating an adaptive version of GOA-II's Reduced Dynamic Tracking (RDT) process noise formulation. 
When run as a sequential estimator with GPS measurements from the TOPEX satellite and

  12. Formulation of the low-energy effective theory of electroweak symmetry-breaking without a Higgs particle; Formulation de la theorie effective a basse energie du secteur electrofaible sans particule de Higgs

    Energy Technology Data Exchange (ETDEWEB)

    Hirn, J


The low-energy effective theory of electroweak symmetry-breaking without a Higgs particle is constructed using the methods of Chiral Perturbation Theory. Weinberg's power-counting formula demonstrates the consistency of the loop expansion, with the corresponding renormalization. We find that the suppression of effective operators by a mass scale, which was automatic in the case of the Standard Model, no longer holds in the Higgs-less case. Moreover, the offending operators appear at leading order in the chiral expansion, at variance with experiment. To account for their suppression, invariance under a larger symmetry is required, corresponding to the composite sector (which produces the three Goldstone modes) being decoupled from the elementary sector (quarks, leptons and Yang-Mills fields). The couplings are introduced via spurions: this reduces the symmetry to SU(2) x U(1). In the simultaneous expansion in powers of momenta and spurions, the aforementioned operators are relegated to higher orders. In addition, the method allows for a systematic treatment of weak isospin breaking. The Weinberg power-counting formula can be recovered, and small neutrino masses accounted for. The three right-handed neutrinos (lighter than a TeV), which are introduced in connection with the custodial symmetry, are quasi-sterile and stable. A constraint on the underlying theory is obtained by studying the anomaly-matching in the composite sector and generalizing the Wess-Zumino construction. The spurion formalism is also applied to open linear moose models, for which generalized Weinberg sum rules are derived. (author)

  13. On the formulation, parameter identification and numerical integration of the EMMI model :plasticity and isotropic damage.

    Energy Technology Data Exchange (ETDEWEB)

    Bammann, Douglas J.; Johnson, G. C. (University of California, Berkeley, CA); Marin, Esteban B.; Regueiro, Richard A. (University of Colorado, Boulder, CO)


In this report we present the formulation of the physically-based Evolving Microstructural Model of Inelasticity (EMMI). The specific version of the model treated here describes the plasticity and isotropic damage of metals as currently applied to model the ductile failure process in structural components of the W80 program. The formulation of the EMMI constitutive equations is framed in the context of the large deformation kinematics of solids and the thermodynamics of internal state variables. This formulation focuses first on developing the plasticity equations in both the relaxed (unloaded) and current configurations. The equations in the current configuration, expressed in non-dimensional form, are used to devise the identification procedure for the plasticity parameters. The model is then extended to include a porosity-based isotropic damage state variable to describe the progressive deterioration of the strength and mechanical properties of metals induced by deformation. The numerical treatment of these coupled plasticity-damage constitutive equations is explained in detail. A number of examples are solved to validate the numerical implementation of the model.

  14. Elastically cooperative activated barrier hopping theory of relaxation in viscous fluids. I. General formulation and application to hard sphere fluids. (United States)

    Mirigian, Stephen; Schweizer, Kenneth S


    We generalize the force-level nonlinear Langevin equation theory of single particle hopping to include collective effects associated with long range elastic distortion of the liquid. The activated alpha relaxation event is of a mixed spatial character, involving two distinct, but inter-related, local and collective barriers. There are no divergences at volume fractions below jamming or temperatures above zero Kelvin. The ideas are first developed and implemented analytically and numerically in the context of hard sphere fluids. In an intermediate volume fraction crossover regime, the local cage process is dominant in a manner consistent with an apparent Arrhenius behavior. The super-Arrhenius collective barrier is more strongly dependent on volume fraction, dominates the highly viscous regime, and is well described by a nonsingular law below jamming. The increase of the collective barrier is determined by the amplitude of thermal density fluctuations, dynamic shear modulus or transient localization length, and a growing microscopic jump length. Alpha relaxation time calculations are in good agreement with recent experiments and simulations on dense fluids and suspensions of hard spheres. Comparisons of the theory with elastic models and entropy crisis ideas are explored. The present work provides a foundation for constructing a quasi-universal, fit-parameter-free theory for relaxation in thermal molecular liquids over 14 orders of magnitude in time.

  15. Linear sigma model for multiflavor gauge theories (United States)

    Meurice, Y.


We consider a linear sigma model describing 2N_f^2 bosons (σ, a_0, η′ and π) as an approximate effective theory for an SU(3) local gauge theory with N_f Dirac fermions in the fundamental representation. The model has a renormalizable U(N_f)_L ⊗ U(N_f)_R invariant part, which has an approximate O(2N_f^2) symmetry, and two additional terms, one describing the effects of an SU(N_f)_V invariant mass term and the other the effects of the axial anomaly. We calculate the spectrum for arbitrary N_f. Using preliminary and published lattice results from the LatKMI collaboration, we found combinations of the masses that vary slowly with the explicit chiral symmetry breaking and N_f. This suggests that the anomaly term plays a leading role in the mass spectrum and that simple formulas such as M_σ^2 ≃ (2/N_f − C_σ) M_η′^2 should apply in the chiral limit. Lattice measurements of M_η′^2 and of approximate constants such as C_σ could help in locating the boundary of the conformal window. We show that our calculation can be adapted for arbitrary representations of the gauge group and in particular to the minimal model with two sextets, where similar patterns are likely to apply.
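The quoted chiral-limit mass relation is simple enough to evaluate directly; a minimal sketch with illustrative placeholder values (these are not LatKMI numbers):

```python
def sigma_mass_squared(nf, m_etaprime_sq, c_sigma):
    """Chiral-limit estimate M_sigma^2 ~ (2/Nf - C_sigma) * M_etaprime^2
    from the abstract; all inputs are in the same (lattice) units."""
    return (2.0 / nf - c_sigma) * m_etaprime_sq

# Illustrative inputs only: Nf = 8, M_etaprime^2 = 1.0, C_sigma = 0.1
m_sigma_sq = sigma_mass_squared(8, 1.0, 0.1)  # 2/8 - 0.1 = 0.15 in these units
```

Note that as N_f grows toward 2/C_σ the predicted M_σ^2 crosses zero, which is the kind of signal that measurements of C_σ could exploit when locating the conformal window.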

  16. Standard Models from Heterotic M-theory

    CERN Document Server

    Donagi, R Y; Pantev, T; Waldram, D; Donagi, Ron; Ovrut, Burt A.; Pantev, Tony; Waldram, Daniel


    We present a class of N=1 supersymmetric models of particle physics, derived directly from heterotic M-theory, that contain three families of chiral quarks and leptons coupled to the gauge group $SU(3)_C\\times SU(2)_{L}\\times U(1)_{Y}$. These models are a fundamental form of ``brane-world'' theories, with an observable and hidden sector each confined, after compactification on a Calabi-Yau threefold, to a BPS three-brane separated by a five dimensional bulk space with size of the order of the intermediate scale. The requirement of three families, coupled to the fundamental conditions of anomaly freedom and supersymmetry, constrains these models to contain additional five-branes wrapped around holomorphic curves in the Calabi-Yau threefold. These five-branes ``live'' in the bulk space and represent new, non-perturbative aspects of these particle physics vacua. We discuss, in detail, the relevant mathematical structure of a class of torus-fibered Calabi-Yau threefolds with non-trivial first homotopy groups and ...

  17. A matrix model from string field theory

    Directory of Open Access Journals (Sweden)

    Syoji Zeze


Full Text Available We demonstrate that a Hermitian matrix model can be derived from level-truncated open string field theory with Chan-Paton factors. The Hermitian matrix is coupled with a scalar and U(N) vectors which are responsible for the D-brane at the tachyon vacuum. The effective potential for the scalar is evaluated both for finite and large N. An increase of the potential height is observed in both cases. The large-N matrix integral is identified with a system of N ZZ branes and a ghost FZZT brane.

  18. Additive Dose Response Models: Explicit Formulation and the Loewe Additivity Consistency Condition

    Directory of Open Access Journals (Sweden)

    Simone Lederer


Full Text Available High-throughput techniques allow for massive screening of drug combinations. To find combinations that exhibit an interaction effect, one filters for promising compound combinations by comparing to a response without interaction. A common principle for no interaction is Loewe Additivity, which is based on the assumption that no compound interacts with itself and that two doses from different compounds having the same effect are equivalent. It then should not matter whether a component is replaced by the other or vice versa. We call this assumption the Loewe Additivity Consistency Condition (LACC). We derive explicit and implicit null reference models from the Loewe Additivity principle that are equivalent when the LACC holds. Of these two formulations, the implicit formulation is the known General Isobole Equation (Loewe, 1928), whereas the explicit one is the novel contribution. The LACC is violated in a significant number of cases; in this scenario the models make different predictions. We analyze two data sets of drug screening that are non-interactive (Cokol et al., 2011; Yadav et al., 2015) and show that the LACC is mostly violated and Loewe Additivity not defined. Further, we compare the measurements of the non-interactive cases of both data sets to the theoretical null reference models in terms of bias and mean squared error. We demonstrate that the explicit formulation of the null reference model leads to smaller mean squared errors than the implicit one and is much faster to compute.
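The two null-model formulations coincide when the LACC holds. For two agents with Hill dose-response curves sharing a common slope, the explicit model is plain dose summation in equivalent units, and the implicit General Isobole Equation can be checked directly; a sketch with hypothetical parameters:

```python
def hill(d, ec50, n):
    """Hill dose-response curve; effect increases from 0 toward 1."""
    return d**n / (d**n + ec50**n)

def inv_hill(e, ec50, n):
    """Dose that produces effect e on a Hill curve (0 < e < 1)."""
    return ec50 * (e / (1.0 - e)) ** (1.0 / n)

def loewe_explicit(d1, d2, ec50_1, ec50_2, n):
    """Explicit Loewe null response for equal Hill slopes: express d2 in
    drug-1-equivalent units, then read off drug 1's curve."""
    return hill(d1 + d2 * ec50_1 / ec50_2, ec50_1, n)
```

With `e = loewe_explicit(d1, d2, ...)`, the implicit General Isobole Equation d1/D1(e) + d2/D2(e) = 1 (where D_i is the inverse dose-response) is satisfied exactly; once the slopes differ, an LACC violation, the two formulations start to disagree.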

  19. Mechanism for enhanced absorption of a solid dispersion formulation of LY2300559 using the artificial stomach duodenum model. (United States)

    Polster, Christopher S; Wu, Sy-Juen; Gueorguieva, Ivelina; Sperry, David C


    An artificial stomach duodenum (ASD) model has been used to demonstrate the performance difference between two formulations of LY2300559, a low-solubility acidic developmental drug. The two formulations investigated were a conventional high-shear wet granulation (HSWG) formulation and a solid dispersion formulation. A pharmacokinetic study in humans demonstrated the enhanced performance of the solid dispersion formulation relative to the HSWG formulation. The Cmax and AUC of the solid dispersion was 2.6 and 1.9 times greater, respectively, compared to the HSWG formulation. In the ASD, the solid dispersion formulation performance was characterized by three main phases: (1) rapid release in the stomach, creating a supersaturated concentration of drug, (2) precipitation in the stomach, and (3) rapid redissolution of the precipitate in the duodenum to concentration levels that are supersaturated relative to crystalline drug. A series of complementary experiments were employed to describe this performance behavior mechanistically. Imaging experiments with a pH indicating dye showed that local pH gradients from meglumine in the solid dispersion formulation were responsible for creating a high initial supersaturation concentration in the stomach. Upon dissipation of meglumine, the drug precipitated in the stomach as an amorphous solid. Because the precipitated drug is in an amorphous form, it can then rapidly redissolve as it transits to the more neutral environment of the duodenum. This unexpected sequence of physical state changes gives a mechanistic explanation for the enhanced in vivo performance of the solid dispersion formulation relative to the HSWG formulation.

  20. A direct comparison of a depth-dependent Radiation stress formulation and a Vortex force formulation within a three-dimensional coastal ocean model (United States)

    Moghimi, Saeed; Klingbeil, Knut; Gräwe, Ulf; Burchard, Hans


In this study a model system consisting of the three-dimensional General Estuarine Transport Model (GETM) and the third-generation wind wave model SWAN was developed. Both models were coupled in two-way mode. The effects of waves were included in the ocean model by implementing the depth-dependent Radiation stress formulation (RS) of Mellor (2011a) and the Vortex force formulation (VF) presented by Bennis et al. (2011). Thus, the developed model system offers a direct comparison of these two formulations. The enhancement of the vertical eddy viscosity due to the energy transfer by whitecapping and breaking waves was taken into account by means of injecting turbulent kinetic energy at the surface. Wave-current interaction inside the bottom boundary layer was considered as well. The implementation of both wave-averaged formulations was validated against three flume experiments. One of these experiments, with long-period surface waves (swell), had not been evaluated before. The validation showed the capability of the model system to reproduce the three-dimensional interaction of waves and currents. For the flume test cases the wave-induced water level changes (wave set-up and set-down) and the corresponding depth-integrated wave-averaged velocities were similar for RS and VF. Both formulations produced comparable velocity profiles for short-period waves. However, for long-period waves, VF overestimated the wave set-down near the main breaking points and RS showed artificial offshore-directed transport at the surface where wave shoaling was taking place. Finally the validated model system was applied to a realistic barred beach scenario. For RS and VF the resulting velocity profiles were similar after being significantly improved by a roller evolution method. Both wave-averaged formulations generally provided similar results, but some shortcomings were revealed.
Although VF partly showed significant deviations from the measurements, its results were still physically

  1. Formulation of probabilistic models of protein structure in atomic detail using the reference ratio method. (United States)

    Valentin, Jan B; Andreetta, Christian; Boomsma, Wouter; Bottaro, Sandro; Ferkinghoff-Borg, Jesper; Frellsen, Jes; Mardia, Kanti V; Tian, Pengfei; Hamelryck, Thomas


    We propose a method to formulate probabilistic models of protein structure in atomic detail, for a given amino acid sequence, based on Bayesian principles, while retaining a close link to physics. We start from two previously developed probabilistic models of protein structure on a local length scale, which concern the dihedral angles in main chain and side chains, respectively. Conceptually, this constitutes a probabilistic and continuous alternative to the use of discrete fragment and rotamer libraries. The local model is combined with a nonlocal model that involves a small number of energy terms according to a physical force field, and some information on the overall secondary structure content. In this initial study we focus on the formulation of the joint model and the evaluation of the use of an energy vector as a descriptor of a protein's nonlocal structure; hence, we derive the parameters of the nonlocal model from the native structure without loss of generality. The local and nonlocal models are combined using the reference ratio method, which is a well-justified probabilistic construction. For evaluation, we use the resulting joint models to predict the structure of four proteins. The results indicate that the proposed method and the probabilistic models show considerable promise for probabilistic protein structure prediction and related applications. Copyright © 2013 Wiley Periodicals, Inc.

  2. Using chemical organization theory for model checking. (United States)

    Kaleta, Christoph; Richter, Stephan; Dittrich, Peter


The increasing number and complexity of biomodels makes automatic procedures for checking the models' properties and quality necessary. Approaches like elementary mode analysis, flux balance analysis, deficiency analysis and chemical organization theory (OT) require only the stoichiometric structure of the reaction network for derivation of valuable information. In formalisms like the Systems Biology Markup Language (SBML), however, information about the stoichiometric coefficients required for an analysis of chemical organizations can be hidden in kinetic laws. First, we introduce an algorithm that uncovers stoichiometric information that might be hidden in the kinetic laws of a reaction network. This allows us to apply OT to SBML models using modifiers. Second, using the new algorithm, we performed a large-scale analysis of the 185 models contained in the manually curated BioModels Database. We found that for 41 models (22%) the set of organizations changes when modifiers are considered correctly. We discuss one of these models in detail (BIOMD149, a combined model of the ERK- and Wnt-signaling pathways), whose set of organizations drastically changes when modifiers are considered. Third, we found inconsistencies in 5 models (3%) and identified their characteristics. Compared with flux-based methods, OT identifies more accurately, in 26 cases (14%), those species and reactions that can be present in a long-term simulation of the model. We conclude that our approach is a valuable tool that helps to improve the consistency of biomodels and their repositories. All data and a Java applet to check SBML models are available; supplementary data are available at Bioinformatics online.
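The organization-theoretic part of the analysis rests on simple set operations over the stoichiometric structure; a toy sketch of the closure computation (self-maintenance, the other defining property of an organization, requires a flux-based linear-programming test and is omitted here):

```python
def closure(species, reactions):
    """Compute the closure of a species set: repeatedly add the products
    of every reaction whose reactants are all already present. A chemical
    organization is a set that is both closed and self-maintaining."""
    s = set(species)
    changed = True
    while changed:
        changed = False
        for reactants, products in reactions:
            if set(reactants) <= s and not set(products) <= s:
                s |= set(products)
                changed = True
    return s

# Toy network: a + b -> c, c -> d, x -> y
rxns = [({"a", "b"}, {"c"}), ({"c"}, {"d"}), ({"x"}, {"y"})]
```

For this toy network, `closure({"a", "b"}, rxns)` yields `{"a", "b", "c", "d"}`: the species reachable from the seed set, which is the first thing that changes when hidden stoichiometry (e.g. from modifiers) is uncovered.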

  3. A Beddoes-Leishman type dynamic stall model in state-space and indicial formulations

    DEFF Research Database (Denmark)

    Hansen, M.H.; Gaunaa, Mac; Aagaard Madsen, Helge


This report contains a description of a Beddoes-Leishman type dynamic stall model in both a state-space and an indicial function formulation. The model predicts the unsteady aerodynamic forces and moment on an airfoil section undergoing arbitrary motion in heave, lead-lag, and pitch. The model...... is carried out by comparing the response of the model with inviscid solutions and observing the general behavior of the model using known airfoil data as input. The proposed dynamic model gives results identical to inviscid solutions within the attached-flow region; and it exhibits the expected dynamic...... features, such as overshoot of the lift, in the stall region. The linearized model is shown to give identical results to the full model for small amplitude oscillations. Furthermore, it is shown that the response of finite thickness airfoils can be reproduced to a high accuracy by the use of specific

  4. Formulation of consumables management models: Mission planning processor payload interface definition (United States)

    Torian, J. G.


Consumables models required for the mission planning and scheduling function are formulated. The relation of the models to prelaunch, onboard, ground support, and postmission functions for the space transportation systems is established. An analytical model consisting of an orbiter planning processor with a consumables data base is developed. Also presented are a method of recognizing potential constraint violations in both the planning and flight operations functions, and a flight data file for storage/retrieval of information over an extended period, which interfaces with a flight operations processor for monitoring of the actual flights.

  5. Stochastic linear programming models, theory, and computation

    CERN Document Server

    Kall, Peter


    This new edition of Stochastic Linear Programming: Models, Theory and Computation has been brought completely up to date, either dealing with or at least referring to new material on models and methods, including DEA with stochastic outputs modeled via constraints on special risk functions (generalizing chance constraints, ICC’s and CVaR constraints), material on Sharpe-ratio, and Asset Liability Management models involving CVaR in a multi-stage setup. To facilitate use as a text, exercises are included throughout the book, and web access is provided to a student version of the authors’ SLP-IOR software. Additionally, the authors have updated the Guide to Available Software, and they have included newer algorithms and modeling systems for SLP. The book is thus suitable as a text for advanced courses in stochastic optimization, and as a reference to the field. From Reviews of the First Edition: "The book presents a comprehensive study of stochastic linear optimization problems and their applications. … T...

  6. Estimation of Soil Electrical Properties in a Multilayer Earth Model with Boundary Element Formulation

    Directory of Open Access Journals (Sweden)

    T. Islam


Full Text Available This paper presents an efficient model for estimation of soil electric resistivity with depth and layer thickness in a multilayer earth structure. The model is an improvement of the conventional two-layer earth model, including Wenner resistivity formulations with boundary conditions. The two-layer soil model shows limitations in specific soil characterizations of different layers with the interrelationships between soil apparent electrical resistivity (ρ) and several soil physical or chemical properties. In the multilayer soil model, the soil resistivity and electric potential at any point in a multilayer anisotropic soil medium are expressed according to the variation of electric field intensity for geotechnical investigations. For most soils with varying layers, a multilayer soil resistivity profile is therefore more suitable to determine soil type and bulk density of compacted soil and to detect anomalous materials in soil. A boundary element formulation is implemented to show the multilayer soil model with boundary conditions in soil resistivity estimations. Numerical results of soil resistivity ratio and potential differences for different layers are presented to illustrate the application, accuracy, and efficiency of the proposed model. The novelty of the research is in obtaining multilayer soil characterizations through soil electric properties in the near-surface soil profile.
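For the conventional two-layer model that the paper improves on, the Wenner apparent resistivity has a classical image-series expression; a sketch with illustrative layer parameters (not data from the paper):

```python
import math

def wenner_apparent_resistivity(rho1, rho2, h, a, n_terms=200):
    """Classical image-series solution for the Wenner apparent resistivity
    over a two-layer earth. rho1, rho2: layer resistivities (ohm-m);
    h: top-layer thickness (m); a: electrode spacing (m). The series is
    truncated at n_terms image charges."""
    k = (rho2 - rho1) / (rho2 + rho1)   # reflection coefficient
    s = 0.0
    for n in range(1, n_terms + 1):
        m = 2.0 * n * h / a
        s += k**n * (1.0 / math.sqrt(1.0 + m * m)
                     - 1.0 / math.sqrt(4.0 + m * m))
    return rho1 * (1.0 + 4.0 * s)
```

Small spacings sense only the top layer (result near rho1) and large spacings approach rho2, which is the depth-sounding behavior the multilayer formulation generalizes.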

  7. Towards the formulation of a realistic 3D model for simulation of magnetron injection guns for gyrotrons. A preliminary study

    Energy Technology Data Exchange (ETDEWEB)

    Sabchevski, S. [Bulgarian Academy of Sciences (Bulgaria). Institute of Electronics; Zhelyazkov, I. [Sofia Univ. (Bulgaria). Faculty of Physics; Illy, S.; Piosczyk, B.; Borie, E.


Numerical experiments based on adequate, self-consistent physical models implemented in simulation codes are widely used for computer-aided design (CAD), analysis and optimization of the electron optical systems (EOS) of gyrotrons. An essential part of the physical model is the emission model, i.e., the relations that govern the value of the beam current extracted from the emitter as well as its energy spectrum and spatial and angular distribution. In this paper, we present a compendium of the basic theory and the most essential formulas, and discuss the most important factors responsible for the nonuniformity of the emission and velocity spread. We also review the emission models realized in various ray-tracing and Particle-In-Cell (PIC) codes and present a general formulation of a 3D emission model based on the principle of decomposition of the region near the cathode into a set of equivalent diodes. It is believed that the information summarized in this compendium will be helpful for the development of novel modules for calculation of the initial distribution, both in the available 2D computer programs that are currently being upgraded and in the novel 3D simulation tools now under development. (orig.)

  8. Dependence in probabilistic modeling, Dempster-Shafer theory, and probability bounds analysis.

    Energy Technology Data Exchange (ETDEWEB)

    Oberkampf, William Louis; Tucker, W. Troy (Applied Biomathematics, Setauket, NY); Zhang, Jianzhong (Iowa State University, Ames, IA); Ginzburg, Lev (Applied Biomathematics, Setauket, NY); Berleant, Daniel J. (Iowa State University, Ames, IA); Ferson, Scott (Applied Biomathematics, Setauket, NY); Hajagos, Janos (Applied Biomathematics, Setauket, NY); Nelsen, Roger B. (Lewis & Clark College, Portland, OR)


    This report summarizes methods to incorporate information (or lack of information) about inter-variable dependence into risk assessments that use Dempster-Shafer theory or probability bounds analysis to address epistemic and aleatory uncertainty. The report reviews techniques for simulating correlated variates for a given correlation measure and dependence model, computation of bounds on distribution functions under a specified dependence model, formulation of parametric and empirical dependence models, and bounding approaches that can be used when information about the intervariable dependence is incomplete. The report also reviews several of the most pervasive and dangerous myths among risk analysts about dependence in probabilistic models.
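A minimal example of the dependence-free bounding idea the report reviews: whatever the (unknown) dependence model, the joint CDF H(x, y) is bracketed by the Fréchet-Hoeffding bounds, computed from the marginals alone.

```python
def frechet_bounds(f, g):
    """Frechet-Hoeffding bounds on a joint CDF H(x, y) given only the
    marginal values F(x) = f and G(y) = g. Any dependence model, including
    independence, must produce a joint CDF inside these bounds."""
    return max(f + g - 1.0, 0.0), min(f, g)

lo, hi = frechet_bounds(0.7, 0.6)  # (0.3, 0.6)
```

The independence value F(x)G(y) = 0.42 lies strictly inside this interval, illustrating how much a joint probability can move when the intervariable dependence is left unspecified.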

  9. A LQ-based kinetic model formulation for exploring dynamics of treatment response of tumours in patients. (United States)

    Scheidegger, Stephan; Lutters, Gerd; Bodis, Stephan


A kinetic bio-mathematical, linear-quadratic (LQ) based model description for clonogenic survival is presented. In contrast to widely used formulations of models, a dynamic approach based on ordinary differential equations for coupling a repair model with a tumour growth model is used to allow analysis of intercellular process dynamics and submodel interference. The purpose of the model formulation is to find a quantitative framework for investigation of tumour response to radiotherapy in vivo. It is not the intention of the proposed model formulation to give a mechanistic explanation for cellular repair processes. This article addresses bio-mathematical aspects of the simplistic kinetic approach used for description of repair. The model formulation includes processes for cellular death, repopulation and cellular repair. The explicit use of the population size in the model facilitates the coupling of the sub-models including aspects of tissue dynamics (competition, oxygenation). The cellular repair is summarized by using a kinetic model for a dose equivalent Γ describing production and elimination of sublethal lesions. This dose equivalent replaces the absorbed dose used in the common LQ model. Therefore, this approach is called the Γ-LQ formulation. A comparison with two kinetic radiobiological models (the LPL model of Curtis and the compartmental model of Carlone) is carried out. The resulting differential equations are solved by numerical integration using a Runge-Kutta algorithm. The comparison reveals a good agreement between the Γ-LQ formulation and the models of Curtis and Carlone under certain, defined conditions: the proposed formulation leads to results which are identical to the model of Carlone over a wide range of investigated biological parameters and different fractionation schemes when using first order repair kinetics. The comparison with experimental data and the LPL model of Curtis shows a good agreement of the Γ-LQ formulation using
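The kinetic idea can be sketched in a few lines. The equations below are an assumed minimal form (constant dose rate, first-order repair of sublethal lesions), not the paper's exact system:

```python
import math

def gamma_lq_survival(dose, dose_rate, gamma, alpha, beta, dt=0.001):
    """Minimal sketch of a Gamma-LQ-style model: a dose equivalent Gamma
    builds up at the (constant) dose rate and is eliminated by first-order
    repair with rate gamma; clonogenic survival then follows the LQ form
    with Gamma replacing absorbed dose. Parameter values are illustrative."""
    t_irr = dose / dose_rate            # irradiation time
    g, t = 0.0, 0.0
    while t < t_irr:                    # forward-Euler step of dG/dt = R - gamma*G
        g += (dose_rate - gamma * g) * dt
        t += dt
    return math.exp(-alpha * g - beta * g * g)
```

With gamma = 0 no sublethal damage is repaired during irradiation, Γ equals the absorbed dose, and the expression reduces to the classical LQ survival exp(-αD - βD²); increasing gamma raises survival at fixed dose, the protraction effect the kinetic formulation is designed to capture.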

  10. Investigation of an artificial intelligence technology--Model trees. Novel applications for an immediate release tablet formulation database. (United States)

    Shao, Q; Rowe, R C; York, P


This study has investigated an artificial intelligence technology - model trees - as a modelling tool applied to an immediate release tablet formulation database. The modelling performance was compared with artificial neural networks, which are well established and widely applied in the pharmaceutical product formulation field. The predictability of the generated models was validated on unseen data and judged by the correlation coefficient R^2. Output from the model tree analyses produced multivariate linear equations which predicted tablet tensile strength, disintegration time, and drug dissolution profiles with similar quality to neural network models. However, additional and valuable knowledge hidden in the formulation database was extracted from these equations. It is concluded that, as a transparent technology, model trees are useful tools for formulators.
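A toy illustration of the model-tree idea (a single split with a linear equation in each leaf; not the algorithm or dataset used in the study):

```python
def linfit(xs, ys):
    """Ordinary least squares for y = a + b*x (closed form)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

def model_tree_1d(xs, ys):
    """Toy one-split model tree: split at the median x and fit a linear
    equation in each leaf. Each leaf is a readable equation, which is the
    transparency advantage over neural networks noted in the abstract.
    Assumes both sides of the split are non-empty."""
    split = sorted(xs)[len(xs) // 2]
    left = [(x, y) for x, y in zip(xs, ys) if x < split]
    right = [(x, y) for x, y in zip(xs, ys) if x >= split]
    fits = {True: linfit(*zip(*left)), False: linfit(*zip(*right))}
    def predict(x):
        a, b = fits[x < split]
        return a + b * x
    return predict
```

Real model-tree learners (e.g. M5) choose splits greedily and prune, but the output has the same shape: a small set of multivariate linear equations, one per leaf.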

  11. Early Formulation Model-centric Engineering on NASA's Europa Mission Concept Study (United States)

    Bayer, Todd; Chung, Seung; Cole, Bjorn; Cooke, Brian; Dekens, Frank; Delp, Chris; Gontijo, Ivair; Lewis, Kari; Moshir, Mehrdad; Rasmussen, Robert


    The proposed Jupiter Europa Orbiter (JEO) and Jupiter Ganymede Orbiter missions were formulated using current state-of-the-art MBSE facilities: JPL's TeamX and Rapid Mission Architecting, ESA's Concurrent Design Facility, and APL's ACE Concurrent Engineering Facility. When JEO became an official "pre-project" in September 2010, we had already developed a strong partnership with JPL's Integrated Model Centric Engineering (IMCE) initiative, decided to apply architecting and SysML-based MBSE from the beginning, and begun laying these foundations to support work in Phase A. The release of the Planetary Science Decadal Survey and the FY12 President's Budget in March 2011 changed the landscape, and JEO reverted to being a pre-Phase A study. A conscious choice was made to continue the application of MBSE on the Europa Study, refocused for early formulation. This presentation describes the approach, results, and lessons learned.

  12. On the formulation and computer implementation of an age-dependent two-sex demographic model. (United States)

    Mode, C J; Salsburg, M A


    A two-sex age-dependent demographic model is formulated within the framework of a stochastic population process, including both time-homogeneous and time-inhomogeneous laws of evolution. An outline of the parametric components of the system, which expedite computer implementation and experimentation, is also given. New features of the model include a component for couple formation, using the class of Farlie-Morgenstern bivariate distributions to accommodate age preferences in selecting marriage partners, a component for couple dissolution due to separation or divorce, and an outline of techniques for initializing a two-sex projection given scanty information. For the case of time-homogeneous laws of evolution, stability properties of two-sex models that are analogs of those for one-sex models are difficult to prove mathematically due to nonlinearities. But computer experiments in this case suggest that these properties continue to hold for two-sex models for such widely used demographic indicators as period crude birth rates, period rates of natural increase, and period age distributions, which converge to constant forms in long-term projections. The values of the stable crude birth rate, rate of natural increase, and quantiles of the stable age distribution differ markedly among projections that differ only in selected values of parameters governing couple formation and dissolution. Such experimental results demonstrate that two-sex models are not merely intellectual curiosities but exist in their own right and lead to insights not attainable in simpler one-sex formulations.
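    The Farlie-Gumbel-Morgenstern (FGM) family mentioned for couple formation has the copula C(u,v) = uv[1 + θ(1−u)(1−v)] with |θ| ≤ 1, so age-preference pairs can be drawn by conditional inversion. A minimal sketch (the mapping of the uniforms u, v to partner ages via marginal distributions is omitted; θ is an assumed parameter):

```python
import math
import random

def fgm_sample(theta, rng=random):
    """Draw (u, v) from the FGM copula C(u,v) = u*v*[1 + theta*(1-u)*(1-v)],
    |theta| <= 1, by inverting the conditional distribution of v given u."""
    u, w = rng.random(), rng.random()
    a = theta * (1.0 - 2.0 * u)
    if abs(a) < 1e-12:          # conditional is uniform when a == 0
        return u, w
    # Conditional CDF: w = v + a*v*(1 - v); solve a*v^2 - (1+a)*v + w = 0
    # and take the root lying in [0, 1].
    v = ((1.0 + a) - math.sqrt((1.0 + a) ** 2 - 4.0 * a * w)) / (2.0 * a)
    return u, v
```

    FGM dependence is deliberately weak (Spearman's rho = θ/3, so at most 1/3 in magnitude), which makes it a mild, tractable way to encode age preferences between marriage partners.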

  13. Modeling and Optimization: Theory and Applications Conference

    CERN Document Server

    Terlaky, Tamás


    This volume contains a selection of contributions that were presented at the Modeling and Optimization: Theory and Applications Conference (MOPTA) held at Lehigh University in Bethlehem, Pennsylvania, USA on August 17-19, 2016. The conference brought together a diverse group of researchers and practitioners, working on both theoretical and practical aspects of continuous or discrete optimization. Topics presented included algorithms for solving convex, network, mixed-integer, nonlinear, and global optimization problems, and addressed the application of deterministic and stochastic optimization techniques in energy, finance, logistics, analytics, health, and other important fields. The contributions contained in this volume represent a sample of these topics and applications and illustrate the broad diversity of ideas discussed at the meeting.

  14. Conformal Field Theory Applied to Loop Models (United States)

    Jacobsen, Jesper Lykke

    The application of methods of quantum field theory to problems of statistical mechanics can in some sense be traced back to Onsager's 1944 solution [1] of the two-dimensional Ising model. It does however appear fair to state that the 1970s witnessed a real gain of momentum for this approach, when Wilson's ideas on scale invariance [2] were applied to study critical phenomena, in the form of the celebrated renormalisation group [3]. In particular, the so-called ɛ expansion permitted the systematic calculation of critical exponents [4], as formal power series in the space dimensionality d, below the upper critical dimension d_c. An important lesson of these efforts was that critical exponents often do not depend on the precise details of the microscopic interactions, leading to the notion of a restricted number of distinct universality classes.
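    To make the ɛ expansion concrete: the standard first-order result for the correlation-length exponent of the O(N) model in d = 4 − ɛ dimensions is ν = 1/2 + (N+2)ɛ/(4(N+8)) + O(ɛ²). Evaluating it is a one-liner (this is the textbook series, not a result of the article above):

```python
def nu_first_order(N, eps):
    """First-order epsilon-expansion estimate of the correlation-length
    exponent nu for the O(N) universality class in d = 4 - eps dimensions:
    nu = 1/2 + (N + 2) * eps / (4 * (N + 8)) + O(eps^2)."""
    return 0.5 + (N + 2) * eps / (4.0 * (N + 8))
```

    For the Ising class (N = 1) in d = 3 (ɛ = 1) this gives ν ≈ 0.583, already within about 8% of the accepted ν ≈ 0.63, independent of microscopic details, which illustrates the universality point made above.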

  15. Formulation, construction and analysis of kinetic models of metabolism: A review of modelling frameworks

    DEFF Research Database (Denmark)

    Saa, Pedro A.; Nielsen, Lars K.


    capabilities. In this review, we present an overview of the relevant mathematical frameworks for kinetic formulation, construction and analysis. Starting with kinetic formalisms, we next review statistical methods for parameter inference, as well as recent computational frameworks applied to the construction...

  16. Stochastic user equilibrium with equilibrated choice sets: Part I - Model formulations under alternative distributions and restrictions

    DEFF Research Database (Denmark)

    Watling, David Paul; Rasmussen, Thomas Kjær; Prato, Carlo Giacomo


    …the advantages of the two principles, namely the definition of unused routes in DUE and of mis-perception in SUE, such that the resulting choice sets of used routes are equilibrated. Two model families are formulated to address this issue: the first is a general version of SUE permitting bounded and discrete error distributions; the second is a Restricted SUE model with an additional constraint that must be satisfied for unused paths. The overall advantage of these model families consists in their ability to combine the unused routes with the use of random utility models for used routes, without the need to pre-specify the choice set. We present model specifications within these families, show illustrative examples, evaluate their relative merits, and identify key directions for further research.

  17. A Realizability Model for Impredicative Hoare Type Theory

    DEFF Research Database (Denmark)

    Petersen, Rasmus Lerchedal; Birkedal, Lars; Nanevski, Alexandar


    We present a denotational model of impredicative Hoare Type Theory, a very expressive dependent type theory in which one can specify and reason about mutable abstract data types. The model ensures soundness of the extension of Hoare Type Theory with impredicative polymorphism; makes the connections to separation logic clear; and provides a basis for investigation of further sound extensions of the theory, in particular equations between computations and types.

  18. A constrained multinomial Probit route choice model in the metro network: Formulation, estimation and application. (United States)

    Zhang, Yongsheng; Yao, Enjian; Wei, Heng; Zheng, Kangning


    Considering that metro network expansion provides more alternative routes, it is attractive to integrate the impacts of the route set and the interdependency among alternative routes into route choice modeling. Therefore, the formulation, estimation and application of a constrained multinomial probit (CMNP) route choice model in the metro network are carried out in this paper. The utility function is formulated as three components: the compensatory component is a function of influencing factors; the non-compensatory component measures the impacts of the route set on utility; and, following a multivariate normal distribution, the covariance of the error component is structured into three parts, representing the correlation among routes, the transfer variance of a route, and the unobserved variance, respectively. Because the multivariate normal probability density function requires multidimensional integrals, the CMNP model is rewritten in a hierarchical Bayes form, and a Metropolis-Hastings (M-H) sampling based Markov chain Monte Carlo approach is constructed to estimate all parameters. Based on Guangzhou Metro data, reliable estimation results are obtained. Furthermore, the proposed CMNP model also shows good forecasting performance for the calculation of route choice probabilities and good application performance for transfer flow volume prediction.
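    The M-H sampling step at the heart of such MCMC estimation can be illustrated with a generic random-walk Metropolis-Hastings sampler. This is a minimal sketch, not the CMNP estimator: the target below is a standard normal stand-in for an intractable posterior, and the single scalar parameter is an assumption for brevity:

```python
import math
import random

def metropolis_hastings(log_post, x0, n_iter, step, rng=random):
    """Random-walk Metropolis-Hastings: propose x' = x + N(0, step^2),
    accept with probability min(1, post(x')/post(x)).  Returns the chain."""
    chain, x, lp = [], x0, log_post(x0)
    for _ in range(n_iter):
        prop = x + rng.gauss(0.0, step)
        lp_prop = log_post(prop)
        if math.log(rng.random()) < lp_prop - lp:  # accept/reject in log space
            x, lp = prop, lp_prop
        chain.append(x)
    return chain
```

    In a hierarchical Bayes treatment of the CMNP model, each covariance and utility parameter would be updated by a step of this kind conditional on the others, avoiding the multidimensional probit integrals.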

  19. A finite element formulation for modeling dynamic wetting on flexible substrates and in deformable porous media.

    Energy Technology Data Exchange (ETDEWEB)

    Schunk, Peter Randall; Cairncross, Richard A. (Drexel University, Philadelphia, PA); Madasu, S. (Drexel University, Philadelphia, PA)


    This report summarizes research advances pursued with award funding issued by the DOE to Drexel University through the Presidential Early Career Award (PECASE) program. Professor Rich Cairncross was the recipient of this award in 1997. With it he pursued two related research topics under Sandia's guidance that address the outstanding issue of fluid-structural interactions of liquids with deformable solid materials, focusing mainly on the ubiquitous dynamic wetting problem. The project focus in the first four years was aimed at deriving a predictive numerical modeling approach for the motion of the dynamic contact line on a deformable substrate. A formulation of physical model equations was derived in the context of the Galerkin finite element method in an arbitrary Lagrangian/Eulerian (ALE) frame of reference. The formulation was successfully integrated in Sandia's Goma finite element code and tested on several technologically important thin-film coating problems. The model equations, the finite-element implementation, and results from several applications are given in this report. In the last year of the five-year project the same physical concepts were extended towards the problem of capillary imbibition in deformable porous media. A synopsis of this preliminary modeling and experimental effort is also discussed.

  20. Modeling Poker Challenges by Evolutionary Game Theory

    Directory of Open Access Journals (Sweden)

    Marco Alberto Javarone


    We introduce a model for studying the evolutionary dynamics of Poker. Notably, despite its wide diffusion and the scientific interest raised around it, Poker still represents an open challenge. Recent attempts to uncover its real nature, based on statistical physics, showed that Poker in some conditions can be considered a skill game. In addition, preliminary investigations reported a neat difference between tournaments and ‘cash game’ challenges, i.e., between the two main configurations for playing Poker. Notably, these previous models analyzed populations composed of rational and irrational agents, identifying the former as those that play Poker using a mathematical strategy and the latter as those playing randomly. Remarkably, tournaments require very few rational agents to make Poker a skill game, while ‘cash game’ may require several rational agents for not being classified as gambling. In addition, when the agent interactions are based on the ‘cash game’ configuration, the population shows an interesting bistable behavior that deserves further attention. In the proposed model, we aim to study the evolutionary dynamics of Poker using the framework of Evolutionary Game Theory, in order to gain further insights on its nature and to better clarify those points that remained open in the previous works (such as the mentioned bistable behavior). In particular, we analyze the dynamics of an agent population composed of rational and irrational agents that modify their behavior driven by two possible mechanisms: self-evaluation of the gained payoff, and social imitation. Results allow us to identify a relation between the mechanisms for updating the agents’ behavior and the final equilibrium of the population. Moreover, the proposed model provides further details on the bistable behavior observed in the ‘cash game’ configuration.
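    Payoff-driven imitation dynamics of the kind described above can be sketched with a discrete replicator update for the fraction x of rational agents. The payoff functions below are hypothetical frequency-dependent fitnesses chosen only to exhibit bistability; they are not the Poker payoffs of the paper:

```python
def replicator_step(x, payoff_rational, payoff_irrational):
    """One discrete replicator/imitation update for the fraction x of rational
    agents; each payoff function takes the current fraction x (frequency-
    dependent fitness) and the strategy with above-average payoff grows."""
    fr, fi = payoff_rational(x), payoff_irrational(x)
    mean = x * fr + (1 - x) * fi
    return x * fr / mean if mean > 0 else x

def evolve(x0, steps, payoff_rational, payoff_irrational):
    x = x0
    for _ in range(steps):
        x = replicator_step(x, payoff_rational, payoff_irrational)
    return x
```

    With payoffs fr(x) = 1 + 2x and fi(x) = 2 − x the interior equilibrium sits at x = 1/3: populations starting below it converge to all-irrational, those above it to all-rational, which is the qualitative shape of the bistable behavior mentioned for the ‘cash game’ configuration.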

  1. A Semi-Implicit Free Surface Formulation for the Semi-Collocated Grid Diecast Ocean Model (United States)


    …a free-surface formulation for the DieCAST ocean model that retains the accurate, low-dissipation numerics of its latest and best semi-collocated rigid-lid DieCAST version (reflecting the fact that the rigid-lid barotropic mode numerics are also a sigma-like approach) and requires less than 50 percent more computing. Thus, the numerics used by both rigid-lid and free-surface DieCAST versions combine the best of z-level and sigma coordinate numerics, as well as the best of 'a' and 'c' grid numerics.

  2. Locally covariant quantum field theory and the problem of formulating the same physics in all space-times. (United States)

    Fewster, Christopher J


    The framework of locally covariant quantum field theory is discussed, motivated in part using 'ignorance principles'. It is shown how theories can be represented by suitable functors, so that physical equivalence of theories may be expressed via natural isomorphisms between the corresponding functors. The inhomogeneous scalar field is used to illustrate the ideas. It is argued that there are two reasonable definitions of the local physical content associated with a locally covariant theory; when these coincide, the theory is said to be dynamically local. The status of the dynamical locality condition is reviewed, as are its applications in relation to (i) the foundational question of what it means for a theory to represent the same physics in different space-times and (ii) a no-go result on the existence of natural states. © 2015 The Author(s) Published by the Royal Society. All rights reserved.

  3. Generation companies decision-making modeling by linear control theory

    Energy Technology Data Exchange (ETDEWEB)

    Gutierrez-Alcaraz, G. [Programa de Graduados e Investigacion en Ingenieria Electrica. Departamento de Ingenieria Electrica y Electronica, Instituto Tecnologico de Morelia. Av. Tecnologico 1500, Col. Lomas de Santiaguito 58120. Morelia, Mich. (Mexico); Sheble, Gerald B. [INESC Porto, Faculdade de Engenharia, Universidade do Porto, Campus da FEUP, Rua Dr. Roberto Frias, 4200-465 Porto (Portugal)


    This paper proposes four decision-making procedures to be employed by electric generating companies (GENCOs) as part of their bidding strategies when competing in an oligopolistic market: naive, forward, adaptive, and moving average expectations. Decision-making is formulated in a dynamic framework by using linear control theory. The results reveal that interactions among all GENCOs affect market dynamics. Several numerical examples are reported, and conclusions are presented. (author)
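    The four expectation-formation rules have standard textbook forms, sketched below on a scalar price series. The gain and window parameters are illustrative defaults, not values from the paper:

```python
def naive(history):
    """Naive expectations: the next price equals the last observed price."""
    return history[-1]

def forward(history, trend_weight=1.0):
    """Forward (extrapolative) expectations: last price plus the latest change."""
    return history[-1] + trend_weight * (history[-1] - history[-2])

def adaptive(history, prev_expectation, gain=0.5):
    """Adaptive expectations: correct the previous expectation by a fraction
    (the gain) of its observed error."""
    return prev_expectation + gain * (history[-1] - prev_expectation)

def moving_average(history, window=3):
    """Moving-average expectations over the last `window` observations."""
    tail = history[-window:]
    return sum(tail) / len(tail)
```

    The rules differ in how much weight they give to the latest observation versus past information, which is what makes the induced market dynamics differ across GENCO strategies.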

  4. A modified slope-dependent formulation for groundwater runoff in a regional climate model (United States)

    Schlemmer, Linda; Strebel, Lukas; Keller, Michael; Lüthi, Daniel; Schär, Christoph


    Soil moisture influences the state of the overlying atmosphere considerably and thus plays a major role in the climate system. Its spatial distribution is strongly modulated by the underlying orography. Yet, the vertical transport of soil water and especially the drainage at the bottom of the soil column is currently treated in a very crude way in most atmospheric models. This potentially leads to large biases in near-surface temperatures during summertime as the soil dries out and induces elevation-dependent biases in climate simulations. We present a modified formulation for the groundwater runoff formation in the regional climate model COSMO-CLM (multi-layer soil model TERRA_ML). It is based on Darcy's law, allows for saturated aquifers and includes a slope-dependent discharge. Employing flux limiters ensures a physically consistent treatment. An implementation of this formulation into TERRA_ML is tested and validated both in idealized and real-case simulations for cloud-resolving as well as hydrostatic scales. Idealized simulations display a physically meaningful recharge and discharge of the saturated zone and exhibit a closed water budget. Decade-long climate simulations over Europe exhibit a more realistic representation of the groundwater distribution in mountainous areas, an improved annual cycle of surface latent heat fluxes and as a consequence reductions of the long-standing bias in near-surface temperatures in semi-arid regions.
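    The key ingredients above (a Darcy-type, slope-dependent discharge plus a flux limiter that keeps the water budget closed) can be sketched for a single soil layer. The functional form of the discharge and all parameter names here are simplifying assumptions, not the TERRA_ML implementation:

```python
import math

def slope_runoff(theta, theta_sat, k_sat, slope_angle, dt, dz):
    """Toy slope-dependent groundwater discharge for one soil layer.
    theta: volumetric water content [-]; theta_sat: content at saturation;
    k_sat: saturated hydraulic conductivity [m/s]; slope_angle [rad];
    dt: time step [s]; dz: layer thickness [m].
    Returns (updated theta, runoff water depth [m])."""
    # Darcy-type lateral flux driven by the slope component of gravity,
    # scaled by relative wetness.
    q = k_sat * math.sin(slope_angle) * (theta / theta_sat)
    # Flux limiter: never remove more water than the layer actually holds,
    # so theta stays non-negative and the budget stays closed.
    d_theta = min(q * dt / dz, theta)
    return theta - d_theta, d_theta * dz
```

    The limiter is what guarantees the "physically consistent treatment" and closed water budget reported for the idealized simulations: removed water always reappears as runoff, never as a negative soil moisture.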

  5. Theory and Modeling in Support of Tether (United States)

    Chang, C. L.; Bergeron, G.; Drobot, A. D.; Papadopoulos, K.; Riyopoulos, S.; Szuszczewicz, E.


    This final report summarizes the work performed by SAIC's Applied Physics Operation on the modeling and support of the Tethered Satellite System missions (TSS-1 and TSS-1R). The SAIC team, known as the Theory and Modeling in Support of Tether (TMST) investigation, was one of the original twelve teams selected in July 1985 for the first TSS mission. The accomplishments described in this report cover the period December 19, 1985 to September 31, 1999 and are the result of a continuous effort aimed at supporting the TSS missions in the following major areas. During the contract period, the SAIC TMST investigation acted to: participate in the planning and execution of both TSS missions; provide scientific understanding of the issues involved in electrodynamic tether system operation prior to the TSS missions; predict ionospheric conditions encountered during the re-flight mission (TSS-1R) based on real-time global ionosonde data; and perform post-mission analyses to enhance our understanding of the TSS results. Specifically, we 1) constructed and improved current collection models and enhanced our understanding of the current-voltage data; 2) investigated the effects of neutral gas in the current collection processes; 3) conducted laboratory experiments to study the discharge phenomena during and after tether-break; and 4) performed numerical simulations to understand data collected by the plasma instrument SPES onboard the TSS satellite. We also designed and produced a multi-media CD that highlights TSS mission achievements and conveys knowledge of tether technology to the general public. Along with discussions of this work, a list of publications and presentations derived from the TMST investigation spanning the reporting period is compiled.

  6. Mean field theories and dual variation mathematical structures of the mesoscopic model

    CERN Document Server

    Suzuki, Takashi


    Mean field approximation has been adopted to describe macroscopic phenomena from microscopic overviews, and remains an active area across fluid mechanics, gauge theory, plasma physics, quantum chemistry, mathematical oncology, and non-equilibrium thermodynamics. In spite of such a wide range of scientific areas concerned with the mean field theory, a unified study of its mathematical structure has not been discussed explicitly in the open literature. The benefit of this point of view on nonlinear problems should have significant impact on future research, as will be seen from the underlying features of self-assembly or bottom-up self-organization, which is illustrated in a unified way. The aim of this book is to formulate the variational and hierarchical aspects of the equations that arise in the mean field theory, from macroscopic profiles to microscopic principles, from dynamics to equilibrium, and from biological models to models that arise from chemistry and physics.

  7. Simultaneous geologic scenario identification and flow model calibration with group-sparsity formulations (United States)

    Golmohammadi, Azarang; Jafarpour, Behnam


    Adopting representative geologic connectivity scenarios is critical for reliable modeling and prediction of flow and transport processes in subsurface environments. Geologic scenarios are often developed by integrating several sources of information, including knowledge of the depositional environment, qualitative and quantitative data such as outcrop and well logs, and process-based geologic modeling. In general, flow and transport response data are usually not included in constructing geologic scenarios for a basin. Instead, these data are typically matched using a given prior geologic scenario as constraint. Since data limitations, modeling assumptions and subjective interpretations can lead to significant uncertainty in the adopted geologic scenarios, flow and transport data may also be useful for constraining the uncertainty in proposed geologic scenarios. Constraining geologic scenarios with flow-related data opens an interesting and challenging research area, which goes beyond the traditional model calibration formulations where the geologic scenario is assumed given. In this paper, a novel concept, known as group-sparsity regularization, is proposed as an effective formulation to constrain the uncertainty in the prior geologic scenario during subsurface flow model calibration. Given a collection of model realizations from several plausible geologic scenarios, the proposed method first applies the truncated singular value decomposition (TSVD) to compactly represent the models from each geologic scenario. The TSVD basis for representing each scenario forms a distinct group. The proposed approach searches over these groups (i.e., geologic scenarios) to eliminate inconsistent groups that are not supported by the observed flow/pressure data. 
The group-sparsity regularization minimizes a mixed l1/l2 norm, where the l2-norm quantifies the contribution of each group and operates on the coefficients within the groups, while the l1-norm, having a selection property, is
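    The mixed l1/l2 norm itself is simple to state in code: take the l2 norm within each group of coefficients, then sum (an l1 norm) across groups. A minimal sketch with hypothetical coefficients and group indices (in the paper the groups are the TSVD bases of the geologic scenarios):

```python
import math

def group_mixed_norm(coeffs, groups):
    """Mixed l1/l2 norm used in group-sparsity regularization:
    sum over groups of the l2 norm of that group's coefficients.
    `groups` is a list of index lists partitioning `coeffs`."""
    return sum(math.sqrt(sum(coeffs[i] ** 2 for i in idx)) for idx in groups)
```

    Penalizing this quantity drives whole groups of coefficients to zero together, which is exactly the mechanism that lets the calibration eliminate geologic scenarios unsupported by the flow and pressure data.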

  8. Reconstructing Constructivism: Causal Models, Bayesian Learning Mechanisms, and the Theory Theory (United States)

    Gopnik, Alison; Wellman, Henry M.


    We propose a new version of the "theory theory" grounded in the computational framework of probabilistic causal models and Bayesian learning. Probabilistic models allow a constructivist but rigorous and detailed approach to cognitive development. They also explain the learning of both more specific causal hypotheses and more abstract framework…

  9. Internal Universes in Models of Homotopy Type Theory

    DEFF Research Database (Denmark)

    Licata, Daniel R.; Orton, Ian; Pitts, Andrew M.


    We show that universes of fibrations in various models of homotopy type theory have an essentially global character: they cannot be described in the internal language of the presheaf topos from which the model is constructed. We get around this problem by extending the internal language with a mo...... that the interval in cubical sets does indeed have. This leads to a completely internal development of models of homotopy type theory within what we call crisp type theory....

  10. Catastrophe Theory: A Unified Model for Educational Change. (United States)

    Cryer, Patricia; Elton, Lewis


    Catastrophe Theory and Herzberg's theory of motivation at work were used to create a model of change that unifies and extends Lewin's two separate stage and force-field models. This new model is used to analyze the behavior of academics as they adapt to the changing university environment. (Author/MLW)

  11. A Leadership Identity Development Model: Applications from a Grounded Theory (United States)

    Komives, Susan R.; Mainella, Felicia C.; Longerbeam, Susan D.; Osteen, Laura; Owen, Julie E.


    This article describes a stage-based model of leadership identity development (LID) that resulted from a grounded theory study on developing a leadership identity (Komives, Owen, Longerbeam, Mainella, & Osteen, 2005). The LID model expands on the leadership identity stages, integrates the categories of the grounded theory into the LID model, and…

  12. The big bang theory and Universe modeling. Mistakes in the relativity theory


    Javadov, Khaladdin; Javadli, Elmaddin


    This article is about the theory of the Big Bang and describes some details of Universe modelling: physical and mathematical modeling of Universe formation, and the application of mathematical and physical formulas to Universe calculations.

  13. Theory and modeling of active brazing.

    Energy Technology Data Exchange (ETDEWEB)

    van Swol, Frank B.; Miller, James Edward; Lechman, Jeremy B.; Givler, Richard C.


    Active brazes have been used for many years to produce bonds between metal and ceramic objects. By including a relatively small amount of a reactive additive in the braze, one seeks to improve the wetting and spreading behavior of the braze. The additive modifies the substrate, either by a chemical surface reaction or possibly by alloying. By its nature, the joining process with active brazes is a complex, non-equilibrium, non-steady-state process that couples chemical reaction and reactant and product diffusion to the rheology and wetting behavior of the braze. Most of these subprocesses take place in the interfacial region, and most are difficult to access by experiment. To improve control over the brazing process, one requires a better understanding of the melting of the active braze, the rate of the chemical reaction, reactant and product diffusion rates, and the non-equilibrium composition-dependent surface tension, as well as the viscosity. This report identifies ways in which modeling and theory can assist in improving our understanding.

  14. Efficient modeling of flat and homogeneous acoustic treatments for vibroacoustic finite element analysis. Direct field formulations (United States)

    Alimonti, L.; Atalla, N.


    This paper is concerned with the development of a simplified model for noise control treatments to speed up finite element analysis in vibroacoustic applications. The methodology relies on the assumption that the acoustic treatment is flat and homogeneous. Moreover, its finite lateral extent is neglected. This hypothesis is justified by short wavelengths and large dissipation, which suggest that the reflected field emanating from the acoustic treatment's lateral boundaries does not substantially affect its dynamic response. Under these circumstances, the response of the noise control treatment can be formally obtained by means of convolution integrals involving simple analytical kernels (i.e. Green functions). Such fundamental solutions can be computed efficiently by the transfer matrix method. However, some arbitrariness arises in the formulation of the mathematical model, resulting in different baffling conditions at the two ends of the treatment to be considered. Thus, the paper investigates the possibility of different formulations (i.e. baffling conditions) within the same hybrid finite element-transfer matrix framework, seeking the best strategy in terms of the tradeoff between efficiency and accuracy. Numerical examples are provided to show the strengths and limitations of the proposed methodology.
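    The transfer matrix method referred to above chains 2x2 matrices that relate pressure and normal velocity across each layer. A minimal sketch for a homogeneous fluid layer at normal incidence (layer properties are illustrative; poroelastic layers need larger matrices than shown here):

```python
import cmath

def fluid_layer_matrix(omega, thickness, rho, c):
    """2x2 transfer matrix of a homogeneous fluid layer at normal incidence,
    relating (pressure, normal velocity) on its two faces.
    k = omega / c is the wavenumber; Z = rho * c the characteristic impedance."""
    k, Z = omega / c, rho * c
    kd = k * thickness
    return [[cmath.cos(kd), 1j * Z * cmath.sin(kd)],
            [1j * cmath.sin(kd) / Z, cmath.cos(kd)]]

def chain(*mats):
    """Multiply layer matrices front-to-back to model a multilayer treatment."""
    out = [[1, 0], [0, 1]]
    for m in mats:
        out = [[out[0][0] * m[0][0] + out[0][1] * m[1][0],
                out[0][0] * m[0][1] + out[0][1] * m[1][1]],
               [out[1][0] * m[0][0] + out[1][1] * m[1][0],
                out[1][0] * m[0][1] + out[1][1] * m[1][1]]]
    return out
```

    Because each layer contributes one cheap matrix product, evaluating the Green-function kernels this way is far faster than meshing the treatment in the finite element model, which is the efficiency argument of the hybrid approach.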

  15. Formulating Fine to Medium Sand Erosion for Suspended Sediment Transport Models

    Directory of Open Access Journals (Sweden)

    François Dufois


    The capacity of an advection/diffusion model to predict sand transport under varying wave and current conditions is evaluated. The horizontal sand transport rate is computed by vertical integration of the suspended sediment flux. A correction procedure for the near-bed concentration is proposed so that model results are independent of the vertical resolution. The method can thus be implemented in regional models with operational applications. Simulating equilibrium sand transport rates, when erosion and deposition are balanced, requires a new empirical erosion law that involves the non-dimensional excess shear stress and a parameter that depends on the size of the sand grain. Comparison with several datasets and sediment transport formulae demonstrated the model’s capacity to simulate sand transport rates for a large range of current and wave conditions and sand diameters in the range 100–500 μm. Measured transport rates were predicted within a factor of two in 67% of cases with current only and in 35% of cases with both waves and current. In comparison with the results obtained by Camenen and Larroudé (2003), who provided the same indicators for several practical transport rate formulations (whose means are respectively 72% and 37%), the proposed approach gives reasonable results. Before fitting a new erosion law to our model, classical erosion rate formulations were tested but led to poor comparisons with expected sediment transport rates. We suggest that classical erosion laws should be used with care in advection/diffusion models similar to ours, and that at least a full validation procedure for transport rates involving a range of sand diameters and hydrodynamic conditions should be carried out.
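    Erosion laws of the family discussed above typically take a threshold/power form in the non-dimensional excess shear stress. A generic sketch (the constants e0 and n, and this exact functional form, are placeholders; the paper fits its own grain-size-dependent law):

```python
def erosion_rate(tau, tau_cr, e0, n=1.5):
    """Generic threshold erosion law: E = e0 * max(tau/tau_cr - 1, 0)**n,
    where tau is the bed shear stress, tau_cr the critical shear stress for
    the grain size, e0 a dimensional erosion constant, and n an exponent."""
    excess = tau / tau_cr - 1.0
    return e0 * excess ** n if excess > 0.0 else 0.0
```

    The threshold guarantees no erosion below the critical shear stress, and the exponent controls how steeply resuspension grows once the bed is mobilized; fitting e0 and n per grain diameter is what ties such a law to a specific sand class.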

  16. Obesity and internalized weight stigma: a formulation model for an emerging psychological problem. (United States)

    Ratcliffe, Denise; Ellison, Nell


    Obese individuals frequently experience weight stigma, and this is associated with psychological distress and difficulties. The process of external devaluation can lead to negative self-perception and evaluation, and some obese individuals develop "internalized weight stigma". The prevalence of weight stigma is well established, but there is a lack of information about the interplay between external and internal weight stigma. The aim was to synthesize the literature on the psychological effects of weight stigma into a formulation model that addresses the maintenance of internalized weight stigma. Current research on the psychological impact of weight stigma was reviewed, and we identify cognitive, behavioural and attentional processes that maintain psychological conditions where self-evaluation plays a central role. A model was developed based on clinical utility, focusing on identifying factors that influence and maintain internalized weight stigma. We highlight the impact of negative societal and interpersonal experiences of weight stigma on how individuals view themselves as obese persons. Processing the self as a stigmatized individual is at the core of the model. Maintenance factors include negative self-judgements about the meaning of being an obese individual, attentional and mood shifts, and avoidance and safety behaviours. In addition, eating and weight management behaviours become deregulated and maintain both obesity and weight stigma. As obesity increases, weight stigma and its associated psychological effects are likely to increase. We provide a framework for formulating and intervening with internalized weight stigma, as well as making therapists aware of the applicability and transferability of strategies they may already use with other presenting problems.

  17. The Standard Model is Natural as Magnetic Gauge Theory

    DEFF Research Database (Denmark)

    Sannino, Francesco


    We suggest that the Standard Model can be viewed as the magnetic dual of a gauge theory featuring only fermionic matter content. We show this by first introducing a Pati-Salam-like extension of the Standard Model and then relating it to a possible dual electric theory featuring only fermionic matter. The absence of scalars in the electric theory indicates that the associated magnetic theory is free from quadratic divergences. Our novel solution to the Standard Model hierarchy problem also leads to a new insight on the mystery of the observed number of fundamental fermion generations…

  18. Rational estimation of the optimum amount of non-fibrous disintegrant applying percolation theory for binary fast disintegrating formulation. (United States)

    Krausbauer, Etienne; Puchkov, Maxim; Betz, Gabriele; Leuenberger, Hans


    The purpose of this study was to propose a method of determining the exact value of the disintegrant ratio in a binary drug-disintegrant compacted mixture that minimizes disintegration time in the case of spherical particles. Disintegration is a limiting factor in the dissolution process of compacts for poorly water-soluble active ingredients. As disintegration time is shortest at a certain ratio of disintegrant, calculating this value is important in solid dosage form design to enhance the disintegration and dissolution processes. According to percolation theory, a minimum disintegration time corresponds to the formation of a continuous water-conducting cluster through the entire tablet. The critical volumetric ratio at which the cluster is formed is called the percolation threshold and has the value of 0.16 for random close packed (RCP) sphere systems. RCP systems were chosen as the best model for compacts consisting of spherical particles. Two cases for water diffusion through the tablet were identified, according to geometrical considerations between disintegrant and drug particles. These cases determine whether disintegrant particles can be in contact with each other within the compact, and thus whether porosity and disintegrant volume are included in the continuous cluster. An equation for both cases is presented, in the form of a piecewise function, to determine the minimal disintegrant volumetric ratio of a binary drug/disintegrant compact that achieves a minimum disintegration time. Disintegration tests were performed with tablets at different ratios of modified corn starch mixed with caffeine or paracetamol powders. Estimated and experimental optimal ratios were compared, showing a coefficient of determination R² = 0.96. (c) 2007 Wiley-Liss, Inc.
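    The percolation argument above can be illustrated with a small sketch: converting the 0.16 volumetric threshold for RCP sphere packings into the corresponding disintegrant mass fraction, given the true densities of the two components. This is an illustrative calculation only, with hypothetical density values; it is not the paper's piecewise equation.

```python
# Illustrative sketch: map the RCP percolation threshold (critical
# volumetric ratio phi_c = 0.16) to a disintegrant mass fraction for a
# binary drug/disintegrant compact. Density values below are hypothetical
# examples, not data from the study.

PHI_C = 0.16  # critical volumetric ratio for random close packed spheres

def mass_fraction(phi, rho_dis, rho_drug):
    """Mass fraction of disintegrant at volumetric ratio phi."""
    m_dis = phi * rho_dis            # disintegrant mass per unit volume
    m_drug = (1.0 - phi) * rho_drug  # drug mass per unit volume
    return m_dis / (m_dis + m_drug)

# Example: a starch-like disintegrant (~1.5 g/cm^3) in a caffeine-like
# drug matrix (~1.23 g/cm^3) -- hypothetical values.
w = mass_fraction(PHI_C, rho_dis=1.5, rho_drug=1.23)
print(f"disintegrant mass fraction at threshold: {w:.3f}")
```

    For equal densities the mass and volume fractions coincide at 0.16; denser disintegrants shift the optimal mass fraction upward.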

  19. Carbon deposition model for oxygen-hydrocarbon combustion. Task 6: Data analysis and formulation of an empirical model (United States)

    Makel, Darby B.; Rosenberg, Sanders D.


    The formation and deposition of carbon (soot) was studied in the Carbon Deposition Model for Oxygen-Hydrocarbon Combustion Program. An empirical, 1-D model for predicting soot formation and deposition in LO2/hydrocarbon gas generators/preburners was derived. The experimental data required to anchor the model were identified and a test program to obtain the data was defined. In support of the model development, cold flow mixing experiments using a high injection density injector were performed. The purpose of this investigation was to advance the state-of-the-art in LO2/hydrocarbon gas generator design by developing a reliable engineering model of gas generator operation. The model was formulated to account for the influences of fluid dynamics, chemical kinetics, and gas generator hardware design on soot formation and deposition.

  20. Tensor formulation of the model equations on strong conservation form for an incompressible flow in general coordinates

    DEFF Research Database (Denmark)

    Jørgensen, Bo Hoffmann


    equations on a general form which accommodates curvilinear coordinates. Strong conservation form is obtained by formulating the equations so that the flow variables, velocity and pressure, are expressed in the physical coordinate system while the location of evaluation is expressed within the transformed...... coordinate system. The tensor formulation allows both a finite difference and a pseudo-spectral description of the model equations. The intention is for the finite difference formulation to achieve the same robustness and conservation properties as a finite volume discretization. Furthermore, an invariant...

  1. Electroweak theory and the Standard Model

    CERN Multimedia

    CERN. Geneva; Giudice, Gian Francesco


    There is a natural splitting in four sectors of the theory of the ElectroWeak (EW) Interactions, at pretty different levels of development/test. Accordingly, the 5 lectures are organized as follows, with an eye to the future: Lecture 1: The basic structure of the theory; Lecture 2: The gauge sector; Lecture 3: The flavor sector; Lecture 4: The neutrino sector; Lecture 5: The EW symmetry breaking sector.

  2. Forward Modeling and validation of a new formulation to compute self-potential signals associated with ground water flow

    Directory of Open Access Journals (Sweden)

    A. Bolève


    The classical formulation of the coupled hydroelectrical flow in porous media is based on a linear formulation of two coupled constitutive equations for the electrical current density and the seepage velocity of the water phase, obeying Onsager's reciprocity. This formulation shows that the streaming current density is controlled by the gradient of the fluid pressure of the water phase and a streaming current coupling coefficient that depends on the so-called zeta potential. Recently a new formulation has been introduced in which the streaming current density is directly connected to the seepage velocity of the water phase and to the excess of electrical charge per unit pore volume in the porous material. The advantages of this formulation are numerous. First, this new formulation is more intuitive not only in terms of establishing a constitutive equation for the generalized Ohm's law but also in specifying boundary conditions for the influence of the flow field upon the streaming potential. With the new formulation, the streaming potential coupling coefficient shows a decrease of its magnitude with permeability in agreement with published results. The new formulation has been extended into the inertial laminar flow regime and to unsaturated conditions, with applications to the vadose zone. This formulation is suitable for modeling self-potential signals in the field. We investigate infiltration of water from an agricultural ditch, vertical infiltration of water into a sinkhole, and preferential horizontal flow of ground water in a paleochannel. For the three cases reported in the present study, a good match is obtained between the finite element simulations and field observations. Thus, this formulation could be useful for the inverse mapping of the geometry of groundwater flow from self-potential field measurements.
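    The new constitutive law summarized above can be written down directly: the streaming current density equals the excess electrical charge per unit pore volume times the seepage (Darcy) velocity. A minimal sketch, with hypothetical parameter values:

```python
# Sketch of the constitutive law described above: streaming current density
# j_s = Q_v * u, where Q_v is the excess charge per unit pore volume and u
# is the seepage (Darcy) velocity. The numbers below are hypothetical
# illustration values, not data from the study.

def streaming_current_density(q_v, u):
    """j_s [A/m^2] from excess charge Q_v [C/m^3] and Darcy velocity u [m/s]."""
    return q_v * u

# Hypothetical example: Q_v = 0.5 C/m^3, u = 1e-5 m/s
j_s = streaming_current_density(0.5, 1e-5)
print(f"streaming current density: {j_s:.1e} A/m^2")
```

    The appeal of this form is that the source term of the self-potential problem follows the flow field itself, which simplifies the boundary conditions compared with the pressure-gradient formulation.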

  3. Improved Formulation of the Hardening Soil Model in the Context of Modeling the Undrained Behavior of Cohesive Soils

    Directory of Open Access Journals (Sweden)

    Truty Andrzej


    The analysis of an important drawback of the well-known Hardening Soil model (HSM) is the main purpose of this paper. Special emphasis is put on modifying the HSM to enable an appropriate prediction of the undrained shear strength using a nonzero dilatancy angle. In this light, the paper demonstrates advanced numerical finite element modeling applied to practical geotechnical problems. The main focus is on the serviceability limit state analysis of a twin-tunnel excavation in London clay. The two-phase formulation for a partially saturated medium, after Aubry and Ozanam, is used to describe the interaction between the soil skeleton and pore water pressure.

  4. Solid modeling and applications rapid prototyping, CAD and CAE theory

    CERN Document Server

    Um, Dugan


    The lessons in this fundamental text equip students with the theory of Computer Assisted Design (CAD), Computer Assisted Engineering (CAE), the essentials of Rapid Prototyping, as well as practical skills needed to apply this understanding in real world design and manufacturing settings. The book includes three main areas: CAD, CAE, and Rapid Prototyping, each enriched with numerous examples and exercises. In the CAD section, Professor Um outlines the basic concept of geometric modeling, Hermite and Bezier spline curve theory, and 3-dimensional surface theories as well as rendering theory. The CAE section explores mesh generation theory, matrix notation for FEM, the stiffness method, and truss equations. And in Rapid Prototyping, the author illustrates stereolithography theory and introduces popular modern RP technologies. Solid Modeling and Applications: Rapid Prototyping, CAD and CAE Theory is ideal for university students in various engineering disciplines as well as design engineers involved in product...

  5. The logical foundations of scientific theories languages, structures, and models

    CERN Document Server

    Krause, Decio


    This book addresses the logical aspects of the foundations of scientific theories. Even though the relevance of formal methods in the study of scientific theories is now widely recognized and regaining prominence, the issues covered here are still not generally discussed in philosophy of science. The authors focus mainly on the role played by the underlying formal apparatuses employed in the construction of the models of scientific theories, relating the discussion with the so-called semantic approach to scientific theories. The book describes the role played by this metamathematical framework in three main aspects: considerations of formal languages employed to axiomatize scientific theories, the role of the axiomatic method itself, and the way set-theoretical structures, which play the role of the models of theories, are developed. The authors also discuss the differences and philosophical relevance of the two basic ways of axiomatizing a scientific theory, namely Patrick Suppes’ set theoretical predicate...

  6. Evaluating resist degradation during reactive ion oxide etching using 193 nm model resist formulations (United States)

    May, M. J.; Mortini, B.; Sourd, C.; Perret, D.; Chung, D. W.; Barclay, G.; Brochon, C.; Hadziioannou, G.


    The weaker etch resistance of 193 nm resists [1] raises questions concerning their usability as single-layer resists for the coming nodes. We have found that 193 nm positive tone resists that have been designed [2] incorporating etch-resistant groups like adamantyl or isobornyl [3-7] exhibit chemical modifications of these grafted functions while undergoing an oxide etch step. Previously performed experiments pointed out that the photoacid generator (PAG) still contained in the unexposed regions of the sacrificial layer might be a reason for the modifications in the chemical buildup of these resists. Therefore, this work has focused on evaluating the impact of reactive ion oxide etching [8-10] on 193 nm materials, for positive and negative tone chemically amplified resists. We used Thermogravimetric Analysis (TGA), Fourier Transform Infrared Spectroscopy (FTIR) and Atomic Force Microscopy (AFM) to check model formulations based on PHS, methacrylate or cyclic olefin polymers with various protecting groups having different activation energies, formulated with or without PAG, in order to understand the impact of the photoactive compound on resist degradation behavior during plasma etch.

  7. Mixed direct-iterative methods for boundary integral formulations of continuum dielectric solvation models

    Energy Technology Data Exchange (ETDEWEB)

    Corcelli, S.A.; Kress, J.D.; Pratt, L.R.


    This paper develops and characterizes mixed direct-iterative methods for boundary integral formulations of continuum dielectric solvation models. We give an example, the Ca²⁺···Cl⁻ pair potential of mean force in aqueous solution, for which a direct solution at thermal accuracy is difficult and thus for which mixed direct-iterative methods seem necessary to obtain the required high resolution. For the simplest such formulations, Gauss-Seidel iteration diverges in rare cases. This difficulty is analyzed by obtaining the eigenvalues and the spectral radius of the non-symmetric iteration matrix. This establishes that those divergences are due to inaccuracies of the asymptotic approximations used in evaluating the matrix elements corresponding to accidental close encounters of boundary elements on different atomic spheres. The spectral radii are then greater than one for those diverging cases. This problem is cured by checking for boundary element pairs closer than the typical spatial extent of the boundary elements and, for those cases, performing an "in-line" Monte Carlo integration to evaluate the required matrix elements. These difficulties are not expected and have not been observed for the thoroughly coarsened equations obtained when only a direct solution is sought. Finally, we give an example application of hybrid quantum-classical methods to the deprotonation of orthosilicic acid in water.

  8. A multibody motorcycle model with rigid-ring tyres: formulation and validation (United States)

    Leonelli, Luca; Mancinelli, Nicolò


    The aim of this paper is the development and validation of a three-dimensional multibody motorcycle model including a rigid-ring tyre model, taking into account both the slopes and elevation of the road surface. In order to achieve accurate assessment of ride and handling performances of a road racing motorcycle, a tyre model capable of reproducing the dynamic response to actual road excitation is required. While a number of vehicle models with such feature are available for car application, the extension to the motorcycle modelling has not been addressed yet. To do so, a novel parametrisation for the general motorcycle kinematics is proposed, using a mixed reference point and relative coordinates approach. The resulting description, developed in terms of dependent coordinates, makes it possible to include the rigid-ring kinematics as well as road elevation and slopes, without affecting computational efficiency. The equations of motion for the whole multibody system are derived symbolically and the constraint equations arising from the dependent coordinate formulation are handled using the position and velocity vector projection technique. The resulting system of equations is integrated in time domain using a standard ordinary differential equation (ODE) algorithm. Finally, the model is validated with respect to experimentally measured data in both time and frequency domains.

  9. Nonperturbative type IIB model building in the F-theory framework

    Energy Technology Data Exchange (ETDEWEB)

    Jurke, Benjamin Helmut Friedrich


    This dissertation is concerned with the topic of non-perturbative string theory, which is generally considered to be the most promising approach to a consistent description of quantum gravity. The five known 10-dimensional perturbative string theories are all interconnected by numerous dualities, such that an underlying non-perturbative 11-dimensional theory, called M-theory, is postulated. Due to several technical obstacles, little is known about the fundamental objects in this theory. There exists an alternative non-perturbative description to type IIB string theory, namely F-theory. Here the SL(2;Z) self-duality of IIB theory is geometrized in the form of an elliptic fibration over the space-time. Moreover, higher-dimensional objects like 7-branes are included via singularities into the geometric picture. This formally elegant description, however, requires significant technical effort for the construction of suitable compactification geometries, as many different aspects necessarily have to be dealt with at the same time. On the other hand, the generation of essential GUT building blocks like certain Yukawa couplings or spinor representations is easier compared to perturbative string theory. The goal of this study is therefore to formulate a unified theory within the framework of F-theory, that satisfies basic phenomenological constraints. Within this thesis, at first E3-brane instantons in type IIB string theory - 4-dimensional objects that are entirely wrapped around the invisible dimensions of space-time - are matched with M5-branes in F-theory. Such objects are of great importance in the generation of critical Yukawa couplings or the stabilization of the free parameters of a theory. Certain properties of M5-branes then allow to derive a new criterion for E3-branes to contribute to the superpotential. In the aftermath of this analysis, several compactification geometries are constructed and checked for basic properties that are relevant for semi

  10. A review of organizational buyer behaviour models and theories ...

    African Journals Online (AJOL)

    Over the years, models have been developed, and theories propounded, to explain the behavior of industrial buyers on the one hand and the nature of the dyadic relationship between organizational buyers and sellers on the other hand. This paper is an attempt at a review of the major models and theories in extant ...

  11. The Properties of Model Selection when Retaining Theory Variables

    DEFF Research Database (Denmark)

    Hendry, David F.; Johansen, Søren

    Economic theories are often fitted directly to data to avoid possible model selection biases. We show that embedding a theory model that specifies the correct set of m relevant exogenous variables, x{t}, within the larger set of m+k candidate variables, (x{t},w{t}), then selection over the second...

  12. The formulation and implementation of a stochastic model that explores HIV infection. (Volumes I and II). [HIV (human immunodeficiency virus)

    Energy Technology Data Exchange (ETDEWEB)

    Salsburg, M.A.


    This thesis presents the research, formulation, implementation and results of a stochastic model that projects HIV prevalence and crude demographic statistics. The background research includes demographic and epidemiological modeling issues. Published models and results are presented and discussed. The formulation of this model included demographic considerations. To model couple formation, a formulation of age-dependent couple formation was developed. Using this formulation, a two-sex model was developed to project population births. The epidemiological model addressed population types classified by HIV risk group, HIV state and age, for both non-married and coupled populations. Males and females were each partitioned into three risk groups. The effects of heterogeneity caused by the behavior of diverse risk groups are discussed. Six states of HIV were modeled: susceptible (non-infected), initial infection, dormancy, AIDS Related Complex (ARC), full-blown AIDS and death due to AIDS. Male and female ages were considered in simulations that used age aggregation and in simulations that accounted for yearly ages. The spread of HIV was projected on a monthly time scale; demographic statistics were projected on a yearly time scale. The underlying stochastic model, with nonlinear differential equations, is presented. Simulation runs were made to examine the sensitivity of various parameters. The model results are shown, with a discussion of parameters used in previously published HIV model projections. The structure of the software developed to implement the model is presented. A cross-reference is provided to indicate software parameters that correspond with the formal model parameters. Another cross-reference provides the name of the APL function used to implement each calculation in the model formulation. The APL source code listing is included in the Appendix.

  13. A novel nasal powder formulation of glucagon: toxicology studies in animal models. (United States)

    Reno, Frederick E; Normand, Patrick; McInally, Kevin; Silo, Sherwin; Stotland, Patricia; Triest, Myriam; Carballo, Dolores; Piché, Claude


    Glucagon nasal powder (GNP), a novel intranasal formulation of glucagon being developed to treat insulin-induced severe hypoglycemia, contains synthetic glucagon (10% w/w), beta-cyclodextrin, and dodecylphosphocholine. The safety of this formulation was evaluated in four studies in animal models. The first study evaluated 28-day sub-chronic toxicology in rats treated intranasally with 1 and 2 mg of GNP/day (0.1 and 0.2 mg glucagon/rat/day). The second study evaluated 28-day sub-chronic toxicology in dogs administered 20 and 40 mg of formulation/dog/day (2 and 4 mg glucagon/dog/day) intranasally. A pulmonary insufflation study assessed acute toxicology following intra-tracheal administration of 0.5 mg of GNP (0.05 mg glucagon) to rats. Local tolerance to 30 mg of GNP (equivalent to 3 mg glucagon, the final dose for humans) was tested through direct administration into the eyes of rabbits. There were no test article-related adverse effects on body weight and/or food consumption, ophthalmology, electrocardiography, hematology, coagulation parameters, clinical chemistry, urinalysis, or organ weights, and no macroscopic findings at necropsy in any study. In rats, direct intra-tracheal insufflation at a dose of 0.5 mg of GNP/rat (0.05 mg glucagon/rat) did not result in adverse clinical, macroscopic, or microscopic effects. In dogs, the only adverse findings following sub-chronic use were transient (studies in dogs and rats revealed no microscopic findings. In rabbits, clinical observations noted in the GNP-treated eye and/or surrounding areas included ≥1 of the following: clear discharge, red conjunctiva, partial closure, and swelling of the peri-orbital area, which correlated with erythema and edema noted during ocular observations and grading. The studies reported here revealed no safety concerns associated with GNP in animal models. Studies published earlier have highlighted the local safety profile of intranasally administered cyclodextrins (a component of GNP

  14. Program evaluation models and related theories: AMEE guide no. 67. (United States)

    Frye, Ann W; Hemmer, Paul A


    This Guide reviews theories of science that have influenced the development of common educational evaluation models. Educators can be more confident when choosing an appropriate evaluation model if they first consider the model's theoretical basis against their program's complexity and their own evaluation needs. Reductionism, system theory, and (most recently) complexity theory have inspired the development of models commonly applied in evaluation studies today. This Guide describes experimental and quasi-experimental models, Kirkpatrick's four-level model, the Logic Model, and the CIPP (Context/Input/Process/Product) model in the context of the theories that influenced their development and that limit or support their ability to do what educators need. The goal of this Guide is for educators to become more competent and confident in being able to design educational program evaluations that support intentional program improvement while adequately documenting or describing the changes and outcomes-intended and unintended-associated with their programs.

  15. Non-static plane symmetric cosmological model in Wesson's theory

    Indian Academy of Sciences (India)

    A non-static plane symmetric cosmological model in Wesson's scale-invariant theory of gravitation with a time-dependent gauge function is investigated. The false vacuum model of the universe is constructed and some physical properties of the model are discussed.

  16. Rigid aleph_epsilon-saturated models of superstable theories


    Shami, Ziv; Shelah, Saharon


    In a countable superstable NDOP theory, the existence of a rigid aleph_epsilon-saturated model implies the existence of 2^lambda rigid aleph_epsilon-saturated models of power lambda for every lambda>2^{aleph_0}.

  17. The Birth of Model Theory Lowenheim's Theorem in the Frame of the Theory of Relatives

    CERN Document Server

    Badesa, Calixto


    Löwenheim's theorem reflects a critical point in the history of mathematical logic, for it marks the birth of model theory--that is, the part of logic that concerns the relationship between formal theories and their models. However, while the original proofs of other, comparably significant theorems are well understood, this is not the case with Löwenheim's theorem. For example, the very result that scholars attribute to Löwenheim today is not the one that Skolem--a logician raised in the algebraic tradition, like Löwenheim--appears to have attributed to him. In The Birth of Model Theory, Cali

  18. Functional testing of topical skin formulations using an optimised ex vivo skin organ culture model. (United States)

    Sidgwick, G P; McGeorge, D; Bayat, A


    A number of equivalent-skin models are available for investigating the ex vivo effect of topical application of drugs and cosmeceuticals on skin; however, many have drawbacks. With the March 2013 ban on animal models for cosmetic testing of products or ingredients for sale in the EU, their utility for testing toxicity and effect on skin becomes more relevant. The aim of this study was to demonstrate proof of principle that altered expression of key gene and protein markers could be quantified in an optimised whole tissue biopsy culture model. Topical formulations containing green tea catechins (GTC) were investigated in a skin biopsy culture model (n = 11). Punch biopsies were harvested at 3, 7 and 10 days, and analysed using qRT-PCR, histology and HPLC to determine gene and protein expression, and transdermal delivery of compounds of interest. Reduced gene expression of α-SMA, fibronectin, mast cell tryptase, mast cell chymase, TGF-β1, CTGF and PAI-1 was observed after 7 and 10 days compared with treated controls (p animal models in this context, prior to study in a clinical trial environment.

  19. Non Linear Programming (NLP) formulation for quantitative modeling of protein signal transduction pathways. (United States)

    Mitsos, Alexander; Melas, Ioannis N; Morris, Melody K; Saez-Rodriguez, Julio; Lauffenburger, Douglas A; Alexopoulos, Leonidas G


    Modeling of signal transduction pathways plays a major role in understanding cells' function and predicting cellular response. Mathematical formalisms based on a logic formalism are relatively simple but can describe how signals propagate from one protein to the next and have led to the construction of models that simulate the cells response to environmental or other perturbations. Constrained fuzzy logic was recently introduced to train models to cell specific data to result in quantitative pathway models of the specific cellular behavior. There are two major issues in this pathway optimization: i) excessive CPU time requirements and ii) loosely constrained optimization problem due to lack of data with respect to large signaling pathways. Herein, we address both issues: the former by reformulating the pathway optimization as a regular nonlinear optimization problem; and the latter by enhanced algorithms to pre/post-process the signaling network to remove parts that cannot be identified given the experimental conditions. As a case study, we tackle the construction of cell type specific pathways in normal and transformed hepatocytes using medium and large-scale functional phosphoproteomic datasets. The proposed Non Linear Programming (NLP) formulation allows for fast optimization of signaling topologies by combining the versatile nature of logic modeling with state of the art optimization algorithms.
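    The reformulation described above, treating the fit of a continuous fuzzy-logic-style transfer function to data as a regular nonlinear optimization problem, can be sketched in miniature. The Hill-type transfer function and the synthetic "phosphoproteomic" data below are invented for illustration; this is not the authors' implementation.

```python
# Minimal sketch of recasting a logic-model fit as ordinary nonlinear
# optimization: fit a normalized Hill transfer function between two
# hypothetical pathway nodes to noisy synthetic measurements.
import numpy as np
from scipy.optimize import minimize

def hill(x, k, n):
    """Normalized Hill transfer function mapping input to output activity."""
    return x**n / (k**n + x**n)

rng = np.random.default_rng(0)
x = np.linspace(0.05, 1.0, 25)                # upstream node activity
y = hill(x, k=0.3, n=2.0) + 0.01 * rng.standard_normal(x.size)  # synthetic data

def objective(theta):
    k, n = theta
    return np.sum((hill(x, k, n) - y) ** 2)    # least-squares fit error

res = minimize(objective, x0=[0.5, 1.0], bounds=[(0.01, 1.0), (0.5, 4.0)])
k_fit, n_fit = res.x
print(f"fitted k={k_fit:.2f}, n={n_fit:.2f}")
```

    In the full problem each edge of the signaling network carries such a parametrized transfer function, and the pre/post-processing step removes edges that the experimental conditions cannot identify before the optimizer runs.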


  1. Beyond the electric-dipole approximation: A formulation and implementation of molecular response theory for the description of absorption of electromagnetic field radiation. (United States)

    List, Nanna Holmgaard; Kauczor, Joanna; Saue, Trond; Jensen, Hans Jørgen Aagaard; Norman, Patrick


    We present a formulation of molecular response theory for the description of a quantum mechanical molecular system in the presence of a weak, monochromatic, linearly polarized electromagnetic field without introducing truncated multipolar expansions. The presentation focuses on a description of linear absorption by adopting the energy-loss approach in combination with the complex polarization propagator formulation of response theory. Going beyond the electric-dipole approximation is essential whenever studying electric-dipole-forbidden transitions, and in general, non-dipolar effects become increasingly important when addressing spectroscopies involving higher-energy photons. These two aspects are examined by our study of the near K-edge X-ray absorption fine structure of the alkaline earth metals (Mg, Ca, Sr, Ba, and Ra) as well as the trans-polyenes. In following the series of alkaline earth metals, the sizes of non-dipolar effects are probed with respect to increasing photon energies and a detailed assessment of results is made in terms of studying the pertinent transition electron densities and in particular their spatial extension in comparison with the photon wavelength. Along the series of trans-polyenes, the sizes of non-dipolar effects are probed for X-ray spectroscopies on organic molecules with respect to the spatial extension of the chromophore.

  2. A signal detection-item response theory model for evaluating neuropsychological measures. (United States)

    Thomas, Michael L; Brown, Gregory G; Gur, Ruben C; Moore, Tyler M; Patt, Virginie M; Risbrough, Victoria B; Baker, Dewleen G


    Models from signal detection theory are commonly used to score neuropsychological test data, especially tests of recognition memory. Here we show that certain item response theory models can be formulated as signal detection theory models, thus linking two complementary but distinct methodologies. We then use the approach to evaluate the validity (construct representation) of commonly used research measures, demonstrate the impact of conditional error on neuropsychological outcomes, and evaluate measurement bias. Signal detection-item response theory (SD-IRT) models were fitted to recognition memory data for words, faces, and objects. The sample consisted of U.S. Infantry Marines and Navy Corpsmen participating in the Marine Resiliency Study. Data comprised item responses to the Penn Face Memory Test (PFMT; N = 1,338), Penn Word Memory Test (PWMT; N = 1,331), and Visual Object Learning Test (VOLT; N = 1,249), and self-report of past head injury with loss of consciousness. SD-IRT models adequately fitted recognition memory item data across all modalities. Error varied systematically with ability estimates, and distributions of residuals from the regression of memory discrimination onto self-report of past head injury were positively skewed towards regions of larger measurement error. Analyses of differential item functioning revealed little evidence of systematic bias by level of education. SD-IRT models benefit from the measurement rigor of item response theory-which permits the modeling of item difficulty and examinee ability-and from signal detection theory-which provides an interpretive framework encompassing the experimentally validated constructs of memory discrimination and response bias. We used this approach to validate the construct representation of commonly used research measures and to demonstrate how nonoptimized item parameters can lead to erroneous conclusions when interpreting neuropsychological test data. Future work might include the
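    The signal detection side of the SD-IRT link rests on the standard equal-variance computation of memory discrimination (d') and response bias (c) from hit and false-alarm rates. A minimal sketch using the standard formulas (not the authors' code):

```python
# Equal-variance signal detection indices: the discrimination (d') and
# response bias (c) constructs that the SD-IRT models above build upon.
# Standard textbook formulas; illustrative only.
from scipy.stats import norm

def sdt_indices(hit_rate, fa_rate):
    """Return (d_prime, c) from hit and false-alarm rates via z-transforms."""
    z_h, z_f = norm.ppf(hit_rate), norm.ppf(fa_rate)
    d_prime = z_h - z_f            # memory discrimination
    c = -0.5 * (z_h + z_f)         # response bias (0 = unbiased)
    return d_prime, c

d, c = sdt_indices(0.84, 0.16)
print(f"d' = {d:.2f}, c = {c:.2f}")  # symmetric rates give an unbiased c of 0
```

    The IRT link arises because the probability of a "yes" response in this model has the same normal-ogive form as a 2-parameter IRT item, which is what lets item difficulty and examinee ability be estimated within the signal detection framework.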

  3. Behavior of the gypsy moth life system model and development of synoptic model formulations (United States)

    J. J. Colbert; Xu Rumei


    Aims of the research: The gypsy moth life system model (GMLSM) is a complex model that incorporates numerous biotic and abiotic components and ecological processes. It is a detailed simulation model with considerable biological realism. However, it has not yet been tested against life system data. For such complex models, evaluation and testing cannot be adequately...


    Directory of Open Access Journals (Sweden)

    Innocent A. Ugbong


    Full Text Available In order to formulate an optimal drainage model for the Calabar area, the Calabar drainage system was studied using cartographic techniques to analyze its surface run-off and channel characteristics, so as to determine how floods are generated. A morphological analysis was done using detailed contour maps prepared for the study area. The “blue line” and “contour crenulations” methods were used to recreate the expected run-off channels or drainage networks under natural, non-urbanized conditions. A drainage structure with 6 major basins and 73 sub-basins was identified. Existing storm drains were constructed without regard to this natural structure, and so floods were generated.


    Directory of Open Access Journals (Sweden)



    Full Text Available The present article is a modest theoretical contribution to the study of the quality of life over the life cycle span, with emphasis on the older adult. The collective writing is the result of debates which took place during the two years of the social practice project «Quality of Life and Life Cycle Span» carried out at the Psychology Faculty of the Pontificia Universidad Javeriana, which had as one of its objectives the investigation of the psychological dimension of the quality of life. The teamwork consisted in the compilation of some theoretical fundamentals of the process, after which a model is formulated proposing the dynamic integration of the dimensions, especially the psychological, which affect the quality of life.

  6. Toric Methods in F-Theory Model Building

    Directory of Open Access Journals (Sweden)

    Johanna Knapp


    Full Text Available We discuss recent constructions of global F-theory GUT models and explain how to make use of toric geometry to do calculations within this framework. After introducing the basic properties of global F-theory GUTs, we give a self-contained review of toric geometry and introduce all the tools that are necessary to construct and analyze global F-theory models. We will explain how to systematically obtain a large class of compact Calabi-Yau fourfolds which can support F-theory GUTs by using the software package PALP.

  7. Theories, models and urban realities. From New York to Kathmandu

    Directory of Open Access Journals (Sweden)

    Román Rodríguez González


    Full Text Available At the beginning of the 21st century, there are various social theories that speak of global changes in the history of human civilization. Urban models have been through obvious changes throughout the last century according to the important transformations that are proposed by previous general theories. Nevertheless, global diversity contradicts the generalization of these theories and models. From our own simple observations and reflections we arrive at conclusions that distance themselves from the prevailing theory of our civilized world. New York, Delhi, Salvador de Bahia, Bruges, Paris, Cartagena de Indias or Kathmandu still have more internal differences than similarities.

  8. Theories, models and urban realities. From New York to Kathmandu

    Directory of Open Access Journals (Sweden)

    José Somoza Medina


    Full Text Available At the beginning of the 21st century, there are various social theories that speak of global changes in the history of human civilization. Urban models have been through obvious changes throughout the last century according to the important transformations that are proposed by previous general theories. Nevertheless, global diversity contradicts the generalization of these theories and models. From our own simple observations and reflections we arrive at conclusions that distance themselves from the prevailing theory of our civilized world. New York, Delhi, Salvador de Bahia, Bruges, Paris, Cartagena de Indias or Kathmandu still have more internal differences than similarities.

  9. Model-Based Learning: A Synthesis of Theory and Research (United States)

    Seel, Norbert M.


    This article provides a review of theoretical approaches to model-based learning and related research. In accordance with the definition of model-based learning as an acquisition and utilization of mental models by learners, the first section centers on mental model theory. In accordance with epistemology of modeling the issues of semantics,…

  10. Effects of orange juice formulation on prebiotic functionality using an in vitro colonic model system. (United States)

    Costabile, Adele; Walton, Gemma E; Tzortzis, George; Vulevic, Jelena; Charalampopoulos, Dimitris; Gibson, Glenn R


    A three-stage continuous fermentative colonic model system was used to monitor in vitro the effect of different orange juice formulations on prebiotic activity. Three different juices with and without Bimuno, a mixture containing galactooligosaccharides (B-GOS), were assessed in terms of their ability to induce a bifidogenic microbiota. The recipe development was based on incorporating 2.75 g B-GOS into a 250 ml serving of juice (65°Brix of concentrate juice). Alongside the production of B-GOS juice, a control juice (orange juice without any additional Bimuno) and a positive control juice, containing all the components of Bimuno (glucose, galactose and lactose) in the same relative proportions with the exception of B-GOS, were developed. Ion exchange chromatography analysis was used to test the maintenance of Bimuno components after the production process. Data showed that sterilisation had no significant effect on the concentrations of B-GOS and simple sugars. The three juice formulations were digested under conditions resembling the gastric and small intestinal environments. Main bacterial groups of the faecal microbiota were evaluated throughout the colonic model study using 16S rRNA-based fluorescence in situ hybridization (FISH). Potential effects of supplementation of the juices on microbial metabolism were studied by measuring short chain fatty acids (SCFAs) using gas chromatography. B-GOS juices showed positive modulation of the microbiota composition and metabolic activity. In particular, numbers of faecal bifidobacteria and lactobacilli were significantly higher when B-GOS juice was fermented compared to controls. Furthermore, fermentation of B-GOS juice resulted in an increase in the Roseburia subcluster and concomitantly increased butyrate production, which is of potential benefit to the host. In conclusion, this study has shown that B-GOS within orange juice can have a beneficial effect on the faecal microbiota.


  12. Development of self-lubricating coatings via cold spray process: Feedstock formulation and deformation modeling (United States)

    Aggarwal, Gaurav

    Because of their low density, high specific strength and high stiffness, titanium alloys are among the prime candidates for structural applications often requiring specific tribological properties. However, their relatively high friction coefficients and low wear resistance limit their use over a wider temperature range. Various coatings deposited with technologies like high velocity oxy-fuel (HVOF), detonation gun (DGun), electron beam physical vapor deposition (EB-PVD), etc., can improve wear performance and decrease corrosion damage. These technologies require high processing temperatures, precluding the integration of thermally vulnerable lubricants. This research looks at a relatively new coating process called Cold Spray for self-lubricating coatings on Ti-6Al-4V alloys. Cold Spray can produce coatings without significant heating of the sprayed powder or substrate. The particles are in the solid state as they hit the substrate, and the formation of coatings occurs mainly due to the kinetic energy of the particles. Therefore, the impact velocity plays an important role: below a critical value, the particles cause only densification and abrasion of the substrate. The focus of this study is the design of composite coatings for the cold spray process and the determination of the critical velocity through finite element modeling. Different powders and feedstock formulation techniques are discussed in order to find an optimum formulation for self-lubricating coatings. A composite powder (Ni-coated hBN) was found to be the best candidate for the feedstock. The deformation of composite particles upon impact on the substrate was modeled and compared to experiments. A number of approaches involving different modeling platforms, particle-substrate geometries, and material models have been tried. This work presents the results of ANSYS (version 10.0) analysis using an axisymmetric model of the particle impact. Stress and strain distributions in the particle

  13. Boundary conditions and the generalized metric formulation of the double sigma model

    Directory of Open Access Journals (Sweden)

    Chen-Te Ma


    Full Text Available The double sigma model with strong constraints is equivalent to the ordinary sigma model upon imposing a self-duality relation. Its gauge symmetries are diffeomorphisms and the one-form gauge transformation, subject to the strong constraints. We consider boundary conditions in the double sigma model in three ways. The first way is to modify the Dirichlet and Neumann boundary conditions with a fully O(D,D) description from double gauge fields. We compute the one-loop β function for constant background fields to find the low-energy effective theory without using the strong constraints. The low-energy theory can also have O(D,D) invariance, as the double sigma model does. The second way is to construct different boundary conditions from the projectors. The third way is to combine the antisymmetric background field with the field strength to redefine an O(D,D) generalized metric. We use this generalized metric to reconstruct a consistent double sigma model with classical and quantum equivalence.
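
    For orientation, the standard O(D,D) generalized metric built from the metric g and the antisymmetric field B (the textbook double-field-theory form that the abstract's redefinition modifies by trading B for the field strength) is:

    ```latex
    \mathcal{H}_{MN} =
    \begin{pmatrix}
      g_{ij} - B_{ik}\, g^{kl} B_{lj} & B_{ik}\, g^{kj} \\
      -g^{ik} B_{kj} & g^{ij}
    \end{pmatrix},
    \qquad
    \mathcal{H}^{T} \eta\, \mathcal{H} = \eta,
    \qquad
    \eta_{MN} =
    \begin{pmatrix}
      0 & \delta_i{}^{j} \\
      \delta^i{}_{j} & 0
    \end{pmatrix},
    ```

    where η is the O(D,D)-invariant pairing and the constraint on H expresses that the generalized metric parametrizes the coset O(D,D)/O(D)×O(D).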

  14. Formulation of the multi-hit model with a non-Poisson distribution of hits. (United States)

    Vassiliev, Oleg N


    We proposed a formulation of the multi-hit single-target model in which the Poisson distribution of hits was replaced by a combination of two distributions: one for the number of particles entering the target and one for the number of hits a particle entering the target produces. Such an approach reflects the fact that radiation damage is a result of two different random processes: particle emission by a radiation source and interaction of particles with matter inside the target. The Poisson distribution is well justified for the first of the two processes. The second distribution depends on how a hit is defined. To test our approach, we assumed that the second distribution was also a Poisson distribution. The two distributions combined resulted in a non-Poisson distribution. We tested the proposed model by comparing it with previously reported data for DNA single- and double-strand breaks induced by protons and electrons, for survival of a range of cell lines, and for the variation of the initial slopes of survival curves with radiation quality for heavy-ion beams. Analysis of cell survival equations for this new model showed that they had realistic properties overall, such as the initial and high-dose slopes of survival curves, the shoulder, and relative biological effectiveness (RBE). In most cases tested, a better fit of survival curves was achieved with the new model than with the linear-quadratic model. The results also suggested that the proposed approach may extend the multi-hit model beyond its traditional role in analysis of survival curves to predicting effects of radiation quality and analysis of DNA strand breaks. Our model, although conceptually simple, performed well in all tests. The model was able to consistently fit data for both cell survival and DNA single- and double-strand breaks. It correctly predicted the dependence of radiation effects on parameters of radiation quality. Copyright © 2012 Elsevier Inc. All rights reserved.
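
    The compound hit distribution can be illustrated by conditioning on the number of particles, a sketch under the paper's test assumption that both component distributions are Poisson; the parameter values are invented:

    ```python
    from math import exp, factorial

    def poisson_pmf(k, lam):
        """P(X = k) for X ~ Poisson(lam); Python's 0**0 == 1 handles lam = 0."""
        return exp(-lam) * lam**k / factorial(k)

    def compound_pmf(h, lam, mu, kmax=100):
        """P(total hits = h) when the number of particles entering the
        target is Poisson(lam) and each particle produces Poisson(mu)
        hits: a non-Poisson, compound distribution."""
        return sum(poisson_pmf(k, lam) * poisson_pmf(h, k * mu)
                   for k in range(kmax))

    # Same mean number of hits (lam * mu = 3), different zero-hit classes:
    p0_compound = compound_pmf(0, 2.0, 1.5)  # survival of a 1-hit target
    p0_poisson = poisson_pmf(0, 3.0)
    ```

    The compound model puts more probability on zero hits than a pure Poisson with the same mean, which is the kind of reshaping of the low-dose region that lets the model fit survival-curve shoulders differently from the Poisson-based multi-hit model.
    
    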

  15. New formulation feed method in tariff model of solar PV in Indonesia (United States)

    Djamal, Muchlishah Hadi; Setiawan, Eko Adhi; Setiawan, Aiman


    Geographically, Indonesia spans 18 latitudes, which correlate strongly with the potential solar radiation available for the implementation of solar photovoltaic (PV) technologies. This is the basic assumption for developing a proportional Feed-in Tariff (FIT) model; consequently the FIT will vary according to latitude across Indonesia. This paper proposes a new formulation of the solar PV FIT based on the potential solar radiation and several independent variables such as latitude, longitude, Levelized Cost of Electricity (LCOE), and socio-economic factors. The Principal Component Regression (PCR) method is used to analyze the correlations among six independent variables C1-C6, and three FIT models are presented. Model FIT-2 is chosen because it has a small residual value and a higher financial benefit compared to the other models. This study reveals that a variable FIT tied to the solar energy potential of each region can reduce the total FIT paid by the state by around 80 billion rupiahs over 10 years of 1 MW photovoltaic operation in each of the 34 provinces of Indonesia.
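
    A toy sketch of the principal component regression idea with two standardized predictors (for which the principal axes are simply the diagonals); the numbers are invented and unrelated to the paper's C1-C6 variables:

    ```python
    from math import sqrt

    def standardize(v):
        """Center to mean 0 and scale to unit (population) variance."""
        n = len(v)
        m = sum(v) / n
        s = sqrt(sum((x - m) ** 2 for x in v) / n)
        return [(x - m) / s for x in v]

    # Invented data: two correlated drivers and a response.
    x1 = [1.0, 2.0, 3.0, 4.0, 5.0]
    x2 = [1.1, 1.9, 3.2, 3.8, 5.0]
    y = [2.0, 4.1, 6.1, 8.0, 10.1]

    z1, z2 = standardize(x1), standardize(x2)
    # For two standardized predictors the first principal component is
    # their scaled sum, so no eigendecomposition is needed in this sketch.
    pc1 = [(a + b) / sqrt(2.0) for a, b in zip(z1, z2)]

    # Ordinary least squares of y on the first principal component only.
    ybar = sum(y) / len(y)
    beta = sum(p * (t - ybar) for p, t in zip(pc1, y)) / sum(p * p for p in pc1)
    pred = [ybar + beta * p for p in pc1]
    ```

    Regressing on the leading component instead of the raw, collinear predictors is what stabilizes the coefficient estimates when the C-variables are strongly correlated.
    
    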

  16. Modeling transonic aerodynamic response using nonlinear systems theory for use with modern control theory (United States)

    Silva, Walter A.


    The presentation begins with a brief description of the motivation and approach that has been taken for this research. This will be followed by a description of the Volterra Theory of Nonlinear Systems and the CAP-TSD code which is an aeroelastic, transonic CFD (Computational Fluid Dynamics) code. The application of the Volterra theory to a CFD model and, more specifically, to a CAP-TSD model of a rectangular wing with a NACA 0012 airfoil section will be presented.
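
    The Volterra representation can be sketched as a discrete second-order series; the kernels below are invented for illustration, not those identified from CAP-TSD:

    ```python
    def volterra_response(u, h1, h2):
        """y[n] = sum_k h1[k] u[n-k] + sum_{k1,k2} h2[k1][k2] u[n-k1] u[n-k2]:
        a first-order (linear) kernel plus a second-order (quadratic) kernel."""
        y = []
        for n in range(len(u)):
            lin = sum(h1[k] * u[n - k] for k in range(len(h1)) if n - k >= 0)
            quad = sum(h2[k1][k2] * u[n - k1] * u[n - k2]
                       for k1 in range(len(h2))
                       for k2 in range(len(h2))
                       if n - k1 >= 0 and n - k2 >= 0)
            y.append(lin + quad)
        return y

    # Illustrative kernels: the quadratic term breaks superposition,
    # which is the nonlinearity a purely linear model cannot capture.
    h1 = [1.0, 0.5]
    h2 = [[0.1, 0.0], [0.0, 0.0]]
    y_unit = volterra_response([1.0, 0.0, 0.0], h1, h2)
    y_doubled = volterra_response([2.0, 0.0, 0.0], h1, h2)
    ```

    Doubling the input more than doubles the response, which is exactly the transonic-type nonlinearity the kernels are meant to encode once they are identified from CFD impulse responses.
    
    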

  17. Dimensional reduction of Markov state models from renormalization group theory. (United States)

    Orioli, S; Faccioli, P


    Renormalization Group (RG) theory provides the theoretical framework to define rigorous effective theories, i.e., systematic low-resolution approximations of arbitrary microscopic models. Markov state models are shown to be rigorous effective theories for Molecular Dynamics (MD). Based on this fact, we use real-space RG to vary the resolution of the stochastic model and define an algorithm for clustering microstates into macrostates. The result is a lower dimensional stochastic model which, by construction, provides the optimal coarse-grained Markovian representation of the system's relaxation kinetics. To illustrate and validate our theory, we analyze a number of test systems of increasing complexity, ranging from synthetic toy models to two realistic applications, built from all-atom MD simulations. The computational cost of computing the low-dimensional model remains affordable on a desktop computer even for thousands of microstates.
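
    A hedged sketch of the coarse-graining step (the lumping operation itself, not the paper's RG algorithm for choosing the clusters): microstates are grouped into macrostates and transition probabilities are re-weighted by the stationary distribution:

    ```python
    def stationary(T, iters=500):
        """Stationary distribution of a row-stochastic matrix by power iteration."""
        n = len(T)
        pi = [1.0 / n] * n
        for _ in range(iters):
            pi = [sum(pi[i] * T[i][j] for i in range(n)) for j in range(n)]
        return pi

    def lump(T, groups):
        """Coarse-grain a microstate transition matrix T into macrostates,
        weighting each microstate row by its stationary probability."""
        pi = stationary(T)
        Tc = []
        for ga in groups:
            wa = sum(pi[i] for i in ga)
            Tc.append([sum(pi[i] * T[i][j] for i in ga for j in gb) / wa
                       for gb in groups])
        return Tc

    # Two fast-mixing microstates (0, 1) and one slowly exchanging state (2):
    T = [[0.49, 0.49, 0.02],
         [0.49, 0.49, 0.02],
         [0.02, 0.02, 0.96]]
    Tc = lump(T, [[0, 1], [2]])  # 2-state model that keeps the slow kinetics
    ```

    Integrating out the fast intra-basin exchange while preserving the slow inter-basin rates is precisely the separation of scales that the RG construction formalizes.
    
    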

  18. Using the Landlab toolkit to evaluate and compare alternative geomorphic and hydrologic model formulations (United States)

    Tucker, G. E.; Adams, J. M.; Doty, S. G.; Gasparini, N. M.; Hill, M. C.; Hobley, D. E. J.; Hutton, E.; Istanbulluoglu, E.; Nudurupati, S. S.


    Developing a better understanding of catchment hydrology and geomorphology ideally involves quantitative hypothesis testing. Often one seeks to identify the simplest mathematical and/or computational model that accounts for the essential dynamics in the system of interest. Development of alternative hypotheses involves testing and comparing alternative formulations, but the process of comparison and evaluation is made challenging by the rigid nature of many computational models, which are often built around a single assumed set of equations. Here we review a software framework for two-dimensional computational modeling that facilitates the creation, testing, and comparison of surface-dynamics models. Landlab is essentially a Python-language software library. Its gridding module allows for easy generation of a structured (raster, hex) or unstructured (Voronoi-Delaunay) mesh, with the capability to attach data arrays to particular types of element. Landlab includes functions that implement common numerical operations, such as gradient calculation and summation of fluxes within grid cells. Landlab also includes a collection of process components, which are encapsulated pieces of software that implement a numerical calculation of a particular process. Examples include downslope flow routing over topography, shallow-water hydrodynamics, stream erosion, and sediment transport on hillslopes. Individual components share a common grid and data arrays, and they can be coupled through the use of a simple Python script. We illustrate Landlab's capabilities with a case study of Holocene landscape development in the northeastern US, in which we seek to identify a collection of model components that can account for the formation of a series of incised canyons that have developed since the Laurentide ice sheet last retreated. We compare sets of model ingredients related to (1) catchment hydrologic response, (2) hillslope evolution, and (3) stream channel and gully incision
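
    The gradient-and-flux-divergence pattern that Landlab's grid functions encapsulate can be sketched in plain Python on a 1-D hillslope with linear diffusion (illustrative parameter values, not from the study):

    ```python
    # 1-D hillslope, linear diffusion: gradients at faces between nodes,
    # flux divergence at interior nodes. Parameter values are illustrative.
    dx = 10.0   # node spacing (m)
    D = 0.01    # hillslope diffusivity (m^2/yr)
    dt = 50.0   # time step (yr); D*dt/dx^2 = 0.005 keeps the scheme stable
    z = [0.0, 1.0, 4.0, 9.0, 4.0, 1.0, 0.0]  # node elevations (m)

    for _ in range(200):
        # gradient at each face between adjacent nodes
        grad = [(z[i + 1] - z[i]) / dx for i in range(len(z) - 1)]
        # downslope soil flux at each face (Fick-type law)
        q = [-D * g for g in grad]
        # flux divergence updates interior nodes; boundary nodes stay fixed
        for i in range(1, len(z) - 1):
            z[i] -= dt * (q[i] - q[i - 1]) / dx
    ```

    In Landlab the same pattern is expressed with grid methods operating on node and link arrays, so swapping the flux law is a one-line change rather than a rewrite, which is what enables the kind of component-by-component hypothesis testing the abstract describes.
    
    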

  19. Screening of two new herbal formulations in rodent model of urolithiasis

    Directory of Open Access Journals (Sweden)

    Mohammad Ahmed Khan


    Full Text Available Background: Kidney stone formation, or urolithiasis, is a complex process that is a consequence of an imbalance between promoters and inhibitors in the kidneys. The recurrence of urolithiasis also represents a serious problem in patients. Not all standard pharmaceutical drugs used to prevent urolithiasis are effective in all patients, and many have adverse effects. The present study was undertaken to evaluate the antiurolithiatic potential of two new herbal formulations, DRDC/AY/8080 (tablet) and DRDC/AY/8081 (syrup), in a 28-day ethylene glycol (EG)-induced urolithiasis model in Wistar rats. Materials and Methods: Animals were divided into five groups (n = 6). The control group was given normal saline, and the toxicant group was given 0.75% EG with 1% w/v of ammonium chloride (AC) for 10 days, followed by 0.75% w/v EG for the next 18 days in drinking water. Treatment groups received oral co-treatment with DRDC/AY/8080 (265 mg/kg), DRDC/AY/8081 (2.65 ml/kg), or standard (2.65 ml/kg) for 28 days along with EG and AC as given in the toxicant group. After the 28th day, urine, blood and kidney tissue were collected. Ca2+, Mg2+, Na+, and K+ levels were estimated in urine; creatinine and urea levels were estimated in serum; and the extent of lipid peroxidation was measured in kidney tissue. Further, crystalluria and histopathological evaluation were carried out on urine and kidney tissue, respectively. Results: The toxicant group showed significant elevation (P < 0.001 vs. control) in serum creatinine, blood urea, tissue lipid peroxide, and urinary Mg2+ levels and significant reduction (P < 0.001 vs. control) in urinary Na+ and Ca2+ levels. Histopathology of the toxicant group showed damaged proximal tubules with deposits of refractile crystals and loss of tubular epithelium. Both tablet- and syrup-treated groups showed nephroprotective activity, as evident from lower serum creatinine, blood urea, and lipid peroxide levels. Treatment with tablet and syrup

  20. Spectral and scattering theory for translation invariant models in quantum field theory

    DEFF Research Database (Denmark)

    Rasmussen, Morten Grud

    This thesis is concerned with a large class of massive translation invariant models in quantum field theory, including the Nelson model and the Fröhlich polaron. The models in the class describe a matter particle, e.g. a nucleon or an electron, linearly coupled to a second quantised massive scalar...

  1. Modeling reactive transport in deformable porous media using the theory of interacting continua.

    Energy Technology Data Exchange (ETDEWEB)

    Turner, Daniel Zack


    This report gives an overview of the work done as part of an Early Career LDRD aimed at modeling flow-induced damage of materials involving chemical reactions, deformation of the porous matrix, and complex flow phenomena. The numerical formulation is motivated by a mixture theory or theory of interacting continua type approach to coupling the behavior of the fluid and the porous matrix. Results for the proposed method are presented for several engineering problems of interest, including carbon dioxide sequestration, hydraulic fracturing, and energetic materials applications. This work is intended to create a general framework for flow-induced damage that can be further developed in each of the particular areas addressed. The results show both convincing proof of the methodology's potential and the need for further validation of the models developed.
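
    The mixture-theory (theory of interacting continua) starting point can be sketched as per-constituent balance laws with exchange terms; this is the generic textbook form, not the report's exact equations:

    ```latex
    \frac{\partial (\phi_\alpha \rho_\alpha)}{\partial t}
      + \nabla \cdot (\phi_\alpha \rho_\alpha \mathbf{v}_\alpha)
      = \hat{m}_\alpha,
    \qquad
    \sum_\alpha \hat{m}_\alpha = 0,
    ```

    where each constituent α (pore fluid or solid matrix) carries its own volume fraction φ_α, true density ρ_α, and velocity v_α, and m̂_α is the mass exchanged through chemical reactions; the constraint ensures the mixture as a whole conserves mass even while individual constituents gain or lose it.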

  2. A Method for Formulizing Disaster Evacuation Demand Curves Based on SI Model

    Directory of Open Access Journals (Sweden)

    Yulei Song


    Full Text Available The prediction of evacuation demand curves is a crucial step in disaster evacuation plan making, which directly affects the performance of the disaster evacuation. In this paper, we discuss the factors influencing individual evacuation decision making (whether and when to leave) and summarize them into four kinds: individual characteristics, social influence, geographic location, and warning degree. In view of the social contagion of decision making, a method based on the Susceptible-Infective (SI) model is proposed to formulize the disaster evacuation demand curves to address both social influence and other factors’ effects. The disaster event of the “Tianjin Explosions” is used as a case study to illustrate the modeling results influenced by the four factors and to perform sensitivity analyses of the key parameters of the model. Some interesting phenomena are found and discussed, which is meaningful for authorities making specific evacuation plans. For example, due to the lower social influence in isolated communities, extra actions might be taken to accelerate the evacuation process in those communities.
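
    A minimal sketch of how an SI-type contagion generates an S-shaped cumulative demand curve; the parameter values are invented for illustration, not calibrated to the Tianjin case:

    ```python
    # Susceptible-Infective contagion of the "decide to evacuate" state:
    # dI/dt = beta * I * (N - I) / N, integrated with forward Euler.
    N = 10000.0            # population at risk
    beta = 0.8             # effective social-contact rate per hour (assumed)
    dt = 0.1               # time step (hours)
    demand_curve = [10.0]  # early self-evacuees seed the process
    for _ in range(int(48 / dt)):
        I = demand_curve[-1]
        demand_curve.append(I + dt * beta * I * (N - I) / N)
    # demand_curve[k] is the cumulative number who have decided to leave
    # by time k*dt; it rises slowly, accelerates, then saturates near N.
    ```

    Lowering beta, as in an isolated community with weak social influence, stretches the same curve out in time, which is the mechanism behind the abstract's observation that such communities may need extra measures to accelerate evacuation.
    
    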

  3. Formulation and explanation of a success model in innovation management and its diffusion at sports federations

    Directory of Open Access Journals (Sweden)

    Farokh Hessami


    Full Text Available Since today’s world is moving ahead rapidly, the survival and durability of institutions and organizations depend on research, development, innovation, and communication, the realization of which requires a new and creative model. Therefore, this paper aims to formulate and explain a successful open innovation management model by identifying the factors affecting the diffusion of innovation at sports federations. The study is applied in terms of its goal and descriptive-correlational in its data analysis method. The statistical population consists of the sports federations of Iran in 2017. The sample size was 70 individuals, selected using the Morgan table and simple random sampling from 10 federations. The data collection tool was a questionnaire. A correlation matrix was used to determine the relationships between independent and dependent variables. Furthermore, a novel hybrid technique of fuzzy DEMATEL and fuzzy ANP was used to rank the factors affecting open innovation. The results showed that structural factors, inter-organizational joint ventures, customer relationships, the research and development department, and new technologies affected open innovation. Structural factors were ranked the most effective, while customer relationships and inter-organizational joint ventures were the weakest factors affecting the creation of open innovation.

  4. Theory and model use in social marketing health interventions. (United States)

    Luca, Nadina Raluca; Suggs, L Suzanne


    The existing literature suggests that theories and models can serve as valuable frameworks for the design and evaluation of health interventions. However, evidence on the use of theories and models in social marketing interventions is sparse. The purpose of this systematic review is to identify to what extent papers about social marketing health interventions report using theory, which theories are most commonly used, and how theory was used. A systematic search was conducted for articles that reported social marketing interventions for the prevention or management of cancer, diabetes, heart disease, HIV, STDs, and tobacco use, and behaviors related to reproductive health, physical activity, nutrition, and smoking cessation. Articles were published in English, after 1990, reported an evaluation, and met the six social marketing benchmark criteria (behavior change, consumer research, segmentation and targeting, exchange, competition, and marketing mix). Twenty-four articles, describing 17 interventions, met the inclusion criteria. Of these 17 interventions, 8 reported using theory and 7 stated how it was used. The transtheoretical model/stages of change was used more often than other theories. Findings highlight an ongoing lack of use or underreporting of the use of theory in social marketing campaigns and reinforce the call to action for applying and reporting theory to guide and evaluate interventions.

  5. Intercomparison of the Charnock and COARE bulk wind stress formulations for coastal ocean modelling

    Directory of Open Access Journals (Sweden)

    J. M. Brown


    Full Text Available The accurate parameterisation of momentum and heat transfer across the air–sea interface is vital for realistic simulation of the atmosphere–ocean system. In most modelling applications, accurate representation of the wind stress is required to numerically reproduce surge, coastal ocean circulation, surface waves, turbulence and mixing. Different formulations can be implemented and impact the accuracy of the instantaneous and long-term residual circulation, the surface mixed layer, and the generation of wave–surge conditions. This, in turn, affects predictions of storm impact, sediment pathways, and coastal resilience to climate change. The specific numerical formulation therefore needs careful selection to ensure the accuracy of the simulation. Two wind stress parameterisations, widely used in the ocean circulation and storm surge communities respectively, are studied with a focus on an application to the NW region of the UK. Model–observation validation is performed at two nearshore and one estuarine ADCP (acoustic Doppler current profiler) stations in Liverpool Bay, a hypertidal region of freshwater influence (ROFI) with vast intertidal areas. The period of study covers both calm and extreme conditions to test the robustness of the 10 m wind stress component of the Coupled Ocean–Atmosphere Response Experiment (COARE) bulk formulae and the standard Charnock relation. In this coastal application, a realistic barotropic–baroclinic simulation of the circulation and surge elevation is set up, demonstrating that greater accuracy occurs when using the Charnock relation, with a constant Charnock coefficient of 0.0185, for surface wind stress during this one-month period.
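
    The Charnock relation can be sketched as a fixed-point iteration for the neutral 10 m drag coefficient; the 0.0185 Charnock coefficient is the value used in the study, while the air density and the first guess are conventional assumed values:

    ```python
    from math import log

    # von Karman constant, gravity, and the study's Charnock coefficient.
    KAPPA, G, ALPHA = 0.4, 9.81, 0.0185

    def wind_stress(u10, rho_air=1.225, iters=30):
        """Neutral 10 m drag coefficient and wind stress (N m^-2) from the
        Charnock relation z0 = ALPHA * ustar^2 / G, iterated jointly with
        the log-law ustar = KAPPA * u10 / ln(10 / z0)."""
        ustar = 0.03 * u10  # rough first guess for the friction velocity
        for _ in range(iters):
            z0 = ALPHA * ustar ** 2 / G          # roughness length (m)
            ustar = KAPPA * u10 / log(10.0 / z0)
        cd = (ustar / u10) ** 2
        return cd, rho_air * cd * u10 ** 2
    ```

    Because the roughness length grows with the friction velocity, the drag coefficient increases with wind speed, which is the behaviour that distinguishes this closure from a constant-drag formulation.
    
    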

  6. Adrenaline (epinephrine) microcrystal sublingual tablet formulation: enhanced absorption in a preclinical model. (United States)

    Rawas-Qalaji, Mutasem; Rachid, Ousama; Mendez, Belacryst A; Losada, Annette; Simons, F Estelle R; Simons, Keith J


    For anaphylaxis treatment in community settings, adrenaline (epinephrine) administration using an auto-injector in the thigh is universally recommended. Despite this, many people at risk of anaphylaxis in community settings do not carry their prescribed auto-injectors consistently and hesitate to use them when anaphylaxis occurs. The objective of this research was to study the effect of a substantial reduction in adrenaline (Epi) particle size to a few micrometres (Epi microcrystals (Epi-MC)) on enhancing adrenaline dissolution and increasing the rate and extent of sublingual absorption from a previously developed rapidly disintegrating sublingual tablet (RDST) formulation in a validated preclinical model. The in-vivo absorption of Epi-MC 20 mg RDSTs and Epi 40 mg RDSTs was evaluated in rabbits. Epi 0.3 mg intramuscular (IM) injection in the thigh and placebo RDSTs were used as positive and negative controls, respectively. Mean (standard deviation) areas under the plasma concentration vs time curves up to 60 min and Cmax from Epi-MC 20 mg and Epi 40 mg RDSTs did not differ significantly (P > 0.05) from Epi 0.3 mg IM injection. After adrenaline administration, regardless of route, pharmacokinetic parameters were significantly higher than endogenous adrenaline levels. Epi-MC RDSTs facilitated a twofold increase in Epi absorption and a 50% reduction in the sublingual dose. This novel sublingual tablet formulation is potentially useful for the first-aid treatment of anaphylaxis in community settings. © 2014 Royal Pharmaceutical Society.
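
    The reported pharmacokinetic comparisons rest on two noncompartmental quantities, which can be computed with the linear trapezoidal rule; the concentration-time values below are invented for illustration, not data from the study:

    ```python
    # Invented plasma concentration-time profile for a single animal.
    t = [0, 5, 10, 20, 30, 45, 60]           # time after dosing (min)
    c = [0.5, 2.8, 4.1, 3.5, 2.6, 1.7, 1.2]  # concentration (ng/mL)

    cmax = max(c)  # peak plasma concentration
    # Area under the curve from 0 to 60 min by the linear trapezoidal rule.
    auc = sum((c[i] + c[i + 1]) / 2.0 * (t[i + 1] - t[i])
              for i in range(len(t) - 1))
    ```

    Cmax reflects how fast the rate of absorption is, while the AUC reflects the extent of absorption; the abstract's "rate and extent" claims correspond to comparisons of exactly these two quantities across formulations.
    
    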

  7. Sustainable conjunctive water management in irrigated agriculture: Model formulation and application to the Yaqui Valley, Mexico (United States)

    Schoups, Gerrit; Addams, C. Lee; Minjares, José Luis; Gorelick, Steven M.


    This paper investigates strategies to alleviate the effects of droughts on the profitability and sustainability of irrigated agriculture. These strategies include conjunctive management of surface water and groundwater resources, and engineered improvements such as lining of irrigation canals and addition of regional pumping well capacity. A spatially distributed simulation-optimization model was developed for an irrigated system consisting of multiple surface water reservoirs and an alluvial aquifer. The simulation model consists of an agronomic component and simulators describing the hydrologic system. The physical models account for storage and flow through the reservoirs, routing through the irrigation canals, and regional groundwater flow. The agronomic model describes crop productivity as a function of irrigation quantity and salinity, and determines agricultural profit. A profit maximization problem was formulated and solved using large-scale constrained gradient-based optimization. The model was applied to a real-world conjunctive surface water/groundwater management problem in the Yaqui Valley, an irrigated agricultural region in Sonora, Mexico. The model reproduces recorded reductions in agricultural production during a historical drought. These reductions were caused by a decline in surface water availability and limited installed pumping capacity. Results indicate that the impact of the historical 8-year drought could have been significantly reduced without affecting profit in wet years by better managing surface water and groundwater resources. Namely, groundwater could have been more heavily relied upon and surface water allocation capped at a sustainable level as an operating rule. Lining the irrigation canals would have resulted in water savings of 30% of historical reservoir releases during wet years, which could have been used in subsequent drier years to increase agricultural production. The benefits of a greater reliance on groundwater pumping ...
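The operating rule suggested above (rely on the cheaper or more available source first, cap the other) can be caricatured in a few lines. This toy allocator is an illustration only, standing in for the paper's large-scale gradient-based optimization; names and costs are hypothetical:

```python
def allocate_water(demand, sw_avail, gw_cap, cost_sw, cost_gw):
    """Toy two-source allocation: meet crop water demand at least cost
    by drawing from the cheaper source first, subject to surface-water
    availability and installed pumping capacity."""
    sources = sorted([(cost_sw, sw_avail, "surface"),
                      (cost_gw, gw_cap, "ground")])
    plan, remaining = {}, demand
    for cost, cap, name in sources:
        take = min(cap, remaining)       # use this source up to its cap
        plan[name] = take
        remaining -= take
    plan["unmet"] = remaining            # demand left unserved, if any
    return plan
```

For example, a drought year with 60 units of surface water, 50 units of pumping capacity and 100 units of demand is fully served by exhausting the cheaper surface supply and pumping the remainder.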

  8. A quasi-stationary numerical model of atomized metal droplets, I: Model formulation

    DEFF Research Database (Denmark)

    Hattel, Jesper Henri; Pryds, Nini H; Thorborg, Jesper


    A mathematical model for accelerating powder particles by a gas and for their thermal behavior during flight has been developed. Usually, dealing with the solidification of metal droplets, the interaction between an array of droplets and the surrounding gas is not integrated into the modeling...... of such a process, e.g. in the literature the gas temperature is often modeled by an empirical expression. In the present model, however, the interaction between the enveloping gas and an array of droplets has been coupled and calculated numerically. The applicability of the empirical relation of the gas...... temperature proposed in the literature has been discussed in relation to the present model. One of the major advantages of the present modeling is that it provides a tool to predict the thermal behavior of droplets during flight without the need of experimental parameters, i.e. gas temperature. Furthermore...

  9. Measurement Models for Reasoned Action Theory


    Hennessy, Michael; Bleakley, Amy; Fishbein, Martin


    Quantitative researchers distinguish between causal and effect indicators. What are the analytic problems when both types of measures are present in a quantitative reasoned action analysis? To answer this question, we use data from a longitudinal study to estimate the association between two constructs central to reasoned action theory: behavioral beliefs and attitudes toward the behavior. The belief items are causal indicators that define a latent variable index while the attitude items are effect indicators that reflect the operation of a latent variable scale. We identify the issues when effect and causal indicators are present in a single analysis and conclude that both types of indicators can be incorporated in the analysis of data based on the reasoned action approach.

  10. In vitro dissolution models for the prediction of in vivo performance of an oral mesoporous silica formulation. (United States)

    McCarthy, Carol A; Faisal, Waleed; O'Shea, Joseph P; Murphy, Colm; Ahern, Robert J; Ryan, Katie B; Griffin, Brendan T; Crean, Abina M


    Drug release from mesoporous silica systems has been widely investigated in vitro using USP Type II (paddle) dissolution apparatus. However, it is not clear if the observed enhanced in vitro dissolution can forecast drug bioavailability in vivo. In this study, the ability of different in vitro dissolution models to predict in vivo oral bioavailability in a pig model was examined. The fenofibrate-loaded mesoporous silica formulation was compared directly to a commercial reference product, Lipantil Supra®. Three in vitro dissolution methods were considered: USP Type II (paddle) apparatus, USP Type IV (flow-through cell) apparatus, and a USP IV transfer model (incorporating an SGF to FaSSIF-V2 media transfer). In silico modelling, using a physiologically based pharmacokinetic modelling and simulation software package (GastroPlus™) to generate in vitro/in vivo relationships, was also investigated. The study demonstrates that the in vitro dissolution performance of a mesoporous silica formulation varies depending on the dissolution apparatus utilised and the experimental design. The findings show that the USP IV transfer model was the best predictor of in vivo bioavailability. The USP Type II (paddle) apparatus was not effective at forecasting in vivo behaviour. This observation is likely due to hydrodynamic differences between the two apparatuses and the ability of the transfer model to better simulate gastrointestinal transit. The transfer model is advantageous in forecasting in vivo behaviour for formulations which promote drug supersaturation and, as a result, are prone to precipitation to a more energetically favourable, less soluble form. The USP IV transfer model could prove useful in future mesoporous silica formulation development. In silico modelling has the potential to assist in this process. However, further investigation is required to overcome the limitations of the model for solubility enhancing formulations. Copyright © 2017 Elsevier B.V. All rights reserved.

  11. Formulation of consumables management models. Development approach for the mission planning processor working model (United States)

    Connelly, L. C.


    The mission planning processor is a user oriented tool for consumables management and is part of the total consumables subsystem management concept. The approach to be used in developing a working model of the mission planning processor is documented. The approach includes top-down design, structured programming techniques, and application of NASA approved software development standards. This development approach: (1) promotes cost effective software development, (2) enhances the quality and reliability of the working model, (3) encourages the sharing of the working model through a standard approach, and (4) promotes portability of the working model to other computer systems.

  12. Formulation of consumables management models: Test plan for the mission planning processor working model (United States)

    Connelly, L. C.


    The test plan and test procedures to be used in the verification and validation of the software being implemented in the mission planning processor working model program are documented. The mission planning processor is a user oriented tool for consumables management and is part of the total consumables subsystem management concept. An overview of the working model is presented. Execution of the test plan will comprehensively exercise the working model software. An overview of the test plan, including a testing schedule, is presented along with the test plan for the unit, module, and system levels. The criteria used to validate the working model results for each consumables subsystem are discussed.

  13. Posterior Predictive Model Checking for Multidimensionality in Item Response Theory (United States)

    Levy, Roy; Mislevy, Robert J.; Sinharay, Sandip


    If data exhibit multidimensionality, key conditional independence assumptions of unidimensional models do not hold. The current work pursues posterior predictive model checking, a flexible family of model-checking procedures, as a tool for criticizing models due to unaccounted-for dimensions in the context of item response theory. Factors…

  14. Holomorphy without supersymmetry in the Standard Model Effective Field Theory

    Directory of Open Access Journals (Sweden)

    Rodrigo Alonso


    Full Text Available The anomalous dimensions of dimension-six operators in the Standard Model Effective Field Theory (SMEFT) respect holomorphy to a large extent. The holomorphy conditions are reminiscent of supersymmetry, even though the SMEFT is not a supersymmetric theory.

  15. Reframing Leadership Pedagogy through Model and Theory Building. (United States)

    Mello, Jeffrey A.


    Leadership theories formed the basis of a course assignment with four objectives: understanding complex factors affecting leadership dynamics, developing abilities to assess organizational factors influencing leadership, practicing model and theory building, and viewing leadership from a multicultural perspective. The assignment was to develop a…

  16. Theory analysis of the Dental Hygiene Human Needs Conceptual Model. (United States)

    MacDonald, L; Bowen, D M


    Theories provide a structural knowing about concept relationships, practice intricacies, and intuitions and thus shape the distinct body of the profession. Capturing ways of knowing and being is essential to any profession's practice, education and research. This process defines the phenomenon of the profession - its existence or experience. Theory evaluation is a systematic criterion-based assessment of a specific theory. This study presents a theory analysis of the Dental Hygiene Human Needs Conceptual Model (DH HNCM). Using the Walker and Avant Theory Analysis, a seven-step process, the DH HNCM was analysed and evaluated for its meaningfulness and contribution to dental hygiene. The steps include the following: (i) investigate the origins; (ii) examine relationships of the theory's concepts; (iii) assess the logic of the theory's structure; (iv) consider the usefulness to practice; (v) judge the generalizability; (vi) evaluate the parsimony; and (vii) appraise the testability of the theory. Human needs theory in nursing and Maslow's Hierarchy of Need Theory prompted this theory's development. The DH HNCM depicts four concepts based on the paradigm concepts of the profession: client, health/oral health, environment and dental hygiene actions, and includes eleven validated human needs that evolved over time to eight. It is logical, simple, allows scientific predictions and testing, and provides a unique lens for the dental hygiene practitioner. With this model, dental hygienists have entered practice knowing they enable clients to meet their human needs. For the DH HNCM, theory analysis affirmed that the model is reasonable and insightful and adds to the dental hygiene profession's epistemology and ontology. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  17. Theory, modeling, and simulation annual report, 1992

    Energy Technology Data Exchange (ETDEWEB)


    This report briefly discusses research on the following topics: development of electronic structure methods; modeling molecular processes in clusters; modeling molecular processes in solution; modeling molecular processes in separations chemistry; modeling interfacial molecular processes; modeling molecular processes in the atmosphere; methods for periodic calculations on solids; chemistry and physics of minerals; graphical user interfaces for computational chemistry codes; visualization and analysis of molecular simulations; integrated computational chemistry environment; and benchmark computations.

  18. Improved Stability of a Model IgG3 by DoE-Based Evaluation of Buffer Formulations

    Directory of Open Access Journals (Sweden)

    Brittany K. Chavez


    Full Text Available Formulating appropriate storage conditions for biopharmaceutical proteins is essential for ensuring their stability and thereby their purity, potency, and safety over their shelf-life. Using a model murine IgG3 produced in a bioreactor system, multiple formulation compositions were systematically explored in a DoE design to optimize the stability of a challenging antibody formulation worst case. The stability of the antibody in each buffer formulation was assessed by UV/VIS absorbance at 280 nm and 410 nm and by size exclusion high performance liquid chromatography (SEC) to determine overall solubility, opalescence, and aggregate formation, respectively. Upon preliminary testing, acetate was eliminated as a potential storage buffer due to significant visible precipitate formation. An additional 2^4 full factorial DoE was performed that combined the stabilizing effect of arginine with the buffering capacity of histidine. From this final DoE, an optimized formulation of 200 mM arginine, 50 mM histidine, and 100 mM NaCl at a pH of 6.5 was identified to substantially improve stability under long-term storage conditions and after multiple freeze/thaw cycles. Thus, our data highlight the power of DoE-based formulation screening approaches even for challenging monoclonal antibody molecules.
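A two-level full factorial design of the kind described can be enumerated directly from the factor definitions. The sketch below echoes the optimized formulation components, but the factor names and levels are illustrative, not the study's actual design table:

```python
from itertools import product

def full_factorial(factors):
    """Enumerate a two-level full factorial design.

    `factors` maps factor name -> (low, high); four factors give a
    2^4 design of 16 runs, each run a dict of factor settings.
    """
    names = list(factors)
    return [dict(zip(names, levels))
            for levels in product(*(factors[n] for n in names))]

# Hypothetical factors/levels loosely based on the optimized formulation
design = full_factorial({
    "arginine_mM":  (0, 200),
    "histidine_mM": (0, 50),
    "NaCl_mM":      (0, 100),
    "pH":           (5.5, 6.5),
})
```

Each of the 16 runs would then be prepared and assessed by the UV/VIS and SEC read-outs described above.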

  19. Modeling Multivariate Volatility Processes: Theory and Evidence

    Directory of Open Access Journals (Sweden)

    Jelena Z. Minovic


    Full Text Available This article presents theoretical and empirical methodology for the estimation and modeling of multivariate volatility processes. It surveys the model specifications and the estimation methods. Multivariate GARCH models covered are VEC (initially due to Bollerslev, Engle and Wooldridge, 1988), diagonal VEC (DVEC), BEKK (named after Baba, Engle, Kraft and Kroner, 1995), the Constant Conditional Correlation model (CCC; Bollerslev, 1990), and the Dynamic Conditional Correlation (DCC) models of Tse and Tsui (2002) and Engle (2002). I illustrate the approach by applying it to daily data from the Belgrade stock exchange: I examine two pairs of daily log returns for stocks and an index, report the results obtained, and compare them with the restricted versions of the BEKK, DVEC and CCC representations. The estimation methods used are maximum log-likelihood (in the BEKK and DVEC models) and a two-step approach (in the CCC model).
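Of the specifications surveyed, CCC is the simplest to sketch: each series follows its own GARCH(1,1) variance recursion and the conditional correlation is held constant. The simulation below uses illustrative parameters, not estimates from the Belgrade data:

```python
import math
import random

def simulate_ccc_garch(T=500, rho=0.3, omega=0.05, alpha=0.1,
                       beta=0.85, seed=7):
    """Simulate a bivariate CCC-GARCH(1,1) process: two univariate
    GARCH(1,1) variances, shocks with constant correlation rho."""
    rng = random.Random(seed)
    h1 = h2 = omega / (1 - alpha - beta)      # unconditional variance
    r1s, r2s = [], []
    for _ in range(T):
        z1 = rng.gauss(0, 1)
        z2 = rho * z1 + math.sqrt(1 - rho ** 2) * rng.gauss(0, 1)
        e1 = math.sqrt(h1) * z1               # return of series 1
        e2 = math.sqrt(h2) * z2               # return of series 2
        r1s.append(e1)
        r2s.append(e2)
        h1 = omega + alpha * e1 ** 2 + beta * h1   # GARCH(1,1) update
        h2 = omega + alpha * e2 ** 2 + beta * h2
    return r1s, r2s
```

The sample correlation of the simulated returns should sit near the chosen rho, while each marginal series shows the usual volatility clustering.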

  20. Mathematical model of thermal explosion: dual variational formulation of the nonlinear problem, alternative functionals

    Directory of Open Access Journals (Sweden)

    V. S. Zarubin


    in its plane, and in the circular cylinder unlimited in length. An approximate numerical solution of the differential equation that is included in a nonlinear mathematical model of the thermal explosion enables us to obtain quantitative estimates of the combination of determining parameters at which the limit state occurs in areas of not only canonical form. The capability to study the thermal explosion state can be extended through the development of mathematical modeling methods, including methods of model analysis, to describe the thermal state of solids. To analyse a mathematical model of the thermal explosion in a homogeneous solid, the paper uses a variational approach based on the dual variational formulation of the appropriate nonlinear stationary problem of heat conduction in such a body. This formulation contains two alternative functionals that reach matching values at their stationary points corresponding to the true temperature distribution. This feature allows one not only to obtain an approximate quantitative estimate of the combination of parameters that determine the thermal explosion state, but also to find the greatest possible error of such an estimate.
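The stationary problem underlying such models is usually stated in Frank-Kamenetskii form; the record itself does not give the equation, so the following is a standard textbook statement for orientation:

```latex
\nabla^2 \theta + \delta \, e^{\theta} = 0 \quad \text{in } \Omega,
\qquad \theta = 0 \quad \text{on } \partial\Omega,
```

where \theta is the dimensionless excess temperature and \delta the Frank-Kamenetskii parameter combining reaction heat, activation energy, conductivity and body size; the limit (thermal explosion) state corresponds to the largest \delta for which a stationary solution exists.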

  1. Radiative transfer theory applied to ocean bottom modeling. (United States)

    Quijano, Jorge E; Zurk, Lisa M


    Research on the propagation of acoustic waves in the ocean bottom sediment is of interest for active sonar applications such as target detection and remote sensing. The interaction of acoustic energy with the sea floor sublayers is usually modeled with techniques based on the full solution of the wave equation, which sometimes leads to mathematically intractable problems. An alternative way to model wave propagation in layered media containing random scatterers is the radiative transfer (RT) formulation, which is a well established technique in the electromagnetics community and is based on the principle of conservation of energy. In this paper, the RT equation is used to model the backscattering of acoustic energy from a layered elastic bottom sediment containing distributions of independent scatterers due to a constant single frequency excitation in the water column. It is shown that the RT formulation provides insight into the physical phenomena of scattering and conversion of energy between waves of different polarizations.
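For orientation, the radiative transfer balance of extinction against in-scattering can be written in its generic scalar form (the paper's elastic formulation tracks multiple wave polarizations, which this simplified statement omits):

```latex
\hat{s} \cdot \nabla I(\mathbf{r}, \hat{s})
  = -\kappa_e \, I(\mathbf{r}, \hat{s})
  + \int_{4\pi} P(\hat{s}, \hat{s}') \, I(\mathbf{r}, \hat{s}') \, d\Omega' ,
```

where I is the specific intensity at position r in direction s-hat, \kappa_e the extinction coefficient, and P the phase function coupling energy from direction s-hat-prime into s-hat; the formulation conserves energy by construction.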

  2. Explicit Nonlinear Model Predictive Control Theory and Applications

    CERN Document Server

    Grancharova, Alexandra


    Nonlinear Model Predictive Control (NMPC) has become the accepted methodology to solve complex control problems related to process industries. The main motivation behind explicit NMPC is that an explicit state feedback law avoids the need for executing a numerical optimization algorithm in real time. The benefits of an explicit solution, in addition to the efficient on-line computations, include also verifiability of the implementation and the possibility to design embedded control systems with low software and hardware complexity. This book considers the multi-parametric Nonlinear Programming (mp-NLP) approaches to explicit approximate NMPC of constrained nonlinear systems, developed by the authors, as well as their applications to various NMPC problem formulations and several case studies. The following types of nonlinear systems are considered, resulting in different NMPC problem formulations: Ø  Nonlinear systems described by first-principles models and nonlinear systems described by black-box models; ...

  3. An Evolutionary Game Theory Model of Spontaneous Brain Functioning

    National Research Council Canada - National Science Library

    Dario Madeo; Agostino Talarico; Alvaro Pascual-Leone; Chiara Mocenni; Emiliano Santarnecchi


    ... conditions, making its understanding of fundamental importance in modern neuroscience. Here we present a theoretical and mathematical model based on an extension of evolutionary game theory on networks (EGN...

  4. Neurocognitive networks: findings, models, and theory. (United States)

    Meehan, Timothy P; Bressler, Steven L


    Through its early history, cognitive neuroscience largely followed a modular paradigm wherein high-level cognitive functions were mapped onto locally segregated brain regions. However, recent evidence drives a continuing shift away from modular theories of cognitive brain function, and toward theories which hold that cognition arises from the integrated activity of large-scale, distributed networks of brain regions. A growing consensus favors the fundamental concept of this new paradigm: the large-scale cognitive brain network, or neurocognitive network. This consensus was the motivation for Neurocognitive Networks 2010 (NCN 2010), a conference sponsored by the Cognitive Neuroscience Program of the National Science Foundation, organized by Drs. Steven Bressler and Craig Richter of Florida Atlantic University (FAU), and held at FAU in Boca Raton, FL on January 29-30, 2010. NCN 2010 gathered together some of today's leading investigators of neurocognitive networks. This paper serves to review their presentations as they relate to the paradigm of neurocognitive networks, as well as to compile the emergent themes, questions, and possible future research directions that arose from the conference. Copyright © 2012 Elsevier Ltd. All rights reserved.

  5. An inverse problem formulation for parameter estimation of a reaction-diffusion model of low grade gliomas. (United States)

    Gholami, Amir; Mang, Andreas; Biros, George


    We present a numerical scheme for solving a parameter estimation problem for a model of low-grade glioma growth. Our goal is to estimate the spatial distribution of tumor concentration, as well as the magnitude of anisotropic tumor diffusion. We use a constrained optimization formulation with a reaction-diffusion model that results in a system of nonlinear partial differential equations. In our formulation, we estimate the parameters using partially observed, noisy tumor concentration data at two different time instances, along with white matter fiber directions derived from diffusion tensor imaging. The optimization problem is solved with a Gauss-Newton reduced space algorithm. We present the formulation and outline the numerical algorithms for solving the resulting equations. We test the method using a synthetic dataset and compute the reconstruction error for different noise levels and detection thresholds for monofocal and multifocal test cases.
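Models of this class are typically built on an anisotropic reaction-diffusion (Fisher-KPP) equation; a representative form consistent with the description above, though the paper's exact notation may differ, is:

```latex
\frac{\partial c}{\partial t}
  = \nabla \cdot \left( \mathbf{D}(\mathbf{x}) \, \nabla c \right)
  + \rho \, c \, (1 - c),
```

where c is the normalized tumor cell concentration, D(x) the anisotropic diffusion tensor (its principal directions informed by the DTI-derived fiber orientations), and \rho the proliferation rate; the inverse problem then estimates the initial tumor distribution and the diffusion magnitude from the two partially observed time instances.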

  6. Conceptual development: an adaptive resonance theory model of polysemy (United States)

    Dunbar, George L.


    Adaptive Resonance Theory provides a model of pattern classification that addresses the plasticity–stability dilemma and allows a neural network to detect when to construct a new category without the assistance of a supervisor. We show that Adaptive Resonance Theory can be applied to the study of natural concept development. Specifically, a model is presented which is able to categorize different usages of a common noun and group the polysemous senses appropriately.
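The core ART mechanism relied on here, category choice followed by a vigilance test whose failure triggers creation of a new category, can be sketched for binary patterns. The choice and match functions below are simplified stand-ins, not the paper's exact network:

```python
def art1_categorize(patterns, vigilance=0.7):
    """Minimal ART-1-style sketch: each binary pattern (a collection of
    active feature indices) is matched against stored prototypes; if the
    best candidate fails the vigilance test, a new category is created,
    otherwise the winning prototype learns by intersection."""
    categories = []       # each prototype is a set of active features
    assignments = []
    for p in patterns:
        p = set(p)
        best, best_score = None, -1.0
        for i, proto in enumerate(categories):
            overlap = len(p & proto)
            score = overlap / (0.5 + len(proto))   # choice function
            match = overlap / len(p)               # vigilance criterion
            if match >= vigilance and score > best_score:
                best, best_score = i, score
        if best is None:
            categories.append(p)                   # resonance failed: new category
            assignments.append(len(categories) - 1)
        else:
            categories[best] &= p                  # fast learning: intersect
            assignments.append(best)
    return assignments, categories
```

Two overlapping "usages" fall into one category while a disjoint one forces a new category, which is the unsupervised novelty detection the abstract refers to.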

  7. Translating caring theory into practice: the Carolina Care Model. (United States)

    Tonges, Mary; Ray, Joel


    This article describes how one organization operationalized Swanson Caring Theory and changed practice to ensure consistently high standards of performance. The Carolina Care Model developed at the University of North Carolina Hospitals is designed to actualize caring theory, support practices that promote patient satisfaction, and transform cultural norms. Evaluation suggests that this approach to care delivery enhances patients' and families' hospital experience and facilitates desired outcomes. The authors outline the Professional Practice Model, key characteristics of Carolina Care, links to caring theory, and development and implementation methodologies.

  8. The Number of Atomic Models of Uncountable Theories


    Ulrich, Douglas


    We show there exists a complete theory in a language of size continuum possessing a unique atomic model which is not constructible. We also show it is consistent with $ZFC + \\aleph_1 < 2^{\\aleph_0}$ that there is a complete theory in a language of size $\\aleph_1$ possessing a unique atomic model which is not constructible. Finally we show it is consistent with $ZFC + \\aleph_1 < 2^{\\aleph_0}$ that for every complete theory $T$ in a language of size $\\aleph_1$, if $T$ has uncountable atomic mod...

  9. Biorelevant Dissolution Models for a Weak Base To Facilitate Formulation Development and Overcome Reduced Bioavailability Caused by Hypochlorhydria or Achlorhydria. (United States)

    Kou, Dawen; Dwaraknath, Sudharsan; Fischer, Yannick; Nguyen, Daniel; Kim, Myeonghui; Yiu, Hiuwing; Patel, Preeti; Ng, Tania; Mao, Chen; Durk, Matthew; Chinn, Leslie; Winter, Helen; Wigman, Larry; Yehl, Peter


    In this study, two dissolution models were developed to achieve in vitro-in vivo relationship for immediate release formulations of Compound-A, a poorly soluble weak base with pH-dependent solubility and low bioavailability in hypochlorhydric and achlorhydric patients. The dissolution models were designed to approximate the hypo-/achlorhydric and normal fasted stomach conditions after a glass of water was ingested with the drug. The dissolution data from the two models were predictive of the relative in vivo bioavailability of various formulations under the same gastric condition, hypo-/achlorhydric or normal. Furthermore, the dissolution data were able to estimate the relative performance under hypo-/achlorhydric and normal fasted conditions for the same formulation. Together, these biorelevant dissolution models facilitated formulation development for Compound-A by identifying the right type and amount of key excipient to enhance bioavailability and mitigate the negative effect of hypo-/achlorhydria due to drug-drug interaction with acid-reducing agents. The dissolution models use readily available USP apparatus 2, and their broader utility can be evaluated on other BCS 2B compounds with reduced bioavailability caused by hypo-/achlorhydria.
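The gastric-to-intestinal transfer idea behind such models can be sketched as a two-compartment mass balance: dissolved drug empties first-order from the stomach into a FaSSIF-like compartment, where any amount above an apparent solubility limit precipitates first-order. All parameter values below are hypothetical illustration numbers, not measured constants:

```python
def transfer_model(dose_mg=100.0, k_empty=0.046, k_prec=0.02,
                   sol_fassif_mg=40.0, dt=0.5, t_end=180.0):
    """Explicit-Euler sketch of a gastric-to-intestinal transfer model.

    Rate constants in 1/min, masses in mg; no absorption term, so the
    dissolved intestinal amount simply accumulates and precipitates.
    """
    gastric, dissolved, precipitated = dose_mg, 0.0, 0.0
    t, profile = 0.0, []
    while t <= t_end:
        profile.append((t, dissolved))
        emptied = k_empty * gastric * dt           # first-order emptying
        gastric -= emptied
        dissolved += emptied
        excess = max(0.0, dissolved - sol_fassif_mg)
        lost = k_prec * excess * dt                # precipitation above cap
        dissolved -= lost
        precipitated += lost
        t += dt
    return {"profile": profile, "gastric": gastric,
            "dissolved": dissolved, "precipitated": precipitated}
```

The dissolved profile rises, overshoots the solubility cap (transient supersaturation), and then decays toward it as precipitation proceeds, which is the behaviour the transfer set-up is designed to expose.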

  10. Formulation and evaluation of topical herbal gel for the treatment of arthritis in animal model

    Directory of Open Access Journals (Sweden)

    Rajasekaran Aiyalu

    Full Text Available ABSTRACT The objective of the study is to formulate and evaluate a topical herbal gel containing Cardiospermum halicacabum and Vitex negundo leaf extracts for their anti-arthritic activity in rats. Twelve herbal gel formulations were prepared using 1.5% of the gelling agents carbopol 934 (F1-F6) and carbopol 940 (F6-F12), and they were evaluated for physical appearance, net content, viscosity, extrudability, pH, spreadability, in vitro diffusion profile and primary skin irritation tests. The stability study for the topical herbal gel formulation was done as per ICH guidelines and anti-arthritic activity was evaluated by the Freund's Complete Adjuvant (FCA)-induced arthritis method. Assessment of body weight, paw volume, hematological and biochemical parameters, histopathological examination and in vitro determination of serum biomarkers were also carried out. Formulated gels were homogeneous, stable and complied with the guidelines. Among the formulations, F4 showed better release (98.4%) characteristics than the other formulations. No erythema or edema was observed in the skin irritation test, confirming the gel was non-toxic and safe. Topical application of the herbal gel F4 containing carbopol 934 displayed significant (p < 0.001) anti-arthritic activity compared to diseased rats. Reduction in paw volume, no agglutination in C-reactive protein and rheumatic factor, reduction in TNF level, regaining of normal hematological and biochemical parameters, reduction in spleen and thymus weight and histopathological examination supported the anti-arthritic activity of the gel formulation.

  11. Stochastic Formulation of the Resolution of Identity: Application to Second Order Møller-Plesset Perturbation Theory. (United States)

    Takeshita, Tyler Y; de Jong, Wibe A; Neuhauser, Daniel; Baer, Roi; Rabani, Eran


    A stochastic orbital approach to the resolution of identity (RI) approximation for 4-index electron repulsion integrals (ERIs) is presented. The stochastic RI-ERIs are then applied to second order Møller-Plesset perturbation theory (MP2) utilizing a multiple stochastic orbital approach. The introduction of multiple stochastic orbitals results in an O(N_AO^3) scaling for both the stochastic RI-ERIs and stochastic RI-MP2, N_AO being the number of basis functions. For a range of water clusters we demonstrate that this method exhibits a small prefactor and observed scalings of O(N_e^2.4) for total energies and O(N_e^3.1) for forces (N_e being the number of correlated electrons), outperforming MP2 for clusters with as few as 21 water molecules.
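The deterministic RI factorization that the stochastic approach builds on is the standard one; in a common notation (not taken verbatim from the paper):

```latex
(ij|kl) \;\approx\; \sum_{P,Q} (ij|P)\, \left[\mathbf{V}^{-1}\right]_{PQ}\, (Q|kl),
\qquad V_{PQ} = (P|Q),
```

where P and Q index an auxiliary fitting basis. The stochastic variant replaces the explicit sum over auxiliary functions with an average over random orbital vectors, which is what reduces the formal scaling of assembling and contracting the ERIs.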

  12. Bianchi class A models in Sáez-Ballester's theory (United States)

    Socorro, J.; Espinoza-García, Abraham


    We apply the Sáez-Ballester (SB) theory to Bianchi class A models, with a barotropic perfect fluid in a stiff matter epoch. We obtain exact classical solutions à la Hamilton for the Bianchi type I, II and VI_{h=-1} models. We also find exact quantum solutions to all Bianchi class A models employing a particular ansatz for the wave function of the universe.

  13. A Dynamic Systems Theory Model of Visual Perception Development (United States)

    Coté, Carol A.


    This article presents a model for understanding the development of visual perception from a dynamic systems theory perspective. It contrasts to a hierarchical or reductionist model that is often found in the occupational therapy literature. In this proposed model vision and ocular motor abilities are not foundational to perception, they are seen…

  14. Biotherapeutic protein formulation variables influence protein integrity and can promote post-translational modifications as shown using chicken egg white lysozyme as a model system. (United States)

    Gourbatsi, Evdoxia; Povey, Jane; Uddin, Shahid; Smales, C Mark


    The effects of different formulation variables on protein integrity were investigated using lysozyme as a model protein for the development of biotherapeutic protein formulations for use in the clinic. Buffer composition/concentration was the key variable among the formulation reagents investigated in determining lysozyme stability and authenticity, independent of protein concentration, whilst the storage temperature and time, not surprisingly, were also key variables. Tryptic peptide mapping of the protein showed that the modifications occurred when formulated under specific conditions but not others. A model peptide system was developed that reflected the same behavior under formulation conditions as intact lysozyme. Peptide models may mirror the stability of proteins, or regions of proteins, in the same formulations and be used to help develop a rapid screen of formulations for stabilisation of biotherapeutic proteins.

  15. Measurement Models for Reasoned Action Theory. (United States)

    Hennessy, Michael; Bleakley, Amy; Fishbein, Martin


    Quantitative researchers distinguish between causal and effect indicators. What are the analytic problems when both types of measures are present in a quantitative reasoned action analysis? To answer this question, we use data from a longitudinal study to estimate the association between two constructs central to reasoned action theory: behavioral beliefs and attitudes toward the behavior. The belief items are causal indicators that define a latent variable index while the attitude items are effect indicators that reflect the operation of a latent variable scale. We identify the issues when effect and causal indicators are present in a single analysis and conclude that both types of indicators can be incorporated in the analysis of data based on the reasoned action approach.

  16. Modeling acquaintance networks based on balance theory

    Directory of Open Access Journals (Sweden)

    Vukašinović Vida


    Full Text Available An acquaintance network is a social structure made up of a set of actors and the ties between them. These ties change dynamically as a consequence of incessant interactions between the actors. In this paper we introduce a social network model called the Interaction-Based (IB) model that involves well-known sociological principles. The connections between the actors and the strength of the connections are influenced by the continuous positive and negative interactions between the actors and, vice versa, the future interactions are more likely to happen between the actors that are connected with stronger ties. The model is also inspired by the social behavior of animal species, particularly that of ants in their colony. A model evaluation showed that the IB model turned out to be sparse. The model has a small diameter and an average path length that grows in proportion to the logarithm of the number of vertices. The clustering coefficient is relatively high, and its value stabilizes in larger networks. The degree distributions are slightly right-skewed. In the mature phase of the IB model, i.e., when the number of edges does not change significantly, most of the network properties do not change significantly either. The IB model was found to be the best of all the compared models in simulating the e-mail URV (University Rovira i Virgili of Tarragona) network because the properties of the IB model more closely matched those of the e-mail URV network than those of the other models.
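The reinforcement loop at the heart of the IB model, ties strengthened by interactions and interactions drawn preferentially along strong ties, can be sketched as follows. This toy version uses positive interactions only (the full model also includes negative interactions), and all parameters are illustrative:

```python
import random

def simulate_ib_network(n=30, steps=2000, p_new=0.1, seed=1):
    """Toy interaction-based network growth: at each step two actors
    interact; with probability p_new the pair is chosen at random,
    otherwise an existing tie is selected in proportion to its
    strength, and each interaction strengthens the chosen tie."""
    rng = random.Random(seed)
    weight = {}                      # frozenset({i, j}) -> tie strength
    for _ in range(steps):
        if not weight or rng.random() < p_new:
            i, j = rng.sample(range(n), 2)        # exploratory contact
            pair = frozenset((i, j))
        else:
            pairs, strengths = zip(*weight.items())
            pair = rng.choices(pairs, weights=strengths)[0]  # reinforcement
        weight[pair] = weight.get(pair, 0) + 1
    return weight
```

Because reinforcement dominates, a few ties accumulate most of the interaction weight while the edge set stays sparse relative to the complete graph, qualitatively echoing the sparsity reported above.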

  17. Description of diffraction grating experiments for photons and electrons in Feynman's space-time formulation of quantum mechanics: The quantum origins of classical wave theories of light and massive particles


Field, J. H.


    The five laws of relativistic quantum mechanics, according to Feynman's path integral formulation, are concisely stated and applied to experiments. Reflection diffraction grating experiments for both photons and electrons are analysed, in particular the Davisson-Germer experiment in which the wave-like property of electrons was first established. It is shown how classical, purely spatial, effective wave theories for both photons and electrons are predicted by the path integral formulation of ...

  18. Modeling in applied sciences a kinetic theory approach

    CERN Document Server

    Pulvirenti, Mario


Modeling complex biological, chemical, and physical systems, in the context of spatially heterogeneous media, is a challenging task for scientists and engineers using traditional methods of analysis. Modeling in Applied Sciences is a comprehensive survey of modeling large systems using kinetic equations, in particular the Boltzmann equation and its generalizations. An interdisciplinary group of leading authorities carefully develops the foundations of kinetic models and discusses the connections and interactions between model theories, qualitative and computational analysis and real-world applications. This book provides a thoroughly accessible and lucid overview of the different aspects, models, computations, and methodology of the kinetic-theory modeling process. Topics and Features: * Integrated modeling perspective utilized in all chapters * Fluid dynamics of reacting gases * Self-contained introduction to kinetic models * Becker–Döring equations * Nonlinear kinetic models with chemical reactions * Kinet...

  19. Mathematical Modelling and New Theories of Learning. (United States)

    Boaler, Jo


    Demonstrates the importance of expanding notions of learning beyond knowledge to the practices in mathematics classrooms. Considers a three-year study of students who learned through mathematical modeling. Shows that a modeling approach encouraged the development of a range of important practices in addition to knowledge that were useful in real…

  20. Baldrige Theory into Practice: A Generic Model (United States)

    Arif, Mohammed


Purpose: The education system globally has moved from a push-based, producer-centric system to a pull-based, customer-centric system. The Malcolm Baldrige Quality Award (MBQA) model is one of the latest additions to the pull-based models. The purpose of this paper is to develop a generic framework for MBQA that can be used by…

  1. Optimal transportation networks models and theory

    CERN Document Server

    Bernot, Marc; Morel, Jean-Michel


The transportation problem can be formalized as the problem of finding the optimal way to transport a given measure into another with the same mass. In contrast to the Monge-Kantorovitch problem, recent approaches model the branched structure of such supply networks as minima of an energy functional whose essential feature is to favour wide roads. Such a branched structure is observable in ground transportation networks, in draining and irrigation systems, in electrical power supply systems and in natural counterparts such as blood vessels or the branches of trees. These lectures provide mathematical proof of several existence, structure and regularity properties empirically observed in transportation networks. The link with previous discrete physical models of irrigation, with erosion models in geomorphology, and with discrete telecommunication and transportation models is discussed. It is mathematically proven that most of them fit the simple model sketched in this volume.

  2. [Formulation of combined predictive indicators using logistic regression model in predicting sepsis and prognosis]. (United States)

    Duan, Liwei; Zhang, Sheng; Lin, Zhaofen


To explore the method and performance of using multiple indices to diagnose sepsis and to predict the prognosis of severely ill patients. Critically ill patients at first admission to the intensive care unit (ICU) of Changzheng Hospital, Second Military Medical University, from January 2014 to September 2015 were enrolled if the following conditions were satisfied: (1) patients were 18-75 years old; (2) the length of ICU stay was more than 24 hours; (3) all records of the patients were available. Patient data were collected by searching the electronic medical record system. A logistic regression model was formulated to create the new combined predictive indicator, and the receiver operating characteristic (ROC) curve for the new indicator was built. The areas under the ROC curve (AUC) of the new indicator and the original ones were compared. The optimal cut-off point was obtained where the Youden index reached its maximum value. Diagnostic parameters such as sensitivity, specificity and predictive accuracy were also calculated for comparison. Finally, individual values were substituted into the equation to test the performance in predicting clinical outcomes. A total of 362 patients (218 males and 144 females) were enrolled in our study and 66 patients died. The average age was (48.3±19.3) years. (1) For the predictive model containing only categorical covariates [including procalcitonin (PCT), lipopolysaccharide (LPS), infection, white blood cell count (WBC) and fever], increased PCT, increased WBC and fever were demonstrated to be independent risk factors for sepsis in the logistic equation. The AUC for the new combined predictive indicator was higher than that of any other indicator, including PCT, LPS, infection, WBC and fever (0.930 vs. 0.661, 0.503, 0.570, 0.837, 0.800). The optimal cut-off value for the new combined predictive indicator was 0.518. Using the new indicator to diagnose sepsis, the sensitivity, specificity and diagnostic accuracy
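
The ROC/Youden machinery described above can be sketched with stdlib Python. The data and function name here are illustrative, not the study's; ties in the scores are handled only approximately by the per-point trapezoid.

```python
def roc_auc_and_youden(scores, labels):
    """Build a ROC curve from a combined predictive score, compute the
    AUC by the trapezoidal rule, and pick the cut-off maximising the
    Youden index J = sensitivity + specificity - 1."""
    pairs = sorted(zip(scores, labels), key=lambda p: -p[0])
    P = sum(labels)              # number of positives
    N = len(labels) - P          # number of negatives
    tp = fp = 0
    auc = 0.0
    prev_fpr = prev_tpr = 0.0
    best_j, best_cut = -1.0, None
    for s, y in pairs:           # sweep the threshold downwards
        if y:
            tp += 1
        else:
            fp += 1
        tpr, fpr = tp / P, fp / N
        auc += (fpr - prev_fpr) * (tpr + prev_tpr) / 2
        j = tpr - fpr            # equals sens + spec - 1
        if j > best_j:
            best_j, best_cut = j, s
        prev_fpr, prev_tpr = fpr, tpr
    return auc, best_cut, best_j
```

On a perfectly separable toy sample the function returns AUC = 1.0 and places the optimal cut-off at the lowest positive score, mirroring how the study's 0.518 cut-off was obtained from the Youden index.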

  3. Conformal Field Theory and its application to the Ising model (United States)

    Meyer, Joshua

    The two-dimensional Ising model was originally solved by Onsager using statistical physics techniques. More recently, it has been found that the derivation of critical exponents and correlation functions can be greatly simplified by using the methods of Conformal Field Theory (CFT). We review these methods and apply them to the two-dimensional Ising model. The connection between the continuum limit Ising model and the field theory of free fermions is explained, resulting in a CFT on the plane with two non-trivial fields. Through the use of bosonization on the plane, the free-field correlation functions of the model are computed.
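
For orientation, the standard Ising CFT data the abstract draws on can be summarised (these are textbook results, quoted here for reference, not derived in the record):

```latex
c = \tfrac{1}{2}, \qquad
h_\sigma = \tfrac{1}{16},\quad h_\epsilon = \tfrac{1}{2}
\quad\Longrightarrow\quad
\Delta_\sigma = \tfrac{1}{8},\quad \Delta_\epsilon = 1,
\qquad
\langle \sigma(r)\,\sigma(0) \rangle \sim r^{-2\Delta_\sigma} = r^{-1/4}.
```

The two non-trivial fields mentioned in the abstract are the spin σ and the energy ε; the r^{-1/4} decay reproduces the critical exponent η = 1/4 of Onsager's solution.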

  4. Effect of new polyherbal formulations DF1911, DF2112 and DF2813 on CFA induced inflammation in rat model. (United States)

    Nagarkar, Bhagyashri; Jagtap, Suresh


    Aim of the present study was to evaluate anti-inflammatory activity of newly developed polyherbal formulations DF1911, DF2112 and DF2813. These newly developed formulations are modifications of Dashamoola, a well known Ayurvedic formulation, along with addition of new plants. Complete Freund's adjuvant (CFA) induced inflammation in rat was used as an experimental model. Effects of the treatment in rats were monitored by physiological and biochemical parameters, histopathology and through gene expression studies. Diclofenac sodium showed maximum percentage inhibition (56.8 ± 3.5%) of paw edema followed by Dashamoola Kwatha (19.9 ± 1.8%). Among test formulations treated groups, DF1911 at 250 mg/kg bw (48.2 ± 5.4%, p CFA rats were normalized after treatment with test formulations. Results of serum markers and histopathological observations also supported the activity of formulations. Increased MDA levels in liver tissue of CFA injected animals significantly (p < 0.05) decreased by Diclofenac sodium and test formulation treated groups. DF1911, DF2112 and DF2813 showed down-regulation of IL1-β (~6.4-fold, ~5.2-fold and ~7.6-fold), IL-6 (~1.1-fold, ~1.6-fold and ~1.9-fold), TNF-α (~2.0-fold, ~4.6-fold and ~3.5-fold), and iNOS (~1.2-fold, ~1.8-fold and ~1.1-fold) in inflamed paw tissue compared to negative control group, respectively. The anti-inflammatory effects of DF1911 and DF2112 in rats were significantly higher than the Dashamoola Kwatha and are comparable to Diclofenac sodium.

  5. Mixed models theory and applications with R

    CERN Document Server

    Demidenko, Eugene


    Mixed modeling is one of the most promising and exciting areas of statistical analysis, enabling the analysis of nontraditional, clustered data that may come in the form of shapes or images. This book provides in-depth mathematical coverage of mixed models' statistical properties and numerical algorithms, as well as applications such as the analysis of tumor regrowth, shape, and image. The new edition includes significant updating, over 300 exercises, stimulating chapter projects and model simulations, inclusion of R subroutines, and a revised text format. The target audience continues to be g

  6. Description of diffraction-grating experiments for photons and electrons in Feynman's spacetime formulation of quantum mechanics: the quantum origins of classical wave theories of light and massive particles (United States)

    Field, J. H.


    The five laws of relativistic quantum mechanics, according to Feynman's path integral formulation, are concisely stated and applied to experiments. Reflection-diffraction-grating experiments for both photons and electrons are analysed, in particular, the Davisson-Germer experiment in which the wave-like property of electrons was first established. It is shown how classical, purely spatial, effective wave theories for both photons and electrons are predicted by the path integral formulation of quantum mechanics. The standard Copenhagen interpretation of wave mechanics is critically discussed in the light of the described experimental applications of the path integral formulation.
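
The core object of the formulation the abstract summarises is Feynman's sum over paths (standard form, quoted for orientation):

```latex
A(b \leftarrow a) \;=\; \sum_{\text{paths } x(t)} \exp\!\left(\frac{i\,S[x]}{\hbar}\right),
\qquad
S[x] \;=\; \int_{t_a}^{t_b} L\big(x(t), \dot{x}(t)\big)\, dt ,
```

with the detection probability given by P = |A|². Amplitudes for indistinguishable alternatives (here, reflection from different grating elements) add before squaring, which is what produces the diffraction pattern analysed in the paper.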

  7. Homogeneous cosmological models in Yang's gravitation theory (United States)

    Fennelly, A. J.; Pavelle, R.


    We present a dynamic, spatially homogeneous solution of Yang's pure space gravitational field equations which is non-Einsteinian. The predictions of this cosmological model seem to be at variance with observations.

  8. Modeling workplace bullying using catastrophe theory. (United States)

    Escartin, J; Ceja, L; Navarro, J; Zapf, D


Workplace bullying is defined as negative behaviors directed at organizational members or their work context that occur regularly and repeatedly over a period of time. Employees' perceptions of psychosocial safety climate, workplace bullying victimization, and workplace bullying perpetration were assessed within a sample of nearly 5,000 workers. Linear and nonlinear approaches were applied in order to model both continuous and sudden changes in workplace bullying. More specifically, the present study examines whether a nonlinear dynamical systems model (i.e., a cusp catastrophe model) is superior to the linear combination of variables for predicting the effect of psychosocial safety climate and workplace bullying victimization on workplace bullying perpetration. According to the AICc and BIC indices, the linear regression model fits the data better than the cusp catastrophe model. The study concludes that some phenomena, especially unhealthy behaviors at work (like workplace bullying), may be better studied using linear approaches as opposed to nonlinear dynamical systems models. This can be explained through the healthy variability hypothesis, which argues that positive organizational behavior is likely to present nonlinear behavior, while a decrease in such variability may indicate the occurrence of negative behaviors at work.
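
The AICc and BIC indices used for the model comparison have standard definitions; a minimal sketch (k parameters, n observations, maximised log-likelihood lnL; lower values indicate the better-fitting model):

```python
import math

def aic(loglik, k):
    """Akaike information criterion: 2k - 2 ln L."""
    return 2 * k - 2 * loglik

def aicc(loglik, k, n):
    """Small-sample corrected AIC (adds a penalty that vanishes as n grows)."""
    return aic(loglik, k) + 2 * k * (k + 1) / (n - k - 1)

def bic(loglik, k, n):
    """Bayesian information criterion: k ln n - 2 ln L."""
    return k * math.log(n) - 2 * loglik
```

A cusp catastrophe model typically carries more parameters than the corresponding linear regression, so it must improve the log-likelihood enough to offset the larger penalty terms; by these indices it did not do so here.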

  9. Understanding Rasch Measurement: The Rasch Model, Additive Conjoint Measurement, and New Models of Probabilistic Measurement Theory. (United States)

    Karabatsos, George


    Describes similarities and differences between additive conjoint measurement and the Rasch model, and formalizes some new nonparametric item response models that are, in a sense, probabilistic measurement theory models. Applies these new models to published and simulated data. (SLD)

  10. Actor-network theory and the OSCE: formulating a new research agenda for a post-psychometric era. (United States)

    Bearman, Margaret; Ajjawi, Rola


The Objective Structured Clinical Examination (OSCE) is a ubiquitous part of medical education, although there is some debate about its value, particularly around its possible impact on learning. Literature and research regarding the OSCE are most often situated within the psychometric or competency discourses of assessment. This paper describes an alternative approach: Actor-network theory (ANT), a sociomaterial approach to understanding practice and learning. ANT provides a means to productively examine tensions and limitations of the OSCE, in part through extending research to include social relationships and physical objects. Using a narrative example, the paper suggests three ANT-informed insights into the OSCE. We describe: (1) exploring the OSCE as a holistic combination of people and objects; (2) thinking about the influences a checklist can exert over the OSCE; and (3) the implications of ANT educational research for standardisation within the OSCE. We draw from this discussion to provide a practical agenda for ANT research into the OSCE. This agenda promotes new areas for exploration in an often taken-for-granted assessment format.

  11. Spatial interaction models facility location using game theory

    CERN Document Server

    D'Amato, Egidio; Pardalos, Panos


Facility location theory develops the idea of locating one or more facilities by optimizing suitable criteria such as minimizing transportation cost or capturing the largest market share. The contributions in this book focus on an approach to facility location theory through game-theoretical tools, highlighting situations where a location decision is faced by several decision makers and leading to a game-theoretical framework in non-cooperative and cooperative methods. Models and methods regarding facility location via game theory are explored, and applications are illustrated through economics, engineering, and physics. Mathematicians, engineers, economists and computer scientists working on the theory, applications and computational aspects of facility location problems using game theory will find this book useful.

  12. A formulation of convection for stellar structure and evolution calculations without the mixing-length theory approximations. I - Application to the sun (United States)

    Lydon, Thomas J.; Fox, Peter A.; Sofia, Sabatino


The problem of treating convective energy transport without mixing-length theory (MLT) approximations is approached here by formulating the results of numerical simulations of convection in terms of energy fluxes. This revised treatment of convective transport can be easily incorporated within existing stellar structure codes. As an example, the technique is applied to the Sun. The treatment does not include any free parameters, making the models extremely sensitive to the accuracy of the treatment of opacities, chemical abundances, the solar atmosphere, and the equation of state.

  13. Phase field modeling of brittle fracture for enhanced assumed strain shells at large deformations: formulation and finite element implementation (United States)

    Reinoso, J.; Paggi, M.; Linder, C.


Fracture of technological thin-walled components can notably limit the performance of their corresponding engineering systems. With the aim of achieving reliable fracture predictions of thin structures, this work presents a new phase field model of brittle fracture for large deformation analysis of shells relying on a mixed enhanced assumed strain (EAS) formulation. The kinematic description of the shell body is constructed according to the solid shell concept. This enables the use of fully three-dimensional constitutive models for the material. The proposed phase field formulation integrates the use of the EAS method to alleviate locking pathologies, especially Poisson thickness and volumetric locking. This technique is further combined with the assumed natural strain method to efficiently derive a locking-free solid shell element. On the computational side, a fully coupled monolithic framework is consistently formulated. Specific details regarding the corresponding finite element formulation and the main aspects associated with its implementation in the general-purpose packages FEAP and ABAQUS are addressed. Finally, the applicability of the current strategy is demonstrated through several numerical examples involving different loading conditions, and including linear and nonlinear hyperelastic constitutive models.

  14. Dexamethasone-releasing cochlear implant coatings: application of artificial neural networks for modelling of formulation parameters and drug release profile. (United States)

    Nemati, Pedram; Imani, Mohammad; Farahmandghavi, Farhid; Mirzadeh, Hamid; Marzban-Rad, Ehsan; Nasrabadi, Ali Motie


Over the past few decades, mathematical modelling and simulation of drug delivery systems have steadily gained interest as a focus of academic and industrial attention. Here, simulation of the dexamethasone (DEX, a corticosteroid anti-inflammatory agent) release profile from drug-eluting cochlear implant coatings is reported using artificial neural networks. The devices were fabricated as monolithic dispersions of the pharmaceutically active ingredient in a silicone rubber matrix. A two-phase exponential model was fitted on the experimentally obtained DEX release profiles. An artificial neural network (ANN) was trained to determine formulation parameters (i.e. DEX loading percentage, the devices' surface area and their geometry) for a specific experimentally obtained drug release profile. In a reverse strategy, an ANN was trained to determine the expected drug release profile for the same set of formulation parameters. An algorithm was developed by combining the two previously developed ANNs in a serial manner, and this was successfully used for simulating the developed drug-eluting cochlear implant coatings. The models were validated by a leave-one-out method and by performing new experiments. The developed ANN algorithms were capable of bilaterally predicting the drug release profile for a known set of formulation parameters, or of finding the levels of the input formulation parameters needed to obtain a desired DEX release profile. © 2013 Royal Pharmaceutical Society.
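
A generic two-phase exponential release curve of the kind fitted above can be sketched as follows. The split fractions and rate constants are illustrative assumptions, not the paper's fitted values, and the exact parameterisation used in the study may differ.

```python
import math

def release_fraction(t, f1=0.4, k1=0.8, f2=0.6, k2=0.05):
    """Cumulative fraction of drug released at time t under a generic
    two-phase exponential model: a fast 'burst' phase (f1, k1) plus a
    slower sustained phase (f2, k2).  All parameter values are
    illustrative, not the paper's fitted values."""
    return f1 * (1 - math.exp(-k1 * t)) + f2 * (1 - math.exp(-k2 * t))
```

Curves of this form start at zero, rise monotonically, and saturate at f1 + f2 (here 1.0), which is what makes them convenient targets for the forward ANN and well-posed inputs for the inverse one.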

  15. Density functional theory and multiscale materials modeling

    Indian Academy of Sciences (India)

    One of the vital ingredients in the theoretical tools useful in materials modeling at all the length scales of interest is the concept of density. In the microscopic length scale, it is the electron density that has played a major role in providing a deeper understanding of chemical binding in atoms, molecules and solids.

  16. Theory and Model for Martensitic Transformations

    DEFF Research Database (Denmark)

    Lindgård, Per-Anker; Mouritsen, Ole G.


    Martensitic transformations are shown to be driven by the interplay between two fluctuating strain components. No soft mode is needed, but a central peak occurs representing the dynamics of strain clusters. A two-dimensional magnetic-analog model with the martensitic-transition symmetry...

  17. Formulation of detailed consumables management models for the development (preoperational) period of advanced space transportation system: Executive summary (United States)

    Torian, J. G.


Models required for the mission planning and scheduling function were formulated, and the relation of those models to prelaunch, onboard, ground-support, and postmission functions for the development phase of space transportation systems (STS) was established. The preoperational space shuttle is used as the design baseline for the model formulations. Analytical models were developed which consist of a mission planning processor with an appropriate consumables data base and a method of recognizing potential constraint violations in both the planning and flight operations functions. A flight data file for storage/retrieval of information over an extended period, which interfaces with a flight operations processor for monitoring of the actual flights, was examined.

  18. Simulation of Oil Slick Transport in Great Lakes Connecting Channels. Volume 1. Theory and Model Formulation (United States)


attempted to analyze in detail the hydrodynamic problem defined above (Kerr and Babu, 1970; DePietio and Cox, 1979; and Foda and Cox, 1980) ... Water Management and Planning Branch. Foda, M. and R.G. Cox (1980). "The spreading of thin liquid films on a water-air interface," Journal of Fluid

  19. Applying learning theories and instructional design models for effective instruction. (United States)

    Khalil, Mohammed K; Elkhider, Ihsan A


    Faculty members in higher education are involved in many instructional design activities without formal training in learning theories and the science of instruction. Learning theories provide the foundation for the selection of instructional strategies and allow for reliable prediction of their effectiveness. To achieve effective learning outcomes, the science of instruction and instructional design models are used to guide the development of instructional design strategies that elicit appropriate cognitive processes. Here, the major learning theories are discussed and selected examples of instructional design models are explained. The main objective of this article is to present the science of learning and instruction as theoretical evidence for the design and delivery of instructional materials. In addition, this article provides a practical framework for implementing those theories in the classroom and laboratory. Copyright © 2016 The American Physiological Society.

  20. Real time polymer nanocomposites-based physical nanosensors: theory and modeling (United States)

    Bellucci, Stefano; Shunin, Yuri; Gopeyenko, Victor; Lobanova-Shunina, Tamara; Burlutskaya, Nataly; Zhukovskii, Yuri


Functionalized carbon nanotube and graphene nanoribbon nanostructures, serving as the basis for the creation of physical pressure and temperature nanosensors, are considered as tools for ecological monitoring and medical applications. Fragments of nanocarbon inclusions with different morphologies, presenting a disordered system, are regarded as models for nanocomposite materials based on carbon nanocluster suspensions in dielectric polymer environments (e.g., epoxy resins). We have formulated an approach to conductivity calculations for carbon-based polymer nanocomposites using the effective-media cluster approach, disordered systems theory and analysis of conductivity mechanisms, and obtained the calibration dependences. Providing a proper description of electric responses in nanosensing systems, we demonstrate the implementation of advanced simulation models suitable for real-time control nanosystems. We also consider the prospects and prototypes of the proposed physical nanosensor models, providing comparisons with experimental calibration dependences.

  1. Automated Physico-Chemical Cell Model Development through Information Theory

    Energy Technology Data Exchange (ETDEWEB)

    Peter J. Ortoleva


The objective of this project was to develop predictive models of the chemical responses of microbial cells to variations in their surroundings. The application of these models is the optimization of environmental remediation and energy-producing biotechnical processes. The principles on which our project is based are as follows: chemical thermodynamics and kinetics; automation of calibration through information theory; integration of multiplex data (e.g. cDNA microarrays, NMR, proteomics), cell modeling, and bifurcation theory to overcome cellular complexity; and the use of multiplex data and information theory to calibrate and run an incomplete model. In this report we review four papers summarizing key findings and a web-enabled, multiple-module workflow we have implemented that consists of a set of interoperable systems biology computational modules.

  2. Multilevel Ventilation: Theory and Simplified Mathematical Model

    Directory of Open Access Journals (Sweden)

    P. Torok


Full Text Available Considering the issues of artificial ventilation (AV) in non-homogeneous pathological lung processes (acute lung injury, acute respiratory distress syndrome, pneumonia, etc.), the authors created a mathematical model of multicompartment, non-homogeneous injured lungs ventilated by a new mode of AV, the so-called three-level ventilation. Multilevel ventilation was defined as a type (modification) of AV whose basic ventilation level was produced by the modes CMV, PCV or PS (ASB), and whose add-on level, the so-called background ventilation, was generated by the levels of PEEP and high PEEP (PEEPh) with varying frequency and duration. Multilevel ventilation at three pressure levels was realized in the mathematical model as a combination of pressure-controlled ventilation (PCV) and two levels of PEEP and PEEPh. The objective was to prove that, in cases of considerably non-homogeneous gas distribution in acute pathological disorders of the lungs, gas entry into the so-called slow bronchoalveolar compartments could be improved by multilevel AV without substantially changing the volume of the so-called fast compartments. Material and Method. Multilevel ventilation at three pressure levels was realized in the mathematical model as a combination of PCV and two levels of PEEP and PEEPh. Results. By comparing single-level AV in the PCV mode with the so-called three-level ventilation defined as a combination of PCV+PEEPh/PEEP, the authors discovered that the loading of slow compartments in the model was considerably improved, by 50-60% compared with the baseline values. In absolute terms, this difference was as much as 2-10 times the volume. Conclusion. The mathematical model demonstrates that the application of the so-called three-level AV causes considerable changes in gas distribution in lung parenchyma disordered by a non-homogeneous pathological process. The authors state that the proposed mathematical model requires clinical verification in order
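
The claimed mechanism, a slow compartment that fills better when a third, elevated baseline pressure is interposed while a fast compartment barely changes, can be illustrated with a crude single-time-constant compartment sketch. All equations, function names and parameter values below are our simplifications, not the authors' model.

```python
def end_volume(tau, pressures, dt=0.01):
    """Final volume (normalised pressure units) of a linear lung
    compartment with time constant tau (s) driven by an airway-pressure
    sequence: dv/dt = (P - v)/tau.  A crude RC sketch, not the
    authors' multicompartment equations."""
    v = 0.0
    for p in pressures:
        v += dt * (p - v) / tau
    return v

def waveform(cycles, p_high, peep, peeph=None, every=3, ti=1.0, te=2.0, dt=0.01):
    """PCV-like waveform: plateau p_high for ti seconds, then baseline
    for te seconds.  If peeph is given, the baseline is raised to peeph
    on every k-th cycle -- a crude stand-in for the 'third level'.
    Pressures in cmH2O; all values illustrative."""
    seq = []
    for c in range(cycles):
        base = peeph if (peeph is not None and c % every == 0) else peep
        seq += [p_high] * int(ti / dt) + [base] * int(te / dt)
    return seq

# Slow compartment (tau = 8 s) vs fast compartment (tau = 0.2 s),
# with and without the elevated third pressure level:
slow_two   = end_volume(8.0, waveform(30, p_high=25, peep=5))
slow_three = end_volume(8.0, waveform(30, p_high=25, peep=5, peeph=15))
fast_two   = end_volume(0.2, waveform(30, p_high=25, peep=5))
fast_three = end_volume(0.2, waveform(30, p_high=25, peep=5, peeph=15))
```

In this toy setting the slow compartment retains measurably more volume under the three-level waveform, while the fast compartment equilibrates so quickly that its end volume is essentially unchanged, qualitatively matching the effect the abstract describes.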

  3. From integrable models to gauge theories Festschrift Matinyan (Sergei G)

    CERN Document Server

    Gurzadyan, V G


    This collection of twenty articles in honor of the noted physicist and mentor Sergei Matinyan focuses on topics that are of fundamental importance to high-energy physics, field theory and cosmology. The topics range from integrable quantum field theories, three-dimensional Ising models, parton models and tests of the Standard Model, to black holes in loop quantum gravity, the cosmological constant and magnetic fields in cosmology. A pedagogical essay by Lev Okun concentrates on the problem of fundamental units. The articles have been written by well-known experts and are addressed to graduate

  4. Formulation and study some inverse problems in modeling of hydrophysical fields in water areas with "liquid" boundaries (United States)

    Agoshkov, Valery


There are different approaches to modeling the boundary conditions describing hydrophysical fields in water areas with "liquid" boundaries. Variational data assimilation may also be considered one such approach. The development of computing hardware, together with an increase in the quantity and quality of data from satellites and other monitoring tools, shows that the development of this particular approach is promising. The range of related problems is wide: different forms of recording the boundary conditions, different observational data assimilation procedures and different hydrodynamic models are possible. In this work some inverse problems and corresponding variational data assimilation problems, connected with mathematical modeling of hydrophysical fields in water areas (seas and oceans) with "liquid" ("open") boundaries, are formulated and studied. Note that the surface of the water area (which can also be considered a "liquid" boundary) is not included in the set of "liquid" boundaries; in this case the "liquid" boundaries are the "water-water" borders between areas. In this work, a mathematical model of hydrothermodynamics in water areas with a "liquid" ("open") part of the boundary, a generalized statement of the problem and the splitting method for time approximation are formulated. The problem of variational data assimilation and an iterative algorithm for solving the inverse problems mentioned above are also formulated. The work is based on [1]. The work was partly supported by the Russian Science Foundation (project 14-11-00609, the general formulation of the inverse problems) and by the Russian Foundation for Basic Research (project 16-01-00548, the formulation of the problem and its study). [1] V.I. Agoshkov, Methods for solving inverse problems and variational data assimilation problems of observations in the problems of the large-scale dynamics of the oceans and seas, Institute of Numerical Mathematics, RAS, Moscow, 2016 (in Russian).
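
A variational data assimilation problem of the kind described is typically posed as minimisation of a regularised misfit functional; the following is a generic Tikhonov-type form for orientation, not the specific functional of [1]:

```latex
J(u) \;=\; \frac{\alpha}{2}\,\big\| u - u^{(0)} \big\|^{2}
\;+\; \frac{1}{2}\,\big\| C\,\varphi(u) - \varphi_{\mathrm{obs}} \big\|_{\mathrm{obs}}^{2}
\;\longrightarrow\; \min_{u},
```

where u is the unknown boundary condition on the "liquid" boundary, φ(u) the corresponding model solution, C the observation operator, φ_obs the observational data, and α a regularisation parameter. Iterative algorithms of the kind mentioned above use the gradient of J, which is obtained by solving an adjoint problem.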

  5. Lenses on reading an introduction to theories and models

    CERN Document Server

    Tracey, Diane H


    Widely adopted as an ideal introduction to the major models of reading, this text guides students to understand and facilitate children's literacy development. Coverage encompasses the full range of theories that have informed reading instruction and research, from classical thinking to cutting-edge cognitive, social learning, physiological, and affective perspectives. Readers learn how theory shapes instructional decision making and how to critically evaluate the assumptions and beliefs that underlie their own teaching. Pedagogical features include framing and discussion questions, learning a

  6. Fuzzy Stochastic Optimization Theory, Models and Applications

    CERN Document Server

    Wang, Shuming


    Covering in detail both theoretical and practical perspectives, this book is a self-contained and systematic depiction of current fuzzy stochastic optimization that deploys the fuzzy random variable as a core mathematical tool to model the integrated fuzzy random uncertainty. It proceeds in an orderly fashion from the requisite theoretical aspects of the fuzzy random variable to fuzzy stochastic optimization models and their real-life case studies.   The volume reflects the fact that randomness and fuzziness (or vagueness) are two major sources of uncertainty in the real world, with significant implications in a number of settings. In industrial engineering, management and economics, the chances are high that decision makers will be confronted with information that is simultaneously probabilistically uncertain and fuzzily imprecise, and optimization in the form of a decision must be made in an environment that is doubly uncertain, characterized by a co-occurrence of randomness and fuzziness. This book begins...

  7. Computational hemodynamics theory, modelling and applications

    CERN Document Server

    Tu, Jiyuan; Wong, Kelvin Kian Loong


    This book discusses geometric and mathematical models that can be used to study fluid and structural mechanics in the cardiovascular system.  Where traditional research methodologies in the human cardiovascular system are challenging due to its invasive nature, several recent advances in medical imaging and computational fluid and solid mechanics modelling now provide new and exciting research opportunities. This emerging field of study is multi-disciplinary, involving numerical methods, computational science, fluid and structural mechanics, and biomedical engineering. Certainly any new student or researcher in this field may feel overwhelmed by the wide range of disciplines that need to be understood. This unique book is one of the first to bring together knowledge from multiple disciplines, providing a starting point to each of the individual disciplines involved, attempting to ease the steep learning curve. This book presents elementary knowledge on the physiology of the cardiovascular system; basic knowl...

  8. Nonlinear model predictive control theory and algorithms

    CERN Document Server

    Grüne, Lars


    This book offers readers a thorough and rigorous introduction to nonlinear model predictive control (NMPC) for discrete-time and sampled-data systems. NMPC schemes with and without stabilizing terminal constraints are detailed, and intuitive examples illustrate the performance of different NMPC variants. NMPC is interpreted as an approximation of infinite-horizon optimal control so that important properties like closed-loop stability, inverse optimality and suboptimality can be derived in a uniform manner. These results are complemented by discussions of feasibility and robustness. An introduction to nonlinear optimal control algorithms yields essential insights into how the nonlinear optimization routine, the core of any nonlinear model predictive controller, works. Accompanying software in MATLAB® and C++ (downloadable), together with an explanatory appendix in the book itself, enables readers to perform computer experiments exploring the possibilities and limitations of NMPC. T...
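
    The receding-horizon principle described above can be sketched in a few lines: at each sampling instant a finite-horizon optimal control problem is solved and only the first input is applied. The plant dynamics, cost weights, and horizon below are illustrative assumptions, not taken from the book (whose accompanying software is in MATLAB® and C++).

```python
import numpy as np
from scipy.optimize import minimize

dt, N = 0.1, 10                      # sampling time, prediction horizon

def f(x, u):
    # Illustrative scalar nonlinear plant (an assumption, not from the book)
    return x + dt * (x - x**3 + u)

def horizon_cost(useq, x0):
    # Finite-horizon cost: quadratic stage costs plus a simple terminal penalty
    x, J = x0, 0.0
    for u in useq:
        J += x**2 + 0.1 * u**2
        x = f(x, u)
    return J + x**2

x = 2.0
for _ in range(30):                  # receding horizon: optimize, apply first input
    res = minimize(horizon_cost, np.zeros(N), args=(x,))
    x = f(x, res.x[0])

print(abs(x) < 0.3)                  # closed loop drives the state near the origin
```

    This variant uses a terminal penalty rather than a terminal constraint; the book analyzes both cases and shows when stability can be guaranteed without any terminal ingredients at all.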

  9. A grid-based distributed flood forecasting model for use with weather radar data: Part 1. Formulation

    Directory of Open Access Journals (Sweden)

    V. A. Bell


    Full Text Available A practical methodology for distributed rainfall-runoff modelling using grid square weather radar data is developed for use in real-time flood forecasting. The model, called the Grid Model, is configured so as to share the same grid as used by the weather radar, thereby exploiting the distributed rainfall estimates to the full. Each grid square in the catchment is conceptualised as a storage which receives water as precipitation and generates water by overflow and drainage. This water is routed across the catchment using isochrone pathways. These are derived from a digital terrain model assuming two fixed velocities of travel for land and river pathways which are regarded as model parameters to be optimised. Translation of water between isochrones is achieved using a discrete kinematic routing procedure, parameterised through a single dimensionless wave speed parameter, which advects the water and incorporates diffusion effects through the discrete space-time formulation. The basic model routes overflow and drainage separately through a parallel system of kinematic routing reaches, characterised by different wave speeds but using the same isochrone-based space discretisation; these represent fast and slow pathways to the basin outlet, respectively. A variant allows the slow pathway to have separate isochrones calculated using Darcy velocities controlled by the hydraulic gradient as estimated by the local gradient of the terrain. Runoff production within a grid square is controlled by its absorption capacity which is parameterised through a simple linkage function to the mean gradient in the square, as calculated from digital terrain data. This allows absorption capacity to be specified differently for every grid square in the catchment through the use of only two regional parameters and a DTM measurement of mean gradient for each square. 
An extension of this basic idea to consider the distribution of gradient within the square leads analytically
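
    The isochrone-based kinematic routing described above can be illustrated with a toy scheme (an assumed discretization, not the paper's exact formulation): water stored in each isochrone band passes a fixed fraction, set by a dimensionless wave-speed parameter, to the next band each time step, which advects the pulse while the partial transfer introduces the diffusion-like spreading mentioned in the abstract.

```python
import numpy as np

def route_step(s, theta):
    """Advect storage one isochrone band towards the outlet (band 0).

    theta in (0, 1] is a dimensionless wave-speed parameter: the fraction of
    each band's storage translated downstream per time step (assumed scheme).
    """
    outflow = theta * s[0]                              # discharge at the basin outlet
    s_new = (1 - theta) * s + theta * np.concatenate((s[1:], [0.0]))
    return s_new, outflow

s = np.zeros(5)
s[4] = 1.0                                              # unit pulse in the farthest band
hydrograph = []
for _ in range(20):
    s, q = route_step(s, theta=0.7)
    hydrograph.append(q)

peak_step = int(np.argmax(hydrograph))                  # arrival delayed by ~bands/theta steps
print(peak_step, round(sum(hydrograph) + s.sum(), 6))   # routed volume is conserved
```

    The partial transfer conserves mass exactly while smearing the pulse, which is the numerical-diffusion behaviour the discrete space-time formulation exploits.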

  10. Combining Theory Generation and Model Checking for Security Protocol Analysis, (United States)


    This paper reviews two relatively new tools for automated formal analysis of security protocols. One applies the formal methods technique of model checking to the task of protocol analysis, while the other utilizes the method of theory generation, which borrows from both model checking and...

  11. Thermodynamic Models from Fluctuation Solution Theory Analysis of Molecular Simulations

    DEFF Research Database (Denmark)

    Christensen, Steen; Peters, Günther H.j.; Hansen, Flemming Yssing


    Fluctuation solution theory (FST) is employed to analyze results of molecular dynamics (MD) simulations of liquid mixtures. The objective is to generate parameters for macroscopic GE-models, here the modified Margules model. We present a strategy for choosing the number of parameters included...

  12. [Study of dental model testing tool based on robot theory]. (United States)

    Hu, B; Song, Y; Cheng, L


    A new three-dimensional testing and analysis system for dental models is discussed. It is designed based on the motion theory of robots. The system is capable not only of measuring the three-dimensional sizes of dental models, but also of saving and outputting the tested data. The construction of the system is briefly introduced here.

  13. Pilot evaluation in TENCompetence: a theory-driven model

    NARCIS (Netherlands)

    Schoonenboom, J.; Sligte, H.; Moghnieh, A.; Specht, M.; Glahn, C.; Stefanov, K.; Navarrete, T.; Blat, J.


    This paper describes a theory-driven evaluation model that is used in evaluating four pilots in which an infrastructure for lifelong competence development, which is currently being developed, is validated. The model makes visible the separate implementation steps that connect the envisaged

  14. Clinical outcome measurement: Models, theory, psychometrics and practice. (United States)

    McClimans, Leah; Browne, John; Cano, Stefan

    In the last decade much has been made of the role that models play in the epistemology of measurement. Specifically, philosophers have been interested in the role of models in producing measurement outcomes. This discussion has proceeded largely within the context of the physical sciences, with notable exceptions considering measurement in economics. However, models also play a central role in the methods used to develop instruments that purport to quantify psychological phenomena. These methods fall under the umbrella term 'psychometrics'. In this paper, we focus on Clinical Outcome Assessments (COAs) and discuss two measurement theories and their associated models: Classical Test Theory (CTT) and Rasch Measurement Theory. We argue that models have an important role to play in coordinating theoretical terms with empirical content, but to do so they must serve: 1) as a representation of the measurement interaction; and 2) in conjunction with a theory of the attribute in which we are interested. We conclude that Rasch Measurement Theory is a more promising approach than CTT in these regards despite the latter's popularity with health outcomes researchers. Copyright © 2017. Published by Elsevier Ltd.

  15. Effective Biot theory and its generalization to poroviscoelastic models (United States)

    Liu, Xu; Greenhalgh, Stewart; Zhou, Bing; Greenhalgh, Mark


    A method is suggested to express the effective bulk modulus of the solid frame of a poroelastic material as a function of the saturated bulk modulus. This method enables effective Biot theory to be described through the use of seismic dispersion measurements or other models developed for the effective saturated bulk modulus. The effective Biot theory is generalized to a poroviscoelastic model of which the moduli are represented by the relaxation functions of the generalized fractional Zener model. The latter covers the general Zener and the Cole-Cole models as special cases. A global search method is described to determine the parameters of the relaxation functions, and a simple deterministic method is also developed to find the defining parameters of the single Cole-Cole model. These methods enable poroviscoelastic models to be constructed, which are based on measured seismic attenuation functions, and ensure that the model dispersion characteristics match the observations.
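
    One common way to write the Cole-Cole relaxation modulus mentioned above is the following (a standard form with illustrative parameter values; the paper's generalized fractional Zener relaxation functions include it as a special case). The complex modulus interpolates between a relaxed modulus M_R at low frequency and an unrelaxed modulus M_U at high frequency, and the attenuation is 1/Q = Im(M)/Re(M).

```python
import numpy as np

def cole_cole_modulus(w, M_R, M_U, tau, alpha):
    """Cole-Cole complex modulus: relaxed M_R -> unrelaxed M_U, fractional order alpha."""
    s = (1j * w * tau) ** alpha
    return (M_R + M_U * s) / (1.0 + s)

w = np.logspace(-2, 4, 200)                         # angular frequency (rad/s)
M = cole_cole_modulus(w, M_R=9.0e9, M_U=1.1e10, tau=0.1, alpha=0.7)
Q_inv = M.imag / M.real                             # attenuation 1/Q

# The real modulus stays between M_R and M_U, and attenuation is positive
print(bool((M.real > 9.0e9).all() and (M.real < 1.1e10).all() and (Q_inv > 0).all()))
```

    Fitting the parameters (M_R, M_U, tau, alpha) to a measured attenuation curve is exactly the kind of inversion the global search and deterministic methods in the paper address.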

  16. Applications of Generalizability Theory and Their Relations to Classical Test Theory and Structural Equation Modeling. (United States)

    Vispoel, Walter P; Morris, Carrie A; Kilinc, Murat


    Although widely recognized as a comprehensive framework for representing score reliability, generalizability theory (G-theory), despite its potential benefits, has been used sparingly in reporting of results for measures of individual differences. In this article, we highlight many valuable ways that G-theory can be used to quantify, evaluate, and improve psychometric properties of scores. Our illustrations encompass assessment of overall reliability, percentages of score variation accounted for by individual sources of measurement error, dependability of cut-scores for decision making, estimation of reliability and dependability for changes made to measurement procedures, disattenuation of validity coefficients for measurement error, and linkages of G-theory with classical test theory and structural equation modeling. We also identify computer packages for performing G-theory analyses, most of which can be obtained free of charge, and describe how they compare with regard to data input requirements, ease of use, complexity of designs supported, and output produced. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
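
    As a concrete illustration of quantifying sources of measurement error, here is a toy one-facet (persons x items) G-study on simulated data. The variance-component and generalizability-coefficient formulas are the standard p x i random-effects ANOVA ones; all numbers are simulated rather than drawn from the article.

```python
import numpy as np

rng = np.random.default_rng(0)
n_p, n_i = 200, 8                                      # persons, items
person = rng.normal(0.0, 1.0, (n_p, 1))               # true person effects (sd = 1)
item = rng.normal(0.0, 0.5, (1, n_i))                 # item difficulty effects
X = person + item + rng.normal(0.0, 0.8, (n_p, n_i))  # residual error (sd = 0.8)

# Mean squares of the p x i random-effects ANOVA
ms_p = n_i * X.mean(axis=1).var(ddof=1)
resid = X - X.mean(axis=1, keepdims=True) - X.mean(axis=0, keepdims=True) + X.mean()
ms_res = (resid**2).sum() / ((n_p - 1) * (n_i - 1))

var_p = (ms_p - ms_res) / n_i                          # person variance component
g_coef = var_p / (var_p + ms_res / n_i)                # generalizability coefficient for n_i items
print(round(g_coef, 2))
```

    The same decomposition extends to multi-facet designs (raters, occasions), which is where G-theory's advantage over a single CTT reliability coefficient becomes clearest.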

  17. Impact of Aerodynamic Resistance Formulations used in Two-Source Modeling of Energy Exchange from the Soil and Vegetation Using Land Surface Temperature (United States)

    Kustas, W. P.; Nieto Solana, H.; Andreu, A.; Cammalleri, C.; Kool, D.; Agam, N.; Alfieri, J. G.


    Application of the Two-Source Energy Balance (TSEB) Model using land surface temperature (LST) requires aerodynamic resistance parameterizations for the flux exchange above the canopy layer, within the canopy air space and at the soil/substrate surface. A number of aerodynamic resistance formulations can be used, some based on K-theory or Lagrangian approaches, while others are semi-empirical, derived from experimental data. These formulations require a within-canopy wind profile model as well as a parameterization for heat transfer from the soil surface. The effect of the various parameterization schemes on TSEB output using tower and airborne LST observations over both highly structured perennial crops, such as orchards and vineyards, and strongly clumped natural vegetation, such as woody savanna and desert shrublands, will be presented. The utility of the various aerodynamic resistance formulations for application over these types of canopy architectures will also be discussed, along with ongoing efforts to develop more reliable approaches for strongly clumped and open-canopy environments for partitioning soil and canopy fluxes.

  18. Modeling molecular recognition: theory and application. (United States)

    Mardis, K; Luo, R; David, L; Potter, M; Glemza, A; Payne, G; Gilson, M K


    Efficient, reliable methods for calculating the binding affinities of noncovalent complexes would allow advances in a variety of areas such as drug discovery and separation science. We have recently described a method that accommodates significant physical detail while remaining fast enough for use in molecular design. This approach uses the predominant states method to compute free energies, an empirical force field, and an implicit solvation model based upon continuum electrostatics. We review applications of this method to systems ranging from small molecules to protein-ligand complexes.

  19. Putting "Organizations" into an Organization Theory Course: A Hybrid CAO Model for Teaching Organization Theory (United States)

    Hannah, David R.; Venkatachary, Ranga


    In this article, the authors present a retrospective analysis of an instructor's multiyear redesign of a course on organization theory into what is called a hybrid Classroom-as-Organization model. It is suggested that this new course design served to apprentice students to function in quasi-real organizational structures. The authors further argue…

  20. Reconstructing constructivism: causal models, Bayesian learning mechanisms, and the theory theory. (United States)

    Gopnik, Alison; Wellman, Henry M


    We propose a new version of the "theory theory" grounded in the computational framework of probabilistic causal models and Bayesian learning. Probabilistic models allow a constructivist but rigorous and detailed approach to cognitive development. They also explain the learning of both more specific causal hypotheses and more abstract framework theories. We outline the new theoretical ideas, explain the computational framework in an intuitive and nontechnical way, and review an extensive but relatively recent body of empirical results that supports these ideas. These include new studies of the mechanisms of learning. Children infer causal structure from statistical information, through their own actions on the world and through observations of the actions of others. Studies demonstrate these learning mechanisms in children from 16 months to 4 years old and include research on causal statistical learning, informal experimentation through play, and imitation and informal pedagogy. They also include studies of the variability and progressive character of intuitive theory change, particularly theory of mind. These studies investigate both the physical and the psychological and social domains. We conclude with suggestions for further collaborative projects between developmental and computational cognitive scientists.
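
    The kind of Bayesian inference over causal hypotheses described above can be made concrete with a toy "blicket detector" example (entirely illustrative, not one of the cited studies): a learner weighs two hypotheses about whether block A activates the detector and updates them from a handful of trials.

```python
# Two causal hypotheses with assumed likelihoods:
# H1: "block A causes the detector to activate" -> P(activate | A) = 0.9
# H0: "block A does not"                        -> P(activate | A) = 0.1
prior = {"H1": 0.5, "H0": 0.5}
likelihood = {"H1": 0.9, "H0": 0.1}

observations = [True, True, True]      # detector activated on three A-trials
post = dict(prior)
for e in observations:
    for h in post:
        p = likelihood[h] if e else 1 - likelihood[h]
        post[h] *= p                   # multiply in the likelihood of this trial
    z = sum(post.values())
    post = {h: v / z for h, v in post.items()}   # renormalize

print(round(post["H1"], 3))            # a few trials make H1 nearly certain
```

    Even this minimal example shows the constructivist point: a strong conclusion is reached from sparse data, but it remains revisable, since contrary evidence would shift the posterior back.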

  1. Thimble regularization at work: From toy models to chiral random matrix theories (United States)

    Di Renzo, F.; Eruzzi, G.


    We apply the Lefschetz thimble formulation of field theories to a couple of different problems. We first address the solution of a complex zero-dimensional ϕ4 theory. Although very simple, this toy model makes us appreciate a few key issues of the method. In particular, we will solve the model by a correct accounting of all the thimbles giving a contribution to the partition function and we will discuss a number of algorithmic solutions to simulate this (simple) model. We will then move to a chiral random matrix (CRM) theory. This is a somehow more realistic setting, giving us once again the chance to tackle the same couple of fundamental questions: How many thimbles contribute to the solution? How can we make sure that we correctly sample configurations on the thimble? Since the exact result is known for the observable we study (a condensate), we can verify that, in the region of parameters we studied, only one thimble contributes and that the algorithmic solution that we set up works well, despite its very crude nature. The deviation of results from phase quenched ones highlights that in a certain region of parameter space there is a quite important sign problem. In view of this, the success of our thimble approach is quite a significant one.
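
    For the zero-dimensional phi^4 toy model, the severity of the sign problem the authors mention can be checked by brute-force quadrature (parameter values are assumed for illustration; no thimble decomposition is attempted here). The average phase of the complex weight in the phase-quenched ensemble falls below one as the imaginary part of the action grows, which is what makes naive reweighting hard and thimble methods attractive.

```python
import numpy as np

# Action S(phi) = sigma * phi^2 + lam * phi^4 with complex sigma (assumed values)
sigma, lam = 1.0 + 1.0j, 0.5
phi = np.linspace(-6.0, 6.0, 20001)
dphi = phi[1] - phi[0]

w = np.exp(-(sigma * phi**2 + lam * phi**4))    # complex Boltzmann weight
Z = w.sum() * dphi                               # partition function (Riemann sum)
phi2 = (phi**2 * w).sum() * dphi / Z             # condensate-like observable <phi^2>
avg_phase = abs(Z) / (np.abs(w).sum() * dphi)    # <e^{i theta}> in the phase-quenched ensemble

print(bool(0.0 < avg_phase < 1.0))               # phase cancellations shrink |Z|
```

    In one dimension this integral is trivial; the point of the thimble construction is that the same cancellation problem becomes exponentially severe in many dimensions, where direct quadrature is impossible.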

  2. Thimble regularization at work: from toy models to chiral random matrix theories

    CERN Document Server

    Di Renzo, Francesco


    We apply the Lefschetz thimble formulation of field theories to a couple of different problems. We first address the solution of a complex 0-dimensional phi^4 theory. Although very simple, this toy-model makes us appreciate a few key issues of the method. In particular, we will solve the model by a correct accounting of all the thimbles giving a contribution to the partition function and we will discuss a number of algorithmic solutions to simulate this (simple) model. We will then move to a chiral random matrix (CRM) theory. This is a somehow more realistic setting, giving us once again the chance to tackle the same couple of fundamental questions: how many thimbles contribute to the solution? how can we make sure that we correctly sample configurations on the thimble? Since the exact result is known for the observable we study (a condensate), we can verify that, in the region of parameters we studied, only one thimble contributes and that the algorithmic solution that we set up works well, despite its very ...

  3. Localization landscape theory of disorder in semiconductors I: Theory and modeling


    Filoche, Marcel; Piccardo, Marco; Wu, Yuh-Renn; Li, Chi-Kang; Weisbuch, Claude; Mayboroda, Svitlana


    We present here a model of carrier distribution and transport in semiconductor alloys accounting for quantum localization effects in disordered materials. This model is based on the recent development of a mathematical theory of quantum localization which introduces for each type of carrier a spatial function called the localization landscape. These landscapes allow us to predict the localization regions of electron and hole quantum states, their corresponding energies, and the local densi...

  4. Influence of model complexity and problem formulation on the forces in the knee calculated using optimization methods. (United States)

    Hu, Chih-Chung; Lu, Tung-Wu; Chen, Sheng-Chang


    Predictions of the forces transmitted by the redundant force-bearing structures in the knee are often performed using optimization methods considering only moment equipollence as a result of simplified knee modeling without ligament contributions. The current study aimed to investigate the influence of model complexity (with or without ligaments), problem formulation (moment equipollence with or without force equipollence) and optimization criteria on the prediction of the forces transmitted by the force-bearing structures in the knee. Ten healthy young male adults walked in a gait laboratory while their kinematics and ground reaction forces were measured simultaneously. A validated 3D musculoskeletal model of the locomotor system with a knee model that included muscles, ligaments and articular surfaces was used to calculate the joint resultant forces and moments, and subsequently the forces transmitted in the considered force-bearing structures via optimization methods. Three problem formulations with eight optimization criteria were evaluated. Among the three problem formulations, simultaneous consideration of moment and force equipollence for the knee model with ligaments and articular contacts predicted contact forces (first peak: 3.3-3.5 BW; second peak: 3.2-4.2 BW; swing: 0.3 BW) that were closest to previously reported theoretical values (2.0-4.0 BW) and in vivo data telemetered from older adults with total knee replacements (about 2.8 BW during stance; 0.5 BW during swing). Simultaneous consideration of moment and force equipollence also predicted more physiological ligament forces. Model complexity and problem formulation affect the prediction of the forces transmitted by the force-bearing structures at the knee during normal level walking. Inclusion of the ligaments in a knee model enables the simultaneous consideration of equations of force and moment equipollence, which is required for accurately estimating the contact and ligament forces, and is more critical than the

  5. Making sense of implementation theories, models and frameworks. (United States)

    Nilsen, Per


    Implementation science has progressed towards increased use of theoretical approaches to provide better understanding and explanation of how and why implementation succeeds or fails. The aim of this article is to propose a taxonomy that distinguishes between different categories of theories, models and frameworks in implementation science, to facilitate appropriate selection and application of relevant approaches in implementation research and practice and to foster cross-disciplinary dialogue among implementation researchers. Theoretical approaches used in implementation science have three overarching aims: describing and/or guiding the process of translating research into practice (process models); understanding and/or explaining what influences implementation outcomes (determinant frameworks, classic theories, implementation theories); and evaluating implementation (evaluation frameworks). This article proposes five categories of theoretical approaches to achieve three overarching aims. These categories are not always recognized as separate types of approaches in the literature. While there is overlap between some of the theories, models and frameworks, awareness of the differences is important to facilitate the selection of relevant approaches. Most determinant frameworks provide limited "how-to" support for carrying out implementation endeavours since the determinants usually are too generic to provide sufficient detail for guiding an implementation process. And while the relevance of addressing barriers and enablers to translating research into practice is mentioned in many process models, these models do not identify or systematically structure specific determinants associated with implementation success. Furthermore, process models recognize a temporal sequence of implementation endeavours, whereas determinant frameworks do not explicitly take a process perspective of implementation.

  6. Developing and exploring a theory for the lateral erosion of bedrock channels for use in landscape evolution models (United States)

    Langston, Abigail L.; Tucker, Gregory E.


    Understanding how a bedrock river erodes its banks laterally is a frontier in geomorphology. Theories for the vertical incision of bedrock channels are widely implemented in the current generation of landscape evolution models. However, in general existing models do not seek to implement the lateral migration of bedrock channel walls. This is problematic, as modeling geomorphic processes such as terrace formation and hillslope-channel coupling depends on the accurate simulation of valley widening. We have developed and implemented a theory for the lateral migration of bedrock channel walls in a catchment-scale landscape evolution model. Two model formulations are presented, one representing the slow process of widening a bedrock canyon and the other representing undercutting, slumping, and rapid downstream sediment transport that occurs in softer bedrock. Model experiments were run with a range of values for bedrock erodibility and tendency towards transport- or detachment-limited behavior and varying magnitudes of sediment flux and water discharge in order to determine the role that each plays in the development of wide bedrock valleys. The results show that this simple, physics-based theory for the lateral erosion of bedrock channels produces bedrock valleys that are many times wider than the grid discretization scale. This theory for the lateral erosion of bedrock channel walls and the numerical implementation of the theory in a catchment-scale landscape evolution model is a significant first step towards understanding the factors that control the rates and spatial extent of wide bedrock valleys.

  7. A new formulation of non-relativistic diffeomorphism invariance

    CERN Document Server

    Banerjee, Rabin; Mukherjee, Pradip


    We provide a new formulation of nonrelativistic diffeomorphism invariance. It is generated by localising the usual global Galilean symmetry. The correspondence with the type of diffeomorphism invariant models currently in vogue in the theory of fractional quantum Hall effect has been discussed. Our construction is shown to open up a general approach of model building in theoretical condensed matter physics. Also, this formulation has the capacity of obtaining Newton–Cartan geometry from the gauge procedure.

  8. A new formulation of non-relativistic diffeomorphism invariance

    Directory of Open Access Journals (Sweden)

    Rabin Banerjee


    Full Text Available We provide a new formulation of non-relativistic diffeomorphism invariance. It is generated by localising the usual global Galilean symmetry. The correspondence with the type of diffeomorphism invariant models currently in vogue in the theory of fractional quantum Hall effect has been discussed. Our construction is shown to open up a general approach of model building in theoretical condensed matter physics. Also, this formulation has the capacity of obtaining Newton–Cartan geometry from the gauge procedure.

  9. A new formulation of non-relativistic diffeomorphism invariance

    Energy Technology Data Exchange (ETDEWEB)

    Banerjee, Rabin, E-mail: [S.N. Bose National Centre for Basic Sciences, JD Block, Sector III, Salt Lake City, Kolkata-700 098 (India); Mitra, Arpita, E-mail: [S.N. Bose National Centre for Basic Sciences, JD Block, Sector III, Salt Lake City, Kolkata-700 098 (India); Mukherjee, Pradip, E-mail: [Department of Physics, Barasat Government College, Barasat, West Bengal (India)


    We provide a new formulation of non-relativistic diffeomorphism invariance. It is generated by localising the usual global Galilean symmetry. The correspondence with the type of diffeomorphism invariant models currently in vogue in the theory of fractional quantum Hall effect has been discussed. Our construction is shown to open up a general approach of model building in theoretical condensed matter physics. Also, this formulation has the capacity of obtaining Newton–Cartan geometry from the gauge procedure.

  10. Atmospheric infrasound propagation modelling using the reflectivity method with a direct formulation of the wind effect (United States)

    Maupin, Valerie; Näsholm, Sven Peter; Schweitzer, Johannes; Gibbons, Steven J.


    We recently advocated using the reflectivity method, also known as the wavenumber integration method or fast-field program, to model atmospheric infrasound propagation at regional distances. The advantage of the reflectivity method is its ability to model the full wavefield, including diffractive effects with head waves and shadow zone arrivals, in a broad frequency range but still at a relatively low computational cost. Attenuation can easily be included, making it possible to analyse relative amplitudes and frequency content of the different arrivals. It has clear advantages compared with ray theory in terms of predicting phases, given the particularly frequent occurrence of shadow zone arrivals in infrasound observations. Its main limitation, at least in the traditional form of the method, lies in the fact that it can only handle range-independent models. We presented earlier some reflectivity method simulations of an observed accidental explosion in Norway. Wind intensity and direction are non-negligible parameters for infrasound propagation and these are appropriately taken into account in most infrasound ray-tracing codes. On the other hand, in the previous reflectivity simulations wind was taken into account only through the effective sound speed approximation, where the horizontal projection of the wind field is added to the adiabatic sound speed profiles. This approximation is appropriate for dominantly horizontal propagation but can give incorrect arrival times and shadow zone locations for waves which have a significant portion of their propagation path at more vertical incidence, like thermospheric arrivals. We present here how we have modified the original reflectivity algorithm in order to take the wind into account in a more correct fashion, and how this improvement influences the synthetics.
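
    The effective sound speed approximation that the authors move beyond is simple to state (a sketch with made-up profile values): the horizontal wind is projected onto the propagation azimuth and added to the adiabatic sound speed at each height.

```python
import numpy as np

def effective_sound_speed(c, wind_east, wind_north, azimuth_deg):
    """c_eff = adiabatic sound speed + horizontal wind projected on the azimuth
    (azimuth measured clockwise from north, the usual geophysical convention)."""
    az = np.deg2rad(azimuth_deg)
    return c + wind_east * np.sin(az) + wind_north * np.cos(az)

c = np.array([340.0, 300.0, 280.0])    # sound speed at three heights (m/s), made up
u = np.array([5.0, 20.0, 40.0])        # eastward wind (m/s), made up
v = np.array([0.0, -5.0, 10.0])        # northward wind (m/s), made up

print(effective_sound_speed(c, u, v, 90.0))   # due-east path: c plus the eastward wind
```

    Because only the horizontal projection enters, steep (near-vertical) ray segments such as thermospheric arrivals feel an incorrect medium, which is exactly the deficiency the modified algorithm addresses.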

  11. Lenses on Reading An Introduction to Theories and Models

    CERN Document Server

    Tracey, Diane H


    This widely adopted text explores key theories and models that frame reading instruction and research. Readers learn why theory matters in designing and implementing high-quality instruction and research; how to critically evaluate the assumptions and beliefs that guide their own work; and what can be gained by looking at reading through multiple theoretical lenses. For each theoretical model, classroom applications are brought to life with engaging vignettes and teacher reflections. Research applications are discussed and illustrated with descriptions of exemplary studies. New to This Edition

  12. Selected HIV-1 Env trimeric formulations act as potent immunogens in a rabbit vaccination model

    DEFF Research Database (Denmark)

    Heyndrickx, Leo; Stewart-Jones, Guillaume; Jansson, Marianne Bendixen


    Ten to 30% of HIV-1 infected subjects develop broadly neutralizing antibodies (bNAbs) during chronic infection. We hypothesized that immunizing rabbits with viral envelope glycoproteins (Envs) from these patients may induce bNAbs, when formulated as a trimeric protein and in the presence...

  13. Mechanistic formulation of a linear-quadratic-linear (LQL) model: Split-dose experiments and exponentially decaying sources

    Energy Technology Data Exchange (ETDEWEB)

    Guerrero, Mariana; Carlone, Marco [Department of Radiation Oncology, University of Maryland School of Medicine, Baltimore, Maryland 21201 (United States) and Department of Radiation Therapy, Department of Veterans Affairs Medical Center, Washington, DC 20422 (United States); Princess Margaret Hospital and Peel Regional Cancer Center, Toronto, Ontario M5G 2M9 (Canada)


    Purpose: In recent years, several models were proposed that modify the standard linear-quadratic (LQ) model to make the predicted survival curve linear at high doses. Most of these models are purely phenomenological and can only be applied in the particular case of acute doses per fraction. The authors consider a mechanistic formulation of a linear-quadratic-linear (LQL) model in the case of split-dose experiments and exponentially decaying sources. This model provides a comprehensive description of radiation response for arbitrary dose rate and fractionation with only one additional parameter. Methods: The authors use a compartmental formulation of the LQL model from the literature. They analytically solve the model's differential equations for the case of a split-dose experiment and for an exponentially decaying source. They compare the solutions of the survival fraction with the standard LQ equations and with the lethal-potentially lethal (LPL) model. Results: In the case of the split-dose experiment, the LQL model predicts a recovery ratio as a function of dose per fraction that deviates from the square law of the standard LQ. The survival fraction as a function of time between fractions follows a similar exponential law as the LQ but adds a multiplicative factor to the LQ parameter β. The LQL solution for the split-dose experiment is very close to the LPL prediction. For the decaying source, the differences between the LQL and the LQ solutions are negligible when the half-life of the source is much larger than the characteristic repair time, which is the clinically relevant case. Conclusions: The compartmental formulation of the LQL model can be used for arbitrary dose rates and provides a comprehensive description of dose response. When the survival fraction for acute doses is linear for high dose, a deviation of the square law formula of the recovery ratio for split doses is also predicted.
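
    For reference, the standard LQ split-dose survival that the LQL recovery ratio deviates from can be written down directly (illustrative radiobiological parameters, not taken from the paper): two equal acute fractions of dose d separated by time t, with mono-exponential repair of sublethal damage at rate mu, give the familiar square-law recovery ratio exp(2*beta*d^2) at full repair.

```python
import math

def lq_split_survival(d, t, alpha=0.3, beta=0.03, mu=0.5):
    """Standard LQ for two equal acute fractions d separated by time t:
    -ln S = 2*alpha*d + 2*beta*d**2 * (1 + exp(-mu*t))."""
    return math.exp(-(2 * alpha * d + 2 * beta * d * d * (1 + math.exp(-mu * t))))

d = 4.0
# Recovery ratio: survival with full repair between fractions vs no gap
full_repair = lq_split_survival(d, t=1e9) / lq_split_survival(d, t=0.0)
print(round(full_repair, 3))          # equals exp(2 * beta * d**2), the square law
```

    The LQL model modifies exactly this d-squared dependence of the recovery ratio, which is how split-dose experiments can discriminate between the two models.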

  14. Microscopic driving theory with oscillatory congested states: model and empirical verification

    CERN Document Server

    Tian, Junfang; Ma, Shoufeng; Jia, Bin; Zhang, Wenyi


    The essential distinction between the Fundamental Diagram Approach (FDA) and Kerner's Three-Phase Theory (KTPT) is the existence of a unique gap-speed (or flow-density) relationship in the former class. In order to verify this relationship, empirical data are analyzed with the following findings: (1) a linear relationship between the actual space gap and speed can be identified when the speed difference between vehicles approximates zero; (2) vehicles accelerate or decelerate around the desired space gap most of the time. To explain these phenomena, we propose that, in congested traffic flow, the space gap between two vehicles will oscillate around the desired space gap in the deterministic limit. This assumption is formulated in terms of a cellular automaton. In contrast to FDA and KTPT, the new model does not have any congested steady-state solution. Simulations under periodic and open boundary conditions reproduce the empirical findings of KTPT. Calibrating and validating the model to detector data produces...

  15. Theory, modeling and simulation of superconducting qubits

    Energy Technology Data Exchange (ETDEWEB)

    Berman, Gennady P [Los Alamos National Laboratory]; Kamenev, Dmitry I [Los Alamos National Laboratory]; Chumak, Alexander [Institute of Physics, Kiev]; Kinion, Carin [LLNL]; Tsifrinovich, Vladimir [Polytechnic Institute of NYU]


    We analyze the dynamics of a qubit-resonator system coupled to a thermal bath and external electromagnetic fields. Using the evolution equations for the set of Heisenberg operators that describe the whole system, we derive an expression for the resonator field that includes the resonator-drive, resonator-bath, and resonator-qubit interactions. The renormalization of the resonator frequency caused by the qubit-resonator interaction is accounted for. Using the solutions for the resonator field, we derive the equation that describes the qubit dynamics. The dependence of the qubit evolution during the measurement time on the fidelity of a single-shot measurement is studied. The relation between the fidelity and the measurement time is shown explicitly. We propose a novel adiabatic method for phase-qubit measurement. The method utilizes a low-frequency, quasi-classical resonator inductively coupled to the qubit. The resonator modulates the qubit energy, and the back reaction of the qubit causes a shift in the phase of the resonator. The resonator phase shift can be used to determine the qubit state. We have simulated this measurement taking into account the energy levels outside the phase-qubit manifold. We have shown that, for qubit frequencies in the range of 8-12 GHz, a resonator frequency of 500 MHz, and a measurement time of 100 ns, the phase difference between the two qubit states is greater than 0.2 rad. This phase difference exceeds the measurement uncertainty and can be detected using a classical phase-meter. A fidelity of 0.9999 can be achieved for a relaxation time of 0.5 ms. We also model and simulate a microstrip-SQUID amplifier with a frequency of about 500 MHz, which could be used to amplify the resonator oscillations in the phase-qubit adiabatic measurement. The voltage gain and the amplifier noise temperature are calculated. We simulate the preparation of a generalized Bell state and compute the relaxation times required for achieving high

  16. An introduction to queueing theory: modeling and analysis in applications

    CERN Document Server

    Bhat, U Narayan


    This introductory textbook is designed for a one-semester course on queueing theory that does not require a course on stochastic processes as a prerequisite. By integrating the necessary background on stochastic processes with the analysis of models, the work provides a sound foundational introduction to the modeling and analysis of queueing systems for a wide interdisciplinary audience of students in mathematics, statistics, and applied disciplines such as computer science, operations research, and engineering. This edition includes additional topics in methodology and applications. Key features: • An introductory chapter including a historical account of the growth of queueing theory over more than 100 years. • A modeling-based approach with emphasis on the identification of models. • Rigorous treatment of the foundations of basic models commonly used in applications, with appropriate references for advanced topics. • Applications in manufacturing, computer, and communication systems. • A chapter on ...
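
    As a taste of the basic models such a course covers, the steady-state M/M/1 formulas (standard textbook results, not quoted from this book) can be computed directly:

```python
# Steady-state M/M/1 formulas, the simplest of the basic queueing models such
# an introduction treats (standard results, not quoted from the book).
def mm1_metrics(lam, mu):
    """Return (utilization, mean number in system, mean time in system)."""
    if lam >= mu:
        raise ValueError("M/M/1 is stable only for lam < mu")
    rho = lam / mu                  # server utilization
    L = rho / (1 - rho)             # mean number in system
    W = L / lam                     # Little's law: L = lam * W
    return rho, L, W

print(mm1_metrics(2.0, 4.0))        # (0.5, 1.0, 0.5)
```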

  17. Traffic Games: Modeling Freeway Traffic with Game Theory. (United States)

    Cortés-Berrueco, Luis E; Gershenson, Carlos; Stephens, Christopher R


    We apply game theory to a vehicular traffic model to study the effect of driver strategies on traffic flow. The resulting model inherits the realistic dynamics achieved by a two-lane traffic model and aims to incorporate phenomena caused by driver-driver interactions. To achieve this goal, a game-theoretic description of driver interaction was developed. This game-theoretic formalization allows one to model different lane-changing behaviors and to keep track of mobility performance. We simulate the evolution of cooperation, traffic flow, and mobility performance for different modeled behaviors. The analysis of these results indicates a mobility optimization process achieved by drivers' interactions.

  18. Theory of positive disintegration as a model of adolescent development. (United States)

    Laycraft, Krystyna


    This article introduces a conceptual model of adolescent development based on the theory of positive disintegration combined with the theory of self-organization. Dabrowski's theory of positive disintegration, created almost half a century ago, still attracts the attention of psychologists and educators, and is extensively applied in studies of gifted and talented people. Positive disintegration is mental development described as a process of transition from lower to higher levels of mental life, stimulated by tension, inner conflict, and anxiety. This process can be modeled by a sequence of patterns of organization (attractors) as a developmental potential (a control parameter) changes. Three levels of disintegration (unilevel disintegration, spontaneous multilevel disintegration, and organized multilevel disintegration) are analyzed in detail, and it is proposed that they represent the behaviour of the early, middle, and late periods of adolescence. The discussion includes recent research on adolescent brain development.

  19. Integrating social capital theory, social cognitive theory, and the technology acceptance model to explore a behavioral model of telehealth systems. (United States)

    Tsai, Chung-Hung


    Telehealth has become an increasingly common means of delivering health care to rural and underserved areas by remote health care professionals. This study integrated social capital theory, social cognitive theory, and the technology acceptance model (TAM) to develop a comprehensive behavioral model for analyzing the relationships among social capital factors (social capital theory), technological factors (TAM), and system self-efficacy (social cognitive theory) in telehealth. The proposed framework was validated with 365 respondents from Nantou County, located in Central Taiwan. Structural equation modeling (SEM) was used to assess the causal relationships hypothesized in the proposed model. The findings indicate that elderly residents generally reported positive perceptions of the telehealth system. In general, the social capital factors (social trust, institutional trust, and social participation) significantly and positively affected the technological factors (perceived ease of use and perceived usefulness, respectively), which in turn influenced usage intention. This study also confirmed that system self-efficacy was the salient antecedent of perceived ease of use. In addition, the proposed model fitted the sample data considerably well. The proposed integrative psychosocial-technological model may serve as a theoretical basis for future research and can also offer empirical foresight to practitioners and researchers in the health departments of governments, hospitals, and rural communities.

  20. Integrating Social Capital Theory, Social Cognitive Theory, and the Technology Acceptance Model to Explore a Behavioral Model of Telehealth Systems (United States)

    Tsai, Chung-Hung


    Telehealth has become an increasingly common means of delivering health care to rural and underserved areas by remote health care professionals. This study integrated social capital theory, social cognitive theory, and the technology acceptance model (TAM) to develop a comprehensive behavioral model for analyzing the relationships among social capital factors (social capital theory), technological factors (TAM), and system self-efficacy (social cognitive theory) in telehealth. The proposed framework was validated with 365 respondents from Nantou County, located in Central Taiwan. Structural equation modeling (SEM) was used to assess the causal relationships hypothesized in the proposed model. The findings indicate that elderly residents generally reported positive perceptions of the telehealth system. In general, the social capital factors (social trust, institutional trust, and social participation) significantly and positively affected the technological factors (perceived ease of use and perceived usefulness, respectively), which in turn influenced usage intention. This study also confirmed that system self-efficacy was the salient antecedent of perceived ease of use. In addition, the proposed model fitted the sample data considerably well. The proposed integrative psychosocial-technological model may serve as a theoretical basis for future research and can also offer empirical foresight to practitioners and researchers in the health departments of governments, hospitals, and rural communities. PMID:24810577

  1. Integrating Social Capital Theory, Social Cognitive Theory, and the Technology Acceptance Model to Explore a Behavioral Model of Telehealth Systems

    Directory of Open Access Journals (Sweden)

    Chung-Hung Tsai


    Telehealth has become an increasingly common means of delivering health care to rural and underserved areas by remote health care professionals. This study integrated social capital theory, social cognitive theory, and the technology acceptance model (TAM) to develop a comprehensive behavioral model for analyzing the relationships among social capital factors (social capital theory), technological factors (TAM), and system self-efficacy (social cognitive theory) in telehealth. The proposed framework was validated with 365 respondents from Nantou County, located in Central Taiwan. Structural equation modeling (SEM) was used to assess the causal relationships hypothesized in the proposed model. The findings indicate that elderly residents generally reported positive perceptions of the telehealth system. In general, the social capital factors (social trust, institutional trust, and social participation) significantly and positively affected the technological factors (perceived ease of use and perceived usefulness, respectively), which in turn influenced usage intention. This study also confirmed that system self-efficacy was the salient antecedent of perceived ease of use. In addition, the proposed model fitted the sample data considerably well. The proposed integrative psychosocial-technological model may serve as a theoretical basis for future research and can also offer empirical foresight to practitioners and researchers in the health departments of governments, hospitals, and rural communities.

  2. Mechanistic Oral Absorption Modeling and Simulation for Formulation Development and Bioequivalence Evaluation: Report of an FDA Public Workshop. (United States)

    Zhang, X; Duan, J; Kesisoglou, F; Novakovic, J; Amidon, G L; Jamei, M; Lukacova, V; Eissing, T; Tsakalozou, E; Zhao, L; Lionberger, R


    On May 19, 2016, the US Food and Drug Administration (FDA) hosted a public workshop entitled "Mechanistic Oral Absorption Modeling and Simulation for Formulation Development and Bioequivalence Evaluation." Mechanistic oral absorption modeling, one of the major applications of physiologically based pharmacokinetic (PBPK) modeling and simulation, focuses on predicting oral absorption by mechanistically integrating gastrointestinal transit, dissolution, and permeation processes, incorporating systems, active pharmaceutical ingredient (API), and drug product information into a systemic mathematical whole-body framework. © 2017 The Authors CPT: Pharmacometrics & Systems Pharmacology published by Wiley Periodicals, Inc. on behalf of American Society for Clinical Pharmacology and Therapeutics.
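
    The mechanistic frameworks discussed at the workshop assemble many coupled processes; the simplest building block is first-order absorption into a one-compartment model (the Bateman equation). A sketch with hypothetical parameters, far simpler than a full PBPK model:

```python
import math

# One-compartment, first-order oral absorption (the Bateman equation): a
# deliberately minimal building block of the mechanistic frameworks the
# workshop discussed. Dose, ka, ke, and V are hypothetical values.
def plasma_conc(t, dose=100.0, ka=1.0, ke=0.1, V=10.0):
    """Plasma concentration at time t (requires ka != ke)."""
    return (dose * ka / (V * (ka - ke))
            * (math.exp(-ke * t) - math.exp(-ka * t)))

# Concentration rises during absorption and falls during elimination.
print(plasma_conc(2.0))
```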

  3. Reconstructing constructivism: Causal models, Bayesian learning mechanisms and the theory theory (United States)

    Gopnik, Alison; Wellman, Henry M.


    We propose a new version of the “theory theory” grounded in the computational framework of probabilistic causal models and Bayesian learning. Probabilistic models allow a constructivist but rigorous and detailed approach to cognitive development. They also explain the learning of both more specific causal hypotheses and more abstract framework theories. We outline the new theoretical ideas, explain the computational framework in an intuitive and non-technical way, and review an extensive but relatively recent body of empirical results that supports these ideas. These include new studies of the mechanisms of learning. Children infer causal structure from statistical information, through their own actions on the world and through observations of the actions of others. Studies demonstrate these learning mechanisms in children from 16 months to 4 years old and include research on causal statistical learning, informal experimentation through play, and imitation and informal pedagogy. They also include studies of the variability and progressive character of intuitive theory change, particularly theory of mind. These studies investigate both the physical and psychological and social domains. We conclude with suggestions for further collaborative projects between developmental and computational cognitive scientists. PMID:22582739

  4. Theory, modeling, and integrated studies in the Arase (ERG) project (United States)

    Seki, Kanako; Miyoshi, Yoshizumi; Ebihara, Yusuke; Katoh, Yuto; Amano, Takanobu; Saito, Shinji; Shoji, Masafumi; Nakamizo, Aoi; Keika, Kunihiro; Hori, Tomoaki; Nakano, Shin'ya; Watanabe, Shigeto; Kamiya, Kei; Takahashi, Naoko; Omura, Yoshiharu; Nose, Masahito; Fok, Mei-Ching; Tanaka, Takashi; Ieda, Akimasa; Yoshikawa, Akimasa


    Understanding the mechanisms underlying drastic variations of the near-Earth space environment (geospace) is one of the current focuses of magnetospheric physics. The science target of the geospace research project Exploration of energization and Radiation in Geospace (ERG) is to understand geospace variations with a focus on relativistic electron acceleration and loss processes. To achieve this goal, the ERG project consists of three parts: the Arase (ERG) satellite, ground-based observations, and theory/modeling/integrated studies. The role of the theory/modeling/integrated-studies part is to promote relevant theoretical and simulation studies as well as integrated data analysis to combine different kinds of observations and modeling. Here we provide technical reports on simulation and empirical models related to the ERG project, together with their roles in the integrated studies of dynamic geospace variations. The simulation and empirical models covered include the radial diffusion model of radiation belt electrons, the GEMSIS-RB and RBW models, the CIMI model with the global MHD simulation REPPU, the GEMSIS-RC model, the plasmasphere thermosphere model, self-consistent wave-particle interaction simulations (electron hybrid code and ion hybrid code), the ionospheric electric potential (GEMSIS-POT) model, and SuperDARN electric field models with data assimilation. ERG (Arase) science center tools to support integrated studies with various kinds of data are also briefly introduced.

  5. A single MCR-ALS model for drug analysis in different formulations: Application on diazepam commercial preparations. (United States)

    De Luca, Michele; Ioele, Giuseppina; Spatari, Claudia; Ragno, Gaetano


    A multivariate curve resolution - alternating least squares (MCR-ALS) analysis was used to quantify diazepam (DZP) in thirty commercial liquid formulations. MCR calibration was run on the UV spectrophotometric data of the commercial DZP samples over the range 200-400 nm, allowing resolution of the drug signal from those of the excipients contained in the formulations. A single MCR model for determining the drug in all samples was then built through the adoption of the correlation constraint. This model was optimized by an appropriate selection of the most useful wavelength ranges and then validated on external samples. DZP concentrations in the pharmaceutical formulations were measured by HPLC-DAD analysis. The performance of the MCR model was compared with that of classical partial least squares regression (PLSR). The results, in terms of prediction error, were very satisfactory, with relative errors below 1.66% and 2.56%, respectively. Copyright © 2016 Elsevier B.V. All rights reserved.
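
    The linear-algebra step underlying such multivariate calibrations is resolving a mixture spectrum into contributions of known components by least squares (MCR-ALS alternates constrained versions of this regression). A sketch with invented spectra, not DZP data:

```python
# Resolving a two-component mixture spectrum by classical least squares, the
# regression step that MCR-ALS alternates under constraints. The pure spectra
# and concentrations below are invented, not DZP data.
S = [(0.9, 0.1),   # rows: wavelengths; columns: (drug, excipient) absorptivity
     (0.5, 0.4),
     (0.2, 0.8)]
c_true = (2.0, 1.0)
y = [a * c_true[0] + b * c_true[1] for a, b in S]   # noise-free mixture

# Normal equations (S^T S) c = S^T y for the 2-component case.
s11 = sum(a * a for a, _ in S)
s12 = sum(a * b for a, b in S)
s22 = sum(b * b for _, b in S)
t1 = sum(a * yi for (a, _), yi in zip(S, y))
t2 = sum(b * yi for (_, b), yi in zip(S, y))
det = s11 * s22 - s12 * s12
c_est = ((s22 * t1 - s12 * t2) / det, (s11 * t2 - s12 * t1) / det)
print(c_est)   # recovers (2.0, 1.0) up to rounding
```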

  6. Biotherapeutic protein formulation variables influence protein integrity and can promote post-translational modifications as shown using chicken egg white lysozyme as a model system


    Gourbatsi, Evdoxia; Povey, Jane; Uddin, Shahid; Smales, C. Mark


    Objectives: The effects of different formulation variables on protein integrity were investigated using lysozyme as a model protein for the development of biotherapeutic protein formulations for use in the clinic. Results: Buffer composition/concentration was the key formulation variable determining lysozyme stability and authenticity, independent of protein concentration, whilst storage temperature and time, not surprisingly, were also key variables. Tryptic pepti...

  7. Linking Complexity and Sustainability Theories: Implications for Modeling Sustainability Transitions

    Directory of Open Access Journals (Sweden)

    Camaren Peter


    In this paper, we deploy complexity theory as the foundation for integrating different theoretical approaches to sustainability and develop a rationale for a complexity-based framework for modeling transitions to sustainability. We propose a framework based on a comparison of the complex-systems properties that characterize the different theories dealing with transitions to sustainability. We argue that adopting a complexity-theory-based approach for modeling transitions requires going beyond deterministic frameworks, by adopting a probabilistic, integrative, inclusive, and adaptive approach that can support transitions. We also illustrate how this complexity-based modeling framework can be implemented, i.e., how it can be used to select modeling techniques that address particular properties of complex systems that we need to understand in order to model transitions to sustainability. In doing so, we establish a complexity-based approach towards modeling sustainability transitions that caters for the broad range of complex-systems properties required to model transitions to sustainability.

  8. Stochastic models in risk theory and management accounting

    NARCIS (Netherlands)

    Brekelmans, R.C.M.


    This thesis deals with stochastic models in two fields: risk theory and management accounting. Firstly, two extensions of the classical risk process are analyzed. A method is developed that computes bounds on the probability of ruin for the classical risk process extended with a constant interest
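
    For orientation, the classical (Cramér-Lundberg) risk process that such theses extend can be sketched by Monte Carlo; no interest is included here and all parameters are illustrative, not from the thesis:

```python
import random

# Classical risk process: reserve u + c*t minus compound-Poisson claims.
# Estimates the finite-horizon ruin probability by simulation; parameters
# are illustrative, not from the thesis.
def ruin_probability(u=10.0, c=1.5, lam=1.0, mean_claim=1.0,
                     horizon=100.0, n_paths=2000, seed=1):
    rng = random.Random(seed)
    ruined = 0
    for _ in range(n_paths):
        t, paid = 0.0, 0.0
        while True:
            t += rng.expovariate(lam)                 # next claim arrival
            if t > horizon:
                break
            paid += rng.expovariate(1.0 / mean_claim)  # exponential claim size
            if u + c * t - paid < 0.0:
                ruined += 1
                break
    return ruined / n_paths

print(ruin_probability())   # small, thanks to the positive safety loading
```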

  9. Anisotropic cosmological models and generalized scalar tensor theory

    Indian Academy of Sciences (India)

    Abstract. In this paper generalized scalar tensor theory is considered in the background of anisotropic cosmological models, namely, axially symmetric Bianchi-I, Bianchi-III and Kantowski–Sachs space-times. For bulk viscous fluid, both exponential and power-law solutions have been studied and some assumptions ...

  10. Magnetized cosmological models in bimetric theory of gravitation

    Indian Academy of Sciences (India)

    Abstract. A Bianchi type-III magnetized cosmological model in which the field of gravitation is governed by either a perfect fluid or a cosmic string is investigated in Rosen's [1] bimetric theory of gravitation. To obtain a determinate solution, the condition A = (BC)^n, where n is a constant, relating the metric potentials is used.

  11. Anisotropic cosmological models in f (R, T) theory of gravitation

    Indian Academy of Sciences (India)

    cosmologically viable f(R) gravity model, which showed the unification of early-time inflation and late-time acceleration. Harko et al [13] developed f(R, T) modified theory of gravity, where the gravitational Lagrangian is given by an arbitrary function of the Ricci scalar R and the trace T of the energy–momentum tensor. It is to be noted ...

  12. Teaching Model Building to High School Students: Theory and Reality. (United States)

    Roberts, Nancy; Barclay, Tim


    Builds on a National Science Foundation (NSF) microcomputer based laboratory project to introduce system dynamics into the precollege setting. Focuses on providing students with powerful and investigatory theory building tools. Discusses developed hardware, software, and curriculum materials used to introduce model building and simulations into…

  13. A Model to Demonstrate the Place Theory of Hearing (United States)

    Ganesh, Gnanasenthil; Srinivasan, Venkata Subramanian; Krishnamurthi, Sarayu


    In this brief article, the authors discuss Georg von Békésy's experiments showing the existence of traveling waves in the basilar membrane and that maximal displacement of the traveling wave was determined by the frequency of the sound. The place theory of hearing equates the basilar membrane to a frequency analyzer. The model described in this…

  14. Multilevel Higher-Order Item Response Theory Models (United States)

    Huang, Hung-Yu; Wang, Wen-Chung


    In the social sciences, latent traits often have a hierarchical structure, and data can be sampled from multiple levels. Both hierarchical latent traits and multilevel data can occur simultaneously. In this study, we developed a general class of item response theory models to accommodate both hierarchical latent traits and multilevel data. The…

  15. SIMP model at NNLO in chiral perturbation theory

    DEFF Research Database (Denmark)

    Hansen, Martin Rasmus Lundquist; Langaeble, K.; Sannino, F.


    We investigate the phenomenological viability of a recently proposed class of composite dark matter models where the relic density is determined by 3 to 2 number-changing processes in the dark sector. Here the pions of the strongly interacting field theory constitute the dark matter particles...

  16. Speech act theory in support of idealised warning models | Carstens ...

    African Journals Online (AJOL)

    ... subsuming lower level speech acts such as POINTING OUT/ALERTING, INFORMING and INSTRUCTING. Secondly, the model is used to analyse and evaluate actual warnings collected from information sheets for hair-dryers, indicating the heuristic value of combined insights from document design and speech act theory ...

  17. A Proposed Model of Jazz Theory Knowledge Acquisition (United States)

    Ciorba, Charles R.; Russell, Brian E.


    The purpose of this study was to test a hypothesized model that proposes a causal relationship between motivation and academic achievement on the acquisition of jazz theory knowledge. A reliability analysis of the latent variables ranged from 0.92 to 0.94. Confirmatory factor analyses of the motivation (standardized root mean square residual…

  18. Conceptualizations of Creativity: Comparing Theories and Models of Giftedness (United States)

    Miller, Angie L.


    This article reviews seven different theories of giftedness that include creativity as a component, comparing and contrasting how each one conceptualizes creativity as a part of giftedness. The functions of creativity vary across the models, suggesting that while the field of gifted education often cites the importance of creativity, the…

  19. Dimensions of Genocide: The Circumplex Model Meets Violentization Theory (United States)

    Winton, Mark A.


    The purpose of this study is to examine the use of Olson's (1995, 2000) family therapy based circumplex model and Athens' (1992, 1997, 2003) violentization theory in explaining genocide. The Rwandan genocide of 1994 is used as a case study. Published texts, including interviews with perpetrators, research reports, human rights reports, and court…

  20. Pilot evaluation in TENCompetence: a theory-driven model


    Schoonenboom, Judith; Sligte, Henk; Moghnieh, Ayman; Specht, Marcus; Glahn, Christian; Stefanov, Krassen


    Schoonenboom, J., Sligte, H., Moghnieh, A., Specht, M., Glahn, C., & Stefanov, K. (2007). Pilot evaluation in TENCompetence: a theory-driven model. In T. Navarette, J. Blat & R. Koper (Eds.). Proceedings of the 3rd TENCompetence Open Workshop 'Current Research on IMS Learning Design and Lifelong Competence Development Infrastructures' (pp. 43-50). June, 21-22, 2007, Barcelona, Spain.

  1. Pilot evaluation in TENCompetence: a theory-driven model

    NARCIS (Netherlands)

    Schoonenboom, Judith; Sligte, Henk; Moghnieh, Ayman; Specht, Marcus; Glahn, Christian; Stefanov, Krassen


    Schoonenboom, J., Sligte, H., Moghnieh, A., Specht, M., Glahn, C., & Stefanov, K. (2007). Pilot evaluation in TENCompetence: a theory-driven model. In T. Navarette, J. Blat & R. Koper (Eds.). Proceedings of the 3rd TENCompetence Open Workshop 'Current Research on IMS Learning Design and Lifelong

  2. Excellence in Physics Education Award: Modeling Theory for Physics Instruction (United States)

    Hestenes, David


    All humans create mental models to plan and guide their interactions with the physical world. Science has greatly refined and extended this ability by creating and validating formal scientific models of physical things and processes. Research in physics education has found that mental models created from everyday experience are largely incompatible with scientific models. This suggests that the fundamental problem in learning and understanding science is coordinating mental models with scientific models. Modeling Theory has drawn on resources of cognitive science to work out extensive implications of this suggestion and guide development of an approach to science pedagogy and curriculum design called Modeling Instruction. Modeling Instruction has been widely applied to high school physics and, more recently, to chemistry and biology, with noteworthy results.

  3. Alliance: A common factor of psychotherapy modeled by structural theory

    Directory of Open Access Journals (Sweden)

    Wolfgang eTschacher


    There is broad consensus that the therapeutic alliance constitutes a core common factor for all modalities of psychotherapy. Meta-analyses have corroborated that alliance, as it emerges from the therapeutic process, is a significant predictor of therapy outcome. Psychotherapy process is traditionally described and explored using two categorically different approaches, the experiential (first-person) perspective and the behavioral (third-person) perspective. We propose to add to this duality a third, structural approach. Dynamical systems theory and synergetics on the one hand, and enactivist theory on the other, can together provide this structural approach, which contributes in specific ways to a clarification of the alliance factor. Systems theory offers concepts and tools for modeling the individual self and, building on this, alliance processes. In the enactive perspective, the self is conceived as a socially enacted autonomous system that strives to maintain identity by observing a two-fold goal: to exist as an individual self in its own right (distinction) while also being open to others (participation). Using this conceptualization, we formalized the therapeutic alliance as a phase space whose potential minima (attractors) can be shifted by the therapist to approximate therapy goals. This mathematical formalization is derived from probability theory and synergetics. We conclude that structural theory provides powerful tools for modeling how therapeutic change is staged by the formation, utilization, and dissolution of the therapeutic alliance. In addition, we point out novel testable hypotheses and future applications.

  4. Route Choice Model Based on Game Theory for Commuters

    Directory of Open Access Journals (Sweden)

    Licai Yang


    The traffic behaviour of commuters may cause traffic congestion during peak hours. An Advanced Traffic Information System can provide dynamic information to travellers, but due to a lack of timeliness and comprehensiveness, the provided information often cannot satisfy travellers' needs. Since the assumptions of the traditional route choice model based on Expected Utility Theory conflict with actual conditions, this paper proposes a route choice model based on Game Theory to provide reliable route choices to commuters in realistic situations. The proposed model treats the alternative routes as game players and utilizes the precision of predicted information and familiarity with traffic conditions to build a game. The optimal route can be generated by solving the route choice game for a Nash Equilibrium. Simulations and experimental analysis show that the proposed model describes commuters' routine route choice decisions exactly and that the provided route is reliable.
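
    The equilibrium notion the model relies on can be illustrated by brute-force search for pure-strategy Nash equilibria in a tiny two-route congestion game (hypothetical payoffs, not the paper's game):

```python
from itertools import product

# Brute-force search for pure-strategy Nash equilibria in a tiny two-route
# congestion game, illustrating the equilibrium concept the route-choice
# model solves for. The payoffs are hypothetical, not the paper's game.
A = [[1, 3],   # row commuter's payoff; sharing a route is congested (low)
     [3, 2]]
B = [[1, 3],   # column commuter's payoff (symmetric congestion costs)
     [3, 2]]

def pure_nash(A, B):
    """All (i, j) where neither player gains by deviating unilaterally."""
    rows, cols = len(A), len(A[0])
    return [(i, j) for i, j in product(range(rows), range(cols))
            if A[i][j] >= max(A[k][j] for k in range(rows))
            and B[i][j] >= max(B[i][l] for l in range(cols))]

print(pure_nash(A, B))   # [(0, 1), (1, 0)]: commuters split across routes
```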

  5. Accounting for Errors in Model Analysis Theory: A Numerical Approach (United States)

    Sommer, Steven R.; Lindell, Rebecca S.


    By studying the patterns of a group of individuals' responses to a series of multiple-choice questions, researchers can utilize Model Analysis Theory to create a probability distribution of mental models for a student population. The eigenanalysis of this distribution yields information about which mental models the students possess, as well as how consistently they utilize those mental models. Although the theory considers the probabilistic distribution to be fundamental, there exist opportunities for random errors to occur. In this paper we discuss a numerical approach for mathematically accounting for these random errors. As an example of this methodology, an analysis of data obtained from the Lunar Phases Concept Inventory is presented. Limitations and applicability of this numerical approach are discussed.
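
    The density-matrix eigenanalysis the abstract refers to can be sketched as follows, in the spirit of Bao and Redish's Model Analysis; the response counts are invented:

```python
# Sketch of Model Analysis Theory's class density matrix and its dominant
# eigenvalue (via power iteration). counts[k][m] = number of answers by
# student k consistent with mental model m; all numbers are invented.
counts = [[8, 2, 0],
          [5, 5, 0],
          [1, 7, 2],
          [9, 1, 0]]

# Per-student model state: u_m = sqrt(fraction of answers matching model m).
states = [[(c / sum(row)) ** 0.5 for c in row] for row in counts]
M = len(counts[0])
D = [[sum(u[i] * u[j] for u in states) / len(states) for j in range(M)]
     for i in range(M)]                     # class density matrix, trace = 1

v = [1.0] * M                               # power iteration for the
for _ in range(200):                        # largest eigenvalue of D
    w = [sum(D[i][j] * v[j] for j in range(M)) for i in range(M)]
    norm = sum(x * x for x in w) ** 0.5
    v = [x / norm for x in w]
lead = sum(v[i] * sum(D[i][j] * v[j] for j in range(M)) for i in range(M))
print(lead)   # strength of the class's dominant model mix
```

    A large leading eigenvalue indicates the class uses one model mix consistently; eigenvalues near 1/M indicate inconsistent model use.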

  6. Asset pricing models, the labour theory of value and their implications for accounting


    Toms, S.


    The paper analyses the social components of two theories of central importance to accounting and finance. It shows that modern finance theory is unable to account for value and that although its canon and major assumptions successfully obfuscate real social relations, they do not provide an alternative explanation of those relations. The paper also argues that although Marx’s Capital uses accounting formulations to analyse capitalism, it does not provide a means of advancing accounting theory...

  7. On the formulations of higher-order strain gradient crystal plasticity models

    DEFF Research Database (Denmark)

    Kuroda, M.; Tvergaard, Viggo


    Recently, several higher-order extensions to crystal plasticity theory have been proposed to incorporate effects of material length scales that were missing links in conventional continuum mechanics. The extended theories are classified into work-conjugate and non-work-conjugate types. ... quantities do not appear explicitly. Instead, rates of crystallographic slip are influenced by back stresses that arise in response to spatial gradients of the geometrically necessary dislocation densities. The work-conjugate type and the non-work-conjugate type of theories have different theoretical...

  8. Mapping of the stochastic Lotka-Volterra model to models of population genetics and game theory (United States)

    Constable, George W. A.; McKane, Alan J.


    The relationship between the M-species stochastic Lotka-Volterra competition (SLVC) model and the M-allele Moran model of population genetics is explored via timescale separation arguments. When selection for species is weak and the population size is large but finite, precise conditions are determined for the stochastic dynamics of the SLVC model to be mappable to the neutral Moran model, the Moran model with frequency-independent selection, and the Moran model with frequency-dependent selection (equivalently, a game-theoretic formulation of the Moran model). We demonstrate how these mappings can be used to calculate extinction probabilities and the times until a species' extinction in the SLVC model.
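
    The Moran-model quantities that such mappings let one transfer include textbook fixation probabilities; a sketch for the two-allele case with constant relative fitness (parameters illustrative):

```python
# Fixation probabilities in the two-allele Moran model, the kind of quantity
# the SLVC-to-Moran mapping lets one transfer; parameters are illustrative.
def moran_fixation(i, N, r=1.0):
    """Probability that an allele with constant relative fitness r,
    starting from i copies in a population of size N, eventually fixes."""
    if r == 1.0:
        return i / N                       # neutral case
    return (1 - r ** -i) / (1 - r ** -N)   # frequency-independent selection

print(moran_fixation(1, 100))              # neutral: 0.01
print(moran_fixation(1, 100, r=1.1))       # weak advantage: ~0.0909
```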

  9. Formulation of in situ chemically cross-linked hydrogel depots for protein release: from the blob model perspective. (United States)

    Yu, Yu; Chau, Ying


    The fast release rate and undesirable covalent binding are two major problems often encountered when formulating in situ chemically cross-linked hydrogels as protein release depots, particularly when prolonged release over months is desired. In this study, we applied de Gennes' blob theory to analyze and tackle these two problems using a vinylsulfone-thiol (VS-SH) reaction based in situ hydrogel system. We showed that the simple scaling relation ξ_b ≈ R_g(c/c*)^(−ν/(3ν−1)) is applicable to the in situ hydrogel and that the mesh size estimated from the precursor polymer parameters is a reasonable match to experimental results. On the other hand, as predicted by the theory and confirmed by experiments, drug diffusion within the hydrogel depends mainly on polymer concentration, not on the degree of modification (DM). The covalent binding was found to be caused by a mismatch in location between the reactive groups and the entanglement points. The mismatch, and thus the protein binding, was minimized by increasing the DM and the concentration of the SH polymer relative to the VS polymer, as predicted by theory. Using these principles, an in situ hydrogel system for the controlled release of the antiangiogenic antibody therapeutic bevacizumab over 3 months was developed.
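The scaling relation quoted in this abstract is directly computable. The sketch below evaluates ξ_b ≈ R_g(c/c*)^(−ν/(3ν−1)) with the standard good-solvent exponent ν ≈ 0.588; the numerical values (R_g, c/c*) are illustrative, not taken from the paper:

```python
def blob_mesh_size(rg_nm, c, c_star, nu=0.588):
    """Semidilute blob (correlation) size: xi_b ~ Rg * (c/c*)^(-nu/(3nu-1)).
    nu = 0.588 is the Flory exponent for a good solvent."""
    return rg_nm * (c / c_star) ** (-nu / (3 * nu - 1))

# Hypothetical precursor: Rg = 10 nm at four times the overlap concentration
xi = blob_mesh_size(10.0, c=4.0, c_star=1.0)  # roughly 3.4 nm
```

The exponent −ν/(3ν−1) ≈ −0.77, so the mesh shrinks sublinearly as concentration rises above overlap — consistent with the abstract's finding that diffusion is governed mainly by polymer concentration.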

  10. Cluster variational theory of spin-3/2 Ising models

    CERN Document Server

    Tucker, J W


    A cluster variational method for spin-3/2 Ising models on regular lattices is presented that leads to results that are exact for Bethe lattices of the same coordination number. The method is applied to both the Blume-Capel (BC) model and the isotropic Blume-Emery-Griffiths (BEG) model. In particular, the first-order phase line separating the two low-temperature ferromagnetic phases in the BC model, and the ferrimagnetic phase boundary in the BEG model, are studied. Results are compared with those of other theories whose qualitative predictions have been in conflict.

  11. On ADE quiver models and F-theory compactification

    Energy Technology Data Exchange (ETDEWEB)

    Belhaj, A [Department of Mathematics and Statistics, University of Ottawa, 585 King Edward Ave., Ottawa, ON, K1N 6N5 (Canada); Rasmussen, J [Department of Mathematics and Statistics, University of Melbourne, Parkville, Victoria 3010 (Australia); Sebbar, A [Department of Mathematics and Statistics, University of Ottawa, 585 King Edward Ave., Ottawa, ON, K1N 6N5 (Canada); Sedra, M B [Laboratoire de Physique de la Matiere et Rayonnement (LPMR), Morocco Faculte des Sciences, Universite Ibn Tofail, Kenitra, Morocco (Morocco)


    Based on mirror symmetry, we discuss geometric engineering of N = 1 ADE quiver models from F-theory compactifications on elliptic K3 surfaces fibred over certain four-dimensional base spaces. The latter are constructed as intersecting 4-cycles according to ADE Dynkin diagrams, thereby mimicking the construction of Calabi-Yau threefolds used in geometric engineering in type II superstring theory. Matter is incorporated by considering D7-branes wrapping these 4-cycles. Using a geometric procedure referred to as folding, we discuss how the corresponding physics can be converted into a scenario with D5-branes wrapping 2-cycles of ALE spaces.

  12. Should the model for risk-informed regulation be game theory rather than decision theory? (United States)

    Bier, Vicki M; Lin, Shi-Woei


    deception), to identify optimal regulatory strategies. Therefore, we believe that the types of regulatory interactions analyzed in this article are better modeled using game theory rather than decision theory. In particular, the goals of this article are to review the relevant literature in game theory and regulatory economics (to stimulate interest in this area among risk analysts), and to present illustrative results showing how the application of game theory can provide useful insights into the theory and practice of risk-informed regulation. © 2012 Society for Risk Analysis.

  13. High-order accurate finite-volume formulations for the pressure gradient force in layered ocean models

    CERN Document Server

    Engwirda, Darren; Marshall, John


    The development of a set of high-order accurate finite-volume formulations for evaluation of the pressure gradient force in layered ocean models is described. A pair of new schemes are presented, both based on an integration of the contact pressure force about the perimeter of an associated momentum control-volume. The two proposed methods differ in their choice of control-volume geometries. High-order accurate numerical integration techniques are employed in both schemes to account for non-linearities in the underlying equation-of-state definitions and thermodynamic profiles, and details of an associated vertical interpolation and quadrature scheme are discussed in detail. Numerical experiments are used to confirm the consistency of the two formulations, and it is demonstrated that the new methods maintain hydrostatic and thermobaric equilibrium in the presence of strongly-sloping layer-wise geometry, non-linear equation-of-state definitions and non-uniform vertical stratification profiles. Additionally, one...
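The "high-order accurate numerical integration" of the contact pressure mentioned here can be sketched with Gauss-Legendre quadrature over a single vertical control-volume face. This is a minimal illustration of the quadrature idea, not the paper's scheme; the pressure profile is hypothetical:

```python
import numpy as np

def edge_pressure_force(p_of_z, z_top, z_bot, order=4):
    """Integrate the contact pressure p(z) along one face of a momentum
    control volume with Gauss-Legendre quadrature, capturing nonlinear
    p(z) profiles to high order (exact for polynomials up to 2*order-1)."""
    nodes, weights = np.polynomial.legendre.leggauss(order)
    half = 0.5 * (z_top - z_bot)          # map [-1, 1] -> [z_bot, z_top]
    z = 0.5 * (z_top + z_bot) + half * nodes
    return half * np.sum(weights * p_of_z(z))

# A quadratic pressure profile is integrated exactly for order >= 2
F = edge_pressure_force(lambda z: 1.0 + 0.5 * z**2, z_top=0.0, z_bot=-10.0)
```

Analytically, ∫_{-10}^{0} (1 + 0.5 z²) dz = 10 + 500/3 ≈ 176.67, which the quadrature reproduces to machine precision.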

  14. Finite Element and Plate Theory Modeling of Acoustic Emission Waveforms (United States)

    Prosser, W. H.; Hamstad, M. A.; Gary, J.; OGallagher, A.


    A comparison was made between two approaches to predict acoustic emission waveforms in thin plates. A normal mode solution method for Mindlin plate theory was used to predict the response of the flexural plate mode to a point source, step-function load, applied on the plate surface. The second approach used a dynamic finite element method to model the problem using equations of motion based on exact linear elasticity. Calculations were made using properties for both isotropic (aluminum) and anisotropic (unidirectional graphite/epoxy composite) materials. For simulations of anisotropic plates, propagation along multiple directions was evaluated. In general, agreement between the two theoretical approaches was good. Discrepancies in the waveforms at longer times were caused by differences in reflections from the lateral plate boundaries. These differences resulted from the fact that the two methods used different boundary conditions. At shorter times in the signals, before reflections, the slight discrepancies in the waveforms were attributed to limitations of Mindlin plate theory, which is an approximate plate theory. The advantages of the finite element method are that it used the exact linear elasticity solutions, and that it can be used to model real source conditions and complicated, finite specimen geometries as well as thick plates. These advantages come at a cost of increased computational difficulty, requiring lengthy calculations on workstations or supercomputers. The Mindlin plate theory solutions, meanwhile, can be quickly generated on personal computers. Specimens with finite geometry can also be modeled. However, only limited simple geometries such as circular or rectangular plates can easily be accommodated with the normal mode solution technique. Likewise, very limited source configurations can be modeled and plate theory is applicable only to thin plates.

  15. Application of RPMI 2650 as a cell model to evaluate solid formulations for intranasal delivery of drugs. (United States)

    Gonçalves, Vanessa S S; Matias, Ana A; Poejo, Joana; Serra, Ana T; Duarte, Catarina M M


    During the development of intranasal drug delivery systems for local/systemic effect or brain targeting, it is necessary to assess their cytotoxicity and drug transport through the nasal epithelium. In order to avoid animal experiments or the use of excised tissues, in vitro cell models such as RPMI 2650 cells have been preferred in recent years. Nevertheless, the deposition of solid formulations onto nasal cell layers, and the subsequent transepithelial transport rate of drugs, has been poorly studied or reported. Thus, the purpose of this work is to further investigate the RPMI 2650 cell line as an effective alternative to animal tissues for cytotoxicity and drug permeation studies of solid drug-loaded formulations, so that it may become an option as a tool for drug discovery. Furthermore, we wanted to determine the extent to which the administration of drugs in particulate form would differ, in terms of permeability, from the same compounds applied as solutions. RPMI 2650 cells were cultured in submersed or air-liquid interface conditions and characterized with regard to transepithelial electrical resistance (TEER) and production of mucus. Pure ketoprofen (used as a model compound) and five formulations loaded with the same drug, namely solid lipid particles (Gelucire 43/01™), structured lipid particles (Gelucire 43/01™:glyceryl monooleate) and aerogel microparticles (alginate, alginate:pectin, alginate:carrageenan), were evaluated with the RPMI 2650 model in terms of cytotoxicity and drug permeability (applied as solution, dispersion or powder+buffer). RPMI 2650 cells were capable of growing in monolayer and multilayer, showing the same permeability as excised human nasal mucosa for sodium fluorescein (a paracellular marker), with analogous TEER values and mucus production, as reported by other authors. None of the powders showed cytotoxicity when applied to RPMI 2650 cells. Regarding permeation of the drug through the cell layers, not only the form of application of the powders but also their

  16. Nonequilibrium Dynamical Mean-Field Theory for Bosonic Lattice Models

    Directory of Open Access Journals (Sweden)

    Hugo U. R. Strand


    Full Text Available We develop the nonequilibrium extension of bosonic dynamical mean-field theory and a Nambu real-time strong-coupling perturbative impurity solver. In contrast to Gutzwiller mean-field theory and strong-coupling perturbative approaches, nonequilibrium bosonic dynamical mean-field theory captures not only dynamical transitions but also damping and thermalization effects at finite temperature. We apply the formalism to quenches in the Bose-Hubbard model, starting from both the normal and the Bose-condensed phases. Depending on the parameter regime, one observes qualitatively different dynamical properties, such as rapid thermalization, trapping in metastable superfluid or normal states, as well as long-lived or strongly damped amplitude oscillations. We summarize our results in nonequilibrium “phase diagrams” that map out the different dynamical regimes.

  17. Comparison of neurofuzzy logic and neural networks in modelling experimental data of an immediate release tablet formulation. (United States)

    Shao, Qun; Rowe, Raymond C; York, Peter


    This study compares the performance of neurofuzzy logic and neural networks using two software packages (INForm and FormRules) in generating predictive models for a published database for an immediate release tablet formulation. Both approaches were successful in developing good predictive models for tablet tensile strength and drug dissolution profiles. While neural networks demonstrated a slightly superior capability in predicting unseen data, neurofuzzy logic had the added advantage of generating rule sets representing the cause-effect relationships contained in the experimental data.

  18. Super Yang-Mills theory as a random matrix model

    Energy Technology Data Exchange (ETDEWEB)

    Siegel, W. [Institute for Theoretical Physics, State University of New York, Stony Brook, New York 11794-3840 (United States)


    We generalize the Gervais-Neveu gauge to four-dimensional N=1 superspace. The model describes an N=2 super Yang-Mills theory. All chiral superfields (N=2 matter and ghost multiplets) exactly cancel to all loops. The remaining Hermitian scalar superfield (matrix) has a renormalizable massive propagator and simplified vertices. These properties are associated with N=1 supergraphs describing a superstring theory on a random lattice world sheet. We also consider all possible finite matrix models, and find they have a universal large-color limit. These could describe gravitational strings if the matrix-model coupling is fixed to unity, for exact electric-magnetic self-duality.

  19. Models for probability and statistical inference theory and applications

    CERN Document Server

    Stapleton, James H


    This concise, yet thorough, book is enhanced with simulations and graphs to build the intuition of readersModels for Probability and Statistical Inference was written over a five-year period and serves as a comprehensive treatment of the fundamentals of probability and statistical inference. With detailed theoretical coverage found throughout the book, readers acquire the fundamentals needed to advance to more specialized topics, such as sampling, linear models, design of experiments, statistical computing, survival analysis, and bootstrapping.Ideal as a textbook for a two-semester sequence on probability and statistical inference, early chapters provide coverage on probability and include discussions of: discrete models and random variables; discrete distributions including binomial, hypergeometric, geometric, and Poisson; continuous, normal, gamma, and conditional distributions; and limit theory. Since limit theory is usually the most difficult topic for readers to master, the author thoroughly discusses mo...

  20. Localization landscape theory of disorder in semiconductors. I. Theory and modeling (United States)

    Filoche, Marcel; Piccardo, Marco; Wu, Yuh-Renn; Li, Chi-Kang; Weisbuch, Claude; Mayboroda, Svitlana


    We present here a model of carrier distribution and transport in semiconductor alloys accounting for quantum localization effects in disordered materials. This model is based on the recent development of a mathematical theory of quantum localization which introduces for each type of carrier a spatial function called localization landscape. These landscapes allow us to predict the localization regions of electron and hole quantum states, their corresponding energies, and the local densities of states. We show how the various outputs of these landscapes can be directly implemented into a drift-diffusion model of carrier transport and into the calculation of absorption/emission transitions. This creates a new computational model which accounts for disorder localization effects while also capturing two major effects of quantum mechanics, namely, the reduction of barrier height (tunneling effect) and the raising of energy ground states (quantum confinement effect), without having to solve the Schrödinger equation. Finally, this model is applied to several one-dimensional structures such as single quantum wells, ordered and disordered superlattices, or multiquantum wells, where comparisons with exact Schrödinger calculations demonstrate the excellent accuracy of the approximation provided by the landscape theory.
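The landscape construction described here amounts to solving Ĥu = 1 once, instead of the full eigenproblem; the effective confining potential is then W = 1/u. The following 1D finite-difference sketch (hypothetical disorder, not the paper's semiconductor data) illustrates the idea:

```python
import numpy as np

def localization_landscape(V, dx=1.0):
    """Solve H u = 1 with H = -d^2/dx^2 + V (Dirichlet ends) on a 1D grid.
    The landscape u and effective potential W = 1/u predict localization
    regions and energies without solving the Schrodinger eigenproblem."""
    n = len(V)
    H = np.zeros((n, n))
    for i in range(n):
        H[i, i] = 2.0 / dx**2 + V[i]
        if i > 0:
            H[i, i - 1] = -1.0 / dx**2
        if i < n - 1:
            H[i, i + 1] = -1.0 / dx**2
    u = np.linalg.solve(H, np.ones(n))
    return u, 1.0 / u

# Hypothetical disordered potential: a deep well and a shallower well
V = np.full(100, 5.0)
V[10:30] = 0.0   # deep well
V[60:90] = 1.0   # shallower well
u, W = localization_landscape(V)
```

The landscape u peaks inside the deepest well, marking the region where low-energy states localize; the valleys of W = 1/u play the role of the effective quantum confinement described in the abstract.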

  1. Integration of mathematical models in marketing theory and practice

    Directory of Open Access Journals (Sweden)

    Ioana Olariu


    Full Text Available This article is a theoretical approach to the main mathematical models used in marketing practice. Applying general systems theory in marketing involves setting behavioral assumptions as models of various processes. These models have, on the one hand, to describe the interactions between environmental and system factors and, on the other, to identify the causal dependencies existing in these interactions. Since models are the means by which consequences can be drawn from possible solutions, they occupy a central role in the design of a system to solve a marketing problem. A model is a simplified representation that describes and conceptualizes phenomena and real-life situations. The purpose of a model is to facilitate understanding of the real system. Models are widely used in marketing and take different forms that facilitate understanding of the realities of marketing.

  2. 3D Finite Volume Modeling of ENDE Using Electromagnetic T-Formulation

    Directory of Open Access Journals (Sweden)

    Yue Li


    Full Text Available An improved method for analyzing the eddy current density in conducting materials using the finite volume method is proposed on the basis of the Maxwell equations and the T-formulation. The algorithm is applied to solve 3D electromagnetic nondestructive evaluation (ENDE) benchmark problems. The computing code is applied to study an Inconel 600 workpiece with holes or cracks. The impedance change due to the presence of the crack is evaluated and compared with the experimental data of benchmark problems No. 1 and No. 2. The results show good agreement between the calculated and measured data.

  3. Original electric-vertex formulation of the symmetric eight-vertex model on the square lattice is fully nonuniversal (United States)

    Krčmár, Roman; Šamaj, Ladislav


    The partition function of the symmetric (zero electric field) eight-vertex model on a square lattice can be formulated either in the original "electric" vertex format or in an equivalent "magnetic" Ising-spin format. In this paper, both electric and magnetic versions of the model are studied numerically by using the corner transfer matrix renormalization-group method which provides reliable data. The emphasis is put on the calculation of four specific critical exponents, related by two scaling relations, and of the central charge. The numerical method is first tested in the magnetic format, the obtained dependencies of critical exponents on the model's parameters agree with Baxter's exact solution, and weak universality is confirmed within the accuracy of the method due to the finite size of the system. In particular, the critical exponents η and δ are constant as required by weak universality. On the other hand, in the electric format, analytic formulas based on the scaling relations are derived for the critical exponents ηe and δe which agree with our numerical data. These exponents depend on the model's parameters which is evidence for the full nonuniversality of the symmetric eight-vertex model in the original electric formulation.

  4. Expanding (3+1)-dimensional universe from a lorentzian matrix model for superstring theory in (9+1) dimensions. (United States)

    Kim, Sang-Woo; Nishimura, Jun; Tsuchiya, Asato


    We reconsider the matrix model formulation of type IIB superstring theory in (9+1)-dimensional space-time. Unlike the previous works in which the Wick rotation was used to make the model well defined, we regularize the Lorentzian model by introducing infrared cutoffs in both the spatial and temporal directions. Monte Carlo studies reveal that the two cutoffs can be removed in the large-N limit and that the theory thus obtained has no parameters other than one scale parameter. Moreover, we find that three out of nine spatial directions start to expand at some "critical time," after which the space has SO(3) symmetry instead of SO(9).

  5. User Delay Cost Model and Facilities Maintenance Cost Model for a Terminal Control Area : Volume 1. Model Formulation and Demonstration (United States)


    The User Delay Cost Model (UDCM) is a Monte Carlo computer simulation of essential aspects of Terminal Control Area (TCA) air traffic movements that would be affected by facility outages. The model can also evaluate delay effects due to other factors...

  6. Semisolid formulations containing cetirizine: human skin permeation and topical antihistaminic evaluation in a rabbit model. (United States)

    Ciurlizza, Claudia; Fernández, Francisco; Calpena, Ana Cristina; Lázaro, Raquel; Parra, Alexander; Clares, Beatriz


    Cetirizine dihydrochloride (CTZ) is a second-generation histamine H1 antagonist effective for the treatment of a wide range of allergic diseases. It has been used for managing the symptoms of chronic urticaria and atopic skin conditions. Thus, two novel semisolid formulations, a nanoemulsion (NE) and a hydrogel (HG), were developed to study their potential utility as vehicles for CTZ and to evaluate their potential use as topical H1-antihistamine agents. The physicochemical and stability properties of both vehicles were tested. Drug release kinetics and human skin permeation studies were performed using Franz cells. The antihistaminic activity was assayed in New Zealand rabbits and compared with two commercial first-generation antihistamines. Both formulations were stable and provided sustained drug release. The amount of CTZ remaining in the skin was higher for HG, which showed its maximum biological effect at 30 min, similar to commercially available topical first-generation H1-antihistamines. These results suggest that CTZ-HG could be a promising system for the topical treatment of allergy, bringing rapid antihistaminic relief.

  7. An Experimental In Vivo Model to Characterize “Heavy Legs” Symptom in Topical Formulations

    Directory of Open Access Journals (Sweden)

    Pedro Contreiras Pinto


    Full Text Available The “heavy legs” symptom is regarded as an early expression of chronic venous failure, estimated to affect 40% of the population in developing countries. A new methodology is proposed to approach the “tired or heavy legs” symptom. Seven females with this complaint applied a standard topical formulation for 28 days to one randomly chosen leg. Local blood flow records were obtained instantaneously and during postural change with a laser Doppler flowmeter (LDF). High-frequency sonography and local morphometry were also obtained on Days 0, 14, and 28. Compared with Day 0, LDF values showed a significant decrease in both basal and dynamic values after Day 14 and Day 28, suggesting that this effect may result from the formulation application, also involving the related massage. Centimetric measurements and sonographic analysis also supported these inferences. The proposed methodology can evaluate the dynamic changes of the “heavy legs” symptom and may prove very useful in assessing the related claim support.

  8. Therapeutic evaluation of grain based functional food formulation in a geriatric animal model. (United States)

    Teradal, Deepa; Joshi, Neena; Aladakatti, Ravindranath H


    This study investigates the effect of a wholesome grain-based functional food formulation on clinical and biochemical parameters in 24-30 month old Wistar albino geriatric rats, corresponding to a human age of 60-75 years. Animals were randomly divided into five groups. Experimental diets were compared to the basal rat diet (Group I). Four food formulations were used: wheat based (Group II), finger millet based (Group III), wheat-based diet + fenugreek seed powder (Group IV), and finger millet-based diet + fenugreek powder (Group V). These five diets were fed to the experimental rats for 6 weeks, and hematological and biochemical parameters were evaluated. The results showed that feed intake was influenced by the type of feed. The diet supplemented with fenugreek (Group IV) caused a significant increase in serum hemoglobin. Total serum protein values were significantly highest in Group III. Total serum albumin was lowest in Group I and highest in Group II. The concentration of BUN was highest in Group I and lowest in the control diet. Serum cholesterol and glucose were significantly reduced in Group IV. Several hematological and serum mineral values were influenced by the type of diet, whereas organ weights were not. A moderate hypoglycemic and hypocholesterolemic effect was observed in composite-mix-fed rats. This study clearly justifies the recommendation to use wholesome grain-based functional foods for the geriatric population.

  9. Decision-Making Theories and Models: A Discussion of Rational and Psychological Decision-Making Theories and Models: The Search for a Cultural-Ethical Decision-Making Model


    Oliveira, Arnaldo


    This paper examines rational and psychological decision-making models. Descriptive and normative methodologies such as attribution theory, schema theory, prospect theory, ambiguity model, game theory, and expected utility theory are discussed. The definition of culture is reviewed, and the relationship between culture and decision making is also highlighted as many organizations use a cultural-ethical decision-making model.

  10. Spatially random models, estimation theory, and robot arm dynamics (United States)

    Rodriguez, G.


    Spatially random models provide an alternative to the more traditional deterministic models used to describe robot arm dynamics. These alternative models can be used to establish a relationship between the methodologies of estimation theory and robot dynamics. A new class of algorithms for many of the fundamental robotics problems of inverse and forward dynamics, inverse kinematics, etc. can be developed using computations typical in estimation theory. The algorithms make extensive use of the difference equations of Kalman filtering and Bryson-Frazier smoothing to conduct spatial recursions. The spatially random models are very easy to describe and are based on the assumption that all of the inertial (D'Alembert) forces in the system are represented by a spatially distributed white-noise model. The models can also be used to generate numerically the composite multibody system inertia matrix. This is done without resorting to the more common methods of deterministic modeling involving Lagrangian dynamics, Newton-Euler equations, etc., which make substantial use of human knowledge in the derivation and manipulation of equations of motion for complex mechanical systems.
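The Kalman-filter difference equations invoked here have a very compact scalar form. The sketch below is a generic scalar Kalman filter with illustrative noise values, not tied to any specific arm model; in the spatial recursions described above, the time index would be replaced by a link index along the arm:

```python
def kalman_scalar(z_list, a=1.0, q=0.01, r=0.25, x0=0.0, p0=1.0):
    """Scalar Kalman filter for x_{k+1} = a*x_k + w (var q),
    z_k = x_k + v (var r). Returns the filtered state estimates."""
    x, p = x0, p0
    estimates = []
    for z in z_list:
        x, p = a * x, a * p * a + q      # predict
        k = p / (p + r)                  # Kalman gain
        x = x + k * (z - x)              # update with measurement z
        p = (1 - k) * p
        estimates.append(x)
    return estimates
```

Fed a constant measurement, the estimate converges toward it as the gain settles, which is the behavior the spatial recursions exploit link by link.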

  11. Molecular Thermodynamic Modeling of Fluctuation Solution Theory Properties

    DEFF Research Database (Denmark)

    O’Connell, John P.; Abildskov, Jens


    Fluctuation Solution Theory provides relationships between integrals of the molecular pair total and direct correlation functions and the pressure derivative of solution density, partial molar volumes, and composition derivatives of activity coefficients. For dense fluids, the integrals follow...... a relatively simple corresponding-states behavior even for complex systems, show well-defined relationships for infinite dilution properties in complex and near-critical systems, allow estimation of mixed-solvent solubilities of gases and pharmaceuticals, and can be expressed by simple perturbation models...

  12. Formulation of supergravity without superspace

    CERN Document Server

    Ferrara, S


    Supergravity, the particle theory which unifies under a unique gauge principle the quantum-mechanical concept of spin and space-time geometry, is formulated in terms of quantities defined over Minkowski space-time. The relation between this formulation and the formulation which uses superspace, the space-time supplemented by spinning degrees of freedom, is also briefly discussed.

  13. Building Better Ecological Machines: Complexity Theory and Alternative Economic Models

    Directory of Open Access Journals (Sweden)

    Jess Bier


    Full Text Available Computer models of the economy are regularly used to predict economic phenomena and set financial policy. However, conventional macroeconomic models are currently being reimagined after they failed to foresee the recent economic crisis, the outlines of which began to be understood only in 2007-2008. In this article we analyze the most prominent product of this reimagining: Agent-Based Models (ABMs). ABMs are an influential alternative to standard economic models, and they are one focus of complexity theory, a discipline that is a more open successor to the conventional chaos and fractal modeling of the 1990s. The modelers who create ABMs claim that their models depict markets as ecologies, and that they are more responsive than conventional models that depict markets as machines. We challenge this presentation, arguing instead that recent modeling efforts amount to the creation of models as ecological machines. Our paper aims to contribute to an understanding of the organizing metaphors of macroeconomic models, which we argue is relevant both conceptually and politically, e.g., when models are used for regulatory purposes.

  14. Game Theory Models for Multi-Robot Patrolling of Infrastructures

    Directory of Open Access Journals (Sweden)

    Erik Hernández


    Full Text Available This work focuses on the problem of performing multi-robot patrolling for infrastructure security applications, in order to protect a known environment at critical facilities. Given a set of robots and a set of points of interest, the patrolling task consists of constantly visiting these points at irregular time intervals for security purposes. Currently existing solutions for these types of applications are predictable and inflexible. Moreover, most previous work has tackled the patrolling problem with centralized and deterministic solutions, and only a few efforts have been made to integrate dynamic methods. Therefore, one of the main contributions of this work is the development of new dynamic and decentralized collaborative approaches that solve the aforementioned problem by implementing learning models from Game Theory. The model selected in this work, which includes belief-based and reinforcement models as special cases, is called Experience-Weighted Attraction. The problem has been defined using concepts from Graph Theory to represent the environment, in order to work with such Game Theory techniques. Finally, the proposed methods have been evaluated experimentally using a patrolling simulator, and the results obtained have been compared with previously available approaches.
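The Experience-Weighted Attraction model named in this abstract has a standard one-line update rule (Camerer-Ho). The sketch below shows that rule with illustrative parameter values; it is a generic EWA update, not the patrolling implementation described in the paper:

```python
import math

def ewa_update(A, N, payoffs, chosen, phi=0.9, rho=0.9, delta=0.5):
    """One Experience-Weighted Attraction update (Camerer-Ho).
    A: current attractions per strategy; N: experience weight;
    payoffs[j]: payoff strategy j would have earned this round;
    chosen: index of the strategy actually played. delta weighs
    forgone payoffs, phi/rho are decay factors."""
    N_new = rho * N + 1.0
    A_new = []
    for j, a in enumerate(A):
        w = delta + (1.0 - delta) * (1.0 if j == chosen else 0.0)
        A_new.append((phi * N * a + w * payoffs[j]) / N_new)
    return A_new, N_new

def logit_probs(A, lam=2.0):
    """Logit choice probabilities derived from the attractions."""
    exps = [math.exp(lam * a) for a in A]
    s = sum(exps)
    return [e / s for e in exps]
```

With delta = 0 the rule reduces to a reinforcement model (only the chosen strategy is updated); with delta = 1 it weighs forgone payoffs fully, recovering belief-based learning — the two special cases the abstract mentions.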

  15. Quantile hydrologic model selection and model structure deficiency assessment : 1. Theory

    NARCIS (Netherlands)

    Pande, S.


    A theory for quantile based hydrologic model selection and model structure deficiency assessment is presented. The paper demonstrates that the degree to which a model selection problem is constrained by the model structure (measured by the Lagrange multipliers of the constraints) quantifies

  16. Application of the evolution theory in modelling of innovation diffusion

    Directory of Open Access Journals (Sweden)

    Krstić Milan


    Full Text Available The theory of evolution has found numerous analogies and applications in scientific disciplines other than biology. In that sense, so-called 'memetic evolution' is widely accepted today. Memes form a complex adaptive system, where one 'meme' represents an evolutionary cultural element, i.e., the smallest unit of information that can be identified and used to explain the evolution process. Among others, the field of innovation has proved to be a suitable area where the theory of evolution can be successfully applied. In this work the authors start from the assumption that the theory of evolution can also be applied to modelling the process of innovation diffusion. Based on the theoretical research conducted, the authors conclude that the process of innovation diffusion, interpreted through 'memes', is actually the process of imitation of the innovation 'meme'. Since certain 'memes' replicate more successfully than others, this eventually leads to their natural selection. For the survival of innovation 'memes', their longevity, fecundity, and fidelity of replication are of key importance. The results of the research confirm the assumption that the theory of evolution can be applied to innovation diffusion through innovation 'memes', which opens up perspectives for new research on the subject.

  17. Models of rational decision making in contemporary economic theory

    Directory of Open Access Journals (Sweden)

    Krstić Bojan


    Full Text Available The aim of this paper is to show that economists cannot adequately explain rational behavior if they focus only on the model of full rationality and the model of instrumental rationality; including the related models yields a 'larger view' which, as a more representative reflection of rational behavior, provides a solid basis for constructing a model of decision-making in contemporary economic science. In line with this aim and its specific character, the paper is structured as follows. In the first part, we define the model of full rationality and its important characteristics. In the second part, we analyze the model of instrumental rationality, starting from the statement, common in economic theory, that a rational actor uses the best means to achieve his or her objectives. In the third part, we consider the basics of the model of value rationality. In the fourth part, we consider the key characteristics of the model of bounded rationality. In the last part, we question the basic assumptions of the models of full and instrumental rationality, and in particular analyze the personal and social goal preferences of high school and university students.

  18. Towards an Extended Evolutionary Game Theory with Survival Analysis and Agreement Algorithms for Modeling Uncertainty, Vulnerability, and Deception (United States)

    Ma, Zhanshan (Sam)

    Competition, cooperation and communication are the three fundamental relationships upon which natural selection acts in the evolution of life. Evolutionary game theory (EGT) is a 'marriage' between game theory and Darwin's theory of evolution; it gains additional modeling power and flexibility by adopting population dynamics theory. In EGT, natural selection acts as an optimization agent and produces inherent strategies, which eliminates some essential assumptions of traditional game theory, such as rationality, and allows more realistic modeling of many problems. The Prisoner's Dilemma (PD) and Sir Philip Sidney (SPS) games are two well-known examples from EGT, formulated to study cooperation and communication, respectively. Despite its huge success, EGT shows a certain weakness in dealing with time-, space- and covariate-dependent (i.e., dynamic) uncertainty, vulnerability and deception. In this paper, I propose to extend EGT in two ways to overcome this weakness. First, I introduce survival analysis modeling to describe the lifetime or fitness of game players. This extension allows more flexible and powerful modeling of dynamic uncertainty and vulnerability (collectively equivalent to the dynamic frailty in survival analysis). Secondly, I introduce agreement algorithms, which can be the agreement algorithms of distributed computing (e.g., the Byzantine Generals Problem [6][8] and Dynamic Hybrid Fault Models [12]) or any algorithms that set and enforce the rules by which players determine their consensus. The second extension is particularly useful for modeling dynamic deception (e.g., asymmetric faults in fault tolerance and deception in animal communication). From a computational perspective, extended evolutionary game theory (EEGT) modeling, when implemented in simulation, is equivalent to an optimization methodology similar to evolutionary computing approaches such as genetic algorithms with dynamic populations [15][17].
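    The selection dynamics underlying EGT are usually written as replicator equations. As a minimal runnable illustration (my sketch, not Ma's EEGT formulation), the following evolves the cooperator share in a standard Prisoner's Dilemma; the payoffs R=3, S=0, T=5, P=1 are the conventional hypothetical values satisfying T > R > P > S:

```python
# Replicator dynamics for a two-strategy game (illustrative sketch, not
# the paper's EEGT extension): a strategy's share grows in proportion to
# the gap between its payoff and the population-average payoff.

def replicator_step(x_coop, payoff, dt=0.01):
    """One Euler step of dx/dt = x (f_C - f_bar) for cooperator share x."""
    (R, S), (T, P) = payoff            # row payoffs: C vs C/D, D vs C/D
    f_coop = R * x_coop + S * (1 - x_coop)
    f_def = T * x_coop + P * (1 - x_coop)
    f_bar = x_coop * f_coop + (1 - x_coop) * f_def
    return x_coop + dt * x_coop * (f_coop - f_bar)

def run(x0=0.9, steps=20000):
    payoff = ((3.0, 0.0), (5.0, 1.0))  # R=3, S=0, T=5, P=1 (standard PD)
    x = x0
    for _ in range(steps):
        x = replicator_step(x, payoff)
    return x

if __name__ == "__main__":
    # In a PD, defection strictly dominates, so cooperators die out.
    print(run())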

  19. CMB anomalies from an inflationary model in string theory

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Zhi-Guo; Piao, Yun-Song [University of Chinese Academy of Sciences, School of Physics, Beijing (China); Guo, Zong-Kuan [Chinese Academy of Sciences, State Key Laboratory of Theoretical Physics, Institute of Theoretical Physics, P.O. Box 2735, Beijing (China)


    Recent Planck measurements show some CMB anomalies on large angular scales, which confirms the early observations by WMAP. We show that an inflationary model, in which before the slow-roll inflation the Universe is in a superinflationary phase, can generate a large-scale cutoff in the primordial power spectrum, which may account for not only the power suppression on large angular scales, but also a large dipole power asymmetry in the CMB. We discuss an implementation of our model in string theory. (orig.)
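    A pre-inflationary phase is commonly encoded as an infrared cutoff on the primordial spectrum. The sketch below uses one common phenomenological parameterization, P(k) = A_s (k/k_pivot)^(n_s-1) [1 - exp(-(k/k_c)^lam)]; the cutoff form and the values of k_c and lam are illustrative assumptions, not the spectrum derived in this paper:

```python
import math

def primordial_power(k, A_s=2.1e-9, n_s=0.965, k_pivot=0.05, k_c=5e-4, lam=2.0):
    """Dimensionless power spectrum with a phenomenological large-scale
    cutoff. The cutoff form and the k_c, lam values are assumptions for
    illustration, not the parameterization derived in the paper."""
    tilt = (k / k_pivot) ** (n_s - 1.0)
    cutoff = 1.0 - math.exp(-((k / k_c) ** lam))
    return A_s * tilt * cutoff

if __name__ == "__main__":
    for k in (1e-4, 5e-4, 5e-3, 5e-2):          # k in Mpc^-1
        print(f"k={k:.0e}  P={primordial_power(k):.3e}")
```

Modes with k well below k_c are exponentially suppressed, mimicking the large-angular-scale power deficit, while the spectrum is unaffected at the pivot scale.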

  20. Theory and Circuit Model for Lossy Coaxial Transmission Line

    Energy Technology Data Exchange (ETDEWEB)

    Genoni, T. C.; Anderson, C. N.; Clark, R. E.; Gansz-Torres, J.; Rose, D. V.; Welch, Dale Robert


    The theory of signal propagation in lossy coaxial transmission lines is revisited and new approximate analytic formulas for the line impedance and attenuation are derived. The accuracy of these formulas from DC to 100 GHz is demonstrated by comparison to numerical solutions of the exact field equations. Based on this analysis, a new circuit model is described which accurately reproduces the line response over the entire frequency range. Circuit model calculations are in excellent agreement with the numerical and analytic results, and with finite-difference-time-domain simulations which resolve the skindepths of the conducting walls.
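    For orientation, the classical low-loss approximations (textbook formulas, not the refined expressions derived in this paper) already exhibit the sqrt(f) conductor loss and linear-in-f dielectric loss that any circuit model of the line must reproduce:

```python
import math

MU0 = 4e-7 * math.pi            # vacuum permeability (H/m)
EPS0 = 8.8541878128e-12         # vacuum permittivity (F/m)
C = 299792458.0                 # speed of light (m/s)

def coax_params(f, a, b, eps_r=2.1, sigma=5.8e7, tan_delta=2e-4):
    """Classical low-loss coax approximations. a, b are inner/outer
    conductor radii (m); sigma is wall conductivity (S/m). Returns
    (characteristic impedance in ohms, attenuation in Np/m)."""
    z0 = math.sqrt(MU0 / (EPS0 * eps_r)) * math.log(b / a) / (2 * math.pi)
    r_s = math.sqrt(math.pi * f * MU0 / sigma)        # surface resistance
    r_per_m = r_s * (1 / a + 1 / b) / (2 * math.pi)   # series R per metre
    alpha_c = r_per_m / (2 * z0)                      # conductor loss
    alpha_d = math.pi * f * math.sqrt(eps_r) * tan_delta / C  # dielectric
    return z0, alpha_c + alpha_d

if __name__ == "__main__":
    z0, alpha = coax_params(1e9, 0.45e-3, 1.47e-3)    # RG-58-like geometry
    print(f"Z0 ~ {z0:.1f} ohm, attenuation ~ {8.686*alpha:.3f} dB/m at 1 GHz")
```

These expressions break down when the skin depth is no longer small compared to the wall thickness, which is precisely the regime the paper's exact-field comparison addresses.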

  1. Models and applications of chaos theory in modern sciences

    CERN Document Server

    Zeraoulia, Elhadj


    This book presents a select group of papers that provide a comprehensive view of the models and applications of chaos theory in medicine, biology, ecology, economy, electronics, mechanical, and the human sciences. Covering both the experimental and theoretical aspects of the subject, it examines a range of current topics of interest. It considers the problems arising in the study of discrete and continuous time chaotic dynamical systems modeling the several phenomena in nature and society-highlighting powerful techniques being developed to meet these challenges that stem from the area of nonli

  2. Thick brane models in generalized theories of gravity

    Directory of Open Access Journals (Sweden)

    D. Bazeia


    Full Text Available This work deals with thick braneworld models, in an environment where the Ricci scalar is changed to accommodate the addition of two extra terms, one depending on the Ricci scalar itself, and the other, which takes into account the trace of the energy–momentum tensor of the scalar field that sources the braneworld scenario. We suppose that the scalar field engenders standard kinematics, and we show explicitly that the gravity sector of this new braneworld scenario is linearly stable. We illustrate the general results investigating two distinct models, focusing on how the brane profile is changed in the modified theories.

  3. Theories linguistiques, modeles informatiques, experimentation psycholinguistique (Linguistic Theories, Information-Processing Models, Psycholinguistic Experimentation) (United States)

    Dubois, Daniele


    Delineates and elaborates upon the underlying psychological postulates in linguistic and information-processing models, and shows the interdependence of psycholinguistics and linguistic analysis. (Text is in French.) (DB)

  4. Precipitation in Powder Metallurgy, Nickel Base Superalloys: Review of Modeling Approach and Formulation of Engineering (Postprint) (United States)


    [Garbled abstract fragment: discusses an effective interdiffusion coefficient D, independent of composition, for multi-component alloys; cites Voorhees PW, Glicksman ME (1984) "Solution to the multi-particle diffusion problem with applications to Ostwald ripening", Part I (Theory, Acta Metall 32:2001-2011) and Part II (Computer simulations, Acta Metall 32:2013-2030).]

  5. Oral Vaccine Formulations Stimulate Mucosal and Systemic Antibody Responses against Staphylococcal Enterotoxin B in a Piglet Model (United States)

    Inskeep, Tiffany K.; Stahl, Chad; Odle, Jack; Oakes, Judy; Hudson, Laura; Bost, Kenneth L.; Piller, Kenneth J.


    Despite the potential for its use as an agent of biowarfare or bioterrorism, no approved vaccine against staphylococcal enterotoxin B (SEB) exists. Nontoxic, mutant forms of SEB have been developed; however, it has been difficult to determine the efficacy of such subunit vaccine candidates due to the lack of superantigen activity of native SEB in rodents and due to the limitations of primate models. Since pigs respond to SEB in a manner similar to that of human subjects, we utilized this relevant animal model to investigate the safety and immunogenicity of a triple mutant of SEB carrying the amino acid changes L45R, Y89A, and Y94A. This recombinant mutant SEB (rmSEB) did not possess superantigen activity in pig lymphocyte cultures. Furthermore, rmSEB was unable to compete with native SEB for binding to pig leukocytes. These in vitro studies suggested that rmSEB could be a safe subunit vaccine. To test this possibility, piglets immunized orally with rmSEB formulations experienced no significant decrease in food consumption and no weight loss during the vaccination regimen. Oral vaccination with 1-mg doses of rmSEB on days 0, 7, 14, and 24 resulted in serum IgG and fecal IgA levels by day 36 that cross-reacted with native SEB. Surprisingly, the inclusion of cholera toxin adjuvant in vaccine formulations containing rmSEB did not result in increased antibody responses compared to formulations using the immunogen alone. Taken together, these studies provide additional evidence for the potential use of nontoxic forms of SEB as vaccines. PMID:20554806

  6. The linear model and hypothesis a general unifying theory

    CERN Document Server

    Seber, George


    This book provides a concise and integrated overview of hypothesis testing in four important subject areas, namely linear and nonlinear models, multivariate analysis, and large sample theory. The approach used is a geometrical one based on the concept of projections and their associated idempotent matrices, thus largely avoiding the need to involve matrix ranks. It is shown that all the hypotheses encountered are either linear or asymptotically linear, and that all the underlying models used are either exactly or asymptotically linear normal models. This equivalence can be used, for example, to extend the concept of orthogonality in the analysis of variance to other models, and to show that the asymptotic equivalence of the likelihood ratio, Wald, and Score (Lagrange Multiplier) hypothesis tests generally applies.

  7. Visceral obesity and psychosocial stress: a generalised control theory model (United States)

    Wallace, Rodrick


    The linking of control theory and information theory via the Data Rate Theorem and its generalisations allows for construction of necessary conditions statistical models of body mass regulation in the context of interaction with a complex dynamic environment. By focusing on the stress-related induction of central obesity via failure of HPA axis regulation, we explore implications for strategies of prevention and treatment. It rapidly becomes evident that individual-centred biomedical reductionism is an inadequate paradigm. Without mitigation of HPA axis or related dysfunctions arising from social pathologies of power imbalance, economic insecurity, and so on, it is unlikely that permanent changes in visceral obesity for individuals can be maintained without constant therapeutic effort, an expensive - and likely unsustainable - public policy.

  8. DsixTools: the standard model effective field theory toolkit

    Energy Technology Data Exchange (ETDEWEB)

    Celis, Alejandro [Ludwig-Maximilians-Universitaet Muenchen, Fakultaet fuer Physik, Arnold Sommerfeld Center for Theoretical Physics, Munich (Germany); Fuentes-Martin, Javier; Vicente, Avelino [Universitat de Valencia-CSIC, Instituto de Fisica Corpuscular, Valencia (Spain); Virto, Javier [University of Bern, Albert Einstein Center for Fundamental Physics, Institute for Theoretical Physics, Bern (Switzerland)


    We present DsixTools, a Mathematica package for the handling of the dimension-six standard model effective field theory. Among other features, DsixTools allows the user to perform the full one-loop renormalization group evolution of the Wilson coefficients in the Warsaw basis. This is achieved thanks to the SMEFTrunner module, which implements the full one-loop anomalous dimension matrix previously derived in the literature. In addition, DsixTools also contains modules devoted to the matching to the ΔB = ΔS = 1, 2 and ΔB = ΔC = 1 operators of the Weak Effective Theory at the electroweak scale, and their QCD and QED Renormalization group evolution below the electroweak scale. (orig.)
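    In leading-log form, one-loop running of Wilson coefficients is simply C(mu) ~ C(Lambda) + (gamma/16 pi^2) C(Lambda) log(mu/Lambda). A toy sketch of that step follows; the 2x2 anomalous-dimension matrix is invented for illustration and is not the Warsaw-basis matrix that SMEFTrunner implements:

```python
import math

def run_wilson(c, gamma, mu_high, mu_low):
    """Leading-log one-loop running of a Wilson-coefficient vector:
    C(mu_low) = C(mu_high) + (t / 16 pi^2) * gamma . C(mu_high),
    with t = log(mu_low / mu_high). `gamma` is the anomalous-dimension
    matrix (here a made-up 2x2 example, not the Warsaw-basis matrix)."""
    t = math.log(mu_low / mu_high)
    factor = t / (16 * math.pi ** 2)
    return [c[i] + factor * sum(gamma[i][j] * c[j] for j in range(len(c)))
            for i in range(len(c))]

if __name__ == "__main__":
    c_uv = [1.0, 0.0]                       # coefficients set at 1 TeV
    gamma = [[4.0, -0.5], [1.2, 2.0]]       # hypothetical gamma matrix
    print(run_wilson(c_uv, gamma, 1e3, 91.19))   # run down to M_Z
```

Even this crude approximation shows operator mixing: a coefficient that vanishes at the high scale is generated at the low scale through off-diagonal entries of gamma.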

  9. Adapting Structuration Theory as a Comprehensive Theory for Distance Education: The ASTIDE Model (United States)

    Aktaruzzaman, Md; Plunkett, Margaret


    Distance Education (DE) theorists have argued about the requirement for a theory to be comprehensive in a way that can explicate many of the activities associated with DE. Currently, Transactional Distance Theory (TDT) (Moore, 1993) and the Theory of Instructional Dialogue (IDT) (Caspi & Gorsky, 2006) are the most prominent theories, yet they…

  10. A formulation of convection for stellar structure and evolution calculations without the mixing-length theory approximations. II - Application to Alpha Centauri A and B (United States)

    Lydon, Thomas J.; Fox, Peter A.; Sofia, Sabatino


    We have constructed a series of models of Alpha Centauri A and Alpha Centauri B for the purposes of testing the effects of convection modeling both by means of the mixing-length theory (MLT), and by means of parameterization of energy fluxes based upon numerical simulations of turbulent compressible convection. We demonstrate that while MLT, through its adjustable parameter alpha, can be used to match any given values of luminosities and radii, our treatment of convection, which lacks any adjustable parameters, makes specific predictions of stellar radii. Since the predicted radii of the Alpha Centauri system fall within the errors of the observed radii, our treatment of convection is applicable to other stars in the H-R diagram in addition to the sun. A second set of models is constructed using MLT, adjusting alpha to yield not the 'measured' radii but, instead, the radii predictions of our revised treatment of convection. We conclude by assessing the appropriateness of using a single value of alpha to model a wide variety of stars.

  11. Plane answers to complex questions the theory of linear models

    CERN Document Server

    Christensen, Ronald


    This book was written to rigorously illustrate the practical application of the projective approach to linear models. To some, this may seem contradictory. I contend that it is possible to be both rigorous and illustrative and that it is possible to use the projective approach in practical applications. Therefore, unlike many other books on linear models, the use of projections and sub­ spaces does not stop after the general theory. They are used wherever I could figure out how to do it. Solving normal equations and using calculus (outside of maximum likelihood theory) are anathema to me. This is because I do not believe that they contribute to the understanding of linear models. I have similar feelings about the use of side conditions. Such topics are mentioned when appropriate and thenceforward avoided like the plague. On the other side of the coin, I just as strenuously reject teaching linear models with a coordinate free approach. Although Joe Eaton assures me that the issues in complicated problems freq...

  12. An atomic finite element model for biodegradable polymers. Part 1. Formulation of the finite elements. (United States)

    Gleadall, Andrew; Pan, Jingzhe; Ding, Lifeng; Kruft, Marc-Anton; Curcó, David


    Molecular dynamics (MD) simulations are widely used to analyse materials at the atomic scale. However, MD has high computational demands, which may inhibit its use for simulations of structures involving large numbers of atoms such as amorphous polymer structures. An atomic-scale finite element method (AFEM) is presented in this study with significantly lower computational demands than MD. Due to the reduced computational demands, AFEM is suitable for the analysis of Young's modulus of amorphous polymer structures. This is of particular interest when studying the degradation of bioresorbable polymers, which is the topic of an accompanying paper. AFEM is derived from the inter-atomic potential energy functions of an MD force field. The nonlinear MD functions were adapted to enable static linear analysis. Finite element formulations were derived to represent interatomic potential energy functions between two, three and four atoms. Validation of the AFEM was conducted through its application to atomic structures for crystalline and amorphous poly(lactide). Copyright © 2015 Elsevier Ltd. All rights reserved.
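    The core AFEM step is linearization: the bond stiffness k = U''(r0) obtained from a pair potential becomes an ordinary spring-element stiffness matrix. A minimal 1-D sketch under that assumption (hypothetical parameters, and only two-atom elements; not the paper's three- and four-atom formulations):

```python
# Minimal 1-D illustration of the AFEM idea: assemble spring-element
# stiffness matrices for a chain of atoms and solve K u = f statically.

def assemble_chain(n_atoms, k):
    """Global stiffness matrix (nested lists) for n_atoms-1 bonds."""
    K = [[0.0] * n_atoms for _ in range(n_atoms)]
    for e in range(n_atoms - 1):              # element connects e, e+1
        K[e][e] += k; K[e + 1][e + 1] += k
        K[e][e + 1] -= k; K[e + 1][e] -= k
    return K

def chain_extension(n_atoms, k, force):
    """Fix atom 0, pull the last atom; return its displacement.
    For bonds in series the exact answer is force*(n_atoms-1)/k."""
    n = n_atoms - 1
    K = assemble_chain(n_atoms, k)
    A = [row[1:] for row in K[1:]]            # drop the fixed DOF
    b = [0.0] * n
    b[-1] = force
    for i in range(n - 1):                    # forward elimination
        m = A[i + 1][i] / A[i][i]
        for j in range(n):
            A[i + 1][j] -= m * A[i][j]
        b[i + 1] -= m * b[i]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):            # back substitution
        s = b[i] - sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = s / A[i][i]
    return x[-1]

if __name__ == "__main__":
    # 11 atoms, hypothetical bond stiffness 100 N/m, 1 nN applied load.
    print(chain_extension(11, 100.0, 1e-9))
```

Because the assembled system is linear, one factorization serves all load cases, which is the source of AFEM's cost advantage over time-stepped MD for elastic properties such as Young's modulus.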

  13. Predicting Cortisol Exposure from Paediatric Hydrocortisone Formulation Using a Semi-Mechanistic Pharmacokinetic Model Established in Healthy Adults. (United States)

    Melin, Johanna; Parra-Guillen, Zinnia P; Hartung, Niklas; Huisinga, Wilhelm; Ross, Richard J; Whitaker, Martin J; Kloft, Charlotte


    Optimisation of hydrocortisone replacement therapy in children is challenging, as there is currently no licensed formulation and dose in Europe for children under 6 years of age. In addition, hydrocortisone has non-linear pharmacokinetics caused by saturable plasma protein binding. A paediatric hydrocortisone formulation, Infacort®, oral hydrocortisone granules with taste masking, has therefore been developed. The objective of this study was to establish a population pharmacokinetic model based on studies in healthy adult volunteers to predict hydrocortisone exposure in paediatric patients with adrenal insufficiency. Cortisol and binding protein concentrations were evaluated in the absence and presence of dexamethasone in healthy volunteers (n = 30). Dexamethasone was used to suppress endogenous cortisol concentrations prior to and after single doses of 0.5, 2, 5 and 10 mg of Infacort® or 20 mg of Infacort®/hydrocortisone tablet/hydrocortisone intravenously. A plasma protein binding model was established using unbound and total cortisol concentrations, and sequentially integrated into the pharmacokinetic model. Both specific (non-linear) and non-specific (linear) protein binding were included in the cortisol binding model. A two-compartment disposition model with saturable absorption and a constant endogenous cortisol baseline (Baseline_cort, 15.5 nmol/L) described the data accurately. The predicted cortisol exposure for a given dose varied considerably within a small body weight range; this indicated the importance of defining an accurate hydrocortisone dose to mimic physiological concentrations for neonates and infants weighing <20 kg. EudraCT numbers: 2013-000260-28, 2013-000259-42.
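    The saturable-plus-linear binding structure described here can be sketched directly: total cortisol is the sum of unbound drug, saturably bound drug (CBG-like), and non-specifically bound drug (albumin-like). The parameter values below are illustrative assumptions, not the published estimates:

```python
# Sketch of a saturable (specific) plus linear (non-specific) plasma
# protein binding model. Bmax, Kd and the non-specific ratio are
# hypothetical, chosen only to show the shape of the non-linearity.

def total_cortisol(free, bmax=500.0, kd=30.0, ns=0.4):
    """Total = unbound + saturably bound + non-specifically bound (nmol/L)."""
    return free + bmax * free / (kd + free) + ns * free

def free_cortisol(total, lo=0.0, hi=None):
    """Invert total_cortisol by bisection (it is monotone in `free`)."""
    hi = hi if hi is not None else total      # free can never exceed total
    for _ in range(200):
        mid = (lo + hi) / 2
        if total_cortisol(mid) < total:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

if __name__ == "__main__":
    for tot in (100.0, 400.0, 800.0):         # total cortisol, nmol/L
        f = free_cortisol(tot)
        print(f"total={tot:.0f}  free={f:.1f}  free fraction={f/tot:.2%}")
```

The rising free fraction at higher total concentrations is exactly the saturation effect that makes dose-exposure scaling non-linear and motivates a mechanistic rather than dose-proportional model.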

  14. Comparison of Polytomous Parametric and Nonparametric Item Response Theory Models

    Directory of Open Access Journals (Sweden)



    Full Text Available This research aimed to identify the effects of the independent variables sample size, sample distribution, number of items in the test, and number of response categories per item on estimations from the Graded Response Model (GRM) under Parametric Item Response Theory (PIRT) and the Monotone Homogeneity Model (MHM) under Non-Parametric Item Response Theory (NIRT) for polytomously scored items. To achieve this aim, a fundamental simulation study was performed in which 192 conditions were designed by crossing sample size (N = 100, 250, 500, 1000), sample distribution (normal, skewed), number of items (10, 20, 40, 80), and number of response categories (3, 5, 7). For each condition, estimates from GRM and MHM were examined by calculating model-data fit, reliability values, and standard errors of the parameter estimates. The research found that, because the values used to evaluate model-data fit are affected by these design variables and cannot be interpreted in isolation, the results are difficult to compare and generalize; the practical calculation of model-data fit in MHM, which can be interpreted without reference to another value, gives it an advantage over GRM. Another result is that the reliability values are similar for both models. The standard errors of the MHM parameter estimates are lower than those of GRM under small-sample and few-item conditions, and the MHM standard errors are close to each other across all conditions.
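    For reference, Samejima's GRM defines category probabilities as differences of cumulative logistic curves. A minimal sketch follows; the item parameters (discrimination a and thresholds b) are made up for illustration and are not estimates from this study:

```python
import math

def grm_probs(theta, a, b_thresholds):
    """Graded Response Model: P(X = k | theta) for one item with ordered
    thresholds b_1 < ... < b_{m-1} and discrimination a. Category
    probabilities are differences of cumulative (2PL-shaped) curves."""
    def p_star(b):                      # P(score at or above the category)
        return 1.0 / (1.0 + math.exp(-a * (theta - b)))
    cum = [1.0] + [p_star(b) for b in b_thresholds] + [0.0]
    return [cum[k] - cum[k + 1] for k in range(len(b_thresholds) + 1)]

if __name__ == "__main__":
    probs = grm_probs(theta=0.5, a=1.5, b_thresholds=[-1.0, 0.0, 1.2])
    print([round(p, 3) for p in probs], "sum =", round(sum(probs), 6))
```

Because the cumulative curves telescope, the category probabilities are guaranteed to be non-negative and to sum to one whenever the thresholds are ordered.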

  15. String theory for pedestrians

    CERN Multimedia

    CERN. Geneva


    In this 3-lecture series I will discuss the basics of string theory, some physical applications, and the outlook for the future. I will begin with the main concepts of the classical theory and the application to the study of cosmic superstrings. Then I will turn to the quantum theory and discuss applications to the investigation of hadronic spectra and the recently discovered quark-gluon plasma. I will conclude with a sketch of string models of particle physics and showing some avenues that may lead to a complete formulation of string theory.

  16. Vanadium(V) and -(IV) complexes of anionic polysaccharides: Controlled release pharmaceutical formulations and models of vanadium biotransformation products. (United States)

    Kremer, Lauren E; McLeod, Andrew I; Aitken, Jade B; Levina, Aviva; Lay, Peter A


    Uncontrolled reactions in biological media are a main obstacle for clinical translation of V-based anti-diabetic or anti-cancer pro-drugs. We investigated the use of controlled-release pharmaceutical formulations to ameliorate this issue with a series of V(V) and (IV) complexes of anionic polysaccharides. Carboxymethyl cellulose, xanthan gum, or alginic acid formulations were prepared by the reactions of [VO4](3-) with one or two molar equivalents of biological reductants, L-ascorbic acid (AA) or L-cysteine (Cys), in the presence of excess polysaccharide at pH~7 or pH~4. XANES studies with the use of a previously developed library of model V(V), V(IV) and V(III) complexes showed that reactions in the presence of AA led mostly to the mixtures of five- and six-coordinate V(IV) species, while the reactions in the presence of Cys led predominantly to the mixtures of five- and six-coordinate V(V) species. The XANES spectra of some of these samples closely matched those reported previously for [VO4](3-) biotransformation products in isolated blood plasma, red blood cells, or cultured adipocytes, which supports the hypothesis that modified polysaccharides are major binders of V(V) and V(IV) in biological systems. Studies by EPR spectroscopy suggested predominant V(IV)-carboxylato binding in complexes with polysaccharides. One of the isolated products (a V(IV)-alginato complex) showed selective release of low-molecular-mass V species at pH~8, but not at pH~2, which makes it a promising lead for the development of V-containing formulations for oral administration that are stable in the stomach, but release the active ingredient in the intestines. Copyright © 2015 Elsevier Inc. All rights reserved.

  17. Modeling Psychologists' Ethical Intention: Application of an Expanded Theory of Planned Behavior. (United States)

    Ferencz-Kaddari, Michall; Shifman, Annie; Koslowsky, Meni


    At the core of all therapeutic and medical practice lies ethics. By applying an expanded Ajzen's Theory of Planned Behavior formulation, the present investigation tested a model for explaining psychologists' intention to behave ethically. In the pretest, dual relationships and money conflicts were seen as the most prevalent dilemmas. A total of 395 clinical psychologists filled out questionnaires containing either a dual relationship dilemma describing a scenario where a psychologist was asked to treat a son of a colleague or a money-focused dilemma where he or she was asked to treat a patient unable to pay for the service. Results obtained from applying the expanded Ajzen's model to each dilemma, generally, supported the study hypotheses. In particular, attitudes were seen as the most important predictor in both dilemmas followed by a morality component, defined here as the commitment of the psychologist to the patient included here as an additional predictor in the model. The expanded model provided a better understanding of ethical intention. Practical implications were also discussed. © The Author(s) 2016.

  18. A review of predictive nonlinear theories for multiscale modeling of heterogeneous materials (United States)

    Matouš, Karel; Geers, Marc G. D.; Kouznetsova, Varvara G.; Gillman, Andrew


    Since the beginning of the industrial age, material performance and design have been in the midst of innovation of many disruptive technologies. Today's electronics, space, medical, transportation, and other industries are enriched by development, design and deployment of composite, heterogeneous and multifunctional materials. As a result, materials innovation is now considerably outpaced by other aspects from component design to product cycle. In this article, we review predictive nonlinear theories for multiscale modeling of heterogeneous materials. Deeper attention is given to multiscale modeling in space and to computational homogenization in addressing challenging materials science questions. Moreover, we discuss a state-of-the-art platform in predictive image-based, multiscale modeling with co-designed simulations and experiments that executes on the world's largest supercomputers. Such a modeling framework consists of experimental tools, computational methods, and digital data strategies. Once fully completed, this collaborative and interdisciplinary framework can be the basis of Virtual Materials Testing standards and aids in the development of new material formulations. Moreover, it will decrease the time to market of innovative products.

  19. Mirage models confront the LHC. II. Flux-stabilized type IIB string theory (United States)

    Kaufman, Bryan L.; Nelson, Brent D.


    We continue the study of a class of string-motivated effective supergravity theories in light of current data from the CERN Large Hadron Collider (LHC). In this installment we consider type IIB string theory compactified on a Calabi-Yau orientifold in the presence of fluxes, in the manner originally formulated by Kachru et al. We allow for a variety of potential uplift mechanisms and embeddings of the Standard Model field content into D3- and D7-brane configurations. We find that an uplift sector independent of the Kähler moduli, as is the case with anti-D3-branes, is inconsistent with data unless the matter and Higgs sectors are localized on D7-branes exclusively, or are confined to twisted sectors between D3- and D7-branes. We identify regions of parameter space for all possible D-brane configurations that remain consistent with Planck observations on the dark matter relic density and measurements of the CP-even Higgs mass at the LHC. Constraints arising from LHC searches at √s = 8 TeV and the LUX dark matter detection experiment are discussed. The discovery prospects for the remaining parameter space at dark matter direct-detection experiments are described, and signatures for detection of superpartners at the LHC with √s = 14 TeV are analyzed.

  20. Theory, modelling and simulation in origins of life studies. (United States)

    Coveney, Peter V; Swadling, Jacob B; Wattis, Jonathan A D; Greenwell, H Christopher


    Origins of life studies represent an exciting and highly multidisciplinary research field. In this review we focus on the contributions made by theory, modelling and simulation to addressing fundamental issues in the domain and the advances these approaches have helped to make in the field. Theoretical approaches will continue to make a major impact at the "systems chemistry" level based on the analysis of the remarkable properties of nonlinear catalytic chemical reaction networks, which arise due to the auto-catalytic and cross-catalytic nature of so many of the putative processes associated with self-replication and self-reproduction. In this way, we describe inter alia nonlinear kinetic models of RNA replication within a primordial Darwinian soup, the origins of homochirality and homochiral polymerization. We then discuss state-of-the-art computationally-based molecular modelling techniques that are currently being deployed to investigate various scenarios relevant to the origins of life.
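    The nonlinear kinetic models mentioned can be illustrated in miniature: a single autocatalytic replicator with resource limitation already shows the sigmoidal takeover characteristic of such reaction networks. The rate constants below are invented for illustration, not taken from any model in the review:

```python
# Toy kinetic sketch of autocatalytic replication with resource
# limitation: Euler integration of logistic growth dx/dt = k x (1 - x/K).
# k (replication rate) and K (carrying capacity) are hypothetical.

def simulate_replicator(x0=1e-3, k=1.0, K=1.0, dt=1e-3, steps=20000):
    """Integrate dx/dt = k x (1 - x/K) from a trace initial concentration."""
    x, path = x0, []
    for _ in range(steps):
        x += dt * k * x * (1 - x / K)
        path.append(x)
    return path

if __name__ == "__main__":
    path = simulate_replicator()
    print(f"x(0)={path[0]:.4f} ... x(T)={path[-1]:.4f}")
```

Cross-catalytic networks couple several such equations, and it is the couplings, rather than the single-species logistic form, that generate the selection and symmetry-breaking phenomena (e.g. homochirality) discussed in the review.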

  1. Structure and asymptotic theory for nonlinear models with GARCH errors

    Directory of Open Access Journals (Sweden)

    Felix Chan


    Full Text Available Nonlinear time series models, especially those with regime-switching and/or conditionally heteroskedastic errors, have become increasingly popular in the economics and finance literature. However, much of the research has concentrated on the empirical applications of various models, with little theoretical or statistical analysis associated with the structure of the processes or the associated asymptotic theory. In this paper, we derive sufficient conditions for strict stationarity and ergodicity of three different specifications of the first-order smooth transition autoregressions with heteroskedastic errors. This is essential, among other reasons, to establish the conditions under which the traditional LM linearity tests based on Taylor expansions are valid. We also provide sufficient conditions for consistency and asymptotic normality of the Quasi-Maximum Likelihood Estimator for a general nonlinear conditional mean model with first-order GARCH errors.
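    A concrete instance of the processes studied: a first-order logistic smooth transition autoregression with GARCH(1,1) errors. The parameter values below are illustrative, chosen inside the usual stationarity regions (|phi_i| < 1, alpha + beta < 1); they are not estimates or conditions from the paper:

```python
import math
import random

# Simulation sketch of a first-order logistic STAR model with GARCH(1,1)
# errors. All parameter values are hypothetical.

def simulate_star_garch(n, seed=1):
    random.seed(seed)
    phi1, phi2 = 0.3, -0.4                   # AR coefficients per regime
    gamma, c = 2.0, 0.0                      # transition slope, location
    omega, alpha, beta = 0.05, 0.05, 0.90    # GARCH(1,1): alpha+beta < 1
    y = 0.0
    h = omega / (1 - alpha - beta)           # unconditional variance
    eps_prev, out = 0.0, []
    for _ in range(n):
        h = omega + alpha * eps_prev ** 2 + beta * h   # cond. variance
        eps = math.sqrt(h) * random.gauss(0.0, 1.0)
        g = 1.0 / (1.0 + math.exp(-gamma * (y - c)))   # logistic weight
        y = phi1 * y + (phi2 - phi1) * g * y + eps     # regime-mixed AR(1)
        eps_prev = eps
        out.append(y)
    return out

if __name__ == "__main__":
    path = simulate_star_garch(5000)
    print(f"sample mean ~ {sum(path)/len(path):.3f} over {len(path)} draws")
```

With both regime coefficients inside the unit interval the simulated path stays bounded in distribution, which is the behaviour the paper's strict stationarity and ergodicity conditions formalize.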

  2. Dynamic statistical models of biological cognition: insights from communications theory (United States)

    Wallace, Rodrick


    Maturana's cognitive perspective on the living state, Dretske's insight on how information theory constrains cognition, the Atlan/Cohen cognitive paradigm, and models of intelligence without representation, permit construction of a spectrum of dynamic necessary conditions statistical models of signal transduction, regulation, and metabolism at and across the many scales and levels of organisation of an organism and its context. Nonequilibrium critical phenomena analogous to physical phase transitions, driven by crosstalk, will be ubiquitous, representing not only signal switching, but the recruitment of underlying cognitive modules into tunable dynamic coalitions that address changing patterns of need and opportunity at all scales and levels of organisation. The models proposed here, while certainly providing much conceptual insight, should be most useful in the analysis of empirical data, much as are fitted regression equations.

  3. A queueing theory based model for business continuity in hospitals. (United States)

    Miniati, R; Cecconi, G; Dori, F; Frosini, F; Iadanza, E; Biffi Gentili, G; Niccolini, F; Gusinu, R


    Clinical activities can be seen as the result of a precise and defined succession of events, where every single phase is characterized by a waiting time that includes working duration and possible delay. Technology is part of this process. For proper business continuity management, planning the minimum number of devices according to working load alone is not enough; a risk analysis of the whole process should be carried out in order to define which interventions and extra purchases have to be made. Markov models and reliability engineering approaches can be used to evaluate the possible interventions and to protect the whole system from technology failures. The following paper reports a case study on the application of the proposed integrated model, including a risk analysis approach and a queueing theory model, for defining the number of devices essential to guarantee medical activity and to comply with business continuity management requirements in hospitals.
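    The sizing question in the case study maps onto a standard M/M/c calculation: given the device request rate and the mean usage time, find the smallest pool that keeps the probability of waiting below a target. A sketch with hypothetical rates follows (the paper's integrated model adds a risk analysis layer on top of this):

```python
import math

def erlang_c(c, traffic):
    """Erlang C: P(wait) in an M/M/c queue with offered load
    `traffic` = lambda/mu (in Erlangs)."""
    if traffic >= c:
        return 1.0                    # unstable: the queue grows without bound
    s = sum(traffic ** k / math.factorial(k) for k in range(c))
    top = traffic ** c / math.factorial(c) * c / (c - traffic)
    return top / (s + top)

def devices_needed(lam, mu, target=0.05):
    """Smallest device count keeping P(wait for a device) below `target`.
    lam: request rate (1/h); mu: service rate per device (1/h)."""
    a = lam / mu
    c = max(1, math.ceil(a))
    while erlang_c(c, a) > target:
        c += 1
    return c

if __name__ == "__main__":
    # Hypothetical ward: 12 requests/hour, mean use 30 min -> 6 Erlangs.
    print(devices_needed(lam=12.0, mu=2.0))
```

Note how the answer exceeds the offered load of 6 devices-in-use on average: the margin above the working load is exactly the business-continuity buffer the abstract argues for.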

  4. Modelling apical constriction in epithelia using elastic shell theory. (United States)

    Jones, Gareth Wyn; Chapman, S Jonathan


    Apical constriction is one of the fundamental mechanisms by which embryonic tissue is deformed, giving rise to the shape and form of the fully-developed organism. The mechanism involves a contraction of fibres embedded in the apical side of epithelial tissues, leading to an invagination or folding of the cell sheet. In this article the phenomenon is modelled mechanically by describing the epithelial sheet as an elastic shell, which contains a surface representing the continuous mesh formed from the embedded fibres. Allowing this mesh to contract, an enhanced shell theory is developed in which the stiffness and bending tensors of the shell are modified to include the fibres' stiffness, and in which the active effects of the contraction appear as body forces in the shell equilibrium equations. Numerical examples are presented at the end, including the bending of a plate and a cylindrical shell (modelling neurulation) and the invagination of a spherical shell (modelling simple gastrulation).

  5. Lattice Gauge Theories Within and Beyond the Standard Model

    Energy Technology Data Exchange (ETDEWEB)

    Gelzer, Zechariah John [Iowa U.


    The Standard Model of particle physics has been very successful in describing fundamental interactions up to the highest energies currently probed in particle accelerator experiments. However, the Standard Model is incomplete and currently exhibits tension with experimental data for interactions involving $B$~mesons. Consequently, $B$-meson physics is of great interest to both experimentalists and theorists. Experimentalists worldwide are studying the decay and mixing processes of $B$~mesons in particle accelerators. Theorists are working to understand the data by employing lattice gauge theories within and beyond the Standard Model. This work addresses the theoretical effort and is divided into two main parts. In the first part, I present a lattice-QCD calculation of form factors for exclusive semileptonic decays of $B$~mesons that are mediated by both charged currents ($B \\to \\pi \\ell \

  6. Density Functional Theory and Materials Modeling at Atomistic Length Scales

    Directory of Open Access Journals (Sweden)

    Swapan K. Ghosh


    Abstract: We discuss the basic concepts of density functional theory (DFT) as applied to materials modeling at microscopic, mesoscopic and macroscopic length scales. The picture that emerges is that of a single unified framework for the study of both quantum and classical systems. While for quantum DFT the central equation is a one-particle Schrödinger-like Kohn-Sham equation, classical DFT consists of Boltzmann-type distributions, both corresponding to a system of noninteracting particles in the field of a density-dependent effective potential whose exact functional form is unknown. One therefore approximates the exchange-correlation potential for quantum systems, and the excess free-energy density functional or the direct correlation functions for classical systems. Illustrative applications of quantum DFT to microscopic modeling of molecular interactions, and of classical DFT to mesoscopic modeling of soft condensed matter systems, are highlighted.

  7. String Theory and Quintessence


    Hellerman, Simeon; Kaloper, Nemanja; Susskind, Leonard


    We discuss the obstacles for defining a set of observable quantities analogous to an S-matrix which are needed to formulate string theory in an accelerating universe. We show that the quintessence models with the equations of state $-1 < w

  8. Multiagent model and mean field theory of complex auction dynamics (United States)

    Chen, Qinghua; Huang, Zi-Gang; Wang, Yougui; Lai, Ying-Cheng


    Recent years have witnessed a growing interest in analyzing a variety of socio-economic phenomena using methods from statistical and nonlinear physics. We study a class of complex systems arising from economics, the lowest unique bid auction (LUBA) systems, which is a recently emerged class of online auction game systems. Through analyzing large, empirical data sets of LUBA, we identify a general feature of the bid price distribution: an inverted J-shaped function with exponential decay in the large bid price region. To account for the distribution, we propose a multi-agent model in which each agent bids stochastically in the field of winner’s attractiveness, and develop a theoretical framework to obtain analytic solutions of the model based on mean field analysis. The theory produces bid-price distributions that are in excellent agreement with those from the real data. Our model and theory capture the essential features of human behaviors in the competitive environment as exemplified by LUBA, and may provide significant quantitative insights into complex socio-economic phenomena.
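    A rough, toy rendering of an agent-based lowest-unique-bid auction (not the authors' attractiveness-field formulation) lets each agent draw one integer bid from an exponentially decaying price preference, echoing the reported exponential large-bid tail, and awards the lowest unique bid; every parameter below is made up for illustration.

    ```python
    import random
    from collections import Counter

    def run_auction(n_agents: int, max_bid: int, rng: random.Random):
        """Each agent submits one integer bid; return the lowest unique bid
        (or None if no bid is unique)."""
        # Exponentially decaying preference for low bids, a stand-in for
        # the fitted bid-price distribution of the empirical LUBA data.
        weights = [2.0 ** (-b / 10.0) for b in range(1, max_bid + 1)]
        bids = rng.choices(range(1, max_bid + 1), weights=weights, k=n_agents)
        counts = Counter(bids)
        unique = [b for b, c in counts.items() if c == 1]
        return min(unique) if unique else None

    rng = random.Random(0)
    winning_bids = [run_auction(50, 100, rng) for _ in range(1000)]
    ```

    Repeating the auction many times and histogramming `bids` rather than `winning_bids` is the kind of comparison the paper's mean-field theory is matched against.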

  9. Spectroscopic-Based Chemometric Models for Quantifying Low Levels of Solid-State Transitions in Extended Release Theophylline Formulations. (United States)

    Korang-Yeboah, Maxwell; Rahman, Ziyaur; Shah, Dhaval A; Khan, Mansoor A


    Variations in the solid-state form of a pharmaceutical solid have a profound impact on product quality and clinical performance. Quantitative models that allow rapid and accurate determination of polymorphic changes in pharmaceutical products are essential to ensuring product quality throughout the lifecycle. This study reports the development and validation of chemometric models of Raman and near-infrared (NIR) spectroscopy for quantifying the extent of pseudopolymorphic transitions of theophylline in extended release formulations. The chemometric models were developed using sample matrices consisting of the commonly used excipients, at the ratios found in commercially available products. A combination of scatter-removal (multiplicative signal correction and standard normal variate) and derivatization (Savitzky-Golay second derivative) algorithms was used for data pretreatment. Partial least squares (PLSR) and principal component regression (PCR) models were developed and their performance assessed. Diagnostic statistics such as the root mean square error, correlation coefficient, bias and Q² were used to test model fit and performance. The models had good fit and performance, as shown by the diagnostic statistics, which were similar for MSC-SG- and SNV-SG-treated spectra. Similarly, PLSR and PCR models had comparable performance, with the Raman chemometric models slightly better than their NIR counterparts. The Raman and NIR models had good accuracy and precision, as demonstrated by the closeness of the predicted values for independent observations to the actual theophylline monohydrate (TMO) content; hence the developed models can serve as useful tools for quantifying and controlling solid-state transitions in extended release theophylline products.
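    Of the two regression families mentioned, principal component regression is the easier to sketch compactly. The following toy example runs the PCR step on synthetic "spectra" with invented pure-component profiles and a hypothetical 0-10% monohydrate fraction; it is not the paper's Raman/NIR calibration set, and real use would start from pretreated (e.g. SNV plus Savitzky-Golay second-derivative) spectra.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    # Synthetic calibration set: 40 spectra of 200 points, linear in the
    # (hypothetical) monohydrate fraction y, plus channel noise.
    y = rng.uniform(0.0, 0.1, size=40)          # 0-10 % monohydrate
    pure_a = np.sin(np.linspace(0, 6, 200))      # stand-in pure anhydrate
    pure_m = np.cos(np.linspace(0, 6, 200))      # stand-in pure monohydrate
    X = np.outer(1 - y, pure_a) + np.outer(y, pure_m)
    X += rng.normal(scale=1e-3, size=X.shape)

    # PCR: project mean-centred spectra onto the leading principal
    # components, then ordinary least squares in the reduced space.
    Xc, yc = X - X.mean(0), y - y.mean()
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)
    scores = Xc @ vt[:2].T                       # keep 2 components
    coef, *_ = np.linalg.lstsq(scores, yc, rcond=None)

    y_hat = (X - X.mean(0)) @ vt[:2].T @ coef + y.mean()
    rmse = np.sqrt(np.mean((y_hat - y) ** 2))    # root mean square error
    ```

    The `rmse`, together with bias and correlation on an independent set, plays the role of the diagnostic statistics described in the abstract.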

  10. A Homogenized Energy Model for Hysteresis in Ferroelectric Materials: General Density Formulation

    National Research Council Canada - National Science Library

    Smith, Ralph C; Hatch, Andrew; Mukhergee, Binu; Liu, Shifang


    ... homogenization techniques to construct macroscopic models. In the first step of the development, previous analysis is used to construct Helmholtz and Gibbs energy relations at the lattice level...

  11. Effective Field Theory and the Gamow Shell Model


    Rotureau, J.; van Kolck, U.


    We combine Halo/Cluster Effective Field Theory (H/CEFT) and the Gamow Shell Model (GSM) to describe the $0^+$ ground state of $\\rm{^6He}$ as a three-body halo system. We use two-body interactions for the neutron-alpha particle and two-neutron pairs obtained from H/CEFT at leading order, with parameters determined from scattering in the p$_{3/2}$ and s$_0$ channels, respectively. The three-body dynamics of the system is solved using the GSM formalism, where the continuum states are incorporate...

  12. Model for urban and indoor cellular propagation using percolation theory (United States)

    Franceschetti, G.; Marano, S.; Pasquino, N.; Pinto, I. M.


    A method for the analysis and statistical characterization of wave propagation in indoor and urban cellular radio channels is presented, based on a percolation model. Pertinent principles of the theory are briefly reviewed, and applied to the problem of interest. Relevant quantities, such as pulsed-signal arrival rate, number of reflections against obstacles, and path lengths are deduced and related to basic environment parameters such as obstacle density and transmitter-receiver separation. Results are found to be in good agreement with alternative simulations and measurements.
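    One quantity of the kind this percolation picture yields can be checked with a few lines of Monte Carlo: the distance a ray travels through a random lattice before its first obstacle hit. With obstacle density q per cell, the free path is geometric with mean (1 - q)/q; the lattice abstraction and parameters below are illustrative only, not the paper's full reflection model.

    ```python
    import random

    def free_path(q: float, rng: random.Random, max_steps: int = 10_000) -> int:
        """Cells traversed before the ray first hits an obstacle."""
        steps = 0
        while steps < max_steps and rng.random() >= q:
            steps += 1
        return steps

    rng = random.Random(1)
    q = 0.25  # obstacle density
    mean_path = sum(free_path(q, rng) for _ in range(20_000)) / 20_000
    # Expected mean free path: (1 - q)/q = 3.0 cells
    ```

    Extending this with specular reflections at obstacle hits gives the pulsed-signal arrival and reflection-count statistics the abstract relates to obstacle density and transmitter-receiver separation.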

  13. Theory and Modeling of High-Power Gyrotrons

    Energy Technology Data Exchange (ETDEWEB)

    Nusinovich, Gregory Semeon [Univ. of Maryland, College Park, MD (United States)


    This report summarizes the results of work performed at the Institute for Research in Electronics and Applied Physics of the University of Maryland (College Park, MD) in the framework of the DOE Grant “Theory and Modeling of High-Power Gyrotrons”. The report covers the work performed in 2011-2014. The research proceeded in three directions: the possibility of stable gyrotron operation in very high-order modes offering output power exceeding the 1 MW level in long-pulse/continuous-wave regimes; the effect of small imperfections in gyrotron fabrication and alignment on gyrotron efficiency and operation; and some issues in the physics of beam-wave interaction in gyrotrons.

  14. The origins of the random walk model in financial theory


    Walter, Christian


    This text is chapter 2 of Christian Walter's book Le modèle de marche au hasard en finance, forthcoming from Economica in the "Audit, assurance, actuariat" series in June 2013, and is published here with the publisher's permission. Three main concerns pave the way for the birth of the random walk model in financial theory: an ethical issue with Jules Regnault (1834-1894), a scientific issue with Louis Bachelier (1870-1946) and a practical issue with Alfred Cowles (1891-1984). Three to...

  15. The Five-Factor Model and Self-Determination Theory

    DEFF Research Database (Denmark)

    Olesen, Martin Hammershøj; Thomsen, Dorthe Kirkegaard; Schnieber, Anette

    This study investigates conceptual overlap vs. distinction between individual differences in personality traits, i.e. the Five-Factor Model, and Self-Determination Theory, i.e. general causality orientations. Twelve hundred and eighty-seven freshmen (mean age 21.71; 64% women) completed electronic questionnaires of personality traits (NEO-FFI) and causality orientations (GCOS). To test whether covariance between traits and orientations could be attributed to shared or separate latent variables, we conducted joint factor analyses. Results reveal that the Autonomy orientation can be distinguished from

  16. Reconsideration of r/K Selection Theory Using Stochastic Control Theory and Nonlinear Structured Population Models. (United States)

    Oizumi, Ryo; Kuniya, Toshikazu; Enatsu, Yoichi


    Despite the fact that density effects and individual differences in life history are considered to be important for evolution, these factors lead to several difficulties in understanding the evolution of life history, especially when population sizes reach the carrying capacity. r/K selection theory explains what types of life strategies evolve in the presence of density effects and individual differences. However, the relationship between the life schedules of individuals and population size is still unclear, even if the theory can classify life strategies appropriately. To address this issue, we propose a few equations on adaptive life strategies in r/K selection where density effects are absent or present. The equations detail not only the adaptive life history but also the population dynamics. Furthermore, the equations can incorporate temporal individual differences, which are referred to as internal stochasticity. Our framework reveals that maximizing density effects is an evolutionarily stable strategy related to the carrying capacity. A significant consequence of our analysis is that adaptive strategies in both selections maximize an identical function, providing both population growth rate and carrying capacity. We apply our method to an optimal foraging problem in a semelparous species model and demonstrate that the adaptive strategy yields a lower intrinsic growth rate as well as a lower basic reproductive number than those obtained with other strategies. This study proposes that the diversity of life strategies arises due to the effects of density and internal stochasticity.

  17. Rigorously testing multialternative decision field theory against random utility models. (United States)

    Berkowitsch, Nicolas A J; Scheibehenne, Benjamin; Rieskamp, Jörg


    Cognitive models of decision making aim to explain the process underlying observed choices. Here, we test a sequential sampling model of decision making, multialternative decision field theory (MDFT; Roe, Busemeyer, & Townsend, 2001), on empirical grounds and compare it against two established random utility models of choice: the probit and the logit model. Using a within-subject experimental design, participants in two studies repeatedly chose among sets of options (consumer products) described on several attributes. In Study 1, all models predicted participants' choices equally well. In Study 2, in which the choice sets were explicitly designed to distinguish the models, MDFT had an advantage in predicting the observed choices. Study 2 further revealed the occurrence of multiple context effects within single participants, indicating an interdependent evaluation of choice options, and correlations between different context effects. In sum, the results indicate that sequential sampling models can provide relevant insights into the cognitive process underlying preferential choices and thus can lead to better choice predictions.
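    A few lines of code show why random utility models like the logit struggle with context effects: the logit obeys independence of irrelevant alternatives (IIA), so the probability ratio between two options is unchanged when a third option is added. The utilities below are made up for illustration.

    ```python
    import math

    def logit_probs(utilities):
        """Multinomial-logit choice probabilities (softmax of utilities)."""
        z = [math.exp(u) for u in utilities]
        s = sum(z)
        return [x / s for x in z]

    p2 = logit_probs([1.0, 0.5])         # options A and B alone
    p3 = logit_probs([1.0, 0.5, 0.2])    # add a third option C
    # P(A)/P(B) is identical in both choice sets (IIA); sequential sampling
    # with shifting attention, as in MDFT, is one way to break this
    # invariance and produce the context effects observed in Study 2.
    ```

    Designing choice sets where IIA is expected to fail, as in Study 2, is therefore exactly where the two model classes come apart.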

  18. Corvid re-caching without 'theory of mind': a model.

    Directory of Open Access Journals (Sweden)

    Elske van der Vaart

    Scrub jays are thought to use many tactics to protect their caches. For instance, they predominantly bury food far away from conspecifics, and if they must cache while being watched, they often re-cache their worms later, once they are in private. Two explanations have been offered for such observations, and they are intensely debated. First, the birds may reason about their competitors' mental states, with a 'theory of mind'; alternatively, they may apply behavioral rules learned in daily life. Although this second hypothesis is cognitively simpler, it does seem to require a different, ad-hoc behavioral rule for every caching and re-caching pattern exhibited by the birds. Our new theory avoids this drawback by explaining a large variety of patterns as side effects of stress and the resulting memory errors. Inspired by experimental data, we assume that re-caching is not motivated by a deliberate effort to safeguard specific caches from theft, but by a general desire to cache more. This desire is brought on by stress, which is determined by the presence and dominance of onlookers, and by unsuccessful recovery attempts. We study this theory with a kind of 'virtual bird', whose behavior depends on a set of basic assumptions about corvid cognition and a well-established model of human memory, in two experiments similar to those done with real birds. Our results show that the 'virtual bird' acts as the real birds did; its re-caching reflects whether it has been watched, how dominant its onlooker was, and how close to that onlooker it has cached. This happens even though it cannot attribute mental states, and it has only a single behavioral rule assumed to be previously learned. Thus, our simulations indicate that corvid re-caching can be explained without sophisticated social cognition. Given our specific predictions, our theory can easily be tested empirically.
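    The single behavioural rule can be caricatured in a few lines: stress rises with onlooker presence and dominance and with failed recovery attempts, and higher stress means more caching. The functional forms and weights below are invented for illustration and are far simpler than the paper's memory-model-based simulation.

    ```python
    def stress(onlooker_present: bool, onlooker_dominance: float,
               failed_recoveries: int) -> float:
        """Toy stress level; dominance is on a hypothetical 0-1 scale."""
        s = 0.0
        if onlooker_present:
            s += 0.5 + 0.5 * onlooker_dominance
        s += 0.2 * failed_recoveries
        return min(s, 2.0)  # cap the stress response

    def caches_made(base_desire: float, s: float) -> int:
        """More stress -> more (re-)caching, with no mental-state attribution."""
        return round(base_desire * (1.0 + s))
    ```

    Even this caricature reproduces the qualitative pattern: a watched bird with a dominant onlooker and failed recoveries caches more than an unwatched one, without any model of the onlooker's mind.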

  19. A Mathematical Formulation for 3D Quasi-Static Multibody Models of Diarthrodial Joints

    NARCIS (Netherlands)

    Kwak, S. D.; Blankevoort, L.; Ateshian, G. A.


    This study describes a general set of equations for quasi-static analysis of three-dimensional multibody systems, with a particular emphasis on modeling of diarthrodial joints. The model includes articular contact, muscle forces, tendons and tendon pulleys, ligaments, and the wrapping of soft tissue

  20. Formulating "Principles of Procedure" for the Foreign Language Classroom: A Framework for Process Model Language Curricula (United States)

    Villacañas de Castro, Luis S.


    This article aims to apply Stenhouse's process model of curriculum to foreign language (FL) education, a model which is characterized by enacting "principles of procedure" which are specific to the discipline which the school subject belongs to. Rather than to replace or dissolve current approaches to FL teaching and curriculum…