WorldWideScience

Sample records for regimes theory methods

  1. REVIEW OF REGIME THEORY OF ALLUVIAL CHANNELS

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    One of the most important problems in river engineering is to determine a stable cross section geometry and slope for an alluvial channel. This has been the subject of considerable research for about a century and continues to be of great practical interest. Ignoring plan geometry, an alluvial channel can adjust its slope, depth and width to develop a dynamic stable condition in which it can transport a certain amount of water and sediment. The brief history of regime theory and its new developments are reviewed in this paper.

  2. Analysis of the Two-Regime Method on Square Meshes

    KAUST Repository

    Flegg, Mark B.

    2014-01-01

    The two-regime method (TRM) has been recently developed for optimizing stochastic reaction-diffusion simulations [M. Flegg, J. Chapman, and R. Erban, J. Roy. Soc. Interface, 9 (2012), pp. 859-868]. It is a multiscale (hybrid) algorithm which uses stochastic reaction-diffusion models with different levels of detail in different parts of the computational domain. The coupling condition on the interface between different modeling regimes of the TRM was previously derived for one-dimensional models. In this paper, the TRM is generalized to higher dimensional reaction-diffusion systems. Coupling Brownian dynamics models with compartment-based models on regular (square) two-dimensional lattices is studied in detail. In this case, the interface between different modeling regimes contains either flat parts or right-angle corners. Both cases are studied in the paper. For flat interfaces, it is shown that the one-dimensional theory can be used along the line perpendicular to the TRM interface. In the direction tangential to the interface, two choices of the TRM parameters are presented. Their applicability depends on the compartment size and the time step used in the molecular-based regime. The two-dimensional generalization of the TRM is also discussed in the case of corners. © 2014 Society for Industrial and Applied Mathematics.
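
    As a minimal illustration of the two ingredients the TRM couples, the 1-D sketch below runs a compartment-based jump process on half of the domain and Brownian dynamics on the other half. The interface exchange rule marked PLACEHOLDER is a naive stand-in, not the derived TRM coupling condition, and all parameter values are arbitrary.

```python
import numpy as np

# 1-D toy: compartment-based jumps on [0, L/2], Brownian particles on (L/2, L].
rng = np.random.default_rng(0)
L, h, D, dt = 1.0, 0.05, 1e-3, 1e-2
n_comp = int((L / 2) / h)                  # 10 compartments of width h
comp = np.zeros(n_comp, dtype=int)
comp[0] = 200                              # molecules start at the left wall
particles = np.array([])                   # Brownian positions in (L/2, L]
jump_p = D * dt / h**2                     # per-step jump probability (keep << 1)

for step in range(2000):
    new = comp.copy()
    for i in range(n_comp):
        nl = rng.binomial(comp[i], jump_p)       # jump attempts to the left
        nr = rng.binomial(comp[i] - nl, jump_p)  # jump attempts to the right
        if i > 0:                                # left wall reflects
            new[i] -= nl
            new[i - 1] += nl
        if i < n_comp - 1:
            new[i] -= nr
            new[i + 1] += nr
        else:                                    # PLACEHOLDER interface rule:
            new[i] -= nr                         # right-jumpers become Brownian
            particles = np.append(particles, L / 2 + h * rng.random(nr))
    comp = new
    if particles.size:                           # Euler-Maruyama Brownian step
        particles = particles + np.sqrt(2 * D * dt) * rng.standard_normal(particles.size)
        particles = np.where(particles > L, 2 * L - particles, particles)  # reflect at L
        crossed = particles < L / 2              # PLACEHOLDER: re-absorb into last compartment
        comp[-1] += int(crossed.sum())
        particles = particles[~crossed]

print(comp, particles.size)
```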

  3. Orbital motion theory and operational regimes for cylindrical emissive probes

    Science.gov (United States)

    Chen, Xin; Sanchez-Arriaga, G.

    2017-02-01

    A full-kinetic model based on the orbital-motion theory for cylindrical emissive probes (EPs) is presented. The conservation of the distribution function, the energy, and the angular momentum for cylindrical probes immersed in collisionless and stationary plasmas is used to write the Vlasov-Poisson system as a single integro-differential equation. It describes self-consistently the electrostatic potential profile and, consequently, the current-voltage (I-V) probe characteristics. Its numerical solutions are used to identify different EP operational regimes, including orbital-motion-limited (OML)/non-OML current collection and monotonic/non-monotonic potential, in the parametric domain of probe bias and emission level. The most important features of the potential and density profiles are presented and compared with common approximations in the literature. Conventional methods to measure plasma potential with EPs are briefly revisited. A direct application of the model is to estimate plasma parameters by fitting I-V measurements to the theoretical results.
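
    The closing remark (estimating plasma parameters by fitting measured I-V curves to theory) amounts in practice to a least-squares problem. Below is a generic sketch in which `iv_model` is a hypothetical closed-form stand-in, since the paper's characteristics come from numerically solving the Vlasov-Poisson integro-differential equation rather than from any simple formula.

```python
import numpy as np
from scipy.optimize import curve_fit

# Generic least-squares fit of probe I-V data to a theoretical characteristic.
def iv_model(V, n_e, T_e, V_p):
    # Crude textbook-like shape: exponential electron retardation below the
    # plasma potential V_p, saturation above it (illustration only).
    T_e = np.abs(T_e)                      # guard against negative trial values
    I_sat = 1e-3 * n_e * np.sqrt(T_e)
    return np.where(V < V_p, I_sat * np.exp((V - V_p) / T_e), I_sat)

rng = np.random.default_rng(1)
V_data = np.linspace(-20.0, 10.0, 200)                     # bias sweep [V]
I_data = iv_model(V_data, 1.0, 2.0, -3.0) + 1e-5 * rng.standard_normal(200)

popt, pcov = curve_fit(iv_model, V_data, I_data, p0=[0.5, 1.0, 0.0])
print("fitted (n_e, T_e, V_p):", popt)
```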

  4. Life history theory predicts fish assemblage response to hydrologic regimes.

    Science.gov (United States)

    Mims, Meryl C; Olden, Julian D

    2012-01-01

    The hydrologic regime is regarded as the primary driver of freshwater ecosystems, structuring the physical habitat template, providing connectivity, framing biotic interactions, and ultimately selecting for specific life histories of aquatic organisms. In the present study, we tested ecological theory predicting directional relationships between major dimensions of the flow regime and the life history composition of fish assemblages in perennial free-flowing rivers throughout the continental United States. Using long-term discharge records and fish trait and survey data for 109 stream locations, we found that 11 out of 18 relationships (61%) tested between the three life history strategies (opportunistic, periodic, and equilibrium) and six hydrologic metrics (two each describing flow variability, predictability, and seasonality) were statistically significant (P < 0.05), and their directionality was largely consistent with predictions based on life history strategies, with 82% of all significant relationships observed supporting predictions from life history theory. Specifically, we found that (1) opportunistic strategists were positively related to measures of flow variability and negatively related to predictability and seasonality, (2) periodic strategists were positively related to high flow seasonality and negatively related to variability, and (3) the equilibrium strategists were negatively related to flow variability and positively related to predictability. Our study provides important empirical evidence illustrating the value of using life history theory to understand both the patterns and processes by which fish assemblage structure is shaped by adaptation to natural regimes of variability, predictability, and seasonality of critical flow events over broad biogeographic scales.
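
    The statistical tests summarized here are of the kind sketched below: correlating, across sites, the prevalence of a life history strategy with a hydrologic metric. The data, effect size, and variable names are hypothetical placeholders, not the study's dataset.

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical illustration: across 109 sites, correlate a flow-variability
# metric with the fraction of opportunistic strategists in each assemblage.
rng = np.random.default_rng(42)
n_sites = 109
flow_variability = rng.gamma(2.0, 1.0, n_sites)          # stand-in hydrologic metric
opportunistic = 0.2 + 0.1 * flow_variability + rng.normal(0.0, 0.2, n_sites)

rho, p = spearmanr(flow_variability, opportunistic)
print(f"rho = {rho:.2f}, P = {p:.3g}")   # theory predicts a positive relationship
```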

  5. Power Counting Regime of Chiral Effective Field Theory and Beyond

    CERN Document Server

    Hall, J M M; Leinweber, D B

    2010-01-01

    Chiral effective field theory complements numerical simulations of quantum chromodynamics (QCD) on a space-time lattice. It provides a model-independent formalism for connecting lattice simulation results at finite volume and a variety of quark masses to the physical world. The asymptotic nature of the chiral expansion places the focus on the first few terms of the expansion. Thus, knowledge of the power-counting regime (PCR) of chiral effective field theory, where higher-order terms of the expansion may be regarded as negligible, is as important as knowledge of the expansion itself. Through the consideration of a variety of renormalization schemes and associated parameters, techniques to identify the PCR where results are independent of the renormalization scheme are established. The nucleon mass is considered as a benchmark for illustrating this general approach. Because the PCR is small, the numerical simulation results are also examined to search for the possible presence of an intrinsic scale which may b...
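
    For orientation, the chiral expansion in question is the quark-mass expansion of the nucleon mass; schematically, with generic coefficients (a textbook-level form, not the paper's regulator-dressed expressions):

```latex
M_N(m_\pi^2) \;=\; a_0 + a_2\, m_\pi^2 + a_4\, m_\pi^4 + \cdots
\;-\; \frac{3 g_A^2}{32\pi f_\pi^2}\, m_\pi^3 \;+\; \dots,
\qquad m_\pi^2 \propto m_q .
```

    The analytic terms carry the renormalized low-energy coefficients, while the non-analytic term in $m_\pi^3$ is fixed by the leading pion loop; the PCR is the range of pion masses over which truncating this series is safe.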

  6. The generalization of A. E. Kennelly's theory of complex representation of electrical quantities in sinusoidal periodic regime to one- and three-phase electric quantities in non-sinusoidal periodic regime

    CERN Document Server

    Mihai, Gheorghe

    2010-01-01

    In this paper, a new mathematical method for electrical circuit calculus is proposed, based on the theory of complex linear operators in matrix form. The newly proposed method generalizes the theory of complex representation of electrical quantities in the sinusoidal periodic regime to the non-sinusoidal periodic regime.
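
    A sketch of the idea being generalized, in generic notation (not the author's): each harmonic of a non-sinusoidal periodic quantity gets its own Kennelly-style complex phasor, so circuit relations become matrix relations on the vector of harmonic phasors.

```latex
u(t) \;=\; \sum_{k \ge 1} \sqrt{2}\, U_k \sin(k\omega t + \varphi_k)
\;\longleftrightarrow\;
\underline{U} \;=\; \bigl(U_1 e^{j\varphi_1},\, U_2 e^{j\varphi_2},\, \dots\bigr)^{\top},
\qquad
Z_k \;=\; R + jk\omega L + \frac{1}{jk\omega C}.
```

    For linear time-invariant elements the operator is diagonal with the familiar per-harmonic impedances $Z_k$; coupled or nonlinear elements yield full matrices, which is where the matrix-operator formalism becomes necessary.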

  7. United theory of planet formation (i): Tandem regime

    Science.gov (United States)

    Ebisuzaki, Toshikazu; Imaeda, Yusuke

    2017-07-01

    The present paper is the first in a series presenting a new united theory of planet formation, which includes magneto-rotational instability and porous aggregation of solid particles in a consistent way. We here describe the "tandem" planet formation regime, in which solar-system-like planetary systems are likely to be produced. We have obtained a steady-state, 1-D model of the accretion disk of a protostar taking into account the magneto-rotational instability (MRI) and porous aggregation of solid particles. We find that the disk is divided into an outer turbulent region (OTR), an MRI suppressed region (MSR), and an inner turbulent region (ITR). The outer turbulent region is fully turbulent because of MRI. However, in the range r_out (= 8-60 AU) from the central star, MRI is suppressed around the midplane of the gas disk and a quiet area without turbulence appears, because the degree of ionization of the gas becomes low enough. The disk becomes fully turbulent again in the range r_in (= 0.2-1 AU), which is called the inner turbulent region, because the midplane temperature becomes high enough (>1000 K) due to gravitational energy release. Planetesimals are formed through gravitational instability at the outer and inner MRI fronts (the boundaries between the MRI suppressed region and the outer and inner turbulent regions) without particle enhancement in the original nebula composition, because of the radial concentration of the solid particles. At the outer MRI front, icy particles grow through low-velocity collisions into porous aggregates with low densities (down to ∼10⁻⁵ g cm⁻³). They eventually undergo gravitational instability to form icy planetesimals. On the other hand, rocky particles accumulate at the inner MRI front, since their drift velocities turn outward due to the local maximum in gas pressure. They undergo gravitational instability in a sub-disk of pebbles to form rocky planetesimals at the inner MRI front. They are likely

  8. A Review of the Detection Methods for Climate Regime Shifts

    Directory of Open Access Journals (Sweden)

    Qunqun Liu

    2016-01-01

    An abrupt climate change means that the climate system shifts from one steady state to another. Study of the phenomenon and theory of abrupt climate change is a new research field of modern climatology, and it is of great significance for the prediction of future climate change. The climate regime shift is one of the most common forms of abrupt climate change, and mainly refers to statistically significant changes in climate system variables at a given time scale. Detection methods can be roughly divided into five categories based on the type of abrupt change, namely abrupt mean value change, abrupt variance change, abrupt frequency change, abrupt probability density change, and multivariable analysis. The main research progress on abrupt climate change detection methods is reviewed, and applications of those methods to observational data are provided. With the development of nonlinear science, many new methods have been presented for detecting abrupt dynamic changes in recent years, which are a useful supplement to the abrupt change detection methods.
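
    As a minimal illustration of the first category (abrupt mean value change), a sliding two-sample t-test can flag candidate shift points. The window length and significance threshold below are arbitrary illustrative choices, not a method prescribed by the review.

```python
import numpy as np
from scipy.stats import ttest_ind

# Flag candidate mean-shift points with a sliding two-sample t-test.
def mean_shift_candidates(x, window=15, alpha=0.01):
    hits = []
    for t in range(window, len(x) - window):
        _, p = ttest_ind(x[t - window:t], x[t:t + window])
        if p < alpha:
            hits.append((t, p))
    return hits

rng = np.random.default_rng(0)
series = np.concatenate([rng.normal(0, 1, 60), rng.normal(1.5, 1, 60)])  # shift at t = 60
print(mean_shift_candidates(series)[:3])   # candidate indices cluster near 60
```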

  9. European energy security analysing the EU-Russia energy security regime in terms of interdependence theory

    CERN Document Server

    Esakova, Nataliya

    2012-01-01

    Nataliya Esakova performs an analysis of the interdependencies and the nature of cooperation between energy producing, consuming and transit countries focusing on the gas sector. For the analysis the theoretical framework of the interdependence theory by Robert O. Keohane and Joseph S. Nye and the international regime theory are applied to the recent developments within the gas relationship between the European Union and Russia in the last decade. The objective of the analysis is to determine whether a fundamental regime change in terms of international regime theory is taking place and, if so, which regime change explanation model in terms of interdependence theory is likely to apply.

  10. European energy security. Analysing the EU-Russia energy security regime in terms of interdependence theory

    Energy Technology Data Exchange (ETDEWEB)

    Esakova, Nataliya

    2012-07-01

    Nataliya Esakova performs an analysis of the interdependencies and the nature of cooperation between energy producing, consuming and transit countries focusing on the gas sector. For the analysis the theoretical framework of the interdependence theory by Robert O. Keohane and Joseph S. Nye and the international regime theory are applied to the recent developments within the gas relationship between the European Union and Russia in the last decade. The objective of the analysis is to determine whether a fundamental regime change in terms of international regime theory is taking place and, if so, which regime change explanation model in terms of interdependence theory is likely to apply. (orig.)

  11. Introducing legal method when teaching stakeholder theory

    DEFF Research Database (Denmark)

    Buhmann, Karin

    2015-01-01

    Governments are particularly salient stakeholders for business ethics. They act on societal needs and social expectations, and have the political and legal powers to restrict or expand the economic freedoms of business as well as the legitimacy and often urgency to do so. We draw on two examples: the Business & Human Rights regime from a UN Global Compact perspective; and mandatory CSR reporting. Supplying integrated teaching notes and generalising on the examples, we explain how legal method may help students of business ethics, organisation and management – future managers – in their analysis...... to the business ethics literature by explaining how legal method complements stakeholder theory for organisational practice....

  12. Setting up the drying regimes based on the theory of moisture migration during drying

    Science.gov (United States)

    Vasić, M.; Radojević, Z.

    2016-08-01

    Drying is an energy-intensive process which has an important effect on the quality of commercially dried clay tiles. Chamber and tunnel dryers are constantly improving; better technical equipment and operational strategies have led to higher quality of the dried clay products. The moisture migration during an isothermal drying process can be visually traced on the curve representing the relationship between the variable effective moisture diffusivity (Deff) and the moisture ratio (MR). The proposed non-isothermal drying regimes consisted of several isothermal segments. For the first time, the specification of the isothermal segments and their duration was not set by experience or by a trial-and-error method; it was determined from the isothermal Deff-MR curves in accordance with the theory of moisture migration during drying. The proposed drying regimes were tested: clay roofing tiles were dried without cracks, and the dried tiles satisfied all requirements defined in the EN 1304 norm related to shape regularity and mechanical properties.

  13. Tautomerism methods and theories

    CERN Document Server

    Antonov, Liudmil

    2013-01-01

    Covering the gap between basic textbooks and over-specialized scientific publications, this is the first reference available to describe this interdisciplinary topic for PhD students and scientists starting in the field. The result is an introductory description providing suitable practical examples of the basic methods used to study tautomeric processes, as well as the theories describing the tautomerism and proton transfer phenomena. It also includes different spectroscopic methods for examining tautomerism, such as UV-Vis, time-resolved fluorescence spectroscopy, and NMR spectroscopy.

  14. Quantum resource theories in the single-shot regime

    Science.gov (United States)

    Gour, Gilad

    2017-06-01

    One of the main goals of any resource theory, such as entanglement, quantum thermodynamics, quantum coherence, or asymmetry, is to find necessary and sufficient conditions that determine whether one resource can be converted to another by the set of free operations. Here we find such conditions for a large class of quantum resource theories which we call affine resource theories. Affine resource theories include the resource theories of athermality, asymmetry, and coherence, but not entanglement. Remarkably, the necessary and sufficient conditions can be expressed as a family of inequalities between resource monotones (quantifiers) that are given in terms of the conditional min-entropy. The set of free operations is taken to be (1) the maximal set (i.e., consisting of all resource nongenerating quantum channels) or (2) the self-dual set of free operations (i.e., consisting of all resource nongenerating maps for which the dual map is also resource nongenerating). As an example, we apply our results to quantum thermodynamics with Gibbs-preserving operations, and to several other affine resource theories. Finally, we discuss the applications of these results to resource theories that are not affine and, along the way, provide the necessary and sufficient conditions under which a quantum resource theory possesses a resource destroying map.
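
    For reference, the conditional min-entropy appearing in these monotones is the standard one (general definition, not specific to this paper):

```latex
H_{\min}(A|B)_\rho \;=\; -\min_{\sigma_B}\, D_{\max}\!\bigl(\rho_{AB}\,\big\|\,\mathbb{1}_A \otimes \sigma_B\bigr),
\qquad
D_{\max}(\rho\,\|\,\sigma) \;=\; \min\{\lambda : \rho \le 2^{\lambda}\sigma\}.
```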

  15. Nanosecond Repetitively Pulsed Discharges in Air at Atmospheric Pressure -- Experiment and Theory of Regime Transitions

    Science.gov (United States)

    Pai, David; Lacoste, Deanna; Laux, Christophe

    2009-10-01

    In atmospheric pressure air preheated from 300 to 1000 K, the Nanosecond Repetitively Pulsed (NRP) method has been used to generate corona, glow, and spark discharges. Experiments have been performed to determine the parameter space (applied voltage, pulse repetition frequency, ambient gas temperature, and inter-electrode gap distance) of each discharge regime. Notably, there is a minimum gap distance for the existence of the glow regime that increases with decreasing gas temperature. A theory is developed to describe the Corona-to-Glow (C-G) and Glow-to-Spark (G-S) transitions for NRP discharges. The C-G transition is shown to depend on the Avalanche-to-Streamer Transition (AST) as well as the electric field strength in the positive column. The G-S transition is due to the thermal ionization instability. The minimum gap distance for the existence of the glow regime can be understood by considering that the applied voltage of the AST must be lower than that of the thermal ionization instability. This is a previously unknown criterion for generating glow discharges, as it does not correspond to the Paschen minimum or to the Meek-Raether criterion.

  16. Enforcing the climate regime: Game theory and the Marrakesh Accords

    Energy Technology Data Exchange (ETDEWEB)

    Hovi, Jon

    2002-07-01

    The article reviews basic insights about compliance and "hard" enforcement that can be derived from various non-cooperative equilibrium concepts and evaluates the Marrakesh Accords in light of these insights. Five different notions of equilibrium are considered: the Nash equilibrium, the subgame perfect equilibrium, the renegotiation-proof equilibrium, the coalition-proof equilibrium and the perfect Bayesian equilibrium. These various types of equilibrium have a number of implications for effective enforcement: 1. Consequences of non-compliance should be more than proportionate. 2. To be credible, punishment needs to take place on the Pareto frontier, rather than by reversion to some suboptimal state. 3. An effective enforcement system must be able to curb collective as well as individual incentives to cheat. 4. A fully transparent enforcement regime could in fact turn out to be detrimental for compliance levels. It is concluded that constructing an effective system for "hard" enforcement of the Kyoto Protocol is a formidable task that has only partially been accomplished by the Marrakesh Accords. A possible explanation is that the design of a compliance system for the climate regime involved a careful balancing of the desire to minimise non-compliance against other important considerations. (Author)
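
    To make the equilibrium vocabulary concrete, here is a toy sketch that enumerates pure-strategy Nash equilibria of a two-country compliance game. The payoff numbers are invented for illustration; the article itself works with richer refinements such as subgame perfection and renegotiation-proofness.

```python
import numpy as np

# Toy 2x2 compliance game: each country chooses Comply or Cheat. Payoffs are
# hypothetical, chosen so that mutual cheating is the unique pure Nash
# equilibrium -- the enforcement problem the article discusses.
A = np.array([[3, 0],    # row player's payoffs (rows: Comply/Cheat)
              [4, 1]])
B = np.array([[3, 4],    # column player's payoffs (columns: Comply/Cheat)
              [0, 1]])

def pure_nash(A, B):
    names = ("Comply", "Cheat")
    eqs = []
    for i in range(2):
        for j in range(2):
            # (i, j) is Nash if neither player gains by a unilateral deviation
            if A[i, j] >= A[:, j].max() and B[i, j] >= B[i, :].max():
                eqs.append((names[i], names[j]))
    return eqs

print(pure_nash(A, B))   # -> [('Cheat', 'Cheat')] for these payoffs
```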

  17. Study of the Transition Flow Regime using Monte Carlo Methods

    Science.gov (United States)

    Hassan, H. A.

    1999-01-01

    This NASA Cooperative Agreement presents a study of the Transition Flow Regime Using Monte Carlo Methods. The topics included in this final report are: 1) New Direct Simulation Monte Carlo (DSMC) procedures; 2) The DS3W and DS2A Programs; 3) Papers presented; 4) Miscellaneous Applications and Program Modifications; 5) Solution of Transitional Wake Flows at Mach 10; and 6) Turbulence Modeling of Shock-Dominated Flows with a k-Enstrophy Formulation.

  18. Chiral effective field theory beyond the power-counting regime

    CERN Document Server

    Hall, Jonathan M M; Young, Ross D

    2011-01-01

    Novel techniques are presented, which identify the chiral power-counting regime (PCR), and realize the existence of an intrinsic energy scale embedded in lattice QCD results that extend outside the PCR. The nucleon mass is considered as a benchmark for illustrating this new approach. Using finite-range regularization, an optimal regularization scale can be extracted from lattice simulation results by analyzing the renormalization of the low energy coefficients. The optimal scale allows a description of lattice simulation results that extend beyond the PCR by quantifying and thus handling any scheme-dependence. Preliminary results for the nucleon magnetic moment are also examined, and a consistent optimal regularization scale is obtained. This indicates the existence of an intrinsic scale corresponding to the finite size of the source of the pion cloud.

  19. OPTIMIZATION OF TAX REGIME USING THE INSTRUMENT OF GAME THEORY

    Directory of Open Access Journals (Sweden)

    Igor Yu. Pelevin

    2014-01-01

    The article is devoted to one possible mechanism for the taxation optimization of agricultural enterprises, based on game theory. Use of this mechanism allows the optimal type of taxation to be applied, to the benefit of both the taxpayer and the government. The article also offers a definition of the tax storage and its possible applications.

  20. Optically Levitating Dielectrics in the Quantum Regime: Theory and Protocols

    CERN Document Server

    Romero-Isart, Oriol; Juan, Mathieu L; Quidant, Romain; Kiesel, Nikolai; Aspelmeyer, Markus; Cirac, J Ignacio

    2010-01-01

    We provide a general quantum theory to describe the coupling of light with the motion of a dielectric object inside a high finesse optical cavity. In particular, we derive the total Hamiltonian of the system as well as a master equation describing the state of the center of mass mode of the dielectric and the cavity field mode. In addition, a quantum theory of elasticity is used in order to study the coupling of the center of mass motion with internal vibrational excitations of the dielectric. This general theory is applied to the recent proposal of using an optically levitating nanodielectric as a cavity optomechanical system [Romero-Isart et al. NJP 12, 033015 (2010), Chang et al. PNAS 107, 1005 (2010)]. On this basis, we also design a light-mechanics interface to prepare non-Gaussian states of the mechanical motion, such as quantum superpositions of Fock states. Finally, we introduce a direct mechanical tomography scheme to probe these genuine quantum states by time of flight experiments.
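
    For orientation, the dispersive coupling at the heart of such cavity-optomechanics proposals has the standard schematic form below; the paper derives the full Hamiltonian, including the coupling to internal elastic modes, from first principles.

```latex
H \;=\; \hbar\omega_c\, a^\dagger a \;+\; \hbar\omega_m\, b^\dagger b
\;+\; \hbar g_0\, a^\dagger a\,\bigl(b + b^\dagger\bigr),
```

    where $a$ and $b$ annihilate cavity photons and center-of-mass phonons, respectively, and $g_0$ is the single-photon optomechanical coupling.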

  1. Cyclostationarity theory and methods

    CERN Document Server

    Leśkow, Jacek; Napolitano, Antonio; Sanchez-Ramirez, Andrea

    2014-01-01

    In the last decade the research in signal analysis was dominated by models that encompass nonstationarity as an important feature. This book presents the results of a workshop held in Grodek, Poland, in February 2013, which was dedicated to the investigation of cyclostationary signals. Its main objective is to highlight the strong interactions between theory and applications of cyclostationary signals with the use of modern statistical tools. An important application of cyclostationary signals is the analysis of mechanical signals generated by a vibrating mechanism. Cyclostationary models are very important to perform basic operations on signals in both time and frequency domains. One of the fundamental problems in the diagnosis of rotating machines is the identification of significant modulating frequencies that contribute to the cyclostationary nature of the signals. The book shows that there are modern tools available for analyzing cyclostationary signals without the assumption of gaussianity. Those methods are...
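
    As a pointer to the basic tool involved, here is a sketch of estimating the cyclic autocorrelation at a trial cycle frequency (a standard estimator, not code from the book; the test signal and frequencies are illustrative).

```python
import numpy as np

# Estimate R_alpha(tau) = <x(t) x(t+tau) e^{-j 2 pi alpha t}>_t. A nonzero value
# at some alpha != 0 signals cyclostationarity, e.g. a modulating frequency in
# the vibration signal of a rotating machine.
def cyclic_autocorr(x, lag, alpha, fs):
    t = np.arange(len(x) - lag) / fs
    return np.mean(x[: len(x) - lag] * x[lag:] * np.exp(-2j * np.pi * alpha * t))

fs = 1000.0
t = np.arange(0.0, 4.0, 1.0 / fs)
noise = np.random.default_rng(0).standard_normal(t.size)
x = (1.0 + 0.5 * np.cos(2 * np.pi * 5.0 * t)) * noise      # 5 Hz amplitude modulation

print(abs(cyclic_autocorr(x, 0, 5.0, fs)))    # large near the true cycle frequency
print(abs(cyclic_autocorr(x, 0, 7.3, fs)))    # small away from cycle frequencies
```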

  2. Introducing legal method when teaching stakeholder theory

    DEFF Research Database (Denmark)

    Buhmann, Karin

    2015-01-01

    Governments are particularly salient stakeholders for business ethics. They act on societal needs and social expectations, and have the political and legal powers to restrict or expand the economic freedoms of business as well as the legitimacy and often urgency to do so. We draw on two examples......: the Business & Human Rights regime from a UN Global Compact perspective; and mandatory CSR reporting. Supplying integrated teaching notes and generalising on the examples, we explain how legal method may help students of business ethics, organisation and management – future managers – in their analysis...... to the business ethics literature by explaining how legal method complements stakeholder theory for organisational practice....

  3. The epsilon regime of chiral perturbation theory with Wilson-type fermions

    Energy Technology Data Exchange (ETDEWEB)

    Jansen, K. [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany). John von Neumann-Inst. fuer Computing NIC]; Shindler, A. [Liverpool Univ. (United Kingdom). Theoretical Physics Division]

    2009-11-15

    In this proceedings contribution we report on the ongoing effort to simulate Wilson-type fermions in the so-called epsilon regime of chiral perturbation theory (cPT). We present results for the chiral condensate and the pseudoscalar decay constant obtained with Wilson twisted mass fermions employing two lattice spacings, two different physical volumes and several quark masses. With this set of simulations we make a first attempt to estimate the systematic uncertainties. (orig.)

  4. Density functional theory of the Seebeck coefficient in the Coulomb blockade regime

    Science.gov (United States)

    Yang, Kaike; Perfetto, Enrico; Kurth, Stefan; Stefanucci, Gianluca; D'Agosta, Roberto

    2016-08-01

    The Seebeck coefficient plays a fundamental role in identifying the efficiency of a thermoelectric device. Its theoretical evaluation for atomistic models is routinely based on density functional theory calculations combined with the Landauer-Büttiker approach to quantum transport. This combination, however, suffers from serious drawbacks for devices in the Coulomb blockade regime. We show how to cure the theory through a simple correction in terms of the temperature derivative of the exchange correlation potential. Our results compare well with both rate equations and experimental findings on carbon nanotubes.
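
    For context, in the Landauer-Büttiker approach mentioned here the Seebeck coefficient follows from the transmission function $\mathcal{T}(E)$ in the standard linear-response form below; the paper's contribution is a correction on top of this, involving the temperature derivative of the exchange-correlation potential.

```latex
S \;=\; -\frac{1}{|e|\,T}\,
\frac{\displaystyle\int dE\,(E-\mu)\,\mathcal{T}(E)\left(-\frac{\partial f}{\partial E}\right)}
     {\displaystyle\int dE\,\mathcal{T}(E)\left(-\frac{\partial f}{\partial E}\right)} .
```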

  5. Conspiracy Theories as Alternative Regimes of Truth and as a Universal Socio-Cultural Phenomenon

    Directory of Open Access Journals (Sweden)

    Jovana Diković

    2016-02-01

    The paper analyzes the universal phenomenon of conspiracy theories, which, depending on specific social or political circumstances and on the social context in general, exist as alternative regimes of truth. In both the institutional and private spheres, conspiracy theories represent a kind of cognitive apparatus by means of which individuals and groups gain certain "information" and interpretations of those levels of reality that seem exclusive, hidden and controlled by supposed centers of power. The ultimate implications of conspiracy theories can be seen in the forming of prejudices, which, particularly in certain political circumstances and in combination with populist doctrines, can have a great impact on the public and private spheres. The potential of conspiracy theories, therefore, goes beyond mere intellectual intrigue and diversion for the masses, as they lend themselves to all kinds of political instrumentalization.

  6. Approximation methods in probability theory

    CERN Document Server

    Čekanavičius, Vydas

    2016-01-01

    This book presents a wide range of well-known and less common methods used for estimating the accuracy of probabilistic approximations, including the Esseen type inversion formulas, the Stein method as well as the methods of convolutions and triangle function. Emphasising the correct usage of the methods presented, each step required for the proofs is examined in detail. As a result, this textbook provides valuable tools for proving approximation theorems. While Approximation Methods in Probability Theory will appeal to everyone interested in limit theorems of probability theory, the book is particularly aimed at graduate students who have completed a standard intermediate course in probability theory. Furthermore, experienced researchers wanting to enlarge their toolkit will also find this book useful.

  7. Wave theories of non-laminar charged particle beams: from quantum to thermal regime

    Science.gov (United States)

    Fedele, Renato; Tanjia, Fatema; Jovanović, Dusan; de Nicola, Sergio; Ronsivalle, Concetta

    2014-04-01

    The standard classical description of non-laminar charged particle beams in paraxial approximation is extended to the context of two wave theories. The first theory that we discuss (Fedele R. and Shukla, P. K. 1992 Phys. Rev. A 45, 4045. Tanjia, F. et al. 2011 Proceedings of the 38th EPS Conference on Plasma Physics, Vol. 35G. Strasbourg, France: European Physical Society) is based on the Thermal Wave Model (TWM) (Fedele, R. and Miele, G. 1991 Nuovo Cim. D 13, 1527.) that interprets the paraxial thermal spreading of beam particles as the analog of quantum diffraction. The other theory is based on a recently developed model (Fedele, R. et al. 2012a Phys. Plasmas 19, 102106; Fedele, R. et al. 2012b AIP Conf. Proc. 1421, 212), hereafter called Quantum Wave Model (QWM), that takes into account the individual quantum nature of single beam particle (uncertainty principle and spin) and provides collective description of beam transport in the presence of quantum paraxial diffraction. Both in quantum and quantum-like regimes, the beam transport is governed by a 2D non-local Schrödinger equation, with self-interaction coming from the nonlinear charge- and current-densities. An envelope equation of the Ermakov-Pinney type, which includes collective effects, is derived for both TWM and QWM regimes. In TWM, such description recovers the well-known Sacherer's equation (Sacherer, F. J. 1971 IEEE Trans. Nucl. Sci. NS-18, 1105). Conversely, in the quantum regime and in Hartree's mean field approximation, one recovers the evolution equation for a single-particle spot size, i.e. for a single quantum ray spot in the transverse plane (Compton regime). We demonstrate that such quantum evolution equation contains the same information as the evolution equation for the beam spot size that describes the beam as a whole. This is done heuristically by defining the lowest QWM state accessible by a system of non-overlapping fermions. The latter are associated with temperature values that are
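
    The envelope equation mentioned in the abstract is of the Ermakov-Pinney type, whose bare form is shown below (schematic only; the collective self-interaction terms that the paper includes are omitted here):

```latex
\frac{d^2\sigma}{dz^2} + k(z)\,\sigma \;=\; \frac{\epsilon^2}{\sigma^3},
```

    with $\sigma(z)$ the beam spot size, $k(z)$ the external focusing strength, and $\epsilon$ an emittance-like constant (thermal in TWM, quantum in QWM).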

  8. “Hybrid institutions”: Applications of common property theory beyond discrete tenure regimes

    Directory of Open Access Journals (Sweden)

    Laura German

    2009-09-01

    Property rights theory has contributed a great deal to global understanding of the factors shaping the management, governance and sustainability of discrete property regimes (individual, State, commons). Yet as the commons become increasingly altered and enclosed and management challenges extend beyond the boundaries of any given unit of property, institutional theory must extend beyond discrete property regimes. This paper argues that as natural resource management challenges grow more complex and interconnected, common property theory in the Ostrom tradition remains an essential component of successful management solutions – for common pool resources, public and private goods alike. Building on the commons and externality literature in general, and the Ostrom and Coasean traditions in particular, we propose the use of the term "hybrid institution" to explore the governance of common or connected interests within and between diverse property regimes. Following a general introduction to a set of propositions for encompassing this expanded realm of application of commons theory, we use the literature on integrated natural resource management to frame the scope of "commons" issues facing rural communities today. Empirical and action research from eastern Africa and logical arguments are each used to illustrate and sharpen the focus of our propositions so that they can be tested and refined in future research. This analysis demonstrates the instrumental potential of the concept of hybrid institutions as a framework for shaping more productive engagements with seemingly intractable natural resource management challenges at farm and landscape scale. Our analysis suggests that central elements of the Ostrom and Coasean traditions can be complementary explanatory lenses for contemporary resource conflict and management.

  9. Constitutional Reform and Political Regime in Interwar Portugal. A Challenge for Political Theory

    Directory of Open Access Journals (Sweden)

    Florin-Ciprian MITREA

    2013-06-01

    Salazar’s authoritarian regime (1932-1968) unquestionably represents a controversial moment in Europe’s political history. Antonio Salazar is considered either a saviour of interwar Portugal and an exponent of Christian philosophy in politics or, on the contrary, a dictator of fascist filiation who obstructed his country’s democratic evolution. All disputes aside, it can be stated with certainty that the Portuguese politician was the longest-serving state leader of twentieth-century Europe and that his constitutional philosophy is still a challenge for political theory. Was Salazar’s an authoritarian, dictatorial or totalitarian regime or, conversely, can it be considered a sui generis aspect of the Mediterranean political model? Starting from this question, the aim of this article is to analyse the substance of Salazar’s political thought, as well as the phenomenon of its reception, from the viewpoint of the Arendtian critique of totalitarianism and of the model of conceptual history as theorised by Reinhart Koselleck.

  10. Kinetic theory of turbulence for parallel propagation revisited: Low-to-intermediate frequency regime

    Science.gov (United States)

    Yoon, Peter H.

    2015-09-01

    A previous paper [P. H. Yoon, "Kinetic theory of turbulence for parallel propagation revisited: Formal results," Phys. Plasmas 22, 082309 (2015)] revisited the second-order nonlinear kinetic theory for turbulence propagating in directions parallel/anti-parallel to the ambient magnetic field, in which the original work according to Yoon and Fang [Phys. Plasmas 15, 122312 (2008)] was refined, following the paper by Gaelzer et al. [Phys. Plasmas 22, 032310 (2015)]. The main finding involved the dimensional correction pertaining to discrete-particle effects in Yoon and Fang's theory. However, the final result was presented in terms of formal linear and nonlinear susceptibility response functions. In the present paper, the formal equations are explicitly written down for the case of the low-to-intermediate frequency regime by making use of approximate forms for the response functions. The resulting equations are sufficiently concrete that they can readily be solved by numerical means or analyzed by theoretical means. The derived set of equations describes nonlinear interactions of quasi-parallel modes whose frequency range covers the Alfvén wave range to the ion-cyclotron mode, but is sufficiently lower than the electron cyclotron mode. The application of the present formalism may range from the nonlinear evolution of whistler anisotropy instability in the high-beta regime to the nonlinear interaction of electrons with whistler-range turbulence.

  11. Kinetic theory of turbulence for parallel propagation revisited: Low-to-intermediate frequency regime

    Energy Technology Data Exchange (ETDEWEB)

    Yoon, Peter H., E-mail: yoonp@umd.edu [University of Maryland, College Park, Maryland 20742 (United States); School of Space Research, Kyung Hee University, Yongin, Gyeonggi 446-701 (Korea, Republic of)

    2015-09-15

    A previous paper [P. H. Yoon, “Kinetic theory of turbulence for parallel propagation revisited: Formal results,” Phys. Plasmas 22, 082309 (2015)] revisited the second-order nonlinear kinetic theory for turbulence propagating in directions parallel/anti-parallel to the ambient magnetic field, in which the original work according to Yoon and Fang [Phys. Plasmas 15, 122312 (2008)] was refined, following the paper by Gaelzer et al. [Phys. Plasmas 22, 032310 (2015)]. The main finding involved the dimensional correction pertaining to discrete-particle effects in Yoon and Fang's theory. However, the final result was presented in terms of formal linear and nonlinear susceptibility response functions. In the present paper, the formal equations are explicitly written down for the case of the low-to-intermediate frequency regime by making use of approximate forms for the response functions. The resulting equations are sufficiently concrete that they can readily be solved by numerical means or analyzed by theoretical means. The derived set of equations describes nonlinear interactions of quasi-parallel modes whose frequency range covers the Alfvén wave range to the ion-cyclotron mode, but is sufficiently lower than the electron cyclotron mode. The application of the present formalism may range from the nonlinear evolution of whistler anisotropy instability in the high-beta regime to the nonlinear interaction of electrons with whistler-range turbulence.

  12. Random matrix theory for mixed regular-chaotic dynamics in the super-extensive regime

    CERN Document Server

    El-Hady, A Abd

    2011-01-01

    We apply Tsallis's q-indexed nonextensive entropy to formulate a random matrix theory (RMT) which may be suitable for systems with mixed regular-chaotic dynamics. We consider the super-extensive regime q < 1. We obtain analytical expressions for the level-spacing distributions, which are strictly valid for 2 × 2 random-matrix ensembles, as usually done in the standard RMT. We compare the results with spacing distributions numerically calculated for random matrix ensembles describing a harmonic oscillator perturbed by Gaussian orthogonal and unitary ensembles.
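
    For comparison, the 2 × 2 level-spacing distributions that the q-deformed expressions should reduce to in the extensive limit q → 1 are the standard Wigner surmises:

```latex
P_{\mathrm{GOE}}(s) \;=\; \frac{\pi s}{2}\, e^{-\pi s^2/4},
\qquad
P_{\mathrm{GUE}}(s) \;=\; \frac{32\, s^2}{\pi^2}\, e^{-4 s^2/\pi}.
```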

  13. Lattice QCD in the ε-regime and random matrix theory

    Energy Technology Data Exchange (ETDEWEB)

    Giusti, L.; Luescher, M. [CERN, Geneva (Switzerland); Weisz, P. [Max-Planck-Institut fuer Physik, Muenchen (Germany); Wittig, H. [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany)

    2003-11-01

    In the ε-regime of QCD the main features of the spectrum of the low-lying eigenvalues of the (euclidean) Dirac operator are expected to be described by a certain universality class of random matrix models. In particular, the latter predict the joint statistical distribution of the individual eigenvalues in any topological sector of the theory. We compare some of these predictions with high-precision numerical data obtained from quenched lattice QCD for a range of lattice spacings and volumes. While no complete matching is observed, the results agree with theoretical expectations at volumes larger than about 5 fm⁴. (orig.)

  14. Lattice QCD in the ε-regime and random matrix theory

    Energy Technology Data Exchange (ETDEWEB)

    Giusti, Leonardo; Luescher, Martin [CERN, Theory Division, Geneva (Switzerland)]. E-mail addresses: leonardo.giusti@cern.ch; luscher@mail.cern.ch; Weisz, Peter [Max-Planck-Institut fuer Physik, Munich (Germany)]. E-mail: pew@dmumpiwh.mppmu.mpg.de; Wittig, Hartmut [DESY, Theory Group, Hamburg (Germany)]. E-mail: hartmut.wittig@desy.de

    2003-11-01

    In the ε-regime of QCD the main features of the spectrum of the low-lying eigenvalues of the (euclidean) Dirac operator are expected to be described by a certain universality class of random matrix models. In particular, the latter predict the joint statistical distribution of the individual eigenvalues in any topological sector of the theory. We compare some of these predictions with high-precision numerical data obtained from quenched lattice QCD for a range of lattice spacings and volumes. While no complete matching is observed, the results agree with theoretical expectations at volumes larger than about 5 fm⁴. (author)

  15. Informetrics theory, methods and applications

    CERN Document Server

    Qiu, Junping; Yang, Siluo; Dong, Ke

    2017-01-01

    This book provides an accessible introduction to the history, theory and techniques of informetrics. Divided into 14 chapters, it develops the content of informetrics from its theory, methods and applications; systematically analyzes the six basic laws and the theoretical basis of informetrics; and presents quantitative analysis methods such as citation analysis and computer-aided analysis. It also discusses applications in information resource management, information and library science, science of science, scientific evaluation and forecasting. Lastly, it describes a new development in informetrics: webometrics. Providing a comprehensive overview of the complex issues in today's environment, this book is a valuable resource for all researchers, students and practitioners in library and information science.

  16. Quantum fields in the non-perturbative regime. Yang-Mills theory and gravity

    Energy Technology Data Exchange (ETDEWEB)

    Eichhorn, Astrid

    2011-09-06

    In this thesis we study candidates for fundamental quantum field theories, namely non-Abelian gauge theories and asymptotically safe quantum gravity. Whereas the first ones have a strongly interacting low-energy limit, the second one enters a non-perturbative regime at high energies. Thus, we apply a tool suited to the study of quantum field theories beyond the perturbative regime, namely the Functional Renormalisation Group. In a first part, we concentrate on the physical properties of non-Abelian gauge theories at low energies. Focussing on the vacuum properties of the theory, we present an evaluation of the full effective potential for the field strength invariant F_μν F^μν from non-perturbative gauge correlation functions and find a non-trivial minimum corresponding to the existence of a dimension-four gluon condensate in the vacuum. We also relate the infrared asymptotic form of the β function of the running background-gauge coupling to the asymptotic behavior of Landau-gauge gluon and ghost propagators and derive an upper bound on their scaling exponents. We then consider the theory at finite temperature and study the nature of the confinement phase transition in d = 3+1 dimensions in various non-Abelian gauge theories. For SU(N) with N = 3, ..., 12 and Sp(2) we find a first-order phase transition in agreement with general expectations. Moreover our study suggests that the phase transition in E(7) Yang-Mills theory is also of first order. Our studies shed light on the question of which property of a gauge group determines the order of the phase transition. In a second part we consider asymptotically safe quantum gravity. Here, we focus on the Faddeev-Popov ghost sector of the theory, to study its properties in the context of an interacting UV regime. We investigate several truncations, which all lend support to the conjecture that gravity may be asymptotically safe. In a first truncation, we study the ghost anomalous dimension

  17. Operator theory and numerical methods

    CERN Document Server

    Fujita, H; Suzuki, T

    2001-01-01

    In accordance with developments in computation, theoretical studies of numerical schemes are now fruitful and highly needed. In 1991 an article on the finite element method applied to evolutionary problems was published. Following that approach, this book studies various schemes from operator-theoretical points of view. Many parts are devoted to the finite element method, but other schemes and problems (charge simulation method, domain decomposition method, nonlinear problems, and so forth) are also discussed, motivated by the observation that practically useful schemes have fine mathematical structures and the converses are also true. This book has the following chapters: 1. Boundary Value Problems and FEM. 2. Semigroup Theory and FEM. 3. Evolution Equations and FEM. 4. Other Methods in Time Discretization. 5. Other Methods in Space Discretization. 6. Nonlinear Problems. 7. Domain Decomposition Method.

  18. Optimal regime of jet fuel water impregnation by ultrasonic dispersion method

    Directory of Open Access Journals (Sweden)

    В. Г. Бережний

    2000-09-01

    The efficiency of existing methods and devices for jet fuel water impregnation is analyzed, together with their advantages and disadvantages. A principal scheme of the installation and an optimal regime for jet fuel water impregnation by the ultrasonic dispersion method are proposed.

  19. Local theory of extrapolation methods

    Science.gov (United States)

    Kulikov, Gennady

    2010-03-01

    In this paper we discuss the theory of one-step extrapolation methods applied both to ordinary differential equations and to index 1 semi-explicit differential-algebraic systems. The theoretical background of this numerical technique is the asymptotic global error expansion of numerical solutions obtained from general one-step methods. It was discovered independently by Henrici, Gragg and Stetter in 1962, 1964 and 1965, respectively. This expansion is also used in most global error estimation strategies as well. However, the asymptotic expansion of the global error of one-step methods is difficult to observe in practice. Therefore we give another substantiation of extrapolation technique that is based on the usual local error expansion in a Taylor series. We show that the Richardson extrapolation can be utilized successfully to explain how extrapolation methods perform. Additionally, we prove that the Aitken-Neville algorithm works for any one-step method of an arbitrary order s, under suitable smoothness.
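
    A minimal sketch of the technique discussed: Richardson extrapolation of the explicit Euler method organized as an Aitken-Neville tableau. This is an illustrative implementation assuming a first-order base method with a smooth global error expansion, not code from the paper.

```python
import numpy as np

# Richardson extrapolation of explicit Euler for y' = f(t, y) over [t0, t0 + H].
# Euler's global error has an asymptotic expansion in powers of h, so the
# Aitken-Neville tableau over step sizes H/1, H/2, ... eliminates the error
# terms order by order.
def euler(f, t0, y0, H, n):
    h, y, t = H / n, y0, t0
    for _ in range(n):
        y, t = y + h * f(t, y), t + h
    return y

def extrapolate(f, t0, y0, H, levels=4):
    T = np.zeros((levels, levels))
    for i in range(levels):
        T[i, 0] = euler(f, t0, y0, H, i + 1)       # n_i = 1, 2, 3, ...
        for k in range(1, i + 1):
            # Aitken-Neville step for a first-order method, harmonic sequence
            r = (i + 1) / (i + 1 - k)              # ratio n_i / n_{i-k}
            T[i, k] = T[i, k - 1] + (T[i, k - 1] - T[i - 1, k - 1]) / (r - 1)
    return T[levels - 1, levels - 1]

f = lambda t, y: -y                                 # exact solution: exp(-t)
print(extrapolate(f, 0.0, 1.0, H=0.5), np.exp(-0.5))
```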

  20. Extrapolation methods theory and practice

    CERN Document Server

    Brezinski, C

    1991-01-01

    This volume is a self-contained, exhaustive exposition of the extrapolation methods theory, and of the various algorithms and procedures for accelerating the convergence of scalar and vector sequences. Many subroutines (written in FORTRAN 77) with instructions for their use are provided on a floppy disk in order to demonstrate to those working with sequences the advantages of the use of extrapolation methods. Many numerical examples showing the effectiveness of the procedures and a consequent chapter on applications are also provided, including some never-before-published results and applications.

  1. Fractional dynamics and the TeV regime of field theory

    Science.gov (United States)

    Goldfain, Ervin

    2007-04-01

    The description of complex dynamics in the TeV regime of field theory warrants the transition from ordinary calculus on smooth manifolds to fractional differentiation and integration. Starting from the principle of local scale invariance, we explore the spectrum of phenomena that is likely to emerge beyond the energy range of the standard model. We find that, in the deep ultraviolet region of field theory, (a) fractional dynamics in Minkowski space-time is equivalent to field theory in curved space-time; this result points to a natural integration of classical gravity into the framework of TeV physics; (b) the three gauge groups of the standard model are rooted in the topological concept of fractional dimension; this result suggests that gauge bosons and fermions are unified through a fundamentally different mechanism than the one advocated by supersymmetry; (c) fractional dynamics is the underlying source of parity violation in weak interactions and of the breaking of time-reversal invariance in processes involving neutral kaons. Note: this work is available at doi:10.1016/j.cnsns.2006.06.001
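
    For reference, fractional differentiation of the kind invoked here is commonly defined in the Riemann-Liouville form (a standard definition, quoted for orientation; the paper's specific operator choice may differ):

```latex
{}_{a}D_t^{\alpha} f(t) \;=\; \frac{1}{\Gamma(n-\alpha)}\,\frac{d^n}{dt^n}
\int_a^t \frac{f(\tau)}{(t-\tau)^{\alpha-n+1}}\, d\tau,
\qquad n-1 < \alpha < n .
```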

  2. Effects of diversity and procrastination in priority queuing theory: the different power law regimes.

    Science.gov (United States)

    Saichev, A; Sornette, D

    2010-01-01

    Empirical analyses show that after the update of a browser, or the publication of a software vulnerability, or the discovery of a cyber worm, the fraction of computers still using the older browser or software version, or not yet patched, or exhibiting worm activity decays as a power law approximately 1/t^α with 0 < α ≤ 1. We present a simple model, framed within priority queuing theory, of a target task which has the lowest priority compared to all other tasks that flow on the computer of an individual. We identify a "time deficit" control parameter β and a bifurcation to a regime where there is a nonzero probability for the target task to never be completed. The distribution of waiting time T until the completion of the target task has the power law tail approximately 1/t^{1/2}, resulting from a first-passage solution of an equivalent Wiener process. Taking into account a diversity of time deficit parameters in a population of individuals, the power law tail is changed into 1/t^α, with α ∈ (0.5, ∞), including the well-known case 1/t. We also study the effect of "procrastination," defined as the situation in which the target task may be postponed or delayed even after the individual has solved all other pending tasks. This regime provides an explanation for even slower apparent decay and longer persistence.

  3. Variational methods for field theories

    Energy Technology Data Exchange (ETDEWEB)

    Ben-Menahem, S.

    1986-09-01

    Four field theory models are studied: Periodic Quantum Electrodynamics (PQED) in (2 + 1) dimensions, free scalar field theory in (1 + 1) dimensions, the Quantum XY model in (1 + 1) dimensions, and the (1 + 1)-dimensional Ising model in a transverse magnetic field. The last three parts deal exclusively with variational methods; the PQED part involves mainly the path-integral approach. The PQED calculation results in a better understanding of the connection between electric confinement through monopole screening, and confinement through tunneling between degenerate vacua. This includes a better quantitative agreement for the string tensions in the two approaches. Free field theory is used as a laboratory for a new variational blocking-truncation approximation, in which the high-frequency modes in a block are truncated to wave functions that depend on the slower background modes (Born-Oppenheimer approximation). This "adiabatic truncation" method gives very accurate results for ground-state energy density and correlation functions. Various adiabatic schemes, with one variable kept per site and then two variables per site, are used. For the XY model, several trial wave functions for the ground state are explored, with an emphasis on the periodic Gaussian. A connection is established with the vortex Coulomb gas of the Euclidean path-integral approach. The approximations used are taken from the realms of statistical mechanics (mean field approximation, transfer-matrix methods) and of quantum mechanics (iterative blocking schemes). In developing blocking schemes based on continuous variables, problems due to the periodicity of the model were solved. Our results exhibit an order-disorder phase transition. The transfer-matrix method is used to find a good (non-blocking) trial ground state for the Ising model in a transverse magnetic field in (1 + 1) dimensions.

  4. Ph.D. Thesis: Chiral Effective Field Theory Beyond the Power-Counting Regime

    CERN Document Server

    Hall, Jonathan M M

    2011-01-01

    Novel techniques are presented, which identify the power-counting regime (PCR) of chiral effective field theory, and allow the use of lattice quantum chromodynamics results that extend outside the PCR. By analyzing the renormalization of low-energy coefficients of the chiral expansion of the nucleon mass, the existence of an optimal regularization scale is realized. The techniques developed for the nucleon mass renormalization are then applied to a test case: performing a chiral extrapolation without prior phenomenological bias. The robustness of the procedure for obtaining an optimal regularization scale and performing a reliable chiral extrapolation is confirmed. The procedure developed is then applied to the magnetic moment and the electric charge radius of the nucleon. The consistency of the results for the value of the optimal regularization scale provides strong evidence for the existence of an intrinsic energy scale in the nucleon-pion interaction.

  5. Biometrics Theory, Methods, and Applications

    CERN Document Server

    Boulgouris, N V; Micheli-Tzanakou, Evangelia

    2009-01-01

    An in-depth examination of the cutting edge of biometrics. This book fills a gap in the literature by detailing the recent advances and emerging theories, methods, and applications of biometric systems in a variety of infrastructures. Edited by a panel of experts, it provides comprehensive coverage of: multilinear discriminant analysis for biometric signal recognition; biometric identity authentication techniques based on neural networks; multimodal biometrics and design of classifiers for biometric fusion; feature selection and facial aging modeling for face recognition; geometrical and

  6. Nonstandard Methods in Measure Theory

    Directory of Open Access Journals (Sweden)

    Grigore Ciurea

    2014-01-01

    to the study of the extension of vector measures. Applications of our results lead to simple new proofs for theorems of classical measure theory. The novelty lies in the use of the principle of extension by continuity (for which we give a nonstandard proof) to obtain in a unified way some notable theorems which have been obtained by Fox, Brooks, Ohba, Diestel, and others. The methods of proof are quite different from those used by previous authors, and most of them are realized by means of nonstandard analysis.

  7. Bayes linear statistics, theory & methods

    CERN Document Server

    Goldstein, Michael

    2007-01-01

    Bayesian methods combine information available from data with any prior information available from expert knowledge. The Bayes linear approach follows this path, offering a quantitative structure for expressing beliefs, and systematic methods for adjusting these beliefs, given observational data. The methodology differs from the full Bayesian methodology in that it establishes simpler approaches to belief specification and analysis based around expectation judgements. Bayes Linear Statistics presents an authoritative account of this approach, explaining the foundations, theory, methodology, and practicalities of this important field. The text provides a thorough coverage of Bayes linear analysis, from the development of the basic language to the collection of algebraic results needed for efficient implementation, with detailed practical examples. The book covers: the importance of partial prior specifications for complex problems where it is difficult to supply a meaningful full prior probability specification...

  8. Extending the applicability of Redfield theories into highly non-Markovian regimes

    CERN Document Server

    Montoya-Castillo, Andrés; Reichman, David R

    2015-01-01

    We present a new, computationally inexpensive method for the calculation of reduced density matrix dynamics for systems with a potentially large number of subsystem degrees of freedom coupled to a generic bath. The approach consists of propagation of weak-coupling Redfield-like equations for the high frequency bath degrees of freedom only, while the low frequency bath modes are dynamically arrested but statistically sampled. We examine the improvements afforded by this approximation by comparing with exact results for the spin-boson model over a wide range of parameter space. The results from the method are found to dramatically improve Redfield dynamics in highly non-Markovian regimes, at a similar computational cost. Relaxation of the mode-freezing approximation via classical (Ehrenfest) evolution of the low frequency modes results in a dynamical hybrid method. We find that this Redfield-based dynamical hybrid approach, which is computationally more expensive than bare Redfield dynamics, yields only a marg...

  9. Oscillating Adriatic temperature and salinity regimes mapped using the Self-Organizing Maps method

    Science.gov (United States)

    Matić, Frano; Kovač, Žarko; Vilibić, Ivica; Mihanović, Hrvoje; Morović, Mira; Grbec, Branka; Leder, Nenad; Džoić, Tomislav

    2017-01-01

    This paper aims to document salinity and temperature regimes in the middle and south Adriatic Sea by applying the Self-Organizing Maps (SOM) method to the available long-term temperature and salinity series. The data were collected on a seasonal basis between 1963 and 2011 in two dense water collecting depressions, the Jabuka Pit and the Southern Adriatic Pit, and over the Palagruža Sill. Seasonality was removed prior to the analyses. Salinity regimes have been found to oscillate rapidly between low-salinity and high-salinity SOM solutions, ascribed to the advection of Western and Eastern Mediterranean waters, respectively. Transient salinity regimes normally lasted less than a season, while temperature transient regimes lasted longer. Salinity regimes prolonged their duration after the major basin-wide event, the Eastern Mediterranean Transient, in the early 1990s. A qualitative relationship between high-salinity regimes and dense water formation and dynamics has been documented. SOM-based analyses have a large capacity for classifying oscillating ocean regimes in a basin; in the case of the Adriatic Sea, such regime shifts, alongside climate forcing, are important drivers of biogeochemical changes that impact trophic relations, the appearance and abundance of alien organisms, fisheries, etc.
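
    A from-scratch sketch of the SOM training loop used for this kind of regime classification follows. It is illustrative only: the grid size, learning schedule, and stand-in data are arbitrary choices, not the paper's configuration.

```python
import numpy as np

# Train a small Self-Organizing Map on feature vectors (e.g., de-seasoned
# temperature/salinity records); each sample is then assigned to its
# best-matching unit, and the units play the role of discrete "regimes".
rng = np.random.default_rng(0)
X = rng.standard_normal((500, 4))            # placeholder for T/S feature vectors
gx, gy, dim = 3, 3, X.shape[1]
W = rng.standard_normal((gx, gy, dim))       # codebook vectors on a 3x3 grid
grid = np.stack(np.meshgrid(np.arange(gx), np.arange(gy), indexing="ij"), axis=-1)

n_iter, lr0, sigma0 = 2000, 0.5, 1.5
for it in range(n_iter):
    x = X[rng.integers(len(X))]
    dist = ((W - x) ** 2).sum(axis=-1)
    bmu = np.unravel_index(dist.argmin(), dist.shape)    # best-matching unit
    lr = lr0 * np.exp(-it / n_iter)                      # decaying learning rate
    sigma = sigma0 * np.exp(-it / n_iter)                # shrinking neighborhood
    nbh = np.exp(-((grid - np.array(bmu)) ** 2).sum(-1) / (2 * sigma**2))
    W += lr * nbh[..., None] * (x - W)                   # pull units toward x

labels = [np.unravel_index(((W - x) ** 2).sum(-1).argmin(), (gx, gy)) for x in X]
print(labels[:5])                                        # regime label per sample
```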

  10. An Abductive Theory of Scientific Method

    Science.gov (United States)

    Haig, Brian D.

    2005-01-01

    A broad theory of scientific method is sketched that has particular relevance for the behavioral sciences. This theory of method assembles a complex of specific strategies and methods that are used in the detection of empirical phenomena and the subsequent construction of explanatory theories. A characterization of the nature of phenomena is…

  11. Towards time-dependent current-density-functional theory in the non-linear regime.

    Science.gov (United States)

    Escartín, J M; Vincendon, M; Romaniello, P; Dinh, P M; Reinhard, P-G; Suraud, E

    2015-02-28

    Time-Dependent Density-Functional Theory (TDDFT) is a well-established theoretical approach to describe and understand irradiation processes in clusters and molecules. However, within the so-called adiabatic local density approximation (ALDA) to the exchange-correlation (xc) potential, TDDFT can show insufficiencies, particularly in violently dynamical processes. This is because within ALDA the xc potential is instantaneous and is a local functional of the density, which means that this approximation neglects memory effects and long-range effects. A way to go beyond ALDA is to use Time-Dependent Current-Density-Functional Theory (TDCDFT), in which the basic quantity is the current density rather than the density as in TDDFT. This has been shown to offer an adequate account of dissipation in the linear domain when the Vignale-Kohn (VK) functional is used. Here, we go beyond the linear regime and explore this formulation in the time domain. In this case, the equations become very involved, making the computation out of reach; we therefore propose an approximation to the VK functional that allows us to calculate the dynamics in real time while retaining most of the physics described by the VK functional. We apply this formulation to the calculation of the time-dependent dipole moment of Ca, Mg and Na2. Our results show trends similar to what was previously observed in model systems or within linear response. In the non-linear domain, our results show that relaxation times do not decrease with increasing deposited excitation energy, which sets limits on the practical use of TDCDFT in this domain of excitations.

  12. Basic methods of soliton theory

    CERN Document Server

    Cherednik, I

    1996-01-01

    In the 25 years of its existence, Soliton Theory has drastically expanded our understanding of "integrability" and contributed much to the reunification of Mathematics and Physics, in the range from deep algebraic geometry and modern representation theory to quantum field theory and optical transmission lines. The book is a systematic introduction to Soliton Theory with an emphasis on its background and algebraic aspects. It is the first one devoted to the general matrix soliton equations, which are of great importance for both the foundations and the applications. Differential algebra (local cons

  13. PE Metrics: Background, Testing Theory, and Methods

    Science.gov (United States)

    Zhu, Weimo; Rink, Judy; Placek, Judith H.; Graber, Kim C.; Fox, Connie; Fisette, Jennifer L.; Dyson, Ben; Park, Youngsik; Avery, Marybell; Franck, Marian; Raynes, De

    2011-01-01

    New testing theories, concepts, and psychometric methods (e.g., item response theory, test equating, and item bank) developed during the past several decades have many advantages over previous theories and methods. In spite of their introduction to the field, they have not been fully accepted by physical educators. Further, the manner in which…

  14. GNSS remote sensing theory, methods and applications

    CERN Document Server

    Jin, Shuanggen; Xie, Feiqin

    2014-01-01

    This book presents the theory and methods of GNSS remote sensing as well as its applications in the atmosphere, oceans, land and hydrology. It contains detailed theory and study cases to help the reader put the material into practice.

  15. Computational Methods and Function Theory

    CERN Document Server

    Saff, Edward; Salinas, Luis; Varga, Richard

    1990-01-01

    The volume is devoted to the interaction of modern scientific computation and classical function theory. Many problems in pure and more applied function theory can be tackled using modern computing facilities: numerically as well as in the sense of computer algebra. On the other hand, computer algorithms are often based on complex function theory, and dedicated research on their theoretical foundations can lead to great enhancements in performance. The contributions - original research articles, a survey and a collection of problems - cover a broad range of such problems.

  16. Straussian Grounded-Theory Method: An Illustration

    Science.gov (United States)

    Thai, Mai Thi Thanh; Chong, Li Choy; Agrawal, Narendra M.

    2012-01-01

    This paper demonstrates the benefits and application of the Straussian Grounded Theory method in conducting research in complex settings where parameters are poorly defined. It provides a detailed illustration of how this method can be used to build an internationalization theory. To be specific, this paper exposes readers to the behind-the-scenes work…

  17. English 450: Theories and Methods of Argument

    Science.gov (United States)

    Jones, Rebecca

    2008-01-01

    This article presents a course design of English 450: Theories and Methods of Argument. The course is an upper level course in the Writing concentration of B. A. in English and American Language and Literature at the University of Tennessee, Chattanooga, a metropolitan university in the South. At the 400 level, Theories and Methods of Argument is…

  18. Improved method for calculating neoclassical transport coefficients in the banana regime

    Energy Technology Data Exchange (ETDEWEB)

    Taguchi, M., E-mail: taguchi.masayoshi@nihon-u.ac.jp [College of Industrial Technology, Nihon University, Narashino 275-8576 (Japan)

    2014-05-15

    The conventional neoclassical moment method in the banana regime is improved by increasing the accuracy of the approximation to the linearized Fokker-Planck collision operator. This improved method is formulated for a multiple ion plasma in general tokamak equilibria. Explicit computation in a model magnetic field shows that the neoclassical transport coefficients can be accurately calculated over the full range of aspect ratio by the improved method. Some of the neoclassical transport coefficients for intermediate aspect ratios are found to deviate appreciably from those obtained by the conventional moment method. The differences between the transport coefficients obtained with these two methods are up to about 20%.

  19. Design theory methods and organization for innovation

    CERN Document Server

    Le Masson, Pascal; Hatchuel, Armand

    2017-01-01

    This textbook presents the core of recent advances in design theory and its implications for design methods and design organization. Providing a unified perspective on different design methods and approaches, from the most classic (systematic design) to the most advanced (C-K theory), it offers a unique and integrated presentation of traditional and contemporary theories in the field. Examining the principles of each theory, this guide utilizes numerous real life industrial applications, with clear links to engineering design, industrial design, management, economics, psychology and creativity. Containing a section of exams with detailed answers, it is useful for courses in design theory, engineering design and advanced innovation management. "Students and professors, practitioners and researchers in diverse disciplines, interested in design, will find in this book a rich and vital source for studying fundamental design methods and tools as well as the most advanced design theories that work in practice". Pro...

  20. Mathematical methods of electromagnetic theory

    CERN Document Server

    Friedrichs, Kurt O

    2014-01-01

    This text provides a mathematically precise but intuitive introduction to classical electromagnetic theory and wave propagation, with a brief introduction to special relativity. While written in a distinctive, modern style, Friedrichs manages to convey the physical intuition and 19th century basis of the equations, with an emphasis on conservation laws. Particularly striking features of the book include: (a) a mathematically rigorous derivation of the interaction of electromagnetic waves with matter, (b) a straightforward explanation of how to use variational principles to solve problems in el

  1. Going beyond The three worlds of welfare capitalism: regime theory and public health research.

    Science.gov (United States)

    Bambra, C

    2007-12-01

    International research on the social determinants of health has increasingly started to integrate a welfare state regimes perspective. Although this is to be welcomed, to date there has been an over-reliance on Esping-Andersen's The three worlds of welfare capitalism typology (1990). This is despite the fact that it has been subjected to extensive criticism and that there are in fact a number of competing welfare state typologies within the comparative social policy literature. The purpose of this paper is to provide public health researchers with an up-to-date overview of the welfare state regime literature so that it can be reflected more accurately in future research. It outlines The three worlds of welfare capitalism typology, and it presents the criticisms it received and an overview of alternative welfare state typologies. It concludes by suggesting new avenues of study in public health that could be explored by drawing upon this broader welfare state regimes literature.

  2. The Validity of Divergent Grounded Theory Method

    Directory of Open Access Journals (Sweden)

    Martin Nils Amsteus PhD

    2014-02-01

    The purpose of this article is to assess whether divergence of grounded theory method may be considered valid. A review of literature provides a basis for understanding and evaluating grounded theory. The principles and nature of grounded theory are synthesized along with theoretical and practical implications. It is deduced that for a theory to be truly grounded in empirical data, the method resulting in the theory should be the equivalent of pure induction. Therefore, detailed, specified, stepwise a priori procedures may be seen as unbidden or arbitrary. It is concluded that divergent grounded theory can be considered valid. The author argues that securing methodological transparency through the description of the actual principles and procedures employed, as well as tailoring them to the particular circumstances, is more important than adhering to predetermined stepwise procedures. A theoretical foundation is provided from which diverse theoretical developments and methodological procedures may be developed, judged, and refined based on their own merits.

  3. Simulation of Rarefied Gas Flow in Slip and Transitional Regimes by the Lattice Boltzmann Method

    Directory of Open Access Journals (Sweden)

    S Abdullah

    2010-07-01

    In this paper, a lattice Boltzmann method (LBM) based simulation of microscale flow has been carried out for various values of the Knudsen number. The details in determining the parameters critical for LBM applications in microscale flow are provided. Pressure distributions in the slip flow regime are compared with the analytical solution based on the Navier-Stokes equation with slip-velocity boundary condition. Satisfactory agreement has been achieved. Simulations are then extended to the transition regime (Kn = 0.15) and compared with the same analytical solution. The results show some deviation from the analytical solution due to the breakdown of the continuum assumption. From this study, we may conclude that the lattice Boltzmann method is an efficient approach for simulation of microscale flow.

  4. Q- and A-learning Methods for Estimating Optimal Dynamic Treatment Regimes

    CERN Document Server

    Schulte, Phillip J; Laber, Eric B; Davidian, Marie

    2012-01-01

    In clinical practice, physicians make a series of treatment decisions over the course of a patient's disease based on his/her baseline and evolving characteristics. A dynamic treatment regime is a set of sequential decision rules that operationalizes this process. Each rule corresponds to a key decision point and dictates the next treatment action among the options available, as a function of accrued information on the patient. Using data from a clinical trial or observational study, a key goal is estimating the optimal regime, that is, the regime that, if followed by the patient population, would yield the most favorable outcome on average. Q-learning and advantage (A-)learning are two main approaches for this purpose. We provide a detailed account of Q- and A-learning and study systematically the performance of these methods. The methods are illustrated using data from a study of depression.
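
    In its simplest form, Q-learning for treatment regimes is backward induction: fit a regression model for the outcome at the last decision point, maximize it over the last treatment, and use that maximized value as a pseudo-outcome when fitting the previous stage. The sketch below implements this recipe for a simulated two-stage study with linear working models; the data-generating process and model forms are illustrative assumptions, not the depression-study analysis.

```python
import numpy as np

# Schematic two-stage Q-learning for a dynamic treatment regime.

rng = np.random.default_rng(2)
n = 5000
x1 = rng.normal(size=n)                    # baseline covariate
a1 = rng.integers(0, 2, n)                 # randomized stage-1 treatment
x2 = 0.5 * x1 + 0.3 * a1 + rng.normal(size=n) * 0.5
a2 = rng.integers(0, 2, n)                 # randomized stage-2 treatment
y = x2 + a2 * (0.8 - x2) + a1 * 0.2 + rng.normal(size=n) * 0.3

def fit_ols(features, target):
    beta, *_ = np.linalg.lstsq(features, target, rcond=None)
    return beta

# Stage 2: regress Y on (1, x2, a2, a2*x2), then maximize over a2
f2 = lambda x, a: np.column_stack([np.ones_like(x), x, a, a * x])
b2 = fit_ols(f2(x2, a2), y)
q2 = lambda x, a: f2(x, np.full_like(x, a)) @ b2
v2 = np.maximum(q2(x2, 0), q2(x2, 1))      # optimal stage-2 value (pseudo-outcome)

# Stage 1: regress the pseudo-outcome V2 on (1, x1, a1, a1*x1)
f1 = lambda x, a: np.column_stack([np.ones_like(x), x, a, a * x])
b1 = fit_ols(f1(x1, a1), v2)
rule1 = lambda x: (f1(x, np.ones_like(x)) @ b1
                   > f1(x, np.zeros_like(x)) @ b1).astype(int)
print("stage-1 rule at x1 = -1, 0, 1:", rule1(np.array([-1.0, 0.0, 1.0])))
```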

  5. Linear methods in band theory

    DEFF Research Database (Denmark)

    Andersen, O. Krogh

    1975-01-01

    Two approximate methods for solving the band-structure problem in an efficient and physically transparent way are presented and discussed in detail. The variational principle for the one-electron Hamiltonian is used in both schemes, and the trial functions are linear combinations of energy-independent augmented plane waves (APW) and muffin-tin orbitals (MTO), respectively. The secular equations are therefore eigenvalue equations, linear in energy. The trial functions are defined with respect to a muffin-tin (MT) potential, and the energy bands depend on the potential in the spheres through potential parameters, which specify the boundary conditions on a single MT or atomic sphere in the most convenient way. This method is very well suited for self-consistent calculations. The empty-lattice test is applied to the linear-MTO method and the free-electron energy bands are accurately reproduced. Finally, it is shown how...

  6. Melnikov's method in String Theory

    CERN Document Server

    Asano, Yuhma; Yoshida, Kentaroh

    2016-01-01

    Melnikov's method is an analytical way to show the existence of classical chaos generated by a Smale horseshoe. It is a powerful technique, though its applicability is somewhat limited. In this paper, we present a solution of type IIB supergravity to which Melnikov's method is applicable. This is a brane-wave type deformation of the AdS$_5\times$S$^5$ background. By employing two reduction ansätze, we study two types of coupled pendulum-oscillator systems. Then the Melnikov function is computed for each of the systems by following the standard way of Holmes and Marsden, and the existence of chaos is shown analytically.

  7. Lattice methods and effective field theory

    CERN Document Server

    Nicholson, Amy N

    2016-01-01

    Lattice field theory is a non-perturbative tool for studying properties of strongly interacting field theories, which is particularly amenable to numerical calculations and has quantifiable systematic errors. In these lectures we apply these techniques to nuclear Effective Field Theory (EFT), a non-relativistic theory for nuclei involving the nucleons as the basic degrees of freedom. The lattice formulation of [1,2] for so-called pionless EFT is discussed in detail, with portions of code included to aid the reader in code development. Systematic and statistical uncertainties of these methods are discussed at length, and extensions beyond pionless EFT are introduced in the final Section.

  8. Time-dependent density-functional theory in the projector augmented-wave method

    DEFF Research Database (Denmark)

    Walter, Michael; Häkkinen, Hannu; Lehtovaara, Lauri

    2008-01-01

    We present the implementation of the time-dependent density-functional theory both in linear-response and in time-propagation formalisms using the projector augmented-wave method in real-space grids. The two technically very different methods are compared in the linear-response regime where we...

  9. On the applicability of the level set method beyond the flamelet regime in thermonuclear supernova simulations

    CERN Document Server

    Schmidt, W

    2007-01-01

    In thermonuclear supernovae, intermediate mass elements are mostly produced by distributed burning, provided that a deflagration to detonation transition does not set in. Apart from the two-dimensional study by Roepke & Hillebrandt (2005), very little attention has been paid so far to the correct treatment of this burning regime in numerical simulations. In this article, the physics of distributed burning is reviewed from the literature on terrestrial combustion, and differences which arise from the very small Prandtl numbers encountered in degenerate matter are pointed out. Then it is shown that the level set method continues to be applicable beyond the flamelet regime as long as the width of the flame brush does not become smaller than the numerical cutoff length. Implementing this constraint with a simple parameterisation of the effect of turbulence on the energy generation rate, the production of intermediate mass elements increases substantially compared to previous simulations, in which the burning...

  10. Theories of multiple equilibria and weather regimes : A critical reexamination. II - Baroclinic two-layer models

    Science.gov (United States)

    Cehelsky, Priscilla; Tung, Ka Kit

    1987-01-01

    Previous results based on low- and intermediate-order truncations of the two-layer model suggest the existence of multiple equilibria and/or multiple weather regimes for the extratropical large-scale flow. The importance of transient waves at synoptic scales in organizing the large-scale flow and in maintaining weather regimes was emphasized. The result shows that the multiple equilibria/weather regimes present in the lower-order models examined disappear when a sufficient number of modes are kept in the spectral expansion of the solution to the governing partial differential equations. Much of the chaotic behavior of the large-scale flow that is present in intermediate-order models is now found to be spurious. Physical reasons for the drastic modification are offered. A peculiarity in the formulation of most existing two-layer models is noted that also tends to exaggerate the importance of baroclinic processes and increase the degree of unpredictability of the large-scale flow.

  11. Hamiltonian methods in the theory of solitons

    CERN Document Server

    Fadeev, Ludwig

    1987-01-01

    The main characteristic of this classic exposition of the inverse scattering method and its applications to soliton theory is its consistent Hamiltonian approach to the theory. The nonlinear Schrodinger equation is considered as a main example, forming the first part of the book. The second part examines such fundamental models as the sine-Gordon equation and the Heisenberg equation, the classification of integrable models and methods for constructing their solutions.

  12. Separable programming theory and methods

    CERN Document Server

    Stefanov, Stefan M

    2001-01-01

    In this book, the author considers separable programming and, in particular, one of its important cases - convex separable programming. Some general results are presented, and techniques for approximating the separable problem by linear programming and dynamic programming are considered. Convex separable programs subject to inequality/equality constraint(s) and bounds on variables are also studied, and iterative algorithms of polynomial complexity are proposed. As an application, these algorithms are used in the implementation of stochastic quasigradient methods for some separable stochastic programs. Numerical approximation with respect to the l1 and l∞ norms, as a convex separable nonsmooth unconstrained minimization problem, is considered as well. Audience: Advanced undergraduate and graduate students, mathematical programming and operations research specialists.

  13. Theory of the ultrafast mode-locked GaN lasers in a large-signal regime

    CERN Document Server

    Smetanin, Igor V; Boiko, Dmitri L

    2011-01-01

    An analytical theory of high-power passively mode-locked lasers with a slow absorber is developed. In contrast to previous treatments, our model is valid at pulse energies well exceeding the saturation energy of the absorber. This is achieved by solving the mode-locking master equation in the pulse energy-domain representation. The performance of a monolithic sub-picosecond blue-violet GaN mode-locked diode laser in the high-power operation regime is analyzed using the developed approach.

  14. A Novel Method for Analyzing and Interpreting GCM Results Using Clustered Climate Regimes

    Science.gov (United States)

    Hoffman, F. M.; Hargrove, W. W.; Erickson, D. J.; Oglesby, R. J.

    2003-12-01

    A high-performance parallel clustering algorithm has been developed for analyzing and comparing climate model results and long time series climate measurements. Designed to identify biases and detect trends in disparate climate change data sets, this tool combines and simplifies large temporally-varying data sets from atmospheric measurements to multi-century climate model output. Clustering is a statistical procedure which provides an objective method for grouping multivariate conditions into a set of states or regimes within a given level of statistical tolerance. The groups or clusters--statistically defined across space and through time--possess centroids which represent the synoptic conditions of observations or model results contained in each state no matter when or where they occurred. The clustering technique was applied to five business-as-usual (BAU) scenarios from the Parallel Climate Model (PCM). Three fields of significance (surface temperature, precipitation, and soil moisture) were clustered from 2000 through 2098. Our analysis shows an increase in spatial area occupied by the cluster or climate regime which typifies desert regions (i.e., an increase in desertification) and a decrease in the spatial area occupied by the climate regime typifying winter-time high latitude permafrost regions. The same analysis subsequently applied to the ensemble as a whole demonstrates the consistency and variability of trends from each ensemble member. The patterns of cluster changes can be used to show predicted variability in climate on global and continental scales. Novel three-dimensional phase space representations of these climate regimes show the portion of this phase space occupied by the land surface at all points in space and time. Any single spot on the globe will exist in one of these climate regimes at any single point in time, and by incrementing time, that same spot will trace out a trajectory or orbit among these climate regimes in phase space. When a
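
    At its heart, the procedure described above assigns every (space, time) sample to the nearest of k multivariate centroids, so that the centroids become the "climate regimes" and trends appear as shifts in regime occupancy. A minimal serial k-means sketch of that idea follows; the synthetic standardized data, the choice of k, and the plain Lloyd iteration are illustrative stand-ins for the paper's high-performance parallel algorithm and PCM fields.

```python
import numpy as np

# Sketch: group multivariate climate states (T, precipitation, soil moisture)
# into k regimes with k-means. Data are synthetic, standardized triples.

rng = np.random.default_rng(3)
states = rng.normal(size=(10000, 3))       # fake (T, P, SM) samples

k = 5
centroids = states[rng.choice(len(states), k, replace=False)]
for _ in range(50):                        # Lloyd iterations
    d = ((states[:, None, :] - centroids[None]) ** 2).sum(-1)
    labels = d.argmin(axis=1)
    new = np.array([states[labels == j].mean(axis=0) if (labels == j).any()
                    else centroids[j] for j in range(k)])
    if np.allclose(new, centroids):
        break
    centroids = new

# each centroid is the "synoptic condition" of one regime; trends can then be
# tracked as changes in the area (count) occupied by each label over time
print(np.bincount(labels, minlength=k))
```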

  15. Numerical modeling of microchannel gas flows in the transition flow regime via cascaded lattice Boltzmann method

    CERN Document Server

    Liu, Qing

    2016-01-01

    As an accurate and computationally efficient mesoscopic numerical method, the lattice Boltzmann (LB) method has achieved great success in simulating microscale rarefied gas flows. In this paper, an LB method based on the cascaded collision operator is presented to simulate microchannel gas flows in the transition flow regime. The Bosanquet-type effective viscosity is incorporated into the cascaded lattice Boltzmann (CLB) method to account for the rarefaction effects. In order to obtain accurate simulations and match the Bosanquet-type effective viscosity, the combined bounce-back/specular-reflection scheme with a modified second-order slip boundary condition is employed in the CLB method. The present method is applied to study gas flow in a microchannel with periodic boundary condition and gas flow in a long microchannel with pressure boundary condition over a wide range of Knudsen numbers. The predicted results, including the velocity profile, the mass flow rate, and the non-linear pressure deviatio...
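
    The Bosanquet-type effective viscosity referred to above is commonly written in the rarefied-gas literature as mu_e = mu_0 / (1 + a*Kn), so momentum transport is damped as the Knudsen number grows; a tiny sketch follows. The rarefaction coefficient a = 2 is a typical literature value assumed here, not a number quoted in the abstract.

```python
def bosanquet_effective_viscosity(mu0, kn, a=2.0):
    """Bosanquet-type effective viscosity mu_e = mu0 / (1 + a * Kn).

    mu0 : gas dynamic viscosity in the continuum limit
    kn  : local Knudsen number
    a   : rarefaction coefficient (a ~ 2 is a common literature choice;
          the value used with the cascaded LB model is not given here)
    """
    return mu0 / (1.0 + a * kn)

# viscosity drops as rarefaction (Kn) grows, mimicking reduced momentum
# transport in the transition flow regime
print(bosanquet_effective_viscosity(1.8e-5, 0.5))
```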

  16. Effective field theory for a p-wave superconductor in the subgap regime

    Science.gov (United States)

    Hansson, T. H.; Kvorning, T.; Nair, V. P.; Sreejith, G. J.

    2015-02-01

    We construct an effective field theory for the 2d spinless p-wave paired superconductor that faithfully describes the topological properties of the bulk state, and also provides a model for the subgap states at vortex cores and edges. In particular, it captures the topologically protected zero modes and has the correct ground-state degeneracy on the torus. We also show that our effective field theory becomes a topological field theory in a well defined scaling limit and that the vortices have the expected non-Abelian braiding statistics.

  17. Domain walls, fusion rules, and conformal field theory in the quantum Hall regime.

    Science.gov (United States)

    Ardonne, Eddy

    2009-05-08

    We provide a simple way to obtain the fusion rules associated with elementary quasiholes over quantum Hall wave functions, in terms of domain walls. The knowledge of the fusion rules is helpful in the identification of the underlying conformal field theory describing the wave functions. We show that, for a certain two-parameter family (k,r) of wave functions, the fusion rules are those of su(r)_k. In addition, we give an explicit conformal field theory construction of these states, based on the M_k(k+1,k+r) "minimal" theories. For r=2, these states reduce to the Read-Rezayi states. The "Gaffnian" wave function is the prototypical example for r>2, in which case the conformal field theory is nonunitary.

  18. Introducing Legal Method When Teaching Stakeholder Theory

    DEFF Research Database (Denmark)

    Buhmann, Karin

    2015-01-01

    Governments are particularly salient stakeholders for business ethics. They act on societal needs and social expectations, and have the political and legal powers to restrict or expand the economic freedoms of business as well as the legitimacy and often urgency to do so. We draw on two examples: the Business & Human Rights regime from a UN Global Compact perspective; and mandatory CSR reporting. Supplying integrated teaching notes and generalising on the examples, we explain how legal method may help students of business ethics, organisation and management – future managers – in their analysis...

  20. Testing the limits of quasi-geostrophic theory: application to observed laboratory flows outside the quasi-geostrophic regime

    Science.gov (United States)

    Williams, Paul; Read, Peter; Haine, Thomas

    2010-05-01

    We compare laboratory observations of equilibrated baroclinic waves in the rotating two-layer annulus with numerical simulations from a quasi-geostrophic model. The laboratory experiments lie well outside the quasi-geostrophic regime: the Rossby number reaches unity; the depth-to-width aspect ratio is large; and the fluid contains ageostrophic inertia-gravity waves. Despite being formally inapplicable, the quasi-geostrophic model captures the laboratory flows reasonably well. The model displays several systematic biases, which are consequences of its treatment of boundary layers and neglect of interfacial surface tension, and which may be explained without invoking the dynamical effects of the moderate Rossby number, large aspect ratio or inertia-gravity waves. We conclude that quasi-geostrophic theory appears to continue to apply well outside its formal bounds. This is an unexpected and intriguing result that could not have been predicted from the existing literature. It is also potentially useful, for example by permitting the use of a low-order quasi-geostrophic model to easily map out the bifurcation structure - which would be very difficult with a primitive equations model - followed by the use of a primitive equations model for more quantitative agreement in specific cases. Reference: Williams, P. D., Read, P. L., and Haine, T. W. N. (2010). Testing the limits of quasi-geostrophic theory: application to observed laboratory flows outside the quasi-geostrophic regime. Journal of Fluid Mechanics, in press.

  1. Quantitative methods in classical perturbation theory.

    Science.gov (United States)

    Giorgilli, A.

    Poincaré proved that the series commonly used in celestial mechanics are typically non-convergent, although their usefulness is generally evident. Recent work in perturbation theory has clarified this observation of Poincaré, showing that the series of perturbation theory, although non-convergent in general, nevertheless furnish valuable approximations to the true orbits for a very long time, which in some practical cases can be comparable with the age of the universe. The aim of the paper is to introduce the quantitative methods of perturbation theory that allow one to obtain such powerful results.

  2. FCJ-117 Four Regimes of Entropy: For an Ecology of Genetics and Biomorphic Media Theory

    Directory of Open Access Journals (Sweden)

    Matteo Pasquinelli

    2011-04-01

    This essay approaches the definition of media ecology from two opposite perspectives. On one hand, it tests the homogeneity of the biomimetic continuum, which supposes the mediascape as an extension of the biological realm. On the other, it analyses the biodigital continuum, which takes, for instance, digital code as a universal grammar for genetic code. The problematic relation between biological and technological paradigms and between linguistics and genetics is clarified with reference to Erwin Schrödinger’s concept of negative entropy. Four different regimes of entropic density are then suggested to describe the physical, biological, technological, and cognitive domains. On the basis of this ‘energetic geology’, a new ecology of machines is proposed.

  3. Risk assessment theory, methods, and applications

    CERN Document Server

    Rausand, Marvin

    2011-01-01

    With its balanced coverage of theory and applications along with standards and regulations, Risk Assessment: Theory, Methods, and Applications serves as a comprehensive introduction to the topic. The book serves as a practical guide to current risk analysis and risk assessment, emphasizing the possibility of sudden, major accidents across various areas of practice from machinery and manufacturing processes to nuclear power plants and transportation systems. The author applies a uniform framework to the discussion of each method, setting forth clear objectives and descriptions, while also shedding light on applications, essential resources, and advantages and disadvantages. Following an introduction that provides an overview of risk assessment, the book is organized into two sections that outline key theory, methods, and applications. * Introduction to Risk Assessment defines key concepts and details the steps of a thorough risk assessment along with the necessary quantitative risk measures. Chapters outline...

  4. Wave theories of non-laminar charged particle beams: from quantum to thermal regime

    CERN Document Server

    Fedele, Renato; Jovanovic, Dusan; De Nicola, Sergio; Ronsivalle, Concetta

    2013-01-01

    The standard classical description of non-laminar charged particle beams in the paraxial approximation is extended to the context of two wave theories. The first theory is the so-called Thermal Wave Model (TWM), which interprets the paraxial thermal spreading of the beam particles as the analog of quantum diffraction. The other theory, hereafter called the Quantum Wave Model (QWM), takes into account the individual quantum nature of each beam particle (uncertainty principle and spin) and provides a collective description of the beam transport in the presence of quantum paraxial diffraction. QWM can be applied to beams that are sufficiently cold to allow the particles to manifest their individual quantum nature but sufficiently warm to keep the single-particle wave functions from overlapping. In both theories, the beam transport in plasmas or in vacuo is governed by fully similar sets of nonlinear and nonlocal equations, where in the case of TWM the Compton wavelength (fun...

  5. Shallow lakes theory revisited: various alternative regimes driven by climate, nutrients, depth and lake size

    NARCIS (Netherlands)

    Scheffer, M.; Nes, van E.H.

    2007-01-01

    Shallow lakes have become the archetypical example of ecosystems with alternative stable states. However, since the early conception of that theory, the image of ecosystem stability has been elaborated for shallow lakes far beyond the simple original model. After discussing how spatial heterogeneity

  6. Accuracy verification methods theory and algorithms

    CERN Document Server

    Mali, Olli; Repin, Sergey

    2014-01-01

    The importance of accuracy verification methods was understood at the very beginning of the development of numerical analysis. Recent decades have seen a rapid growth of results related to adaptive numerical methods and a posteriori estimates. However, in this important area there often exists a noticeable gap between mathematicians creating the theory and researchers developing applied algorithms that could be used in engineering and scientific computations for guaranteed and efficient error control.   The goals of the book are to (1) give a transparent explanation of the underlying mathematical theory in a style accessible not only to advanced numerical analysts but also to engineers and students; (2) present detailed step-by-step algorithms that follow from a theory; (3) discuss their advantages and drawbacks, areas of applicability, give recommendations and examples.

  7. Density functional theory for molecular multiphoton ionization in the perturbative regime.

    Science.gov (United States)

    Toffoli, Daniele; Decleva, Piero

    2012-10-07

    A general implementation of the lowest nonvanishing order perturbation theory for the calculation of molecular multiphoton ionization cross sections is proposed in the framework of density functional theory. Bound and scattering wave functions are expanded in a multicentric basis set, and advantage is taken of the full molecular point group symmetry, thus enabling the application of the formalism to medium-size molecules. Multiphoton ionization cross sections and angular asymmetry parameters have been calculated for the two- and four-photon ionization of the H_2^+ molecule, for linear and circular light polarizations. Both fixed and random orientations of the target molecule have been considered. To demonstrate the efficiency of the proposed methodology, the two-photon cross section and angular asymmetry parameters for the HOMO and HOMO-1 orbital ionization of benzene are also presented.

  8. Extended theory of the Taylor problem in the plasmoid-unstable regime

    Energy Technology Data Exchange (ETDEWEB)

    Comisso, L., E-mail: luca.comisso@polito.it; Grasso, D. [Dipartimento Energia, Politecnico di Torino, Corso Duca degli Abruzzi 24, 10129 Torino, Italy and Istituto dei Sistemi Complessi - CNR, Via dei Taurini 19, 00185 Roma (Italy); Waelbroeck, F. L. [Institute for Fusion Studies, The University of Texas at Austin, Austin, Texas 78712-1203 (United States)

    2015-04-15

    A fundamental problem of forced magnetic reconnection has been solved taking into account the plasmoid instability of thin reconnecting current sheets. In this problem, the reconnection is driven by a small amplitude boundary perturbation in a tearing-stable slab plasma equilibrium. It is shown that the evolution of the magnetic reconnection process depends on the external source perturbation and the microscopic plasma parameters. Small perturbations lead to a slow nonlinear Rutherford evolution, whereas larger perturbations can lead to either a stable Sweet-Parker-like phase or a plasmoid phase. An expression for the threshold perturbation amplitude required to trigger the plasmoid phase is derived, as well as an analytical expression for the reconnection rate in the plasmoid-dominated regime. Visco-resistive magnetohydrodynamic simulations complement the analytical calculations. The plasmoid formation plays a crucial role in allowing fast reconnection in a magnetohydrodynamical plasma, and the presented results suggest that it may occur and have profound consequences even if the plasma is tearing-stable.

  9. Extension of the classical theory of crystallization to non-isothermal regimes: Application to nanocrystallization processes

    Energy Technology Data Exchange (ETDEWEB)

    Blazquez, J.S., E-mail: jsebas@us.es [Departamento de Fisica de la Materia Condensada, Instituto de Ciencia de Materiales, CSIC Universidad de Sevilla, Apartado 1065, 41080 Sevilla (Spain); Borrego, J.M.; Conde, C.F.; Conde, A. [Departamento de Fisica de la Materia Condensada, Instituto de Ciencia de Materiales, CSIC Universidad de Sevilla, Apartado 1065, 41080 Sevilla (Spain); Lozano-Perez, S. [Department of Materials, University of Oxford, Parks Road, Oxford OX1 3PH (United Kingdom)

    2012-12-15

    Highlights: • Non-isothermal kinetics is easily analyzed using the present approach. • Local Avrami exponents are obtained for nanocrystallization in a wide range. • Results on nanocrystallization are explained in the frame of the limited-growth approach. • Deviations from isokinetic behavior are analyzed for two different multiple processes. - Abstract: The non-isothermal kinetics of primary crystallization processes is studied from numerically generated curves, and the predictions have been tested in several nanocrystallization processes. Single processes and transformations involving two overlapped processes in a non-isothermal regime have been generated, and deviations from isokinetic behavior are found when the overlapped processes have different activation energies. In the case of overlapped processes competing for the same type of atoms, the heating rate dependence of the obtained Avrami exponent can supply information on the activation energies of the individual processes. The application to experimental data of nanocrystallization processes is consistent with a limited-growth approximation. In the case of preexisting crystallites in the as-cast samples, predictions on the heating rate dependence of the obtained Avrami exponents of multiple processes have been confirmed.

  10. Molecular Hanle effect in the Paschen-Back regime: theory and application

    Science.gov (United States)

    Shapiro, A. I.; Berdyugina, S. V.; Fluri, D. M.; Stenflo, J. O.

    The second solar spectrum resulting from coherent scattering is a main tool for diagnostics of turbulent magnetic fields on the Sun. Scattering on diatomic molecules plays an important role in forming this spectrum and even dominates in some spectral regions. In a magnetic field, electronic states of a molecule are often perturbed via the Paschen-Back effect. Sometimes this perturbation can completely change the spectrum, not only quantitatively but even qualitatively. Here we calculate molecular scattering properties taking into account the Paschen-Back effect. We calculate the Mueller matrix for coherent scattering at diatomic molecules in the intermediate Hund's case (a-b) and look for the effects that can be caused by the Paschen-Back effect. We have found a considerable deviation from the Zeeman regime and discuss here the quantitative and qualitative effects on observed polarization signals for the CN B^2Σ - X^2Σ system as an example. We show an application of the Hanle effect for the interpretation of observations of

  11. Time-dependent density functional theory of open quantum systems in the linear-response regime.

    Science.gov (United States)

    Tempel, David G; Watson, Mark A; Olivares-Amaya, Roberto; Aspuru-Guzik, Alán

    2011-02-21

    Time-dependent density functional theory (TDDFT) has recently been extended to describe many-body open quantum systems evolving under nonunitary dynamics according to a quantum master equation. In the master equation approach, electronic excitation spectra are broadened and shifted due to relaxation and dephasing of the electronic degrees of freedom by the surrounding environment. In this paper, we develop a formulation of TDDFT linear-response theory (LR-TDDFT) for many-body electronic systems evolving under a master equation, yielding broadened excitation spectra. This is done by mapping an interacting open quantum system onto a noninteracting open Kohn-Sham system yielding the correct nonequilibrium density evolution. A pseudoeigenvalue equation analogous to the Casida equations of the usual LR-TDDFT is derived for the Redfield master equation, yielding complex energies and Lamb shifts. As a simple demonstration, we calculate the spectrum of a C^2+ atom including natural linewidths, by treating the electromagnetic field vacuum as a photon bath. The performance of an adiabatic exchange-correlation kernel is analyzed and a first-order frequency-dependent correction to the bare Kohn-Sham linewidth based on the Görling-Levy perturbation theory is calculated.

  12. Method of computation of energies in the fractional quantum Hall effect regime

    Directory of Open Access Journals (Sweden)

    M.A. Ammar

    2016-09-01

    In a previous work, we reported exact results for the energies of the ground state in the fractional quantum Hall effect (FQHE) regime for systems with up to N_{e}=6 electrons at the filling factor ν=1/3 by using the method of complex polar coordinates. In this work, we display interesting computational details of the previous calculation and extend the calculation to N_{e}=7 electrons at ν=1/3. Moreover, similar exact results are derived at the filling ν=1/5 for systems with up to N_{e}=6 electrons. The results that we obtained by analytical calculation are in good agreement with analogous ones derived by the Monte Carlo method in a previous work.

  13. A method to characterize the different extreme waves for islands exposed to various wave regimes: a case study devoted to Reunion Island

    Directory of Open Access Journals (Sweden)

    S. Lecacheux

    2012-07-01

    This paper outlines a new approach devoted to the analysis of extreme waves in the presence of several wave regimes. It entails discriminating the different wave regimes from offshore wave data using classification algorithms, before conducting the extreme wave analysis for each regime separately. The concept is applied to the pilot site of Reunion Island, which is affected by three main wave regimes: southern waves, trade-wind waves and cyclonic waves. Several extreme wave scenarios are determined for each regime, based on real historical cases (for cyclonic waves) and extreme value analysis (for non-cyclonic waves). For each scenario, the nearshore wave characteristics are modelled all around Reunion Island, and the linear theory equations are used to back-calculate the equivalent deep-water wave characteristics for each portion of the coast. The relative exposure of the coastline to the extreme waves of each regime is determined by comparing the equivalent deep-water wave characteristics.

    This method provides a practical framework to perform an analysis of extremes within a complex environment presenting several sources of extreme waves. First, at a particular coastal location, it allows for inter-comparison between various kinds of extreme waves that are generated by different processes and that may occur at different periods of the year. Then, it enables us to analyse the alongshore variability in wave exposure, which is a good indicator of potential extreme runup values. For the case of Reunion Island, cyclonic waves are dominant offshore around the island, with equivalent deep-water wave heights up to 18 m for the northern part. Nevertheless, due to nearshore wave refraction, southern waves may become as energetic as cyclonic waves on the western part of the island and induce similar impacts in terms of runup and submersion. This method can be easily transposed to other case studies and can be adapted, depending on the data
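
    The back-calculation step relies on textbook linear wave theory: conserving the energy flux H^2 c_g between a nearshore point and deep water gives H_0 = H sqrt(c_g / c_g0). A possible sketch is shown below; the fixed-point dispersion solver, the neglect of refraction and breaking (only shoaling is undone), and the example values are simplifying assumptions of this illustration, not of the paper.

```python
import numpy as np

# Recover an "equivalent deep-water" wave height from a nearshore height and
# period by conserving the linear-theory energy flux H^2 * cg.

g = 9.81

def wavenumber(T, h, n_iter=100):
    """Solve the dispersion relation omega^2 = g k tanh(k h) (fixed point)."""
    omega = 2 * np.pi / T
    k = omega ** 2 / g                 # deep-water first guess
    for _ in range(n_iter):            # adequate for intermediate depths
        k = omega ** 2 / (g * np.tanh(k * h))
    return k

def group_speed(T, h):
    k = wavenumber(T, h)
    c = 2 * np.pi / (T * k)            # phase speed
    n = 0.5 * (1 + 2 * k * h / np.sinh(2 * k * h))
    return n * c

def equivalent_deepwater_height(H, T, h):
    cg = group_speed(T, h)
    cg0 = g * T / (4 * np.pi)          # deep-water group speed
    return H * np.sqrt(cg / cg0)       # from H^2 cg = H0^2 cg0

print(equivalent_deepwater_height(H=4.0, T=14.0, h=30.0))
```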

  14. Auroral Radio Emission from Late L and T Dwarfs: A New Constraint on Dynamo Theory in the Substellar Regime

    Science.gov (United States)

    Kao, Melodie M.; Hallinan, Gregg; Pineda, J. Sebastian; Escala, Ivanna; Burgasser, Adam; Bourke, Stephen; Stevenson, David

    2016-02-01

    We have observed six late L and T dwarfs with the Karl G. Jansky Very Large Array (VLA) to investigate the presence of highly circularly polarized radio emission, associated with large-scale auroral currents. Previous surveys encompassing ∼60 L6 or later targets have yielded only one detection. Our sample includes the previously detected T6.5 dwarf 2MASS 10475385+2124234, as well as five new targets selected for the presence of Hα emission and/or optical infrared photometric variability, which are possible manifestations of auroral activity. We detect 2MASS 10475385+2124234, as well as four of the five targets in our biased sample, including the strong IR-variable source SIMP J01365662+0933473 and bright Hα emitter 2MASS 12373919+6526148, reinforcing the possibility that activity at these disparate wavelengths is related. The radio emission frequency corresponds to a precise determination of the lower-bound magnetic field strength near the surface of each dwarf, and this new sample provides robust constraints on dynamo theory in the low-mass brown dwarf regime. Magnetic fields ≳ 2.5 kG are confirmed for five of six targets. Our results provide tentative evidence that the dynamo operating in this mass regime may be inconsistent with predicted values from a recently proposed model. Further observations at higher radio frequencies are essential for verifying this assertion.
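
    The field strengths quoted above follow from the electron cyclotron condition: emission at frequency ν requires a local field of roughly B ≈ ν / (2.8 MHz per gauss), so any detection sets a lower bound on the near-surface field. A small sketch of that conversion follows; the 7 GHz example is an assumed, typical VLA observing frequency, not a value taken from the abstract.

```python
# Electron cyclotron condition nu_c = e * B / (2 * pi * m_e), i.e. about
# 2.8 MHz per gauss: detection at frequency nu implies a field of at least
# nu / 2.8 MHz gauss somewhere in the emitting region.

MHZ_PER_GAUSS = 2.7992  # e / (2 pi m_e), in MHz per gauss

def min_surface_field_kG(obs_freq_GHz):
    """Lower-bound field (kG) implied by a detection at obs_freq_GHz."""
    return obs_freq_GHz * 1e3 / MHZ_PER_GAUSS / 1e3

# an assumed 7 GHz detection already implies a field of roughly 2.5 kG,
# consistent with the bound quoted in the abstract
print(round(min_surface_field_kG(7.0), 2))
```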

  15. The Swedish School and macroeconomic theory: Reflections on a small open economy operating within a regime of flexible exchanges

    Directory of Open Access Journals (Sweden)

    D. TROPEANO

    2013-10-01

    The Swedish School, with its representatives Ohlin, Hammarskjöld and Lindahl, made important contributions to the economic theory of the open economy, even if such contributions have never been at the centre of attention of economists, probably owing to their obscure language style. In particular, it had a vision of the working of an open economy that was completely different from post-war Keynesian orthodoxy. The exchange rate regime does not isolate a small economy from the repercussions of events that occur in international financial and goods markets. The other major assumption of open economy macroeconomics was the independence of monetary policy. The Keynesian models of the 1950s included only external money. By contrast, the Swedes considered the credit system and the working of international banks.

  16. Chaos and Energy Spreading for Time-Dependent Hamiltonians, and the Various Regimes in the Theory of Quantum Dissipation

    Science.gov (United States)

    Cohen, Doron

    2000-08-01

    We take the first steps toward a generic theory for energy spreading and quantum dissipation. The Wall formula for the calculation of friction in nuclear physics and the Drude formula for the calculation of conductivity in mesoscopic physics can be regarded as two special results of the general formulation. We assume a time-dependent Hamiltonian H(Q, P; x(t)) with x(t)=Vt, where V is slow in a classical sense. The rate-of-change V is not necessarily slow in the quantum-mechanical sense. The dynamical variables (Q, P) may represent some "bath" which is being parametrically driven by x. This bath may consist of just a few degrees of freedom, but it is assumed to be classically chaotic. In the case of either the Wall or Drude formula, the dynamical variables (Q, P) may represent a single particle. In any case, dissipation means an irreversible systematic growth of the (average) energy. It is associated with the stochastic spreading of energy across levels. The latter can be characterized by a transition probability kernel Pt(n ∣ m), where n and m are level indices. This kernel is the main object of the present study. In the classical limit, due to the (assumed) chaotic nature of the dynamics, the second moment of Pt(n ∣ m) exhibits a crossover from ballistic to diffusive behavior. In order to capture this crossover within quantum mechanics, a proper theory for the quantal Pt(n ∣ m) should be constructed. We define the V regimes where either perturbation theory or semiclassical considerations are applicable in order to establish this crossover. In the limit ℏ→0 perturbation theory does not apply but semiclassical considerations can be used in order to argue that there is detailed correspondence, during the crossover time, between the quantal and the classical Pt(n ∣ m). In the perturbative regime there is a lack of such correspondence. Namely, Pt(n ∣ m) is characterized by a perturbative core-tail structure that persists during the crossover time. In

  17. Multiphase lattice Boltzmann methods theory and application

    CERN Document Server

    Huang, Haibo; Lu, Xiyun

    2015-01-01

    Theory and Application of Multiphase Lattice Boltzmann Methods presents a comprehensive review of all popular multiphase Lattice Boltzmann Methods developed thus far and is aimed at researchers and practitioners within relevant Earth Science disciplines as well as Petroleum, Chemical, Mechanical and Geological Engineering. Clearly structured throughout, this book will be an invaluable reference on the current state of all popular multiphase Lattice Boltzmann Methods (LBMs). The advantages and disadvantages of each model are presented in an accessible manner to enable the reader to choose the

  18. Oil monitoring methods based on information theory

    Institute of Scientific and Technical Information of China (English)

    XIA Yan-chun; HUO Hua

    2009-01-01

    To evaluate the wear condition of machines accurately, oil spectrographic entropy, mutual information and ICA analysis methods based on information theory are presented. A full-scale diagnosis utilizing all channels of spectrographic analysis can be obtained. By measuring complexity and correlativity, the characteristics of the wear condition of machines can be shown clearly, and the diagnostic quality is improved. The analysis processes of these monitoring methods are given through the explanation of examples. The validity of these methods is demonstrated, and further research fields are indicated.
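
    As a concrete illustration of the quantities named above, the sketch below estimates the Shannon entropy of one spectrographic channel and the mutual information between two channels from histograms. The simulated elemental-concentration series, bin counts, and base-2 logarithms are assumptions for illustration; the paper's actual channel data and processing are not reproduced.

```python
import numpy as np

# Histogram-based Shannon entropy and mutual information for two simulated
# oil-spectrographic channels (e.g. elemental wear-debris concentrations).

rng = np.random.default_rng(4)
fe = rng.gamma(2.0, 1.0, 1000)             # e.g. an iron concentration series
cu = 0.6 * fe + rng.gamma(1.0, 0.5, 1000)  # a correlated copper series

def entropy(x, bins=20):
    p, _ = np.histogram(x, bins=bins)
    p = p / p.sum()
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

def mutual_information(x, y, bins=20):
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()
    px, py = pxy.sum(1), pxy.sum(0)
    nz = pxy > 0
    return (pxy[nz] * np.log2(pxy[nz] / np.outer(px, py)[nz])).sum()

print(entropy(fe), mutual_information(fe, cu))
```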

  19. The Adapted Ordering Method in Representation Theory

    CERN Document Server

    Gato-Rivera, Beatriz

    2004-01-01

    In 1998 the Adapted Ordering Method was developed for the representation theory of the superconformal algebras. This method, which proves to be very powerful, can however be applied to most algebras and superalgebras. It allows one to determine maximal dimensions for a given type of singular vector space, to identify all singular vectors by only a few coefficients, to spot subsingular vectors, and to set the basis for constructing embedding diagrams. We present this method for general algebras and superalgebras and review the results obtained for the Virasoro algebra and for the N=2 superconformal algebras.

  20. The nearly Newtonian regime in non-linear theories of gravity

    Science.gov (United States)

    Sotiriou, Thomas P.

    2006-09-01

    The present paper reconsiders the Newtonian limit of models of modified gravity including higher order terms in the scalar curvature in the gravitational action. This was studied using the Palatini variational principle in Meng and Wang (Gen. Rel. Grav. 36, 1947 (2004)) and Domínguez and Barraco (Phys. Rev. D 70, 043505 (2004)) with contradicting results. Here a different approach is used, and problems in the previous attempts are pointed out. It is shown that models with negative powers of the scalar curvature, like the ones used to explain the present accelerated expansion, as well as their generalization which include positive powers, can give the correct Newtonian limit, as long as the coefficients of these powers are reasonably small. Some consequences of the performed analysis seem to raise doubts for the way the Newtonian limit was derived in the purely metric approach of fourth order gravity [Dick in Gen. Rel. Grav. 36, 217 (2004)]. Finally, we comment on a recent paper [Olmo in Phys. Rev. D 72, 083505 (2005)] in which the problem of the Newtonian limit of both the purely metric and the Palatini formalism is discussed, using the equivalent Brans Dicke theory, and with which our results partly disagree.

  1. Harmony Search Method: Theory and Applications

    Directory of Open Access Journals (Sweden)

    X. Z. Gao

    2015-01-01

    The Harmony Search (HS) method is an emerging metaheuristic optimization algorithm, which has been employed to cope with numerous challenging tasks during the past decade. In this paper, the essential theory and applications of the HS algorithm are first described and reviewed. Several typical variants of the original HS are next briefly explained. As a case study, a modified HS method inspired by the idea of Pareto-dominance-based ranking is also presented. It is further applied to handle a practical wind generator optimal design problem.
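
    The essential HS loop is easy to state: build each new candidate ("harmony") component-wise, either by recalling a value from the harmony memory (with probability HMCR), possibly pitch-adjusted within a bandwidth (with probability PAR), or by drawing a fresh random value; the candidate then replaces the worst member of the memory if it is better. A bare-bones sketch on a toy objective follows, with all control-parameter values chosen for illustration.

```python
import numpy as np

# Minimal Harmony Search for continuous minimization (sphere function).

rng = np.random.default_rng(5)
dim, lo, hi = 5, -5.0, 5.0
f = lambda x: (x ** 2).sum()              # objective to minimize

HMS, HMCR, PAR, BW, n_iter = 20, 0.9, 0.3, 0.1, 5000
memory = rng.uniform(lo, hi, (HMS, dim))  # harmony memory
costs = np.array([f(x) for x in memory])

for _ in range(n_iter):
    new = np.empty(dim)
    for d in range(dim):
        if rng.random() < HMCR:           # memory consideration
            new[d] = memory[rng.integers(HMS), d]
            if rng.random() < PAR:        # pitch adjustment
                new[d] += BW * rng.uniform(-1, 1)
        else:                             # random selection
            new[d] = rng.uniform(lo, hi)
    new = np.clip(new, lo, hi)
    worst = costs.argmax()
    if f(new) < costs[worst]:             # replace the worst harmony
        memory[worst], costs[worst] = new, f(new)

print(costs.min())
```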

  2. Analysis of electrical circuits with variable load regime parameters: projective geometry method

    CERN Document Server

    Penin, A

    2015-01-01

    This book introduces electric circuits with variable loads and voltage regulators. It allows to define invariant relationships for various parameters of regime and circuit sections and to prove the concepts characterizing these circuits. Generalized equivalent circuits are introduced. Projective geometry is used for the interpretation of changes of operating regime parameters. Expressions of normalized regime parameters and their changes are presented. Convenient formulas for the calculation of currents are given. Parallel voltage sources and the cascade connection of multi-port networks are d

  3. A New Method of Moments for the Bimodal Particle System in the Stokes Regime

    Directory of Open Access Journals (Sweden)

    Yan-hua Liu

    2013-01-01

    The current paper studies the particle system in the Stokes regime with a bimodal distribution. In such a system, the particles tend to congregate around two major sizes. In order to investigate this system, the conventional method of moments (MOM) should be extended to include the interaction between different particle clusters. The closure problem for MOM arises and can be solved by a multipoint Taylor-expansion technique. The exact expression is deduced to include the size effect between different particle clusters. The collision effects between different modals could also be modeled. The new model was tested and proved effective for treating the bimodal system. The results showed that, for a single-modal particle system, the results from the new model were the same as those from TEMOM. However, for the bimodal particle system, there was a distinct difference between the two models, especially for the zero-order moment. The current model generated fewer particles than TEMOM. The maximum deviation reached about 15% for m0 and 4% for m2.
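
    For orientation, the moment bookkeeping behind such models can be written compactly: in a bimodal system each moment splits into the contributions of the two modes, and the closure problem is handled by expanding fractional powers of particle volume about a reference size of each mode. The equations below sketch this structure; the notation and the second-order expansion about the mean sizes u_i are assumptions in the spirit of the TEMOM literature, not formulas quoted from the paper.

```latex
% k-th moment of the size distribution n(v,t)
m_k(t) = \int_0^\infty v^k \, n(v,t) \, \mathrm{d}v

% bimodal system: the distribution is a sum of two modal clusters,
% so each moment splits additively
n(v,t) = n_1(v,t) + n_2(v,t)
\quad\Longrightarrow\quad
m_k = m_k^{(1)} + m_k^{(2)}

% closure: fractional moments arising from the collision kernel are
% Taylor-expanded about the mean size u_i = m_1^{(i)} / m_0^{(i)} of each
% mode (a "multipoint" expansion), e.g.
v^k \approx u_i^k + k\, u_i^{k-1} (v - u_i)
         + \tfrac{1}{2} k (k-1)\, u_i^{k-2} (v - u_i)^2
```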

  4. The two-regime method for optimizing stochastic reaction-diffusion simulations

    KAUST Repository

    Flegg, M. B.

    2011-10-19

    Spatial organization and noise play an important role in molecular systems biology. In recent years, a number of software packages have been developed for stochastic spatio-temporal simulation, ranging from detailed molecular-based approaches to less detailed compartment-based simulations. Compartment-based approaches yield quick and accurate mesoscopic results, but lack the level of detail that is characteristic of the computationally intensive molecular-based models. Often microscopic detail is only required in a small region (e.g. close to the cell membrane). Currently, the best way to achieve microscopic detail is to use a resource-intensive simulation over the whole domain. We develop the two-regime method (TRM) in which a molecular-based algorithm is used where desired and a compartment-based approach is used elsewhere. We present easy-to-implement coupling conditions which ensure that the TRM results have the same accuracy as a detailed molecular-based model in the whole simulation domain. Therefore, the TRM combines strengths of previously developed stochastic reaction-diffusion software to efficiently explore the behaviour of biological models. Illustrative examples and the mathematical justification of the TRM are also presented.
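    The split-domain idea can be made concrete in a few lines. The Python sketch below is only schematic: the interface coupling probability p_transfer and the placement of particles entering the molecular regime are placeholders flagged as assumptions, whereas the actual TRM derives the interface condition analytically.

        import math, random

        # Schematic 1D two-regime bookkeeping (a sketch only): compartments on
        # [0, 0.5], Brownian particles on [0.5, 1.0]. The genuine TRM fixes the
        # interface transfer probability analytically; here it is left as the
        # ASSUMED free parameter `p_transfer`.
        D, h, dt = 1e-3, 0.1, 0.01
        n_comp = 5
        comps = [20] * n_comp
        particles = [random.uniform(0.5, 1.0) for _ in range(100)]
        p_transfer = 0.5                      # placeholder, NOT the derived value

        def step():
            global particles
            # Compartment regime: first-order diffusive jumps at rate D/h^2.
            for i in range(n_comp):
                n_jump = sum(random.random() < (D / h ** 2) * dt
                             for _ in range(comps[i]))
                for _ in range(n_jump):
                    comps[i] -= 1
                    if random.random() < 0.5:             # jump to the right
                        if i == n_comp - 1:
                            # ASSUMED placement just past the interface
                            particles.append(0.5 + random.random() * h)
                        else:
                            comps[i + 1] += 1
                    elif i > 0:                           # jump to the left
                        comps[i - 1] += 1
                    else:
                        comps[i] += 1                     # reflect at x = 0
            # Molecular regime: Euler-Maruyama Brownian steps.
            moved = []
            for x in particles:
                x += math.sqrt(2 * D * dt) * random.gauss(0.0, 1.0)
                if x < 0.5:                               # crossed the interface
                    if random.random() < p_transfer:
                        comps[-1] += 1                    # becomes a molecule count
                    else:
                        moved.append(1.0 - x)             # reflected back
                elif x > 1.0:
                    moved.append(2.0 - x)                 # reflect at x = 1
                else:
                    moved.append(x)
            particles = moved

        for _ in range(1000):
            step()
        print(sum(comps), len(particles))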

  5. Methods of algebraic geometry in control theory

    CERN Document Server

    Falb, Peter

    1999-01-01

    "Control theory represents an attempt to codify, in mathematical terms, the principles and techniques used in the analysis and design of control systems. Algebraic geometry may, in an elementary way, be viewed as the study of the structure and properties of the solutions of systems of algebraic equations. The aim of this book is to provide access to the methods of algebraic geometry for engineers and applied scientists through the motivated context of control theory" .* The development which culminated with this volume began over twenty-five years ago with a series of lectures at the control group of the Lund Institute of Technology in Sweden. I have sought throughout to strive for clarity, often using constructive methods and giving several proofs of a particular result as well as many examples. The first volume dealt with the simplest control systems (i.e., single input, single output linear time-invariant systems) and with the simplest algebraic geometry (i.e., affine algebraic geometry). While this is qui...

  6. Mathematical Methods of Game and Economic Theory

    CERN Document Server

    Aubin, J-P

    1982-01-01

    This book presents a unified treatment of optimization theory, game theory and a general equilibrium theory in economics in the framework of nonlinear functional analysis. It not only provides powerful and versatile tools for solving specific problems in economics and the social sciences but also serves as a unifying theme in the mathematical theory of these subjects as well as in pure mathematics itself.

  7. Updates of CORESTA Recommended Methods after Further Collaborative Studies Carried Out under Both ISO and Health Canada Intense Smoking Regimes

    Directory of Open Access Journals (Sweden)

    Purkis SW

    2014-12-01

    Full Text Available During 2012, three CORESTA Recommended Methods (CRMs) (1-3) were updated to include smoke yield and variability data under both the ISO (4) and the Canadian Intense (CI) (5) smoking regimes. At that time, repeatability and reproducibility data under the CI regime on smoke analytes other than “tar”, nicotine and carbon monoxide (6) and tobacco-specific nitrosamines (TSNAs) (7) were not available in the public literature. The subsequent work involved the determination of the mainstream smoke yields of benzo[a]pyrene, selected volatiles (benzene, toluene, 1,3-butadiene, isoprene, acrylonitrile) and selected carbonyls (acetaldehyde, formaldehyde, propionaldehyde, butyraldehyde, crotonaldehyde, acrolein, acetone and 2-butanone) in ten cigarette products, followed by statistical analyses according to the ISO protocol (8). This paper provides some additional perspective on the data variability under the ISO and CI smoking regimes not given in the CRMs.

  8. Theory of the Trojan-Horse Method

    CERN Document Server

    Baur, G; Baur, Gerhard; Typel, Stefan

    2004-01-01

    The Trojan-Horse method is an indirect approach to determine the energy dependence of S factors of astrophysically relevant two-body reactions. This is accomplished by studying closely related three-body reactions under quasi-free scattering conditions. The basic theory of the Trojan-Horse method is developed starting from a post-form distorted wave Born approximation of the T-matrix element. In the surface approximation the cross section of the three-body reaction can be related to the S-matrix elements of the two-body reaction. The essential feature of the Trojan-Horse method is the effective suppression of the Coulomb barrier at low energies for the astrophysical reaction leading to finite cross sections at the threshold of the two-body reaction. In a modified plane wave approximation the relation between the two-body and three-body cross sections becomes very transparent. Applications of the Trojan Horse Method are discussed. It is of special interest that electron screening corrections are negligible due...
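    In the modified plane-wave approximation referred to above, the factorization of the three-body cross section takes the schematic form (standard Trojan-Horse notation, with distortion effects suppressed):

        \frac{\mathrm{d}^3\sigma}{\mathrm{d}E_{cd}\,\mathrm{d}\Omega_{cd}\,\mathrm{d}\Omega_s}
        \;\propto\; KF\,\bigl|\varphi(\vec{q}_s)\bigr|^2\,
        \frac{\mathrm{d}\sigma^{\,\text{2-body}}}{\mathrm{d}\Omega} ,

    where KF is a kinematical factor and φ is the momentum-space wave function of the intercluster motion of the spectator inside the Trojan-horse nucleus.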

  9. The Time and Regime Dependencies of Sensitive Areas for Tropical Cyclone Prediction Using the CNOP Method

    Institute of Scientific and Technical Information of China (English)

    ZHOU Feifan; MU Mu

    2012-01-01

    This study examines the time and regime dependencies of sensitive areas identified by the conditional nonlinear optimal perturbation (CNOP) method for forecasts of two typhoons. Typhoon Meari (2004) was weakly nonlinear and is herein referred to as the linear case, while Typhoon Matsa (2005) was strongly nonlinear and is herein referred to as the nonlinear case. In the linear case, the sensitive areas identified for particular forecast times when the initial time was fixed resembled those identified for other forecast times. Targeted observations deployed to improve a forecast at one time would thus also benefit forecasts at other times. In the nonlinear case, the similarities among the sensitive areas identified for different forecast times were more limited. The deployment of targeted observations in the nonlinear case would therefore need to be adapted to achieve large improvements for different targeted forecasts. For both cases, the closer the forecast times, the higher the similarity of the sensitive areas. When the forecast time was fixed, the sensitive areas in the linear case diverged continuously from the verification area as the forecast period lengthened, while those in the nonlinear case were always located around the initial cyclones. The deployment of targeted observations to improve a particular forecast depends strongly on the time of deployment. An examination of the efficiency gained by reducing initial errors within the identified sensitive areas confirmed these results. In general, the greatest improvement in a forecast was obtained by identifying the sensitive areas for the corresponding forecast time period.
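    For reference, the CNOP is the initial perturbation that maximizes the nonlinear forecast-error growth subject to an amplitude constraint (standard formulation; the notation here is assumed):

        \delta x_0^{*} \;=\; \arg\max_{\|\delta x_0\| \le \beta}
        \bigl\| M_{\tau}(x_0 + \delta x_0) - M_{\tau}(x_0) \bigr\| ,

    where M_τ is the nonlinear propagator to forecast time τ and β bounds the initial error; the sensitive areas are the regions where δx0* has large amplitude.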

  10. FMEA using uncertainty theories and MCDM methods

    CERN Document Server

    Liu, Hu-Chen

    2016-01-01

    This book offers a thorough and systematic introduction to the modified failure mode and effect analysis (FMEA) models based on uncertainty theories (e.g. fuzzy logic, intuitionistic fuzzy sets, D numbers and 2-tuple linguistic variables) and various multi-criteria decision making (MCDM) approaches such as distance-based MCDM, compromise ranking MCDM and hybrid MCDM, etc. As such, it provides essential FMEA methods and practical examples that can be considered in applying FMEA to enhance the reliability and safety of products and services. The book offers a valuable guide for practitioners and researchers working in the fields of quality management, decision making, information science, management science, engineering, etc. It can also be used as a textbook for postgraduate and senior undergraduate students.

  11. Theory and methods in cultural neuroscience

    Science.gov (United States)

    Hariri, Ahmad R.; Harada, Tokiko; Mano, Yoko; Sadato, Norihiro; Parrish, Todd B.; Iidaka, Tetsuya

    2010-01-01

    Cultural neuroscience is an emerging research discipline that investigates cultural variation in psychological, neural and genomic processes as a means of articulating the bidirectional relationship of these processes and their emergent properties. Research in cultural neuroscience integrates theory and methods from anthropology, cultural psychology, neuroscience and neurogenetics. Here, we review a set of core theoretical and methodological challenges facing researchers when planning and conducting cultural neuroscience studies, and provide suggestions for overcoming these challenges. In particular, we focus on the problems of defining culture and culturally appropriate experimental tasks, comparing neuroimaging data acquired from different populations and scanner sites and identifying functional genetic polymorphisms relevant to culture. Implications of cultural neuroscience research for addressing current issues in population health disparities are discussed. PMID:20592044

  12. Approximation methods in gravitational-radiation theory

    Science.gov (United States)

    Will, C. M.

    1986-02-01

    The observation of gravitational-radiation damping in the binary pulsar PSR 1913+16 and the ongoing experimental search for gravitational waves of extraterrestrial origin have made the theory of gravitational radiation an active branch of classical general relativity. In calculations of gravitational radiation, approximation methods play a crucial role. The author summarizes recent developments in two areas in which approximations are important: (1) the quadrupole approximation, which determines the energy flux and the radiation reaction forces in weak-field, slow-motion, source-within-the-near-zone systems such as the binary pulsar; and (2) the normal modes of oscillation of black holes, where the Wentzel-Kramers-Brillouin approximation gives accurate estimates of the complex frequencies of the modes.
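    The quadrupole approximation referred to here gives the familiar leading-order luminosity (stated for orientation; conventions vary by trace terms and factors absorbed into the moment):

        L_{\mathrm{GW}} \;=\; \frac{G}{5c^5}
        \left\langle \dddot{I}_{ij}\,\dddot{I}_{ij} \right\rangle ,

    with I_ij the trace-free mass quadrupole moment of the source and the angle brackets denoting an average over several wave periods.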

  13. Nested partitions method, theory and applications

    CERN Document Server

    Shi, Leyuan

    2009-01-01

    There is increasing need to solve large-scale complex optimization problems in a wide variety of science and engineering applications, including designing telecommunication networks for multimedia transmission, planning and scheduling problems in manufacturing and military operations, or designing nanoscale devices and systems. Advances in technology and information systems have made such optimization problems more and more complicated in terms of size and uncertainty. Nested Partitions Method, Theory and Applications provides a cutting-edge research tool to use for large-scale, complex systems optimization. The Nested Partitions (NP) framework is an innovative mix of traditional optimization methodology and probabilistic assumptions. An important feature of the NP framework is that it combines many well-known optimization techniques, including dynamic programming, mixed integer programming, genetic algorithms and tabu search, while also integrating many problem-specific local search heuristics. The book uses...

  14. Theory of the Trojan-Horse Method

    CERN Document Server

    Typel, S

    2003-01-01

    The Trojan-Horse method is an indirect approach to determine the energy dependence of S-factors of astrophysically relevant two-body reactions. This is accomplished by studying closely related three-body reactions under quasi-free scattering conditions. The basic theory of the Trojan-Horse method is developed starting from a post-form distorted wave Born approximation of the T-matrix element. In the surface approximation the cross section of the three-body reaction can be related to the S-matrix elements of the two-body reaction. The essential feature of the Trojan-Horse method is the effective suppression of the Coulomb barrier at low energies for the astrophysical reaction leading to finite cross sections at the threshold of the two-body reaction. In a modified plane wave approximation the relation between the two-body and three-body cross sections becomes very transparent. The appearing Trojan-Horse integrals are studied in detail.

  15. Developing Systemic Theories Requires Formal Methods

    Science.gov (United States)

    Gobet, Fernand

    2012-01-01

    Ziegler and Phillipson (Z&P) advance an interesting and ambitious proposal, whereby current analytical/mechanistic theories of gifted education are replaced by systemic theories. In this commentary, the author focuses on the pros and cons of using systemic theories. He argues that Z&P's proposal both goes too far and not far enough. The future of…

  16. Economic method of development of canned food sterilization regimes for industry autoclaves

    Directory of Open Access Journals (Sweden)

    Stolyanov A. V.

    2015-12-01

    Full Text Available The use of the AVK-30M sterilization unit to develop canned-food sterilization regimes for industrial autoclaves, in particular the ASCAMAT 230, is described. The energy consumption of the autoclaves during the heating and sterilization stages is compared as well.

  17. String theory and the scientific method

    CERN Document Server

    Dawid, Richard

    2013-01-01

    String theory has played a highly influential role in theoretical physics for nearly three decades and has substantially altered our view of the elementary building principles of the Universe. However, the theory remains empirically unconfirmed, and is expected to remain so for the foreseeable future. So why do string theorists have such a strong belief in their theory? This book explores this question, offering a novel insight into the nature of theory assessment itself. Dawid approaches the topic from a unique position, having extensive experience in both philosophy and high-energy physics. He argues that string theory is just the most conspicuous example of a number of theories in high-energy physics where non-empirical theory assessment has an important part to play. Aimed at physicists and philosophers of science, the book does not use mathematical formalism and explains most technical terms.

  18. Optimal choice of an exchange rate regime: a critical literature review

    OpenAIRE

    Ouchen, Mariam

    2013-01-01

    This paper sets out to review the main theories and empirical methods employed in selecting an appropriate exchange rate regime. In order to achieve this, the paper is organized as follows: Section 2 introduces the distinct classifications of exchange rate regimes (de jure versus de facto exchange rate regimes) and the different theoretical approaches which illustrate how an optimal exchange rate regime is determined. Despite their initial popularity, the theoretical consi...

  19. Kissinger method applied to the crystallization of glass-forming liquids: Regimes revealed by ultra-fast-heating calorimetry

    Energy Technology Data Exchange (ETDEWEB)

    Orava, J., E-mail: jo316@cam.ac.uk [Department of Materials Science & Metallurgy, University of Cambridge, 27 Charles Babbage Road, Cambridge CB3 0FS (United Kingdom); WPI-Advanced Institute for Materials Research (WPI-AIMR), Tohoku University, 2-1-1 Katahira, Aoba-ku, Sendai 980-8577 (Japan); Greer, A.L., E-mail: alg13@cam.ac.uk [Department of Materials Science & Metallurgy, University of Cambridge, 27 Charles Babbage Road, Cambridge CB3 0FS (United Kingdom); WPI-Advanced Institute for Materials Research (WPI-AIMR), Tohoku University, 2-1-1 Katahira, Aoba-ku, Sendai 980-8577 (Japan)

    2015-03-10

    Highlights: • Study of ultra-fast DSC applied to the crystallization of glass-forming liquids. • Numerical modeling of DSC traces at heating rates spanning more than 10 orders of magnitude. • Identification of three regimes in Kissinger plots. • Elucidation of the effect of liquid fragility on the Kissinger method. • Modeling to study the regime in which crystal growth is thermodynamically limited. - Abstract: Numerical simulation of DSC traces is used to study the validity and limitations of the Kissinger method for determining the temperature dependence of the crystal-growth rate on continuous heating of glasses from the glass transition to the melting temperature. A particular interest is to use the wide range of heating rates accessible with ultra-fast DSC to study systems such as the chalcogenide Ge₂Sb₂Te₅ for which fast crystallization is of practical interest in phase-change memory. Kissinger plots are found to show three regimes: (i) at low heating rates the plot is straight, (ii) at medium heating rates the plot is curved as expected from the liquid fragility, and (iii) at the highest heating rates the crystallization rate is thermodynamically limited, and the plot has curvature of the opposite sign. The relative importance of these regimes is identified for different glass-forming systems, considered in terms of the liquid fragility and the reduced glass-transition temperature. The extraction of quantitative information on fundamental crystallization kinetics from Kissinger plots is discussed.
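    The Kissinger construction evaluated in the paper plots transformation-peak data in the standard form (regime (i) above is where this plot is actually straight):

        \ln\!\left(\frac{\phi}{T_p^{2}}\right) \;=\;
        -\,\frac{E_a}{R\,T_p} \;+\; \ln\!\left(\frac{A\,R}{E_a}\right),

    where φ is the heating rate, T_p the peak temperature of the crystallization exotherm, E_a the apparent activation energy and A a prefactor.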

  20. New methods in nuclear reaction theory

    Energy Technology Data Exchange (ETDEWEB)

    Redish, E. F.

    1979-01-01

    Standard nuclear reaction methods are limited to treating problems that generalize two-body scattering. These are problems with only one continuous (vector) degree of freedom (CDOF). The difficulty in extending these methods to cases with two or more CDOFs is not just the additional numerical complexity: the mathematical problem is usually not well-posed. It is hard to guarantee that the proper boundary conditions (BCs) are satisfied. Since this is not generally known, the discussion begins by considering the physics of this problem in the context of coupled-channel calculations. In practice, the difficulties are usually swept under the rug by the use of a highly developed phenomenology (or worse, by the failure to test a calculation for convergence). This approach limits the kinds of reactions that can be handled to ones occurring on the surface, or where a second CDOF can be treated perturbatively. In the past twenty years, since the work of Faddeev, the quantum three-body problem has been solved. Many techniques (and codes) are now available for solving problems with two CDOFs. A method for using these techniques in the nuclear N-body problem is presented. A set of well-posed (connected-kernel) equations for physical scattering operators is taken as the starting point. Then it is shown how approximation schemes can be developed for a wide range of reaction mechanisms. The resulting general framework for a reaction theory can be applied to a number of nuclear problems. One result is a rigorous treatment of multistep transfer reactions with the possibility of systematically generating corrections. The application of the method to resonance reactions and knock-out is discussed. 12 figures.

  1. Shape theory categorical methods of approximation

    CERN Document Server

    Cordier, J M

    2008-01-01

    This in-depth treatment uses shape theory as a "case study" to illustrate situations common to many areas of mathematics, including the use of archetypal models as a basis for systems of approximations. It offers students a unified and consolidated presentation of extensive research from category theory, shape theory, and the study of topological algebras. A short introduction to geometric shape explains specifics of the construction of the shape category and relates it to an abstract definition of shape theory. Upon returning to the geometric base, the text considers simplicial complexes and

  2. Modeling laser drilling in percussion regime using constraint natural element method

    OpenAIRE

    Girardot, Jérémie; Lorong, Philippe; ILLOUL, Lounès; Ranc, Nicolas; Schneider, Matthieu; FAVIER, Véronique

    2015-01-01

    International audience; The laser drilling process is the main process used in machining procedures on aeronautic engines, especially in the cooling parts. The industrial challenge is to reduce geometrical deviations of the holes and defects during manufacturing. The interaction between a laser beam and an absorbent metallic material in the laser drilling regime involves thermal and hydrodynamic phenomena. Their role in the drilling is not yet completely understood and a realistic simu...

  3. Non perturbative methods in two dimensional quantum field theory

    CERN Document Server

    Abdalla, Elcio; Rothe, Klaus D

    1991-01-01

    This book is a survey of methods used in the study of two-dimensional models in quantum field theory as well as applications of these theories in physics. It covers the subject since the first model, studied in the fifties, up to modern developments in string theories, and includes exact solutions, non-perturbative methods of study, and nonlinear sigma models.

  4. Topological methods in Galois representation theory

    CERN Document Server

    Snaith, Victor P

    2013-01-01

    An advanced monograph on Galois representation theory by one of the world's leading algebraists, this volume is directed at mathematics students who have completed a graduate course in introductory algebraic topology. Topics include abelian and nonabelian cohomology of groups, characteristic classes of forms and algebras, explicit Brauer induction theory, and much more. 1989 edition.

  5. Methods of Fourier analysis and approximation theory

    CERN Document Server

    Tikhonov, Sergey

    2016-01-01

    Different facets of the interplay between harmonic analysis and approximation theory are covered in this volume. The topics included are Fourier analysis, function spaces, optimization theory, partial differential equations, and their links to modern developments in approximation theory. The articles in this collection originated from two events. The first event took place during the 9th ISAAC Congress in Krakow, Poland, 5th-9th August 2013, in the section “Approximation Theory and Fourier Analysis”. The second event was the conference on Fourier Analysis and Approximation Theory at the Centre de Recerca Matemàtica (CRM), Barcelona, during 4th-8th November 2013, organized by the editors of this volume. All articles selected to be part of this collection were carefully reviewed.

  6. Servohydraulic methods for mechanical testing in the Sub-Hopkinson rate regime up to strain rates of 500 1/s.

    Energy Technology Data Exchange (ETDEWEB)

    Crenshaw, Thomas B.; Boyce, Brad Lee

    2005-10-01

    Tensile and compressive stress-strain experiments on metals at strain rates in the range of 1-1000 1/s are relevant to many applications such as gravity-dropped munitions and airplane accidents. While conventional test methods cover strain rates up to ~10 s⁻¹ and split-Hopkinson and other techniques cover strain rates in excess of ~1000 s⁻¹, there are no well defined techniques for the intermediate or "Sub-Hopkinson" strain-rate regime. The current work outlines many of the challenges in testing in the Sub-Hopkinson regime, and establishes methods for addressing these challenges. The resulting technique for obtaining intermediate-rate stress-strain data is demonstrated in tension on a high-strength, high-toughness steel alloy (Hytuf) that could be a candidate alloy for earth-penetrating munitions, and in compression on a Au-Cu braze alloy.

  7. Theory-Based Lexicographical Methods in a Functional Perspective

    DEFF Research Database (Denmark)

    Tarp, Sven

    2014-01-01

    This contribution provides an overview of some of the methods used in relation to the function theory. It starts with a definition of the concept of method and the relation existing between theory and method. It establishes an initial distinction between artisanal and theory-based methods ... of various methods used in the different sub-phases of the overall dictionary compilation process, from the making of the concept to the preparation for publication on the chosen media, with focus on the Internet. Finally, it briefly discusses some of the methods used to create and test the function theory...

  8. The TR method: A new graphical method that uses the slip preference of the faults to separate heterogeneous fault-slip data in extensional and compressional stress regimes

    Science.gov (United States)

    Tranos, Markos

    2013-04-01

    The new graphical TR method uses the slip preference (SP) of faults to separate heterogeneous fault-slip data. This SP is described in detail, and several examples of the application of the TR method are presented. For this purpose, synthetic fault-slip data driven by various extensional and compressional stress regimes, whose greatest principal stress axis (σ1) or least principal stress axis (σ3) always remains in a vertical or horizontal position respectively, as in Andersonian stress states, have been considered. Their SP is given in a simple graphical manner with the aid of the Win-Tensor stress inversion software. The extensional stress regimes examined are radial extension (RE), radial-pure extension (RE-PE), pure extension (PE), pure extension-transtension (PE-TRN) and transtension (TRN), whereas the compressional stress regimes are radial compression (RC), radial-pure compression (RC-PC), pure compression (PC), pure compression-transpression (PC-TRP) and transpression (TRP). A necessary condition for the TR method, namely that faults dipping towards the relevant horizontal principal stress axis of the driving stress regime are dip-slip faults (either normal or reverse), is satisfied for all extensional and compressional stress regimes respectively. The trend of the horizontal least or greatest principal stress axis of the driving extensional or compressional stress regime can be defined directly by the trend of the T-axes of the normal faults or the P-axes of the reverse faults, respectively. Taking into account a coefficient of friction no smaller than 0.6, reactivated extensional faults in the crust dip at angles higher than about 40°, and an increase of the stress ratio and/or the fault dip angle results in an increase of the slip deviation from pure normal activation. In turn, in the compressional stress regimes, the dip angle and SP of the activated faults suggest the distinction of the compressional

  9. Mathematical methods in the theory of queuing

    CERN Document Server

    Khinchin, A Y; Quenouille, M H

    2013-01-01

    Written by a prominent Russian mathematician, this concise monograph examines aspects of queuing theory as an application of probability. The three-part treatment begins with a study of the stream of incoming demands (or ""calls,"" in the author's terminology). Subsequent sections explore systems with losses and systems allowing delay. Prerequisites include a familiarity with the theory of probability and mathematical analysis. A. Y. Khinchin made significant contributions to probability theory, statistical physics, and several other fields. His elegant, groundbreaking work will prove of subs

  10. THEORY AND METHOD FOR WETLAND BOUNDARY DELINEATION

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    Based on the analysis of the subjectivity of current wetland boundary criteria and their causes, this paper suggests that, while the mechanism of the wetland formation process is not yet understood, the "black box" method of System Theory can be used to delineate wetland boundaries scientifically. After analyzing the difference in system construction among aquatic habitats, wetlands and uplands, the lower limit of rooted plants was chosen as the lower boundary criterion of wetlands. Because a soil diagnostic horizon is the result of the long-term interaction among all environments, and it is less responsive than vegetation to short-term change, the soil diagnostic horizon was chosen as the indicator to delineate the wetland upper boundary, which lies at the thinning-out point of the soil diagnostic horizon. A case study indicated that it is feasible to use the lower limit of rooted plants and the thinning-out point of the soil diagnostic horizon as criteria to delineate the lower and upper boundaries of a wetland. In the study area, the thinning-out line of the albic horizon coincided with the 55.74 m contour line, the maximum horizontal error was less than 1 m, and the maximum vertical error less than 0.04 m. The problem of wetland definition always arises at the boundaries. Having delineated wetland boundaries, wetlands can be defined as follows: wetlands are the transitional zones between uplands and deepwater habitats; they are a kind of azonal complex that is inundated or saturated by surface or ground water, with the lower boundary lying at the lower limit of rooted plants, and the upper boundary at the thinning-out line of the upland soil diagnostic horizon.

  11. Regime shifts in exploited marine food webs: detecting mechanisms underlying alternative stable states using size- structured community dynamics theory

    NARCIS (Netherlands)

    Gårdmark, A.; Casini, M.; Huss, M.; van Leeuwen, A.; Hjelm, J.; Persson, L.; de Roos, A.M.

    2014-01-01

    Many marine ecosystems have undergone 'regime shifts', i.e. abrupt reorganizations across trophic levels. Establishing whether these constitute shifts between alternative stable states is of key importance for the prospects of ecosystem recovery and for management. We show how mechanisms underly

  12. Elemental methods in ergodic Ramsey theory

    CERN Document Server

    McCutcheon, Randall

    1999-01-01

    This book, suitable for graduate students and professional mathematicians alike, didactically introduces methodologies due to Furstenberg and others for attacking problems in chromatic and density Ramsey theory via recurrence in topological dynamics and ergodic theory, respectively. Many standard results are proved, including the classical theorems of van der Waerden, Hindman, and Szemerédi. More importantly, the presentation strives to reflect the extent to which the field has been streamlined since breaking onto the scene around twenty years ago. Potential readers who were previously intrigued by the subject matter but found it daunting may want to give a second look.

  13. Computationally efficient methods for modelling laser wakefield acceleration in the blowout regime

    CERN Document Server

    Cowan, B M; Beck, A; Davoine, X; Bunkers, K; Lifschitz, A F; Lefebvre, E; Bruhwiler, D L; Shadwick, B A; Umstadter, D P

    2012-01-01

    Electron self-injection and acceleration until dephasing in the blowout regime is studied for a set of initial conditions typical of recent experiments with 100 terawatt-class lasers. Two different approaches to computationally efficient, fully explicit, three-dimensional particle-in-cell modelling are examined. First, the Cartesian code VORPAL using a perfect-dispersion electromagnetic solver precisely describes the laser pulse and bubble dynamics, taking advantage of coarser resolution in the propagation direction, with a proportionally larger time step. Using third-order splines for macroparticles helps suppress the sampling noise while keeping the usage of computational resources modest. The second way to reduce the simulation load is using reduced-geometry codes. In our case, the quasi-cylindrical code CALDER-CIRC uses decomposition of fields and currents into a set of poloidal modes, while the macroparticles move in the Cartesian 3D space. Cylindrical symmetry of the interaction allows using just two mo...

  14. Methods from Differential Geometry in Polytope Theory

    OpenAIRE

    Adiprasito, Karim Alexander

    2014-01-01

    The purpose of this thesis is to study classical combinatorial objects, such as polytopes, polytopal complexes, and subspace arrangements, using tools that have been developed in combinatorial topology, especially those tools developed in connection with (discrete) differential geometry, geometric group theory and low-dimensional topology.

  15. Study on Theory and Methods of Telecommunication Tariff

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    The pricing of telecommunication services is quite important as well as complicated. This paper strengthens research on the theory and implementation of telecommunication tariffs in China. It is helpful for government authorities and enterprises to unify and standardize regulatory methods, to guide the setting of the structure and level of telecommunication tariffs by applying scientific theories, and to further develop and optimize the tariff system. This paper conducts systematic, in-depth and original research on some of the most topical and most difficult problems in the area of telecommunication tariff research, such as the regulation of telecommunication tariffs, the theories of telecommunication tariffs, systematic pricing theory, interconnection charges, model cost evaluation theory, long-run incremental cost theory, and international telecommunication tariffs. After studying foreign methods of telecommunication tariff regulation, and based on the current situation of China's tariff regulation, the scope and methods for China's telecommunication tariff regulation are suggested. To address the weakness of pricing theory available to enterprises for setting telecommunication tariffs, an overall framework of telecommunication tariff theories is proposed. The systematic pricing theory and model cost evaluation theory of telecommunication services are put forward from a brand new perspective. The frontier topic of LRIC theory is probed. In addition, the pricing practices of network interconnection charges and international telecommunication tariffs, which are currently very attractive to theorists, are discussed. Based on these studies, this paper improves the structure of telecommunication tariff theory. It provides the Chinese government authorities with practical methods and helpful support to regulate telecommunication tariffs; in the meantime, it also provides enterprises with scientific pricing theories and methods to set up

  16. Grounded Theory in Practice: Is It Inherently a Mixed Method?

    Science.gov (United States)

    Johnson, R. B.; McGowan, M. W.; Turner, L. A.

    2010-01-01

    We address 2 key points of contention in this article. First, we engage the debate concerning whether particular methods are necessarily linked to particular research paradigms. Second, we briefly describe a mixed methods version of grounded theory (MM-GT). Grounded theory can be tailored to work well in any of the 3 major forms of mixed methods…

  17. Efficient methods for linear Schrödinger equation in the semiclassical regime with time-dependent potential

    Science.gov (United States)

    Bader, Philipp; Iserles, Arieh; Kropielnicka, Karolina; Singh, Pranav

    2016-09-01

    We build efficient and unitary (hence stable) methods for the solution of the linear time-dependent Schrödinger equation with explicitly time-dependent potentials in a semiclassical regime. The Magnus-Zassenhaus schemes presented here are based on a combination of the Zassenhaus decomposition (Bader et al. 2014 Found. Comput. Math. 14, 689-720. (doi:10.1007/s10208-013-9182-8)) with the Magnus expansion of the time-dependent Hamiltonian. We conclude with numerical experiments.
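    For context, the Magnus expansion underlying these schemes writes the solution of iℏ∂ψ/∂t = H(t)ψ as ψ(t) = exp(Ω(t))ψ(0), with leading terms (a standard result; the Zassenhaus splitting of exp(Ω) used by the authors is not reproduced here):

        \Omega(t) = -\frac{i}{\hbar}\int_0^t H(t_1)\,\mathrm{d}t_1
        \;-\; \frac{1}{2\hbar^2}\int_0^t\!\!\int_0^{t_1}
        \bigl[H(t_1),\,H(t_2)\bigr]\,\mathrm{d}t_2\,\mathrm{d}t_1 \;+\; \cdots

    Truncating Ω and exponentiating preserves unitarity, which is the source of the stability claimed in the abstract.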

  18. Exploring biomedical ontology mappings with graph theory methods

    National Research Council Canada - National Science Library

    Simon Kocbek; Jin-Dong Kim

    2017-01-01

    Methods: We report an analysis of biomedical ontology mapping data over time. We apply graph theory methods such as Modularity Analysis and Betweenness Centrality to analyse data gathered at five different time points...

  19. A New CAC Method Using Queuing Theory

    Directory of Open Access Journals (Sweden)

    P. Kvackaj

    2008-12-01

    Full Text Available The CAC (Connection Admission Control) method plays an important role in the ATM (Asynchronous Transfer Mode) network environment. CAC is the first step in the prevention of congested states in the network topology, and it leads to optimal utilization of network resources. This paper proposes an enhancement of the convolution method, which is one of the statistical CAC methods used in ATM. The convolution method uses a buffer-less assumption in the estimation of the cell loss. Using formulas for the G/M/1 queuing system, the cell loss can be estimated as the buffer overflow probability. In this paper, the proposed CAC method is compared with three other statistical CAC methods, and conclusions regarding the exploitation of the CAC method are presented.
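    As a toy illustration of the overflow-probability idea, an admission decision can be reduced to a tail check. The sketch below uses the M/M/1 geometric tail P(N >= B) = ρ^B as a stand-in for the G/M/1 expressions referenced in the paper, and all names and numbers are illustrative assumptions:

        def admit(new_rate, admitted_rate, service_rate, buffer_size,
                  max_loss=1e-6):
            # Admit the new connection only if the estimated buffer overflow
            # probability stays below the target cell-loss ratio. The M/M/1
            # tail rho**B is a simplification of the paper's G/M/1 formulas.
            rho = (admitted_rate + new_rate) / service_rate
            if rho >= 1.0:
                return False
            return rho ** buffer_size <= max_loss

        # Usage: a link with capacity 100 (arbitrary units), 50 cells of buffer.
        print(admit(new_rate=2.0, admitted_rate=80.0, service_rate=100.0,
                    buffer_size=50))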

  20. Quantum-counting CT in the regime of count-rate paralysis: introduction of the pile-up trigger method

    Science.gov (United States)

    Kappler, S.; Hölzer, S.; Kraft, E.; Stierstorfer, K.; Flohr, T.

    2011-03-01

    The application of quantum-counting detectors in clinical Computed Tomography (CT) is challenged by extreme X-ray fluxes provided by modern high-power X-ray tubes. Scanning of small objects or sub-optimal patient positioning may lead to situations where those fluxes impinge on the detector without attenuation. Even in operation modes optimized for high-rate applications, with small pixels and high bias voltage, CdTe/CdZnTe detectors deliver pulses in the range of several nanoseconds. This can result in severe pulse pile-up causing detector paralysis and ambiguous detector signals. To overcome this problem we introduce the pile-up trigger, a novel method that provides unambiguous detector signals in rate regimes where classical rising-edge counters run into count-rate paralysis. We present detailed CT image simulations assuming ideal sensor material not suffering from polarization effects at high X-ray fluxes. This way we demonstrate the general feasibility of the pile-up trigger method and quantify resulting imaging properties such as contrasts, image noise and dual-energy performance in the high-flux regime of clinical CT devices.
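    The count-rate paralysis in question is the classical dead-time effect; for a true rate n and an effective pulse width τ the two textbook models give (standard results, not the paper's detector model):

        m_{\text{paralyzable}} = n\,e^{-n\tau}, \qquad
        m_{\text{non-paralyzable}} = \frac{n}{1 + n\tau} .

    The paralyzable response is non-monotonic in n, so a measured rate m maps back to two possible fluxes; resolving exactly this ambiguity is what the pile-up trigger is introduced for.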

  1. Sampling methods for the quasistationary regime of epidemic processes on regular and complex networks

    Science.gov (United States)

    Sander, Renan S.; Costa, Guilherme S.; Ferreira, Silvio C.

    2016-10-01

    A major hurdle in the simulation of the steady state of epidemic processes is that the system will unavoidably visit an absorbing, disease-free state at sufficiently long times due to the finite size of the networks on which epidemics evolve. In the present work, we compare different quasistationary (QS) simulation methods in which the absorbing states are suitably handled and the thermodynamic limit of the original dynamics can be achieved. We analyze the standard QS (SQS) method, where the sampling is constrained to active configurations; the reflecting boundary condition (RBC), where the dynamics returns to the pre-absorbing configuration; and hub reactivation (HR), where the most connected vertex of the network is reactivated after a visit to an absorbing state. We apply the methods to the contact process (CP) and susceptible-infected-susceptible (SIS) models on regular and scale-free networks. The investigated methods yield the same epidemic threshold for both models. For CP, which undergoes a standard collective phase transition, the methods are equivalent. For SIS, whose phase transition is ruled by the hub mutual reactivation, the SQS and HR methods are able to capture localized epidemic phases while RBC is not. We also apply the autocorrelation time as a tool to characterize the phase transition and observe that this analysis provides the same finite-size scaling exponents for the critical relaxation time for the investigated methods. Finally, we verify the equivalence between the RBC method and a weak external field for epidemics on networks.
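    The three strategies differ only in what happens when the infection dies out. A discrete-time SIS toy version in Python (illustrative only: the paper's simulations are continuous-time and sample the full QS distribution) makes the bookkeeping explicit:

        import random

        def sis_qs(adj, lam, mu=1.0, dt=0.01, steps=50000, mode="SQS"):
            # mode: 'SQS' -> restart from a stored active configuration,
            #       'RBC' -> return to the pre-absorbing configuration,
            #       'HR'  -> reactivate the most connected vertex.
            n = len(adj)
            infected = set(range(n))
            hub = max(range(n), key=lambda v: len(adj[v]))
            store = [frozenset(infected)]
            density = 0.0
            for _ in range(steps):
                prev = frozenset(infected)
                new = set()
                for v in infected:
                    if random.random() > mu * dt:         # v does not recover
                        new.add(v)
                    for u in adj[v]:                      # infection attempts
                        if u not in infected and random.random() < lam * dt:
                            new.add(u)
                infected = new
                if not infected:                          # absorbing state visited
                    if mode == "SQS":
                        infected = set(random.choice(store))
                    elif mode == "RBC":
                        infected = set(prev)
                    else:                                 # 'HR'
                        infected = {hub}
                if random.random() < 0.01:                # refresh the sample store
                    store.append(frozenset(infected))
                    store = store[-100:]
                density += len(infected) / n
            return density / steps

        # Usage: SIS on a ring of 200 vertices, above threshold.
        ring = {i: [(i - 1) % 200, (i + 1) % 200] for i in range(200)}
        print(sis_qs(ring, lam=2.0))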

  2. Testing and building theories: mixed methods synthesis

    OpenAIRE

    2008-01-01

    Presentation on the use of mixed methods in diverse study types, which combine the findings of 'qualitative' and 'quantitative' studies within a single systematic review in order to address the same, overlapping or complementary review questions.

  3. Developments and retrospectives in Lie theory algebraic methods

    CERN Document Server

    Penkov, Ivan; Wolf, Joseph

    2014-01-01

    This volume reviews and updates a prominent series of workshops in representation/Lie theory, and reflects the widespread influence of those  workshops in such areas as harmonic analysis, representation theory, differential geometry, algebraic geometry, and mathematical physics.  Many of the contributors have had leading roles in both the classical and modern developments of Lie theory and its applications. This Work, entitled Developments and Retrospectives in Lie Theory, and comprising 26 articles, is organized in two volumes: Algebraic Methods and Geometric and Analytic Methods. This is the Algebraic Methods volume. The Lie Theory Workshop series, founded by Joe Wolf and Ivan Penkov and joined shortly thereafter by Geoff Mason, has been running for over two decades. Travel to the workshops has usually been supported by the NSF, and local universities have provided hospitality. The workshop talks have been seminal in describing new perspectives in the field covering broad areas of current research.  Mos...

  4. Developments and retrospectives in Lie theory geometric and analytic methods

    CERN Document Server

    Penkov, Ivan; Wolf, Joseph

    2014-01-01

    This volume reviews and updates a prominent series of workshops in representation/Lie theory, and reflects the widespread influence of those  workshops in such areas as harmonic analysis, representation theory, differential geometry, algebraic geometry, and mathematical physics.  Many of the contributors have had leading roles in both the classical and modern developments of Lie theory and its applications. This Work, entitled Developments and Retrospectives in Lie Theory, and comprising 26 articles, is organized in two volumes: Algebraic Methods and Geometric and Analytic Methods. This is the Geometric and Analytic Methods volume. The Lie Theory Workshop series, founded by Joe Wolf and Ivan Penkov and joined shortly thereafter by Geoff Mason, has been running for over two decades. Travel to the workshops has usually been supported by the NSF, and local universities have provided hospitality. The workshop talks have been seminal in describing new perspectives in the field covering broad areas of current re...

  5. Direct simulation of electron transfer using ring polymer molecular dynamics: comparison with semiclassical instanton theory and exact quantum methods.

    Science.gov (United States)

    Menzeleev, Artur R; Ananth, Nandini; Miller, Thomas F

    2011-08-21

    The use of ring polymer molecular dynamics (RPMD) for the direct simulation of electron transfer (ET) reaction dynamics is analyzed in the context of Marcus theory, semiclassical instanton theory, and exact quantum dynamics approaches. For both fully atomistic and system-bath representations of condensed-phase ET, we demonstrate that RPMD accurately predicts both ET reaction rates and mechanisms throughout the normal and activationless regimes of the thermodynamic driving force. Analysis of the ensemble of reactive RPMD trajectories reveals the solvent reorganization mechanism for ET that is anticipated in the Marcus rate theory, and the accuracy of the RPMD rate calculation is understood in terms of its exact description of statistical fluctuations and its formal connection to semiclassical instanton theory for deep-tunneling processes. In the inverted regime of the thermodynamic driving force, neither RPMD nor a related formulation of semiclassical instanton theory capture the characteristic turnover in the reaction rate; comparison with exact quantum dynamics simulations reveals that these methods provide inadequate quantization of the real-time electronic-state dynamics in the inverted regime.
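    The driving-force regimes discussed above are those of the classical Marcus expression (standard golden-rule form):

        k_{\mathrm{ET}} \;=\; \frac{2\pi}{\hbar}\,
        \frac{|H_{AB}|^{2}}{\sqrt{4\pi\lambda k_B T}}\,
        \exp\!\left[-\,\frac{(\Delta G^{\circ} + \lambda)^{2}}{4\lambda k_B T}\right],

    whose rate peaks at −ΔG° = λ; the inverted regime beyond that turnover is where the abstract reports that RPMD and the related instanton formulation break down.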

  6. Macroeconomic regimes

    NARCIS (Netherlands)

    Baele, L.T.M.; Bekaert, G.R.J.; Cho, S.; Inghelbrecht, K.; Moreno, A.

    2015-01-01

    A New-Keynesian macro-model is estimated accommodating regime-switching behavior in monetary policy and macro-shocks. A key to our estimation strategy is the use of survey-based expectations for inflation and output. Output and inflation shocks shift to the low volatility regime around 1985 and 1990

  7. Sampling methods for the quasistationary regime of epidemic processes on regular and complex networks

    CERN Document Server

    Sander, Renan S; Ferreira, Silvio C

    2016-01-01

    A major hurdle in the simulation of the steady state of epidemic processes is that the system will unavoidably visit an absorbing, disease-free state at sufficiently long times due to the finite size of the networks on which epidemics evolve. In the present work, we compare different quasistationary (QS) simulation methods in which the absorbing states are suitably handled and the thermodynamic limit of the original dynamics can be achieved. We analyzed the standard QS (SQS) method, where the sampling is constrained to active configurations; the reflecting boundary condition (RBC), where the dynamics returns to the pre-absorbing configuration; and hub reactivation (HR), where the most connected vertex of the network is reactivated after a visit to an absorbing state. We applied the methods to the contact process (CP) and susceptible-infected-susceptible (SIS) models on regular and scale-free networks. The investigated methods yield the same epidemic threshold for both models. For CP, that undergoes a standard ...

  8. An experimental method for validating compressor valve vibration theory

    NARCIS (Netherlands)

    Habing, R.A.; Peters, M.C.A.M.

    2006-01-01

    This paper presents an experimental method for validating traditional compressor valve theory for unsteady flow conditions. Traditional valve theory considers the flow force acting on the plate and the flow rate as quasi-steady variables. These variables are related via semi-empirical coefficients

  9. Using grounded theory as a method for rigorously reviewing literature

    NARCIS (Netherlands)

    Wolfswinkel, J.; Furtmueller, E.; Wilderom, C.P.M.

    2013-01-01

    This paper offers guidance to conducting a rigorous literature review. We present this in the form of a five-stage process in which we use Grounded Theory as a method. We first probe the guidelines explicated by Webster and Watson, and then we show the added value of Grounded Theory for rigorously a

  10. An experimental method for validating compressor valve vibration theory

    NARCIS (Netherlands)

    Habing, R.A.; Peters, M.C.A.M.

    2006-01-01

    This paper presents an experimental method for validating traditional compressor valve theory for unsteady flow conditions. Traditional valve theory considers the flow force acting on the plate and the flow rate as quasi-steady variables. These variables are related via semi-empirical coefficients w

  11. Theory of complex fluids in the warm-dense-matter regime, and application to phase-transitions in liquid carbon

    CERN Document Server

    Dharma-wardana, M W C

    2016-01-01

    Using data from recent laser-shock experiments and related density-functional molecular-dynamics simulations on carbon, we demonstrate that the ionic structures predicted within the neutral-pseudo-atom approach for a complex liquid in the warm-dense-matter regime are in good agreement with available data, even where transient covalent bonding dominates ionic correlations. Evidence is presented for an unusual liquid $\to$ vapor phase transition accompanied by an abrupt decrease in ionization. Here a covalently bonded metallic liquid, i.e., carbon of density 1.0 g/cm$^3$, transforms into a disordered mono-atomic fluid at 7 eV. Other transitions where the mean ionization $Z$ drops abruptly are also uncovered

  12. Theory Emergence in IS Research: The Grounded Theory Method Applied: 10. JAIS Theory Development Workshop

    NARCIS (Netherlands)

    Olbrich, Sebastian; Mueller, Benjamin; Niederman, Fred

    2011-01-01

    Where IS research aims at theory building and testing, the vast bulk of theory is borrowed from reference disciplines. While this provides some momentum for research output, it also tends to shift the focus of research away from direct observation of central, core IS issues. The purpose of this pape

  13. A numerical method based on probability theory

    Institute of Scientific and Technical Information of China (English)

    唐立; 邹捷中; 杨文胜

    2003-01-01

    By using the connections between the Brownian family with drift and elliptic differential equations, an efficient probabilistic computing method is given. This method is applied to a wide-ranging Dirichlet problem. A detailed analysis and derivation of the solution of the problem are offered. The stochastic representation of the solution turns a 3-dimensional problem into a 2-dimensional one. An auxiliary ball is constructed. The strong Markov property and the joint distributions of the time and place of hitting spheres for the Brownian family with drift are employed. Finally, good convergence of the numerical solution to the problem over a domain with arbitrary boundary is obtained.
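    The hitting-distribution idea in this abstract is the same one exploited by the classical walk-on-spheres scheme. A minimal driftless 2D Python version for Laplace's equation is sketched below (the paper itself treats Brownian motion with drift and a specific 3D-to-2D reduction, which this sketch does not reproduce; the function names are ours):

        import math, random

        def walk_on_spheres(x, y, boundary_dist, boundary_val, eps=1e-3):
            # Estimate u(x, y) for Laplace's equation with Dirichlet data by
            # jumping to a uniform point on the largest circle centred at the
            # current point that fits inside the domain; the exit point of
            # Brownian motion is uniform on that circle.
            p = (x, y)
            while True:
                r = boundary_dist(p)
                if r < eps:
                    return boundary_val(p)
                theta = random.uniform(0.0, 2.0 * math.pi)
                p = (p[0] + r * math.cos(theta), p[1] + r * math.sin(theta))

        # Usage: unit disc with boundary data g = x; the harmonic extension
        # is u(x, y) = x, so the estimate should be close to 0.3.
        dist = lambda p: 1.0 - math.hypot(p[0], p[1])
        val = lambda p: p[0] / math.hypot(p[0], p[1])
        estimate = sum(walk_on_spheres(0.3, 0.2, dist, val)
                       for _ in range(20000)) / 20000
        print(estimate)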

  14. Diffusion method in random matrix theory

    Science.gov (United States)

    Grela, Jacek

    2016-01-01

    We introduce a calculational tool useful in computing ratios and products of characteristic polynomials averaged over Gaussian measures with an external source. The method is based on Dyson's Brownian motion and Grassmann/complex integration formulas for determinants. The resulting formulas are exact for finite matrix size N and form integral representations convenient for large-N asymptotics. Quantities obtained by the method are interpreted as averages over standard matrix models. We provide several explicit and novel calculations with special emphasis on the β = 2 Girko-Ginibre ensembles.

  15. Identifying past fire regimes throughout the Holocene in Ireland using new and established methods of charcoal analysis

    Science.gov (United States)

    Hawthorne, Donna; Mitchell, Fraser J. G.

    2016-04-01

    Globally, in recent years there has been an increase in the scale, intensity and level of destruction caused by wildfires. This can be seen in Ireland where significant changes in vegetation, land use, agriculture and policy, have promoted an increase in fires in the Irish landscape. This study looks at wildfire throughout the Holocene and draws on lacustrine charcoal records from seven study sites spread across Ireland, to reconstruct the past fire regimes recorded at each site. This work utilises new and accepted methods of fire history reconstruction to provide a recommended analytical procedure for statistical charcoal analysis. Digital charcoal counting was used and fire regime reconstructions carried out via the CharAnalysis programme. To verify this record new techniques are employed; an Ensemble-Member strategy to remove the objectivity associated with parameter selection, a Signal to Noise Index to determine if the charcoal record is appropriate for peak detection, and a charcoal peak screening procedure to validate the identified fire events based on bootstrapped samples. This analysis represents the first study of its kind in Ireland, examining the past record of fire on a multi-site and paleoecological timescale, and will provide a baseline level of data which can be built on in the future when the frequency and intensity of fire is predicted to increase.

  16. Review of Test Theory and Methods.

    Science.gov (United States)

    1981-01-01

    Modern test theory and methods have their roots in work in the 1940s by Mosier, Guttman, and Lazarsfeld, among others. Although the basic ideas were known about 40 years ago, the methods...

  17. Application of Multiphase Particle Methods in Atomization and Breakup Regimes of Liquid Jets

    CERN Document Server

    Farrokhpanah, Amirsaman

    2016-01-01

    The multiphase Smoothed Particle Hydrodynamics (SPH) method has been used to study the jet breakup phenomenon. It has been shown that this method is well capable of capturing different jet breakup characteristics. The value obtained here for the critical Weber number at the transition from dripping to jetting matches values available in the literature very well. Jet breakup lengths also agree well with several empirical correlations. The successful use of SPH, a comparatively fast CFD solver, in jet breakup analysis helps speed up the numerical study of this phenomenon.

  18. Coalition and connection in games problems of modern game theory using methods belonging to systems theory and information theory

    CERN Document Server

    Guiasu, Silviu

    1979-01-01

    Coalition and Connection in Games: Problems of Modern Game Theory using Methods Belonging to Systems Theory and Information Theory focuses on coalition formation and on connections occurring in games, noting the use of mathematical models in the evaluation of processes involved in games. The book first takes a look at the process of strategy in playing games in which the conditional choices of players are noted. The sequence of decisions during the playing of games and observance of the rules are emphasized. The text also ponders on the mathematical tool of game theory in which the differences

  19. Behavioural responses under different feeding methods and light regimes of the African catfish (Clarias gariepinus) juveniles

    NARCIS (Netherlands)

    Almazán Rueda, P.; Schrama, J.W.; Verreth, J.A.J.

    2004-01-01

    Little is known about the behaviour of fish under culture conditions. Several factors may have a direct effect on fish behaviour and its variations during the day. This study assessed the effect of feeding method (continuous by self-feeders vs. twice a day hand-feeding), light intensity (15 vs. 150

  20. A method for calorimetric analysis in variable conditions heating; Methode d'analyse calorimetrique en regime variable

    Energy Technology Data Exchange (ETDEWEB)

    Berthier, G. [Commissariat a l' Energie Atomique, Saclay (France). Centre d' Etudes Nucleaires

    1965-07-01

    By analysing the transient thermal conditions produced by quenching a sample in a furnace maintained at a high temperature, it is possible to study the thermal diffusivity of some materials and solid-state structural transformations, from both a qualitative and a quantitative standpoint. For instance, the transformation energy of α-quartz into β-quartz and the Wigner energy stored within neutron-irradiated beryllium oxide have been measured. (author)

  1. The Monte Carlo method in quantum field theory

    CERN Document Server

    Morningstar, C

    2007-01-01

    This series of six lectures is an introduction to using the Monte Carlo method to carry out nonperturbative studies in quantum field theories. Path integrals in quantum field theory are reviewed, and their evaluation by the Monte Carlo method with Markov-chain based importance sampling is presented. Properties of Markov chains are discussed in detail and several proofs are presented, culminating in the fundamental limit theorem for irreducible Markov chains. The example of a real scalar field theory is used to illustrate the Metropolis-Hastings method and to demonstrate the effectiveness of an action-preserving (microcanonical) local updating algorithm in reducing autocorrelations. The goal of these lectures is to provide the beginner with the basic skills needed to start carrying out Monte Carlo studies in quantum field theories, as well as to present the underlying theoretical foundations of the method.
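    As a concrete miniature of the workflow the lectures describe, here is a Metropolis sampler for a free scalar field on a periodic 1D lattice (an illustration in the spirit of the lectures, not an excerpt from them; lattice size, mass and step width are arbitrary):

        import math, random

        # Action: S = sum_i [ (phi[i+1]-phi[i])^2 / 2 + m2 * phi[i]^2 / 2 ].
        N, m2, delta = 64, 0.5, 1.0
        phi = [0.0] * N

        def local_action(i, value):
            left, right = phi[(i - 1) % N], phi[(i + 1) % N]
            return 0.5 * ((value - left) ** 2 + (right - value) ** 2
                          + m2 * value * value)

        def sweep():
            # One Metropolis update attempt per site.
            for i in range(N):
                trial = phi[i] + random.uniform(-delta, delta)
                dS = local_action(i, trial) - local_action(i, phi[i])
                if dS <= 0.0 or random.random() < math.exp(-dS):
                    phi[i] = trial

        for _ in range(500):                  # thermalization
            sweep()
        samples = []
        for _ in range(2000):                 # measure <phi^2>
            sweep()
            samples.append(sum(x * x for x in phi) / N)
        print(sum(samples) / len(samples))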

  2. Methods of quantum field theory in statistical physics

    CERN Document Server

    Abrikosov, A A; Gorkov, L P; Silverman, Richard A

    1975-01-01

    This comprehensive introduction to many-body theory was written by three renowned physicists and acclaimed by American Scientist as "a classic text on field theoretic methods in statistical physics."

  3. Restricted Kalman Filtering Theory, Methods, and Application

    CERN Document Server

    Pizzinga, Adrian

    2012-01-01

    In statistics, the Kalman filter is a mathematical method whose purpose is to use a series of measurements observed over time, containing random variations and other inaccuracies, and produce estimates that tend to be closer to the true unknown values than those that would be based on a single measurement alone. This Brief offers developments on Kalman filtering subject to general linear constraints. There are essentially three types of contributions: new proofs for results already established; new results within the subject; and applications in investment analysis and macroeconomics, where th
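
    One standard way to handle a linear equality constraint in this setting, in the spirit of the developments the Brief describes, is to project the unconstrained Kalman estimate onto the constraint set. Below is a minimal sketch with made-up numbers; the covariance-weighted projection is classical and is not claimed to be the book's exact formulation.

        import numpy as np

        def project_estimate(x, P, D, d):
            # project estimate x (covariance P) onto the constraint D @ x = d,
            # minimizing the P-weighted distance to the unconstrained estimate
            K = P @ D.T @ np.linalg.inv(D @ P @ D.T)
            return x + K @ (d - D @ x), P - K @ D @ P

        x = np.array([0.7, 0.4])            # unconstrained state estimate
        P = np.diag([0.04, 0.09])           # its covariance
        D, d = np.array([[1.0, 1.0]]), np.array([1.0])   # components must sum to 1
        x_c, P_c = project_estimate(x, P, D, d)
        print(x_c, D @ x_c)                 # constrained estimate satisfies D x = d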

  4. Identification Method of Gas-Liquid Two-phase Flow Regime Based on Image Multi-feature Fusion and Support Vector Machine (基于图像多特征融合和支持向量机的气液两相流流型识别)

    Institute of Scientific and Technical Information of China (English)

    周云龙; 陈飞; 孙斌

    2008-01-01

    Knowledge of the flow regime is very important for quantifying the pressure drop and the stability and safety of two-phase flow systems. A new method to identify the flow regime in two-phase flow, based on image multi-feature fusion and a support vector machine, is presented. First, images of gas-liquid two-phase flow in a horizontal tube, covering bubbly, plug, slug, stratified, wavy, annular and mist flow, were captured by a digital high-speed video system. Image moment invariants and gray-level co-occurrence matrix texture features were extracted using image processing techniques. To improve the performance of the multiple-classifier system, rough set theory was used to discard inessential features. The support vector machine was then trained on these reduced feature vectors as flow regime samples, realizing intelligent flow regime identification. Test results showed that the reduced image features clearly reflect the differences between the seven typical flow regimes, and that the trained support vector machine quickly and accurately identifies all seven regimes of gas-liquid two-phase flow in the horizontal tube. Image multi-feature fusion provides a new way to identify gas-liquid two-phase flow and achieves higher identification ability than any single feature. The overall identification accuracy was 100%, with an estimated image processing time of 8 ms, enabling online flow regime identification.
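
    A rough sketch of such a pipeline is given below. It is not the authors' code: a few hand-computed central moments stand in for the paper's Hu moment invariants and co-occurrence texture features, the rough-set reduction step is omitted, and the frames and labels are random placeholders.

        import numpy as np
        from sklearn.svm import SVC

        def moment_features(img):
            # a few normalized central moments as crude image features
            y, x = np.mgrid[:img.shape[0], :img.shape[1]]
            m00 = img.sum() + 1e-12
            cx, cy = (x * img).sum() / m00, (y * img).sum() / m00
            mu = lambda p, q: (((x - cx)**p) * ((y - cy)**q) * img).sum() / m00**(1 + (p + q) / 2)
            return np.array([mu(2, 0), mu(0, 2), mu(1, 1), mu(3, 0), mu(0, 3)])

        rng = np.random.default_rng(1)
        frames = rng.random((40, 32, 32))       # placeholder flow images
        labels = rng.integers(0, 7, size=40)    # 7 regimes: bubbly ... mist
        X = np.array([moment_features(f) for f in frames])
        clf = SVC(kernel="rbf", C=10.0).fit(X, labels)
        print("predicted regime:", clf.predict(X[:1]))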

  5. The finite section method and problems in frame theory

    DEFF Research Database (Denmark)

    Christensen, Ole; Strohmer, T.

    2005-01-01

    The finite section method is a convenient tool for approximation of the inverse of certain operators using finite-dimensional matrix techniques. In this paper we demonstrate that the method is very useful in frame theory: it leads to an efficient approximation of the inverse frame operator and also solves related computational problems in frame theory. In the case of a frame which is localized w.r.t. an orthonormal basis we are able to estimate the rate of approximation. The results are applied to the reproducing kernel frame appearing in the theory for shift-invariant spaces generated by a Riesz basis.

  6. Kinetic-theory predictions of clustering instabilities in granular flows: beyond the small-Knudsen-number regime

    Energy Technology Data Exchange (ETDEWEB)

    Mitrano, Peter P.; Zenk, John R.; Benyahia, Sofiane; Galvin, Janine E.; Dahl, Steven R.; Hrenya, Christine M.

    2013-12-04

    In this work we quantitatively assess, via instabilities, a Navier–Stokes-order (small-Knudsen-number) continuum model based on the kinetic theory analogy and applied to inelastic spheres in a homogeneous cooling system. Dissipative collisions are known to give rise to instabilities, namely velocity vortices and particle clusters, for sufficiently large domains. We compare predictions for the critical length scales required for particle clustering obtained from transient simulations using the continuum model with molecular dynamics (MD) simulations. The agreement between continuum simulations and MD simulations is excellent, particularly given the presence of well-developed velocity vortices at the onset of clustering. More specifically, spatial mapping of the local velocity-field Knudsen numbers (Kn) at the time of cluster detection reveals Kn ≫ 1 due to the presence of large velocity gradients associated with vortices. Although kinetic-theory-based continuum models are based on a small-Kn (i.e., small-gradient) assumption, our findings suggest that, similar to molecular gases, Navier–Stokes-order (small-Kn) theories are surprisingly accurate outside their expected range of validity.

  7. Higher-order paraxial theory of the propagation of ring rippled laser beam in plasma: Relativistic ponderomotive regime

    Energy Technology Data Exchange (ETDEWEB)

    Purohit, Gunjan, E-mail: gunjan75@gmail.com; Rawat, Priyanka [Department of Physics, Laser-Plasma Computational Laboratory, DAV PG College, Dehradun, Uttarakhand (India); Chauhan, Prashant [Department of Physics and Material Science and Engineering, Jaypee Institute of Information Technology, Uttar Pradesh (India); Mahmoud, Saleh T. [Department of Physics, College of Science, UAE University, PO Box 17551 Al-Ain (United Arab Emirates)

    2015-05-15

    This article presents a higher-order paraxial (non-paraxial) theory for ring-ripple formation on an intense Gaussian laser beam and its propagation in plasma, taking into account the relativistic-ponderomotive nonlinearity. The intensity-dependent dielectric constant of the plasma has been determined for the main laser beam and for a ring ripple superimposed on it; the dielectric constant is modified by the contribution of the electric field vector of the ring ripple. Nonlinear differential equations have been formulated to examine the growth of the ring ripple in plasma and the self-focusing of the main and ring-rippled laser beams, using higher-order paraxial theory. These equations have been solved numerically for different laser intensities and plasma frequencies, using well-established experimental laser and plasma parameters. It is observed that focusing of the laser beams (main and ring-rippled) becomes faster in the non-paraxial region when the eikonal and other relevant quantities are expanded up to the fourth power of r. A split profile of the laser beam in the plasma is observed, due to uneven focusing/defocusing of the axial and off-axial rays. The growth of the ring ripple increases with laser beam intensity, and the intensity profile of the ring-rippled laser beam is modified by the contribution of the growth rate.

  8. Development of a new IHA method for impact assessment of climate change on flow regime

    Science.gov (United States)

    Yang, Tao; Cui, Tong; Xu, Chong-Yu; Ciais, Philippe; Shi, Pengfei

    2017-09-01

    The Indicators of Hydrologic Alteration (IHA), based on 33 parameters in five dimensions (flow magnitude, timing, duration, frequency and rate of change), have been widely used to evaluate hydrologic alteration in river systems. Yet serious inter-correlation exists among those parameters, which systematically underestimates or overestimates actual hydrological changes. To this end, a new method (Representative-IHA, RIHA) is developed by removing redundant indicators based on the Criteria Importance Through Intercriteria Correlation (CRITIC) algorithm. RIHA is tested by evaluating the effects of future climate change on hydro-ecology in the Niger River of Africa. Future flows are projected using three watershed hydrological models forced by five general circulation models (GCMs) under three Representative Concentration Pathways (RCPs) scenarios. Results show that: (1) RIHA eliminates self-correlations among IHA indicators and identifies the dominant characteristics of hydrological alteration in the Upper Niger River; (2) March, September and December streamflow, the 30-day annual maximum, low-pulse duration and fall rates tend to increase over the period 2010-2099, while July streamflow and the 90-day annual minimum streamflow decrease; (3) the Niger River will undergo moderate flow alteration under RCP8.5 in the 2050s and 2080s and low alteration under the other scenarios; (4) future flow alteration may induce increased water temperatures and reduced dissolved oxygen and food resources. Consequently, the aquatic biodiversity and fish community of the Upper Niger River would become more vulnerable in the future. The new method enables a more rigorous evaluation of multi-dimensional hydrologic alteration under the context of climate change.
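
    The CRITIC weighting step that RIHA builds on can be sketched as follows; this is a generic implementation of the published CRITIC algorithm, with a random indicator matrix standing in for real IHA series (note how a nearly redundant indicator is down-weighted).

        import numpy as np

        def critic_weights(X):
            # CRITIC: weight each indicator by its contrast (standard deviation)
            # times its conflict with the others (sum of 1 - correlation)
            Z = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))
            sigma = Z.std(axis=0, ddof=1)
            R = np.corrcoef(Z, rowvar=False)
            C = sigma * (1.0 - R).sum(axis=0)
            return C / C.sum()

        rng = np.random.default_rng(0)
        X = rng.random((30, 6))                    # rows: years, cols: indicators
        X[:, 1] = X[:, 0] + 0.01 * rng.random(30)  # an almost redundant indicator
        print(critic_weights(X))                   # the redundant pair gets low weight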

  9. Some free boundary problems in potential flow regime using a level-set-based method

    Energy Technology Data Exchange (ETDEWEB)

    Garzon, M.; Bobillo-Ares, N.; Sethian, J.A.

    2008-12-09

    Recent advances in the field of fluid mechanics with moving fronts are linked to the use of Level Set Methods, a versatile mathematical technique to follow free boundaries which undergo topological changes. A challenging class of problems in this context are those related to the solution of a partial differential equation posed on a moving domain, in which the boundary condition for the PDE solver has to be obtained from a partial differential equation defined on the front. This is the case for potential flow models with moving boundaries. Moreover, the fluid front may be carrying some material substance which diffuses in the front and is advected by the front velocity, as for example when surfactants are used to lower surface tension. We present a Level Set based methodology to embed these partial differential equations defined on the front in a complete Eulerian framework, fully avoiding the tracking of fluid particles and its known limitations. To show the advantages of this approach in the field of fluid mechanics, we present one particular application: the numerical approximation of a potential flow model to simulate the evolution and breaking of a solitary wave propagating over a sloping bottom, and a comparison of the level set based algorithm with previous front tracking models.
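
    A minimal Eulerian level set update of the kind described, evolving a front with speed F along its normal direction on a fixed grid with no particle tracking, might look like this (a first-order upwind sketch on an assumed grid, not the authors' wave solver):

        import numpy as np

        n, F = 101, 1.0
        x = np.linspace(-1, 1, n)
        h = x[1] - x[0]
        dt = 0.5 * h
        X, Y = np.meshgrid(x, x)
        phi = np.sqrt(X**2 + Y**2) - 0.8      # signed distance to a circle

        for _ in range(40):                   # evolve phi_t + F |grad phi| = 0
            dxm = (phi - np.roll(phi, 1, axis=1)) / h
            dxp = (np.roll(phi, -1, axis=1) - phi) / h
            dym = (phi - np.roll(phi, 1, axis=0)) / h
            dyp = (np.roll(phi, -1, axis=0) - phi) / h
            grad = np.sqrt(np.maximum(dxm, 0)**2 + np.minimum(dxp, 0)**2
                           + np.maximum(dym, 0)**2 + np.minimum(dyp, 0)**2)
            phi = phi - dt * F * grad         # front shrinks at unit normal speed

        print("front radius ~", np.interp(0.0, phi[n // 2, n // 2:], x[n // 2:]))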

  10. Theory of quasi-elastic secondary emission from a quantum dot in the regime of vibrational resonance.

    Science.gov (United States)

    Rukhlenko, Ivan D; Fedorov, Anatoly V; Baymuratov, Anvar S; Premaratne, Malin

    2011-08-01

    We develop a low-temperature theory of quasi-elastic secondary emission from a semiconductor quantum dot whose electronic subsystem is resonant with the confined longitudinal-optical (LO) phonon modes. Our theory employs a generalized model for renormalization of the quantum dot's energy spectrum, which is induced by the polar electron-phonon interaction. The model takes into account the degeneracy of electronic states and allows for several LO-phonon modes to be involved in the vibrational resonance. We give solutions to three fundamental problems of energy-spectrum renormalization, arising when one, two, or three LO-phonon modes resonantly couple a pair of electronic states, and discuss the most general problem of this kind that admits an analytical solution. With these results, we solve the generalized master equation for the reduced density matrix in order to derive an expression for the differential cross section of secondary emission from a single quantum dot. The obtained expression is then analyzed to establish the basics of optical spectroscopy for measuring fundamental parameters of the quantum dot's polaron-like states.

  11. BASIC THEORY AND METHOD OF WELDING ARC SPECTRAL INFORMATION

    Institute of Scientific and Technical Information of China (English)

    Li Junyue; Li Zhiyong; Li Huan; Xue Haitao

    2004-01-01

    Arc spectral information is an emerging information source which can solve many problems that cannot be addressed with arc electrical information and other arc information, and it is of great significance for developing automatic control techniques for the welding process. The basic theory and methods of arc spectral information play an important role in expounding and applying it. Using the relevant equations of plasma physics and spectral theory, a system of 12 equations serving as the basic theory of arc spectral information is set up. Analysis of these 12 equations leads to the basic view that arc spectral information reflects the arc state and its variation, and is the most abundant information resource describing the welding arc process. Furthermore, based on this theory, the basic methods for measuring and controlling arc spectral information are discussed and some applications are pointed out.

  12. Multigrid methods for propagators in lattice gauge theories

    CERN Document Server

    Kalkreuter, T

    1994-01-01

    Multigrid methods were invented for the solution of discretized partial differential equations in ordered systems. The slowness of traditional algorithms is overcome by updates on various length scales. In this article we discuss generalizations of multigrid methods for disordered systems, in particular for propagators in lattice gauge theories. A discretized nonabelian gauge theory can be formulated as a system of statistical mechanics where the gauge field degrees of freedom are SU(N) matrices on the links of the lattice. These SU(N) matrices appear as random coefficients in Dirac equations. We aim at finding an efficient method by which one can solve Dirac equations without critical slowing down. If this could be achieved, Monte Carlo simulations of Quantum Chromodynamics (the theory of the strong interaction) would be accelerated considerably. In principle, however, the methods discussed can be used in arbitrary space-time dimension and for arbitrary gauge group. Moreover, there are applications in multig...
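
    The core idea, smoothing short-wavelength error on the fine grid and removing the smooth remainder on a coarser grid, can be shown on the simplest possible analogue, a 1D Poisson problem. This two-grid sketch is our illustration and is far simpler than the gauge-field propagator solvers discussed in the article.

        import numpy as np

        def jacobi(u, f, h, sweeps, w=2/3):
            # weighted-Jacobi smoothing for -u'' = f with Dirichlet boundaries
            for _ in range(sweeps):
                u[1:-1] = (1 - w) * u[1:-1] + w * 0.5 * (u[:-2] + u[2:] + h * h * f[1:-1])
            return u

        def two_grid(u, f, h):
            u = jacobi(u, f, h, 3)                                        # pre-smooth
            r = np.zeros_like(u)
            r[1:-1] = f[1:-1] + (u[:-2] - 2 * u[1:-1] + u[2:]) / h**2     # residual
            rc = r[::2].copy()                                            # restrict
            ec = jacobi(np.zeros_like(rc), rc, 2 * h, 50)                 # coarse solve
            e = np.interp(np.arange(len(u)), np.arange(0, len(u), 2), ec) # prolong
            return jacobi(u + e, f, h, 3)                                 # post-smooth

        n = 129
        h = 1.0 / (n - 1)
        x = np.linspace(0, 1, n)
        f = np.pi**2 * np.sin(np.pi * x)      # exact solution: sin(pi x)
        u = np.zeros(n)
        for _ in range(10):
            u = two_grid(u, f, h)
        print("max error:", np.abs(u - np.sin(np.pi * x)).max())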

  13. The Algebra Theory for PolynomialInterpolation Method

    Institute of Scientific and Technical Information of China (English)

    2015-01-01

    In this paper, several commonly used polynomial interpolation methods are explained from the viewpoint of vector bases and dimension in linear algebra theory. Using transition matrices, general conversion formulas between the basis function sets of these polynomial interpolation methods are given. An example shows the effectiveness of the results.

  14. Proceedings First Workshop on Quantitative Formal Methods: Theory and Applications

    CERN Document Server

    Andova, Suzana; D'Argenio, Pedro; Cuijpers, Pieter; Markovski, Jasen; Morgan, Caroll; Núñez, Manuel

    2009-01-01

    This volume contains the papers presented at the 1st workshop on Quantitative Formal Methods: Theory and Applications, which was held in Eindhoven on 3 November 2009 as part of the International Symposium on Formal Methods 2009. This volume contains the final versions of all contributions accepted for presentation at the workshop.

  15. A Method for Dispersion Compensation Based on GLM Theory

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    A method used to design the waveguide gratings for dispersion compensation employing GLM theory is briefly described. By using this method a reflective grating is designed, which has both a flat amplitude and a quadratic phase response over the transfer bandwidth.

  16. Correlation theory-based signal processing method for CMF signals

    Science.gov (United States)

    Shen, Yan-lin; Tu, Ya-qing

    2016-06-01

    The signal processing precision of Coriolis mass flowmeter (CMF) signals directly affects the measurement accuracy of Coriolis mass flowmeters. To improve this accuracy, a correlation theory-based signal processing method for CMF signals is proposed, comprising a correlation theory-based frequency estimation method and a phase difference estimation method. Theoretical analysis shows that the proposed method eliminates the effect of non-integral-period sampling on frequency and phase difference estimation. Simulations and field experiments demonstrate that the proposed method improves the anti-interference performance of frequency and phase difference estimation. For frequency estimation it outperforms the adaptive notch filter, discrete Fourier transform and autocorrelation methods; for phase difference estimation it outperforms the data extension-based correlation, Hilbert transform, quadrature delay estimator and discrete Fourier transform methods. This contributes to improving the measurement accuracy of Coriolis mass flowmeters.
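
    A bare-bones illustration in the same spirit, estimating the common frequency from the spectral peak and the phase difference from the cross-spectrum at that peak (a simplification with assumed sampling rate and test frequency, not the authors' exact correlation estimator):

        import numpy as np

        fs, f0, dphi_true = 5000.0, 147.3, 0.02   # assumed values
        t = np.arange(4096) / fs
        s1 = np.sin(2 * np.pi * f0 * t) + 0.01 * np.random.randn(t.size)
        s2 = np.sin(2 * np.pi * f0 * t + dphi_true) + 0.01 * np.random.randn(t.size)

        w = np.hanning(t.size)                     # window to reduce leakage
        S1, S2 = np.fft.rfft(s1 * w), np.fft.rfft(s2 * w)
        k = np.argmax(np.abs(S1))                  # spectral peak bin
        f_est = k * fs / t.size
        dphi_est = np.angle(S2[k] * np.conj(S1[k]))  # cross-spectrum phase
        print(f_est, dphi_est)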

  17. Between the theory and method: the interpretation of the theory of Emilia Ferreiro for literacy

    Directory of Open Access Journals (Sweden)

    Fernanda Cargnin Gonçalves

    2008-12-01

    Full Text Available This article aims to show the difficulty that first-grade teachers at a municipal public school in Florianopolis/SC have in understanding the theory of Emilia Ferreiro. It presents Ferreiro's actual theory as described in her book "Psicogênese da Língua Escrita" ("Psychogenesis of Written Language"), co-authored with Teberosky, and the interpretations of literacy observed in the teachers' practice. It also presents options for classroom work, based on essays, for teaching a child without labeling students by literacy phase, showing what is possible without turning the theory into a teaching method.

  18. Theory and Method for Identifying Well Water Level Anomalies in a Groundwater Overdraft Area

    Institute of Scientific and Technical Information of China (English)

    Zhang Suxin; Zhang Ziguang; Ren Xiaoxia; Wang Xiang

    2007-01-01

    The overexploitation of groundwater leads to continuous drawdown of groundwater levels, changes in water quality and the drying-up of dynamic water level observation wells. Due to land subsidence, well pipes uplift and the observation piping systems are damaged. These environmental geology problems can present serious difficulties for the identification of earthquake anomalies from groundwater level observations. Based on hydrogeological theories and methods, the paper analyzes the relations of the water balance state of aquifers with stress-strain conditions and the water level regime, and then preliminarily discusses the theory and method for identifying well water level anomalies in a groundwater overdraft area. The results show that the nature of an anomaly can be judged accurately from the diffusion character of the drawdown funnel in the well area, in combination with the aforementioned theory and method and multi-year variation patterns obtained from existing data. The results are helpful for distinguishing the influence of single centralized water pumping from that of long-term overdraft on the water level, correctly recognizing water level anomalies in groundwater overdraft areas and improving the level of earthquake analysis and prediction.

  19. Numerical methods for the sign problem in Lattice Field Theory

    CERN Document Server

    Bongiovanni, Lorenzo

    2016-01-01

    The great majority of algorithms employed in the study of lattice field theory are based on Monte Carlo importance sampling, i.e. on a probability interpretation of the Boltzmann weight. Unfortunately, in many theories of interest one cannot associate a real and positive weight to every configuration, because the action is explicitly complex or because the weight is multiplied by some non-positive term. In these cases one says that the theory on the lattice is affected by the sign problem. An outstanding example of the sign problem preventing a quantum field theory from being studied is QCD at finite chemical potential. Whenever the sign problem is present, standard Monte Carlo methods are problematic to apply and, in general, new approaches are needed to explore the phase diagram of the complex theory. Here we review three of the main candidate methods for dealing with the sign problem, namely complex Langevin dynamics, Lefschetz thimbles and the density of states method. We will first study complex Lan...

  20. Superfluid phase transition and effects of mass imbalance in the BCS-BEC crossover regime of an ultracold Fermi gas: A self-consistent T-matrix theory

    Science.gov (United States)

    Hanai, Ryo; Ohashi, Yoji

    2014-03-01

    We investigate a two-component Fermi gas with mass imbalance (m↑ ≠ m↓, where mσ is the atomic mass of the σ-component) in the BCS-BEC crossover region. Including pairing fluctuations within a self-consistent T-matrix theory, we examine how the superfluid instability is affected by the presence of mass imbalance. We determine the superfluid region in the phase diagram of a Fermi gas in terms of the temperature, the strength of the pairing interaction, and the ratio of mass imbalance. The superfluid phase transition is shown to always occur even when m↑ ≠ m↓ [2]. This behavior of Tc is quite different from the previous result in an extended T-matrix theory [3], where Tc vanishes at a certain value of m↑/m↓ > 0 in the BCS regime. Since Fermi condensates with mass imbalance have been discussed in various systems, such as cold Fermi gases, exciton (polariton) condensates, and color superconductivity, our results would be useful for further understanding of these novel Fermi superfluids. R.H. was supported by the Graduate School Doctoral Student Aid Program, Keio University.

  1. A density gradient theory based method for surface tension calculations

    DEFF Research Database (Denmark)

    Liang, Xiaodong; Michelsen, Michael Locht; Kontogeorgis, Georgios

    2016-01-01

    The density gradient theory has become a widely used framework for calculating surface tension, within which the same equation of state is used for the interface and the bulk phases, because it is a theoretically sound, consistent and computationally affordable approach. Based on the observation that the optimal density path from the geometric-mean density gradient theory passes through the saddle point of the tangent plane distance to the bulk phases, we propose to estimate surface tension with an approximate density path profile that goes through this saddle point. The linear density gradient theory, which assumes linearly distributed densities between the two bulk phases, has also been investigated. Numerical problems do not occur with these density path profiles. These two approximation methods, together with the full density gradient theory, have been used to calculate the surface tension of various...

  2. Method and Theory in the Study of Religion

    DEFF Research Database (Denmark)

    Geertz, Armin W

    2007-01-01

    An introduction to debates on method and theory in the study of religion as a prelude to papers read at a panel on the subject during the XIXth World Congress of the International Association for the History of Religions, March 24-30, 2005 in Tokyo.

  3. Theory and design methods of special space orbits

    CERN Document Server

    Zhang, Yasheng; Zhou, Haijun

    2017-01-01

    This book focuses on the theory and design of special space orbits. Offering a systematic and detailed introduction to the hovering orbit, spiral cruising orbit, multi-target rendezvous orbit, initiative approaching orbit, responsive orbit and earth pole-sitter orbit, it also discusses the concept, theory, design methods and applications of special space orbits, particularly design and control methods based on kinematics and astrodynamics. In addition, the book presents the latest research and its application in space missions. It is intended for researchers, engineers and postgraduates, especially those working in the fields of orbit design and control, as well as space-mission planning and research.

  4. Detecting spatial regimes in ecosystems

    Science.gov (United States)

    Sundstrom, Shana M.; Eason, Tarsha; Nelson, R. John; Angeler, David G.; Barichievy, Chris; Garmestani, Ahjond S.; Graham, Nicholas A.J.; Granholm, Dean; Gunderson, Lance; Knutson, Melinda; Nash, Kirsty L.; Spanbauer, Trisha; Stow, Craig A.; Allen, Craig R.

    2017-01-01

    Research on early warning indicators has generally focused on assessing temporal transitions with limited application of these methods to detecting spatial regimes. Traditional spatial boundary detection procedures that result in ecoregion maps are typically based on ecological potential (i.e. potential vegetation), and often fail to account for ongoing changes due to stressors such as land use change and climate change and their effects on plant and animal communities. We use Fisher information, an information theory-based method, on both terrestrial and aquatic animal data (U.S. Breeding Bird Survey and marine zooplankton) to identify ecological boundaries, and compare our results to traditional early warning indicators, conventional ecoregion maps and multivariate analyses such as nMDS and cluster analysis. We successfully detected spatial regimes and transitions in both terrestrial and aquatic systems using Fisher information. Furthermore, Fisher information provided explicit spatial information about community change that is absent from other multivariate approaches. Our results suggest that defining spatial regimes based on animal communities may better reflect ecological reality than do traditional ecoregion maps, especially in our current era of rapid and unpredictable ecological change.

  5. A method for characterizing late-season low-flow regime in the upper Grand Ronde River Basin, Oregon

    Science.gov (United States)

    Kelly, Valerie J.; White, Seth

    2016-04-19

    This report describes a method for estimating ecologically relevant low-flow metrics that quantify late‑season streamflow regime for ungaged sites in the upper Grande Ronde River Basin, Oregon. The analysis presented here focuses on sites sampled by the Columbia River Inter‑Tribal Fish Commission as part of their efforts to monitor habitat restoration to benefit spring Chinook salmon recovery in the basin. Streamflow data were provided by the U.S. Geological Survey and the Oregon Water Resources Department. Specific guidance was provided for selection of streamgages, development of probabilistic frequency distributions for annual 7-day low-flow events, and regionalization of the frequency curves based on multivariate analysis of watershed characteristics. Evaluation of the uncertainty associated with the various components of this protocol indicates that the results are reliable for the intended purpose of hydrologic classification to support ecological analysis of factors contributing to juvenile salmon success. They should not be considered suitable for more standard water-resource evaluations that require greater precision, especially those focused on management and forecasting of extreme low-flow conditions.
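
    The central metric, the annual 7-day low flow, reduces to a rolling mean and a per-year minimum; the sketch below computes it for synthetic daily data (the gamma-noise series and the Weibull plotting position are illustrative, not the report's regional frequency procedure):

        import numpy as np
        import pandas as pd

        rng = np.random.default_rng(0)
        idx = pd.date_range("1990-01-01", "2009-12-31", freq="D")
        q = pd.Series(5 + 3 * np.sin(2 * np.pi * idx.dayofyear / 365)
                      + rng.gamma(2, 1, idx.size), index=idx)   # synthetic discharge

        q7 = q.rolling(7).mean()                        # 7-day moving average
        low7 = q7.groupby(q7.index.year).min()          # annual 7-day low flows
        ranked = low7.sort_values()
        prob = np.arange(1, len(ranked) + 1) / (len(ranked) + 1)  # plotting position
        print(pd.DataFrame({"q7_min": ranked.values, "P_nonexceed": prob}))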

  6. Calibrated Probabilistic Forecasting at the Stateline Wind Energy Center: The Regime-Switching Space-Time (RST) Method

    Science.gov (United States)

    2004-09-01

    The Columbia Gorge gap flow plays a profound role in defining the weather and climate within and near the Gorge, which is one of the windiest... In the warm season the subtropical ridge over the eastern Pacific Ocean moves north, resulting in higher surface pressure offshore; easterly gap flow is... (Sharp and Mass 2002, 200x). The alternation of westerly and easterly gap flow suggests the postulation of two forecast regimes, a westerly regime and an easterly one.

  7. Heat kernel methods for Lifshitz theories arXiv

    CERN Document Server

    Barvinsky, Andrei O.; Herrero-Valea, Mario; Nesterov, Dmitry V.; Pérez-Nadal, Guillem; Steinwachs, Christian F.

    We study the one-loop covariant effective action of Lifshitz theories using the heat kernel technique. The characteristic feature of Lifshitz theories is an anisotropic scaling between space and time. This is enforced by the existence of a preferred foliation of space-time, which breaks Lorentz invariance. In contrast to the relativistic case, covariant Lifshitz theories are only invariant under diffeomorphisms preserving the foliation structure. We develop a systematic method to reduce the calculation of the effective action for a generic Lifshitz operator to an algorithm acting on known results for relativistic operators. In addition, we present techniques that drastically simplify the calculation for operators with special properties. We demonstrate the efficiency of these methods by explicit applications.

  8. Flexible and generalized uncertainty optimization theory and methods

    CERN Document Server

    Lodwick, Weldon A

    2017-01-01

    This book presents the theory and methods of flexible and generalized uncertainty optimization. In particular, it describes the theory of generalized uncertainty in the context of optimization modeling. The book starts with an overview of flexible and generalized uncertainty optimization. It covers uncertainties that are associated with lack of information and that are more general than those of stochastic theory, where well-defined distributions are assumed. Starting from families of distributions that are enclosed by upper and lower functions, the book presents construction methods for obtaining flexible and generalized uncertainty input data that can be used in a flexible and generalized uncertainty optimization model. It then describes the development of such a model in detail. All in all, the book provides readers with the necessary background to understand flexible and generalized uncertainty optimization and develop their own optimization models.

  9. Theory of and Method for Nontraditional Mining Assessment

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    NONTRADITIONAL MINING ASSESSMENT THEORY. Nontraditional mineral resources refers to the potential mineral resources that are ignored, undiscovered or unutilized under present technical, economic and environmental conditions. The research scope can be listed as follows: (1) Nontraditional mineral resources refer to new types, new depths, new scopes, new techniques and new utilization. (2) Nontraditional theories and methods include new theories, new technologies and new methods in the aspects of ore-forming, prospecting, mining, metallurgy and mining assessment, such as nontraditional ore-forming prediction. (3) Nontraditional mining refers to clean and unpolluted mining, intensive mining, high value-added mining, high-technology mining, post-mining economy and comprehensive-service mining.

  10. Razumikhin's method in the qualitative theory of processes with delay

    Directory of Open Access Journals (Sweden)

    Anatoly D. Myshkis

    1995-01-01

    Full Text Available B.S. Razumikhin's concept in the qualitative theory of systems with delay is clarified and discussed. Various ways of improving stability conditions are considered. The author shows the guiding role of Lyapunov functions and demonstrates that Razumikhin's method is, in practice, a continuous version of mathematical induction. Several examples demonstrate the obtained results.

  11. The Role of Method and Theory in the IAHR

    DEFF Research Database (Denmark)

    2016-01-01

    A reprint with a new "afterword" of an article published in 2000 in the anthology Perspectives on Method and Theory in the Study of Religion, edited by Armin W. Geertz and Russell T. McCutcheon, Brill, 2000, 3-37....

  12. The Constant Comparative Analysis Method Outside of Grounded Theory

    Science.gov (United States)

    Fram, Sheila M.

    2013-01-01

    This commentary addresses the gap in the literature regarding discussion of the legitimate use of Constant Comparative Analysis Method (CCA) outside of Grounded Theory. The purpose is to show the strength of using CCA to maintain the emic perspective and how theoretical frameworks can maintain the etic perspective throughout the analysis. My…

  13. The Relation between Sociocultural Theory and Teaching Methods

    Institute of Scientific and Technical Information of China (English)

    杨帆

    2013-01-01

    Up to now, many theories about second language acquisition (SLA) have been developed, namely behaviorist theories, innatist theories, psychological theories, and interactionist theories. Among these, the interactionist theories, especially Vygotsky's sociocultural theory, are most in line with real teaching philosophy.

  14. Automated Methods in Chiral Perturbation Theory on the Lattice

    CERN Document Server

    Borasoy, B; Krebs, H; Lewis, R; Borasoy, Bugra; Hippel, Georg M. von; Krebs, Hermann; Lewis, Randy

    2005-01-01

    We present a method to automatically derive the Feynman rules for mesonic chiral perturbation theory with a lattice regulator. The Feynman rules can be output both in a human-readable format and in a form suitable for an automated numerical evaluation of lattice Feynman diagrams. The automated method significantly simplifies working with improved or extended actions. Some applications to the study of finite-volume effects will be presented.

  15. Regression modeling methods, theory, and computation with SAS

    CERN Document Server

    Panik, Michael

    2009-01-01

    Regression Modeling: Methods, Theory, and Computation with SAS provides an introduction to a diverse assortment of regression techniques using SAS to solve a wide variety of regression problems. The author fully documents the SAS programs and thoroughly explains the output produced by the programs.The text presents the popular ordinary least squares (OLS) approach before introducing many alternative regression methods. It covers nonparametric regression, logistic regression (including Poisson regression), Bayesian regression, robust regression, fuzzy regression, random coefficients regression,

  16. A novel scheme of hybrid entanglement swapping and teleportation using cavity QED in the small and large detuning regimes and quasi-Bell state measurement method

    Science.gov (United States)

    Pakniat, R.; Tavassoly, M. K.; Zandi, M. H.

    2016-10-01

    We outline a scheme for entanglement swapping based on cavity QED as well as quasi-Bell state measurement (quasi-BSM) methods. The atom-field interaction in the cavity QED method is performed in the small and large detuning regimes. We assume that two atoms are initially entangled with each other and, separately, that two cavities are prepared in an entangled coherent-coherent state. In this scheme, we want to transfer the entanglement to the atom-field system. It is observed that the fidelities of the swapped entangled state in the quasi-BSM method can be compatible with those obtained in the small and large detuning regimes of the cavity QED method (the condition for this compatibility is discussed). In addition, in the large detuning regime, the swapped entangled state is obtained by detection as well as quasi-BSM approaches. Finally, making use of the atom-field entangled state obtained with both approaches in the large detuning regime, we show that teleportation of atomic as well as field states with complete fidelity can be achieved.

  17. Multicomponent and multiscale systems theory, methods, and applications in engineering

    CERN Document Server

    Geiser, Juergen

    2016-01-01

    This book examines the latest research results from combined multi-component and multi-scale explorations. It provides theory, considers underlying numerical methods, and presents brilliant computational experimentation. Engineering computations featured in this monograph further offer particular interest to many researchers, engineers, and computational scientists working in frontier modeling and applications of multicomponent and multiscale problems. Professor Geiser gives specific attention to the aspects of decomposing and splitting delicate structures and controlling decomposition and the rationale behind many important applications of multi-component and multi-scale analysis. Multicomponent and Multiscale Systems: Theory, Methods, and Applications in Engineering also considers the question of why iterative methods can be powerful and more appropriate for well-balanced multiscale and multicomponent coupled nonlinear problems. The book is ideal for engineers and scientists working in theoretical and a...

  18. Resilience of river flow regimes.

    Science.gov (United States)

    Botter, Gianluca; Basso, Stefano; Rodriguez-Iturbe, Ignacio; Rinaldo, Andrea

    2013-08-06

    Landscape and climate alterations foreshadow global-scale shifts of river flow regimes. However, a theory that identifies the range of foreseen impacts on streamflows resulting from inhomogeneous forcings and sensitivity gradients across diverse regimes is lacking. Here, we derive a measurable index embedding climate and landscape attributes (the ratio of the mean interarrival of streamflow-producing rainfall events and the mean catchment response time) that discriminates erratic regimes with enhanced intraseasonal streamflow variability from persistent regimes endowed with regular flow patterns. Theoretical and empirical data show that erratic hydrological regimes typical of rivers with low mean discharges are resilient in that they hold a reduced sensitivity to climate fluctuations. The distinction between erratic and persistent regimes provides a robust framework for characterizing the hydrology of freshwater ecosystems and improving water management strategies in times of global change.
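
    The index described above is a ratio of two time scales; given a record of streamflow-producing rainfall events and an estimate of the catchment response time, it can be computed directly. The sketch below uses a synthetic event series; the event probability, response time and the unit threshold separating erratic from persistent regimes are assumptions for illustration only.

        import numpy as np

        rng = np.random.default_rng(2)
        rain = rng.random(365) < 0.15            # daily flow-producing event indicator
        event_days = np.flatnonzero(rain)
        mean_interarrival = np.diff(event_days).mean()   # days between events
        k_response = 10.0                        # assumed mean catchment response time

        index = mean_interarrival / k_response
        print("regime index:", round(index, 2),
              "->", "erratic" if index > 1 else "persistent")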

  19. Regimes internacionais

    OpenAIRE

    Meireles, André Bezerra

    2004-01-01

    Dissertation (master's) - Universidade Federal de Santa Catarina, Centro de Ciências Jurídicas, Programa de Pós-Graduação em Direito. What is the role of international regimes with respect to the behavior of the agents of contemporary international relations, especially the State? Within negotiations in an international sphere characterized by strong economic interdependence, given the existence of multiple channels of connection between societies and a continuing tendency...

  20. Mathematical methods of many-body quantum field theory

    CERN Document Server

    Lehmann, Detlef

    2004-01-01

    Mathematical Methods of Many-Body Quantum Field Theory offers a comprehensive, mathematically rigorous treatment of many-body physics. It develops the mathematical tools for describing quantum many-body systems and applies them to the many-electron system. These tools include the formalism of second quantization, field theoretical perturbation theory, functional integral methods, bosonic and fermionic, and estimation and summation techniques for Feynman diagrams. Among the physical effects discussed in this context are BCS superconductivity, s-wave and higher l-wave, and the fractional quantum Hall effect. While the presentation is mathematically rigorous, the author does not focus solely on precise definitions and proofs, but also shows how to actually perform the computations.Presenting many recent advances and clarifying difficult concepts, this book provides the background, results, and detail needed to further explore the issue of when the standard approximation schemes in this field actually work and wh...

  1. Advanced methods for scattering amplitudes in gauge theories

    Energy Technology Data Exchange (ETDEWEB)

    Peraro, Tiziano

    2014-09-24

    We present new techniques for the evaluation of multi-loop scattering amplitudes and their application to gauge theories, with relevance to the Standard Model phenomenology. We define a mathematical framework for the multi-loop integrand reduction of arbitrary diagrams, and elaborate algebraic approaches, such as the Laurent expansion method, implemented in the software Ninja, and the multivariate polynomial division technique by means of Groebner bases.

  2. Transfinite methods in metric fixed-point theory

    Directory of Open Access Journals (Sweden)

    W. A. Kirk

    2003-01-01

    Full Text Available This is a brief survey of the use of transfinite induction in metric fixed-point theory. Among the results discussed in some detail are the author's 1989 result on directionally nonexpansive mappings (which is somewhat sharpened), a result of Kulesza and Lim giving conditions under which countable compactness implies compactness, a recent inwardness result for contractions due to Lim, and a recent extension of Caristi's theorem due to Saliga and the author. In each instance, transfinite methods seem necessary.

  3. Theory and Methods for Supporting High Level Military Decisionmaking

    Science.gov (United States)

    2007-01-01

    (Gompert and Kugler, 1996; Davis, 2002a). The relationship between defense applications and finance is more metaphorical than mathematical. ... The problem can be summarized as the fractal problem: describing objectives, strategies, tactics, and tasks is a fractal matter, i.e., the concepts apply and are needed at each level, whether that of the president or the theater commander...

  4. Applications of Symmetry Methods to the Theory of Plasma Physics

    OpenAIRE

    Giampaolo Cicogna; Francesco Ceccherini; Francesco Pegoraro

    2006-01-01

    The theory of plasma physics offers a number of nontrivial examples of partial differential equations, which can be successfully treated with symmetry methods. We propose three different examples which may illustrate the reciprocal advantage of this "interaction" between plasma physics and symmetry techniques. The examples include, in particular, the complete symmetry analysis of system of two PDE's, with the determination of some conditional and partial symmetries, the construction of group-...

  5. Tools of the trade: theory and method in mindfulness neuroscience.

    Science.gov (United States)

    Tang, Yi-Yuan; Posner, Michael I

    2013-01-01

    Mindfulness neuroscience is an emerging research field that investigates the underlying mechanisms of different mindfulness practices, different stages and different states of practice as well as different effects of practice over the lifespan. Mindfulness neuroscience research integrates theory and methods from eastern contemplative traditions, western psychology and neuroscience, and from neuroimaging techniques, physiological measures and behavioral tests. We here review several key theoretical and methodological challenges in the empirical study of mindfulness neuroscience and provide suggestions for overcoming these challenges.

  6. Terrestrial Water Storage in African Hydrological Regimes Derived from GRACE Mission Data: Intercomparison of Spherical Harmonics, Mass Concentration, and Scalar Slepian Methods

    Directory of Open Access Journals (Sweden)

    Ashraf Rateb

    2017-03-01

    Full Text Available Spherical harmonics (SH) and mascon solutions are the two most common types of solutions for Gravity Recovery and Climate Experiment (GRACE) mass flux observations. However, SH signals are degraded by measurement and leakage errors. Mascon solutions (the Jet Propulsion Laboratory (JPL) release herein) exhibit weakened signals at submascon resolutions. Both solutions require a scale factor, examined with the CLM4.0 model, to obtain the actual water storage signal. The Slepian localization method can avoid the SH leakage errors when applied at the basin scale. In this study, we estimate SH errors and scale factors for African hydrological regimes. Then, terrestrial water storage (TWS) in Africa is determined based on Slepian localization and compared with the JPL-mascon and SH solutions. The three TWS estimates show good agreement for the TWS of large and humid regimes but present discrepancies for medium-sized and small regimes. Slepian localization is an effective method for deriving the TWS of arid zones. The TWS behavior of African regimes and its spatiotemporal variations are then examined. The negative TWS trends in the lower Nile and Sahara, at −1.08 and −6.92 Gt/year, respectively, are higher than those previously reported.

  7. The FETI Level 1 Method: Theory and Implementation

    Energy Technology Data Exchange (ETDEWEB)

    Kamath, C.

    2000-03-01

    This report summarizes our experiences in developing a prototype serial code for the implementation of the Level 1 Finite Element Tearing and Interconnecting (FETI) method. This method is a non-overlapping domain-decomposition scheme for the parallel solution of ill-conditioned systems of linear equations arising in structural mechanics problems. The FETI method has been shown to be numerically scalable for second order elasticity and fourth order plate and shell problems. In this report, we first outline the theory underlying the FETI method and discuss the approaches taken to improve the robustness and convergence of the method. We next provide implementation details, focusing on our serial prototype code. Finally, we present experimental results, followed by a summary of our observations.

  8. Scaling Fire Regimes in Space and Time.

    Science.gov (United States)

    Falk, D. A.

    2004-12-01

    Spatial and temporal variability are important properties of the forest fire regimes of coniferous forests of southwestern North America. We use a variety of analytical techniques to examine scaling in a surface fire regime in the Jemez Mountains of northern New Mexico, USA, based on an original data set collected from Monument Canyon Research Natural Area (MCN). Spatio-temporal scale dependence in the fire regime can be analyzed quantitatively using statistical descriptors of the fire regime, such as fire frequency and mean fire interval. We describe a theory of the event-area (EA) relationship, an extension of the species-area relationship for events distributed in space and time; the interval-area (IA) relationship is a related form for fire intervals. We use the EA and IA to demonstrate scale dependence in the MCN fire regime. The slope and intercept of these functions are influenced by fire size, frequency, and spatial distribution, and thus are potentially useful metrics of spatio-temporal synchrony of events in the paleofire record. Second, we outline a theory of fire interval probability, working from first principles in fire ecology and statistics. Fires are conditional events resulting from the interaction of multiple contingent factors that must be satisfied for an event to occur. Outcomes of this kind represent a multiplicative process for which a lognormal model is the limiting distribution. We examine the application of this framework to two probability models, the Weibull and lognormal distributions, which can be used to characterize the distribution of fire intervals over time. Lastly, we present a general model for the collector's curve, with application to the theory and effects of sample size in fire history. Sources of uncertainty in fire history can be partitioned into an error typology; analytical methods used in fire history (particularly the formation of composite fire records) are designed to minimize certain types of error in inference
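
    The two candidate interval models named above can be fit and compared on an interval record in a few lines; here a synthetic lognormal sample stands in for the MCN fire-interval data.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(3)
        intervals = rng.lognormal(mean=2.3, sigma=0.5, size=60)  # stand-in intervals (yr)

        wb = stats.weibull_min.fit(intervals, floc=0)    # fit Weibull (location fixed)
        ln = stats.lognorm.fit(intervals, floc=0)        # fit lognormal
        ll_wb = stats.weibull_min.logpdf(intervals, *wb).sum()
        ll_ln = stats.lognorm.logpdf(intervals, *ln).sum()
        print("Weibull logL:", ll_wb, " lognormal logL:", ll_ln)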

  9. Application of semiclassical methods to reaction rate theory

    Energy Technology Data Exchange (ETDEWEB)

    Hernandez, R.

    1993-11-01

    This work is concerned with the development of approximate methods to describe relatively large chemical systems. This effort has been divided into two primary directions: First, we have extended and applied a semiclassical transition state theory (SCTST) originally proposed by Miller to obtain microcanonical and canonical (thermal) rates for chemical reactions described by a nonseparable Hamiltonian, i.e. most reactions. Second, we have developed a method to describe the fluctuations of decay rates of individual energy states from the average RRKM rate in systems where the direct calculation of individual rates would be impossible. Combined with the semiclassical theory, this latter effort has provided a direct comparison to the experimental results of Moore and coworkers. In SCTST, the Hamiltonian is expanded about the barrier and the "good" action-angle variables are obtained perturbatively; a WKB analysis of the effectively one-dimensional reactive direction then provides the transmission probabilities. The advantages of this local approximate treatment are that it includes tunneling effects and anharmonicity, and it systematically provides a multi-dimensional dividing surface in phase space. The SCTST thermal rate expression has been reformulated providing increased numerical efficiency (as compared to a naive Boltzmann average), an appealing link to conventional transition state theory (involving a "prereactive" partition function depending on the action of the reactive mode), and the ability to go beyond the perturbative approximation.

  10. Variational methods in electron-atom scattering theory

    CERN Document Server

    Nesbet, Robert K

    1980-01-01

    The investigation of scattering phenomena is a major theme of modern physics. A scattered particle provides a dynamical probe of the target system. The practical problem of interest here is the scattering of a low-energy electron by an N-electron atom. It has been difficult in this area of study to achieve theoretical results that are even qualitatively correct, yet quantitative accuracy is often needed as an adjunct to experiment. The present book describes a quantitative theoretical method, or class of methods, that has been applied effectively to this problem. Quantum mechanical theory relevant to the scattering of an electron by an N-electron atom, which may gain or lose energy in the process, is summarized in Chapter 1. The variational theory itself is presented in Chapter 2, both as currently used and in forms that may facilitate future applications. The theory of multichannel resonance and threshold effects, which provide a rich structure to observed electron-atom scattering data, is presented in Cha...

  11. Novel welding image processing method based on fractal theory

    Institute of Scientific and Technical Information of China (English)

    陈强; 孙振国; 肖勇; 路井荣

    2002-01-01

    Computer vision has come into use in the fields of welding process control and automation. In order to improve the precision and speed of welding image processing, a novel method based on fractal theory is put forward in this paper. In contrast to traditional methods, the image is first processed coarsely in macroscopic regions and then thoroughly analyzed in microscopic regions. The image is divided into regions according to the different fractal characteristics of the image edges, the fuzzy regions containing image edges are detected, and the edges are then identified with the Sobel operator and fitted by the least squares method (LSM). Since the amount of data to be processed is decreased and image noise is reduced, experiments have verified that the edges of the weld seam or weld pool can be recognized correctly and quickly.

  12. Similarity theory based method for MEMS dynamics analysis

    Institute of Scientific and Technical Information of China (English)

    LI Gui-xian; PENG Yun-feng; ZHANG Xin

    2008-01-01

    A new method for MEMS dynamics analysis is presented, based on similarity theory. With this method, the similarity of two systems can be captured in terms of physical quantities and governing equations across different energy fields, and the unknown dynamic characteristics of one system can then be analyzed from the similar ones of the other system. The possibility of establishing a pair of similar systems between MEMS and other energy systems is also discussed, based on the equivalence between mechanics and electricity, and the feasibility of applying this method is proven by an example in which the squeeze-film damping force in MEMS and the current of its equivalent circuit established by this method are compared.

  13. The Gaussian radial basis function method for plasma kinetic theory

    Energy Technology Data Exchange (ETDEWEB)

    Hirvijoki, E., E-mail: eero.hirvijoki@chalmers.se [Department of Applied Physics, Chalmers University of Technology, SE-41296 Gothenburg (Sweden); Candy, J.; Belli, E. [General Atomics, PO Box 85608, San Diego, CA 92186-5608 (United States); Embréus, O. [Department of Applied Physics, Chalmers University of Technology, SE-41296 Gothenburg (Sweden)

    2015-10-30

    Description of a magnetized plasma involves the Vlasov equation supplemented with the non-linear Fokker–Planck collision operator. For non-Maxwellian distributions, the collision operator, however, is difficult to compute. In this Letter, we introduce Gaussian Radial Basis Functions (RBFs) to discretize the velocity space of the entire kinetic system, and give the corresponding analytical expressions for the Vlasov and collision operator. Outlining the general theory, we also highlight the connection to plasma fluid theories, and give 2D and 3D numerical solutions of the non-linear Fokker–Planck equation. Applications are anticipated in both astrophysical and laboratory plasmas. - Highlights: • A radically new method to address the velocity space discretization of the non-linear kinetic equation of plasmas. • Elegant and physically intuitive, flexible and mesh-free. • Demonstration of numerical solution of both 2-D and 3-D non-linear Fokker–Planck relaxation problem.
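
    The core idea, expanding a distribution on Gaussian RBFs so that derivatives become analytic operations on the expansion weights, can be sketched in one velocity dimension (the center spacing and shape parameter are illustrative choices, and this toy omits the Fokker-Planck operator itself):

        import numpy as np

        v = np.linspace(-5, 5, 41)                   # RBF centers
        s = v[1] - v[0]                              # width tied to spacing (assumed)
        f = np.exp(-v**2 / 2) / np.sqrt(2 * np.pi)   # target: a Maxwellian

        Phi = np.exp(-(v[:, None] - v[None, :])**2 / (2 * s**2))
        w = np.linalg.solve(Phi, f)                  # interpolation weights

        vq = np.linspace(-4, 4, 5)                   # evaluation points
        G = np.exp(-(vq[:, None] - v[None, :])**2 / (2 * s**2))
        dG = -(vq[:, None] - v[None, :]) / s**2 * G  # analytic derivative of the basis
        print("f' :", dG @ w)
        print("exact:", -vq * np.exp(-vq**2 / 2) / np.sqrt(2 * np.pi))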

  14. Method of Fire Image Identification Based on Optimization Theory

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    In view of some distinctive characteristics of the early-stage flame image, a corresponding method of characteristic extraction is presented. Also introduced is the application of the improved BP algorithm based on the optimization theory to identifying fire image characteristics. First the optimization of BP neural network adopting Levenberg-Marquardt algorithm with the property of quadratic convergence is discussed, and then a new system of fire image identification is devised. Plenty of experiments and field tests have proved that this system can detect the early-stage fire flame quickly and reliably.

  15. Applications of Symmetry Methods to the Theory of Plasma Physics

    Directory of Open Access Journals (Sweden)

    Giampaolo Cicogna

    2006-02-01

    Full Text Available The theory of plasma physics offers a number of nontrivial examples of partial differential equations, which can be successfully treated with symmetry methods. We propose three different examples which may illustrate the reciprocal advantage of this "interaction" between plasma physics and symmetry techniques. The examples include, in particular, the complete symmetry analysis of system of two PDE's, with the determination of some conditional and partial symmetries, the construction of group-invariant solutions, and the symmetry classification of a nonlinear PDE.

  16. The method of trigonometrical sums in the theory of numbers

    CERN Document Server

    Vinogradov, I M

    2004-01-01

    Since the 1930s, the analytic theory of numbers has been transformed by the influence of I. M. Vinogradov, and this text for upper-level undergraduates and graduate students testifies to its author's ingenuity and to the effectiveness of his methods. Starting with a discussion of general lemmas, it advances to an investigation of Waring's problem, including explorations of singular series, the contribution of the basic intervals, and an estimate for G(n). Further topics include approximation by the fractional parts of the values of a polynomial, estimates for Weyl sums, the asymptotic formula
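
    To see the kind of object these estimates control, one can evaluate a quadratic Weyl sum numerically: for a generic irrational alpha its modulus stays near sqrt(N), far below the trivial bound N (a numerical illustration only, not a proof).

        import numpy as np

        alpha = np.sqrt(2)          # an irrational choice
        for N in (10**3, 10**4, 10**5):
            n = np.arange(1, N + 1)
            S = np.exp(2j * np.pi * alpha * n**2).sum()   # S(N) = sum e(alpha n^2)
            print(N, abs(S), np.sqrt(N))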

  17. Advanced Methods in Black-Hole Perturbation Theory

    CERN Document Server

    Pani, Paolo

    2013-01-01

    Black-hole perturbation theory is a useful tool to investigate issues in astrophysics, high-energy physics, and fundamental problems in gravity. It is often complementary to fully-fledged nonlinear evolutions and instrumental to interpret some results of numerical simulations. Several modern applications require advanced tools to investigate the linear dynamics of generic small perturbations around stationary black holes. Here, we present an overview of these applications and introduce extensions of the standard semianalytical methods to construct and solve the linearized field equations in curved spacetime. Current state-of-the-art techniques are pedagogically explained and exciting open problems are presented.

  18. Augmented Lagrangian Method for Constrained Nuclear Density Functional Theory

    CERN Document Server

    Staszczak, A; Baran, A; Nazarewicz, W

    2010-01-01

    The augmented Lagrangian method (ALM), widely used in quantum chemistry constrained optimization problems, is applied in the context of nuclear Density Functional Theory (DFT) in the self-consistent constrained Skyrme Hartree-Fock-Bogoliubov (CHFB) variant. The ALM allows precise calculations of multidimensional energy surfaces in the space of collective coordinates that are needed to, e.g., determine fission pathways and saddle points; it improves the accuracy of computed derivatives with respect to collective variables that are used to determine collective inertia; and it is well adapted to supercomputer applications.

  19. Detecting spatial regimes in ecosystems

    Science.gov (United States)

    Research on early warning indicators has generally focused on assessing temporal transitions with limited application of these methods to detecting spatial regimes. Traditional spatial boundary detection procedures that result in ecoregion maps are typically based on ecological potential (i.e. potential vegetation), and often fail to account for ongoing changes due to stressors such as land use change and climate change and their effects on plant and animal communities. We use Fisher information, an information theory based method, on both terrestrial and aquatic animal data (US Breeding Bird Survey and marine zooplankton) to identify ecological boundaries, and compare our results to traditional early warning indicators, conventional ecoregion maps, and multivariate analysis such as nMDS (non-metric Multidimensional Scaling) and cluster analysis. We successfully detect spatial regimes and transitions in both terrestrial and aquatic systems using Fisher information. Furthermore, Fisher information provided explicit spatial information about community change that is absent from other multivariate approaches. Our results suggest that defining spatial regimes based on animal communities may better reflect ecological reality than do traditional ecoregion maps, especially in our current era of rapid and unpredictable ecological change. Use an information theory based method to identify ecological boundaries and compare our results to traditional early warning
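
    A toy sketch of the regime-boundary idea, not the authors' Fisher information pipeline: under a Gaussian assumption, the Fisher information of a window of observations about the mean is 1/variance, so a sharp change in this index along a transect flags a candidate spatial boundary. The data and window size are invented for illustration.

```python
import numpy as np

def window_fisher_info(window):
    """For N(mu, sigma^2) data, the Fisher information about mu is 1/sigma^2."""
    return 1.0 / np.var(window, ddof=1)

def fi_profile(series, width=30):
    return np.array([window_fisher_info(series[i:i + width])
                     for i in range(len(series) - width + 1)])

rng = np.random.default_rng(1)
# synthetic transect: a quiet community metric, then a noisier regime
transect = np.concatenate([rng.normal(0.0, 0.3, 150), rng.normal(0.0, 1.5, 150)])
fi = fi_profile(transect)
jump = np.argmax(np.abs(np.diff(np.log(fi))))   # biggest shift in the FI index
print("candidate boundary near position:", jump + 30 // 2)
```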

  20. SOLVING PROBLEMS OF STATISTICS WITH THE METHODS OF INFORMATION THEORY

    Directory of Open Access Journals (Sweden)

    Lutsenko Y. V.

    2015-02-01

    Full Text Available The article presents a theoretical substantiation, numerical calculation methods, and a software implementation for solving problems of statistics, in particular the study of statistical distributions, with the methods of information theory. On the basis of empirical data, we have determined by calculation the number of observations used for the analysis of statistical distributions. The proposed method of calculating the amount of information is not based on assumptions about the independence of observations or the normality of the distribution, i.e., it is non-parametric and ensures the correct modeling of nonlinear systems; it also allows heterogeneous data (measured in scales of different types, of numeric and non-numeric nature, and in different units) to be processed in a comparable way. Thus, ASC-analysis and the "Eidos" system constitute a modern innovative technology, ready for implementation, for solving problems of statistics with the methods of information theory. This article can be used as a description of laboratory work in disciplines such as: intelligent systems; knowledge engineering and intelligent systems; intelligent technologies and knowledge representation; knowledge representation in intelligent systems; foundations of intelligent systems; introduction to neuromathematics and methods of neural networks; fundamentals of artificial intelligence; intelligent technologies in science and education; knowledge management; and automated system-cognitive analysis and the "Eidos" intelligent system, which the author is currently developing, as well as in other disciplines associated with the transformation of data into information, and its transformation into knowledge and the application of this knowledge to solve problems of identification, forecasting, decision making and research of the simulated subject area (which is virtually all subjects in all fields of science

  1. Grassmann phase space methods for fermions. II. Field theory

    Energy Technology Data Exchange (ETDEWEB)

    Dalton, B.J., E-mail: bdalton@swin.edu.au [Centre for Quantum and Optical Science, Swinburne University of Technology, Melbourne, Victoria 3122 (Australia); Jeffers, J. [Department of Physics, University of Strathclyde, Glasgow G4ONG (United Kingdom); Barnett, S.M. [School of Physics and Astronomy, University of Glasgow, Glasgow G12 8QQ (United Kingdom)

    2017-02-15

    In both quantum optics and cold atom physics, the behaviour of bosonic photons and atoms is often treated using phase space methods, where mode annihilation and creation operators are represented by c-number phase space variables, with the density operator equivalent to a distribution function of these variables. The anti-commutation rules for fermion annihilation, creation operators suggests the possibility of using anti-commuting Grassmann variables to represent these operators. However, in spite of the seminal work by Cahill and Glauber and a few applications, the use of Grassmann phase space methods in quantum-atom optics to treat fermionic systems is rather rare, though fermion coherent states using Grassmann variables are widely used in particle physics. This paper presents a phase space theory for fermion systems based on distribution functionals, which replace the density operator and involve Grassmann fields representing anti-commuting fermion field annihilation, creation operators. It is an extension of a previous phase space theory paper for fermions (Paper I) based on separate modes, in which the density operator is replaced by a distribution function depending on Grassmann phase space variables which represent the mode annihilation and creation operators. This further development of the theory is important for the situation when large numbers of fermions are involved, resulting in too many modes to treat separately. Here Grassmann fields, distribution functionals, functional Fokker–Planck equations and Ito stochastic field equations are involved. Typical applications to a trapped Fermi gas of interacting spin 1/2 fermionic atoms and to multi-component Fermi gases with non-zero range interactions are presented, showing that the Ito stochastic field equations are local in these cases. For the spin 1/2 case we also show how simple solutions can be obtained both for the untrapped case and for an optical lattice trapping potential.

  2. Integrability: mathematical methods for studying solitary waves theory

    Science.gov (United States)

    Wazwaz, Abdul-Majid

    2014-03-01

    In recent decades, substantial experimental research efforts have been devoted to linear and nonlinear physical phenomena. In particular, studies of integrable nonlinear equations in solitary waves theory have attracted intensive interest from mathematicians, with the principal goal of fostering the development of new methods, and physicists, who are seeking solutions that represent physical phenomena and to form a bridge between mathematical results and scientific structures. The aim for both groups is to build up our current understanding and facilitate future developments, develop more creative results and create new trends in the rapidly developing field of solitary waves. The notion of the integrability of certain partial differential equations occupies an important role in current and future trends, but a unified rigorous definition of the integrability of differential equations still does not exist. For example, an integrable model in the Painlevé sense may not be integrable in the Lax sense. The Painlevé sense indicates that the solution can be represented as a Laurent series in powers of some function that vanishes on an arbitrary surface with the possibility of truncating the Laurent series at finite powers of this function. The concept of Lax pairs introduces another meaning of the notion of integrability. The Lax pair formulates the integrability of nonlinear equation as the compatibility condition of two linear equations. However, it was shown by many researchers that the necessary integrability conditions are the existence of an infinite series of generalized symmetries or conservation laws for the given equation. The existence of multiple soliton solutions often indicates the integrability of the equation but other tests, such as the Painlevé test or the Lax pair, are necessary to confirm the integrability for any equation. In the context of completely integrable equations, studies are flourishing because these equations are able to describe the

  3. Examining Philosophy of Technology Using Grounded Theory Methods

    Directory of Open Access Journals (Sweden)

    Mark David Webster

    2016-03-01

    Full Text Available A qualitative study was conducted to examine the philosophy of technology of K-12 technology leaders, and explore the influence of their thinking on technology decision making. The research design aligned with CORBIN and STRAUSS grounded theory methods, and I proceeded from a research paradigm of critical realism. The subjects were school technology directors and instructional technology specialists, and data collection consisted of interviews and a written questionnaire. Data analysis involved the use of grounded theory methods including memo writing, open and axial coding, constant comparison, the use of purposive and theoretical sampling, and theoretical saturation of categories. Three broad philosophy of technology views were widely held by participants: an instrumental view of technology, technological optimism, and a technological determinist perspective that saw technological change as inevitable. Technology leaders were guided by two main approaches to technology decision making, represented by the categories Educational goals and curriculum should drive technology, and Keep up with technology (or be left behind. The core category and central phenomenon that emerged was that technology leaders approached technology leadership by placing greater emphasis on keeping up with technology, being influenced by an ideological orientation to technological change, and being concerned about preparing students for a technological future. URN: http://nbn-resolving.de/urn:nbn:de:0114-fqs160252

  4. Simplified theory of plastic zones based on Zarka's method

    CERN Document Server

    Hübel, Hartwig

    2017-01-01

    The present book provides a new method to estimate elastic-plastic strains via a series of linear elastic analyses. For a life prediction of structures subjected to variable loads, frequently encountered in mechanical and civil engineering, the cyclically accumulated deformation and the elastic plastic strain ranges are required. The Simplified Theory of Plastic Zones (STPZ) is a direct method which provides the estimates of these and all other mechanical quantities in the state of elastic and plastic shakedown. The STPZ is described in detail, with emphasis on the fact that not only scientists but engineers working in applied fields and advanced students are able to get an idea of the possibilities and limitations of the STPZ. Numerous illustrations and examples are provided to support the reader's understanding.

  5. Disobeying Power Laws: Perils for Theory and Method

    Directory of Open Access Journals (Sweden)

    G. Christopher Crawford

    2012-08-01

    Full Text Available The “norm of normality” is a myth that organization design scholars should believe only at their peril. In contrast to the normal (bell-shaped) distribution with independent observations and linear relationships assumed by Gaussian statistics, research shows that nearly every input and outcome in organizational domains is power-law (Pareto) distributed. These highly skewed distributions exhibit unstable means, unlimited variance, underlying interdependence, and extreme outcomes that disproportionately influence the entire system, making Gaussian methods and assumptions largely invalid. By developing more focused research designs and using methods that assume interdependence and potentially nonlinear relationships, organization design scholars can develop theories that more closely depict empirical reality and provide more useful insights to practitioners and other stakeholders.
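
    A small illustration of why the tail matters, on assumed synthetic data: a Hill estimator recovers the tail exponent of Pareto-distributed "outcomes", something a normality assumption would simply miss. The sample size, threshold k and true exponent are arbitrary choices.

```python
import numpy as np

def hill_tail_index(x, k):
    """Hill estimator of a power-law tail exponent from the k largest values."""
    xs = np.sort(x)[-(k + 1):]           # k largest plus the threshold value
    threshold, tail = xs[0], xs[1:]
    return k / np.sum(np.log(tail / threshold))

rng = np.random.default_rng(0)
outcomes = rng.pareto(1.5, 100_000) + 1.0    # classical Pareto, true alpha = 1.5
print("estimated tail exponent:", round(hill_tail_index(outcomes, k=2000), 2))
```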

  6. Frozen Gaussian approximation based domain decomposition methods for the linear Schrödinger equation beyond the semi-classical regime

    Science.gov (United States)

    Lorin, E.; Yang, X.; Antoine, X.

    2016-06-01

    The paper is devoted to develop efficient domain decomposition methods for the linear Schrödinger equation beyond the semiclassical regime, which does not carry a small enough rescaled Planck constant for asymptotic methods (e.g. geometric optics) to produce a good accuracy, but which is too computationally expensive if direct methods (e.g. finite difference) are applied. This belongs to the category of computing middle-frequency wave propagation, where neither asymptotic nor direct methods can be directly used with both efficiency and accuracy. Motivated by recent works of the authors on absorbing boundary conditions (Antoine et al. (2014) [13] and Yang and Zhang (2014) [43]), we introduce Semiclassical Schwarz Waveform Relaxation methods (SSWR), which are seamless integrations of semiclassical approximation to Schwarz Waveform Relaxation methods. Two versions are proposed respectively based on Herman-Kluk propagation and geometric optics, and we prove the convergence and provide numerical evidence of efficiency and accuracy of these methods.
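
    A stationary, scalar analogue of the Schwarz idea behind SSWR, offered only to make the domain-decomposition skeleton concrete: two overlapping subdomains of a 1-D Poisson problem exchange interface traces until the local solves agree. The paper's semiclassical, time-dependent transmission conditions are not reproduced here; grid sizes and the overlap are arbitrary.

```python
import numpy as np

def solve_dirichlet(f, a, b, ua, ub, n):
    """Finite-difference solve of -u'' = f on (a, b) with Dirichlet data ua, ub."""
    h = (b - a) / (n + 1)
    x = np.linspace(a, b, n + 2)
    A = (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
    rhs = f(x[1:-1]).astype(float)
    rhs[0] += ua / h**2
    rhs[-1] += ub / h**2
    u = np.empty(n + 2)
    u[0], u[-1] = ua, ub
    u[1:-1] = np.linalg.solve(A, rhs)
    return x, u

f = lambda x: np.ones_like(x)      # -u'' = 1 on (0, 1), u(0) = u(1) = 0
alpha, beta = 0.4, 0.6             # overlapping interfaces (assumed)
ga = gb = 0.0                      # initial guesses for interface traces
for _ in range(25):                # multiplicative (alternating) Schwarz sweeps
    xl, ul = solve_dirichlet(f, 0.0, beta, 0.0, gb, 60)
    ga = np.interp(alpha, xl, ul)  # pass left trace to the right subdomain
    xr, ur = solve_dirichlet(f, alpha, 1.0, ga, 0.0, 60)
    gb = np.interp(beta, xr, ur)   # pass right trace back to the left
print("u(0.5) ~", np.interp(0.5, xl, ul), " exact:", 0.5 * (1 - 0.5) / 2)
```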

  7. ADAPTIVE SYSTEMS THEORY: SOME BASIC CONCEPTS, METHODS AND RESULTS

    Institute of Scientific and Technical Information of China (English)

    GUO Lei

    2003-01-01

    The adaptive systems theory to be presented in this paper consists of two closely related parts: adaptive estimation (or filtering, prediction) and adaptive control of dynamical systems. Both adaptive estimation and control are nonlinear mappings of the on-line observed signals of dynamical systems, where the main features are the uncertainties in both the system's structure and external disturbances, and the non-stationarity and dependency of the system signals. Thus, a key difficulty in establishing a mathematical theory of adaptive systems lies in how to deal with complicated nonlinear stochastic dynamical systems which describe the adaptation processes. In this paper, we will illustrate some of the basic concepts, methods and results through some simple examples. The following fundamental questions will be discussed: How much information is needed for estimation? How to deal with uncertainty by adaptation? How to analyze an adaptive system? What are the convergence or tracking performances of adaptation? How to find the proper rate of adaptation in some sense? We will also explore the following more fundamental questions: How much uncertainty can be dealt with by adaptation? What are the limitations of adaptation? How does the performance of adaptation depend on the prior information? We will partially answer these questions by finding some "critical values" and establishing some "Impossibility Theorems" for the capability of adaptation, for several basic classes of nonlinear dynamical control systems with either parametric or nonparametric uncertainties.
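
    A concrete instance of the adaptive-estimation half of the story, as a sketch: recursive least squares with a forgetting factor tracks the unknown parameters of a linear regression online. The signal model and all constants are invented for illustration.

```python
import numpy as np

def rls(phi_stream, y_stream, dim, lam=0.99):
    """Recursive least squares: theta tracks y_t = phi_t . theta + noise."""
    theta = np.zeros(dim)
    P = 1e3 * np.eye(dim)                      # large initial covariance
    for phi, y in zip(phi_stream, y_stream):
        k = P @ phi / (lam + phi @ P @ phi)    # gain
        theta = theta + k * (y - phi @ theta)  # innovation update
        P = (P - np.outer(k, phi @ P)) / lam   # covariance update with forgetting
    return theta

rng = np.random.default_rng(0)
true_theta = np.array([1.5, -0.7])
phis = rng.standard_normal((500, 2))
ys = phis @ true_theta + 0.1 * rng.standard_normal(500)
print(rls(phis, ys, dim=2))   # ~ [1.5, -0.7]
```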

  8. A uniformly accurate multiscale time integrator spectral method for the Klein-Gordon-Zakharov system in the high-plasma-frequency limit regime

    Science.gov (United States)

    Bao, Weizhu; Zhao, Xiaofei

    2016-12-01

    A multiscale time integrator sine pseudospectral (MTI-SP) method is presented for discretizing the Klein-Gordon-Zakharov (KGZ) system with a dimensionless parameter 0 < ε ≤ 1 which is inversely proportional to the plasma frequency. The key idea is to apply a multiscale decomposition by frequency (MDF) to the electric field component of the solution at each time step and then apply the sine pseudospectral discretization for spatial derivatives followed by using the exponential wave integrator in phase space for integrating the MDF and the equation of the ion density component. The method is explicit and easy to implement. Extensive numerical results show that the MTI-SP method converges uniformly and optimally in space with exponential convergence rate if the solution is smooth, and uniformly in time with linear convergence rate at O(τ) for ε ∈ (0, 1] with τ the time step size, and optimally with quadratic convergence rate at O(τ^2) in the regime when either ε = O(1) or 0 < ε ≤ τ. Thus the meshing strategy requirement (or ε-scalability) of the MTI-SP for the KGZ system in the high-plasma-frequency limit regime is τ = O(1) and h = O(1) for 0 < ε ≪ 1, which is significantly better than that of classical methods in the literature. Finally, we apply the MTI-SP method to study the convergence rates of the KGZ system to its limiting models in the high-plasma-frequency limit and the interactions of bright solitons of the KGZ system, and to identify certain parameter regimes in which the solution of the KGZ system will blow up in one dimension.

  9. Comparison of Computer Based Yield Line Theory with Elastic Theory and Finite Element Methods for Solid Slabs

    Directory of Open Access Journals (Sweden)

    J.O. Akinyele

    2011-02-01

    Full Text Available The complexity and conservative nature of the Yield Line Theory, and its being an upper-bound theory, have made many design engineers jettison the use of the analytical method in the analysis of slabs. Until now, the method has basically been a manual or hand method which some engineers saw no need for, since there are many computer-based packages for the analysis and design of slabs and other civil engineering structures. This paper presents a computer program that has adopted the yield line theory in the analysis of solid slabs. Two rectangular slabs of the same depth but different dimensions were investigated. The Yield Line Theory was compared with two other analytical methods, namely the Finite Element Method and the Elastic Theory Method. The results obtained for a two-way spanning slab showed that the yield line theory is truly conservative, but increasing the result by 25% caused the moment obtained to be very close to the results of the other two methods. Although it was still conservative, the check for deflections showed that it is reliable and economical in terms of reinforcement provision. For a one-way spanning slab the results without any increment fall between those of the two other methods, with the Elastic method giving conservative results. The paper concludes that the introduction of a computer-based yield line theory program will make the analytical method acceptable to design engineers in the developing countries of the world.
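
    A back-of-envelope sketch of the comparison being described, under stated assumptions: for an isotropic, simply supported square slab with uniform load w and span L, the yield-line mechanism gives m = wL²/24, while classical thin-plate elastic theory gives roughly 0.048·wL² at midspan (ν = 0.3). The 25% uplift mentioned above lands the yield-line value close to the elastic one. Load and span are arbitrary.

```python
# Yield-line vs elastic design moment for a simply supported square slab.
# w and L are illustrative; 0.0479 is the classical thin-plate coefficient (nu = 0.3).
w, L = 10.0, 5.0                          # kN/m^2 and m (assumed values)

m_yield = w * L**2 / 24                   # upper-bound yield-line mechanism
m_yield_25 = 1.25 * m_yield               # the paper's 25% increase
m_elastic = 0.0479 * w * L**2             # Navier-series midspan moment

print(f"yield line      : {m_yield:6.2f} kNm/m")
print(f"yield line +25% : {m_yield_25:6.2f} kNm/m")
print(f"elastic plate   : {m_elastic:6.2f} kNm/m")
```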

  10. Regime Change and the Role of Airpower

    Science.gov (United States)

    2006-08-01

    that connects the target sets with anticipated actions that lead to defeat of the regime. The theory’s mechanism relies on collective action theory, combining it with Ted Gurr’s deprived actor theory to offer a theory of collective dissent.

  11. New Nonperturbative Methods in Quantum Field Theory: From Large-N Orbifold Equivalence to Bions and Resurgence

    Science.gov (United States)

    Dunne, Gerald V.; Ünsal, Mithat

    2016-10-01

    We present a broad conceptual introduction to some new ideas in nonperturbative quantum field theory (QFT) that have led to progress toward an understanding of quark confinement in gauge theories and, more broadly, toward a nonperturbative continuum definition of QFTs. We first present exact orbifold equivalences of supersymmetric and nonsupersymmetric QFTs in the large-N limit and exact equivalences of large-N theories in infinite volume to large-N theories in finite volume, or even at a single point. We discuss principles by which calculable QFTs are continuously connected to strong-coupling QFTs, allowing understanding of the physics of confinement or the absence thereof. We discuss the role of particular saddle solutions, termed bions, in weak-coupling calculable regimes. The properties of bions motivate an extension of semiclassical methods used to evaluate functional integrals to include families of complex saddles (Picard-Lefschetz theory). This analysis leads us to the resurgence program, which may provide a framework for combining divergent perturbation series with semiclassical instanton and bion/renormalon contributions. This program could provide a nonperturbative definition of the path integral.

  12. Bootstrapping conformal field theories with the extremal functional method.

    Science.gov (United States)

    El-Showk, Sheer; Paulos, Miguel F

    2013-12-13

    The existence of a positive linear functional acting on the space of (differences between) conformal blocks has been shown to rule out regions in the parameter space of conformal field theories (CFTs). We argue that at the boundary of the allowed region the extremal functional contains, in principle, enough information to determine the dimensions and operator product expansion (OPE) coefficients of an infinite number of operators appearing in the correlator under analysis. Based on this idea we develop the extremal functional method (EFM), a numerical procedure for deriving the spectrum and OPE coefficients of CFTs lying on the boundary (of solution space). We test the EFM by using it to rederive the low lying spectrum and OPE coefficients of the two-dimensional Ising model based solely on the dimension of a single scalar quasiprimary--no Virasoro algebra required. Our work serves as a benchmark for applications to more interesting, less known CFTs in the near future.

  13. [Basic theory and research method of urban forest ecology].

    Science.gov (United States)

    He, Xingyuan; Jin, Yingshan; Zhu, Wenquan; Xu, Wenduo; Chen, Wei

    2002-12-01

    With the development of the world economy and the growth of urban populations, urban environmental problems hinder sustainable urban development. More and more people have now realized the importance of urban forests in improving the quality of urban ecology, and as a result a new subject, urban forest ecology, and a corresponding conceptual framework have formed. The theoretical foundation of urban forest ecology derives from the combination of theories from forest ecology, landscape ecology, landscape architecture ecology and human ecology. The development of the city is surveyed from the viewpoint of the ecosystem, and the environment, as a community of humans, animals and plants, is regarded as the main factor of the system. The paper systematically introduces urban forest ecology as follows: 1) the basic concept of urban forest ecology; 2) the meaning of urban forest ecology; 3) the basic principles and theoretical base of urban forest ecology; 4) the research methods of urban forest ecology; 5) the developmental expectations of urban forest ecology.

  14. Grounded Theory Method: Sociology's Quest for Exclusive Items of Inquiry

    Directory of Open Access Journals (Sweden)

    Edward Tolhurst

    2012-09-01

    Full Text Available The genesis and development of grounded theory method (GTM is evaluated with reference to sociology's attempt to demarcate exclusive referents of inquiry. The links of objectivist GTM to positivistic terminology and to the natural scientific distinction from "common sense" are explored. It is then considered how the biological sciences have prompted reorientation towards constructivist GTM, underpinned by the metaphysics of social constructionism. GTM has been shaped by the endeavor to attain the sense of exactitude associated with positivism, whilst also seeking exclusive referents of inquiry that are distinct from the empirical realm of the natural sciences. This has generated complex research techniques underpinned by tortuous methodological debate: eschewing the perceived requirement to define and defend an academic niche could help to facilitate the development of a more useful and pragmatic orientation to qualitative social research. URN: http://nbn-resolving.de/urn:nbn:de:0114-fqs1203261

  15. Unified CFD Methods Via Flowfield-Dependent Variation Theory

    Science.gov (United States)

    Chung, T. J.; Schunk, Greg; Canabal, Francisco; Heard, Gary

    1999-01-01

    This paper addresses the flowfield-dependent variation (FDV) methods, in which complex physical phenomena are taken into account in the final form of the partial differential equations to be solved, so that finite difference methods (FDM) or finite element methods (FEM) themselves will not dictate the physics, but rather are no more than options for how to discretize between adjacent nodal points or within an element. The variation parameters introduced in the formulation are calculated from the current flowfield based on changes of Mach numbers, Reynolds numbers, Peclet numbers, and Damkohler numbers between adjacent nodal points, and play many significant roles such as adjusting the governing equations (hyperbolic, parabolic, and/or elliptic), resolving various physical phenomena, and controlling the accuracy and stability of the numerical solution. The theory is verified by a number of example problems addressing the physical implications of the variation parameters, which resemble the flowfield itself, the shock capturing mechanism, and transitions and interactions between inviscid/viscous, compressible/incompressible, and laminar/turbulent flows.

  16. Evolutionary game theory using agent-based methods.

    Science.gov (United States)

    Adami, Christoph; Schossau, Jory; Hintze, Arend

    2016-12-01

    Evolutionary game theory is a successful mathematical framework geared towards understanding the selective pressures that affect the evolution of the strategies of agents engaged in interactions with potential conflicts. While a mathematical treatment of the costs and benefits of decisions can predict the optimal strategy in simple settings, more realistic settings such as finite populations, non-vanishing mutation rates, stochastic decisions, communication between agents, and spatial interactions require agent-based methods where each agent is modeled as an individual, carries its own genes that determine its decisions, and where the evolutionary outcome can only be ascertained by evolving the population of agents forward in time. While highlighting standard mathematical results, we compare those to agent-based methods that can go beyond the limitations of equations and simulate the complexity of heterogeneous populations and an ever-changing set of interactors. We conclude that agent-based methods can predict evolutionary outcomes where purely mathematical treatments cannot tread (for example in the weak selection-strong mutation limit), but that mathematics is crucial to validate the computational simulations.
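
    A minimal agent-based sketch in the spirit of the review, with everything (payoffs, population size, mutation rate, Moran dynamics) chosen for illustration rather than taken from the paper: defectors take over a well-mixed Prisoner's Dilemma population under fitness-proportional birth and random death.

```python
import numpy as np

# Row player's Prisoner's Dilemma payoffs; strategies: 0 = cooperate, 1 = defect
PAYOFF = np.array([[3.0, 0.0],
                   [5.0, 1.0]])

def moran_step(pop, rng, mu=0.01):
    """One Moran birth-death step with mutation in a well-mixed population."""
    n = len(pop)
    counts = np.bincount(pop, minlength=2)
    # expected payoff of each agent against a randomly drawn member
    fitness = np.array([PAYOFF[s] @ counts / n for s in pop])
    parent = rng.choice(n, p=fitness / fitness.sum())       # birth ~ fitness
    child = pop[parent] if rng.random() > mu else rng.integers(2)
    pop[rng.integers(n)] = child                             # random death
    return pop

rng = np.random.default_rng(0)
pop = rng.integers(2, size=100)
for _ in range(20_000):
    pop = moran_step(pop, rng)
print("final fraction of defectors:", pop.mean())
```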

  17. (Bejan's) early vs. late regimes method applied to entropy generation in one-dimensional conduction

    Energy Technology Data Exchange (ETDEWEB)

    Bautista, O.; Martinez-Meyer, J.L. [Division de Ingenieria y Arquitectura, ITESM, 14380 Mexico DF (Mexico); Mendez, F. [Facultad de Ingenieria, UNAM, 04510 Mexico DF (Mexico)

    2005-06-01

    In this paper, we treat the unsteady entropy generation rate due to an instantaneous internal heat generation in a solid slab. Following the basic ideas developed by Bejan [Heat Transfer, Wiley, 1993], we conduct a multiple-scale analysis identifying the "early" and "late" regimes to derive, in a very simple way, the non-dimensional unsteady temperature profile for small values of the Biot number, Bi. In consequence, the non-dimensional spatial average entropy generation rate per unit volume, Φ, and the corresponding average steady-state entropy generation rate, Ψ, were evaluated for different values of the non-dimensional heat generation parameter β. This parameter physically represents the ratio of the temperature of the solid slab (due to the internal heat generation) to the fluid temperature. We show that, for the assumed values of this parameter β, the non-dimensional temperature and entropy generation rate variables exhibit a very sensitive mutual dependence, indicating a direct relationship between the basic heat transfer mechanisms: heat conduction, heat convection and internal heat generation. (authors)

  18. Application of information theory methods to food web reconstruction

    Science.gov (United States)

    Moniz, L.J.; Cooch, E.G.; Ellner, S.P.; Nichols, J.D.; Nichols, J.M.

    2007-01-01

    In this paper we use information theory techniques on time series of abundances to determine the topology of a food web. At the outset, the food web participants (two consumers, two resources) are known; in addition we know that each consumer prefers one of the resources over the other. However, we do not know which consumer prefers which resource, or whether this preference is absolute (i.e., whether or not the consumer will consume the non-preferred resource). Although the consumers and resources are identified at the beginning of the experiment, we also provide evidence that the consumers are not resources for each other, and that the resources do not consume each other. We do show that there is significant mutual information between resources; the model is seasonally forced and some shared information between resources is expected. Similarly, because the model is seasonally forced, we expect shared information between consumers as they respond to the forcing of the resources. The model that we consider does include noise, and in an effort to demonstrate that these methods may be of some use beyond model data, we show the efficacy of our methods with decreasing time series size; in this particular case we obtain reasonably clear results with a time series length of 400 points. This approaches ecological time series lengths from real systems.
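
    A plug-in sketch of the kind of mutual-information calculation described, on invented data: a lagged, noisy "consumer" series shares information with its "resource", while an independent series does not. Bin count, lag and the series length (400, echoing the abstract) are arbitrary choices.

```python
import numpy as np

def mutual_information(x, y, bins=16):
    """Plug-in MI estimate (in nats) from a 2-D histogram of two series."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    return np.sum(pxy[nz] * np.log(pxy[nz] / np.outer(px, py)[nz]))

rng = np.random.default_rng(0)
t = np.arange(400)                                    # series length as in the paper
resource = np.sin(2 * np.pi * t / 50) + 0.3 * rng.standard_normal(400)
consumer = np.roll(resource, 3) + 0.3 * rng.standard_normal(400)   # lagged follower
print("MI(consumer, resource) :", mutual_information(consumer, resource))
print("MI(consumer, noise)    :", mutual_information(consumer, rng.standard_normal(400)))
```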

  19. Grassmann phase space methods for fermions. I. Mode theory

    Science.gov (United States)

    Dalton, B. J.; Jeffers, J.; Barnett, S. M.

    2016-07-01

    In both quantum optics and cold atom physics, the behaviour of bosonic photons and atoms is often treated using phase space methods, where mode annihilation and creation operators are represented by c-number phase space variables, with the density operator equivalent to a distribution function of these variables. The anti-commutation rules for fermion annihilation, creation operators suggest the possibility of using anti-commuting Grassmann variables to represent these operators. However, in spite of the seminal work by Cahill and Glauber and a few applications, the use of Grassmann phase space methods in quantum-atom optics to treat fermionic systems is rather rare, though fermion coherent states using Grassmann variables are widely used in particle physics. The theory of Grassmann phase space methods for fermions based on separate modes is developed, showing how the distribution function is defined and used to determine quantum correlation functions, Fock state populations and coherences via Grassmann phase space integrals, how the Fokker-Planck equations are obtained and then converted into equivalent Ito equations for stochastic Grassmann variables. The fermion distribution function is an even Grassmann function, and is unique. The number of c-number Wiener increments involved is 2n^2, if there are n modes. The situation is somewhat different to the bosonic c-number case where only 2n Wiener increments are involved, the sign of the drift term in the Ito equation is reversed and the diffusion matrix in the Fokker-Planck equation is anti-symmetric rather than symmetric. The un-normalised B distribution is of particular importance for determining Fock state populations and coherences, and as pointed out by Plimak, Collett and Olsen, the drift vector in its Fokker-Planck equation only depends linearly on the Grassmann variables. Using this key feature we show how the Ito stochastic equations can be solved numerically for finite times in terms of c-number stochastic

  20. Density functional theory based generalized effective fragment potential method

    Energy Technology Data Exchange (ETDEWEB)

    Nguyen, Kiet A., E-mail: kiet.nguyen@wpafb.af.mil, E-mail: ruth.pachter@wpafb.af.mil [Air Force Research Laboratory, Wright-Patterson Air Force Base, Ohio 45433 (United States); UES, Inc., Dayton, Ohio 45432 (United States); Pachter, Ruth, E-mail: kiet.nguyen@wpafb.af.mil, E-mail: ruth.pachter@wpafb.af.mil [Air Force Research Laboratory, Wright-Patterson Air Force Base, Ohio 45433 (United States); Day, Paul N. [Air Force Research Laboratory, Wright-Patterson Air Force Base, Ohio 45433 (United States); General Dynamics Information Technology, Inc., Dayton, Ohio 45431 (United States)

    2014-06-28

    We present a generalized Kohn-Sham (KS) density functional theory (DFT) based effective fragment potential (EFP2-DFT) method for the treatment of solvent effects. Similar to the original Hartree-Fock (HF) based potential with fitted parameters for water (EFP1) and the generalized HF based potential (EFP2-HF), EFP2-DFT includes electrostatic, exchange-repulsion, polarization, and dispersion potentials, which are generated for a chosen DFT functional for a given isolated molecule. The method does not have fitted parameters, except for implicit parameters within a chosen functional and the dispersion correction to the potential. The electrostatic potential is modeled with a multipolar expansion at each atomic center and bond midpoint using Stone's distributed multipolar analysis. The exchange-repulsion potential between two fragments is composed of the overlap and kinetic energy integrals and the nondiagonal KS matrices in the localized molecular orbital basis. The polarization potential is derived from the static molecular polarizability. The dispersion potential includes the intermolecular D3 dispersion correction of Grimme et al. [J. Chem. Phys. 132, 154104 (2010)]. The potential generated from the CAMB3LYP functional has mean unsigned errors (MUEs) with respect to results from coupled cluster singles, doubles, and perturbative triples with a complete basis set limit (CCSD(T)/CBS) extrapolation, of 1.7, 2.2, 2.0, and 0.5 kcal/mol, for the S22, water-benzene clusters, water clusters, and n-alkane dimers benchmark sets, respectively. The corresponding EFP2-HF errors for the respective benchmarks are 2.41, 3.1, 1.8, and 2.5 kcal/mol. Thus, the new EFP2-DFT-D3 method with the CAMB3LYP functional provides comparable or improved results at lower computational cost and, therefore, extends the range of applicability of EFP2 to larger system sizes.

  1. A blood pressure measurement method based on synergetics theory

    Institute of Scientific and Technical Information of China (English)

    2010-01-01

    The principle of blood pressure measurement using pulse transit time is introduced in this paper, and the mathematical model of synergetics theory is studied in detail. Synergetics theory is applied to the analysis of blood pressure measurement data. The simulation results show that the application of synergetics theory is helpful in judging normal blood pressure, with an accuracy of up to 80%.

  2. Toric Methods in F-Theory Model Building

    Directory of Open Access Journals (Sweden)

    Johanna Knapp

    2011-01-01

    Full Text Available We discuss recent constructions of global F-theory GUT models and explain how to make use of toric geometry to do calculations within this framework. After introducing the basic properties of global F-theory GUTs, we give a self-contained review of toric geometry and introduce all the tools that are necessary to construct and analyze global F-theory models. We will explain how to systematically obtain a large class of compact Calabi-Yau fourfolds which can support F-theory GUTs by using the software package PALP.

  3. Exploring biomedical ontology mappings with graph theory methods.

    Science.gov (United States)

    Kocbek, Simon; Kim, Jin-Dong

    2017-01-01

    In the era of the semantic web, life science ontologies play an important role in tasks such as annotating biological objects, linking relevant data pieces, and verifying data consistency. Understanding ontology structures and overlapping ontologies is essential for tasks such as ontology reuse and development. We present an exploratory study where we examine structure and look for patterns in BioPortal, a comprehensive publicly available repository of life science ontologies. We report an analysis of biomedical ontology mapping data over time. We apply graph theory methods such as Modularity Analysis and Betweenness Centrality to analyse data gathered at five different time points. We identify communities, i.e., sets of overlapping ontologies, and define similar and closest communities. We demonstrate the evolution of identified communities over time and identify core ontologies of the closest communities. We use BioPortal project and category data to measure community coherence. We also validate identified communities with their mutual mentions in scientific literature. By comparing mapping data gathered at five different time points, we identified similar and closest communities of overlapping ontologies, and demonstrated the evolution of communities over time. Results showed that anatomy and health ontologies tend to form more isolated communities compared to other categories. We also showed that communities contain all or the majority of ontologies being used in narrower projects. In addition, we identified major changes in mapping data after migration to BioPortal Version 4.
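
    A toy version of the two graph measures named above, using networkx; the ontology names and mapping edges are invented, standing in for BioPortal mapping data.

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Toy "ontology mapping" graph: nodes are ontologies, edges are mapping links.
G = nx.Graph([("GO", "PRO"), ("GO", "CHEBI"), ("PRO", "CHEBI"),
              ("FMA", "UBERON"), ("UBERON", "MA"), ("FMA", "MA"),
              ("CHEBI", "UBERON")])          # one bridge between two clusters

communities = greedy_modularity_communities(G)   # modularity analysis
print("communities:", [sorted(c) for c in communities])

centrality = nx.betweenness_centrality(G)        # betweenness centrality
print("most 'between' ontology:", max(centrality, key=centrality.get))
```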

  4. A computer-supported method to reveal and assess Personal Professional Theories in vocational education

    NARCIS (Netherlands)

    van den Bogaart, Antoine C.M.; Bilderbeek, Richardus; Schaap, Harmen; Hummel, Hans G.K.; Kirschner, Paul A.

    2016-01-01

    This article introduces a dedicated, computer-supported method to construct and formatively assess open, annotated concept maps of Personal Professional Theories (PPTs). These theories are internalised, personal bodies of formal and practical knowledge, values, norms and convictions that professiona

  5. Extension theory for elliptic partial differential operators with pseudodifferential methods

    DEFF Research Database (Denmark)

    Grubb, Gerd

    2012-01-01

    This is a short survey on the connection between general extension theories and the study of realizations of elliptic operators A on smooth domains in R^n, n > 1. The theory of pseudodifferential boundary problems has turned out to be very useful here, not only as a formulational framework, but also for the solution of specific questions. We recall some elements of that theory, and show its application in several cases (including new results), namely to the lower boundedness question, and the question of spectral asymptotics for differences between resolvents.

  6. Self-interacting scalar fields in their strong regime

    CERN Document Server

    Deur, A

    2016-01-01

    We study two self-interacting scalar field theories in their strong regime. We numerically investigate them in the static limit using path integrals on a lattice. We first recall the formalism and then recover known static potentials to validate the method and verify that calculations are independent of the choice of the simulation's arbitrary parameters, such as the space discretization size. The calculations in the strong field regime yield linear potentials for both theories. We discuss how these theories can represent the Strong Interaction and General Relativity in their static and classical limits. In the case of Strong Interaction, the model suggests an origin for the emergence of the confinement scale from the approximately conformal Lagrangian. The model also underlines the role of quantum effects in the appearance of the long-range linear quark-quark potential. For General Relativity, the results have important implications on the nature of Dark Matter. In particular, non-perturbative effects natura...

  7. Pulsar Timing Arrays and Gravity Tests in the Radiative Regime

    CERN Document Server

    Lee, K J

    2014-01-01

    In this paper, we focus on testing gravity theories in the radiative regime using pulsar timing array observations. After reviewing current techniques to measure the dispersion and alternative polarizations of gravitational waves, we extend the framework to the most general situations, where combinations of a massive graviton and alternative polarization modes are considered. The atlas of the Hellings-Downs functions is completed by the new calculations for these dispersive alternative polarization modes. We find that each mode and corresponding graviton mass introduce characteristic features in the Hellings-Downs function. Thus, in principle, we can not only detect each polarization mode and measure the corresponding graviton mass, but also discriminate between the different scenarios. In this way, we can test gravity theories in the radiative regime in a generalized fashion, and such a method is a direct experiment, where one can address the gauge symmetry of the gravity theories in their linearised limits. Although ...
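
    For reference, the baseline that the paper's "atlas" generalizes: the standard Hellings-Downs correlation for massless tensor modes in GR. A small sketch, assuming only the textbook formula in one common normalization (0.5 at zero separation).

```python
import numpy as np

def hellings_downs(theta):
    """Hellings-Downs correlation vs pulsar angular separation (radians)."""
    x = (1.0 - np.cos(theta)) / 2.0
    with np.errstate(divide="ignore", invalid="ignore"):
        val = 1.5 * x * np.log(x) - 0.25 * x + 0.5
    return np.where(x == 0.0, 0.5, val)     # x log x -> 0 as x -> 0

angles = np.linspace(0.0, np.pi, 7)
for a, c in zip(angles, hellings_downs(angles)):
    print(f"theta = {np.degrees(a):6.1f} deg  ->  correlation = {c:+.3f}")
```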

  8. Introducing Evidence Through Research "Push": Using Theory and Qualitative Methods.

    Science.gov (United States)

    Morden, Andrew; Ong, Bie Nio; Brooks, Lauren; Jinks, Clare; Porcheret, Mark; Edwards, John J; Dziedzic, Krysia S

    2015-11-01

    A multitude of factors can influence the uptake and implementation of complex interventions in health care. A plethora of theories and frameworks recognize the need to establish relationships, understand organizational dynamics, address context and contingency, and engage key decision makers. Less attention is paid to how theories that emphasize relational contexts can actually be deployed to guide the implementation of an intervention. The purpose of the article is to demonstrate the potential role of qualitative research aligned with theory to inform complex interventions. We detail a study underpinned by theory and qualitative research that (a) ensured key actors made sense of the complex intervention at the earliest stage of adoption and (b) aided initial engagement with the intervention. We conclude that using theoretical approaches aligned with qualitative research can provide insights into the context and dynamics of health care settings that in turn can be used to aid intervention implementation.

  9. Systematic method for unification of various field theories in a two-dimensional classical $\\phi^4$ field theory

    CERN Document Server

    Zarei, Mohammad Hossein

    2016-01-01

    Although creating a unified theory in elementary particle physics is still an open problem, there have been many attempts at unifying other fields of physics. Following such unifications, we consider a two-dimensional (2D) classical $\Phi^{4}$ field theory model to study several field theories with different symmetries in various dimensions. While the completeness of this model has already been proved by a mapping between statistical mechanics and quantum information theory, here we take a fundamental systematic approach with a purely mathematical basis to re-derive such completeness in a general manner. Due to its simplicity and generality, we believe that our method leads to a general approach which can be understood by other physical communities as well as quantum information theorists. Furthermore, our proof of the completeness is not only a proof-of-principle, but also an interesting algorithmic proof. We consider a discrete version of a general field theory as an arbitrary polynomial function of f...

  10. Simulation of Fluid-Structure and Fluid-Mediated Structure-Structure Interactions in Stokes Regime Using Immersed Boundary Method

    Directory of Open Access Journals (Sweden)

    Masoud Baghalnezhad

    2014-01-01

    Full Text Available The Stokes flow induced by the motion of an elastic massless filament immersed in a two-dimensional fluid is studied. Initially, the filament is deviated from its equilibrium state and the fluid is at rest. The filament induces fluid motion while returning to its equilibrium state. Two different test cases are examined. In both cases, the motion of a fixed-end massless filament induces the fluid motion inside a square domain. In the second test case, however, a deformable circular string is placed in the square domain and its interaction with the Stokes flow induced by the filament motion is studied. The interaction between the fluid and deformable bodies can become very complicated from the computational point of view. An immersed boundary method is used in the present study. To substantiate the accuracy of the numerical method employed, the simulated results for the Stokes flow induced by the motion of an extending star string compare well with those obtained by the immersed interface method. The results show the ability and accuracy of the IBM in solving the complicated fluid-structure and fluid-mediated structure-structure interaction problems arising in a wide variety of engineering and biological systems.
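
    The kernel at the heart of any immersed boundary method is the regularized delta function that couples Lagrangian markers to the fluid grid. Below is a 1-D sketch of force spreading with the classical cosine delta; marker positions, forces and spacings are invented, and the paper's 2-D solver is not reproduced.

```python
import numpy as np

def cosine_delta(r, h):
    """Regularized delta: (1 + cos(pi r / 2h)) / 4h for |r| < 2h, else 0."""
    r = np.abs(r)
    return np.where(r < 2 * h, (1 + np.cos(np.pi * r / (2 * h))) / (4 * h), 0.0)

def spread_force(markers, forces, grid, h, ds):
    """f(x) = sum_k F_k * delta_h(x - X_k) * ds   (Lagrangian -> Eulerian)."""
    out = np.zeros_like(grid)
    for X, F in zip(markers, forces):
        out += F * cosine_delta(grid - X, h) * ds
    return out

h = 0.05                                  # grid spacing (assumed)
grid = np.arange(0.0, 1.0 + h / 2, h)
markers = np.array([0.42, 0.61])          # filament marker positions (invented)
forces = np.array([1.0, -0.5])            # elastic restoring forces (invented)
ds = 0.19                                 # Lagrangian marker spacing (invented)
print(spread_force(markers, forces, grid, h, ds).round(3))
```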

  11. Finite Element Methods and Multiphase Continuum Theory for Modeling 3D Air-Water-Sediment Interactions

    Science.gov (United States)

    Kees, C. E.; Miller, C. T.; Dimakopoulos, A.; Farthing, M.

    2016-12-01

    The last decade has seen an expansion in the development and application of 3D free surface flow models in the context of environmental simulation. These models are based primarily on the combination of effective algorithms, namely level set and volume-of-fluid methods, with high-performance, parallel computing. These models are still computationally expensive and suitable primarily when high-fidelity modeling near structures is required. While most research on algorithms and implementations has been conducted in the context of finite volume methods, recent work has extended a class of level set schemes to finite element methods on unstructured meshes. This work considers models of three-phase flow in domains containing air, water, and granular phases. These multi-phase continuum mechanical formulations show great promise for applications such as analysis of coastal and riverine structures. This work will consider formulations proposed in the literature over the last decade as well as new formulations derived using the thermodynamically constrained averaging theory, an approach to deriving and closing macroscale continuum models for multi-phase and multi-component processes. The target applications require the ability to simulate wave breaking and structure over-topping, particularly fully three-dimensional, non-hydrostatic flows that drive these phenomena. A conservative level set scheme suitable for higher-order finite element methods is used to describe the air/water phase interaction. The interaction of these air/water flows with granular materials, such as sand and rubble, must also be modeled. The range of granular media dynamics targeted includes flow and wave transmission through the solid media, as well as erosion and deposition of granular media and moving bed dynamics. For the granular phase we consider volume- and time-averaged continuum mechanical formulations that are discretized with the finite element method and coupled to the underlying air

  12. Equilibrium theory of the hard sphere fluid and glasses in the metastable regime up to jamming. II. Structure and application to hopping dynamics

    Science.gov (United States)

    Jadrich, Ryan; Schweizer, Kenneth S.

    2013-08-01

    Building on the equation-of-state theory of Paper I, we construct a new thermodynamically consistent integral equation theory for the equilibrium pair structure of 3-dimensional monodisperse hard spheres applicable up to the jamming transition. The approach is built on a two Yukawa generalized mean spherical approximation closure for the direct correlation function (DCF) beyond contact that reproduces the exact contact value of the pair correlation function and isothermal compressibility. The detailed construction of the DCF is guided by the desire to capture its distinctive features as jamming is approached. Comparison of the theory with jamming limit simulations reveals good agreement for many, but not all, of the key features of the pair correlation function. The theory is more accurate in Fourier space where predictions for the structure factor and DCF are accurate over a wide range of wavevectors from significantly below the first cage peak to very high wavevectors. New features of the equilibrium pair structure are predicted for packing fractions below jamming but well above crystallization. For example, the oscillatory DCF decays very slowly at large wavevectors for high packing fractions as a consequence of the unusual structure of the radial distribution function at small separations. The structural theory is used as input to the nonlinear Langevin equation theory of activated dynamics, and calculations of the alpha relaxation time based on single particle hopping are compared to recent colloid experiments and simulations at very high volume fractions.

  13. A Direct Method For Predicting The High-Cycle Fatigue Regime In SMAs: Application To Nitinol Stents

    Directory of Open Access Journals (Sweden)

    Colombé Pierre

    2015-01-01

    Full Text Available In fatigue design of metals, it is common practice to distinguish between high-cycle fatigue (occurring after 10,000–100,000 cycles) and low-cycle fatigue. For elastic-plastic materials, there is an established correlation between fatigue and energy dissipation. In particular, high-cycle fatigue occurs when the energy dissipation remains bounded in time. Although the physical mechanisms in SMAs differ from plasticity, the hysteresis observed in the stress-strain response shows that some energy dissipation occurs, and it can reasonably be assumed that situations where the energy dissipation remains bounded are the most favorable for fatigue design. We present a direct method for determining whether the energy dissipation in a SMA structure is bounded or not. That method relies only on elastic calculations, thus bypassing incremental nonlinear analysis. Moreover, only a partial knowledge of the loading (namely the extreme values) is needed. Some results related to Nitinol stents are presented.
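
    To make the bounded/unbounded dichotomy concrete, here is a toy incremental simulation (exactly the kind of computation the paper's direct method is designed to bypass) of a 1-D kinematic-hardening bar, a plasticity stand-in rather than an SMA model: a small strain cycle dissipates only during the first cycles (bounded total dissipation), while a large one dissipates every cycle. Material constants and amplitudes are invented.

```python
import numpy as np

def per_cycle_dissipation(amp, n_cycles=20, E=200e3, sy=300.0, H=20e3):
    """Plastic work per cycle for a 1-D kinematic-hardening bar driven
    through strain cycles 0 -> amp -> 0 (illustrative material constants)."""
    eps_p, back = 0.0, 0.0
    path = np.concatenate([np.linspace(0.0, amp, 200),
                           np.linspace(amp, 0.0, 200)])
    history = []
    for _ in range(n_cycles):
        work = 0.0
        for eps in path:
            sigma = E * (eps - eps_p)
            f = abs(sigma - back) - sy
            if f > 0.0:                          # return mapping
                dgam = f / (E + H)
                sgn = np.sign(sigma - back)
                eps_p += dgam * sgn
                back += H * dgam * sgn
                work += E * (eps - eps_p) * dgam * sgn   # sigma * d(eps_p)
        history.append(work)
    return history

print("amp = 0.002, last cycle:", per_cycle_dissipation(0.002)[-1])  # -> ~0 (bounded)
print("amp = 0.005, last cycle:", per_cycle_dissipation(0.005)[-1])  # persistent hysteresis
```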

  14. Grounded Theory as a "Family of Methods": A Genealogical Analysis to Guide Research

    Science.gov (United States)

    Babchuk, Wayne A.

    2011-01-01

    This study traces the evolution of grounded theory from a nuclear to an extended family of methods and considers the implications that decision-making based on informed choices throughout all phases of the research process has for realizing the potential of grounded theory for advancing adult education theory and practice. [This paper was…

  15. On the Methods for Constructing Meson-Baryon Reaction Models within Relativistic Quantum Field Theory

    Energy Technology Data Exchange (ETDEWEB)

    B. Julia-Diaz, H. Kamano, T.-S. H. Lee, A. Matsuyama, T. Sato, N. Suzuki

    2009-04-01

    Within relativistic quantum field theory, we analyze the differences between the $\pi N$ reaction models constructed from using (1) three-dimensional reductions of the Bethe-Salpeter equation, (2) the method of unitary transformation, and (3) time-ordered perturbation theory. Their relations with the approach based on the dispersion relations of S-matrix theory are discussed.

  16. Methods of Approximation Theory in Complex Analysis and Mathematical Physics

    CERN Document Server

    Saff, Edward

    1993-01-01

    The book incorporates research papers and surveys written by participants of an International Scientific Programme on Approximation Theory jointly supervised by the Institute for Constructive Mathematics of the University of South Florida at Tampa, USA, and the Euler International Mathematical Institute at St. Petersburg, Russia. The aim of the Programme was to present new developments in Constructive Approximation Theory. The topics of the papers are: asymptotic behaviour of orthogonal polynomials, rational approximation of classical functions, quadrature formulas, theory of n-widths, nonlinear approximation in Hardy algebras, numerical results on best polynomial approximations, wavelet analysis. FROM THE CONTENTS: E.A. Rakhmanov: Strong asymptotics for orthogonal polynomials associated with exponential weights on R.- A.L. Levin, E.B. Saff: Exact Convergence Rates for Best Lp Rational Approximation to the Signum Function and for Optimal Quadrature in Hp.- H. Stahl: Uniform Rational Approximation of x .- M. Rahman, S.K. ...

  17. Surviving Grounded Theory Research Method in an Academic World: Proposal Writing and Theoretical Frameworks

    Directory of Open Access Journals (Sweden)

    Naomi Elliott

    2012-12-01

    Full Text Available Grounded theory research students are frequently faced with the challenge of writing a research proposal and using a theoretical framework as part of the academic requirements for a degree programme. Drawing from personal experiences of two PhD graduates who used classic grounded theory in two different universities, this paper highlights key lessons learnt which may help future students who are setting out to use grounded theory method. It identifies key discussion points that students may find useful when engaging with critical audiences, and defending their grounded theory thesis at final examination. Key discussion points included are: the difference between inductive and deductive inquiry; how grounded theory method of data gathering and analysis provide researchers with a viable means of generating new theory; the primacy of the questions used in data gathering and data analysis; and, the research-theory link as opposed to the theory-research link.

  18. Methods of qualitative theory of differential equations and related topics

    CERN Document Server

    Lerman, L; Shilnikov, L

    2000-01-01

    Dedicated to the memory of Professor E. A. Leontovich-Andronova, this book was composed by former students and colleagues who wished to mark her contributions to the theory of dynamical systems. A detailed introduction by Leontovich-Andronova's close colleague, L. Shilnikov, presents biographical data and describes her main contribution to the theory of bifurcations and dynamical systems. The main part of the volume is composed of research papers presenting the interests of Leontovich-Andronova, her students and her colleagues. Included are articles on traveling waves in coupled circle maps, b

  19. The Gaussian radial basis function method for plasma kinetic theory

    Science.gov (United States)

    Hirvijoki, E.; Candy, J.; Belli, E.; Embréus, O.

    2015-10-01

    Description of a magnetized plasma involves the Vlasov equation supplemented with the non-linear Fokker-Planck collision operator. For non-Maxwellian distributions, the collision operator, however, is difficult to compute. In this Letter, we introduce Gaussian Radial Basis Functions (RBFs) to discretize the velocity space of the entire kinetic system, and give the corresponding analytical expressions for the Vlasov and collision operator. Outlining the general theory, we also highlight the connection to plasma fluid theories, and give 2D and 3D numerical solutions of the non-linear Fokker-Planck equation. Applications are anticipated in both astrophysical and laboratory plasmas.

  20. Adopting a Grounded Theory Approach to Cultural-Historical Research: Conflicting Methodologies or Complementary Methods?

    Directory of Open Access Journals (Sweden)

    Jayson Seaman PhD

    2008-03-01

    Full Text Available Grounded theory has long been regarded as a valuable way to conduct social and educational research. However, recent constructivist and postmodern insights are challenging long-standing assumptions, most notably by suggesting that grounded theory can be flexibly integrated with existing theories. This move hinges on repositioning grounded theory from a methodology with positivist underpinnings to an approach that can be used within different theoretical frameworks. In this article the author reviews this recent transformation of grounded theory, engages in the project of repositioning it as an approach by using cultural historical activity theory as a test case, and outlines several practical methods implied by the joint use of grounded theory as an approach and activity theory as a methodology. One implication is the adoption of a dialectic, as opposed to a constructivist or objectivist, stance toward grounded theory inquiry, a stance that helps move past the problem of emergence versus forcing.

  1. Lattice Field Theory with the Sign Problem and the Maximum Entropy Method

    Directory of Open Access Journals (Sweden)

    Masahiro Imachi

    2007-02-01

    Full Text Available Although numerical simulation in lattice field theory is one of the most effective tools to study non-perturbative properties of field theories, it faces serious obstacles coming from the sign problem in some theories such as finite density QCD and lattice field theory with the θ term. We reconsider this problem from the point of view of the maximum entropy method.

  2. Linking Symbolic Interactionism and Grounded Theory Methods in a Research Design

    Directory of Open Access Journals (Sweden)

    Jennifer Chamberlain-Salaun

    2013-09-01

    Full Text Available This article focuses on Corbin and Strauss’ evolved version of grounded theory. In the third edition of their seminal text, Basics of Qualitative Research: Techniques and Procedures for Developing Grounded Theory, the authors present 16 assumptions that underpin their conception of grounded theory methodology. The assumptions stem from a symbolic interactionism perspective of social life, including the themes of meaning, action and interaction, self and perspectives. As research design incorporates both methodology and methods, the authors aim to expose the linkages between the 16 assumptions and essential grounded theory methods, highlighting the application of the latter in light of the former. Analyzing the links between symbolic interactionism and essential grounded theory methods provides novice researchers and researchers new to grounded theory with a foundation from which to design an evolved grounded theory research study.

  3. Long-memory time series theory and methods

    CERN Document Server

    Palma, Wilfredo

    2007-01-01

    Wilfredo Palma, PhD, is Chairman and Professor of Statistics in the Department of Statistics at Pontificia Universidad Católica de Chile. Dr. Palma has published several refereed articles and has received over a dozen academic honors and awards. His research interests include time series analysis, prediction theory, state space systems, linear models, and econometrics.

  4. Suggestology and Suggestopedia: The Theory of the Lozanov Method.

    Science.gov (United States)

    Bancroft, W. Jane

    In "Suggestologiia," Georgi Lozanov discusses his theories of Suggestology, the scientific study of suggestion, and Suggestopedia, the application of suggestion to pedagogy. The Lozanov thesis cannot properly be understood in isolation, however, and Suggestology and Suggestopedia should be considered in relation to yoga, Soviet and…

  5. Suggestology and Suggestopedia: The Theory of the Lozanov Method.

    Science.gov (United States)

    Bancroft, W. Jane

    In "Suggestologiia," Georgi Lozanov discusses his theories of Suggestology, the scientific study of suggestion, and Suggestopedia, the application of suggestion to pedagogy. The Lozanov thesis cannot properly be understood in isolation, however, and Suggestology and Suggestopedia should be considered in relation to yoga, Soviet and Western work in…

  6. Suggestology and Suggestopedia: The Theory of the Lozanov Method.

    Science.gov (United States)

    Bancroft, W. Jane

    In "Suggestologiia," Georgi Lozanov discusses his theories of Suggestology, the scientific study of suggestion, and Suggestopedia, the application of suggestion to pedagogy. The Lozanov thesis cannot properly be understood in isolation, however, and Suggestology and Suggestopedia should be considered in relation to yoga, Soviet and…

  7. The Navier-Stokes Equations Theory and Numerical Methods

    CERN Document Server

    Masuda, Kyûya; Rautmann, Reimund; Solonnikov, Vsevolod

    1990-01-01

    These proceedings contain original (refereed) research articles by specialists from many countries, on a wide variety of aspects of Navier-Stokes equations. Additionally, 2 survey articles intended for a general readership are included: one surveys the present state of the subject via open problems, and the other deals with the interplay between theory and numerical analysis.

  8. Team Performance Pay and Motivation Theory: A Mixed Methods Study

    Science.gov (United States)

    Wells, Pamela; Combs, Julie P.; Bustamante, Rebecca M.

    2013-01-01

    This study was conducted to explore teachers' perceptions of a team performance pay program in a large suburban school district through the lens of motivation theories. Mixed data analysis was used to analyze teacher responses from two archival questionnaires (Year 1, n = 368; Year 2, n = 649). Responses from teachers who participated in the team…

  9. Theory, Method and Practice of Neuroscientific Findings in Science Education

    Science.gov (United States)

    Liu, Chia-Ju; Chiang, Wen-Wei

    2014-01-01

    This report provides an overview of neuroscience research that is applicable for science educators. It first offers a brief analysis of empirical studies in educational neuroscience literature, followed by six science concept learning constructs based on the whole brain theory: gaining an understanding of brain function; pattern recognition and…

  10. Sustainable urban regime adjustments

    DEFF Research Database (Denmark)

    Quitzau, Maj-Britt; Jensen, Jens Stissing; Elle, Morten

    2013-01-01

    The endogenous agency that urban governments increasingly portray by making conscious and planned efforts to adjust the regimes they operate within is currently not well captured in transition studies. There is a need to acknowledge the ambiguity of regime enactment at the urban scale. This directs...... attention to the transformative implications of conscious strategic maneuvering by incumbent regime actors, when confronting regime structurations. This article provides insight to processes of regime enactment performed by local governments by applying a flow-oriented perspective on regime dynamics...

  11. Between the theory and method: the interpretation of the theory of Emilia Ferreiro for literacy

    OpenAIRE

    Fernanda Cargnin Gonçalves

    2008-01-01

    This article aims to show the difficulty that first-grade teachers at a municipal public school in Florianópolis/SC have in understanding the theory of Emilia Ferreiro. It presents Ferreiro's actual theory as described in her book "Psicogênese da Língua Escrita" (Psychogenesis of Written Language), co-authored with Teberosky, and the interpretation of literacy observed in the teachers' practices. It also presents options for teaching a child to read and write that escape the labeling of students by literacy phase, which are ba...

  12. Ensemble method: Community detection based on game theory

    Science.gov (United States)

    Zhang, Xia; Xia, Zhengyou; Xu, Shengwu; Wang, J. D.

    2014-08-01

    Timely and cost-effective analytics over social networks has emerged as a key ingredient for success in many business and government endeavors. Community detection is an active research area of relevance for analyzing online social networks. The problem of selecting a particular community detection algorithm is crucial if the aim is to unveil the community structure of a network. The choice of a given methodology can affect the outcome of the experiments, because different algorithms have different advantages and depend on tuning specific parameters. In this paper, we propose a community division model based on the notion of game theory, which can effectively combine the advantages of previous algorithms to obtain a better community classification result. Experiments on standard datasets verify that our community detection model based on game theory is valid and performs better.
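
    To make the idea concrete, here is a minimal sketch of a game-theoretic community assignment, assuming a modularity-style payoff and best-response dynamics; the utility function and the toy graph are illustrative assumptions, not the paper's exact model.

```python
# Illustrative sketch: each node plays a best-response move, joining the
# neighboring community that maximizes a modularity-style payoff, until
# no node wants to move. Graph and utility are toy assumptions.
edges = [(0, 1), (0, 2), (1, 2), (2, 3), (3, 4), (3, 5), (4, 5)]
nodes = sorted({u for e in edges for u in e})
adj = {u: set() for u in nodes}
for u, v in edges:
    adj[u].add(v)
    adj[v].add(u)

m = len(edges)
deg = {u: len(adj[u]) for u in nodes}
comm = {u: u for u in nodes}            # start from singleton communities

def payoff(u, c):
    """Modularity-like gain for node u sitting in community c."""
    links = sum(1 for v in adj[u] if comm[v] == c)
    degsum = sum(deg[v] for v in nodes if comm[v] == c and v != u)
    return links / m - deg[u] * degsum / (2.0 * m * m)

changed = True
while changed:          # best responses converge since modularity is a potential
    changed = False
    for u in nodes:
        candidates = {comm[v] for v in adj[u]} | {comm[u]}
        best = max(candidates, key=lambda c: payoff(u, c))
        if best != comm[u]:
            comm[u] = best
            changed = True

print(comm)             # community label per node
```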

  13. Theory of traditional Chinese medicine and therapeutic method of diseases.

    Science.gov (United States)

    Lu, Ai-Ping; Jia, Hong-Wei; Xiao, Cheng; Lu, Qing-Ping

    2004-07-01

    Traditional Chinese medicine, including herbal medicine and acupuncture, as one of the most important parts of complementary and alternative medicine (CAM), plays a key role in the formation of integrative medicine. Why do modern drugs that target the specific mechanisms of diseases not produce the theoretically expected effects in clinical observation? Why does traditional Chinese medicine, which targets the Zheng (syndrome), not produce the theoretically expected effects in the clinic? These questions point to reasons for combining Western medicine with Chinese herbal medicine to form integrative medicine. During this integration, how to clarify the impact of CAM theory on Western medicine has become an emergent topic. This paper focuses on exploring the impact of the theory of traditional Chinese medicine on the therapy of diseases in Western medicine.

  14. Theory of traditional Chinese medicine and therapeutic method of diseases

    Institute of Scientific and Technical Information of China (English)

    Ai-Ping Lu; Hong-Wei Jia; Cheng Xiao; Qing-Ping Lu

    2004-01-01

    Traditional Chinese medicine, including herbal medicine and acupuncture, as one of the most important parts of complementary and alternative medicine (CAM), plays a key role in the formation of integrative medicine. Why do modern drugs that target the specific mechanisms of diseases not produce the theoretically expected effects in clinical observation? Why does traditional Chinese medicine, which targets the Zheng (syndrome), not produce the theoretically expected effects in the clinic? These questions point to reasons for combining Western medicine with Chinese herbal medicine to form integrative medicine. During this integration, how to clarify the impact of CAM theory on Western medicine has become an emergent topic. This paper focuses on exploring the impact of the theory of traditional Chinese medicine on the therapy of diseases in Western medicine.

  15. A Method to Retrieve the Multi-Receiver Moho Reflection Response from SH-Wave Scattering Coda in the Radiative Transfer Regime

    Science.gov (United States)

    Hartstra, I.; Wapenaar, C. P. A.

    2015-12-01

    We discuss a method to retrieve the multi-receiver Moho reflection response by interferometry from SH-wave coda in the 0.5-3 Hz frequency range. An image derived from a reflection response with a well defined virtual source would provide deterministic impedance contrasts, which can complement transmission tomography. For an accurate retrieval, cross-correlation interferometry requires the coda wave field to sample the imaging target and isotropically illuminate the receiver array. When these illumination requirements are not or only partially met, the stationary phase cannot be fully captured and artifacts will contaminate the retrieved reflection response. Here we conduct numerical scalar 2D finite difference simulations to investigate the challenging situation in which only shallow crustal earthquake sources illuminate the Moho and the response is recorded by a 2D linear array. We quantify to what extent the prevalence of scatterers in the crust can improve the illumination conditions and thus the retrieval of the Moho reflection. The accuracy of the retrieved reflection is evaluated for two physically different scattering regimes: the Rayleigh and Mie regime. We only use the earlier part of the scattering coda, because we have found that the later diffusive part does not significantly improve the retrieval. The density of the spherical scatterers is varied in order to change the scattering mean free path. This characteristic length scale is calculated for each model with the 2D radiative transfer equation, which is the governing equation in the earlier part of the scattering coda. The experiment is repeated for models of different geological settings derived from existing S-wave tomographies, which vary in Moho depth and reflectivity. The scattering mean free path can be approximated for real data if intrinsic attenuation is known, because the wavenumber-dependent scattering attenuation of the coherent wave amplitude is dependent on the scattering mean free path
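
    The cross-correlation step at the heart of the retrieval can be illustrated in one dimension. This toy sketch (single source, two receivers, assumed Ricker wavelet and traveltimes) shows the correlation peaking at the inter-receiver traveltime, i.e., one receiver acting as a virtual source; the paper's multi-receiver, scattering-coda setting is of course far more involved.

```python
# Toy illustration of the cross-correlation step of seismic interferometry
# (1D, single source): the correlation of two receiver recordings peaks at
# the traveltime between the receivers.
import numpy as np

dt, nt = 0.004, 2000
t = np.arange(nt) * dt

def ricker(t0, f0=8.0):
    """Ricker wavelet centered at time t0 (assumed source signature)."""
    arg = (np.pi * f0 * (t - t0))**2
    return (1.0 - 2.0 * arg) * np.exp(-arg)

# Assumed source-to-receiver traveltimes; receiver B is 0.6 s farther.
trace_a = ricker(2.0)
trace_b = ricker(2.6)

xcorr = np.correlate(trace_b, trace_a, mode="full")
lags = (np.arange(xcorr.size) - (nt - 1)) * dt
print("peak lag: %.3f s" % lags[np.argmax(xcorr)])   # ~0.6 s
```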

  16. Analyzing ground ozone formation regimes using a principal axis factoring method: A case study of Kladno (Czech Republic) industrial area

    Energy Technology Data Exchange (ETDEWEB)

    Malec, L.; Skacel, F. [Department of Gas, Coke and Air Protection, Institute of Chemical Technology in Prague, (Czech Republic)]. E-mail: Lukas.Malec@vscht.cz; Fousek, T. [Institute of Public Health, District of Central Czech Republic, Kladno (Czech Republic); Tekac, V. [Department of Gas, Coke and Air Protection, Institute of Chemical Technology in Prague, (Czech Republic); Kral, P. [Institute of Public Health, District of Central Czech Republic, Kladno (Czech Republic)

    2008-07-15

    Tropospheric ozone is a secondary air pollutant, changes in the ambient content of which are affected both by the emission rates of primary pollutants and by the variability of meteorological conditions. In this paper, we use two multivariate statistical methods to analyze the impact of the meteorological conditions associated with pollutant transformation processes. First, we evaluated the variability of the spatial and temporal distribution of ozone precursor parameters by using discriminant analysis (DA) in locations close to the industrial area of Kladno (a city in the Czech Republic). Second, we interpreted the data set by using factor analysis (FA) to examine the differences between ozone formation processes in summer and in winter. To avoid temperature dependency between the variables, as well as to describe tropospheric washout processes, we used water vapour content rather than the more commonly employed relative humidity parameter. In this way, we were able to successfully determine and subsequently evaluate the various processes of ozone formation, together with the distribution of ozone precursors. High air temperature, radiation and low water content relate to summer pollution episodes, while radiation and wind speed prove to be the most important parameters during winter.
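
    A minimal sketch of the factor-extraction step, assuming synthetic data in place of the Kladno measurements; scikit-learn's maximum-likelihood FactorAnalysis stands in for the principal axis factoring used in the paper, and all variable names are illustrative.

```python
# Illustrative sketch with synthetic data (not the Kladno dataset):
# extract two latent factors from pollutant/meteorology measurements.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n = 500
radiation = rng.normal(size=n)
temperature = 0.8 * radiation + 0.3 * rng.normal(size=n)
wind_speed = rng.normal(size=n)
water_vapour = 0.6 * temperature + 0.4 * rng.normal(size=n)
ozone = (0.7 * radiation + 0.5 * temperature - 0.3 * wind_speed
         + 0.2 * rng.normal(size=n))

X = np.column_stack([radiation, temperature, wind_speed,
                     water_vapour, ozone])

fa = FactorAnalysis(n_components=2).fit(X)
print(np.round(fa.components_, 2))   # loadings: rows are factors
```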

  17. Functional methods underlying classical mechanics, relativity and quantum theory

    OpenAIRE

    Kryukov, Alexey A.

    2013-01-01

    The paper investigates the physical content of a recently proposed mathematical framework that unifies the standard formalisms of classical mechanics, relativity and quantum theory. In the framework states of a classical particle are identified with Dirac delta functions. The classical space is "made" of these functions and becomes a submanifold in a Hilbert space of states of the particle. The resulting embedding of the classical space into the space of states is highly non-trivial and accou...

  18. [Aesthetics theory and method of landscape resource assessment].

    Science.gov (United States)

    Wang, Baozhong; Wang, Baoming; He, Ping

    2006-09-01

    With the destruction of the natural environment by human beings, scenic resources are no longer inexhaustible in supply and use. Human beings have begun to place scenic resources on the same important strategic status as other natural resources, and landscape resource assessment is the prerequisite of their sustainable exploitation and conservation. This paper illustrated the psychological mechanisms of aesthetics and its approaches, compared the methodologies of traditional and modern landscape aesthetic research, discussed the characteristics of important aesthetic theories (Platonism, the Kant paradigm, empathy theory, the Gestalt paradigm, Marxist aesthetics, and Appleton's theory) and the landscape assessment theories of 4 paradigms (expert, psychological, cognitive, and empirical) and 2 groups (landscape environment science and landscape architecture culture), and summarized important practices and successful examples at home and abroad. It was demonstrated that the historical development of landscape assessment had the feature of a contest between expert- and perception-based approaches, with the expert approach dominant in landscape management and the perception-based approach dominant in landscape research. Both of these approaches generally accept that landscape quality derives from the interaction between the biophysical features of the landscape and the perceptual (judgmental) processes of the human viewer. In the future, landscape quality assessment will evolve toward a shaky marriage: both expert and perceptual approaches will be applied in parallel and merged in the final landscape management decision-making process in some as yet unspecified way; landscape information and the complex geo-temporal dynamics representation central to scenic ecosystem management will present major challenges to traditional landscape aesthetic assessment; and modern science and technology will continue to help meet these challenges. The main trends of landscape

  19. Are There Two Methods of Grounded Theory? Demystifying the Methodological Debate

    Directory of Open Access Journals (Sweden)

    Cheri Ann Hernandez, RN, Ph.D., CDE

    2008-06-01

    Full Text Available Grounded theory is an inductive research method for the generation of substantive or formal theory, using qualitative or quantitative data generated from research interviews, observation, or written sources, or some combination thereof (Glaser & Strauss, 1967). In recent years there has been much controversy over the etiology of its discovery, as well as the exact way in which grounded theory research is to be operationalized. Unfortunately, this situation has resulted in much confusion, particularly among novice researchers who wish to utilize this research method. In this article, the historical, methodological and philosophical roots of grounded theory are delineated in a beginning effort to demystify this methodological debate. Grounded theory variants such as feminist grounded theory (Wuest, 1995) or constructivist grounded theory (Charmaz, 1990) are beyond the scope of this discussion.

  20. A Novel Method of Enhancing Grounded Theory Memos with Voice Recording

    Science.gov (United States)

    Stocker, Rachel; Close, Helen

    2013-01-01

    In this article the authors present the recent discovery of a novel method of supplementing written grounded theory memos with voice recording, the combination of which may provide significant analytical advantages over the traditional written method alone. Memo writing is an essential component of a grounded theory study; however, it is often…

  1. A new method of constructing energy momentum tensor of non-minimally coupled theories

    CERN Document Server

    Mukherjee, Pradip; Roy, Amit Singha

    2016-01-01

    A new method of constructing the conserved energy-momentum tensor of non-minimally coupled theories is developed from first principles. The method is based on the Noether procedure in a locally inertial system.

  2. Theory of difference equations numerical methods and applications

    CERN Document Server

    Lakshmikantham, Vangipuram

    1988-01-01

    In this book, we study theoretical and practical aspects of computing methods for mathematical modelling of nonlinear systems. A number of computing techniques are considered, such as methods of operator approximation with any given accuracy; operator interpolation techniques including a non-Lagrange interpolation; methods of system representation subject to constraints associated with concepts of causality, memory and stationarity; methods of system representation with an accuracy that is the best within a given class of models; methods of covariance matrix estimation; methods for low-rank mat

  3. New theory of superfluidity. Method of equilibrium density matrix

    CERN Document Server

    Bondarev, Boris

    2014-01-01

    The variational theory of equilibrium boson-system states, previously developed by the author within the density-matrix formalism, is applied to study the equilibrium states and thermodynamic properties of a quantum Bose gas consisting of zero-spin particles. The particle momentum distribution function is obtained and used to calculate the temperature dependences of the chemical potential, internal energy and heat capacity of the gas. It is found that a specific phase transition, similar to the transition of liquid helium to its superfluid state, occurs at a temperature exceeding that of Bose condensation.

  4. Comparison of Kernel Equating and Item Response Theory Equating Methods

    Science.gov (United States)

    Meng, Yu

    2012-01-01

    The kernel method of test equating is a unified approach to test equating with some advantages over traditional equating methods. Therefore, it is important to evaluate in a comprehensive way the usefulness and appropriateness of the Kernel equating (KE) method, as well as its advantages and disadvantages compared with several popular item…

  5. Theory of linear physical systems theory of physical systems from the viewpoint of classical dynamics, including Fourier methods

    CERN Document Server

    Guillemin, Ernst A

    2013-01-01

    An eminent electrical engineer and authority on linear system theory presents this advanced treatise, which approaches the subject from the viewpoint of classical dynamics and covers Fourier methods. This volume will assist upper-level undergraduates and graduate students in moving from introductory courses toward an understanding of advanced network synthesis. 1963 edition.

  6. Harmonic generation by noble-gas atoms in the near-IR regime using ab initio time-dependent R -matrix theory

    Science.gov (United States)

    Hassouneh, O.; Brown, A. C.; van der Hart, H. W.

    2014-10-01

    We demonstrate the capability of ab initio time-dependent R-matrix theory to obtain accurate harmonic generation spectra of noble-gas atoms at near-IR wavelengths between 1200 and 1800 nm and peak intensities up to 1.8 × 10^14 W/cm^2. To accommodate the excursion length of the ejected electron, we use an angular-momentum expansion up to Lmax = 279. The harmonic spectra show evidence of atomic structure through the presence of a Cooper minimum in harmonic generation for Kr, and of multielectron interaction through the giant resonance for Xe. The theoretical spectra agree well with those obtained experimentally.

  7. Harmonic generation of noble-gas atoms in the Near-IR regime using ab-initio time-dependent R-matrix theory

    CERN Document Server

    Hassouneh, O; van der Hart, H W

    2014-01-01

    We demonstrate the capability of ab initio time-dependent R-matrix theory to obtain accurate harmonic generation spectra of noble-gas atoms at near-IR wavelengths between 1200 and 1800 nm and peak intensities up to 1.8 × 10^14 W/cm^2. To accommodate the excursion length of the ejected electron, we use an angular-momentum expansion up to Lmax = 279. The harmonic spectra show evidence of atomic structure through the presence of a Cooper minimum in harmonic generation for Kr, and of multielectron interaction through the giant resonance for Xe. The theoretical spectra agree well with those obtained experimentally.

  8. Regimes, Non-State Actors and the State System: A 'Structurational' Regime Model

    NARCIS (Netherlands)

    Arts, B.J.M.

    2000-01-01

    Regime analysis has become a popular approach in International Relations theory and in international policy studies. However, current regime models exhibit some shortcomings with regard to (1) addressing non-state actors, and in particular nongovernmental organizations (NGOs), (2) the balancing of

  9. Regimes, Non-State Actors and the State System: A 'Structurational' Regime Model

    NARCIS (Netherlands)

    Arts, B.J.M.

    2000-01-01

    Regime analysis has become a popular approach in International Relations theory and in international policy studies. However, current regime models exhibit some shortcomings with regard to (1) addressing non-state actors, and in particular nongovernmental organizations (NGOs), (2) the balancing of a

  10. Schwarz Preconditioners for Krylov Methods: Theory and Practice

    Energy Technology Data Exchange (ETDEWEB)

    Szyld, Daniel B.

    2013-05-10

    Several numerical methods were produced and analyzed. The main thrust of the work relates to inexact Krylov subspace methods for the solution of linear systems of equations arising from the discretization of partial differential equations. These are iterative methods, i.e., an approximation is obtained and updated at each step. Usually, a matrix-vector product is needed at each iteration. In the inexact methods, this product (or the application of a preconditioner) can be done inexactly. Schwarz methods, based on domain decompositions, are excellent preconditioners for these systems. We contributed towards their understanding from an algebraic point of view, developed new ones, and studied their performance in the inexact setting. We also worked on combinatorial problems to help define the algebraic partition of the domains, with the needed overlap, as well as PDE-constrained optimization using the above-mentioned inexact Krylov subspace methods.
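
    As a minimal illustration of the idea, the sketch below applies a zero-overlap additive Schwarz (i.e., block Jacobi) preconditioner inside GMRES on a 1D Laplacian test matrix; the problem setup is an assumption for demonstration and far simpler than the settings analyzed in the report.

```python
# Illustrative sketch: block-Jacobi (zero-overlap additive Schwarz)
# preconditioner applied inside GMRES on a 1D Laplacian test matrix.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import LinearOperator, gmres

n, nb = 64, 4                        # problem size, number of subdomains
A = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

size = n // nb
# Precompute dense inverses of the diagonal blocks (the subdomain solves).
inv_blocks = [np.linalg.inv(A[i*size:(i+1)*size, i*size:(i+1)*size].toarray())
              for i in range(nb)]

def apply_schwarz(r):
    """Apply the subdomain solves independently and sum the corrections."""
    z = np.zeros_like(r)
    for i in range(nb):
        s = slice(i * size, (i + 1) * size)
        z[s] = inv_blocks[i] @ r[s]
    return z

M = LinearOperator((n, n), matvec=apply_schwarz)
x, info = gmres(A, b, M=M)
print("gmres info:", info)           # 0 means converged
```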

  11. Theory and Method of Commercial Bank Credit Risk Measurement

    Institute of Scientific and Technical Information of China (English)

    BeimingXiao; JinlinLi

    2004-01-01

    Calculating and measuring credit risk is a key technique of commercial bank management. International achievements mainly include the Z and ZETA models of Altman, the Standard & Poor's external rating system, the Moody's external rating system, the KMV model, the CreditMetrics model, the CreditRisk+ model, the McKinsey model and so on. Chinese achievements mainly include the credit scoring method, comprehensive estimation methods, discriminant analysis methods, artificial neural network methods, etc. This paper analyzes the research achievements in credit risk measurement and future research trends.
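
    For concreteness, a minimal sketch of the classic Altman (1968) Z-score mentioned above, written as a plain function of the five accounting ratios; the function signature and the sample figures are illustrative only.

```python
# Illustrative sketch: the original Altman (1968) Z-score.
def altman_z(working_capital, retained_earnings, ebit,
             market_equity, sales, total_assets, total_liabilities):
    x1 = working_capital / total_assets
    x2 = retained_earnings / total_assets
    x3 = ebit / total_assets
    x4 = market_equity / total_liabilities
    x5 = sales / total_assets
    return 1.2 * x1 + 1.4 * x2 + 3.3 * x3 + 0.6 * x4 + 1.0 * x5

# Z above ~2.99 is conventionally read as "safe", below ~1.81 as distressed.
print(round(altman_z(1.0, 1.5, 0.8, 2.0, 5.0, 10.0, 4.0), 2))
```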

  12. Functional methods underlying classical mechanics, relativity and quantum theory

    Science.gov (United States)

    Kryukov, A.

    2013-04-01

    The paper investigates the physical content of a recently proposed mathematical framework that unifies the standard formalisms of classical mechanics, relativity and quantum theory. In the framework states of a classical particle are identified with Dirac delta functions. The classical space is "made" of these functions and becomes a submanifold in a Hilbert space of states of the particle. The resulting embedding of the classical space into the space of states is highly non-trivial and accounts for numerous deep relations between classical and quantum physics and relativity. One of the most striking results is the proof that the normal probability distribution of position of a macroscopic particle (equivalently, position of the corresponding delta state within the classical space submanifold) yields the Born rule for transitions between arbitrary quantum states.

  13. Theory of Mind: Mechanisms, Methods, and New Directions

    Directory of Open Access Journals (Sweden)

    Lindsey Jacquelyn Byom

    2013-08-01

    Full Text Available Theory of Mind (ToM) has received significant research attention. Traditional ToM research has provided important understanding of how humans reason about mental states by utilizing shared world knowledge, social cues, and the interpretation of actions; however, many current behavioral paradigms are limited to static, third-person protocols. Emerging experimental approaches such as cognitive simulation and simulated social interaction offer opportunities to investigate ToM in interactive, first-person and second-person scenarios while affording greater experimental control. The advantages and limitations of traditional and emerging ToM methodologies are discussed with the intent of advancing the understanding of ToM in socially mediated situations.

  14. Methods of information theory and algorithmic complexity for network biology.

    Science.gov (United States)

    Zenil, Hector; Kiani, Narsis A; Tegnér, Jesper

    2016-03-01

    We survey and introduce concepts and tools located at the intersection of information theory and network biology. We show that Shannon's information entropy, compressibility and algorithmic complexity quantify different local and global aspects of synthetic and biological data. We show examples such as the emergence of giant components in Erdös-Rényi random graphs, and the recovery of topological properties from numerical kinetic properties simulating gene expression data. We provide exact theoretical calculations, numerical approximations and error estimations of entropy, algorithmic probability and Kolmogorov complexity for different types of graphs, characterizing their variant and invariant properties. We introduce formal definitions of complexity for both labeled and unlabeled graphs and prove that the Kolmogorov complexity of a labeled graph is a good approximation of its unlabeled Kolmogorov complexity and thus a robust definition of graph complexity. Copyright © 2016 Elsevier Ltd. All rights reserved.
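
    A minimal sketch of one such descriptor, the Shannon entropy of the degree distribution of an Erdős-Rényi random graph; the graph size and edge probability are arbitrary choices for illustration.

```python
# Illustrative sketch: Shannon entropy of the degree distribution of an
# Erdos-Renyi random graph, one of the graph descriptors the paper
# compares with algorithmic complexity estimates.
import numpy as np

rng = np.random.default_rng(1)
n, p = 200, 0.05
A = (rng.random((n, n)) < p).astype(int)
A = np.triu(A, 1)
A = A + A.T                      # undirected, no self-loops

degrees = A.sum(axis=1)
values, counts = np.unique(degrees, return_counts=True)
probs = counts / counts.sum()
H = -np.sum(probs * np.log2(probs))
print(f"degree-distribution entropy: {H:.3f} bits")
```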

  15. Non-tangible methods of payment and monetary theory

    Directory of Open Access Journals (Sweden)

    F. CESARANO

    2013-12-01

    Full Text Available In recent years a strong interest in the topic of monetary standards has developed in two distinct fields of investigation: monetary policy and innovation in the payments system. In the latter, the developments stem from the concrete possibility of creating payment systems that do away with physical or tangible means of payment entirely. In this case, changes in the institutional framework provoke new perspectives in monetary analysis with regard to various themes: the organisational form of the system, the relationships between the functions of money, the minimum requirements of a monetary system, the means of implementing an optimal monetary policy, the theory of the bank, etc. The present work discusses some aspects of particular importance with regard to these issues. After describing the essential components of innovation in the payments system, the author identifies some critical points that have yet to be clarified by the recent literature.

  16. A proposed method for enhanced eigen-pair extraction using finite element methods: Theory and application

    Science.gov (United States)

    Jara-Almonte, J.; Mitchell, L. D.

    1988-01-01

    The paper covers two distinct parts: theory and application. The goal of this work was the reduction of model size with an increase in eigenvalue/vector accuracy. This method is ideal for the condensation of large truss- or beam-type structures. The theoretical approach involves the conversion of a continuum transfer matrix beam element into an 'Exact' dynamic stiffness element. This formulation is implemented in a finite element environment. This results in the need to solve a transcendental eigenvalue problem. Once the eigenvalue is determined, the eigenvectors can be reconstructed with any desired spatial precision. No discretization limitations are imposed on the reconstruction. The result of such a combined finite element and transfer matrix formulation is a much smaller FEM eigenvalue problem. This formulation has the ability to extract higher eigenvalues as easily and as accurately as lower eigenvalues. Moreover, one can extract many more eigenvalues/vectors from the model than the number of degrees of freedom in the FEM formulation. Typically, the number of eigenvalues accurately extractable via the 'Exact' element method is at least 8 times the number of degrees of freedom. In contrast, the FEM usually extracts one accurate (within 5 percent) eigenvalue for each 3-4 degrees of freedom. The 'Exact' element thus yields a roughly 20-30 fold improvement in the number of accurately extractable eigenvalues and eigenvectors.

  17. A proposed method for enhanced eigen-pair extraction using finite element methods: Theory and application

    Science.gov (United States)

    Jara-Almonte, J.; Mitchell, L. D.

    1988-01-01

    The paper covers two distinct parts: theory and application. The goal of this work was the reduction of model size with an increase in eigenvalue/vector accuracy. This method is ideal for the condensation of large truss- or beam-type structures. The theoretical approach involves the conversion of a continuum transfer matrix beam element into an 'Exact' dynamic stiffness element. This formulation is implemented in a finite element environment. This results in the need to solve a transcendental eigenvalue problem. Once the eigenvalue is determined, the eigenvectors can be reconstructed with any desired spatial precision. No discretization limitations are imposed on the reconstruction. The result of such a combined finite element and transfer matrix formulation is a much smaller FEM eigenvalue problem. This formulation has the ability to extract higher eigenvalues as easily and as accurately as lower eigenvalues. Moreover, one can extract many more eigenvalues/vectors from the model than the number of degrees of freedom in the FEM formulation. Typically, the number of eigenvalues accurately extractable via the 'Exact' element method is at least 8 times the number of degrees of freedom. In contrast, the FEM usually extracts one accurate (within 5 percent) eigenvalue for each 3-4 degrees of freedom. The 'Exact' element thus yields a roughly 20-30 fold improvement in the number of accurately extractable eigenvalues and eigenvectors.

  18. Deformed transition-state theory: Deviation from Arrhenius behavior and application to bimolecular hydrogen transfer reaction rates in the tunneling regime.

    Science.gov (United States)

    Carvalho-Silva, Valter H; Aquilanti, Vincenzo; de Oliveira, Heibbe C B; Mundim, Kleber C

    2017-01-30

    A formulation is presented for the application of tools from quantum chemistry and transition-state theory to phenomenologically cover cases where reaction rates deviate from the Arrhenius law at low temperatures. A parameter d is introduced to describe the deviation of a system from reaching the thermodynamic limit and is identified as the linearizing coefficient in the dependence of the inverse activation energy on inverse temperature. Its physical meaning is given, and when the deviation can be ascribed to quantum mechanical tunneling its value is calculated explicitly. Here, a new derivation is given of the previously established relationship of the parameter d to features of the barrier in the potential energy surface. The proposed variant of transition-state theory permits comparison with experiments and tests against alternative formulations. Prescriptions are provided and applied to three hydrogen transfer reactions: CH4 + OH → CH3 + H2O, CH3Cl + OH → CH2Cl + H2O and H2 + CN → H + HCN, all widely investigated both experimentally and theoretically. © 2016 Wiley Periodicals, Inc.
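
    For orientation, the deformed Arrhenius form underlying this line of work can be written, up to notation, as follows (a sketch from the general d-Arrhenius literature, not copied from this paper):

```latex
% Deformed (d-)Arrhenius rate law: the linearizing parameter d bends the
% Arrhenius plot at low T; the conventional law is recovered as d -> 0.
k(T) = A \left( 1 - d \, \frac{E_a}{R\,T} \right)^{1/d},
\qquad
\lim_{d \to 0} k(T) = A \, e^{-E_a/(R T)}
```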

  19. A study of the limitations of linear theory methods as applied to sonic boom calculations

    Science.gov (United States)

    Darden, Christine M.

    1990-01-01

    Current sonic boom minimization theories have been reviewed to emphasize the capabilities and flexibilities of the methods. Flexibility is important because the designer must meet optimized area constraints while reducing the impact on vehicle aerodynamic performance. Preliminary comparisons of sonic booms predicted for two Mach 3 concepts illustrate the benefits of shaping. Finally, for very simple bodies of revolution, sonic boom predictions were made using two methods - a modified linear theory method and a nonlinear method - for signature shapes that were both farfield N-waves and midfield waves. Preliminary analysis on these simple bodies verified that current modified linear theory prediction methods become inadequate for predicting midfield signatures for Mach numbers above 3. The importance of impulse in sonic boom disturbance and the importance of three-dimensional effects, which could not be simulated with the bodies of revolution, will determine the validity of current modified linear theory methods in predicting midfield signatures at lower Mach numbers.

  20. The theory of an arbitrarily perturbed bird-cage resonator, and a simple method for restoring it to full symmetry

    Science.gov (United States)

    Tropp, James

    The first-order theory of a low-pass bird-cage resonator perturbed at a single capacitor [J. Tropp, J. Magn. Reson. 82, 51 (1989)] is extended by explicit calculation to cover a low-pass bird cage perturbed arbitrarily at every reactance, provided that a first-order condition is satisfied. It is shown that the effect of arbitrary perturbation, i.e., the splitting of resonances and rotation of the polarization axes, can be exactly mimicked (in first order) by a pair of capacitors spaced by an azimuth of π/4. This result may be extended by symmetry arguments to the high-pass and simple band-pass bird cage. A method of correcting symmetry (abolishing the splitting of the useful eigenstates) is then derived, which provides near-perfect correction by the application of two capacitors, typically spaced π/4 apart on the resonator azimuth. Experimental results are given for a low-pass bird cage; the correction procedure is verified and demonstrated in practical detail; and it is shown that the limit of the first-order theory is that the first of the two requisite correction capacitors should be within 7 or 8% of the nominal bird-cage capacitance. Practical examples of symmetry correction outside the first-order regime are also given.

  1. A NUMERICAL EMBEDDING METHOD FOR SOLVING THE NONLINEAR COMPLEMENTARITY PROBLEM(Ⅰ)--THEORY

    Institute of Scientific and Technical Information of China (English)

    Jian-jun Zhang; De-ren Wang

    2002-01-01

    In this paper, we extend the numerical embedding method for solving smooth equations to the nonlinear complementarity problem. By using nonsmooth theory, we prove the existence and continuation of the homotopy path for the corresponding homotopy equations. The basic theory of the numerical embedding method for solving the nonlinear complementarity problem is thereby established. In part Ⅱ of this paper, we will further study the implementation of the method and give some numerical examples.

  2. Improvement method for the combining rule of Dempster-Shafer evidence theory based on reliability

    Institute of Scientific and Technical Information of China (English)

    Wang Ping; Yang Genqing

    2005-01-01

    An improved combining rule for Dempster-Shafer evidence theory is proposed. Unlike the standard Dempster rule, the reliability of evidence is not assumed to be identical and may vary with the event. By weighting evidence according to its reliability, the effect of unreliable evidence is reduced, yielding a fusion result closer to the truth. An example demonstrating the advantage of this method is given; it shows that the method helps to find the correct result.
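
    A minimal sketch of the general idea, assuming standard Shafer discounting as the reliability weighting (the paper's exact weighting scheme is not reproduced here): each basic probability assignment is scaled by its source reliability before Dempster's rule combines them.

```python
# Illustrative sketch: Shafer discounting followed by Dempster's rule.
from itertools import product

def discount(m, alpha):
    """Scale masses by reliability alpha, moving the remainder onto the
    full frame of discernment (total ignorance)."""
    frame = frozenset().union(*m)
    out = {A: alpha * v for A, v in m.items()}
    out[frame] = out.get(frame, 0.0) + (1.0 - alpha)
    return out

def dempster(m1, m2):
    """Dempster's rule of combination with conflict normalization."""
    combined, conflict = {}, 0.0
    for (A, v1), (B, v2) in product(m1.items(), m2.items()):
        C = A & B
        if C:
            combined[C] = combined.get(C, 0.0) + v1 * v2
        else:
            conflict += v1 * v2
    k = 1.0 - conflict
    return {A: v / k for A, v in combined.items()}

a, b = frozenset("a"), frozenset("b")
ab = a | b
m1 = {a: 0.9, ab: 0.1}          # reliable sensor says "a"
m2 = {b: 0.9, ab: 0.1}          # unreliable sensor says "b"
fused = dempster(discount(m1, 0.9), discount(m2, 0.4))
print({"".join(sorted(A)): round(v, 3) for A, v in fused.items()})
```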

  3. A Theory and Method for Modeling of Structures with Stochastic Parameters

    Institute of Scientific and Technical Information of China (English)

    ZHANG Bei; YIN Xue-gang; WANG Fu-ming; ZHONG Yan-hui; CAI Ying-chun

    2004-01-01

    In order to reflect the stochastic characteristics of structures more comprehensively and accurately, a theory and method for modeling structures with stochastic parameters is presented, using the probability finite element method and stochastic experimental data of structures, building on the modeling of structures with deterministic parameters. A double-decker space frame is taken as an example to validate this theory and method, and good results are obtained.

  4. Robust methods and asymptotic theory in nonlinear econometrics

    CERN Document Server

    Bierens, Herman J

    1981-01-01

    This Lecture Note deals with asymptotic properties, i.e. weak and strong consistency and asymptotic normality, of parameter estimators of nonlinear regression models and nonlinear structural equations under various assumptions on the distribution of the data. The estimation methods involved are nonlinear least squares estimation (NLLSE), nonlinear robust M-estimation (NLRME) and nonlinear weighted robust M-estimation (NLWRME) for the regression case, and nonlinear two-stage least squares estimation (NL2SLSE) and a new method called minimum information estimation (MIE) for the case of structural equations. The asymptotic properties of the NLLSE and the two robust M-estimation methods are derived from further elaborations of results of Jennrich. Special attention is paid to the comparison of the asymptotic efficiency of NLLSE and NLRME. It is shown that if the tails of the error distribution are fatter than those of the normal distribution, NLRME is more efficient than NLLSE. The NLWRME method is appropriate ...

  5. Hybrid Fundamental Solution Based Finite Element Method: Theory and Applications

    OpenAIRE

    Changyong Cao; Qing-Hua Qin

    2015-01-01

    An overview of the development of the hybrid fundamental solution based finite element method (HFS-FEM) and its application to engineering problems is presented in this paper. The framework and formulations of HFS-FEM for the potential problem, plane elasticity, three-dimensional elasticity, thermoelasticity, anisotropic elasticity, and plane piezoelectricity are presented. In this method, two independent assumed fields (intraelement field and auxiliary frame field) are employed. The formulations for...

  6. Matching method with theory in person-oriented developmental psychopathology research.

    Science.gov (United States)

    Sterba, Sonya K; Bauer, Daniel J

    2010-05-01

    The person-oriented approach seeks to match theories and methods that portray development as a holistic, highly interactional, and individualized process. Over the past decade, this approach has gained popularity in developmental psychopathology research, particularly as model-based varieties of person-oriented methods have emerged. Although these methods allow some principles of person-oriented theory to be tested, little attention has been paid to the fact that these methods cannot test other principles, and may actually be inconsistent with certain principles. Lacking clarification regarding which aspects of person-oriented theory are testable under which person-oriented methods, assumptions of the methods have sometimes been presented as testable hypotheses or interpreted as affirming the theory. This general blurring of the line between person-oriented theory and method has even led to the occasional perception that the method is the theory and vice versa. We review assumptions, strengths, and limitations of model-based person-oriented methods, clarifying which theoretical principles they can test and the compromises and trade-offs required to do so.

  7. New Image Recognition Method Based on Rough-Sets and Fuzzy Theory

    Institute of Scientific and Technical Information of China (English)

    张艳; 李凤霞; 战守义

    2003-01-01

    A new image recognition method based on fuzzy-rough sets theory is proposed, and its implementation is discussed. The performance of this method as applied to ferrography image recognition is evaluated. It is shown that the new method gives better results than the fuzzy or rough-sets methods used alone.

  8. Introduction to modern methods of quantum many-body theory and their applications

    CERN Document Server

    Fantoni, Stefano; Krotscheck, Eckhard S

    2002-01-01

    This invaluable book contains pedagogical articles on the dominant nonstochastic methods of microscopic many-body theories - the methods of density functional theory, coupled cluster theory, and correlated basis functions - in their widest sense. Other articles introduce students to applications of these methods in front-line research, such as Bose-Einstein condensates, the nuclear many-body problem, and the dynamics of quantum liquids. These keynote articles are supplemented by experimental reviews on intimately connected topics that are of current relevance. The book addresses the striking l

  9. An adaptive finite element method for simulating surface tension with the gradient theory of fluid interfaces

    KAUST Repository

    Kou, Jisheng

    2014-01-01

    The gradient theory for the surface tension of simple fluids and mixtures is rigorously analyzed based on mathematical theory. The finite element approximation of surface tension is developed and analyzed, and moreover, an adaptive finite element method based on a physics-based estimator is proposed; it can be coupled efficiently with Newton's method as well. The numerical tests are carried out both to verify the proposed theory and to demonstrate the efficiency of the proposed method. © 2013 Elsevier B.V. All rights reserved.

  10. Text Steganography using LSB insertion method along with Chaos Theory

    CERN Document Server

    S., Bhavana

    2012-01-01

    The art of information hiding has been around nearly as long as the need for covert communication. Steganography, the concealing of information, arose early on as an extremely useful method for covert information transmission. Steganography is the art of hiding a secret message within a larger image or message such that the hidden message or image is undetectable; this is in contrast to cryptography, where the existence of the message itself is not disguised but the content is obscured. The goal of a steganographic method is to minimize the visually apparent and statistical differences between the cover data and a steganogram while maximizing the size of the payload. Current digital image steganography faces the challenge of hiding a message in a digital image in a way that is robust to image manipulation and attack. This paper explains how a secret message can be hidden in an image using the least significant bit insertion method along with chaos.
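
    A minimal sketch of one such scheme, assuming a logistic map as the chaotic generator of the pixel visiting order (the key being the map's initial condition); this is an illustrative construction, not the paper's exact algorithm.

```python
# Illustrative sketch: LSB embedding along a chaotic pixel order.
import numpy as np

def logistic_order(n_pixels, x0=0.7, r=3.99):
    """Chaotic pixel visiting order from the logistic map x <- r x (1 - x)."""
    x, vals = x0, np.empty(n_pixels)
    for i in range(n_pixels):
        x = r * x * (1.0 - x)
        vals[i] = x
    return np.argsort(vals)            # permutation of pixel indices

def embed(cover, message_bits, key=0.7):
    stego = cover.flatten().copy()
    order = logistic_order(stego.size, x0=key)
    for bit, idx in zip(message_bits, order):
        stego[idx] = (stego[idx] & 0xFE) | bit   # overwrite the LSB
    return stego.reshape(cover.shape)

def extract(stego, n_bits, key=0.7):
    flat = stego.flatten()
    order = logistic_order(flat.size, x0=key)
    return [int(flat[idx] & 1) for idx in order[:n_bits]]

rng = np.random.default_rng(2)
cover = rng.integers(0, 256, size=(8, 8), dtype=np.uint8)
bits = [1, 0, 1, 1, 0, 0, 1, 0]
stego = embed(cover, bits)
assert extract(stego, len(bits)) == bits
```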

  11. Advanced quantitative magnetic nondestructive evaluation methods - Theory and experiment

    Science.gov (United States)

    Barton, J. R.; Kusenberger, F. N.; Beissner, R. E.; Matzkanin, G. A.

    1979-01-01

    The paper reviews the scale of fatigue crack phenomena in relation to the size detection capabilities of nondestructive evaluation methods. An assessment of several features of fatigue in relation to the inspection of ball and roller bearings suggested the use of magnetic methods; magnetic domain phenomena, including the interaction of domains and inclusions and the influence of stress and magnetic field on domains, are discussed. Experimental results indicate that simplified calculations can be used to predict many features of these results, although predictions from analytic models based on finite element computer analysis disagree with respect to certain features. Experimental analyses obtained on rod-type fatigue specimens, which relate magnetic measurements to crack opening displacement, crack volume, and crack depth, should provide methods for improved crack characterization in relation to fracture mechanics and life prediction.

  12. The TEACH Method: An Interactive Approach for Teaching the Needs-Based Theories Of Motivation

    Science.gov (United States)

    Moorer, Cleamon, Jr.

    2014-01-01

    This paper describes an interactive approach for explaining and teaching the Needs-Based Theories of Motivation. The acronym TEACH stands for Theory, Example, Application, Collaboration, and Having Discussion. This method can help business students to better understand and distinguish the implications of Maslow's Hierarchy of Needs,…

  13. The TEACH Method: An Interactive Approach for Teaching the Needs-Based Theories Of Motivation

    Science.gov (United States)

    Moorer, Cleamon, Jr.

    2014-01-01

    This paper describes an interactive approach for explaining and teaching the Needs-Based Theories of Motivation. The acronym TEACH stands for Theory, Example, Application, Collaboration, and Having Discussion. This method can help business students to better understand and distinguish the implications of Maslow's Hierarchy of Needs,…

  14. UNDERSTANDING OF FUZZY OPTIMIZATION:THEORIES AND METHODS

    Institute of Scientific and Technical Information of China (English)

    TANG Jiafu; WANG Dingwei; Richard Y K FUNG; Kai-Leung Yung

    2004-01-01

    A brief summary of and a comprehensive understanding of fuzzy optimization is presented. The summary covers aspects of fuzzy modelling and fuzzy optimization, and the classification and formulation of fuzzy optimization problems, models and methods. The importance of interpreting the problem and of formulating the optimal solution in the fuzzy sense is emphasized throughout.

  15. Methods in Educational Research: From Theory to Practice

    Science.gov (United States)

    Lodico, Marguerite G.; Spaulding Dean T.; Voegtle, Katherine H.

    2006-01-01

    Written for students, educators, and researchers, "Methods in Educational Research" offers a refreshing introduction to the principles of educational research. Designed for the real world of educational research, the book's approach focuses on the types of problems likely to be encountered in professional experiences. Reflecting the importance of…

  16. Generalized semi-infinite programming: Theory and methods

    NARCIS (Netherlands)

    Still, G.

    1999-01-01

    Generalized semi-infinite optimization problems (GSIP) are considered. The difference between GSIP and standard semi-infinite problems (SIP) is illustrated by examples. By applying the `Reduction Ansatz', optimality conditions for GSIP are derived. Numerical methods for solving GSIP are considered i

  17. Characterizing multistationarity regimes in biochemical reaction networks.

    Directory of Open Access Journals (Sweden)

    Irene Otero-Muras

    Full Text Available Switch-like responses appear as common strategies in the regulation of cellular systems. Here we present a method to characterize bistable regimes in biochemical reaction networks that can be of use to both direct and reverse engineering of biological switches. In the design of a synthetic biological switch, it is important to study the capability for bistability of the underlying biochemical network structure. Chemical Reaction Network Theory (CRNT) may help at this level to decide whether a given network has the capacity for multiple positive equilibria, based on its structural properties. However, in order to build a working switch, we also need to ensure that the bistability property is robust, by studying the conditions leading to the existence of two different steady states. In the reverse engineering of biological switches, knowledge collected about the bistable regimes of the underlying potential model structures can contribute at the model identification stage to a drastic reduction of the feasible region in the parameter space of search. In this work, we make use of and extend previous results of CRNT, aiming not only to discriminate whether a biochemical reaction network can exhibit multiple steady states, but also to determine the regions within the whole space of parameters capable of producing multistationarity. To that purpose we present and justify a condition on the parameters of biochemical networks for the appearance of multistationarity, and propose an efficient and reliable computational method to check its satisfaction through the parameter space.
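
    As background for the CRNT machinery invoked here, a minimal sketch of the standard deficiency computation δ = n − ℓ − s (number of complexes minus linkage classes minus the rank of the stoichiometric matrix) on a toy network; this is classical bookkeeping, not the paper's new parametric condition.

```python
# Illustrative sketch: CRNT deficiency of a toy network A <-> 2B, B -> C.
import numpy as np

# Reactions as (reactant complex, product complex); a complex maps
# species -> stoichiometric coefficient.
reactions = [({"A": 1}, {"B": 2}), ({"B": 2}, {"A": 1}), ({"B": 1}, {"C": 1})]

species = sorted({s for r in reactions for c in r for s in c})
complexes = []
for r in reactions:
    for c in r:
        key = tuple(sorted(c.items()))
        if key not in complexes:
            complexes.append(key)

# Linkage classes: connected components of the complex graph (union-find).
parent = list(range(len(complexes)))
def find(i):
    while parent[i] != i:
        i = parent[i]
    return i
for y, yp in reactions:
    a = find(complexes.index(tuple(sorted(y.items()))))
    b = find(complexes.index(tuple(sorted(yp.items()))))
    parent[a] = b
l = len({find(i) for i in range(len(complexes))})

# Stoichiometric matrix: columns are product-minus-reactant vectors.
N = np.array([[yp.get(s, 0) - y.get(s, 0) for (y, yp) in reactions]
              for s in species])
n, s_rank = len(complexes), np.linalg.matrix_rank(N)
print("deficiency =", n - l - s_rank)   # 0 for this toy network
```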

  18. Interlaminar Stresses by Refined Beam Theories and the Sinc Method Based on Interpolation of Highest Derivative

    Science.gov (United States)

    Slemp, Wesley C. H.; Kapania, Rakesh K.; Tessler, Alexander

    2010-01-01

    Computation of interlaminar stresses from the higher-order shear and normal deformable beam theory and the refined zigzag theory was performed using the Sinc method based on Interpolation of Highest Derivative. The Sinc method based on Interpolation of Highest Derivative was proposed as an efficient method for determining through-the-thickness variations of interlaminar stresses from one- and two-dimensional analysis by integration of the equilibrium equations of three-dimensional elasticity. However, the use of traditional equivalent single layer theories often results in inaccuracies near the boundaries and when the lamina have extremely large differences in material properties. Interlaminar stresses in symmetric cross-ply laminated beams were obtained by solving the higher-order shear and normal deformable beam theory and the refined zigzag theory with the Sinc method based on Interpolation of Highest Derivative. Interlaminar stresses and bending stresses from the present approach were compared with a detailed finite element solution obtained by ABAQUS/Standard. The results illustrate the ease with which the Sinc method based on Interpolation of Highest Derivative can be used to obtain the through-the-thickness distributions of interlaminar stresses from the beam theories. Moreover, the results indicate that the refined zigzag theory is a substantial improvement over the Timoshenko beam theory due to the piecewise continuous displacement field which more accurately represents interlaminar discontinuities in the strain field. The higher-order shear and normal deformable beam theory more accurately captures the interlaminar stresses at the ends of the beam because it allows transverse normal strain. However, the continuous nature of the displacement field requires a large number of monomial terms before the interlaminar stresses are computed as accurately as the refined zigzag theory.

  19. The method of approximate inverse theory and applications

    CERN Document Server

    Schuster, Thomas

    2007-01-01

    Inverse problems arise whenever one tries to calculate a required quantity from given measurements of a second quantity that is associated to the first one. Besides medical imaging and non-destructive testing, inverse problems also play an increasing role in other disciplines such as industrial and financial mathematics. Hence, there is a need for stable and efficient solvers. The book is concerned with the method of approximate inverse which is a regularization technique for stably solving inverse problems in various settings such as L2-spaces, Hilbert spaces or spaces of distributions. The performance and functionality of the method is demonstrated on several examples from medical imaging and non-destructive testing such as computerized tomography, Doppler tomography, SONAR, X-ray diffractometry and thermoacoustic computerized tomography. The book addresses graduate students and researchers interested in the numerical analysis of inverse problems and regularization techniques or in efficient solvers for the...

  20. Hybrid Fundamental Solution Based Finite Element Method: Theory and Applications

    Directory of Open Access Journals (Sweden)

    Changyong Cao

    2015-01-01

    An overview of the development of the hybrid fundamental solution based finite element method (HFS-FEM) and its application to engineering problems is presented in this paper. The framework and formulations of HFS-FEM for potential problems, plane elasticity, three-dimensional elasticity, thermoelasticity, anisotropic elasticity, and plane piezoelectricity are presented. In this method, two independent assumed fields (an intraelement field and an auxiliary frame field) are employed. The formulations for all cases are derived from the modified variational functionals and the fundamental solutions to a given problem. Generation of elemental stiffness equations from the modified variational principle is also described. Typical numerical examples are given to demonstrate the validity and performance of the HFS-FEM. Finally, a brief summary of the approach is provided and future trends in this field are identified.

  1. New theory of superconductivity. Method of equilibrium density matrix

    CERN Document Server

    Bondarev, Boris

    2014-01-01

    A new variational method for studying the equilibrium states of an interacting-particle system has been proposed. The statistical description of the system is realized by means of a density matrix. The method is used to describe conduction electrons in metals. An integral equation for the electron distribution function over wave vectors has been obtained, and its solutions have been found for cases in which the single-particle Hamiltonian and the electron-interaction Hamiltonian can be approximated by fairly simple expressions. It is shown that the distribution function at temperatures below a critical value possesses previously unknown features which allow one to explain the superconductivity of metals and the presence of a gap in the energy spectrum of superconducting electrons.

  2. Theory of Radio Propagation in Inhomogeneous Media (The Eikonal Method)

    CERN Document Server

    Bianchi, S; Settimi, A

    2010-01-01

    The Istituto Nazionale di Geofisica e Vulcanologia has been involved since its foundation in forecasting the conditions under which a radio link using ionospheric-wave propagation takes place. More recently, interest has also grown in the precise forecast of the trajectory covered by a radio wave propagating in the atmosphere, specifically in the ionosphere, which can be considered, to a first approximation, an inhomogeneous medium defined by a refraction index that varies slowly in time. This work describes the theoretical basis for studying such a trajectory, which rests essentially on the methods of geometrical optics. These theoretical foundations find application in numerical methods for calculating the trajectories, as cited in the references [Bianchi, 2009].
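
    As a minimal sketch of the geometrical-optics machinery the report describes (not the numerical code it references), the ray equations d/ds(n dr/ds) = grad n can be integrated through an assumed stratified refractive-index profile; the Gaussian dip standing in for an ionospheric layer, the launch angle and all constants are invented for illustration.

        # Minimal geometrical-optics ray tracer: integrate d/ds(n dr/ds) = grad n
        # through a toy stratified refractive index; every constant is illustrative.
        import numpy as np
        from scipy.integrate import solve_ivp
        def n_and_grad(z):
            # Gaussian dip in n(z) standing in for an ionospheric layer near 200 km
            g = np.exp(-((z - 200.0) / 60.0) ** 2)
            return 1.0 - 0.3 * g, 0.3 * 2.0 * (z - 200.0) / 60.0**2 * g
        def ray_rhs(s, y):
            x, z, px, pz = y                  # p = n * dr/ds
            n, dndz = n_and_grad(z)
            return [px / n, pz / n, 0.0, dndz]
        elev = np.deg2rad(30.0)               # launch elevation angle
        n0, _ = n_and_grad(0.0)
        sol = solve_ivp(ray_rhs, [0.0, 1200.0],
                        [0.0, 0.0, n0 * np.cos(elev), n0 * np.sin(elev)], max_step=2.0)
        down = sol.y[1] < 0.0                 # samples after the ray returns to ground
        print(f"reflection height ~ {sol.y[1].max():.0f} km,",
              f"ground range ~ {sol.y[0][down][0]:.0f} km" if down.any() else "no return")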

  3. Explicating students’ personal professional theories in vocational education through multi-method triangulation

    NARCIS (Netherlands)

    Schaap, Harmen; De Bruijn, Elly; Van der Schaaf, Marieke; Baartman, Liesbeth; Kirschner, Paul A.

    2011-01-01

    Schaap, H., De Bruijn, E., Van der Schaaf, M. F., Baartman, L. K. J., & Kirschner, P. A. (2011). Explicating students’ personal professional theories in vocational education through multi-method triangulation. Scandinavian Journal of Educational Research, 55, 567-586.

  5. Subspace Iteration and Immersed Interface Methods: Theory, Algorithm, and Applications

    Science.gov (United States)

    2010-08-20

    solution via a level set function. The new approaches provide a second order discrete delta function for elliptic and elastic interface problems. The ... domains [10] with application to flow past fixed obstacles. In the application to problems in mathematical biology, our immersed-interface/level set method ... applied to the biological problem of forces creating branching morphogenesis shows that contractility of the mesenchyme is indeed sufficient to create a

  6. Theory of secondary vocational English project teaching method

    Institute of Scientific and Technical Information of China (English)

    张梅

    2016-01-01

    Driven by the rapid development of science, technology and the economy, social productivity has risen and people's quality of life has greatly improved; education has developed further as well. Under the influence of globalization, English has gradually become the foreign language most widely used in people's daily lives. Secondary vocational English is a public basic course for secondary vocational school students; as an important part of their training plan, it has received widespread attention. This study briefly discusses the project teaching method for secondary vocational English.

  7. Theory of restriction degree of Triple I method with total inference rules of fuzzy reasoning

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    Based on the theory of the sustentation degree of the Triple I method, together with the formulas of α-Triple I modus ponens (MP) and α-Triple I modus tollens (MT), the theory of the restriction degree of the Triple I method is proposed. Its properties are analyzed, and general formulas for the supremum of α-Triple I MP and the infimum of α-Triple I MT are obtained.

  8. Applying systems-centered theory (SCT) and methods in organizational contexts: putting SCT to work.

    Science.gov (United States)

    Gantt, Susan P

    2013-04-01

    Though initially applied in psychotherapy, a theory of living human systems (TLHS) and its systems-centered practice (SCT) offer a comprehensive conceptual framework replete with operational definitions and methods that is applicable in a wide range of contexts. This article elaborates the application of SCT in organizations by first summarizing systems-centered theory, its constructs and methods, and then using case examples to illustrate how SCT has been used in organizational and coaching contexts.

  9. Theories and calculation methods for regional objective ET (evapotranspiration): Applications

    Institute of Scientific and Technical Information of China (English)

    LIU JiaHong; QIN DaYong; WANG MingNa; LÜ JinYan; SANG XueFeng; ZHANG RuiMei

    2009-01-01

    The regional objective ET (evapotranspiration) is defined as the quantity of water that may be consumed in a particular region. It varies with the water conditions and economic development stage of the region, is constrained by the requirement of a benign environmental cycle, and at the same time must meet the demands of sustainable economic growth and the construction of a harmonious society. Objective-ET-based water resources distribution will replace the conventional method, which emphasizes the balance between water demand and water supply: it focuses on reasonable water consumption instead of forecasted water demand, which is usually greater than actual demand. In this paper, we calculated the objective ET for the year-2010 level in Tianjin by an analysis-integration-assessment method. Objective ET can be divided into two parts: controllable ET and uncontrollable ET. Controllable ET includes the ET from irrigated land and the ET from residential land; the former can be calculated with a soil moisture model and an evapotranspiration model, while the latter can be calculated from water use quotas and the water consumption rate. The uncontrollable ET can be calculated with a distributed hydrological model and a remote sensing monitoring model, which can be mutually calibrated. In this paper, eight schemes are put forward based on different portfolios of water resources. The objective ET of each scheme was calculated and the results were assessed and analyzed. Finally, an optimal scheme is recommended.

  10. The surface Laplacian technique in EEG: Theory and methods.

    Science.gov (United States)

    Carvalhaes, Claudio; de Barros, J Acacio

    2015-09-01

    This paper reviews the method of surface Laplacian differentiation to study EEG. We focus on topics that are helpful for a clear understanding of the underlying concepts and its efficient implementation, which is especially important for EEG researchers unfamiliar with the technique. The popular methods of finite difference and splines are reviewed in detail. The former has the advantage of simplicity and low computational cost, but its estimates are prone to a variety of errors due to discretization. The latter eliminates all issues related to discretization and incorporates a regularization mechanism to reduce spatial noise, but at the cost of increasing mathematical and computational complexity. These and several other issues deserving further development are highlighted, some of which we address to the extent possible. Here we develop a set of discrete approximations for Laplacian estimates at peripheral electrodes. We also provide the mathematical details of finite difference approximations that are missing in the literature, and discuss the problem of computational performance, which is particularly important in the context of EEG splines where data sets can be very large. Along this line, the matrix representation of the surface Laplacian operator is carefully discussed and some figures are given illustrating the advantages of this approach. In the final remarks, we briefly sketch a possible way to incorporate finite-size electrodes into Laplacian estimates that could guide further developments.
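
    A minimal example of the finite-difference family reviewed here is the Hjorth-style nearest-neighbour Laplacian, in which each channel is re-referenced to the mean of its neighbours; the five-electrode montage below is a toy geometry, not a clinical layout.

        # Hjorth-style nearest-neighbour surface Laplacian (toy montage, not a
        # clinical electrode layout): each channel minus the mean of its neighbours.
        import numpy as np
        def hjorth_laplacian(v, neighbours):
            """v: (n_channels,) potentials; neighbours: dict channel -> neighbour list."""
            lap = np.zeros_like(v, dtype=float)
            for ch, nbs in neighbours.items():
                lap[ch] = v[ch] - np.mean([v[nb] for nb in nbs])  # local mean reference
            return lap
        v = np.array([10.0, 7.0, 8.0, 9.0, 6.0])       # potentials at 5 toy electrodes
        neighbours = {0: [1, 2, 3, 4], 1: [0, 2], 2: [0, 1, 3], 3: [0, 2, 4], 4: [0, 3]}
        print(hjorth_laplacian(v, neighbours))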

  11. The construction of optimal stated choice experiments theory and methods

    CERN Document Server

    Street, Deborah J

    2007-01-01

    The most comprehensive and applied discussion of stated choice experiment constructions available The Construction of Optimal Stated Choice Experiments provides an accessible introduction to the construction methods needed to create the best possible designs for use in modeling decision-making. Many aspects of the design of a generic stated choice experiment are independent of its area of application, and until now there has been no single book describing these constructions. This book begins with a brief description of the various areas where stated choice experiments are applicable, including marketing and health economics, transportation, environmental resource economics, and public welfare analysis. The authors focus on recent research results on the construction of optimal and near-optimal choice experiments and conclude with guidelines and insight on how to properly implement these results. Features of the book include: Construction of generic stated choice experiments for the estimation of main effects...

  12. The Gaussian Radial Basis Function Method for Plasma Kinetic Theory

    CERN Document Server

    Hirvijoki, Eero; Belli, Emily; Embréus, Ola

    2015-01-01

    A fundamental macroscopic description of a magnetized plasma is the Vlasov equation supplemented by the nonlinear inverse-square force Fokker-Planck collision operator [Rosenbluth et al., Phys. Rev., 107, 1957]. The Vlasov part describes advection in a six-dimensional phase space whereas the collision operator involves friction and diffusion coefficients that are weighted velocity-space integrals of the particle distribution function. The Fokker-Planck collision operator is an integro-differential, bilinear operator, and numerical discretization of the operator is far from trivial. In this letter, we describe a new approach to discretize the entire kinetic system based on an expansion in Gaussian Radial Basis functions (RBFs). This approach is particularly well-suited to treat the collision operator because the friction and diffusion coefficients can be analytically calculated. Although the RBF method is known to be a powerful scheme for the interpolation of scattered multidimensional data, Gaussian RBFs also...
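
    The generic building block of such a scheme, plain Gaussian RBF interpolation of scattered data, can be sketched in a few lines; this shows the interpolation step only, not the letter's Fokker-Planck discretization, and the test function and shape parameter are arbitrary.

        # Plain Gaussian RBF interpolation: solve A w = f with A_ij = exp(-(eps*|xi-xj|)^2),
        # then evaluate sum_j w_j * phi(|x - x_j|). Test function and eps are arbitrary.
        import numpy as np
        def rbf_matrix(xa, xb, eps):
            return np.exp(-(eps * (xa[:, None] - xb[None, :])) ** 2)
        eps = 2.0
        x = np.linspace(-1.0, 1.0, 15)                  # nodes (scattered works too)
        f = np.exp(-x**2) * np.cos(3.0 * x)             # sampled test function
        w = np.linalg.solve(rbf_matrix(x, x, eps), f)   # interpolation weights
        xq = np.linspace(-1.0, 1.0, 101)
        err = rbf_matrix(xq, x, eps) @ w - np.exp(-xq**2) * np.cos(3.0 * xq)
        print("max interpolation error:", np.abs(err).max())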

  13. Methods of separation of variables in turbulence theory

    Science.gov (United States)

    Tsuge, S.

    1978-01-01

    Two schemes for closing the turbulent moment equations are proposed, both of which make the double-correlation equations separable into single-point equations. The first is based on neglecting triple correlations, leading to an equation that differs from the small-perturbation gasdynamic equations, with the separation constant appearing as the frequency. Grid-produced turbulence is described in this light as time-independent, cylindrically isotropic turbulence. Application to wall turbulence, guided by a new asymptotic method for the Orr-Sommerfeld equation, reveals a neutrally stable mode of essentially three-dimensional nature. The second closure scheme is based on an assumed identity of the separated variables, through which triple and quadruple correlations are formed. The resulting equation adds, to its equivalent in the first scheme, a nonlinear convolution integral in the frequency that describes the role of triple correlation in direct energy cascading.

  14. Newton’s method an updated approach of Kantorovich’s theory

    CERN Document Server

    Ezquerro Fernández, José Antonio

    2017-01-01

    This book shows the importance of studying semilocal convergence in iterative methods through Newton's method and addresses the most important aspects of Kantorovich's theory, including related studies. Kantorovich's theory for Newton's method used techniques of functional analysis to prove the semilocal convergence of the method by means of the well-known majorant principle. To gain a deeper understanding of these techniques, the authors return to the beginning and present a detailed treatment of Kantorovich's theory for Newton's method, in which they include old results for historical perspective and for comparison with new results, refine the old results, and prove their most relevant theorems, giving alternative approaches that lead to new sufficient semilocal convergence criteria for Newton's method. The book contains many numerical examples involving nonlinear integral equations, two boundary value problems and systems of nonlinear equations related to numerous physical phenomena. The book i...
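
    A compact illustration of the ideas involved, assuming the classical Kantorovich constants evaluated for one concrete equation rather than the book's general theory: check h = beta*eta*L <= 1/2 at the starting point, then run Newton's iteration.

        # Newton's method plus a crude a-priori check in the Kantorovich spirit:
        # h = beta*eta*L <= 1/2 (beta = 1/|f'(x0)|, eta = |f(x0)/f'(x0)|, L a
        # Lipschitz constant of f') suggests convergence. Constants are for this f only.
        f = lambda x: x**3 - 2*x - 5        # Wallis's classic test equation
        df = lambda x: 3*x**2 - 2
        L = 18.0                            # |f''(x)| = 6|x| <= 18 on [0, 3]
        x0 = 2.0
        beta, eta = 1.0 / abs(df(x0)), abs(f(x0) / df(x0))
        print("Kantorovich h =", beta * eta * L, "(<= 0.5 suggests convergence)")
        x = x0
        for k in range(8):
            x_next = x - f(x) / df(x)       # Newton step
            if abs(x_next - x) < 1e-14:
                break
            x = x_next
        print("root:", x)                   # ~2.0945514815423265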

  15. PREFACE: Classical density functional theory methods in soft and hard matter Classical density functional theory methods in soft and hard matter

    Science.gov (United States)

    Haataja, Mikko; Gránásy, László; Löwen, Hartmut

    2010-08-01

    Herein we provide a brief summary of the background, events and results/outcome of the CECAM workshop 'Classical density functional theory methods in soft and hard matter', held in Lausanne between October 21 and October 23, 2009, which brought together two largely separately working communities, both of whom employ classical density functional techniques: the soft-matter community and the theoretical materials science community with interests in phase transformations and evolving microstructures in engineering materials. After outlining the motivation for the workshop, we first provide a brief overview of the articles submitted by the invited speakers for this special issue of Journal of Physics: Condensed Matter, followed by a collection of outstanding problems identified and discussed during the workshop. 1. Introduction Classical density functional theory (DFT) is a theoretical framework which has been extensively employed in the past to study inhomogeneous complex fluids (CF) [1-4] and freezing transitions for simple fluids, amongst other things. Furthermore, classical DFT has been extended to include dynamics of the density field, thereby opening a new avenue to study phase transformation kinetics in colloidal systems via dynamical DFT (DDFT) [5]. While DDFT is highly accurate, the computations are numerically rather demanding and cannot easily access the mesoscopic temporal and spatial scales where diffusional instabilities lead to complex solidification morphologies. Adaptation of more efficient numerical methods would extend the domain of DDFT towards this regime of particular interest to materials scientists. In recent years, DFT has re-emerged in the form of the so-called 'phase-field crystal' (PFC) method for solid-state systems [6, 7], and it has been successfully employed to study a broad variety of interesting materials phenomena in both atomic and colloidal systems, including elastic and plastic deformations, grain growth, thin film growth, solid

  16. On the Translation Methods and Theory of International Advertising

    Institute of Scientific and Technical Information of China (English)

    Yang Xuemei

    2012-01-01

    Advertisement, as a way to promote products, always plays its role on a special stage. A successful advertisement helps the manufacturer achieve large sales, while an unsuccessful or bad one does the opposite. Advertising is an activity demanding intelligence, patience, art and diligence. With globalization and China's entry into the WTO, more and more Chinese products get the opportunity to enter the world market. In this battle without gunpowder smoke, the most powerful weapon is the commercial advertisement; advertising therefore becomes ever more important. At the same time, the translation of advertising serves to carry advertisements to the world, acting as a bridge across countries and languages. However, owing to differences among cultures, the questions arise of how to be a good advertising translator and how to produce an excellent translation of advertising. This thesis analyses the criteria and strategies of advertising translation, after discussing the types, structure and stylistic features of advertisements, and argues that there is no single established method for advertising translation: the translator must deal flexibly with the various advertisements encountered.

  17. Theories and calculation methods for regional objective ET

    Institute of Scientific and Technical Information of China (English)

    QIN DaYong; LÜ JinYan; LIU JiaHong; WANG MingNa

    2009-01-01

    The regional objective ET (evapotranspiration) is a new concept in water resources research, which refers to the total amount of water that may be exhausted from a region in the form of vapor per year. Objective-ET-based water resources management allocates water to different regions in terms of ET and controls the water exhausted from a region to meet the objective ET. The regional objective ET must be adapted to the region's locally available water resources. By improving water utilization efficiency and reducing the unrecoverable water in the social water cycle, water is saved so that water-related production is maintained or even increased under the same water consumption conditions. Regional water balance is achieved by rationally deploying the available water among different industries, adjusting industrial structures, and adopting new water-saving technologies, thereby meeting the requirements of groundwater conservation and agricultural income stability and avoiding environmental damage. Furthermore, water competition among various departments and industries (including environmental and ecological water use) may be avoided. This paper proposes an innovative definition of objective ET, together with its principles and sub-index systems. In addition, a computational method for regional objective ET is developed by combining a distributed hydrological model with a soil moisture model.

  18. Theories and Diagnostic Methods of Land Use Conflicts

    Institute of Scientific and Technical Information of China (English)

    Yongfang; YANG; Lianqi; ZHU

    2013-01-01

    With social and economic development, land resources are becoming increasingly scarce, and land use conflicts are getting more frequent, deeper, more diversified and more severe. Moreover, the factors that induce land use conflicts are more and more complicated. Therefore, the key to solving many difficult problems in regional sustainable land use lies in the research of land use conflicts, the scientific evaluation of the intensity of regional land use conflicts, and the further revelation of the external forms as well as the intrinsic mechanisms of land use conflicts. Based on a review of both domestic and foreign literature, this paper completes the theoretical framework and contents of land use conflict research, establishes diagnostic models and methods for land use conflict intensity, and proposes key areas for future research. The purpose is to steer the evolution of the spatial structure of China's land resources in a positive direction and to achieve integrated, coordinated management of land use by improving the spatial allocation efficiency of land factors and buffering the pressure on land resources.

  19. A design method based on photonic crystal theory for Bragg concave diffraction grating

    Science.gov (United States)

    Du, Bingzheng; Zhu, Jingping; Mao, Yuzheng; Li, Bao; Zhang, Yunyao; Hou, Xun

    2017-02-01

    A design method based on one-dimensional photonic crystal theory (1-D PC theory) is presented for designing Bragg concave diffraction gratings (Bragg-CDGs) for demultiplexers. With this design method, the reflection condition calculated from 1-D PC theory can be matched perfectly with the diffraction condition. As a result, the shift of the central wavelength of the diffraction spectrum is reduced while high diffraction efficiency is maintained. The performance of the Bragg-CDG for TE and TM modes is investigated, and the simulation results are consistent with 1-D PC theory. This design method is expected to improve the accuracy and efficiency of Bragg-CDGs after further research.
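
    The flavour of matching the two conditions can be sketched with their simplified textbook forms (these are not the authors' design equations; the effective index, wavelength, pitch and incidence angle are assumed values): a first-order Bragg condition lambda = 2*n_eff*Lambda fixes the grating period, and the grating equation d*(sin th_i + sin th_d) = m*lambda fixes the diffraction angle.

        # Simplified textbook pairing of the two conditions (not the authors' design
        # equations; n_eff, wavelength, pitch and incidence angle are assumed values).
        import numpy as np
        n_eff, lam = 1.45, 1.55e-6
        Lam = lam / (2.0 * n_eff)               # first-order Bragg period, normal incidence
        d, m, th_i = 10e-6, 1, np.deg2rad(45.0)
        th_d = np.arcsin(m * lam / d - np.sin(th_i))   # grating equation, order m
        print(f"Bragg period {Lam*1e9:.1f} nm, order-{m} angle {np.rad2deg(th_d):.2f} deg")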

  20. Theory and methods of global stability analysis for high arch dam

    Institute of Scientific and Technical Information of China (English)

    2011-01-01

    The global stability of high arch dams is one of the key problems in dam safety studies, but no feasible method with a sound theoretical basis is yet available. In this paper, based on the stability theory of mechanical systems, it is demonstrated that the global failure of a high arch dam is a physical instability that starts from local strength failure; according to the characteristics of the load-displacement curve obtained from the failure process of the dam-foundation system, it is an extreme-point instability. The global failure of the dam-foundation system should therefore be studied with the stability theory of mechanical systems. It is also pointed out that the stability analysis methods currently used in engineering are consistent with this stability theory, but were not established directly from it. A rigorous method can be obtained through the study of the physical disturbance equations.

  1. Measuring autocratic regime stability

    Directory of Open Access Journals (Sweden)

    Joseph Wright

    2016-01-01

    Researchers measure regime stability in autocratic contexts using a variety of data sources that capture distinct concepts. Often this research uses concepts developed for the study of democratic politics, such as leadership change or institutionalized authority, to construct measures of regime breakdown in non-democratic contexts. This article assesses whether the measure a researcher chooses influences the results they obtain by examining data on executive leadership, political authority, and autocratic regimes. We illustrate the conceptual differences between these variables by extending recent studies in the literature on the political consequences of non-tax revenue and unearned foreign income.

  2. Exchange rate regime choice

    Directory of Open Access Journals (Sweden)

    Beker Emilija

    2006-01-01

    The choice of an adequate exchange rate regime proves to be a highly sensitive field within which economic authorities present and confirm themselves. The advantages and disadvantages of fixed and flexible exchange rate regimes, which have been considerably relativized compared with the conventional view, together with the simultaneous but not synchronized effects of structural and external factors, remain permanently in question throughout the complex process of choosing an exchange rate regime. The paper attempts a critical identification of the key exchange rate performances, with emphasis on the continuous non-uniformity and (un)certainty of the shelf life of any given choice.

  3. Communication: An efficient analytic gradient theory for approximate spin projection methods

    Science.gov (United States)

    Hratchian, Hrant P.

    2013-03-01

    Spin polarized and broken symmetry density functional theory are popular approaches for treating the electronic structure of open shell systems. However, spin contamination can significantly affect the quality of predicted geometries and properties. One scheme for addressing this concern in studies involving broken-symmetry states is the approximate projection method developed by Yamaguchi and co-workers. Critical to the exploration of potential energy surfaces and the study of properties using this method will be an efficient analytic gradient theory. This communication introduces such a theory formulated, for the first time, within the framework of general post-self consistent field (SCF) derivative theory. Importantly, the approach taken here avoids the need to explicitly solve for molecular orbital derivatives of each nuclear displacement perturbation, as has been used in a recent implementation. Instead, the well-known z-vector scheme is employed and only one SCF response equation is required.

  4. Influence of intra-event-based flood regime on sediment flow behavior from a typical agro-catchment of the Chinese Loess Plateau

    Science.gov (United States)

    Zhang, Le-Tao; Li, Zhan-Bin; Wang, He; Xiao, Jun-Bo

    2016-07-01

    The pluvial erosion process is significantly affected by spatio-temporal patterns of flood flows. However, despite their importance, only a few studies have investigated the sediment flow behavior driven by different flood regimes. This study investigates the effect of intra-event-based flood regimes on the dynamics of sediment export at Tuanshangou catchment, a typical (unmanaged) agricultural catchment in the hilly loess region of the Chinese Loess Plateau. Measurements of 193 flood events and 158 sediment-producing events were collected at Tuanshangou station between 1961 and 1969. The combined methods of hierarchical clustering, discriminant analysis, and one-way ANOVA were used to classify the flood events in terms of their event-based characteristics, including flood duration, peak discharge, and event flood runoff depth. The 193 flood events were classified into five regimes whose mean statistical features differ significantly. Regime A includes flood events with the shortest duration (76 min), minimum peak discharge (0.045 m³ s⁻¹), least runoff depth (0.2 mm), and highest frequency. Regime B includes flood events with medium duration (274 min), medium peak discharge (0.206 m³ s⁻¹), and minor runoff depth (0.7 mm). Regime C includes flood events with the longest duration (822 min), medium peak discharge (0.236 m³ s⁻¹), and medium runoff depth (1.7 mm). Regime D includes flood events with medium duration (239 min), large peak discharge (4.21 m³ s⁻¹), and large runoff depth (10 mm). Regime E includes flood events with medium duration (304 min), maximum peak discharge (8.62 m³ s⁻¹), and the largest runoff depth (25.9 mm). Sediment yield by flood regime is ranked as follows: Regime E > Regime D > Regime B > Regime C > Regime A. In terms of event-based average and maximum suspended sediment concentration, the regimes are ordered as follows: Regime E > Regime D > Regime C > Regime B > Regime A. Regimes D and E

  5. Social Theory, Qualitative Methods and Ethnography: Representation and Reflexivity in Social Sciences

    Directory of Open Access Journals (Sweden)

    Juan Pablo Vera Lugo

    2007-07-01

    The article contributes to the discussion of the relation between qualitative methods and social theory, within the framework of the crisis of representation and the emergence of reflexivity. The rise and development of qualitative inquiry in the twentieth century, the implications of social theory in the postwar period, and the role played by critical theories in answering the crisis of sense and representation are the main issues reviewed by the authors. Finally, the article concludes with a discussion of the need for reflexivity within the ethnographic field.

  6. Spectral methods in chemistry and physics applications to kinetic theory and quantum mechanics

    CERN Document Server

    Shizgal, Bernard

    2015-01-01

    This book is a pedagogical presentation of the application of spectral and pseudospectral methods to kinetic theory and quantum mechanics. There are additional applications to astrophysics, engineering, biology and many other fields. The main objective of this book is to provide the basic concepts to enable the use of spectral and pseudospectral methods to solve problems in diverse fields of interest and to a wide audience. While spectral methods are generally based on Fourier Series or Chebychev polynomials, non-classical polynomials and associated quadratures are used for many of the applications presented in the book. Fourier series methods are summarized with a discussion of the resolution of the Gibbs phenomenon. Classical and non-classical quadratures are used for the evaluation of integrals in reaction dynamics including nuclear fusion, radial integrals in density functional theory, in elastic scattering theory and other applications. The subject matter includes the calculation of transport coefficient...
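
    As a small taste of the classical quadratures the book covers, Gauss-Chebyshev quadrature integrates f(x)/sqrt(1-x^2) on [-1, 1] with equal weights pi/n at the Chebyshev nodes, exactly for polynomial f up to degree 2n-1; the sketch below is generic, not code from the book.

        # Gauss-Chebyshev quadrature: nodes x_k = cos((2k-1)pi/2n), equal weights pi/n,
        # integrating f(x)/sqrt(1-x^2) on [-1, 1] (exact for f of degree <= 2n-1).
        import numpy as np
        def gauss_chebyshev(f, n):
            k = np.arange(1, n + 1)
            x = np.cos((2 * k - 1) * np.pi / (2 * n))   # Chebyshev nodes
            return (np.pi / n) * np.sum(f(x))           # equal weights pi/n
        # integral of x^2/sqrt(1-x^2) over [-1, 1] equals pi/2
        print(gauss_chebyshev(lambda x: x**2, 8), np.pi / 2)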

  7. A unified convergence theory of a numerical method,and applications to the replenishment policies

    Institute of Scientific and Technical Information of China (English)

    MI Xiang-jiang(宓湘江); WANG Xing-hua(王兴华)

    2004-01-01

    In determining the replenishment policy for an inventory system, some researchers advocated that Newton's iterative method could be applied to the derivative of the total cost function in order to obtain the optimal solution. But this approach requires calculating the second derivative of the function. Avoiding this complex computation, we use another iterative method presented by the second author. One of the goals of this paper is to present a unified convergence theory for this method. We then give a numerical example to show the application of our theory.
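
    The record does not reproduce the second author's iteration, so as a generic stand-in for "optimize a cost without its second derivative", the sketch below runs a secant iteration on the first derivative of an EOQ-style cost C(q) = K*D/q + h*q/2, whose closed-form optimum is known; K, D, h and the starting points are invented.

        # Generic stand-in (not the paper's method): secant iteration on C'(q) avoids
        # ever forming C''(q). C(q) = K*D/q + h*q/2, all parameter values assumed.
        K, D, h = 100.0, 1200.0, 5.0
        dC = lambda q: -K * D / q**2 + h / 2.0      # C'(q); its root is the optimum
        q0, q1 = 100.0, 300.0
        for _ in range(50):
            q2 = q1 - dC(q1) * (q1 - q0) / (dC(q1) - dC(q0))   # secant step
            q0, q1 = q1, q2
            if abs(q1 - q0) < 1e-10:
                break
        print(q1, "vs closed-form EOQ", (2 * K * D / h) ** 0.5)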

  9. A collocation method for surface tension calculations with the density gradient theory

    DEFF Research Database (Denmark)

    Larsen, Peter Mahler; Maribo-Mogensen, Bjørn; Kontogeorgis, Georgios M.

    2016-01-01

    Surface tension calculations are important in many industrial applications and over a wide range of temperatures, pressures and compositions. Empirical parachor methods are not suitable over a wide condition range, and the combined use of density gradient theory with equations of state has been proposed in the literature. Often, many millions of calculations are required in the gradient theory methods, which is computationally very intensive. In this work, we have developed an algorithm to calculate surface tensions an order of magnitude faster than the existing methods, with no loss of accuracy...
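
    In its standard form the gradient-theory surface tension reduces to a one-dimensional quadrature, sigma = integral of sqrt(2*c*dOmega(rho)) d(rho) between the equilibrium densities; the sketch below evaluates it for a toy double-well dOmega whose analytic answer is known (all constants are made up, and this is not the authors' collocation algorithm).

        # Gradient-theory surface tension as a 1-D quadrature with a toy double-well
        # excess grand potential; constants invented so the analytic answer is known.
        import numpy as np
        c, A = 1.0e-19, 2.0e3                   # influence parameter, well height
        rho_v, rho_l = 40.0, 16000.0            # equilibrium vapor/liquid densities
        d_omega = lambda r: A * ((r - rho_v) * (rho_l - r) / (rho_l - rho_v)) ** 2
        rho = np.linspace(rho_v, rho_l, 2001)
        sigma = np.trapz(np.sqrt(2.0 * c * d_omega(rho)), rho)
        exact = np.sqrt(2.0 * c * A) * (rho_l - rho_v) ** 2 / 6.0
        print(f"quadrature {sigma:.6f} vs analytic {exact:.6f}")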

  10. Inverse problem theory methods for data fitting and model parameter estimation

    CERN Document Server

    Tarantola, A

    2002-01-01

    Inverse Problem Theory is written for physicists, geophysicists and all scientists facing the problem of quantitative interpretation of experimental data. Although it contains a lot of mathematics, it is not intended as a mathematical book, but rather tries to explain how a method of acquisition of information can be applied to the actual world.The book provides a comprehensive, up-to-date description of the methods to be used for fitting experimental data, or to estimate model parameters, and to unify these methods into the Inverse Problem Theory. The first part of the book deals wi

  12. On unified modeling, theory, and method for solving multi-scale global optimization problems

    Science.gov (United States)

    Gao, David Yang

    2016-10-01

    A unified model is proposed for general optimization problems in multi-scale complex systems. Based on this model and necessary assumptions from physics, the canonical duality theory is presented in a precise way so as to include traditional duality theories and popular methods as special applications. Two conjectures on NP-hardness are proposed, which should play an important role in correctly understanding and efficiently solving challenging real-world problems. Applications are illustrated for both nonconvex continuous optimization and mixed-integer nonlinear programming.

  13. MESHING THEORY AND DESIGN METHOD OF NEW SILENT CHAIN AND SPROCKET

    Institute of Scientific and Technical Information of China (English)

    MENG Fanzhong; FENG Zengming; CHU Yaxu

    2006-01-01

    Based on a study of the meshing theory of a new silent chain and its sprockets, and of the rolling-cut theory of sprocket and hob, the harmonious relations among the dominant dimensions of the new silent chain, sprocket and hob are established, the meshing conditions are set out, and a closed-form expression that can guide design and calculation is derived. Tests show that the meshing design method is feasible.

  14. Phenomenography and Grounded Theory as Research Methods in Computing Education Research Field

    Science.gov (United States)

    Kinnunen, Paivi; Simon, Beth

    2012-01-01

    This paper discusses two qualitative research methods, phenomenography and grounded theory. We introduce both methods' data collection and analysis processes and the type or results you may get at the end by using examples from computing education research. We highlight some of the similarities and differences between the aim, data collection and…

  16. Role of Logic and Mentality as the Basics of Wittgenstein's Picture Theory of Language and Extracting Educational Principles and Methods According to This Theory

    Science.gov (United States)

    Heshi, Kamal Nosrati; Nasrabadi, Hassanali Bakhtiyar

    2016-01-01

    The present paper attempts to identify principles and methods of education based on Wittgenstein's picture theory of language. This qualitative research utilized an inferential analytical approach to review the related literature and extracted a set of principles and methods from his theory of picture language. Findings revealed that Wittgenstein…

  17. Chiral extrapolation beyond the power-counting regime

    CERN Document Server

    Hall, J M M; Leinweber, D B; Liu, K F; Mathur, N; Young, R D; Zhang, J B

    2011-01-01

    Chiral effective field theory can provide valuable insight into the chiral physics of hadrons when used in conjunction with non-perturbative schemes such as lattice QCD. In this discourse, the attention is focused on extrapolating the mass of the rho meson to the physical pion mass in quenched QCD (QQCD). With the absence of a known experimental value, this serves to demonstrate the ability of the extrapolation scheme to make predictions without prior bias. By using extended effective field theory developed previously, an extrapolation is performed using quenched lattice QCD data that extends outside the chiral power-counting regime (PCR). The method involves an analysis of the renormalization flow curves of the low energy coefficients in a finite-range regularized effective field theory. The analysis identifies an optimal regulator, which is embedded in the lattice QCD data themselves. This optimal regulator is the regulator value at which the renormalization of the low energy coefficients is approximately i...

  18. Advancing Understanding of the Surface Water Quality Regime of Contemporary Mixed-Land-Use Watersheds: An Application of the Experimental Watershed Method

    Directory of Open Access Journals (Sweden)

    Elliott Kellner

    2017-06-01

    A representative watershed was instrumented with five gauging sites (n = 5), partitioning the catchment into five nested-scale sub-watersheds. Four physiochemical variables were monitored: water temperature, pH, total dissolved solids (TDS), and dissolved oxygen (DO). Data were collected four days per week from October 2010–May 2014 at each gauging site. Statistical analyses indicated significant differences (p < 0.05) between nearly every monitoring site pairing for each physiochemical variable. The water temperature regime displayed a threshold/step-change condition, with an upshifted and more variable regime attributable to the impacts of urban land uses. TDS, pH, and DO displayed similar spatiotemporal trends, with increasing median concentrations from site #1 (agriculture) to #3 (mixed-use urban) and decreasing median concentrations from site #3 to #5 (suburban). Decreasing concentrations and increasing streamflow volume with stream distance suggest the contribution of dilution processes to the physiochemical regime of the creek below urban site #3. DO concentrations exceeded water quality standards on an average of 31% of observation days. Results showed seasonal trends for each physiochemical parameter, with higher TDS, pH, and DO during the cold season (November–April) relative to the warm season (May–October). Multivariate modeling results emphasize the importance of the pH/DO relationship in these systems, and demonstrate the potential utility of a simple two-factor model (water temperature and pH) in accurately predicting DO. Collectively, results highlight the interacting influences of natural (autotrophic photosynthesis, organic detritus loading) and anthropogenic (road salt application) factors on the physiochemical regime of mixed-land-use watersheds.
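
    The closing remark about a simple two-factor model (water temperature and pH) predicting DO invites an ordinary least-squares sketch; the observations below are invented stand-ins, not the study's data.

        # Two-factor OLS sketch: DO ~ b0 + b1*T + b2*pH. The numbers are invented
        # stand-ins, not the study's observations.
        import numpy as np
        temp = np.array([4.0, 8.0, 12.0, 18.0, 24.0, 28.0])   # water temperature, deg C
        ph = np.array([8.1, 8.0, 7.9, 7.7, 7.5, 7.4])
        do = np.array([12.8, 11.6, 10.5, 9.0, 7.6, 6.9])      # dissolved oxygen, mg/L
        X = np.column_stack([np.ones_like(temp), temp, ph])   # intercept + 2 factors
        beta, *_ = np.linalg.lstsq(X, do, rcond=None)
        print("DO ~ %.2f + %.3f*T + %.2f*pH" % tuple(beta))
        print("predicted DO at T=15 C, pH=7.8:", beta @ np.array([1.0, 15.0, 7.8]))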

  19. Kinetic theory of correlated fluids: from dynamic density functional to Lattice Boltzmann methods.

    Science.gov (United States)

    Marconi, Umberto Marini Bettolo; Melchionna, Simone

    2009-07-07

    Using methods of kinetic theory and liquid state theory we propose a description of the nonequilibrium behavior of molecular fluids, which takes into account their microscopic structure and thermodynamic properties. The present work represents an alternative to the recent dynamic density functional theory, which can only deal with colloidal fluids and is not apt to describe the hydrodynamic behavior of a molecular fluid. The method is based on a suitable modification of the Boltzmann transport equation for the phase space distribution and provides a detailed description of the local structure of the fluid and its transport coefficients. Finally, we propose a practical scheme to solve numerically and efficiently the resulting kinetic equation by employing a discretization procedure analogous to the one used in the Lattice Boltzmann method.
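
    The discretization style the record refers to can be illustrated with a bare-bones lattice Boltzmann collision-and-streaming loop; this is generic D1Q3 diffusion, not the paper's structured-fluid kinetic scheme.

        # Bare-bones D1Q3 lattice Boltzmann loop (generic diffusion, not the paper's
        # scheme): BGK collision toward local equilibrium, then streaming.
        import numpy as np
        nx, nsteps, tau = 200, 500, 0.8
        w = np.array([2/3, 1/6, 1/6])           # weights for velocities {0, +1, -1}
        rho = np.ones(nx); rho[90:110] = 2.0    # initial density bump
        f = w[:, None] * rho                    # start at local equilibrium
        for _ in range(nsteps):
            feq = w[:, None] * rho              # equilibrium populations
            f += -(f - feq) / tau               # BGK collision
            f[1] = np.roll(f[1], 1)             # stream velocity +1 (periodic)
            f[2] = np.roll(f[2], -1)            # stream velocity -1 (periodic)
            rho = f.sum(axis=0)
        # the bump spreads diffusively with D = (tau - 0.5)/3 in lattice units
        print("mass conserved:", np.isclose(rho.sum(), 220.0), "peak:", round(rho.max(), 4))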

  20. Unification of Field Theory and Maximum Entropy Methods for Learning Probability Densities

    CERN Document Server

    Kinney, Justin B

    2014-01-01

    Bayesian field theory and maximum entropy are two methods for learning smooth probability distributions (a.k.a. probability densities) from finite sampled data. Both methods were inspired by statistical physics, but the relationship between them has remained unclear. Here I show that Bayesian field theory subsumes maximum entropy density estimation. In particular, the most common maximum entropy methods are shown to be limiting cases of Bayesian inference using field theory priors that impose no boundary conditions on candidate densities. This unification provides a natural way to test the validity of the maximum entropy assumption on one's data. It also provides a better-fitting nonparametric density estimate when the maximum entropy assumption is rejected.
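
    The limiting case identified here, moment-constrained maximum entropy estimation, amounts to fitting an exponential-family density; a grid-based sketch with simulated data follows (not the paper's field-theory code).

        # Grid-based maximum-entropy fit: p(x) proportional to exp(l1*x + l2*x^2),
        # with the first two moments matched to a simulated sample.
        import numpy as np
        from scipy.optimize import fsolve
        rng = np.random.default_rng(0)
        data = rng.normal(1.0, 0.5, size=2000)
        t1, t2 = data.mean(), (data**2).mean()
        x = np.linspace(-3.0, 5.0, 801)
        def moment_gap(lam):
            e = lam[0] * x + lam[1] * x**2
            p = np.exp(e - e.max())             # stabilized unnormalized density
            p /= np.trapz(p, x)
            return [np.trapz(x * p, x) - t1, np.trapz(x**2 * p, x) - t2]
        lam = fsolve(moment_gap, [0.0, -0.5])
        print("lambdas:", lam)                   # Gaussian data => ~ (mu/s^2, -1/(2*s^2))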

  1. SAR images classification method based on Dempster-Shafer theory and kernel estimate

    Institute of Scientific and Technical Information of China (English)

    He Chu; Xia Guisong; Sun Hong

    2007-01-01

    To study scene classification in Synthetic Aperture Radar (SAR) images, a novel method based on kernel estimation, the Markov context, and Dempster-Shafer evidence theory is proposed. Initially, a nonparametric Probability Density Function (PDF) estimation method is introduced to describe the scene of SAR images. Then, under the Markov context, both the determinate PDF and the kernel estimate are adopted, respectively, to form a primary classification. Next, the primary classification results are fused using evidence theory in an unsupervised way to obtain the scene classification. Finally, a regularization step is applied, in which an iterated maximum-selection approach is introduced to control fragments and correct errors in the classification. Use of the kernel estimate and evidence theory makes it possible to describe complicated scenes with little prior knowledge and to eliminate the ambiguities of the primary classification results. Experimental results on real SAR images illustrate a rather impressive performance.
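
    The evidence-fusion step named in the abstract is Dempster's rule of combination; a generic two-source implementation follows (the classes and mass assignments are invented).

        # Dempster's rule of combination for two mass functions over frozenset focal
        # elements; classes and masses are invented, not the paper's outputs.
        def dempster_combine(m1, m2):
            combined, conflict = {}, 0.0
            for a, wa in m1.items():
                for b, wb in m2.items():
                    inter = a & b
                    if inter:
                        combined[inter] = combined.get(inter, 0.0) + wa * wb
                    else:
                        conflict += wa * wb          # mass assigned to empty set
            k = 1.0 - conflict                        # renormalize by non-conflict
            return {s: w / k for s, w in combined.items()}
        urban, water, forest = frozenset({"urban"}), frozenset({"water"}), frozenset({"forest"})
        theta = urban | water | forest                # full frame of discernment
        m_kernel = {urban: 0.5, water: 0.2, theta: 0.3}   # evidence from source 1
        m_pdf = {urban: 0.4, forest: 0.3, theta: 0.3}     # evidence from source 2
        print(dempster_combine(m_kernel, m_pdf))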

  2. From the density-of-states method to finite density quantum field theory

    CERN Document Server

    Langfeld, Kurt

    2016-01-01

    During the last 40 years, Monte Carlo calculations based upon Importance Sampling have matured into the most widely employed method for determining first-principles results in QCD. Nevertheless, Importance Sampling leads to spectacular failures in situations in which certain rare configurations play a non-secondary role, as is the case for Yang-Mills theories near a first-order phase transition or quantum field theories at finite matter density when studied with the re-weighting method. The density-of-states method, in its LLR formulation, has the potential to solve such overlap or sign problems by means of an exponential error suppression. We here introduce the LLR approach and its generalisation to complex action systems. Applications include U(1), SU(2) and SU(3) gauge theories as well as the Z3 spin model at finite densities and heavy-dense QCD.

  3. Is social projection based on simulation or theory? Why new methods are needed for differentiating.

    Science.gov (United States)

    Bazinger, Claudia; Kühberger, Anton

    2012-12-01

    The literature on social cognition reports many instances of a phenomenon titled 'social projection' or 'egocentric bias'. These terms indicate egocentric predictions, i.e., an over-reliance on the self when predicting the cognition, emotion, or behavior of other people. The classic method to diagnose egocentric prediction is to establish high correlations between our own and other people's cognition, emotion, or behavior. We argue that this method is incorrect because there is a different way to come to a correlation between own and predicted states, namely, through the use of theoretical knowledge. Thus, the use of correlational measures is not sufficient to identify the source of social predictions. Based on the distinction between simulation theory and theory theory, we propose the following alternative methods for inferring prediction strategies: independent vs. juxtaposed predictions, the use of 'hot' mental processes, and the use of participants' self-reports.

  4. Optical Mixing in the Strong Coupling Regime: A New Method of Beam Conditioning at Hohlraum LEH and Direct Drive ICF Coronal Plasmas

    Science.gov (United States)

    Mardirian, Marine; Afeyan, Bedros; Huller, Stefan; Montgomery, David; Froula, Dustin; Kirkwood, Robert

    2012-10-01

    We will present theoretical and computational results on Brillouin interactions between two beams in co-, counter-, and orthogonal propagation geometries. The beams will be structured (with speckle patterns), and the plasma will have inhomogeneous flow including the Mach −1 surface. As the growth rate of the instability surpasses the natural frequency of the ion wave, the strong coupling regime (SCR) is reached, where reactive quasi-modes with intensity-dependent frequency shifts result. This is especially true in laser hot spots. We trace the consequences of operating in this regime for different damping rates of the ion acoustic waves. We consider convective and absolute instabilities as well as the design of experiments that could examine these new regimes of instability behavior with new 10 ps time-resolved diagnostics. Whether well enough conditioned beams can result after tens or hundreds of pairwise crossings in direct and indirect drive ICF configurations, and whether SRS can thus be strongly suppressed downstream, remains to be demonstrated. But the prospects exist for such new paths to instability control in a staged manner before STUD pulses are implemented.

  5. What does it mean to use a method? Towards a practice theory for software engineering

    DEFF Research Database (Denmark)

    Dittrich, Yvonne

    2016-01-01

    What is lacking is an understanding of how methods affect software development. Objective: The article develops a set of concepts, based on the practice concept in the philosophy of sociology, as a basis for describing software development as a social practice and for developing an understanding of methods and their application that explains the heterogeneity of outcomes. Practice here is understood not as opposed to theory, but as a commonly agreed-upon way of acting that is acknowledged by the team. Method: The article applies concepts from the philosophy of sociology and social theory to describe software development, and develops the concepts of method and method usage. The results and steps of the philosophical argumentation are exemplified using published empirical research. Results: The article develops a conceptual basis for understanding software development as social and epistemic practice, and defines methods as practice patterns

  6. Empirical Analysis of Value-at-Risk Estimation Methods Using Extreme Value Theory

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    This paper investigates methods of value-at-risk (VaR) estimation using extreme value theory (EVT). It compares two different estimation methods, the "two-step subsample bootstrap" based on moment estimation, and maximum likelihood estimation (MLE), according to their theoretical bases and computation procedures. Then, the estimation results are analyzed together with those of the normal method and the empirical method. The empirical study of foreign exchange data shows that the EVT methods have good characteristics for estimating VaR under extreme conditions, and that the "two-step subsample bootstrap" method is preferable to MLE.
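
    One standard EVT route to VaR, of the moment-estimation type compared in the paper, is the Hill tail-index estimator with the Weissman tail quantile; the sketch below uses simulated heavy-tailed returns and a fixed tail fraction k rather than the paper's bootstrap-selected one.

        # Hill estimator + Weissman tail quantile for an extreme VaR level; losses
        # are simulated Student-t returns and k is fixed, not bootstrap-selected.
        import numpy as np
        rng = np.random.default_rng(1)
        losses = rng.standard_t(df=4, size=5000)           # heavy-tailed toy returns
        x = np.sort(losses)[::-1]                          # descending order statistics
        k = 250                                            # number of tail observations
        hill_alpha = 1.0 / np.mean(np.log(x[:k] / x[k]))   # tail-index estimate
        p = 0.01                                           # VaR exceedance probability
        var_evt = x[k] * (k / (len(x) * p)) ** (1.0 / hill_alpha)
        print(f"alpha-hat = {hill_alpha:.2f}, 99% VaR (EVT) = {var_evt:.3f}")
        print(f"99% VaR (empirical) = {np.quantile(losses, 0.99):.3f}")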

  7. Phenomenography and grounded theory as research methods in computing education research field

    Science.gov (United States)

    Kinnunen, Päivi; Simon, Beth

    2012-06-01

    This paper discusses two qualitative research methods, phenomenography and grounded theory. We introduce both methods' data collection and analysis processes and the type or results you may get at the end by using examples from computing education research. We highlight some of the similarities and differences between the aim, data collection and analysis phases and the type of resulting outcomes of these methods. We also discuss the challenges and threats the both methods may pose to the researcher. We conclude that while aimed at tackling different types of research questions, both of these methods provide computing education researchers a useful tool in their research method toolbox.

  8. A Case for Rhetorical Method: Criticism, Theory, and the Exchange of Jean Baudrillard

    OpenAIRE

    Gogan, Brian James

    2011-01-01

    This dissertation uses the case of Jean Baudrillard to argue that successful critics must consider rhetorical method as it relates to theory. Throughout this dissertation, I follow Edwin Black in using the term rhetorical method to describe the procedures a rhetor uses to guide composition. The project's two main goals are, first, to demonstrate how rhetorical method can serve as a foundation for worthwhile criticism, and, second, to outline a Baudrillardian rhetoric. In order to meet these g...

  9. The Grounded Theory Method: Deconstruction and Reconstruction in a Human Patient Simulation Context

    Directory of Open Access Journals (Sweden)

    Brian Parker PhD

    2011-03-01

    Certain modes of qualitative inquiry, such as grounded theory, can serve to uncover the abstract processes and broad conceptual themes influencing the personal experiences of undergraduate nursing students encountering clinical scenarios that use human patient simulators (HPS). To date, insufficient research has been conducted to uncover the basic social-psychological processes encountered by students as they engage in an HPS-based clinical scenario. The authors assert that HPS-based learning experiences are in reality social endeavors that lead to the creation of socially negotiated knowledge and meanings relevant to the adult learner. To understand how grounded theory is suited to answering these questions, an analysis of its theoretical and philosophical foundations is undertaken. This critical analysis concludes with a discussion of specific considerations to be reflected upon by researchers when applying the inductively derived method of grounded theory to uncover the social processes that occur within HPS-based clinical scenarios.

  10. A novel trust evaluation method for Ubiquitous Healthcare based on cloud computational theory.

    Science.gov (United States)

    Athanasiou, Georgia; Fengou, Maria-Anna; Beis, Antonios; Lymberopoulos, Dimitrios

    2014-01-01

    The notion of trust is considered to be the cornerstone of the patient-psychiatrist relationship. Thus, a trustworthy background is a fundamental requirement for the provision of effective Ubiquitous Healthcare (UH) services. In this paper, the issue of trust evaluation of UH providers at registration into the UH environment is addressed. For that purpose a novel trust evaluation method is proposed, based on cloud theory and exploiting User Profile attributes. This theory mimics human thinking regarding trust evaluation and captures the fuzziness and randomness of this uncertain reasoning. Two case studies are investigated through simulation in MATLAB, in order to verify the effectiveness of the novel method.
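
    The engine of cloud-model reasoning is the forward normal cloud generator, which draws "drops" x ~ N(Ex, En'^2) with En' ~ N(En, He^2) and assigns each a certainty degree; the sketch below uses assumed (Ex, En, He) values, not the paper's calibrated parameters.

        # Forward normal cloud generator (generic form; Ex/En/He are assumed values,
        # not the paper's calibrated trust parameters).
        import numpy as np
        def cloud_drops(ex, en, he, n=5):
            """Generate (x, membership) drops for a normal cloud (Ex, En, He)."""
            rng = np.random.default_rng(42)
            en_prime = rng.normal(en, he, size=n)            # second-order randomness
            x = rng.normal(ex, np.abs(en_prime))             # drop positions
            mu = np.exp(-(x - ex) ** 2 / (2 * en_prime**2))  # certainty degree of drop
            return x, mu
        x, mu = cloud_drops(ex=0.8, en=0.05, he=0.01)        # "trust around 0.8"
        for xi, mi in zip(x, mu):
            print(f"drop x = {xi:.3f}, membership = {mi:.3f}")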

  11. Developing Econometrics Statistical Theories and Methods with Applications to Economics and Business

    CERN Document Server

    Tong, Hengqing; Huang, Yangxin

    2011-01-01

    Statistical Theories and Methods with Applications to Economics and Business highlights recent advances in statistical theory and methods that benefit econometric practice. It deals with exploratory data analysis, a prerequisite to statistical modelling and part of data mining. It provides recently developed computational tools useful for data mining, analysing the reasons to do data mining and the best techniques to use in a given situation. Provides a detailed description of computer algorithms. Provides recently developed computational tools useful for data mining. Highlights recent advances in

  12. The TR method: the use of slip preference to separate heterogeneous fault-slip data in compressional stress regimes. The surface rupture of the 1999 Chi-Chi Taiwan earthquake as a case study

    Science.gov (United States)

    Tranos, Markos D.

    2013-11-01

    Synthetic contractional fault-slip data have been considered in order to examine the validity of widely applied criteria such as slip preference, slip tendency, kinematic (P and T) axes, transport orientation and strain compatibility in different Andersonian compressional stress regimes. Radial compression (RC), radial-pure compression (RC-PC), pure compression (PC), pure compression-transpression (PC-TRP), and transpression (TRP) are examined with the aid of the Win-Tensor stress inversion software. Furthermore, the validity of the recently proposed graphical TR method, which uses the concept of slip preference for the separation of heterogeneous fault-slip data, is also examined for compressional stress regimes. In these regimes only contractional faults can be activated, and their slip preferences imply the distinction between "real", i.e., RC, RC-PC and PC, and "hybrid", i.e., PC-TRP and TRP, stress regimes. For slip tendency values larger than 0.6, the activated faults dip at angles from 10° to 50°, but in the "hybrid" regimes faults can dip at even higher angles. The application of the TR method is here refined by introducing two controlling parameters, the coefficient of determination (R2) of the Final Tensor Ratio Line (FTRL) and the "normal" or "inverse" distribution of the faults plotted within the Final Tensor Ratio Belt (FTRB). The application of the TR method to fault-slip data of the 1999 Chi-Chi earthquake, Taiwan, allowed the meaningful separation of complex heterogeneous contractional fault-slip data into homogeneous groups. In turn, this allowed the identification of different compressional stress regimes and the determination of local perturbations of the regional, far-field stress generated by the 1999 Chi-Chi earthquake. This includes clear examples of "stress permutation" and "stress partitioning" caused by pre-existing fault structures, such as the N-S trending Chelungpu thrust and the NE

  13. The neuromatrix theory of pain: implications for selected nonpharmacologic methods of pain relief for labor.

    Science.gov (United States)

    Trout, Kimberly K

    2004-01-01

    Women experience the pain of labor differently, with many factors contributing to their overall perception of pain. The neuromatrix theory of pain provides a framework that may explain why selected nonpharmacologic methods of pain relief can be quite effective for the relief of pain for the laboring woman. The concept of a pain "neuromatrix" suggests that perception of pain is simultaneously modulated by multiple influences. The theory was developed by Ronald Melzack and represents an expansion beyond his original "gate theory" of pain, first proposed in 1965 with P. D. Wall. This article reviews several nonpharmacologic methods of pain relief with implications for the practicing clinician. Providing adequate pain relief during labor and birth is an important component of caring for women during labor and birth.

  14. The method of rigged spaces in singular perturbation theory of self-adjoint operators

    CERN Document Server

    Koshmanenko, Volodymyr; Koshmanenko, Nataliia

    2016-01-01

    This monograph presents the newly developed method of rigged Hilbert spaces as a modern approach in singular perturbation theory. A key notion of this approach is the Lax-Berezansky triple of Hilbert spaces embedded one into another, which specifies the well-known Gelfand topological triple. All kinds of singular interactions described by potentials supported on small sets (like the Dirac δ-potentials, fractals, singular measures, high degree super-singular expressions) admit a rigorous treatment only in terms of the equipped spaces and their scales. The main idea of the method is to use singular perturbations to change inner products in the starting rigged space, and the construction of the perturbed operator by the Berezansky canonical isomorphism (which connects the positive and negative spaces from a new rigged triplet). The approach combines three powerful tools of functional analysis based on the Birman-Krein-Vishik theory of self-adjoint extensions of symmetric operators, the theory of singular quadra...

  15. THE STUDY OF THE KINETICS OF DRYING FOOD RAW MATERIAL OF PLANT ORIGIN IN THE ACTIVE HYDRODYNAMIC REGIMES AND DEVELOPMENT OF DRYER ENGINEERING CALCULATION METHODS

    Directory of Open Access Journals (Sweden)

    A. N. Ostrikov

    2015-01-01

    Full Text Available Consumer properties of food raw material are formed during heat treatment. New physical, flavoring and aromatic properties of products of plant origin form during drying, owing to substantial changes in the composition of the raw material that occur as a result of biochemical reactions. In the production of dried and roasted products it is very important to control the parameters that govern the biochemical processes, so as to create a product with high nutritional quality, strong aroma and pleasant taste. We studied the basic kinetics of the drying of food raw material (using artichoke as an example) in a dense interspersed layer, which formed the basis for a rational choice of the drying regime with due consideration of changes in the moisture content of the product. The effect of the hydrodynamic conditions of product movement on layer height and drying intensity is established. Based on this kinetic analysis, multistep drying regimes for artichoke were chosen. Analysis of the drying intensity of artichoke particles in air, an air-steam mixture and superheated steam showed the presence of two stages: a constant-rate stage and a gradually diminishing one. The kinetic laws of artichoke drying in a dense interspersed layer formed the basis of the engineering calculation of a dryer with a transporting body in the form of a "traveling wave". Applying such a dryer to food raw material allows uniform drying of the product through soft, gentle drying regimes that preserve the product particles to the utmost, and improves the quality of the finished product through the use of an interspersed layer that reduces clumping of the product being dried.

  16. World Nonproliferation Regime

    Institute of Scientific and Technical Information of China (English)

    Ouyang Liping; Wu Xingzuo

    2007-01-01

    2006 witnessed an intense struggle between nuclear proliferation and nonproliferation. Iran's nuclear issue and North Korea's nuclear test have cast a deep shadow over the current international nonproliferation regime. The international contest for civil nuclear development became especially fierce as global energy prices went up. Such a situation, to some extent, accelerated the pace of nuclear proliferation. Furthermore, the existing international nonproliferation regime, based upon the Nuclear Nonproliferation Treaty (NPT), was affected by loopholes, and the U.S. failed in its ambition to unite other forces to mend fences. The international community needs to come up with a comprehensive and long-term strategy to meet the demand for an effective future nonproliferation regime in a healthy nuclear order.

  17. A pragmatic basis for judging models and theories in health psychology: the axiomatic method.

    Science.gov (United States)

    Smedslund, G

    2000-03-01

    Psychology and its subfield of health psychology suffer from a lack of standardized terminology and a unified theoretical framework for the prediction and explanation of health behaviour. Hence, it is difficult to establish whether a given theory is logically consistent and to compare different theories. Science involves both empirical and conceptual issues. It is asserted that psychology has overemphasized the former and underemphasized the latter. Empirical psychology needs an explicit, shared conceptual system in order to develop its theories. An example of an axiomatic method (Psycho-Logic; see e.g. J. Smedslund, Psychological Inquiry 1991a; 2: 325-338) is applied to show how the Health Belief Model, the Theory of Planned Behaviour and Social Cognitive Theory all conform to the a priori conditions of acting. One implication is that studies of the predictive power of theories stated as definitional truths only assess auxiliary hypotheses, i.e. the extent to which the measuring instruments are reliable and valid. On the other hand, the introduction of logic into health psychology can facilitate genuine empirical studies by helping to avoid so-called 'pseudoempirical' work (Smedslund, J. In Smith, Harré & Van Langenhove (Eds.) Rethinking psychology, 1995). Systems such as Psycho-Logic can also enhance conceptual integration by using logic to explicate and demonstrate intuitive relations. Implications for practitioners are discussed briefly.

  18. e-Research and Learning Theory: What Do Sequence and Process Mining Methods Contribute?

    Science.gov (United States)

    Reimann, Peter; Markauskaite, Lina; Bannert, Maria

    2014-01-01

    This paper discusses the fundamental question of how data-intensive e-research methods could contribute to the development of learning theories. Using methodological developments in research on self-regulated learning as an example, it argues that current applications of data-driven analytical techniques, such as educational data mining and its…

  19. Connecting Practice, Theory and Method: Supporting Professional Doctoral Students in Developing Conceptual Frameworks

    Science.gov (United States)

    Kumar, Swapna; Antonenko, Pavlo

    2014-01-01

    From an instrumental view, conceptual frameworks that are carefully assembled from existing literature in Educational Technology and related disciplines can help students structure all aspects of inquiry. In this article we detail how the development of a conceptual framework that connects theory, practice and method is scaffolded and facilitated…

  20. Languaging and Visualisation Method for Grammar Teaching: A Conceptual Change Theory Perspective

    Science.gov (United States)

    Rattya, Kaisu

    2013-01-01

    Conceptual grammatical knowledge is an area which causes problems at different levels of education. This article examines the ideas of conceptual change theory as a basis for establishing a new grammar teaching method. The research strategy which I use is educational design research and the research data have been collected from teacher students…

  1. Using Popular Media and a Collaborative Approach to Teaching Grounded Theory Research Methods

    Science.gov (United States)

    Creamer, Elizabeth G.; Ghoston, Michelle R.; Drape, Tiffany; Ruff, Chloe; Mukuni, Joseph

    2012-01-01

    Popular movies were used in a doctoral-level qualitative research methods course as a way to help students learn about how to collect and analyze qualitative observational data in order to develop a grounded theory. The course was designed in such a way that collaboration was central to the generation of knowledge. Using media depictions had the…

  2. Comparison of Item Response Theory and Thurstone Methods of Vertical Scaling.

    Science.gov (United States)

    Burket, George R.; Yen, Wendy M.

    1997-01-01

    Using simulated data modeled after real tests, a Thurstone method (L. Thurstone, 1925 and later) and three-parameter item response theory were compared for vertical scaling. Neither procedure produced artificial scale shrinkage, and both produced modest scale expansion for one simulated condition. (SLD)

  3. A Comparison of Developmental Scales Based on Thurstone Methods and Item Response Theory.

    Science.gov (United States)

    Williams, Valerie S. L.; Pommerich, Mary; Thissen, David

    1998-01-01

    Created a developmental scale for the North Carolina End-of-Grade Mathematics Tests using a subset of identical test forms administered to adjacent grade levels with Thurstone scaling and Item Response Theory methods. Discusses differences in patterns produced. (Author/SLD)

  4. The Pugh Controlled Convergence method: model-based evaluation and implications for design theory

    NARCIS (Netherlands)

    Frey, D.D.; Herder, P.M.; Wijnia, Y.; Saubrahmanian, E.; Katsikopoulos, K.; Clausing, D.P.

    2008-01-01

    This paper evaluates the Pugh Controlled Convergence method and its relationship to recent developments in design theory. Computer executable models are proposed simulating a team of people involved in iterated cycles of evaluation, ideation, and investigation. The models suggest that: (1) convergen

  5. Audience studies 2.0: on the theory, politics and method of qualitative audience research

    NARCIS (Netherlands)

    Hermes, J.

    2009-01-01

    Audience research, this paper suggests, is an excellent field to test the claims of Media Studies 2.0. Moreover, 2.0 claims are a good means to review qualitative audience research itself too. Working from a broad strokes analysis of the theory, politics and method of interpretative research with au

  6. Optimizing the Structure of Tetracyanoplatinate (II): A Comparison of Relativistic Density Functional Theory Methods

    DEFF Research Database (Denmark)

    Dohn, Asmus Ougaard; Møller, Klaus Braagaard; Sauer, Stephan P. A.

    2013-01-01

    The geometry of tetracyanoplatinate(II) (TCP) has been optimized with density functional theory (DFT) calculations in order to compare different computational strategies. Two approximate scalar relativistic methods, i.e. the scalar zeroth-order regular approximation (ZORA) and non-relativistic ca...

  7. Irreducible gauge theories in the framework of the Sp(2)-covariant quantization method

    CERN Document Server

    Lavrov, P M; Moshin, P Yu; Reshetnyak, A A

    1995-01-01

    Irreducible gauge theories in both the Lagrangian and Hamiltonian versions of the Sp(2)-covariant quantization method are studied. Solutions to the generating equations are obtained in the form of expansions in power series of ghost and auxiliary variables up to the third order inclusive.

  8. Method for Hydrocarbon Detection Based on Theory of Multi-phase Medium

    Institute of Scientific and Technical Information of China (English)

    2005-01-01

    A way is developed to detect hydrocarbons in accordance with Biot theory of multi-phase media and laboratory data; it has been applied in several areas. The coincidence rate for hydrocarbon detection is higher than that of other similar techniques. This method shows a good prospect for wide use in hydrocarbon detection at the exploration stage and in reservoir monitoring at the production stage.

  9. Theories and Methods for Research on Informal Learning and Work: Towards Cross-Fertilization

    Science.gov (United States)

    Sawchuk, Peter H.

    2008-01-01

    The topic of informal learning and work has quickly become a staple in contemporary work and adult learning research internationally. The narrow conceptualization of work is briefly challenged before the article turns to a review of the historical origins as well as contemporary theories and methods involved in researching informal learning and…

  10. Two ways to get an Integral Theory: Ken Wilber's method of integration

    Directory of Open Access Journals (Sweden)

    Claus Tirel

    2012-01-01

    Full Text Available Ken Wilber is at times deemed to be one of the most prominent and intellectual integral thinkers of our time. His so-called 'Integral Theory' shows up with no minor claims: it alleges to have succeeded in integrating most of the insights elaborated by contemporary natural sciences such as biology and physics, together with those of the social sciences and humanities, especially with the deep truths found in religion as well as in philosophy from the ancient Greeks until today. Wilber started developing his theory in the late 1970s. Today, he presents his theory as a framework that claims to provide no less than a place for everything that exists, including the various scientific disciplines and approaches. That place is defined first of all by its level of development and its specific perspective, from which it perceives and describes the world. This makes Wilber praise his theory as a downright 'theory of everything', able to provide the long-needed integration of the manifold and fragmented bodies of knowledge in our post-modern world. From his holistic theory Wilber derives practical suggestions for a more integral life, an integral practice which consists of meditation, physical exercises and social commitment. In this article the author examines in particular the method that Wilber applies in making up his theory. The main focus lies on the question of how it realises the integration that became the core concept and main label under which his theory is traded today.

  11. Assessment Method of Heavy NC Machine Reliability Based on Bayes Theory

    Institute of Scientific and Technical Information of China (English)

    张雷; 王太勇; 胡占齐

    2016-01-01

    It is difficult to collect prior information for small-sample machinery products when their reliability is assessed using the Bayes method. In this study, an improved Bayes method with gradient reliability (GR) results as prior information is proposed to solve the problem. A certain type of heavy NC boring and milling machine was considered as the research subject, and its reliability model was established on the basis of its functional and structural characteristics and working principle. According to the stress-strength interference theory and reliability model theory, the GR results of the host machine and its key components were obtained. The GR results were then used as prior information to estimate the probabilistic reliability (PR) of the spindle box, the column and the host machine. Comparative studies demonstrated that the improved Bayes method is applicable to the reliability assessment of heavy NC machine tools.
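
    The flavor of such a Bayesian assessment can be sketched with a conjugate beta-binomial model, in which an auxiliary reliability estimate plays the role of the prior that test data then update. This is only an illustration of the general idea, not the paper's GR-based formulation; the prior reliability, the equivalent sample size and the test counts below are all assumed:

```python
from scipy import stats

# Hypothetical prior: an auxiliary analysis suggests R ~ 0.95; encode it
# as a Beta prior with an assumed equivalent sample size of 20 trials.
prior_r, n_equiv = 0.95, 20.0
a0, b0 = prior_r * n_equiv, (1.0 - prior_r) * n_equiv

# Illustrative test data: 15 intervals observed, 1 failure.
successes, failures = 14, 1
posterior = stats.beta(a0 + successes, b0 + failures)

print(f"posterior mean reliability: {posterior.mean():.3f}")
print(f"90% credible interval: {posterior.interval(0.90)}")
```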

  12. B-Theory of Runge-Kutta methods for stiff Volterra functional differential equations

    Institute of Scientific and Technical Information of China (English)

    LI; Shoufu(李寿佛)

    2003-01-01

    B-stability and B-convergence theories of Runge-Kutta methods for nonlinear stiff Volterra functional differential equations (VFDEs) are established, which provide a unified theoretical foundation for the study of Runge-Kutta methods when applied to nonlinear stiff initial value problems (IVPs) in ordinary differential equations (ODEs), delay differential equations (DDEs), integro-differential equations (IDEs) and VFDEs of other types which appear in practice.

  13. An Automatic Evaluation Method for Conversational Agents Based on Affect-as-Information Theory

    OpenAIRE

    Ptaszynski, Michal; Dybala, Pawel; Rzepka, Rafal; Araki, Kenji

    2010-01-01

    This paper presents a method for automatic evaluation of conversational agents. The method consists of several steps. First, an affect analysis system is used to detect users' general emotional engagement in the conversation and classify their specific emotional states. Next, we interpret this data with the use of reasoning based on Affect-as-Information Theory to obtain information about users' general attitudes to the conversational agent and its performance. The affect analysis system was ...

  14. Statistical methods of discrimination and classification advances in theory and applications

    CERN Document Server

    Choi, Sung C

    1986-01-01

    Statistical Methods of Discrimination and Classification: Advances in Theory and Applications is a collection of papers that tackles the multivariate problems of discriminating and classifying subjects into exclusive population. The book presents 13 papers that cover that advancement in the statistical procedure of discriminating and classifying. The studies in the text primarily focus on various methods of discriminating and classifying variables, such as multiple discriminant analysis in the presence of mixed continuous and categorical data; choice of the smoothing parameter and efficiency o

  15. [A method for the medical image registration based on the statistics samples averaging distribution theory].

    Science.gov (United States)

    Xu, Peng; Yao, Dezhong; Luo, Fen

    2005-08-01

    The registration method based on mutual information is currently a popular technique for medical image registration, but the computation of the mutual information is complex and the registration speed is slow. In engineering practice, a subsampling technique is adopted to accelerate the registration at the cost of registration accuracy. In this paper a new method based on statistical sample theory is developed, which has both higher speed and higher accuracy compared with the normal subsampling method, and the simulation results confirm the validity of the new method.
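
    The histogram-based mutual information with subsampling that the paper takes as its baseline can be sketched as follows (the bin count and subsampling step are illustrative choices, not the authors' statistics-sample-averaging scheme):

```python
import numpy as np

def mutual_information(img_a, img_b, bins=32, step=1):
    """Histogram-based mutual information of two equally shaped images;
    step > 1 subsamples the pixels, trading accuracy for speed."""
    a = img_a.ravel()[::step]
    b = img_b.ravel()[::step]
    joint, _, _ = np.histogram2d(a, b, bins=bins)
    p_ab = joint / joint.sum()
    p_a = p_ab.sum(axis=1, keepdims=True)   # marginal of image A
    p_b = p_ab.sum(axis=0, keepdims=True)   # marginal of image B
    mask = p_ab > 0
    return float((p_ab[mask] * np.log(p_ab[mask] / (p_a @ p_b)[mask])).sum())

rng = np.random.default_rng(0)
img = rng.random((256, 256))
shifted = np.roll(img, 3, axis=0)
# MI is maximal for perfectly registered images, lower for misaligned ones.
print(mutual_information(img, img, step=4), mutual_information(img, shifted, step=4))
```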

  16. Computationally efficient double hybrid density functional theory using dual basis methods

    CERN Document Server

    Byrd, Jason N

    2015-01-01

    We examine the application of the recently developed dual basis methods of Head-Gordon and co-workers to double hybrid density functional computations. Using the B2-PLYP, B2GP-PLYP, DSD-BLYP and DSD-PBEP86 density functionals, we assess the performance of dual basis methods for the calculation of conformational energy changes in C4-C7 alkanes and for the S22 set of noncovalent interaction energies. The dual basis methods, combined with resolution-of-the-identity second-order Møller-Plesset theory, are shown to give results in excellent agreement with conventional methods at a much reduced computational cost.

  17. Limitations in direct and indirect methods for solving optimal control problems in growth theory

    Directory of Open Access Journals (Sweden)

    Ratković Kruna

    2016-01-01

    Full Text Available The focus of this paper is a comprehensive analysis of the different methods and mathematical techniques used for solving optimal control problems (OCP) in growth theory. The most important methods for solving dynamic non-linear infinite-horizon growth models using optimal control theory are presented, together with a critical view of their limitations. The main problem is to determine the optimal rate of growth over time in a way that maximizes the welfare function over an infinite horizon. The welfare function depends on the capital-labor ratio, the state variable, and per-capita consumption, the control variable. Numerical methods for solving OCP are divided into two classes: direct and indirect approaches. The use of the indirect approach is illustrated with the example of the neo-classical growth model. In order to present the indirect and the direct approach simultaneously, two endogenous growth models, one by Romer and another by Lucas and Uzawa, are studied. The advantages and efficiency of these different approaches are discussed. Although indirect methods for solving OCP are still the most widespread in growth theory, it will be seen that direct methods can also be very efficient and help to overcome problems that occur with the indirect approach.
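
    To make the indirect approach concrete: in the standard neo-classical (Ramsey-type) setting, with per-capita consumption c as the control and the capital-labor ratio k as the state, Pontryagin's conditions take the following form (a sketch in generic notation, not the paper's exact models):

```latex
\max_{c(t)} \int_0^\infty e^{-\rho t}\, u(c(t))\, dt
\quad \text{s.t.} \quad \dot{k} = f(k) - c - \delta k, \qquad k(0) = k_0 .

H = u(c) + \lambda \left[ f(k) - c - \delta k \right], \qquad
u'(c) = \lambda, \qquad
\dot{\lambda} = \rho \lambda - \lambda \left[ f'(k) - \delta \right],
\qquad \lim_{t \to \infty} e^{-\rho t} \lambda(t)\, k(t) = 0 .
```

    The indirect approach solves this two-point boundary-value problem, whereas a direct approach discretizes the integral and the constraint and solves the resulting nonlinear program.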

  18. Study on the accuracy of comprehensive evaluating method based on fuzzy set theory

    Institute of Scientific and Technical Information of China (English)

    Xu Weixiang; Liu Xumin

    2005-01-01

    The accuracy of methods for evaluating complex systems is considered. In order to evaluate complex systems accurately, existing evaluation methods are briefly analyzed, and a new comprehensive evaluation method is developed. The new method is an integration of the Delphi approach, the analytic hierarchy process, the grey interconnect degree and fuzzy evaluation (DHGF). Its theoretical foundation is the meta-synthesis methodology leading from qualitative to quantitative analysis. Using the fuzzy set approach, the accuracy of the DHGF evaluation method is estimated by means of concordance of evaluations, redundancy verification, dual-model redundancy, and an analysis of the method's limitations, and a practical example is given. The result shows that using the method to evaluate complex system projects is feasible and credible.

  19. Regimes of justification

    NARCIS (Netherlands)

    Arts, Irma; Buijs, A.E.; Verschoor, G.M.

    2017-01-01

    Legitimacy of environmental management and policies is an important topic in environmental research. Based on the notion of ‘regimes of justification’, we aim to analyse the dynamics in argumentations used to legitimize and de-legitimize Dutch nature conservation practices. Contrary to prior

  20. Control of Chaotic Regimes in Encryption Algorithm Based on Dynamic Chaos

    OpenAIRE

    Sidorenko, V.; Mulyarchik, K. S.

    2013-01-01

    A chaotic regime of a dynamic system is a necessary condition for the cryptographic security of an encryption algorithm. A method for controlling the chaotic dynamic regime is proposed which uses parameters of the nonlinear dynamics regime for the analysis of encrypted data.
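
    Whether a parameter setting actually yields a chaotic regime can be verified through the largest Lyapunov exponent. Below is a small sketch using the logistic map as a stand-in dynamical system (the paper's actual cipher dynamics and control parameters are not specified here):

```python
import math

def logistic_lyapunov(r, x0=0.3, n=100_000, burn_in=1_000):
    """Largest Lyapunov exponent of x -> r*x*(1-x); a positive value
    indicates the chaotic regime required for secure keystreams."""
    x = x0
    for _ in range(burn_in):           # discard the transient
        x = r * x * (1.0 - x)
    acc = 0.0
    for _ in range(n):
        x = r * x * (1.0 - x)
        acc += math.log(abs(r * (1.0 - 2.0 * x)))  # log |f'(x)|
    return acc / n

print(logistic_lyapunov(4.0))  # ~ +0.693 (ln 2): chaotic, usable
print(logistic_lyapunov(3.2))  # negative: periodic regime, not usable
```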

  1. A Novel Evaluation Method for Building Construction Project Based on Integrated Information Entropy with Reliability Theory

    Science.gov (United States)

    Bai, Xiao-ping; Zhang, Xi-wei

    2013-01-01

    Selecting construction schemes for a building engineering project is a complex multiobjective optimization decision process, in which many indexes need to be considered to find the optimum scheme. Aiming at this problem, this paper selects cost, progress, quality, and safety as the four first-order evaluation indexes, uses a quantitative method for the cost index and integrated qualitative and quantitative methodologies for the progress, quality, and safety indexes, and integrates engineering economics, reliability theory, and information entropy theory to present a new evaluation method for building construction projects. Combined with a practical case, this paper also presents the detailed computing process and steps: selecting all order indexes, establishing the index matrix, computing score values of all order indexes, computing the synthesis score, sorting all selected schemes, and making the analysis and decision. The presented method can offer valuable references for risk computation in building construction projects. PMID:23533352
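
    One standard way to turn such an index matrix into objective weights and a synthesis score is the entropy weight method; the sketch below, with made-up scheme scores, illustrates only that step, not the paper's full integrated procedure:

```python
import numpy as np

def entropy_weights(X):
    """Entropy weights for a decision matrix X (rows = schemes, columns =
    benefit-type indexes already normalised so that larger is better)."""
    P = X / X.sum(axis=0)                        # column-wise proportions
    with np.errstate(divide="ignore", invalid="ignore"):
        logs = np.where(P > 0, np.log(P), 0.0)
    e = -(P * logs).sum(axis=0) / np.log(X.shape[0])  # entropy per index
    d = 1.0 - e                                  # degree of diversification
    return d / d.sum()

# Illustrative scores of three schemes on cost, progress, quality, safety.
X = np.array([[0.8, 0.6, 0.9, 0.7],
              [0.6, 0.9, 0.7, 0.8],
              [0.9, 0.7, 0.6, 0.9]])
w = entropy_weights(X)
print("weights:", w.round(3))
print("synthesis scores:", (X @ w).round(3))     # higher = better scheme
```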

  2. Stochastic multi-reference perturbation theory with application to linearized coupled cluster method

    CERN Document Server

    Jeanmairet, Guillaume; Alavi, Ali

    2016-01-01

    In this article we report a stochastic evaluation of the recently proposed LCC multireference perturbation theory [Sharma S., and Alavi A., J. Chem. Phys. 143, 102815, (2015)]. In this method both the zeroth order and first order wavefunctions are sampled stochastically by propagating simultaneously two populations of signed walkers. The sampling of the zeroth order wavefunction follows a set of stochastic processes identical to the one used in the FCIQMC method. To sample the first order wavefunction, the usual FCIQMC algorithm is augmented with a source term that spawns walkers in the sampled first order wavefunction from the zeroth order wavefunction. The second order energy is also computed stochastically but requires no additional overhead outside of the added cost of sampling the first order wavefunction. This fully stochastic method opens up the possibility of simultaneously treating large active spaces to account for static correlation and recovering the dynamical correlation using perturbation theory...

  3. Fault Diagnosis Method Based on Fractal Theory and Its Application in Wind Power Systems

    Institute of Scientific and Technical Information of China (English)

    赵玲; 黄大荣; 宋军

    2012-01-01

    Non-linear dynamic theory has brought a new method for recognizing and predicting complex non-linear dynamic behaviors. The non-linear behavior of vibration signals can be described quantitatively using the fractal dimension. In this paper, a fractal dimension calculation method for discrete signals from fractal theory was applied to extract fractal dimension feature vectors and to classify various fault types. Based on the wavelet packet transform, energy feature vectors were extracted after the vibration signal was decomposed and reconstructed. Then, a wavelet neural network was used to recognize the mechanical faults. Finally, the fault diagnosis of a wind power system was taken as an example to show the method's feasibility.
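
    A common concrete choice for such a fractal-dimension feature is Higuchi's algorithm; the sketch below applies it to a synthetic vibration signal (the signals and the kmax setting are illustrative assumptions, and the paper's wavelet-packet stage is not shown):

```python
import numpy as np

def higuchi_fd(x, kmax=8):
    """Higuchi fractal dimension of a 1-D signal: close to 1 for smooth
    waveforms, approaching 2 for very irregular (fault-like) ones."""
    x = np.asarray(x, dtype=float)
    N = x.size
    lk = []
    for k in range(1, kmax + 1):
        lengths = []
        for m in range(k):                       # k decimated sub-series
            idx = np.arange(m, N, k)
            if idx.size < 2:
                continue
            dist = np.abs(np.diff(x[idx])).sum()
            norm = (N - 1) / ((idx.size - 1) * k)
            lengths.append(dist * norm / k)
        lk.append(np.mean(lengths))
    k_vals = np.arange(1, kmax + 1)
    slope, _ = np.polyfit(np.log(1.0 / k_vals), np.log(lk), 1)
    return slope

rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 4096)
smooth = np.sin(2 * np.pi * 5 * t)                  # healthy-like tone
rough = smooth + 0.5 * rng.standard_normal(t.size)  # broadband fault-like
print(higuchi_fd(smooth), higuchi_fd(rough))        # ~1.0 vs. closer to 2
```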

  4. A reexamination of information theory-based methods for DNA-binding site identification

    Directory of Open Access Journals (Sweden)

    O'Neill Michael C

    2009-02-01

    Full Text Available Background: Searching for transcription factor binding sites in genome sequences is still an open problem in bioinformatics. Despite substantial progress, search methods based on information theory remain a standard in the field, even though the full validity of their underlying assumptions has only been tested in artificial settings. Here we use newly available data on transcription factors from different bacterial genomes to make a more thorough assessment of information theory-based search methods. Results: Our results reveal that conventional benchmarking against artificial sequence data leads frequently to overestimation of search efficiency. In addition, we find that sequence information by itself is often inadequate and therefore must be complemented by other cues, such as curvature, in real genomes. Furthermore, results on skewed genomes show that methods integrating skew information, such as Relative Entropy, are not effective because their assumptions may not hold in real genomes. The evidence suggests that binding sites tend to evolve towards genomic skew, rather than against it, and to maintain their information content through increased conservation. Based on these results, we identify several misconceptions on information theory as applied to binding sites, such as negative entropy, and we propose a revised paradigm to explain the observed results. Conclusion: We conclude that, among information theory-based methods, the most unassuming search methods perform, on average, better than any other alternatives, since heuristic corrections to these methods are prone to fail when working on real data. A reexamination of information content in binding sites reveals that information content is a compound measure of search and binding affinity requirements, a fact that has important repercussions for our understanding of binding site evolution.

  5. A reexamination of information theory-based methods for DNA-binding site identification

    Science.gov (United States)

    Erill, Ivan; O'Neill, Michael C

    2009-01-01

    Background: Searching for transcription factor binding sites in genome sequences is still an open problem in bioinformatics. Despite substantial progress, search methods based on information theory remain a standard in the field, even though the full validity of their underlying assumptions has only been tested in artificial settings. Here we use newly available data on transcription factors from different bacterial genomes to make a more thorough assessment of information theory-based search methods. Results: Our results reveal that conventional benchmarking against artificial sequence data leads frequently to overestimation of search efficiency. In addition, we find that sequence information by itself is often inadequate and therefore must be complemented by other cues, such as curvature, in real genomes. Furthermore, results on skewed genomes show that methods integrating skew information, such as Relative Entropy, are not effective because their assumptions may not hold in real genomes. The evidence suggests that binding sites tend to evolve towards genomic skew, rather than against it, and to maintain their information content through increased conservation. Based on these results, we identify several misconceptions on information theory as applied to binding sites, such as negative entropy, and we propose a revised paradigm to explain the observed results. Conclusion: We conclude that, among information theory-based methods, the most unassuming search methods perform, on average, better than any other alternatives, since heuristic corrections to these methods are prone to fail when working on real data. A reexamination of information content in binding sites reveals that information content is a compound measure of search and binding affinity requirements, a fact that has important repercussions for our understanding of binding site evolution. PMID:19210776

  6. Unification of field theory and maximum entropy methods for learning probability densities.

    Science.gov (United States)

    Kinney, Justin B

    2015-09-01

    The need to estimate smooth probability distributions (a.k.a. probability densities) from finite sampled data is ubiquitous in science. Many approaches to this problem have been described, but none is yet regarded as providing a definitive solution. Maximum entropy estimation and Bayesian field theory are two such approaches. Both have origins in statistical physics, but the relationship between them has remained unclear. Here I unify these two methods by showing that every maximum entropy density estimate can be recovered in the infinite smoothness limit of an appropriate Bayesian field theory. I also show that Bayesian field theory estimation can be performed without imposing any boundary conditions on candidate densities, and that the infinite smoothness limit of these theories recovers the most common types of maximum entropy estimates. Bayesian field theory thus provides a natural test of the maximum entropy null hypothesis and, furthermore, returns an alternative (lower entropy) density estimate when the maximum entropy hypothesis is falsified. The computations necessary for this approach can be performed rapidly for one-dimensional data, and software for doing this is provided.

  7. A simple, universal theory and method for computer plotting of stable equilibrium phase diagrams of a multisystem SFM method

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    A new, simple method is presented to calculate the variable intervals corresponding to the stable univariant curves and to discriminate the stabilities of invariant points. This method and the one reported previously constitute a simple and universal theory for the computer plotting of equilibrium phase diagrams of a multisystem: the sign function matrix (SFM) discrimination method. Its main steps are: determining the stable univariant scheme according to the derivative (or difference) of ΔrGm; grouping the univariant curves by comparison of the mutual relations among them; determining the existing intervals of the variables for the stable curves by comparison of coordinate values of the curves about the invariant point; and determining the stabilities of invariant points by comparison of the relations between the common curves and the invariant points. This method is suitable for any kind of phase diagram of closed or open systems in a phase diagram "space" with 2 or more dimensions.

  8. Cavity Optomechanics in the Quantum Regime

    Science.gov (United States)

    Botter, Thierry Claude Marc

    An exciting scientific goal, common to many fields of research, is the development of ever-larger physical systems operating in the quantum regime. Relevant to this dissertation is the objective of preparing and observing a mechanical object in its motional quantum ground state. In order to sense the object's zero-point motion, the probe itself must have quantum-limited sensitivity. Cavity optomechanics, the interaction between light and a mechanical object inside an optical cavity, provides an elegant means to achieve the quantum regime. In this dissertation, I provide context for the successful cavity-based optical detection of the quantum-ground-state motion of atom-based mechanical elements; these elements, consisting of the collective center-of-mass (CM) motion of ultracold atomic ensembles prepared inside a high-finesse Fabry-Perot cavity, were dispersively probed with an average intracavity photon number as small as 0.1. I first show that cavity optomechanics emerges from the theory of cavity quantum electrodynamics when one takes into account the CM motion of one or many atoms within the cavity, and provide a simple theoretical framework to model optomechanical interactions. I then outline details regarding the apparatus and the experimental methods employed, highlighting certain fundamental aspects of optical detection along the way. Finally, I describe background information, both theoretical and experimental, for two published results on quantum cavity optomechanics that form the backbone of this dissertation. The first publication shows the observation of zero-point collective motion of several thousand atoms and quantum-limited measurement backaction on that observed motion. The second publication demonstrates that an array of near-ground-state collective atomic oscillators can be simultaneously prepared and probed, and that the motional state of one oscillator can be selectively addressed while preserving the near-zero-point motion of

  9. An analytical method for calculating stresses and strains of ATF cladding based on thick walled theory

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Dong Hyun; Kim, Hak Sung [Hanyang University, Seoul (Korea, Republic of); Kim, Hyo Chan; Yang, Yong Sik; In, Wang kee [KAERI, Daejeon (Korea, Republic of)

    2016-05-15

    In this paper, an analytical method based on thick-walled theory has been studied to calculate the stress and strain of ATF cladding. In order to prescribe boundary conditions for the analytical method, two algorithms were employed, the subroutines 'Cladf' and 'Couple' of FRACAS. To evaluate the developed method, an equivalent model using the finite element method was established and the stress components of the method were compared with those of the equivalent FE model. One promising ATF concept is coated cladding, which offers advantages such as a high melting point, a high neutron economy, and a low tritium permeation rate. To evaluate the mechanical behavior and performance of coated cladding, a dedicated model is needed to simulate ATF behavior in the reactor. In particular, a model for simulating stress and strain in coated cladding must be developed because the previous model, FRACAS, is a one-body model. The FRACAS module employs an analytical method based on thin-walled theory. According to thin-walled theory, the radial stress is defined as zero, but this assumption is not suitable for ATF cladding because the radial stress is not negligible in that case. Recently, a structural model for multi-layered ceramic cylinders based on thick-walled theory was developed. Also, FE-based numerical simulations such as BISON have been developed to evaluate fuel performance. An analytical method that calculates the stress components of ATF cladding was developed in this study. Thick-walled theory was used to derive equations for calculating stress and strain. To solve these equations, boundary and loading conditions were obtained by the subroutines 'Cladf' and 'Couple' and applied to the analytical method. To evaluate the developed method, an equivalent FE model was established and its results were compared to those of the analytical model. Based on the
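
    For reference, the classical thick-walled (Lamé) solution on which such a method rests gives, for a cylinder of inner radius a, outer radius b, internal pressure p_i and external pressure p_o (generic textbook form, not the paper's multi-layer formulation):

```latex
\sigma_r(r) = \frac{p_i a^2 - p_o b^2}{b^2 - a^2}
            - \frac{(p_i - p_o)\, a^2 b^2}{(b^2 - a^2)\, r^2},
\qquad
\sigma_\theta(r) = \frac{p_i a^2 - p_o b^2}{b^2 - a^2}
                 + \frac{(p_i - p_o)\, a^2 b^2}{(b^2 - a^2)\, r^2}.
```

    The radial stress vanishes only at a traction-free surface, which is precisely why the thin-walled assumption σ_r = 0 breaks down for coated cladding.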

  10. Reactions to Reading 'Remaining Consistent with Method? An Analysis of Grounded Theory Research in Accounting': A Comment on Gurd

    OpenAIRE

    2008-01-01

    Purpose: This paper is a comment on Gurd's paper published in QRAM 5(2) on the use of grounded theory in interpretive accounting research. Methodology: Like Gurd, we conducted a bibliographic study on prior pieces of research claiming the use of grounded theory. Findings: We found a large diversity of ways of doing grounded theory. There are as many ways as articles. Consistent with the spirit of grounded theory, the field suggested the research questions, methods and verifiability criteria. ...

  11. Knowledge Reduction Based on Divide and Conquer Method in Rough Set Theory

    Directory of Open Access Journals (Sweden)

    Feng Hu

    2012-01-01

    Full Text Available The divide and conquer method is a typical granular computing method using multiple levels of abstraction and granulation. Although some results based on the divide and conquer method have been obtained in rough set theory, systematic methods for knowledge reduction based on it are still absent. In this paper, knowledge reduction approaches based on the divide and conquer method, under the equivalence relation and under the tolerance relation, are presented, respectively. After that, a systematic approach, named the abstract process for knowledge reduction based on the divide and conquer method in rough set theory, is proposed. Based on the presented approach, two algorithms for knowledge reduction, including an algorithm for attribute reduction and an algorithm for attribute value reduction, are presented. Experimental evaluations are carried out to test the methods on UCI data sets and the KDDCUP99 data set. The experimental results illustrate that the proposed approaches can process large data sets efficiently with a good recognition rate, compared with KNN, SVM, C4.5, Naive Bayes, and CART.
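
    The rough-set quantities involved under an equivalence relation can be sketched compactly; the greedy QuickReduct-style loop below is a generic illustration of attribute reduction, not the paper's divide and conquer algorithms:

```python
from collections import defaultdict

def partition(rows, attrs):
    """Equivalence classes of row indices under the indiscernibility
    relation induced by the attribute indices in `attrs`."""
    blocks = defaultdict(set)
    for i, row in enumerate(rows):
        blocks[tuple(row[a] for a in attrs)].add(i)
    return list(blocks.values())

def dependency(rows, cond, dec):
    """Fraction of objects in the positive region: condition classes
    wholly contained in a single decision class."""
    dec_blocks = partition(rows, dec)
    pos = sum(len(b) for b in partition(rows, cond)
              if any(b <= d for d in dec_blocks))
    return pos / len(rows)

def quick_reduct(rows, cond, dec):
    """Greedily grow an attribute subset until it preserves the full
    dependency degree (a QuickReduct-style heuristic)."""
    full, reduct = dependency(rows, cond, dec), []
    while dependency(rows, reduct, dec) < full:
        best = max((a for a in cond if a not in reduct),
                   key=lambda a: dependency(rows, reduct + [a], dec))
        reduct.append(best)
    return reduct

# Toy decision table: columns 0-2 are conditions, column 3 the decision.
table = [(0, 1, 0, 'y'), (0, 1, 1, 'y'), (1, 0, 0, 'n'),
         (1, 1, 0, 'n'), (0, 0, 1, 'y')]
print(quick_reduct(table, cond=[0, 1, 2], dec=[3]))  # -> [0]
```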

  12. Splines and the Galerkin method for solving the integral equations of scattering theory

    Science.gov (United States)

    Brannigan, M.; Eyre, D.

    1983-06-01

    This paper investigates the Galerkin method with cubic B-spline approximants to solve singular integral equations that arise in scattering theory. We stress the relationship between the Galerkin and collocation methods. The error bound for cubic spline approximants has a convergence rate of O(h^4), where h is the mesh spacing. We test the utility of the Galerkin method by solving both two- and three-body problems. We demonstrate, by solving the Amado-Lovelace equation for a system of three identical bosons, that our numerical treatment of the scattering problem is both efficient and accurate for small linear systems.

  13. The FN method for anisotropic scattering in neutron transport theory: the critical slab problem.

    Science.gov (United States)

    Gülecyüz, M. C.; Tezcan, C.

    1996-08-01

    The FN method which has been applied to many physical problems for isotropic and anisotropic scattering in neutron transport theory is extended for problems for extremely anisotropic scattering. This method depends on the Placzek lemma and the use of the infinite medium Green's function. Here the Green's function for extremely anisotropic scattering which was expressed as a combination of the Green's functions for isotropic scattering is used to solve the critical slab problem. It is shown that the criticality condition is in agreement with the one obtained previously by reducing the transport equation for anisotropic scattering to isotropic scattering and solving using the FN method.

  14. Quantum chemistry the development of ab initio methods in molecular electronic structure theory

    CERN Document Server

    Schaefer III, Henry F

    2004-01-01

    This guide is guaranteed to prove of keen interest to the broad spectrum of experimental chemists who use electronic structure theory to assist in the interpretation of their laboratory findings. A list of 150 landmark papers in ab initio molecular electronic structure methods, it features the first page of each paper (which usually encompasses the abstract and introduction). Its primary focus is methodology, rather than the examination of particular chemical problems, and the selected papers either present new and important methods or illustrate the effectiveness of existing methods in predi

  15. Network Theory and Effects of Transcranial Brain Stimulation Methods on the Brain Networks

    Directory of Open Access Journals (Sweden)

    Sema Demirci

    2014-12-01

    Full Text Available In recent years, there has been a shift from classic localizational approaches to new approaches in which the brain is considered as a complex system. Consequently, there has been an increase in the number of studies involving collaborations with other areas of neurology in order to develop methods to understand complex systems. One of the new approaches is graph theory, which has principles based on mathematics and physics. According to this theory, the functional-anatomical connections of the brain are defined as a network. Moreover, transcranial brain stimulation techniques are among the research and treatment methods that have been commonly used in recent years. Changes that occur as a result of applying brain stimulation techniques to physiological and pathological networks help us better understand the normal and abnormal functions of the brain, especially when combined with techniques such as neuroimaging and electroencephalography. This review aims to provide an overview of the applications of graph theory and related parameters, of studies conducted on brain functions in neurology and neuroscience, and of applications of brain stimulation systems in modifying brain network models and treating pathological networks defined on the basis of this theory.

  16. Utilization of Engineering Calculation Method for Transitional Regime Aerodynamics

    Institute of Scientific and Technical Information of China (English)

    赵波; 黄飞; 程晓丽

    2014-01-01

    During the non-ballistic reentry of a Chinese spacecraft return capsule, the residence time in the rarefied regime increases significantly and the effects of rarefied gas on the reentry flight are greatly enhanced, so fast and accurate prediction of low-density aerodynamic characteristics becomes important. The present paper details the development and utilization of several engineering calculation methods, e.g. bridging function methods, for aerodynamic characteristics in the transitional regime, compares the predictive accuracy of three different bridging methods using the Stardust return capsule data, and identifies the most appropriate engineering method for a return capsule in the low-density regime. The results show that the aerodynamic characteristics obtained by different bridging function methods differ remarkably; at the various angles of attack considered, comparison with Direct Simulation Monte Carlo data shows that the local bridging function is the best engineering method for forecasting transitional-regime aerodynamics, especially for the moment coefficients. The local bridging function is therefore the appropriate choice for predicting aerodynamic characteristics in the rarefied regime.
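
    A generic (non-local) bridging function of the kind compared in such studies blends the continuum and free-molecular limits over the Knudsen-number range; the sine-squared form and the regime boundaries below are common engineering assumptions, not values taken from the paper:

```python
import numpy as np

def bridge(kn, c_cont, c_fm, kn_cont=1e-3, kn_fm=10.0):
    """Blend a continuum coefficient c_cont and a free-molecular
    coefficient c_fm across the transitional regime using a
    sine-squared bridging function of the Knudsen number."""
    frac = np.log(kn / kn_cont) / np.log(kn_fm / kn_cont)
    phi = np.sin(0.5 * np.pi * np.clip(frac, 0.0, 1.0)) ** 2
    return c_cont + (c_fm - c_cont) * phi

# Illustrative drag coefficients for a blunt capsule (assumed values).
for kn in (1e-4, 1e-2, 0.1, 1.0, 100.0):
    print(f"Kn = {kn:8.4f}  CD = {float(bridge(kn, 1.0, 2.0)):.3f}")
```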

  17. Spectral difference Lanczos method for efficient time propagation in quantum control theory.

    Science.gov (United States)

    Farnum, John D; Mazziotti, David A

    2004-04-01

    Spectral difference methods represent the real-space Hamiltonian of a quantum system as a banded matrix which possesses the accuracy of the discrete variable representation (DVR) and the efficiency of finite differences. When applied to time-dependent quantum mechanics, spectral differences enhance the efficiency of propagation methods for evolving the Schrodinger equation. We develop a spectral difference Lanczos method which is computationally more economical than the sinc-DVR Lanczos method, the split-operator technique, and even the fast-Fourier-Transform Lanczos method. Application of fast propagation is made to quantum control theory where chirped laser pulses are designed to dissociate both diatomic and polyatomic molecules. The specificity of the chirped laser fields is also tested as a possible method for molecular identification and discrimination.

  18. A sequential fuzzy diagnosis method for rotating machinery using ant colony optimization and possibility theory

    Energy Technology Data Exchange (ETDEWEB)

    Sun, Hao; Ping, Xueliang; Cao, Yi; Lie, Ke [Jiangnan University, Wuxi (China); Chen, Peng [Mie University, Mie (Japan); Wang, Huaqing [Beijing University, Beijing (China)

    2014-04-15

    This study proposes a novel intelligent fault diagnosis method for rotating machinery using ant colony optimization (ACO) and possibility theory. Non-dimensional symptom parameters (NSPs) in the frequency domain are defined to reflect the features of the vibration signals measured in each state. A sensitive evaluation method for selecting good symptom parameters using principal component analysis (PCA) is proposed for detecting and distinguishing faults in rotating machinery. Using the ACO clustering algorithm, the synthesizing symptom parameters (SSPs) for condition diagnosis are obtained. A fuzzy diagnosis method using sequential inference and possibility theory is also proposed, by which the conditions of the machinery can be identified sequentially. Lastly, the proposed method is compared with a conventional neural network (NN) method. Practical examples of diagnosis for V-belt driving equipment used in a centrifugal fan are provided to verify the effectiveness of the proposed method. The results verify that the faults that often occur in V-belt driving equipment, such as a pulley defect state, a belt defect state and a belt looseness state, are effectively identified by the proposed method, while these faults are difficult to detect using a conventional NN.

  19. Matrix product states and variational methods applied to critical quantum field theory

    CERN Document Server

    Milsted, Ashley; Osborne, Tobias J

    2013-01-01

    We study the second-order quantum phase-transition of massive real scalar field theory with a quartic interaction in (1+1) dimensions on an infinite spatial lattice using matrix product states (MPS). We introduce and apply a naive variational conjugate gradient method, based on the time-dependent variational principle (TDVP) for imaginary time, to obtain approximate ground states, using a related ansatz for excitations to calculate the particle and soliton masses and to obtain the spectral density. We also estimate the central charge using finite-entanglement scaling. Our value for the critical parameter agrees well with recent Monte Carlo results, improving on an earlier study which used the related DMRG method, verifying that these techniques are well-suited to studying critical field systems. We also obtain critical exponents that agree, as expected, with those of the transverse Ising model. Additionally, we treat the special case of uniform product states (mean field theory) separately, showing that they ...

  20. A Fault Diagnosis Method of Power Systems Based on Gray System Theory

    Directory of Open Access Journals (Sweden)

    Huang Darong

    2015-01-01

    Full Text Available To provide decision-making suggestions for fault diagnosis in power systems, a new model for identifying faulty components is constructed using Gray system theory. Firstly, the basic concepts of Gray theory are introduced and explained in detail. Then the recognition algorithm for power-supply-interrupted districts and the assignment principle for fault state vectors are described according to the working principles of protective relays (PRs) and circuit breakers (CBs). Secondly, based on the concept of the Gray correlation degree, the fault information explanation degree model is constructed and the method for judging malfunction and refusal-to-operate of PRs and CBs is established. Meanwhile, to achieve the goal of fault diagnosis, a diagnosis procedure that determines which components are faulty is designed for power systems. Finally, simple experiments verify that the proposed method and model are effective and reasonable, and the direction of further research is analyzed and summarized.
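
    The Gray correlation degree at the heart of such a model can be sketched as follows; the alarm patterns and the distinguishing coefficient rho = 0.5 are illustrative assumptions:

```python
import numpy as np

def grey_relational_degree(reference, candidates, rho=0.5):
    """Grey relational degree of each candidate sequence to the
    reference sequence (rho is the distinguishing coefficient)."""
    ref = np.asarray(reference, dtype=float)
    cand = np.atleast_2d(np.asarray(candidates, dtype=float))
    delta = np.abs(cand - ref)                   # absolute differences
    d_min, d_max = delta.min(), delta.max()
    xi = (d_min + rho * d_max) / (delta + rho * d_max)
    return xi.mean(axis=1)                       # average the coefficients

# Observed alarm pattern from PRs and CBs vs. templates of known fault modes.
observed = [1, 0, 1, 1, 0]
templates = [[1, 0, 1, 0, 0],    # fault mode A
             [1, 1, 0, 1, 0],    # fault mode B
             [0, 1, 0, 0, 1]]    # fault mode C
print(grey_relational_degree(observed, templates).round(3))  # pick the max
```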

  1. Geospatial Big Data Handling Theory and Methods: A Review and Research Challenges

    CERN Document Server

    Li, S; Anton, F; Sester, M; Winter, S; Coltekin, A; Pettit, C; Jiang, B; Haworth, J; Stein, A; Cheng, T

    2015-01-01

    Big data has now become a strong focus of global interest that is increasingly attracting the attention of academia, industry, government and other organizations. Big data can be situated in the disciplinary area of traditional geospatial data handling theory and methods. The increasing volume and varying format of collected geospatial big data presents challenges in storing, managing, processing, analyzing, visualizing and verifying the quality of data. This has implications for the quality of decisions made with big data. Consequently, this position paper of the International Society for Photogrammetry and Remote Sensing (ISPRS) Technical Commission II (TC II) revisits the existing geospatial data handling methods and theories to determine if they are still capable of handling emerging geospatial big data. Further, the paper synthesises problems, major issues and challenges with current developments as well as recommending what needs to be developed further in the near future. Keywords: Big data, Geospatial...

  2. Geospatial big data handling theory and methods: A review and research challenges

    Science.gov (United States)

    Li, Songnian; Dragicevic, Suzana; Castro, Francesc Antón; Sester, Monika; Winter, Stephan; Coltekin, Arzu; Pettit, Christopher; Jiang, Bin; Haworth, James; Stein, Alfred; Cheng, Tao

    2016-05-01

    Big data has now become a strong focus of global interest that is increasingly attracting the attention of academia, industry, government and other organizations. Big data can be situated in the disciplinary area of traditional geospatial data handling theory and methods. The increasing volume and varying format of collected geospatial big data presents challenges in storing, managing, processing, analyzing, visualizing and verifying the quality of data. This has implications for the quality of decisions made with big data. Consequently, this position paper of the International Society for Photogrammetry and Remote Sensing (ISPRS) Technical Commission II (TC II) revisits the existing geospatial data handling methods and theories to determine if they are still capable of handling emerging geospatial big data. Further, the paper synthesises problems, major issues and challenges with current developments as well as recommending what needs to be developed further in the near future. Keywords: Big data, Geospatial, Data handling, Analytics, Spatial Modeling, Review

  3. Regimes Of Helium Burning

    CERN Document Server

    Timmes, F X

    2000-01-01

    The burning regimes encountered by laminar deflagrations and ZND detonations propagating through helium-rich compositions in the presence of buoyancy-driven turbulence are analyzed. Particular attention is given to models of X-ray bursts which start with a thermonuclear runaway on the surface of a neutron star, and to the thin-shell helium instability of intermediate-mass stars. In the X-ray burst case, turbulent deflagrations propagating in the lateral or radial directions encounter a transition from the distributed regime to the flamelet regime at a density of 10^8 g cm^{-3}. In the radial direction, the purely laminar deflagration width is larger than the pressure scale height for densities smaller than 10^6 g cm^{-3}. Self-sustained laminar deflagrations travelling in the radial direction cannot exist below this density. Similarly, the planar ZND detonation width becomes larger than the pressure scale height at 10^7 g cm^{-3}, suggesting that steady-state, self-sustained detonations cannot come into exista...

  4. Error estimations of mixed finite element methods for nonlinear problems of shallow shell theory

    Science.gov (United States)

    Karchevsky, M.

    2016-11-01

    The variational formulations of problems of equilibrium of a shallow shell in the framework of the geometrically and physically nonlinear theory, under boundary conditions of different main types, including non-classical ones, are considered. Necessary and sufficient conditions for their solvability are derived. Mixed finite element methods for the approximate solution of these problems, based on the use of second derivatives of the bending as auxiliary variables, are proposed. Estimates of the accuracy of the approximate solutions are established.

  5. Studies on tautomerism in tetrazole: comparison of Hartree Fock and density functional theory quantum chemical methods

    Science.gov (United States)

    Mazurek, A. P.; Sadlej-Sosnowska, N.

    2000-11-01

    A comparison of the ab initio quantum chemical methods Hartree-Fock (HF) and hybrid density functional theory (DFT)/B3LYP for the treatment of tautomeric equilibria, both in the gas phase and in solution, is made. The solvent effects were investigated in terms of the self-consistent reaction field (SCRF). Ionization potentials (IP), calculated by DFT/B3LYP, are also compared with those calculated previously within the HF framework.

  6. Subspace accelerated inexact Newton method for large scale wave functions calculations in Density Functional Theory

    Energy Technology Data Exchange (ETDEWEB)

    Fattebert, J

    2008-07-29

    We describe an iterative algorithm to solve electronic structure problems in Density Functional Theory. The approach is presented as a Subspace Accelerated Inexact Newton (SAIN) solver for the non-linear Kohn-Sham equations. It is related to a class of iterative algorithms known as RMM-DIIS in the electronic structure community. The method is illustrated with examples of real applications using a finite difference discretization and multigrid preconditioning.

  7. Complexity Theory of Beam Halo-Chaos and Its Control Methods With Prospective Applications

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    This article offers an overview and comprehensive survey of the complexity theory of beam halo-chaos and its control methods, with prospective applications. In recent years, there has been growing interest in high-power proton beams from linear accelerators due to their attractive features in possible breakthrough applications, such as production of nuclear materials (e.g., tritium, transforming 232Th to 233U), transmutation of radioactive wastes, production of radioactive isotopes for medical use, heavy ion

  8. Two-Group Theory of the Feynman-Alpha Method for Reactivity Measurement in ADS

    Directory of Open Access Journals (Sweden)

    Lénárd Pál

    2012-01-01

    Full Text Available The theory of the Feynman-alpha method, which is used to determine the subcritical reactivity of systems driven by an external source such as an ADS, is extended to two energy groups with the inclusion of delayed neutrons. This paper presents a full derivation of the variance-to-mean formula with the inclusion of two energy groups and delayed neutrons. The results are illustrated quantitatively and discussed in physical terms.
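
    For orientation, the familiar one-group, prompt-neutron form of the variance-to-mean (Feynman-Y) relation that the two-group derivation generalizes reads (in generic notation, not the paper's two-group result):

```latex
\frac{\sigma_Z^2(T)}{\bar{Z}(T)} = 1 + Y(T), \qquad
Y(T) = Y_\infty \left( 1 - \frac{1 - e^{-\alpha T}}{\alpha T} \right),
```

    where Z(T) is the detector count in a gate of length T, α is the prompt-neutron decay constant extracted by fitting, and Y_∞ depends on the detector efficiency and the fission multiplicity statistics.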

  9. Heavy dense QCD from a 3d effective lattice theory

    CERN Document Server

    Glesaaen, Jonas; Philipsen, Owe

    2015-01-01

    The cold and dense regime of the QCD phase diagram is to this day inaccessible to first-principle lattice calculations owing to the sign problem. Here we present progress of an ongoing effort to probe this particularly difficult regime utilising a dimensionally reduced effective lattice theory with a significantly reduced sign problem. The effective theory is derived by a combined character and hopping expansion and is valid for heavy quarks near the continuum. We show an extension of the effective theory to order $u^5\\kappa^8$ in the cold regime. A linked cluster expansion is applied to the effective theory, resulting in a consistent mechanism for handling the effective theory fully analytically. The new results are consistent with those from simulations, confirming the viability of analytic methods. Finally, we resum the analytical result, which doubles the convergence region of the expansion.

  10. Introduction of the Scientific Method and Atomic Theory to Liberal Arts Chemistry Students

    Science.gov (United States)

    Hohman, James R.

    1998-12-01

    Liberal arts chemistry students often struggle with the application of the scientific method to problem solving in the sciences, in part because of insufficient concrete examples. These same students also tend to have significant difficulty in appreciating the value of weight ratios in chemistry, particularly in the establishment of the laws that led to the atomic theory. A simple classroom exercise utilizing net weights of envelopes containing varying numbers of BB's or paper clips can be used to illustrate and differentiate the steps of the scientific method: observation (with corrections) to get scientific facts, induction to arrive at laws, tentative explanation by hypothesis, experimentation to test the hypothesis, and final establishment of a scientific theory. Since the students participating in this exercise arrive at each of these steps on their own, there is greater appreciation and more effective internalization of the scientific method on their part. The exercise depends upon the discrete nature of the BB's or paper clips (i.e., on the fact that they are individual "particles" of similar properties, and so are useful analogies to atoms). Finally, since weight ratios are used to solve the problem posed by this exercise, it can be used to lead directly into the weight ratios summarized by the laws of constant composition (fixed proportions) and multiple proportions, which in turn lead to Dalton's atomic theory.

  11. An information theory criteria based blind method for enumerating active users in DS-CDMA system

    Science.gov (United States)

    Samsami Khodadad, Farid; Abed Hodtani, Ghosheh

    2014-11-01

    In this paper, a new blind algorithm for active user enumeration in asynchronous direct sequence code division multiple access (DS-CDMA) in a multipath channel scenario is proposed. The proposed method is based on information theory criteria. There are two main categories of information criteria widely used in active user enumeration: the Akaike Information Criterion (AIC) and the Minimum Description Length (MDL) criterion. The main difference between these two criteria lies in their penalty functions. Due to this difference, MDL is a consistent enumerator with better performance at higher signal-to-noise ratios (SNRs), whereas AIC is preferred at lower SNRs. Subsequently, we propose an SNR-adaptive method based on subspace analysis and a training genetic algorithm to attain the performance of both. Moreover, our method uses only a single antenna, unlike previous methods, which decreases hardware complexity. Simulation results show that the proposed method is capable of estimating the number of active users without any prior knowledge, and demonstrate the efficiency of the method.
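
    The AIC and MDL criteria referred to here are commonly written in the Wax-Kailath form. The sketch below (an illustration of that standard form, not the authors' single-antenna algorithm) enumerates sources from the eigenvalues of a sample covariance matrix; the array size, snapshot count and noise level are invented.

```python
import numpy as np

def enumerate_sources(snapshots):
    """Wax-Kailath AIC/MDL estimates of the number of sources.
    snapshots: (p, N) array holding N snapshots from p sensors."""
    p, N = snapshots.shape
    R = snapshots @ snapshots.conj().T / N             # sample covariance
    lam = np.sort(np.linalg.eigvalsh(R))[::-1]         # descending eigenvalues
    aic, mdl = [], []
    for k in range(p):
        tail = lam[k:]                                 # presumed noise eigenvalues
        ratio = tail.mean() / np.exp(np.log(tail).mean())  # arithmetic/geometric mean
        L = N * (p - k) * np.log(ratio)                # log-likelihood term
        aic.append(2 * L + 2 * k * (2 * p - k))        # AIC penalty: 2k(2p - k)
        mdl.append(L + 0.5 * k * (2 * p - k) * np.log(N))  # MDL penalty grows with N
    return int(np.argmin(aic)), int(np.argmin(mdl))

# two sources in noise on a 6-sensor array, 500 snapshots
rng = np.random.default_rng(1)
A = rng.standard_normal((6, 2))
X = A @ rng.standard_normal((2, 500)) + 0.3 * rng.standard_normal((6, 500))
print(enumerate_sources(X))   # typically (2, 2)
```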

  12. Numerical methods for one-dimensional reaction-diffusion equations arising in combustion theory

    Science.gov (United States)

    Ramos, J. I.

    1987-01-01

    A review of numerical methods for one-dimensional reaction-diffusion equations arising in combustion theory is presented. The methods reviewed include explicit, implicit, quasi-linearization, time linearization, operator-splitting, random walk and finite-element techniques and methods of lines. Adaptive and nonadaptive procedures are also reviewed. These techniques are applied first to solve two model problems which have exact traveling wave solutions with which the numerical results can be compared. This comparison is performed in terms of both the wave profile and computed wave speed. It is shown that the computed wave speed is not a good indicator of the accuracy of a particular method. A fourth-order time-linearized, Hermitian compact operator technique is found to be the most accurate method for a variety of time and space sizes.
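
    As a concrete instance of the simplest technique in this review, an explicit finite-difference scheme applied to the Fisher-KPP model problem u_t = u_xx + u(1 - u), whose traveling wave has asymptotic speed 2, reproduces the wave-speed check discussed above; this is a generic sketch, not code from the review.

```python
import numpy as np

# Explicit finite differences for Fisher-KPP: u_t = u_xx + u(1 - u).
# The computed front speed approaches the asymptotic value 2 from below,
# a standard accuracy check for reaction-diffusion solvers.
nx, dx = 1001, 0.1
dt = 0.4 * dx**2            # within the diffusive stability limit dt <= dx^2/2
x = np.linspace(0.0, (nx - 1) * dx, nx)
u = 0.5 * (1.0 - np.tanh((x - 10.0) / 2.0))     # smooth front centered at x = 10

def front_position(u):
    return x[np.argmin(np.abs(u - 0.5))]

t, t_end, pos0 = 0.0, 20.0, front_position(u)
while t < t_end:
    lap = np.zeros_like(u)
    lap[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2   # ends held fixed
    u += dt * (lap + u * (1.0 - u))
    t += dt

print("computed wave speed ~", (front_position(u) - pos0) / t_end)  # close to 2
```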

  13. Factorization Method for Asset Pricing in Regime Switching Problems

    Institute of Scientific and Technical Information of China (English)

    周璟华; 何春雄

    2011-01-01

    If a payoff stream depends on both a fundamental and possible interventions of an authority, the asset pricing problem is usually characterized by regime switching models. The fundamental is modeled as geometric Brownian motion. By Wiener-Hopf factorization of an EPV (expected present value) operator, concrete steps are provided to calculate the EPV of a perpetual payoff stream. Furthermore, closed-form expressions of asset prices with surge and decline limits are given.
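
    As a brute-force cross-check of the quantity the EPV operator computes (not of the Wiener-Hopf factorization itself), the expected present value of a perpetual payoff stream g(X_t) = X_t under a two-regime geometric Brownian motion can be estimated by Monte Carlo; all parameter values below are invented, and convergence requires the discount rate to exceed the drift in both regimes.

```python
import numpy as np

# Monte Carlo estimate of EPV = E[ integral_0^inf e^(-r t) X_t dt ] where X
# follows a GBM whose drift/volatility switch with a two-state Markov chain.
rng = np.random.default_rng(2)
mu, sigma = np.array([0.02, -0.01]), np.array([0.20, 0.35])  # per-regime drift/vol
switch_rate = np.array([0.5, 1.0])        # intensity of leaving regime 0 / regime 1
r, dt, T, n_paths = 0.06, 0.01, 80.0, 10_000   # horizon T truncates the integral

x = np.ones(n_paths)                      # X_0 = 1
state = np.zeros(n_paths, dtype=int)      # all paths start in regime 0
epv = np.zeros(n_paths)
for step in range(int(T / dt)):
    epv += np.exp(-r * step * dt) * x * dt              # accumulate discounted payoff
    z = rng.standard_normal(n_paths)
    m, s = mu[state], sigma[state]
    x *= np.exp((m - 0.5 * s**2) * dt + s * np.sqrt(dt) * z)
    flip = rng.random(n_paths) < switch_rate[state] * dt
    state = np.where(flip, 1 - state, state)

print("EPV estimate:", epv.mean())        # single-regime benchmark: 1/(r - mu)
```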

  14. Scale-adaptive tensor algebra for local many-body methods of electronic structure theory

    Energy Technology Data Exchange (ETDEWEB)

    Liakh, Dmitry I [ORNL

    2014-01-01

    While the formalism of multiresolution analysis (MRA), based on wavelets and adaptive integral representations of operators, is actively progressing in electronic structure theory (mostly on the independent-particle level and, recently, second-order perturbation theory), the concepts of multiresolution and adaptivity can also be utilized within the traditional formulation of correlated (many-particle) theory which is based on second quantization and the corresponding (generally nonorthogonal) tensor algebra. In this paper, we present a formalism called scale-adaptive tensor algebra (SATA) which exploits an adaptive representation of tensors of many-body operators via the local adjustment of the basis set quality. Given a series of locally supported fragment bases of a progressively lower quality, we formulate the explicit rules for tensor algebra operations dealing with adaptively resolved tensor operands. The formalism suggested is expected to enhance the applicability and reliability of local correlated many-body methods of electronic structure theory, especially those directly based on atomic orbitals (or any other localized basis functions).

  15. Regime change thresholds in flute-like instruments: influence of the mouth pressure dynamics

    CERN Document Server

    Terrien, Soizic; Vergez, Christophe; Fabre, Benoît

    2014-01-01

    Since they correspond to a jump from a given note to another one, the mouth pressure thresholds leading to regime changes are particularly important quantities in flute-like instruments. In this paper, a comparison of such thresholds between an artificial mouth, an experienced flutist and a non player is provided. It highlights the ability of the experienced player to considerably shift regime change thresholds, and thus to enlarge their control in terms of nuances and spectrum. Based on recent works on other wind instruments and on the theory of dynamic bifurcations, the hypothesis is tested experimentally and numerically that the dynamics of the blowing pressure influences regime change thresholds. The results highlight the strong influence of this parameter on thresholds, suggesting its wide use by experienced musicians. Starting from these observations and from an analysis of a physical model of flute-like instruments, involving numerical continuation methods and Floquet stability analysis, a phenomenolo...

  16. Considerations in Grounded Theory Research Method: A reflection on the lessons learned

    OpenAIRE

    Mavetera, Nehemiah; Kroeze, Jan H

    2010-01-01

    This paper is a discussion of the practical issues faced by Information Systems (IS) professionals when they employ the Grounded Theory Method (GTM) in Information Systems research. Various strands of GTM are in use, all of which are derivatives of the grand GTM proposed by Barney G. Glaser and Anselm L. Strauss in 1967. Starting with the dicta proposed by these two authors in 1967 on the use of GTM, the paper explores several variants of the method that have surfaced and are currently in use....

  17. Research on knowledge acquisition method about the IF/THEN rules based on rough set theory

    Institute of Scientific and Technical Information of China (English)

    Liu Daohua; Yuan Sicong; Zhang Xiaolong; Wang Fazhan

    2008-01-01

    The basic principles of IF/THEN rules in rough set theory are analyzed first, and then the automatic process of knowledge acquisition is given. The numerical data are qualitatively processed through the classification of membership functions and membership degrees to obtain the normative decision table. The regular method of relations and the reduction algorithm of attributes are studied. The reduced relations are represented by the multi-represent-value method, and its algorithm is given. The whole knowledge acquisition process has a high degree of automation, and the extracted knowledge is true and reliable.

  18. Study on Interaction Between Two Parallel Plates with Iteration Method in Functional Theory

    Institute of Scientific and Technical Information of China (English)

    Ming Zhou; Zheng-wu Wang; Zu-min Xu

    2008-01-01

    By introducing the functional theory into the calculation of the electric double layer (EDL) interaction, the interaction energies of two parallel plates were calculated at low, moderate, and high potentials, respectively. Compared with the results of two existing methods, the Debye-Hückel and Langmuir methods, which are applicable just at the critical potentials and perform poorly at intermediate potentials, the functional approach not only has a much simpler expression for the EDL interaction energy, but also performs well over the entire range of potentials.

  19. Solution of the radiative transfer theory problems by the Monte Carlo method

    Science.gov (United States)

    Marchuk, G. I.; Mikhailov, G. A.

    1974-01-01

    The Monte Carlo method is used for two types of problems. First, there are interpretation problems of optical observations from meteorological satellites in the short-wave part of the spectrum. The sphericity of the atmosphere, the propagation function, and light polarization are considered. Second, there are problems dealing with the theory of the spreading of narrow light beams. Direct simulation of light scattering and the mathematical form of the medium radiation model representation are discussed, and general integral transfer equations are calculated. The dependent tests method, derivative estimates, and the solution to the inverse problem are also considered.
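
    The structure of such direct simulation is easy to exhibit on a toy problem: isotropic scattering in a plane-parallel slab, with invented optical depth and single-scattering albedo.

```python
import numpy as np

# Toy Monte Carlo transport: photons enter a slab of optical depth TAU with
# single-scattering albedo OMEGA; tally transmitted/reflected/absorbed fractions.
rng = np.random.default_rng(3)
TAU, OMEGA, N = 2.0, 0.9, 100_000
transmitted = reflected = absorbed = 0

for _ in range(N):
    z, mu = 0.0, 1.0                        # optical depth coordinate, direction cosine
    while True:
        z += mu * -np.log(rng.random())     # exponentially distributed free path
        if z >= TAU:
            transmitted += 1; break
        if z < 0.0:
            reflected += 1; break
        if rng.random() > OMEGA:            # absorption event
            absorbed += 1; break
        mu = 2.0 * rng.random() - 1.0       # isotropic re-scattering

print(f"T={transmitted/N:.3f}  R={reflected/N:.3f}  A={absorbed/N:.3f}")
```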

  20. The Adapted Ordering Method for the Representation Theory of Lie Algebras and Superalgebras and their Generalizations

    CERN Document Server

    Gato-Rivera, Beatriz

    2008-01-01

    In 1998 the Adapted Ordering Method was developed for the study of the representation theory of the superconformal algebras in two dimensions. It allows one to determine the maximal dimension for a given type of space of singular vectors, to identify all singular vectors by only a few coefficients, to spot subsingular vectors, and to set the basis for constructing embedding diagrams. In this talk I introduce the present version of the Adapted Ordering Method, published in J. Phys. A: Math. Theor. 41 (2008) 045201, which can be applied to general Lie algebras and superalgebras and their generalizations, provided they can be triangulated.

  1. On the accuracy of density functional theory and wave function methods for calculating vertical ionization energies

    Energy Technology Data Exchange (ETDEWEB)

    McKechnie, Scott [Cavendish Laboratory, Department of Physics, University of Cambridge, J J Thomson Avenue, Cambridge CB3 0HE (United Kingdom); Booth, George H. [Theory and Simulation of Condensed Matter, King’s College London, The Strand, London WC2R 2LS (United Kingdom); Cohen, Aron J. [Department of Chemistry, University of Cambridge, Lensfield Road, Cambridge CB2 1EW (United Kingdom); Cole, Jacqueline M., E-mail: jmc61@cam.ac.uk [Cavendish Laboratory, Department of Physics, University of Cambridge, J J Thomson Avenue, Cambridge CB3 0HE (United Kingdom); Argonne National Laboratory, 9700 S Cass Avenue, Argonne, Illinois 60439 (United States)

    2015-05-21

    The best practice in computational methods for determining vertical ionization energies (VIEs) is assessed, via reference to experimentally determined VIEs that are corroborated by highly accurate coupled-cluster calculations. These reference values are used to benchmark the performance of density functional theory (DFT) and wave function methods: Hartree-Fock theory, second-order Møller-Plesset perturbation theory, and Electron Propagator Theory (EPT). The core test set consists of 147 small molecules. An extended set of six larger molecules, from benzene to hexacene, is also considered to investigate the dependence of the results on molecule size. The closest agreement with experiment is found for ionization energies obtained from total energy difference calculations. In particular, DFT calculations using exchange-correlation functionals with either a large amount of exact exchange or long-range correction perform best. The results from these functionals are also the least sensitive to an increase in molecule size. In general, ionization energies calculated directly from the orbital energies of the neutral species are less accurate and more sensitive to an increase in molecule size. For the single-calculation approach, the EPT calculations are in closest agreement for both sets of molecules. For the orbital energies from DFT functionals, only those with long-range correction give quantitative agreement, with dramatic failures for all other functionals considered. The results offer a practical hierarchy of approximations for the calculation of vertical ionization energies. In addition, the experimental and computational reference values can be used as a standardized set of benchmarks, against which other approximate methods can be compared.

  2. CSAU methodology and results for an ATWS event in a BWR using information theory methods

    Energy Technology Data Exchange (ETDEWEB)

    Munoz-Cobo, J.L., E-mail: jlcobos@iqn.upv.es [Universitat Politècnica de València, Thermal-Hydraulics and Nuclear Engineering Group (TIN), Institute for Energy Engineering (IEE), Valencia (Spain); Escrivá, A., E-mail: aescriva@iqn.upv.es [Universitat Politècnica de València, Thermal-Hydraulics and Nuclear Engineering Group (TIN), Institute for Energy Engineering (IEE), Valencia (Spain); Mendizabal, R., E-mail: rmsanz@csn.es [Consejo de Seguridad Nuclear, 28040 Madrid (Spain); Pelayo, F., E-mail: fpl@csn.es [Consejo de Seguridad Nuclear, 28040 Madrid (Spain); Melara, J., E-mail: jls@iberdrola.es [IBERINCO, IBERDROLA Ingeniería y Construcción, Madrid (Spain)

    2014-10-15

    Highlights: • We apply the CSAU methodology to an ATWS in a BWR using information theory methods. • We show how to perform the selection of the most influential inputs on the critical safety parameter. • We apply the maximum entropy principle to get the input parameter distribution. • We examine the maximum relative entropy principle to update the input parameter PDF. • We quantify the uncertainty of the critical safety parameter using order statistics and information theory. - Abstract: This paper shows an application of the CSAU methodology to an ATWS in a BWR reactor, when the temperature of the suppression pool is taken as the critical safety parameter. The method combines the CSAU methodology with recent techniques from information theory. In this paper we use auxiliary tools to help in the evaluation and improvement of the parameter distributions that enter in elements II and III of CSAU-based methodologies. These tools have been implemented in two FORTRAN programs: GEDIPA (Generation of the Parameter Distribution) and UNTHERCO (Uncertainty in Thermal Hydraulic Codes). The first analyzes the data available on a given parameter or parameters with the goal of characterizing the probability distribution function of these parameters. The second applies information theory methods, such as the maximum entropy principle (MEP) and the maximum relative entropy principle (MREP), to build conservative distribution functions for the parameters from the available data. The distribution function of a given parameter can also be updated using MREP when new information is provided. UNTHERCO performs the Monte Carlo sampling for a given set of parameters when the distribution function of these parameters is known in advance. If the distribution of a parameter is unknown, the MEP is applied to deduce the distribution function for this parameter.
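
    The maximum entropy principle that UNTHERCO relies on can be illustrated on the textbook "Brandeis dice" problem (this is the generic MEP mechanics, not the code's actual implementation): among all distributions on {1,...,6} with a prescribed mean, MEP selects p_i proportional to exp(-lambda*i), with lambda fixed by the constraint.

```python
import numpy as np
from scipy.optimize import brentq

# Maximum-entropy distribution on a die {1,...,6} given only its mean:
# p_i ~ exp(-lam * i), with lam chosen so the mean constraint is satisfied.
vals = np.arange(1, 7)

def maxent_die(target_mean):
    def mean_gap(lam):
        w = np.exp(-lam * (vals - 3.5))     # shifted exponent for numerical stability
        return (w * vals).sum() / w.sum() - target_mean
    lam = brentq(mean_gap, -20.0, 20.0)     # bracket contains the unique root
    p = np.exp(-lam * (vals - 3.5))
    return p / p.sum()

print(np.round(maxent_die(3.5), 4))   # uniform: the constraint adds no information
print(np.round(maxent_die(4.5), 4))   # tilted toward the high faces
```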

  3. On the accuracy of density functional theory and wave function methods for calculating vertical ionization energies

    Science.gov (United States)

    McKechnie, Scott; Booth, George H.; Cohen, Aron J.; Cole, Jacqueline M.

    2015-05-01

    The best practice in computational methods for determining vertical ionization energies (VIEs) is assessed, via reference to experimentally determined VIEs that are corroborated by highly accurate coupled-cluster calculations. These reference values are used to benchmark the performance of density functional theory (DFT) and wave function methods: Hartree-Fock theory, second-order Møller-Plesset perturbation theory, and Electron Propagator Theory (EPT). The core test set consists of 147 small molecules. An extended set of six larger molecules, from benzene to hexacene, is also considered to investigate the dependence of the results on molecule size. The closest agreement with experiment is found for ionization energies obtained from total energy difference calculations. In particular, DFT calculations using exchange-correlation functionals with either a large amount of exact exchange or long-range correction perform best. The results from these functionals are also the least sensitive to an increase in molecule size. In general, ionization energies calculated directly from the orbital energies of the neutral species are less accurate and more sensitive to an increase in molecule size. For the single-calculation approach, the EPT calculations are in closest agreement for both sets of molecules. For the orbital energies from DFT functionals, only those with long-range correction give quantitative agreement, with dramatic failures for all other functionals considered. The results offer a practical hierarchy of approximations for the calculation of vertical ionization energies. In addition, the experimental and computational reference values can be used as a standardized set of benchmarks, against which other approximate methods can be compared.

  4. Theories, Methods and Numerical Technology of Sheet Metal Cold and Hot Forming Analysis, Simulation and Engineering Applications

    CERN Document Server

    Hu, Ping; Liu, Li-zhong; Zhu, Yi-guo

    2013-01-01

    Over the last 15 years, the application of innovative steel concepts in the automotive industry has increased steadily. Numerical simulation technology of hot forming of high-strength steel allows engineers to modify the formability of hot forming steel metals and to optimize die design schemes. Theories, Methods and Numerical Technology of Sheet Metal Cold and Hot Forming focuses on hot and cold forming theories, numerical methods, relative simulation and experiment techniques for high-strength steel forming and die design in the automobile industry. Theories, Methods and Numerical Technology of Sheet Metal Cold and Hot Forming introduces the general theories of cold forming, then expands upon advanced hot forming theories and simulation methods, including: • the forming process, • constitutive equations, • hot boundary constraint treatment, and • hot forming equipment and experiments. Various calculation methods of cold and hot forming, based on the authors’ experience in commercial CAE software f...

  5. A Feature Extraction Method Based on Information Theory for Fault Diagnosis of Reciprocating Machinery

    Science.gov (United States)

    Wang, Huaqing; Chen, Peng

    2009-01-01

    This paper proposes a feature extraction method based on information theory for fault diagnosis of reciprocating machinery. A method to obtain symptom parameter waves is defined in the time domain using the vibration signals, and an information wave is presented based on information theory, using the symptom parameter waves. A new way to determine the difference spectrum of envelope information waves is also derived, by which the feature spectrum can be extracted clearly and machine faults can be effectively differentiated. This paper also compares the proposed method with the conventional Hilbert-transform-based envelope detection and with a wavelet analysis technique. Practical examples of diagnosis for a rolling element bearing used in a diesel engine are provided to verify the effectiveness of the proposed method. The verification results show that the bearing faults that typically occur in rolling element bearings, such as outer-race, inner-race, and roller defects, can be effectively identified by the proposed method, while these bearing faults are difficult to detect using either of the other techniques it was compared to. PMID:22574021
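
    The conventional Hilbert-transform envelope analysis that the paper uses as a baseline is short to write down; the sketch below builds a synthetic bearing-like signal (all frequencies and amplitudes invented) and should show its largest low-frequency envelope peak at the fault repetition rate.

```python
import numpy as np
from scipy.signal import hilbert

# Envelope analysis baseline: repeated impacts at a fault rate excite a
# high-frequency resonance; the envelope spectrum exposes the fault rate.
fs, f_res, f_fault = 20_000, 3_000, 57.0        # Hz (synthetic values)
t = np.arange(0, 2.0, 1.0 / fs)
sig = np.zeros_like(t)
for t0 in np.arange(0.0, 2.0, 1.0 / f_fault):   # train of decaying impacts
    sig += (t >= t0) * np.exp(-800.0 * np.clip(t - t0, 0.0, None)) * \
           np.sin(2 * np.pi * f_res * t)
sig += 0.5 * np.random.default_rng(4).standard_normal(t.size)

envelope = np.abs(hilbert(sig))                 # magnitude of the analytic signal
spec = np.abs(np.fft.rfft(envelope - envelope.mean()))
freqs = np.fft.rfftfreq(t.size, 1.0 / fs)
peak = freqs[np.argmax(spec[freqs < 300.0])]    # low-frequency region only
print(f"dominant envelope frequency: {peak:.1f} Hz (fault rate {f_fault} Hz)")
```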

  6. A Feature Extraction Method Based on Information Theory for Fault Diagnosis of Reciprocating Machinery

    Directory of Open Access Journals (Sweden)

    Huaqing Wang

    2009-04-01

    This paper proposes a feature extraction method based on information theory for fault diagnosis of reciprocating machinery. A method to obtain symptom parameter waves is defined in the time domain using the vibration signals, and an information wave is presented based on information theory, using the symptom parameter waves. A new way to determine the difference spectrum of envelope information waves is also derived, by which the feature spectrum can be extracted clearly and machine faults can be effectively differentiated. This paper also compares the proposed method with the conventional Hilbert-transform-based envelope detection and with a wavelet analysis technique. Practical examples of diagnosis for a rolling element bearing used in a diesel engine are provided to verify the effectiveness of the proposed method. The verification results show that the bearing faults that typically occur in rolling element bearings, such as outer-race, inner-race, and roller defects, can be effectively identified by the proposed method, while these bearing faults are difficult to detect using either of the other techniques it was compared to.

  7. Chapter 29: Unproved and controversial methods and theories in allergy-immunology.

    Science.gov (United States)

    Shah, Rachna; Greenberger, Paul A

    2012-01-01

    Unproved methods and controversial theories in the diagnosis and management of allergy-immunology are those that lack scientific credibility. Some definitions are provided for perspective because in chronic medical conditions, frequently, nonscientifically based treatments are developed that can have a very positive psychological effect on the patients in the absence of objective physical benefit. Standard practice can be described as "the methods of diagnosis and treatment used by reputable physicians in a particular subspecialty or primary care practice" with the understanding that diagnosis and treatment options are consistent with established mechanisms of conditions or diseases.(3) Conventional medicine (Western or allopathic medicine) is that which is practiced by the majority of MDs, DOs, psychologists, RNs, and physical therapists. Complementary medicine combines the practice of conventional medicine with complementary and alternative medicine, such as using acupuncture for pain relief in addition to opioids. Alternative medicine implies use of complementary and alternative practices in place of conventional medicine. Unproved and controversial methods and theories do not have supporting data, validation, and sufficient scientific scrutiny, and they should not be used in the practice of allergy-immunology. Some examples of unproven theories about allergic immunologic conditions include allergic toxemia, idiopathic environmental intolerance, association with childhood vaccinations, and adrenal fatigue. Unconventional (unproved) diagnostic methods for allergic-immunologic conditions include cytotoxic tests, provocation-neutralization, electrodermal diagnosis, applied kinesiology assessments, and serum IgG or IgG(4) testing. Unproven treatments and intervention methods for allergic-immunologic conditions include acupuncture, homeopathy ("likes cure likes"), halotherapy, and autologous urine injections.

  8. Efficient iterative method for solving the Dirac-Kohn-Sham density functional theory

    Energy Technology Data Exchange (ETDEWEB)

    Lin, Lin; Shao, Sihong; E, Weinan

    2012-11-06

    We present for the first time an efficient iterative method to directly solve the four-component Dirac-Kohn-Sham (DKS) density functional theory. Due to the existence of the negative energy continuum in the DKS operator, the existing iterative techniques for solving the Kohn-Sham systems cannot be efficiently applied to solve the DKS systems. The key component of our method is a novel filtering step (F) which acts as a preconditioner in the framework of the locally optimal block preconditioned conjugate gradient (LOBPCG) method. The resulting method, dubbed the LOBPCG-F method, is able to compute the desired eigenvalues and eigenvectors in the positive energy band without computing any state in the negative energy band. The LOBPCG-F method introduces mild extra cost compared to the standard LOBPCG method and can be easily implemented. We demonstrate our method in the pseudopotential framework with a planewave basis set which naturally satisfies the kinetic balance prescription. Numerical results for Pt$_{2}$, Au$_{2}$, TlF, and Bi$_{2}$Se$_{3}$ indicate that the LOBPCG-F method is a robust and efficient method for investigating the relativistic effect in systems containing heavy elements.
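
    SciPy ships a standard real-symmetric LOBPCG solver, which is enough to see where a preconditioner enters the iteration; the sketch below uses a (here trivial) Jacobi preconditioner on a 1D Laplacian and is unrelated to the paper's four-component filtering step F.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import lobpcg, LinearOperator

# Smallest eigenpairs of a 1D Laplacian with LOBPCG; M approximates inv(A).
n = 400
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
d = A.diagonal()
M = LinearOperator((n, n), matvec=lambda x: x / d)   # Jacobi preconditioner

rng = np.random.default_rng(5)
X = rng.standard_normal((n, 4))                      # block of 4 starting vectors
vals, vecs = lobpcg(A, X, M=M, largest=False, tol=1e-6, maxiter=1000)

exact = 2.0 - 2.0 * np.cos(np.pi * np.arange(1, 5) / (n + 1))
print(np.sort(vals))
print(exact)        # the two listings should agree to roughly the tolerance
```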

  9. Practice of Improving Roll Deformation Theory in Strip Rolling Process Based on Boundary Integral Equation Method

    Science.gov (United States)

    Yuan, Zhengwen; Xiao, Hong; Xie, Hongbiao

    2014-02-01

    Precise strip-shape control theory is significant for improving rolled strip quality, and roll flattening theory is a primary part of strip-shape theory. To improve the accuracy of roll flattening calculations based on the semi-infinite body model, a new and more accurate roll flattening model is proposed in this paper, derived using the boundary integral equation method. The displacement fields of the finite-length semi-infinite body on the left and right sides are simulated using the finite element method (FEM), and displacement decay functions on the left and right sides are established. Based on the new roll flattening model, a new 4Hi mill deformation model is established and verified by FEM. The new model is compared with the Föppl formula and the semi-infinite body model for different strip widths, roll shifting values and bending forces. The results show that the pressure and flattening between rolls calculated by the new model are more precise than those of the other two models, especially near the two roll barrel edges.

  10. Bandstructure meets many-body theory: the LDA+DMFT method

    Energy Technology Data Exchange (ETDEWEB)

    Held, K [Max-Planck Institut fuer Festkoerperforschung, D-70569 Stuttgart (Germany); Andersen, O K [Max-Planck Institut fuer Festkoerperforschung, D-70569 Stuttgart (Germany); Feldbacher, M [Max-Planck Institut fuer Festkoerperforschung, D-70569 Stuttgart (Germany); Yamasaki, A [Max-Planck Institut fuer Festkoerperforschung, D-70569 Stuttgart (Germany); Yang, Y-F [Max-Planck Institut fuer Festkoerperforschung, D-70569 Stuttgart (Germany)

    2008-02-13

    Ab initio calculation of the electronic properties of materials is a major challenge for solid-state theory. Whereas 40 years' experience has proven density-functional theory (DFT) in a suitable form, e.g. the local-density approximation (LDA), to give a satisfactory description when electronic correlations are weak, materials with strongly correlated electrons, say d- or f-electrons, remain a challenge. Such materials often exhibit 'colossal' responses to small changes of external parameters such as pressure, temperature, and magnetic field, and are therefore most interesting for technical applications. Encouraged by the success of dynamical mean-field theory (DMFT) in dealing with model Hamiltonians for strongly correlated electron systems, physicists from the bandstructure and many-body communities have joined forces and developed a combined LDA+DMFT method for treating materials with strongly correlated electrons ab initio. As a function of increasing Coulomb correlations, this new approach yields a weakly correlated metal, a strongly correlated metal, or a Mott insulator. In this paper, we introduce the LDA+DMFT method by means of an example, LaMnO{sub 3}. Results for this material, including the 'colossal' magnetoresistance of doped manganites, are presented. We also discuss the advantages and disadvantages of the LDA+DMFT approach.

  11. Bandstructure meets many-body theory: the LDA+DMFT method.

    Science.gov (United States)

    Held, K; Andersen, O K; Feldbacher, M; Yamasaki, A; Yang, Y-F

    2008-02-13

    Ab initio calculation of the electronic properties of materials is a major challenge for solid-state theory. Whereas 40 years' experience has proven density-functional theory (DFT) in a suitable form, e.g. the local-density approximation (LDA), to give a satisfactory description when electronic correlations are weak, materials with strongly correlated electrons, say d- or f-electrons, remain a challenge. Such materials often exhibit 'colossal' responses to small changes of external parameters such as pressure, temperature, and magnetic field, and are therefore most interesting for technical applications. Encouraged by the success of dynamical mean-field theory (DMFT) in dealing with model Hamiltonians for strongly correlated electron systems, physicists from the bandstructure and many-body communities have joined forces and developed a combined LDA+DMFT method for treating materials with strongly correlated electrons ab initio. As a function of increasing Coulomb correlations, this new approach yields a weakly correlated metal, a strongly correlated metal, or a Mott insulator. In this paper, we introduce the LDA+DMFT method by means of an example, LaMnO(3). Results for this material, including the 'colossal' magnetoresistance of doped manganites, are presented. We also discuss the advantages and disadvantages of the LDA+DMFT approach.

  12. PEXSI-$\\Sigma$: A Green's function embedding method for Kohn-Sham density functional theory

    CERN Document Server

    Li, Xiantao; Lu, Jianfeng

    2016-01-01

    As Kohn-Sham density functional theory (KSDFT) is applied to increasingly complex materials, the periodic boundary condition associated with supercell approaches also becomes unsuitable for a number of important scenarios. Green's function embedding methods allow a more versatile treatment of complex boundary conditions, and hence provide an attractive alternative to describe complex systems that cannot be easily treated in supercell approaches. In this paper, we first revisit the literature of Green's function embedding methods from a numerical linear algebra perspective. We then propose a new Green's function embedding method called PEXSI-$\\Sigma$. The PEXSI-$\\Sigma$ method approximates the density matrix using a set of nearly optimally chosen Green's functions evaluated at complex frequencies. For each Green's function, the complex boundary conditions are described by a self-energy matrix $\\Sigma$ constructed from a physical reference Green's function, which can be computed relatively easily. In th...

  13. A Method for Recognizing Fatigue Driving Based on Dempster-Shafer Theory and Fuzzy Neural Network

    Directory of Open Access Journals (Sweden)

    WenBo Zhu

    2017-01-01

    This study proposes a method based on Dempster-Shafer theory (DST) and fuzzy neural network (FNN) to improve the reliability of recognizing fatigue driving. This method measures driving states using multifeature fusion. First, FNN is introduced to obtain the basic probability assignment (BPA) of each piece of evidence, given the lack of a general solution to the definition of the BPA function. Second, a modified algorithm that revises conflict evidence is proposed to reduce unreasonable fusion results when unreliable information exists. Finally, the recognition result is given according to the combination of revised evidence based on Dempster's rule. Experiment results demonstrate that the recognition method proposed in this paper can obtain reasonable results with the combination of information given by multiple features. The proposed method can also effectively and accurately describe driving states.
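
    Dempster's rule of combination, the fusion step named above, is compact enough to state in full; the hypothesis sets and mass values below are invented for illustration, and the conflict-revision step of the paper is not included.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two basic probability assignments (dicts: frozenset -> mass)."""
    fused, conflict = {}, 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            fused[inter] = fused.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb               # mass falling on the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: the sources are incompatible")
    return {h: m / (1.0 - conflict) for h, m in fused.items()}   # normalize

F, A = frozenset({"fatigued"}), frozenset({"alert"})
theta = F | A                                 # frame of discernment
m_eyes  = {F: 0.6, A: 0.1, theta: 0.3}        # hypothetical eyelid-feature BPA
m_steer = {F: 0.5, A: 0.2, theta: 0.3}        # hypothetical steering-feature BPA
print(dempster_combine(m_eyes, m_steer))
```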

  14. Crane Safety Assessment Method Based on Entropy and Cumulative Prospect Theory

    Directory of Open Access Journals (Sweden)

    Aihua Li

    2017-01-01

    Assessing the safety status of cranes is an important problem. To overcome inaccuracies and misjudgments in such assessments, this work describes a safety assessment method for cranes that combines entropy and cumulative prospect theory. Firstly, the proposed method transforms the set of evaluation indices into an evaluation vector. Secondly, a decision matrix is constructed from the evaluation vectors and evaluation standards, and an entropy-based technique is applied to calculate the index weights. Thirdly, positive and negative prospect value matrices are established from reference points based on the positive and negative ideal solutions. This enables the crane safety grade to be determined according to the ranked comprehensive prospect values. Finally, the safety status of four general overhead traveling crane samples is evaluated to verify the rationality and feasibility of the proposed method. The results demonstrate that the method described in this paper can precisely and reasonably reflect the safety status of a crane.
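
    The entropy weighting step is standard and easy to make concrete; in the sketch below the decision matrix is invented (rows are cranes, columns are normalized benefit-type indices), and indices whose values discriminate more between cranes receive larger weights.

```python
import numpy as np

def entropy_weights(X):
    """Objective index weights from the Shannon entropy of a decision matrix X (m x n)."""
    P = X / X.sum(axis=0)                    # column-wise proportions
    m = X.shape[0]
    with np.errstate(divide="ignore", invalid="ignore"):
        plogp = np.where(P > 0, P * np.log(P), 0.0)
    e = -plogp.sum(axis=0) / np.log(m)       # entropy of each index, in [0, 1]
    d = 1.0 - e                              # degree of diversification
    return d / d.sum()                       # low-entropy indices weigh more

X = np.array([[0.82, 0.35, 0.90],
              [0.79, 0.70, 0.40],
              [0.85, 0.60, 0.75],
              [0.30, 0.65, 0.70]])
print(np.round(entropy_weights(X), 4))
```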

  15. Grey situation group decision-making method based on prospect theory.

    Science.gov (United States)

    Zhang, Na; Fang, Zhigeng; Liu, Xiaqing

    2014-01-01

    This paper puts forward a grey situation group decision-making method based on prospect theory, addressing grey situation group decision-making problems in which decisions are made by multiple experts who have risk preferences. The method takes the positive and negative ideal situation distances as reference points, defines positive and negative prospect value functions, and introduces the decision experts' risk preferences into grey situation decision-making so that the final decision is more in line with the experts' psychological behavior. Based on the TOPSIS method, this paper determines the weight of each decision expert, sets up a comprehensive prospect value matrix for the experts' evaluations, and finally determines the optimal situation. At last, the paper verifies the effectiveness and feasibility of the method by means of a specific example.
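
    The positive and negative prospect value functions mentioned here generally take the Kahneman-Tversky power form; the sketch below uses the classic 1992 parameter estimates (alpha = beta = 0.88, lambda = 2.25), which need not match this paper's choices.

```python
import numpy as np

ALPHA, BETA, LAMBDA = 0.88, 0.88, 2.25   # Tversky-Kahneman (1992) estimates

def prospect_value(x):
    """S-shaped value function: concave over gains, convex and steeper over losses."""
    x = np.asarray(x, dtype=float)
    return np.where(x >= 0.0, np.abs(x)**ALPHA, -LAMBDA * np.abs(x)**BETA)

# x would be signed distances to the positive/negative ideal situations,
# measured relative to the chosen reference point.
print(prospect_value([-2.0, -0.5, 0.0, 0.5, 2.0]))
# losses loom larger than gains: v(-1) = -2.25 while v(+1) = +1
```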

  16. Revised Max-Min Average Composition Method for Decision Making Using Intuitionistic Fuzzy Soft Matrix Theory

    Directory of Open Access Journals (Sweden)

    P. Shanmugasundaram

    2014-01-01

    In this paper a revised Intuitionistic Fuzzy Max-Min Average Composition Method is proposed to construct a decision method for the selection of professional students by recruiters, based on their skills, using the operations of Intuitionistic Fuzzy Soft Matrices. In Shanmugasundaram et al. (2014), the Intuitionistic Fuzzy Max-Min Average Composition Method was introduced and applied to a medical diagnosis problem. Sanchez's approach to decision making (Sanchez, 1979) is studied and the concept is modified for the application of intuitionistic fuzzy soft set theory. Through a survey, the opportunities and selection of the students with the help of intuitionistic fuzzy soft matrix operations, along with the intuitionistic fuzzy max-min average composition method, are discussed.
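
    Max-min composition itself is a one-line matrix operation; in the intuitionistic case membership degrees compose by max-min and non-membership degrees by the dual min-max. The sketch below is generic (matrices invented), not the revised averaging method of the paper.

```python
import numpy as np

def max_min(R, S):
    """(R o S)[i, k] = max_j min(R[i, j], S[j, k]) for fuzzy relations."""
    return np.maximum.reduce(np.minimum(R[:, :, None], S[None, :, :]), axis=1)

def min_max(R, S):
    """Dual composition, used for non-membership degrees."""
    return np.minimum.reduce(np.maximum(R[:, :, None], S[None, :, :]), axis=1)

# students x skills and skills x jobs, each with membership mu and non-membership nu
mu_RS = np.array([[0.8, 0.3], [0.4, 0.9]]); nu_RS = np.array([[0.1, 0.6], [0.5, 0.0]])
mu_ST = np.array([[0.7], [0.5]]);           nu_ST = np.array([[0.2], [0.4]])
print(max_min(mu_RS, mu_ST).ravel())   # composed membership degrees
print(min_max(nu_RS, nu_ST).ravel())   # composed non-membership degrees
```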

  17. Potential function methods for approximately solving linear programming problems theory and practice

    CERN Document Server

    Bienstock, Daniel

    2002-01-01

    Potential Function Methods For Approximately Solving Linear Programming Problems breaks new ground in linear programming theory. The book draws on the research developments in three broad areas: linear and integer programming, numerical analysis, and the computational architectures which enable speedy, high-level algorithm design. During the last ten years, a new body of research within the field of optimization research has emerged, which seeks to develop good approximation algorithms for classes of linear programming problems. This work both has roots in fundamental areas of mathematical programming and is also framed in the context of the modern theory of algorithms. The result of this work, in which Daniel Bienstock has been very much involved, has been a family of algorithms with solid theoretical foundations and with growing experimental success. This book will examine these algorithms, starting with some of the very earliest examples, and through the latest theoretical and computational developments.

  18. Geometric Methods in the Algebraic Theory of Quadratic Forms : Summer School

    CERN Document Server

    2004-01-01

    The geometric approach to the algebraic theory of quadratic forms is the study of projective quadrics over arbitrary fields. Function fields of quadrics have been central to the proofs of fundamental results since the renewal of the theory by Pfister in the 1960's. Recently, more refined geometric tools have been brought to bear on this topic, such as Chow groups and motives, and have produced remarkable advances on a number of outstanding problems. Several aspects of these new methods are addressed in this volume, which includes - an introduction to motives of quadrics by Alexander Vishik, with various applications, notably to the splitting patterns of quadratic forms under base field extensions; - papers by Oleg Izhboldin and Nikita Karpenko on Chow groups of quadrics and their stable birational equivalence, with application to the construction of fields which carry anisotropic quadratic forms of dimension 9, but none of higher dimension; - a contribution in French by Bruno Kahn which lays out a general fra...

  19. A new method for the design of slot antenna arrays: Theory and experiment

    KAUST Repository

    Clauzier, Sebastien

    2016-04-10

    The present paper proposes and validates a new general design methodology that can be used to automatically find proper positions and orientations of waveguide-based radiating slots capable of realizing any given radiation beam profile. The new technique combines basic radiation theory and waveguide propagation theory in a novel analytical model that allows the prediction of the radiation characteristics of generic slots without the need to perform a full-wave numerical solution. The analytical model is then used to implement a low-cost objective function within a global optimization scheme (here, a genetic algorithm). The algorithm is then deployed to find optimum positions and orientations of clusters of radiating slots cut into the waveguide surface such that any desired beam pattern can be obtained. The method is verified using both full-wave numerical solution and experiment.

  20. Poisson theory and integration method for a dynamical system of relative motion

    Institute of Scientific and Technical Information of China (English)

    Zhang Yi; Shang Mei

    2011-01-01

    This paper focuses on studying the Poisson theory and the integration method of dynamics of relative motion. Equations of a dynamical system of relative motion in phase space are given. Poisson theory of the system is established. The Jacobi last multiplier of the system is defined, and the relation between the Jacobi last multiplier and the first integrals of the system is studied. Our research shows that for a dynamical system of relative motion, whose configuration is determined by n generalized coordinates, the solution of the system can be found by using the Jacobi last multiplier if (2n - 1) first integrals of the system are known. At the end of the paper, an example is given to illustrate the application of the results.

  1. Description of light nuclei in pionless effective field theory using the stochastic variational method

    Science.gov (United States)

    Lensky, Vadim; Birse, Michael C.; Walet, Niels R.

    2016-09-01

    We construct a coordinate-space potential based on pionless effective field theory (EFT) with a Gaussian regulator. Charge-symmetry breaking is included through the Coulomb potential and through two- and three-body contact interactions. Starting with the effective field theory potential, we apply the stochastic variational method to determine the ground states of nuclei with mass number A ≤ 4. At next-to-next-to-leading order, two out of three independent three-body parameters can be fitted to the three-body binding energies. To fix the remaining one, we look for a simultaneous description of the binding energy of 4He and the charge radii of 3He and 4He. We show that at the order considered we can find an acceptable solution, within the uncertainty of the expansion. We find that the EFT expansion shows good agreement with empirical data within the estimated uncertainty, even for a system as dense as 4He.

  2. Description of light nuclei in pionless effective field theory using the stochastic variational method

    CERN Document Server

    Lensky, Vadim; Walet, Niels R

    2016-01-01

    We construct a coordinate-space potential based on pionless effective field theory with a Gaussian regulator. Charge-symmetry breaking is included through the Coulomb potential and through two- and three-body contact interactions. Starting with the effective field theory potential, we apply the stochastic variational method to determine the ground states of nuclei with mass number $A\\leq 4$. At next-to-next-to-leading order, two out of three independent three-body parameters can be fitted to the three-body binding energies. To fix the remaining one, we look for a simultaneous description of the binding energy of $^4$He and the charge radii of $^3$He and $^4$He. We show that at the order considered we can find an acceptable solution, within the uncertainty of the expansion. We find that the EFT expansion shows good convergence, even for a system as dense as $^4$He.

  3. Optimal cross-sectional sampling for river modelling with bridges: An information theory-based method

    Science.gov (United States)

    Ridolfi, E.; Alfonso, L.; Di Baldassarre, G.; Napolitano, F.

    2016-06-01

    The description of river topography has a crucial role in accurate one-dimensional (1D) hydraulic modelling. Specifically, cross-sectional data define the riverbed elevation, the flood-prone area, and thus, the hydraulic behavior of the river. Here, the problem of the optimal cross-sectional spacing is solved through an information theory-based concept. The optimal subset of locations is the one with the maximum information content and the minimum amount of redundancy. The original contribution is the introduction of a methodology to sample river cross sections in the presence of bridges. The approach is tested on the Grosseto River (IT) and is compared to existing guidelines. The results show that the information theory-based approach can support traditional methods to estimate rivers' cross-sectional spacing.

  4. On the generalized eigenvalue method for energies and matrix elements in lattice field theory

    CERN Document Server

    Blossier, Benoit; von Hippel, Georg; Mendes, Tereza; Sommer, Rainer

    2009-01-01

    We discuss the generalized eigenvalue problem for computing energies and matrix elements in lattice gauge theory, including effective theories such as HQET. It is analyzed how the extracted effective energies and matrix elements converge when the time separations are made large. This suggests a particularly efficient application of the method for which we can prove that corrections vanish asymptotically as $\\exp(-(E_{N+1}-E_n) t)$. The gap $E_{N+1}-E_n$ can be made large by increasing the number $N$ of interpolating fields in the correlation matrix. We also show how excited state matrix elements can be extracted such that contaminations from all other states disappear exponentially in time. As a demonstration we present numerical results for the extraction of ground state and excited B-meson masses and decay constants in static approximation and to order $1/m_b$ in HQET.
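
    In practice the GEVP C(t) v_n = lambda_n(t, t0) C(t0) v_n is solved with a standard generalized eigensolver, and effective energies are read off from the eigenvalue decay; the sketch below uses a fabricated two-state correlator, for which the construction is exact by design.

```python
import numpy as np
from scipy.linalg import eigh

# Fake 2x2 correlator matrix C(t) built from two states (E0 = 0.4, E1 = 0.9)
# with invented overlaps; the GEVP recovers the energies exactly here.
E = np.array([0.4, 0.9])
V = np.array([[1.0, 0.8],
              [0.5, -0.6]])                  # overlap matrix (invertible)
C = lambda t: V @ np.diag(np.exp(-E * t)) @ V.T

t0 = 1
for t in range(2, 6):
    lam_t  = eigh(C(t),     C(t0), eigvals_only=True)[::-1]   # descending order
    lam_t1 = eigh(C(t + 1), C(t0), eigvals_only=True)[::-1]
    print(t, np.log(lam_t / lam_t1))         # effective energies -> [0.4, 0.9]
```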

  5. A mixed element based on Lagrange multiplier method for modified couple stress theory

    Science.gov (United States)

    Kwon, Young-Rok; Lee, Byung-Chai

    2017-01-01

    A 2D mixed element is proposed for the modified couple stress theory. The C1 continuity for the displacement field is required because of the second derivatives of displacement in the energy form of the theory. The C1 continuity is satisfied in a weak sense with the Lagrange multiplier method. A supplementary rotation is introduced as an independent variable and the kinematic relation between the physical rotation and the supplementary rotation is constrained with Lagrange multipliers. Convergence criteria and a stability condition are derived, and the number and the positions of nodes for each independent variable are determined. Internal degrees of freedom are condensed out, so the element has only 21 degrees of freedom. The proposed element passes the C^{0-1} patch test. Numerical results show that the principle of limitation is applied to the element and the element is robust to mesh distortion. Furthermore, the size effects are captured well with the element.

  6. Application of the Hori Method in the Theory of Nonlinear Oscillations

    Directory of Open Access Journals (Sweden)

    Sandro da Silva Fernandes

    2012-01-01

    Some remarks on the application of the Hori method in the theory of nonlinear oscillations are presented. Two simplified algorithms for determining the generating function and the new system of differential equations are derived from a general algorithm proposed by Sessin. The vector functions which define the generating function and the new system of differential equations are not uniquely determined, since the algorithms involve arbitrary functions of the constants of integration of the general solution of the new undisturbed system. Different choices of these arbitrary functions can be made in order to simplify the new system of differential equations and define appropriate near-identity transformations. These simplified algorithms are applied in determining second-order asymptotic solutions of two well-known equations in the theory of nonlinear oscillations: the van der Pol equation and the Duffing equation.
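
    The two benchmark equations can also be integrated numerically to obtain a reference against which any second-order asymptotic solution is checked pointwise; the sketch below (standard SciPy integration, not the Hori-method algebra) uses an invented small parameter.

```python
import numpy as np
from scipy.integrate import solve_ivp

EPS = 0.1   # small parameter of the perturbation expansions (invented value)

def van_der_pol(t, y):
    x, v = y
    return [v, EPS * (1.0 - x**2) * v - x]

def duffing(t, y):
    x, v = y
    return [v, -x - EPS * x**3]

t_span, y0 = (0.0, 60.0), [1.0, 0.0]
for rhs, name in ((van_der_pol, "van der Pol"), (duffing, "Duffing")):
    sol = solve_ivp(rhs, t_span, y0, rtol=1e-9, atol=1e-9, dense_output=True)
    print(name, "state at t=60:", sol.sol(60.0))
# An asymptotic solution x_asym(t) can then be compared with sol.sol(t) pointwise.
```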

  7. Dealing with defaulting suppliers using behavioral based governance methods: An Agency Theory perspective

    DEFF Research Database (Denmark)

    Prosman, Ernst Johannes; Scholten, Kirstin; Power, Damien

    2016-01-01

    Purpose: The aim of this paper is to explore factors influencing the effectiveness of buyer-initiated Behavioral Based Governance Methods (BBGMs). The ability of BBGMs to improve supplier performance is assessed considering power imbalances and the resource intensiveness of the BBGM. Agency Theory is used as an interpretive lens. Design/methodology/approach: An explorative multiple case study approach is used to collect qualitative and quantitative data from buying companies involved in 13 BBGMs. Findings: Drawing on agency theory, several factors are identified which can explain BBGM effectiveness considering power differences and the resource intensiveness of the BBGM. Our data show that even highly resource-intensive BBGMs can be implemented effectively if there are benefits for a powerful supplier. Cultural influences and uncertainty of the business environment also play a role. Originality...

  8. Numerical simulation of the second-order Stokes theory using finite difference method

    Directory of Open Access Journals (Sweden)

    M.A. Maâtoug

    2016-09-01

    The nonlinear water wave problem is of great importance because, according to the mechanical modeling of this problem, a relationship exists between the potential flow and the pressure exerted by water waves. The difficulty of this problem comes not only from the fact that the kinematic and dynamic conditions are nonlinear in relation to the velocity potential, but especially because they are applied at an unknown and variable free surface. To overcome this difficulty, Stokes used an approach consisting of perturbation series around the still water level to develop a nonlinear theory. This paper deals with computation of the second-order Stokes theory in order to simulate the potential flow and the surface elevation, and then to deduce the pressure loads. The Crank–Nicolson scheme and the finite difference method are used. The modeling accuracy was proved to be of order two in time and in space. Some computational results are presented and discussed.
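
    In the deep-water limit the second-order Stokes elevation reduces to eta = a*cos(theta) + (k*a^2/2)*cos(2*theta) with theta = kx - omega*t and omega^2 = gk; the quick sketch below evaluates that limiting formula (the paper's scheme solves the general problem numerically, and the wave parameters here are invented).

```python
import numpy as np

g = 9.81
a, wavelength = 1.0, 60.0            # amplitude [m], wavelength [m] (invented)
k = 2.0 * np.pi / wavelength
omega = np.sqrt(g * k)               # deep-water dispersion relation

def eta(x, t):
    """Second-order Stokes free-surface elevation, deep-water limit."""
    theta = k * x - omega * t
    return a * np.cos(theta) + 0.5 * k * a**2 * np.cos(2.0 * theta)

x = np.linspace(0.0, 2.0 * wavelength, 9)
print(np.round(eta(x, 0.0), 3))
# Crests sharpen (max ~ a + k a^2 / 2) and troughs flatten (min ~ -(a - k a^2 / 2)),
# the signature second-order correction to linear (Airy) wave theory.
```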

  9. East Asian welfare regime

    DEFF Research Database (Denmark)

    Abrahamson, Peter

    2017-01-01

    The paper asks if East Asian welfare regimes are still productivist and Confucian, and whether they have developed public care policies. The literature is split on the first question but (mostly) confirmative on the second. Care has, to a large but insufficient extent, been rolled out in the region... Political science studies tend to conclude that the region has left the old legacies behind and that these are now welfare states comparable to European states, including them either in the conservative type (e.g. Japan), the liberal type (e.g. Korea) or even, as a tendency, in the Nordic type (e.g. China), while studies...

  10. Integrating design science theory and methods to improve the development and evaluation of health communication programs.

    Science.gov (United States)

    Neuhauser, Linda; Kreps, Gary L

    2014-12-01

    Traditional communication theory and research methods provide valuable guidance about designing and evaluating health communication programs. However, efforts to use health communication programs to educate, motivate, and support people to adopt healthy behaviors often fail to meet the desired goals. One reason for this failure is that health promotion issues are complex, changeable, and highly related to the specific needs and contexts of the intended audiences. It is a daunting challenge to effectively influence health behaviors, particularly culturally learned and reinforced behaviors concerning lifestyle factors related to diet, exercise, and substance (such as alcohol and tobacco) use. Too often, program development and evaluation are not adequately linked to provide rapid feedback to health communication program developers so that important revisions can be made to design the most relevant and personally motivating health communication programs for specific audiences. Design science theory and methods commonly used in engineering, computer science, and other fields can address such program and evaluation weaknesses. Design science researchers study human-created programs using tightly connected build-and-evaluate loops in which they use intensive participatory methods to understand problems and develop solutions concurrently and throughout the duration of the program. Such thinking and strategies are especially relevant to address complex health communication issues. In this article, the authors explore the history, scientific foundation, methods, and applications of design science and its potential to enhance health communication programs and their evaluation.

  11. Applying Critical Race Theory to Group Model Building Methods to Address Community Violence.

    Science.gov (United States)

    Frerichs, Leah; Lich, Kristen Hassmiller; Funchess, Melanie; Burrell, Marcus; Cerulli, Catherine; Bedell, Precious; White, Ann Marie

    2016-01-01

    Group model building (GMB) is an approach to building qualitative and quantitative models with stakeholders to learn about the interrelationships among multilevel factors causing complex public health problems over time. Scant literature exists on adapting this method to address public health issues that involve racial dynamics. This study's objectives are to (1) introduce GMB methods, (2) present a framework for adapting GMB to enhance cultural responsiveness, and (3) describe outcomes of adapting GMB to incorporate differences in racial socialization during a community project seeking to understand key determinants of community violence transmission. An academic-community partnership planned a 1-day session with diverse stakeholders to explore the issue of violence using GMB. We documented key questions inspired by critical race theory (CRT) and adaptations to established GMB "scripts" (i.e., published facilitation instructions). The theory's emphasis on experiential knowledge led to a narrative-based facilitation guide from which participants created causal loop diagrams. These early diagrams depict how violence is transmitted and how communities respond, based on participants' lived experiences and mental models of causation that grew to include factors associated with race. Participants found these methods useful for advancing difficult discussion. The resulting diagrams can be tested and expanded in future research, and will form the foundation for collaborative identification of solutions to build community resilience. GMB is a promising strategy that community partnerships should consider when addressing complex health issues; our experience adapting methods based on CRT is promising in its acceptability and early system insights.

  12. A numerical homogenization method for heterogeneous, anisotropic elastic media based on multiscale theory

    KAUST Repository

    Gao, Kai

    2015-06-05

    The development of reliable methods for upscaling fine-scale models of elastic media has long been an important topic for rock physics and applied seismology. Several effective medium theories have been developed to provide elastic parameters for materials such as finely layered media or randomly oriented or aligned fractures. In such cases, the analytic solutions for upscaled properties can be used for accurate prediction of wave propagation. However, such theories cannot be applied directly to homogenize elastic media with more complex, arbitrary spatial heterogeneity. Therefore, we have proposed a numerical homogenization algorithm based on multiscale finite-element methods for simulating elastic wave propagation in heterogeneous, anisotropic elastic media. Specifically, our method used multiscale basis functions obtained from a local linear elasticity problem with appropriately defined boundary conditions. Homogenized, effective medium parameters were then computed using these basis functions, and the approach applied a numerical discretization that was similar to the rotated staggered-grid finite-difference scheme. Comparisons of the results from our method and from conventional, analytical approaches for finely layered media showed that the homogenization reliably estimated elastic parameters for this simple geometry. Additional tests examined anisotropic models with arbitrary spatial heterogeneity in which the average size of the heterogeneities ranged from several centimeters to several meters, and the ratio between the dominant wavelength and the average size of the arbitrary heterogeneities ranged from 10 to 100. Comparisons to finite-difference simulations proved that the numerical homogenization was equally accurate for these complex cases.

  13. A New Extension Theory-based Production Operation Method in Industrial Process

    Institute of Scientific and Technical Information of China (English)

    XU Yuan; ZHU Qunxiong

    2013-01-01

    To address dynamic changes in production demand and operating contradictions in the production process, a new extension theory-based production operation method is proposed. Its core consists of demand requisition, contradiction resolution and operation classification. For demand requisition, deep and comprehensive demand elements are collected by conjugate analysis. For contradiction resolution, conflicts between demand and operating elements are resolved by extension reasoning, extension transformation and consistency judgment. For operation classification, the relative importance of the operating elements is calculated by extension clustering so as to guide the production operation and ensure production safety. Application to the cascade reaction process for high-density polyethylene (HDPE) at a chemical plant, together with case studies and comparisons, shows that the proposed extension theory-based method significantly outperforms the traditional experience-based operation method in actual production, opening a new direction for research on production operation methods for industrial processes.
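
    For readers new to extension theory, its elementary quantitative tool is the extension distance of a value from an interval, from which a dependent function over nested "classical" and "neighborhood" intervals is built. A minimal sketch in one common textbook form (the temperature demand element below is invented for illustration and is not from the paper):

        def ext_distance(x, a, b):
            """Extension distance of x from the interval <a, b>:
            negative inside the interval, positive outside."""
            return abs(x - (a + b) / 2.0) - (b - a) / 2.0

        def dependent(x, classical, neighborhood):
            """Elementary dependent function k(x) for a classical domain X0
            nested inside a neighborhood domain X (one textbook variant)."""
            rho0 = ext_distance(x, *classical)
            rho1 = ext_distance(x, *neighborhood)
            d = rho1 - rho0
            # k > 0: demand satisfied; -1 < k < 0: transformable; k < -1: not
            return -rho0 if d == 0 else rho0 / d

        # e.g. reactor temperature demand: ideal 80-90 C, tolerable 70-100 C
        for t in (85, 95, 105):
            print(t, round(dependent(t, (80, 90), (70, 100)), 3))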

  14. Theory of fully developed hydrodynamic turbulent flow: Applications of renormalization-group methods

    Science.gov (United States)

    Yuan, Jian-Yang; Ronis, David

    1992-04-01

    A model for randomly stirred or homogeneous turbulent fluids is analyzed using renormalization-group methods on a path-integral representation of the Navier-Stokes equations containing a spatially and temporally colored noise source. For moderate Reynolds numbers and certain values of the dynamic exponent governing the noise correlation, an additional scaling regime is found at wave vectors k beyond those where the Kolmogorov 5/3 law holds. In this case, the energy spectrum decays as k^(-1-z), where z > 1, and the velocity-distribution function (as characterized by its skewness) deviates from a Gaussian. The additional scaling region disappears, and the Kolmogorov constant and Prandtl number become universal in the limit of infinite Reynolds number. In three spatial dimensions, the latter two equal (3/2)(5/3)^(1/3) and √0.8, respectively. The recent homodyne scattering experiments of Tong and co-workers [Phys. Rev. Lett. 65, 2780 (1990)] are analyzed, and the connection of the new scaling region with intermittency is discussed.
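
    For reference, the two spectral regimes contrasted in this abstract can be written side by side, with C_K the Kolmogorov constant and ε the mean energy dissipation rate (the exponent of the additional regime follows the reconstruction above):

        E(k) = C_K\,\varepsilon^{2/3}\,k^{-5/3} \quad \text{(Kolmogorov inertial range)},
        \qquad
        E(k) \propto k^{-1-z} \quad \text{(additional scaling regime, } z > 1\text{)}.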

  15. Integration of Qualitative and Quantitative Methods: Building and Interpreting Clusters from Grounded Theory and Discourse Analysis

    Directory of Open Access Journals (Sweden)

    Aldo Merlino

    2007-01-01

    Full Text Available Qualitative methods present a wide spectrum of application possibilities as well as opportunities for combining qualitative and quantitative methods. In the social sciences, fruitful theoretical discussions and a great deal of empirical research have taken place. This article introduces an empirical investigation which demonstrates the logic of combining methodologies as well as the collection and interpretation, both sequential and simultaneous, of qualitative and quantitative data. Specifically, the investigation process is described, beginning with a grounded theory methodology and its combination with the techniques of structural semiotic discourse analysis to generate—in a first phase—an instrument for quantitative measurement and to understand—in a second phase—clusters obtained by quantitative analysis. This work illustrates how qualitative methods allow for the comprehension of the discursive and behavioral elements under study, and how they function as support in making sense of and giving meaning to quantitative data. URN: urn:nbn:de:0114-fqs0701219

  16. Linear-scaling density functional theory using the projector augmented wave method

    Science.gov (United States)

    Hine, Nicholas D. M.

    2017-01-01

    Quantum mechanical simulation of realistic models of nanostructured systems, such as nanocrystals and crystalline interfaces, demands computational methods combining high-accuracy with low-order scaling with system size. Blöchl’s projector augmented wave (PAW) approach enables all-electron (AE) calculations with the efficiency and systematic accuracy of plane-wave pseudopotential calculations. Meanwhile, linear-scaling (LS) approaches to density functional theory (DFT) allow for simulation of thousands of atoms in feasible computational effort. This article describes an adaptation of PAW for use in the LS-DFT framework provided by the ONETEP LS-DFT package. ONETEP uses optimisation of the density matrix through in situ-optimised local orbitals rather than the direct calculation of eigenstates as in traditional PAW approaches. The method is shown to be comparably accurate to both PAW and AE approaches and to exhibit improved convergence properties compared to norm-conserving pseudopotential methods.
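
    The core of Blöchl's PAW formalism referenced here is the linear map between the smooth pseudo-wavefunction and the all-electron wavefunction, built from atomic partial waves, pseudo partial waves and projectors:

        |\psi\rangle = \hat{\mathcal{T}}\,|\tilde{\psi}\rangle,
        \qquad
        \hat{\mathcal{T}} = 1 + \sum_{i} \bigl(|\phi_i\rangle - |\tilde{\phi}_i\rangle\bigr)\langle \tilde{p}_i| .

    In broad terms, the density-matrix optimisation can then work entirely with the smooth quantities while all-electron expectation values are recovered through the transformation, which is what makes the combination with linear-scaling DFT attractive.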

  17. Ranking Journals Using Social Choice Theory Methods: A Novel Approach in Bibliometrics

    Energy Technology Data Exchange (ETDEWEB)

    Aleskerov, F.T.; Pislyakov, V.; Subochev, A.N.

    2016-07-01

    We use data on economics, management and political science journals to produce quantitative estimates of the (in)consistency of evaluations based on seven popular bibliometric indicators (impact factor, 5-year impact factor, immediacy index, article influence score, h-index, SNIP and SJR). We propose a new approach to aggregating journal rankings: since rank aggregation is a multicriteria decision problem, ordinal ranking methods from social choice theory may solve it. We apply either a direct ranking method based on majority rule (the Copeland rule, the Markovian method) or a sorting procedure based on a tournament solution, such as the uncovered set and the minimal externally stable set. We demonstrate that aggregate rankings reduce the number of contradictions and represent the set of single-indicator-based rankings better than any of the seven rankings themselves. (Author)
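
    Of the direct ranking rules named above, the Copeland rule is the simplest to state: a journal scores +1 for each rival it beats by pairwise majority across the single-indicator rankings and -1 for each it loses to. A minimal sketch (journal names and rank data are invented):

        from itertools import combinations

        def copeland(rankings):
            """rankings: list of dicts mapping item -> rank (1 = best).
            Returns items sorted by Copeland score (majority wins - losses)."""
            items = list(rankings[0])
            score = {x: 0 for x in items}
            for a, b in combinations(items, 2):
                # majority margin of a over b across all indicator rankings
                margin = sum((r[a] < r[b]) - (r[a] > r[b]) for r in rankings)
                if margin > 0:
                    score[a] += 1; score[b] -= 1
                elif margin < 0:
                    score[b] += 1; score[a] -= 1
            return sorted(items, key=lambda x: -score[x]), score

        # three toy indicator-based rankings of four journals
        r1 = {"A": 1, "B": 2, "C": 3, "D": 4}
        r2 = {"A": 2, "B": 1, "C": 4, "D": 3}
        r3 = {"A": 1, "B": 3, "C": 2, "D": 4}
        print(copeland([r1, r2, r3]))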

  18. Comparing the content of leadership theories and managers' shared perceptions of effective leadership: a Q-method study of trainee managers in the English NHS.

    Science.gov (United States)

    Freeman, Tim

    2013-08-01

    Health service managers face potential conflicts between corporate and professional agendas, a tension sharpened for trainees by their junior status and relative inexperience. While academic leadership theory forms an integral part of contemporary management development programmes, relatively little is known of trainees' patterned subjectivities in relation to leadership theories. The objective of this study was to explore such subjectivities within a cohort of trainees on the National Health Service Graduate Management Training Scheme (NHS GMTS), a 'fast-track' programme which prepares graduate entrants for director-level health service management posts. A Q-method design was used and four shared subjectivities were identified: leadership as collaborative social process ('relational'); leadership as integrity ('moral'); leadership as effective support of subordinates ('team'); and leadership as construction of a credible leadership persona ('identity'). While the factors broadly map onto competencies indicated within the NHS Leadership Qualities Framework which underpin assessments of performance for this student group, it is important not to overstate the governance effect of the assessment regime. Rather, factors reflect tensions between required competencies, namely the mobilisation of diverse interest groups, the ethical base of decisions and the identity work required to convince others of leadership status. Indeed, factor 2 ('moral') effectively defines leadership as the embodiment of public service ethos.

  19. Large deformation of uniaxially loaded slender microbeams on the basis of modified couple stress theory: Analytical solution and Galerkin-based method

    Science.gov (United States)

    Kiani, Keivan

    2017-09-01

    The large-deformation regime of micro-scale slender beam-like structures subjected to axial point loads is of high interest to nanotechnologists and the applied mechanics community. Herein, size-dependent nonlinear governing equations are derived by employing modified couple stress theory. Under various boundary conditions, analytical relations between axially applied loads and deformations are presented. Additionally, a novel Galerkin-based assumed mode method (AMM) is established to solve the highly nonlinear equations. In some particular cases, the results predicted by the analytical approach are also checked against those of the AMM, and reasonably good agreement is reported. Subsequently, the key role of the material length scale in the load-deformation behavior of microbeams is discussed, and the deficiencies of classical elasticity theory in predicting such a crucial mechanical behavior are explained in some detail. The influences of the slenderness ratio and thickness of the microbeam on the obtained results are also examined. The present work could be considered a pivotal step toward better understanding the postbuckling behavior of nano-/micro-electro-mechanical systems consisting of microbeams.
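
    For context (a standard result of modified couple stress beam models, not taken from this paper), the material length scale l enters a slender Euler-Bernoulli microbeam through a stiffened bending rigidity, with μ the shear modulus and A the cross-sectional area:

        (EI)_{\mathrm{eff}} = EI + \mu A l^{2}.

    Classical elasticity is recovered as l → 0, which is why it cannot capture the size dependence discussed above.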

  20. Two Approaches in the Lunar Libration Theory: Analytical vs. Numerical Methods

    Science.gov (United States)

    Petrova, Natalia; Zagidullin, Arthur; Nefediev, Yurii; Kosulin, Valerii

    2016-10-01

    Observation of the physical libration of the Moon and other celestial bodies is one of the astronomical methods for remotely evaluating the internal structure of a celestial body without resorting to expensive space experiments. A review of the results obtained from studies of the physical libration is presented in the report. The main emphasis is placed on the description of successful lunar laser ranging for libration determination and on methods of simulating the physical libration. As a result, the viscoelastic and dissipative properties of the lunar body and the parameters of the lunar core have been estimated; the core's existence was confirmed by the recent reprocessing of seismic data from the Apollo missions. Attention is paid to the physical interpretation of the phenomenon of free libration and to methods of its determination. A significant part of the report is devoted to describing the practical application of the most accurate analytical tables of lunar libration available to date, built by comprehensive analytical processing of the residual differences obtained when comparing long-term series of laser observations with the numerical ephemeris DE421 [1]. In general, the outline of the report reflects the effectiveness of two approaches in libration theory: numerical and analytical. It is shown that the two approaches complement each other in the study of the Moon: the numerical approach provides the high accuracy necessary for adequate treatment of modern high-accuracy observations, while the analytical approach reveals the essence of the various manifestations in the lunar rotation and allows one to predict and interpret new effects in observations of the physical libration [2].
    [1] Rambaux, N., J. G. Williams, 2011, The Moon's physical librations and determination of their free modes, Celest. Mech. Dyn. Astron., 109, 85-100.
    [2] Petrova N., A. Zagidullin, Yu. Nefediev. Analysis of long-periodic variations of lunar libration parameters on the basis of

  1. Students' Perceptions of Teaching Methods That Bridge Theory to Practice in Dental Hygiene Education.

    Science.gov (United States)

    Wilkinson, Denise M; Smallidge, Dianne; Boyd, Linda D; Giblin, Lori

    2015-10-01

    Health care education requires students to connect classroom learning with patient care. The purpose of this study was to explore dental hygiene students' perceptions of teaching tools, activities and teaching methods useful in closing the gap between theory and practice as students transition from classroom learning into the clinical phase of their training. This was an exploratory qualitative study design examining retrospective data from journal postings of a convenience sample of dental hygiene students (n=85). Open-ended questions related to patient care were given to junior and senior students to respond to in a reflective journaling activity. A systematic approach was used to establish themes. Junior students predicted hands-on experiences (51%), critical thinking exercises (42%) and visual aids (27%) would be the most supportive in helping them connect theory to practice. Senior students identified critical thinking exercises (44%) and visual aids (44%) as the most beneficial in connecting classroom learning to patient care. Seniors also identified barriers preventing them from connecting theory to patient care. The barriers most often cited were not being able to see firsthand what is in the text (56%) and being unsure that what was seen during clinical practice was the same as what was taught (28%). Students recognized the benefits of critical thinking and problem solving skills after having experienced patient care and were most concerned with performance abilities prior to patient care experiences. This information will be useful in developing curricula to enhance critical thinking and problem solving skills. Copyright © 2015 The American Dental Hygienists’ Association.

  2. The Middle Number World: A View of Complexity Theory and Methods in Ecology

    Science.gov (United States)

    Bradshaw, G.; Bradshaw, G.

    2001-12-01

    Ecosystems, like the porridge and chair that Goldilocks found in the Three Bears' house, are characterized by numbers neither too large nor too small; they belong instead to the class of middle number systems. As such, complexity theory and methods complement the web of structures and interactions which make up landscapes and ecosystems and concern the inception of "life itself" (Rosen, 1991). As a field integral to critical socio-ecological issues confronting the globe today, and one concerned with intricate scale relationships between observer (ecologist) and observed (ecosystem), ecology brings an intriguing perspective to complex systems analysis. We discuss these new findings from complexity theory within ecological research. In this overview, we describe a systematics of ecosystem dynamics (emergence, unfolding, embedding, and operational closure) which is evolving for ecological phenomena and is common to other complex adaptive systems. Further, we discuss future research directions which are emerging with the integration of complexity and social sciences theories as they develop into a new post-modern epistemology.

  3. A New Communication Theory on Complex Information and a Groundbreaking New Declarative Method to Update Object Databases

    OpenAIRE

    Virkkunen, Heikki

    2016-01-01

    In this article I introduce a new communication theory for complex information represented as a directed graph of nodes. In addition, I introduce an application of the theory: a radical new method, embed, that can be used to update object databases declaratively. The embed method revolutionizes the updating of object databases. One embed method call can replace dozens of lines of complicated updating code in a traditional client program of an object database, which is a huge improvement. As a decl...

  4. Robust method for infrared small-target detection based on Boolean map visual theory.

    Science.gov (United States)

    Qi, Shengxiang; Ming, Delie; Ma, Jie; Sun, Xiao; Tian, Jinwen

    2014-06-20

    In this paper, we present an infrared small target detection method based on Boolean map visual theory. The scheme is inspired by the phenomenon that small targets can often attract human attention due to two characteristics: brightness and a Gaussian-like shape in the local context area. Motivated by this observation, we perform the task under a visual attention framework with Boolean map theory, which holds that an observer's visual awareness corresponds to one Boolean map via a selected feature at any given instant. Formally, the infrared image is separated into two feature channels, including a color channel with the original gray intensity map and an orientation channel with the orientation texture maps produced by a designed second-order directional derivative filter. For each feature map, Boolean maps delineating targets are computed from hierarchical segmentations. Small targets are then extracted from the target-enhanced map, which is obtained by fusing the weighted Boolean maps of the two channels. In experiments, a set of real infrared images covering typical backgrounds with sky, sea, and ground clutter is tested to verify the effectiveness of our method. The results demonstrate that it outperforms the state-of-the-art methods with good performance.
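
    The per-channel Boolean-map machinery can be sketched in a few lines: hierarchical thresholds on an intensity feature map, followed by a weighted fusion into a target-enhanced map. The thresholding scheme, weights and toy frame below are our illustrative choices, not the authors' exact design:

        import numpy as np

        def boolean_maps(feature, n_levels=8):
            """Hierarchically threshold a feature map into Boolean maps."""
            lo, hi = feature.min(), feature.max()
            thresholds = np.linspace(lo, hi, n_levels + 2)[1:-1]
            return [(feature > t).astype(float) for t in thresholds]

        def target_enhanced_map(intensity, weights=None):
            """Weighted fusion of Boolean maps into a target-enhanced map."""
            maps = boolean_maps(intensity)
            weights = weights or [1.0] * len(maps)
            fused = sum(w * m for w, m in zip(weights, maps))
            return fused / fused.max()

        # toy frame: faint Gaussian-like 'target' on a noisy background
        y, x = np.mgrid[:64, :64]
        frame = np.exp(-((x - 40)**2 + (y - 20)**2) / 4.0) \
                + 0.1 * np.random.rand(64, 64)
        tem = target_enhanced_map(frame)
        print("peak at:", np.unravel_index(tem.argmax(), tem.shape))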

  5. Hybrid Multicriteria Group Decision Making Method for Information System Project Selection Based on Intuitionistic Fuzzy Theory

    Directory of Open Access Journals (Sweden)

    Jian Guo

    2013-01-01

    Full Text Available Information system (IS) project selection is of critical importance to every organization in a dynamic competitive environment. The aim of this paper is to develop a hybrid multicriteria group decision making approach based on intuitionistic fuzzy theory for IS project selection. The decision makers' assessment information can be expressed in the form of real numbers, interval-valued numbers, linguistic variables, and intuitionistic fuzzy numbers (IFNs). All these pieces of evaluation information can be transformed into IFNs. The intuitionistic fuzzy weighted averaging (IFWA) operator is utilized to aggregate individual opinions of decision makers into a group opinion. Intuitionistic fuzzy entropy is used to obtain the entropy weights of the criteria. The TOPSIS method combined with intuitionistic fuzzy sets is proposed to select the appropriate IS project in a group decision making environment. Finally, a numerical example of information system project selection is given to illustrate the application of the hybrid multicriteria group decision making (MCGDM) method based on intuitionistic fuzzy theory and the TOPSIS method.
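
    The IFWA aggregation step has a well-known closed form: for IFNs (mu_i, nu_i) with weights w_i summing to one, IFWA = (1 - prod(1 - mu_i)^w_i, prod nu_i^w_i). A minimal sketch (ratings and weights are invented):

        from math import prod

        def ifwa(ifns, weights):
            """Intuitionistic fuzzy weighted averaging of IFNs (mu, nu)."""
            assert abs(sum(weights) - 1.0) < 1e-9
            mu = 1.0 - prod((1.0 - m) ** w for (m, _), w in zip(ifns, weights))
            nu = prod(n ** w for (_, n), w in zip(ifns, weights))
            return mu, nu   # hesitancy degree is 1 - mu - nu

        # three decision makers' ratings of one IS project on one criterion
        ratings = [(0.7, 0.2), (0.6, 0.3), (0.8, 0.1)]
        print(ifwa(ratings, weights=[0.4, 0.3, 0.3]))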

  6. Early detection of ecosystem regime shifts

    DEFF Research Database (Denmark)

    Lindegren, Martin; Dakos, Vasilis; Groeger, Joachim P.;

    2012-01-01

    Critical transitions between alternative stable states have been shown to occur across an array of complex systems. While our ability to identify abrupt regime shifts in natural ecosystems has improved, detection of potential early-warning signals previous to such shifts is still very limited... methods may have limited utility in ecosystem-based management as they show no or weak potential for early-warning. We therefore propose a multiple method approach for early detection of ecosystem regime shifts in monitoring data that may be useful in informing timely management actions in the face...

  7. Determining the orientation of quasiprincipal stresses by borehole electrometry: theory of the method, I

    Energy Technology Data Exchange (ETDEWEB)

    Oparin, V.N.

    1986-01-01

    The authors propose a method for determining the orientation of quasiprincipal stresses in rock beds based on data from borehole electrometry. Knowing the orientation of maximum stresses in rock strata and how they are changed by mining operations is important for various geomechanical and engineering problems. The theory of electrometric determination of the orientation of maximum stresses presented in this paper proceeds from an a priori assumption of a direct correlation between the orientation of the maximum conductivity axis of rocks and the orientation of the axis of maximum mechanical stress.

  8. Piecewise continuous distribution function method in the theory of wave disturbances of inhomogeneous gas

    Energy Technology Data Exchange (ETDEWEB)

    Vereshchagin, D.A. [Theoretical Physics Department, Kaliningrad State University, A. Nevsky st. 14, Kaliningrad (Russian Federation); Leble, S.B. [Theoretical Physics Department, Kaliningrad State University, A. Nevsky st. 14, Kaliningrad (Russian Federation) and Theoretical Physics and Mathematical Methods Department, Gdansk University of Technology, ul. Narutowicza 11/12, Gdansk (Poland)]. E-mail: leble@mifgate.pg.gda.pl; Solovchuk, M.A. [Theoretical Physics Department, Kaliningrad State University, A. Nevsky st. 14, Kaliningrad (Russian Federation)]. E-mail: solovchuk@yandex.ru

    2006-01-02

    The system of hydrodynamic-type equations for a stratified gas in a gravity field is derived from the BGK equation by the method of a piecewise continuous distribution function. The obtained system of equations generalizes the Navier-Stokes equations to arbitrary Knudsen numbers. The problem of wave disturbance propagation in a rarefied gas is explored. The model is verified in the limiting case of a homogeneous medium. The phase velocity and attenuation coefficient values are in agreement with earlier fluid mechanics theories; the attenuation behavior reproduces experimental and kinetics-based results over a wider range of Knudsen numbers.
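
    For reference, the BGK kinetic equation underlying the derivation reads, in standard form for the distribution function f(r, v, t) of a gas in a gravity field g, with ν the collision frequency and f^(eq) the local equilibrium Maxwellian:

        \frac{\partial f}{\partial t}
        + \mathbf{v}\cdot\nabla_{\mathbf{r}} f
        + \mathbf{g}\cdot\nabla_{\mathbf{v}} f
        = \nu\left(f^{(eq)} - f\right).

    The piecewise continuous ansatz replaces f by different continuous branches in different regions of velocity space, which is what extends the resulting moment equations beyond the Navier-Stokes regime.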

  9. Method to modify random matrix theory using short-time behavior in chaotic systems.

    Science.gov (United States)

    Smith, A Matthew; Kaplan, Lev

    2009-09-01

    We discuss a modification to random matrix theory (RMT) eigenstate statistics that systematically takes into account the nonuniversal short-time behavior of chaotic systems. The method avoids diagonalization of the Hamiltonian, instead requiring only knowledge of short-time dynamics for a chaotic system or ensemble of similar systems. Standard RMT and semiclassical predictions are recovered in the limits of zero Ehrenfest time and infinite Heisenberg time, respectively. As examples, we discuss wave-function autocorrelations and cross correlations and show how the approach leads to a significant improvement in the accuracy for simple chaotic systems where comparison can be made with brute-force diagonalization.

  10. The Inverse Amplitude Method in $\pi\pi$ Scattering in Chiral Perturbation Theory to Two Loops

    CERN Document Server

    Nieves, J; Ruiz-Arriola, E

    2002-01-01

    The inverse amplitude method is used to unitarize the two loop $\pi\pi$ scattering amplitudes of SU(2) Chiral Perturbation Theory in the $I=0, J=0$, $I=1, J=1$ and $I=2, J=0$ channels. An error analysis in terms of the low energy one-loop parameters $\bar l_{1,2,3,4}$ and existing experimental data is undertaken. A comparison to standard resonance saturation values for the two loop coefficients $\bar b_{1,2,3,4,5,6}$ is also carried out. Crossing violations are quantified and the convergence of the expansion is discussed.
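
    Writing the ChPT expansion of a partial wave as t = t_2 + t_4 + t_6 + ..., the IAM resummation takes the following standard forms at one and two loops (quoted from the general IAM literature, not from this paper's own notation):

        t^{\mathrm{IAM}}_{\text{1-loop}} = \frac{t_2^{\,2}}{t_2 - t_4},
        \qquad
        t^{\mathrm{IAM}}_{\text{2-loop}} = \frac{t_2^{\,3}}{t_2^{\,2} - t_2 t_4 + t_4^{\,2} - t_2 t_6}.

    Expanding either expression in powers of the chiral counting reproduces the perturbative series while the resummed form satisfies elastic unitarity exactly.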

  11. Geospatial Big Data Handling Theory and Methods: A Review and Research Challenges

    DEFF Research Database (Denmark)

    Li, Songnian; Dragicevic, Suzana; Anton, François

    2016-01-01

    and varying format of collected geospatial big data presents challenges in storing, managing, processing, analyzing, visualizing and verifying the quality of data. This has implications for the quality of decisions made with big data. Consequently, this position paper of the International Society...... for Photogrammetry and Remote Sensing (ISPRS) Technical Commission II (TC II) revisits the existing geospatial data handling methods and theories to determine if they are still capable of handling emerging geospatial big data. Further, the paper synthesises problems, major issues and challenges with current...

  12. Orbital-free density functional theory implementation with the projector augmented-wave method

    Energy Technology Data Exchange (ETDEWEB)

    Lehtomäki, Jouko; Makkonen, Ilja; Harju, Ari; Lopez-Acevedo, Olga, E-mail: olga.lopez.acevedo@aalto.fi [COMP Centre of Excellence, Department of Applied Physics, Aalto University, P.O. Box 11100, 00076 Aalto (Finland); Caro, Miguel A. [COMP Centre of Excellence, Department of Applied Physics, Aalto University, P.O. Box 11100, 00076 Aalto (Finland); Department of Electrical Engineering and Automation, Aalto University, Espoo (Finland)

    2014-12-21

    We present a computational scheme for orbital-free density functional theory (OFDFT) that simultaneously provides access to all-electron values and preserves the OFDFT linear scaling as a function of the system size. Using the projector augmented-wave method (PAW) in combination with real-space methods, we overcome some obstacles faced by other available implementation schemes. Specifically, the advantages of using the PAW method are twofold. First, PAW reproduces all-electron values, offering freedom in adjusting the convergence parameters, and the atomic setups allow tuning the numerical accuracy per element. Second, PAW can provide a solution to some of the convergence problems exhibited in other OFDFT implementations based on Kohn-Sham (KS) codes. Using PAW and real-space methods, our orbital-free results agree with the reference all-electron values with a mean absolute error of 10 meV, and the number of iterations required by the self-consistent cycle is comparable to that of the KS method. The comparison of all-electron and pseudopotential bulk moduli and lattice constants reveals an enormous difference, demonstrating that in order to assess the performance of OFDFT functionals it is necessary to use implementations that obtain all-electron values. The proposed combination of methods is the most promising route currently available. We finally show that a parametrized kinetic energy functional can give lattice constants and bulk moduli comparable in accuracy to those obtained by the KS PBE method, as exemplified by the case of diamond.
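
    In OFDFT the kinetic energy is an explicit functional of the density. Parametrized functionals of the kind mentioned at the end are typically built from the Thomas-Fermi and von Weizsäcker terms (a generic example on our part, not necessarily the functional tested in the paper; atomic units, with λ the adjustable parameter):

        T[n] = \frac{3}{10}\,(3\pi^2)^{2/3} \int n^{5/3}\,d^3r
        \;+\; \lambda\,\frac{1}{8} \int \frac{|\nabla n|^2}{n}\,d^3r .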

  13. The seismology of geothermal regimes. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Aki, K.

    1997-04-01

    The authors have been developing seismological interpretation theory and methods applicable to complex structures encountered in geothermal areas for a better understanding of the earth's geothermal regimes. The questions they have addressed in their research may be summarized as: "What is going on in the earth's crust under tectonically active regions; what are the structures and processes responsible for such activities as earthquakes and volcanic eruptions; and how can one capture their essence effectively by means of seismological studies?" First, the authors found clear evidence for localization of scattered seismic energy in the deep magmatic system of the volcano on the island of Reunion in the Indian Ocean. The seismic coda of local earthquakes shows concentrated energy in the intrusive zones as late as 30 to 40 seconds after the origin time. This offers a very effective method for defining a zone of strong heterogeneity on a regional scale, complementary to the high-resolution study using trapped modes pursued in the past project. Secondly, the authors identified about 700 long-period events with various frequencies and durations in the data collected during the past 5 years, which included three episodes of eruption. They are applying a finite-element method to the simplest event with the longest period and the shortest duration in order to find the location, geometry and physical properties of its source deterministically. The preliminary result described here suggests that the source may be a horizontally lying magma-filled crack at a shallow depth under the summit area. In addition to the above work on the Reunion data, they have continued theoretical and observational studies of attenuation and scattering of seismic waves.

  14. Supply regimes in fisheries

    DEFF Research Database (Denmark)

    Nielsen, Max

    2006-01-01

    Supply in fisheries is traditionally known for its backward-bending nature, owing to externalities in production. Such a supply regime, however, exists only for pure open access fisheries. Since most fisheries worldwide are neither pure open access nor optimally managed, but rather between the extremes... a bio-economic supply model with mesh sizes is developed. It is found that in the presence of realistic management schemes, the supply curves are close to vertical in the relevant range. Also, the supply curve under open access with mesh size limitations is almost vertical in the relevant range, owing to constant... recruitment. The implications are that the effects on supply following from e.g. trade liberalisation and reductions of subsidies are small in several, and probably most, fisheries worldwide. Keywords: backward-bending supply, regulated open access, regulated restricted access, mesh size regulation, Beverton

  15. Resampling method for applying density-dependent habitat selection theory to wildlife surveys.

    Directory of Open Access Journals (Sweden)

    Olivia Tardy

    Full Text Available Isodar theory can be used to evaluate fitness consequences of density-dependent habitat selection by animals. A typical habitat isodar is a regression curve plotting competitor densities in two adjacent habitats when individual fitness is equal. Despite the increasing use of habitat isodars, their application remains largely limited to areas composed of pairs of adjacent habitats that are defined a priori. We developed a resampling method that uses data from wildlife surveys to build isodars in heterogeneous landscapes without having to predefine habitat types. The method consists in randomly placing blocks over the survey area and dividing those blocks in two adjacent sub-blocks of the same size. Animal abundance is then estimated within the two sub-blocks. This process is done 100 times. Different functional forms of isodars can be investigated by relating animal abundance and differences in habitat features between sub-blocks. We applied this method to abundance data of raccoons and striped skunks, two of the main hosts of rabies virus in North America. Habitat selection by raccoons and striped skunks depended on both conspecific abundance and the difference in landscape composition and structure between sub-blocks. When conspecific abundance was low, raccoons and striped skunks favored areas with relatively high proportions of forests and anthropogenic features, respectively. Under high conspecific abundance, however, both species preferred areas with rather large corn-forest edge densities and corn field proportions. Based on random sampling techniques, we provide a robust method that is applicable to a broad range of species, including medium- to large-sized mammals with high mobility. The method is sufficiently flexible to incorporate multiple environmental covariates that can reflect key requirements of the focal species. We thus illustrate how isodar theory can be used with wildlife surveys to assess density-dependent habitat selection
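
    A stripped-down sketch of the resampling step described above (random blocks split into adjacent sub-block pairs, yielding paired abundances and the habitat difference between sub-blocks for isodar fitting); the grid layout, block size and variable names are illustrative, not the authors' exact protocol:

        import numpy as np

        rng = np.random.default_rng(1)

        def resample_isodar_pairs(abundance, habitat, block=8, n_draws=100):
            """abundance, habitat: 2-D survey rasters on the same grid.
            Returns per-draw (N1, N2, dH): sub-block abundances and the
            difference in mean habitat covariate between the sub-blocks."""
            rows, cols = abundance.shape
            out = []
            for _ in range(n_draws):
                r = rng.integers(0, rows - block)
                c = rng.integers(0, cols - 2 * block)  # split left/right
                sub1 = (slice(r, r + block), slice(c, c + block))
                sub2 = (slice(r, r + block), slice(c + block, c + 2 * block))
                out.append((abundance[sub1].sum(), abundance[sub2].sum(),
                            habitat[sub1].mean() - habitat[sub2].mean()))
            return np.array(out)

        # toy rasters: animal counts and a forest-proportion covariate
        counts = rng.poisson(2.0, size=(100, 100))
        forest = rng.random((100, 100))
        pairs = resample_isodar_pairs(counts, forest)
        print(pairs[:3])

    Regressing N1 against N2 and dH over the draws then gives the isodar without predefining habitat types.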

  16. A comparison of density functional theory and coupled cluster methods for the calculation of electric dipole polarizability gradients of methane

    DEFF Research Database (Denmark)

    Paidarová, Ivana; Sauer, Stephan P. A.

    2012-01-01

    We have compared the performance of density functional theory (DFT) using five different exchange-correlation functionals with four coupled cluster theory based wave function methods in the calculation of geometrical derivatives of the polarizability tensor of methane. The polarizability gradient...

  17. Rheology of simple shear flows of dense granular assemblies in different regimes

    Science.gov (United States)

    Chialvo, Sebastian; Sun, Jin; Sundaresan, Sankaran

    2010-11-01

    Using the discrete element method, simulations of simple shear flow of dense assemblies of frictional particles have been carried out over a range of shear rates and volume fractions in order to characterize the transition from quasistatic or inertial flow to intermediate flow. In agreement with previous results for frictionless spheres [1], the pressure and shear stress in the intermediate regime are found to approach asymptotic power law relations with shear rate; curiously, these asymptotes appear to be common to all intermediate flows regardless of the value of the particle friction coefficient. The scaling relations for stress for the inertial and quasistatic regimes are consistent with a recent extension of kinetic theory to dense inertial flows [2] and a simple model for quasistatic flows [3], respectively. For the case of steady, simple shear flow, the different regimes can be bridged readily: a harmonic weighting function blends the inertial regime to the intermediate asymptote, while a simple additive rule combines the quasistatic and intermediate regimes.
    [1] T. Hatano et al., J. Phys. Soc. Japan 76, 023001 (2007).
    [2] J. Jenkins and D. Berzi, Granular Matter 12, 151 (2010).
    [3] J. Sun and S. Sundaresan, J. Fluid Mech. (under review).
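
    Reading the "harmonic weighting" as a harmonic-mean interpolation (an assumption on our part) and the quasistatic bridge as stated, the two blending rules can be written compactly, with σ standing for either pressure or shear stress:

        \sigma_{\text{blend}}^{-1} = \sigma_{\text{inertial}}^{-1} + \sigma_{\text{intermediate}}^{-1}
        \quad\text{(inertial side)},
        \qquad
        \sigma_{\text{blend}} = \sigma_{\text{quasistatic}} + \sigma_{\text{intermediate}}
        \quad\text{(quasistatic side)}.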

  18. Robustness of the fractal regime for the multiple-scattering structure factor

    Science.gov (United States)

    Katyal, Nisha; Botet, Robert; Puri, Sanjay

    2016-08-01

    In the single-scattering theory of electromagnetic radiation, the fractal regime is a definite range in the photon momentum-transfer q, which is characterized by the scaling-law behavior of the structure factor: S(q) ∝ 1/q^(d_f). This allows a straightforward estimation of the fractal dimension d_f of aggregates in Small-Angle X-ray Scattering (SAXS) experiments. However, this behavior is not commonly studied in optical scattering experiments because of the lack of information on its domain of validity. In the present work, we propose a definition of the multiple-scattering structure factor, which naturally generalizes the single-scattering function S(q). We show that the mean-field theory of electromagnetic scattering provides an explicit condition to interpret the significance of multiple scattering. In this paper, we investigate and discuss electromagnetic scattering by three classes of fractal aggregates. The results obtained from the T-Matrix method show that the fractal scaling range is divided into two domains: (1) a genuine fractal regime, which is robust; (2) a possible anomalous scaling regime, S(q) ∝ 1/q^δ, with exponent δ independent of d_f and related to the way the scattering mechanism uses the local morphology of the scatterer. The recognition, and an analysis, of the latter domain is of importance because it may result in significant reduction of the fractal regime, and brings into question the proper mechanism in the build-up of multiple scattering.

  19. Blind Forensics of Successive Geometric Transformations in Digital Images Using Spectral Method: Theory and Applications.

    Science.gov (United States)

    Chen, Chenglong; Ni, Jiangqun; Shen, Zhaoyi; Shi, Yun Qing

    2017-06-01

    Geometric transformations, such as resizing and rotation, are almost always needed when two or more images are spliced together to create convincing image forgeries. In recent years, researchers have developed many digital forensic techniques to identify these operations. Most previous works in this area focus on the analysis of images that have undergone single geometric transformations, e.g., resizing or rotation. In several recent works, researchers have addressed yet another practical and realistic situation: successive geometric transformations, e.g., repeated resizing, resizing-rotation, rotation-resizing, and repeated rotation. We will also concentrate on this topic in this paper. Specifically, we present an in-depth analysis in the frequency domain of the second-order statistics of the geometrically transformed images. We give an exact formulation of how the parameters of the first and second geometric transformations influence the appearance of periodic artifacts. The expected positions of characteristic resampling peaks are analytically derived. The theory developed here helps to address the gap left by previous works on this topic and is useful for image security and authentication, in particular, the forensics of geometric transformations in digital images. As an application of the developed theory, we present an effective method that allows one to distinguish between the aforementioned four different processing chains. The proposed method can further estimate all the geometric transformation parameters. This may provide useful clues for image forgery detection.
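
    The periodic artifacts analyzed here are classically exposed by inspecting the residual of a local linear predictor in the frequency domain, where resampling leaves characteristic peaks. A generic sketch of such a detector (the fixed predictor kernel and the 1.2x toy factor are our choices, not the authors' estimator):

        import numpy as np

        def resampling_spectrum(img):
            """Spectrum of the residual of a fixed 3x3 linear predictor;
            resampled images show periodic peaks at characteristic positions."""
            k = np.array([[-0.25, 0.5, -0.25],
                          [ 0.50, 0.0,  0.50],
                          [-0.25, 0.5, -0.25]])
            pred = np.zeros_like(img, dtype=float)
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    if (dy, dx) != (0, 0):
                        pred += k[dy + 1, dx + 1] * np.roll(img, (dy, dx), (0, 1))
            residual = img - pred
            return np.abs(np.fft.fftshift(np.fft.fft2(residual)))

        # toy example: upscale a noise image by 1.2x (nearest-neighbour)
        rng = np.random.default_rng(0)
        src = rng.random((200, 200))
        idx = (np.arange(240) / 1.2).astype(int)
        resized = src[idx][:, idx]
        spec = resampling_spectrum(resized.astype(float))
        print(spec.shape)

    The theory summarized in this abstract predicts where such peaks sit after each of the four processing chains, which is what allows both chain discrimination and parameter estimation.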

  20. Some questions on the relationship between theory and research in the application of method of observation

    Directory of Open Access Journals (Sweden)

    Ilić Vladimir

    2014-01-01

    Full Text Available The article discusses the gradual abandonment of efforts to verify hypotheses and complex theoretical assumptions in the social sciences by observation. The first section presents the classical understanding, which emphasized the importance of theoretically directed observation. The second section describes the efforts towards inclusion of the observed in the interpretation of observations. The third section contains an analysis of the impact of today's strict division between qualitative and quantitative methodology. This influence can be seen in the complete separation of structured observation from participatory observation, the disintegration of observation as a research procedure, its replacement by ethnography and the case study method, as well as the abandonment of more general theoretical ambitions within qualitative methodology. The fourth section analyzes the epistemological consequences of efforts to understand fieldwork primarily as a power relationship and to transform the observed into the subjects of research. The development of attitudes on the relationship between theory and research in the application of methods of observation in the social sciences is associated with theoretical eclecticism in the field of contemporary sociological theory and with a distancing from philosophy of science and its understanding of the role of research programs and research traditions in the growth of scientific knowledge. [Project of the Ministry of Science of the Republic of Serbia, No. 179035: Challenges of New Social Integration in Serbia: Concepts and Actors]