Multiple growth regimes: Insights from unified growth theory
Galor, Oded
2007-01-01
Unified Growth Theory uncovers the forces that contributed to the existence of multiple growth regimes and the emergence of convergence clubs. It suggests that differential timing of take-offs from stagnation to growth segmented economies into three fundamental regimes: slow-growing economies in a Malthusian regime, fast-growing countries in a sustained growth regime, and economies in the transition between these regimes. In contrast to existing research that links regime switching thresholds...
Life history theory predicts fish assemblage response to hydrologic regimes.
Mims, Meryl C; Olden, Julian D
2012-01-01
The hydrologic regime is regarded as the primary driver of freshwater ecosystems, structuring the physical habitat template, providing connectivity, framing biotic interactions, and ultimately selecting for specific life histories of aquatic organisms. In the present study, we tested ecological theory predicting directional relationships between major dimensions of the flow regime and life history composition of fish assemblages in perennial free-flowing rivers throughout the continental United States. Using long-term discharge records and fish trait and survey data for 109 stream locations, we found that 11 out of 18 relationships (61%) tested between the three life history strategies (opportunistic, periodic, and equilibrium) and six hydrologic metrics (two each describing flow variability, predictability, and seasonality) were statistically significant (P < 0.05). Directional relationships were largely consistent with predictions for the life history strategies, with 82% of all significant relationships observed supporting predictions from life history theory. Specifically, we found that (1) opportunistic strategists were positively related to measures of flow variability and negatively related to predictability and seasonality, (2) periodic strategists were positively related to high flow seasonality and negatively related to variability, and (3) equilibrium strategists were negatively related to flow variability and positively related to predictability. Our study provides important empirical evidence illustrating the value of using life history theory to understand both the patterns and processes by which fish assemblage structure is shaped by adaptation to natural regimes of variability, predictability, and seasonality of critical flow events over broad biogeographic scales.
Analysis of the Two-Regime Method on Square Meshes
Flegg, Mark B.
2014-01-01
The two-regime method (TRM) has been recently developed for optimizing stochastic reaction-diffusion simulations [M. Flegg, J. Chapman, and R. Erban, J. Roy. Soc. Interface, 9 (2012), pp. 859-868]. It is a multiscale (hybrid) algorithm which uses stochastic reaction-diffusion models with different levels of detail in different parts of the computational domain. The coupling condition on the interface between different modeling regimes of the TRM was previously derived for one-dimensional models. In this paper, the TRM is generalized to higher dimensional reaction-diffusion systems. Coupling Brownian dynamics models with compartment-based models on regular (square) two-dimensional lattices is studied in detail. In this case, the interface between different modeling regimes contains either flat parts or right-angle corners. Both cases are studied in the paper. For flat interfaces, it is shown that the one-dimensional theory can be used along the line perpendicular to the TRM interface. In the direction tangential to the interface, two choices of the TRM parameters are presented. Their applicability depends on the compartment size and the time step used in the molecular-based regime. The two-dimensional generalization of the TRM is also discussed in the case of corners. © 2014 Society for Industrial and Applied Mathematics.
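The coupling idea behind the TRM can be illustrated with a deliberately simplified one-dimensional sketch: compartment-based (lattice) diffusion on one half of the domain, Brownian dynamics on the other, and molecules handed across the interface. The transfer rule below is a naive placeholder, not the derived TRM coupling condition from the paper, and all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

D = 1.0    # diffusion coefficient
K = 10     # compartments of width h on [-1, 0]
h = 0.1
dt = 1e-4  # time step for both regimes

comp = np.full(K, 50)                          # molecule counts per compartment
particles = list(rng.uniform(0.0, 1.0, 500))   # Brownian positions on [0, 1]

def hybrid_step(comp, particles):
    # compartment regime: each molecule attempts a left/right jump
    p_jump = D * dt / h**2        # per-direction jump probability
    new = comp.copy()
    for i in range(K):
        jumps = rng.binomial(comp[i], 2 * p_jump)
        left = rng.binomial(jumps, 0.5)
        right = jumps - left
        new[i] -= jumps
        if i > 0:
            new[i - 1] += left
        else:
            new[i] += left        # reflecting wall at x = -1
        if i < K - 1:
            new[i + 1] += right
        else:
            # naive hand-off across the interface at x = 0:
            # the molecule becomes a Brownian particle just inside [0, h]
            for _ in range(right):
                particles.append(rng.uniform(0.0, h))
    # Brownian regime: Euler-Maruyama free diffusion
    kept = []
    for x in particles:
        x += np.sqrt(2 * D * dt) * rng.normal()
        if x < 0.0:
            new[K - 1] += 1       # crossed the interface: absorbed into last compartment
        elif x > 1.0:
            kept.append(2.0 - x)  # reflecting wall at x = 1
        else:
            kept.append(x)
    return new, kept

total0 = comp.sum() + len(particles)
for _ in range(200):
    comp, particles = hybrid_step(comp, particles)
assert comp.sum() + len(particles) == total0   # molecule number is conserved
```

The conservation check at the end is the minimal sanity test for any such hybrid scheme; the TRM's actual interface condition is chosen so that the scheme also reproduces the correct density profile across the interface, which this naive hand-off does not guarantee.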
Quantum no-scale regimes in string theory
Coudarchet, Thibaut; Fleming, Claude; Partouche, Hervé
2018-05-01
We show that in generic no-scale models in string theory, the flat, expanding cosmological evolutions found at the quantum level can be attracted to a "quantum no-scale regime", where the no-scale structure is restored asymptotically. In this regime, the quantum effective potential is dominated by the classical kinetic energies of the no-scale modulus and dilaton. We find that this natural preservation of the classical no-scale structure at the quantum level occurs when the initial conditions of the evolutions sit in a subcritical region of their space. On the contrary, supercritical initial conditions yield solutions that have no analogue at the classical level. The associated intrinsically quantum universes are sentenced to collapse and their histories last finite cosmic times. Our analysis is done at 1-loop, in perturbative heterotic string compactified on tori, with spontaneous supersymmetry breaking implemented by a stringy version of the Scherk-Schwarz mechanism.
Laser Theory for Optomechanics: Limit Cycles in the Quantum Regime
Directory of Open Access Journals (Sweden)
Niels Lörch
2014-01-01
Optomechanical systems can exhibit self-sustained limit cycles where the quantum state of the mechanical resonator possesses nonclassical characteristics such as a strongly negative Wigner density, as was shown recently in a numerical study by Qian et al. [Phys. Rev. Lett. 109, 253601 (2012)]. Here, we derive a Fokker-Planck equation describing mechanical limit cycles in the quantum regime that correctly reproduces the numerically observed nonclassical features. The derivation starts from the standard optomechanical master equation and is based on techniques borrowed from the laser theory due to Haake and Lewenstein. We compare our analytical model with numerical solutions of the master equation based on Monte Carlo simulations and find very good agreement over a wide and so far unexplored regime of system parameters. As one main conclusion, we predict negative Wigner functions to be observable even for surprisingly classical parameters, i.e., outside the single-photon strong-coupling regime, for strong cavity drive and rather large limit-cycle amplitudes. The approach taken here provides a natural starting point for further studies of quantum effects in optomechanics.
United theory of planet formation (i): Tandem regime
Ebisuzaki, Toshikazu; Imaeda, Yusuke
2017-07-01
The present paper is the first in a series presenting a new united theory of planet formation, which includes magneto-rotational instability and porous aggregation of solid particles in a consistent way. We here describe the "tandem" planet formation regime, in which solar-system-like planetary systems are likely to be produced. We have obtained a steady-state, 1-D model of the accretion disk of a protostar taking into account the magneto-rotational instability (MRI) and porous aggregation of solid particles. We find that the disk is divided into an outer turbulent region (OTR), a MRI suppressed region (MSR), and an inner turbulent region (ITR). The outer turbulent region is fully turbulent because of MRI. However, in the range r_out (= 8-60 AU) from the central star, MRI is suppressed around the midplane of the gas disk and a quiet area without turbulence appears, because the degree of ionization of the gas becomes low enough. The disk becomes fully turbulent again in the range r_in (= 0.2-1 AU), which is called the inner turbulent region, because the midplane temperature becomes high enough (>1000 K) due to gravitational energy release. Planetesimals are formed through gravitational instability at the outer and inner MRI fronts (the boundaries between the MRI suppressed region (MSR) and the outer and inner turbulent regions) without particle enhancement in the original nebula composition, because of the radial concentration of the solid particles. At the outer MRI front, icy particles grow through low-velocity collisions into porous aggregates with low densities (down to ∼10^-5 g cm^-3). They eventually undergo gravitational instability to form icy planetesimals. On the other hand, rocky particles accumulate at the inner MRI front, since their drift velocities turn outward due to the local maximum in gas pressure. They undergo gravitational instability in a sub-disk of pebbles to form rocky planetesimals at the inner MRI front. They are likely
Introducing legal method when teaching stakeholder theory
DEFF Research Database (Denmark)
Buhmann, Karin
2015-01-01
: the Business & Human Rights regime from a UN Global Compact perspective; and mandatory CSR reporting. Supplying integrated teaching notes and generalising on the examples, we explain how legal method may help students of business ethics, organisation and management – future managers – in their analysis...... to the business ethics literature by explaining how legal method complements stakeholder theory for organisational practice....
Esakova, Nataliya
2012-01-01
Nataliya Esakova performs an analysis of the interdependencies and the nature of cooperation between energy producing, consuming and transit countries focusing on the gas sector. For the analysis, the theoretical framework of the interdependence theory by Robert O. Keohane and Joseph S. Nye and the international regime theory are applied to the recent developments within the gas relationship between the European Union and Russia in the last decade. The objective of the analysis is to determine whether a fundamental regime change in terms of international regime theory is taking place and, if so, which regime change explanation model in terms of interdependence theory is likely to apply.
A Review of the Detection Methods for Climate Regime Shifts
Directory of Open Access Journals (Sweden)
Qunqun Liu
2016-01-01
An abrupt climate change means that the climate system shifts from one steady state to another. The study of the phenomenon and theory of abrupt climate change is a new research field of modern climatology, and it is of great significance for the prediction of future climate change. The climate regime shift is one of the most common forms of abrupt climate change, and mainly refers to statistically significant changes in a variable of the climate system at a given time scale. Detection methods can be roughly divided into five categories based on the type of abrupt change, namely, abrupt mean value change, abrupt variance change, abrupt frequency change, abrupt probability density change, and multivariable analysis. The main research progress of abrupt climate change detection methods is reviewed, and some applications of those methods to observational data are provided. With the development of nonlinear science, many new methods have been presented in recent years for detecting an abrupt dynamic change, which are a useful supplement to the abrupt change detection methods.
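A minimal numeric illustration of the most common case above, abrupt mean value change: slide a window along a synthetic series containing a known mean shift and locate the maximum of a two-sample t-statistic. This is a generic sketch in the spirit of sequential t-test approaches, not an implementation of any specific published method; the series and window length are invented.

```python
import numpy as np

rng = np.random.default_rng(42)

# synthetic series: 100 points at mean 0, then an abrupt shift to mean 1.5
x = np.concatenate([rng.normal(0.0, 1.0, 100), rng.normal(1.5, 1.0, 100)])

def detect_mean_shift(series, window=20):
    """Index maximizing a sliding two-sample t-statistic between the
    `window` points before and after each candidate change point."""
    best_t, best_i = 0.0, None
    for i in range(window, len(series) - window):
        a = series[i - window:i]
        b = series[i:i + window]
        pooled = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / window)
        t = abs(b.mean() - a.mean()) / pooled
        if t > best_t:
            best_t, best_i = t, i
    return best_i, best_t

i_hat, t_stat = detect_mean_shift(x)
print(i_hat, round(t_stat, 2))
```

In practice the t-statistic would be compared against a significance threshold, and autocorrelation of the climate series (ignored here) inflates the false-alarm rate, which is one reason the review distinguishes several families of methods.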
The theory and simulation of relativistic electron beam transport in the ion-focused regime
International Nuclear Information System (INIS)
Swanekamp, S.B.; Holloway, J.P.; Kammash, T.; Gilgenbach, R.M.
1992-01-01
Several recent experiments involving relativistic electron beam (REB) transport in plasma channels show two density regimes for efficient transport: a low-density regime known as the ion-focused regime (IFR) and a high-pressure regime. This paper uses three separate models to explain the dependence of REB transport efficiency on the plasma density in the IFR. Conditions for efficient beam transport are determined by examining equilibrium solutions of the Vlasov-Maxwell equations under conditions relevant to IFR transport. The dynamic force balance required for efficient IFR transport is studied using the particle-in-cell (PIC) method. These simulations provide new insight into the transient beam front physics as well as the dynamic approach to IFR equilibrium. Nonlinear solutions to the beam envelope are constructed to explain oscillations in the beam envelope observed in the PIC simulations but not contained in the Vlasov equilibrium analysis. A test particle analysis is also developed as a method to visualize equilibrium solutions of the Vlasov equation. This not only provides further insight into the transport mechanism but also illustrates the connections between the three theories used to describe IFR transport. Separately these models provide valuable information about transverse beam confinement; together they provide a clear physical understanding of REB transport in the IFR.
Evaluation and Comparison of Extremal Hypothesis-Based Regime Methods
Directory of Open Access Journals (Sweden)
Ishwar Joshi
2018-03-01
Regime channels are important for stable canal design and for determining river response to environmental changes, e.g., due to the construction of a dam, land use change, or climate shifts. A plethora of methods is available describing the hydraulic geometry of alluvial rivers in regime. However, a comparison of these methods using the same set of data has been lacking. In this study, we evaluate and compare four different extremal hypothesis-based regime methods, namely minimization of Froude number (MFN), maximum entropy and minimum energy dissipation rate (ME and MEDR), maximum flow efficiency (MFE), and Millar's method, by dividing regime channel data into sand and gravel beds. The results show that for sand bed channels MFN gives a very high accuracy of prediction for regime channel width and depth. For gravel bed channels we find that MFN and 'ME and MEDR' give a very high accuracy of prediction for width and depth. Therefore the notion that extremal hypotheses which do not contain bank stability criteria are inappropriate for use is shown to be false, as both MFN and 'ME and MEDR' lack bank stability criteria. We also find that bank vegetation has a significant influence on the prediction of hydraulic geometry by MFN and 'ME and MEDR'.
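The flavour of an extremal hypothesis can be shown with a hedged toy version of MFN for a rectangular channel: for each candidate width, solve Manning's equation for the depth that carries the design discharge, then select the geometry minimizing the Froude number. The roughness, slope and discharge values are illustrative only, and the sketch omits the sediment and bank constraints a real regime method would include.

```python
import math

def manning_depth(Q, w, n=0.03, S=1e-4):
    """Depth of a rectangular channel of width w carrying discharge Q,
    from Manning's equation Q = (1/n) A R^(2/3) sqrt(S) with area A = w*d
    and hydraulic radius R = A / (w + 2d). Solved by bisection, since the
    flow is monotonically increasing in depth."""
    def flow(d):
        A = w * d
        R = A / (w + 2 * d)
        return (1.0 / n) * A * R ** (2.0 / 3.0) * math.sqrt(S)
    lo, hi = 1e-9, 100.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if flow(mid) < Q:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def mfn_geometry(Q, widths, g=9.81):
    """Among candidate widths, pick the geometry minimizing the Froude
    number Fr = V / sqrt(g d) at the design discharge."""
    best = None
    for w in widths:
        d = manning_depth(Q, w)
        V = Q / (w * d)
        Fr = V / math.sqrt(g * d)
        if best is None or Fr < best[2]:
            best = (w, d, Fr)
    return best

w, d, Fr = mfn_geometry(Q=100.0, widths=range(5, 101))
print(w, round(d, 2), round(Fr, 3))
```

With no sediment or bank-stability constraint the minimum can sit at the edge of the candidate range, which is exactly why the paper's question of whether such criteria are needed is nontrivial.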
Tautomerism methods and theories
Antonov, Liudmil
2013-01-01
Covering the gap between basic textbooks and over-specialized scientific publications, this is the first reference available to describe this interdisciplinary topic for PhD students and scientists starting in the field. The result is an introductory description providing suitable practical examples of the basic methods used to study tautomeric processes, as well as the theories describing the tautomerism and proton transfer phenomena. It also includes different spectroscopic methods for examining tautomerism, such as UV-Vis, time-resolved fluorescence spectroscopy, and NMR spectroscopy.
Democracy as a Middle Ground: A Unified Theory of Development and Political Regimes
Larsson, Anna; Parente, Stephen
2010-01-01
A large literature documents that autocratic regimes have not, on average, outperformed democratic regimes, although they do display greater variance in economic performance. At the same time, no long-lived autocracy currently is rich whereas every long-lived democracy is. This paper puts forth a theory to account for these observations. The theory rests on the idea that autocratic leaders are heterogeneous in their preferences and the idea that special interest groups can successfully lobby a...
Enforcing the climate regime: Game theory and the Marrakesh Accords
Energy Technology Data Exchange (ETDEWEB)
Hovi, Jon
2002-07-01
The article reviews basic insights about compliance and ''hard'' enforcement that can be derived from various non-cooperative equilibrium concepts and evaluates the Marrakesh Accords in light of these insights. Five different notions of equilibrium are considered: the Nash equilibrium, the subgame perfect equilibrium, the renegotiation-proof equilibrium, the coalition-proof equilibrium and the perfect Bayesian equilibrium. These various types of equilibrium have a number of implications for effective enforcement: 1. Consequences of non-compliance should be more than proportionate. 2. To be credible, punishment needs to take place on the Pareto frontier, rather than by reversion to some suboptimal state. 3. An effective enforcement system must be able to curb collective as well as individual incentives to cheat. 4. A fully transparent enforcement regime could in fact turn out to be detrimental for compliance levels. It is concluded that constructing an effective system for ''hard'' enforcement of the Kyoto Protocol is a formidable task that has only partially been accomplished by the Marrakesh Accords. A possible explanation is that the design of a compliance system for the climate regime involved a careful balancing of the desire to minimise non-compliance against other important considerations. (Author)
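Insight 1 above (consequences should be more than proportionate) has a one-line arithmetic core: a one-period gain from cheating is weighed against a punishment that arrives later and is therefore discounted. The toy check below only illustrates that logic, not the Marrakesh compliance mechanism; all numbers are invented.

```python
def deterred(gain, penalty_per_period, periods, discount):
    """One-shot gain from non-compliance vs. the discounted value of a
    punishment lasting `periods` rounds. Deterrence holds only when the
    discounted punishment outweighs the gain (toy model)."""
    punishment = sum(penalty_per_period * discount**t
                     for t in range(1, periods + 1))
    return punishment > gain

# a proportionate penalty (1:1, one period) fails to deter once discounted:
# 10 * 0.9 = 9 < 10
assert not deterred(gain=10.0, penalty_per_period=10.0, periods=1, discount=0.9)
# a more-than-proportionate penalty restores deterrence: 13 * 0.9 = 11.7 > 10
assert deterred(gain=10.0, penalty_per_period=13.0, periods=1, discount=0.9)
```

Discounting is what turns "proportionate" on paper into "insufficient" in practice, which is the game-theoretic rationale for the more-than-proportionate restoration rate in the Accords.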
Sedimentological regimes for turbidity currents: Depth-averaged theory
Halsey, Thomas C.; Kumar, Amit; Perillo, Mauricio M.
2017-07-01
Turbidity currents are one of the most significant means by which sediment is moved from the continents into the deep ocean; their properties are interesting both as elements of the global sediment cycle and due to their role in contributing to the formation of deep water oil and gas reservoirs. One of the simplest models of the dynamics of turbidity current flow was introduced three decades ago, and is based on depth-averaging of the fluid mechanical equations governing the turbulent gravity-driven flow of relatively dilute turbidity currents. We examine the sedimentological regimes of a simplified version of this model, focusing on the role of the Richardson number Ri [dimensionless inertia] and Rouse number Ro [dimensionless sedimentation velocity] in determining whether a current is net depositional or net erosional. We find that for large Rouse numbers, the currents are strongly net depositional due to the disappearance of local equilibria between erosion and deposition. At lower Rouse numbers, the Richardson number also plays a role in determining the degree of erosion versus deposition. The currents become more erosive at lower values of the product Ro × Ri, due to the effect of clear water entrainment. At higher values of this product, the turbulence becomes insufficient to maintain the sediment in suspension, as first pointed out by Knapp and Bagnold. We speculate on the potential for two-layer solutions in this insufficiently turbulent regime, which would comprise substantial bedload flow with an overlying turbidity current.
Neoclassical kinetic theory near an X point: Plateau regime
International Nuclear Information System (INIS)
Solano, E.R.; Hazeltine, R.D.
1994-01-01
Traditionally, neoclassical transport calculations ignore poloidal variation of the poloidal magnetic field. Near an X point of the confining field of a diverted plasma, the poloidal field is small, causing guiding centers to linger at that poloidal position. A study of how neoclassical transport is affected by this differential shaping is presented. The problem is solved in general in the plateau regime, and a model poloidal flux function with an X point is utilized as an analytic example to show that the plateau diffusion coefficient can change considerably (factor of 2 reduction). Ion poloidal rotation is proportional to the local value of B_pol but otherwise is not strongly affected by shaping. The usual favorable scaling of neoclassical confinement time with plasma current is unaffected by the X point.
Study of the Transition Flow Regime using Monte Carlo Methods
Hassan, H. A.
1999-01-01
This NASA Cooperative Agreement presents a study of the transition flow regime using Monte Carlo methods. The topics included in this final report are: 1) New Direct Simulation Monte Carlo (DSMC) procedures; 2) The DS3W and DS2A Programs; 3) Papers presented; 4) Miscellaneous Applications and Program Modifications; 5) Solution of Transitional Wake Flows at Mach 10; and 6) Turbulence Modeling of Shock-Dominated Flows with a k-Enstrophy Formulation.
OPTIMIZATION OF TAX REGIME USING THE INSTRUMENT OF GAME THEORY
Directory of Open Access Journals (Sweden)
Igor Yu. Pelevin
2014-01-01
The article is devoted to a possible mechanism for optimizing the taxation of agricultural enterprises using game theory. This mechanism makes it possible to apply the type of taxation that would benefit both the taxpayer and the government. The article also offers a definition of the tax storage and discusses its possible applications.
Optically levitating dielectrics in the quantum regime: Theory and protocols
International Nuclear Information System (INIS)
Romero-Isart, O.; Pflanzer, A. C.; Cirac, J. I.; Juan, M. L.; Quidant, R.; Kiesel, N.; Aspelmeyer, M.
2011-01-01
We provide a general quantum theory to describe the coupling of light with the motion of a dielectric object inside a high-finesse optical cavity. In particular, we derive the total Hamiltonian of the system as well as a master equation describing the state of the center-of-mass mode of the dielectric and the cavity-field mode. In addition, a quantum theory of elasticity is used to study the coupling of the center-of-mass motion with internal vibrational excitations of the dielectric. This general theory is applied to the recent proposal of using an optically levitating nanodielectric as a cavity optomechanical system [see Romero-Isart et al., New J. Phys. 12, 033015 (2010); Chang et al., Proc. Natl. Acad. Sci. USA 107, 1005 (2010)]. On this basis, we also design a light-mechanics interface to prepare non-Gaussian states of the mechanical motion, such as quantum superpositions of Fock states. Finally, we introduce a direct mechanical tomography scheme to probe these genuine quantum states by time-of-flight experiments.
Adiabatic perturbation theory for atoms and molecules in the low-frequency regime.
Martiskainen, Hanna; Moiseyev, Nimrod
2017-12-14
There is an increasing interest in the photoinduced dynamics in the low frequency, ω, regime. The multiphoton absorptions by molecules in strong laser fields depend on the polarization of the laser and on the molecular structure. The unique properties of the interaction of atoms and molecules with lasers in the low-frequency regime imply new concepts and directions in strong-field light-matter interactions. Here we present a perturbational approach for the calculations of the quasi-energy spectrum in the low-frequency regime, which avoids the construction of the Floquet operator with an extremely large number of Floquet channels. The zero-order Hamiltonian in our perturbational approach is the adiabatic Hamiltonian where the atoms/molecules are exposed to a dc electric field rather than to an ac-field. This is in the spirit of the first step in the Corkum three-step model. The second-order perturbation correction terms are obtained when iℏω ∂/∂τ serves as a perturbation and τ is a dimensionless variable. The second-order adiabatic perturbation scheme is found to be an excellent approach for calculating the ac-field Floquet solutions in our test case studies of a simple one-dimensional time-periodic model Hamiltonian. It is straightforward to implement the perturbation approach presented here for calculating atomic and molecular energy shifts (positions) due to the interaction with low-frequency ac-fields using high-level electronic structure methods. This is enabled since standard quantum chemistry packages allow the calculations of atomic and molecular energy shifts due to the interaction with dc-fields. In addition to the shift of the energy positions, the energy widths (inverse lifetimes) can be obtained at the same level of theory. These energy shifts are functions of the laser parameters (low frequency, intensity, and polarization).
Adaptive two-regime method: Application to front propagation
Energy Technology Data Exchange (ETDEWEB)
Robinson, Martin, E-mail: martin.robinson@maths.ox.ac.uk; Erban, Radek, E-mail: erban@maths.ox.ac.uk [Mathematical Institute, University of Oxford, Andrew Wiles Building, Radcliffe Observatory Quarter, Woodstock Road, Oxford OX2 6GG (United Kingdom); Flegg, Mark, E-mail: mark.flegg@monash.edu [School of Mathematical Sciences, Faculty of Science, Monash University Wellington Road, Clayton, Victoria 3800 (Australia)
2014-03-28
The Adaptive Two-Regime Method (ATRM) is developed for hybrid (multiscale) stochastic simulation of reaction-diffusion problems. It efficiently couples detailed Brownian dynamics simulations with coarser lattice-based models. The ATRM is a generalization of the previously developed Two-Regime Method [Flegg et al., J. R. Soc., Interface 9, 859 (2012)] to multiscale problems which require a dynamic selection of regions where detailed Brownian dynamics simulation is used. Typical applications include front propagation or spatio-temporal oscillations. In this paper, the ATRM is used for an in-depth study of front propagation in a stochastic reaction-diffusion system which has its mean-field model given in terms of the Fisher equation [R. Fisher, Ann. Eugen. 7, 355 (1937)]. It exhibits a travelling reaction front which is sensitive to stochastic fluctuations at the leading edge of the wavefront. Previous studies into stochastic effects on the Fisher wave propagation speed have focused on lattice-based models, but there has been limited progress using off-lattice (Brownian dynamics) models, which suffer due to their high computational cost, particularly at the high molecular numbers that are necessary to approach the Fisher mean-field model. By modelling only the wavefront itself with the off-lattice model, it is shown that the ATRM leads to the same Fisher wave results as purely off-lattice models, but at a fraction of the computational cost. The error analysis of the ATRM is also presented for a morphogen gradient model.
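The mean-field limit the ATRM study compares against can be reproduced in a few lines: an explicit finite-difference solution of the Fisher equation u_t = u_xx + u(1 - u), whose fronts travel at the minimum pulled-front speed 2 in these nondimensional units. The grid and time-step choices below are illustrative, and this deterministic check contains none of the stochastic front effects the ATRM is designed to capture.

```python
import numpy as np

# explicit finite-difference solution of the Fisher equation u_t = u_xx + u(1-u)
L, nx = 200.0, 2001
dx = L / (nx - 1)
dt = 0.2 * dx * dx            # stable for the explicit diffusion scheme
x = np.linspace(0.0, L, nx)
u = (x < 10.0).astype(float)  # step initial condition launches a front

def front_position(u):
    # first grid point where the (decreasing) profile drops below 1/2
    return x[np.argmax(u < 0.5)]

t, t_end, t_mark = 0.0, 50.0, 25.0
pos_mark = None
while t < t_end:
    lap = np.zeros_like(u)
    lap[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2
    lap[0], lap[-1] = lap[1], lap[-2]   # crude zero-flux treatment at the ends
    u = u + dt * (lap + u * (1 - u))
    t += dt
    if pos_mark is None and t >= t_mark:
        pos_mark = front_position(u)    # mark the front once transients decay

speed = (front_position(u) - pos_mark) / (t_end - t_mark)
print(round(speed, 2))
```

The measured speed approaches 2 slowly from below, consistent with the known logarithmic correction for pulled fronts; stochastic simulations at finite molecule numbers show a further systematic slowdown, which is the effect the ATRM makes affordable to study.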
Semiclassical methods in field theories
International Nuclear Information System (INIS)
Ventura, I.
1978-10-01
A new scheme is proposed for semi-classical quantization in field theory - the expansion about the charge (EAC) - which is developed within the canonical formalism. This method is suitable for quantizing theories that are invariant under global gauge transformations. It is used in the treatment of the non-relativistic logarithmic theory that was proposed by Bialynicki-Birula and Mycielski - a theory we can formulate in any number of spatial dimensions. The non-linear Schroedinger equation is also quantized by means of the EAC. The classical logarithmic theories - both the non-relativistic and the relativistic one - are studied in detail. It is shown that the Bohr-Sommerfeld quantization rule (BSQR) in field theory is, in many cases, equivalent to charge quantization. This rule is then applied to the massive Thirring Model and the logarithmic theories. The BSQR can be seen as a simplified and non-local version of the EAC.
Projector Method: theory and examples
International Nuclear Information System (INIS)
Dahl, E.D.
1985-01-01
The Projector Method technique for numerically analyzing lattice gauge theories was developed to take advantage of certain simplifying features of gauge theory models. Starting from a very general notion of what the Projector Method is, the techniques are applied to several model problems. After these examples have traced the development of the actual algorithm from the general principles of the Projector Method, a direct comparison between the Projector and the Euclidean Monte Carlo is made, followed by a discussion of the application to Periodic Quantum Electrodynamics in two and three spatial dimensions. Some methods for improving the efficiency of the Projector in various circumstances are outlined. 10 refs., 7 figs
Approximation methods in probability theory
Čekanavičius, Vydas
2016-01-01
This book presents a wide range of well-known and less common methods used for estimating the accuracy of probabilistic approximations, including the Esseen type inversion formulas, the Stein method as well as the methods of convolutions and triangle function. Emphasising the correct usage of the methods presented, each step required for the proofs is examined in detail. As a result, this textbook provides valuable tools for proving approximation theorems. While Approximation Methods in Probability Theory will appeal to everyone interested in limit theorems of probability theory, the book is particularly aimed at graduate students who have completed a standard intermediate course in probability theory. Furthermore, experienced researchers wanting to enlarge their toolkit will also find this book useful.
The epsilon regime of chiral perturbation theory with Wilson-type fermions
Energy Technology Data Exchange (ETDEWEB)
Jansen, K. [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany). John von Neumann-Inst. fuer Computing NIC; Shindler, A. [Liverpool Univ. (United Kingdom). Theoretical Physics Division
2009-11-15
In this proceedings contribution we report on the ongoing effort to simulate Wilson-type fermions in the so-called epsilon regime of chiral perturbation theory (cPT). We present results for the chiral condensate and the pseudoscalar decay constant obtained with Wilson twisted mass fermions employing two lattice spacings, two different physical volumes and several quark masses. With this set of simulations we make a first attempt to estimate the systematic uncertainties. (orig.)
The epsilon regime of chiral perturbation theory with Wilson-type fermions
International Nuclear Information System (INIS)
Jansen, K.; Shindler, A.
2009-11-01
In this proceedings contribution we report on the ongoing effort to simulate Wilson-type fermions in the so-called epsilon regime of chiral perturbation theory (cPT). We present results for the chiral condensate and the pseudoscalar decay constant obtained with Wilson twisted mass fermions employing two lattice spacings, two different physical volumes and several quark masses. With this set of simulations we make a first attempt to estimate the systematic uncertainties. (orig.)
Analytic theory of alternate multilayer gratings operating in single-order regime.
Yang, Xiaowei; Kozhevnikov, Igor V; Huang, Qiushi; Wang, Hongchang; Hand, Matthew; Sawhney, Kawal; Wang, Zhanshan
2017-07-10
Using the coupled wave approach (CWA), we introduce an analytical theory for alternate multilayer gratings (AMGs) operating in the single-order regime, in which only one diffraction order is excited. Differing from previous studies that analogized the AMG to a crystal, we conclude that a symmetrical structure, i.e. equal thicknesses of the two multilayer materials, is not the optimal design for an AMG and may result in a significant reduction in diffraction efficiency. The peculiarities of AMGs compared with other multilayer gratings are analyzed, and the influence of the multilayer materials on diffraction efficiency is considered. The validity conditions of the analytical theory are also discussed.
Informetrics theory, methods and applications
Qiu, Junping; Yang, Siluo; Dong, Ke
2017-01-01
This book provides an accessible introduction to the history, theory and techniques of informetrics. Divided into 14 chapters, it develops informetrics in terms of its theory, methods and applications; systematically analyzes the six basic laws and the theoretical basis of informetrics; and presents quantitative analysis methods such as citation analysis and computer-aided analysis. It also discusses applications in information resource management, information and library science, science of science, scientific evaluation and forecasting. Lastly, it describes a new development in informetrics: webometrics. Providing a comprehensive overview of the complex issues in today's environment, this book is a valuable resource for all researchers, students and practitioners in library and information science.
A multidimensional theory for electron trapping by a plasma wake generated in the bubble regime
International Nuclear Information System (INIS)
Kostyukov, I; Nerush, E; Pukhov, A; Seredov, V
2010-01-01
We present a theory for electron self-injection in nonlinear, multidimensional plasma waves excited by a short laser pulse in the bubble regime or by a short electron beam in the blowout regime. In these regimes, which are typical of recent electron-acceleration experiments, the laser radiation pressure or the electron beam charge expels plasma electrons from some region, forming a plasma cavity, or bubble, with a huge ion charge. Plasma electrons can be trapped in the bubble and accelerated by the plasma wakefields up to very high energies. We derive the condition for electron trapping in the bubble. The developed theory predicts the trapping cross section in terms of the bubble radius and the bubble velocity. It is found that the dynamic bubble deformations observed in three-dimensional (3D) particle-in-cell (PIC) simulations influence the trapping process significantly. Bubble elongation reduces the gamma-factor of the bubble, thereby strongly enhancing self-injection. The obtained analytical results are in good agreement with the 3D PIC simulations.
Operator theory and numerical methods
Fujita, H; Suzuki, T
2001-01-01
In accordance with the developments in computation, theoretical studies on numerical schemes are now fruitful and highly needed. In 1991 an article on the finite element method applied to evolutionary problems was published. Following the method, basically this book studies various schemes from operator theoretical points of view. Many parts are devoted to the finite element method, but other schemes and problems (charge simulation method, domain decomposition method, nonlinear problems, and so forth) are also discussed, motivated by the observation that practically useful schemes have fine mathematical structures and the converses are also true. This book has the following chapters: 1. Boundary Value Problems and FEM. 2. Semigroup Theory and FEM. 3. Evolution Equations and FEM. 4. Other Methods in Time Discretization. 5. Other Methods in Space Discretization. 6. Nonlinear Problems. 7. Domain Decomposition Method.
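As a minimal illustration of the finite element schemes the book devotes most attention to, the sketch below solves -u'' = f on (0, 1) with homogeneous Dirichlet conditions using piecewise-linear elements on a uniform grid; the right-hand side (chosen so the exact solution is sin(πx)), the grid size, and the lumped load quadrature are illustrative assumptions:

```python
import numpy as np

# -u'' = f on (0, 1), u(0) = u(1) = 0, with f chosen so u(x) = sin(pi x)
n = 50                       # number of interior nodes
h = 1.0 / (n + 1)
x = np.linspace(h, 1 - h, n)
f = np.pi ** 2 * np.sin(np.pi * x)

# piecewise-linear ("hat") basis: stiffness matrix is (1/h) * tridiag(-1, 2, -1)
K = (np.diag(2 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h
b = h * f                    # lumped load vector (one-point quadrature per node)

u = np.linalg.solve(K, b)
err = np.max(np.abs(u - np.sin(np.pi * x)))
print(err)                   # second-order accurate in h
```

With the exact load vector this discretization would even be nodally exact in 1D; the lumped quadrature used here keeps the sketch short while retaining O(h²) accuracy.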
Topological methods in gauge theory
International Nuclear Information System (INIS)
Sarukkai, S.R.
1992-01-01
The author begins with an overview of the important topological methods used in gauge theory. In the first chapter, the author discusses the general structure of fiber bundles and associated mathematical concepts and briefly discuss their application in gauge theory. The second chapter deals with the study of instantons in both gauge and gravity theories. These self-dual solutions are presented. This chapter is also a broad introduction to certain topics in gravitational physics. Gravity and gauge theory are unified in Kaluza-Klein theory as discussed in the third chapter. Of particular interest is the physics of the U(1) bundles over non-trivial manifolds. The radius of the fifth dimension is undetermined classically in the Kaluza-Klein theory. A mechanism is described using topological information to derive the functional form of the radius of the fifth dimension and show that it is possible classically to derive expressions for the radius as a consequence of topology. The behavior of the radius is dependent on the information present in the base metric. Results are computed for three gravitational instantons. Consequences of this mechanism are discussed. The description is studied of instantons in terms of projector valued fields and universal bundles. The results of the previous chapter and this are connected via the study of universal bundles. Projector valued transformations are defined and their consequences discussed. With the solutions of instantons in this formalism, it is shown explicitly that there can be solutions which allow for a Sp(n) instanton to be transformed to a Sp(k) instanton, thus showing that there can be interpolations which carry one instanton with a rank n to another characterized by rank k with different topological numbers
Quantum fields in the non-perturbative regime. Yang-Mills theory and gravity
Energy Technology Data Exchange (ETDEWEB)
Eichhorn, Astrid
2011-09-06
In this thesis we study candidates for fundamental quantum field theories, namely non-Abelian gauge theories and asymptotically safe quantum gravity. Whereas the first ones have a stronglyinteracting low-energy limit, the second one enters a non-perturbative regime at high energies. Thus, we apply a tool suited to the study of quantum field theories beyond the perturbative regime, namely the Functional Renormalisation Group. In a first part, we concentrate on the physical properties of non-Abelian gauge theories at low energies. Focussing on the vacuum properties of the theory, we present an evaluation of the full effective potential for the field strength invariant F{sub {mu}}{sub {nu}}F{sup {mu}}{sup {nu}} from non-perturbative gauge correlation functions and find a non-trivial minimum corresponding to the existence of a dimension four gluon condensate in the vacuum. We also relate the infrared asymptotic form of the {beta} function of the running background-gauge coupling to the asymptotic behavior of Landau-gauge gluon and ghost propagators and derive an upper bound on their scaling exponents. We then consider the theory at finite temperature and study the nature of the confinement phase transition in d = 3+1 dimensions in various non-Abelian gauge theories. For SU(N) with N= 3,..,12 and Sp(2) we find a first-order phase transition in agreement with general expectations. Moreover our study suggests that the phase transition in E(7) Yang-Mills theory also is of first order. Our studies shed light on the question which property of a gauge group determines the order of the phase transition. In a second part we consider asymptotically safe quantum gravity. Here, we focus on the Faddeev-Popov ghost sector of the theory, to study its properties in the context of an interacting UV regime. We investigate several truncations, which all lend support to the conjecture that gravity may be asymptotically safe. In a first truncation, we study the ghost anomalous dimension
Quantum fields in the non-perturbative regime. Yang-Mills theory and gravity
International Nuclear Information System (INIS)
Eichhorn, Astrid
2011-01-01
In this thesis we study candidates for fundamental quantum field theories, namely non-Abelian gauge theories and asymptotically safe quantum gravity. Whereas the first ones have a stronglyinteracting low-energy limit, the second one enters a non-perturbative regime at high energies. Thus, we apply a tool suited to the study of quantum field theories beyond the perturbative regime, namely the Functional Renormalisation Group. In a first part, we concentrate on the physical properties of non-Abelian gauge theories at low energies. Focussing on the vacuum properties of the theory, we present an evaluation of the full effective potential for the field strength invariant F μν F μν from non-perturbative gauge correlation functions and find a non-trivial minimum corresponding to the existence of a dimension four gluon condensate in the vacuum. We also relate the infrared asymptotic form of the β function of the running background-gauge coupling to the asymptotic behavior of Landau-gauge gluon and ghost propagators and derive an upper bound on their scaling exponents. We then consider the theory at finite temperature and study the nature of the confinement phase transition in d = 3+1 dimensions in various non-Abelian gauge theories. For SU(N) with N= 3,..,12 and Sp(2) we find a first-order phase transition in agreement with general expectations. Moreover our study suggests that the phase transition in E(7) Yang-Mills theory also is of first order. Our studies shed light on the question which property of a gauge group determines the order of the phase transition. In a second part we consider asymptotically safe quantum gravity. Here, we focus on the Faddeev-Popov ghost sector of the theory, to study its properties in the context of an interacting UV regime. We investigate several truncations, which all lend support to the conjecture that gravity may be asymptotically safe. In a first truncation, we study the ghost anomalous dimension which we find to be negative at the
Geometrical methods in learning theory
International Nuclear Information System (INIS)
Burdet, G.; Combe, Ph.; Nencka, H.
2001-01-01
The methods of information theory provide natural approaches to learning algorithms in the case of stochastic formal neural networks. Most of the classical techniques are based on some extremization principle. A geometrical interpretation of the associated algorithms provides a powerful tool for understanding the learning process and its stability and offers a framework for discussing possible new learning rules. An illustration is given using sequential and parallel learning in the Boltzmann machine
Extrapolation methods theory and practice
Brezinski, C
1991-01-01
This volume is a self-contained, exhaustive exposition of the extrapolation methods theory, and of the various algorithms and procedures for accelerating the convergence of scalar and vector sequences. Many subroutines (written in FORTRAN 77) with instructions for their use are provided on a floppy disk in order to demonstrate to those working with sequences the advantages of the use of extrapolation methods. Many numerical examples showing the effectiveness of the procedures and a consequent chapter on applications are also provided - including some never before published results and applicat
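The simplest member of the family of sequence transformations the book covers is Aitken's delta-squared process. A minimal sketch, applied to a linearly convergent fixed-point iteration (the choice of iteration x → cos x is an illustrative assumption, not an example from the book):

```python
import math

def aitken(seq):
    """Apply Aitken's delta-squared process to a list of iterates."""
    out = []
    for s0, s1, s2 in zip(seq, seq[1:], seq[2:]):
        denom = s2 - 2 * s1 + s0
        out.append(s2 - (s2 - s1) ** 2 / denom if denom != 0 else s2)
    return out

# linearly convergent fixed-point iteration x = cos(x)
x, xs = 1.0, []
for _ in range(10):
    xs.append(x)
    x = math.cos(x)

accel = aitken(xs)
root = 0.7390851332151607  # fixed point of cos(x)
print(abs(xs[-1] - root), abs(accel[-1] - root))
```

The accelerated sequence reaches a given accuracy in far fewer terms than the raw iterates, which is the effect the book's more general scalar and vector algorithms systematize.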
Variational methods for field theories
Energy Technology Data Exchange (ETDEWEB)
Ben-Menahem, S.
1986-09-01
Four field theory models are studied: Periodic Quantum Electrodynamics (PQED) in (2 + 1) dimensions, free scalar field theory in (1 + 1) dimensions, the Quantum XY model in (1 + 1) dimensions, and the (1 + 1)-dimensional Ising model in a transverse magnetic field. The last three parts deal exclusively with variational methods; the PQED part involves mainly the path-integral approach. The PQED calculation results in a better understanding of the connection between electric confinement through monopole screening and confinement through tunneling between degenerate vacua. This includes a better quantitative agreement for the string tensions in the two approaches. Free field theory is used as a laboratory for a new variational blocking-truncation approximation, in which the high-frequency modes in a block are truncated to wave functions that depend on the slower background modes (Born-Oppenheimer approximation). This "adiabatic truncation" method gives very accurate results for ground-state energy density and correlation functions. Various adiabatic schemes, with one variable kept per site and then two variables per site, are used. For the XY model, several trial wave functions for the ground state are explored, with an emphasis on the periodic Gaussian. A connection is established with the vortex Coulomb gas of the Euclidean path-integral approach. The approximations used are taken from the realms of statistical mechanics (mean field approximation, transfer-matrix methods) and of quantum mechanics (iterative blocking schemes). In developing blocking schemes based on continuous variables, problems due to the periodicity of the model were solved. Our results exhibit an order-disorder phase transition. The transfer-matrix method is used to find a good (non-blocking) trial ground state for the Ising model in a transverse magnetic field in (1 + 1) dimensions.
Biometrics Theory, Methods, and Applications
Boulgouris, N V; Micheli-Tzanakou, Evangelia
2009-01-01
An in-depth examination of the cutting edge of biometrics. This book fills a gap in the literature by detailing the recent advances and emerging theories, methods, and applications of biometric systems in a variety of infrastructures. Edited by a panel of experts, it provides comprehensive coverage of:. Multilinear discriminant analysis for biometric signal recognition;. Biometric identity authentication techniques based on neural networks;. Multimodal biometrics and design of classifiers for biometric fusion;. Feature selection and facial aging modeling for face recognition;. Geometrical and
Statistical methods in nuclear theory
International Nuclear Information System (INIS)
Shubin, Yu.N.
1974-01-01
The paper outlines statistical methods which are widely used for describing properties of excited states of nuclei and nuclear reactions. It discusses the physical assumptions underlying the known distributions of spacings between levels (the Wigner and Poisson distributions) and of widths of highly excited states (the Porter-Thomas distribution), as well as assumptions used in the statistical theory of nuclear reactions and in fluctuation analysis. The author considers the random matrix method, which consists in replacing the matrix elements of a residual interaction by random variables with a simple statistical distribution. Experimental data are compared with results of calculations using the statistical model. The superfluid nucleus model is considered with regard to superconducting-type pair correlations.
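The contrast between the Wigner and Poisson spacing distributions mentioned above can be made concrete by sampling. A minimal sketch (sample sizes and the 0.1 threshold are illustrative assumptions) draws spacings from the Wigner surmise P(s) = (π/2) s exp(-πs²/4) by inverse-CDF sampling and compares the fraction of small gaps with Poisson-distributed levels, whose spacings are exponential:

```python
import math, random

random.seed(0)

def wigner_spacing():
    # inverse-CDF sampling: F(s) = 1 - exp(-pi s^2 / 4)
    u = random.random()
    return math.sqrt(-4.0 * math.log(1.0 - u) / math.pi)

N = 100_000
w = [wigner_spacing() for _ in range(N)]
p = [random.expovariate(1.0) for _ in range(N)]  # Poisson-level spacings

mean_w = sum(w) / N                              # normalized to unit mean
frac_small_w = sum(s < 0.1 for s in w) / N       # level repulsion: few small gaps
frac_small_p = sum(s < 0.1 for s in p) / N       # no repulsion: many small gaps
print(mean_w, frac_small_w, frac_small_p)
```

The much smaller fraction of near-degenerate spacings in the Wigner sample is the level-repulsion signature that distinguishes the two distributions discussed in the abstract.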
Bayes linear statistics, theory & methods
Goldstein, Michael
2007-01-01
Bayesian methods combine information available from data with any prior information available from expert knowledge. The Bayes linear approach follows this path, offering a quantitative structure for expressing beliefs, and systematic methods for adjusting these beliefs, given observational data. The methodology differs from the full Bayesian methodology in that it establishes simpler approaches to belief specification and analysis based around expectation judgements. Bayes Linear Statistics presents an authoritative account of this approach, explaining the foundations, theory, methodology, and practicalities of this important field. The text provides a thorough coverage of Bayes linear analysis, from the development of the basic language to the collection of algebraic results needed for efficient implementation, with detailed practical examples. The book covers:The importance of partial prior specifications for complex problems where it is difficult to supply a meaningful full prior probability specification...
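The central object of the Bayes linear approach is the adjusted expectation, E_D(B) = E(B) + Cov(B, D) Var(D)⁻¹ (D − E(D)), which updates a prior expectation using only first- and second-order belief specifications. A minimal numerical sketch (all prior specifications and the observed data are illustrative assumptions, not taken from the book):

```python
import numpy as np

# Prior specification for a scalar quantity B and a 2-vector of data D:
EB = np.array([1.0])            # prior expectation of B
ED = np.array([2.0, 0.0])       # prior expectation of D
cov_BD = np.array([[0.8, 0.3]]) # prior covariance Cov(B, D)
var_D = np.array([[1.0, 0.2],   # prior variance matrix Var(D)
                  [0.2, 0.5]])

def adjusted_expectation(d):
    """Bayes linear adjusted expectation E_D(B) given observed data d."""
    return EB + cov_BD @ np.linalg.solve(var_D, d - ED)

d_obs = np.array([2.5, -0.4])
print(adjusted_expectation(d_obs))
```

Note that no full joint distribution is specified anywhere, only expectations, variances and covariances, which is exactly the simplification over the full Bayesian machinery that the abstract describes.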
Methods of thermal field theory
Energy Technology Data Exchange (ETDEWEB)
Mallik, S [Saha Institute of Nuclear Physics, Calcutta (India)
1998-11-01
We introduce the basic ideas of thermal field theory and review its path integral formulation. We then discuss the problems of QCD theory at high and at low temperatures. At high temperature the naive perturbation expansion breaks down and is cured by resummation. We illustrate this improved perturbation expansion with the g²φ⁴ theory and then sketch its application to find the gluon damping rate in QCD theory. At low temperature the hadronic phase is described systematically by the chiral perturbation theory. The results obtained from this theory for the quark and the gluon condensates are discussed. (author) 22 refs., 6 figs.
Perturbation theory with non-diagonal propagators and its use in the intermediate-coupling regime
International Nuclear Information System (INIS)
Znojil, M.
1998-01-01
An innovative method of constructing the Rayleigh-Schroedinger perturbation series in a seemingly nonperturbative regime is offered. Designed for the needs of condensed matter physics, nuclear physics and quantum chemistry, the flexibility of the new formalism rests on a nonstandard Lanczos-type construction of the unperturbed basis. With an asymmetric choice of the model space, the recipe becomes recurrent not only order-by-order in a small parameter (as usual) but also projection-by-projection in the Hilbert space. Its idea and efficiency are illustrated on a few schematic examples. (Copyright (1998) World Scientific Publishing Co. Pte. Ltd)
Finite Volume Method for Pricing European Call Option with Regime-switching Volatility
Lista Tauryawati, Mey; Imron, Chairul; Putri, Endah RM
2018-03-01
In this paper, we present a finite volume method for pricing a European call option using the Black-Scholes equation with regime-switching volatility. We first formulate the Black-Scholes equations with regime-switching volatility, and then apply a fitted finite volume method for the spatial discretization together with an implicit time-stepping technique. We show that the regime-switching scheme reverts to the non-switching Black-Scholes equation, both theoretically and in numerical simulations.
Quantum resource theory of non-stabilizer states in the one-shot regime
Ahmadi, Mehdi; Dang, Hoan; Gour, Gilad; Sanders, Barry
Universal quantum computing is known to be impossible using only stabilizer states and stabilizer operations. However, the addition of non-stabilizer states (also known as magic states) to quantum circuits enables us to achieve universality. The resource theory of non-stabilizer states aims at quantifying the usefulness of non-stabilizer states. Here, we focus on a fundamental question in this resource theory in the so-called single-shot regime: Given two resource states, is there a free quantum channel that will (approximately or exactly) convert one to the other? To provide an answer, we phrase the question as a semidefinite program with constraints on the Choi matrix of the corresponding channel. Then, we use the semidefinite version of the Farkas lemma to derive the necessary and sufficient conditions for the conversion between two arbitrary resource states via a free quantum channel. BCS appreciates financial support from Alberta Innovates, NSERC, China's 1000 Talent Plan and the Institute for Quantum Information and Matter.
Basic methods of soliton theory
Cherednik, I
1996-01-01
In the 25 years of its existence Soliton Theory has drastically expanded our understanding of "integrability" and contributed a lot to the reunification of Mathematics and Physics in the range from deep algebraic geometry and modern representation theory to quantum field theory and optical transmission lines.The book is a systematic introduction to the Soliton Theory with an emphasis on its background and algebraic aspects. It is the first one devoted to the general matrix soliton equations, which are of great importance for the foundations and the applications.Differential algebra (local cons
GNSS remote sensing theory, methods and applications
Jin, Shuanggen; Xie, Feiqin
2014-01-01
This book presents the theory and methods of GNSS remote sensing as well as its applications in the atmosphere, oceans, land and hydrology. It contains detailed theory and study cases to help the reader put the material into practice.
International Nuclear Information System (INIS)
Batistic, Benjamin; Robnik, Marko
2010-01-01
In this work we study the level spacing distribution in the classically mixed-type quantum systems (which are generic), exhibiting regular motion on invariant tori for some initial conditions and chaotic motion for the complementary initial conditions. In the asymptotic regime of the sufficiently deep semiclassical limit (sufficiently small effective Planck constant) the Berry and Robnik (1984 J. Phys. A: Math. Gen. 17 2413) picture applies, which is very well established. We present a new quasi-universal semiempirical theory of the level spacing distribution in a regime away from the Berry-Robnik regime (the near semiclassical limit), by describing both the dynamical localization effects of chaotic eigenstates, and the tunneling effects which couple regular and chaotic eigenstates. The theory works extremely well in the 2D mixed-type billiard system introduced by Robnik (1983 J. Phys. A: Math. Gen. 16 3971) and is also tested in other systems (mushroom billiard and Prosen billiard).
Computational Methods and Function Theory
Saff, Edward; Salinas, Luis; Varga, Richard
1990-01-01
The volume is devoted to the interaction of modern scientific computation and classical function theory. Many problems in pure and more applied function theory can be tackled using modern computing facilities: numerically as well as in the sense of computer algebra. On the other hand, computer algorithms are often based on complex function theory, and dedicated research on their theoretical foundations can lead to great enhancements in performance. The contributions - original research articles, a survey and a collection of problems - cover a broad range of such problems.
Bosonization methods in string theory
International Nuclear Information System (INIS)
Abdalla, E.
1988-02-01
The use of bosonization/fermionization techniques to convert non-linear operators of the dual theory is discussed. Non-abelian bosonization is generalized to the case where the central charge of the Kac-Moody algebra is not unity. In particular, using this generalization of non-abelian bosonization, the bosonic string vertex of the compactified theory turns out to be a fundamental field of the fermionic theory, or a bound state of it, thus permitting explicit computations easily. (author) [pt
Development of objective flow regime identification method using self-organizing neural network
International Nuclear Information System (INIS)
Lee, Jae Young; Kim, Nam Seok; Kwak, Nam Yee
2004-01-01
Two-phase flow shows various flow patterns according to the amount of void and its velocity relative to the liquid flow. This variation directly affects the interfacial transfer, which is the key factor in the design and analysis of phase-change systems. In particular, the safety analysis of nuclear power plants is performed with numerical codes furnished with constitutive relations that depend strongly on the flow regime. Considerable effort has been devoted to flow regime identification, and the engineering basis in this area is now relatively well established compared to other research fields. However, issues related to objectiveness and to transient flow regimes are still open. Lee et al. and Ishii developed a method for objective and instantaneous flow regime identification based on a neural network and a new index, the probability distribution of the flow regime, which requires only one second of observation. In the present paper, we develop a self-organizing neural network as a more objective approach to this problem. Kohonen's Self-Organizing Map (SOM) has been used for clustering, visualization, and abstraction. The SOM is trained through unsupervised competitive learning using a 'winner takes all' policy; its unsupervised training eliminates possible interference by the regime developer in the neural network training. After developing the computer code, we evaluate its performance on vertically upward two-phase flow in pipes of 25.4 and 50.4 mm I.D. The sensitivity of the flow regime identification to the number of clusters was also examined.
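The training loop of a Kohonen SOM of the kind described above can be sketched in a few lines. Everything here is an illustrative assumption, not the paper's code or data: a 1-D map with five nodes is trained on synthetic two-dimensional "flow features" forming two well-separated clusters that stand in for two flow regimes:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic feature vectors: two clusters standing in for two flow regimes
a = rng.normal([0.2, 0.2], 0.05, size=(200, 2))
b = rng.normal([0.8, 0.7], 0.05, size=(200, 2))
X = np.vstack([a, b])

# 1-D SOM: competitive "winner takes all" update with a shrinking
# neighbourhood and a decaying learning rate
n_nodes, n_iter = 5, 2000
W = rng.random((n_nodes, 2))
for t in range(n_iter):
    x = X[rng.integers(len(X))]
    bmu = np.argmin(((W - x) ** 2).sum(axis=1))    # best-matching unit
    lr = 0.5 * (1 - t / n_iter)                    # decaying learning rate
    radius = max(1e-3, 2.0 * (1 - t / n_iter))     # shrinking neighbourhood
    d = np.abs(np.arange(n_nodes) - bmu)
    h = np.exp(-d ** 2 / (2 * radius ** 2))
    W += lr * h[:, None] * (x - W)

# after training, each cluster maps to its own best-matching units
bmu_a = {int(np.argmin(((W - x) ** 2).sum(axis=1))) for x in a}
bmu_b = {int(np.argmin(((W - x) ** 2).sum(axis=1))) for x in b}
print(sorted(bmu_a), sorted(bmu_b))
```

Because no labels enter the update rule, the clusters discovered by the map are free of any bias from a human regime developer, which is the property the abstract emphasizes.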
Straussian Grounded-Theory Method: An Illustration
Thai, Mai Thi Thanh; Chong, Li Choy; Agrawal, Narendra M.
2012-01-01
This paper demonstrates the benefits and application of the Straussian Grounded Theory method in conducting research in complex settings where parameters are poorly defined. It provides a detailed illustration of how this method can be used to build an internationalization theory. To be specific, this paper exposes readers to the behind-the-scenes work…
Design theory methods and organization for innovation
Le Masson, Pascal; Hatchuel, Armand
2017-01-01
This textbook presents the core of recent advances in design theory and its implications for design methods and design organization. Providing a unified perspective on different design methods and approaches, from the most classic (systematic design) to the most advanced (C-K theory), it offers a unique and integrated presentation of traditional and contemporary theories in the field. Examining the principles of each theory, this guide utilizes numerous real life industrial applications, with clear links to engineering design, industrial design, management, economics, psychology and creativity. Containing a section of exams with detailed answers, it is useful for courses in design theory, engineering design and advanced innovation management. "Students and professors, practitioners and researchers in diverse disciplines, interested in design, will find in this book a rich and vital source for studying fundamental design methods and tools as well as the most advanced design theories that work in practice". Pro...
Axiomatic method and category theory
Rodin, Andrei
2014-01-01
This volume offers readers a coherent look at the past, present and anticipated future of the Axiomatic Method. It presents a hypothetical New Axiomatic Method, which establishes closer relationships between mathematics and physics.
Mathematical methods of electromagnetic theory
Friedrichs, Kurt O
2014-01-01
This text provides a mathematically precise but intuitive introduction to classical electromagnetic theory and wave propagation, with a brief introduction to special relativity. While written in a distinctive, modern style, Friedrichs manages to convey the physical intuition and 19th century basis of the equations, with an emphasis on conservation laws. Particularly striking features of the book include: (a) a mathematically rigorous derivation of the interaction of electromagnetic waves with matter, (b) a straightforward explanation of how to use variational principles to solve problems in el
Energy Technology Data Exchange (ETDEWEB)
Bobylev, Yu. V. [L.N. Tolstoy Tula State Pedagogical University (Russian Federation); Kuzelev, M. V. [Moscow State University (Russian Federation); Rukhadze, A. A. [Russian Academy of Sciences, Prokhorov Institute of General Physics (Russian Federation)
2008-02-15
A general mathematical model is proposed that is based on the Vlasov kinetic equation with a self-consistent field and describes the nonlinear dynamics of the electromagnetic instabilities of a relativistic electron beam in a spatially bounded plasma. Two limiting cases are analyzed, namely, high-frequency (HF) and low-frequency (LF) instabilities of a relativistic electron beam, of which the LF instability is a qualitatively new phenomenon in comparison with the known Cherenkov resonance effects. For instabilities in the regime of the collective Cherenkov effect, the equations containing cubic nonlinearities and describing the nonlinear saturation of the instabilities of a relativistic beam in a plasma are derived by using the methods of expansion in small perturbations of the trajectories and momenta of the beam electrons. Analytic expressions for the amplitudes of the interacting beam and plasma waves are obtained. The analytical results are shown to agree well with the exact solutions obtained numerically from the basic general mathematical model of the instabilities in question. The general mathematical model is also used to discuss the effects associated with variation in the constant component of the electron current in a beam-plasma system.
DEFF Research Database (Denmark)
Andersen, O. Krogh
1975-01-01
of Korringa-Kohn-Rostoker, linear-combination-of-atomic-orbitals, and cellular methods; the secular matrix is linear in energy, the overlap integrals factorize as potential parameters and structure constants, the latter are canonical in the sense that they neither depend on the energy nor the cell volume...
Time-dependent density-functional theory in the projector augmented-wave method
DEFF Research Database (Denmark)
Walter, Michael; Häkkinen, Hannu; Lehtovaara, Lauri
2008-01-01
We present the implementation of the time-dependent density-functional theory both in linear-response and in time-propagation formalisms using the projector augmented-wave method in real-space grids. The two technically very different methods are compared in the linear-response regime where we...
Going beyond The three worlds of welfare capitalism: regime theory and public health research.
Bambra, C
2007-12-01
International research on the social determinants of health has increasingly started to integrate a welfare state regimes perspective. Although this is to be welcomed, to date there has been an over-reliance on Esping-Andersen's The three worlds of welfare capitalism typology (1990). This is despite the fact that it has been subjected to extensive criticism and that there are in fact a number of competing welfare state typologies within the comparative social policy literature. The purpose of this paper is to provide public health researchers with an up-to-date overview of the welfare state regime literature so that it can be reflected more accurately in future research. It outlines The three worlds of welfare capitalism typology, and it presents the criticisms it received and an overview of alternative welfare state typologies. It concludes by suggesting new avenues of study in public health that could be explored by drawing upon this broader welfare state regimes literature.
Study of the Higgs-Yukawa theory in the strong-Yukawa coupling regime
International Nuclear Information System (INIS)
Bulava, John; Gerhold, Philipp; Nagy, Attila; Deutsches Elektronen-Synchrotron; Hou, George W.S.; Smigielski, Brian; Jansen, Karl; Knippschild, Bastian; Univ. of Mainz; Lin, David C.J.; National Centre of Theoretical Sciences, Hsinchu; Nagai, Kei-Ichi; Ogawa, Kenji
2011-12-01
In this article, we present an ongoing lattice study of the Higgs-Yukawa model, in the regime of strong-Yukawa coupling, using overlap fermions. We investigated the phase structure in this regime by computing the Higgs vacuum expectation value, and by exploring the finite-size scaling behaviour of the susceptibility corresponding to the magnetisation. Our preliminary results indicate the existence of a second-order phase transition when the Yukawa coupling becomes large enough, at which the Higgs vacuum expectation value vanishes and the susceptibility diverges. (orig.)
Improved method for calculating neoclassical transport coefficients in the banana regime
Energy Technology Data Exchange (ETDEWEB)
Taguchi, M., E-mail: taguchi.masayoshi@nihon-u.ac.jp [College of Industrial Technology, Nihon University, Narashino 275-8576 (Japan)
2014-05-15
The conventional neoclassical moment method in the banana regime is improved by increasing the accuracy of the approximation to the linearized Fokker-Planck collision operator. This improved method is formulated for a multiple-ion plasma in general tokamak equilibria. Explicit computation in a model magnetic field shows that the improved method accurately calculates the neoclassical transport coefficients over the full range of aspect ratio. Some neoclassical transport coefficients at intermediate aspect ratio are found to deviate appreciably from those obtained by the conventional moment method; the differences between the two methods reach about 20%.
Separable programming theory and methods
Stefanov, Stefan M
2001-01-01
In this book, the author considers separable programming and, in particular, one of its important cases: convex separable programming. Some general results are presented, and techniques for approximating the separable problem by linear programming and dynamic programming are considered. Convex separable programs subject to inequality/equality constraint(s) and bounds on variables are also studied, and iterative algorithms of polynomial complexity are proposed. As an application, these algorithms are used in the implementation of stochastic quasigradient methods for some separable stochastic programs. Numerical approximation with respect to the l1 and l4 norms, as a convex separable nonsmooth unconstrained minimization problem, is considered as well. Audience: advanced undergraduate and graduate students, and mathematical programming/operations research specialists.
Kohn-Sham density functional theory for quantum wires in arbitrary correlation regimes
Malet, F.; Mirtschink, A.P.; Cremon, J. C.; Reimann, S. M.; Gori Giorgi, P.
2013-01-01
We use the exact strong-interaction limit of the Hohenberg-Kohn energy density functional to construct an approximation for the exchange-correlation term of the Kohn-Sham approach. The resulting exchange-correlation potential is able to capture the features of the strongly correlated regime without
The structuration of socio-technical regimes - Conceptual foundations from institutional theory
Fuenfschilling, Lea; Truffer, Bernhard|info:eu-repo/dai/nl/6603148005
2014-01-01
In recent years, socio-technical transitions literature has gained importance in addressing long-term, transformative change in various industries. In order to account for the inertia and path-dependency experienced in these sectors, the concept of the socio-technical regime has been formulated.
The Bateman method for multichannel scattering theory
International Nuclear Information System (INIS)
Kim, Y. E.; Kim, Y. J.; Zubarev, A. L.
1997-01-01
Accuracy and convergence of the Bateman method are investigated for calculating the transition amplitude in multichannel scattering theory. This approximation method is applied to the calculation of elastic amplitude. The calculated results are remarkably accurate compared with those of exactly solvable multichannel model
International Nuclear Information System (INIS)
Kh'yuitt, G.
1980-01-01
An introduction to the problem of two-phase flows is presented. Flow regimes arising in two-phase flows are described, and a classification of these regimes is given. The structures of vertical and horizontal two-phase flows and a method of identifying them using regime maps are considered, and the limits of applicability of this method are discussed. The flooding phenomenon, the phenomenon of flow reversal, and the interrelation of these phenomena are described, as well as the transitions from the slug regime to the churn regime and from the churn regime to the annular regime in vertical flows. Problems of phase transitions and equilibrium are discussed. Flow regimes in tubes carrying evaporating liquid are also described.
Risk assessment theory, methods, and applications
Rausand, Marvin
2011-01-01
With its balanced coverage of theory and applications along with standards and regulations, Risk Assessment: Theory, Methods, and Applications serves as a comprehensive introduction to the topic. The book serves as a practical guide to current risk analysis and risk assessment, emphasizing the possibility of sudden, major accidents across various areas of practice from machinery and manufacturing processes to nuclear power plants and transportation systems. The author applies a uniform framework to the discussion of each method, setting forth clear objectives and descriptions, while also shedding light on applications, essential resources, and advantages and disadvantages. Following an introduction that provides an overview of risk assessment, the book is organized into two sections that outline key theory, methods, and applications. * Introduction to Risk Assessment defines key concepts and details the steps of a thorough risk assessment along with the necessary quantitative risk measures. Chapters outline...
Directory of Open Access Journals (Sweden)
Javad Safaee Kuchaksaraee
2016-10-01
Full Text Available The increasing consumption of electrical energy and the growing use of non-linear loads that create transient states in distribution networks make the analysis of power quality for energy sustainability in power networks ever more important. Transients are often created by energy injection through switching or lightning and cause changes in the voltage and nominal current; a sudden increase or decrease in voltage or current is the characteristic signature of a transient regime. This paper sheds some light on capacitor bank switching, one of the main causes of oscillatory transient states in the distribution network, using the wavelet transform. To distinguish the capacitor-bank switching current from the internal fault current of the transformer, and thereby prevent unnecessary tripping of the differential relay, a new smart method is proposed. The accurate performance of this method is shown by simulation in EMTP and MATLAB (matrix laboratory) software.
δ expansion for a quantum field theory in the nonperturbative regime
International Nuclear Information System (INIS)
Bender, C.M.; Milton, K.A.; Pinsky, S.S.; Simmons, L.M. Jr.
1990-01-01
The δ expansion, a recently proposed nonperturbative technique in quantum field theory, is used to calculate the dimensionless renormalized coupling constant of a λ(φ²)^(1+δ) quantum field theory in d-dimensional space-time at the critical point defined by λ→∞ with the renormalized mass held fixed. The calculation is performed to leading order in δ and compared with previous lattice strong-coupling calculations. The numerical results are good and provide new evidence that the theory in four dimensions is free for all δ.
Effects of diversity and procrastination in priority queuing theory: The different power law regimes
Saichev, A.; Sornette, D.
2010-01-01
Empirical analyses show that after the update of a browser, the publication of a software vulnerability, or the discovery of a cyber worm, the fraction of computers still using the older browser or software version, not yet patched, or exhibiting worm activity decays as a power law ∼1/t^α with 0 < α ≤ 1. [...] One such regime is "procrastination," defined as the situation in which the target task may be postponed or delayed even after the individual has solved all other pending tasks. This regime provides an explanation for even slower apparent decay and longer persistence.
Renormalization group method in the theory of dynamical systems
International Nuclear Information System (INIS)
Sinai, Y.G.; Khanin, K.M.
1988-01-01
One of the most important events in the theory of dynamical systems during the last decade has been the wide penetration of renormalization group (RG) ideas and methods into this traditional field of mathematical physics. The RG method has been one of the main tools of statistical physics, and it has proved to be rather effective in solving problems of the theory of dynamical systems concerning new types of bifurcations (see below). As in statistical mechanics, the application of the RG method is of greatest interest in the neighborhood of the critical point of the order-chaos transition. The RG method was first applied in the pioneering papers dedicated to the appearance of a stochastic regime as a result of infinite sequences of period-doubling bifurcations. At present this stochasticity mechanism is the most studied one, and many papers deal with it. The study of the so-called intermittency phenomenon was the next example of application of the RG method, i.e. the study of situations where domains of stochastic and regular behavior alternate along a trajectory of the dynamical system.
Accuracy verification methods theory and algorithms
Mali, Olli; Repin, Sergey
2014-01-01
The importance of accuracy verification methods was understood at the very beginning of the development of numerical analysis. Recent decades have seen a rapid growth of results related to adaptive numerical methods and a posteriori estimates. However, in this important area there often exists a noticeable gap between mathematicians creating the theory and researchers developing applied algorithms that could be used in engineering and scientific computations for guaranteed and efficient error control. The goals of the book are to (1) give a transparent explanation of the underlying mathematical theory in a style accessible not only to advanced numerical analysts but also to engineers and students; (2) present detailed step-by-step algorithms that follow from a theory; (3) discuss their advantages and drawbacks, areas of applicability, give recommendations and examples.
An advanced method of heterogeneous reactor theory
International Nuclear Information System (INIS)
Kochurov, B.P.
1994-08-01
Recent approaches to heterogeneous reactor theory for numerical applications were presented in a course of 8 lectures given at JAERI. The limitations of the initial theory (known, since the First Conference on the Peaceful Uses of Atomic Energy held in Geneva in 1955, as the Galanine-Feinberg heterogeneous theory), namely the matrix form of the equations and the lack of a consistent theory of heterogeneous parameters for the reactor cell, were overcome by transforming the heterogeneous reactor equations to a difference form and by developing a consistent theory for the characteristics of a reactor cell based on detailed space-energy calculations. General few-group (G is the number of groups) heterogeneous reactor equations in the dipole approximation are formulated, with the two-dimensional problem extended to three dimensions by a finite Fourier expansion of the axial dependence of the neutron fluxes. A transformation of the initial matrix reactor equations to a difference form is presented. The methods for calculating the heterogeneous reactor cell characteristics, which give the relation between the vector flux and the vector current on the cell boundary, are based on a set of detailed space-energy neutron flux distribution calculations with zero current across the cell boundary and G calculations with linearly independent currents across the cell boundary. The equations for the reaction rate matrices are formulated. Specific methods were developed for the description of neutron migration in the axial and radial directions, and for the resonance-level approach to numerous high-energy resonances. On the basis of these approaches, the theory, methods, and computer codes were developed for 3D space-time reactor problems, including simulation of slow processes with fuel burn-up, control rod movements and Xe poisoning, and of fast transients depending on prompt and delayed neutrons. As a result, reactors with several thousands of channels having a non-uniform axial structure can feasibly be treated. (author)
Some improved methods in neutron transport theory
Energy Technology Data Exchange (ETDEWEB)
Pop-Jordanov, J; Stefanovic, D; Kocic, A; Matausek, M; Bosevski, T [Boris Kidric Institute of Nuclear Sciences Vinca, Beograd (Yugoslavia)
1973-07-01
The methods described in this paper are: analytical approach to neutron spectra in case of energy dependent anisotropy of elastic scattering; Monte Carlo estimations of neutron absorption reaction rate during slowing down process; spherical harmonics treatment of space-angle-lethargy dependent slowing down transport equation; integral transport theory based on point-wise representation of variables.
Research on new methods in transport theory
International Nuclear Information System (INIS)
Stefanovicj, D.
1975-01-01
Neutron transport theory is the basis for the development of reactor theory and reactor calculational methods. It has to be acknowledged that recent applications of these disciplines have influenced considerably the development of power reactor concepts and technology. However, these achievements were implemented in a rather heuristic way, since satisfying design demands was of utmost importance. Often this kind of approach turns out to be very restrictive and not even adequate for rather typical reactor applications. Many aspects and techniques of reactor theory and calculations ought to be re-evaluated and/or reformulated on sounder physical and mathematical foundations. At the same time, new reactor concepts and operational demands give rise to more sophisticated and complex design requirements. These new requirements can be met only by the development of new design techniques, which in the case of reactor neutronic calculation lead directly to advanced transport theory methods. In addition, the rapid development of computer technology opens new opportunities for the application of advanced transport theory in practical calculations.
Application of a transitional boundary-layer theory in the low hypersonic Mach number regime
Shamroth, S. J.; Mcdonald, H.
1975-01-01
An investigation is made to assess the capability of a finite-difference boundary-layer procedure to predict the mean profile development across a transition from laminar to turbulent flow in the low hypersonic Mach-number regime. The boundary-layer procedure uses an integral form of the turbulence kinetic-energy equation to govern the development of the Reynolds apparent shear stress. The present investigation shows the ability of this procedure to predict Stanton number, velocity profiles, and density profiles through the transition region and, in addition, to predict the effect of wall cooling and Mach number on transition Reynolds number. The contribution of the pressure-dilatation term to the energy balance is examined and it is suggested that transition can be initiated by the direct absorption of acoustic energy even if only a small amount (1 per cent) of the incident acoustic energy is absorbed.
International Nuclear Information System (INIS)
Kurachi, Masafumi; Shrock, Robert
2006-01-01
We consider a vectorial, confining SU(N) gauge theory with a variable number, N_f, of massless fermions transforming according to the fundamental representation. Using the Schwinger-Dyson and Bethe-Salpeter equations, we calculate the S parameter in terms of the current-current correlation functions. We focus on values of N_f such that the theory is in the crossover region between the regimes of walking behavior and QCD-like (nonwalking) behavior. Our calculations indicate that the contribution to S from a given fermion decreases as one moves from the QCD-like to the walking regime. The implications of this result for technicolor theories are discussed
International Nuclear Information System (INIS)
Liles, D.R.
1982-01-01
Internal boundaries in multiphase flow greatly complicate fluid-dynamic and heat-transfer descriptions. Different flow regimes or topological configurations can have radically dissimilar interfacial and wall mass, momentum, and energy exchanges. Properly modeling the flow dynamics requires estimates of these rates. In this paper the common flow regimes for gas-liquid systems are defined and the techniques used to estimate the extent of a particular regime are described. Also, the current computer-code procedures are delineated, and a potentially better method is introduced
Spectral methods in quantum field theory
International Nuclear Information System (INIS)
Graham, Noah; Quandt, Markus; Weigel, Herbert
2009-01-01
This concise text introduces techniques from quantum mechanics, especially scattering theory, to compute the effects of an external background on a quantum field in general, and on the properties of the quantum vacuum in particular. This approach can be successfully used in an increasingly large number of situations, ranging from the study of solitons in field theory and cosmology to the determination of Casimir forces in nano-technology. The method introduced and applied in this book is shown to give an unambiguous connection to perturbation theory, implementing standard renormalization conditions even for non-perturbative backgrounds. It both gives new theoretical insights, for example illuminating longstanding questions regarding Casimir stresses, and also provides an efficient analytic and numerical tool well suited to practical calculations. Last but not least, it elucidates in a concrete context many of the subtleties of quantum field theory, such as divergences, regularization and renormalization, by connecting them to more familiar results in quantum mechanics. While addressed primarily at young researchers entering the field and nonspecialist researchers with backgrounds in theoretical and mathematical physics, introductory chapters on the theoretical aspects of the method make the book self-contained and thus suitable for advanced graduate students. (orig.)
Time-dependent density functional theory of open quantum systems in the linear-response regime.
Tempel, David G; Watson, Mark A; Olivares-Amaya, Roberto; Aspuru-Guzik, Alán
2011-02-21
Time-dependent density functional theory (TDDFT) has recently been extended to describe many-body open quantum systems evolving under nonunitary dynamics according to a quantum master equation. In the master equation approach, electronic excitation spectra are broadened and shifted due to relaxation and dephasing of the electronic degrees of freedom by the surrounding environment. In this paper, we develop a formulation of TDDFT linear-response theory (LR-TDDFT) for many-body electronic systems evolving under a master equation, yielding broadened excitation spectra. This is done by mapping an interacting open quantum system onto a noninteracting open Kohn-Sham system yielding the correct nonequilibrium density evolution. A pseudoeigenvalue equation analogous to the Casida equations of the usual LR-TDDFT is derived for the Redfield master equation, yielding complex energies and Lamb shifts. As a simple demonstration, we calculate the spectrum of a C^(2+) atom including natural linewidths, by treating the electromagnetic field vacuum as a photon bath. The performance of an adiabatic exchange-correlation kernel is analyzed and a first-order frequency-dependent correction to the bare Kohn-Sham linewidth based on the Görling-Levy perturbation theory is calculated.
Fertilizer nitrogen leaching in relation to water regime and the fertilizer placement method
International Nuclear Information System (INIS)
Moustafa, A.T.A.; Khadr, M.S.
1983-01-01
A field experiment was conducted at the farm of Sids Experimental Station, Ministry of Agriculture, Middle Egypt, to evaluate the effect of the water regime and fertilizer placement method on the leaching of urea fertilizer under field conditions. Ordinary and heavy irrigations were the water regimes, while side-banding and surface broadcasting were the fertilizer placement methods. Wheat (Giza 158, local variety) was planted, and urea labelled with 15 N at the rate of 100 kg N/ha was added at planting. The data obtained showed that in general the leaching process of urea fertilizer, as evaluated from the amounts of fertilizer nitrogen residues, is not uniform even within replicates. This is despite the fact that the average total amount of fertilizer nitrogen residues in the soil profile to a depth of 125 cm is almost the same in the different treatments. Data also show that the bulk of fertilizer nitrogen residues is accumulated in the surface soil layers, especially at 0-25 cm. Only 10% of the fertilizer nitrogen is detected below 75 cm and up to 125 cm depth of the soil profile. It could be concluded that urea leaching (amount and depth) under these conditions is affected mainly by the soil characteristics, namely soil pores. This is in addition to some other factors that cause variable concentrations in the soil solution leaving the root zone. (author)
Improvements of the integral transport theory method
International Nuclear Information System (INIS)
Kavenoky, A.; Lam-Hime, M.; Stankovski, Z.
1979-01-01
The integral transport theory is widely used in practical reactor design calculations; however, it is computer-time consuming for two-dimensional calculations of large media. In the first part of this report a new treatment is presented, based on the Galerkin method: inside each region the total flux is expanded over a three-component basis. Numerical comparison shows that this method can considerably reduce the computing time. The second part of this report is devoted to homogenization theory: a straightforward calculation of the fundamental mode for a heterogeneous cell is presented. First a general presentation of the problem is given; then it is simplified to plane geometry, and numerical results are presented
The linearization method in hydrodynamical stability theory
Yudovich, V I
1989-01-01
This book presents the theory of the linearization method as applied to the problem of steady-state and periodic motions of continuous media. The author proves infinite-dimensional analogues of Lyapunov's theorems on stability, instability, and conditional stability for a large class of continuous media. In addition, semigroup properties for the linearized Navier-Stokes equations in the case of an incompressible fluid are studied, and coercivity inequalities and completeness of a system of small oscillations are proved.
A non-linear theory for the bubble regime of plasma wake fields in tailored plasma channels
Thomas, Johannes
2016-01-01
We introduce a first full analytical bubble and blow-out model for a radially inhomogeneous plasma in a quasi-static approximation. For both cases we calculate the accelerating and the focusing fields. In our model we also assume a thin electron layer that surrounds the wake field and calculate the field configuration within. Our theory holds for arbitrary radial density profiles and reduces to known models in the limit of a homogeneous plasma. From a previous study of hollow plasma channels with smooth boundaries for laser-driven electron acceleration in the bubble regime we know that pancake-like laser pulses lead to highest electron energies [Pukhov et al, PRL 113, 245003 (2014)]. As it was shown, the bubble fields can be adjusted to balance the laser depletion and dephasing lengths by varying the plasma density profile inside a deep channel. Now we show why the radial fields in the vacuum part of a channel become defocussing.
Zheligovsky, Vladislav
2011-01-01
New developments in hydrodynamical dynamo theory have been spurred by recent evidence of self-sustained dynamo activity in laboratory experiments with liquid metals. The emphasis in the present volume is on the introduction of powerful mathematical techniques required to tackle modern multiscale analysis of continuous systems and their application to a number of realistic model geometries of increasing complexity. This introductory and self-contained research monograph summarizes the theoretical state of the art, to which the author has made pioneering contributions.
Methods and applications of analytical perturbation theory
International Nuclear Information System (INIS)
Kirchgraber, U.; Stiefel, E.
1978-01-01
This monograph on perturbation theory is based on various courses and lectures held by the authors at the ETH, Zurich and at the University of Texas, Austin. Its principal intention is to inform application-minded mathematicians, physicists and engineers about recent developments in this field. The reader is not assumed to have mathematical knowledge beyond what is presented in standard courses on analysis and linear algebra. Chapter I treats the transformations of systems of differential equations and the integration of perturbed systems in a formal way. These tools are applied in Chapter II to celestial mechanics and to the theory of tops and gyroscopic motion. Chapter III is devoted to the discussion of Hamiltonian systems of differential equations and exposes the algebraic aspects of perturbation theory showing also the necessary modifications of the theory in case of singularities. The last chapter gives the mathematical justification for the methods developed in the previous chapters and investigates important questions such as error estimations for the solutions and asymptotic stability. Each chapter ends with useful comments and an extensive reference to the original literature. (HJ)
Harmony Search Method: Theory and Applications
Directory of Open Access Journals (Sweden)
X. Z. Gao
2015-01-01
Full Text Available The Harmony Search (HS) method is an emerging metaheuristic optimization algorithm which has been employed to cope with numerous challenging tasks during the past decade. In this paper, the essential theory and applications of the HS algorithm are first described and reviewed. Several typical variants of the original HS are next briefly explained. As a case study, a modified HS method inspired by the idea of Pareto-dominance-based ranking is also presented. It is further applied to a practical wind generator optimal design problem.
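The basic HS loop described in this abstract (harmony memory, memory consideration, pitch adjustment, random selection) can be sketched as follows; the parameter names and default values (hms, hmcr, par, bandwidth) are illustrative textbook choices, not taken from the paper:

```python
import random

def harmony_search(f, bounds, hms=10, hmcr=0.9, par=0.3, iters=2000, seed=1):
    """Minimal Harmony Search sketch: minimize f over box bounds."""
    rng = random.Random(seed)
    # harmony memory: hms random candidate solutions
    hm = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(hms)]
    scores = [f(x) for x in hm]
    for _ in range(iters):
        new = []
        for d, (lo, hi) in enumerate(bounds):
            if rng.random() < hmcr:
                # memory consideration: reuse a stored value for this dimension
                val = hm[rng.randrange(hms)][d]
                if rng.random() < par:
                    # pitch adjustment: small perturbation within a bandwidth
                    val += rng.uniform(-1.0, 1.0) * 0.01 * (hi - lo)
                    val = min(max(val, lo), hi)
            else:
                # random selection: fresh value from the search range
                val = rng.uniform(lo, hi)
            new.append(val)
        s = f(new)
        # replace the worst harmony if the improvisation is better
        worst = max(range(hms), key=lambda i: scores[i])
        if s < scores[worst]:
            hm[worst], scores[worst] = new, s
    best = min(range(hms), key=lambda i: scores[i])
    return hm[best], scores[best]
```

For example, minimizing the 2D sphere function over [-5, 5]² drives the best score close to zero within a few thousand improvisations.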
The Lanczos method in lattice gauge theories
International Nuclear Information System (INIS)
Barbour, I.M.; Behilil, N.E.; Gibbs, P.E.; Teper, M.; Schierholz, G.
1984-09-01
We present a modified version of the Lanczos algorithm as a computational method for tridiagonalising large sparse matrices, which avoids the requirement for large amounts of storage space. It can be applied as a first step in calculating eigenvalues and eigenvectors or for obtaining the inverse of a matrix row by row. Here we describe the method and apply it to various problems in lattice gauge theories. We have found it to have excellent convergence properties. In particular it enables us to do lattice calculations at small and even zero quark mass. (orig.)
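The core Lanczos iteration that the abstract's modified algorithm builds on can be sketched as below. This small dense sketch uses full reorthogonalization for numerical robustness and therefore stores all Lanczos vectors; the paper's storage-saving modification is precisely what this sketch does not attempt to reproduce:

```python
import numpy as np

def lanczos(A, v0, m):
    """Reduce symmetric A to an m x m tridiagonal T = tri(beta, alpha, beta)
    whose eigenvalues approximate those of A."""
    n = len(v0)
    V = np.zeros((m, n))              # Lanczos vectors, stored row-wise
    alpha = np.zeros(m)               # diagonal of T
    beta = np.zeros(max(m - 1, 0))    # off-diagonal of T
    v = v0 / np.linalg.norm(v0)
    for j in range(m):
        V[j] = v
        w = A @ v
        alpha[j] = w @ v
        # full reorthogonalization against all previous vectors; the bare
        # three-term recurrence loses orthogonality in finite precision
        w = w - V[: j + 1].T @ (V[: j + 1] @ w)
        if j < m - 1:
            b = np.linalg.norm(w)
            beta[j] = b
            v = w / b
    return alpha, beta
```

With m equal to the matrix dimension, the eigenvalues of T reproduce those of A to machine precision; for large sparse matrices one would keep m small and use only matrix-vector products with A.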
Moments method in the theory of accelerators
International Nuclear Information System (INIS)
Perel'shtejn, Eh.A.
1984-01-01
The moments method is widely used for the solution of various physical and computational problems in the theory of accelerators, magnetic optics, and the dynamics of high-current beams. Techniques using moments of the second order (mean-square characteristics of charged-particle beams) are shown to be the most developed. The moments method is suitable, and sometimes the only technique applicable, for the computerized solution of problems on the optimization of accelerating structures, beam transport channels, matching systems, and other systems, taking account of the beam space charge
Directory of Open Access Journals (Sweden)
M. Matinfar
2011-06-01
Full Text Available In order to investigate the effects of different weed control methods and moisture regimes on safflower (Carthamus tinctorius), a field split-plot experiment based on a randomized complete block design with 4 replications was conducted in Takestan, Iran, during the 2007-8 growing season. Three irrigation regimes (normal irrigation, restricted irrigation at stem elongation, and restricted irrigation at the flowering stage) were assigned to the main plots, and nine weed control methods were assigned to the sub-plots: complete hand weeding; treflan at 2 L/ha as a pre-plant herbicide; sonalan at 3 L/ha as a pre-plant herbicide; estomp at 3 L/ha as a pre-plant herbicide; gallant super at 0.75 L/ha as a post-emergence herbicide; treflan at 2 L/ha pre-plant + gallant super at 0.75 L/ha post-emergence; sonalan at 3 L/ha pre-plant + gallant super at 0.75 L/ha post-emergence; estomp at 3 L/ha pre-plant + gallant super at 0.75 L/ha post-emergence; and no hand weeding. At the end of the growing period, traits such as number of heads per plant, number of seeds per head, 1000-grain weight, seed oil percentage, seed oil yield, and grain yield were measured. Results indicated that the treflan + gallant super treatment under restricted irrigation at the stem elongation stage had the lowest weed dry weight. In this study the maximum grain yield (2927 kg/ha) was achieved with the hand weeding + normal irrigation treatment. In general, treflan + gallant super was the most effective treatment for safflower yield and weed control.
Methods of algebraic geometry in control theory
Falb, Peter
1999-01-01
"Control theory represents an attempt to codify, in mathematical terms, the principles and techniques used in the analysis and design of control systems. Algebraic geometry may, in an elementary way, be viewed as the study of the structure and properties of the solutions of systems of algebraic equations. The aim of this book is to provide access to the methods of algebraic geometry for engineers and applied scientists through the motivated context of control theory."* The development which culminated with this volume began over twenty-five years ago with a series of lectures at the control group of the Lund Institute of Technology in Sweden. I have sought throughout to strive for clarity, often using constructive methods and giving several proofs of a particular result as well as many examples. The first volume dealt with the simplest control systems (i.e., single-input, single-output linear time-invariant systems) and with the simplest algebraic geometry (i.e., affine algebraic geometry). While this is qui...
Quantum defect theory and asymptotic methods
International Nuclear Information System (INIS)
Seaton, M.J.
1982-01-01
It is shown that quantum defect theory provides a basis for the development of various analytical methods for the examination of electron-ion collision phenomena, including dielectronic recombination. Its use in conjunction with ab initio calculations is shown to be restricted by problems which arise from the presence of long-range non-Coulomb potentials. Empirical fitting to some formulae can be efficient in the use of computer time but extravagant in the use of person time. Calculations at a large number of energy points which make no use of analytical formulae for resonance structures may be made less extravagant in computer time by the development of more efficient asymptotic methods. (U.K.)
Mathematical Methods of Game and Economic Theory
Aubin, J-P
1982-01-01
This book presents a unified treatment of optimization theory, game theory and a general equilibrium theory in economics in the framework of nonlinear functional analysis. It not only provides powerful and versatile tools for solving specific problems in economics and the social sciences but also serves as a unifying theme in the mathematical theory of these subjects as well as in pure mathematics itself.
Simulation of regimes of convection and plume dynamics by the thermal Lattice Boltzmann Method
Mora, Peter; Yuen, David A.
2018-02-01
We present 2D simulations using the Lattice Boltzmann Method (LBM) of a fluid in a rectangular box heated from below and cooled from above. We observe plumes: hot, narrow upwellings from the base, and cold downwelling chutes from the top. We have varied the Rayleigh and Prandtl numbers from Ra = 10^3 to Ra = 10^10 and Pr = 1 through Pr = 5 × 10^4, obtaining Rayleigh-Bénard convection cells at low Rayleigh numbers through to vigorous convection and unstable plumes with pronounced vortices and eddies at high Rayleigh numbers. We conduct simulations with high Prandtl numbers up to Pr = 50,000 to simulate in the inertial regime. We find for cases with Pr ⩾ 100 that we obtain a series of narrow plumes of upwelling fluid with mushroom heads and chutes of downwelling fluid. We also present simulations at a Prandtl number of 0.7 for Rayleigh numbers varying from Ra = 10^4 through Ra = 10^7.5. We demonstrate that the Nusselt number follows power-law scaling of the form Nu ∼ Ra^γ, where γ = 0.279 ± 0.002, consistent with the published result γ = 0.281. These results show that the LBM is capable of reproducing results obtained with classical macroscopic methods such as spectral methods, and demonstrate the great potential of the LBM for studying thermal convection and plume dynamics relevant to geodynamics.
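The reported scaling Nu ∼ Ra^γ is the kind of relationship typically extracted by linear regression in log-log space. A minimal sketch of such a fit; the function name and the synthetic data are illustrative, not taken from the paper:

```python
import numpy as np

def fit_nusselt_scaling(ra, nu):
    """Fit Nu ~ C * Ra^gamma by least squares in log-log space;
    returns (gamma, C)."""
    gamma, log_c = np.polyfit(np.log10(ra), np.log10(nu), 1)
    return gamma, 10.0 ** log_c

# Synthetic Nu(Ra) data constructed to obey an assumed gamma = 0.279
ra = np.logspace(4, 7.5, 8)
nu = 0.15 * ra ** 0.279
gamma, c = fit_nusselt_scaling(ra, nu)  # recovers gamma = 0.279
```

On real simulation output the fitted exponent carries an uncertainty, as in the γ = 0.279 ± 0.002 quoted above.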
Theory of the Trojan-Horse method
International Nuclear Information System (INIS)
Baur, Gerhard; Typel, Stefan
2004-01-01
The Trojan-Horse method is an indirect approach to determine the energy dependence of S factors of astrophysically relevant two-body reactions. This is accomplished by studying closely related three-body reactions under quasi-free scattering conditions. The basic theory of the Trojan-Horse method is developed starting from a post-form distorted wave Born approximation of the T-matrix element. In the surface approximation the cross section of the three-body reaction can be related to the S-matrix elements of the two-body reaction. The essential feature of the Trojan-Horse method is the effective suppression of the Coulomb barrier at low energies for the astrophysical reaction, leading to finite cross sections at the threshold of the two-body reaction. In a modified plane wave approximation the relation between the two-body and three-body cross sections becomes very transparent. Applications of the Trojan-Horse method are discussed. It is of special interest that electron screening corrections are negligible due to the high projectile energy. (author)
Collective variables method in relativistic theory
International Nuclear Information System (INIS)
Shurgaya, A.V.
1983-01-01
The classical theory of an N-component field is considered. Within the framework of generalized Hamiltonian dynamics, a method of collective variables is developed which exactly accounts for the conservation laws that follow from invariance under the homogeneous Lorentz group. Hyperboloids are invariant surfaces under the homogeneous Lorentz group. Proceeding from this, a field transformation is introduced and the surface is parametrized so that the generators of the homogeneous Lorentz group contain no components dependent on the interaction, and their action on the field function is purely geometrical. The interaction is completely contained in the expression for the energy-momentum vector of the system, which is a dynamical quantity. A gauge is chosen in which the parameters of four-dimensional translations and their canonically conjugate momenta are non-physical, so that the phase space is determined by the parameters of the homogeneous Lorentz group, the field function, and their canonically conjugate momenta. In this way the conservation laws following from the requirement of Lorentz invariance are accounted for exactly.
International Nuclear Information System (INIS)
Seleghim, Paulo
1996-01-01
This work concerns the development of a methodology whose objective is to characterize and diagnose two-phase flow regime transitions. The approach is based on the fundamental assumption that a transition flow is less stationary than a flow with an established regime. The first stage of the work comprised: 1) the design and construction of an experimental loop, allowing the main horizontal two-phase flow patterns to be reproduced in a stable and controlled way; 2) the design and construction of an electrical impedance probe, providing an imaged representation of the spatial phase distribution in the pipe; and 3) a systematic study of joint time-frequency and time-scale analysis methods, which permitted the definition of an adequate parameter quantifying the degree of non-stationarity. In the second stage, in order to verify the fundamental assumption, a series of experiments was conducted with the objective of demonstrating the correlation between non-stationarity and regime transition. The degree of non-stationarity was quantified by calculating the time-frequency covariance of the Gabor transform of the impedance probe signals. Furthermore, the phenomenology of each transition was characterized by the joint moments and entropy. The results clearly show that regime transitions are correlated with local time-frequency covariance peaks, which demonstrates that these transitions are characterized by a loss of stationarity. Consequently, the time-frequency covariance constitutes an objective two-phase flow regime transition indicator. (author)
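The non-stationarity indicator described here can be sketched with a plain short-time Fourier transform standing in for the Gabor transform used in the thesis: treat the normalized time-frequency power distribution as a joint density and compute its covariance. All names and window parameters below are illustrative:

```python
import numpy as np

def tf_covariance(x, fs, nwin=256, hop=128):
    """Treat normalized STFT power as a joint density P(t, f) and
    return Cov(t, f) = <tf> - <t><f>. Stationary signals give values
    near zero; drifting frequency content gives large |covariance|."""
    win = np.hanning(nwin)
    starts = range(0, len(x) - nwin + 1, hop)
    frames = np.array([x[i:i + nwin] * win for i in starts])
    p = np.abs(np.fft.rfft(frames, axis=1)) ** 2  # shape (time, freq)
    p /= p.sum()
    t = (np.array(list(starts)) + nwin / 2) / fs  # frame centres, s
    f = np.fft.rfftfreq(nwin, 1.0 / fs)           # bin frequencies, Hz
    mean_t = (t[:, None] * p).sum()
    mean_f = (f[None, :] * p).sum()
    return (t[:, None] * f[None, :] * p).sum() - mean_t * mean_f

fs = 1000.0
tt = np.arange(0, 2.0, 1.0 / fs)
chirp = np.sin(2 * np.pi * (50 * tt + 62.5 * tt ** 2))  # 50 -> 300 Hz sweep
sine = np.sin(2 * np.pi * 100 * tt)                     # stationary tone
```

The sweeping chirp yields a large positive covariance while the stationary tone stays near zero, mirroring the covariance peaks observed at regime transitions.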
FMEA using uncertainty theories and MCDM methods
Liu, Hu-Chen
2016-01-01
This book offers a thorough and systematic introduction to the modified failure mode and effect analysis (FMEA) models based on uncertainty theories (e.g. fuzzy logic, intuitionistic fuzzy sets, D numbers and 2-tuple linguistic variables) and various multi-criteria decision making (MCDM) approaches such as distance-based MCDM, compromise ranking MCDM and hybrid MCDM, etc. As such, it provides essential FMEA methods and practical examples that can be considered in applying FMEA to enhance the reliability and safety of products and services. The book offers a valuable guide for practitioners and researchers working in the fields of quality management, decision making, information science, management science, engineering, etc. It can also be used as a textbook for postgraduate and senior undergraduate students.
Nested partitions method, theory and applications
Shi, Leyuan
2009-01-01
There is increasing need to solve large-scale complex optimization problems in a wide variety of science and engineering applications, including designing telecommunication networks for multimedia transmission, planning and scheduling problems in manufacturing and military operations, or designing nanoscale devices and systems. Advances in technology and information systems have made such optimization problems more and more complicated in terms of size and uncertainty. Nested Partitions Method, Theory and Applications provides a cutting-edge research tool to use for large-scale, complex systems optimization. The Nested Partitions (NP) framework is an innovative mix of traditional optimization methodology and probabilistic assumptions. An important feature of the NP framework is that it combines many well-known optimization techniques, including dynamic programming, mixed integer programming, genetic algorithms and tabu search, while also integrating many problem-specific local search heuristics. The book uses...
Theory of the Trojan-Horse method
International Nuclear Information System (INIS)
Typel, S.; Baur, G.
2003-01-01
The Trojan-Horse method is an indirect approach to determine the energy dependence of S factors of astrophysically relevant two-body reactions. This is accomplished by studying closely related three-body reactions under quasi-free scattering conditions. The basic theory of the Trojan-Horse method is developed starting from a post-form distorted wave Born approximation of the T-matrix element. In the surface approximation the cross-section of the three-body reaction can be related to the S-matrix elements of the two-body reaction. The essential feature of the Trojan-Horse method is the effective suppression of the Coulomb barrier at low energies for the astrophysical reaction leading to finite cross-sections at the threshold of the two-body reaction. In a modified plane wave approximation the relation between the two- and three-body cross-sections becomes very transparent. The appearing Trojan-Horse integrals are studied in detail
BOOK REVIEW: Vortex Methods: Theory and Practice
Cottet, G.-H.; Koumoutsakos, P. D.
2001-03-01
The book Vortex Methods: Theory and Practice presents a comprehensive account of the numerical technique for solving fluid flow problems. It provides a very nice balance between the theoretical development and analysis of the various techniques and their practical implementation. In fact, the presentation of the rigorous mathematical analysis of these methods instills confidence in their implementation. The book goes into some detail on the more recent developments that attempt to account for viscous effects, in particular the presence of viscous boundary layers in some flows of interest. The presentation is very readable, with most points illustrated with well-chosen examples, some quite sophisticated. It is a very worthy reference book that should appeal to a large body of readers, from those interested in the mathematical analysis of the methods to practitioners of computational fluid dynamics. The use of the book as a text is compromised by its lack of exercises for students, but it could form the basis of a graduate special topics course. Juan Lopez
Social dominance theory: Its agenda and method
Sidanius, Jim; Pratto, Felicia; van Laar, Colette; Levin, Shana
2004-01-01
The theory has been misconstrued in four primary ways, which are often expressed as the claims of psychological reductionism, conceptual redundancy, biological reductionism, and hierarchy justification. This paper addresses these claims and suggests how social dominance theory builds on and moves beyond social identity theory and system justification theory.
Analysis of electrical circuits with variable load regime parameters projective geometry method
Penin, A
2015-01-01
This book introduces electric circuits with variable loads and voltage regulators. It allows one to define invariant relationships for various parameters of regime and circuit sections and to prove the concepts characterizing these circuits. Generalized equivalent circuits are introduced. Projective geometry is used for the interpretation of changes of operating regime parameters. Expressions for normalized regime parameters and their changes are presented. Convenient formulas for the calculation of currents are given. Parallel voltage sources and the cascade connection of multi-port networks are d...
String theory and the scientific method
Dawid, Richard
2013-01-01
String theory has played a highly influential role in theoretical physics for nearly three decades and has substantially altered our view of the elementary building principles of the Universe. However, the theory remains empirically unconfirmed, and is expected to remain so for the foreseeable future. So why do string theorists have such a strong belief in their theory? This book explores this question, offering a novel insight into the nature of theory assessment itself. Dawid approaches the topic from a unique position, having extensive experience in both philosophy and high-energy physics. He argues that string theory is just the most conspicuous example of a number of theories in high-energy physics where non-empirical theory assessment has an important part to play. Aimed at physicists and philosophers of science, the book does not use mathematical formalism and explains most technical terms.
The two-regime method for optimizing stochastic reaction-diffusion simulations
Flegg, M. B.
2011-10-19
Spatial organization and noise play an important role in molecular systems biology. In recent years, a number of software packages have been developed for stochastic spatio-temporal simulation, ranging from detailed molecular-based approaches to less detailed compartment-based simulations. Compartment-based approaches yield quick and accurate mesoscopic results, but lack the level of detail that is characteristic of the computationally intensive molecular-based models. Often microscopic detail is only required in a small region (e.g. close to the cell membrane). Currently, the best way to achieve microscopic detail is to use a resource-intensive simulation over the whole domain. We develop the two-regime method (TRM) in which a molecular-based algorithm is used where desired and a compartment-based approach is used elsewhere. We present easy-to-implement coupling conditions which ensure that the TRM results have the same accuracy as a detailed molecular-based model in the whole simulation domain. Therefore, the TRM combines strengths of previously developed stochastic reaction-diffusion software to efficiently explore the behaviour of biological models. Illustrative examples and the mathematical justification of the TRM are also presented.
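The compartment-based half of such a scheme can be sketched as a standard Gillespie (SSA) simulation of diffusive jumps between neighbouring boxes. This is an illustrative sketch of the mesoscopic ingredient only, not the TRM coupling conditions themselves; all names and rate values are hypothetical:

```python
import random

def compartment_diffusion_ssa(counts, jump_rate, t_end, seed=0):
    """Gillespie (SSA) simulation of pure diffusion on a 1D row of
    compartments: each molecule jumps to a neighbouring box with rate
    jump_rate (= D/h^2) per direction."""
    rng = random.Random(seed)
    counts = list(counts)
    k = len(counts)
    t = 0.0
    while True:
        # one candidate jump per (box, direction); propensity = rate * copies
        jumps = [(counts[i] * jump_rate, i, j)
                 for i in range(k)
                 for j in (i - 1, i + 1) if 0 <= j < k]
        total = sum(a for a, _, _ in jumps)
        if total == 0.0:
            return counts
        t += rng.expovariate(total)  # exponential waiting time to next jump
        if t > t_end:
            return counts
        r = rng.uniform(0.0, total)  # choose a jump proportionally
        acc = 0.0
        for a, src, dst in jumps:
            acc += a
            if r <= acc:
                counts[src] -= 1
                counts[dst] += 1
                break

final = compartment_diffusion_ssa([100, 0, 0, 0], 1.0, 5.0)
```

In a two-regime-style hybrid, boxes in the region of interest would instead be resolved by a molecular-based (Brownian dynamics) algorithm, with the coupling conditions governing transfers across the interface.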
New methods in nuclear reaction theory
International Nuclear Information System (INIS)
Redish, E.F.
1979-01-01
Standard nuclear reaction methods are limited to treating problems that generalize two-body scattering. These are problems with only one continuous (vector) degree of freedom (CDOF). The difficulty in extending these methods to cases with two or more CDOFs is not just the additional numerical complexity: the mathematical problem is usually not well-posed, and it is hard to guarantee that the proper boundary conditions (BCs) are satisfied. Since this is not generally known, the discussion begins by considering the physics of this problem in the context of coupled-channel calculations. In practice, the difficulties are usually swept under the rug by the use of a highly developed phenomenology (or worse, by the failure to test a calculation for convergence). This approach limits the kinds of reactions that can be handled to ones occurring on the surface or where a second CDOF can be treated perturbatively. In the past twenty years, since the work of Faddeev, the quantum three-body problem has been solved. Many techniques (and codes) are now available for solving problems with two CDOFs. A method for using these techniques in the nuclear N-body problem is presented. A set of well-posed (connected-kernel) equations for physical scattering operators is taken as the starting point. It is then shown how approximation schemes can be developed for a wide range of reaction mechanisms. The resulting general framework for a reaction theory can be applied to a number of nuclear problems. One result is a rigorous treatment of multistep transfer reactions with the possibility of systematically generating corrections. The application of the method to resonance reactions and knock-out is discussed. 12 figures
Shape theory categorical methods of approximation
Cordier, J M
2008-01-01
This in-depth treatment uses shape theory as a "case study" to illustrate situations common to many areas of mathematics, including the use of archetypal models as a basis for systems of approximations. It offers students a unified and consolidated presentation of extensive research from category theory, shape theory, and the study of topological algebras. A short introduction to geometric shape explains the specifics of the construction of the shape category and relates it to an abstract definition of shape theory. Upon returning to the geometric base, the text considers simplicial complexes and...
Directory of Open Access Journals (Sweden)
Purkis SW
2014-12-01
During 2012, three CORESTA Recommended Methods (CRMs) (1-3) were updated to include smoke yield and variability data under both the ISO (4) and the Canadian Intense (CI) (5) smoking regimes. At that time, repeatability and reproducibility data under the CI regime on smoke analytes other than "tar", nicotine and carbon monoxide (6) and tobacco-specific nitrosamines (TSNAs) (7) were not available in the public literature. The subsequent work involved the determination of the mainstream smoke yields of benzo[a]pyrene, selected volatiles (benzene, toluene, 1,3-butadiene, isoprene, acrylonitrile) and selected carbonyls (acetaldehyde, formaldehyde, propionaldehyde, butyraldehyde, crotonaldehyde, acrolein, acetone and 2-butanone) in ten cigarette products, followed by statistical analyses according to the ISO protocol (8). This paper provides some additional perspective on the data variability under the ISO and CI smoking regimes not given in the CRMs.
Differential geometric methods in system theory.
Brockett, R. W.
1971-01-01
Discussion of certain problems in system theory which have been or might be solved using some basic concepts from differential geometry. The problems considered involve differential equations, controllability, optimal control, qualitative behavior, stochastic processes, and bilinear systems. The main goal is to extend the essentials of linear theory to some nonlinear classes of problems.
Algebraic and analytic methods in representation theory
Schlichtkrull, Henrik
1996-01-01
This book is a compilation of several works from well-recognized figures in the field of representation theory. The presentation of the topic is unique in offering several different points of view on key topics, from internationally known experts in the field, which should make the book very useful to students and experts alike.
Methods of Fourier analysis and approximation theory
Tikhonov, Sergey
2016-01-01
Different facets of the interplay between harmonic analysis and approximation theory are covered in this volume. The topics included are Fourier analysis, function spaces, optimization theory, partial differential equations, and their links to modern developments in approximation theory. The articles of this collection originated from two events. The first event took place during the 9th ISAAC Congress in Krakow, Poland, 5th-9th August 2013, in the section "Approximation Theory and Fourier Analysis". The second event was the conference on Fourier Analysis and Approximation Theory at the Centre de Recerca Matemàtica (CRM), Barcelona, during 4th-8th November 2013, organized by the editors of this volume. All articles selected to be part of this collection were carefully reviewed.
Group theoretical methods and wavelet theory: coorbit theory and applications
Feichtinger, Hans G.
2013-05-01
Before the invention of orthogonal wavelet systems by Yves Meyer [1] in 1986, Gabor expansions (viewed as discretized inversion of the Short-Time Fourier Transform [2] using overlap-and-add, OLA) and what are now perceived as wavelet expansions were treated more or less on an equal footing. The famous paper on painless expansions by Daubechies, Grossmann and Meyer [3] is a good example of this situation. The description of atomic decompositions for functions in modulation spaces [4] (including the classical Sobolev spaces) given by the author [5] was directly modeled on the corresponding atomic characterizations by Frazier and Jawerth [6, 7], more or less with the idea of replacing the dyadic partitions of unity on the Fourier transform side by uniform partitions of unity (so-called BUPUs, first named as such in the author's early work on Wiener-type spaces in 1980 [8]). Watching the literature in the subsequent two decades, one can observe that interest in wavelets "took over", because it became possible to construct orthonormal wavelet systems with compact support and of any given degree of smoothness [9], while in contrast the Balian-Low theorem prohibits the existence of corresponding Gabor orthonormal bases, even in the multi-dimensional case and for general symplectic lattices [10]. It is an interesting historical fact that Meyer's construction of band-limited orthonormal wavelets (the Meyer wavelet, see [11]) grew out of an attempt to prove the impossibility of the existence of such systems; the final insight was that such systems were not impossible, and in fact quite a variety of orthonormal wavelet systems can be constructed, as we know by now. Meanwhile it is established wisdom that wavelet theory and time-frequency analysis are two different ways of decomposing signals in orthogonal resp. non-orthogonal ways. The unifying theory, covering both cases and distilling from these two situations the common group-theoretical background, led to the...
Binomial tree method for pricing a regime-switching volatility stock loans
Putri, Endah R. M.; Zamani, Muhammad S.; Utomo, Daryono B.
2018-03-01
A binomial model with regime switching may represent the price of a stock loan, which follows a stochastic process. A stock loan is one of the alternatives that appeal to investors seeking liquidity without selling the stock. The stock loan mechanism resembles that of an American call option, in that the holder can exercise at any time during the contract period. Given this resemblance, the price of a stock loan can be obtained from the model of an American call option. The simulation results show the behavior of the price of a stock loan under regime switching with respect to various interest rates and maturities.
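The American-call analogy can be sketched with a plain Cox-Ross-Rubinstein binomial tree with an early-exercise check at every node; a regime-switching version would carry one such lattice per volatility regime, with transition probabilities between regimes at each step. This single-regime sketch is illustrative, and all parameter values are hypothetical:

```python
import math

def american_call_crr(s0, k, r, sigma, t, n):
    """Price an American call on a non-dividend-paying stock with the
    Cox-Ross-Rubinstein binomial tree, checking early exercise at
    every node (single-regime sketch)."""
    dt = t / n
    u = math.exp(sigma * math.sqrt(dt))   # up factor
    d = 1.0 / u                           # down factor
    q = (math.exp(r * dt) - d) / (u - d)  # risk-neutral up-probability
    disc = math.exp(-r * dt)
    # terminal payoffs after n steps
    values = [max(s0 * u ** j * d ** (n - j) - k, 0.0) for j in range(n + 1)]
    # backward induction with early-exercise comparison
    for step in range(n - 1, -1, -1):
        for j in range(step + 1):
            cont = disc * (q * values[j + 1] + (1.0 - q) * values[j])
            values[j] = max(cont, s0 * u ** j * d ** (step - j) - k)
    return values[0]

price = american_call_crr(100.0, 100.0, 0.05, 0.2, 1.0, 500)
```

For a stock loan, the strike role is played by the loan principal accruing at the loan rate, which is the main modification to the payoff above.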
Mathematical methods in the theory of queuing
Khinchin, A Y; Quenouille, M H
2013-01-01
Written by a prominent Russian mathematician, this concise monograph examines aspects of queuing theory as an application of probability. The three-part treatment begins with a study of the stream of incoming demands (or "calls," in the author's terminology). Subsequent sections explore systems with losses and systems allowing delay. Prerequisites include a familiarity with the theory of probability and mathematical analysis. A. Y. Khinchin made significant contributions to probability theory, statistical physics, and several other fields. His elegant, groundbreaking work will prove of subs...
Energy Technology Data Exchange (ETDEWEB)
Orava, J., E-mail: jo316@cam.ac.uk [Department of Materials Science & Metallurgy, University of Cambridge, 27 Charles Babbage Road, Cambridge CB3 0FS (United Kingdom); WPI-Advanced Institute for Materials Research (WPI-AIMR), Tohoku University, 2-1-1 Katahira, Aoba-ku, Sendai 980-8577 (Japan); Greer, A.L., E-mail: alg13@cam.ac.uk [Department of Materials Science & Metallurgy, University of Cambridge, 27 Charles Babbage Road, Cambridge CB3 0FS (United Kingdom); WPI-Advanced Institute for Materials Research (WPI-AIMR), Tohoku University, 2-1-1 Katahira, Aoba-ku, Sendai 980-8577 (Japan)
2015-03-10
Highlights: • Study of ultra-fast DSC applied to the crystallization of glass-forming liquids. • Numerical modeling of DSC traces at heating rates exceeding 10 orders of magnitude. • Identification of three regimes in Kissinger plots. • Elucidation of the effect of liquid fragility on the Kissinger method. • Modeling to study the regime in which crystal growth is thermodynamically limited. - Abstract: Numerical simulation of DSC traces is used to study the validity and limitations of the Kissinger method for determining the temperature dependence of the crystal-growth rate on continuous heating of glasses from the glass transition to the melting temperature. A particular interest is to use the wide range of heating rates accessible with ultra-fast DSC to study systems such as the chalcogenide Ge₂Sb₂Te₅ for which fast crystallization is of practical interest in phase-change memory. Kissinger plots are found to show three regimes: (i) at low heating rates the plot is straight, (ii) at medium heating rates the plot is curved as expected from the liquid fragility, and (iii) at the highest heating rates the crystallization rate is thermodynamically limited, and the plot has curvature of the opposite sign. The relative importance of these regimes is identified for different glass-forming systems, considered in terms of the liquid fragility and the reduced glass-transition temperature. The extraction of quantitative information on fundamental crystallization kinetics from Kissinger plots is discussed.
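Regime (i), where the Kissinger plot is straight, is where the classical analysis applies: ln(β/Tp²) plotted against 1/Tp has slope -E/R, giving the activation energy E. A minimal sketch of that fit on synthetic data constructed to satisfy the relation exactly; all names and numerical values are illustrative:

```python
import numpy as np

R_GAS = 8.314  # gas constant, J/(mol K)

def kissinger_activation_energy(heating_rates, peak_temps):
    """Kissinger analysis: the slope of ln(beta / Tp^2) against 1/Tp
    is -E/R in the straight, low-heating-rate regime (i)."""
    x = 1.0 / np.asarray(peak_temps)
    y = np.log(np.asarray(heating_rates) / np.asarray(peak_temps) ** 2)
    slope, _ = np.polyfit(x, y, 1)
    return -slope * R_GAS

# Synthetic (Tp, beta) pairs built to satisfy the relation exactly
E_TRUE = 1.8e5  # J/mol, assumed activation energy
temps = np.array([600.0, 620.0, 640.0, 660.0])  # peak temperatures, K
betas = temps ** 2 * np.exp(20.0 - E_TRUE / (R_GAS * temps))  # K/s
e_fit = kissinger_activation_energy(betas, temps)  # recovers E_TRUE
```

In regimes (ii) and (iii) of the abstract, this straight-line fit is no longer valid, which is precisely the limitation the simulations probe.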
Homological methods, representation theory, and cluster algebras
Trepode, Sonia
2018-01-01
This text presents six mini-courses, all devoted to interactions between representation theory of algebras, homological algebra, and the new ever-expanding theory of cluster algebras. The interplay between the topics discussed in this text will continue to grow and this collection of courses stands as a partial testimony to this new development. The courses are useful for any mathematician who would like to learn more about this rapidly developing field; the primary aim is to engage graduate students and young researchers. Prerequisites include knowledge of some noncommutative algebra or homological algebra. Homological algebra has always been considered as one of the main tools in the study of finite-dimensional algebras. The strong relationship with cluster algebras is more recent and has quickly established itself as one of the important highlights of today’s mathematical landscape. This connection has been fruitful to both areas—representation theory provides a categorification of cluster algebras, wh...
International Nuclear Information System (INIS)
Hayward, Robert M.; Rahnema, Farzad; Zhang, Dingkang
2013-01-01
Highlights: ► A new hybrid stochastic–deterministic transport theory method to couple with diffusion theory. ► The method is implemented in 2D hexagonal geometry. ► The new method produces excellent results when compared with Monte Carlo reference solutions. ► The method is fast, solving all test cases in less than 12 s. - Abstract: A new hybrid stochastic–deterministic transport theory method, which is designed to couple with diffusion theory, is presented. The new method is an extension of the incident flux response expansion method, and it combines the speed of diffusion theory with the accuracy of transport theory. With ease of use in mind, the new method is derived in such a way that it can be implemented with only minimal modifications to an existing diffusion theory method. A new angular expansion, which is necessary for the diffusion theory coupling, is developed in 2D and 3D. The method is implemented in 2D hexagonal geometry, and an HTTR benchmark problem is used to test its accuracy in a standalone configuration. It is found that the new method produces excellent results (with average relative error in partial current less than 0.033%) when compared with Monte Carlo reference solutions. Furthermore, the method is fast, solving all test cases in less than 12 s
International Nuclear Information System (INIS)
Stefanovic, D.
1975-09-01
The research work under this contract was oriented towards the study of different methods in neutron transport theory. The authors studied the analytical solution of the neutron slowing-down transport equation and the extension of this solution to include the energy dependence of the anisotropy of neutron scattering. Numerical solutions of the fast and resonance transport equations for a mixture of scatterers, including inelastic effects, were also reviewed. They improved the existing formalism for treating the scattering of neutrons on water molecules. Identifying modal analysis with the Galerkin method, they investigated general conditions for the application of modal techniques. Inverse problems in transport theory were considered: they obtained the evaluation of an advanced level distribution function, improved the standard formalism for treating inelastic scattering, and developed a cluster nuclear model for this evaluation. The authors also studied the neutron transport treatment in space-energy groups for criticality calculations of a reactor core, and the development of a Monte Carlo sampling scheme from the neutron transport equation.
How to Map Theory: Reliable Methods Are Fruitless Without Rigorous Theory.
Gray, Kurt
2017-09-01
Good science requires both reliable methods and rigorous theory. Theory allows us to build a unified structure of knowledge, to connect the dots of individual studies and reveal the bigger picture. Some have criticized the proliferation of pet "Theories," but generic "theory" is essential to healthy science, because questions of theory are ultimately those of validity. Although reliable methods and rigorous theory are synergistic, Action Identification suggests psychological tension between them: the more we focus on methodological details, the less we notice the broader connections. Therefore, psychology needs to supplement training in methods (how to design studies and analyze data) with training in theory (how to connect studies and synthesize ideas). This article provides a technique for visually outlining theory: theory mapping. Theory mapping contains five elements, which are illustrated with moral judgment and with cars. Also included are 15 additional theory maps provided by experts in emotion, culture, priming, power, stress, ideology, morality, marketing, decision-making, and more (see all at theorymaps.org).
Methods in half-linear asymptotic theory
Directory of Open Access Journals (Sweden)
Pavel Rehak
2016-10-01
We study the asymptotic behavior of eventually positive solutions of the second-order half-linear differential equation $$ (r(t)|y'|^{\alpha-1}\hbox{sgn}\, y')' = p(t)|y|^{\alpha-1}\hbox{sgn}\, y, $$ where r(t) and p(t) are positive continuous functions on $[a,\infty)$, $\alpha\in(1,\infty)$. The aim of this article is twofold. On the one hand, we show applications of a wide variety of tools, like the Karamata theory of regular variation, the de Haan theory, the Riccati technique, comparison theorems, the reciprocity principle, a certain transformation of dependent variable, and principal solutions. On the other hand, we solve open problems posed in the literature and generalize existing results. Most of our observations are new also in the linear case.
The Threat of Common Method Variance Bias to Theory Building
Reio, Thomas G., Jr.
2010-01-01
The need for more theory building scholarship remains one of the pressing issues in the field of HRD. Researchers can employ quantitative, qualitative, and/or mixed methods to support vital theory-building efforts, understanding however that each approach has its limitations. The purpose of this article is to explore common method variance bias as…
Theory, Method, and Triangulation in the Study of Street Children.
Lucchini, Riccardo
1996-01-01
Describes how a comparative study of street children in Montevideo (Uruguay), Rio de Janeiro, and Mexico City contributes to a synergism between theory and method. Notes how theoretical approaches of symbolic interactionism, genetic structuralism, and habitus theory complement interview, participant observation, and content analysis methods;…
Lattice field theories: non-perturbative methods of analysis
International Nuclear Information System (INIS)
Weinstein, M.
1978-01-01
A lecture is given on the possible extraction of interesting physical information from quantum field theories by studying their semiclassical versions. From the beginning the problem of solving for the spectrum states of any given continuum quantum field theory is considered as a giant Schroedinger problem, and then some nonperturbative methods for diagonalizing the Hamiltonian of the theory are explained without recourse to semiclassical approximations. The notion of a lattice appears as an artifice to handle the problems associated with the familiar infrared and ultraviolet divergences of continuum quantum field theory and in fact for all but gauge theories. 18 references
String Theory Methods for Condensed Matter Physics
Nastase, Horatiu
2017-09-01
Preface; Acknowledgments; Introduction; Part I. Condensed Matter Models and Problems: 1. Lightning review of statistical mechanics, thermodynamics, phases and phase transitions; 2. Magnetism in solids; 3. Electrons in solids: Fermi gas vs. Fermi liquid; 4. Bosonic quasi-particles: phonons and plasmons; 5. Spin-charge separation in 1+1 dimensional solids: spinons and holons; 6. The Ising model and the Heisenberg spin chain; 7. Spin chains and integrable systems; 8. The thermodynamic Bethe ansatz; 9. Conformal field theories and quantum phase transitions; 10. Classical vs. quantum Hall effect; 11. Superconductivity: Landau-Ginzburg, London and BCS; 12. Topology and statistics: Berry and Chern-Simons, anyons and nonabelions; 13. Insulators; 14. The Kondo effect and the Kondo problem; 15. Hydrodynamics and transport properties: from Boltzmann to Navier-Stokes; Part II. Elements of General Relativity and String Theory: 16. The Einstein equation and the Schwarzschild solution; 17. The Reissner-Nordstrom and Kerr-Newman solutions and thermodynamic properties of black holes; 18. Extra dimensions and Kaluza-Klein; 19. Electromagnetism and gravity in various dimensions. Consistent truncations; 20. Gravity plus matter: black holes and p-branes in various dimensions; 21. Weak/strong coupling dualities in 1+1, 2+1, 3+1 and d+1 dimensions; 22. The relativistic point particle and the relativistic string; 23. Lightcone strings and quantization; 24. D-branes and gauge fields; 25. Electromagnetic fields on D-branes. Supersymmetry and N = 4 SYM. T-duality of closed strings; 26. Dualities and M theory; 27. The AdS/CFT correspondence: definition and motivation; Part III. Applying String Theory to Condensed Matter Problems: 28. The pp wave correspondence: string Hamiltonian from N = 4 SYM; 29. Spin chains from N = 4 SYM; 30. The Bethe ansatz: Bethe strings from classical strings in AdS; 31. Integrability and AdS/CFT; 32. AdS/CFT phenomenology: Lifshitz, Galilean and Schrodinger
Finite element method - theory and applications
International Nuclear Information System (INIS)
Baset, S.
1992-01-01
This paper summarizes the mathematical basis of the finite element method. Attention is drawn to the natural development of the method from an engineering analysis tool into a general numerical analysis tool. A particular application to the stress analysis of rubber materials is presented. Special advantages and issues associated with the method are mentioned. (author). 4 refs., 3 figs
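As a minimal, generic illustration of the finite element method summarized above (not the paper's rubber-stress application; the model problem and mesh are assumptions), the sketch below assembles and solves the linear-element system for the 1D Poisson problem -u'' = f on (0, 1) with u(0) = u(1) = 0, whose exact solution for f = 1 is u(x) = x(1-x)/2.

```python
def fem_poisson_1d(n, f=1.0):
    """Linear-element FEM for -u'' = f on (0, 1), u(0) = u(1) = 0.
    Returns nodal values at x_i = i*h for i = 0..n (uniform mesh, n elements)."""
    h = 1.0 / n
    m = n - 1                      # number of interior unknowns
    # Tridiagonal stiffness matrix (1/h)*[2, -1] and load vector f*h
    # (the load integral is exact for constant f).
    a = [-1.0 / h] * m             # sub-diagonal
    b = [2.0 / h] * m              # diagonal
    c = [-1.0 / h] * m             # super-diagonal
    d = [f * h] * m                # load
    # Thomas algorithm (tridiagonal Gaussian elimination).
    for i in range(1, m):
        w = a[i] / b[i - 1]
        b[i] -= w * c[i - 1]
        d[i] -= w * d[i - 1]
    u = [0.0] * m
    u[-1] = d[-1] / b[-1]
    for i in range(m - 2, -1, -1):
        u[i] = (d[i] - c[i] * u[i + 1]) / b[i]
    return [0.0] + u + [0.0]

u = fem_poisson_1d(10)   # node x = 0.5 is index 5; exact u(0.5) = 0.125
```

For this 1D problem with exact load integration, the finite element solution is nodally exact, so the midpoint value reproduces 0.125 to machine precision.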
International Nuclear Information System (INIS)
Nishimura, K.; Ashikawa, N.; Masuzaki, S.; Miyazawa, J.; Sagara, A.; Goto, M.; Peterson, B.J.; Komori, A.; Noda, N.; Ida, K.; Kaneko, O.; Kawahata, K.; Kobuchi, T.; Kubo, S.; Morita, S.; Osakabe, M.; Sakakibara, S.; Sakamoto, R.; Sato, K.; Shimozuma, T.; Takeiri, Y.; Tanaka, K.; Motojima, O.
2005-01-01
Experiments in the large helical device have been developing since the first discharge in 1998. Baking at 95 deg C, electron cyclotron resonance discharge cleaning, glow discharge cleaning, titanium gettering and boronization were attempted for wall conditioning. Using these conditioning techniques, the partial pressures of the oxidized gases, such as H2O, CO and CO2, were reduced gradually and the plasma operational regime was enlarged. Glow discharge cleaning with various working gases, such as hydrogen, helium, neon and argon, was effective in increasing the plasma purity. By this method, we obtained a central ion temperature of 10 keV. Boronization, which was started in FY2001, was also effective in reducing the radiation losses from impurities and in enlarging the density operational regime. We obtained a plasma stored energy of 1.31 MJ and an electron density of 2.4 x 10^20 m^-3.
Directory of Open Access Journals (Sweden)
D. TROPEANO
2013-10-01
Full Text Available The Swedish School, with its representatives Ohlin, Hammarskjöld and Lindahl, made important contributions to the economic theory of the open economy, even if these contributions have never been at the centre of economists' attention, probably owing to the authors' obscure language style. In particular, the School had a vision of the working of an open economy that was completely different from post-war Keynesian orthodoxy. The exchange rate regime does not isolate a small economy from the repercussions of events that occur in international financial and goods markets. The other major assumption of open economy macroeconomics was the independence of monetary policy. The Keynesian models of the 1950s included only external money. On the contrary, the Swedes considered the credit system and the working of international banks. JEL: F41
Predicting ecological flow regime at ungaged sites: A comparison of methods
Murphy, Jennifer C.; Knight, Rodney R.; Wolfe, William J.; Gain, W. Scott
2012-01-01
Nineteen ecologically relevant streamflow characteristics were estimated using published rainfall–runoff and regional regression models for six sites with observed daily streamflow records in Kentucky. The regional regression model produced median estimates closer to the observed median for all but two characteristics. The variability of predictions from both models was generally less than the observed variability. The variability of the predictions from the rainfall–runoff model was greater than that from the regional regression model for all but three characteristics. Eight characteristics predicted by the rainfall–runoff model display positive or negative bias across all six sites; biases are not as pronounced for the regional regression model. Results suggest that a rainfall–runoff model calibrated on a single characteristic is less likely to perform well as a predictor of a range of other characteristics (flow regime) when compared with a regional regression model calibrated individually on multiple characteristics used to represent the flow regime. Poor model performance may misrepresent hydrologic conditions, potentially distorting the perceived risk of ecological degradation. Without prior selection of streamflow characteristics, targeted calibration, and error quantification, the widespread application of general hydrologic models to ecological flow studies is problematic. Published 2012. This article is a U.S. Government work and is in the public domain in the USA.
2014-09-01
been a great source of encouragement, support and inspiration thousands of miles away. Mum and Dad, thank you for taking good care of me, for believing in...definition as given in (3.28). Scenario 2: Separated Arrivals with Low SNR in a Snapshot-Poor Regime. In this scenario, 2 signals are arriving at...whose channel is characterized by rich scattering need to be sufficiently far apart so that the signals at their outputs are uncorrelated [20], conventional
Review of Test Theory and Methods.
1981-01-01
literature, although some books, technical reports, and unpublished literature have been included where relevant. The focus of the review is on practical...1977) and Abu-Sayf (1977) developed new versions of formula scores, and Molenaar (1977) took a Bayesian approach to correcting for random guessing. The...Snow's (1977) book on aptitude and instructional methods is a landmark review of the research on the interaction between instructional methods and
Scattering theory methods for bound state problems
International Nuclear Information System (INIS)
Raphael, R.B.; Tobocman, W.
1978-01-01
For the analysis of the properties of a bound state system one may use in place of the Schroedinger equation the Lippmann-Schwinger (LS) equation for the wave function or the LS equation for the reactance operator. Use of the LS equation for the reactance operator constrains the solution to have correct asymptotic behaviour, so this approach would appear to be desirable when the bound state wave function is to be used to calculate particle transfer form factors. The Schroedinger equation based N-level analysis of the s-wave bound states of a square well is compared to the ones based on the LS equation. It is found that the LS equation methods work better than the Schroedinger equation method. The method that uses the LS equation for the wave function gives the best results for the wave functions while the method that uses the LS equation for the reactance operator gives the best results for the binding energies. The accuracy of the reactance operator based method is remarkably insensitive to changes in the oscillator constant used for the harmonic oscillator function basis set. It is also remarkably insensitive to the number of nodes in the bound state wave function. (Auth.)
Using grounded theory as a method for rigorously reviewing literature
Wolfswinkel, J.; Furtmueller-Ettinger, Elfriede; Wilderom, Celeste P.M.
2013-01-01
This paper offers guidance to conducting a rigorous literature review. We present this in the form of a five-stage process in which we use Grounded Theory as a method. We first probe the guidelines explicated by Webster and Watson, and then we show the added value of Grounded Theory for rigorously
GP METHOD - BETWEEN THEORY AND PRACTICE
Directory of Open Access Journals (Sweden)
Violeta ISAI
2014-06-01
Full Text Available Wherever manufacturing activities are carried out, services are provided, or work is performed, there are costs. In circumstances where resources are relatively limited, companies should try to achieve what is needed for the business. Competition has become tougher and tougher; after 1990, the economic environment made decision errors caused by the reduced amount of information on costs increasingly expensive. The work aims to analyze the Georges Perrin calculation method in theory and in practice. It considers, on the one hand, the importance of the right choice of costing method in an economic entity, and on the other hand highlights the advantages and disadvantages of the method, analyzing the situations in which its use is opportune.
Theory, Method and Games in Communication.
MacLean, Malcolm S., Jr.
The thesis that the methods in mass communication research for collecting, analyzing and interpreting data should relate directly to the theoretical models of communication is argued in this speech. Communication models indicate that a source can usually communicate more effectively in the presence of feedback from relevant receivers on their…
Preconditioning of iterative methods - theory and applications
Czech Academy of Sciences Publication Activity Database
Axelsson, Owe; Blaheta, Radim; Neytcheva, M.; Pultarová, I.
2015-01-01
Roč. 22, č. 6 (2015), s. 901-902 ISSN 1070-5325 Institutional support: RVO:68145535 Keywords : preconditioning * iterative methods * applications Subject RIV: BA - General Mathematics Impact factor: 1.431, year: 2015 http://onlinelibrary.wiley.com/doi/10.1002/nla.2016/epdf
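The entry above gives only bibliographic metadata. As a generic, hypothetical illustration of its topic (the matrix, tolerance, and diagonal choice below are assumptions, not from the publication), the sketch applies the simplest preconditioner, the Jacobi (diagonal) scaling M = diag(A), inside the conjugate gradient iteration for a symmetric positive-definite system.

```python
import math

def pcg(A, b, tol=1e-10, max_iter=200):
    """Conjugate gradients with a Jacobi (diagonal) preconditioner,
    for a symmetric positive-definite matrix A given as nested lists."""
    n = len(b)
    minv = [1.0 / A[i][i] for i in range(n)]          # M^{-1} = diag(A)^{-1}
    x = [0.0] * n
    r = b[:]                                           # r = b - A x0 with x0 = 0
    z = [minv[i] * r[i] for i in range(n)]             # preconditioned residual
    p = z[:]
    rz = sum(r[i] * z[i] for i in range(n))
    for _ in range(max_iter):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rz / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        if math.sqrt(sum(ri * ri for ri in r)) < tol:
            break
        z = [minv[i] * r[i] for i in range(n)]
        rz_new = sum(r[i] * z[i] for i in range(n))
        p = [z[i] + (rz_new / rz) * p[i] for i in range(n)]
        rz = rz_new
    return x, r

A = [[4.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 2.0]]
b = [1.0, 2.0, 3.0]
x, r = pcg(A, b)
residual = math.sqrt(sum(ri * ri for ri in r))
```

For a 3-by-3 system CG converges in at most three iterations; the preconditioner's payoff appears for large, ill-conditioned matrices, where it compresses the spectrum that governs the convergence rate.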
Method and Theory of Intergroups in Organizations.
1980-10-29
"Relations and Organizational Diagnosis" (T.R. #3) identified the philosophical formulations of clinical methods for studying organizations, reviewed the...earliest clinical studies of organizations that took a group and intergroup perspective, and critiqued existing approaches to organizational diagnosis. This
The method of boson expansions in quantum theory
International Nuclear Information System (INIS)
Garbaczewski, P.
1977-06-01
A review is presented of boson expansion methods applied in quantum theory, e.g. expansions of spin, bifermion and fermion operators in cases of finite and infinite number of degrees of freedom. The basic purpose of the paper is to formulate the most general criterion allowing one to obtain the so-called finite spin approximation of any given Bose field theory and the class of fermion theories associated with it. On the other hand, we also need to be able to reconstruct the primary Bose field theory, when any finite spin or Fermi systems are given
Numerical methods: Analytical benchmarking in transport theory
International Nuclear Information System (INIS)
Ganapol, B.D.
1988-01-01
Numerical methods applied to reactor technology have reached a high degree of maturity. Certainly one- and two-dimensional neutron transport calculations have become routine, with several programs available on personal computer and the most widely used programs adapted to workstation and minicomputer computational environments. With the introduction of massive parallelism and as experience with multitasking increases, even more improvement in the development of transport algorithms can be expected. Benchmarking an algorithm is usually not a very pleasant experience for the code developer. Proper algorithmic verification by benchmarking involves the following considerations: (1) conservation of particles, (2) confirmation of intuitive physical behavior, and (3) reproduction of analytical benchmark results. By using today's computational advantages, new basic numerical methods have been developed that allow a wider class of benchmark problems to be considered
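The abstract above lists "reproduction of analytical benchmark results" among the verification criteria. A minimal, hypothetical illustration (not from the article; the slab, cross section, and differencing scheme are assumptions): the uncollided flux through a purely absorbing slab has the closed-form benchmark e^{-Σ_t x}, against which a diamond-difference transport sweep can be checked.

```python
import math

def uncollided_flux(sigma_t, length, n, phi0=1.0):
    """March d(phi)/dx = -sigma_t * phi across a purely absorbing slab
    with diamond-difference steps, as in a discrete-ordinates sweep."""
    h = length / n
    phi = phi0
    for _ in range(n):
        # Diamond difference: (phi_out - phi_in)/h = -sigma_t*(phi_in + phi_out)/2
        phi = phi * (1 - sigma_t * h / 2) / (1 + sigma_t * h / 2)
    return phi

sigma_t, L = 1.0, 5.0
numeric = uncollided_flux(sigma_t, L, 2000)
benchmark = math.exp(-sigma_t * L)      # analytic benchmark: e^{-Sigma_t * x}
```

The second-order scheme reproduces the five-mean-free-path attenuation to a relative error far below 0.01%, the kind of agreement an analytical benchmark is meant to certify.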
Pairing interaction method in crystal field theory
International Nuclear Information System (INIS)
Dushin, R.B.
1989-01-01
Expressions permitting the description of matrix elements of the secular equation for metal-ligand pairs via parameters of the pairing-interaction method, genealogical coefficients and Clebsch-Gordan coefficients are given. The expressions are applicable to any level or term of f^n and d^n configurations. Matrix elements for the terms of maximum multiplicity of f^n and d^n configurations, and also for the main levels of f^n configurations, are tabulated.
Partial differential equations methods, applications and theories
Hattori, Harumi
2013-01-01
This volume is an introductory level textbook for partial differential equations (PDE's) and suitable for a one-semester undergraduate level or two-semester graduate level course in PDE's or applied mathematics. Chapters One to Five are organized according to the equations and the basic PDE's are introduced in an easy to understand manner. They include the first-order equations and the three fundamental second-order equations, i.e. the heat, wave and Laplace equations. Through these equations we learn the types of problems, how we pose the problems, and the methods of solutions such as the separation of variables and the method of characteristics. The modeling aspects are explained as well. The methods introduced in earlier chapters are developed further in Chapters Six to Twelve. They include the Fourier series, the Fourier and the Laplace transforms, and the Green's functions. The equations in higher dimensions are also discussed in detail. This volume is application-oriented and rich in examples. Going thr...
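The separation-of-variables technique the textbook above introduces can be checked numerically. As an illustrative sketch (the mesh, time step, and initial data are assumptions, not the book's examples): for u_t = u_xx on (0, π) with zero boundary values and u(x, 0) = sin x, separation of variables gives the exact solution u = e^{-t} sin x, which an explicit finite-difference scheme should reproduce.

```python
import math

def heat_explicit(n=50, dt=1e-4, t_final=0.1):
    """Explicit finite differences for u_t = u_xx on (0, pi), u = 0 at both
    ends, u(x, 0) = sin x; separation of variables gives u = e^{-t} sin x."""
    dx = math.pi / n
    assert dt <= dx * dx / 2, "explicit scheme stability limit"
    u = [math.sin(i * dx) for i in range(n + 1)]
    steps = round(t_final / dt)
    for _ in range(steps):
        # Discrete Laplacian at interior nodes; boundaries stay at zero.
        lap = [0.0] + [(u[i-1] - 2*u[i] + u[i+1]) / dx**2 for i in range(1, n)] + [0.0]
        u = [u[i] + dt * lap[i] for i in range(n + 1)]
    return u, dx, steps * dt

u, dx, t = heat_explicit()
exact_mid = math.exp(-t) * math.sin(math.pi / 2)   # series solution at x = pi/2
```

The agreement at the midpoint (within the O(dx^2) + O(dt) truncation error) confirms the single-mode series solution obtained by separation of variables.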
Grabska-Barwińska, Agnieszka; Latham, Peter E
2014-06-01
We use mean field techniques to compute the distribution of excitatory and inhibitory firing rates in large networks of randomly connected spiking quadratic integrate and fire neurons. These techniques are based on the assumption that activity is asynchronous and Poisson. For most parameter settings these assumptions are strongly violated; nevertheless, so long as the networks are not too synchronous, we find good agreement between mean field prediction and network simulations. Thus, much of the intuition developed for randomly connected networks in the asynchronous regime applies to mildly synchronous networks.
Variational, projection methods and Pade approximants in scattering theory
International Nuclear Information System (INIS)
Turchetti, G.
1980-12-01
Several aspects of scattering theory are discussed in a perturbative scheme, in which the Pade approximant method plays an important role. Soliton solutions are also discussed within the same scheme. (L.C.) [pt
Quantizing non-Lagrangian gauge theories: an augmentation method
International Nuclear Information System (INIS)
Lyakhovich, Simon L.; Sharapov, Alexei A.
2007-01-01
We discuss a recently proposed method of quantizing general non-Lagrangian gauge theories. The method can be implemented in many different ways, in particular, it can employ a conversion procedure that turns an original non-Lagrangian field theory in d dimensions into an equivalent Lagrangian, topological field theory in d+1 dimensions. The method involves, besides the classical equations of motion, one more geometric ingredient called the Lagrange anchor. Different Lagrange anchors result in different quantizations of one and the same classical theory. Given the classical equations of motion and Lagrange anchor as input data, a new procedure, called the augmentation, is proposed to quantize non-Lagrangian dynamics. Within the augmentation procedure, the originally non-Lagrangian theory is absorbed by a wider Lagrangian theory on the same space-time manifold. The augmented theory is not generally equivalent to the original one as it has more physical degrees of freedom than the original theory. However, the extra degrees of freedom are factorized out in a certain regular way both at classical and quantum levels. The general techniques are exemplified by quantizing two non-Lagrangian models of physical interest
Fire Regime Characteristics along Environmental Gradients in Spain
Directory of Open Access Journals (Sweden)
María Vanesa Moreno
2016-11-01
Full Text Available Concern regarding global change has increased the need to understand the relationship between fire regime characteristics and the environment. Pyrogeographical theory suggests that fire regimes are constrained by climate, vegetation and fire ignition processes, but it is not obvious how fire regime characteristics are related to those factors. We used a three-matrix approach with a multivariate statistical methodology that combined an ordination method and fourth-corner analysis for hypothesis testing to investigate the relationship between fire regime characteristics and environmental gradients across Spain. Our results suggest that fire regime characteristics (i.e., density and seasonality of fire activity) are constrained primarily by direct gradients based on climate, population, and resource gradients based on forest potential productivity. Our results can be used to establish a predictive model for how fire regimes emerge in order to support fire management, particularly as global environmental changes impact fire regime characteristics.
Quantal density functional theory II. Approximation methods and applications
International Nuclear Information System (INIS)
Sahni, Viraht
2010-01-01
This book is on approximation methods and applications of Quantal Density Functional Theory (QDFT), a new local effective-potential-energy theory of electronic structure. What distinguishes the theory from traditional density functional theory is that the electron correlations due to the Pauli exclusion principle, Coulomb repulsion, and the correlation contribution to the kinetic energy -- the Correlation-Kinetic effects -- are separately and explicitly defined. As such it is possible to study each property of interest as a function of the different electron correlations. Approximations methods based on the incorporation of different electron correlations, as well as a many-body perturbation theory within the context of QDFT, are developed. The applications are to the few-electron inhomogeneous electron gas systems in atoms and molecules, as well as to the many-electron inhomogeneity at metallic surfaces. (orig.)
Error Parsing: An alternative method of implementing social judgment theory
Crystal C. Hall; Daniel M. Oppenheimer
2015-01-01
We present a novel method of judgment analysis called Error Parsing, based upon an alternative method of implementing Social Judgment Theory (SJT). SJT and Error Parsing both posit the same three components of error in human judgment: error due to noise, error due to cue weighting, and error due to inconsistency. In that sense, the broad theory and framework are the same. However, SJT and Error Parsing were developed to answer different questions, and thus use different m...
Scattering theory in quantum mechanics. Physical principles and mathematical methods
International Nuclear Information System (INIS)
Amrein, W.O.; Jauch, J.M.; Sinha, K.B.
1977-01-01
A contemporary approach is given to the classical topics of physics. The purpose is to explain the basic physical concepts of quantum scattering theory, to develop the necessary mathematical tools for their description, to display the interrelation between the three methods (the Schroedinger equation solutions, stationary scattering theory, and time dependence) to derive the properties of various quantities of physical interest with mathematically rigorous methods
New numerical methods for quantum field theories on the continuum
Energy Technology Data Exchange (ETDEWEB)
Emirdag, P.; Easter, R.; Guralnik, G.S.; Hahn, S.C
2000-03-01
The Source Galerkin Method is a new numerical technique that is being developed to solve Quantum Field Theories on the continuum. It is not based on Monte Carlo techniques and has a measure to evaluate relative errors. It promises to increase the accuracy and speed of calculations, and takes full advantage of symmetries of the theory. The application of this method to the non-linear σ model is outlined.
Restricted Kalman Filtering Theory, Methods, and Application
Pizzinga, Adrian
2012-01-01
In statistics, the Kalman filter is a mathematical method whose purpose is to use a series of measurements observed over time, containing random variations and other inaccuracies, and produce estimates that tend to be closer to the true unknown values than those that would be based on a single measurement alone. This Brief offers developments on Kalman filtering subject to general linear constraints. There are essentially three types of contributions: new proofs for results already established; new results within the subject; and applications in investment analysis and macroeconomics, where th
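The measurement-update recursion the abstract describes can be sketched in its simplest, unconstrained scalar form (the book's subject, filtering under general linear constraints, is not shown here; the noise variances and test signal below are illustrative assumptions).

```python
def kalman_1d(measurements, q=1e-4, r=0.5, x0=0.0, p0=1.0):
    """Scalar Kalman filter for a random-walk state observed in noise:
    x_k = x_{k-1} + w_k (variance q),  y_k = x_k + v_k (variance r)."""
    x, p = x0, p0
    estimates = []
    for y in measurements:
        p = p + q                      # predict: variance grows by process noise
        k = p / (p + r)                # Kalman gain
        x = x + k * (y - x)            # update with the innovation y - x
        p = (1 - k) * p                # posterior variance shrinks
        estimates.append(x)
    return estimates

# A constant true level of 5.0 observed exactly: estimates approach 5.0
# from the prior x0 = 0 as the gain sequence accumulates evidence.
est = kalman_1d([5.0] * 50)
```

A restricted filter in the book's sense would additionally project each updated state onto the constraint set Dx = d; the recursion above is the unconstrained core that such projections modify.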
The finite section method and problems in frame theory
DEFF Research Database (Denmark)
Christensen, Ole; Strohmer, T.
2005-01-01
solves related computational problems in frame theory. In the case of a frame which is localized w.r.t. an orthonormal basis we are able to estimate the rate of approximation. The results are applied to the reproducing kernel frame appearing in the theory for shift-invariant spaces generated by a Riesz......The finite section method is a convenient tool for approximation of the inverse of certain operators using finite-dimensional matrix techniques. In this paper we demonstrate that the method is very useful in frame theory: it leads to an efficient approximation of the inverse frame operator and also...
On iteration-separable method on the multichannel scattering theory
International Nuclear Information System (INIS)
Zubarev, A.L.; Ivlieva, I.N.; Podkopaev, A.P.
1975-01-01
The iteration-separable method for solving equations of the Lippmann-Schwinger type is suggested. Exponential convergence of the method is proven. Numerical convergence is illustrated for e + H scattering. Application of the method to the theory of multichannel scattering is formulated.
N-body methods in the theory of nuclear reactions
International Nuclear Information System (INIS)
Bencze, Gy.
1980-08-01
The traditional method of applying two-body methods for the study of nuclear reactions is briefly reviewed. The recent developments in the N particle scattering theory are described in detail. The application of the methods in the study of effective two and few-body problems is also considered. (P.L.)
International Nuclear Information System (INIS)
Sommerfeldt, P.; Reisner, H.; Hartmann, G.; Kulicke, P.
1988-01-01
The method aims at increasing the lifetime of secondary coolant circuit components in nuclear power plants through determination of the optimum mode of operation of the chemical water regime with the help of radioisotopes.
Hamiltonian lattice field theory: Computer calculations using variational methods
International Nuclear Information System (INIS)
Zako, R.L.
1991-01-01
I develop a variational method for systematic numerical computation of physical quantities -- bound state energies and scattering amplitudes -- in quantum field theory. An infinite-volume, continuum theory is approximated by a theory on a finite spatial lattice, which is amenable to numerical computation. I present an algorithm for computing approximate energy eigenvalues and eigenstates in the lattice theory and for bounding the resulting errors. I also show how to select basis states and choose variational parameters in order to minimize errors. The algorithm is based on the Rayleigh-Ritz principle and Kato's generalizations of Temple's formula. The algorithm could be adapted to systems such as atoms and molecules. I show how to compute Green's functions from energy eigenvalues and eigenstates in the lattice theory, and relate these to physical (renormalized) coupling constants, bound state energies and Green's functions. Thus one can compute approximate physical quantities in a lattice theory that approximates a quantum field theory with specified physical coupling constants. I discuss the errors in both approximations. In principle, the errors can be made arbitrarily small by increasing the size of the lattice, decreasing the lattice spacing and computing sufficiently long. Unfortunately, I do not understand the infinite-volume and continuum limits well enough to quantify errors due to the lattice approximation. Thus the method is currently incomplete. I apply the method to real scalar field theories using a Fock basis of free particle states. All needed quantities can be calculated efficiently with this basis. The generalization to more complicated theories is straightforward. I describe a computer implementation of the method and present numerical results for simple quantum mechanical systems
Hamiltonian lattice field theory: Computer calculations using variational methods
International Nuclear Information System (INIS)
Zako, R.L.
1991-01-01
A variational method is developed for systematic numerical computation of physical quantities-bound state energies and scattering amplitudes-in quantum field theory. An infinite-volume, continuum theory is approximated by a theory on a finite spatial lattice, which is amenable to numerical computation. An algorithm is presented for computing approximate energy eigenvalues and eigenstates in the lattice theory and for bounding the resulting errors. It is shown how to select basis states and choose variational parameters in order to minimize errors. The algorithm is based on the Rayleigh-Ritz principle and Kato's generalizations of Temple's formula. The algorithm could be adapted to systems such as atoms and molecules. It is shown how to compute Green's functions from energy eigenvalues and eigenstates in the lattice theory, and relate these to physical (renormalized) coupling constants, bound state energies and Green's functions. Thus one can compute approximate physical quantities in a lattice theory that approximates a quantum field theory with specified physical coupling constants. The author discusses the errors in both approximations. In principle, the errors can be made arbitrarily small by increasing the size of the lattice, decreasing the lattice spacing and computing sufficiently long. Unfortunately, the author does not understand the infinite-volume and continuum limits well enough to quantify errors due to the lattice approximation. Thus the method is currently incomplete. The method is applied to real scalar field theories using a Fock basis of free particle states. All needed quantities can be calculated efficiently with this basis. The generalization to more complicated theories is straightforward. The author describes a computer implementation of the method and present numerical results for simple quantum mechanical systems
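Both abstracts above rest on the Rayleigh-Ritz principle: the expectation value of the Hamiltonian in any trial state is an upper bound on the ground-state energy. A minimal illustration (not the lattice computation itself; the quartic Hamiltonian, Gaussian trial family, and scan grid are assumptions): for H = p²/2 + x⁴ with ħ = m = 1, a trial state ψ ∝ exp(-a x²) gives ⟨p²⟩/2 = a/2 and ⟨x⁴⟩ = 3/(16a²), and minimizing over the width a yields a rigorous variational bound.

```python
def energy(a):
    """Variational energy <H> for trial psi ~ exp(-a x^2) with
    H = p^2/2 + x^4:  <p^2>/2 = a/2,  <x^4> = 3/(16 a^2)."""
    return a / 2.0 + 3.0 / (16.0 * a * a)

# Coarse scan over the width parameter; calculus gives the minimizer
# a^3 = 3/4, hence E_min = 3a/4 ~ 0.6814, an upper bound on E_0.
best_a = min((0.5 + 0.001 * k for k in range(1000)), key=energy)
e_min = energy(best_a)
```

Enlarging the trial family (the lattice Fock basis of the thesis) tightens this bound, and Temple-type formulas bound the error from below, which is exactly the error-control structure the abstracts describe.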
Application of Canonical Effective Methods to Background-Independent Theories
Buyukcam, Umut
Effective formalisms play an important role in analyzing phenomena above some given length scale when complete theories are not accessible. In diverse exotic but physically important cases, the usual path-integral techniques of a standard Quantum Field Theory approach seldom serve as adequate tools. This thesis presents a new effective method for quantum systems, called the Canonical Effective Method, which has particularly wide applicability in background-independent theories, as in the case of gravitational phenomena. The central purpose of this work is to employ these techniques to obtain semi-classical dynamics from canonical quantum gravity theories. An application to non-associative quantum mechanics is developed and testable results are obtained. Types of non-associative algebras relevant for magnetic-monopole systems are discussed. Possible modifications of the hypersurface deformation algebra and the emergence of effective space-times are presented.
Current-drive theory I: survey of methods
International Nuclear Information System (INIS)
Fisch, N.J.
1986-01-01
A variety of methods may be employed to drive toroidal electric current in a plasma torus. The most promising scheme is the injection of radiofrequency waves into the torus to push electrons or ions. The pushing mechanism can be either the direct conversion of wave to particle momentum, or a more subtle effect involving the alteration by waves of interparticle collisions. Alternatively, current can be produced through the injection of neutral beams, the reflection of plasma radiation, or the injection of frozen pellets. The efficacy of these schemes, in a variety of regimes, will be assessed. 9 refs
International Nuclear Information System (INIS)
ZERBO Issa
2010-01-01
A bibliographic study of characterization techniques for silicon solar cells, diodes, bulk silicon and silicon wafers is presented. The influence of the modulation frequency and of bulk and surface recombination phenomena on the profiles of carrier densities, photocurrent and photovoltage has been demonstrated. The study of surface recombination velocities showed that a bifacial silicon solar cell of the Back Surface Field type behaves like an ohmic-contact solar cell for modulation frequencies above 40 kHz. The applicability in the frequency-dynamic regime, over the range [0 - 40 kHz], of three techniques for determining steady-state recombination parameters is shown. A technique for diffusion-length determination in the range (200 Hz - 40 kHz] is proposed. It rests on the measurement of the short-circuit current phase, which is compared with the theoretical curve of the short-circuit current phase; the intersection of the experimental and theoretical phase curves yields the minority-carrier effective diffusion length. An equivalent electric model of a solar cell in the frequency-dynamic regime is proposed. A modelling study of the bifacial solar cell's shunt resistance and space-charge-zone capacity is carried out, based on a method for determining these parameters proposed in steady state. (Author) [fr
Guo, L; Han, S S; Liu, X; Cheng, Y; Xu, Z Z; Fan, J; Chen, J; Chen, S G; Becker, W; Blaga, C I; DiChiara, A D; Sistrunk, E; Agostini, P; DiMauro, L F
2013-01-04
A calculation of the second-order (rescattering) term in the S-matrix expansion of above-threshold ionization is presented for the case when the binding potential is the unscreened Coulomb potential. Technical problems related to the divergence of the Coulomb scattering amplitude are avoided in the theory by considering the depletion of the atomic ground state due to the applied laser field, which is well defined and does not require the introduction of a screening constant. We focus on the low-energy structure, which was observed in recent experiments with a midinfrared wavelength laser field. Both the spectra and, in particular, the observed scaling versus the Keldysh parameter and the ponderomotive energy are reproduced. The theory provides evidence that the origin of the structure lies in the long-range Coulomb interaction.
International Nuclear Information System (INIS)
Purohit, Gunjan; Rawat, Priyanka; Chauhan, Prashant; Mahmoud, Saleh T.
2015-01-01
This article presents a higher-order paraxial (non-paraxial) theory for ring-ripple formation on an intense Gaussian laser beam and its propagation in plasma, taking into account the relativistic-ponderomotive nonlinearity. The intensity-dependent dielectric constant of the plasma has been determined for the main laser beam and for a ring ripple superimposed on it; the dielectric constant of the plasma is modified by the contribution of the electric field vector of the ring ripple. Nonlinear differential equations have been formulated to examine the growth of the ring ripple in plasma and the self-focusing of the main and ring-rippled laser beams, using higher-order paraxial theory. These equations have been solved numerically for different laser intensities and plasma frequencies, using well established experimental laser and plasma parameters. It is observed that the focusing of both beams (main and ring-rippled) becomes faster in the non-paraxial region when the eikonal and other relevant quantities are expanded up to the fourth power of r. A split intensity profile of the laser beam in the plasma is observed, due to uneven focusing/defocusing of the axial and off-axial rays. The growth of the ring ripple increases with laser beam intensity, and the intensity profile of the ring-rippled laser beam is modified by the contribution of the growth rate.
Energy Technology Data Exchange (ETDEWEB)
Berthier, G [Commissariat a l' Energie Atomique, Saclay (France). Centre d' Etudes Nucleaires
1965-07-01
By analyzing the thermal transition conditions produced by quenching a sample in a furnace maintained at a high temperature, it is possible to study the thermal diffusivity of certain materials and solid-state structural transformations, from both a qualitative and a quantitative standpoint. For instance, the transformation energy of {alpha}-quartz into {beta}-quartz and the Wigner energy stored within neutron-irradiated beryllium oxide have been measured. (author)
Between the theory and method: the interpretation of the theory of Emilia Ferreiro for literacy
Directory of Open Access Journals (Sweden)
Fernanda Cargnin Gonçalves
2008-12-01
This article aims to show the difficulty that first-grade teachers at a municipal public school in Florianópolis/SC have in understanding the theory of Emilia Ferreiro. It presents Ferreiro's actual theory as described in her book "Psicogênese da Língua Escrita" (Psychogenesis of Written Language), co-authored with Teberosky, and the interpretations of literacy observed in the teachers' practice. It also offers essay-based options for teaching a child to read and write that avoid labeling students by literacy phase, showing what is possible without turning the theory into a teaching method.
When Smokey says "No": Fire-less methods for growing plants adapted to cultural fire regimes
Daniela Shebitz; Justine E. James
2010-01-01
Two culturally-significant plants (sweetgrass [Anthoxanthum nitens] and beargrass [Xerophyllum tenax]) are used as case studies for investigating methods of restoring plant populations that are adapted to indigenous burning practices without using fire. Reports from tribal members that the plants of interest were declining in traditional gathering areas provided the...
Gauge-invariant variational methods for Hamiltonian lattice gauge theories
International Nuclear Information System (INIS)
Horn, D.; Weinstein, M.
1982-01-01
This paper develops variational methods for calculating the ground-state and excited-state spectrum of Hamiltonian lattice gauge theories defined in the A_0 = 0 gauge. The scheme introduced in this paper has the advantage of allowing one to convert more familiar tools such as mean-field, Hartree-Fock, and real-space renormalization-group approximations, which are by their very nature gauge-noninvariant methods, into fully gauge-invariant techniques. We show that these methods apply in the same way to both Abelian and non-Abelian theories, and that they are at least powerful enough to describe correctly the physics of periodic quantum electrodynamics (PQED) in (2+1) and (3+1) space-time dimensions. This paper formulates the problem for both Abelian and non-Abelian theories and shows how to reduce the Rayleigh-Ritz problem to that of computing the partition function of a classical spin system. We discuss the evaluation of the effective spin problem that one derives for PQED and then discuss ways of carrying out the evaluation of the partition function for the system equivalent to a non-Abelian theory. The explicit form of the effective partition function for the non-Abelian theory is derived, but because the evaluation of this function is considerably more complicated than the one derived in the Abelian theory, no explicit evaluation of this function is presented. However, by comparing the gauge-projected Hartree-Fock wave function for PQED with that of the pure SU(2) gauge theory, we are able to show that extremely interesting differences emerge between these theories even at this simple level. We close with a discussion of fermions and a discussion of how one can extend these ideas to allow the computation of the glueball and hadron spectrum.
Classical and modern numerical analysis theory, methods and practice
Ackleh, Azmy S; Kearfott, R Baker; Seshaiyer, Padmanabhan
2009-01-01
Mathematical Review and Computer Arithmetic: Mathematical Review; Computer Arithmetic; Interval Computations. Numerical Solution of Nonlinear Equations of One Variable: Introduction; Bisection Method; The Fixed Point Method; Newton's Method (Newton-Raphson Method); The Univariate Interval Newton Method; Secant Method and Müller's Method; Aitken Acceleration and Steffensen's Method; Roots of Polynomials; Additional Notes and Summary. Numerical Linear Algebra: Basic Results from Linear Algebra; Normed Linear Spaces; Direct Methods for Solving Linear Systems; Iterative Methods for Solving Linear Systems; The Singular Value Decomposition. Approximation Theory: Introduction; Norms, Projections, Inner Product Spaces, and Orthogonalization in Function Spaces; Polynomial Approximation; Piecewise Polynomial Approximation; Trigonometric Approximation; Rational Approximation; Wavelet Bases; Least Squares Approximation on a Finite Point Set. Eigenvalue-Eigenvector Computation: Basic Results from Linear Algebra; The Power Method; The Inverse Power Method; Deflation; T...
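Among the root-finding chapters listed above (bisection, fixed point, Newton), Newton's method is the canonical example. The sketch below is a generic illustration of the Newton-Raphson iteration, not code taken from the book; the function names and tolerances are invented for the example.

```python
def newton(f, df, x0, tol=1e-12, max_iter=50):
    """Newton-Raphson iteration: x_{k+1} = x_k - f(x_k) / f'(x_k).

    Converges quadratically near a simple root when f' is nonzero there.
    """
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            return x
    raise RuntimeError("Newton iteration did not converge")

# find sqrt(2) as the positive root of f(x) = x^2 - 2
root = newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, x0=1.0)
```

Starting from x0 = 1.0, the iterates converge to sqrt(2) in a handful of steps, illustrating the quadratic convergence the book's analysis chapters formalize.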
International Nuclear Information System (INIS)
Stephens, C. R.
2006-01-01
In this article I give a brief account of the development of research in the Renormalization Group in Mexico, paying particular attention to novel conceptual and technical developments associated with the tool itself, rather than applications of standard Renormalization Group techniques. Some highlights include the development of new methods for understanding and analysing two extreme regimes of great interest in quantum field theory: the "high temperature" regime and the Regge regime.
A density gradient theory based method for surface tension calculations
DEFF Research Database (Denmark)
Liang, Xiaodong; Michelsen, Michael Locht; Kontogeorgis, Georgios
2016-01-01
The density gradient theory has become a widely used framework for calculating surface tension, within which the same equation of state is used for the interface and the bulk phases, because it is a theoretically sound, consistent and computationally affordable approach. Based on the observation that the optimal density path from the geometric-mean density gradient theory passes through the saddle point of the tangent plane distance to the bulk phases, we propose to estimate surface tension with an approximate density path profile that goes through this saddle point. The linear density gradient theory, which assumes linearly distributed densities between the two bulk phases, has also been investigated. Numerical problems do not occur with these density path profiles. These two approximation methods, together with the full density gradient theory, have been used to calculate the surface tension of various...
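The geometric-mean simplification underlying this abstract reduces the surface tension to a single quadrature, sigma = ∫ sqrt(2 c Δω(ρ)) dρ, across the interface densities. The sketch below is a hedged illustration with an invented double-well model for Δω (not the equation of state used in the paper), chosen so the integral has a closed form to check against.

```python
import math

def surface_tension(delta_omega, rho1, rho2, c, n=2000):
    """sigma = integral_{rho1}^{rho2} sqrt(2 * c * delta_omega(rho)) d rho,
    evaluated with the composite trapezoidal rule."""
    h = (rho2 - rho1) / n
    total = 0.0
    for i in range(n + 1):
        rho = rho1 + i * h
        weight = 0.5 if i in (0, n) else 1.0
        total += weight * math.sqrt(2.0 * c * delta_omega(rho))
    return total * h

# invented double-well grand-potential excess (illustrative only):
# delta_omega vanishes at both bulk densities, as it must
k, rho1, rho2, c = 0.5, 0.2, 0.9, 1.3
dw = lambda rho: k * (rho - rho1) ** 2 * (rho2 - rho) ** 2

sigma = surface_tension(dw, rho1, rho2, c)
# for this model the integrand is sqrt(2ck)(rho - rho1)(rho2 - rho),
# so the exact value is sqrt(2ck) * (rho2 - rho1)^3 / 6
analytic = math.sqrt(2.0 * c * k) * (rho2 - rho1) ** 3 / 6.0
```

The numerical quadrature reproduces the closed-form value, which is the kind of consistency check one would run before swapping in a real equation of state.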
Theory and design methods of special space orbits
Zhang, Yasheng; Zhou, Haijun
2017-01-01
This book focuses on the theory and design of special space orbits. Offering a systematic and detailed introduction to the hovering orbit, spiral cruising orbit, multi-target rendezvous orbit, initiative approaching orbit, responsive orbit and earth pole-sitter orbit, it also discusses the concept, theory, design methods and application of special space orbits, particularly the design and control method based on kinematics and astrodynamics. In addition the book presents the latest research and its application in space missions. It is intended for researchers, engineers and postgraduates, especially those working in the fields of orbit design and control, as well as space-mission planning and research.
Cunha-Filho, A. G.; Briend, Y. P. J.; de Lima, A. M. G.; Donadon, M. V.
2018-05-01
The flutter boundary prediction of complex aeroelastic systems is not an easy task. In some cases, these analyses may become prohibitive due to the high computational cost and time associated with the large number of degrees of freedom of the aeroelastic models, particularly when the aeroelastic model incorporates a control strategy with the aim of suppressing the flutter phenomenon, such as the use of viscoelastic treatments. In this situation, the use of a model reduction method is essential. However, the construction of a modal reduction basis for aeroviscoelastic systems is still a challenge, owing to the inherent frequency- and temperature-dependent behavior of the viscoelastic materials. Thus, the main contribution intended for the present study is to propose an efficient and accurate iterative enriched Ritz basis to deal with aeroviscoelastic systems. The main features and capabilities of the proposed model reduction method are illustrated in the prediction of flutter boundary for a thin three-layer sandwich flat panel and a typical aeronautical stiffened panel, both under supersonic flow.
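The core mechanics of any Ritz-type model reduction, independent of the iterative enrichment scheme this abstract proposes, is Galerkin projection of the full system matrices onto a reduced basis. A minimal sketch with an invented 3-DOF stiffness matrix and a fixed (non-enriched) two-vector basis, purely for illustration:

```python
def matmul(A, B):
    """Dense matrix product for list-of-lists matrices."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

def reduce_system(K, Phi):
    """Galerkin (Ritz) projection K_r = Phi^T K Phi onto the basis Phi."""
    return matmul(transpose(Phi), matmul(K, Phi))

# invented 3-DOF chain stiffness matrix and a 2-column Ritz basis
K = [[2.0, -1.0, 0.0],
     [-1.0, 2.0, -1.0],
     [0.0, -1.0, 2.0]]
Phi = [[1.0, 1.0],
       [1.0, 0.0],
       [1.0, -1.0]]

K_r = reduce_system(K, Phi)  # 2x2 reduced stiffness
```

The reduced 2x2 matrix replaces the full 3x3 one in subsequent analyses; in the paper's setting the basis would additionally be enriched iteratively to track the frequency- and temperature-dependent viscoelastic behavior.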
Development of a new IHA method for impact assessment of climate change on flow regime
Yang, Tao; Cui, Tong; Xu, Chong-Yu; Ciais, Philippe; Shi, Pengfei
2017-09-01
The Indicators of Hydrologic Alteration (IHA), based on 33 parameters in five dimensions (flow magnitude, timing, duration, frequency and rate of change), have been widely used to evaluate hydrologic alteration in river systems. Yet serious inter-correlation exists among those parameters, which consistently leads to under- or overestimation of actual hydrological changes. Toward this end, a new method (Representative-IHA, RIHA) is developed by removing redundant indicators using the Criteria Importance Through Intercriteria Correlation (CRITIC) algorithm. RIHA is tested by evaluating the effects of future climate change on hydro-ecology in the Niger River of Africa. Future flows are projected using three watershed hydrological models forced by five general circulation models (GCMs) under three Representative Concentration Pathways (RCP) scenarios. Results show that: (1) RIHA is able to eliminate self-correlations among IHA indicators and identify the dominant characteristics of hydrological alteration in the Upper Niger River; (2) March, September and December streamflow, the 30-day annual maximum, low-pulse duration and fall rates tend to increase over the period 2010-2099, while July streamflow and the 90-day annual minimum streamflow decrease; (3) the Niger River will undergo moderate flow alteration under RCP8.5 in the 2050s and 2080s and low alteration under the other scenarios; (4) future flow alteration may induce increased water temperatures and reduced dissolved oxygen and food resources. Consequently, the aquatic biodiversity and fish community of the Upper Niger River would become more vulnerable in the future. The new method enables a more scientific evaluation of multi-dimensional hydrologic alteration in the context of climate change.
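The CRITIC algorithm the abstract relies on weights each criterion by its contrast (standard deviation) times its conflict with the other criteria (sum of one minus the pairwise correlations), so highly inter-correlated indicators receive less weight. A minimal sketch of standard CRITIC weighting, with an invented decision matrix; the paper's indicator-removal step on top of CRITIC is not reproduced here.

```python
import math

def critic_weights(X):
    """CRITIC weights for a matrix X of alternatives (rows) x criteria (cols).

    Information content of criterion j: C_j = std_j * sum_k (1 - r_jk),
    with std and Pearson r computed on min-max normalised columns.
    """
    m = len(X[0])
    cols = [[row[j] for row in X] for j in range(m)]
    norm = []
    for c in cols:  # min-max normalise each criterion to [0, 1]
        lo, hi = min(c), max(c)
        norm.append([(v - lo) / (hi - lo) for v in c])

    def std(c):
        mu = sum(c) / len(c)
        return math.sqrt(sum((v - mu) ** 2 for v in c) / len(c))

    def corr(a, b):
        ma, mb = sum(a) / len(a), sum(b) / len(b)
        cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
        var = math.sqrt(sum((x - ma) ** 2 for x in a) *
                        sum((y - mb) ** 2 for y in b))
        return cov / var

    info = [std(norm[j]) * sum(1.0 - corr(norm[j], norm[k]) for k in range(m))
            for j in range(m)]
    total = sum(info)
    return [c / total for c in info]

# invented example: 4 alternatives scored on 3 hydrologic indicators
X = [[1.0, 9.0, 3.0],
     [4.0, 2.0, 8.0],
     [7.0, 6.0, 1.0],
     [2.0, 8.0, 5.0]]
weights = critic_weights(X)
```

The weights sum to one; an indicator that strongly correlates with the others contributes little new information and is the natural candidate for removal in a RIHA-style reduction.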
The influence of conservation tillage methods on soil water regimes in semi-arid southern Zimbabwe
Mupangwa, W.; Twomlow, S.; Walker, S.
Planting basins and ripper tillage practices are major components of the recently introduced conservation agriculture package that is being extensively promoted for smallholder farming in Zimbabwe. Besides preparing land for crop planting, these two technologies also help in collecting and using rainwater more efficiently in semi-arid areas. The basin tillage is being targeted for households with limited or no access to draught animals while ripping is meant for smallholder farmers with some draught animal power. Trials were established at four farms in Gwanda and Insiza in southern Zimbabwe to determine soil water contributions and runoff water losses from plots under four different tillage treatments. The tillage treatments were hand-dug planting basins, ripping, conventional spring and double ploughing using animal-drawn implements. The initial intention was to measure soil water changes and runoff losses from cropped plots under the four tillage practices. However, due to total crop failure, only soil water and runoff were measured from bare plots between December 2006 and April 2007. Runoff losses were highest under conventional ploughing. Planting basins retained most of the rainwater that fell during each rainfall event. The amount of rainfall received at each farm significantly influenced the volume of runoff water measured. Runoff water volume increased with increase in the amount of rainfall received at each farm. Soil water content was consistently higher under basin tillage than the other three tillage treatments. Significant differences in soil water content were observed across the farms according to soil types from sand to loamy sand. The basin tillage method gives a better control of water losses from the farmers’ fields. The planting basin tillage method has a greater potential for providing soil water to crops than ripper, double and single conventional ploughing practices.
International Nuclear Information System (INIS)
Yu Mingzhou; Lin Jianzhong; Jin Hanhui; Jiang Ying
2011-01-01
The closure of moment equations for nanoparticle coagulation due to Brownian motion in the entire size regime is performed using a newly proposed method of moments. The equations in the free molecular size regime and the continuum plus near-continuum regime are derived separately, with the fractal moments approximated by third-order Taylor-expansion series. The moment equations for coagulation in the entire size regime are achieved by the harmonic mean solution and Dahneke's solution. The results produced by the quadrature method of moments (QMOM), Pratsinis's log-normal moment method (PMM), the sectional method (SM), and the newly derived Taylor-expansion moment method (TEMOM) are presented and compared in accuracy and efficiency. The TEMOM method with Dahneke's solution produces the most accurate results with higher efficiency than other existing moment models in the entire size regime, and thus it is recommended for subsequent studies of nanoparticle dynamics due to Brownian motion.
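The harmonic-mean solution mentioned above blends the two regime-limited kernels so that whichever is smaller controls the result, recovering the correct limit in each regime. A minimal sketch of that blending rule alone; the kernel values below are invented placeholders, not the paper's size-dependent expressions.

```python
def harmonic_mean_kernel(k_fm, k_co):
    """Harmonic-mean blend of the free-molecular (k_fm) and continuum
    (k_co) coagulation kernels.

    Because 1/K = 1/k_fm + 1/k_co, the smaller kernel dominates, so the
    blend tends to k_fm for small particles and k_co for large ones.
    """
    return k_fm * k_co / (k_fm + k_co)

# small particles: the free-molecular kernel is the bottleneck
small = harmonic_mean_kernel(1.0e-16, 1.0e-12)
# large particles: the continuum kernel is the bottleneck
large = harmonic_mean_kernel(1.0e-12, 1.0e-16)
```

Dahneke's solution plays the same interpolating role via a Knudsen-number-dependent correction rather than a plain harmonic mean.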
Some free boundary problems in the potential flow regime using a level set based method
Energy Technology Data Exchange (ETDEWEB)
Garzon, M.; Bobillo-Ares, N.; Sethian, J.A.
2008-12-09
Recent advances in the field of fluid mechanics with moving fronts are linked to the use of Level Set Methods, a versatile mathematical technique for following free boundaries that undergo topological changes. A challenging class of problems in this context are those related to the solution of a partial differential equation posed on a moving domain, in which the boundary condition for the PDE solver has to be obtained from a partial differential equation defined on the front. This is the case for potential flow models with moving boundaries. Moreover, the fluid front may carry some material substance that diffuses on the front and is advected by the front velocity, as with the use of surfactants to lower surface tension. We present a Level Set based methodology to embed these front-defined partial differential equations in a complete Eulerian framework, fully avoiding the tracking of fluid particles and its known limitations. To show the advantages of this approach in the field of fluid mechanics, we present one particular application: the numerical approximation of a potential flow model to simulate the evolution and breaking of a solitary wave propagating over a sloping bottom, and we compare the level set based algorithm with previous front tracking models.
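The essential level set idea, representing the front implicitly as the zero contour of a function advected by the front velocity, can be shown in one dimension with a first-order upwind scheme. This is a generic textbook sketch, not the authors' Eulerian embedding; the grid, speed and signed-distance initialisation are invented for the example.

```python
def advect_level_set(phi, speed, dx, dt, steps):
    """Advance phi_t + speed * phi_x = 0 with first-order upwinding
    (valid for speed > 0); the front is the zero crossing of phi."""
    c = speed * dt / dx  # CFL number, must be <= 1 for stability
    for _ in range(steps):
        new = phi[:]
        for i in range(1, len(phi)):
            new[i] = phi[i] - c * (phi[i] - phi[i - 1])
        phi = new
    return phi

N = 201
dx = 1.0 / (N - 1)
# signed distance with the front initially at x = 0.3
phi = [i * dx - 0.3 for i in range(N)]
dt = 0.5 * dx
# move the front at unit speed for total time 80 * dt = 0.2
phi = advect_level_set(phi, speed=1.0, dx=dx, dt=dt, steps=80)
```

After advection, the zero crossing of phi sits near x = 0.5, i.e. the front has moved 0.2 at unit speed, with no explicit tracking of marker particles.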
Kurchan, Jorge; Parisi, Giorgio; Urbani, Pierfrancesco; Zamponi, Francesco
2013-10-24
We consider the theory of the glass phase and jamming of hard spheres in the large space dimension limit. Building upon the exact expression for the free-energy functional obtained previously, we find that the random first order transition (RFOT) scenario is realized here with two thermodynamic transitions: the usual Kauzmann point associated with entropy crisis and a further transition at higher pressures in which a glassy structure of microstates is developed within each amorphous state. This kind of glass-glass transition into a phase dominating the higher densities was described years ago by Elisabeth Gardner, and may well be a generic feature of RFOT. Microstates that are small excitations of an amorphous matrix, separated by low entropic or energetic barriers, thus emerge naturally, and modify the high-pressure (or low-temperature) limit of the thermodynamic functions.
Scenistic Methods in Training: Definitions and Theory Grounding
Lyons, Paul
2010-01-01
Purpose: The aim of this article is to describe the scenistic approach to training with corresponding activities and the theory bases that support the approach. Design/methodology/approach: Presented is the definition of the concept of scenistic training along with the step-by-step details of the implementation of the approach. Scenistic methods,…
Advances in computational methods for Quantum Field Theory calculations
Ruijl, B.J.G.
2017-01-01
In this work we describe three methods to improve the performance of Quantum Field Theory calculations. First, we simplify large expressions to speed up numerical integrations. Second, we design Forcer, a program for the reduction of four-loop massless propagator integrals. Third, we extend the R*
The Role of Method and Theory in the IAHR
DEFF Research Database (Denmark)
Geertz, Armin W.; McCutcheon, Russell T.
2016-01-01
A reprint with a new "afterword" of an article published in 2000 in the anthology Perspectives on Method and Theory in the Study of Religion, edited by Armin W. Geertz and Russell T. McCutcheon, Brill, 2000, 3-37....
Models and methods can theory meet the B physics challenge?
Nierste, U
2004-01-01
The B physics experiments of the next generation, BTeV and LHCb, will perform measurements with an unprecedented accuracy. Theory predictions must control hadronic uncertainties with the same precision to extract the desired short-distance information successfully. I argue that this is indeed possible, discuss those theoretical methods in which hadronic uncertainties are under control and list hadronically clean observables.
The Constant Comparative Analysis Method Outside of Grounded Theory
Fram, Sheila M.
2013-01-01
This commentary addresses the gap in the literature regarding discussion of the legitimate use of Constant Comparative Analysis Method (CCA) outside of Grounded Theory. The purpose is to show the strength of using CCA to maintain the emic perspective and how theoretical frameworks can maintain the etic perspective throughout the analysis. My…
Applications of a systematic homogenization theory for nodal diffusion methods
International Nuclear Information System (INIS)
Zhang, Hong-bin; Dorning, J.J.
1992-01-01
The authors have recently developed a self-consistent and systematic lattice cell and fuel bundle homogenization theory, based on a multiple spatial scales asymptotic expansion of the transport equation in the ratio of the mean free path to the reactor characteristic dimension, for use with nodal diffusion methods. The mathematical development leads naturally to self-consistent analytical expressions for homogenized diffusion coefficients and cross sections and for flux discontinuity factors to be used in nodal diffusion calculations. The expressions for the homogenized nuclear parameters that follow from the systematic homogenization theory (SHT) are different from those for the traditional flux- and volume-weighted (FVW) parameters. The calculations summarized here show that the systematic homogenization theory developed recently for nodal diffusion methods yields accurate values for k-eff and assembly powers even when compared with the results of a fine-mesh transport calculation. Thus, it provides a practical alternative to equivalence theory and GET (Ref. 3) and to simplified equivalence theory, which requires auxiliary fine-mesh calculations for assemblies embedded in a typical environment to determine the discontinuity factors and the equivalent diffusion coefficient for a homogenized assembly.
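The traditional flux- and volume-weighted (FVW) parameters that the SHT expressions are contrasted with follow a simple recipe: each region's cross section is weighted by its flux times its volume. A minimal sketch of that baseline FVW recipe (not the paper's SHT expressions), with invented region data:

```python
def fvw_homogenize(sigma, phi, vol):
    """Flux-volume-weighted homogenized cross section:

        sigma_hom = sum_i(sigma_i * phi_i * V_i) / sum_i(phi_i * V_i)

    sigma, phi, vol are per-region cross sections, fluxes and volumes.
    """
    num = sum(s * f * v for s, f, v in zip(sigma, phi, vol))
    den = sum(f * v for f, v in zip(phi, vol))
    return num / den

# invented two-region example: fuel and moderator with different fluxes
sigma_hom = fvw_homogenize(sigma=[1.0, 3.0], phi=[3.0, 1.0], vol=[1.0, 1.0])
```

With equal fluxes the result reduces to a plain volume average; flux depression in one region pulls the homogenized value toward the other region's cross section, which is exactly the effect the SHT corrections refine.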
Inverse operator theory method and its applications in nonlinear physics
International Nuclear Information System (INIS)
Fang Jinqing
1993-01-01
The inverse operator theory method, developed by G. Adomian in recent years, and its applications in nonlinear physics are described systematically. The method can serve as a unified, effective procedure for the solution of nonlinear and/or stochastic continuous dynamical systems without the usual restrictive assumptions. We have realized it through mathematical mechanization. It will have a profound impact on the modelling of problems in physics, mathematics, engineering, economics, biology, and so on. Some typical examples of its application are given and reviewed.
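Adomian's method expands the solution as a series u = Σ u_n, with the nonlinearity replaced by Adomian polynomials A_n and each u_{n+1} obtained by applying the inverse of the linear operator to A_n. A minimal sketch for the classic test problem u' = -u², u(0) = 1 (exact solution 1/(1+t)); for N(u) = u² the Adomian polynomials reduce to the Cauchy-product terms A_n = Σ_k u_k u_{n-k}. The polynomial helpers are invented for this example.

```python
def poly_mul(a, b):
    """Product of polynomials given as coefficient lists in t."""
    out = [0.0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] += x * y
    return out

def poly_add(a, b):
    n = max(len(a), len(b))
    return [(a[i] if i < len(a) else 0.0) +
            (b[i] if i < len(b) else 0.0) for i in range(n)]

def poly_integrate(a):
    """Indefinite integral from 0: shifts degrees up by one."""
    return [0.0] + [c / (k + 1) for k, c in enumerate(a)]

def poly_eval(a, t):
    return sum(c * t ** k for k, c in enumerate(a))

def adomian_series(n_terms):
    """ADM for u' = -u^2, u(0) = 1: u_{n+1} = -integral(A_n)."""
    u = [[1.0]]  # u_0 is the initial condition
    for n in range(n_terms - 1):
        A_n = [0.0]
        for k in range(n + 1):  # A_n = sum_k u_k * u_{n-k}
            A_n = poly_add(A_n, poly_mul(u[k], u[n - k]))
        u.append([-c for c in poly_integrate(A_n)])
    total = [0.0]
    for term in u:  # sum the series terms into one polynomial
        total = poly_add(total, term)
    return total

series = adomian_series(8)  # should reproduce 1 - t + t^2 - t^3 + ...
```

The partial sums reproduce the geometric series of 1/(1+t), so evaluating the eight-term series at t = 0.2 matches the exact solution to a few parts in a million, illustrating why no linearization or perturbation assumption is needed.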
Regression modeling methods, theory, and computation with SAS
Panik, Michael
2009-01-01
Regression Modeling: Methods, Theory, and Computation with SAS provides an introduction to a diverse assortment of regression techniques using SAS to solve a wide variety of regression problems. The author fully documents the SAS programs and thoroughly explains the output produced by the programs. The text presents the popular ordinary least squares (OLS) approach before introducing many alternative regression methods. It covers nonparametric regression, logistic regression (including Poisson regression), Bayesian regression, robust regression, fuzzy regression, random coefficients regression,
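The OLS approach the book opens with has a closed form in the simple one-regressor case; the sketch below shows those normal equations in Python rather than SAS, purely as an illustration of what a PROC REG-style fit computes, with invented data roughly following y = 2x + 1.

```python
def ols_simple(x, y):
    """Ordinary least squares fit of y = intercept + slope * x.

    Closed-form normal equations: slope = Sxy / Sxx, computed from
    deviations about the sample means.
    """
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    sxx = sum((xi - mean_x) ** 2 for xi in x)
    sxy = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
    slope = sxy / sxx
    return mean_y - slope * mean_x, slope

x = [0.0, 1.0, 2.0, 3.0, 4.0]
y = [1.0, 3.1, 4.9, 7.2, 8.8]  # noisy observations of roughly y = 2x + 1
intercept, slope = ols_simple(x, y)
```

The fitted slope and intercept land near 2 and 1, the generating values; the alternative methods the book surveys (robust, Bayesian, fuzzy) modify exactly this estimation step.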
Method for solving quantum field theory in the Heisenberg picture
International Nuclear Information System (INIS)
Nakanishi, Noboru
2004-01-01
This paper is a review of the method for solving quantum field theory in the Heisenberg picture, developed by Abe and Nakanishi since 1991. Starting from field equations and canonical (anti) commutation relations, one sets up a (q-number) Cauchy problem for the totality of d-dimensional (anti) commutators between the fundamental fields, where d is the number of spacetime dimensions. Solving this Cauchy problem, one obtains the operator solution of the theory. Then one calculates all multiple commutators. A representation of the operator solution is obtained by constructing the set of all Wightman functions for the fundamental fields; the truncated Wightman functions are constructed so as to be consistent with all vacuum expectation values of the multiple commutators mentioned above and with the energy-positivity condition. By applying the method described above, exact solutions to various 2-dimensional gauge-theory and quantum-gravity models are found explicitly. The validity of these solutions is confirmed by comparing them with the conventional perturbation-theoretical results. However, a new anomalous feature, called the "field-equation anomaly", is often found to appear, and its perturbation-theoretical counterpart, unnoticed previously, is discussed. The conventional notion of an anomaly with respect to symmetry is reconsidered on the basis of the field-equation anomaly, and the derivation of the critical dimension in the BRS-formulated bosonic string theory is criticized. The method outlined above is applied to more realistic theories by expanding everything in powers of the relevant parameter, but this expansion is not equivalent to the conventional perturbative expansion. The new expansion is BRS-invariant at each order, in contrast to that in the conventional perturbation theory. Higher-order calculations are generally extremely laborious to perform explicitly. (author)
FUSION SEGMENTATION METHOD BASED ON FUZZY THEORY FOR COLOR IMAGES
Directory of Open Access Journals (Sweden)
J. Zhao
2017-09-01
The image segmentation method based on the two-dimensional histogram segments an image according to thresholds on the intensity of the target pixel and the average intensity of its neighborhood. This is essentially a hard-decision method. Owing to the uncertainty in labeling pixels near the threshold, a hard decision can easily produce wrong segmentation results. Therefore, a fusion segmentation method based on fuzzy theory is proposed in this paper. We use membership functions to model the uncertainties on each color channel of the color image, and then segment the image by fuzzy reasoning. Experimental results show that the proposed method obtains better segmentation results than the traditional thresholding method on both natural scene images and optical remote sensing images. The fusion method presented here can provide new ideas for information extraction from optical remote sensing images and polarimetric SAR images.
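The soft-decision idea can be sketched on a single grayscale channel: give both the pixel intensity and its neighborhood mean a graded membership in the "object" class, fuse them (here with the fuzzy AND, i.e. the minimum), and defuzzify at 0.5. This is a simplified stand-in for the paper's per-channel color fusion; the membership breakpoints and the toy image are invented.

```python
def membership(v, lo=80.0, hi=140.0):
    """Piecewise-linear S-shaped membership in the 'object' class:
    0 below lo, 1 above hi, linear in between."""
    if v <= lo:
        return 0.0
    if v >= hi:
        return 1.0
    return (v - lo) / (hi - lo)

def fuzzy_segment(image):
    """Label a pixel 1 (object) when the fused membership of its own
    intensity and its 4-neighborhood mean exceeds 0.5."""
    h, w = len(image), len(image[0])
    out = [[0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            nb = [image[x][y]
                  for x, y in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1))
                  if 0 <= x < h and 0 <= y < w]
            mu_pixel = membership(image[i][j])
            mu_neigh = membership(sum(nb) / len(nb))
            out[i][j] = 1 if min(mu_pixel, mu_neigh) > 0.5 else 0
    return out

# toy 4x4 image: bright 2x2 object block in the top-left corner
image = [[200, 200, 20, 20],
         [200, 200, 20, 20],
         [20, 20, 20, 20],
         [20, 20, 20, 20]]
labels = fuzzy_segment(image)
```

Pixels deep inside the bright block and the dark background are labeled confidently, while pixels whose neighborhood straddles the transition get intermediate memberships and a conservative decision, which is the behavior a hard two-dimensional-histogram threshold lacks.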
Foundations of quantum chromodynamics: Perturbative methods in gauge theories
International Nuclear Information System (INIS)
Muta, T.
1986-01-01
This volume develops the techniques of perturbative QCD in great detail, starting with field theory. Aside from extensive treatments of the renormalization group technique, the operator product expansion formalism and their applications to short-distance reactions, this book provides a comprehensive introduction to gauge field theories. Examples and exercises are provided to amplify the discussions on important topics. Contents: Introduction; Elements of Quantum Chromodynamics; The Renormalization Group Method; Asymptotic Freedom; Operator Product Expansion Formalism; Applications; Renormalization Scheme Dependence; Factorization Theorem; Further Applications; Power Corrections; Infrared Problem.
Generalized perturbation theory (GPT) methods. A heuristic approach
International Nuclear Information System (INIS)
Gandini, A.
1987-01-01
Wigner first proposed a perturbation theory as early as 1945 to study fundamental quantities such as the reactivity worths of different materials. The first formulation, CPT (conventional perturbation theory), is based on universal quantum mechanics concepts. Since that early conception, significant contributions have been made to CPT, in particular by Soodak, who rendered a heuristic interpretation of the adjoint function (the approach referred to as GPT, for generalized perturbation theory). The author illustrates the GPT methodology in a variety of linear and nonlinear domains encountered in nuclear reactor analysis, beginning with the familiar linear neutron field and then generalizing the methodology to other linear and nonlinear fields using heuristic arguments. The author believes that the inherent simplicity and elegance of the heuristic derivation, although intended here for reactor physics problems, might be usefully adopted in collateral fields, and includes examples to that effect.
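The CPT idea of estimating a response change to first order with the adjoint function can be illustrated on a small matrix eigenvalue problem: the first-order eigenvalue shift is the adjoint-weighted perturbation, δλ ≈ (ψ·δA φ)/(ψ·φ). This is a generic linear-algebra illustration, not a reactor formulation; the matrices and power-iteration solver are invented for the example.

```python
def power_iteration(A, iters=500):
    """Dominant eigenpair of a small nonnegative matrix by power iteration."""
    n = len(A)
    v = [1.0] * n
    lam = 0.0
    for _ in range(iters):
        w = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = max(abs(x) for x in w)
        v = [x / lam for x in w]
    return lam, v

def transpose(A):
    return [list(col) for col in zip(*A)]

# invented nonsymmetric operator and a small perturbation dA
A = [[4.0, 1.0, 0.0],
     [2.0, 3.0, 1.0],
     [0.0, 1.0, 2.0]]
dA = [[0.01, 0.0, 0.0],
      [0.0, 0.02, 0.0],
      [0.0, 0.0, 0.01]]

lam, phi = power_iteration(A)             # forward eigenpair
_, psi = power_iteration(transpose(A))    # adjoint (left) eigenvector

# first-order, CPT-style estimate: d_lam = <psi, dA phi> / <psi, phi>
num = sum(psi[i] * sum(dA[i][j] * phi[j] for j in range(3)) for i in range(3))
den = sum(psi[i] * phi[i] for i in range(3))
d_lam_est = num / den

# direct recomputation with the perturbed operator, for comparison
A_pert = [[A[i][j] + dA[i][j] for j in range(3)] for i in range(3)]
lam_pert, _ = power_iteration(A_pert)
```

The adjoint-weighted estimate agrees with the brute-force recomputation to second order in the perturbation, which is precisely why adjoint functions make reactivity-worth calculations cheap: one forward and one adjoint solve serve for many perturbations.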
Theory of mind in dogs?: examining method and concept.
Horowitz, Alexandra
2011-12-01
In line with other research, Udell, Dorey, and Wynne's (in press) finding that dogs and wolves pass on some trials of a putative theory-of-mind test and fail on others is as informative about the methods and concepts of the research as about the subjects. This commentary expands on these points. The intertrial differences in the target article demonstrate how critical the choice of cues is in experimental design; the intersubject-group differences demonstrate how life histories can interact with experimental design. Even the best-designed theory-of-mind tests have intractable logical problems. Finally, these and previous research results call for the introduction of an intermediate stage of ability, a rudimentary theory of mind, to describe subjects' performance.
Sibley, David; Nold, Andreas; Kalliadasis, Serafim
2015-11-01
Density Functional Theory (DFT), a statistical mechanics of fluids approach, captures microscopic details of the fluid density structure in the vicinity of contact lines, as seen in computations in our recent study. Contact lines describe the location where interfaces between two fluids meet solid substrates, and have stimulated a wealth of research due to both their ubiquity in nature and technological applications and also due to their rich multiscale behaviour. Whilst progress can be made computationally to capture the microscopic to mesoscopic structure from DFT, complete analytical results to fully bridge to the macroscale are lacking. In this work, we describe our efforts to bring asymptotic methods to DFT to obtain results for contact angles and other macroscopic quantities in various parameter regimes. We acknowledge financial support from the European Research Council via Advanced Grant No. 247031.
Detecting spatial regimes in ecosystems
Sundstrom, Shana M.; Eason, Tarsha; Nelson, R. John; Angeler, David G.; Barichievy, Chris; Garmestani, Ahjond S.; Graham, Nicholas A.J.; Granholm, Dean; Gunderson, Lance; Knutson, Melinda; Nash, Kirsty L.; Spanbauer, Trisha; Stow, Craig A.; Allen, Craig R.
2017-01-01
Research on early warning indicators has generally focused on assessing temporal transitions with limited application of these methods to detecting spatial regimes. Traditional spatial boundary detection procedures that result in ecoregion maps are typically based on ecological potential (i.e. potential vegetation), and often fail to account for ongoing changes due to stressors such as land use change and climate change and their effects on plant and animal communities. We use Fisher information, an information theory-based method, on both terrestrial and aquatic animal data (U.S. Breeding Bird Survey and marine zooplankton) to identify ecological boundaries, and compare our results to traditional early warning indicators, conventional ecoregion maps and multivariate analyses such as nMDS and cluster analysis. We successfully detected spatial regimes and transitions in both terrestrial and aquatic systems using Fisher information. Furthermore, Fisher information provided explicit spatial information about community change that is absent from other multivariate approaches. Our results suggest that defining spatial regimes based on animal communities may better reflect ecological reality than do traditional ecoregion maps, especially in our current era of rapid and unpredictable ecological change.
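A simplified stand-in for boundary detection along a transect can be built by comparing the mean community-abundance vectors of the windows on either side of each candidate boundary; a peak in the dissimilarity profile suggests a spatial regime edge. Note this is a sliding-window Euclidean-dissimilarity sketch, not the Fisher information calculation the study actually uses, and the transect data are invented.

```python
import math

def boundary_scores(sites, window=2):
    """For each candidate boundary b, the Euclidean distance between the
    mean community vectors of the `window` sites before and after b.

    sites: list of per-site species-abundance vectors, ordered along
    the transect. Returns (boundary_index, dissimilarity) pairs.
    """
    scores = []
    for b in range(window, len(sites) - window + 1):
        left = sites[b - window:b]
        right = sites[b:b + window]
        mean_l = [sum(col) / window for col in zip(*left)]
        mean_r = [sum(col) / window for col in zip(*right)]
        d = math.sqrt(sum((a - c) ** 2 for a, c in zip(mean_l, mean_r)))
        scores.append((b, d))
    return scores

# invented 8-site transect with an abrupt community shift after site 3:
# species 1 dominates the first regime, species 2 the second
sites = [[10.0, 0.0, 1.0]] * 4 + [[0.0, 10.0, 1.0]] * 4
scores = boundary_scores(sites)
```

The dissimilarity is zero where both windows sit inside one regime and peaks exactly at the community shift, which is the qualitative behavior the Fisher information approach exploits with a more principled information-theoretic measure.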
Mathematical methods of many-body quantum field theory
Lehmann, Detlef
2004-01-01
Mathematical Methods of Many-Body Quantum Field Theory offers a comprehensive, mathematically rigorous treatment of many-body physics. It develops the mathematical tools for describing quantum many-body systems and applies them to the many-electron system. These tools include the formalism of second quantization, field theoretical perturbation theory, functional integral methods, bosonic and fermionic, and estimation and summation techniques for Feynman diagrams. Among the physical effects discussed in this context are BCS superconductivity, s-wave and higher l-wave, and the fractional quantum Hall effect. While the presentation is mathematically rigorous, the author does not focus solely on precise definitions and proofs, but also shows how to actually perform the computations. Presenting many recent advances and clarifying difficult concepts, this book provides the background, results, and detail needed to further explore the issue of when the standard approximation schemes in this field actually work and wh...
A New Equivalence Theory Method for Doubly Heterogeneous Fuel
International Nuclear Information System (INIS)
Choi, Sooyoung; Lee, Deokjung
2014-01-01
The unique characteristics of doubly heterogeneous fuel cannot be handled easily by conventional computer codes. A new methodology is being developed to treat resonance self-shielding in a doubly heterogeneous system. The method first homogenizes the material in the fuel compact region using an analytical approximation for the disadvantage factor based on equivalence theory. The disadvantage factor accounts for spatial self-shielding of the resonance flux within the fuel grains. The doubly heterogeneous effects are accounted for by a modified definition of the background cross section, which includes geometry parameters and the cross sections of both the fuel grain and fuel compact regions. For verification, the new DH methodology was implemented in the deterministic transport code TICTOC, developed at UNIST, which uses equivalence theory for resonance treatment and the Method of Characteristics (MOC) for ray tracing. In previous research, this new methodology was verified for several pin cell problems, but further verification is required to confirm its validity in various situations. Therefore, in this study, 9 unit pin cell problems are designed and the accuracy of the new DH method is compared against the Monte Carlo code McCARD. The new method for doubly heterogeneous self-shielding using equivalence theory is summarized and its calculation procedure presented. Because the new method uses an analytical expression for the disadvantage factor, no additional complicated module is required. The new method was verified on 9 pin cell models. As a result, TICTOC with the new DH method predicts the eigenvalues within about 200 pcm of the Monte Carlo results for most of the problems
New Methods in Supersymmetric Theories and Emergent Gauge Symmetry
CERN. Geneva
2014-01-01
It is remarkable that light or even massless spin-1 particles can be composite. Consequently, gauge invariance is not fundamental but emergent. This idea can be realized in detail in supersymmetric gauge theories. We will describe the recent development of non-perturbative methods that allow this idea to be tested. One finds that the emergence of gauge symmetry is linked to some results in contemporary mathematics. We speculate on possible applications of the idea of emergent gauge symmetry to realistic models.
Advanced methods for scattering amplitudes in gauge theories
Energy Technology Data Exchange (ETDEWEB)
Peraro, Tiziano
2014-09-24
We present new techniques for the evaluation of multi-loop scattering amplitudes and their application to gauge theories, with relevance to Standard Model phenomenology. We define a mathematical framework for the multi-loop integrand reduction of arbitrary diagrams, and develop algebraic approaches such as the Laurent expansion method, implemented in the software Ninja, and the multivariate polynomial division technique based on Groebner bases.
Advanced methods for scattering amplitudes in gauge theories
International Nuclear Information System (INIS)
Peraro, Tiziano
2014-01-01
We present new techniques for the evaluation of multi-loop scattering amplitudes and their application to gauge theories, with relevance to Standard Model phenomenology. We define a mathematical framework for the multi-loop integrand reduction of arbitrary diagrams, and develop algebraic approaches such as the Laurent expansion method, implemented in the software Ninja, and the multivariate polynomial division technique based on Groebner bases.
International Nuclear Information System (INIS)
Zhou Yunlong; Chen Fei; Sun Bin
2008-01-01
Based on the property that the wavelet packet transform can decompose an image at different scales, a flow regime identification method based on image wavelet packet information entropy features and a genetic neural network was proposed. Gas-liquid two-phase flow images were captured by digital high-speed video systems in a horizontal pipe. Information entropy features were extracted from the transform coefficients using image processing techniques and multi-resolution analysis. The genetic neural network was trained on these eigenvectors, reduced by principal component analysis, as flow regime samples, realizing intelligent flow regime identification. The test results showed that the image wavelet packet information entropy features clearly reflect the differences between the seven typical flow regimes, and that the genetic neural network, combining the merits of the genetic algorithm and the BP algorithm, converges quickly and avoids local minima. The recognition rate of the network reached about 100%, providing a new and effective method for on-line flow regime identification. (authors)
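The entropy feature can be illustrated with a one-dimensional stand-in: a hand-rolled Haar wavelet packet decomposition (assumed here in place of the paper's image transform) whose normalized subband energies feed a Shannon entropy:

```python
import numpy as np

def haar_step(x):
    """One Haar analysis step: approximation and detail coefficients."""
    x = x[: len(x) // 2 * 2]
    return (x[0::2] + x[1::2]) / np.sqrt(2), (x[0::2] - x[1::2]) / np.sqrt(2)

def wp_entropy(signal, levels=3):
    """Shannon entropy of normalized subband energies from a full
    Haar wavelet packet decomposition."""
    nodes = [np.asarray(signal, float)]
    for _ in range(levels):
        nxt = []
        for node in nodes:
            a, d = haar_step(node)
            nxt += [a, d]
        nodes = nxt
    energies = np.array([np.sum(n ** 2) for n in nodes])
    p = energies / energies.sum()
    p = p[p > 0]
    return -np.sum(p * np.log(p))

# A regular (slug-like) trace concentrates energy in few subbands,
# giving lower entropy than broadband noise.
t = np.linspace(0, 1, 256)
rng = np.random.default_rng(1)
e_tone = wp_entropy(np.sin(2 * np.pi * 8 * t))
e_noise = wp_entropy(rng.standard_normal(256))
```

Different flow regimes would populate the subbands differently, which is what makes the entropy vector a usable classifier input.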
International Nuclear Information System (INIS)
Pilat, Joseph F.; Budlong-Sylvester, K.W.
2004-01-01
Following the 1998 nuclear tests in South Asia and later reinforced by revelations about North Korean and Iraqi nuclear activities, there has been growing concern about increasing proliferation dangers. At the same time, the prospects of radiological/nuclear terrorism are seen to be rising - since 9/11, concern over a proliferation/terrorism nexus has never been higher. In the face of this growing danger, there are urgent calls for stronger measures to strengthen the current international nuclear nonproliferation regime, including recommendations to place civilian processing of weapon-useable material under multinational control. As well, there are calls for entirely new tools, including military options. As proliferation and terrorism concerns grow, the regime is under pressure and there is a temptation to consider fundamental changes to the regime. In this context, this paper will address the following: Do we need to change the regime centered on the Treaty on the Nonproliferation of Nuclear Weapons (NPT) and the International Atomic Energy Agency (IAEA)? What improvements could ensure it will be the foundation for the proliferation resistance and physical protection needed if nuclear power grows? What will make it a viable centerpiece of future nonproliferation and counterterrorism approaches?
International Nuclear Information System (INIS)
Mika, J.
1975-09-01
Originally the work was oriented towards two main topics: a) difference and integral methods in neutron transport theory. Two computers, GIER and CYBER-72, were used for the numerical calculations. During the first year the main effort shifted towards basic theoretical investigations. As a first step the ANIS code was adopted, and later modified, to check various finite difference approaches against each other. Then the general finite element method and the singular perturbation method were developed. The analysis of singularities of the one-dimensional neutron transport equation in spherical geometry was carried out and presented; later the same analysis was done for the case of cylindrical symmetry. The second- and third-year programme included the following topics: 1) finite difference methods in stationary neutron transport theory; 2) mathematical fundamentals of approximate methods for solving the transport equation; 3) the singular perturbation method for the time-dependent transport equation; 4) investigation of various iterative procedures in reactor calculations. This investigation will lead to a better understanding of the mathematical basis of existing and newly developed numerical methods, resulting in more effective algorithms for reactor computer codes
Variational methods in electron-atom scattering theory
Nesbet, Robert K
1980-01-01
The investigation of scattering phenomena is a major theme of modern physics. A scattered particle provides a dynamical probe of the target system. The practical problem of interest here is the scattering of a low energy electron by an N-electron atom. It has been difficult in this area of study to achieve theoretical results that are even qualitatively correct, yet quantitative accuracy is often needed as an adjunct to experiment. The present book describes a quantitative theoretical method, or class of methods, that has been applied effectively to this problem. Quantum mechanical theory relevant to the scattering of an electron by an N-electron atom, which may gain or lose energy in the process, is summarized in Chapter 1. The variational theory itself is presented in Chapter 2, both as currently used and in forms that may facilitate future applications. The theory of multichannel resonance and threshold effects, which provide a rich structure to observed electron-atom scattering data, is presented in Cha...
Novel welding image processing method based on fractal theory
Institute of Scientific and Technical Information of China (English)
陈强; 孙振国; 肖勇; 路井荣
2002-01-01
Computer vision has come into use in the fields of welding process control and automation. In order to improve the precision and speed of welding image processing, a novel method based on fractal theory is put forward in this paper. In contrast to traditional methods, the image is first processed coarsely in macroscopic regions and then analyzed thoroughly in microscopic regions. The image is divided into regions according to the differing fractal characteristics of its edges, and the fuzzy regions containing image edges are detected; image edges are then identified with the Sobel operator and fitted by the least squares method (LSM). Since the amount of data to be processed is decreased and image noise is reduced, experiments have verified that the edges of the weld seam or weld pool can be recognized correctly and quickly.
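The edge-identification step can be sketched with a plain Sobel gradient magnitude; the fractal region screening that precedes it in the paper is omitted here:

```python
import numpy as np

def sobel_edges(img):
    """Sobel gradient magnitude over the interior of a 2-D image
    (the edge-identification step; region screening omitted)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    ky = kx.T
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            patch = img[i:i + 3, j:j + 3]
            gx = np.sum(patch * kx)   # horizontal gradient
            gy = np.sum(patch * ky)   # vertical gradient
            out[i, j] = np.hypot(gx, gy)
    return out

# A vertical step edge yields strong responses only near the step.
img = np.zeros((8, 8))
img[:, 4:] = 1.0
mag = sobel_edges(img)
```

In the paper's pipeline, only the fuzzy regions flagged by the fractal analysis would be passed to this operator, which is where the data reduction comes from.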
Household energy studies: the gap between theory and method
Energy Technology Data Exchange (ETDEWEB)
Crosbie, T.
2006-09-15
At the level of theory it is now widely accepted that energy consumption patterns are a complex technical and socio-cultural phenomenon and to understand this phenomenon, it must be viewed from both engineering and social science perspectives. However, the methodological approaches taken in household energy studies lag behind the theoretical advances made in the last ten or fifteen years. The quantitative research methods traditionally used within the fields of building science, economics, and psychology continue to dominate household energy studies, while the qualitative ethnographic approaches to examining social and cultural phenomena traditionally used within anthropology and sociology are most frequently overlooked. This paper offers a critical review of the research methods used in household energy studies which illustrates the scope and limitations of both qualitative and quantitative research methods in this area of study. In doing so it demonstrates that qualitative research methods are essential to designing effective energy efficiency interventions. [Author].
The Gaussian radial basis function method for plasma kinetic theory
Energy Technology Data Exchange (ETDEWEB)
Hirvijoki, E., E-mail: eero.hirvijoki@chalmers.se [Department of Applied Physics, Chalmers University of Technology, SE-41296 Gothenburg (Sweden); Candy, J.; Belli, E. [General Atomics, PO Box 85608, San Diego, CA 92186-5608 (United States); Embréus, O. [Department of Applied Physics, Chalmers University of Technology, SE-41296 Gothenburg (Sweden)
2015-10-30
Description of a magnetized plasma involves the Vlasov equation supplemented with the non-linear Fokker–Planck collision operator. For non-Maxwellian distributions, the collision operator, however, is difficult to compute. In this Letter, we introduce Gaussian Radial Basis Functions (RBFs) to discretize the velocity space of the entire kinetic system, and give the corresponding analytical expressions for the Vlasov and collision operator. Outlining the general theory, we also highlight the connection to plasma fluid theories, and give 2D and 3D numerical solutions of the non-linear Fokker–Planck equation. Applications are anticipated in both astrophysical and laboratory plasmas. - Highlights: • A radically new method to address the velocity space discretization of the non-linear kinetic equation of plasmas. • Elegant and physically intuitive, flexible and mesh-free. • Demonstration of numerical solution of both 2-D and 3-D non-linear Fokker–Planck relaxation problem.
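A toy version of the velocity-space discretization may help fix ideas: a 1-D distribution expanded in Gaussian RBFs, with weights recovered by collocation. The centers, shape parameter, and Maxwellian target are illustrative choices, not the paper's setup:

```python
import numpy as np

# Represent a 1-D velocity distribution f(v) as a sum of Gaussian
# radial basis functions and recover the weights by collocation.
centers = np.linspace(-4, 4, 17)
eps = 1.0                                   # shape parameter (assumed)
v = centers                                 # collocate at the centers

def phi(x, c):
    """Gaussian RBF matrix: phi[i, j] = exp(-eps * (x_i - c_j)^2)."""
    return np.exp(-eps * (x[:, None] - c[None, :]) ** 2)

f_target = np.exp(-v ** 2 / 2) / np.sqrt(2 * np.pi)   # Maxwellian
weights = np.linalg.solve(phi(v, centers), f_target)

# Evaluate the RBF expansion off the collocation grid.
v_fine = np.linspace(-3, 3, 101)
f_approx = phi(v_fine, centers) @ weights
f_exact = np.exp(-v_fine ** 2 / 2) / np.sqrt(2 * np.pi)
err = np.max(np.abs(f_approx - f_exact))
```

The mesh-free character the authors highlight comes from the fact that the centers need not lie on a structured grid; the collision operator then acts on the analytic Gaussians rather than on grid values.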
Directory of Open Access Journals (Sweden)
Danilo Icaza Ortiz
2013-01-01
This paper reviews works on the competition regime by various authors, published under the auspices of the University of the Hemispheres and the Corporation for Studies and Publications. It analyzes their structure, general concepts, and the case law used in their development, and includes comments on the usefulness of this work for the study of competition law and its contribution to lawyers who wish to practice in this branch of economic law.
Bhattacharya, S.; Maiti, R.; Saha, S.; Das, A. C.; Mondal, S.; Ray, S. K.; Bhaktha, S. B. N.; Datta, P. K.
2016-04-01
Graphene oxide (GO) has been prepared by the modified Hummers method and reduced using an IR bulb (800-2000 nm). Both as-grown GO and reduced graphene oxide (RGO) have been characterized using Raman spectroscopy and X-ray photoelectron spectroscopy (XPS). The Raman spectra show the well-documented D-band and G-band for both samples, while the blue shift of the G-band confirms chemical functionalization of graphene with different oxygen functional groups. The XPS results show that the as-prepared GO contains 52% sp2-hybridized carbon due to C=C bonds and 33% carbon atoms due to C-O bonds. For RGO, the increase of the atomic % of sp2-hybridized carbon to 83% and the rapid decrease in the atomic % of C=O bonds confirm an efficient reduction with infrared radiation. The UV-visible absorption spectrum also confirms increased conjugation with increased reduction. The nonlinear optical properties of both GO and RGO are measured using the single-beam open-aperture Z-scan technique in the femtosecond regime. Intensity-dependent nonlinear phenomena are observed: depending on the intensity, both saturable absorption and two-photon absorption contribute to the nonlinearity of both samples. Saturation dominates at low intensity (~127 GW/cm2), while two-photon absorption becomes prominent at higher intensities (from 217 GW/cm2 to 302 GW/cm2). We have calculated the two-photon absorption coefficient and the saturation intensity for both samples. Both the two-photon absorption coefficient (GO ~0.0022-0.0037 cm/GW; RGO ~0.0128-0.0143 cm/GW) and the saturation intensity (GO ~57 GW/cm2; RGO ~194 GW/cm2) increase with reduction. The increase in the two-photon absorption coefficient with increasing intensity also suggests that multi-photon absorption may be taking place.
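The competition between saturable and two-photon absorption is commonly modeled as an effective absorption alpha(I) = alpha0/(1 + I/I_sat) + beta*I. The sketch below plugs in the reported RGO figures; the linear loss alpha0 is an assumed placeholder:

```python
import numpy as np

def alpha_eff(I, alpha0=1.0, I_sat=194.0, beta=0.0143):
    """Effective absorption coefficient combining saturable absorption
    (first term) and two-photon absorption (second term).
    I_sat and beta are the reported RGO values (GW/cm^2, cm/GW);
    alpha0 is an assumed linear loss, not taken from the study."""
    return alpha0 / (1.0 + I / I_sat) + beta * I

I = np.array([127.0, 217.0, 302.0])   # peak intensities used in the study
a = alpha_eff(I)
```

At low intensity the saturable term pulls the absorption down; at the higher intensities the beta*I term takes over, reproducing the qualitative crossover the abstract describes.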
Food powders flowability characterization: theory, methods, and applications.
Juliano, Pablo; Barbosa-Cánovas, Gustavo V
2010-01-01
Characterization of food powders flowability is required for predicting powder flow from hoppers in small-scale systems such as vending machines or at the industrial scale from storage silos or bins dispensing into powder mixing systems or packaging machines. This review covers conventional and new methods used to measure flowability in food powders. The method developed by Jenike (1964) for determining hopper outlet diameter and hopper angle has become a standard for the design of bins and is regarded as a standard method to characterize flowability. Moreover, there are a number of shear cells that can be used to determine failure properties defined by Jenike's theory. Other classic methods (compression, angle of repose) and nonconventional methods (Hall flowmeter, Johanson Indicizer, Hosokawa powder tester, tensile strength tester, powder rheometer), used mainly for the characterization of food powder cohesiveness, are described. The effect of some factors preventing flow, such as water content, temperature, time consolidation, particle composition and size distribution, is summarized for the characterization of specific food powders with conventional and other methods. Whereas time-consuming standard methods established for hopper design provide flow properties, there is yet little comparative evidence demonstrating that other rapid methods may provide similar flow prediction.
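Jenike's flow index ffc = sigma_1/sigma_c (major consolidation stress over unconfined yield strength) with its customary class boundaries can be sketched directly; the sample stresses are hypothetical:

```python
def flow_class(sigma_1, sigma_c):
    """Classify powder flowability from Jenike's flow index
    ffc = sigma_1 / sigma_c, using the customary boundaries."""
    ffc = sigma_1 / sigma_c
    if ffc < 1:
        label = "not flowing"
    elif ffc < 2:
        label = "very cohesive"
    elif ffc < 4:
        label = "cohesive"
    elif ffc < 10:
        label = "easy-flowing"
    else:
        label = "free-flowing"
    return ffc, label

# Hypothetical shear-cell result for a food powder, stresses in kPa.
ffc, label = flow_class(sigma_1=12.0, sigma_c=2.0)
```

The shear cells discussed in the review are what supply sigma_1 and sigma_c in practice; the classification itself is this one-line ratio.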
International Nuclear Information System (INIS)
Sun Bin; Zhou Yunlong; Zhao Peng; Guan Yuebo
2007-01-01
Aiming at the non-stationary characteristics of differential pressure fluctuation signals of gas-liquid two-phase flow, and at the slow convergence and tendency to fall into local minima of BP neural networks, a flow regime identification method based on Singular Value Decomposition (SVD) and Least Squares Support Vector Machine (LS-SVM) is presented. First, the Empirical Mode Decomposition (EMD) method is used to decompose the differential pressure fluctuation signals of gas-liquid two-phase flow into a number of stationary Intrinsic Mode Functions (IMFs), from which the initial feature vector matrix is formed. By applying singular value decomposition to the initial feature vector matrices, the singular values are obtained. Finally, the singular values serve as the flow regime feature vector input to the LS-SVM classifier, and flow regimes are identified from the classifier's output. Identification results for four typical flow regimes of air-water two-phase flow in a horizontal pipe show that this method achieves a high identification rate. (authors)
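The SVD feature-extraction step can be sketched with band-limited surrogates standing in for the EMD intrinsic mode functions; the LS-SVM classifier itself is omitted:

```python
import numpy as np

# Stack quasi-stationary components into a matrix and take its
# singular values as the flow-regime feature vector.
rng = np.random.default_rng(2)
t = np.linspace(0, 1, 500)
imfs = np.vstack([
    np.sin(2 * np.pi * 3 * t),           # slow oscillation (surrogate IMF)
    0.5 * np.sin(2 * np.pi * 25 * t),    # faster oscillation (surrogate IMF)
    0.1 * rng.standard_normal(500),      # residual noise (surrogate IMF)
])
singular_values = np.linalg.svd(imfs, compute_uv=False)
feature_vector = singular_values / singular_values.sum()  # normalized feature
```

Because the singular values summarize the energy distribution across components, different flow regimes yield different feature vectors, which is what the LS-SVM then separates.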
Algorithmic and experimental methods in algebra, geometry, and number theory
Decker, Wolfram; Malle, Gunter
2017-01-01
This book presents state-of-the-art research and survey articles that highlight work done within the Priority Program SPP 1489 “Algorithmic and Experimental Methods in Algebra, Geometry and Number Theory”, which was established and generously supported by the German Research Foundation (DFG) from 2010 to 2016. The goal of the program was to substantially advance algorithmic and experimental methods in the aforementioned disciplines, to combine the different methods where necessary, and to apply them to central questions in theory and practice. Of particular concern was the further development of freely available open source computer algebra systems and their interaction in order to create powerful new computational tools that transcend the boundaries of the individual disciplines involved. The book covers a broad range of topics addressing the design and theoretical foundations, implementation and the successful application of algebraic algorithms in order to solve mathematical research problems. It off...
Thought Suppression Research Methods: Paradigms, Theories, Methodological Concerns
Directory of Open Access Journals (Sweden)
Niczyporuk Aneta
2016-12-01
It is hard to provide an unequivocal answer to the question of whether or not thought suppression is effective. Two thought suppression paradigms - the “white bear” paradigm and the think/no-think paradigm - give mixed results. Generally, “white bear” experiments indicate that thought suppression is counterproductive, while experiments in the think/no-think paradigm suggest that it is possible to effectively suppress a thought. There are also alternative methods used to study thought suppression, for instance the directed forgetting paradigm or the Stroop task. In the article, I describe the research methods used to explore thought suppression efficacy. I focus on the “white bear” and the think/no-think paradigms and discuss theories proposed to explain the results obtained. I also consider the internal and external validity of the methods used.
Applications of Symmetry Methods to the Theory of Plasma Physics
Directory of Open Access Journals (Sweden)
Giampaolo Cicogna
2006-02-01
The theory of plasma physics offers a number of nontrivial examples of partial differential equations which can be successfully treated with symmetry methods. We propose three different examples which may illustrate the reciprocal advantage of this "interaction" between plasma physics and symmetry techniques. The examples include, in particular, the complete symmetry analysis of a system of two PDEs, with the determination of some conditional and partial symmetries, the construction of group-invariant solutions, and the symmetry classification of a nonlinear PDE.
Algebraic methods in statistical mechanics and quantum field theory
Emch, Dr Gérard G
2009-01-01
This systematic algebraic approach concerns problems involving a large number of degrees of freedom. It extends the traditional formalism of quantum mechanics, and it eliminates conceptual and mathematical difficulties common to the development of statistical mechanics and quantum field theory. Further, the approach is linked to research in applied and pure mathematics, offering a reflection of the interplay between formulation of physical motivations and self-contained descriptions of the mathematical methods. The four-part treatment begins with a survey of algebraic approaches to certain phys
Method of T-products in polaron theory
International Nuclear Information System (INIS)
Bogolubov, N.N. Jr.; Kurbatov, A.M.; Kireev, A.N.
1985-11-01
The T-products method is used for the investigation of the equilibrium thermodynamic properties of Frohlich's model in polaron theory. The polaron free energy at finite temperatures is calculated on the basis of Bogolubov's variational principle. A trial function is chosen in the most general form, corresponding to an arbitrary number of oscillators harmonically interacting with the electron. The upper bound to the polaron ground state energy in the limiting case of weak interaction and low temperatures is obtained and investigated in detail. It is shown that the result becomes more exact as the number of oscillators increases. (author)
Variational configuration interaction methods and comparison with perturbation theory
International Nuclear Information System (INIS)
Pople, J.A.; Seeger, R.; Krishnan, R.
1977-01-01
A configuration interaction (CI) procedure which includes all single and double substitutions from an unrestricted Hartree-Fock single determinant is described. This has the feature that Moller-Plesset perturbation results to second and third order are obtained in the first CI iterative cycle. The procedure also avoids the necessity of a full two-electron integral transformation. A simple expression for correcting the final CI energy for lack of size consistency is proposed. Finally, calculations on a series of small molecules are presented to compare these CI methods with perturbation theory
Rateb, Ashraf; Kuo, Chung-Yen; Imani, Moslem; Tseng, Kuo-Hsin; Lan, Wen-Hau; Ching, Kuo-En; Tseng, Tzu-Pang
2017-03-10
Spherical harmonics (SH) and mascon solutions are the two most common types of solutions for Gravity Recovery and Climate Experiment (GRACE) mass flux observations. However, SH signals are degraded by measurement and leakage errors. Mascon solutions (the Jet Propulsion Laboratory (JPL) release, herein) exhibit weakened signals at submascon resolutions. Both solutions require a scale factor examined by the CLM4.0 model to obtain the actual water storage signal. The Slepian localization method can avoid the SH leakage errors when applied to the basin scale. In this study, we estimate SH errors and scale factors for African hydrological regimes. Then, terrestrial water storage (TWS) in Africa is determined based on Slepian localization and compared with JPL-mascon and SH solutions. The three TWS estimates show good agreement for the TWS of large-sized and humid regimes but present discrepancies for the TWS of medium and small-sized regimes. Slepian localization is an effective method for deriving the TWS of arid zones. The TWS behavior in African regimes and its spatiotemporal variations are then examined. The negative TWS trends in the lower Nile and Sahara at -1.08 and -6.92 Gt/year, respectively, are higher than those previously reported.
Directory of Open Access Journals (Sweden)
Ashraf Rateb
2017-03-01
Spherical harmonics (SH) and mascon solutions are the two most common types of solutions for Gravity Recovery and Climate Experiment (GRACE) mass flux observations. However, SH signals are degraded by measurement and leakage errors. Mascon solutions (the Jet Propulsion Laboratory (JPL) release, herein) exhibit weakened signals at submascon resolutions. Both solutions require a scale factor examined by the CLM4.0 model to obtain the actual water storage signal. The Slepian localization method can avoid the SH leakage errors when applied to the basin scale. In this study, we estimate SH errors and scale factors for African hydrological regimes. Then, terrestrial water storage (TWS) in Africa is determined based on Slepian localization and compared with JPL-mascon and SH solutions. The three TWS estimates show good agreement for the TWS of large-sized and humid regimes but present discrepancies for the TWS of medium and small-sized regimes. Slepian localization is an effective method for deriving the TWS of arid zones. The TWS behavior in African regimes and its spatiotemporal variations are then examined. The negative TWS trends in the lower Nile and Sahara at −1.08 and −6.92 Gt/year, respectively, are higher than those previously reported.
Grassmann phase space methods for fermions. II. Field theory
Energy Technology Data Exchange (ETDEWEB)
Dalton, B.J., E-mail: bdalton@swin.edu.au [Centre for Quantum and Optical Science, Swinburne University of Technology, Melbourne, Victoria 3122 (Australia); Jeffers, J. [Department of Physics, University of Strathclyde, Glasgow G4ONG (United Kingdom); Barnett, S.M. [School of Physics and Astronomy, University of Glasgow, Glasgow G12 8QQ (United Kingdom)
2017-02-15
In both quantum optics and cold atom physics, the behaviour of bosonic photons and atoms is often treated using phase space methods, where mode annihilation and creation operators are represented by c-number phase space variables, with the density operator equivalent to a distribution function of these variables. The anti-commutation rules for fermion annihilation and creation operators suggest the possibility of using anti-commuting Grassmann variables to represent these operators. However, in spite of the seminal work by Cahill and Glauber and a few applications, the use of Grassmann phase space methods in quantum-atom optics to treat fermionic systems is rather rare, though fermion coherent states using Grassmann variables are widely used in particle physics. This paper presents a phase space theory for fermion systems based on distribution functionals, which replace the density operator and involve Grassmann fields representing anti-commuting fermion field annihilation and creation operators. It is an extension of a previous phase space theory paper for fermions (Paper I) based on separate modes, in which the density operator is replaced by a distribution function depending on Grassmann phase space variables which represent the mode annihilation and creation operators. This further development of the theory is important for the situation when large numbers of fermions are involved, resulting in too many modes to treat separately. Here Grassmann fields, distribution functionals, functional Fokker–Planck equations and Ito stochastic field equations are involved. Typical applications to a trapped Fermi gas of interacting spin 1/2 fermionic atoms and to multi-component Fermi gases with non-zero range interactions are presented, showing that the Ito stochastic field equations are local in these cases. For the spin 1/2 case we also show how simple solutions can be obtained both for the untrapped case and for an optical lattice trapping potential.
Grassmann phase space methods for fermions. II. Field theory
International Nuclear Information System (INIS)
Dalton, B.J.; Jeffers, J.; Barnett, S.M.
2017-01-01
In both quantum optics and cold atom physics, the behaviour of bosonic photons and atoms is often treated using phase space methods, where mode annihilation and creation operators are represented by c-number phase space variables, with the density operator equivalent to a distribution function of these variables. The anti-commutation rules for fermion annihilation and creation operators suggest the possibility of using anti-commuting Grassmann variables to represent these operators. However, in spite of the seminal work by Cahill and Glauber and a few applications, the use of Grassmann phase space methods in quantum-atom optics to treat fermionic systems is rather rare, though fermion coherent states using Grassmann variables are widely used in particle physics. This paper presents a phase space theory for fermion systems based on distribution functionals, which replace the density operator and involve Grassmann fields representing anti-commuting fermion field annihilation and creation operators. It is an extension of a previous phase space theory paper for fermions (Paper I) based on separate modes, in which the density operator is replaced by a distribution function depending on Grassmann phase space variables which represent the mode annihilation and creation operators. This further development of the theory is important for the situation when large numbers of fermions are involved, resulting in too many modes to treat separately. Here Grassmann fields, distribution functionals, functional Fokker–Planck equations and Ito stochastic field equations are involved. Typical applications to a trapped Fermi gas of interacting spin 1/2 fermionic atoms and to multi-component Fermi gases with non-zero range interactions are presented, showing that the Ito stochastic field equations are local in these cases. For the spin 1/2 case we also show how simple solutions can be obtained both for the untrapped case and for an optical lattice trapping potential.
Examining Philosophy of Technology Using Grounded Theory Methods
Directory of Open Access Journals (Sweden)
Mark David Webster
2016-03-01
A qualitative study was conducted to examine the philosophy of technology of K-12 technology leaders, and explore the influence of their thinking on technology decision making. The research design aligned with CORBIN and STRAUSS grounded theory methods, and I proceeded from a research paradigm of critical realism. The subjects were school technology directors and instructional technology specialists, and data collection consisted of interviews and a written questionnaire. Data analysis involved the use of grounded theory methods including memo writing, open and axial coding, constant comparison, the use of purposive and theoretical sampling, and theoretical saturation of categories. Three broad philosophy of technology views were widely held by participants: an instrumental view of technology, technological optimism, and a technological determinist perspective that saw technological change as inevitable. Technology leaders were guided by two main approaches to technology decision making, represented by the categories Educational goals and curriculum should drive technology, and Keep up with technology (or be left behind. The core category and central phenomenon that emerged was that technology leaders approached technology leadership by placing greater emphasis on keeping up with technology, being influenced by an ideological orientation to technological change, and being concerned about preparing students for a technological future. URN: http://nbn-resolving.de/urn:nbn:de:0114-fqs160252
Disobeying Power Laws: Perils for Theory and Method
Directory of Open Access Journals (Sweden)
G. Christopher Crawford
2012-08-01
Full Text Available The “norm of normality” is a myth that organization design scholars should believe only at their peril. In contrast to the normal (bell-shaped) distribution with independent observations and linear relationships assumed by Gaussian statistics, research shows that nearly every input and outcome in organizational domains is power-law (Pareto) distributed. These highly skewed distributions exhibit unstable means, unlimited variance, underlying interdependence, and extreme outcomes that disproportionately influence the entire system, making Gaussian methods and assumptions largely invalid. By developing more focused research designs and using methods that assume interdependence and potentially nonlinear relationships, organization design scholars can develop theories that more closely depict empirical reality and provide more useful insights to practitioners and other stakeholders.
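The abstract's warning about unstable means under Pareto-distributed data is easy to demonstrate. A minimal sketch (the distribution parameters and sample sizes are illustrative choices, not drawn from the article): the spread of the sample mean across repeated trials stays tiny for Gaussian data but remains large for heavy-tailed Pareto data, because the variance is infinite.

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_spread(sampler, n=100_000, trials=20):
    """Standard deviation of the sample mean across independent trials."""
    return np.std([sampler(n).mean() for _ in range(trials)])

# Gaussian data: sample means concentrate tightly, at scale ~1/sqrt(n).
gauss_spread = mean_spread(lambda n: rng.normal(loc=1.0, scale=1.0, size=n))

# Pareto (Lomax) data with tail exponent alpha = 1.2: the mean exists but
# the variance is infinite, so the sample mean never settles down.
pareto_spread = mean_spread(lambda n: rng.pareto(1.2, size=n))

print(f"spread of the mean, Gaussian: {gauss_spread:.4f}")
print(f"spread of the mean, Pareto:   {pareto_spread:.4f}")
```

Even with 100,000 observations per trial, the Pareto sample mean keeps jumping with each extreme draw, which is the practical failure of Gaussian assumptions the authors describe.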
Simplified theory of plastic zones based on Zarka's method
Hübel, Hartwig
2017-01-01
The present book provides a new method to estimate elastic-plastic strains via a series of linear elastic analyses. For a life prediction of structures subjected to variable loads, frequently encountered in mechanical and civil engineering, the cyclically accumulated deformation and the elastic-plastic strain ranges are required. The Simplified Theory of Plastic Zones (STPZ) is a direct method which provides estimates of these and all other mechanical quantities in the state of elastic and plastic shakedown. The STPZ is described in detail, with emphasis on enabling not only scientists but also engineers working in applied fields and advanced students to get an idea of the possibilities and limitations of the STPZ. Numerous illustrations and examples are provided to support the reader's understanding.
Developing feasible loading patterns using perturbation theory methods
International Nuclear Information System (INIS)
White, J.R.; Avila, K.M.
1990-01-01
This work illustrates an approach to core reload design that combines the power of integer programming with the efficiency of generalized perturbation theory. The main use of the method is as a tool to help the design engineer identify feasible loading patterns with minimum time and effort. The technique is highly successful for the burnable poison (BP) loading problem, but the unpredictable behavior of the branch-and-bound algorithm degrades overall performance for large problems. Unfortunately, the combined fuel shuffling plus BP optimization problem falls into this latter classification. Overall, however, the method shows great promise for significantly reducing the manpower time required for the reload design process. It may even give the further benefit of better designs and improved performance.
The maximum entropy method of moments and Bayesian probability theory
Bretthorst, G. Larry
2013-08-01
The problem of density estimation occurs in many disciplines. For example, in MRI it is often necessary to classify the types of tissues in an image. To perform this classification one must first identify the characteristics of the tissues to be classified. These characteristics might be the intensity of a T1-weighted image, and in MRI many other types of characteristic weightings (classifiers) may be generated. In a given tissue type there is no single intensity that characterizes the tissue; rather there is a distribution of intensities. Often these distributions can be characterized by a Gaussian, but just as often they are much more complicated. Either way, estimating the distribution of intensities is an inference problem. In the case of a Gaussian distribution, one must estimate the mean and standard deviation. However, in the non-Gaussian case the shape of the density function itself must be inferred. Three common techniques for estimating density functions are binned histograms [1, 2], kernel density estimation [3, 4], and the maximum entropy method of moments [5, 6]. In the introduction, the maximum entropy method of moments will be reviewed. Some of its problems and conditions under which it fails will be discussed. Then in later sections, the functional form of the maximum entropy method of moments probability distribution will be incorporated into Bayesian probability theory. It will be shown that Bayesian probability theory solves all of the problems with the maximum entropy method of moments. One gets posterior probabilities for the Lagrange multipliers, and, finally, one can put error bars on the resulting estimated density function.
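The moment-constrained maximum entropy construction mentioned above can be sketched numerically. In the toy example below (the grid, starting point, and step size are arbitrary illustrative choices, not from the paper), only the first two moments are constrained, so the maxent density should recover a Gaussian, which gives a closed-form check:

```python
import numpy as np

# Grid and target moments (those of a standard Gaussian, so the maxent
# answer is known in closed form and can be checked).
x = np.linspace(-8.0, 8.0, 2001)
dx = x[1] - x[0]
m1, m2 = 0.0, 1.0

# With first- and second-moment constraints, the maxent density has the
# exponential-family form p(x) ∝ exp(-l1*x - l2*x^2).  The Lagrange
# multipliers minimize the convex dual log Z + l1*m1 + l2*m2, whose
# gradient is (m1 - <x>, m2 - <x^2>); plain gradient descent suffices.
lam = np.array([0.1, 0.3])
for _ in range(5000):
    w = np.exp(-lam[0] * x - lam[1] * x**2)
    p = w / (w.sum() * dx)                      # normalized trial density
    Ex = (x * p).sum() * dx
    Ex2 = (x**2 * p).sum() * dx
    lam -= 0.1 * np.array([m1 - Ex, m2 - Ex2])  # descend the dual

# Recompute the density at the converged multipliers.
w = np.exp(-lam[0] * x - lam[1] * x**2)
p = w / (w.sum() * dx)

# Expect l1 -> 0, l2 -> 0.5, i.e. the Gaussian N(m1, m2 - m1^2).
var = ((x - m1) ** 2 * p).sum() * dx
print(f"multipliers: {lam.round(3)}, recovered variance: {var:.3f}")
```

The failure modes the abstract alludes to (non-convergent multipliers, densities that cannot match the requested moments) show up in this dual formulation as an unbounded or ill-conditioned objective.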
Pricing Options and Equity-Indexed Annuities in a Regime-switching Model by Trinomial Tree Method
Directory of Open Access Journals (Sweden)
Fei Lung Yuen
2011-12-01
Full Text Available In this paper we summarize the main idea and results of Yuen and Yang (2009, 2010a, 2010b) and provide some results on the pricing of Parisian options under the Markov regime-switching model (MRSM). The MRSM allows the parameters of the market model to depend on a Markovian process, and the model can reflect information about the market environment which cannot be modeled solely by a linear Gaussian process. However, when the parameters of the stock price model are not constant but governed by a Markovian process, the pricing of options becomes complex. We present a fast and simple trinomial tree model to price options in the MRSM. In recent years, the pricing of modern insurance products, such as Equity-Indexed Annuities (EIAs) and variable annuities (VAs), has become a popular topic. We show here that our trinomial tree model can be used to price EIAs with strongly path-dependent exotic options in the regime-switching model.
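To make the tree machinery concrete, the sketch below prices a European call on a plain single-regime trinomial tree (a standard Boyle-style parametrization, not the authors' construction) and compares it with the Black-Scholes value; the paper's regime-switching model would additionally let (r, σ) follow a Markov chain, which is not implemented here. All numerical inputs are illustrative.

```python
import numpy as np
from math import exp, sqrt, log, erf

def trinomial_call(S0, K, r, sigma, T, steps):
    """European call on a recombining trinomial tree (Boyle-style
    parametrization): a single-regime sketch only."""
    dt = T / steps
    u = exp(sigma * sqrt(2 * dt))          # up factor; middle move = 1
    a = exp(r * dt / 2)
    b = exp(sigma * sqrt(dt / 2))
    pu = ((a - 1 / b) / (b - 1 / b)) ** 2  # risk-neutral branch probabilities
    pd = ((b - a) / (b - 1 / b)) ** 2
    pm = 1 - pu - pd
    disc = exp(-r * dt)
    j = np.arange(-steps, steps + 1)       # 2*steps + 1 terminal nodes
    values = np.maximum(S0 * u**j - K, 0.0)
    for _ in range(steps):                 # backward induction
        values = disc * (pu * values[2:] + pm * values[1:-1] + pd * values[:-2])
    return float(values[0])

def bs_call(S0, K, r, sigma, T):
    """Black-Scholes value, the constant-parameter benchmark."""
    N = lambda z: 0.5 * (1 + erf(z / sqrt(2)))
    d1 = (log(S0 / K) + (r + sigma**2 / 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S0 * N(d1) - K * exp(-r * T) * N(d2)

tri = trinomial_call(100, 100, 0.05, 0.2, 1.0, 200)
print(f"trinomial: {tri:.4f}   Black-Scholes: {bs_call(100, 100, 0.05, 0.2, 1.0):.4f}")
```

With 200 steps the two values agree to a few basis points; the regime-switching extension replaces the single (pu, pm, pd) triple with one per regime plus transition probabilities.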
Numerical perturbative methods in the quantum theory of physical systems
International Nuclear Information System (INIS)
Adam, G.
1980-01-01
During the last two decades, the development of digital electronic computers has led to the deployment of new, distinct methods in theoretical physics. These methods, based on the advances of modern numerical analysis as well as on specific equations describing physical processes, have made it possible to perform precise calculations of high complexity which have completed and sometimes changed our image of many physical phenomena. Our efforts have concentrated on the development of numerical methods with such intrinsic performance as to allow a successful approach to some key issues in present theoretical physics on smaller computation systems. The basic principle of such methods is to translate, into the language of numerical analysis, the theory of perturbations, which is suited to numerical rather than analytical computation. This idea has been illustrated by working out two problems which arise from the time-independent Schroedinger equation in the non-relativistic approximation, for quantum systems with a small number of particles and systems with a large number of particles, respectively. In the first case, we are led to the numerical solution of some quadratic ordinary differential equations (first section of the thesis) and in the second case, to the solution of some secular equations in the Brillouin zone (second section). (author)
Teaching organization theory for healthcare management: three applied learning methods.
Olden, Peter C
2006-01-01
Organization theory (OT) provides a way of seeing, describing, analyzing, understanding, and improving organizations based on patterns of organizational design and behavior (Daft 2004). It gives managers models, principles, and methods with which to diagnose and fix organization structure, design, and process problems. Health care organizations (HCOs) face serious problems such as fatal medical errors, harmful treatment delays, misuse of scarce nurses, costly inefficiency, and service failures. Some of health care managers' most critical work involves designing and structuring their organizations so their missions, visions, and goals can be achieved, and in some cases so their organizations can survive. Thus, it is imperative that graduate healthcare management programs develop effective approaches for teaching OT to students who will manage HCOs. Guided by principles of education, three applied teaching/learning activities/assignments were created to teach OT in a graduate healthcare management program. These educational methods develop students' competency with OT applied to HCOs. The teaching techniques in this article may be useful to faculty teaching graduate courses in organization theory and related subjects such as leadership, quality, and operations management.
Cross section recondensation method via generalized energy condensation theory
International Nuclear Information System (INIS)
Douglass, Steven; Rahnema, Farzad
2011-01-01
Highlights: → A new method is presented which corrects for core environment error from specular boundaries at the lattice cell level. → Solution obtained with generalized energy condensation provides improved approximation to the core level fine-group flux. → Iterative recondensation of the cross sections and unfolding of the flux provides on-the-fly updating of the core cross sections. → Precomputation of energy integrals and fine-group cross sections allows for easy implementation and efficient solution. → Method has been implemented in 1D and shown to correct the environment error, particularly in strongly heterogeneous cores. - Abstract: The standard multigroup method used in whole-core reactor analysis relies on energy condensed (coarse-group) cross sections generated from single lattice cell calculations, typically with specular reflective boundary conditions. Because these boundary conditions are an approximation and not representative of the core environment for that lattice, an error is introduced in the core solution (both eigenvalue and flux). As current and next generation reactors trend toward increasing assembly and core heterogeneity, this error becomes more significant. The method presented here corrects for this error by generating updated coarse-group cross sections on-the-fly within whole-core reactor calculations without resorting to additional cell calculations. In this paper, the fine-group core flux is unfolded by making use of the recently published Generalized Condensation Theory and the cross sections are recondensed at the whole-core level. By iteratively performing this recondensation, an improved core solution is found in which the core-environment has been fully taken into account. This recondensation method is both easy to implement and computationally very efficient because it requires precomputation and storage of only the energy integrals and fine-group cross sections. In this work, the theoretical basis and development
[Basic theory and research method of urban forest ecology].
He, Xingyuan; Jin, Yingshan; Zhu, Wenquan; Xu, Wenduo; Chen, Wei
2002-12-01
With the development of the world economy and the growth of urban populations, urban environmental problems hinder sustainable urban development. More and more people have now realized the importance of urban forests in improving the quality of urban ecology. Therefore, a new subject, urban forest ecology, and a corresponding new conceptual framework have formed in the field. The theoretical foundation of urban forest ecology derives from the combination of theory relating to forest ecology, landscape ecology, landscape architecture ecology and human ecology. The development of the city is surveyed from the view of the ecosystem, regarding the environment and the community of humans, animals and plants as the main factors of the system. The paper systematically introduces urban forest ecology as follows: 1) the basic concept of urban forest ecology; 2) the meaning of urban forest ecology; 3) the basic principles and theoretical basis of urban forest ecology; 4) the research methods of urban forest ecology; 5) the development prospects of urban forest ecology.
Integrating financial theory and methods in electricity resource planning
Energy Technology Data Exchange (ETDEWEB)
Felder, F.A. [Economics Resource Group, Cambridge, MA (United States)
1996-02-01
Decision makers throughout the world are introducing risk and market forces in the electric power industry to lower costs and improve services. Incentive based regulation (IBR), which replaces cost of service ratemaking with an approach that divorces costs from revenues, exposes the utility to the risk of profits or losses depending on their performance. Regulators also are allowing for competition within the industry, most notably in the wholesale market and possibly in the retail market. Two financial approaches that incorporate risk in resource planning are evaluated: risk adjusted discount rates (RADR) and options theory (OT). These two complementary approaches are an improvement over the standard present value revenue requirement (PVRR). However, each method has some important limitations. By correctly using RADR and OT and understanding their limitations, decision makers can improve their ability to value risk properly in power plant projects and integrated resource plans. (Author)
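The difference between standard PVRR-style discounting and a risk-adjusted discount rate (RADR) can be shown with a toy cash-flow example. All figures below (investment size, annual revenue, the 5% and 12% rates) are invented for illustration, not taken from the article:

```python
# Hypothetical project: invest 100 now, receive 20 per year for 10 years.
# Discounting the same cash flows at a risk-adjusted rate rather than a
# low risk-free rate shrinks the value, which is how RADR penalizes
# riskier power plant projects in resource planning.

def npv(cashflows, rate):
    """Net present value of cashflows[t] received at end of year t."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

cashflows = [-100.0] + [20.0] * 10
risk_free = npv(cashflows, 0.05)      # naive PVRR-style discounting
risk_adjusted = npv(cashflows, 0.12)  # higher rate reflecting project risk

print(f"NPV at 5%:  {risk_free:.2f}")
print(f"NPV at 12%: {risk_adjusted:.2f}")
```

A project that looks comfortably positive at the risk-free rate can shrink toward zero once its riskiness is priced in; options theory goes further by valuing the flexibility to defer or abandon, which a single discount rate cannot capture.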
Grounded Theory Method: Sociology's Quest for Exclusive Items of Inquiry
Directory of Open Access Journals (Sweden)
Edward Tolhurst
2012-09-01
Full Text Available The genesis and development of grounded theory method (GTM is evaluated with reference to sociology's attempt to demarcate exclusive referents of inquiry. The links of objectivist GTM to positivistic terminology and to the natural scientific distinction from "common sense" are explored. It is then considered how the biological sciences have prompted reorientation towards constructivist GTM, underpinned by the metaphysics of social constructionism. GTM has been shaped by the endeavor to attain the sense of exactitude associated with positivism, whilst also seeking exclusive referents of inquiry that are distinct from the empirical realm of the natural sciences. This has generated complex research techniques underpinned by tortuous methodological debate: eschewing the perceived requirement to define and defend an academic niche could help to facilitate the development of a more useful and pragmatic orientation to qualitative social research. URN: http://nbn-resolving.de/urn:nbn:de:0114-fqs1203261
Designing modular manufacturing systems using mass customisation theories and methods
DEFF Research Database (Denmark)
Jørgensen, Steffen Nordahl; Hvilshøj, Mads; Madsen, Ole
2012-01-01
Today, manufacturing systems are developed as engineered to order (ETO) solutions tailored to produce a specific product or a limited product mix. However, such dedicated systems are not consistent with the current market demands for rapid product changes, high product variety, and customisation....... In response, modular manufacturing systems (MMS) are evolving, which are aimed to possess the required responsiveness and to be the manufacturing paradigm of mass customisation (MC). Hereby, MMS brings the development process of manufacturing systems against configured to order (CTO). Up to now, research...... in MMS has primarily focused on potential benefits, basic principles, and enabling technologies, while the approaches of actually designing and creating modular architectures have received less attention. A potential to fill these gaps by applying MC theories and methods is identified based...
Bootstrapping conformal field theories with the extremal functional method.
El-Showk, Sheer; Paulos, Miguel F
2013-12-13
The existence of a positive linear functional acting on the space of (differences between) conformal blocks has been shown to rule out regions in the parameter space of conformal field theories (CFTs). We argue that at the boundary of the allowed region the extremal functional contains, in principle, enough information to determine the dimensions and operator product expansion (OPE) coefficients of an infinite number of operators appearing in the correlator under analysis. Based on this idea we develop the extremal functional method (EFM), a numerical procedure for deriving the spectrum and OPE coefficients of CFTs lying on the boundary (of solution space). We test the EFM by using it to rederive the low lying spectrum and OPE coefficients of the two-dimensional Ising model based solely on the dimension of a single scalar quasiprimary--no Virasoro algebra required. Our work serves as a benchmark for applications to more interesting, less known CFTs in the near future.
Statistical physics and computational methods for evolutionary game theory
Javarone, Marco Alberto
2018-01-01
This book presents an introduction to Evolutionary Game Theory (EGT) which is an emerging field in the area of complex systems attracting the attention of researchers from disparate scientific communities. EGT allows one to represent and study several complex phenomena, such as the emergence of cooperation in social systems, the role of conformity in shaping the equilibrium of a population, and the dynamics in biological and ecological systems. Since EGT models belong to the area of complex systems, statistical physics constitutes a fundamental ingredient for investigating their behavior. At the same time, the complexity of some EGT models, such as those realized by means of agent-based methods, often require the implementation of numerical simulations. Therefore, beyond providing an introduction to EGT, this book gives a brief overview of the main statistical physics tools (such as phase transitions and the Ising model) and computational strategies for simulating evolutionary games (such as Monte Carlo algor...
Evolutionary game theory using agent-based methods.
Adami, Christoph; Schossau, Jory; Hintze, Arend
2016-12-01
Evolutionary game theory is a successful mathematical framework geared towards understanding the selective pressures that affect the evolution of the strategies of agents engaged in interactions with potential conflicts. While a mathematical treatment of the costs and benefits of decisions can predict the optimal strategy in simple settings, more realistic settings such as finite populations, non-vanishing mutation rates, stochastic decisions, communication between agents, and spatial interactions require agent-based methods where each agent is modeled as an individual, carries its own genes that determine its decisions, and where the evolutionary outcome can only be ascertained by evolving the population of agents forward in time. While highlighting standard mathematical results, we compare those to agent-based methods that can go beyond the limitations of equations and simulate the complexity of heterogeneous populations and an ever-changing set of interactors. We conclude that agent-based methods can predict evolutionary outcomes where purely mathematical treatments cannot tread (for example in the weak selection-strong mutation limit), but that mathematics is crucial to validate the computational simulations. Copyright © 2016 Elsevier B.V. All rights reserved.
Detecting spatial regimes in ecosystems
Research on early warning indicators has generally focused on assessing temporal transitions with limited application of these methods to detecting spatial regimes. Traditional spatial boundary detection procedures that result in ecoregion maps are typically based on ecological potential (i.e. potential vegetation), and often fail to account for ongoing changes due to stressors such as land use change and climate change and their effects on plant and animal communities. We use Fisher information, an information theory based method, on both terrestrial and aquatic animal data (US Breeding Bird Survey and marine zooplankton) to identify ecological boundaries, and compare our results to traditional early warning indicators, conventional ecoregion maps, and multivariate analysis such as nMDS (non-metric Multidimensional Scaling) and cluster analysis. We successfully detect spatial regimes and transitions in both terrestrial and aquatic systems using Fisher information. Furthermore, Fisher information provided explicit spatial information about community change that is absent from other multivariate approaches. Our results suggest that defining spatial regimes based on animal communities may better reflect ecological reality than do traditional ecoregion maps, especially in our current era of rapid and unpredictable ecological change.
A computational chemistry analysis of six unique tautomers of cyromazine, a pesticide used for fly control, was performed with density functional theory (DFT) and canonical second order Møller–Plesset perturbation theory (MP2) methods to gain insight into the contributions of molecular structure to ...
Lorin, E.; Yang, X.; Antoine, X.
2016-06-01
The paper is devoted to developing efficient domain decomposition methods for the linear Schrödinger equation beyond the semiclassical regime, which does not carry a small enough rescaled Planck constant for asymptotic methods (e.g. geometric optics) to produce good accuracy, but which is too computationally expensive if direct methods (e.g. finite difference) are applied. This belongs to the category of computing middle-frequency wave propagation, where neither asymptotic nor direct methods can be directly used with both efficiency and accuracy. Motivated by recent works of the authors on absorbing boundary conditions (Antoine et al. (2014) [13] and Yang and Zhang (2014) [43]), we introduce Semiclassical Schwarz Waveform Relaxation methods (SSWR), which are seamless integrations of semiclassical approximation with Schwarz Waveform Relaxation methods. Two versions are proposed, based respectively on Herman-Kluk propagation and geometric optics, and we prove the convergence and provide numerical evidence of the efficiency and accuracy of these methods.
Brooke, D.; Vondrasek, D. V.
1978-01-01
The aerodynamic influence coefficients calculated using an existing linear theory program were used to modify the pressures calculated using impact theory. Application of the combined approach to several wing-alone configurations shows that the combined approach gives improved predictions of the local pressure and loadings over either linear theory alone or impact theory alone. The approach not only removes most of the shortcomings of the individual methods, as applied in the Mach 4 to 8 range, but also provides the basis for an inverse design procedure applicable to high speed configurations.
Toric Methods in F-Theory Model Building
Directory of Open Access Journals (Sweden)
Johanna Knapp
2011-01-01
Full Text Available We discuss recent constructions of global F-theory GUT models and explain how to make use of toric geometry to do calculations within this framework. After introducing the basic properties of global F-theory GUTs, we give a self-contained review of toric geometry and introduce all the tools that are necessary to construct and analyze global F-theory models. We will explain how to systematically obtain a large class of compact Calabi-Yau fourfolds which can support F-theory GUTs by using the software package PALP.
Angular parallelization of a curvilinear Sn transport theory method
International Nuclear Information System (INIS)
Haghighat, A.
1991-01-01
In this paper a parallel algorithm for angular domain decomposition (or parallelization) of an r-dependent spherical Sn transport theory method is derived. The parallel formulation is incorporated into TWOTRAN-II using the IBM Parallel Fortran compiler and implemented on an IBM 3090/400 (with four processors). The behavior of the parallel algorithm for different physical problems is studied, and it is concluded that the parallel algorithm behaves differently in the presence of a fission source as opposed to the absence of a fission source; this is attributed to the relative contributions of the source and the angular redistribution terms in the Sn algorithm. Further, the parallel performance of the algorithm is measured for various problem sizes and different combinations of angular subdomains or processors. Poor parallel efficiencies between ∼35% and 50% are achieved in situations where the relative difference of parallel to serial iterations is ∼50%. High parallel efficiencies between ∼60% and 90% are obtained in situations where the relative difference of parallel to serial iterations is <35%.
Directory of Open Access Journals (Sweden)
A. V. Malov
2018-01-01
Full Text Available This review article explains the concept of the Food Regime, which is little known in Russian academic discourse. Based on a retrospective analysis, the author traces the semantic dynamics of the term from its original interpretations to modern formulations. Using historical and comparative methods, the article restores the academic credit due to D. Puchala and R. Hopkins, authors who used the concept of the Food Regime several years before its universally recognized origin and official scientific debut. The method of ascending from the abstract to the concrete is used to present a classification of Food Regimes compiled on the basis of geopolitical interests in the sphere of international production, consumption, and distribution of foodstuffs. The characteristic features of historically formed Food Regimes are described in chronological order, and modern tendencies with reformist potential are identified. In particular, it is shown that the idea of Food Sovereignty, an alternative to the modern Corporate Food Regime, is the subject of acute academic dispute. The discussion between P. McMichael and H. Bernstein devoted to the "peasant question", the mobilization frame of the Food Sovereignty strategy, is analyzed using the secondary data processing method. Through this critical analysis, the author concludes that following the principles of the Food Sovereignty strategy is necessary to prevent the catastrophic prospects associated with ecosystem degradation, accelerated soil erosion, the complete disappearance of biodiversity, and corporate autocracy. The author is convinced that the idea of Food Sovereignty can ward off the energetic liberalization of nature, intensive privatization of life and rapid monetization of unconditioned human reflexes.
Introduction to functional and path integral methods in quantum field theory
International Nuclear Information System (INIS)
Strathdee, J.
1991-11-01
The following aspects concerning the use of functional and path integral methods in quantum field theory are discussed: generating functionals and the effective action, perturbation series, Yang-Mills theory and BRST symmetry. 10 refs, 3 figs
Directory of Open Access Journals (Sweden)
P.Ye. Mikhalichenko
2012-04-01
Full Text Available In this article, the notions of the current and instantaneous spectrum are introduced for analyzing deterministic functions of electric values in a DC electric traction supply system under emergency operating regimes.
Operator ordering in quantum optics theory and the development of Dirac's symbolic method
International Nuclear Information System (INIS)
Fan Hongyi
2003-01-01
We present a general unified approach for arranging quantum operators of optical fields into ordered products (normal ordering, antinormal ordering, Weyl ordering (or symmetric ordering)) by fashioning Dirac's symbolic method and representation theory. We propose the technique of integration within an ordered product (IWOP) of operators to realize our goal. The IWOP makes Dirac's representation theory and the symbolic method more transparent and consequently more easily understood. The beauty of Dirac's symbolic method is further revealed. Various applications of the IWOP technique, such as in developing the entangled state representation theory, nonlinear coherent state theory, Wigner function theory, etc, are presented. (review article)
Modeling of hydrogen Stark line shapes with kinetic theory methods
Rosato, J.; Capes, H.; Stamm, R.
2012-12-01
The unified formalism for Stark line shapes is revisited and extended to non-binary interactions between an emitter and the surrounding perturbers. The accuracy of this theory is examined through comparisons with ab initio numerical simulations.
Introducing Evidence Through Research "Push": Using Theory and Qualitative Methods.
Morden, Andrew; Ong, Bie Nio; Brooks, Lauren; Jinks, Clare; Porcheret, Mark; Edwards, John J; Dziedzic, Krysia S
2015-11-01
A multitude of factors can influence the uptake and implementation of complex interventions in health care. A plethora of theories and frameworks recognize the need to establish relationships, understand organizational dynamics, address context and contingency, and engage key decision makers. Less attention is paid to how theories that emphasize relational contexts can actually be deployed to guide the implementation of an intervention. The purpose of the article is to demonstrate the potential role of qualitative research aligned with theory to inform complex interventions. We detail a study underpinned by theory and qualitative research that (a) ensured key actors made sense of the complex intervention at the earliest stage of adoption and (b) aided initial engagement with the intervention. We conclude that using theoretical approaches aligned with qualitative research can provide insights into the context and dynamics of health care settings that in turn can be used to aid intervention implementation. © The Author(s) 2015.
International Nuclear Information System (INIS)
Baryshev, Vyacheslav N
2012-01-01
Frequency stabilisation of diode laser radiation has been implemented by the Pound-Drever-Hall method using a new acousto-optic phase modulator operating in the pure Raman-Nath diffraction regime. It is experimentally shown that, as in the case of saturated-absorption spectroscopy in atomic vapour, the spatial divergence of the frequency-modulated output spectrum of this modulator does not interfere with obtaining error signals by means of heterodyne frequency-modulation spectroscopy with a frequency discriminator based on a high-Q Fabry-Perot cavity with a finesse of several tens of thousands.
Convergent close-coupling method: a 'complete scattering theory'?
Energy Technology Data Exchange (ETDEWEB)
Bray, I; Fursa, D V
1995-09-01
It is demonstrated that a single convergent close-coupling (CCC) calculation of 100 eV electron impact on the ground state of helium is able to provide accurate elastic and inelastic (n ≤ 3 levels) differential cross sections, as well as singly-, doubly-, and triply-differential ionization cross sections. Hence, it is suggested that the CCC theory deserves the title of a 'complete scattering theory'. 28 refs., 5 figs.
International Nuclear Information System (INIS)
Van der Neut Kolfschoten, M.E.
2008-01-01
In the past few years the Netherlands and Great Britain have seen a significant increase in the number of connection applications from both conventional and renewable generators. As there is insufficient transmission capacity to accommodate these applications, a queue system was introduced. These queues are considered an obstacle to meeting the government's renewable targets, and therefore in both countries a review of the current access regime was kicked off. Despite or perhaps due to their 'consensus culture', the Dutch government has decided on the way forward, whereas in Great Britain the options, including capacity auctions, are still being debated. In the Netherlands the connection queue will be abolished, every generator will be able to connect before wider system reinforcements have been carried out, and constraints will be resolved by the introduction of congestion management. Although this may seem a sensible way forward, as it is expected to result in indirect priority access for renewables, it may still be useful to consider the mixed British experience with regard to congestion management. The article describes the background to the connection queues and provides a high-level overview of the regulatory framework and the developments and ongoing debates in the Netherlands and Great Britain.
Basko, D M
2014-02-01
We study the discrete nonlinear Schrödinger equation with weak disorder, focusing on the regime when the nonlinearity is, on the one hand, weak enough for the normal modes of the linear problem to remain well resolved but, on the other, strong enough for the dynamics of the normal mode amplitudes to be chaotic for almost all modes. We show that in this regime and in the limit of high temperature, the macroscopic density ρ satisfies the nonlinear diffusion equation with a density-dependent diffusion coefficient, D(ρ) = D₀ρ². An explicit expression for D₀ is obtained in terms of the eigenfunctions and eigenvalues of the linear problem, which is then evaluated numerically. The role of the second conserved quantity (energy) in the transport is also quantitatively discussed.
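The density-dependent law D(ρ) = D₀ρ² can be made concrete with a minimal numerical sketch (not the authors' derivation; the grid, time step, and initial bump are illustrative assumptions). A conservative explicit finite-volume scheme integrates ∂ρ/∂t = ∂x[D₀ρ² ∂xρ]; because D vanishes as ρ → 0, a localized bump spreads sub-diffusively at its edges:

```python
import numpy as np

def step(rho, dx, dt, D0=1.0):
    """One explicit, conservative finite-volume step of the nonlinear diffusion
    equation  d(rho)/dt = d/dx[ D0 * rho^2 * d(rho)/dx ]  with no-flux walls."""
    rho_face = 0.5 * (rho[1:] + rho[:-1])           # density at cell interfaces
    flux = -D0 * rho_face**2 * (rho[1:] - rho[:-1]) / dx
    drho = np.zeros_like(rho)
    drho[:-1] -= flux / dx                          # flux leaving cell i to the right
    drho[1:] += flux / dx                           # same flux entering cell i+1
    return rho + dt * drho

# a narrow bump spreads slowly at its edges, since D(rho) -> 0 as rho -> 0
x = np.linspace(-5.0, 5.0, 201)
rho = np.exp(-x**2)
for _ in range(4000):                               # integrate to t = 2
    rho = step(rho, dx=x[1] - x[0], dt=5e-4)
```

The flux form guarantees that the total density (the first conserved quantity of the equation) is preserved to machine precision.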
Grounded Theory as a "Family of Methods": A Genealogical Analysis to Guide Research
Babchuk, Wayne A.
2011-01-01
This study traces the evolution of grounded theory from a nuclear to an extended family of methods and considers the implications that decision-making based on informed choices throughout all phases of the research process has for realizing the potential of grounded theory for advancing adult education theory and practice. [This paper was…
Transition from weak wave turbulence regime to solitonic regime
Hassani, Roumaissa; Mordant, Nicolas
2017-11-01
The Weak Turbulence Theory (WTT) is a statistical theory describing the interaction of a large ensemble of random waves characterized by very different length scales. When both the non-linearity and the dispersion are weak, a different regime is predicted in which solitons propagate while keeping their shape unchanged. The question under investigation here is which regime the system chooses: weak turbulence or a soliton gas? We report an experimental investigation of wave turbulence at the surface of finite-depth water in the gravity-capillary range. We tune the wave dispersion and the level of nonlinearity by modifying the depth of water and the forcing, respectively. We use space-time resolved profilometry to reconstruct the deformed surface of the water. When decreasing the water depth, we observe a drastic transition between weak turbulence at the weakest forcing and a solitonic regime at stronger forcing. We characterize the transition between both states by studying their Fourier spectra. We also study the efficiency of energy transfer in the weak turbulence regime. We report a loss of efficiency of angular transfer as the dispersion of the waves is reduced, until the system bifurcates into the solitonic regime. This project has received funding from the European Research Council (ERC, Grant Agreement No. 647018-WATU).
Methods of Approximation Theory in Complex Analysis and Mathematical Physics
Saff, Edward
1993-01-01
The book incorporates research papers and surveys written by participants of an International Scientific Programme on Approximation Theory jointly supervised by the Institute for Constructive Mathematics of the University of South Florida at Tampa, USA and the Euler International Mathematical Institute at St. Petersburg, Russia. The aim of the Programme was to present new developments in Constructive Approximation Theory. The topics of the papers are: asymptotic behaviour of orthogonal polynomials, rational approximation of classical functions, quadrature formulas, theory of n-widths, nonlinear approximation in Hardy algebras, numerical results on best polynomial approximations, wavelet analysis. FROM THE CONTENTS: E.A. Rakhmanov: Strong asymptotics for orthogonal polynomials associated with exponential weights on R.- A.L. Levin, E.B. Saff: Exact Convergence Rates for Best Lp Rational Approximation to the Signum Function and for Optimal Quadrature in Hp.- H. Stahl: Uniform Rational Approximation of |x|.- M. Rahman, S.K. ...
Methods of qualitative theory of differential equations and related topics
Lerman, L; Shilnikov, L
2000-01-01
Dedicated to the memory of Professor E. A. Leontovich-Andronova, this book was composed by former students and colleagues who wished to mark her contributions to the theory of dynamical systems. A detailed introduction by Leontovich-Andronova's close colleague, L. Shilnikov, presents biographical data and describes her main contribution to the theory of bifurcations and dynamical systems. The main part of the volume is composed of research papers presenting the interests of Leontovich-Andronova, her students and her colleagues. Included are articles on traveling waves in coupled circle maps, b
An Introduction to Perturbative Methods in Gauge Theories
International Nuclear Information System (INIS)
T Muta
1998-01-01
This volume develops the techniques of perturbative QCD in great pedagogical detail starting with field theory. Aside from extensive treatments of the renormalization group technique, the operator product expansion formalism and their applications to short-distance reactions, this book provides a comprehensive introduction to gauge theories. Examples and exercises are provided to amplify the discussions on important topics. This is an ideal textbook on the subject of quantum chromodynamics and is essential for researchers and graduate students in high energy physics, nuclear physics and mathematical physics
Lattice Field Theory with the Sign Problem and the Maximum Entropy Method
Directory of Open Access Journals (Sweden)
Masahiro Imachi
2007-02-01
Although numerical simulation in lattice field theory is one of the most effective tools for studying non-perturbative properties of field theories, it faces serious obstacles arising from the sign problem in some theories, such as finite-density QCD and lattice field theory with the θ term. We reconsider this problem from the point of view of the maximum entropy method.
Linking Symbolic Interactionism and Grounded Theory Methods in a Research Design
Directory of Open Access Journals (Sweden)
Jennifer Chamberlain-Salaun
2013-09-01
This article focuses on Corbin and Strauss's evolved version of grounded theory. In the third edition of their seminal text, Basics of Qualitative Research: Techniques and Procedures for Developing Grounded Theory, the authors present 16 assumptions that underpin their conception of grounded theory methodology. The assumptions stem from a symbolic interactionism perspective of social life, including the themes of meaning, action and interaction, self and perspectives. As research design incorporates both methodology and methods, this article aims to expose the linkages between the 16 assumptions and essential grounded theory methods, highlighting the application of the latter in light of the former. Analyzing the links between symbolic interactionism and essential grounded theory methods provides novice researchers and researchers new to grounded theory with a foundation from which to design an evolved grounded theory research study.
The Navier-Stokes Equations Theory and Numerical Methods
Masuda, Kyûya; Rautmann, Reimund; Solonnikov, Vsevolod
1990-01-01
These proceedings contain original (refereed) research articles by specialists from many countries, on a wide variety of aspects of Navier-Stokes equations. Additionally, 2 survey articles intended for a general readership are included: one surveys the present state of the subject via open problems, and the other deals with the interplay between theory and numerical analysis.
Constructivist Teaching/Learning Theory and Participatory Teaching Methods
Fernando, Sithara Y. J. N.; Marikar, Faiz M. M. T.
2017-01-01
Teaching is often viewed as the transmission of knowledge, and the superiority of guided transmission is explained in those terms, but teaching is also much more than that. In this study we examined General Sir John Kotelawala Defence University's cadet and civilian students' responses to constructivist learning theory and participatory…
QuantCrit: Rectifying Quantitative Methods through Critical Race Theory
Garcia, Nichole M.; López, Nancy; Vélez, Verónica N.
2018-01-01
Critical race theory (CRT) in education centers, examines, and seeks to transform the relationship that undergirds race, racism, and power. CRT scholars have applied a critical race framework to advance research methodologies, namely qualitative interventions. Informed by this work, and 15 years later, this article reconsiders the possibilities of…
Team Performance Pay and Motivation Theory: A Mixed Methods Study
Wells, Pamela; Combs, Julie P.; Bustamante, Rebecca M.
2013-01-01
This study was conducted to explore teachers' perceptions of a team performance pay program in a large suburban school district through the lens of motivation theories. Mixed data analysis was used to analyze teacher responses from two archival questionnaires (Year 1, n = 368; Year 2, n = 649). Responses from teachers who participated in the team…
Long-memory time series theory and methods
Palma, Wilfredo
2007-01-01
Wilfredo Palma, PhD, is Chairman and Professor of Statistics in the Department of Statistics at Pontificia Universidad Católica de Chile. Dr. Palma has published several refereed articles and has received over a dozen academic honors and awards. His research interests include time series analysis, prediction theory, state space systems, linear models, and econometrics.
Grassmann methods in lattice field theory and statistical mechanics
International Nuclear Information System (INIS)
Bilgici, E.; Gattringer, C.; Huber, P.
2006-01-01
In two dimensions, models of loops can be represented as simple Grassmann integrals. In our work we explore the generalization of these techniques to lattice field theories and statistical mechanics systems in three and four dimensions. We discuss possible strategies and applications for representations of loop and surface models as Grassmann integrals. (author)
Kaiplavil, Sreekumar; Rivens, Ian; ter Haar, Gail
2013-07-01
Ultrasound imparted air-recoil resonance (UIAR), a new method for acoustic power estimation, is introduced with emphasis on therapeutic high-intensity focused ultrasound (HIFU) monitoring applications. Advantages of this approach over existing practices include fast response; electrical and magnetic inertness, and hence MRI compatibility; portability; high damage threshold and immunity to vibration and interference; and low cost. The angle of incidence should be fixed for accurate measurement. However, the transducer-detector pair can be aligned in any direction with respect to the force of gravity; in this sense, the operation of the device is orientation independent. The acoustic response of a pneumatically coupled pair of Helmholtz resonators, with one of them acting as the sensor head, is used for the estimation of acoustic power. The principle is valid in the case of pulsed/burst as well as continuous ultrasound exposure, the former being more sensitive and accurate. An electro-acoustic theory has been developed for describing the dynamics of pressure flow and resonance in the system, considering various thermoviscous loss mechanisms. Experimental observations are found to be in agreement with theoretical results. Assuming the window damage threshold (~10 J·mm⁻²) and the accuracy of RF power estimation are the upper and lower scale-limiting factors, the performance of the device was examined for an RF power range of 5 mW to 100 W with a HIFU transducer operating at 1.70 MHz, and an average nonlinearity of ~1.5% was observed. The device is also sensitive to sub-milliwatt powers. The frequency response was analyzed at 0.85, 1.70, 2.55, and 3.40 MHz and the results are presented with respective theoretical estimates. Typical response time is in the millisecond regime. Output drift is about 3% for resonant and 5% for nonresonant modes. The principle has been optimized to demonstrate a general-purpose acoustic power meter.
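The abstract does not state the resonance condition; for orientation only, the textbook resonance frequency of a single Helmholtz resonator (the coupled-pair electro-acoustic theory developed by the authors is more involved) is

```latex
f_0 = \frac{c}{2\pi} \sqrt{\frac{A}{V L_{\mathrm{eff}}}}
```

where c is the speed of sound, A the neck cross-sectional area, L_eff the effective neck length (geometric length plus end corrections), and V the cavity volume.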
Ensemble method: Community detection based on game theory
Zhang, Xia; Xia, Zhengyou; Xu, Shengwu; Wang, J. D.
2014-08-01
Timely and cost-effective analytics over social networks has emerged as a key ingredient for success in many business and government endeavors. Community detection is an active research area of relevance to the analysis of online social networks. The problem of selecting a particular community detection algorithm is crucial if the aim is to unveil the community structure of a network. The choice of a given methodology could affect the outcome of the experiments, because different algorithms have different advantages and depend on tuning specific parameters. In this paper, we propose a community division model based on the notion of game theory, which can combine the advantages of previous algorithms effectively to get a better community classification result. Experiments on some standard datasets verify that our community detection model based on game theory is valid and better.
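The paper's own model is not specified in the abstract; as a minimal sketch of the game-theoretic idea, here each node's payoff is the number of neighbours sharing its community label, and nodes play best responses until no node benefits from switching (the function name, payoff, and tie-breaking are illustrative assumptions, not the authors' algorithm):

```python
from collections import Counter

def best_response_communities(edges, max_rounds=50):
    """Toy best-response community detection: each node's payoff is the number
    of neighbours sharing its label; nodes switch labels until no one benefits
    (a Nash-like equilibrium of the label game)."""
    nbrs = {}
    for u, v in edges:
        nbrs.setdefault(u, set()).add(v)
        nbrs.setdefault(v, set()).add(u)
    label = {n: n for n in nbrs}                  # start from singleton communities
    for _ in range(max_rounds):
        changed = False
        for n in sorted(nbrs, reverse=True):      # fixed visit order; order matters
            counts = Counter(label[m] for m in nbrs[n])
            # best response: most common neighbour label, smallest label on ties
            best = max(counts, key=lambda c: (counts[c], -c))
            if counts[best] > counts.get(label[n], 0):
                label[n] = best
                changed = True
        if not changed:
            break
    return label

# two triangles joined by one bridge edge separate into two communities
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
labels = best_response_communities(edges)
```

Real game-theoretic detectors use richer gain/loss functions (e.g., modularity gain minus a membership cost), but the equilibrium-seeking structure is the same.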
Mood, Method and Affect: Current Shifts in Feminist Theory
Directory of Open Access Journals (Sweden)
Ellen Mortensen
2017-10-01
Epistemic habits in feminist research are constantly changing in scope and emphasis. One of the most striking ruptures that we can observe these days, at least in the humanities, is a renewed epistemic interest among feminists in the question of mood, where both positive and negative affects come into play. Mood figures in a number of theoretical traditions, ranging from the hermeneutics of Heidegger, Gadamer and Ricoeur to phenomenology, psychoanalytic theories of affect and Deleuzian affect theory. In the article I want to explore two different approaches to the question of mood in feminist theory. In the first part, I will investigate Rita Felski's treatment of mood in her recent attack on 'critique' as well as in her proposed alternative, her 'post-critical' approach to reading and interpretation. In so doing, I will formulate some questions that have emerged in my attempt to grapple with Felski's post-critical approach. In the second part of this essay, I will delve into another understanding of the concept of mood, namely Deleuzian affect, and more specifically as it has been embraced by feminist theorists such as Rosi Braidotti and Elizabeth Grosz in their respective theoretical works. In the concluding part of this article, I will discuss some of the implications of the different takes on mood for feminist epistemic habits.
A method in search of a theory: peer education and health promotion.
Turner, G; Shepherd, J
1999-04-01
Peer education has grown in popularity and practice in recent years in the field of health promotion. However, advocates of peer education rarely make reference to theories in their rationale for particular projects. In this paper the authors review a selection of commonly cited theories, and examine to what extent they have value and relevance to peer education in health promotion. Beginning from an identification of 10 claims made for peer education, each theory is examined in terms of the scope of the theory and evidence to support it in practice. The authors conclude that, whilst most theories have something to offer towards an explanation of why peer education might be effective, most theories are limited in scope and there is little empirical evidence in health promotion practice to support them. Peer education would seem to be a method in search of a theory rather than the application of theory to practice.
Are There Two Methods of Grounded Theory? Demystifying the Methodological Debate
Directory of Open Access Journals (Sweden)
Cheri Ann Hernandez, RN, Ph.D., CDE
2008-06-01
Grounded theory is an inductive research method for the generation of substantive or formal theory, using qualitative or quantitative data generated from research interviews, observation, or written sources, or some combination thereof (Glaser & Strauss, 1967). In recent years there has been much controversy over the etiology of its discovery, as well as the exact way in which grounded theory research is to be operationalized. Unfortunately, this situation has resulted in much confusion, particularly among novice researchers who wish to utilize this research method. In this article, the historical, methodological and philosophical roots of grounded theory are delineated in a beginning effort to demystify this methodological debate. Grounded theory variants such as feminist grounded theory (Wuest, 1995) or constructivist grounded theory (Charmaz, 1990) are beyond the scope of this discussion.
A Novel Method of Enhancing Grounded Theory Memos with Voice Recording
Stocker, Rachel; Close, Helen
2013-01-01
In this article the authors present the recent discovery of a novel method of supplementing written grounded theory memos with voice recording, the combination of which may provide significant analytical advantages over the traditional written method alone. Memo writing is an essential component of a grounded theory study; however, it is often…
Particle transport methods for LWR dosimetry developed by the Penn State transport theory group
International Nuclear Information System (INIS)
Haghighat, A.; Petrovic, B.
1997-01-01
This paper reviews advanced particle transport theory methods developed by the Penn State Transport Theory Group (PSTTG) over the past several years. These methods have been developed in response to increasing needs for accuracy of results and for three-dimensional modeling of nuclear systems
Theory of difference equations numerical methods and applications
Lakshmikantham, Vangipuram
1988-01-01
In this book, we study theoretical and practical aspects of computing methods for mathematical modelling of nonlinear systems. A number of computing techniques are considered, such as methods of operator approximation with any given accuracy; operator interpolation techniques including a non-Lagrange interpolation; methods of system representation subject to constraints associated with concepts of causality, memory and stationarity; methods of system representation with an accuracy that is the best within a given class of models; methods of covariance matrix estimation; methods for low-rank mat
Directory of Open Access Journals (Sweden)
Metin Varan
2017-08-01
Field theory is one of the two sub-fields of electrical and electronics engineering that create difficulties for undergraduate students. At the undergraduate level, field theory is taught as the theory of electromagnetic fields, which is described using partial differential equations and integral methods. Analytical solution of field problems on the basis of a mathematical model can be hard for undergraduate students to follow because of the mathematical and physical background it requires, and the analytical methods that can be applied to simple models lose their applicability for more complex ones. In such cases, numerical methods are used to solve the more complex equations. In this study, web-based graphical user interfaces for numerical-method applications in field theory were prepared, with the aim of increasing the learning levels of undergraduate and graduate students in field theory problems while taking their computer programming capabilities into account.
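As a concrete instance of the kind of numerical method such a tool might host (a generic sketch, not code from the study), Jacobi relaxation solves Laplace's equation for the electrostatic potential on a square grid by repeatedly replacing each interior point with the average of its four neighbours:

```python
import numpy as np

def solve_laplace(top=1.0, n=50, tol=1e-5):
    """Jacobi relaxation for Laplace's equation on the unit square.
    The potential is held at `top` on the upper edge and 0 on the other edges;
    iteration stops when successive sweeps differ by less than `tol`."""
    v = np.zeros((n, n))
    v[0, :] = top                          # Dirichlet boundary condition
    while True:
        v_new = v.copy()
        # each interior point becomes the average of its four neighbours
        v_new[1:-1, 1:-1] = 0.25 * (v[:-2, 1:-1] + v[2:, 1:-1] +
                                    v[1:-1, :-2] + v[1:-1, 2:])
        if np.abs(v_new - v).max() < tol:
            return v_new
        v = v_new

v = solve_laplace()
# by symmetry the continuum solution equals 0.25 at the centre of the square
```

Jacobi iteration converges slowly (the number of sweeps grows with the square of the grid size), which is exactly the kind of behaviour an interactive web tool can make visible to students.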
Method development at Nordic School of Public Health NHV: Phenomenology and Grounded Theory.
Strandmark, Margaretha
2015-08-01
Qualitative methods such as phenomenology and grounded theory have been valuable tools in studying public health problems. This article describes and compares these methods. Phenomenology emphasises an inside perspective in the form of consciousness and subjectively lived experiences, whereas grounded theory emanates from the idea that interactions between people create new insights and knowledge. Fundamental aspects of phenomenology include the life world, consciousness, phenomenological reduction and essence. Significant elements in grounded theory are coding, categories and core categories, which develop into a theory. The methods differ in philosophical approach, in the names of their concepts and in their systematic tools. Thus, the phenomenological method is appropriate when studying emotional and existential research problems, and grounded theory is a method better suited to investigating processes. © 2015 the Nordic Societies of Public Health.
Guillemin, Ernst A
2013-01-01
An eminent electrical engineer and authority on linear system theory presents this advanced treatise, which approaches the subject from the viewpoint of classical dynamics and covers Fourier methods. This volume will assist upper-level undergraduates and graduate students in moving from introductory courses toward an understanding of advanced network synthesis. 1963 edition.
Comparison of Kernel Equating and Item Response Theory Equating Methods
Meng, Yu
2012-01-01
The kernel method of test equating is a unified approach to test equating with some advantages over traditional equating methods. Therefore, it is important to evaluate in a comprehensive way the usefulness and appropriateness of the Kernel equating (KE) method, as well as its advantages and disadvantages compared with several popular item…
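The core operation shared by kernel and traditional equipercentile equating is mapping a score on form X to the form-Y score with the same percentile rank. Below is a minimal equipercentile sketch (an illustration, not the kernel method itself: kernel equating first continuizes the discrete score distributions with Gaussian kernels before inverting):

```python
import numpy as np

def equipercentile_equate(x_scores, y_scores, grid):
    """Map each score in `grid` on form X to the form-Y score with the same
    percentile rank. Kernel equating refines this by smoothing (continuizing)
    both score distributions with Gaussian kernels before the inversion."""
    x_sorted = np.sort(x_scores)
    # percentile rank of each grid point under the form-X distribution
    p = np.searchsorted(x_sorted, grid, side="right") / len(x_sorted)
    # invert the form-Y distribution at those ranks
    return np.quantile(y_scores, p)

# toy check: if form Y is uniformly 10 points easier in raw score,
# equated scores shift by about 10
x = np.arange(100)
y = np.arange(100) + 10
eq = equipercentile_equate(x, y, grid=np.array([20, 50, 80]))
```

With discrete score data the step-function CDF makes this mapping jumpy; the kernel method's Gaussian continuization is what yields a smooth, differentiable equating function and its standard errors.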
Group-geometric methods in supergravity and superstring theories
International Nuclear Information System (INIS)
Castellani, L.
1992-01-01
The purpose of this paper is to give a brief and pedagogical account of the group-geometric approach to (super)gravity and superstring theories. The authors summarize the main ideas and apply them to selected examples. Group geometry provides a natural and unified formulation of gravity and gauge theories. The invariances of both are interpreted as diffeomorphisms on a suitable group manifold. This geometrical framework has a fruitful output, in that it provides a systematic algorithm for the gauging of Lie algebras and the construction of (super)gravity or (super)string Lagrangians. The basic idea is to associate fundamental fields to the group generators. This is done by considering first a basis of tangent vectors on the group manifold. These vectors close on the same algebra as the abstract group generators. The dual basis, i.e. the vielbeins (cotangent basis of one-forms), is then identified with the set of fundamental fields. Thus, for example, the vielbein V^a and the spin connection ω^ab of ordinary Einstein-Cartan gravity are seen as the duals of the tangent vectors corresponding to translations and Lorentz rotations, respectively.
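In this dual picture, the field strengths of Einstein-Cartan gravity are built from the vielbein and spin connection through the standard Cartan structure equations (quoted here for orientation; sign and index conventions vary):

```latex
T^{a}  = dV^{a} + \omega^{a}{}_{b} \wedge V^{b}
         \qquad \text{(torsion two-form)}
R^{ab} = d\omega^{ab} + \omega^{a}{}_{c} \wedge \omega^{cb}
         \qquad \text{(curvature two-form)}
```

These are exactly the quantities that appear in the (super)gravity Lagrangians whose construction the group-geometric algorithm systematizes.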
Energy Technology Data Exchange (ETDEWEB)
Bragin, V A; Lyadkin, V Ya
1969-01-01
A potentiometric model is used to simulate the behavior of a reservoir in which pressure was dropped rapidly and solution gas migrated to the top of the structure forming a gas cap. Behavior of the system was represented by a differential equation, which was solved by an electrointegrator. The potentiometric model was found to closely represent past history of the reservoir, and to predict its future behavior. When this method is used in reservoirs where large pressure drops occur, repeated determination should be made at various time intervals, so that changes in relative permeability are taken into account.
Energy Technology Data Exchange (ETDEWEB)
Zahariev, Federico; Gordon, Mark S., E-mail: mark@si.msg.chem.iastate.edu [Department of Chemistry, Iowa State University, Ames, Iowa 50011 (United States)
2014-05-14
This work presents an extension of the linear response TDDFT/EFP method to the nonlinear-response regime together with the implementation of nonlinear-response TDDFT/EFP in the quantum-chemistry computer package GAMESS. Included in the new method is the ability to calculate the two-photon absorption cross section and to incorporate solvent effects via the EFP method. The nonlinear-response TDDFT/EFP method is able to make correct qualitative predictions for both gas phase values and aqueous solvent shifts of several important nonlinear properties.
Method validation in pharmaceutical analysis: from theory to practical optimization
Directory of Open Access Journals (Sweden)
Jaqueline Kaleian Eserian
2015-01-01
The validation of analytical methods is required to obtain high-quality data. For the pharmaceutical industry, method validation is crucial to ensure product quality as regards both therapeutic efficacy and patient safety. The most critical step in validating a method is to establish a protocol containing well-defined procedures and criteria. A well-planned and organized protocol, such as the one proposed in this paper, results in a rapid and concise method validation procedure for quantitative high performance liquid chromatography (HPLC) analysis. Type: Commentary
Functional methods underlying classical mechanics, relativity and quantum theory
International Nuclear Information System (INIS)
Kryukov, A
2013-01-01
The paper investigates the physical content of a recently proposed mathematical framework that unifies the standard formalisms of classical mechanics, relativity and quantum theory. In the framework, states of a classical particle are identified with Dirac delta functions. The classical space is 'made' of these functions and becomes a submanifold in a Hilbert space of states of the particle. The resulting embedding of the classical space into the space of states is highly non-trivial and accounts for numerous deep relations between classical and quantum physics and relativity. One of the most striking results is the proof that the normal probability distribution of the position of a macroscopic particle (equivalently, the position of the corresponding delta state within the classical space submanifold) yields the Born rule for transitions between arbitrary quantum states.
Theory of Mind: Mechanisms, Methods, and New Directions
Directory of Open Access Journals (Sweden)
Lindsey Jacquelyn Byom
2013-08-01
Theory of Mind (ToM) has received significant research attention. Traditional ToM research has provided important understanding of how humans reason about mental states by utilizing shared world knowledge, social cues, and the interpretation of actions; however, many current behavioral paradigms are limited to static, third-person protocols. Emerging experimental approaches such as cognitive simulation and simulated social interaction offer opportunities to investigate ToM in interactive, first-person and second-person scenarios while affording greater experimental control. The advantages and limitations of traditional and emerging ToM methodologies are discussed with the intent of advancing the understanding of ToM in socially mediated situations.
A new method for multi-channel Fabry-Perot spectroscopy of light pulses in the nanosecond regime
International Nuclear Information System (INIS)
Behn, R.
1975-01-01
The demand for powerful multichannel spectrometers, arising for example in laser-scattering plasma diagnostics, raised the question of whether the light losses occurring in the use of multichannel Fabry-Perot spectrometers could be avoided. These losses can be avoided with the technique presented here. The reflected light is collected and fed back to the interferometer at a different angle; it can thus be recovered for registration in another spectral channel. This method is particularly suitable for the investigation of short light pulses. A spectrum can thus be scanned step by step with full utilization of the transit time of the light pulse. In addition to light recovery, there is another advantage in that only one detector is used for multichannel analysis, thus eliminating calibration problems. In the annex to the report, emission spectra of different dye laser versions are presented and explained. (orig./GG)
Mixed Methods in Intervention Research: Theory to Adaptation
Nastasi, Bonnie K.; Hitchcock, John; Sarkar, Sreeroopa; Burkholder, Gary; Varjas, Kristen; Jayasena, Asoka
2007-01-01
The purpose of this article is to demonstrate the application of mixed methods research designs to multiyear programmatic research and development projects whose goals include integration of cultural specificity when generating or translating evidence-based practices. The authors propose a set of five mixed methods designs related to different…
Methods of geometric function theory in classical and modern problems for polynomials
International Nuclear Information System (INIS)
Dubinin, Vladimir N
2012-01-01
This paper gives a survey of classical and modern theorems on polynomials, proved using methods of geometric function theory. Most of the paper is devoted to results of the author and his students, established by applying majorization principles for holomorphic functions, the theory of univalent functions, the theory of capacities, and symmetrization. Auxiliary results and the proofs of some of the theorems are presented. Bibliography: 124 titles.
Performance of density functional theory methods to describe ...
Indian Academy of Sciences (India)
Unknown
Chemical compounds present different types of isomerism. When two isomers differ by ... of DFT methods to describe intramolecular hydrogen shifts. Three small ... qualitative descriptions of intramolecular hydrogen shifts when large basis ...
Robust methods and asymptotic theory in nonlinear econometrics
Bierens, Herman J
1981-01-01
This Lecture Note deals with asymptotic properties, i.e. weak and strong consistency and asymptotic normality, of parameter estimators of nonlinear regression models and nonlinear structural equations under various assumptions on the distribution of the data. The estimation methods involved are nonlinear least squares estimation (NLLSE), nonlinear robust M-estimation (NLRME) and nonlinear weighted robust M-estimation (NLWRME) for the regression case, and nonlinear two-stage least squares estimation (NL2SLSE) and a new method called minimum information estimation (MIE) for the case of structural equations. The asymptotic properties of the NLLSE and the two robust M-estimation methods are derived from further elaborations of results of Jennrich. Special attention is paid to the comparison of the asymptotic efficiency of NLLSE and NLRME. It is shown that if the tails of the error distribution are fatter than those of the normal distribution, NLRME is more efficient than NLLSE. The NLWRME method is appropriate ...
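The efficiency claim for fat-tailed errors can be illustrated in the simplest setting, a robust location estimate (a generic sketch of Huber M-estimation via iteratively reweighted least squares, not code from the Lecture Note, which treats full nonlinear regression models):

```python
import numpy as np

def huber_location(y, k=1.345, max_iter=100, tol=1e-8):
    """Huber M-estimate of location via iteratively reweighted least squares.
    Residuals beyond k*scale are down-weighted, so fat-tailed errors pull the
    estimate far less than they pull the ordinary least-squares answer (the mean)."""
    mu = np.median(y)
    scale = np.median(np.abs(y - mu)) / 0.6745          # MAD estimate of scale
    for _ in range(max_iter):
        r = np.abs(y - mu) / scale
        w = np.minimum(1.0, k / np.maximum(r, 1e-12))   # Huber weights
        mu_new = np.sum(w * y) / np.sum(w)
        if abs(mu_new - mu) < tol:
            break
        mu = mu_new
    return mu

# 95% of the data near 5, plus 5% gross outliers near 50 (fat right tail)
rng = np.random.default_rng(0)
y = np.concatenate([rng.normal(5.0, 1.0, 95), rng.normal(50.0, 1.0, 5)])
robust, naive = huber_location(y), y.mean()  # robust stays near 5; mean is dragged up
```

The same reweighting idea carries over to regression: replace the location update with a weighted least-squares fit of the (nonlinear) model at each iteration.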
On the Possibility of a Scientific Theory of Scientific Method.
Nola, Robert
1999-01-01
Discusses the philosophical strengths and weaknesses of Laudan's normative naturalism, which understands the principles of scientific method to be akin to scientific hypotheses, and therefore open to test like any principle of science. Contains 19 references. (Author/WRM)
Generalized series method in the theory of atomic nucleus
International Nuclear Information System (INIS)
Gorbatov, A.M.
1991-01-01
On a hypersphere of a prescribed radius the so-called genealogical basis has been constructed. By making use of this basis, the many-body Schroedinger equation has been obtained for bound states of various physical systems. The genealogical series method, being in general outline the extension of the angular potential functions method, deals with the potential harmonics of any generation needed. The new approach provides an exact numerical description of the hadron systems with two-body higher interaction
A wave propagation matrix method in semiclassical theory
International Nuclear Information System (INIS)
Lee, S.Y.; Takigawa, N.
1977-05-01
A wave propagation matrix method is used to derive the semiclassical formulae of the multi-turning-point problem. A phase shift matrix and a barrier transformation matrix are introduced to describe the processes of a particle travelling through a potential well and crossing a potential barrier, respectively. The wave propagation matrix is given by the products of phase shift matrices and barrier transformation matrices. The method is then applied to the study of scattering by surface-transparent potentials and of Bloch waves in solids.
Atomic diffusion theory challenging the Cahn-Hilliard method
International Nuclear Information System (INIS)
Nastar, M.
2014-01-01
Our development of the self-consistent mean-field (SCMF) kinetic theory for nonuniform alloys leads to the statement that kinetic correlations induced by the vacancy diffusion mechanism have a dramatic effect on nano-scale diffusion phenomena, leading to nonlinear features of the interdiffusion coefficients. Lattice rate equations of alloys including nonuniform gradients of chemical potential are derived within the Bragg-Williams statistical approximation and the third shell kinetic approximation of the SCMF theory. General driving forces including deviations of the free energy from a local equilibrium thermodynamic formulation are introduced. These deviations are related to the variation of vacancy motion due to the spatial variation of the alloy composition. During the characteristic time of atomic diffusion, multiple exchanges of the vacancy with the same atoms may happen, inducing atomic kinetic correlations that depend as well on the spatial variation of the alloy composition. As long as the diffusion driving forces are uniform, the rate equations are shown to obey in this form the Onsager formalism of thermodynamics of irreversible processes (TIP) and the TIP-based Cahn-Hilliard diffusion equation. If now the chemical potential gradients are not uniform, the continuous limit of the present SCMF kinetic equations does not coincide with the Cahn-Hilliard (CH) equation. In particular, the composition gradient and higher derivative terms depending on kinetic parameters add to the CH thermodynamic-based composition gradient term. Indeed, a diffusion equation written as a mobility multiplied by a thermodynamic formulation of the driving forces is shown to be inadequate. In the reciprocal space, the thermodynamic driving force has to be multiplied by a nonlinear function of the wave vector accounting for the variation of kinetic correlations with composition inhomogeneities. Analytical expressions of the effective interdiffusion coefficient are given for two limit
Energy Technology Data Exchange (ETDEWEB)
Wahlen-Strothman, J. M. [Rice Univ., Houston, TX (United States); Henderson, T. H. [Rice Univ., Houston, TX (United States); Hermes, M. R. [Rice Univ., Houston, TX (United States); Degroote, M. [Rice Univ., Houston, TX (United States); Qiu, Y. [Rice Univ., Houston, TX (United States); Zhao, J. [Rice Univ., Houston, TX (United States); Dukelsky, J. [Consejo Superior de Investigaciones Cientificas (CSIC), Madrid (Spain). Inst. de Estructura de la Materia; Scuseria, G. E. [Rice Univ., Houston, TX (United States)
2018-01-03
Coupled cluster and symmetry projected Hartree-Fock are two central paradigms in electronic structure theory. However, they are very different. Single reference coupled cluster is highly successful for treating weakly correlated systems, but fails under strong correlation unless one sacrifices good quantum numbers and works with broken-symmetry wave functions, which is unphysical for finite systems. Symmetry projection is effective for the treatment of strong correlation at the mean-field level through multireference non-orthogonal configuration interaction wavefunctions, but unlike coupled cluster, it is neither size extensive nor ideal for treating dynamic correlation. We here examine different scenarios for merging these two dissimilar theories. We carry out this exercise over the integrable Lipkin model Hamiltonian, which despite its simplicity, encompasses non-trivial physics for degenerate systems and can be solved via diagonalization for a very large number of particles. We show how symmetry projection and coupled cluster doubles individually fail in different correlation limits, whereas models that merge these two theories are highly successful over the entire phase diagram. Despite the simplicity of the Lipkin Hamiltonian, the lessons learned in this work will be useful for building an ab initio symmetry projected coupled cluster theory that we expect to be accurate in the weakly and strongly correlated limits, as well as the recoupling regime.
Storberg-Walker, Julia; Chermack, Thomas J.
2007-01-01
The purpose of this article is to describe four methods for completing the conceptual development phase of theory building research for single or multiparadigm research. The four methods selected for this review are (1) Weick's method of "theorizing as disciplined imagination" (1989); (2) Whetten's method of "modeling as theorizing" (2002); (3)…
The Operation Method of Smarter City Based on Ecological Theory
Fan, C.; Fan, H. Y.
2017-10-01
As the accelerating pace of urbanization has caused galloping population growth, the urban framework is extending, with increasingly complex social problems. Urban management tends to become complicated and governance more difficult to pursue, so exploring new models of urban management has attracted the urgent attention of local governments. This paper combines the guiding ideology and practices of urban management based on ecological theory, explains the formation of the Smarter City Ecology Management model, makes a comparative analysis of modern urban management, and further defines the conceptual model of the aforesaid management mode. Based on the ecological carrying capacity of smarter city system theory, the author uses a mathematical model to prove the coordination relationship between the subsystems of the Smarter City Ecology Management mode, demonstrates that it can improve the overall level of urban management, and emphasizes the integrity of smarter city management, holding that optimization of the urban system rests on each subsystem being optimized and attaching importance to the elements, structure, and balance between subsystems and between their internal elements. Through the establishment of the conceptual model of the Smarter City Ecology Management model and its theoretical argumentation, the paper provides a theoretical basis and technical guidance for that model's innovation.
Introduction to modern methods of quantum many-body theory and their applications
Fantoni, Stefano; Krotscheck, Eckhard S
2002-01-01
This invaluable book contains pedagogical articles on the dominant nonstochastic methods of microscopic many-body theories - the methods of density functional theory, coupled cluster theory, and correlated basis functions - in their widest sense. Other articles introduce students to applications of these methods in front-line research, such as Bose-Einstein condensates, the nuclear many-body problem, and the dynamics of quantum liquids. These keynote articles are supplemented by experimental reviews on intimately connected topics that are of current relevance. The book addresses the striking l
Kou, Jisheng; Sun, Shuyu
2014-01-01
The gradient theory for the surface tension of simple fluids and mixtures is rigorously analyzed based on mathematical theory. The finite element approximation of surface tension is developed and analyzed, and moreover, an adaptive finite element method based on a physical-based estimator is proposed and it can be coupled efficiently with Newton's method as well. The numerical tests are carried out both to verify the proposed theory and to demonstrate the efficiency of the proposed method. © 2013 Elsevier B.V. All rights reserved.
Energy Technology Data Exchange (ETDEWEB)
Malec, L.; Skacel, F. [Department of Gas, Coke and Air Protection, Institute of Chemical Technology in Prague, (Czech Republic)]. E-mail: Lukas.Malec@vscht.cz; Fousek, T. [Institute of Public Health, District of Central Czech Republic, Kladno (Czech Republic); Tekac, V. [Department of Gas, Coke and Air Protection, Institute of Chemical Technology in Prague, (Czech Republic); Kral, P. [Institute of Public Health, District of Central Czech Republic, Kladno (Czech Republic)
2008-07-15
Tropospheric ozone is a secondary air pollutant whose ambient content is affected both by the emission rates of primary pollutants and by the variability of meteorological conditions. In this paper, we use two multivariate statistical methods to analyze the impact of the meteorological conditions associated with pollutant transformation processes. First, we evaluated the variability of the spatial and temporal distribution of ozone precursor parameters by using discriminant analysis (DA) in locations close to the industrial area of Kladno (a city in the Czech Republic). Second, we interpreted the data set by using factor analysis (FA) to examine the differences between ozone formation processes in summer and in winter. To avoid temperature dependency between the variables, as well as to describe tropospheric washout processes, we used water vapour content rather than the more commonly employed relative humidity parameter. In this way, we were able to successfully determine and subsequently evaluate the various processes of ozone formation, together with the distribution of ozone precursors. High air temperature, radiation and low water content relate to summer pollution episodes, while radiation and wind speed prove to be the most important parameters during winter.
Multipolar Ewald methods, 1: theory, accuracy, and performance.
Giese, Timothy J; Panteva, Maria T; Chen, Haoyuan; York, Darrin M
2015-02-10
The Ewald, Particle Mesh Ewald (PME), and Fast Fourier–Poisson (FFP) methods are developed for systems composed of spherical multipole moment expansions. A unified set of equations is derived that takes advantage of a spherical tensor gradient operator formalism in both real space and reciprocal space to allow extension to arbitrary multipole order. The implementation of these methods into a novel linear-scaling modified “divide-and-conquer” (mDC) quantum mechanical force field is discussed. The evaluation times and relative force errors are compared between the three methods, as a function of multipole expansion order. Timings and errors are also compared within the context of the quantum mechanical force field, which encounters primary errors related to the quality of reproducing electrostatic forces for a given density matrix and secondary errors resulting from the propagation of the approximate electrostatics into the self-consistent field procedure, which yields a converged, variational, but nonetheless approximate density matrix. Condensed-phase simulations of an mDC water model are performed with the multipolar PME method and compared to an electrostatic cutoff method, which is shown to artificially increase the density of water and heat of vaporization relative to full electrostatic treatment.
van den Bogaart, Antoine C. M.; Bilderbeek, Richel J. C.; Schaap, Harmen; Hummel, Hans G. K.; Kirschner, Paul A.
2016-01-01
This article introduces a dedicated, computer-supported method to construct and formatively assess open, annotated concept maps of Personal Professional Theories (PPTs). These theories are internalised, personal bodies of formal and practical knowledge, values, norms and convictions that professionals use as a reference to interpret and acquire…
International Nuclear Information System (INIS)
Du Yanjun; Liu Qingcheng; Liu Hongzhang; Qin Guoxiu
2009-01-01
In order to assess the feasibility of calculating mine radiation dose based on γ field theory, this paper calculates the γ radiation dose of a mine by means of a γ-field-theory-based calculation method. The results show that the calculated radiation dose has only a small error and can be used to monitor the nuclear radiation environment of the mine. (authors)
The TEACH Method: An Interactive Approach for Teaching the Needs-Based Theories Of Motivation
Moorer, Cleamon, Jr.
2014-01-01
This paper describes an interactive approach for explaining and teaching the Needs-Based Theories of Motivation. The acronym TEACH stands for Theory, Example, Application, Collaboration, and Having Discussion. This method can help business students to better understand and distinguish the implications of Maslow's Hierarchy of Needs,…
Advanced quantitative magnetic nondestructive evaluation methods - Theory and experiment
Barton, J. R.; Kusenberger, F. N.; Beissner, R. E.; Matzkanin, G. A.
1979-01-01
The paper reviews the scale of fatigue crack phenomena in relation to the size detection capabilities of nondestructive evaluation methods. An assessment of several features of fatigue in relation to the inspection of ball and roller bearings suggested the use of magnetic methods; magnetic domain phenomena, including the interaction of domains and inclusions and the influence of stress and magnetic field on domains, are discussed. Experimental results indicate that simplified calculations can be used to predict many of their features, although predictions from analytic models based on finite-element computer analysis do not agree with respect to certain features. Experimental analyses of rod-type fatigue specimens, relating magnetic measurements to crack opening displacement, crack volume, and crack depth, should provide methods for improved crack characterization in relation to fracture mechanics and life prediction.
Stabilization of the Lattice Boltzmann Method Using Information Theory
Wilson, Tyler L; Pugh, Mary; Dawson, Francis
2018-01-01
A novel lattice Boltzmann method is derived using the Principle of Minimum Cross Entropy (MinxEnt) via minimization of the Kullback-Leibler divergence (KLD). By carrying out the actual single-step Newton-Raphson minimization (MinxEnt-LBM), a more accurate and stable lattice Boltzmann method can be implemented. To demonstrate this, 1D shock tube and 2D lid-driven cavity flow simulations are carried out and compared to Single Relaxation Time LBM, Two Relaxation Time LBM, Multiple Relaxation Time...
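For context, the baseline scheme the abstract compares against — a single-relaxation-time (SRT/BGK) lattice Boltzmann update of stream-and-collide steps — can be sketched in a few lines. This is not the paper's MinxEnt-LBM (the Newton-Raphson entropy-minimizing collision is not implemented here), and the D1Q3 diffusion setup and parameter values are illustrative assumptions:

```python
import numpy as np

def srt_lbm_diffusion(rho0, tau=0.9, steps=200):
    """Single-relaxation-time D1Q3 lattice Boltzmann sketch for pure
    diffusion with periodic boundaries; equilibrium f_i^eq = w_i * rho."""
    w = np.array([1.0 / 6.0, 2.0 / 3.0, 1.0 / 6.0])  # weights for c = -1, 0, +1
    f = w[:, None] * rho0[None, :]                   # start at equilibrium
    for _ in range(steps):
        rho = f.sum(axis=0)                          # density moment
        feq = w[:, None] * rho[None, :]
        f += (feq - f) / tau                         # BGK collision step
        f[0] = np.roll(f[0], -1)                     # stream c = -1 population
        f[2] = np.roll(f[2], +1)                     # stream c = +1 population
    return f.sum(axis=0)

rho0 = np.zeros(64)
rho0[32] = 1.0                                       # point pulse of density
rho = srt_lbm_diffusion(rho0)
```

Both steps conserve mass exactly, so the total density is preserved while the pulse spreads; stability of the BGK step requires tau > 1/2.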
Method of a covering space in quantum field theory
International Nuclear Information System (INIS)
Serebryanyj, E.M.
1982-01-01
To construct the Green function of the Laplace operator in the domain M bounded by conducting surfaces, the generalized method of images is used. It is based on replacement of the domain M by its discrete bundle, and that is why the term "method of covering space" is used. Continuing one of the coordinates to imaginary values, the euclidean Green function is transformed into the causal one. This allows one to compute the vacuum stress-energy tensor of the scalar massless field if the vacuum is stable [ru
Control rod computer code IAMCOS: general theory and numerical methods
International Nuclear Information System (INIS)
West, G.
1982-11-01
IAMCOS is a computer code for the description of mechanical and thermal behavior of cylindrical control rods for fast breeders. This code version was applied, tested and modified from 1979 to 1981. In this report are described the basic model (02 version), theoretical definitions and computation methods [fr
Civic Capacity in Educational Reform Efforts: Emerging and Established Regimes in Rust Belt Cities
Mitra, Dana L.; Frick, William C.
2011-01-01
Using urban regime theory, the article examines two Rust Belt cities that tried to break the cycle of social reproduction in their communities by reforming their schools. The article contributes to the development of urban regime theory by comparing an "emerging" regime to an "established" regime. The comparison highlights the interdependent…
Hybrid Fundamental Solution Based Finite Element Method: Theory and Applications
Directory of Open Access Journals (Sweden)
Changyong Cao
2015-01-01
An overview of the development of the hybrid fundamental-solution-based finite element method (HFS-FEM) and its application to engineering problems is presented in this paper. The framework and formulations of HFS-FEM for the potential problem, plane elasticity, three-dimensional elasticity, thermoelasticity, anisotropic elasticity, and plane piezoelectricity are presented. In this method, two independent assumed fields (the intra-element field and the auxiliary frame field) are employed. The formulations for all cases are derived from the modified variational functionals and the fundamental solutions to a given problem. Generation of elemental stiffness equations from the modified variational principle is also described. Typical numerical examples are given to demonstrate the validity and performance of the HFS-FEM. Finally, a brief summary of the approach is provided and future trends in this field are identified.
Nyström Method in transport theory
Energy Technology Data Exchange (ETDEWEB)
Dalmolin, Débora; Azevedo, Fabio Souto de; Sauter, Esequia, E-mail: mtmdalmolin@gmail.com, E-mail: fabio.azevedo@ufrgs.br, E-mail: esequia.sauter@ufrgs.br [Universidade Federal do Rio Grande do Sul (UFRGS), Porto Alegre, RS (Brazil). Departamento de Matemática Pura e Aplicada
2017-07-01
We consider a system of equations modelling the steady-state transport equation in a participating medium with internal sources and semi-reflective boundaries. Based on this model, we discuss the implementation of the Nyström method to solve the integral formulation of this transport equation. The analytical problems of existence and uniqueness of the solution, as well as numerical results for these equations, have already been established in the literature. To obtain a numerical solution for the scalar flux for this problem, we write the equation as a Fredholm equation of the second kind and analyze quadrature schemes such as the Boole and Gauss-Legendre rules. Analytical and computational techniques were implemented to deal with singularities. We show the efficiency of the proposed method through some numerical tests and compare our results with those that can be found in the literature. (author)
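The core Nyström idea — replacing the integral in a Fredholm equation of the second kind by a quadrature rule and collocating at the quadrature nodes — can be sketched generically. The kernel, source term, and function names below are illustrative assumptions, not the paper's semi-reflective transport model:

```python
import numpy as np

def nystrom_fredholm2(kernel, f, lam, n):
    """Solve u(x) = f(x) + lam * int_0^1 K(x,t) u(t) dt by Nystrom
    discretization with an n-point Gauss-Legendre rule on [0, 1]."""
    # Gauss-Legendre nodes/weights on [-1, 1], mapped to [0, 1]
    x, w = np.polynomial.legendre.leggauss(n)
    t = 0.5 * (x + 1.0)
    w = 0.5 * w
    # Collocation at the nodes: (I - lam * K * diag(w)) u = f
    K = kernel(t[:, None], t[None, :])
    A = np.eye(n) - lam * K * w[None, :]
    u = np.linalg.solve(A, f(t))
    return t, u

# Separable test kernel K(x,t) = x*t with exact solution u(x) = x:
# int_0^1 x*t*t dt = x/3, so f(x) = x - x/3 = 2x/3 when lam = 1.
t, u = nystrom_fredholm2(lambda x, t: x * t, lambda x: 2.0 * x / 3.0, 1.0, 5)
```

Because the test integrand is a low-degree polynomial, the 5-point Gauss rule integrates it exactly and the nodal solution matches the exact one to machine precision; singular kernels, as the abstract notes, need dedicated treatment.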
Viscous wing theory development. Volume 1: Analysis, method and results
Chow, R. R.; Melnik, R. E.; Marconi, F.; Steinhoff, J.
1986-01-01
Viscous transonic flows at large Reynolds numbers over 3-D wings were analyzed using a zonal viscid-inviscid interaction approach. A new numerical AFZ scheme was developed in conjunction with the finite volume formulation for the solution of the inviscid full-potential equation. A special far-field asymptotic boundary condition was developed and a second-order artificial viscosity included for an improved inviscid solution methodology. The integral method was used for the laminar/turbulent boundary layer and 3-D viscous wake calculation. The interaction calculation included the coupling conditions of the source flux due to the wing surface boundary layer, the flux jump due to the viscous wake, and the wake curvature effect. A method was also devised incorporating the 2-D trailing edge strong interaction solution for the normal pressure correction near the trailing edge region. A fully automated computer program was developed to perform the proposed method with one scalar version to be used on an IBM-3081 and two vectorized versions on Cray-1 and Cyber-205 computers.
International Nuclear Information System (INIS)
Killingbeck, J.
1979-01-01
By using the methods of perturbation theory it is possible to construct simple formulae for the numerical integration of the Schroedinger equation, and also to calculate expectation values solely by means of simple eigenvalue calculations. (Auth.)
Gao, Kai; Chung, Eric T.; Gibson, Richard L.; Fu, Shubin; Efendiev, Yalchin R.
2015-01-01
The development of reliable methods for upscaling fine-scale models of elastic media has long been an important topic for rock physics and applied seismology. Several effective medium theories have been developed to provide elastic parameters
Renormalization method and singularities in the theory of Langmuir turbulence
International Nuclear Information System (INIS)
Pelletier, G.
1977-01-01
The method of renormalization, using propagators and diagrams, is recalled with enough mathematical details to be read and used by a non-specialist. The Markovian models are discussed and applied to plasma turbulence. The physical meaning of the diagrams is exhibited. In addition to the usual resonance broadening, an improved renormalization is set out, including broadening of the nonlinear resonance with a beat wave by induced scattering. This improved renormalization is emphasized. In the case of Langmuir turbulence, it removes difficulties arising at the group velocity, and enhances large-scale induced-scattering diffusion. (author)
Vibrational Spectroscopic Studies of Tenofovir Using Density Functional Theory Method
Directory of Open Access Journals (Sweden)
G. R. Ramkumaar
2013-01-01
A systematic vibrational spectroscopic assignment and analysis of tenofovir has been carried out by using FTIR and FT-Raman spectral data. The vibrational analysis was aided by electronic structure calculations with hybrid density functional methods (B3LYP/6-311++G(d,p), B3LYP/6-31G(d,p), and B3PW91/6-31G(d,p)). Molecular equilibrium geometries, electronic energies, IR intensities, and harmonic vibrational frequencies have been computed. The assignments proposed based on the experimental IR and Raman spectra have been reviewed, and a complete assignment of the observed spectra has been proposed. The UV-visible spectrum of the compound was also recorded, and electronic properties such as the HOMO and LUMO energies were determined by the time-dependent DFT (TD-DFT) method. The geometrical and thermodynamical parameters and absorption wavelengths were compared with the experimental data. NMR calculations based on B3LYP/6-311++G(d,p), B3LYP/6-31G(d,p), and B3PW91/6-31G(d,p) were also performed and used to assign the 13C and 1H NMR chemical shifts of tenofovir.
Renewing Theories, Methods and Design Practices: Challenges for Architectural Education
Directory of Open Access Journals (Sweden)
Andri Yatmo Yandi
2018-01-01
Architectural education should promote the advancement of the knowledge that is necessary as a basis for developing excellent design practice, and it needs to respond appropriately to current issues in society. To find its way into society in an appropriate way, architecture needs to be liquid. Addressing this liquidity requires an educational approach that promotes the ability to work with a range of design methods and approaches. Several principles form the basis for developing architectural education that could strengthen its position within society: to promote knowledge-based design practice; to embrace a variety of design methods and approaches; to keep a balance between design knowledge and design skills; and at the same time to aim for mastery and excellence in design. These principles should underpin the definition and development of the curriculum and the process of design learning in architectural education. The main challenge, then, lies in our willingness to be liquid in developing architectural education, which needs continuous renewal and updating to respond to the changing context of knowledge, technology and society.
Coulomb drag in the mesoscopic regime
DEFF Research Database (Denmark)
Mortensen, N. Asger; Flensberg, Karsten; Jauho, Antti-Pekka
2002-01-01
We present a theory for Coulomb drag between two mesoscopic systems which expresses the drag in terms of scattering matrices and wave functions. The formalism can be applied to both ballistic and disordered systems, and the consequences can be studied either by numerical simulations or by analytic means such as perturbation theory or random matrix theory. The physics of Coulomb drag in the mesoscopic regime is very different from Coulomb drag between extended electron systems. In the mesoscopic regime we in general find fluctuations of the drag comparable to the mean value. Examples are vanishing...
Generalized perturbation theory based on the method of cyclic characteristics
Energy Technology Data Exchange (ETDEWEB)
Assawaroongruengchot, M.; Marleau, G. [Institut de Genie Nucleaire, Departement de Genie Physique, Ecole Polytechnique de Montreal, 2900 Boul. Edouard-Montpetit, Montreal, Que. H3T 1J4 (Canada)
2006-07-01
A GPT algorithm for estimation of eigenvalues and reaction-rate ratios is developed for the neutron transport problems in 2D fuel assemblies with isotropic scattering. In our study the GPT formulation is based on the integral transport equations. The mathematical relationship between the generalized flux importance and generalized source importance functions is applied to transform the generalized flux importance transport equations into the integro-differential forms. The resulting adjoint and generalized adjoint transport equations are then solved using the method of cyclic characteristics (MOCC). Because of the presence of negative adjoint sources, a biasing/decontamination scheme is applied to make the generalized adjoint functions positive in such a way that it can be used for the multigroup re-balance technique. To demonstrate the efficiency of the algorithms, perturbative calculations are performed on a 17 x 17 PWR lattice. (authors)
The construction of optimal stated choice experiments theory and methods
Street, Deborah J
2007-01-01
The most comprehensive and applied discussion of stated choice experiment constructions available The Construction of Optimal Stated Choice Experiments provides an accessible introduction to the construction methods needed to create the best possible designs for use in modeling decision-making. Many aspects of the design of a generic stated choice experiment are independent of its area of application, and until now there has been no single book describing these constructions. This book begins with a brief description of the various areas where stated choice experiments are applicable, including marketing and health economics, transportation, environmental resource economics, and public welfare analysis. The authors focus on recent research results on the construction of optimal and near-optimal choice experiments and conclude with guidelines and insight on how to properly implement these results. Features of the book include: Construction of generic stated choice experiments for the estimation of main effects...
Multiattribute Grey Target Decision Method Based on Soft Set Theory
Directory of Open Access Journals (Sweden)
Xia Wang
2014-01-01
With respect to multiattribute decision-making problems in which the evaluation attribute sets differ and the evaluating values of the alternatives are interval grey numbers, a multiattribute grey target decision-making method for differing attribute sets is proposed. The concept of the grey soft set is defined, and its "AND" operation is specified by combining it with the intersection operation of grey numbers. The expression of the new grey soft set over the attribute sets considered by all decision makers is obtained by applying the "AND" operation of grey soft sets, and the weights of the synthetic attributes are derived. The alternatives are ranked according to the distance of each alternative from the bull's-eye under the synthetic attribute sets. A green supplier selection problem is used to demonstrate the effectiveness of the proposed model.
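The final ranking step — measuring each alternative's distance from a bull's-eye (ideal point) and ordering by that distance — can be sketched with crisp scores. This sketch assumes real-valued benefit attributes and equal weights; the paper's interval grey numbers, grey soft sets, and "AND" operation are not reproduced, and the supplier data below is invented for illustration:

```python
import numpy as np

def grey_target_rank(X):
    """Rank alternatives (rows of X) by Euclidean distance to the
    bull's-eye, taken here as the per-attribute maximum of the
    min-max-normalized benefit scores."""
    # Normalize each attribute (column) to [0, 1]
    lo, hi = X.min(axis=0), X.max(axis=0)
    Z = (X - lo) / np.where(hi > lo, hi - lo, 1.0)
    bullseye = Z.max(axis=0)                 # ideal point per attribute
    d = np.linalg.norm(Z - bullseye, axis=1)
    return np.argsort(d), d                  # smallest distance ranks first

# Three hypothetical suppliers scored on four benefit attributes
X = np.array([[0.8, 0.7, 0.9, 0.6],
              [0.6, 0.9, 0.7, 0.8],
              [0.9, 0.8, 0.8, 0.9]])
order, d = grey_target_rank(X)
```

With this data the third supplier dominates after normalization and lands closest to the bull's-eye, so it is ranked first.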
Newton’s method an updated approach of Kantorovich’s theory
Ezquerro Fernández, José Antonio
2017-01-01
This book shows the importance of studying semilocal convergence in iterative methods through Newton's method, and addresses the most important aspects of Kantorovich's theory, including related studies. Kantorovich's theory for Newton's method used techniques of functional analysis to prove the semilocal convergence of the method by means of the well-known majorant principle. To gain a deeper understanding of these techniques, the authors return to the beginning and present a detailed treatment of Kantorovich's theory for Newton's method: they include old results, for historical perspective and for comparison with new results; refine old results; and prove their most relevant results, giving alternative approaches that lead to new sufficient semilocal convergence criteria for Newton's method. The book contains many numerical examples involving nonlinear integral equations, two boundary value problems and systems of nonlinear equations related to numerous physical phenomena. The book i...
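As context for the semilocal convergence theory discussed above, the Newton iteration itself for a system F(x) = 0 is a short loop: solve the linearized system J(x) s = F(x) and update x ← x − s. The sketch below is illustrative (the example system, starting point, and tolerances are assumptions, not taken from the book; Kantorovich's criterion bounding the initial step and the Jacobian's Lipschitz constant is what would certify convergence from x0):

```python
import numpy as np

def newton(F, J, x0, tol=1e-12, maxit=50):
    """Newton's method for F(x) = 0 given the Jacobian J(x)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(maxit):
        step = np.linalg.solve(J(x), F(x))  # solve J(x) s = F(x)
        x = x - step
        if np.linalg.norm(step) < tol:      # stop on small Newton step
            break
    return x

# Illustrative system: x^2 + y^2 = 4 and x*y = 1
F = lambda v: np.array([v[0]**2 + v[1]**2 - 4.0, v[0] * v[1] - 1.0])
J = lambda v: np.array([[2.0 * v[0], 2.0 * v[1]],
                        [v[1],       v[0]]])
root = newton(F, J, [2.0, 0.5])
```

Started close enough to a root, the step norms shrink quadratically, which is the behaviour the majorant principle captures with a scalar comparison sequence.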
The spherical harmonics method, 1 (general development of the theory)
International Nuclear Information System (INIS)
Mark, C.
1957-02-01
A method of obtaining approximate solutions of the transport equation is presented in a form applicable in principle to any geometry. The approximation will give good results in cases where the angular distribution is not very anisotropic. The basis of the approximation is to expand the density per unit solid angle Ψ(r, Ω) in spherical harmonic tensors formed from Ω, the unit vector in the direction of velocity, and to break off the expansion. A differential equation whose degree increases with the order of the approximation is obtained for the total density Ψ^(0)(r). In this equation the numbers ν_i depend on the order of the approximation and on the value of the parameter a of the medium, but not at all on the geometry. When the equation for the total density is an ordinary equation, we simulate the physical condition of continuity of Ψ(r, Ω) at a boundary in a multi-medium problem by requiring that the spherical harmonic moments of Ψ(r, Ω) which we retain be continuous; and this determines the constants in the solution for Ψ^(0)(r). The form of the solution for the total density and the necessary moments in an approximation of general order is given explicitly for plane and spherical symmetry; and for cylindrical symmetry the solution is given for two low-order approximations. In a later report (CRT-338, Revised) the application of the method to several problems involving plane and spherical symmetry will be discussed in detail and the results of a number of examples already worked will also be given. (author)
TYPES OF POLITICAL REGIMES IN THE IRKUTSK REGION
Directory of Open Access Journals (Sweden)
И В Орлова
2017-12-01
The authors consider contemporary western and Russian classifications of regional political regimes and their applicability to Russia. Based on an analysis of political theories, the authors chose the traditional typology of regional political regimes focusing on the minimalist interpretation of democracy (electoral competition) and the methods for identifying regional scenarios introduced by V.Ya. Gelman. The authors study the case of the Irkutsk Region as a region with conflicting elites, in which several regional heads were replaced within a short period. Based on contemporary political history, the authors analyze the regional political regime using the following criteria: democracy/autocracy, consolidation/oligopoly, compromise/conflict relations within the ruling elite. The results of the analysis prove the existence of checks and balances in the political system of the Irkutsk Region. Such a system restrains strong politicians' attempts to monopolize political power in the region. When any political player gains too much influence, other centers of power unite against him and together return the situation to the status quo. The political regime of the Irkutsk Region ensures a relatively high level of political competition; at the same time it is part of the uncompetitive political regime of the Russian Federation, and is therefore a 'hybrid democracy'. The authors' analysis of intra-elite relations in the region revealed a high predisposition to conflicts, with the dominant scenario being a 'war of all against all'.
Eastwood, John G; Kemp, Lynn A; Jalaludin, Bin B
2016-01-01
We have recently described a protocol for a study that aims to build a theory of neighbourhood context and postnatal depression. That protocol proposed a critical realist Explanatory Theory Building Method comprising (1) an emergent phase, (2) a construction phase, and (3) a confirmatory phase. A concurrent triangulated mixed-method multilevel cross-sectional study design was described. The protocol also described in detail the Theory Construction Phase, which will be presented here. The Theory Construction Phase includes: (1) defining stratified levels; (2) analytic resolution; (3) abductive reasoning; (4) comparative analysis (triangulation); (5) retroduction; (6) postulate and proposition development; (7) comparison and assessment of theories; and (8) conceptual framework and model development. The stratified levels of analysis in this study were predominantly social and psychological. The abductive analysis used the theoretical frames of Stress Process, Social Isolation, Social Exclusion, Social Services, Social Capital, Acculturation Theory and global-economic level mechanisms. Realist propositions are presented for each analysis of triangulated data. Inference to the best explanation is used to assess and compare theories. A conceptual framework of maternal depression, stress and context is presented that includes examples of mechanisms at psychological, social, cultural and global-economic levels. Stress was identified as a necessary mechanism that has the tendency to cause several outcomes, including depression, anxiety, and health-harming behaviours. The conceptual framework subsequently included conditional mechanisms identified through retroduction, including the stressors of isolation and expectations and the buffers of social support and trust. The meta-theory of critical realism is used here to generate and construct social epidemiological theory using stratified ontology and both abductive and retroductive analysis. The findings will be applied to the
Haataja, Mikko; Gránásy, László; Löwen, Hartmut
2010-08-01
Herein we provide a brief summary of the background, events and results/outcome of the CECAM workshop 'Classical density functional theory methods in soft and hard matter', held in Lausanne between October 21 and October 23, 2009, which brought together two largely separately working communities, both of whom employ classical density functional techniques: the soft-matter community and the theoretical materials science community with interests in phase transformations and evolving microstructures in engineering materials. After outlining the motivation for the workshop, we first provide a brief overview of the articles submitted by the invited speakers for this special issue of Journal of Physics: Condensed Matter, followed by a collection of outstanding problems identified and discussed during the workshop. 1. Introduction Classical density functional theory (DFT) is a theoretical framework which has been extensively employed in the past to study inhomogeneous complex fluids (CF) [1-4] and freezing transitions for simple fluids, amongst other things. Furthermore, classical DFT has been extended to include dynamics of the density field, thereby opening a new avenue to study phase transformation kinetics in colloidal systems via dynamical DFT (DDFT) [5]. While DDFT is highly accurate, the computations are numerically rather demanding, and cannot easily access the mesoscopic temporal and spatial scales where diffusional instabilities lead to complex solidification morphologies. Adaptation of more efficient numerical methods would extend the domain of DDFT towards this regime of particular interest to materials scientists. In recent years, DFT has re-emerged in the form of the so-called 'phase-field crystal' (PFC) method for solid-state systems [6, 7], and it has been successfully employed to study a broad variety of interesting materials phenomena in both atomic and colloidal systems, including elastic and plastic deformations, grain growth, thin film growth, solid
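For the simplest free-energy functional (an ideal gas in an external potential, with kT = 1), the DDFT evolution equation reduces to drift-diffusion dynamics, which can be time-stepped in a few lines. This is a minimal sketch under those stated assumptions, not a workshop result:

```python
import numpy as np

# Minimal 1D dynamical-DFT sketch for an ideal gas in an external potential.
# With F[rho] = integral of rho*(ln(rho) - 1) + rho*V (kT = 1), DDFT gives
#   drho/dt = D * d/dx ( drho/dx + rho * dV/dx ),
# discretized here in conservative (flux) form on a periodic grid.
n, D, dt = 64, 1.0, 5e-5
x = np.arange(n) / n
dx = 1.0 / n
V = np.cos(2 * np.pi * x)          # external potential, minimum at x = 0.5
rho = np.ones(n)                    # uniform initial density

for _ in range(4000):
    rho_r = np.roll(rho, -1)       # density at cell i+1
    V_r = np.roll(V, -1)
    # flux evaluated at the i+1/2 cell faces
    j = -D * ((rho_r - rho) / dx + 0.5 * (rho_r + rho) * (V_r - V) / dx)
    rho = rho - dt * (j - np.roll(j, 1)) / dx

print(rho.sum() * dx)              # total mass, conserved by construction
print(x[np.argmax(rho)])           # density accumulates at the potential minimum
```

The flux form conserves mass exactly, and the stationary profile approaches the Boltzmann distribution ρ ∝ exp(−V); realistic DDFT replaces the ideal-gas functional with an excess free energy, which is where the numerical cost mentioned above comes from.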
Experiences with the quadratic Korringa-Kohn-Rostoker band theory method
International Nuclear Information System (INIS)
Faulkner, J.S.
1992-01-01
This paper reports on the Quadratic Korringa-Kohn-Rostoker method, which is a fast band theory method in the sense that all eigenvalues for a given k are obtained from one matrix diagonalization, but which differs from other fast band theory methods in that it is derived entirely from multiple-scattering theory, without the introduction of a Rayleigh-Ritz variational step. In this theory, the atomic potentials are shifted by Δσ(r), with Δ equal to E-E_0 and σ(r) equal to one when r is inside the Wigner-Seitz cell and zero otherwise, and it turns out that the matrix of coefficients is an entire function of Δ. This matrix can be terminated to give a linear KKR, quadratic KKR, cubic KKR, ..., or not terminated at all to give the pivoted multiple-scattering equations. Full potentials are no harder to deal with than potentials with a shape approximation.
Theories and Diagnostic Methods of Land Use Conflicts
Institute of Scientific and Technical Information of China (English)
Yongfang; YANG; Lianqi; ZHU
2013-01-01
With social and economic development, land resources are becoming increasingly scarce, and land use conflicts are getting more frequent, deeper, more diversified and more severe. Moreover, the factors that induce land use conflicts are increasingly complicated. Therefore, the key to solving many difficult problems in regional sustainable land use lies in the research of land use conflicts, the scientific evaluation of the intensity of regional land use conflicts, and the further revelation of the external forms as well as the intrinsic mechanisms of land use conflicts. Based on a review of both domestic and foreign literature, this paper completes the theoretical framework and contents of land use conflict research, establishes diagnostic models and methods for the intensity of land use conflicts, and proposes key areas for future research. The purpose is to promote the evolution of the spatial structure of China's land resources in a positive direction and to achieve integrated and coordinated management of land use by improving the spatial allocation efficiency of land factors and buffering the pressure on land resources.
Is Hidden Crossings Theory a New MOCC Method?
Krstić, Predrag; Schultz, David
1998-05-01
We find a unitary transformation of the scaled adiabatic Hamiltonian of a two-center, one-electron collision system which yields a new representation for the matrix elements of nonadiabatic radial coupling, valid for low-to-intermediate collision velocities. These are given in analytic form once the topology of the branch points of the adiabatic Hamiltonian in the plane of complex internuclear distance R is known. The matrix elements do not depend on the origin of electronic coordinates and properly vanish at large internuclear distances. The role of the rotational couplings in the new representation is also discussed. The approach is appropriately extended and compared with the PSS treatment in the fully quantal description of the collision. We apply the new radial and rotational matrix elements in the standard Molecular Orbital Close Coupling (MOCC) approach to describe excitation and ionization in collisions of antiprotons with He^+ and of alpha particles with hydrogen (P.S. Krstić et al., J. Phys. B 31, in press (1998)). The results are compared with those obtained from the standard MOCC method and from direct solutions of the Schrödinger equation on a lattice (LTDSE) (D.R. Schultz et al., Phys. Rev. A 56, 3710 (1997)).
Theories and calculation methods for regional objective ET
Institute of Scientific and Technical Information of China (English)
QIN DaYong; LO JinYan; LIU JiaHong; WANG MingNa
2009-01-01
The regional objective ET (evapotranspiration) is a new concept in water resources research, which refers to the total amount of water that can be exhausted from a region in the form of vapor per year. Objective-ET based water resources management allocates water to different regions in terms of ET and controls the water exhausted from a region to meet the objective ET. The regional objective ET must be adapted to fit the region's local available water resources. By improving water utilization efficiency and reducing the unrecoverable water in the social water cycle, water is saved so that water-related production is maintained or even increased under the same water consumption conditions. Regional water balance is realized by rationally deploying the available water among different industries, adjusting industrial structures, and adopting new water-saving technologies, thereby meeting the requirements of groundwater conservation and agricultural income stability and avoiding environmental damage. Furthermore, water competition among various departments and industries (including environmental and ecological water use) may be avoided. This paper proposes an innovative definition of objective ET, together with its principles and sub-index systems. In addition, a computational method for regional objective ET is developed by combining a distributed hydrological model with a soil moisture model.
On the Translation Methods and Theory of International Advertising
Institute of Scientific and Technical Information of China (English)
Yang Xuemei
2012-01-01
Advertisement, as a way to promote products, always plays its role on a special stage. A successful advertisement helps the manufacturer achieve large sales, while an unsuccessful or even bad one does the opposite. Advertising is an activity requiring intelligence, patience, art and diligence. With globalization and China's entry into the WTO, more and more Chinese products get the opportunity to enter the world market. In this battle without gunpowder smoke, the most powerful weapon is the commercial advertisement; therefore, advertising becomes ever more important. At the same time, the translation of advertising plays its role in internationalizing advertisements for the world, serving as a bridge across different countries and languages. However, due to the differences among cultures, the problems arise of how to be a good advertising translator and how to make an excellent translation of an advertisement. This thesis analyses the criteria and strategies of advertising translation after discussing the types, structure and stylistic features of advertisements, and states that there is no single established method for advertising translation: what we should do is be flexible in dealing with the various advertisements we meet.
Classical tokamak transport theory
International Nuclear Information System (INIS)
Nocentini, Aldo
1982-01-01
A qualitative treatment of the classical transport theory of a magnetically confined, toroidal, axisymmetric, two-species plasma is presented. The 'weakly collisional' ('banana' and 'plateau') and 'collision dominated' ('Pfirsch-Schlueter' and 'highly collisional') regimes, as well as the Ware effect, are discussed. The method used to evaluate the diffusion coefficients of particles and heat in the weakly collisional regime is based on stochastic arguments, which require an analysis of the characteristic collision frequencies and lengths for particles moving in a tokamak-like magnetic field. The same method is used to evaluate the Ware effect. In the collision dominated regime, on the other hand, the particle and heat fluxes across the magnetic field lines are dominated by macroscopic effects, so that, although it is possible to present them as diffusion (in fact, the fluxes turn out to be proportional to the density and temperature gradients), a macroscopic treatment is more appropriate. Hence, fluid equations are used to investigate the collision dominated regime, to which particular attention is devoted, since it has been shown relatively recently to be more complicated than the usual Pfirsch-Schlueter regime. The whole analysis presented here is qualitative, aiming to point out the relevant physical mechanisms involved in the various regimes rather than to develop a rigorous mathematical derivation of the diffusion coefficients, for which appropriate references are given. (author)
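The stochastic argument invoked above is, at its simplest, the classic random-walk estimate D ~ (step length)² × (collision frequency). A generic numerical illustration of that estimate (my own sketch, with arbitrary illustrative parameters, not a tokamak calculation):

```python
import numpy as np

# Random-walk illustration of the stochastic estimate behind classical
# transport: a particle takes a step of length lam at each collision
# (frequency nu), giving a 1D diffusion coefficient
#   D ~ lam**2 * nu / 2,   from <x**2> = 2*D*t.
rng = np.random.default_rng(0)
lam, nu = 1.0, 1.0                  # step length and collision frequency (arbitrary units)
n_walkers, n_steps = 20000, 1000
t = n_steps / nu                    # elapsed time after n_steps collisions

steps = rng.choice([-lam, lam], size=(n_walkers, n_steps))
x = steps.sum(axis=1)               # final displacement of each walker

D_est = np.mean(x**2) / (2 * t)     # measured diffusion coefficient
D_theory = lam**2 * nu / 2
print(D_est, D_theory)              # the two agree to within sampling error
```

In the tokamak regimes above, the same estimate is applied with the appropriate step (Larmor radius or banana width) and effective collision frequency for each regime.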
Unrenormalizable theories can be predictive
Kubo, J
2003-01-01
Unrenormalizable theories contain infinitely many free parameters. Considering these theories in terms of the Wilsonian renormalization group (RG), we suggest a method for removing this large ambiguity. Our basic assumption is the existence of a maximal ultraviolet cutoff in a cutoff theory, and we require that the theory be so fine tuned as to reach the maximal cutoff. The theory so obtained behaves as a local continuum theory to the shortest distance. In concrete examples of the scalar theory we find that at least in a certain approximation to the Wilsonian RG, this requirement enables us to make unique predictions in the infrared regime in terms of a finite number of independent parameters. Therefore, this method might provide a way for calculating quantum corrections in a low-energy effective theory of quantum gravity. (orig.)
Characterizing multistationarity regimes in biochemical reaction networks.
Directory of Open Access Journals (Sweden)
Irene Otero-Muras
Switch-like responses appear as common strategies in the regulation of cellular systems. Here we present a method to characterize bistable regimes in biochemical reaction networks that can be of use in both the direct and reverse engineering of biological switches. In the design of a synthetic biological switch, it is important to study the capability for bistability of the underlying biochemical network structure. Chemical Reaction Network Theory (CRNT) may help at this level to decide whether a given network has the capacity for multiple positive equilibria, based on its structural properties. However, in order to build a working switch, we also need to ensure that the bistability property is robust, by studying the conditions leading to the existence of two different steady states. In the reverse engineering of biological switches, knowledge collected about the bistable regimes of the underlying potential model structures can contribute at the model identification stage to a drastic reduction of the feasible region in the parameter space of search. In this work, we use and extend previous results of CRNT, aiming not only to discriminate whether a biochemical reaction network can exhibit multiple steady states, but also to determine the regions within the whole parameter space capable of producing multistationarity. To that purpose we present and justify a condition on the parameters of biochemical networks for the appearance of multistationarity, and propose an efficient and reliable computational method to check its satisfaction throughout the parameter space.
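As a toy illustration of what a bistable regime looks like (this is a generic positive-feedback motif, not the authors' CRNT-based method), consider a single species with dx/dt = k0 + k1·x²/(1+x²) − x; its steady states are the roots of a cubic, and their stability follows from the sign of the rate derivative:

```python
import numpy as np

# Toy bistable motif (basal production plus Hill-type positive feedback,
# linear degradation), used only to illustrate multistationarity:
#   dx/dt = k0 + k1 * x**2 / (1 + x**2) - x
# Steady states solve the cubic  x**3 - (k0 + k1)*x**2 + x - k0 = 0.
k0, k1 = 0.05, 2.0                  # illustrative parameter values

roots = np.roots([1.0, -(k0 + k1), 1.0, -k0])
steady = sorted(r.real for r in roots if abs(r.imag) < 1e-9 and r.real > 0)

def stable(x):
    # stability from the sign of d(rhs)/dx at the fixed point
    drhs = k1 * 2 * x / (1 + x**2) ** 2 - 1.0
    return drhs < 0

print(steady)                        # three positive fixed points
print([stable(s) for s in steady])   # low and high states stable, middle unstable
```

Sweeping k0 or k1 and repeating this root count traces out exactly the kind of multistationarity region in parameter space that the paper's method characterizes for general networks.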
International Nuclear Information System (INIS)
Santos, Adimir dos; Borges, A.A.
2000-01-01
A new method for the calculation of sensitivity coefficients is developed. The new method is a combination of two methodologies used for calculating these coefficients: the differential method and the generalized perturbation theory method. The proposed method utilizes as integral parameter the average flux in an arbitrary region of the system. Thus, the sensitivity coefficient contains only the component corresponding to the neutron flux. To obtain the new sensitivity coefficient, the derivatives of the integral parameter, φ(ξ), with respect to σ are calculated using the perturbation method, and the functional derivatives of this generic integral parameter with respect to σ and φ are calculated using the differential method. The new method merges the advantages of the differential and generalized perturbation theory methods and eliminates their disadvantages. (author)
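For orientation, a sensitivity coefficient in this context is S = (σ/R)·dR/dσ for a response R. A hypothetical one-group check (my own illustration, not the authors' combined method): for an infinite medium the flux is φ = source/σ_a, so the relative sensitivity of φ to σ_a is exactly −1, which a finite-difference evaluation recovers:

```python
# Hypothetical one-group illustration of a relative sensitivity coefficient
# S = (sigma/R) * dR/dsigma. For an infinite medium, phi = source/sigma_a,
# so the exact sensitivity of phi to sigma_a is -1; we recover it by a
# central finite difference. Both parameter values are illustrative.
source = 1.0e6            # assumed source strength
sigma_a = 0.02            # assumed absorption cross section

def phi(sig):
    return source / sig

h = 1e-6 * sigma_a
dphi = (phi(sigma_a + h) - phi(sigma_a - h)) / (2 * h)
S = (sigma_a / phi(sigma_a)) * dphi

print(S)                  # recovers the analytic value of -1
```

Perturbation-theory methods such as the one described above compute this derivative without re-solving the transport problem for each perturbed cross section, which is their advantage over brute-force finite differences.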
González-Lezana, Tomás; Honvault, Pascal; Scribano, Yohann
2013-08-07
The D^+ + H2(v = 0, j = 0, 1) → HD + H^+ reaction has been investigated in the low energy regime by means of a statistical quantum mechanical (SQM) method. Reaction probabilities and integral cross sections (ICSs) between collisional energies of 10^-4 eV and 0.1 eV have been calculated and compared with previously reported results of a time independent quantum mechanical (TIQM) approach. The TIQM results exhibit a dense profile with numerous narrow resonances down to E_c ~ 10^-2 eV, and for the case of H2(v = 0, j = 0) a prominent peak is found at ~2.5 × 10^-4 eV. The analysis at the state-to-state level reveals that this feature originates in those processes which yield the formation of rotationally excited HD(v' = 0, j' > 0). The statistical predictions reproduce reasonably well the overall behaviour of the TIQM ICSs over the higher energy range (E_c ≥ 10^-3 eV). Thermal rate constants are in qualitative agreement over the whole range of temperatures investigated in this work, 10-100 K, although the SQM values remain above the TIQM results for both initial H2 rotational states, j = 0 and 1. The enlargement of the asymptotic region for the statistical approach is crucial for a proper description at low energies. In particular, we find that the SQM method leads to rate coefficients in terms of the energy in perfect agreement with previously reported measurements if the maximum distance at which the calculation is performed is increased noticeably with respect to the value employed to reproduce the TIQM results.
Spectral methods in chemistry and physics applications to kinetic theory and quantum mechanics
Shizgal, Bernard
2015-01-01
This book is a pedagogical presentation of the application of spectral and pseudospectral methods to kinetic theory and quantum mechanics. There are additional applications to astrophysics, engineering, biology and many other fields. The main objective of this book is to provide the basic concepts to enable the use of spectral and pseudospectral methods to solve problems in diverse fields of interest and for a wide audience. While spectral methods are generally based on Fourier series or Chebyshev polynomials, non-classical polynomials and associated quadratures are used for many of the applications presented in the book. Fourier series methods are summarized with a discussion of the resolution of the Gibbs phenomenon. Classical and non-classical quadratures are used for the evaluation of integrals in reaction dynamics including nuclear fusion, radial integrals in density functional theory, in elastic scattering theory and other applications. The subject matter includes the calculation of transport coefficient...
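A small example of the quadrature evaluation described above (my own illustration, not from the book): Gauss-Hermite quadrature integrates against the weight e^{−x²} and is exact for polynomials up to degree 2n−1.

```python
import numpy as np

# Gauss-Hermite quadrature: nodes and weights for the weight exp(-x**2),
# exact for polynomial integrands of degree <= 2n - 1. Here we check
#   integral over R of exp(-x**2) * x**2 dx  =  sqrt(pi) / 2.
nodes, weights = np.polynomial.hermite.hermgauss(5)
integral = np.sum(weights * nodes**2)

print(integral, np.sqrt(np.pi) / 2)   # the two values agree
```

Non-classical quadratures, as used in the book, follow the same pattern but with nodes and weights generated for a problem-specific weight function rather than a classical one.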
International Nuclear Information System (INIS)
Lawrence, R.D.; Dorning, J.J.
1980-01-01
A coarse-mesh discrete nodal integral transport theory method has been developed for the efficient numerical solution of multidimensional transport problems of interest in reactor physics and shielding applications. The method, which is the discrete transport theory analogue and logical extension of the nodal Green's function method previously developed for multidimensional neutron diffusion problems, utilizes the same transverse integration procedure to reduce the multidimensional equations to coupled one-dimensional equations. This is followed by the conversion of the differential equations to local, one-dimensional, in-node integral equations by integrating back along neutron flight paths. One-dimensional and two-dimensional transport theory test problems have been systematically studied to verify the superior computational efficiency of the new method
Theory building trends in international management research: an archival review of preferred methods
Directory of Open Access Journals (Sweden)
Drikus Kriek
2011-08-01
A number of distinguished scholars believe that for theory development to occur within a field, qualitative research must precede quantitative research in order for the field to progress toward maturity. The purpose of this study was to investigate the international management literature from 1991 to 2007 to ascertain current levels of use of qualitative, quantitative, conceptual and joint (quantitative and qualitative) research methods in the field. Results indicate scholars employ quantitative methods more than qualitative methods. The implications of these findings for future theory development and the generation of context-relevant international management knowledge are discussed.
The Padé approximant method for solving problems in plasma kinetic theory
International Nuclear Information System (INIS)
Jasperse, J.R.; Basu, B.
1992-01-01
The method of Padé approximants has been a powerful tool in solving for the time-dependent propagator (Green function) in model quantum field theories. We have developed a modified Padé method which we feel has promise for solving linearized collisional and weakly nonlinear problems in plasma kinetic theory. In order to illustrate the general applicability of the method, in this paper we discuss Padé solutions for the linearized collisional propagator and the collisional dielectric function for a model collisional problem. (author) 3 refs., 2 tabs
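A Padé approximant replaces a truncated Taylor series with a rational function that matches the same coefficients. A minimal sketch constructing the [2/2] approximant of e^x from its Taylor coefficients (a generic illustration, not the authors' modified method):

```python
import numpy as np

# Construct the [2/2] Pade approximant of exp(x) from its Taylor
# coefficients c_k = 1/k!. With q0 = 1, the denominator coefficients
# solve the linear system  sum_j q_j * c_{k-j} = 0  for k = 3, 4,
# and the numerator follows from  p_k = sum_j q_j * c_{k-j},  k <= 2.
c = [1.0, 1.0, 1.0 / 2, 1.0 / 6, 1.0 / 24]   # Taylor coefficients of e^x

A = np.array([[c[2], c[1]],
              [c[3], c[2]]])
rhs = -np.array([c[3], c[4]])
q1, q2 = np.linalg.solve(A, rhs)
q = [1.0, q1, q2]
p = [c[0],
     c[1] + q1 * c[0],
     c[2] + q1 * c[1] + q2 * c[0]]

x = 1.0
pade = np.polyval(p[::-1], x) / np.polyval(q[::-1], x)
taylor = sum(ck * x**k for k, ck in enumerate(c))
print(pade, taylor, np.exp(1))   # the Pade value is the closer of the two
```

The rational form is what makes Padé methods attractive for propagators and dielectric functions: unlike a polynomial truncation, it can represent poles, which dominate the analytic structure of those quantities.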
Inverse problem theory methods for data fitting and model parameter estimation
Tarantola, A
2002-01-01
Inverse Problem Theory is written for physicists, geophysicists and all scientists facing the problem of quantitative interpretation of experimental data. Although it contains a lot of mathematics, it is not intended as a mathematical book, but rather tries to explain how a method of acquisition of information can be applied to the actual world. The book provides a comprehensive, up-to-date description of the methods to be used for fitting experimental data, or to estimate model parameters, and to unify these methods into the Inverse Problem Theory. The first part of the book deals wi
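The book's core machinery for the linear Gaussian case can be sketched: with forward model d = Gm, data covariance C_d and prior (m0, C_m), the posterior mean is m_hat = m0 + (GᵀC_d⁻¹G + C_m⁻¹)⁻¹ GᵀC_d⁻¹(d − Gm0). A hedged example with synthetic straight-line data (my own construction):

```python
import numpy as np

# Linear Gaussian inverse problem: estimate model parameters m from
# data d = G m using the classical least-squares posterior mean
#   m_hat = m0 + (G^T Cd^-1 G + Cm^-1)^-1 G^T Cd^-1 (d - G m0).
# Synthetic straight-line data with a small deterministic perturbation.
x = np.linspace(0.0, 1.0, 20)
G = np.column_stack([x, np.ones_like(x)])     # forward model: d = a*x + b
m_true = np.array([2.0, -1.0])
d = G @ m_true + 0.01 * np.sin(17 * x)        # "noisy" observations

Cd_inv = np.eye(len(d)) / 0.01**2             # data covariance (sigma_d = 0.01)
Cm_inv = np.eye(2) / 10.0**2                  # broad prior (sigma_m = 10)
m0 = np.zeros(2)                              # prior model

H = G.T @ Cd_inv @ G + Cm_inv
m_hat = m0 + np.linalg.solve(H, G.T @ Cd_inv @ (d - G @ m0))

print(m_hat)    # close to the true parameters [2, -1]
```

Tightening the prior covariance pulls the estimate toward m0, while shrinking the data covariance pulls it toward the ordinary least-squares solution; that trade-off is the essence of the probabilistic formulation.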
New numerical method for iterative or perturbative solution of quantum field theory
International Nuclear Information System (INIS)
Hahn, S.C.; Guralnik, G.S.
1999-01-01
A new computational idea for continuum quantum field theories is outlined. This approach is based on the lattice source Galerkin methods developed by Garcia, Guralnik and Lawson. The method has many promising features, including treating fermions on a relatively symmetric footing with bosons. As a spin-off of the technology developed for 'exact' solutions, the numerical methods used have a special-case application to perturbation theory. We are in the process of developing an entirely numerical approach to evaluating graphs to high perturbative order. (authors)
Classification of computer forensics methods using a graph theory approach
Directory of Open Access Journals (Sweden)
Anna Ravilyevna Smolina
2016-06-01
A classification of computer forensics methods based on a graph theory approach is proposed. Using this classification, the search for an appropriate forensic method can be accelerated and simplified, and the process can be automated.
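The idea of automating the lookup by encoding the classification as a graph can be sketched generically. The category and method names below are hypothetical placeholders, not the authors' taxonomy:

```python
from collections import deque

# Hypothetical classification of forensic methods as a directed graph:
# inner nodes are categories, leaves are concrete methods. A breadth-first
# search collects every method reachable from a chosen category,
# automating the lookup the abstract describes.
graph = {
    "forensics": ["disk_analysis", "network_analysis"],
    "disk_analysis": ["file_carving", "timeline_reconstruction"],
    "network_analysis": ["packet_capture_review"],
    # leaves (concrete methods) have no outgoing edges
}

def methods_under(category):
    found, queue = [], deque([category])
    while queue:
        node = queue.popleft()
        children = graph.get(node, [])
        if not children and node != category:
            found.append(node)      # a leaf, i.e. a concrete method
        queue.extend(children)
    return found

print(methods_under("disk_analysis"))   # ['file_carving', 'timeline_reconstruction']
```

Any richer classification (multiple parents, attribute-labelled edges) fits the same pattern, which is what makes the graph formulation amenable to automation.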
Carter, Shani D.
2008-01-01
The paper proposes a theory that trainees have varying ability levels across different factors of cognitive ability, and that these abilities are used in varying levels by different training methods. The paper reviews characteristics of training methods and matches these characteristics to different factors of cognitive ability. The paper proposes…
Phenomenography and Grounded Theory as Research Methods in Computing Education Research Field
Kinnunen, Paivi; Simon, Beth
2012-01-01
This paper discusses two qualitative research methods, phenomenography and grounded theory. We introduce both methods' data collection and analysis processes and the type of results you may get at the end, using examples from computing education research. We highlight some of the similarities and differences between the aim, data collection and…
Analytical methods applied to the study of lattice gauge and spin theories
International Nuclear Information System (INIS)
Moreo, Adriana.
1985-01-01
A study of the interactions between quarks and gluons is presented. Certain difficulties of quantum chromodynamics in explaining the behaviour of quarks gave origin to the technique of lattice gauge theories. First, the phase diagrams of the discrete space-time theories are studied. The analysis of the phase diagrams is made by numerical and analytical methods. The following items were investigated and studied: a) a variational technique was proposed to obtain very accurate values for the ground and first excited state energies of the analyzed theory; b) a mean-field-like approximation for lattice spin models in the link formulation, which is a generalization of the mean-plaquette technique, was developed; c) a new method to study lattice gauge theories at finite temperature was proposed, and for the first time a non-abelian model was studied with analytical methods; d) an abelian lattice gauge theory with fermionic matter at the strong coupling limit was analyzed. Interesting results applicable to non-abelian gauge theories were obtained. (M.E.L.)
Heshi, Kamal Nosrati; Nasrabadi, Hassanali Bakhtiyar
2016-01-01
The present paper attempts to identify principles and methods of education based on Wittgenstein's picture theory of language. This qualitative research utilized an inferential analytical approach to review the related literature and extract a set of principles and methods from his picture theory. Findings revealed that Wittgenstein…
Rodriguez, Ernesto; Kim, Yunjin; Durden, Stephen L.
1992-01-01
A numerical evaluation is presented of the regimes of validity of various rough-surface scattering theories, compared against numerical results obtained with the method of moments. The contribution of each theory is considered up to second order in the perturbation expansion for the surface current. For both vertical and horizontal polarizations, the unified perturbation method provides the best results among the theories considered.
The Development of the TPR-DB as Grounded Theory Method
DEFF Research Database (Denmark)
Carl, Michael; Schaeffer, Moritz
2018-01-01
…on quantitative assessment of well-defined research questions on cognitive processes in human translation production, the integration of the data into the TPR-DB allowed for broader qualitative and exploratory research, which has led to new codes, categories and research themes. In a constant effort to develop and refine the emerging concepts and categories and to validate the developing theories, the TPR-DB has been extended with further translation studies in different languages and translation modes. In this respect, it shares many features with Grounded Theory Method, a method discovered in 1967 and since used in qualitative research in social science and many other research areas. We analyze the TPR-DB development as a Grounded Theory Method.
International Nuclear Information System (INIS)
Borges, Antonio Andrade
1998-01-01
A new method for the calculation of sensitivity coefficients is developed. The new method combines two methodologies used for calculating these coefficients: the differential method and the generalized perturbation theory method. The method uses as its integral parameter the average flux in an arbitrary region of the system; thus the sensitivity coefficient contains only the component corresponding to the neutron flux. To obtain the new sensitivity coefficient, the derivatives of the integral parameter, Φ, with respect to σ are calculated using the perturbation method, and the functional derivatives of this generic integral parameter with respect to σ and Φ are calculated using the differential method. (author)
A new theory and method of preventing harmful waste landfill from pollution to groundwater
International Nuclear Information System (INIS)
Liu Changli; Zhang Yun; Song Shuhong; Hou Hongbing
2006-01-01
Conventional soil-liner theory for waste landfills is limited; the theory and its calculational methods must be updated so that the cost of waste landfills can be reduced substantially, which is important for sustainable economic and environmental development. Translating the principle of 'excluding infiltration to groundwater' into 'isolating the waste while allowing water into groundwater' is an innovation in the theory of controlling groundwater pollution from waste landfills. Clayey soil not only prevents seepage but can also retain waste constituents. If its filtration capacity is used adequately, as investigated through laboratory experiments and calculation, new testing and calculation techniques for liner parameters can be developed. This paper gives an example of calculating liner parameters such as filtration capability and the adequate thickness of an effective liner, and applies this theory and method to planning a landfill site in the Beijing plain. (authors)
Wakabayashi, Hideaki; Asai, Masamitsu; Matsumoto, Keiji; Yamakita, Jiro
2016-11-01
Nakayama's shadow theory first discussed the diffraction by a perfectly conducting grating in a planar mounting. In the theory, a new formulation by use of a scattering factor was proposed. This paper focuses on the middle regions of a multilayered dielectric grating placed in conical mounting. Applying the shadow theory to the matrix eigenvalues method, we compose new transformation and improved propagation matrices of the shadow theory for conical mounting. Using these matrices and scattering factors, being the basic quantity of diffraction amplitudes, we formulate a new description of three-dimensional scattering fields which is available even for cases where the eigenvalues are degenerate in any region. Some numerical examples are given for cases where the eigenvalues are degenerate in the middle regions.
Theory and application of deterministic multidimensional pointwise energy lattice physics methods
International Nuclear Information System (INIS)
Zerkle, M.L.
1999-01-01
The theory and application of deterministic, multidimensional, pointwise energy lattice physics methods are discussed. These methods may be used to solve the neutron transport equation in multidimensional geometries using near-continuous energy detail to calculate equivalent few-group diffusion theory constants that rigorously account for spatial and spectral self-shielding effects. A dual energy resolution slowing down algorithm is described which reduces the computer memory and disk storage requirements for the slowing down calculation. Results are presented for a 2D BWR pin cell depletion benchmark problem.
Geospatial Big Data Handling Theory and Methods: A Review and Research Challenges
DEFF Research Database (Denmark)
Li, Songnian; Dragicevic, Suzana; Anton, François
2016-01-01
Big data has now become a strong focus of global interest that is increasingly attracting the attention of academia, industry, government and other organizations. Big data can be situated in the disciplinary area of traditional geospatial data handling theory and methods. The increasing volume … for Photogrammetry and Remote Sensing (ISPRS) Technical Commission II (TC II) revisits the existing geospatial data handling methods and theories to determine if they are still capable of handling emerging geospatial big data. Further, the paper synthesises problems, major issues and challenges with current … developments, as well as recommending what needs to be developed further in the near future.
Sustainable urban regime adjustments
DEFF Research Database (Denmark)
Quitzau, Maj-Britt; Jensen, Jens Stissing; Elle, Morten
2013-01-01
The endogenous agency that urban governments increasingly portray by making conscious and planned efforts to adjust the regimes they operate within is currently not well captured in transition studies. There is a need to acknowledge the ambiguity of regime enactment at the urban scale. This direc...
Indian Academy of Sciences (India)
Flux scaling: Ultimate regime. Using the Nusselt number and the mixing-length scales, Nusselt number and Reynolds number (w'd/ν) scalings are obtained; this ultimate-regime scaling is expected to occur at extremely high Ra in Rayleigh-Benard convection.
Research advances in theories and methods of community assembly and succession
Directory of Open Access Journals (Sweden)
WenJun Zhang
2014-09-01
Full Text Available Community succession refers to the regular and predictable process of species replacement in an environment where all species have been eliminated or disturbed. Community assembly is the process by which species arrive, grow and interact to establish a community; it stresses the change of a community over a single phase. So far many theories and methods have been proposed for community assembly and succession. In this article I review research advances in the theories and methods of community assembly and succession. Finally, continuing my past propositions, I further propose a unified theory and methodology for community assembly and succession. I suggest that community assembly and succession is a process of self-organization, following the major principles and mechanisms of self-organization, and that agent-based modeling can be used to describe its dynamics.
The method of rigged spaces in singular perturbation theory of self-adjoint operators
Koshmanenko, Volodymyr; Koshmanenko, Nataliia
2016-01-01
This monograph presents the newly developed method of rigged Hilbert spaces as a modern approach in singular perturbation theory. A key notion of this approach is the Lax-Berezansky triple of Hilbert spaces embedded one into another, which specifies the well-known Gelfand topological triple. All kinds of singular interactions described by potentials supported on small sets (like the Dirac δ-potentials, fractals, singular measures, high degree super-singular expressions) admit a rigorous treatment only in terms of the equipped spaces and their scales. The main idea of the method is to use singular perturbations to change inner products in the starting rigged space, and the construction of the perturbed operator by the Berezansky canonical isomorphism (which connects the positive and negative spaces from a new rigged triplet). The approach combines three powerful tools of functional analysis based on the Birman-Krein-Vishik theory of self-adjoint extensions of symmetric operators, the theory of singular quadra...
Perturbative method for the derivation of quantum kinetic theory based on closed-time-path formalism
International Nuclear Information System (INIS)
Koide, Jun
2002-01-01
Within the closed-time-path formalism, a perturbative method is presented, which reduces the microscopic field theory to the quantum kinetic theory. In order to make this reduction, the expectation value of a physical quantity must be calculated under the condition that the Wigner distribution function is fixed, because it is the independent dynamical variable in the quantum kinetic theory. It is shown that when a nonequilibrium Green function in the form of the generalized Kadanoff-Baym ansatz is utilized, this condition appears as a cancellation of a certain part of contributions in the diagrammatic expression of the expectation value. Together with the quantum kinetic equation, which can be derived in the closed-time-path formalism, this method provides a basis for the kinetic-theoretical description
e-Research and Learning Theory: What Do Sequence and Process Mining Methods Contribute?
Reimann, Peter; Markauskaite, Lina; Bannert, Maria
2014-01-01
This paper discusses the fundamental question of how data-intensive e-research methods could contribute to the development of learning theories. Using methodological developments in research on self-regulated learning as an example, it argues that current applications of data-driven analytical techniques, such as educational data mining and its…
Kumar, Swapna; Antonenko, Pavlo
2014-01-01
From an instrumental view, conceptual frameworks that are carefully assembled from existing literature in Educational Technology and related disciplines can help students structure all aspects of inquiry. In this article we detail how the development of a conceptual framework that connects theory, practice and method is scaffolded and facilitated…
Audience studies 2.0: on the theory, politics and method of qualitative audience research
Hermes, J.
2009-01-01
Audience research, this paper suggests, is an excellent field to test the claims of Media Studies 2.0. Moreover, 2.0 claims are a good means to review qualitative audience research itself too. Working from a broad strokes analysis of the theory, politics and method of interpretative research with
Some basic mathematical methods of diffusion theory. [emphasis on atmospheric applications
Giere, A. C.
1977-01-01
An introductory treatment of the fundamentals of diffusion theory is presented, starting with molecular diffusion and leading up to the statistical methods of turbulent diffusion. A multilayer diffusion model, designed to permit concentration and dosage calculations downwind of toxic clouds from rocket vehicles, is described. The concepts and equations of diffusion are developed on an elementary level, with emphasis on atmospheric applications.
Application of nuclear theory methods to new family of fermi systems
International Nuclear Information System (INIS)
Nesterenko, V.O.
1995-01-01
Application of nuclear theory methods to the description of the properties of the new family of small Fermi systems (metal clusters, fullerenes, helium clusters and quantum dots) is briefly reviewed. The main attention is paid to giant resonances in these systems. 52 refs., 7 figs
Theory of direct-interband-transition line shapes based on Mori's method
International Nuclear Information System (INIS)
Sam Nyung Yi; Jai Yon Ryu; Ok Hee Chung; Joung Young Sug; Sang Don Choi; Yeon Choon Chung
1987-01-01
A theory of direct interband optical transition in the electron-phonon system is introduced on the basis of the Kubo formalism and by using Mori's method of calculation. The line shape functions are introduced in two different ways and are compared with those obtained by Choi and Chung based on Argyres and Sigel's projection technique
Statistical methods of discrimination and classification advances in theory and applications
Choi, Sung C
1986-01-01
Statistical Methods of Discrimination and Classification: Advances in Theory and Applications is a collection of papers that tackles the multivariate problems of discriminating and classifying subjects into exclusive populations. The book presents 13 papers that cover advances in statistical procedures for discrimination and classification. The studies in the text primarily focus on various methods of discriminating and classifying variables, such as multiple discriminant analysis in the presence of mixed continuous and categorical data; choice of the smoothing parameter and efficiency o
Valuing companies by cash flow discounting: Ten methods and nine theories
Fernández , Pablo
2002-01-01
This paper is a summarized compendium of all the methods and theories on company valuation using cash flow discounting. The paper shows the ten most commonly used methods for valuing companies by cash flow discounting: 1) free cash flow discounted at the WACC; 2) equity cash flows discounted at the required return to equity; 3) capital cash flows discounted at the WACC before tax; 4) APV (Adjusted Present Value); 5) the business's risk-adjusted free cash flows discounted at the required retur...
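The first method in the abstract above, free cash flows discounted at the WACC, can be sketched in a few lines. This is a generic illustration, not the paper's own formulation: the forecast figures, WACC, and terminal growth rate below are invented, and the Gordon-growth terminal value is one common convention.

```python
# Hypothetical DCF sketch: enterprise value as the sum of free cash flows
# discounted at the WACC plus a Gordon-growth terminal value.

def enterprise_value(free_cash_flows, wacc, terminal_growth):
    """Present value of explicit-period FCFs plus a discounted terminal value."""
    pv_fcf = sum(fcf / (1 + wacc) ** t
                 for t, fcf in enumerate(free_cash_flows, start=1))
    terminal = free_cash_flows[-1] * (1 + terminal_growth) / (wacc - terminal_growth)
    pv_terminal = terminal / (1 + wacc) ** len(free_cash_flows)
    return pv_fcf + pv_terminal

# Invented example: three forecast years of FCF, 10% WACC, 2% perpetual growth.
ev = enterprise_value([100.0, 110.0, 120.0], wacc=0.10, terminal_growth=0.02)
```

The other nine methods differ in which cash flow is discounted and at which rate, not in the discounting mechanics themselves.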
Quasilinear theory of plasma turbulence. Origins, ideas, and evolution of the method
Bakunin, O. G.
2018-01-01
The quasilinear method of describing weak plasma turbulence is one of the most important elements of current plasma physics research. Today, this method is not only a tool for solving individual problems but a full-fledged theory of general physical interest. The author's objective is to show how the early ideas of describing the wave-particle interactions in a plasma have evolved as a result of the rapid expansion of the research interests of turbulence and turbulent transport theorists.
UN Method For The Critical Slab Problem In One-Speed Neutron Transport Theory
International Nuclear Information System (INIS)
Oeztuerk, Hakan; Guengoer, Sueleyman
2008-01-01
The Chebyshev polynomial approximation (UN method) is used to solve the critical slab problem in one-speed neutron transport theory using the Marshak boundary condition. An isotropic scattering kernel with a combination of forward and backward scattering is chosen for the neutrons in a uniform finite slab. Numerical results obtained by the UN method are presented in tables together with results obtained by the well-known PN method for comparison. It is shown that the method converges rapidly with easily executable equations.
International Nuclear Information System (INIS)
Abad, J.; Esteve, J.G.; Pacheco, A.F.
1985-01-01
An approximation technique to construct the low-lying energy eigenstates of any bosonic field theory on the lattice is proposed. It is based on the SLAC blocking method, after performing a finite-spin approximation to the individual degrees of freedom of the problem. General expressions for any polynomial self-interacting theory are given. Numerical results for phi^2 and phi^4 theories in 1+1 dimensions are offered; they exhibit a fast convergence rate. The complete low-lying energy spectrum of the phi^4 theory in 1+1 dimensions is calculated
Improved numerical methods for quantum field theory (Outstanding junior investigator award)
International Nuclear Information System (INIS)
Sokal, A.D.
1992-01-01
We are developing new and more efficient numerical methods for problems in quantum field theory. Our principal goal is to achieve radical reductions in critical slowing-down. We are concentrating at present on three new families of algorithms: multi-grid Monte Carlo, Swendsen-Wang and generalized Wolff-type embedding algorithms. In addition, we are making a high-precision numerical study of the hyperscaling conjecture for the self-avoiding walk, which is closely related to the triviality problem for phi^4 quantum field theory
Improved numerical methods for quantum field theory (Outstanding junior investigator award)
International Nuclear Information System (INIS)
Sokal, A.D.
1993-01-01
We are developing new and more efficient numerical methods for problems in quantum field theory. Our principal goal is to achieve radical reductions in critical slowing-down. We are concentrating at present on three new families of algorithms: multi-grid Monte Carlo (MGMC), Swendsen-Wang (SW) and generalized Wolff-type embedding algorithms. In addition, we are making a high-precision numerical study of the hyperscaling conjecture for the self-avoiding walk, which is closely related to the triviality problem for phi^4 quantum field theory
Thulesius, Hans; Barfod, Toke; Ekström, Helene; Håkansson, Anders
2004-09-30
Grounded theory (GT) is a popular research method for exploring human behavior. GT was developed by the medical sociologists Glaser and Strauss while they studied dying in hospitals in the 1960s, resulting in the book "Awareness of Dying". The goal of a GT study is to generate conceptual theories by using all types of data but without applying existing theories and hypotheses. GT procedures are mostly inductive, as opposed to deductive research where hypotheses are tested. A good GT has a core variable: a central concept connected to many other concepts that explains the main action in the studied area. A core variable answers the question "What's going on?". Examples of core variables are: "Cutting back after a heart attack"--how people adapt to life after a serious illness; and "Balancing in palliative cancer care"--a process of weighing, shifting, compensating and compromising when treating people with a progressive and incurable illness trajectory.
Directory of Open Access Journals (Sweden)
Xiao-ping Bai
2013-01-01
Selecting construction schemes for a building engineering project is a complex multiobjective optimization decision process in which many indexes must be weighed to find the optimum scheme. Addressing this problem, this paper selects cost, progress, quality, and safety as the four first-order evaluation indexes; uses a quantitative method for the cost index; uses integrated qualitative and quantitative methodologies for the progress, quality, and safety indexes; and integrates engineering economics, reliability theory, and information entropy theory to present a new evaluation method for building construction projects. Combined with a practical case, the paper presents the detailed computing steps: selecting the indexes at each order, establishing the index matrix, computing the score values of the indexes, computing the synthesis score, sorting the candidate schemes, and making the analysis and decision. The presented method can offer valuable references for risk computing of building construction projects.
Application of the gradient method to Hartree-Fock-Bogoliubov theory
International Nuclear Information System (INIS)
Robledo, L. M.; Bertsch, G. F.
2011-01-01
A computer code is presented for solving the equations of the Hartree-Fock-Bogoliubov (HFB) theory by the gradient method, motivated by the need for efficient and robust codes to calculate the configurations required by extensions of the HFB theory, such as the generator coordinate method. The code is organized with a separation between the parts that are specific to the details of the Hamiltonian and the parts that are generic to the gradient method. This permits total flexibility in choosing the symmetries to be imposed on the HFB solutions. The code solves for both even and odd particle-number ground states, with the choice determined by the input data stream. Application is made to the nuclei in the sd shell using the universal sd-shell interaction B (USDB) shell-model Hamiltonian.
Bai, Xiao-ping; Zhang, Xi-wei
2013-01-01
Selecting construction schemes for a building engineering project is a complex multiobjective optimization decision process in which many indexes must be weighed to find the optimum scheme. Addressing this problem, this paper selects cost, progress, quality, and safety as the four first-order evaluation indexes; uses a quantitative method for the cost index; uses integrated qualitative and quantitative methodologies for the progress, quality, and safety indexes; and integrates engineering economics, reliability theory, and information entropy theory to present a new evaluation method for building construction projects. Combined with a practical case, the paper presents the detailed computing steps: selecting the indexes at each order, establishing the index matrix, computing the score values of the indexes, computing the synthesis score, sorting the candidate schemes, and making the analysis and decision. The presented method can offer valuable references for risk computing of building construction projects.
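The entropy-based weighting and synthesis-score steps described above can be illustrated with a minimal sketch of the generic Shannon entropy-weight method; the decision matrix below is hypothetical and this is not the paper's exact formulation.

```python
import math

# Hypothetical decision matrix: rows are candidate construction schemes,
# columns are normalized benefit-type scores for cost, progress, quality, safety.
decision_matrix = [
    [0.8, 0.7, 0.9, 0.6],
    [0.6, 0.9, 0.7, 0.8],
    [0.9, 0.6, 0.8, 0.7],
]

def entropy_weights(matrix):
    """Shannon-entropy weights: indexes whose scores vary more across
    schemes carry more information and so receive larger weights."""
    m, n = len(matrix), len(matrix[0])
    k = 1.0 / math.log(m)
    divergences = []
    for j in range(n):
        col = [row[j] for row in matrix]
        total = sum(col)
        p = [v / total for v in col]
        e = -k * sum(pi * math.log(pi) for pi in p if pi > 0)  # entropy in [0, 1]
        divergences.append(1.0 - e)
    s = sum(divergences)
    return [d / s for d in divergences]

w = entropy_weights(decision_matrix)
scores = [sum(wi * v for wi, v in zip(w, row)) for row in decision_matrix]
best = max(range(len(scores)), key=scores.__getitem__)  # index of best scheme
```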
Unification of field theory and maximum entropy methods for learning probability densities
Kinney, Justin B.
2015-09-01
The need to estimate smooth probability distributions (a.k.a. probability densities) from finite sampled data is ubiquitous in science. Many approaches to this problem have been described, but none is yet regarded as providing a definitive solution. Maximum entropy estimation and Bayesian field theory are two such approaches. Both have origins in statistical physics, but the relationship between them has remained unclear. Here I unify these two methods by showing that every maximum entropy density estimate can be recovered in the infinite smoothness limit of an appropriate Bayesian field theory. I also show that Bayesian field theory estimation can be performed without imposing any boundary conditions on candidate densities, and that the infinite smoothness limit of these theories recovers the most common types of maximum entropy estimates. Bayesian field theory thus provides a natural test of the maximum entropy null hypothesis and, furthermore, returns an alternative (lower entropy) density estimate when the maximum entropy hypothesis is falsified. The computations necessary for this approach can be performed rapidly for one-dimensional data, and software for doing this is provided.
Unification of field theory and maximum entropy methods for learning probability densities.
Kinney, Justin B
2015-09-01
The need to estimate smooth probability distributions (a.k.a. probability densities) from finite sampled data is ubiquitous in science. Many approaches to this problem have been described, but none is yet regarded as providing a definitive solution. Maximum entropy estimation and Bayesian field theory are two such approaches. Both have origins in statistical physics, but the relationship between them has remained unclear. Here I unify these two methods by showing that every maximum entropy density estimate can be recovered in the infinite smoothness limit of an appropriate Bayesian field theory. I also show that Bayesian field theory estimation can be performed without imposing any boundary conditions on candidate densities, and that the infinite smoothness limit of these theories recovers the most common types of maximum entropy estimates. Bayesian field theory thus provides a natural test of the maximum entropy null hypothesis and, furthermore, returns an alternative (lower entropy) density estimate when the maximum entropy hypothesis is falsified. The computations necessary for this approach can be performed rapidly for one-dimensional data, and software for doing this is provided.
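The maximum entropy starting point of the two records above can be sketched on a grid: with mean and second-moment constraints, the maxent density has the form p(x) ∝ exp(λ₁x + λ₂x²), and λ is found by minimizing the convex dual. This is a generic illustration with synthetic data, not the software the author provides.

```python
import numpy as np
from scipy.optimize import minimize

# Synthetic sample whose maxent fit (mean + second-moment constraints)
# should recover a discretized Gaussian.
rng = np.random.default_rng(0)
data = rng.normal(loc=1.0, scale=2.0, size=1000)

x = np.linspace(-8, 10, 400)   # evaluation grid
dx = x[1] - x[0]
target = np.array([data.mean(), (data ** 2).mean()])  # moment constraints

def dual(lam):
    # Convex dual of the maxent problem: log partition function minus
    # the constraint term; its minimizer matches the target moments.
    logz = np.log(np.sum(np.exp(lam[0] * x + lam[1] * x ** 2) * dx))
    return logz - lam @ target

res = minimize(dual, x0=np.array([0.1, -0.1]), method="Nelder-Mead")
lam = res.x
p = np.exp(lam[0] * x + lam[1] * x ** 2)
p /= p.sum() * dx              # normalized density on the grid
```

The Bayesian field theory of the paper generalizes this by penalizing roughness rather than fixing a finite set of moments; the maxent estimate is recovered in the infinite-smoothness limit.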
Energy Technology Data Exchange (ETDEWEB)
Kim, Dong Hyun; Kim, Hak Sung [Hanyang University, Seoul (Korea, Republic of); Kim, Hyo Chan; Yang, Yong Sik; In, Wang kee [KAERI, Daejeon (Korea, Republic of)
2016-05-15
In this paper, an analytical method based on thick-walled theory is studied to calculate the stress and strain of accident-tolerant fuel (ATF) cladding. To prescribe the boundary conditions of the analytical method, two algorithms were employed, the FRACAS subroutines 'Cladf' and 'Couple'. To evaluate the developed method, an equivalent model using the finite element method was established, and the stress components of the method were compared with those of the equivalent FE model. One promising ATF concept is the coated cladding, which offers advantages such as a high melting point, high neutron economy, and a low tritium permeation rate. To evaluate the mechanical behavior and performance of coated cladding, a dedicated model is needed to simulate ATF behavior in the reactor. In particular, a model for simulating the stress and strain of coated cladding must be developed because the previous model, FRACAS, is a one-body model. The FRACAS module employs an analytical method based on thin-walled theory, in which the radial stress is taken to be zero; this assumption is not suitable for ATF cladding, where the radial stress is not negligible. Recently, a structural model for multilayered ceramic cylinders based on thick-walled theory was developed. Also, FE-based numerical simulations such as BISON have been developed to evaluate fuel performance. An analytical method that calculates the stress components of ATF cladding was developed in this study. Thick-walled theory was used to derive equations for calculating stress and strain. To solve these equations, boundary and loading conditions were obtained by the subroutines 'Cladf' and 'Couple' and applied to the analytical method. To evaluate the developed method, an equivalent FE model was established and its results were compared to those of the analytical model. Based on the
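The thin- vs thick-walled distinction in the record above can be made concrete with the standard Lamé formulas for a pressurized cylinder; this is the textbook solution, not the FRACAS or coated-cladding implementation, and the radii and pressures below are hypothetical.

```python
# Lamé thick-walled cylinder stresses: unlike thin-walled theory, the
# radial stress is nonzero and equals -p at a pressurized surface.

def lame_stresses(r, a, b, p_in, p_out):
    """Radial and hoop stress at radius r for a cylinder of inner radius a,
    outer radius b, internal pressure p_in and external pressure p_out."""
    A = (p_in * a ** 2 - p_out * b ** 2) / (b ** 2 - a ** 2)
    B = (p_in - p_out) * a ** 2 * b ** 2 / (b ** 2 - a ** 2)
    sigma_r = A - B / r ** 2
    sigma_theta = A + B / r ** 2
    return sigma_r, sigma_theta

# Hypothetical cladding-like geometry (m) and pressures (Pa).
sr, st = lame_stresses(r=4.1e-3, a=4.1e-3, b=4.75e-3, p_in=15e6, p_out=0.1e6)
# At r = a the radial stress equals -p_in, illustrating why the
# thin-walled assumption (radial stress = 0) fails here.
```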
Application of activity theory to analysis of human-related accidents: Method and case studies
International Nuclear Information System (INIS)
Yoon, Young Sik; Ham, Dong-Han; Yoon, Wan Chul
2016-01-01
This study proposes a new approach to human-related accident analysis based on activity theory. Most of the existing methods seem to be insufficient for comprehensive analysis of human activity-related contextual aspects of accidents when investigating the causes of human errors. Additionally, they identify causal factors and their interrelationships with a weak theoretical basis. We argue that activity theory offers useful concepts and insights to supplement existing methods. The proposed approach gives holistic contextual backgrounds for understanding and diagnosing human-related accidents. It also helps identify and organise causal factors in a consistent, systematic way. Two case studies in Korean nuclear power plants are presented to demonstrate the applicability of the proposed method. Human Factors Analysis and Classification System (HFACS) was also applied to the case studies. The results of using HFACS were then compared with those of using the proposed method. These case studies showed that the proposed approach could produce a meaningful set of human activity-related contextual factors, which cannot easily be obtained by using existing methods. It can be especially effective when analysts think it is important to diagnose accident situations with human activity-related contextual factors derived from a theoretically sound model and to identify accident-related contextual factors systematically. - Highlights: • This study proposes a new method for analysing human-related accidents. • The method was developed based on activity theory. • The concept of activity system model and contradiction was used in the method. • Two case studies in nuclear power plants are presented. • The method is helpful to consider causal factors systematically and comprehensively.
Knowledge Reduction Based on Divide and Conquer Method in Rough Set Theory
Directory of Open Access Journals (Sweden)
Feng Hu
2012-01-01
The divide and conquer method is a typical granular computing method using multiple levels of abstraction and granulation. So far, although some results based on the divide and conquer method have been obtained in rough set theory, systematic methods for knowledge reduction based on the divide and conquer method are still absent. In this paper, knowledge reduction approaches based on the divide and conquer method are presented for both equivalence relations and tolerance relations. After that, a systematic approach, named the abstract process for knowledge reduction based on the divide and conquer method in rough set theory, is proposed. Based on the presented approach, two algorithms for knowledge reduction are given: an algorithm for attribute reduction and an algorithm for attribute value reduction. Experimental evaluations on UCI data sets and KDDCUP99 data sets illustrate that the proposed approaches efficiently process large data sets with good recognition rates, compared with KNN, SVM, C4.5, Naive Bayes, and CART.
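The rough-set notion behind attribute reduction can be sketched with a toy decision table: an attribute subset is a candidate reduct if it preserves the positive region of the full condition set. This is the generic definition, not the paper's divide-and-conquer algorithm, and the table below is made up.

```python
from collections import defaultdict

# Hypothetical decision table: condition attributes a, b, c and decision d.
table = [
    {"a": 1, "b": 0, "c": 1, "d": "yes"},
    {"a": 1, "b": 1, "c": 1, "d": "yes"},
    {"a": 0, "b": 0, "c": 0, "d": "no"},
    {"a": 0, "b": 1, "c": 0, "d": "no"},
]

def partitions(rows, attrs):
    """Group row indices into equivalence classes by the given attributes."""
    classes = defaultdict(list)
    for i, row in enumerate(rows):
        classes[tuple(row[a] for a in attrs)].append(i)
    return list(classes.values())

def positive_region(rows, attrs, decision):
    """Indices whose equivalence class is consistent on the decision."""
    pos = set()
    for block in partitions(rows, attrs):
        if len({rows[i][decision] for i in block}) == 1:
            pos.update(block)
    return pos

full = positive_region(table, ["a", "b", "c"], "d")
reduced = positive_region(table, ["a"], "d")  # here {a} alone preserves it
```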
International Nuclear Information System (INIS)
Zhu, Qingjun; Song, Fengquan; Ren, Jie; Chen, Xueyong; Zhou, Bin
2014-01-01
To further expand the application of artificial neural networks in the field of neutron spectrometry, criteria for choosing between an artificial neural network (ANN) and the maximum entropy method (MEM) for unfolding neutron spectra are presented. The Bonner sphere counts for the IAEA neutron spectra were used as a database, and both the ANN and the MEM were used to unfold the spectra; the mean square of each spectrum was defined as the difference between the desired and unfolded spectra. After the information entropy of each spectrum was calculated using information entropy theory, the relationship between the mean squares of the spectra and the information entropy was obtained. This information entropy guides the selection of the unfolding method. Because of its importance, a method for predicting the information entropy from the Bonner sphere counts was also established. The criteria based on information entropy theory can thus be used to choose between the ANN and MEM unfolding methods, expanding the application of artificial neural networks to unfolding neutron spectra. - Highlights: • Two neutron spectra unfolding methods, ANN and MEM, were compared. • The spectrum's entropy offers useful information for selecting unfolding methods. • For a spectrum with low entropy, the ANN was generally better than the MEM. • The spectrum's entropy was predicted based on the Bonner spheres' counts
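The information-entropy criterion described above reduces to computing the Shannon entropy of a normalized spectrum; the sketch below uses made-up spectra and the standard entropy formula, not the paper's Bonner-sphere prediction model.

```python
import math

def spectrum_entropy(counts):
    """Shannon entropy of a spectrum normalized to a probability vector."""
    total = sum(counts)
    p = [c / total for c in counts if c > 0]
    return -sum(pi * math.log(pi) for pi in p)

flat_spectrum = [1.0] * 8                # near-uniform: high entropy
peaked_spectrum = [0.01] * 7 + [10.0]    # strongly peaked: low entropy

h_flat = spectrum_entropy(flat_spectrum)
h_peak = spectrum_entropy(peaked_spectrum)
# Per the criterion above, a low-entropy (structured) spectrum would favor
# the ANN unfolding, a higher-entropy one the maximum entropy method.
```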
'CANDLE' burnup regime after LWR regime
International Nuclear Information System (INIS)
Sekimoto, Hiroshi; Nagata, Akito
2008-01-01
CANDLE (Constant Axial shape of Neutron flux, nuclide densities and power shape During Life of Energy producing reactor) burnup strategy offers many merits. From a safety point of view, the change of excess reactivity along burnup is theoretically zero, and core characteristics such as power feedback coefficients and the power peaking factor do not change along burnup. Applying this burnup strategy to neutron-rich fast reactors yields excellent performance: only natural or depleted uranium is required for the replacement fuels, and about 40% of the natural or depleted uranium undergoes fission without conventional reprocessing and enrichment. If an LWR produced energy of X joules, a CANDLE reactor can produce about 50X joules from the depleted uranium left at the enrichment facility for the LWR fuel. If LWRs have produced energy sufficient for a full 20 years, CANDLE reactors could produce energy for 1000 years from that depleted uranium, without mining any uranium ore and without a reprocessing facility. The burnup of spent fuel becomes 10 times higher, so the spent fuel amount per unit of produced energy is reduced to one-tenth. The details of the scenario of a CANDLE burnup regime following the LWR regime will be presented at the symposium. (author)
Quantum chemistry: the development of ab initio methods in molecular electronic structure theory
Schaefer III, Henry F
2004-01-01
This guide is guaranteed to prove of keen interest to the broad spectrum of experimental chemists who use electronic structure theory to assist in the interpretation of their laboratory findings. A list of 150 landmark papers in ab initio molecular electronic structure methods, it features the first page of each paper (which usually encompasses the abstract and introduction). Its primary focus is methodology, rather than the examination of particular chemical problems, and the selected papers either present new and important methods or illustrate the effectiveness of existing methods in predi
Mathematical correlation of modal-parameter-identification methods via system-realization theory
Juang, Jer-Nan
1987-01-01
A unified approach is introduced using system-realization theory to derive and correlate modal-parameter-identification methods for flexible structures. Several different time-domain methods are analyzed and treated. A basic mathematical foundation is presented which provides insight into the field of modal-parameter identification for comparison and evaluation. The relation among various existing methods is established and discussed. This report serves as a starting point to stimulate additional research toward the unification of the many possible approaches for modal-parameter identification.
Mathematical correlation of modal parameter identification methods via system realization theory
Juang, J. N.
1986-01-01
A unified approach is introduced using system realization theory to derive and correlate modal parameter identification methods for flexible structures. Several different time-domain and frequency-domain methods are analyzed and treated. A basic mathematical foundation is presented which provides insight into the field of modal parameter identification for comparison and evaluation. The relation among various existing methods is established and discussed. This report serves as a starting point to stimulate additional research towards the unification of the many possible approaches for modal parameter identification.
Network Theory and Effects of Transcranial Brain Stimulation Methods on the Brain Networks
Directory of Open Access Journals (Sweden)
Sema Demirci
2014-12-01
In recent years, there has been a shift from classic localizational approaches to approaches in which the brain is considered a complex system, and a corresponding increase in collaborative studies across areas of neurology aimed at developing methods to understand such systems. One of the new approaches is graph theory, whose principles are based on mathematics and physics. According to this theory, the functional-anatomical connections of the brain are defined as a network. Transcranial brain stimulation techniques are among the research and treatment methods that have become common in recent years. Changes that occur when brain stimulation techniques are applied to physiological and pathological networks help us better understand the normal and abnormal functions of the brain, especially when combined with techniques such as neuroimaging and electroencephalography. This review provides an overview of the applications of graph theory and its parameters, of studies on brain function in neurology and neuroscience, and of applications of brain stimulation methods to brain network models and to the treatment of pathological networks defined on the basis of this theory.
Theoretical Coalescence: A Method to Develop Qualitative Theory: The Example of Enduring.
Morse, Janice M
Qualitative research is frequently context bound, lacks generalizability, and is limited in scope. The purpose of this article is to describe a method, theoretical coalescence, that provides a strategy for analyzing complex, high-level concepts and for developing generalizable theory. Theoretical coalescence is a method of theoretical expansion and inductive inquiry for theory development that uses data (rather than themes, categories, and published extracts of data) as the primary source for analysis. Here, using the development of the lay concept of enduring as an example, I explore the scientific development of the concept in multiple settings over many projects and link it within the Praxis Theory of Suffering. As comprehension emerges during theoretical coalescence, it is essential that raw data from various situations be available for reinterpretation/reanalysis and comparison, to identify the essential features of the concept. The concept is then reconstructed through additional inquiry that builds description, and evidence is collected and conceptualized to create a more expansive concept and theory. By utilizing apparently diverse data sets from different contexts that are linked by certain characteristics, the essential features of the concept emerge. Such inquiry is divergent and less bound by context, yet purposeful, logical, and with significant pragmatic implications for practice in nursing and beyond our discipline. Theoretical coalescence is a means by which qualitative inquiry is broadened to make an impact, to accommodate new theoretical shifts and concepts, and to make qualitative research applied and accessible in new ways.
A prediction method based on grey system theory in equipment condition based maintenance
International Nuclear Information System (INIS)
Yan, Shengyuan; Yan, Shengyuan; Zhang, Hongguo; Zhang, Zhijian; Peng, Minjun; Yang, Ming
2007-01-01
Grey prediction is a modeling method that forecasts the development of a target equipment system's eigenvalues from historical or present, known or indefinite information, and can set up a model from limited data. In this paper, the postulates of grey system theory are introduced first, including grey generating, the sorts of grey generating, and the grey forecasting model. The concrete application process is introduced second, covering grey prediction modeling, prediction, error calculation, and the equal dimension and new information approach. The so-called 'Equal Dimension and New Information' (EDNI) technique of grey system theory is adopted in an application case, aiming at improving prediction accuracy without increasing the amount of calculation, by replacing old data with new. The proposed method offers an effective new way to handle ever-growing eigenvalue data at equal intervals, with short time intervals and in real time. It was verified on the vibration prediction of an induced draft fan of a boiler at the Yantai Power Station in China, and the results show that the grey-system-based method is simple and achieves high prediction accuracy, making it useful and significant for control and controllable management in safe production. (authors)
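A minimal sketch of the standard GM(1,1) grey forecasting model that such work builds on, with the EDNI idea noted in a comment. This is an illustrative reconstruction with made-up data, not the authors' implementation:

```python
import math

def gm11_fit(x0):
    """Fit a GM(1,1) grey model to series x0; returns (a, b) of dx1/dt + a*x1 = b."""
    x1, s = [], 0.0
    for v in x0:           # accumulated generating operation (AGO)
        s += v
        x1.append(s)
    z = [0.5 * (x1[k] + x1[k - 1]) for k in range(1, len(x1))]
    y = x0[1:]
    # Least squares for y = -a*z + b via the 2x2 normal equations.
    n = len(z)
    szz = sum(v * v for v in z); sz = sum(z)
    szy = sum(zi * yi for zi, yi in zip(z, y)); sy = sum(y)
    det = n * szz - sz * sz
    a = -(n * szy - sz * sy) / det
    b = (szz * sy - sz * szy) / det
    return a, b

def gm11_predict(x0, a, b, steps=1):
    """Forecast the next values of x0 from the fitted GM(1,1) model."""
    def x1_hat(k):
        return (x0[0] - b / a) * math.exp(-a * k) + b / a
    n = len(x0)
    return [x1_hat(n + i) - x1_hat(n + i - 1) for i in range(steps)]

# EDNI ('Equal Dimension, New Information'): append the newest observation and
# drop the oldest before refitting, so the modeling window stays the same size.
x0 = [100.0, 104.0, 108.2, 112.5, 117.0]   # illustrative vibration eigenvalues
a, b = gm11_fit(x0)
print(round(gm11_predict(x0, a, b)[0], 1))
```

For this roughly geometric series the one-step forecast continues the ~4% growth trend.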
Energy Technology Data Exchange (ETDEWEB)
Sun, Hao; Ping, Xueliang; Cao, Yi; Lie, Ke [Jiangnan University, Wuxi (China); Chen, Peng [Mie University, Mie (Japan); Wang, Huaqing [Beijing University, Beijing (China)
2014-04-15
This study proposes a novel intelligent fault diagnosis method for rotating machinery using ant colony optimization (ACO) and possibility theory. Non-dimensional symptom parameters (NSPs) in the frequency domain are defined to reflect the features of the vibration signals measured in each state. A sensitive evaluation method for selecting good symptom parameters using principal component analysis (PCA) is proposed for detecting and distinguishing faults in rotating machinery. Using the ACO clustering algorithm, the synthesizing symptom parameters (SSPs) for condition diagnosis are obtained. A fuzzy diagnosis method using sequential inference and possibility theory is also proposed, by which the conditions of the machinery can be identified sequentially. Lastly, the proposed method is compared with a conventional neural network (NN) method. Practical examples of diagnosis for V-belt driving equipment used in a centrifugal fan are provided to verify the effectiveness of the proposed method. The results verify that the faults that often occur in V-belt driving equipment, such as a pulley defect state, a belt defect state and a belt looseness state, are effectively identified by the proposed method, while these faults are difficult to detect using the conventional NN.
Directory of Open Access Journals (Sweden)
A. N. Ostrikov
2015-01-01
The consumer properties of food raw materials are formed during heat treatment. New physical, flavor, and aroma properties of products of plant origin form during drying due to substantial changes in the composition of the raw material resulting from biochemical reactions. In the production of dried and roasted products it is very important to follow the parameters that promote the biochemical processes aimed at creating a product with high nutritional quality, strong aroma, and pleasant taste. We studied the basic kinetics of the drying of food raw material (using artichoke as an example) in a dense interspersed layer, which formed the basis for a rational choice of drying regime with due consideration of changes in the moisture content of the product. The effect of the hydrodynamic conditions of the dried product's movement on layer height and drying intensity was established. From the analysis of the drying kinetics, multistep drying regimes were chosen. Analysis of the drying intensity of artichoke particles in air, an air-steam mixture, and superheated steam showed two stages: a constant-rate stage and a gradually falling-rate stage. The kinetic laws of artichoke drying in a dense interspersed layer formed the basis of the engineering calculation of a dryer with a transporting body in the form of a "traveling wave". Such a dryer achieves uniform drying of the product through soft, gentle drying regimes that preserve the product particles as much as possible, and improves the quality of the finished product through the use of an interspersed layer that reduces clumping of the product being dried.
Projection and nested force-gradient methods for quantum field theories
Energy Technology Data Exchange (ETDEWEB)
Shcherbakov, Dmitry
2017-07-26
For the Hybrid Monte Carlo algorithm (HMC), often used to study the fundamental quantum field theory of quarks and gluons, quantum chromodynamics (QCD), on the lattice, one is interested in efficient numerical time integration schemes which preserve geometric properties of the flow and are optimal in terms of computational costs per trajectory for a given acceptance rate. High order numerical methods allow the use of larger step sizes, but demand a larger computational effort per step; low order schemes do not require such large computational costs per step, but need more steps per trajectory. So there is a need to balance these opposing effects. In this work we introduce novel geometric numerical time integrators, namely, projection and nested force-gradient methods in order to improve the efficiency of the HMC algorithm in application to the problems of quantum field theories.
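For context, the baseline that force-gradient integrators refine is the second-order leapfrog (Störmer-Verlet) scheme, which is symplectic and time-reversible and therefore keeps the HMC energy error bounded. A minimal sketch on a toy Hamiltonian (illustrative only, not the thesis' projection or nested integrators):

```python
def leapfrog(q, p, grad_u, eps, steps):
    """Leapfrog integrator for H = p^2/2 + U(q): the second-order symplectic
    scheme whose higher-order refinements are the force-gradient methods."""
    p = p - 0.5 * eps * grad_u(q)        # initial half-step in momentum
    for _ in range(steps - 1):
        q = q + eps * p                  # full position step
        p = p - eps * grad_u(q)          # full momentum step
    q = q + eps * p
    p = p - 0.5 * eps * grad_u(q)        # final half-step in momentum
    return q, p

# Harmonic oscillator U(q) = q^2/2: the energy error stays bounded (no drift),
# which is what makes such integrators suitable for long HMC trajectories.
q, p = 1.0, 0.0
energy0 = 0.5 * (q * q + p * p)
q, p = leapfrog(q, p, lambda x: x, eps=0.1, steps=100)
energy1 = 0.5 * (q * q + p * p)
print(abs(energy1 - energy0) < 1e-2)  # True
```

Higher-order (e.g. force-gradient) schemes shrink this bounded error further per step, at a larger per-step cost; the balance between the two is exactly the trade-off the abstract describes.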
International Nuclear Information System (INIS)
Kondorskiy, A.; Nakamura, H.
2004-01-01
The title theory is developed by combining the Herman-Kluk semiclassical theory for adiabatic propagation on a single potential-energy surface and the semiclassical Zhu-Nakamura theory for nonadiabatic transition. The formulation with use of natural mathematical principles leads to a quite simple expression for the propagator based on classical trajectories, and simple formulas are derived for overall adiabatic and nonadiabatic processes. The theory is applied to electronically nonadiabatic photodissociation processes: a one-dimensional problem of H2+ in a cw (continuous wave) laser field and a two-dimensional model problem of H2O in a cw laser field. The theory is found to work well for propagation durations of several molecular vibrational periods and over a wide energy range. Although the formulation is made for the case of laser-induced nonadiabatic processes, it is straightforwardly applicable to ordinary electronically nonadiabatic chemical dynamics.
Proshutinsky, Andrey; Dukhovskoy, Dmitry; Timmermans, Mary-Louise; Krishfield, Richard; Bamber, Jonathan L
2015-10-13
Between 1948 and 1996, mean annual environmental parameters in the Arctic experienced a well-pronounced decadal variability with two basic circulation patterns: cyclonic and anticyclonic, alternating at 5 to 7 year intervals. During cyclonic regimes, low sea-level atmospheric pressure (SLP) dominated over the Arctic Ocean driving sea ice and the upper ocean counterclockwise; the Arctic atmosphere was relatively warm and humid, and freshwater flux from the Arctic Ocean towards the subarctic seas was intensified. By contrast, during anticyclonic circulation regimes, high SLP dominated, driving sea ice and the upper ocean clockwise; the atmosphere was cold and dry and the freshwater flux from the Arctic to the subarctic seas was reduced. Since 1997, however, the Arctic system has been under the influence of an anticyclonic circulation regime (17 years) with a set of environmental parameters that are atypical for this regime. We discuss a hypothesis explaining the causes and mechanisms regulating the intensity and duration of Arctic circulation regimes, and speculate how changes in freshwater fluxes from the Arctic Ocean and Greenland impact environmental conditions and interrupt their decadal variability. © 2015 The Authors.
Directory of Open Access Journals (Sweden)
Gavril PANDI
2011-03-01
The influenced flow regimes. The presence and activities of humanity influence the environmental system as a whole and, in this context, river water resources. Accordingly, the natural runoff regime undergoes larger and deeper changes, whose nature depends on the type and degree of water use. The multitude of uses causes different types of influence, with different quantitative aspects. At the same time, the influences have qualitative connotations regarding modifications of the yearly runoff volume, so the natural runoff regime is modified. After analyzing the distribution laws of monthly runoff, four types of influenced runoff regime have been differentiated. In the excess type, the influenced runoff exceeds the natural runoff continuously throughout the year. The deficient type is characterized by the inverse ratio throughout the year. In the sinusoidal type, the influenced runoff is smaller than the natural runoff during the period when water is retained in reservoirs, and during the depletion period the situation inverts. In the irregular type, the ratio between influenced and natural runoff changes randomly from month to month. Recognizing the influenced regime and the degree of influence is necessary for evaluating and analyzing usable hydrological river resources, for flood defence activities, for the complex planning of hydrographic basins, for environmental design, and so on.
International Nuclear Information System (INIS)
Zaza, Chady
2015-01-01
The numerical simulation of steam generators of pressurized water reactors is a complex problem, involving different flow regimes and a wide range of length and time scales. An accidental scenario may be associated with very fast variations of the flow at a significant Mach number, whereas in the nominal regime the flow may be stationary, at low Mach number. Moreover, whatever the regime under consideration, the array of U-tubes is modelled by a porous medium in order to avoid taking into account the complex geometry of the steam generator, which raises the issue of the coupling conditions at the interface with the free fluid. We propose a new pressure-correction scheme for cell-centered finite volumes for solving the compressible Navier-Stokes and Euler equations at all Mach numbers. The existence of a discrete solution, the consistency of the scheme in the Lax sense and the positivity of the internal energy were proved. The scheme was then extended to the homogeneous two-phase flow models of the GENEPI code developed at CEA. Lastly, a multigrid-AMR algorithm was adapted to use our pressure-correction scheme on adaptive grids. Regarding the second issue addressed in this work, the numerical simulation of a fluid flow over a porous bed involves very different length scales. Macroscopic interface models - such as the Ochoa-Tapia-Whitaker or Beavers-Joseph law for a viscous flow - represent the transition region between the free fluid and the porous region by an interface of discontinuity associated with specific transmission conditions. An extension to the Beavers-Joseph law is proposed for the convective regime. By introducing a jump in the kinetic energy at the interface, we recover an interface condition close to the Beavers-Joseph law but with a non-linear slip coefficient, which depends on the free-fluid velocity at the interface and on the Darcy velocity. The validity of this new transmission condition was assessed with direct numerical simulations at
Unification of field theory and maximum entropy methods for learning probability densities
Kinney, Justin B.
2014-01-01
The need to estimate smooth probability distributions (a.k.a. probability densities) from finite sampled data is ubiquitous in science. Many approaches to this problem have been described, but none is yet regarded as providing a definitive solution. Maximum entropy estimation and Bayesian field theory are two such approaches. Both have origins in statistical physics, but the relationship between them has remained unclear. Here I unify these two methods by showing that every maximum entropy de...
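As a small worked illustration of the maximum entropy side of this unification, the following sketch finds the maximum entropy distribution on a finite support subject to a single mean constraint. The Gibbs form p_i ∝ exp(λ·x_i) and the bisection on λ are standard; the support and target mean are made-up, and this is not the paper's field-theoretic formulation:

```python
import math

def maxent_density(xs, target_mean, tol=1e-10):
    """Maximum entropy distribution on support xs with a fixed mean.
    The solution has the Gibbs form p_i ∝ exp(lam * x_i); the Lagrange
    multiplier lam is found by bisection on the (monotone) mean."""
    def mean_for(lam):
        w = [math.exp(lam * x) for x in xs]
        z = sum(w)
        return sum(x * wi for x, wi in zip(xs, w)) / z
    lo, hi = -50.0, 50.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mean_for(mid) < target_mean:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    w = [math.exp(lam * x) for x in xs]
    z = sum(w)
    return [wi / z for wi in w]

xs = [0, 1, 2, 3, 4]
p = maxent_density(xs, target_mean=1.0)
print(round(sum(x * pi for x, pi in zip(xs, p)), 6))  # mean constraint is met
```

Because the target mean (1.0) is below the uniform mean (2.0), λ is negative and the resulting probabilities decrease along the support, as a maxent solution should.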
INFORMATIONAL-METHODICAL SUPPORT OF THE COURSE «MATHEMATICAL LOGIC AND THEORY OF ALGORITHMS»
Directory of Open Access Journals (Sweden)
Y. I. Sinko
2010-06-01
In this article, the basic principles of a technique for training future mathematics teachers in the foundations of mathematical logic and the theory of algorithms at Kherson State University with the use of information technologies are examined. A general description is given of the functioning of the methodical system for learning mathematical logic with information technologies, in the variant where the information technologies are represented by the integrated specialized educational software environment «MatLog».
Core design and operation optimization methods based on time-dependent perturbation theory
International Nuclear Information System (INIS)
Greenspan, E.
1983-08-01
A general approach for the optimization of nuclear reactor core design and operation is outlined; it is based on two cornerstones: a newly developed time-dependent (or burnup-dependent) perturbation theory for nonlinear problems and a successive iteration technique. The resulting approach is capable of handling realistic reactor models using computational methods of any degree of sophistication desired, while accounting for all the constraints imposed. Three general optimization strategies, differing in the way the constraints are handled, are formulated. (author)
Analytical and hybrid methods in the theory of slot-hole coupling of electrodynamic volumes
Katrich, Victor A; Berdnik, Sergey L; Berdnik, Sergey L
2008-01-01
The narration of the text is both laconic and visually accessible, providing the reader with the possibility of rapid study and application of methods of computer analysis of electrodynamic problems. The book is aimed at university professors, researchers and specialists interested in the theory and practical analysis of waveguide devices and systems using slot coupling elements. Topics included in the book are directly based on original research results obtained by the authors and otherwise unknown earlier.
Mathematical foundations of transport theory
International Nuclear Information System (INIS)
Ershov, Yu.I.; Shikhov, S.B.
1985-01-01
Main areas of application of the operator-equation analysis method to transport theory problems are considered. The mathematical theory of the reactor critical state is presented. Theorems on the existence of positive solutions of nonlinear nonstationary equations accounting for the temperature and xenon feedbacks are proved. Conditions for stability and asymptotic stability of steady-state regimes for different distributed models of a nuclear reactor are obtained on the basis of modern operator perturbation theory, and certain problems of control using an absorber are considered.
International Nuclear Information System (INIS)
Sabouri, Pouya
2013-01-01
This thesis presents a comprehensive study of sensitivity/uncertainty analysis for reactor performance parameters (e.g. the k-effective) to the base nuclear data from which they are computed. The analysis starts at the fundamental step, the Evaluated Nuclear Data File and the uncertainties inherently associated with the data they contain, available in the form of variance/covariance matrices. We show that when a methodical and consistent computation of sensitivity is performed, conventional deterministic formalisms can be sufficient to propagate nuclear data uncertainties with the level of accuracy obtained by the most advanced tools, such as state-of-the-art Monte Carlo codes. By applying our developed methodology to three exercises proposed by the OECD (Uncertainty Analysis for Criticality Safety Assessment Benchmarks), we provide insights of the underlying physical phenomena associated with the used formalisms. (author)
An information theory criteria based blind method for enumerating active users in DS-CDMA system
Samsami Khodadad, Farid; Abed Hodtani, Ghosheh
2014-11-01
In this paper, a new blind algorithm for active user enumeration in asynchronous direct sequence code division multiple access (DS-CDMA) in a multipath channel scenario is proposed. The proposed method is based on information theory criteria. There are two main categories of information criteria widely used in active user enumeration: the Akaike Information Criterion (AIC) and the Minimum Description Length (MDL) criterion. The main difference between these two criteria is their penalty functions. Due to this difference, MDL is a consistent enumerator with better performance at higher signal-to-noise ratios (SNR), whereas AIC is preferred at lower SNRs. In the sequel, we propose an SNR-compliant method based on subspace analysis and a training genetic algorithm that combines the strengths of both. Moreover, our method uses only a single antenna, unlike previous methods, which decreases hardware complexity. Simulation results show that the proposed method is capable of estimating the number of active users without any prior knowledge, and demonstrate the efficiency of the method.
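The AIC/MDL model-order selection this work builds on can be sketched from the eigenvalues of the sample covariance matrix: for each candidate order k, the criteria compare the geometric and arithmetic means of the noise-subspace eigenvalues plus a penalty. The eigenvalues and snapshot count below are made-up illustrative numbers, not data from the paper:

```python
import math

def aic_mdl(eigs, n_snapshots):
    """AIC and MDL criteria over candidate model orders k, given the
    eigenvalues of the sample covariance matrix in descending order.
    Returns the order (number of sources/users) chosen by each criterion."""
    p = len(eigs)
    aic, mdl = [], []
    for k in range(p):
        tail = eigs[k:]                      # presumed noise eigenvalues
        m = p - k
        geo = math.exp(sum(math.log(v) for v in tail) / m)
        arith = sum(tail) / m
        log_lik = n_snapshots * m * math.log(geo / arith)
        free = k * (2 * p - k)               # free parameters at order k
        aic.append(-2 * log_lik + 2 * free)
        mdl.append(-log_lik + 0.5 * free * math.log(n_snapshots))
    return aic.index(min(aic)), mdl.index(min(mdl))

# Three dominant (signal) eigenvalues over a noise floor of about 0.1.
eigs = [9.0, 5.5, 3.2, 0.11, 0.10, 0.10, 0.09, 0.10]
print(aic_mdl(eigs, n_snapshots=200))  # both criteria detect 3 sources
```

The MDL penalty grows with log of the snapshot count, which is what makes it consistent at high SNR, while AIC's fixed penalty tends to do better at low SNR, as the abstract notes.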
Pal, Pinaki; Valorani, Mauro; Arias, Paul G.; Im, Hong G.; Wooldridge, Margaret S.; Ciottoli, Pietro P.; Galassi, Riccardo M.
2016-01-01
) was applied to characterize the auto-ignition phenomena. All results supported that the observed ignition behaviors were consistent with the expected ignition regimes predicted by the theory of the regime diagram. This work provides new high-fidelity data
DEFF Research Database (Denmark)
Nielsen, Max
2006-01-01
Supply in fisheries is traditionally known for its backward bending nature, owing to externalities in production. Such a supply regime, however, exists only for pure open access fisheries. Since most fisheries worldwide are neither pure open access nor optimally managed, but rather between the extremes, the traditional understanding of supply regimes in fisheries needs modification. This paper identifies, through a case study of the East Baltic cod fishery, supply regimes in fisheries, taking alternative fisheries management schemes and mesh size limitations into account. An age-structured Beverton-Holt based bio-economic supply model with mesh sizes is developed. It is found that in the presence of realistic management schemes, the supply curves are close to vertical in the relevant range. Also, the supply curve under open access with mesh size limitations is almost vertical in the relevant range, owing to constant...
Development of a new loss allocation method for a hybrid electricity market using graph theory
International Nuclear Information System (INIS)
Lim, Valerie S.C.; McDonald, John D.F.; Saha, Tapan K.
2009-01-01
This paper introduces a new method for allocating losses in a power system using a loop-based representation of system behaviour. Using the new method, network behaviour is formulated as a series of presumed power transfers directly between market participants. In contrast to many existing loss allocation methods, this makes it easier to justify the resulting loss distribution. In addition to circumventing the problems of non-unique loss allocations, a formalised process of loop identification, using graph theory concepts, is introduced. The proposed method is applied to both the IEEE 14-bus system and a modified CIGRE Nordic 32-bus system. The results provide a demonstration of the capability of the proposed method to allocate losses in the hybrid market, and demonstrate the approach's capacity to link the technical performance of the network to market instruments. (author)
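Loop identification of the kind the paper formalizes can be sketched with a standard graph-theory construction: build a spanning tree of the network, and then every non-tree edge (chord) closes exactly one fundamental loop. The toy 4-bus network below is illustrative, not the paper's method or test system:

```python
def fundamental_loops(nodes, edges):
    """Fundamental loops of a connected undirected network: DFS spanning
    tree, then each chord closes one loop through the tree paths."""
    adj = {n: [] for n in nodes}
    for u, v in edges:
        adj[u].append(v); adj[v].append(u)
    parent = {nodes[0]: None}
    tree, stack = set(), [nodes[0]]
    while stack:
        u = stack.pop()
        for v in adj[u]:
            if v not in parent:
                parent[v] = u
                tree.add(frozenset((u, v)))
                stack.append(v)
    def path_to_root(n):
        out = []
        while n is not None:
            out.append(n); n = parent[n]
        return out
    loops = []
    for e in set(map(frozenset, edges)) - tree:
        u, v = tuple(e)
        pu, pv = path_to_root(u), path_to_root(v)
        common = next(x for x in pu if x in pv)
        loops.append(pu[:pu.index(common) + 1] + pv[:pv.index(common)][::-1])
    return loops

# 4-bus example: ring 1-2-3-4 plus a diagonal 1-3 gives E - N + 1 = 2 loops.
nodes = [1, 2, 3, 4]
edges = [(1, 2), (2, 3), (3, 4), (4, 1), (1, 3)]
print(len(fundamental_loops(nodes, edges)))
```

Once the loops are enumerated, per-loop power transfers between participants can be defined over them, which is the step the paper's allocation scheme builds on.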
Pulsar timing arrays and gravity tests in the radiative regime
Lee, K. J.
2013-11-01
In this paper, we focus on testing gravity theories in the radiative regime using pulsar timing array observations. After reviewing current techniques to measure the dispersion and alternative polarizations of gravitational waves, we extend the framework to the most general situations, where combinations of a massive graviton and alternative polarization modes are considered. The atlas of the Hellings-Downs functions is completed by new calculations for these dispersive alternative polarization modes. We find that each mode and corresponding graviton mass introduce characteristic features in the Hellings-Downs function. Thus, in principle, we can not only detect each polarization mode and measure the corresponding graviton mass, but also discriminate between the different scenarios. In this way, we can test gravity theories in the radiative regime in a generalized fashion; such a method is a direct experiment in which one can address the gauge symmetry of gravity theories in their linearized limits. Although current pulsar timing arrays still lack enough stable pulsars and sensitivity for such tests, we expect that future telescopes with larger collecting areas could make such experiments feasible.
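For reference, the baseline that the paper's atlas generalizes is the classic Hellings-Downs correlation for the ordinary transverse-traceless modes. The sketch below evaluates the standard textbook formula (with the common normalization of 1/2 at zero separation for distinct pulsars); it is not the paper's new dispersive calculations:

```python
import math

def hellings_downs(theta):
    """Hellings-Downs correlation for the transverse-traceless (GR)
    polarizations, for two distinct pulsars separated by angle theta (rad)."""
    x = (1.0 - math.cos(theta)) / 2.0
    if x == 0.0:
        return 0.5
    return 1.5 * x * math.log(x) - 0.25 * x + 0.5

# The curve starts at 0.5, dips negative (minimum near ~82 degrees),
# and recovers to 0.25 at 180 degrees.
for deg in (0, 60, 90, 120, 180):
    print(deg, round(hellings_downs(math.radians(deg)), 4))
```

Alternative polarizations and a nonzero graviton mass each deform this curve in characteristic ways, which is what lets a pulsar timing array discriminate between scenarios.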
Phase Coordinate System and p-q Theory Based Methods in Active Filtering Implementation
Directory of Open Access Journals (Sweden)
POPESCU, M.
2013-02-01
This paper addresses the implementation of the main theories of power in the compensating-current generation stage of a three-phase, three-wire shunt active power system. The system control is achieved through a dSPACE 1103 platform programmed in the Matlab/Simulink environment. Four calculation blocks included in a specifically designed Simulink library are successively implemented in the experimental setup. The first two approaches, based on the Fryze-Buchholz-Depenbrock theory and the generalized instantaneous reactive power theory, make use of phase quantities without any transformation of the coordinate system and provide the basis for calculating the compensating current when total compensation is desired. The others are based on p-q theory concepts and require the direct and inverse transformations to and from the two-phase stationary reference frame; they are used for total compensation and for partial compensation of the current harmonic distortion. The experimental results, in terms of active filtering performance, validate the implementation of the control strategies and provide arguments for choosing the most appropriate method.
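A minimal numerical sketch of the p-q theory step underlying the last two blocks: Clarke-transform the phase voltages and currents, then form the instantaneous real power p and imaginary power q. This is illustrative only (not the dSPACE implementation), the load values are made up, and sign conventions for q vary in the literature:

```python
import math

SQ23 = math.sqrt(2.0 / 3.0)

def clarke(a, b, c):
    """Power-invariant Clarke transform of three phase quantities."""
    alpha = SQ23 * (a - 0.5 * b - 0.5 * c)
    beta = SQ23 * (math.sqrt(3.0) / 2.0) * (b - c)
    return alpha, beta

def pq(v, i):
    """Instantaneous real (p) and imaginary (q) power of the p-q theory."""
    va, vb = clarke(*v)
    ia, ib = clarke(*i)
    p = va * ia + vb * ib
    q = vb * ia - va * ib   # one common sign convention
    return p, q

# Balanced sinusoidal source feeding a purely resistive load: p should be
# constant and q zero at every instant, so no compensating current is needed.
V, R = 230.0 * math.sqrt(2.0), 10.0
for wt in (0.0, 1.0, 2.5):
    v = tuple(V * math.cos(wt - k * 2.0 * math.pi / 3.0) for k in range(3))
    i = tuple(x / R for x in v)
    p, q = pq(v, i)
    print(round(p, 3), round(q, 3))
```

For total compensation, the shunt filter would supply the oscillating part of p and all of q, leaving the source to deliver only the mean real power.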
An analytical transport theory method for calculating flux distribution in slab cells
International Nuclear Information System (INIS)
Abdel Krim, M.S.
2001-01-01
A transport theory method for calculating flux distributions in slab fuel cells is described. Two coupled integral equations for the flux in fuel and moderator are obtained, assuming partial reflection at the moderator's external boundaries. The Galerkin technique is used to solve these equations. Numerical results for the average fluxes in fuel and moderator and for the disadvantage factor are given. Comparison with exact numerical methods, that is, for total reflection at the moderator's outer boundaries, shows that the Galerkin technique gives accurate results for the disadvantage factor and the average fluxes. (orig.)
Optimization of Candu fuel management with gradient methods using generalized perturbation theory
International Nuclear Information System (INIS)
Chambon, R.; Varin, E.; Rozon, D.
2005-01-01
CANDU fuel management problems are solved using time-average representation of the core. Optimization problems based on this representation have been defined in the early nineties. The mathematical programming using the generalized perturbation theory (GPT) that was developed has been implemented in the reactor code DONJON. The use of the augmented Lagrangian (AL) method is presented and evaluated in this paper. This approach is mandatory for new constraint problems. Combined with the classical Lemke method, it proves to be very efficient to reach optimal solution in a very limited number of iterations. (authors)
Variational methods for problems from plasticity theory and for generalized Newtonian fluids
Fuchs, Martin
2000-01-01
Variational methods are applied to prove the existence of weak solutions for boundary value problems from the deformation theory of plasticity as well as for the slow, steady state flow of generalized Newtonian fluids including the Bingham and Prandtl-Eyring model. For perfect plasticity the role of the stress tensor is emphasized by studying the dual variational problem in appropriate function spaces. The main results describe the analytic properties of weak solutions, e.g. differentiability of velocity fields and continuity of stresses. The monograph addresses researchers and graduate students interested in applications of variational and PDE methods in the mechanics of solids and fluids.
Flenady, Tracy; Dwyer, Trudy; Applegarth, Judith
2017-09-01
Abnormal respiratory rates are one of the first indicators of clinical deterioration in emergency department (ED) patients. Despite the importance of respiratory rate observations, this vital sign is often inaccurately recorded on ED observation charts, compromising patient safety. Concurrently, there is a paucity of research reporting why this phenomenon occurs. To develop a substantive theory explaining ED registered nurses' reasoning when they miss or misreport respiratory rate observations, this research project employed a classic grounded theory analysis of qualitative data. Seventy-nine registered nurses currently working in EDs within Australia participated. Data collected included detailed responses from individual interviews and open-ended responses from an online questionnaire. Classic grounded theory (CGT) research methods were utilised; therefore, coding was central to the abstraction of data and its reintegration as theory. Constant comparison, synonymous with CGT methods, was employed to code data. This approach facilitated the identification of the main concern of the participants and aided in the generation of theory explaining how the participants processed this issue. The main concern identified is that ED registered nurses do not believe that collecting an accurate respiratory rate for ALL patients at EVERY round of observations is a requirement, yet organisational requirements often dictate that a value for the respiratory rate be included each time vital signs are collected. The theory 'Rationalising Transgression' explains how participants continually resolve this problem. The study found that despite feeling professionally conflicted, nurses often erroneously record respiratory rate observations, and then rationalise this behaviour by employing strategies that adjust the significance of the organisational requirement. These strategies include Compensating, when nurses believe they are compensating for errant behaviour by enhancing the patient's outcome
Hu, Ping; Liu, Li-zhong; Zhu, Yi-guo
2013-01-01
Over the last 15 years, the application of innovative steel concepts in the automotive industry has increased steadily. Numerical simulation technology for the hot forming of high-strength steel allows engineers to modify the formability of hot-forming steels and to optimize die design schemes. Theories, Methods and Numerical Technology of Sheet Metal Cold and Hot Forming focuses on hot and cold forming theories, numerical methods, and related simulation and experimental techniques for high-strength steel forming and die design in the automobile industry. The book introduces the general theories of cold forming, then expands upon advanced hot forming theories and simulation methods, including: • the forming process, • constitutive equations, • hot boundary constraint treatment, and • hot forming equipment and experiments. Various calculation methods of cold and hot forming, based on the authors’ experience in commercial CAE software f...
Flocking regimes in a simple lattice model.
Raymond, J R; Evans, M R
2006-03-01
We study a one-dimensional lattice flocking model incorporating all three of the flocking criteria proposed by Reynolds [Computer Graphics 21, 4 (1987)]: alignment, centering, and separation. The model generalizes that introduced by O. J. O'Loan and M. R. Evans [J. Phys. A. 32, L99 (1999)]. We motivate the dynamical rules by microscopic sampling considerations. The model exhibits various flocking regimes: the alternating flock, the homogeneous flock, and dipole structures. We investigate these regimes numerically and within a continuum mean-field theory.
Chapter 29: Unproved and controversial methods and theories in allergy-immunology.
Shah, Rachna; Greenberger, Paul A
2012-01-01
Unproved methods and controversial theories in the diagnosis and management of allergy-immunology are those that lack scientific credibility. Some definitions are provided for perspective because in chronic medical conditions, frequently, nonscientifically based treatments are developed that can have a very positive psychological effect on the patients in the absence of objective physical benefit. Standard practice can be described as "the methods of diagnosis and treatment used by reputable physicians in a particular subspecialty or primary care practice" with the understanding that diagnosis and treatment options are consistent with established mechanisms of conditions or diseases.(3) Conventional medicine (Western or allopathic medicine) is that which is practiced by the majority of MDs, DOs, psychologists, RNs, and physical therapists. Complementary medicine uses the practice of conventional medicine with complementary and alternative medicine such as using acupuncture for pain relief in addition to opioids. Alternative medicine implies use of complementary and alternative practices in place of conventional medicine. Unproved and controversial methods and theories do not have supporting data, validation, and sufficient scientific scrutiny, and they should not be used in the practice of allergy-immunology. Some examples of unproven theories about allergic immunologic conditions include allergic toxemia, idiopathic environmental intolerance, association with childhood vaccinations, and adrenal fatigue. Unconventional (unproved) diagnostic methods for allergic-immunologic conditions include cytotoxic tests, provocation-neutralization, electrodermal diagnosis, applied kinesiology assessments, and serum IgG or IgG(4) testing. Unproven treatments and intervention methods for allergic-immunologic conditions include acupuncture, homeopathy ("likes cure likes"), halotherapy, and autologous urine injections.
Shegog, Ross; Bartholomew, L Kay; Gold, Robert S; Pierrel, Elaine; Parcel, Guy S; Sockrider, Marianna M; Czyzewski, Danita I; Fernandez, Maria E; Berlin, Nina J; Abramson, Stuart
2006-01-01
The value of translating behavioral theories, models, and strategies to guide the development and structure of computer-based health applications is well recognized, although translation remains a continuing challenge for program developers. A stepped approach to translating behavioral theory in the design of simulations to teach chronic disease management to children is described. This includes the translation steps to: 1) define target behaviors and their determinants, 2) identify theoretical methods to optimize behavioral change, and 3) choose educational strategies to effectively apply these methods and combine them into a cohesive computer-based simulation for health education. Asthma is used to exemplify a chronic health management problem, and a computer-based asthma management simulation (Watch, Discover, Think and Act) that has been evaluated and shown to effect asthma self-management in children is used to exemplify the application of theory to practice. Impact and outcome evaluation studies have indicated the effectiveness of these steps in providing increased rigor and accountability, suggesting their utility for educators and developers seeking to apply simulations to enhance self-management behaviors in patients.
The method of finite-gap integration in classical and semi-classical string theory
International Nuclear Information System (INIS)
Vicedo, Benoit
2011-01-01
In view of proving the AdS/CFT correspondence one day, a deeper understanding of string theory on certain curved backgrounds such as AdS_5 x S^5 is required. In this review we make a step in this direction by focusing on R x S^3. It was discovered in recent years that string theory on AdS_5 x S^5 admits a Lax formulation. However, the complete statement of integrability requires not only the existence of a Lax formulation but also that the resulting integrals of motion are in pairwise involution. This idea is central to the first part of this review. Exploiting this integrability we apply algebro-geometric methods to string theory on R x S^3 and obtain the general finite-gap solution. The construction is based on an invariant algebraic curve previously found in the AdS_5 x S^5 case. However, encoding the dynamics of the solution requires specification of additional marked points. By restricting the symplectic structure of the string to these algebro-geometric data we derive the action-angle variables of the system. We then perform a first-principle semiclassical quantization of string theory on R x S^3 as a toy model for strings on AdS_5 x S^5. The result is exactly what one expects from the dual gauge theory perspective, namely the underlying algebraic curve discretizes in a natural way. We also derive a general formula for the fluctuation energies around the generic finite-gap solution. The ideas used can be generalized to AdS_5 x S^5. (review)
Crane Safety Assessment Method Based on Entropy and Cumulative Prospect Theory
Directory of Open Access Journals (Sweden)
Aihua Li
2017-01-01
Assessing the safety status of cranes is an important problem. To overcome the inaccuracies and misjudgments in such assessments, this work describes a safety assessment method for cranes that combines entropy and cumulative prospect theory. Firstly, the proposed method transforms the set of evaluation indices into an evaluation vector. Secondly, a decision matrix is constructed from the evaluation vectors and evaluation standards, and an entropy-based technique is applied to calculate the index weights. Thirdly, positive and negative prospect value matrices are established from reference points based on the positive and negative ideal solutions. This enables the crane safety grade to be determined according to the ranked comprehensive prospect values. Finally, the safety status of four general overhead traveling crane samples is evaluated to verify the rationality and feasibility of the proposed method. The results demonstrate that the method described in this paper can precisely and reasonably reflect the safety status of a crane.
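The entropy-based weighting step in such methods is standard and can be sketched directly. This is a generic implementation of the entropy weight method; the matrix values are illustrative, not crane data:

```python
import numpy as np

def entropy_weights(X):
    """Entropy-based objective weights for an m x n decision matrix
    (m alternatives, n evaluation indices, all entries positive)."""
    X = np.asarray(X, dtype=float)
    m, n = X.shape
    # Column-wise proportion of each alternative under each index
    P = X / X.sum(axis=0)
    # Shannon entropy per index, normalized to [0, 1] by log(m)
    with np.errstate(divide="ignore", invalid="ignore"):
        logs = np.where(P > 0, np.log(P), 0.0)
    e = -(P * logs).sum(axis=0) / np.log(m)
    # Indices with lower entropy discriminate more and get larger weight
    d = 1.0 - e
    return d / d.sum()

# Illustrative 3-alternative, 3-index matrix; the third index is constant
# across alternatives, so it carries no information and gets weight 0.
w = entropy_weights([[0.9, 0.2, 0.5],
                     [0.7, 0.8, 0.5],
                     [0.8, 0.5, 0.5]])
```

A constant column has maximal (normalized) entropy 1, hence zero weight; the most spread-out column receives the largest weight.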
Scattering theory on the lattice and with a Monte Carlo method
International Nuclear Information System (INIS)
Kroeger, H.; Moriarty, K.J.M.; Potvin, J.
1990-01-01
We present an alternative time-dependent method of calculating the S matrix in quantum systems governed by a Hamiltonian. In the first step one constructs a new Hamiltonian that describes the physics of scattering at energy E with a reduced number of degrees of freedom. Its matrix elements are computed with a Monte Carlo projector method. In the second step the scattering matrix is computed algebraically via diagonalization and exponentiation of the new Hamiltonian. Although we have in mind applications in many-body systems and quantum field theory, the method should be applicable and useful in such diverse areas as atomic and molecular physics, nuclear physics, high-energy physics and solid-state physics. As an illustration of the method, we compute s-wave scattering of two nucleons in a nonrelativistic potential model (Yamaguchi potential), for which the S matrix is known exactly.
Grey situation group decision-making method based on prospect theory.
Zhang, Na; Fang, Zhigeng; Liu, Xiaqing
2014-01-01
This paper puts forward a grey situation group decision-making method based on prospect theory, addressing grey situation group decision-making problems in which decisions are made by multiple decision experts who have risk preferences. The method takes the positive and negative ideal situation distances as reference points, defines positive and negative prospect value functions, and introduces the decision experts' risk preferences into grey situation decision-making so that the final decision is more in line with the experts' psychological behavior. Based on the TOPSIS method, this paper determines the weight of each decision expert, sets up a comprehensive prospect value matrix for the experts' evaluations, and finally determines the optimal situation. At last, this paper verifies the effectiveness and feasibility of the method by means of a specific example.
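A minimal sketch of such reference-point-based ranking, assuming the conventional Tversky-Kahneman parameter values; the paper's exact value functions and grey-number handling may differ:

```python
import numpy as np

# Conventional Tversky-Kahneman parameters (assumed, not from the paper):
# concavity for gains, convexity for losses, loss-aversion coefficient.
ALPHA, BETA, LAMBDA = 0.88, 0.88, 2.25

def prospect_value(delta):
    """Prospect value of a deviation from a reference point: gains
    (delta >= 0) are valued concavely, losses are amplified by LAMBDA."""
    delta = np.asarray(delta, dtype=float)
    mag = np.abs(delta)
    return np.where(delta >= 0, mag**ALPHA, -LAMBDA * mag**BETA)

def comprehensive_prospect(effects, pos_ideal, neg_ideal):
    """Distance to the negative ideal is treated as a gain, distance to
    the positive ideal as a loss; their sum ranks each situation."""
    effects = np.asarray(effects, dtype=float)
    gains = prospect_value(np.abs(effects - neg_ideal))
    losses = prospect_value(-np.abs(pos_ideal - effects))
    return gains + losses

# Three hypothetical situation effects on [0, 1]; higher is better.
scores = comprehensive_prospect([0.4, 0.7, 0.9], pos_ideal=1.0, neg_ideal=0.0)
```

Ranking by the comprehensive prospect value preserves the ordering of the underlying effects here, while loss aversion penalizes situations far from the positive ideal more strongly than plain distance would.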
2001-01-01
There are at present at least three international regimes of maritime cargo liability in force in different countries of the world - the original Hague rules (1924), the updated version known as the Hague-Visby rules (1968, further amended 1979), and...
Six, Frédérique; Verhoest, Koen
2017-01-01
Within political and administrative sciences generally, trust as a concept is contested, especially in the field of regulatory governance. This groundbreaking book is the first to systematically explore the role and dynamics of trust within regulatory regimes. Conceptualizing, mapping and analyzing
DEFF Research Database (Denmark)
Abrahamson, Peter
2017-01-01
The paper asks whether East Asian welfare regimes are still productivist and Confucian, and whether they have developed public care policies. The literature is split on the first question but (mostly) confirmative on the second. Care has, to a large but insufficient extent, been rolled out in the region...
On the generalized eigenvalue method for energies and matrix elements in lattice field theory
Energy Technology Data Exchange (ETDEWEB)
Blossier, Benoit [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany)]|[Paris-XI Univ., 91 - Orsay (France). Lab. de Physique Theorique; Morte, Michele della [CERN, Geneva (Switzerland). Physics Dept.]|[Mainz Univ. (Germany). Inst. fuer Kernphysik; Hippel, Georg von; Sommer, Rainer [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany); Mendes, Tereza [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany)]|[Sao Paulo Univ. (Brazil). IFSC
2009-02-15
We discuss the generalized eigenvalue problem for computing energies and matrix elements in lattice gauge theory, including effective theories such as HQET. It is analyzed how the extracted effective energies and matrix elements converge when the time separations are made large. This suggests a particularly efficient application of the method for which we can prove that corrections vanish asymptotically as exp(-(E_{N+1}-E_n) t). The gap E_{N+1}-E_n can be made large by increasing the number N of interpolating fields in the correlation matrix. We also show how excited state matrix elements can be extracted such that contaminations from all other states disappear exponentially in time. As a demonstration we present numerical results for the extraction of ground state and excited B-meson masses and decay constants in static approximation and to order 1/m_b in HQET. (orig.)
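The generalized eigenvalue method can be sketched on synthetic data: build a correlator matrix C_ij(t) from two known states, solve C(t) v = λ(t, t0) C(t0) v, and read effective energies from the eigenvalue decay, since λ_n(t, t0) ∝ exp(-E_n (t - t0)). The toy data below are illustrative, not lattice output:

```python
import numpy as np

def gevp_energies(C, t0, t):
    """Effective energies from the generalized eigenvalue problem
    C(t) v = lambda(t, t0) C(t0) v, using
    E_n^eff = log(lambda_n(t, t0) / lambda_n(t + 1, t0))."""
    def gen_eigvals(tt):
        lam = np.linalg.eigvals(np.linalg.solve(C[t0], C[tt]))
        return np.sort(lam.real)[::-1]          # largest = ground state
    return np.log(gen_eigvals(t) / gen_eigvals(t + 1))

# Synthetic 2x2 correlator matrix built from two states:
# C_ij(t) = sum_n psi_i^n psi_j^n exp(-E_n t)
E = np.array([0.5, 1.2])                        # exact energies (toy values)
psi = np.array([[1.0, 0.4],                     # operator overlaps with states
                [0.3, 1.0]])
C = [sum(np.outer(psi[:, n], psi[:, n]) * np.exp(-E[n] * t) for n in range(2))
     for t in range(10)]

energies = gevp_energies(C, t0=2, t=5)
```

With exactly two states and a 2x2 correlation matrix the GEVP is exact, so both input energies are recovered; in a real calculation the gap E_{N+1}-E_n controls how fast excited-state contamination dies off.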
Geometric Methods in the Algebraic Theory of Quadratic Forms : Summer School
2004-01-01
The geometric approach to the algebraic theory of quadratic forms is the study of projective quadrics over arbitrary fields. Function fields of quadrics have been central to the proofs of fundamental results since the renewal of the theory by Pfister in the 1960's. Recently, more refined geometric tools have been brought to bear on this topic, such as Chow groups and motives, and have produced remarkable advances on a number of outstanding problems. Several aspects of these new methods are addressed in this volume, which includes - an introduction to motives of quadrics by Alexander Vishik, with various applications, notably to the splitting patterns of quadratic forms under base field extensions; - papers by Oleg Izhboldin and Nikita Karpenko on Chow groups of quadrics and their stable birational equivalence, with application to the construction of fields which carry anisotropic quadratic forms of dimension 9, but none of higher dimension; - a contribution in French by Bruno Kahn which lays out a general fra...
A new method for the design of slot antenna arrays: Theory and experiment
Clauzier, Sebastien
2016-04-10
The present paper proposes and validates a new general design methodology that can be used to automatically find proper positions and orientations of waveguide-based radiating slots capable of realizing any given radiation beam profile. The new technique combines basic radiation theory and waveguide propagation theory in a novel analytical model that allows the prediction of the radiation characteristics of generic slots without the need for a full-wave numerical solution. The analytical model is then used to implement a low-cost objective function within a global optimization scheme (here, a genetic algorithm). The algorithm is then deployed to find optimum positions and orientations of clusters of radiating slots cut into the waveguide surface such that any desired beam pattern can be obtained. The method is verified using both full-wave numerical solution and experiment.
Teaching Theory in Occupational Therapy Using a Cooperative Learning: A Mixed-Methods Study.
Howe, Tsu-Hsin; Sheu, Ching-Fan; Hinojosa, Jim
2018-01-01
Cooperative learning provides an important vehicle for active learning, as knowledge is socially constructed through interaction with others. This study investigated the effect of cooperative learning on occupational therapy (OT) theory knowledge attainment in professional-level OT students in a classroom environment. Using a pre- and post-test group design, 24 first-year, entry-level OT students participated while taking a theory course in their second semester of the program. Cooperative learning methods were implemented via in-class group assignments. The students were asked to complete two questionnaires regarding their attitudes toward group environments and their perception toward group learning before and after the semester. MANCOVA was used to examine changes in attitudes and perceived learning among groups. Students' summary sheets for each in-class assignment and course evaluations were collected for content analysis. Results indicated significant changes in students' attitude toward working in small groups regardless of their prior group experience.
Energy Technology Data Exchange (ETDEWEB)
Ridolfi, E.; Napolitano, F., E-mail: francesco.napolitano@uniroma1.it [Sapienza Università di Roma, Dipartimento di Ingegneria Civile, Edile e Ambientale (Italy); Alfonso, L. [Hydroinformatics Chair Group, UNESCO-IHE, Delft (Netherlands); Di Baldassarre, G. [Department of Earth Sciences, Program for Air, Water and Landscape Sciences, Uppsala University (Sweden)
2016-06-08
The description of river topography has a crucial role in accurate one-dimensional (1D) hydraulic modelling. Specifically, cross-sectional data define the riverbed elevation, the flood-prone area, and thus, the hydraulic behavior of the river. Here, the problem of the optimal cross-sectional spacing is solved through an information theory-based concept. The optimal subset of locations is the one with the maximum information content and the minimum amount of redundancy. The original contribution is the introduction of a methodology to sample river cross sections in the presence of bridges. The approach is tested on the Grosseto River (IT) and is compared to existing guidelines. The results show that the information theory-based approach can support traditional methods to estimate rivers’ cross-sectional spacing.
International Nuclear Information System (INIS)
Ridolfi, E.; Napolitano, F.; Alfonso, L.; Di Baldassarre, G.
2016-01-01
The description of river topography has a crucial role in accurate one-dimensional (1D) hydraulic modelling. Specifically, cross-sectional data define the riverbed elevation, the flood-prone area, and thus, the hydraulic behavior of the river. Here, the problem of the optimal cross-sectional spacing is solved through an information theory-based concept. The optimal subset of locations is the one with the maximum information content and the minimum amount of redundancy. The original contribution is the introduction of a methodology to sample river cross sections in the presence of bridges. The approach is tested on the Grosseto River (IT) and is compared to existing guidelines. The results show that the information theory-based approach can support traditional methods to estimate rivers' cross-sectional spacing.
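The max-information, min-redundancy idea can be sketched with histogram-based entropy estimates over water-level series at candidate locations. This is a simplified greedy selection under assumed discrete entropy estimators, not the authors' exact procedure:

```python
import numpy as np

def entropy(x, bins=10):
    """Shannon entropy (nats) of a series, from a histogram estimate."""
    p, _ = np.histogram(x, bins=bins)
    p = p[p > 0] / p.sum()
    return -(p * np.log(p)).sum()

def joint_entropy(x, y, bins=10):
    p, _, _ = np.histogram2d(x, y, bins=bins)
    p = p[p > 0] / p.sum()
    return -(p * np.log(p)).sum()

def mutual_info(x, y, bins=10):
    return entropy(x, bins) + entropy(y, bins) - joint_entropy(x, y, bins)

def select_sections(series, k):
    """Greedy subset selection: start from the most informative site,
    then repeatedly add the site with the best information-minus-
    redundancy trade-off against the sites already chosen."""
    n = len(series)
    chosen = [max(range(n), key=lambda i: entropy(series[i]))]
    while len(chosen) < k:
        rest = [i for i in range(n) if i not in chosen]
        def score(i):
            redundancy = max(mutual_info(series[i], series[j]) for j in chosen)
            return entropy(series[i]) - redundancy
        chosen.append(max(rest, key=score))
    return chosen

# Synthetic example: sites 0 and 1 are nearly identical (redundant),
# site 2 is independent, so a 2-site subset should span {2} plus one of {0, 1}.
rng = np.random.default_rng(0)
base = rng.normal(size=400)
series = [base, base + 0.01 * rng.normal(size=400), rng.normal(size=400)]
picked = select_sections(series, 2)
```

The redundancy penalty is what prevents the greedy pass from choosing two nearly duplicated gauging locations.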
DEFF Research Database (Denmark)
Hedegård, Erik D.; Olsen, Jógvan Magnus Haugaard; Knecht, Stefan
2015-01-01
We present here the coupling of a polarizable embedding (PE) model to the recently developed multiconfiguration short-range density functional theory method (MC-srDFT), which can treat multiconfigurational systems with a simultaneous account for dynamical and static correlation effects. PE-MC-srDFT is designed to combine efficient treatment of complicated electronic structures with inclusion of effects from the surrounding environment. The environmental effects encompass classical electrostatic interactions as well as polarization of both the quantum region and the environment. Using response theory... To demonstrate the capabilities of PE-MC-srDFT, we also investigated the retinylidene Schiff base chromophore embedded in the channelrhodopsin protein. While using a much more compact reference wave function in terms of active space, our PE-MC-srDFT approach yields excitation energies comparable in quality...
On the generalized eigenvalue method for energies and matrix elements in lattice field theory
International Nuclear Information System (INIS)
Blossier, Benoit; Mendes, Tereza; Sao Paulo Univ.
2009-02-01
We discuss the generalized eigenvalue problem for computing energies and matrix elements in lattice gauge theory, including effective theories such as HQET. It is analyzed how the extracted effective energies and matrix elements converge when the time separations are made large. This suggests a particularly efficient application of the method for which we can prove that corrections vanish asymptotically as exp(-(E_{N+1}-E_n) t). The gap E_{N+1}-E_n can be made large by increasing the number N of interpolating fields in the correlation matrix. We also show how excited state matrix elements can be extracted such that contaminations from all other states disappear exponentially in time. As a demonstration we present numerical results for the extraction of ground state and excited B-meson masses and decay constants in static approximation and to order 1/m_b in HQET. (orig.)
A Dynamic Resource Scheduling Method Based on Fuzzy Control Theory in Cloud Environment
Directory of Open Access Journals (Sweden)
Zhijia Chen
2015-01-01
The resources in a cloud environment have features such as large scale, diversity, and heterogeneity. Moreover, user requirements for cloud computing resources are commonly characterized by uncertainty and imprecision. Hence, to improve the quality of cloud computing services, not only should traditional criteria such as cost and bandwidth be satisfied, but particular emphasis should also be placed on extended criteria such as system friendliness. This paper proposes a dynamic resource scheduling method based on fuzzy control theory. Firstly, a resource requirements prediction model is established. Then the relationships between resource availability and resource requirements are derived. Afterwards, fuzzy control theory is adopted to realize a friendly match between user needs and resource availability. Results show that this approach improves resource scheduling efficiency and the quality of service (QoS) of cloud computing.
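The abstract does not specify the controller design; as an illustration only, a minimal single-input Mamdani-style fuzzy controller with made-up triangular membership functions and three rules might map predicted utilization to a scaling adjustment like this:

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def fuzzy_scale_adjustment(utilization):
    """Minimal Mamdani-style controller (hypothetical design): fuzzify
    utilization into low/medium/high, apply three rules, defuzzify by
    centroid. Output: fraction of resources to add (+) or release (-)."""
    # Rule firing strengths from the input membership functions
    low = tri(utilization, -0.4, 0.0, 0.5)
    med = tri(utilization, 0.2, 0.5, 0.8)
    high = tri(utilization, 0.5, 1.0, 1.4)
    # Output universe and consequent fuzzy sets
    y = np.linspace(-1.0, 1.0, 201)
    release = tri(y, -1.5, -1.0, 0.0)    # "release resources"
    hold = tri(y, -0.5, 0.0, 0.5)        # "keep allocation"
    add = tri(y, 0.0, 1.0, 1.5)          # "add resources"
    # Mamdani inference: clip each consequent, aggregate by max
    agg = np.maximum.reduce([np.minimum(low, release),
                             np.minimum(med, hold),
                             np.minimum(high, add)])
    return float((y * agg).sum() / agg.sum())   # centroid defuzzification
```

High utilization yields a positive adjustment (scale out), low utilization a negative one (scale in), and mid-range utilization holds steady; real designs would add more inputs (e.g. predicted demand) and tuned rule bases.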
International Nuclear Information System (INIS)
Rossi, Lubianka Ferrari Russo
2014-01-01
The main goal of this study is to introduce a new method for calculating sensitivity coefficients through the union of the differential method and generalized perturbation theory, the two methods generally used in reactor physics to obtain such quantities. Taken separately, these two methods have issues that make the calculation of sensitivity coefficients slow or computationally exhausting. However, combining them makes it possible to overcome these issues and build a new equation for the sensitivity coefficient. The method introduced in this study was applied to a PWR reactor, where a sensitivity analysis was performed for the production and conversion rate of 239Pu during 120 days (1 cycle) of burnup. The computational code used for both the burnup and sensitivity analyses, CINEW, was developed in this study, and all the results were compared with codes widely used in reactor physics, such as CINDER and SERPENT. The new mathematical method for calculating sensitivity coefficients and the code CINEW provide good numerical agility and also good efficiency and security, since the new method, when compared with traditional ones, provides satisfactory results, even when the other methods use different mathematical approaches. The burnup analysis performed with the code CINEW was compared with the code CINDER, showing acceptable variation, though CINDER presents some computational issues due to the period in which it was built. The originality of this study is the application of such a method to problems involving temporal dependence and, not least, the elaboration of the first national code for burnup and sensitivity analysis. (author)
Convergence of the Light-Front Coupled-Cluster Method in Scalar Yukawa Theory
Usselman, Austin
We use Fock-state expansions and the Light-Front Coupled-Cluster (LFCC) method to study mass eigenvalue problems in quantum field theory. Specifically, we study convergence of the method in scalar Yukawa theory. In this theory, a single charged particle is surrounded by a cloud of neutral particles. The charged particle can create or annihilate neutral particles, causing the n-particle state to depend on the (n+1)- and (n-1)-particle states. The Fock-state expansion leads to an infinite set of coupled equations, where truncation is required. The wave functions for the particle states are expanded in a basis of symmetric polynomials, and a generalized eigenvalue problem is solved for the mass eigenvalue. The mass eigenvalue problem is solved for multiple values of the coupling strength while the number of particle states and the polynomial basis order are increased. Convergence of the mass eigenvalue solutions is then obtained. Three mass ratios between the charged particle and the neutral particles were studied: a massive charged particle, equal masses, and massive neutral particles. Relative probabilities between states can also be explored for a more detailed understanding of the process of convergence with respect to the number of Fock sectors. The reliance on higher-order particle states depended on how large the mass of the charged particle was: the higher the mass of the charged particle, the more the system depended on higher-order particle states. The LFCC method solves this same mass eigenvalue problem using an exponential operator. This exponential operator can be truncated to form a finite system of equations that can be solved using a built-in system solver provided in most computational environments, such as MATLAB and Mathematica. The first approximation in the LFCC method allows for only one particle to be created by the new operator and proved to be not powerful enough to match the Fock-state expansion. The second-order approximation allowed one
Gao, Kai
2015-06-05
The development of reliable methods for upscaling fine-scale models of elastic media has long been an important topic for rock physics and applied seismology. Several effective medium theories have been developed to provide elastic parameters for materials such as finely layered media or randomly oriented or aligned fractures. In such cases, the analytic solutions for upscaled properties can be used for accurate prediction of wave propagation. However, such theories cannot be applied directly to homogenize elastic media with more complex, arbitrary spatial heterogeneity. Therefore, we have proposed a numerical homogenization algorithm based on multiscale finite-element methods for simulating elastic wave propagation in heterogeneous, anisotropic elastic media. Specifically, our method used multiscale basis functions obtained from a local linear elasticity problem with appropriately defined boundary conditions. Homogenized, effective medium parameters were then computed using these basis functions, and the approach applied a numerical discretization that was similar to the rotated staggered-grid finite-difference scheme. Comparisons of the results from our method and from conventional, analytical approaches for finely layered media showed that the homogenization reliably estimated elastic parameters for this simple geometry. Additional tests examined anisotropic models with arbitrary spatial heterogeneity in which the average size of the heterogeneities ranged from several centimeters to several meters, and the ratio between the dominant wavelength and the average size of the arbitrary heterogeneities ranged from 10 to 100. Comparisons to finite-difference simulations proved that the numerical homogenization was equally accurate for these complex cases.
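For the finely layered benchmark mentioned above, the analytic effective medium in the long-wavelength, normal-incidence limit (the 1-D case of Backus averaging) is simple enough to state directly. This is a sketch with illustrative values, not the authors' multiscale finite-element scheme:

```python
import numpy as np

def layered_effective_medium(thickness, modulus, density):
    """Long-wavelength effective properties of a stack of thin layers
    (normal incidence): the effective modulus is the thickness-weighted
    harmonic mean of the layer moduli, the effective density the
    thickness-weighted arithmetic mean."""
    h = np.asarray(thickness, dtype=float)
    M = np.asarray(modulus, dtype=float)
    rho = np.asarray(density, dtype=float)
    f = h / h.sum()                                  # volume fractions
    M_eff = 1.0 / (f / M).sum()                      # harmonic average
    rho_eff = (f * rho).sum()                        # arithmetic average
    return M_eff, rho_eff, np.sqrt(M_eff / rho_eff)  # effective velocity

# Two equal-thickness layers with a 4:1 modulus contrast (toy values, SI units)
M_eff, rho_eff, v_eff = layered_effective_medium(
    thickness=[0.5, 0.5], modulus=[9e9, 36e9], density=[2000.0, 2500.0])
```

The harmonic average is always below the arithmetic one, which is why naive volume averaging of moduli overestimates the stiffness of a layered stack; numerical homogenization schemes like the one above are needed precisely when the heterogeneity has no such closed-form average.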
Using Grounded Theory Method to Capture and Analyze Health Care Experiences
Foley, Geraldine; Timonen, Virpi
2015-01-01
Objective: Grounded theory (GT) is an established qualitative research method, but few papers have encapsulated the benefits, limits, and basic tenets of doing GT research on user and provider experiences of health care services. GT can be used to guide the entire study method, or it can be applied at the data analysis stage only. Methods: We summarize key components of GT and common GT procedures used by qualitative researchers in health care research. We draw on our experience of conducting a GT study on amyotrophic lateral sclerosis patients’ experiences of health care services. Findings: We discuss why some approaches in GT research may work better than others, particularly when the focus of study is hard-to-reach population groups. We highlight the flexibility of procedures in GT to build theory about how people engage with health care services. Conclusion: GT enables researchers to capture and understand health care experiences. GT methods are particularly valuable when the topic of interest has not previously been studied. GT can be applied to bring structure and rigor to the analysis of qualitative data. PMID:25523315
Applying Critical Race Theory to Group Model Building Methods to Address Community Violence.
Frerichs, Leah; Lich, Kristen Hassmiller; Funchess, Melanie; Burrell, Marcus; Cerulli, Catherine; Bedell, Precious; White, Ann Marie
2016-01-01
Group model building (GMB) is an approach to building qualitative and quantitative models with stakeholders to learn about the interrelationships among multilevel factors causing complex public health problems over time. Scant literature exists on adapting this method to address public health issues that involve racial dynamics. This study's objectives are to (1) introduce GMB methods, (2) present a framework for adapting GMB to enhance cultural responsiveness, and (3) describe outcomes of adapting GMB to incorporate differences in racial socialization during a community project seeking to understand key determinants of community violence transmission. An academic-community partnership planned a 1-day session with diverse stakeholders to explore the issue of violence using GMB. We documented key questions inspired by critical race theory (CRT) and adaptations to established GMB "scripts" (i.e., published facilitation instructions). The theory's emphasis on experiential knowledge led to a narrative-based facilitation guide from which participants created causal loop diagrams. These early diagrams depict how violence is transmitted and how communities respond, based on participants' lived experiences and mental models of causation that grew to include factors associated with race. Participants found these methods useful for advancing difficult discussion. The resulting diagrams can be tested and expanded in future research, and will form the foundation for collaborative identification of solutions to build community resilience. GMB is a promising strategy that community partnerships should consider when addressing complex health issues; our experience adapting methods based on CRT is promising in its acceptability and early system insights.
Differential regularization and renormalization: a new method of calculation in quantum field theory
International Nuclear Information System (INIS)
Freedman, D.Z.; Johnson, K.; Latorre, J.I.
1992-01-01
Most primitively divergent Feynman diagrams are well defined in x-space but too singular at short distances for transformation to p-space. A new method of regularization is developed in which singular functions are written as derivatives of less singular functions which contain a logarithmic mass scale. The Fourier transform is then defined by formal integration by parts. The procedure is extended to graphs with divergent subgraphs. No explicit cutoff or counterterms are required, and the method automatically delivers renormalized amplitudes which satisfy Callan-Symanzik equations. These features are thoroughly explored in massless φ⁴ theory through 3-loop order, and the method yields explicit functional forms for all amplitudes with less difficulty than conventional methods which use dimensional regularization in p-space. The procedure also appears to be compatible with gauge invariance and the chiral structure of the standard model. This aspect is tested in extensive 1-loop calculations which include the Ward identity in quantum electrodynamics, the chiral anomaly, and the background field algorithm in non-abelian gauge theories. (orig.)
Neuhauser, Linda; Kreps, Gary L
2014-12-01
Traditional communication theory and research methods provide valuable guidance about designing and evaluating health communication programs. However, efforts to use health communication programs to educate, motivate, and support people to adopt healthy behaviors often fail to meet the desired goals. One reason for this failure is that health promotion issues are complex, changeable, and highly related to the specific needs and contexts of the intended audiences. It is a daunting challenge to effectively influence health behaviors, particularly culturally learned and reinforced behaviors concerning lifestyle factors related to diet, exercise, and substance (such as alcohol and tobacco) use. Too often, program development and evaluation are not adequately linked to provide rapid feedback to health communication program developers so that important revisions can be made to design the most relevant and personally motivating health communication programs for specific audiences. Design science theory and methods commonly used in engineering, computer science, and other fields can address such program and evaluation weaknesses. Design science researchers study human-created programs using tightly connected build-and-evaluate loops in which they use intensive participatory methods to understand problems and develop solutions concurrently and throughout the duration of the program. Such thinking and strategies are especially relevant to address complex health communication issues. In this article, the authors explore the history, scientific foundation, methods, and applications of design science and its potential to enhance health communication programs and their evaluation.
International Nuclear Information System (INIS)
Potard, C.
1975-01-01
A new method was developed to study and control solidification processes by means of differential dilatometry. A mathematical analysis of this method is made and first results are presented. A relation is established between the variations of the volume of the sample and those of the solid obtained. The gravimetric method used for volume measurement is also analyzed mathematically. These results are applied to two solidification experiments on InSb, in strongly perturbed and controlled cooling regimes. The limits of the method are specified, and further developments towards phase transformation studies and control are envisaged.
Directory of Open Access Journals (Sweden)
Aldo Merlino
2007-01-01
Full Text Available Qualitative methods present a wide spectrum of application possibilities as well as opportunities for combining qualitative and quantitative methods. In the social sciences, fruitful theoretical discussions and a great deal of empirical research have taken place. This article introduces an empirical investigation which demonstrates the logic of combining methodologies as well as the collection and interpretation, both sequential and simultaneous, of qualitative and quantitative data. Specifically, the investigation process is described, beginning with a grounded theory methodology and its combination with the techniques of structural semiotics discourse analysis to generate—in a first phase—an instrument for quantitative measuring and to understand—in a second phase—clusters obtained by quantitative analysis. This work illustrates how qualitative methods allow for the comprehension of the discursive and behavioral elements under study, and how they function as support, making sense of and giving meaning to quantitative data. URN: urn:nbn:de:0114-fqs0701219
An improved method of continuous LOD based on fractal theory in terrain rendering
Lin, Lan; Li, Lijun
2007-11-01
With the improvement of computer graphics hardware capability, 3D terrain rendering algorithms have become a hot topic in real-time visualization. In order to resolve the conflict between rendering speed and rendering realism, this paper presents an improved terrain rendering method that refines the traditional continuous level-of-detail technique using fractal theory. With this method, the program need not repeatedly access memory to obtain terrain models at different resolutions; instead, it obtains the fractal characteristic parameters of each region according to the movement of the viewpoint. Experimental results show that the method preserves the authenticity of the landscape and increases real-time 3D terrain rendering speed.
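The abstract does not give the algorithm itself, but the fractal idea it builds on can be sketched with classic one-dimensional midpoint displacement, where a single roughness parameter (a stand-in for the paper's "fractal characteristic parameters", invented here for illustration) controls how much detail each refinement level adds:

```python
import random

def midpoint_displacement(n_levels, roughness=0.8, seed=42):
    """Generate a 1D fractal terrain profile by midpoint displacement.

    Each refinement level halves every interval and perturbs the new
    midpoint with noise whose amplitude shrinks by 2**(-roughness), so
    `roughness` acts as the fractal parameter controlling detail.
    """
    rng = random.Random(seed)
    heights = [0.0, 0.0]          # endpoints of the profile
    amplitude = 1.0
    for _ in range(n_levels):
        refined = []
        for left, right in zip(heights, heights[1:]):
            mid = (left + right) / 2.0 + rng.uniform(-amplitude, amplitude)
            refined.extend([left, mid])
        refined.append(heights[-1])
        heights = refined
        amplitude *= 2.0 ** (-roughness)
    return heights

profile = midpoint_displacement(6)
print(len(profile))   # 2**6 + 1 = 65 vertices
```

A level-of-detail renderer in this spirit would pick `n_levels` per region from the viewpoint distance instead of storing every resolution in memory.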
The Green Function cellular method and its relation to multiple scattering theory
International Nuclear Information System (INIS)
Butler, W.H.; Zhang, X.G.; Gonis, A.
1992-01-01
This paper investigates techniques for solving the wave equation which are based on the idea of obtaining exact local solutions within each potential cell, which are then joined to form a global solution. The authors derive full-potential multiple scattering theory (MST) from the Lippmann-Schwinger equation and show that it, as well as a closely related cellular method, is a technique of this type. This cellular method appears to have all of the advantages of MST and the added advantage of having a secular matrix with only nearest-neighbor interactions. Since this cellular method is easily linearized, one can rigorously reduce electronic structure calculation to the problem of solving a nearest-neighbor tight-binding problem.
Ranking Journals Using Social Choice Theory Methods: A Novel Approach in Bibliometrics
Energy Technology Data Exchange (ETDEWEB)
Aleskerov, F.T.; Pislyakov, V.; Subochev, A.N.
2016-07-01
We use data on economic, management and political science journals to produce quantitative estimates of (in)consistency of evaluations based on seven popular bibliometric indicators (impact factor, 5-year impact factor, immediacy index, article influence score, h-index, SNIP and SJR). We propose a new approach to aggregating journal rankings: since rank aggregation is a multicriteria decision problem, ordinal ranking methods from social choice theory may solve it. We apply either a direct ranking method based on majority rule (the Copeland rule, the Markovian method) or a sorting procedure based on a tournament solution, such as the uncovered set and the minimal externally stable set. We demonstrate that aggregate rankings reduce the number of contradictions and represent the set of single-indicator-based rankings better than any of the seven rankings themselves. (Author)
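The Copeland rule named above is simple to state: item a beats item b if a majority of the indicator rankings place a above b, and each item is scored by wins minus losses over all pairwise contests. A minimal sketch (journal names and rankings invented for illustration; the paper's actual data and tie-handling are not reproduced):

```python
from itertools import combinations

def copeland_aggregate(rankings):
    """Aggregate several rankings of the same items by the Copeland rule.

    `rankings` is a list of lists, each ordering the items from best to
    worst (one list per bibliometric indicator).  The Copeland score of
    an item is its pairwise wins minus losses under majority rule.
    """
    items = rankings[0]
    score = {item: 0 for item in items}
    for a, b in combinations(items, 2):
        a_wins = sum(r.index(a) < r.index(b) for r in rankings)
        b_wins = len(rankings) - a_wins
        if a_wins > b_wins:
            score[a] += 1
            score[b] -= 1
        elif b_wins > a_wins:
            score[b] += 1
            score[a] -= 1
    return sorted(items, key=lambda x: -score[x])

# Three hypothetical indicator-based rankings of four journals:
rankings = [["J1", "J2", "J3", "J4"],
            ["J2", "J1", "J3", "J4"],
            ["J1", "J3", "J2", "J4"]]
print(copeland_aggregate(rankings))  # → ['J1', 'J2', 'J3', 'J4']
```

Here J1 wins every pairwise majority contest, so it tops the aggregate even though one indicator ranked J2 first.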
Directory of Open Access Journals (Sweden)
Dirk vom Lehn
2006-09-01
Full Text Available There is a curious ignorance of interactionist theory and research in German sociology. Whilst Symbolic Interactionism plays a central role in courses on social theory, such courses often neglect more recent interactionist concepts and studies. Jörg STRÜBING's book introduces some of these concepts and ideas to German sociology by revealing their contribution to science and technology research and to social theory. The book explains in detail the development of interactionism and its contribution to science and technology studies. It is of interest to those studying science and technology research as well as to those interested in social theory. It should be added to reading lists of courses on science and technology studies and should contribute to the wider dissemination of interactionist theories and studies as well as to interactionist science and technology research. URN: urn:nbn:de:0114-fqs0604249
Pala, M. G.; Esseni, D.
2018-03-01
This paper presents the theory, implementation, and application of a quantum transport modeling approach based on the nonequilibrium Green's function formalism and a full-band empirical pseudopotential Hamiltonian. We here propose to employ a hybrid real-space/plane-wave basis that results in a significant reduction of the computational complexity compared to a full plane-wave basis. To this purpose, we provide a theoretical formulation in the hybrid basis of the quantum confinement, the self-energies of the leads, and the coupling between the device and the leads. After discussing the theory and the implementation of the new simulation methodology, we report results for complete, self-consistent simulations of different electron devices, including a silicon Esaki diode, a thin-body silicon field effect transistor (FET), and a germanium tunnel FET. The simulated transistors have technologically relevant geometrical features with a semiconductor film thickness of about 4 nm and a channel length ranging from 10 to 17 nm. We believe that the newly proposed formalism may find applications also in transport models based on ab initio Hamiltonians, as those employed in density functional theory methods.
Wilkinson, Denise M; Smallidge, Dianne; Boyd, Linda D; Giblin, Lori
2015-10-01
Health care education requires students to connect classroom learning with patient care. The purpose of this study was to explore dental hygiene students' perceptions of teaching tools, activities and teaching methods useful in closing the gap between theory and practice as students transition from classroom learning into the clinical phase of their training. This was an exploratory qualitative study design examining retrospective data from journal postings of a convenience sample of dental hygiene students (n=85). Open-ended questions related to patient care were given to junior and senior students to respond in a reflective journaling activity. A systematic approach was used to establish themes. Junior students predicted hands-on experiences (51%), critical thinking exercises (42%) and visual aids (27%) would be the most supportive in helping them connect theory to practice. Senior students identified critical thinking exercises (44%) and visual aids (44%) as the most beneficial in connecting classroom learning to patient care. Seniors also identified barriers preventing them from connecting theory to patient care. Barriers most often cited were not being able to see firsthand what is in the text (56%) and being unsure that what was seen during clinical practice was the same as what was taught (28%). Students recognized the benefits of critical thinking and problem solving skills after having experienced patient care and were most concerned with performance abilities prior to patient care experiences. This information will be useful in developing curricula to enhance critical thinking and problem solving skills. Copyright © 2015 The American Dental Hygienists’ Association.
Directory of Open Access Journals (Sweden)
Jian Guo
2013-01-01
Full Text Available Information system (IS) project selection is of critical importance to every organization in a dynamic competing environment. The aim of this paper is to develop a hybrid multicriteria group decision making approach based on intuitionistic fuzzy theory for IS project selection. The decision makers' assessment information can be expressed in the form of real numbers, interval-valued numbers, linguistic variables, and intuitionistic fuzzy numbers (IFNs). All these pieces of evaluation information can be transformed into the form of IFNs. The intuitionistic fuzzy weighted averaging (IFWA) operator is utilized to aggregate individual opinions of decision makers into a group opinion. Intuitionistic fuzzy entropy is used to obtain the entropy weights of the criteria. A TOPSIS method combined with intuitionistic fuzzy sets is proposed to select the appropriate IS project in a group decision making environment. Finally, a numerical example of information system project selection is given to illustrate the application of the hybrid multi-criteria group decision making (MCGDM) method based on intuitionistic fuzzy theory and the TOPSIS method.
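The IFWA operator has a standard closed form (Xu's operational laws): for IFNs (μᵢ, νᵢ) with weights wᵢ summing to 1, the aggregate is (1 − Π(1 − μᵢ)^wᵢ, Π νᵢ^wᵢ). A minimal sketch with invented opinion data, not the paper's numerical example:

```python
def ifwa(ifns, weights):
    """Intuitionistic fuzzy weighted averaging (IFWA) operator.

    Each IFN is a (membership, non-membership) pair with mu + nu <= 1.
    Following the standard operational laws, the aggregate is
        mu = 1 - prod((1 - mu_i) ** w_i),   nu = prod(nu_i ** w_i).
    """
    mu_prod = 1.0
    nu_prod = 1.0
    for (m, n), w in zip(ifns, weights):
        mu_prod *= (1.0 - m) ** w
        nu_prod *= n ** w
    return (1.0 - mu_prod, nu_prod)

# Three decision makers' opinions on one alternative, equal weights:
opinions = [(0.6, 0.3), (0.5, 0.4), (0.7, 0.2)]
agg = ifwa(opinions, [1 / 3, 1 / 3, 1 / 3])
print(agg)
```

For identical inputs the operator is idempotent: aggregating three copies of (0.6, 0.3) returns (0.6, 0.3) up to rounding, and the aggregate always remains a valid IFN with μ + ν ≤ 1.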
International Nuclear Information System (INIS)
Trakhtenberg, A.M.
1987-01-01
A principal possibility of applying the vibrational stabilization method to nuclear reactors is studied. The problem of securing the stability of steady-state regimes of nuclear reactor operation is one of the central ones in dynamics theory and in nuclear reactor operation experience. In particular, the problem of suppressing xenon oscillations in a reactor, occurring as a result of steady-state regime instability, is urgent. The investigation is conducted using the simplest reactor model, representing it as a non-linear object with concentrated parameters. It is proved that vibrational stabilization is achieved by periodic fluctuations of the control rod positions in the reactor core and of the boric acid concentration in the coolant with a period of 1-4 s. In practice, stabilization is effective when the steady-state regime is located near the stability boundary, which appears to be dangerous, i.e. when self-oscillations with inadmissibly high amplitude occur in the reactor.
Surfactant Sputtering: Theory of a new method of surface nanostructuring by ion beams
International Nuclear Information System (INIS)
Kree, R.; Yasseri, T.; Hartmann, A.K.
2009-01-01
We present a new Monte Carlo model and a new continuum theory of surface pattern formation due to 'surfactant sputtering', i.e. erosion by ion beam sputtering including a submonolayer coverage of additional, co-sputtered surfactant atoms. This setup, which has been realized in recent experiments in a controlled way, leads to a number of interesting possibilities for modifying pattern-forming conditions. We present three simple scenarios which illustrate some potential applications of the method. In all three cases, simple Bradley-Harper type ripples appear in the absence of surfactant, whereas new, interesting structures emerge during surfactant sputtering.
Theory of music and the method of "Harmony" in J. Kepler's book "Harmony of the Universe"
Smirnov, V. A.
In Kepler's book "Harmony of the Universe", published in 1619, the theory of music as a science of that time is presented, together with an investigation of proportions corresponding to musical intervals among the orbital parameters of the planets. Kepler used comparisons with musical proportions to investigate the motion of celestial bodies, so that his third law was formulated as follows: the proportion between the periods of revolution of any two planets is exactly the sesquialterate (3/2 power) of the proportion of their mean distances. Kepler's method of "Harmony" leads to an explanation of the existence of anti-entropic processes, which are widespread in nature. [Johannes Kepler, Weltharmonik, München-Berlin, 1939.]
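The third law quoted above can be checked directly against modern orbital data: the ratio of logarithms of periods to logarithms of mean distances should be 3/2 for any pair of planets.

```python
import math

# Semi-major axis (AU) and orbital period (years) from standard tables.
planets = {"Mercury": (0.387, 0.241),
           "Earth":   (1.000, 1.000),
           "Jupiter": (5.203, 11.862)}

# Kepler's "proportion" form of the third law:
#   log(T2/T1) / log(a2/a1) = 3/2  for any two planets.
for n1, n2 in [("Mercury", "Earth"), ("Earth", "Jupiter")]:
    a1, t1 = planets[n1]
    a2, t2 = planets[n2]
    exponent = math.log(t2 / t1) / math.log(a2 / a1)
    print(f"{n1}-{n2}: exponent = {exponent:.3f}")  # close to 1.500
```

Both pairs give an exponent within one percent of 3/2, which is the content of T² ∝ a³.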
Coherent states with classical motion: from an analytic method complementary to group theory
International Nuclear Information System (INIS)
Nieto, M.M.
1982-01-01
From the motivation of Schroedinger, that of finding states which follow the motion which a classical particle would have in a given potential, we discuss generalizations of the coherent states of the harmonic oscillator. We focus on a method which is the analytic complement to the group theory point of view. It uses a minimum uncertainty formalism as its basis. We discuss the properties and time evolution of these states, always keeping in mind the desire to find quantum states which follow the classical motion
Freeman, Tim
2013-08-01
Health service managers face potential conflicts between corporate and professional agendas, a tension sharpened for trainees by their junior status and relative inexperience. While academic leadership theory forms an integral part of contemporary management development programmes, relatively little is known of trainees' patterned subjectivities in relation to leadership theories. The objective of this study was to explore such subjectivities within a cohort of trainees on the National Health Service Graduate Management Training Scheme (NHS GMTS), a 'fast-track' programme which prepares graduate entrants for director-level health service management posts. A Q-method design was used and four shared subjectivities were identified: leadership as collaborative social process ('relational'); leadership as integrity ('moral'); leadership as effective support of subordinates ('team'); and leadership as construction of a credible leadership persona ('identity'). While the factors broadly map onto competencies indicated within the NHS Leadership Qualities Framework which underpin assessments of performance for this student group, it is important not to overstate the governance effect of the assessment regime. Rather, factors reflect tensions between required competencies, namely the mobilisation of diverse interest groups, the ethical base of decisions and the identity work required to convince others of leadership status. Indeed, factor 2 ('moral') effectively defines leadership as the embodiment of public service ethos. © The Author(s) 2013 Reprints and permissions: sagepub.co.uk/journalsPermissions.nav.
Kiani, Keivan
2017-09-01
Large deformation regime of micro-scale slender beam-like structures subjected to axially pointed loads is of high interest to nanotechnologists and the applied mechanics community. Herein, size-dependent nonlinear governing equations are derived by employing modified couple stress theory. Under various boundary conditions, analytical relations between axially applied loads and deformations are presented. Additionally, a novel Galerkin-based assumed mode method (AMM) is established to solve the highly nonlinear equations. In some particular cases, the results predicted by the analytical approach are also checked against those of the AMM, and a reasonably good agreement is reported. Subsequently, the key role of the material length scale in the load-deformation behavior of microbeams is discussed, and the deficiencies of classical elasticity theory in predicting such a crucial mechanical behavior are explained in some detail. The influences of the slenderness ratio and thickness of the microbeam on the obtained results are also examined. The present work could be considered a pivotal step in better understanding the postbuckling behavior of nano-/micro-electro-mechanical systems consisting of microbeams.
Background field method in gauge theories and nonlinear sigma models
International Nuclear Information System (INIS)
van de Ven, A.E.M.
1986-01-01
This dissertation constitutes a study of the ultraviolet behavior of gauge theories and two-dimensional nonlinear sigma-models by means of the background field method. After a general introduction in chapter 1, chapter 2 presents algorithms which generate the divergent terms in the effective action at one loop for arbitrary quantum field theories in flat spacetime of dimension d ≤ 11. It is demonstrated that global N = 1 supersymmetric Yang-Mills theory in six dimensions is one-loop UV-finite. Chapter 3 presents an algorithm which produces the divergent terms in the effective action at two loops for renormalizable quantum field theories in a curved four-dimensional background spacetime. Chapter 4 presents a study of the two-loop UV behavior of two-dimensional bosonic and supersymmetric non-linear sigma-models which include a Wess-Zumino-Witten term. It is found that, to this order, supersymmetric models on quasi-Ricci-flat spaces are UV-finite and the β-functions for the bosonic model depend only on torsionful curvatures. Chapter 5 summarizes a superspace calculation of the four-loop β-function for two-dimensional N = 1 and N = 2 supersymmetric non-linear sigma-models. It is found that besides the one-loop contribution, which vanishes on Ricci-flat spaces, the β-function receives four-loop contributions which do not vanish in the Ricci-flat case. Implications for superstrings are discussed. Chapters 6 and 7 treat the details of these calculations.
DeVille, R. E. Lee; Harkin, Anthony; Holzer, Matt; Josić, Krešimir; Kaper, Tasso J.
2008-06-01
For singular perturbation problems, the renormalization group (RG) method of Chen, Goldenfeld, and Oono [Phys. Rev. E. 49 (1994) 4502-4511] has been shown to be an effective general approach for deriving reduced or amplitude equations that govern the long time dynamics of the system. It has been applied to a variety of problems traditionally analyzed using disparate methods, including the method of multiple scales, boundary layer theory, the WKBJ method, the Poincaré-Lindstedt method, the method of averaging, and others. In this article, we show how the RG method may be used to generate normal forms for large classes of ordinary differential equations. First, we apply the RG method to systems with autonomous perturbations, and we show that the reduced or amplitude equations generated by the RG method are equivalent to the classical Poincaré-Birkhoff normal forms for these systems up to and including terms of O(ɛ2), where ɛ is the perturbation parameter. This analysis establishes our approach and generalizes to higher order. Second, we apply the RG method to systems with nonautonomous perturbations, and we show that the reduced or amplitude equations so generated constitute time-asymptotic normal forms, which are based on KBM averages. Moreover, for both classes of problems, we show that the main coordinate changes are equivalent, up to translations between the spaces in which they are defined. In this manner, our results show that the RG method offers a new approach for deriving normal forms for nonautonomous systems, and it offers advantages since one can typically more readily identify resonant terms from naive perturbation expansions than from the nonautonomous vector fields themselves. Finally, we establish how well the solution to the RG equations approximates the solution of the original equations on time scales of O(1/ɛ).
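As a concrete illustration of the amplitude equations discussed above (a textbook example, not taken from the article): for the weakly nonlinear van der Pol oscillator x'' + x = ε(1 − x²)x', the method of averaging, which the RG method reproduces, yields the amplitude equation A' = (ε/2)A(1 − A²/4), predicting that every positive amplitude is attracted to A = 2. A direct simulation of the full equation confirms the prediction:

```python
import math

EPS = 0.1  # perturbation parameter

def amplitude_rhs(a):
    """Averaged (amplitude-equation) dynamics for van der Pol:
    A' = (EPS/2) * A * (1 - A**2/4); fixed points at A = 0 and A = 2."""
    return 0.5 * EPS * a * (1.0 - a * a / 4.0)

def full_rhs(t, y):
    """The full oscillator x'' + x = EPS*(1 - x**2)*x' as a system."""
    x, v = y
    return (v, -x + EPS * (1.0 - x * x) * v)

def rk4(f, y, t, h, n):
    """Classical fourth-order Runge-Kutta integration, n steps."""
    for _ in range(n):
        k1 = f(t, y)
        k2 = f(t + h / 2, [y[i] + h / 2 * k1[i] for i in (0, 1)])
        k3 = f(t + h / 2, [y[i] + h / 2 * k2[i] for i in (0, 1)])
        k4 = f(t + h, [y[i] + h * k3[i] for i in (0, 1)])
        y = [y[i] + h / 6 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i])
             for i in (0, 1)]
        t += h
    return y

# Integrate long enough (t = 400 >> 1/EPS) for the slow dynamics to settle,
# then read off the slowly varying amplitude sqrt(x**2 + x'**2).
y = rk4(full_rhs, [0.5, 0.0], 0.0, 0.01, 40000)
amplitude = math.hypot(y[0], y[1])
print(round(amplitude, 2))
```

The printed amplitude sits near 2, the value the reduced equation predicts, with an O(ε) wobble around the limit cycle.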
Quader, Syed Manzur
2004-01-01
In recent years, many developing countries having a history of high inflation, unfavorable balance of payment situation and a high level of foreign currencies denominated debt, have switched or are in the process of switching to a more flexible exchange rate regime. Therefore, the stability of the exchange rate and the dynamics of its volatility are more crucial than before to prevent financial crises and macroeconomic disturbances. This paper is designed to find out the reasons behind Bangla...
DEFF Research Database (Denmark)
Paidarová, Ivana; Sauer, Stephan P. A.
2012-01-01
We have compared the performance of density functional theory (DFT) using five different exchange-correlation functionals with four coupled cluster theory based wave function methods in the calculation of geometrical derivatives of the polarizability tensor of methane. The polarizability gradient...
Jia, Weile; Lin, Lin
2017-10-01
Fermi operator expansion (FOE) methods are powerful alternatives to diagonalization type methods for solving Kohn-Sham density functional theory (KSDFT). One example is the pole expansion and selected inversion (PEXSI) method, which approximates the Fermi operator by rational matrix functions and reduces the computational complexity to at most quadratic scaling for solving KSDFT. Unlike diagonalization type methods, the chemical potential often cannot be directly read off from the result of a single evaluation of the Fermi operator. Hence multiple evaluations need to be performed sequentially to compute the chemical potential and to ensure the correct number of electrons within a given tolerance. This hinders the performance of FOE methods in practice. In this paper, we develop an efficient and robust strategy to determine the chemical potential in the context of the PEXSI method. The main idea of the new method is not to find the exact chemical potential at each self-consistent-field (SCF) iteration but to dynamically and rigorously update the upper and lower bounds for the true chemical potential, so that the chemical potential reaches its convergence along the SCF iterations. Instead of evaluating the Fermi operator multiple times sequentially, our method uses a two-level strategy that evaluates the Fermi operators in parallel. In the regime of full parallelization, the wall clock time of each SCF iteration is always close to the time for one single evaluation of the Fermi operator, even when the initial guess is far away from the converged solution. We demonstrate the effectiveness of the new method using examples with metallic and insulating characters, as well as results from ab initio molecular dynamics.
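The bound-updating idea can be illustrated in much simplified form by bisection on the chemical potential for a fixed toy spectrum. This is not the PEXSI algorithm itself, only the monotonicity argument it exploits: the electron count is monotone in μ, so each evaluation lets one rigorously discard half of the current interval.

```python
import math

def electron_count(mu, eigenvalues, beta=10.0):
    """Number of electrons at chemical potential mu (Fermi-Dirac)."""
    return sum(1.0 / (1.0 + math.exp(beta * (e - mu))) for e in eigenvalues)

def find_mu(eigenvalues, n_target, lo=-10.0, hi=10.0, tol=1e-10):
    """Bisect on mu, shrinking rigorous [lo, hi] bounds at each step.

    Because electron_count is monotone increasing in mu, every
    evaluation halves the interval that must contain the true mu --
    the same bound-updating idea the paper spreads across SCF
    iterations.
    """
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if electron_count(mid, eigenvalues) < n_target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

levels = [-2.0, -1.0, -0.5, 0.5, 1.0, 2.0]   # toy symmetric spectrum
mu = find_mu(levels, n_target=3.0)
print(mu)  # ~0: the symmetry point of the toy spectrum
```

PEXSI replaces the scalar `electron_count` here with an expensive Fermi-operator evaluation, which is why reducing the number of sequential evaluations matters.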
A discussion on validity of the diffusion theory by Monte Carlo method
Peng, Dong-qing; Li, Hui; Xie, Shusen
2008-12-01
Diffusion theory is widely used as a basis for experiments and methods of determining the optical properties of biological tissues. A simple analytical solution can easily be obtained from the diffusion equation after a series of approximations. Thus, the analytical solution can be misinterpreted: if the effective attenuation coefficients of several semi-infinite bio-tissues are the same, the distributions of light fluence in the tissues are assumed to be the same. In order to assess the validity of this assumption, the depth-resolved internal fluence of several semi-infinite biological tissues with the same effective attenuation coefficient was simulated for a wide collimated beam using the Monte Carlo method under different conditions. The influence of the tissue refractive index on the distribution of light fluence was also discussed in detail. Our results showed that, when the refractive indices of several bio-tissues with the same effective attenuation coefficient are the same, the depth-resolved internal fluence is the same; otherwise, it is not. A change in the refractive index of a tissue affects the depth distribution of light in the tissue. Therefore, the refractive index is an important optical property of tissue and should be taken into account when using the diffusion approximation theory.
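The premise being tested follows from the diffusion-theory relation μeff = sqrt(3 μa (μa + μs′)): distinct absorption/scattering pairs can share a single μeff, so the deep-tissue decay rate alone does not pin down the tissue. A minimal check with invented coefficients:

```python
import math

def mu_eff(mu_a, mu_s_prime):
    """Effective attenuation coefficient from diffusion theory:
    mu_eff = sqrt(3 * mu_a * (mu_a + mu_s'))   [1/mm]."""
    return math.sqrt(3.0 * mu_a * (mu_a + mu_s_prime))

# Two hypothetical tissues with different absorption and reduced
# scattering coefficients but identical mu_eff -- exactly the kind of
# pair whose fluence profiles the Monte Carlo comparison probes.
tissue_a = (0.10, 1.00)    # (mu_a, mu_s') in 1/mm
tissue_b = (0.05, 2.15)
print(mu_eff(*tissue_a), mu_eff(*tissue_b))  # both equal sqrt(0.33)
```

Since 3(0.10)(1.10) = 3(0.05)(2.20) = 0.33, both tissues share μeff ≈ 0.574 mm⁻¹ despite a factor-of-two difference in absorption.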
Chen, Chenglong; Ni, Jiangqun; Shen, Zhaoyi; Shi, Yun Qing
2017-06-01
Geometric transformations, such as resizing and rotation, are almost always needed when two or more images are spliced together to create convincing image forgeries. In recent years, researchers have developed many digital forensic techniques to identify these operations. Most previous works in this area focus on the analysis of images that have undergone single geometric transformations, e.g., resizing or rotation. In several recent works, researchers have addressed yet another practical and realistic situation: successive geometric transformations, e.g., repeated resizing, resizing-rotation, rotation-resizing, and repeated rotation. We will also concentrate on this topic in this paper. Specifically, we present an in-depth analysis in the frequency domain of the second-order statistics of the geometrically transformed images. We give an exact formulation of how the parameters of the first and second geometric transformations influence the appearance of periodic artifacts. The expected positions of characteristic resampling peaks are analytically derived. The theory developed here helps to address the gap left by previous works on this topic and is useful for image security and authentication, in particular, the forensics of geometric transformations in digital images. As an application of the developed theory, we present an effective method that allows one to distinguish between the aforementioned four different processing chains. The proposed method can further estimate all the geometric transformation parameters. This may provide useful clues for image forgery detection.
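The kind of periodic artifact the paper analyzes can be reproduced in one dimension: after factor-2 linear interpolation, every interpolated sample equals the average of its neighbors exactly, so a simple predictor residue vanishes on a period-2 lattice. This is a toy version of the characteristic resampling peaks, not the paper's frequency-domain derivation:

```python
import random

def upsample_linear(x):
    """Upsample a signal by a factor of 2 with linear interpolation."""
    y = []
    for a, b in zip(x, x[1:]):
        y.extend([a, (a + b) / 2.0])
    y.append(x[-1])
    return y

def predictor_residue(y):
    """Residue of each interior sample against the average of its two
    neighbors; resampling makes this residue periodic."""
    return [y[i] - (y[i - 1] + y[i + 1]) / 2.0 for i in range(1, len(y) - 1)]

rng = random.Random(0)
x = [rng.random() for _ in range(16)]
r = predictor_residue(upsample_linear(x))
# The interpolated (odd-index) samples are exact neighbor averages, so
# the residue vanishes on a strict period-2 lattice:
print([abs(v) < 1e-12 for v in r[::2]])  # all 15 entries are True
```

The DFT of such a periodically vanishing residue shows peaks at the lattice frequency, which is the fingerprint that resampling detectors, and the successive-transformation analysis above, build on.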
Minenkov, Yury
2017-11-29
We tested a battery of density functional theory (DFT) methods ranging from generalized gradient approximation (GGA) via meta-GGA to hybrid meta-GGA schemes, as well as second-order Møller–Plesset perturbation theory and singles-and-doubles coupled-cluster (CCSD) theory, for their ability to reproduce accurate gas-phase structures of di- and triatomic molecules derived from microwave spectroscopy. We obtained the most accurate molecular structures using the hybrid and hybrid meta-GGA approximations with the B3PW91, APF, TPSSh, mPW1PW91, PBE0, mPW1PBE, B972, and B98 functionals, which gave the lowest errors. We recommend using these methods to predict accurate three-dimensional structures of inorganic molecules when intramolecular dispersion interactions play an insignificant role. The structures that the CCSD method predicts are of similar quality, although at considerably larger computational cost. The structures that GGA and meta-GGA schemes predict are less accurate, with the largest absolute errors detected with BLYP and M11-L, suggesting that these methods should not be used if accurate three-dimensional molecular structures are required. Because of numerical problems related to the integration of the exchange–correlation part of the functional and large scattering of errors, most of the Minnesota models tested, particularly MN12-L, M11, M06-L, SOGGA11, and VSXC, are also not recommended for geometry optimization. When maintaining a low computational budget is essential, the nonseparable gradient functional N12 might work within an acceptable range of error. As expected, the DFT-D3 dispersion correction had a negligible effect on the internuclear distances when combined with the functionals tested on nonweakly bonded di- and triatomic inorganic molecules. By contrast, the dispersion correction for the APF-D functional has been found to shorten the bonds significantly, up to 0.064 Å (AgI), in Ag halides, BaO, BaS, BaF, BaCl, Cu halides, and Li and
J-matrix method of scattering in one dimension: The nonrelativistic theory
International Nuclear Information System (INIS)
Alhaidari, A.D.; Bahlouli, H.; Abdelmonem, M.S.
2009-01-01
We formulate a theory of nonrelativistic scattering in one dimension based on the J-matrix method. The scattering potential is assumed to have a finite range such that it is well represented by its matrix elements in a finite subset of a basis that supports a tridiagonal matrix representation for the reference wave operator. Contrary to our expectation, the 1D formulation reveals a rich and highly nontrivial structure compared to the 3D formulation. Examples are given to demonstrate the utility and accuracy of the method. It is hoped that this formulation constitutes a viable alternative to the classical treatment of 1D scattering problem and that it will help unveil new and interesting applications.
Directory of Open Access Journals (Sweden)
P. B. Lanjewar
2016-06-01
Full Text Available The evaluation and selection of energy technologies involve a large number of attributes whose selection and weighting is decided in accordance with the social, environmental, technical and economic framework. In the present work an integrated multiple attribute decision making methodology is developed by combining graph theory and analytic hierarchy process methods to deal with the evaluation and selection of energy technologies. The energy technology selection attributes digraph enables a quick visual appraisal of the energy technology selection attributes and their interrelationships. The preference index provides a total objective score for comparison of energy technologies alternatives. Application of matrix permanent offers a better appreciation of the considered attributes and helps to analyze the different alternatives from combinatorial viewpoint. The AHP is used to assign relative weights to the attributes. Four examples of evaluation and selection of energy technologies are considered in order to demonstrate and validate the proposed method.
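The matrix permanent used for the preference index can be computed by direct expansion over permutations, which is perfectly adequate for the small attribute matrices such digraph methods use (the 3-attribute matrix below is invented for illustration):

```python
from itertools import permutations

def permanent(matrix):
    """Permanent of a square matrix by direct expansion: like the
    determinant but with all permutation terms taken with a plus sign.
    O(n!) cost is fine for small attribute digraphs."""
    n = len(matrix)
    total = 0.0
    for perm in permutations(range(n)):
        product = 1.0
        for i, j in enumerate(perm):
            product *= matrix[i][j]
        total += product
    return total

# Hypothetical 3-attribute matrix: diagonal entries hold normalized
# attribute weights, off-diagonal entries their relative importance.
m = [[0.5, 0.3, 0.2],
     [0.7, 0.3, 0.4],
     [0.8, 0.6, 0.2]]
print(permanent(m))  # the alternative's total preference index
```

For this matrix the six permutation products sum to 0.42; comparing such indices across alternatives gives the ranking the abstract describes.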
Analytical theory of Doppler reflectometry in slab plasma model
Energy Technology Data Exchange (ETDEWEB)
Gusakov, E.Z.; Surkov, A.V. [Ioffe Institute, Politekhnicheskaya 26, St. Petersburg (Russian Federation)
2004-07-01
Doppler reflectometry is considered in a slab plasma model within the framework of analytical theory. The locality of the diagnostics is analyzed for both regimes: linear and nonlinear in turbulence amplitude. Toroidal antenna focusing of the probing beam onto the cut-off is proposed and discussed as a method to increase the spatial resolution of the diagnostics. It is shown that even in the nonlinear regime of multiple scattering, the diagnostics can be used to estimate (with certain accuracy) the plasma poloidal rotation profile. (authors)
Method for stability analysis based on the Floquet theory and Vidyn calculations
Energy Technology Data Exchange (ETDEWEB)
Ganander, Hans
2005-03-01
This report presents activity 3.7 of the STEM project Aerobig and deals with aeroelastic stability of the complete wind turbine structure in operation. As wind turbine sizes increase, dynamic couplings become more important for loads and dynamic properties. The steady ambition to increase the cost competitiveness of wind turbine energy by using optimisation methods lowers design margins, which in turn makes questions about the stability of the turbines more important. The main objective of the project is to develop a general stability analysis tool, based on the VIDYN methodology for the turbine dynamic equations and the Floquet theory for the stability analysis. The reason for selecting the Floquet theory is that it is independent of the number of blades, and thus can be used for two- as well as three-bladed turbines. Although the latter dominate the market, the former have large potential for large offshore turbines. The fact that cyclic and individual blade pitch controls are being developed as a means of fatigue reduction also speaks for general methods such as Floquet. The first step of a general system for stability analysis has been developed: the code VIDSTAB. Together with other methods, such as the snapshot method, the Coleman transformation and the use of Fourier series, eigenfrequencies and modes can be analysed. It is general, with no restrictions on the number of blades or the symmetry of the rotor. The derivatives of the aerodynamic forces are calculated numerically in this first version. Later versions would include state-space formulations of these forces. This would also be the case for the controllers of turbine rotation speed, yaw direction and pitch angle.
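The core of a Floquet analysis is the monodromy matrix: the fundamental solution of the periodic linear system integrated over one period, whose eigenvalues (Floquet multipliers) determine stability — all multipliers inside the unit circle means asymptotic stability. The sketch below is a generic illustration of that idea, not the VIDSTAB implementation; the system matrices, period and step count are arbitrary assumptions.

```python
import numpy as np

def monodromy(A_of_t, T, steps=2000):
    """Integrate the fundamental matrix equation Phi' = A(t) Phi over one
    period T with classical RK4, starting from Phi(0) = I."""
    n = A_of_t(0.0).shape[0]
    Phi = np.eye(n)
    h = T / steps
    t = 0.0
    for _ in range(steps):
        k1 = A_of_t(t) @ Phi
        k2 = A_of_t(t + h / 2) @ (Phi + h / 2 * k1)
        k3 = A_of_t(t + h / 2) @ (Phi + h / 2 * k2)
        k4 = A_of_t(t + h) @ (Phi + h * k3)
        Phi = Phi + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return Phi

def floquet_multipliers(A_of_t, T):
    """Eigenvalues of the monodromy matrix; |mu| < 1 for all of them
    implies asymptotic stability of the periodic system."""
    return np.linalg.eigvals(monodromy(A_of_t, T))
```

For a damped oscillator with weakly modulated stiffness, `A_of_t = lambda t: np.array([[0.0, 1.0], [-(0.5 + 0.1*np.cos(t)), -0.2]])` over `T = 2*np.pi`, all multiplier magnitudes come out below one, as expected for a damped system away from the parametric resonance tongues.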
International Nuclear Information System (INIS)
Ackroyd, R.T.
1987-01-01
A least squares principle is described which uses a penalty function treatment of boundary and interface conditions. Appropriate choices of the trial functions and vectors employed in a dual representation of an approximate solution establish complementary principles for the diffusion equation. A geometrical interpretation of the principles provides weighted residual methods for diffusion theory, thus establishing a unification of least squares, variational and weighted residual methods. The complementary principles are used with either a trial function for the flux or a trial vector for the current to establish, for regular meshes, a connection between finite element, finite difference and nodal methods, which can be exact if the mesh pitches are chosen appropriately. Whereas the coefficients in the usual nodal equations have to be determined iteratively, those derived via the complementary principles are given explicitly in terms of the data. For the further development of the connection between finite element, finite difference and nodal methods, some hybrid variational methods are described which employ both a trial function and a trial vector. (author)
Irreducible Greens' Functions method in the theory of highly correlated systems
International Nuclear Information System (INIS)
Kuzemsky, A.L.
1994-09-01
The self-consistent theory of correlation effects in Highly Correlated Systems (HCS) is presented. The novel Irreducible Green's Function (IGF) method is discussed in detail for the Hubbard model and the random Hubbard model. An interpolation solution for the quasiparticle spectrum, valid in both the atomic and band limits, is obtained. The IGF method permits calculation of the quasiparticle spectra of many-particle systems with complicated spectra and strong interaction in a very natural and compact way. The essence of the method is deeply related to the notion of Generalized Mean Fields (GMF), which determine the elastic scattering corrections. The inelastic scattering corrections lead to the damping of the quasiparticles and are the main topic of the present consideration. The calculation of the damping has been done in a self-consistent way for both limits. For the random Hubbard model the weak coupling case has been considered and the self-energy operator has been calculated using a combination of the IGF method and the Coherent Potential Approximation (CPA). Other applications of the method, to the s-f model, the Anderson model, the Heisenberg antiferromagnet, electron-phonon interaction models and quasiparticle tunneling, are discussed briefly. (author). 79 refs
A study on basic theory for CDCC method for three-body model of deuteron scattering
International Nuclear Information System (INIS)
Kawai, Mitsuji
1988-01-01
Recent studies have revealed that the CDCC method is valid for treating the decomposition process involved in deuteron scattering on the basis of a three-body model. However, theoretical support has not been developed for this method. The present study is aimed at determining whether a solution by the CDCC method can be obtained 'correctly' from a 'realistic' model Hamiltonian for deuteron scattering. Some researchers have recently pointed out that there are some problems with the conventional CDCC calculation procedure in view of the general scattering theory. These problems are associated with asymptotic forms of the wave functions, convergence of calculations, and boundary conditions. Considerations show that the problem with asymptotic forms of the wave function is not a fatal defect, though some compromise is necessary. The problem with the convergence of calculations is not very serious either. Discussions are made of the handling of boundary conditions. Thus, the present study indicates that the CDCC method can be applied satisfactorily to actual deuteron scattering, and that the model wave function for the CDCC method is consistent with the model Hamiltonian. (Nogami, K.)
PREFACE: Euro-TMCS I: Theory, Modelling and Computational Methods for Semiconductors
Gómez-Campos, F. M.; Rodríguez-Bolívar, S.; Tomić, S.
2015-05-01
The present issue contains a selection of the best contributed works presented at the first Euro-TMCS conference (Theory, Modelling and Computational Methods for Semiconductors, European Session). The conference was held at the Faculty of Sciences, Universidad de Granada, Spain, on 28th-30th January 2015. This conference is the first European edition of the TMCS conference series, which started in 2008 at the University of Manchester and had always been held in the United Kingdom. Four previous conferences have been held (Manchester 2008, York 2010, Leeds 2012 and Salford 2014). Euro-TMCS runs for three days; the first is devoted to invited tutorials, aimed particularly at students, on recent developments in theoretical methods. On this occasion the session focused on the presentation of widely-used computational methods for the modelling of physical processes in semiconductor materials. Freely available simulation software (SIESTA, Quantum Espresso and Yambo) as well as commercial software (TiberCad and MedeA) was presented at the conference by members of their development teams, offering the audience an overview of their capabilities for research. The second part of the conference showcased prestigious invited and contributed oral presentations, alongside poster sessions, in which direct discussion with authors was promoted. The scope of this conference embraces modelling, theory and the use of sophisticated computational tools in semiconductor science and technology. Theoretical approaches represented in this meeting included: Density Functional Theory, Semi-empirical Electronic Structure Methods, Multi-scale Approaches, Modelling of PV devices, Electron Transport, and Graphene. Topics included, but were not limited to: Optical Properties of Quantum Nanostructures including Colloids and Nanotubes, Plasmonics, Magnetic Semiconductors, Photonic Structures, and Electronic Devices. The Editors Acknowledgments: We would like to thank all
Large J expansion in ABJM theory revisited.
Dimov, H; Mladenov, S; Rashkov, R C
Recently there has been progress in the computation of the anomalous dimensions of gauge theory operators at strong coupling by making use of the AdS/CFT correspondence. On the string theory side they are given by dispersion relations in the semiclassical regime. We revisit the problem of a large-charge expansion of the dispersion relations for simple semiclassical strings in an [Formula: see text] background. We present the calculation of the corresponding anomalous dimensions of the gauge theory operators to an arbitrary order using three different methods. Although the results of the three methods look different, power series expansions show their consistency.
McInerney, Patricia A; Green-Thompson, Lionel P
2017-04-01
The objective of this scoping review is to determine the theories of teaching and learning, and/or models and/or methods used in teaching in postgraduate education in the health sciences. The longer term objective is to use the information gathered to design a workshop for teachers of postgraduate students. The question that this review seeks to answer is: what theories of teaching and learning, and/or models and/or methods of teaching are used in postgraduate teaching?
Pieterse, Arwen H; de Vries, Marieke; Kunneman, Marleen; Stiggelbout, Anne M; Feldman-Stewart, Deb
2013-01-01
Healthcare decisions, particularly those involving weighing benefits and harms that may significantly affect quality and/or length of life, should reflect patients' preferences. To support patients in making choices, patient decision aids and values clarification methods (VCM) in particular have been developed. VCM are intended to help patients determine the aspects of the choices that are important to their selection of a preferred option. Several types of VCM exist. However, they are often designed without clear reference to theory, which makes it difficult for their development to be systematic and internally coherent. Our goal was to provide theory-informed recommendations for the design of VCM. Process theories of decision making specify components of decision processes and thus identify particular processes that VCM could aim to facilitate. We conducted a review of the MEDLINE and PsycINFO databases and of references to theories included in retrieved papers, to identify process theories of decision making. We selected a theory if (a) it fulfilled criteria for a process theory; (b) it provided a coherent description of the whole process of decision making; and (c) empirical evidence supports at least some of its postulates. Four theories met our criteria: Image Theory, Differentiation and Consolidation theory, Parallel Constraint Satisfaction theory, and Fuzzy-trace Theory. Based on these, we propose that VCM should: help optimize mental representations; encourage considering all potentially appropriate options; delay selection of an initially favoured option; facilitate the retrieval of relevant values from memory; facilitate the comparison of options and their attributes; and offer time to decide. In conclusion, our theory-based design recommendations are explicit and transparent, providing an opportunity to test each in a systematic manner. Copyright © 2012 Elsevier Ltd. All rights reserved.
METHOD OF STRATEGIC PLANNING AND MANAGEMENT DECISION-MAKING CONSIDERING THE LIFE CYCLE THEORY
Directory of Open Access Journals (Sweden)
Tetiana Kniazieva
2017-12-01
are made. Results of the survey substantiate the methodology of strategic planning under conditions of external environment uncertainty, with consideration of the life cycle theory. Practical implications: the use of life-cycle models makes it possible to: 1. reasonably predict sales and plan the production program; 2. determine the basic strategies at different stages of development; 3. determine the sequence of stages of enterprise development; 4. ensure harmonious interaction of organizational characteristics with the external environment factors that influence the process of organizational development. Increasing the sustainability of the organization's development can be achieved by re-establishing dynamic changes in the plan, using effective forecasting methods that take the life cycle theory into account. It is necessary to take into account the interconnection between all levels of life cycles: industry, technology, enterprise, product; this ensures the competitive advantage of the organization. Using the theory of optimal decision making under uncertainty in the analysis of long-term projects allows transferring qualitative factors into quantitative indicators that can later be used to bring investment projects to a common basis and choose the best one. Under increased uncertainty of the external environment, it is necessary to develop the theory of enterprise management taking into account the enterprise's life cycle, as well as the life cycles of its separate elements and processes at all levels. Combining strategic management with the life cycle theory will increase the objectivity and effectiveness of management decisions. Accounting for the organization's life cycles in strategic planning allows choosing an effective strategy.
Early detection of ecosystem regime shifts
DEFF Research Database (Denmark)
Lindegren, Martin; Dakos, Vasilis; Groeger, Joachim P.
2012-01-01
methods may have limited utility in ecosystem-based management as they show no or weak potential for early-warning. We therefore propose a multiple method approach for early detection of ecosystem regime shifts in monitoring data that may be useful in informing timely management actions in the face...
Generating or developing grounded theory: methods to understand health and illness.
Woods, Phillip; Gapp, Rod; King, Michelle A
2016-06-01
Grounded theory is a qualitative research methodology that aims to explain social phenomena, e.g. why particular motivations or patterns of behaviour occur, at a conceptual level. Developed in the 1960s by Glaser and Strauss, the methodology has been reinterpreted by Strauss and Corbin in more recent times, resulting in different schools of thought. Differences arise from different philosophical perspectives concerning knowledge (epistemology) and the nature of reality (ontology), demanding that researchers make clear theoretical choices at the commencement of their research when choosing this methodology. Compared to other qualitative methods it has the ability to achieve understanding of, rather than simply describe, a social phenomenon. Achieving understanding, however, requires theoretical sampling to choose interviewees that can contribute most to the research and understanding of the phenomenon, and constant comparison of interviews to evaluate the same event or process in different settings or situations. Sampling continues until conceptual saturation is reached, i.e. when no new concepts emerge from the data. Data analysis focusses on categorising data (finding the main elements of what is occurring and why), and describing those categories in terms of properties (conceptual characteristics that define the category and give meaning) and dimensions (the variations within properties which produce specificity and range). Ultimately a core category, which theoretically explains how all other categories are linked together, is developed from the data. While achieving theoretical abstraction in the core category, it should be logical and capture all of the variation within the data. Theory development requires understanding of the methodology, not just working through a set of procedures. This article provides a basic overview, set in the literature surrounding grounded theory, for those wanting to increase their understanding and quality of research output.
Status and future of lattice gauge theory
International Nuclear Information System (INIS)
Hoek, J.
1989-07-01
The current status of lattice Quantum Chromodynamics (QCD) calculations, the computer requirements to obtain physical results, and the direction computing is taking are described. First of all, there is a lot of evidence that QCD is the correct theory of strong interactions. Since it is an asymptotically free theory we can use perturbation theory to solve it in the regime of very hard collisions. However, even in the case of very hard parton collisions the end-results of the collisions are bound states of quarks, and perturbation theory is not sufficient to calculate these final stages. The way to solve the theory in this regime was opened by Wilson. He contemplated replacing the space-time continuum by a discrete lattice, with a lattice spacing a. Continuum physics is then recovered in the limit where the correlation length of the theory, say ξ, is large with respect to the lattice spacing. This will be true if the lattice spacing becomes very small, which for asymptotically free theories also implies that the coupling g becomes small. The lattice approach to QCD is in many respects analogous to the use of finite element methods to solve classical field theories. These finite element methods are easy to apply in 2-dimensional simulations but are computationally demanding in the 3-dimensional case. Therefore it is not unexpected that the 4-dimensional simulations needed for lattice gauge theories have led to an explosion in the demand for computing power by theorists. (author)
The Stability Analysis Method of the Cohesive Granular Slope on the Basis of Graph Theory.
Guan, Yanpeng; Liu, Xiaoli; Wang, Enzhi; Wang, Sijing
2017-02-27
This paper provides a method to calculate progressive failure of cohesive-frictional granular geomaterial and the spatial distribution of the stability of a cohesive granular slope. The methodology can be divided into two parts: the characterization method of macro-contact and the analysis of the slope stability. Based on graph theory, vertexes, edges and edge sequences are abstracted to characterize the voids, the particle contacts and the macro-contacts, respectively, bridging the gap between the mesoscopic and macro scales of granular materials. This paper adopts this characterization method to extract a graph from a granular slope and characterize the macro sliding surface; the weighted graph is then analyzed to calculate the slope safety factor. Each edge has three weights representing the sliding moment, the anti-sliding moment and the braking index of the contact bond, respectively. The safety factor of the slope is calculated by presupposing a certain number of sliding routes, repeatedly reducing the weights, and counting the mesoscopic failures of the edges. It is a slope analysis method from the mesoscopic perspective, so it can present more detail of the mesoscopic properties of the granular slope. At the macro scale, the spatial distribution of the stability of the granular slope is in agreement with the theoretical solution.
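A heavily simplified reading of the safety-factor idea: if each edge of a candidate sliding route carries a sliding (driving) moment and an anti-sliding (resisting) moment, the factor governing the slope is the minimum resisting-to-driving ratio over the presupposed routes. The sketch below is an illustrative toy, not the paper's iterative weight-reduction algorithm; the routes and weight values are invented.

```python
def slope_safety_factor(routes):
    """routes: candidate sliding routes, each a list of
    (sliding_moment, anti_sliding_moment) edge weights.
    The governing safety factor is the minimum resisting/driving
    ratio over all presupposed routes."""
    def factor(route):
        driving = sum(slide for slide, _ in route)
        resisting = sum(resist for _, resist in route)
        return resisting / driving
    return min(factor(r) for r in routes)
```

A factor below 1 on any route would indicate that the driving moments exceed the available resistance along that route.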
Failure Mode and Effect Analysis using Soft Set Theory and COPRAS Method
Directory of Open Access Journals (Sweden)
Ze-Ling Wang
2017-01-01
Failure mode and effect analysis (FMEA) is a risk management technique frequently applied to enhance system performance and safety. In recent years, many researchers have shown an intense interest in improving FMEA due to inherent weaknesses associated with the classical risk priority number (RPN) method. In this study, we develop a new risk ranking model for FMEA based on soft set theory and the COPRAS method, which can deal with the limitations and enhance the performance of conventional FMEA. First, a trapezoidal fuzzy soft set is adopted to manage FMEA team members' linguistic assessments of failure modes. Then, a modified COPRAS method is utilized for determining the ranking order of the failure modes recognized in FMEA. In particular, we treat the risk factors as interdependent and employ the Choquet integral to obtain the aggregate risk of failures in the new FMEA approach. Finally, a practical FMEA problem is analyzed via the proposed approach to demonstrate its applicability and effectiveness. The result shows that the FMEA model developed in this study outperforms the traditional RPN method and provides a more reasonable risk assessment of failure modes.
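For context, the classical RPN baseline that the soft set/COPRAS model improves on simply multiplies the three risk factors, severity (S), occurrence (O) and detection (D), each conventionally scored 1-10. A minimal sketch of that baseline (the failure-mode names and scores are invented for illustration):

```python
def rpn_ranking(failure_modes):
    """failure_modes: dict name -> (severity, occurrence, detection).
    Returns (names sorted from highest to lowest risk, RPN per mode),
    where RPN = S * O * D is the classical risk priority number."""
    rpn = {name: s * o * d for name, (s, o, d) in failure_modes.items()}
    order = sorted(rpn, key=rpn.get, reverse=True)
    return order, rpn
```

The well-known weakness motivating the paper: very different (S, O, D) triples can yield identical RPN values, and the product ignores the relative importance of, and interdependence between, the three factors.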
Projected coupled cluster theory.
Qiu, Yiheng; Henderson, Thomas M; Zhao, Jinmo; Scuseria, Gustavo E
2017-08-14
Coupled cluster theory is the method of choice for weakly correlated systems. But in the strongly correlated regime, it faces a symmetry dilemma, where it either completely fails to describe the system or has to artificially break certain symmetries. On the other hand, projected Hartree-Fock theory captures the essential physics of many kinds of strong correlations via symmetry breaking and restoration. In this work, we combine and try to retain the merits of these two methods by applying symmetry projection to broken symmetry coupled cluster wave functions. The non-orthogonal nature of states resulting from the application of symmetry projection operators furnishes particle-hole excitations to all orders, thus creating an obstacle for the exact evaluation of overlaps. Here we provide a solution via a disentanglement framework theory that can be approximated rigorously and systematically. Results of projected coupled cluster theory are presented for molecules and the Hubbard model, showing that spin projection significantly improves unrestricted coupled cluster theory while restoring good quantum numbers. The energy of projected coupled cluster theory reduces to the unprojected one in the thermodynamic limit, albeit at a much slower rate than projected Hartree-Fock.
Olender, M.; Krenczyk, D.
2016-08-01
Modern enterprises have to react quickly to dynamic changes in the market, due to changing customer requirements and expectations. One of the key areas of production management, which must continuously evolve by searching for new methods and tools for increasing the efficiency of manufacturing systems, is the area of production flow planning and control. These aspects are closely connected with the ability to implement the concepts of Virtual Enterprises (VE) and Virtual Manufacturing Networks (VMN), in which integrated infrastructures of flexible resources are created. In the proposed approach, the role of players is performed by objects associated with the objective functions, allowing the multiobjective production flow planning problem to be solved on the basis of game theory, which builds on the theory of strategic situations. For defined production system and production order models, ways of solving the production route planning problem in a VMN are presented through computational examples for different variants of production flow.
Burbrink, Frank T; McKelvy, Alexander D; Pyron, R Alexander; Myers, Edward A
2015-11-22
Predicting species presence and richness on islands is important for understanding the origins of communities and how likely it is that species will disperse and resist extinction. The equilibrium theory of island biogeography (ETIB) and, as a simple model of sampling abundances, the unified neutral theory of biodiversity (UNTB), predict that in situations where mainland to island migration is high, species-abundance relationships explain the presence of taxa on islands. Thus, more abundant mainland species should have a higher probability of occurring on adjacent islands. In contrast to UNTB, if certain groups have traits that permit them to disperse to islands better than other taxa, then phylogeny may be more predictive of which taxa will occur on islands. Taking surveys of 54 island snake communities in the Eastern Nearctic along with mainland communities that have abundance data for each species, we use phylogenetic assembly methods and UNTB estimates to predict island communities. Species richness is predicted by island area, whereas turnover from the mainland to island communities is random with respect to phylogeny. Community structure appears to be ecologically neutral and abundance on the mainland is the best predictor of presence on islands. With regard to young and proximate islands, where allopatric or cladogenetic speciation is not a factor, we find that simple neutral models following UNTB and ETIB predict the structure of island communities. © 2015 The Author(s).
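The finding that "species richness is predicted by island area" is conventionally summarized by the power-law species-area relationship S = cA^z. A minimal sketch of fitting it by least squares in log-log space (the data below are synthetic placeholders, not the snake-community surveys):

```python
import numpy as np

def fit_species_area(areas, richness):
    """Fit S = c * A^z by linear least squares on log S = log c + z log A.
    Returns (c, z); z is the slope in log-log space."""
    z, log_c = np.polyfit(np.log(areas), np.log(richness), 1)
    return np.exp(log_c), z
```

With noisy field data the same call returns the best-fit exponent; empirical z values for true islands commonly fall around 0.2-0.35.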
Solving black box computation problems using expert knowledge theory and methods
International Nuclear Information System (INIS)
Booker, Jane M.; McNamara, Laura A.
2004-01-01
The challenge problems for the Epistemic Uncertainty Workshop at Sandia National Laboratories provide common ground for comparing different mathematical theories of uncertainty, referred to as General Information Theories (GITs). These problems also present the opportunity to discuss the use of expert knowledge as an important constituent of uncertainty quantification. More specifically, how do the principles and methods of eliciting and analyzing expert knowledge apply to these problems and similar ones encountered in complex technical problem solving and decision making? We will address this question, demonstrating how the elicitation issues and the knowledge that experts provide can be used to assess the uncertainty in outputs that emerge from a black box model or computational code represented by the challenge problems. In our experience, the rich collection of GITs provides an opportunity to capture the experts' knowledge and associated uncertainties consistent with their thinking, problem solving, and problem representation. The elicitation process is rightly treated as part of an overall analytical approach, and the information elicited is not simply a source of data. In this paper, we detail how the elicitation process itself impacts the analyst's ability to represent, aggregate, and propagate uncertainty, as well as how to interpret uncertainties in outputs. While this approach does not advocate a specific GIT, answers under uncertainty do result from the elicitation
Development of new multigrid schemes for the method of characteristics in neutron transport theory
International Nuclear Information System (INIS)
Grassi, G.
2006-01-01
This dissertation is based upon our doctoral research that dealt with the conception and development of new non-linear multigrid techniques for the Method of Characteristics (MOC) within the TDT code. Here we focus upon a two-level scheme consisting of a fine level on which the neutron transport equation is iteratively solved using the MOC algorithm, and a coarse level defined by a more coarsely discretized phase space on which a low-order problem is considered. The solution of this problem is then used in order to correct the angular flux moments resulting from the previous transport iteration. A flux-volume homogenization procedure is employed to evaluate the coarse-level material properties after each transport iteration. This entails the non-linearity of the methods. According to the Generalised Equivalence Theory (GET), additional degrees of freedom are introduced for the low-order problem so that the convergence of the acceleration scheme can be ensured. We present two classes of non-linear methods: transport-like methods and diffusion-like methods. Transport-like methods consider a homogenized low-order transport problem on the coarse level. This problem is iteratively solved using the same MOC algorithm as for the transport problem on the fine level. Discontinuity factors are then employed, per region or per surface, in order to reconstruct the currents evaluated by the low-order operator, which ensures the convergence of the acceleration scheme. On the other hand, diffusion-like methods consider a low-order problem inspired by diffusion. We studied the non-linear Coarse Mesh Finite Difference (CMFD) method, already present in the literature, with a view to integrating it into the TDT code. Then, we developed a new non-linear method on the model of CMFD. From the latter, we borrowed the idea of establishing a simple relation between currents and fluxes in order to obtain a problem involving only coarse fluxes. Finally, those non-linear methods have been
Workshop report on large-scale matrix diagonalization methods in chemistry theory institute
Energy Technology Data Exchange (ETDEWEB)
Bischof, C.H.; Shepard, R.L.; Huss-Lederman, S. [eds.
1996-10-01
The Large-Scale Matrix Diagonalization Methods in Chemistry theory institute brought together 41 computational chemists and numerical analysts. The goal was to understand the needs of the computational chemistry community in problems that utilize matrix diagonalization techniques. This was accomplished by reviewing the current state of the art and looking toward future directions in matrix diagonalization techniques. This institute occurred about 20 years after a related meeting of similar size. During those 20 years the Davidson method continued to dominate the problem of finding a few extremal eigenvalues for many computational chemistry problems. Work on non-diagonally dominant and non-Hermitian problems as well as parallel computing has also brought new methods to bear. The changes and similarities in problems and methods over the past two decades offered an interesting viewpoint on the progress in this area. One important area covered by the talks was overviews of the source and nature of the chemistry problems. The numerical analysts were uniformly grateful for the efforts to convey a better understanding of the problems and issues faced in computational chemistry. An important outcome was an understanding of the wide range of eigenproblems encountered in computational chemistry. The workshop covered problems involving self-consistent-field (SCF), configuration interaction (CI), intramolecular vibrational relaxation (IVR), and scattering problems. In atomic structure calculations using the Hartree-Fock method (SCF), the symmetric matrices can range from order hundreds to thousands. These matrices often include large clusters of eigenvalues which can be as much as 25% of the spectrum. However, if CI methods are also used, the matrix size can be between 10^4 and 10^9, where only one or a few extremal eigenvalues and eigenvectors are needed. Working with very large matrices has led to the development of
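The Davidson method mentioned above targets a few extremal eigenvalues of large symmetric, diagonally dominant matrices — exactly the CI regime the workshop describes, where only the lowest states of a huge Hamiltonian are wanted. A compact pure-NumPy sketch of the lowest-eigenvalue case follows; the tolerances, iteration cap and the clip in the diagonal preconditioner are arbitrary choices, and a production code would restart the subspace rather than let it grow.

```python
import numpy as np

def davidson_lowest(A, tol=1e-8, max_iter=60):
    """Davidson iteration for the lowest eigenvalue of a symmetric,
    diagonally dominant matrix A (the regime where the diagonal
    preconditioner works well)."""
    n = A.shape[0]
    diag = np.diag(A).copy()
    V = np.zeros((n, 0))                  # orthonormal subspace basis
    t = np.zeros(n)
    t[np.argmin(diag)] = 1.0              # start at the smallest diagonal entry
    theta = diag.min()
    for _ in range(max_iter):
        for _ in range(2):                # re-orthogonalize twice for stability
            t = t - V @ (V.T @ t)
        nrm = np.linalg.norm(t)
        if nrm < 1e-12:
            break                         # no new direction left
        V = np.column_stack([V, t / nrm])
        H = V.T @ (A @ V)                 # Rayleigh-Ritz in the subspace
        vals, vecs = np.linalg.eigh(H)
        theta = vals[0]
        x = V @ vecs[:, 0]
        r = A @ x - theta * x             # residual of the Ritz pair
        if np.linalg.norm(r) < tol:
            break
        denom = diag - theta              # Davidson's diagonal preconditioner
        denom[np.abs(denom) < 1e-8] = 1e-8
        t = r / denom
    return theta
```

On a diagonally dominant test matrix this converges in a handful of iterations, whereas a full diagonalization touches the whole spectrum.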
Corpus methods and their reflection in linguistic theories of the 20th century
Directory of Open Access Journals (Sweden)
Simon Krek
2013-05-01
In the 20th century structuralism established itself as the central linguistic theory, in the first half mainly through its originator Ferdinand de Saussure, and in the second half with the figure of Noam Chomsky. The latter consistently refused to acknowledge the analysis of extensive quantities of text as a valuable method, and favoured the linguistic intuition of a native speaker instead. In parallel with structuralism, other trends in linguistics emerged which pointed to the inadequacy of the prevailing linguistic paradigm and to theoretical insights that were only possible after the systematic analysis of large quantities of texts. The paper discusses some of the dilemmas stemming from this dichotomy and places corpus linguistics in a broader linguistic context.
International Nuclear Information System (INIS)
Pilipchuk, L. A.; Pilipchuk, A. S.
2015-01-01
In this paper we propose the theory of decomposition, along with methods, technologies, applications and an implementation in Wolfram Mathematica, for constructing solutions of sparse linear systems. One application is the Sensor Location Problem for a symmetric graph in the case when the split ratios of some arc flows can be zero. The objective of that application is to minimize the number of sensors assigned to the nodes. We obtain a sparse system of linear algebraic equations and study its matrix rank. Sparse systems of these types appear in generalized network flow programming problems in the form of restrictions and can be characterized as systems with a large sparse sub-matrix representing the embedded network structure.
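A minimal sketch of the kind of rank analysis described above, written in Python with SciPy rather than the paper's Wolfram Mathematica; the matrix here is a hypothetical toy incidence-style system, not data from the paper:

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import structural_rank
from scipy.sparse.linalg import lsqr

# Hypothetical toy system: rows are flow-conservation equations of a small
# graph, columns are arc-flow unknowns (illustrative, not from the paper).
A = csr_matrix(np.array([
    [1.0, -1.0,  0.0,  0.0],
    [0.0,  1.0, -1.0,  0.0],
    [0.0,  0.0,  1.0, -1.0],
]))
b = np.array([0.0, 2.0, -1.0])

# Structural rank (from the sparsity pattern alone) bounds the numerical rank.
print("structural rank:", structural_rank(A))
print("numerical rank:", np.linalg.matrix_rank(A.toarray()))

# The system is underdetermined (4 unknowns, rank 3); lsqr returns a
# least-squares solution without densifying the matrix.
x = lsqr(A, b)[0]
print("residual norm:", np.linalg.norm(A @ x - b))
```

For the large network-structured systems the abstract mentions, the point of the sparse representation is exactly this: rank and solution can be computed without ever forming the dense matrix.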
Change of Learning Environment Using Game Production: Theory, Methods and Practice
DEFF Research Database (Denmark)
Reng, Lars; Kofoed, Lise; Schoenau-Fog, Henrik
2018-01-01
Game Based Learning has proven to have many possibilities for supporting better learning outcomes, when using educational or commercial games in the classroom. However, there is also a great potential in using game development as a motivator in other kinds of learning scenarios. This study will focus on cases in which development of games did change the learning environments into production units where students or employees were producing games as part of the learning process. The cases indicate that the motivation as well as the learning curve became very high. The pedagogical theories and methods are based on Problem Based Learning (PBL), but are developed further by combining PBL with a production-oriented/design based approach. We illustrate the potential of using game production as a learning environment with investigation of three game productions. We can conclude that using game...
An object recognition method based on fuzzy theory and BP networks
Wu, Chuan; Zhu, Ming; Yang, Dong
2006-01-01
Choosing feature vectors is difficult when a neural network is used to recognize objects: if they are not chosen appropriately, different objects may yield similar feature vectors, while the same object may yield different ones under scaling, shifting or rotation. To solve this problem, edge detection is applied to the image, the membership function is reconstructed, and a new threshold segmentation method based on fuzzy theory is proposed to obtain a binary image. The moment invariants of the binary image are extracted and normalized. Because some moment invariants are too small to compute with effectively, their logarithms are taken as the input feature vectors of the BP network. The experimental results demonstrate that the proposed approach recognizes objects effectively, correctly and quickly.
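The log-of-moment-invariants idea can be sketched with the first three Hu invariants in plain NumPy; the test image, the epsilon guard and the helper names below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def hu_moments(img):
    """First three Hu moment invariants of a binary image (plain NumPy)."""
    y, x = np.mgrid[:img.shape[0], :img.shape[1]].astype(float)
    m00 = img.sum()
    xc, yc = (x * img).sum() / m00, (y * img).sum() / m00
    def mu(p, q):  # scale-normalized central moments
        return ((x - xc)**p * (y - yc)**q * img).sum() / m00**(1 + (p + q) / 2)
    n20, n02, n11 = mu(2, 0), mu(0, 2), mu(1, 1)
    n30, n03, n21, n12 = mu(3, 0), mu(0, 3), mu(2, 1), mu(1, 2)
    h1 = n20 + n02
    h2 = (n20 - n02)**2 + 4 * n11**2
    h3 = (n30 - 3*n12)**2 + (3*n21 - n03)**2
    return np.array([h1, h2, h3])

def log_features(img, eps=1e-30):
    """Signed log transform: rescales tiny invariants into a usable range."""
    h = hu_moments(img)
    return np.sign(h) * np.log10(np.abs(h) + eps)

# Hypothetical binary object and a shifted copy: features should coincide,
# since central moments are translation invariant.
img = np.zeros((64, 64)); img[20:40, 25:35] = 1
shifted = np.roll(img, (7, -5), axis=(0, 1))
print(log_features(img))
print(np.allclose(log_features(img), log_features(shifted)))
```

The log transform is the step the abstract motivates: higher-order invariants can be many orders of magnitude smaller than low-order ones, and a BP network trains poorly on inputs spanning such a range.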
Hybrid systems, optimal control and hybrid vehicles theory, methods and applications
Böhme, Thomas J
2017-01-01
This book assembles new methods showing the automotive engineer for the first time how hybrid vehicle configurations can be modeled as systems with discrete and continuous controls. These hybrid systems describe naturally and compactly the networks of embedded systems which use elements such as integrators, hysteresis, state-machines and logical rules to describe the evolution of continuous and discrete dynamics and arise inevitably when modeling hybrid electric vehicles. They can throw light on systems which may otherwise be too complex or recondite. Hybrid Systems, Optimal Control and Hybrid Vehicles shows the reader how to formulate and solve control problems which satisfy multiple objectives which may be arbitrary and complex with contradictory influences on fuel consumption, emissions and drivability. The text introduces industrial engineers, postgraduates and researchers to the theory of hybrid optimal control problems. A series of novel algorithmic developments provides tools for solving engineering pr...
Advances in dynamic and mean field games theory, applications, and numerical methods
Viscolani, Bruno
2017-01-01
This contributed volume considers recent advances in dynamic games and their applications, based on presentations given at the 17th Symposium of the International Society of Dynamic Games, held July 12-15, 2016, in Urbino, Italy. Written by experts in their respective disciplines, these papers cover various aspects of dynamic game theory including mean-field games, stochastic and pursuit-evasion games, and computational methods for dynamic games. Topics covered include: pedestrian flow in crowded environments; models for climate change negotiations; Nash equilibria for dynamic games involving Volterra integral equations; differential games in healthcare markets; linear-quadratic Gaussian dynamic games; and aircraft control in wind shear conditions. Advances in Dynamic and Mean-Field Games presents state-of-the-art research in a wide spectrum of areas. As such, it serves as a testament to the continued vitality and growth of the field of dynamic games and their applications. It will be of interest to an interdisciplinar...
Description of two-proton radioactivity by the methods of the quantum theory of ternary fission
International Nuclear Information System (INIS)
Kadmenskij, S.G.
2005-01-01
Two-proton decay of spherical nuclei has been investigated on the basis of the formalism of the quantum mechanical theory of ternary fission. The suggested method of constructing partial two-proton-decay-width amplitudes and the asymptotics of the decaying-nucleus wave functions makes it possible to describe two-proton radioactivity without the laborious sewing procedure for internal and external parent-nucleus wave functions in the three-body scheme that is traditionally used in R-matrix approaches. Within the diagonal approximation, the wave-function structure of the Cooper pair of two emitted protons in the parent nucleus was analyzed, as well as the behavior of the wave function describing potential scattering of the two-proton-decay products, taking into account decay-channel coupling and the properties of the interaction potentials between these products.
Gas-Kinetic Theory Based Flux Splitting Method for Ideal Magnetohydrodynamics
Xu, Kun
1998-01-01
A gas-kinetic solver is developed for the ideal magnetohydrodynamics (MHD) equations. The new scheme is based on the direct splitting of the flux function of the MHD equations with the inclusion of "particle" collisions in the transport process. Consequently, the artificial dissipation in the new scheme is much reduced in comparison with the MHD Flux Vector Splitting Scheme. At the same time, the new scheme is compared with the well-developed Roe-type MHD solver. It is concluded that the kinetic MHD scheme is more robust and efficient than the Roe-type method, and its accuracy is competitive. In this paper the general principle of splitting the macroscopic flux function based on gas-kinetic theory is presented. The flux construction strategy may shed some light on possible modifications of AUSM- and CUSP-type schemes for the compressible Euler equations, as well as on the development of new schemes for non-strictly hyperbolic systems.
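The underlying flux-splitting idea can be illustrated on the simplest possible case, scalar advection: split the flux into right- and left-moving parts and difference each one upwind. This is a generic Steger-Warming-style sketch of the concept, not the gas-kinetic MHD scheme of the paper:

```python
import numpy as np

def fvs_step(u, a, dt, dx):
    """One step of flux-vector splitting for u_t + a*u_x = 0 (periodic)."""
    fp = max(a, 0.0) * u           # flux carried by right-moving "particles"
    fm = min(a, 0.0) * u           # flux carried by left-moving "particles"
    dfp = fp - np.roll(fp, 1)      # backward (upwind) difference for F+
    dfm = np.roll(fm, -1) - fm     # forward (upwind) difference for F-
    return u - dt / dx * (dfp + dfm)

# Advect a square pulse on a periodic grid (all parameters illustrative).
n, a = 200, 1.0
dx = 1.0 / n
dt = 0.5 * dx / abs(a)             # CFL condition
xg = np.arange(n) * dx
u = np.where((xg > 0.3) & (xg < 0.5), 1.0, 0.0)
mass0 = u.sum()
for _ in range(100):
    u = fvs_step(u, a, dt, dx)
print("mass conserved:", abs(u.sum() - mass0) < 1e-10)
```

In the kinetic view the abstract describes, the split fluxes arise as half-space moments of a particle distribution function rather than from the sign of a characteristic speed, and the "particle collisions" term is what reduces the artificial dissipation relative to a pure splitting like this one.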
Construction Theory and Noise Analysis Method of Global CGCS2000 Coordinate Frame
Directory of Open Access Journals (Sweden)
Z. Jiang
2018-04-01
The definition, renewal and maintenance of a geodetic datum has been an international hot issue. In recent years, many countries have been studying and implementing the modernization and renewal of their local geodetic reference coordinate frames. Based on precise results of continuous observations over the past 15 years from the state CORS (continuously operating reference system) network and the mainland GNSS (Global Navigation Satellite System) network between 1999 and 2007, this paper studies the construction of a mathematical model of the Global CGCS2000 frame, mainly analyzing the theory and algorithm of the two-step method for Global CGCS2000 Coordinate Frame formulation. Finally, the noise characteristics of the coordinate time series are estimated quantitatively with the maximum likelihood estimation criterion.
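As a toy illustration of maximum-likelihood noise estimation for a coordinate time series, the sketch below fits a linear trend (offset plus station velocity) to a synthetic daily position series and recovers the white-noise sigma; real frame analyses also model colored noise, and every number here is made up:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(500) / 365.25              # ~1.4 years of hypothetical daily epochs
truth = 2.0 + 3.5 * t                    # offset (mm) + velocity (mm/yr), invented
series = truth + rng.normal(0.0, 1.5, t.size)  # true white-noise sigma = 1.5 mm

# Fit the deterministic model; for pure Gaussian white noise the MLE of sigma
# is simply the RMS of the post-fit residuals.
A = np.column_stack([np.ones_like(t), t])
coef, *_ = np.linalg.lstsq(A, series, rcond=None)
resid = series - A @ coef
sigma_mle = np.sqrt(np.mean(resid**2))
print("estimated velocity (mm/yr):", coef[1])
print("white-noise sigma MLE (mm):", sigma_mle)
```

The full MLE machinery used for CORS time series generalizes this by putting a parameterized covariance (white plus flicker and/or random-walk noise) into the likelihood and maximizing over the noise amplitudes as well.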
Group-theoretical method in the many-beam theory of electron diffraction
International Nuclear Information System (INIS)
Kogiso, Motokazu; Takahashi, Hidewo.
1977-01-01
A group-theoretical method is developed for the many-beam dynamical theory of the symmetric Laue case. When the incident wave is directed so that the Laue point lies on a symmetric position in the reciprocal lattice, the dispersion matrix in the fundamental equation can be reduced to a block diagonal form. The transformation matrix is composed of column vectors belonging to irreducible representations of the group of the incident wave vector. Without performing reduction, the reduced form of the dispersion matrix is determined from characters of representations. Practical application is made to the case of symmorphic crystals, where general reduced forms and all solvable examples are given in terms of some geometrical factors of reciprocal lattice arrangements. (auth.)
A new method for the design of slot antenna arrays: Theory and experiment
Clauzier, Sebastien; Mikki, Said M.; Shamim, Atif; Antar, Yahia M. M.
2016-01-01
technique combines basic radiation theory and waveguide propagation theory in a novel analytical model that allows the prediction of the radiation characteristics of generic slots without the need to perform full-wave numerical solution. The analytical model
Bipolar harmonics method in the semiclassical theory of sub-doppler cooling
International Nuclear Information System (INIS)
Bezverbnyi, A.V.
2000-01-01
The bipolar harmonics method is extended to the case of complex elliptic polarization vectors. The method is used to study, on the basis of the semiclassical theory, the multipole moments of the ground state of atoms under conditions of sub-Doppler cooling in a monochromatic light field possessing spatial polarization gradients. It is shown that for stationary atoms with an initially isotropic distribution over sublevels, the multipole moments of rank κ decompose, in accordance with the parity of the rank κ, over one of two minimal sets of bipolar harmonics with different symmetry under inversion. An expansion of the velocity-linear corrections to the multipole moments with respect to the indicated minimal sets of bipolar harmonics is studied for a stationary state, and the expansion coefficients are analyzed. The orientation vector J of the atomic ensemble is studied on the basis of the proposed method for the dipole transition 1/2 → 1/2, and the light-induced forces for a specific 2D configuration of the light field, including radiation friction forces and Lorentz-type forces, are analyzed.