RISM theory distribution functions for Lennard-Jones interaction site fluids
International Nuclear Information System (INIS)
Johnson, E.; Hazoume, R.P.
1978-01-01
Reference interaction site model (RISM) theory distribution functions for Lennard-Jones interaction site fluids are discussed. The comparison with computer simulation results suggests that these distribution functions are as accurate as RISM distribution functions for fused hard sphere molecular fluids.
Application of the RISM theory to Lennard-Jones interaction site molecular fluids
International Nuclear Information System (INIS)
Johnson, E.; Hazoume, R.P.
1979-01-01
It seems that reference interaction site model (RISM) theory atom-atom distribution functions have been obtained directly from the RISM equations only for fused hard sphere molecular fluids. RISM distribution functions for Lennard-Jones interaction site fluids are presented. The results presented suggest that these distribution functions are as accurate as RISM distribution functions for fused hard sphere molecular fluids.
3D RISM theory with fast reciprocal-space electrostatics
Energy Technology Data Exchange (ETDEWEB)
Heil, Jochen; Kast, Stefan M., E-mail: stefan.kast@tu-dortmund.de [Physikalische Chemie III, Technische Universität Dortmund, Otto-Hahn-Str. 6, 44227 Dortmund (Germany)
2015-03-21
The calculation of electrostatic solute-solvent interactions in 3D RISM ("three-dimensional reference interaction site model") integral equation theory is recast in a form that allows for a computational treatment analogous to the "particle-mesh Ewald" formalism as used for molecular simulations. In addition, relations that connect 3D RISM correlation functions and interaction potentials with thermodynamic quantities such as the chemical potential and average solute-solvent interaction energy are reformulated in a way that calculations of expensive real-space electrostatic terms on the 3D grid are completely avoided. These methodical enhancements allow for both a significant speedup, particularly for large solute systems, and a smoother convergence of predicted thermodynamic quantities with respect to box size, as illustrated for several benchmark systems.
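The particle-mesh Ewald treatment referenced above rests on splitting the Coulomb kernel into a short-range part that decays quickly in real space and a smooth long-range part handled in reciprocal space. A minimal sketch of that splitting identity (an illustration of the general Ewald idea, not the authors' implementation):

```python
import math

def coulomb_split(r, alpha):
    """Split the bare Coulomb kernel 1/r into two parts.

    The erfc part decays fast and is summed in real space; the erf part
    is smooth and can be evaluated on a grid in reciprocal space, as in
    particle-mesh Ewald. alpha controls where the crossover happens.
    """
    short = math.erfc(alpha * r) / r
    long_ = math.erf(alpha * r) / r
    return short, long_

# The two parts always recombine exactly to the bare Coulomb kernel:
for r in (0.5, 1.0, 3.0):
    s, l = coulomb_split(r, alpha=2.0)
    assert abs(s + l - 1.0 / r) < 1e-12
```

The speedup comes from the long-range part being band-limited, so it converges with few reciprocal-space vectors while the erfc part needs only a short real-space cutoff.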
Yoshida, Norio
2018-05-01
A new method is proposed for finding the minimum free energy pathway (MFEP) of ion and small-molecule transport through a protein, based on the three-dimensional reference interaction site model (3D-RISM) theory combined with the string method. The 3D-RISM theory produces the distribution function, or potential of mean force (PMF), for the transported species around the given protein structures. By applying the string method to the PMF surface, one can readily determine the MFEP on that surface. The method is applied to the Na+ conduction pathway of channelrhodopsin as an example.
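The string method referenced above discretizes a path between two minima into beads, relaxes each bead down the PMF gradient, and reparametrizes the beads to equal arclength so the string does not collapse into the minima. A toy sketch on an assumed analytic double-well surface (not the channelrhodopsin PMF):

```python
import numpy as np

def pmf_grad(p):
    # Toy PMF V(x, y) = (x^2 - 1)^2 + 2*y^2 with minima at (+-1, 0);
    # the exact MFEP between them is the straight line y = 0.
    x, y = p[:, 0], p[:, 1]
    return np.stack([4 * x * (x**2 - 1), 4 * y], axis=1)

def string_method(n_beads=21, n_iter=500, dt=0.01):
    # Initial guess: straight line between the minima, bowed upward in y.
    t = np.linspace(0, 1, n_beads)
    path = np.stack([2 * t - 1, np.sin(np.pi * t)], axis=1)
    for _ in range(n_iter):
        path -= dt * pmf_grad(path)          # relax beads down the gradient
        path[0], path[-1] = [-1, 0], [1, 0]  # pin the endpoints at the minima
        # Reparametrize beads to equal arclength by linear interpolation.
        seg = np.linalg.norm(np.diff(path, axis=0), axis=1)
        s = np.concatenate([[0.0], np.cumsum(seg)])
        s_new = np.linspace(0, s[-1], n_beads)
        path = np.stack([np.interp(s_new, s, path[:, k]) for k in (0, 1)], axis=1)
    return path

path = string_method()
assert np.abs(path[:, 1]).max() < 1e-2  # string converges to the y = 0 MFEP
```

On a real PMF the gradient would come from finite differences of the 3D-RISM distribution rather than an analytic formula, but the relax-then-reparametrize loop is the same.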
Analysis of biomolecular solvation sites by 3D-RISM theory.
Sindhikara, Daniel J; Hirata, Fumio
2013-06-06
We derive, implement, and apply equilibrium solvation site analysis for biomolecules. Our method utilizes 3D-RISM calculations to quickly obtain equilibrium solvent distributions without either necessity of simulation or limits of solvent sampling. Our analysis of these distributions extracts highest likelihood poses of solvent as well as localized entropies, enthalpies, and solvation free energies. We demonstrate our method on a structure of HIV-1 protease where excellent structural and thermodynamic data are available for comparison. Our results, obtained within minutes, show systematic agreement with available experimental data. Further, our results are in good agreement with established simulation-based solvent analysis methods. This method can be used not only for visual analysis of active site solvation but also for virtual screening methods and experimental refinement.
Huang, WenJuan; Blinov, Nikolay; Kovalenko, Andriy
2015-04-30
The octanol-water partition coefficient is an important physical-chemical characteristic widely used to describe hydrophobic/hydrophilic properties of chemical compounds. The partition coefficient is related to the transfer free energy of a compound from water to octanol. Here, we introduce a new protocol for prediction of the partition coefficient based on the statistical-mechanical, 3D-RISM-KH molecular theory of solvation. It was shown recently that with the compound-solvent correlation functions obtained from the 3D-RISM-KH molecular theory of solvation, the free energy functional supplemented with the correction linearly related to the partial molar volume obtained from the Kirkwood-Buff/3D-RISM theory, also called the "universal correction" (UC), provides accurate prediction of the hydration free energy of small compounds, compared to explicit solvent molecular dynamics [Palmer, D. S.; J. Phys.: Condens. Matter 2010, 22, 492101]. Here we report that with the UC reparametrized accordingly this theory also provides an excellent agreement with the experimental data for the solvation free energy in nonpolar solvent (1-octanol) and so accurately predicts the octanol-water partition coefficient. The performance of the Kovalenko-Hirata (KH) and Gaussian fluctuation (GF) functionals of the solvation free energy, with and without UC, is tested on a large library of small compounds with diverse functional groups. The best agreement with the experimental data for octanol-water partition coefficients is obtained with the KH-UC solvation free energy functional.
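The relation invoked above, that the partition coefficient follows from the difference of solvation free energies in the two solvents, can be written as log P = (ΔG_water − ΔG_octanol)/(RT ln 10). A small sketch with illustrative, hypothetical free energy values (not data from the paper):

```python
import math

R = 1.987204e-3  # gas constant, kcal/(mol K)
T = 298.15       # K

def log_p(dG_water, dG_octanol, temp=T):
    """Octanol-water log P from solvation free energies (kcal/mol).

    Transfer free energy from water to octanol is
    dG_transfer = dG_octanol - dG_water, and
    log P = -dG_transfer / (RT ln 10).
    """
    return (dG_water - dG_octanol) / (R * temp * math.log(10))

# Hypothetical compound that is better solvated in octanol than in water:
lp = log_p(dG_water=-3.0, dG_octanol=-6.0)
print(round(lp, 2))  # positive log P: partitions into octanol
```

A 3 kcal/mol preference for octanol corresponds to roughly 2.2 log units at room temperature, which is the kind of sensitivity that makes the free energy functional's accuracy matter.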
Including diverging electrostatic potential in 3D-RISM theory: The charged wall case
Vyalov, Ivan; Rocchia, Walter
2018-03-01
Although three-dimensional site-site molecular integral equations of liquids are a powerful tool of modern theoretical chemistry, their applications to the problem of characterizing the electrical double layer originating at the solid-liquid interface with a macroscopic substrate are severely limited by the fact that an infinitely extended charged plane generates a divergent electrostatic potential. Such potentials cannot be treated within the standard 3D-Reference Interaction Site Model equation solution framework, since they lead to functions that are not Fourier transformable. In this paper, we apply a renormalization procedure to overcome this obstacle. We then check the validity and numerical accuracy of the proposed computational scheme on the prototypical gold (111) surface in contact with a water/alkali chloride solution. We observe that, although the proposed method requires a higher spatial resolution to achieve converged charge densities than that suited to the estimation of biomolecular solvation with either 3D-RISM or continuum electrostatics approaches, it is still computationally efficient. By introducing the electrostatic potential of an infinite wall, which is periodic in two dimensions, we avoid edge effects, permit a robust integration of Poisson's equation, and obtain the 3D electrostatic potential profile for the first time in such calculations. We show that the potential within the electrical double layer presents oscillations which are not captured by the Debye-Hückel and Gouy-Chapman theories. This electrostatic potential deviates from its average by up to 1-2 V at small distances from the substrate along the lateral directions. Applications of this theoretical development are relevant, for example, for liquid scanning tunneling microscopy imaging.
Energy Technology Data Exchange (ETDEWEB)
Yokogawa, D., E-mail: d.yokogawa@chem.nagoya-u.ac.jp [Department of Chemistry, Graduate School of Science, Nagoya University, Chikusa, Nagoya 464-8602 (Japan); Institute of Transformative Bio-Molecules (WPI-ITbM), Nagoya University, Chikusa, Nagoya 464-8602 (Japan)
2016-09-07
The theoretical design of bright bio-imaging molecules is a rapidly progressing field. However, because of the system size and the required computational accuracy, the number of theoretical studies is, to our knowledge, limited. To overcome these difficulties, we developed a new method based on the reference interaction site model self-consistent field approach explicitly including the spatial electron density distribution, combined with time-dependent density functional theory. We applied it to the calculation of indole and 5-cyanoindole in the ground and excited states in the gas and solution phases. The changes in the optimized geometries were clearly explained with resonance structures, and the Stokes shift was correctly reproduced.
Directory of Open Access Journals (Sweden)
Alexander E. Kobryn
2016-04-01
Although better means to model the properties of bulk heterojunction molecular blends are much needed in the field of organic optoelectronics, only a small subset of methods, based on molecular dynamics and Monte Carlo approaches, has hitherto been employed to guide or replace empirical characterization and testing. Here, we present the first use of the integral equation theory of molecular liquids in modelling the structural properties of blends of phenyl-C61-butyric acid methyl ester (PCBM) with poly(3-hexylthiophene) (P3HT) and a carboxylated poly(3-butylthiophene) (P3BT), respectively. For this, we use the Reference Interaction Site Model (RISM) with the Universal Force Field (UFF) to compute the microscopic structure of blends and obtain insight into the miscibility of their components. Input parameters for RISM, such as optimized molecular geometries and the charge distribution of interaction sites, are derived by Density Functional Theory (DFT) methods. We also run Molecular Dynamics (MD) simulations to compare the diffusivity of PCBM in binary blends with P3HT and P3BT, respectively. A remarkably good agreement with available experimental data and results of alternative modelling/simulation is observed for PCBM in the P3HT system. We interpret this as a step in the validation of the use of our approach for organic photovoltaics and support of its results for new systems that do not have reference data for comparison or calibration. In particular, for the less-studied P3BT, our results show that expectations about its performance in binary blends with PCBM may be overestimated, as it does not demonstrate the required level of miscibility and short-range structural organization. In addition, the simulated mobility of PCBM in P3BT is somewhat higher than what is expected for polymer blends and falls into a range typical for fluids. The significance of our predictive multi-scale modelling lies in the insights it offers into nanoscale
Range Information Systems Management (RISM) Phase 1 Report
Bastin, Gary L.; Harris, William G.; Nelson, Richard A.
2002-01-01
RISM investigated alternative approaches, technologies, and communication network architectures to facilitate building the Spaceports and Ranges of the future. RISM started by documenting most existing US ranges and their capabilities. In parallel, RISM obtained inputs from the following: 1) NASA and NASA-contractor engineers and managers; 2) aerospace leaders from government, academia, and industry, participating through the Space Based Range Distributed System Working Group (SBRDSWG), many of whom are also 3) members of the Advanced Range Technology Working Group (ARTWG) subgroups, and 4) members of the Advanced Spaceport Technology Working Group (ASTWG). These diverse inputs helped to envision advanced technologies for implementing future Ranges and Range systems that build on today's cabled and wireless legacy infrastructures while seamlessly integrating both today's emerging and tomorrow's building-block communication techniques. The fundamental key is to envision a transition to a Space Based Range Distributed Subsystem. The enabling concept is to identify the specific needs of Range users that can be solved through applying emerging communication tech
Luchko, Tyler; Blinov, Nikolay; Limon, Garrett C; Joyce, Kevin P; Kovalenko, Andriy
2016-11-01
Implicit solvent methods for classical molecular modeling are frequently used to provide fast, physics-based hydration free energies of macromolecules. Less commonly considered is the transferability of these methods to other solvents. The Statistical Assessment of Modeling of Proteins and Ligands 5 (SAMPL5) distribution coefficient dataset and the accompanying explicit solvent partition coefficient reference calculations provide a direct test of solvent model transferability. Here we use the 3D reference interaction site model (3D-RISM) statistical-mechanical solvation theory, with a well-tested water model and a new united-atom cyclohexane model, to calculate partition coefficients for the SAMPL5 dataset. The cyclohexane model performed well in training and testing (R = 0.98 for amino acid neutral side chain analogues), but only if a parameterized solvation free energy correction was used. In contrast, the same protocol, using single solute conformations, performed poorly on the SAMPL5 dataset, obtaining R = 0.73 compared to the reference partition coefficients, likely due to the much larger solute sizes. Including solute conformational sampling through molecular dynamics coupled with 3D-RISM (MD/3D-RISM) improved agreement with the reference calculation to R = 0.93. Since our initial calculations only considered partition coefficients and not distribution coefficients, solute sampling provided little benefit when comparing against experiment, where ionized and tautomer states are more important. Applying a simple pKa correction improved agreement with experiment from R = 0.54 to R = 0.66, despite a small number of outliers. Better agreement is possible by accounting for tautomers and improving the ionization correction.
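The pKa correction mentioned above converts the neutral-species partition coefficient log P into a distribution coefficient log D at a given pH by discounting the ionized fraction, which is assumed to remain in the aqueous phase. A minimal sketch for a monoprotic compound (a standard Henderson-Hasselbalch-style correction, with illustrative numbers, not the paper's protocol):

```python
import math

def log_d(log_p, pka, ph, acid=True):
    """Distribution coefficient from log P and a single pKa.

    Assumes only the neutral form partitions into the organic phase:
    log D = log P - log10(1 + 10^(pH - pKa))  for an acid,
    log D = log P - log10(1 + 10^(pKa - pH))  for a base.
    """
    expo = (ph - pka) if acid else (pka - ph)
    return log_p - math.log10(1.0 + 10.0 ** expo)

# Carboxylic-acid-like example (illustrative values):
ld = log_d(log_p=2.0, pka=4.4, ph=7.4)
# Three pH units above the pKa the compound is almost fully ionized,
# so log D sits about 3 units below log P.
```

At pH far below the pKa of an acid the correction vanishes and log D approaches log P, which is why the correction matters most for the ionizable SAMPL5 compounds.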
International Nuclear Information System (INIS)
Cao, Siqin; Sheong, Fu Kit; Huang, Xuhui
2015-01-01
The reference interaction site model (RISM) has recently become a popular approach in the study of thermodynamic and structural properties of the solvent around macromolecules. On the other hand, it has been widely suggested that water density depletion exists around large hydrophobic solutes (>1 nm), and this may pose a great challenge to the RISM theory. In this paper, we develop a new analytical theory, the Reference Interaction Site Model with Hydrophobicity-induced density Inhomogeneity (RISM-HI), to compute the solvent radial distribution function (RDF) around a large hydrophobic solute in water as well as in its mixtures with other polyatomic organic solvents. To achieve this, we have explicitly considered the density inhomogeneity at the solute-solvent interface using the framework of the Yvon-Born-Green hierarchy, and the RISM theory is used to obtain the solute-solvent pair correlation. In order to efficiently solve the relevant equations while maintaining reasonable accuracy, we have also developed a new closure called the D2 closure. With this new theory, the solvent RDFs around a large hydrophobic particle in water and in different water-acetonitrile mixtures could be computed, and they agree well with the results of molecular dynamics simulations. Furthermore, we show that our RISM-HI theory can also efficiently compute the solvation free energy of solutes with a wide range of hydrophobicity in various water-acetonitrile solvent mixtures with reasonable accuracy. We anticipate that our theory could be widely applied to compute the thermodynamic and structural properties of the solvation of hydrophobic solutes.
Mahmoud, Hosam M
2011-01-01
A cutting-edge look at the emerging distributional theory of sorting. Research on distributions associated with sorting algorithms has grown dramatically over the last few decades, spawning many exact and limiting distributions of complexity measures for many sorting algorithms. Yet much of this information has been scattered in disparate and highly specialized sources throughout the literature. In Sorting: A Distribution Theory, leading authority Hosam Mahmoud compiles, consolidates, and clarifies the large volume of available research, providing a much-needed, comprehensive treatment of the
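The "distribution theory of sorting" studies cost measures of sorting algorithms as random variables, the classic example being the number of comparisons quicksort makes on a random permutation, whose exact mean is 2(n+1)H_n − 4n. A quick empirical sketch of that fact (standard result, first-element pivot):

```python
import random

def quicksort_comparisons(a):
    """Count comparisons made by basic quicksort (first element as pivot)."""
    if len(a) <= 1:
        return 0
    pivot, rest = a[0], a[1:]
    left = [x for x in rest if x < pivot]
    right = [x for x in rest if x >= pivot]
    # Partitioning compares each of the len(rest) elements to the pivot once.
    return len(rest) + quicksort_comparisons(left) + quicksort_comparisons(right)

random.seed(0)
n = 100
H = sum(1.0 / k for k in range(1, n + 1))        # harmonic number H_n
mean_theory = 2 * (n + 1) * H - 4 * n            # exact expected comparisons
trials = [quicksort_comparisons(random.sample(range(n), n)) for _ in range(400)]
mean_emp = sum(trials) / len(trials)
assert abs(mean_emp - mean_theory) / mean_theory < 0.05
```

The empirical histogram of `trials` is exactly the kind of object whose limiting law the book characterizes.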
Theoretical analysis of the domain-swapped dimerization of cytochrome c: An MD and 3D-RISM approach
Yoshida, Norio; Higashi, Masahiro; Motoki, Hideyoshi; Hirota, Shun
2018-01-01
The structural stability of a cytochrome c domain-swapped dimer compared with that of the monomer was investigated by molecular dynamics (MD) simulations and by three-dimensional reference interaction site model (3D-RISM) theory. The structural fluctuation and structural energy of cytochrome c were treated by MD simulations, and the solvation thermodynamics was treated by 3D-RISM theory. The domain-swapped dimer state is slightly less stable than the monomer state, which is consistent with experimental observations; the total free energy difference is calculated as 25 kcal mol-1. The conformational change and translational/rotational entropy change contribute to the destabilization of the dimer, whereas the hydration and vibrational entropy contribute to the stabilization. Further analyses on the residues located at the hinge loop for swapping were conducted, and the results reveal details at the molecular level of the structural and interaction changes upon dimerization.
International Nuclear Information System (INIS)
Curro, J.G.; Schweizer, K.S.; Grest, G.S.; Kremer, K.; Corporate Research Science Laboratory, Exxon Research and Engineering Company, Annandale, New Jersey 08801; Institut fur Festkorperforschung der Kernforschungsanlage Julich, D-5170 Julich, Federal Republic of Germany)
1989-01-01
Recently we (J.G.C. and K.S.S.) formulated a tractable "reference interaction site model" (RISM) integral equation theory of flexible polymer liquids. The purpose of this paper is to compare the results of the theory with recent molecular dynamics simulations (G.S.G. and K.K.) on dense chain liquids of degree of polymerization N=50 and 200. Specific comparisons were made between theory and simulation for the intramolecular structure factor ω(k) and the intermolecular radial distribution function g(r) in the liquid. In particular, it was possible to independently test the assumptions inherent in the RISM theory and the additional ideality approximation that was made in the initial application of the theory. This comparison was accomplished by calculating the intermolecular g(r) using the simulated intramolecular structure factor, as well as ω(k) derived from a freely jointed chain model. The RISM theory results, using the simulated ω(k), were found to be in excellent agreement, over all length scales, with the g(r) from molecular dynamics simulations. The theoretical predictions using the "ideal" intramolecular structure factor tended to underestimate g(r) near contact, indicating local intramolecular expansion of the chains. This local expansion can be incorporated into the theory self-consistently by including the effects of the "medium-induced" potential on the intramolecular structure.
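In Fourier space, the RISM equation referenced above couples intra- and intermolecular correlations through the site-site structure. A standard form (symbols follow common usage in the RISM literature, not necessarily the paper's exact notation) is:

```latex
\hat{h}(k) \;=\; \hat{\omega}(k)\,\hat{c}(k)\,\hat{\omega}(k)
            \;+\; \rho\,\hat{\omega}(k)\,\hat{c}(k)\,\hat{h}(k)
```

where ω̂(k) is the intramolecular structure factor, ĉ(k) the site-site direct correlation matrix, ĥ(k) the total correlation matrix, and ρ the site density. Closing this system with the "ideal" ω̂(k) versus the simulated one is exactly the comparison carried out in the paper.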
Omelyan, Igor; Kovalenko, Andriy
2015-04-14
We developed a generalized solvation force extrapolation (GSFE) approach to speed up multiple time step molecular dynamics (MTS-MD) of biomolecules steered with mean solvation forces obtained from the 3D-RISM-KH molecular theory of solvation (three-dimensional reference interaction site model with the Kovalenko-Hirata closure). GSFE is based on a set of techniques including the non-Eckart-like transformation of coordinate space separately for each solute atom, extension of the force-coordinate pair basis set followed by selection of the best subset, balancing the normal equations by modified least-squares minimization of deviations, and incremental increase of the outer time step in motion integration. Mean solvation forces acting on the biomolecule atoms in conformations at successive inner time steps are extrapolated using a relatively small number of best (closest) solute atomic coordinates and corresponding mean solvation forces obtained at previous outer time steps by converging the 3D-RISM-KH integral equations. The MTS-MD evolution steered with GSFE of 3D-RISM-KH mean solvation forces is efficiently stabilized with our optimized isokinetic Nosé-Hoover chain (OIN) thermostat. We validated the hybrid MTS-MD/OIN/GSFE/3D-RISM-KH integrator on solvated organic and biomolecules of different stiffness and complexity: an asphaltene dimer in toluene solvent, hydrated alanine dipeptide, miniprotein 1L2Y, and protein G. The GSFE accuracy and the OIN efficiency allowed us to enlarge outer time steps up to huge values of 1-4 ps while accurately reproducing conformational properties. Quasidynamics steered with 3D-RISM-KH mean solvation forces achieves time scale compression of conformational changes coupled with solvent exchange, resulting in further significant acceleration of protein conformational sampling with respect to real time dynamics. Overall, this provided a 50- to 1000-fold effective speedup of conformational sampling for these systems, compared to conventional MD.
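The core extrapolation idea above, predicting an expensive mean solvation force at an inner time step from a least-squares combination of recently stored coordinate/force pairs, can be shown in miniature. This is a deliberately simplified toy (a nearest-basis least-squares fit on a hypothetical linear force model), not the full GSFE algorithm with its non-Eckart transformations and basis selection:

```python
import numpy as np

def extrapolate_force(history_x, history_f, x_new, n_basis=4):
    """Least-squares force extrapolation from stored coordinate/force pairs.

    Picks the n_basis stored coordinates closest to x_new, expresses x_new
    as a least-squares combination of them, and applies the same weights
    to the stored forces (the core idea behind schemes like GSFE).
    """
    d = np.linalg.norm(history_x - x_new, axis=1)
    idx = np.argsort(d)[:n_basis]
    X = history_x[idx].T                      # columns = basis coordinates
    w, *_ = np.linalg.lstsq(X, x_new, rcond=None)
    return history_f[idx].T @ w

rng = np.random.default_rng(0)
K = np.array([[2.0, 0.3], [0.3, 1.0]])        # toy harmonic "solvation" model
xs = rng.normal(size=(20, 2))                 # stored outer-step coordinates
fs = -(xs @ K.T)                              # force is linear in coordinates
x_new = rng.normal(size=2)
f_pred = extrapolate_force(xs, fs, x_new)
# For a linear force model the extrapolation is exact:
assert np.allclose(f_pred, -K @ x_new, atol=1e-8)
```

For a genuinely anharmonic force the prediction degrades with distance from the stored basis, which is why GSFE keeps refreshing the basis from converged 3D-RISM-KH solutions at the outer steps.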
Commutative monads as a theory of distributions
DEFF Research Database (Denmark)
Kock, Anders
2012-01-01
It is shown how the theory of commutative monads provides an axiomatic framework for several aspects of distribution theory in a broad sense, including probability distributions, physical extensive quantities, and Schwartz distributions of compact support. Among the particular aspects considered here are the notions of convolution, density, expectation, and conditional probability.
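The monadic view above can be made concrete for the simplest case: finite probability distributions form a monad whose unit is the point mass and whose bind pushes a distribution through a kernel; convolution and expectation then fall out of the structure. An illustrative sketch (finitely supported distributions only, far short of the paper's generality):

```python
from fractions import Fraction
from collections import defaultdict

def unit(x):
    """Monadic unit: the point-mass (Dirac) distribution at x."""
    return {x: Fraction(1)}

def bind(dist, kernel):
    """Monadic bind: push dist through a kernel mapping x to a distribution."""
    out = defaultdict(Fraction)
    for x, p in dist.items():
        for y, q in kernel(x).items():
            out[y] += p * q
    return dict(out)

def expectation(dist):
    return sum(x * p for x, p in dist.items())

die = {i: Fraction(1, 6) for i in range(1, 7)}
# Convolution of two independent dice expressed as nested binds:
two_dice = bind(die, lambda a: bind(die, lambda b: unit(a + b)))
assert two_dice[7] == Fraction(1, 6)
assert expectation(two_dice) == 7
```

Commutativity of the monad is what makes the nested binds order-independent, which is precisely the algebraic fact underlying convolution of independent quantities.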
Das Répertoire International des Sources Musicales (RISM) nach fünfzig Jahren
Directory of Open Access Journals (Sweden)
Heckmann, Harald
2001-12-01
In 1952, a Commission Mixte formed by members of the International Musicological Society (IMS) and the International Association of Music Libraries (IAML) constituted in Paris the Répertoire International des Sources Musicales (RISM). In 1971, the first two volumes appeared: the Écrits imprimés concernant la musique, edited by François Lesure. In 1960, a central editorial office (Zentralredaktion) was founded in Kassel for the series of the Einzeldrucke vor 1800/Single Prints before 1800. Consisting of 13 volumes, this is now complete with the exception of an index. Since moving to Frankfurt am Main in 1987, the Zentralredaktion has concentrated its work on the Handschriften nach 1600/Manuscripts after 1600. Because of its scope, this project presented RISM with an enormous challenge and, therefore, digitalization was a very early priority. In 1995, the first CD-ROM with the RISM manuscript database appeared, followed each year by a cumulative edition; the sixth CD appeared at the end of 2000, containing about 350,000 entries from 575 libraries in 31 countries. Looking at what has been achieved in the last 50 years, and taking into account the series of special catalogs, now expanded to 29 printed volumes, the result deserves respect. There are, however, gaps. These include the fact that in some countries with an especially rich tradition, sources not yet accessible will have to be integrated into the RISM catalog. Spain is such a country, and F. Gonzalez Valle has rendered great service in making these sources accessible. His pupils and successors are called upon to mobilize all forces so that, within the international RISM community, Spain takes up the place appropriate to its past and present rich musical culture.
Distribution theory of algebraic numbers
Yang, Chung-Chun
2008-01-01
The book provides a timely survey of new research results and related developments in Diophantine approximation, a division of number theory which deals with the approximation of real numbers by rational numbers. The book is appended with a list of challenging open problems and a comprehensive list of references. From the contents: field extensions, algebraic numbers, algebraic geometry, height functions, the abc-conjecture, Roth's theorem, subspace theorems, Vojta's conjectures, L-functions.
Mathematical theories of distributed sensor networks
Iyengar, Sitharama S; Balakrishnan, N
2014-01-01
Mathematical Theory of Distributed Sensor Networks demonstrates how mathematical theories can be used to provide distributed sensor modeling and to solve important problems such as coverage hole detection and repair. The book introduces the mathematical and computational structure by discussing what distributed sensor networks are, their applications, and how they differ from traditional systems. The text also explains how mathematics is utilized to provide efficient techniques implementing effective coverage, deployment, transmission, data processing, signal processing, and data protection within distributed sensor networks. Finally, the authors discuss some important challenges facing mathematics in order to gain more insight into the multidisciplinary area of distributed sensor networks.
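A concrete instance of the coverage-hole problem described above: mark the cells of a discretized region that lie outside every sensor's sensing disc. A toy sketch with assumed sensor positions and a unit-disc sensing model:

```python
import numpy as np

def coverage_holes(sensors, radius, size=1.0, n=50):
    """Boolean mask of grid cells in [0, size]^2 not covered by any sensor.

    sensors: list of (x, y) positions; radius: common sensing radius.
    """
    xs = np.linspace(0, size, n)
    gx, gy = np.meshgrid(xs, xs)
    covered = np.zeros((n, n), dtype=bool)
    for sx, sy in sensors:
        covered |= (gx - sx) ** 2 + (gy - sy) ** 2 <= radius ** 2
    return ~covered

# Four sensors near the corners leave a hole in the middle of the region.
holes = coverage_holes([(0.2, 0.2), (0.8, 0.2), (0.2, 0.8), (0.8, 0.8)], 0.35)
assert holes[25, 25]      # the center is uncovered (a hole)
assert not holes[10, 10]  # near a sensor: covered
```

Grid scanning is the brute-force baseline; the book's topological methods detect the same holes without discretizing the region.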
Distribution system reliability evaluation using credibility theory
African Journals Online (AJOL)
Xufeng Xu, Joydeep Mitra
The authors have found that credibility theory, which broadens the scope of fuzzy set theory, is an effective tool for representing fuzzy events, and have developed a theoretical framework for applying it to distribution system reliability evaluation. Based on the status of switches, the distribution system can be divided into multiple SPSS, which are connected with tie switches.
Distributed hash table theory, platforms and applications
Zhang, Hao; Xie, Haiyong; Yu, Nenghai
2013-01-01
This SpringerBrief summarizes the development of Distributed Hash Table in both academic and industrial fields. It covers the main theory, platforms and applications of this key part in distributed systems and applications, especially in large-scale distributed environments. The authors teach the principles of several popular DHT platforms that can solve practical problems such as load balance, multiple replicas, consistency and latency. They also propose DHT-based applications including multicast, anycast, distributed file systems, search, storage, content delivery network, file sharing and c
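The load-balance and replica problems the brief covers are classically attacked with consistent hashing: nodes and keys hash onto a ring, and each key is stored at the first node clockwise from it, so removing a node remaps only that node's keys. A minimal sketch (no virtual nodes or replication, which real DHTs add):

```python
import bisect
import hashlib

def h(s):
    """Deterministic hash of a string onto the ring."""
    return int(hashlib.sha1(s.encode()).hexdigest(), 16)

class Ring:
    """Consistent-hash ring: each key lives on the next node clockwise."""
    def __init__(self, nodes):
        self.points = sorted((h(n), n) for n in nodes)
    def lookup(self, key):
        hashes = [p for p, _ in self.points]
        i = bisect.bisect(hashes, h(key)) % len(self.points)  # wrap around
        return self.points[i][1]
    def remove(self, node):
        self.points = [(p, n) for p, n in self.points if n != node]

ring = Ring([f"node{i}" for i in range(5)])
keys = [f"key{i}" for i in range(200)]
before = {k: ring.lookup(k) for k in keys}
ring.remove("node3")
after = {k: ring.lookup(k) for k in keys}
moved = [k for k in keys if before[k] != after[k]]
# Only keys previously owned by the removed node were remapped:
assert all(before[k] == "node3" for k in moved)
```

This locality of remapping is exactly what makes DHT-based storage and content delivery scale under churn.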
Tielker, Nicolas; Heil, Jochen; Kloss, Thomas; Ehrhart, Sebastian; Güssregen, Stefan; Schmidt, K. Friedemann; Kast, Stefan M.
2016-01-01
We predict cyclohexane–water distribution coefficients (log D7.4) for drug-like molecules taken from the SAMPL5 blind prediction challenge by the "embedded cluster reference interaction site model" (EC-RISM) integral equation theory. This task involves the coupled problem of predicting both partition coefficients (log P) of neutral species between the solvents and aqueous acidity constants (pKa) in order to account for a change of protonation states. The first issue is addressed by calibrating an EC-RISM-based model for solvation free energies derived from the "Minnesota Solvation Database" (MNSOL) for both water and cyclohexane, utilizing a correction based on the partial molar volume, yielding a root mean square error (RMSE) of 2.4 kcal mol−1 for water and 0.8–0.9 kcal mol−1 for cyclohexane, depending on the parametrization. The second is treated by employing, on the one hand, an empirical pKa model (MoKa) and, on the other hand, an EC-RISM-derived regression of published acidity constants (RMSE...
Distributed computer systems theory and practice
Zedan, H S M
2014-01-01
Distributed Computer Systems: Theory and Practice is a collection of papers dealing with the design and implementation of operating systems, including distributed systems such as the Amoeba system, Argus, Andrew, and Grapevine. One paper discusses concepts and notations for concurrent programming, particularly the language notation used in computer programming and synchronization methods, and also compares three classes of languages. Another paper explains load balancing or load redistribution to improve system performance, namely static balancing and adaptive load balancing. For program effici
Microchimerism and cancer among women (Mikrokimærisme og kræft blandt kvinder)
DEFF Research Database (Denmark)
Hansen, Katrine Pedersbæk; Kamper-Jørgensen, Mads
2017-01-01
According to Greek mythology, the Chimera was a monstrous hybrid of a lioness, a snake and a goat. In modern science, a chimera denotes an organism consisting of cells or organs of different origin. Microchimerism denotes a condition in which a host harbours small numbers of cells that...
Learning theory of distributed spectral algorithms
International Nuclear Information System (INIS)
Guo, Zheng-Chu; Lin, Shao-Bo; Zhou, Ding-Xuan
2017-01-01
Spectral algorithms have been widely used and studied in learning theory and inverse problems. This paper is concerned with distributed spectral algorithms, for handling big data, based on a divide-and-conquer approach. We present a learning theory for these distributed kernel-based learning algorithms in a regression framework including nice error bounds and optimal minimax learning rates achieved by means of a novel integral operator approach and a second order decomposition of inverse operators. Our quantitative estimates are given in terms of regularity of the regression function, effective dimension of the reproducing kernel Hilbert space, and qualification of the filter function of the spectral algorithm. They do not need any eigenfunction or noise conditions and are better than the existing results even for the classical family of spectral algorithms. (paper)
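The divide-and-conquer strategy analyzed in this record can be sketched in a few lines: partition the data into disjoint subsets, run a kernel-based spectral algorithm (here kernel ridge regression) on each subset, and average the local estimators. The kernel, data, and parameters below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_kernel(A, B, gamma):
    # squared-exponential kernel matrix between two point sets
    d2 = np.sum((A[:, None, :] - B[None, :, :]) ** 2, axis=2)
    return np.exp(-gamma * d2)

def krr_fit(X, y, lam, gamma):
    """Kernel ridge regression on one data subset; returns a predictor."""
    alpha = np.linalg.solve(
        gaussian_kernel(X, X, gamma) + lam * len(X) * np.eye(len(X)), y)
    return lambda Xq: gaussian_kernel(Xq, X, gamma) @ alpha

def distributed_krr(X, y, m, lam, gamma):
    """Divide-and-conquer: fit m local estimators on disjoint subsets
    and average their predictions."""
    parts = np.array_split(rng.permutation(len(X)), m)
    fs = [krr_fit(X[p], y[p], lam, gamma) for p in parts]
    return lambda Xq: np.mean([f(Xq) for f in fs], axis=0)

X = rng.uniform(-1, 1, (400, 1))
y = np.sin(3 * X[:, 0]) + 0.1 * rng.standard_normal(400)
f_hat = distributed_krr(X, y, m=4, lam=1e-3, gamma=10.0)

Xtest = np.linspace(-1, 1, 50)[:, None]
mse = np.mean((f_hat(Xtest) - np.sin(3 * Xtest[:, 0])) ** 2)
print(f"test MSE of averaged estimator: {mse:.4f}")
```

Each local solve costs O((n/m)^3) instead of O(n^3); the paper's contribution is showing when such averaging still attains the optimal minimax rate.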
Nucleon parton distributions in chiral perturbation theory
International Nuclear Information System (INIS)
Moiseeva, Alena
2013-01-01
Properties of the chiral expansion of nucleon light-cone operators have been studied. In the framework of chiral perturbation theory we have demonstrated that the convergence of the chiral expansion of nucleon parton distributions strongly depends on the value of the variable x. Three regions in x with essentially different analytical properties of the resulting chiral expansion for parton distributions were found. For each of the regions we have elaborated special power counting rules corresponding to the partial resummation of the chiral series. The nonlocal effective operators for the vector and the axial nucleon parton distributions have been constructed at the zeroth and the first chiral order. Using the derived nonlocal operators and the derived power counting rules we have obtained the second-order expressions for the nucleon GPDs H(x,ξ,Δ²), H̃(x,ξ,Δ²) and E(x,ξ,Δ²), valid in the region x ≳ a_χ^2.
Applied optimal control theory of distributed systems
Lurie, K A
1993-01-01
This book represents an extended and substantially revised version of my earlier book, Optimal Control in Problems of Mathematical Physics, originally published in Russian in 1975. About 60% of the text has been completely revised and major additions have been included which have produced a practically new text. My aim was to modernize the presentation but also to preserve the original results, some of which are little known to a Western reader. The idea of composites, which is the core of the modern theory of optimization, was initiated in the early seventies. The reader will find here its implementation in the problem of optimal conductivity distribution in an MHD-generator channel flow. Since then it has emerged into an extensive theory which is undergoing a continuous development. The book does not pretend to be a textbook, neither does it offer a systematic presentation of the theory. Rather, it reflects a concept which I consider as fundamental in the modern approach to optimization of distributed systems. ...
International Nuclear Information System (INIS)
Curro, J.G.; Schweizer, K.S.
1989-01-01
We have recently developed a new theoretical approach to the study of polymer liquids. The theory is based on the ''reference interaction site model'' (RISM theory) of Chandler and Andersen, which has been successful in describing the structure of small molecule liquids. We have recently extended our polymer RISM theory to the case of polymer blends. In the present investigation we have applied this theory to two special binary blends: (1) the athermal mixture where we isolate structural effects, and (2) the isotopic mixture in which structurally identical polymer chains interact with dissimilar attractive interactions. By studying these two special cases we are able to obtain insights into the molecular factors which control the miscibility in polymer mixtures. 18 refs., 2 figs
Solvation effects on chemical shifts by embedded cluster integral equation theory.
Frach, Roland; Kast, Stefan M
2014-12-11
The accurate computational prediction of nuclear magnetic resonance (NMR) parameters like chemical shifts represents a challenge if the species studied is immersed in strongly polarizing environments such as water. Common approaches to treating a solvent in the form of, e.g., the polarizable continuum model (PCM) ignore strong directional interactions such as H-bonds to the solvent which can have substantial impact on magnetic shieldings. We here present a computational methodology that accounts for atomic-level solvent effects on NMR parameters by extending the embedded cluster reference interaction site model (EC-RISM) integral equation theory to the prediction of chemical shifts of N-methylacetamide (NMA) in aqueous solution. We examine the influence of various so-called closure approximations of the underlying three-dimensional RISM theory as well as the impact of basis set size and different treatment of electrostatic solute-solvent interactions. We find considerable and systematic improvement over reference PCM and gas phase calculations. A smaller basis set in combination with a simple point charge model already yields good performance which can be further improved by employing exact electrostatic quantum-mechanical solute-solvent interaction energies. A larger basis set benefits more significantly from exact over point charge electrostatics, which can be related to differences of the solvent's charge distribution.
The implications of migration theory for distributive justice
Sager, Alex
2012-01-01
This paper explores the implications of empirical theories of migration for normative accounts of migration and distributive justice. It examines neo-classical economics, world-systems theory, dual labor market theory, and feminist approaches to migration and contends that neo-classical economic theory in isolation provides an inadequate understanding of migration. Other theories provide a fuller account of how national and global economic, political, and social institutions cause and shape m...
Applying Distributed Learning Theory in Online Business Communication Courses.
Walker, Kristin
2003-01-01
Focuses on the critical use of technology in online formats that entail relatively new teaching media. Argues that distributed learning theory is valuable for teachers of online business communication courses for several reasons. Discusses the application of distributed learning theory to the teaching of business communication online. (SG)
Diffraction Theory and Almost Periodic Distributions
Strungaru, Nicolae; Terauds, Venta
2016-09-01
We introduce and study the notions of translation bounded tempered distributions, and autocorrelation for a tempered distribution. We further introduce the spaces of weakly, strongly and null weakly almost periodic tempered distributions and show that for weakly almost periodic tempered distributions the Eberlein decomposition holds. For translation bounded measures all these notions coincide with the classical ones. We show that tempered distributions with measure Fourier transform are weakly almost periodic and that for this class, the Eberlein decomposition is exactly the Fourier dual of the Lebesgue decomposition, with the Fourier-Bohr coefficients specifying the pure point part of the Fourier transform. We complete the project by looking at a few interesting examples.
Continuous and distributed systems theory and applications
Sadovnichiy, Victor
2014-01-01
In this volume, the authors close the gap between abstract mathematical approaches, such as abstract algebra, number theory, nonlinear functional analysis, partial differential equations, methods of nonlinear and multi-valued analysis, on the one hand, and practical applications in nonlinear mechanics, decision making theory and control theory on the other. Readers will also benefit from the presentation of modern mathematical modeling methods for the numerical solution of complicated engineering problems in hydromechanics, geophysics and mechanics of continua. This compilation will be of interest to mathematicians and engineers working at the interface of these fields. It presents selected works of the open seminar series of Lomonosov Moscow State University and the National Technical University of Ukraine “Kyiv Polytechnic Institute”. The authors come from Germany, Italy, Spain, Russia, Ukraine, and the USA.
The neoclassical theory of growth and distribution
Directory of Open Access Journals (Sweden)
Robert M. Solow
2000-12-01
The paper surveys the neoclassical theory of growth. As a preliminary, the meaning of the adjective "neoclassical" is discussed. The basic model is then sketched, and the conditions ensuring a stationary state are illustrated. The issue of the convergence to a stationary state (and that of the speed of convergence) is further considered. A discussion of "primary factors" opens the way to the "new" theory of growth, with endogenous technical progress. A number of extensions of the basic model are then recalled: two-sector and multi-sectoral models, overlapping generations models, the role of money in growth models.
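The basic model surveyed here fits in a few lines of code. The Cobb-Douglas technology and all parameter values below are illustrative assumptions, not taken from the paper:

```python
# Solow model in intensive form with Cobb-Douglas technology y = k**alpha:
# capital per effective worker obeys
#   k_{t+1} = k_t + s * k_t**alpha - (n + g + delta) * k_t
# and converges to the stationary state k* = (s/(n+g+delta))**(1/(1-alpha)).
alpha, s, n, g, delta = 0.3, 0.2, 0.01, 0.02, 0.05   # illustrative parameters

k_star = (s / (n + g + delta)) ** (1.0 / (1.0 - alpha))  # closed form

k = 1.0
for _ in range(500):                                  # iterate the dynamics
    k += s * k ** alpha - (n + g + delta) * k

print(f"closed-form stationary state: {k_star:.4f}")
print(f"simulated after 500 periods:  {k:.4f}")
```

The linearized convergence speed near k* is (1 - alpha)(n + g + delta), which is the "speed of convergence" issue the survey discusses.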
Raney Distributions and Random Matrix Theory
Forrester, Peter J.; Liu, Dang-Zheng
2015-03-01
Recent works have shown that the family of probability distributions with moments given by the Fuss-Catalan numbers permit a simple parameterized form for their density. We extend this result to the Raney distribution which by definition has its moments given by a generalization of the Fuss-Catalan numbers. Such computations begin with an algebraic equation satisfied by the Stieltjes transform, which we show can be derived from the linear differential equation satisfied by the characteristic polynomial of random matrix realizations of the Raney distribution. For the Fuss-Catalan distribution, an equilibrium problem characterizing the density is identified. The Stieltjes transform for the limiting spectral density of the singular values squared of the matrix product formed from inverse standard Gaussian matrices, and standard Gaussian matrices, is shown to satisfy a variant of the algebraic equation relating to the Raney distribution. Supported on , we show that it too permits a simple functional form upon the introduction of an appropriate choice of parameterization. As an application, the leading asymptotic form of the density as the endpoints of the support are approached is computed, and is shown to have some universal features.
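The moment characterization at the heart of this record is easy to check numerically. Below, a sketch with illustrative matrix sizes: the Fuss-Catalan numbers FC_p(n) = binom(pn, n)/((p-1)n + 1) are compared with empirical moments of squared singular values, with p = 2 (the Catalan numbers, Marchenko-Pastur) for one Gaussian matrix and p = 3 for a product of two:

```python
import numpy as np
from math import comb

def fuss_catalan(p, n):
    # binom(p*n, n) / ((p-1)*n + 1); always an integer
    return comb(p * n, n) // ((p - 1) * n + 1)

rng = np.random.default_rng(1)
N = 800

# one matrix: squared singular values of G/sqrt(N), Marchenko-Pastur law
G1 = rng.standard_normal((N, N))
lam1 = np.linalg.svd(G1 / np.sqrt(N), compute_uv=False) ** 2

# product of two matrices: moments given by Fuss-Catalan with p = 3
G2 = rng.standard_normal((N, N))
lam2 = np.linalg.svd(G2 @ G1 / N, compute_uv=False) ** 2

for n in range(1, 4):
    print(f"n={n}: one matrix {np.mean(lam1**n):.3f} vs FC_2={fuss_catalan(2, n)}, "
          f"product {np.mean(lam2**n):.3f} vs FC_3={fuss_catalan(3, n)}")
```

The expected values are 1, 2, 5 (Catalan) for the single matrix and 1, 3, 12 for the product, up to finite-size corrections.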
Latitudinal phytoplankton distribution and the neutral theory of biodiversity
Chust, Guillem; Irigoien, Xabier; Chave, Jérôme; Harris, Roger P.
2012-01-01
Recent studies have suggested that global diatom distributions are not limited by dispersal, in the case of both extant species and fossil species, but rather that environmental filtering explains their spatial patterns. Hubbell's neutral theory
Radar meteors range distribution model. I. Theory
Czech Academy of Sciences Publication Activity Database
Pecinová, Drahomíra; Pecina, Petr
2007-01-01
Vol. 37, No. 2 (2007), pp. 83-106, ISSN 1335-1842. R&D Projects: GA ČR GA205/03/1405. Institutional research plan: CEZ:AV0Z10030501. Keywords: physics of meteors; radar meteors; range distribution. Subject RIV: BN - Astronomy, Celestial Mechanics, Astrophysics
Distributed Leadership through the Lens of Activity Theory
Yuen, Jeanne Ho Pau; Victor Chen, Der-Thanq; Ng, David
2016-01-01
Purpose: Using Activity Theory as an interpretive lens to examine the distribution of leadership, this paper shares a case study on how leadership for an ICT project was distributed in a Singapore school. Method: The case study involved observations of 49 meetings and 34 interviews of leaders and the teachers who were involved in the ICT project.…
Small molecule hydration energy and entropy from 3D-RISM
Johnson, J.; Case, D. A.; Yamazaki, T.; Gusarov, S.; Kovalenko, A.; Luchko, T.
2016-09-01
Implicit solvent models offer an attractive way to estimate the effects of a solvent environment on the properties of small or large solutes without the complications of explicit simulations. One common test of accuracy is to compute the free energy of transfer from gas to liquid for a variety of small molecules, since many of these values have been measured. Studies of the temperature dependence of these values (i.e. solvation enthalpies and entropies) can provide additional insights into the performance of implicit solvent models. Here, we show how to compute temperature derivatives of hydration free energies for the 3D-RISM integral equation approach. We have computed hydration free energies of 1123 small drug-like molecules (both neutral and charged). Temperature derivatives were also used to calculate hydration energies and entropies of 74 of these molecules (both neutral and charged) for which experimental data is available. While direct results have rather poor agreement with experiment, we have found that several previously proposed linear hydration free energy correction schemes give good agreement with experiment. These corrections also provide good agreement for hydration energies and entropies though simple extensions are required in some cases.
Small molecule hydration energy and entropy from 3D-RISM
International Nuclear Information System (INIS)
Johnson, J; Case, D A; Yamazaki, T; Gusarov, S; Kovalenko, A; Luchko, T
2016-01-01
Implicit solvent models offer an attractive way to estimate the effects of a solvent environment on the properties of small or large solutes without the complications of explicit simulations. One common test of accuracy is to compute the free energy of transfer from gas to liquid for a variety of small molecules, since many of these values have been measured. Studies of the temperature dependence of these values (i.e. solvation enthalpies and entropies) can provide additional insights into the performance of implicit solvent models. Here, we show how to compute temperature derivatives of hydration free energies for the 3D-RISM integral equation approach. We have computed hydration free energies of 1123 small drug-like molecules (both neutral and charged). Temperature derivatives were also used to calculate hydration energies and entropies of 74 of these molecules (both neutral and charged) for which experimental data is available. While direct results have rather poor agreement with experiment, we have found that several previously proposed linear hydration free energy correction schemes give good agreement with experiment. These corrections also provide good agreement for hydration energies and entropies though simple extensions are required in some cases. (paper)
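The thermodynamic bookkeeping behind the temperature-derivative approach in these two records can be sketched independently of the 3D-RISM machinery. The free energy curve below is a hypothetical smooth stand-in, not actual 3D-RISM output; a central finite difference plays the role of the analytical temperature derivative derived in the paper:

```python
# Given a solvation free energy curve dG(T), the hydration entropy and
# enthalpy follow from the temperature derivative:
#   dS = -d(dG)/dT,    dH = dG + T * dS.
def solvation_free_energy(T):
    # hypothetical model curve in kcal/mol (illustrative coefficients)
    return -6.3 + 0.020 * (T - 298.15) + 1.5e-4 * (T - 298.15) ** 2

def entropy_enthalpy(T, h=0.5):
    dG = solvation_free_energy(T)
    # central difference: exact for this quadratic model curve
    dS = -(solvation_free_energy(T + h) - solvation_free_energy(T - h)) / (2 * h)
    dH = dG + T * dS
    return dG, dS, dH

dG, dS, dH = entropy_enthalpy(298.15)
print(f"dG = {dG:.3f} kcal/mol, T*dS = {298.15 * dS:.3f} kcal/mol, "
      f"dH = {dH:.3f} kcal/mol")
```

Because dG is a small difference of large terms, entropy/enthalpy splittings amplify model errors, which is why the correction schemes mentioned in the abstract need simple extensions for these quantities.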
Critique of the neoclassical theory of growth and distribution
Directory of Open Access Journals (Sweden)
Luigi L. Pasinetti
2000-12-01
The paper surveys the main theories of income distribution in their relationship with the theories of economic growth. First, the Classical approach is considered, focusing on the Ricardian theory. Then the neoclassical theory is discussed, highlighting its origins (Böhm-Bawerk, Wicksell, Clark) and the role of the aggregate production function. The emergence of a "Keynesian" theory of income distribution in the wake of Harrod's model of growth is then recalled, together with the surprising resurgence of the neoclassical theory (following the contributions of Solow and Meade). But, as the paper shows, the neoclassical theory of income distribution lacks logical consistency and has shaky foundations, as has been revealed by the severe critiques levelled at the neoclassical production function. Mainstream economic literature circumvents this problem by simply ignoring it, while the models of endogenous growth exclude the issue of distribution theory from their consideration. However, while mainstream economics bypasses the problems of income distribution, this is too relevant an issue to be ignored, and a number of new research lines, briefly surveyed, try new approaches to it.
Chiral perturbation theory for nucleon generalized parton distributions
Energy Technology Data Exchange (ETDEWEB)
Diehl, M. [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany); Manashov, A. [Regensburg Univ. (Germany). Inst. fuer Physik 1 - Theoretische Physik; Sankt-Petersburg State Univ. (Russian Federation). Dept. of Theoretical Physics]; Schaefer, A. [Regensburg Univ. (Germany). Inst. fuer Physik 1 - Theoretische Physik]
2006-08-15
We analyze the moments of the isosinglet generalized parton distributions H, E, H̃, Ẽ of the nucleon in one-loop order of heavy-baryon chiral perturbation theory. We discuss in detail the construction of the operators in the effective theory that are required to obtain all corrections to a given order in the chiral power counting. The results will serve to improve the extrapolation of lattice results to the chiral limit. (orig.)
Towards a simple mathematical theory of citation distributions.
Katchanov, Yurij L
2015-01-01
The paper is written with the assumption that the purpose of a mathematical theory of citation is to explain bibliometric regularities at the level of mathematical formalism. A mathematical formalism is proposed for the appearance of power law distributions in social citation systems. The principal contributions of this paper are an axiomatic characterization of citation distributions in terms of the Ekeland variational principle and a mathematical exploration of the power law nature of citation distributions. Apart from its inherent value in providing a better understanding of the mathematical underpinnings of bibliometric models, such an approach can be used to derive a citation distribution from first principles.
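A toy version of the power-law claim can be run end to end: generate synthetic citation counts with P(k) ~ k^(-gamma) by rounding a continuous Pareto sample, then recover the exponent with the standard discrete approximation to the maximum-likelihood estimator. The exponent and cutoff below are illustrative, not from the paper:

```python
import numpy as np

# Synthetic citation counts with tail P(k) ~ k**(-gamma) for k >= k_min,
# then the discrete-approximation MLE (Hill-type estimator):
#   gamma_hat = 1 + n / sum(ln(k_i / (k_min - 1/2)))
rng = np.random.default_rng(0)
gamma, k_min, n = 2.5, 2, 50000

u = rng.random(n)
x = (k_min - 0.5) * (1.0 - u) ** (-1.0 / (gamma - 1.0))  # continuous Pareto
k = np.floor(x + 0.5).astype(int)                        # integer counts >= k_min

gamma_hat = 1.0 + n / np.sum(np.log(k / (k_min - 0.5)))
print(f"true exponent {gamma}, estimated {gamma_hat:.3f}")
```

The (k_min - 1/2) shift is the usual continuity correction for fitting a discrete power law with the continuous-data estimator.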
Product Distribution Theory for Control of Multi-Agent Systems
Lee, Chia Fan; Wolpert, David H.
2004-01-01
Product Distribution (PD) theory is a new framework for controlling Multi-Agent Systems (MAS's). First we review one motivation of PD theory, as the information-theoretic extension of conventional full-rationality game theory to the case of bounded-rational agents. In this extension the equilibrium of the game is the optimizer of a Lagrangian of the probability distribution of the joint state of the agents. Accordingly we can consider a team game in which the shared utility is a performance measure of the behavior of the MAS. For such a scenario the game is at equilibrium - the Lagrangian is optimized - when the joint distribution of the agents optimizes the system's expected performance. One common way to find that equilibrium is to have each agent run a reinforcement learning algorithm. Here we investigate the alternative of exploiting PD theory to run gradient descent on the Lagrangian. We present computer experiments validating some of the predictions of PD theory for how best to do that gradient descent. We also demonstrate how PD theory can improve performance even when we are not allowed to rerun the MAS from different initial conditions, a requirement implicit in some previous work.
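The gradient-descent-on-the-Lagrangian idea can be sketched for two bounded-rational agents holding independent (product) distributions over three moves. The shared cost matrix, temperature, and finite-difference scheme below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

# Descend the maxent Lagrangian L(q) = E_q[G] - T*(H(q1) + H(q2)) for a
# shared cost G, with each agent's distribution parametrized by a softmax.
rng = np.random.default_rng(0)
G = np.array([[3.0, 2.0, 4.0],
              [1.0, 0.0, 2.0],
              [4.0, 2.0, 3.0]])   # shared cost, minimized at the move pair (1, 1)
T = 0.1                           # temperature: degree of bounded rationality

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def lagrangian(theta):
    q1, q2 = softmax(theta[0]), softmax(theta[1])
    expected_cost = q1 @ G @ q2
    entropy = -(q1 @ np.log(q1)) - (q2 @ np.log(q2))
    return expected_cost - T * entropy

theta = 0.01 * rng.standard_normal((2, 3))
for _ in range(2000):             # finite-difference gradient descent on L
    grad = np.zeros_like(theta)
    for idx in np.ndindex(*theta.shape):
        d = np.zeros_like(theta)
        d[idx] = 1e-5
        grad[idx] = (lagrangian(theta + d) - lagrangian(theta - d)) / 2e-5
    theta -= 0.3 * grad

q1, q2 = softmax(theta[0]), softmax(theta[1])
print("agent 1 strategy:", np.round(q1, 3))
print("agent 2 strategy:", np.round(q2, 3))
```

At the optimum each agent's distribution is Boltzmann in its expected cost given the other agent, so both concentrate on the jointly optimal move pair.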
Scaling theory of quantum resistance distributions in disordered systems
International Nuclear Information System (INIS)
Jayannavar, A.M.
1991-01-01
The large scale distribution of quantum Ohmic resistance of a disordered one-dimensional conductor is derived explicitly. It is shown that in the thermodynamic limit this distribution is characterized by two independent parameters for strong disorder, leading to a two-parameter scaling theory of localization. Only in the limit of weak disorder is single-parameter scaling, consistent with existing theoretical treatments, recovered. (author). 33 refs., 4 figs
Scaling theory of quantum resistance distributions in disordered systems
International Nuclear Information System (INIS)
Jayannavar, A.M.
1990-05-01
We have derived explicitly the large scale distribution of quantum Ohmic resistance of a disordered one-dimensional conductor. We show that in the thermodynamic limit this distribution is characterized by two independent parameters for strong disorder, leading to a two-parameter scaling theory of localization. Only in the limit of weak disorder do we recover single-parameter scaling, consistent with existing theoretical treatments. (author). 32 refs, 4 figs
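The multiplicative character of 1D quantum resistance behind these results can be illustrated with a standard random-phase composition law; the single-scatterer resistance and all parameters below are illustrative assumptions, not the paper's model:

```python
import numpy as np

# Adding one scatterer of dimensionless resistance rho_s with a uniformly
# random phase phi composes the chain resistance rho as
#   rho' = rho + rho_s + 2*rho*rho_s
#          + 2*cos(phi)*sqrt(rho*rho_s*(1 + rho)*(1 + rho_s)),
# so ln(1 + rho) performs a random walk and the resistance becomes
# log-normally distributed at large length.
rng = np.random.default_rng(0)
n_samples, n_sites, rho_s = 2000, 400, 0.05

rho = np.zeros(n_samples)
for _ in range(n_sites):
    phi = rng.uniform(0.0, 2.0 * np.pi, n_samples)
    rho = (rho + rho_s + 2.0 * rho * rho_s
           + 2.0 * np.cos(phi) * np.sqrt(rho * rho_s * (1.0 + rho) * (1.0 + rho_s)))

log_r = np.log1p(rho)   # ln(1 + rho)
print(f"<ln(1+rho)> = {log_r.mean():.2f} "
      f"(phase-average prediction: n*ln(1+rho_s) = {n_sites*np.log1p(rho_s):.2f})")
print(f"var[ln(1+rho)] = {log_r.var():.2f}")
```

Averaging over the uniform phase gives exactly <ln(1 + rho')> = <ln(1 + rho)> + ln(1 + rho_s), so the mean of the log-resistance grows linearly with length; in the weak-disorder limit the variance tracks the mean (one parameter), while stronger disorder decouples them, which is the two-parameter regime of the abstract.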
Theories of distributive justice and post-apartheid South Africa
Knight, Carl
2014-01-01
South Africa is a highly distributively unequal country, and its inequality continues to be largely along racial lines. Such circumstances call for assessment from the perspective of contemporary theories of distributive justice. Three such theories—Rawlsian justice, utilitarianism, and luck egalitarianism—are described and applied. Rawls' difference principle recommends that the worst off be made as well as they can be, a standard which South Africa clearly falls short of. Utilitarianism rec...
Nuclear properties with realistic Hamiltonians through spectral distribution theory
International Nuclear Information System (INIS)
Vary, J.P.; Belehrad, R.; Dalton, B.J.
1979-01-01
Motivated by the need for non-perturbative methods for utilizing realistic nuclear Hamiltonians H, the authors use spectral distribution theory, based on calculated moments of H, to obtain specific bulk and valence properties of finite nuclei. The primary emphasis here is to present results for the binding energies of nuclei obtained with and without an assumed core. (Auth.)
A Positive and a Normative Theory of Income Distribution
J. Tinbergen (Jan)
1970-01-01
A positive theory of income distribution based on assumptions concerning the supply of and demand for each type of productive service is presented. The demand function of the organizers of production may be derived from the maximization of profits with the income scale and the production
Species distributions, quantum theory, and the enhancement of biodiversity measures
DEFF Research Database (Denmark)
Real, Raimundo; Barbosa, A. Márcia; Bull, Joseph William
2017-01-01
Species distributions are typically represented by records of their observed occurrence at a given spatial and temporal scale. Such records are inevitably incomplete and contingent on the spatial–temporal circumstances under which the observations were made. Moreover, organisms may respond… biodiversity”. We show how conceptualizing species’ distributions in this way could help overcome important weaknesses in current biodiversity metrics, both in theory and by using a worked case study of mammal distributions in Spain over the last decade. We propose that considerable theoretical advances could…
Marshall-Olkin Distributions: Advances in Theory and Applications
Durante, Fabrizio; Mulinacci, Sabrina
2015-01-01
This book presents the latest advances in the theory and practice of Marshall-Olkin distributions. These distributions have been increasingly applied in statistical practice in recent years, as they make it possible to describe interesting features of stochastic models like non-exchangeability, tail dependencies and the presence of a singular component. The book presents cutting-edge contributions in this research area, with a particular emphasis on financial and economic applications. It is recommended for researchers working in applied probability and statistics, as well as for practitioners interested in the use of stochastic models in economics. This volume collects selected contributions from the conference “Marshall-Olkin Distributions: Advances in Theory and Applications,” held in Bologna on October 2-3, 2013.
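The singular component mentioned in this record comes directly out of the classical shock construction of the bivariate Marshall-Olkin exponential, which is easy to verify by simulation. The rates below are illustrative:

```python
import numpy as np

# Shock construction: three independent exponential shocks with rates
# l1, l2, l12; component lifetimes are X = min(E1, E12), Y = min(E2, E12).
# The common shock E12 produces the singular component on the diagonal:
#   P(X == Y) = l12 / (l1 + l2 + l12).
rng = np.random.default_rng(0)
l1, l2, l12, n = 1.0, 2.0, 0.5, 200000

e1 = rng.exponential(1.0 / l1, n)
e2 = rng.exponential(1.0 / l2, n)
e12 = rng.exponential(1.0 / l12, n)
x, y = np.minimum(e1, e12), np.minimum(e2, e12)

print(f"P(X = Y) empirical: {np.mean(x == y):.4f}, "
      f"theory: {l12 / (l1 + l2 + l12):.4f}")
print(f"E[X] empirical: {x.mean():.4f}, theory: {1.0 / (l1 + l12):.4f}")
```

The marginals stay exponential (X with rate l1 + l12), while the positive mass on {X = Y} is exactly the non-absolutely-continuous feature that makes these distributions useful for modelling simultaneous defaults or failures.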
A geometric theory for Lévy distributions
International Nuclear Information System (INIS)
Eliazar, Iddo
2014-01-01
Lévy distributions are of prime importance in the physical sciences, and their universal emergence is commonly explained by the Generalized Central Limit Theorem (CLT). However, the Generalized CLT is a geometry-less probabilistic result, whereas physical processes usually take place in an embedding space whose spatial geometry is often of substantial significance. In this paper we introduce a model of random effects in random environments which, on the one hand, retains the underlying probabilistic structure of the Generalized CLT and, on the other hand, adds a general and versatile underlying geometric structure. Based on this model we obtain geometry-based counterparts of the Generalized CLT, thus establishing a geometric theory for Lévy distributions. The theory explains the universal emergence of Lévy distributions in physical settings which are well beyond the realm of the Generalized CLT
A geometric theory for Lévy distributions
Eliazar, Iddo
2014-08-01
Lévy distributions are of prime importance in the physical sciences, and their universal emergence is commonly explained by the Generalized Central Limit Theorem (CLT). However, the Generalized CLT is a geometry-less probabilistic result, whereas physical processes usually take place in an embedding space whose spatial geometry is often of substantial significance. In this paper we introduce a model of random effects in random environments which, on the one hand, retains the underlying probabilistic structure of the Generalized CLT and, on the other hand, adds a general and versatile underlying geometric structure. Based on this model we obtain geometry-based counterparts of the Generalized CLT, thus establishing a geometric theory for Lévy distributions. The theory explains the universal emergence of Lévy distributions in physical settings which are well beyond the realm of the Generalized CLT.
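The Generalized CLT invoked in these two records can be demonstrated numerically: sums of i.i.d. symmetric heavy-tailed steps with tail index alpha < 2, normalized by n^(1/alpha), approach an alpha-stable (Lévy) law whose tail still falls off as x^(-alpha). The step law and sizes below are illustrative, not from the paper:

```python
import numpy as np

# Symmetric Pareto steps with P(|X| > x) = x**(-alpha) for x >= 1; the
# normalized sum S = sum(X_i) / n**(1/alpha) keeps the same power tail,
# P(|S| > x) ~ x**(-alpha), unlike any Gaussian limit.
rng = np.random.default_rng(0)
alpha, n_terms, n_sums = 1.5, 200, 20000

u = rng.random((n_sums, n_terms))
signs = rng.choice([-1.0, 1.0], size=(n_sums, n_terms))
steps = signs * u ** (-1.0 / alpha)      # inverse-transform Pareto sampling
s = steps.sum(axis=1) / n_terms ** (1.0 / alpha)

for x in (10.0, 30.0):
    print(f"P(|S| > {x:>4}): empirical {np.mean(np.abs(s) > x):.4f}, "
          f"Levy tail {x ** -alpha:.4f}")
```

With alpha = 1.5 the variance of each step is infinite, so the classical CLT does not apply; the surviving x^(-1.5) tail of the normalized sums is the Lévy signature.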
Distribution functions and moments in the theory of coagulation
International Nuclear Information System (INIS)
Pich, J.
1990-04-01
Different distribution functions and their moments used in the Theory of coagulation are summarized and analysed. Relations between the moments of these distribution functions are derived and the physical meaning of individual moments is briefly discussed. The time evolution of the moment of order zero (total number concentration) during the coagulation process is analysed for the general kernel of the Smoluchowski equation. On this basis the time evolution of certain physically important quantities related to this moment such as mean particle size, surface and volume as well as surface concentration is described. Equations for the half time of coagulation for the general collision frequency factor are derived. (orig.)
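For the special case of a constant collision kernel, the zeroth-moment evolution and the coagulation half-time described above have a closed form that a few lines of integration can check. The kernel and concentration values below are illustrative:

```python
# Zeroth moment (total number concentration) for Smoluchowski coagulation
# with a constant kernel K:
#   dN/dt = -(K/2) * N**2  =>  N(t) = N0 / (1 + t/tau),
# with coagulation half-time tau = 2 / (K * N0).
K, N0 = 1.0e-9, 1.0e8      # kernel (cm^3/s), initial concentration (cm^-3)
tau = 2.0 / (K * N0)        # = 20 s for these values

dt = 1.0e-3
N, t = N0, 0.0
while t < tau:              # forward-Euler integration up to one half-time
    N -= 0.5 * K * N * N * dt
    t += dt

print(f"N(tau)/N0 = {N / N0:.4f} (analytic value: 0.5000)")
```

For non-constant kernels the moment equations no longer close, which is why the paper works with the general collision frequency factor at the level of moment relations.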
Reservoir theory, groundwater transit time distributions, and lumped parameter models
International Nuclear Information System (INIS)
Etcheverry, D.; Perrochet, P.
1999-01-01
The relation between groundwater residence times and transit times is given by the reservoir theory. It makes it possible to calculate theoretical transit time distributions in a deterministic way, analytically, or on numerical models. Two analytical solutions validate the piston flow and the exponential model for simple conceptual flow systems. A numerical solution of a hypothetical regional groundwater flow shows that lumped parameter models could be applied in some cases to large-scale, heterogeneous aquifers. (author)
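The two classical lumped-parameter models validated in this record are, for mean transit time T0 = V/Q, the piston-flow density g(t) = delta(t - T0) and the exponential (well-mixed) density g(t) = exp(-t/T0)/T0. A quick particle sketch with invented reservoir numbers shows how complete mixing produces the exponential model:

```python
import numpy as np

# In a well-mixed reservoir, each water parcel leaves during a step dt with
# probability Q*dt/V, so transit times are geometric ~ exponential with mean
# T0 = V/Q (illustrative volume and discharge).
rng = np.random.default_rng(0)
V, Q = 1.0e6, 5.0e4         # volume (m^3), discharge (m^3/yr)  ->  T0 = 20 yr
T0 = V / Q

dt, n = 0.05, 20000
steps = rng.geometric(Q * dt / V, size=n)   # exit step for each parcel
t_exit = steps * dt

print(f"mean transit time: {t_exit.mean():.2f} yr (theory {T0:.0f} yr)")
print(f"median transit time: {np.median(t_exit):.2f} yr "
      f"(theory T0*ln2 = {T0 * np.log(2):.2f} yr)")
```

Piston flow is the opposite limit (no mixing, all parcels take exactly T0); real aquifers fall between the two, which is what the numerical solution in the paper probes.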
Towards Resource Theory of Coherence in Distributed Scenarios
Directory of Open Access Journals (Sweden)
Alexander Streltsov
2017-03-01
The search for a simple description of fundamental physical processes is an important part of quantum theory. One example for such an abstraction can be found in the distance lab paradigm: if two separated parties are connected via a classical channel, it is notoriously difficult to characterize all possible operations these parties can perform. This class of operations is widely known as local operations and classical communication. Surprisingly, the situation becomes comparably simple if the more general class of separable operations is considered, a finding that has been extensively used in quantum information theory for many years. Here, we propose a related approach for the resource theory of quantum coherence, where two distant parties can perform only measurements that do not create coherence and can communicate their outcomes via a classical channel. We call this class local incoherent operations and classical communication. While the characterization of this class is also difficult in general, we show that the larger class of separable incoherent operations has a simple mathematical form, yet still preserves the main features of local incoherent operations and classical communication. We demonstrate the relevance of our approach by applying it to three different tasks: assisted coherence distillation, quantum teleportation, and single-shot quantum state merging. We expect that the results we obtain in this work also transfer to other concepts of coherence that are discussed in recent literature. The approach we present here opens new ways to study the resource theory of coherence in distributed scenarios.
Towards Resource Theory of Coherence in Distributed Scenarios
Streltsov, Alexander; Rana, Swapan; Bera, Manabendra Nath; Lewenstein, Maciej
2017-01-01
The search for a simple description of fundamental physical processes is an important part of quantum theory. One example for such an abstraction can be found in the distance lab paradigm: if two separated parties are connected via a classical channel, it is notoriously difficult to characterize all possible operations these parties can perform. This class of operations is widely known as local operations and classical communication. Surprisingly, the situation becomes comparably simple if the more general class of separable operations is considered, a finding that has been extensively used in quantum information theory for many years. Here, we propose a related approach for the resource theory of quantum coherence, where two distant parties can perform only measurements that do not create coherence and can communicate their outcomes via a classical channel. We call this class local incoherent operations and classical communication. While the characterization of this class is also difficult in general, we show that the larger class of separable incoherent operations has a simple mathematical form, yet still preserves the main features of local incoherent operations and classical communication. We demonstrate the relevance of our approach by applying it to three different tasks: assisted coherence distillation, quantum teleportation, and single-shot quantum state merging. We expect that the results we obtain in this work also transfer to other concepts of coherence that are discussed in recent literature. The approach we present here opens new ways to study the resource theory of coherence in distributed scenarios.
Chiral perturbation theory for generalized parton distributions and baryon distribution amplitudes
Energy Technology Data Exchange (ETDEWEB)
Wein, Philipp
2016-05-06
In this thesis we apply low-energy effective field theory to the first moments of generalized parton distributions and to baryon distribution amplitudes, which are both highly relevant for the parametrization of the nonperturbative part in hard processes. These quantities yield complementary information on hadron structure, since the former treat hadrons as a whole and, thus, give information about the (angular) momentum carried by an entire parton species on average, while the latter parametrize the momentum distribution within an individual Fock state. By performing one-loop calculations within covariant baryon chiral perturbation theory, we obtain sensible parametrizations of the quark mass dependence that are ideally suited for the subsequent analysis of lattice QCD data.
Distribution theory with applications in engineering and physics
Teodorescu, Petre P; Toma, Antonela
2013-01-01
In this comprehensive monograph, the authors apply modern mathematical methods to the study of mechanical and physical phenomena or techniques in acoustics, optics, and electrostatics, where classical mathematical tools fail. They present a general method of approaching problems, pointing out different aspects and difficulties that may occur. With respect to the theory of distributions, only the results and the principal theorems are given, along with some supporting mathematical results. The book also systematically deals with a large number of applications to problems of general Newtonian mechanics,
Parallel Distributed Processing Theory in the Age of Deep Networks.
Bowers, Jeffrey S
2017-12-01
Parallel distributed processing (PDP) models in psychology are the precursors of deep networks used in computer science. However, only PDP models are associated with two core psychological claims, namely that all knowledge is coded in a distributed format and cognition is mediated by non-symbolic computations. These claims have long been debated in cognitive science, and recent work with deep networks speaks to this debate. Specifically, single-unit recordings show that deep networks learn units that respond selectively to meaningful categories, and researchers are finding that deep networks need to be supplemented with symbolic systems to perform some tasks. Given the close links between PDP and deep networks, it is surprising that research with deep networks is challenging PDP theory. Copyright © 2017. Published by Elsevier Ltd.
Latitudinal phytoplankton distribution and the neutral theory of biodiversity
Chust, Guillem
2012-11-16
Recent studies have suggested that global diatom distributions are not limited by dispersal, in the case of both extant species and fossil species, but rather that environmental filtering explains their spatial patterns. Hubbell's neutral theory of biodiversity provides a framework in which to test these alternatives. Our aim is to test whether the structure of marine phytoplankton (diatoms, dinoflagellates and coccolithophores) assemblages across the Atlantic agrees with neutral theory predictions. We asked: (1) whether intersite variance in phytoplankton diversity is explained predominantly by dispersal limitation or by environmental conditions; and (2) whether species abundance distributions are consistent with those expected by the neutral model. Location: Meridional transect of the Atlantic (50° N-50° S). Methods: We estimated the relative contributions of environmental factors and geographic distance to phytoplankton composition using similarity matrices, Mantel tests and variation partitioning of the species composition based upon canonical ordination methods. We compared the species abundance distribution of phytoplankton with the neutral model using Etienne's maximum-likelihood inference method. Results: Phytoplankton communities are slightly more determined by niche segregation (24%), than by dispersal limitation and ecological drift (17%). In 60% of communities, the assumption of neutrality in species' abundance distributions could not be rejected. In tropical zones, where oceanic gyres enclose large stable water masses, most communities showed low species immigration rates; in contrast, we infer that communities in temperate areas, out of oligotrophic gyres, have higher rates of species immigration. Conclusions: Phytoplankton community structure is consistent with partial niche assembly and partial dispersal and drift assembly (neutral processes). The role of dispersal limitation is almost as important as habitat filtering, a fact that has been
Application of spectral distributions in effective interaction theory
International Nuclear Information System (INIS)
Chang, B.D.
1980-01-01
The calculation of observable quantities in a large many-particle space is very complicated and often impractical. In effective interaction theory, to simplify the calculation, the full many-particle space is truncated to a small, manageable model space and the operators associated with the observables are renormalized to accommodate the truncation effects. The operator that has been most extensively studied for renormalization is the Hamiltonian. The renormalized Hamiltonian, often called the effective Hamiltonian, can be defined such that it not only gives the eigenvalues, but also the projections of the full-space (true) eigenfunctions onto the model space. These projected wave functions then provide a convenient basis for renormalization of other operators. The usual framework for renormalization is perturbation theory. Unfortunately, the conventional perturbation series for effective Hamiltonians have problems with convergence, and their high order terms (especially 4th or higher) are also difficult to calculate. The characteristics of spectral distributions can be helpful in determining the model space and calculating the effective Hamiltonian. In this talk, applications of spectral distributions are discussed in the following areas: (1) truncation of many particle spaces by selection of configurations; (2) orthogonal polynomial expansions for the effective Hamiltonian; and (3) establishing new criteria for the effective Hamiltonian.
Jeans' criterion and nonextensive velocity distribution function in kinetic theory
International Nuclear Information System (INIS)
Du Jiulin
2004-01-01
The effect of nonextensivity of self-gravitating systems on the Jeans criterion for gravitational instability is studied in the framework of Tsallis statistics. The nonextensivity is introduced into the Jeans problem by a generalized q-nonextensive velocity distribution function, through the equation of state of an ideal gas in nonextensive kinetic theory. A new Jeans criterion is deduced, with a factor √(2/(5-3q)) that, however, differs from the one in [Astron. Astrophys. 396 (2002) 309], and new results on gravitational instability are analyzed for the nonextensive parameter q. An understanding of the physical meaning of q, and a possible seismic observation to find astronomical evidence for a value of q different from unity, are also discussed.
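The quoted correction enters multiplicatively; a minimal Python sketch, assuming (as the abstract states) that the nonextensive Jeans length rescales the classical Boltzmann-Gibbs one by √(2/(5-3q)):

```python
import math

def jeans_factor(q: float) -> float:
    """Nonextensive correction factor sqrt(2 / (5 - 3q)) quoted in the
    abstract; it multiplies the classical Jeans wavelength. The factor
    is real and finite only for q < 5/3."""
    if q >= 5.0 / 3.0:
        raise ValueError("factor diverges for q >= 5/3")
    return math.sqrt(2.0 / (5.0 - 3.0 * q))

# q = 1 recovers the standard Maxwellian Jeans criterion (factor 1);
# q > 1 enlarges the critical wavelength, q < 1 shrinks it.
```

For q → 1 the Maxwellian result is recovered, and the factor diverges as q → 5/3, where the criterion loses meaning.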
From evolution theory to parallel and distributed genetic
CERN. Geneva
2007-01-01
Lecture #1: From Evolution Theory to Evolutionary Computation. Evolutionary computation is a subfield of artificial intelligence (more particularly computational intelligence) addressing combinatorial optimization problems with methods based, to some degree, on the evolution of biological life in the natural world. In this tutorial we will review the source of inspiration for this metaheuristic and its capability for solving problems. We will show the main flavours within the field, and different problems that have been successfully solved employing this kind of technique. Lecture #2: Parallel and Distributed Genetic Programming. The successful application of Genetic Programming (GP, one of the available Evolutionary Algorithms) to optimization problems has encouraged an increasing number of researchers to apply these techniques to a large set of problems. Given the difficulty of some problems, much effort has been applied to improving the efficiency of GP during the last few years. Among the available proposals,...
Palmer, David S; Mišin, Maksim; Fedorov, Maxim V; Llinas, Antonio
2015-09-08
We report a method to predict physicochemical properties of druglike molecules using a classical statistical mechanics based solvent model combined with machine learning. The RISM-MOL-INF method introduced here provides an accurate technique to characterize solvation and desolvation processes based on solute-solvent correlation functions computed by the 1D reference interaction site model of the integral equation theory of molecular liquids. These functions can be obtained in a matter of minutes for most small organic and druglike molecules using existing software (RISM-MOL) (Sergiievskyi, V. P.; Hackbusch, W.; Fedorov, M. V. J. Comput. Chem. 2011, 32, 1982-1992). Predictions of Caco-2 cell permeability and hydration free energy obtained using the RISM-MOL-INF method are shown to be more accurate than the state-of-the-art tools for benchmark data sets. Due to the importance of solvation and desolvation effects in biological systems, it is anticipated that the RISM-MOL-INF approach will find many applications in biophysical and biomedical property prediction.
Dirichlet and Related Distributions Theory, Methods and Applications
Ng, Kai Wang; Tang, Man-Lai
2011-01-01
The Dirichlet distribution appears in many areas of application, which include modelling of compositional data, Bayesian analysis, statistical genetics, and nonparametric inference. This book provides a comprehensive review of the Dirichlet distribution and two extended versions, the Grouped Dirichlet Distribution (GDD) and the Nested Dirichlet Distribution (NDD), arising from likelihood and Bayesian analysis of incomplete categorical data and survey data with non-response. The theoretical properties and applications are also reviewed in detail for other related distributions, such as the inve
GCPSO in cooperation with graph theory to distribution network reconfiguration for energy saving
International Nuclear Information System (INIS)
Assadian, Mehdi; Farsangi, Malihe M.; Nezamabadi-pour, Hossein
2010-01-01
Network reconfiguration for loss reduction in distribution systems is an important way to save energy. This paper investigates the ability of guaranteed convergence particle swarm optimization (GCPSO) and particle swarm optimization (PSO), in cooperation with graph theory, to reconfigure a network so as to reduce power loss and enhance the voltage profile of distribution systems. Numerical results for three distribution systems are presented, illustrating the feasibility of the proposed method using GCPSO and PSO with graph theory. To validate the obtained results, a genetic algorithm (GA) using graph theory is also applied and compared with the proposed GCPSO and PSO approaches.
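The abstract does not spell out how graph theory enters, but a standard role for it in reconfiguration studies is to reject candidate switch configurations that break radiality, i.e. that are not spanning trees of the bus graph. A hypothetical sketch of such a feasibility check (the function name and encoding are illustrative, not from the paper):

```python
def is_radial(n_buses, closed_lines):
    """Check that the closed lines form a radial network (a spanning
    tree): exactly n_buses - 1 closed lines, no loops, all buses
    connected. Uses a simple union-find over the bus indices."""
    if len(closed_lines) != n_buses - 1:
        return False
    parent = list(range(n_buses))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    for u, v in closed_lines:
        ru, rv = find(u), find(v)
        if ru == rv:          # closing this line would create a loop
            return False
        parent[ru] = rv
    return True               # n-1 loop-free edges => spanning tree

# Example: a 4-bus feeder.
assert is_radial(4, [(0, 1), (1, 2), (2, 3)])        # radial chain
assert not is_radial(4, [(0, 1), (1, 2), (2, 0)])    # loop, bus 3 isolated
```

In a PSO loop, particles encoding infeasible (non-radial) configurations would be repaired or penalized before the loss evaluation.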
A Stochastic Theory for Deep Bed Filtration Accounting for Dispersion and Size Distributions
DEFF Research Database (Denmark)
Shapiro, Alexander; Bedrikovetsky, P. G.
2010-01-01
We develop a stochastic theory for filtration of suspensions in porous media. The theory takes into account particle and pore size distributions, as well as the random character of the particle motion, which is described in the framework of the theory of continuous-time random walks (CTRW...
Singularity in the Laboratory Frame Angular Distribution Derived in Two-Body Scattering Theory
Dick, Frank; Norbury, John W.
2009-01-01
The laboratory (lab) frame angular distribution derived in two-body scattering theory exhibits a singularity at the maximum lab scattering angle. The singularity appears in the kinematic factor that transforms the centre of momentum (cm) angular distribution to the lab angular distribution. We show that it is caused in the transformation by the…
Imai, Takashi; Kovalenko, Andriy; Hirata, Fumio
2005-04-14
The three-dimensional reference interaction site model (3D-RISM) theory is applied to the analysis of hydration effects on the partial molar volume of proteins. For the native structure of some proteins, the partial molar volume is decomposed into geometric and hydration contributions using the 3D-RISM theory combined with the geometric volume calculation. The hydration contributions are correlated with the surface properties of the protein. The thermal volume, which is the volume of voids around the protein induced by the thermal fluctuation of water molecules, is directly proportional to the accessible surface area of the protein. The interaction volume, which is the contribution of electrostatic interactions between the protein and water molecules, is apparently governed by the charged atomic groups on the protein surface. The polar atomic groups do not make any contribution to the interaction volume. The volume differences between low- and high-pressure structures of lysozyme are also analyzed by the present method.
Synthesising Theory and Practice: Distributed Leadership in Higher Education
Jones, Sandra; Harvey, Marina; Lefoe, Geraldine; Ryland, Kevin
2014-01-01
Changes facing higher education from increased government, student and community demands are resulting in a greater focus on leadership within universities. Attempts to adapt to higher education theory that underpins leadership in other sectors have been criticised for failing to recognise its unique role in the development of creative and…
Continuous and distributed systems II theory and applications
Zgurovsky, Mikhail
2015-01-01
As in the previous volume on the topic, the authors close the gap between abstract mathematical approaches, such as applied methods of modern algebra and analysis, fundamental and computational mechanics, nonautonomous and stochastic dynamical systems, on the one hand, and practical applications in nonlinear mechanics, optimization, decision making theory and control theory on the other. Readers will also benefit from the presentation of modern mathematical modeling methods for the numerical solution of complicated engineering problems in biochemistry, geophysics, biology and climatology. This compilation will be of interest to mathematicians and engineers working at the interface of these fields. It presents selected works of the joint seminar series of Lomonosov Moscow State University and the Institute for Applied System Analysis at National Technical University of Ukraine “Kyiv Polytechnic Institute”. The authors come from Brazil, Germany, France, Mexico, Spain, Poland, Russia, Ukraine, and the USA. ...
THEORY OF CORRELATIONS AND FLUCTUATIONS IN NEUTRON DISTRIBUTIONS
Energy Technology Data Exchange (ETDEWEB)
Osborn, R. K.; Yip, S.
1963-06-15
Equations are derived for the first and second order densities for neutrons and alpha particles. The implications of the equations are examined by reducing them to their diffusion theory equivalents, and the one-speed equations are obtained. Results show that in cases where the singlet density can be approximated as spatially uniform, the same approximation may not apply to the doublet density. (D.C.W.)
Analyzing capture zone distributions (CZD) in growth: Theory and applications
Einstein, Theodore L.; Pimpinelli, Alberto; Luis González, Diego
2014-09-01
We have argued that the capture-zone distribution (CZD) in submonolayer growth can be well described by the generalized Wigner distribution (GWD) P(s) = a s^β exp(−b s²), where s is the CZ area divided by its average value. This approach offers arguably the most robust (least sensitive to mass transport) method to find the critical nucleus size i, since β ≈ i + 2. Various analytical and numerical investigations, which we discuss, show that although the simple GWD expression is inadequate in the tails of the distribution, it does account well for the central regime 0.5 < s < 2, where the data are sufficiently plentiful to be reliably accessible experimentally. We summarize and catalog the many experiments in which this method has been applied.
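Under the two constraints standard in capture-zone analysis (unit normalization and unit mean of the rescaled area s), the GWD constants a and b are fixed by β alone. A short numerical sketch of that closure (the choice β = 4, i.e. i = 2, and the integration grid are illustrative):

```python
import math

def gwd_constants(beta):
    """Constants of the generalized Wigner distribution
    P(s) = a * s**beta * exp(-b * s**2), fixed by requiring
    unit norm and unit mean <s> = 1 on s in (0, inf)."""
    g1 = math.gamma((beta + 1) / 2.0)
    g2 = math.gamma((beta + 2) / 2.0)
    b = (g2 / g1) ** 2
    a = 2.0 * b ** ((beta + 1) / 2.0) / g1
    return a, b

# beta ~ i + 2; take critical nucleus size i = 2 as an example.
beta = 4.0
a, b = gwd_constants(beta)
P = lambda s: a * s ** beta * math.exp(-b * s * s)

# Crude Riemann-sum check of both constraints on [0, 12]:
ds = 1e-4
norm = sum(P(k * ds) * ds for k in range(1, 120001))
mean = sum((k * ds) * P(k * ds) * ds for k in range(1, 120001))
```

Both integrals come back equal to 1 to within the discretization error, confirming that β is the only free shape parameter.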
A theory of distributed objects asynchrony, mobility, groups, components
Caromel, Denis; Henrio, Ludovic
2005-01-01
Distributed and communicating objects are becoming ubiquitous. In global, Grid and Peer-to-Peer computing environments, extensive use is made of objects interacting through method calls. So far, no general formalism has been proposed for the foundation of such systems. Caromel and Henrio are the first to define a calculus for distributed objects interacting using asynchronous method calls with generalized futures, i.e., wait-by-necessity -- a must in large-scale systems, providing both high structuring and low coupling, and thus scalability. The authors provide very generic results on expressiveness and determinism, and the potential of their approach is further demonstrated by its capacity to cope with advanced issues such as mobility, groups, and components. Researchers and graduate students will find here an extensive review of concurrent languages and calculi, with comprehensive figures and summaries. Developers of distributed systems can adopt the many implementation strategies that are presented and ana...
On widths of mass distributions in statistical theory of fission
International Nuclear Information System (INIS)
Volkov, N.G.; Emel'yanov, V.M.
1979-01-01
The process of nucleon tunneling from one fragment to another near the point of compound-nucleus fragmentation has been studied in the model of a two-center oscillator. The effect of the number of transferred nucleons on the mass distribution of fragments is estimated. The sensitivity of the model to the form of the single-particle potential, the excitation energies and the deformation of fragments is examined. The calculations performed show that it is possible to calculate the mass distributions at the point of fragment contact in the statistical fission model, taking account of the nucleon exchange between fragments.
The distribution of prime numbers and associated problems in number theory
International Nuclear Information System (INIS)
Nair, M.
1991-01-01
Some problems in number theory, namely the gaps between consecutive primes, the distribution of primes in arithmetic progressions, the Brun-Titchmarsh theorem, Fermat's last theorem, the Thue equation, and the gaps between square-free numbers, are discussed.
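As a small illustration of the objects surveyed (not of the survey's own methods), one can sieve the primes and look directly at prime gaps and at the split among arithmetic progressions:

```python
def primes_up_to(n):
    """Sieve of Eratosthenes returning all primes <= n."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b"\x00\x00"
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(sieve[p * p :: p]))
    return [i for i, v in enumerate(sieve) if v]

ps = primes_up_to(1000)
gaps = [q - p for p, q in zip(ps, ps[1:])]

# Primes split between the progressions 4k+1 and 4k+3; Dirichlet's
# theorem says both classes contain infinitely many primes.
c1 = sum(1 for p in ps if p % 4 == 1)
c3 = sum(1 for p in ps if p % 4 == 3)
```

Up to 1000 there are 168 primes, the largest gap is 20 (between 887 and 907), and the class 4k+3 still leads 4k+1 (the classical Chebyshev bias, first violated far beyond this range).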
Distribution system reliability evaluation using credibility theory | Xu ...
African Journals Online (AJOL)
In this paper, a hybrid algorithm based on fuzzy simulation and Failure Mode and Effect Analysis (FMEA) is applied to determine fuzzy reliability indices of distribution system. This approach can obtain fuzzy expected values and their variances of reliability indices, and the credibilities of reliability indices meeting specified ...
Optimizing Sparse Representations of Kinetic Distributions via Information Theory
2017-07-31
Martin, Robert; Eckhardt, Daniel (Air Force Research Laboratory (AFMC), AFRL/RQRS, 1 Ara Drive, Edwards AFB, CA 93524-7013)
Equilibrium distribution of hard-sphere systems and revised Enskog theory
Beijeren, H. van
1983-01-01
A revised Enskog theory (RET) is shown to lead to a correct equilibrium distribution in hard-sphere systems in a stationary external potential, while the standard Enskog theory (SET) does not. Attention is given to the s-component hard-sphere mixture with constant external potential acting on
Log-concave Probability Distributions: Theory and Statistical Testing
DEFF Research Database (Denmark)
An, Mark Yuing
1996-01-01
This paper studies the broad class of log-concave probability distributions that arise in the economics of uncertainty and information. For univariate, continuous, and log-concave random variables we prove useful properties without imposing the differentiability of density functions. Discrete and multivariate distributions are also discussed. We propose simple non-parametric testing procedures for log-concavity. The test statistics are constructed to test two implications of log-concavity: increasing hazard rates and the new-is-better-than-used (NBU) property. The tests for increasing hazard rates are based on normalized spacings of the sample order statistics. The tests for the NBU property fall into the category of Hoeffding's U-statistics.
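Log-concavity can be probed directly from the defining midpoint inequality f((x+y)/2)² ≥ f(x)·f(y), with no differentiability assumption on the density. A small sketch (the grid and tolerance are arbitrary choices, not from the paper):

```python
import math

def midpoint_log_concave(f, grid):
    """Discrete check of f((x+y)/2)**2 >= f(x) * f(y) on an evenly
    spaced grid; pairs of equal index parity are used so that every
    midpoint lands exactly on a grid point."""
    for i in range(len(grid)):
        for j in range(i + 2, len(grid), 2):
            m = (i + j) // 2
            if f(grid[m]) ** 2 < f(grid[i]) * f(grid[j]) - 1e-12:
                return False
    return True

normal = lambda x: math.exp(-x * x / 2) / math.sqrt(2 * math.pi)
cauchy = lambda x: 1.0 / (math.pi * (1 + x * x))
grid = [k * 0.25 for k in range(-20, 21)]   # grid on [-5, 5]
```

The standard normal density passes the check; the heavy-tailed Cauchy density fails it, since its log-density is convex beyond |x| = 1.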
Theory of Nanocluster Size Distributions from Ion Beam Synthesis
Energy Technology Data Exchange (ETDEWEB)
Yuan, C.W.; Yi, D.O.; Sharp, I.D.; Shin, S.J.; Liao, C.Y.; Guzman, J.; Ager III, J.W.; Haller, E.E.; Chrzan, D.C.
2008-06-13
Ion beam synthesis of nanoclusters is studied via both kinetic Monte Carlo simulations and the self-consistent mean-field solution to a set of coupled rate equations. Both approaches predict the existence of a steady state shape for the cluster size distribution that depends only on a characteristic length determined by the ratio of the effective diffusion coefficient to the ion flux. The average cluster size in the steady state regime is determined by the implanted species/matrix interface energy.
Hengst, Julie A
2015-01-01
This article proposes distributed communication as a promising theoretical framework for building supportive environments for child language development. Distributed communication is grounded in an emerging intersection of cultural-historical activity theory (CHAT) and theories of communicative practices that argue for integrating accounts of language, cognition and culture. The article first defines, and illustrates through selected research articles, three key principles of distributed communication: (a) language and all communicative resources are inextricably embedded in activity; (b) successful communication depends on common ground built up through short- and long-term histories of participation in activities; and (c) language cannot act alone, but is always orchestrated with other communicative resources. It then illustrates how these principles are fully integrated in everyday interactions by drawing from my research on Cindy Magic, a verbal make-believe game played by a father and his two daughters. Overall, the research presented here points to the remarkably complex communicative environments and sophisticated forms of distributed communication children routinely engage in as they interact with peer and adult communication partners in everyday settings. The article concludes by considering implications of these theories for, and examples of, distributed communication relevant to clinical intervention. Readers will learn about (1) distributed communication as a conceptual tool grounded in an emerging intersection of cultural-historical activity theory and theories of communicative practices and (2) how to apply distributed communication to the study of child language development and to interventions for children with communication disorders. Copyright © 2015 Elsevier Inc. All rights reserved.
Theory of dressed bosons and nuclear matter distributions
International Nuclear Information System (INIS)
Tomaselli, M.; Liu, L.C.; Tanihata, I.
2002-09-01
The structure of nuclei with large neutron or proton-neutron excess, i.e., with large isospin components, is investigated in the Boson Dynamic Correlation Model, where the valence particle pairs are dressed by their interactions with the microscopic clusters of the core. The mixed-mode states of the model are the eigenstates of a set of nonlinear equations. We solve these equations in terms of the cluster factorizations that are introduced to compute the n-boson matrix elements. Our calculation of the energy levels of 18O reveals a strong mixing between the valence and core clusters, which leads to a large reduction of the spectroscopic factors as calculated in Shell-Model approximations. The coupling of valence clusters to core clusters gives new insight into halo formation in neutron-rich nuclei, namely, the halo is also a consequence of the excitation of the core protons. The calculated matter distributions of 6He and 6Li exhibit strong similarities, which indicate that halo formation in nuclei with proton-neutron excess must be postulated. The matter distributions of these two isotopes reproduce well the differential cross sections obtained in the proton elastic scattering experiments performed at GSI in inverse kinematics at an energy of 0.7 GeV/u. (orig.)
Analysis of Product Distribution Strategy in Digital Publishing Industry Based on Game-Theory
Xu, Li-ping; Chen, Haiyan
2017-04-01
Digital publishing output has increased significantly year by year; it has become a vigorous point of economic growth and is increasingly important to the press and publication industry. Its distribution channels have diversified in ways that differ from the traditional industry. A detailed study of the digital publishing industry was carried out to clarify the constitution of its industry chain and to establish a model of that chain. The cooperative and competitive relationships between different distribution channels are then analyzed using game theory. By comparing the distribution quantities and market sizes under a static distribution strategy and a dynamic distribution strategy, we obtain theoretical evidence about how to choose the distribution strategy that yields the optimal benefit.
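The abstract gives no payoffs, but the static-versus-dynamic channel comparison can be posed as a bimatrix game between a publisher and a distributor. A hypothetical sketch (the payoff numbers and strategy labels are invented purely for illustration):

```python
from itertools import product

def pure_nash(payoff_a, payoff_b):
    """Pure-strategy Nash equilibria of a bimatrix game.
    payoff_a[i][j], payoff_b[i][j] are the payoffs when the row
    player picks strategy i and the column player picks j."""
    rows, cols = len(payoff_a), len(payoff_a[0])
    eqs = []
    for i, j in product(range(rows), range(cols)):
        best_row = all(payoff_a[i][j] >= payoff_a[k][j] for k in range(rows))
        best_col = all(payoff_b[i][j] >= payoff_b[i][l] for l in range(cols))
        if best_row and best_col:       # mutual best responses
            eqs.append((i, j))
    return eqs

# Hypothetical payoffs: strategy 0 = static distribution, 1 = dynamic.
A = [[3, 1], [4, 2]]   # publisher payoffs
B = [[3, 4], [1, 2]]   # distributor payoffs
```

With these invented numbers the unique pure equilibrium is (1, 1): both sides prefer the dynamic strategy regardless of the other's choice.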
Statistical distribution of partial widths in the microscopic theory of nuclear reactions
International Nuclear Information System (INIS)
Bunakov, V.E.; Ogloblin, S.G.
1978-01-01
Using the microscopic theory of nuclear reactions, the distribution function of neutron reduced partial widths is obtained. It is shown that the distribution of reduced partial widths of a radiative transition has the same form. The distribution obtained differs from the Porter-Thomas law for neutron widths only in the presence of intermediate structures. It is noteworthy that the presence of an intermediate structure leads to a greater dispersion.
Radiographic information theory: correction for x-ray spectral distribution
International Nuclear Information System (INIS)
Brodie, I.; Gutcheck, R.A.
1983-01-01
A more complete computational method is developed to account for the effect of the spectral distribution of the incident x-ray fluence on the minimum exposure required to record a specified information set in a diagnostic radiograph. It is shown that an earlier, less rigorous, but simpler computational technique does not introduce serious errors provided that both a good estimate of the mean energy per photon can be made and the detector does not contain an absorption edge in the spectral range. Also shown is that to a first approximation, it is immaterial whether the detecting surface counts the number of photons incident from each pixel or measures the energy incident on each pixel. A previous result is confirmed that, for mammography, the present methods of processing data from the detector utilize only a few percent of the incident information, suggesting that techniques can be developed for obtaining mammograms at substantially lower doses than those presently used. When used with film-screen combinations, x-ray tubes with tungsten anodes should require substantially lower exposures than devices using molybdenum anodes, when both are operated at their optimal voltage
Evaluating ecohydrological theories of woody root distribution in the Kalahari.
Directory of Open Access Journals (Sweden)
Abinash Bhattachan
The contribution of savannas to global carbon storage is poorly understood, in part due to lack of knowledge of the amount of belowground biomass. In these ecosystems, the coexistence of woody and herbaceous life forms is often explained on the basis of belowground interactions among roots. However, the distribution of root biomass in savannas has seldom been investigated, and the dependence of root biomass on rainfall regime remains unclear, particularly for woody plants. Here we investigate patterns of belowground woody biomass along a rainfall gradient in the Kalahari of southern Africa, a region with consistent sandy soils. We test the hypotheses that (1) root depth increases with mean annual precipitation (the root optimality and plant hydrotropism hypothesis), and (2) the root-to-shoot ratio increases with decreasing mean annual rainfall (the functional equilibrium hypothesis). Both hypotheses have been previously assessed for herbaceous vegetation using global root data sets. Our data do not support these hypotheses for the case of woody plants in savannas. We find that in the Kalahari, the root profiles of woody plants do not become deeper with increasing mean annual precipitation, whereas the root-to-shoot ratios decrease along a gradient of increasing aridity.
Independent test assessment using the extreme value distribution theory.
Almeida, Marcio; Blondell, Lucy; Peralta, Juan M; Kent, Jack W; Jun, Goo; Teslovich, Tanya M; Fuchsberger, Christian; Wood, Andrew R; Manning, Alisa K; Frayling, Timothy M; Cingolani, Pablo E; Sladek, Robert; Dyer, Thomas D; Abecasis, Goncalo; Duggirala, Ravindranath; Blangero, John
2016-01-01
The new generation of whole genome sequencing platforms offers great possibilities and challenges for dissecting the genetic basis of complex traits. With a very high number of sequence variants, a naïve multiple hypothesis threshold correction hinders the identification of reliable associations by the overreduction of statistical power. In this report, we examine 2 alternative approaches to improve the statistical power of a whole genome association study to detect reliable genetic associations. The approaches were tested using the Genetic Analysis Workshop 19 (GAW19) whole genome sequencing data. The first tested method estimates the real number of effective independent tests actually being performed in a whole genome association project by the use of an extreme value distribution and a set of phenotype simulations. Given the familial nature of the GAW19 data and the finite number of pedigree founders in the sample, the number of correlations between genotypes is greater than in a set of unrelated samples. Using our procedure, we estimate that the effective number represents only 15 % of the total number of independent tests performed. However, even using this corrected significance threshold, no genome-wide significant association could be detected for systolic and diastolic blood pressure traits. The second approach implements a biological relevance-driven hypothesis tested by exploiting prior computational predictions on the effect of nonsynonymous genetic variants detected in a whole genome sequencing association study. This guided testing approach was able to identify 2 promising single-nucleotide polymorphisms (SNPs), 1 for each trait, targeting biologically relevant genes that could help shed light on the genesis of human hypertension. The first gene, PFH14, associated with systolic blood pressure, interacts directly with genes involved in calcium-channel formation and the second gene, MAP4, encodes a microtubule-associated protein and had already
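One common way to estimate an effective number of independent tests from null phenotype simulations, in the spirit described above, is to model the per-replicate minimum p-value as Beta(1, Meff), so that P(min p > x) = (1 − x)^Meff. A sketch under that assumption (the estimator below is the standard Beta(1, b) MLE, not necessarily the authors' exact procedure):

```python
import math
import random

def effective_tests(min_pvalues):
    """MLE of Meff assuming each simulated minimum p-value follows
    Beta(1, Meff): maximizing n*log(b) + (b-1)*sum(log(1-p)) gives
    b_hat = -n / sum(log(1 - p))."""
    n = len(min_pvalues)
    return -n / sum(math.log(1.0 - p) for p in min_pvalues)

# Null calibration: 50 truly independent tests per replicate, so the
# estimate should come back close to the nominal count of 50.
random.seed(2)
sims = [min(random.random() for _ in range(50)) for _ in range(4000)]
m_eff = effective_tests(sims)
```

With correlated genotypes (as in the familial GAW19 data), the same fit would return an Meff well below the raw number of variants tested.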
Energy Technology Data Exchange (ETDEWEB)
Zhou, Yun, E-mail: zhou.yun.x@gmail.com; Pollak, Eli, E-mail: eli.pollak@weizmann.ac.il [Chemical Physics Department, Weizmann Institute of Science, 76100 Rehovot (Israel); Miret-Artés, Salvador, E-mail: s.miret@iff.csic.es [Instituto de Fisica Fundamental, Consejo Superior de Investigaciones Cientificas, Serrano 123, 28006 Madrid (Spain)
2014-01-14
A second order classical perturbation theory is developed and applied to elastic atom corrugated surface scattering. The resulting theory accounts for experimentally observed asymmetry in the final angular distributions. These include qualitative features, such as reduction of the asymmetry in the intensity of the rainbow peaks with increased incidence energy as well as the asymmetry in the location of the rainbow peaks with respect to the specular scattering angle. The theory is especially applicable to “soft” corrugated potentials. Expressions for the angular distribution are derived for the exponential repulsive and Morse potential models. The theory is implemented numerically to a simplified model of the scattering of an Ar atom from a LiF(100) surface.
The force distribution probability function for simple fluids by density functional theory.
Rickayzen, G; Heyes, D M
2013-02-28
Classical density functional theory (DFT) is used to derive a formula for the probability density distribution function, P(F), and probability distribution function, W(F), for simple fluids, where F is the net force on a particle. The final formula is P(F) ∝ exp(−AF²), where A depends on the fluid density, the temperature, and the Fourier transform of the pair potential. The form of the DFT theory used is only applicable to bounded potential fluids. When combined with the hypernetted chain closure of the Ornstein-Zernike equation, the DFT theory for W(F) agrees with molecular dynamics computer simulations for the Gaussian and bounded soft sphere at high density. The Gaussian form for P(F) is still accurate at lower densities (but not too low density) for the two potentials, but with a smaller value for the constant, A, than that predicted by the DFT theory.
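A Gaussian form P(F) ∝ exp(−AF²) fixes A from the second moment of the force, A = 1/(2⟨F²⟩); a minimal consistency sketch using synthetic normal samples as a stand-in for simulation forces (A_true is an arbitrary illustrative value, not a DFT prediction):

```python
import numpy as np

# If a force component is Gaussian, P(F) ∝ exp(-A*F**2), then A is fixed
# by the variance: A = 1 / (2 <F^2>).  Synthetic samples stand in for
# molecular-dynamics force data here.
rng = np.random.default_rng(0)
A_true = 0.8
samples = rng.normal(0.0, np.sqrt(1.0 / (2.0 * A_true)), size=200_000)

A_est = 1.0 / (2.0 * samples.var())   # recover A from the sampled variance
print(f"A_true={A_true:.3f}  A_est={A_est:.3f}")
```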
Time Evolving Fission Chain Theory and Fast Neutron and Gamma-Ray Counting Distributions
International Nuclear Information System (INIS)
Kim, K. S.; Nakae, L. F.; Prasad, M. K.; Snyderman, N. J.; Verbeke, J. M.
2015-01-01
Here, we solve a simple theoretical model of time evolving fission chains due to Feynman that generalizes and asymptotically approaches the point model theory. The point model theory has been used to analyze thermal neutron counting data. This extension of the theory underlies fast counting data for both neutrons and gamma rays from metal systems. Fast neutron and gamma-ray counting is now possible using liquid scintillator arrays with nanosecond time resolution. For individual fission chains, the differential equations describing three correlated probability distributions are solved: the time-dependent internal neutron population, accumulation of fissions in time, and accumulation of leaked neutrons in time. Explicit analytic formulas are given for correlated moments of the time evolving chain populations. The equations for random time gate fast neutron and gamma-ray counting distributions, due to randomly initiated chains, are presented. Correlated moment equations are given for both random time gate and triggered time gate counting. Explicit formulas are given for all correlated moments up to triple order, for all combinations of correlated fast neutrons and gamma rays. The nonlinear differential equations for probabilities for time dependent fission chain populations have a remarkably simple Monte Carlo realization. A Monte Carlo code was developed for this theory and is shown to statistically realize the solutions to the fission chain theory probability distributions. Combined with random initiation of chains and detection of external quanta, the Monte Carlo code generates time tagged data for neutron and gamma-ray counting and, from these data, the counting distributions.
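The "remarkably simple Monte Carlo realization" noted above can be illustrated with a toy Galton-Watson fission chain (this sketch ignores time dependence, gamma rays, and detector effects; the parameters p and nu are made up):

```python
import numpy as np

# Toy subcritical fission chain: each neutron either leaks (probability
# 1-p) or induces a fission (probability p) emitting a Poisson(nu) number
# of new neutrons.  For p*nu < 1 the mean number of leaked neutrons per
# chain started by one neutron is (1-p) / (1 - p*nu).
rng = np.random.default_rng(1)
p, nu = 0.3, 2.5                    # fission probability, mean multiplicity

def run_chain():
    neutrons, leaked = 1, 0
    while neutrons:
        neutrons -= 1
        if rng.random() < p:        # fission: add offspring neutrons
            neutrons += rng.poisson(nu)
        else:                       # leak: count and discard
            leaked += 1
    return leaked

leaks = [run_chain() for _ in range(20_000)]
mean_leaked = sum(leaks) / len(leaks)
expected = (1 - p) / (1 - p * nu)   # analytic mean, = 2.8 here
print(mean_leaked, expected)
```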
The Distribution of the Product Explains Normal Theory Mediation Confidence Interval Estimation.
Kisbu-Sakarya, Yasemin; MacKinnon, David P; Miočević, Milica
2014-05-01
The distribution of the product has several useful applications. One of these applications is its use to form confidence intervals for the indirect effect as the product of 2 regression coefficients. The purpose of this article is to investigate how the moments of the distribution of the product explain normal theory mediation confidence interval coverage and imbalance. Values of the critical ratio for each random variable are used to demonstrate how the moments of the distribution of the product change across values of the critical ratio observed in research studies. Results of the simulation study showed that as skewness in absolute value increases, coverage decreases, and that as skewness in absolute value and kurtosis increase, imbalance increases. The difference between testing the significance of the indirect effect using the normal theory versus the asymmetric distribution of the product is further illustrated with a real data example. This article is the first study to show the direct link between the distribution of the product and indirect effect confidence intervals and clarifies the results of previous simulation studies by showing why normal theory confidence intervals for indirect effects are often less accurate than those obtained from the asymmetric distribution of the product or from resampling methods.
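The dependence of the product distribution's skewness on the critical ratio can be checked by direct simulation; a hedged sketch (unit standard errors and equal critical ratios for both coefficients are arbitrary simplifications, not the article's design):

```python
import numpy as np

# The indirect effect is the product a*b of two (approximately) normal
# coefficient estimates.  Its skewness shrinks toward zero as the critical
# ratios (mean/SE) grow, i.e. the product distribution approaches normality.
rng = np.random.default_rng(2)
n = 400_000

def product_skewness(critical_ratio):
    a = rng.normal(critical_ratio, 1.0, n)   # a-hat, SE = 1
    b = rng.normal(critical_ratio, 1.0, n)   # b-hat, SE = 1
    ab = a * b
    z = (ab - ab.mean()) / ab.std()
    return (z ** 3).mean()                   # sample skewness

skew_small = product_skewness(1.0)           # analytic value ~ 1.15
skew_large = product_skewness(5.0)           # analytic value ~ 0.41
print(skew_small, skew_large)
```

This is why symmetric normal-theory intervals undercover when critical ratios are small: they ignore the skew visible above.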
The Feynman integrand as a white noise distribution beyond perturbation theory
International Nuclear Information System (INIS)
Grothaus, Martin; Vogel, Anna
2008-01-01
In this note the concepts of path integrals and techniques for constructing them are presented. Here we concentrate on a White Noise approach. Combining White Noise techniques with a generalized time-dependent Doss' formula, Feynman integrands are constructed as white noise distributions beyond perturbation theory
Chen, Chung-De
2018-04-01
In this paper, a distributed parameter electromechanical model for bimorph piezoelectric energy harvesters based on the refined zigzag theory (RZT) is developed. In this model, the zigzag function is incorporated into the axial displacement, and the zigzag distribution of the displacement between the adjacent layers of the bimorph structure can be considered. The governing equations, including three equations of motion and one equation of circuit, are derived using Hamilton’s principle. The natural frequency, its corresponding modal function and the steady state response of the base excitation motion are given in exact forms. The presented results are benchmarked with the finite element method and two beam theories, the first-order shear deformation theory and the classical beam theory. Comparison examples show that the RZT predicts the output voltage and generated power with high accuracy, especially for the case of a soft middle layer. Variation of parameters such as the beam thickness, excitation frequencies and the external electrical loads is investigated, and its effects on the performance of the energy harvesters are studied using the RZT developed in this paper. Based on this refined theory, analysts and engineers can capture more details on the electromechanical behavior of piezoelectric harvesters.
On the theory of Ostwald ripening: formation of the universal distribution
International Nuclear Information System (INIS)
Alexandrov, D V
2015-01-01
A theoretical description of the final stage of Ostwald ripening given by Lifshitz and Slyozov (LS) predicts that after long times the distribution of particles over sizes tends to a universal form. The qualitative behavior of their theory has been confirmed, but experimental particle size distributions are broader and more squat than the LS asymptotic solution. The discrepancies between the theory and experimental data are caused by the relaxation of solutions from the early to late stages of Ostwald ripening. In other words, the initial conditions at the ripening stage lead to the formation of a transition region near the blocking point of the LS theory and completely determine the distribution function. A new theoretical approach of the present analysis based on the Slezov theory (Slezov 1978 Formation of the universal distribution function in the dimension space for new-phase particles in the diffusive decomposition of the supersaturated solid solution J. Phys. Chem. Solids 39 367–74; Slezov 2009 Kinetics of First-Order Phase Transitions (Weinheim: Wiley, VCH)) focuses on the relaxation dynamics of analytical solutions from the early stage of Ostwald ripening to its concluding state, which is described by the LS asymptotic regime. An algebraic equation for the boundaries of a transition layer independent of all material parameters is derived. A time-dependent function ε(τ) responsible for the evolution of solutions at the ripening stage is found. The distribution function obtained is broader and flatter than the LS asymptotic solution. The particle radius, supersaturation and number density as functions of time are determined. The analytical solutions obtained are in good agreement with experimental data. (paper)
International Nuclear Information System (INIS)
Omelyan, Igor; Kovalenko, Andriy
2013-01-01
We develop efficient handling of solvation forces in the multiscale method of multiple time step molecular dynamics (MTS-MD) of a biomolecule steered by the solvation free energy (effective solvation forces) obtained from the 3D-RISM-KH molecular theory of solvation (three-dimensional reference interaction site model complemented with the Kovalenko-Hirata closure approximation). To reduce the computational expenses, we calculate the effective solvation forces acting on the biomolecule by using advanced solvation force extrapolation (ASFE) at inner time steps while converging the 3D-RISM-KH integral equations only at large outer time steps. The idea of ASFE consists in developing a discrete non-Eckart rotational transformation of atomic coordinates that minimizes the distances between the atomic positions of the biomolecule at different time moments. The effective solvation forces for the biomolecule in a current conformation at an inner time step are then extrapolated in the transformed subspace of those at outer time steps by using a modified least square fit approach applied to a relatively small number of the best force-coordinate pairs. The latter are selected from an extended set collecting the effective solvation forces obtained from 3D-RISM-KH at outer time steps over a broad time interval. The MTS-MD integration with effective solvation forces obtained by converging 3D-RISM-KH at outer time steps and applying ASFE at inner time steps is stabilized by employing the optimized isokinetic Nosé-Hoover chain (OIN) ensemble. Compared to the previous extrapolation schemes used in combination with the Langevin thermostat, the ASFE approach substantially improves the accuracy of evaluation of effective solvation forces and in combination with the OIN thermostat enables a dramatic increase of outer time steps. We demonstrate on a fully flexible model of alanine dipeptide in aqueous solution that the MTS-MD/OIN/ASFE/3D-RISM-KH multiscale method of molecular dynamics
Impelluso, Thomas J
2003-06-01
An algorithm for bone remodeling is presented which allows for both a redistribution of density and a continuous change of principal material directions for the orthotropic material properties of bone. It employs a modal analysis to add density for growth and a local effective-strain-based analysis to redistribute density. General redistribution functions are presented. The model utilizes theories of cellular solids to relate density and strength. The code predicts the same general density distributions and local orthotropy as observed in reality.
Coalition of distributed generation units to virtual power players - a game theory approach
DEFF Research Database (Denmark)
Morais, Hugo; Sousa, Tiago M; Santos, Gabriel
2015-01-01
and the existence of new management players such as several types of aggregators. This paper proposes a methodology to facilitate the coalition between distributed generation units originating Virtual Power Players (VPP) considering a game theory approach. The proposed approach consists in the analysis...... strategies, size and goals, each parameter has different importance. VPP can also manage other type of energy resources, like storage units, electric vehicles, demand response programs or even parts of the MV and LV distribution network. A case study with twelve VPPs with different characteristics and one...
Directory of Open Access Journals (Sweden)
Robert M. Solow
2012-10-01
Full Text Available The paper surveys the neoclassical theory of growth. As a preliminary, the meaning of the adjective "neoclassical" is discussed. The basic model is then sketched, and the conditions ensuring a stationary state are illustrated. The issue of the convergence to a stationary state (and that of the speed of convergence) is further considered. A discussion of "primary factors" opens the way to the "new" theory of growth, with endogenous technical progress. A number of extensions of the basic model are then recalled: two-sector and multi-sectoral models, overlapping generations models, the role of money in growth models. JEL Codes: O41, E25. Keywords: Distribution, Growth, Income Distribution, Income
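The convergence to a stationary state in the basic model can be sketched with the textbook capital-accumulation recursion k(t+1) = s·k(t)^α + (1−δ)·k(t), whose steady state is k* = (s/δ)^(1/(1−α)) (parameter values below are illustrative, not from the paper):

```python
# Discrete-time Solow accumulation with Cobb-Douglas technology:
# saving s, depreciation delta, capital share alpha.
def solow_path(k0, s=0.2, delta=0.05, alpha=1/3, steps=2000):
    k = k0
    for _ in range(steps):
        k = s * k ** alpha + (1 - delta) * k   # invest, then depreciate
    return k

# Closed-form steady state: s*k^alpha = delta*k  =>  k* = (s/delta)^(1/(1-alpha))
k_star = (0.2 / 0.05) ** (1 / (1 - 1/3))       # = 8.0 for these values
print(solow_path(1.0), k_star)
```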
Academic training: From Evolution Theory to Parallel and Distributed Genetic Programming
2007-01-01
2006-2007 ACADEMIC TRAINING PROGRAMME LECTURE SERIES 15, 16 March From 11:00 to 12:00 - Main Auditorium, bldg. 500 From Evolution Theory to Parallel and Distributed Genetic Programming F. FERNANDEZ DE VEGA / Univ. of Extremadura, SP Lecture No. 1: From Evolution Theory to Evolutionary Computation Evolutionary computation is a subfield of artificial intelligence (more particularly computational intelligence) involving combinatorial optimization problems, which are based to some degree on the evolution of biological life in the natural world. In this tutorial we will review the source of inspiration for this metaheuristic and its capability for solving problems. We will show the main flavours within the field, and different problems that have been successfully solved employing this kind of technique. Lecture No. 2: Parallel and Distributed Genetic Programming The successful application of Genetic Programming (GP, one of the available Evolutionary Algorithms) to optimization problems has encouraged an ...
International Nuclear Information System (INIS)
Uskov, V.A.; Kondrachenko, O.E.; Kondrachenko, L.A.
1977-01-01
A phenomenological theory of multicomponent diffusion involving interaction between the components is employed to analyze how the interaction between two admixtures affects their simultaneous or sequential diffusion into a semiconductor. The theory uses the equations of multicomponent diffusion under common conditions (constant diffusion coefficients and equilibrium distribution of vacancies). Experiments on the simultaneous diffusion of In and Sb into Ge are described. The diffusion is performed according to routine gas-phase technology with the use of the radioactive isotopes ¹¹⁴In and ¹²⁴Sb. It is shown that the introduction of an additional diffusion coefficient D₁₂ makes it possible to simply and precisely describe the distribution of interacting admixtures in complex diffusion alloying of semiconductors
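The role of a cross diffusion coefficient D12 can be sketched with an explicit finite-difference integration of coupled constant-coefficient equations (the grid, time step, and coefficient values are illustrative assumptions, not the paper's data):

```python
import numpy as np

# Two-admixture diffusion into a half-space with a cross term coupling
# admixture 1 to the curvature of admixture 2:
#   dc1/dt = D11 * d2c1/dx2 + D12 * d2c2/dx2
#   dc2/dt = D22 * d2c2/dx2
# Surface concentrations are held at 1 (constant-source diffusion).
D11, D12, D22 = 1.0, 0.4, 1.0
nx, dx, dt, steps = 100, 1.0, 0.2, 2000    # dt < dx^2/(2*Dmax): stable

c1 = np.zeros(nx)
c2 = np.zeros(nx)
for _ in range(steps):
    c1[0] = c2[0] = 1.0                    # fixed surface concentration
    lap1 = np.zeros(nx)
    lap2 = np.zeros(nx)
    lap1[1:-1] = (c1[2:] - 2 * c1[1:-1] + c1[:-2]) / dx**2
    lap2[1:-1] = (c2[2:] - 2 * c2[1:-1] + c2[:-2]) / dx**2
    c1 += dt * (D11 * lap1 + D12 * lap2)   # cross term deepens c1's profile
    c2 += dt * (D22 * lap2)
print(c1[10], c2[10])
```

With D12 > 0 the first admixture penetrates deeper than the second, which is the qualitative signature the cross coefficient is meant to capture.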
Medan, R. T.; Ray, K. S.
1974-01-01
A description of and users manual are presented for a U.S.A. FORTRAN 4 computer program which evaluates spanwise and chordwise loading distributions, lift coefficient, pitching moment coefficient, and other stability derivatives for thin wings in linearized, steady, subsonic flow. The program is based on a kernel function method lifting surface theory and is applicable to a large class of planforms including asymmetrical ones and ones with mixed straight and curved edges.
Directory of Open Access Journals (Sweden)
Luigi Pasinetti
2012-10-01
Full Text Available The paper surveys the main theories of income distribution in their relationship with the theories of economic growth. First, the Classical approach is considered, focusing on the Ricardian theory. Then the neoclassical theory is discussed, highlighting its origins (Bohm-Bawerk, Wicksell, Clark) and the role of the aggregate production function. The emergence of a "Keynesian" theory of income distribution in the wake of Harrod's model of growth is then recalled together with the surprising resurgence of the neoclassical theory (following the contributions of Solow and Meade). But, as the paper shows, the neoclassical theory of income distribution lacks logical consistency and has shaky foundations, as has been revealed by the severe critiques levelled at the neoclassical production function. Mainstream economic literature circumvents this problem by simply ignoring it; while the models of endogenous growth exclude the issue of distribution theory from their consideration. However, while mainstream economics bypasses the problems of income distribution, this is too relevant an issue to be ignored and a number of new research lines, briefly surveyed, try new approaches to it. JEL Codes: O41, E25. Keywords: Distribution, Economic Growth, Growth, Income Distribution, Income
Directory of Open Access Journals (Sweden)
Roger Bruce Mason
2013-11-01
Full Text Available This article proposes that the external environment influences the choice of distribution tactics. Since businesses and markets are complex adaptive systems, using complexity theory to understand such environments is necessary, but it has not been widely researched. A qualitative case method using in-depth interviews investigated four successful, versus less successful, companies in turbulent versus stable environments. The results tentatively confirmed that the more successful company, in a turbulent market, sees distribution activities as less important than other aspects of the marketing mix, but uses them to stabilise customer relationships and to maintain distribution processes. These findings can benefit marketers by emphasising a new way to consider place activities. How marketers can be assisted, and suggestions for further research, are provided.
Pandey, Preeti; Srivastava, Rakesh; Bandyopadhyay, Pradipta
2018-03-01
The relative performance of MM-PBSA and MM-3D-RISM methods to estimate the binding free energy of protein-ligand complexes is investigated by applying these to three proteins (Dihydrofolate Reductase, Catechol-O-methyltransferase, and Stromelysin-1) differing in the number of metal ions they contain. None of the computational methods could distinguish all the ligands based on their calculated binding free energies (as compared to experimental values). The difference between the two comes from both the polar and non-polar parts of solvation. For the charged ligand case, MM-PBSA and MM-3D-RISM give qualitatively different results for the polar part of solvation.
Park, Yoon Soo; Lee, Young-Sun; Xing, Kuan
2016-01-01
This study investigates the impact of item parameter drift (IPD) on parameter and ability estimation when the underlying measurement model fits a mixture distribution, thereby violating the item invariance property of unidimensional item response theory (IRT) models. An empirical study was conducted to demonstrate the occurrence of both IPD and an underlying mixture distribution using real-world data. Twenty-one trended anchor items from the 1999, 2003, and 2007 administrations of Trends in International Mathematics and Science Study (TIMSS) were analyzed using unidimensional and mixture IRT models. TIMSS treats trended anchor items as invariant over testing administrations and uses pre-calibrated item parameters based on unidimensional IRT. However, empirical results showed evidence of two latent subgroups with IPD. Results also showed changes in the distribution of examinee ability between latent classes over the three administrations. A simulation study was conducted to examine the impact of IPD on the estimation of ability and item parameters, when data have underlying mixture distributions. Simulations used data generated from a mixture IRT model and estimated using unidimensional IRT. Results showed that data reflecting IPD using mixture IRT model led to IPD in the unidimensional IRT model. Changes in the distribution of examinee ability also affected item parameters. Moreover, drift with respect to item discrimination and distribution of examinee ability affected estimates of examinee ability. These findings demonstrate the need for caution and for evaluating IPD using a mixture IRT framework to understand its effects on item parameters and examinee ability.
Directory of Open Access Journals (Sweden)
Yoon Soo Park
2016-02-01
Full Text Available This study investigates the impact of item parameter drift (IPD) on parameter and ability estimation when the underlying measurement model fits a mixture distribution, thereby violating the item invariance property of unidimensional item response theory (IRT) models. An empirical study was conducted to demonstrate the occurrence of both IPD and an underlying mixture distribution using real-world data. Twenty-one trended anchor items from the 1999, 2003, and 2007 administrations of Trends in International Mathematics and Science Study (TIMSS) were analyzed using unidimensional and mixture IRT models. TIMSS treats trended anchor items as invariant over testing administrations and uses pre-calibrated item parameters based on unidimensional IRT. However, empirical results showed evidence of two latent subgroups with IPD. Results showed changes in the distribution of examinee ability between latent classes over the three administrations. A simulation study was conducted to examine the impact of IPD on the estimation of ability and item parameters, when data have underlying mixture distributions. Simulations used data generated from a mixture IRT model and estimated using unidimensional IRT. Results showed that data reflecting IPD using mixture IRT model led to IPD in the unidimensional IRT model. Changes in the distribution of examinee ability also affected item parameters. Moreover, drift with respect to item discrimination and distribution of examinee ability affected estimates of examinee ability. These findings demonstrate the need for caution and for evaluating IPD using a mixture IRT framework to understand its effect on item parameters and examinee ability.
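The basic mechanism of item parameter drift under a two-parameter logistic (2PL) response model can be sketched as follows (the 2PL form is standard, but the parameter values are illustrative; TIMSS's operational calibration is more involved):

```python
import math

# 2PL item response function: P(correct | theta) = 1 / (1 + exp(-a*(theta - b))).
# If an anchor item's difficulty b drifts while scoring still uses the
# pre-calibrated value, the response probability changes even though the
# examinee ability theta has not, biasing ability estimates.
def p_correct(theta, a, b):
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

p_anchor = p_correct(0.0, a=1.2, b=0.0)    # calibrated item, average examinee
p_drifted = p_correct(0.0, a=1.2, b=0.5)   # same item after difficulty drift
print(p_anchor, p_drifted)
```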
Rodriguez, G. (Editor)
1983-01-01
Two general themes in the control of large space structures are addressed: control theory for distributed parameter systems and distributed control for systems requiring spatially-distributed multipoint sensing and actuation. Topics include modeling and control, stabilization, and estimation and identification.
Experimental validation of the Wigner distributions theory of phase-contrast imaging
International Nuclear Information System (INIS)
Donnelly, Edwin F.; Price, Ronald R.; Pickens, David R.
2005-01-01
Recently, a new theory of phase-contrast imaging has been proposed by Wu and Liu [Med. Phys. 31, 2378-2384 (2004)]. This theory, based upon Wigner distributions, provides a much stronger foundation for the evaluation of phase-contrast imaging systems than did the prior theories based upon Fresnel-Kirchhoff diffraction theory. In this paper, we compare results of measurements made in our laboratory of phase contrast for different geometries and tube voltages to the predictions of the Wu and Liu model. In our previous publications, we have used an empirical measurement (the edge enhancement index) to parametrize the degree of phase-contrast effects in an image. While the Wu and Liu model itself does not predict image contrast, it does measure the degree of phase contrast that the system can image for a given spatial frequency. We have found that our previously published experimental results relating phase-contrast effects to geometry and x-ray tube voltage are consistent with the predictions of the Wu and Liu model
Theory and calculation of water distribution in bentonite in a thermal field
International Nuclear Information System (INIS)
Carnahan, C.L.
1988-09-01
Highly compacted bentonite is under consideration for use as a buffer material in geological repositories for high-level radioactive wastes. To assess the suitability of bentonite for this use, it is necessary to be able to predict the rate and spatial extent of water uptake and water distribution in highly compacted bentonite in the presence of thermal gradients. The "Buffer Mass Test" (BMT) was conducted by workers in Sweden as part of the Stripa Project. The BMT measured uptake and spatial distributions of water infiltrating annuli of compacted MX-80 sodium bentonite heated from within and surrounded by granite rock; the measurements provided a body of data very valuable for comparison to results of theoretical calculations. Results of experiments on adsorption of water by highly compacted MX-80 bentonite have been reported by workers in Switzerland. The experiments included measurements of heats of immersion and adsorption-desorption isotherms. These measurements provide the basis for prediction of water vapor pressures in equilibrium with bentonite having specified adsorbed water contents at various temperatures. The present work offers a phenomenological description of the processes influencing movement of water in compacted bentonite in the presence of a variable thermal field. The theory is applied to the bentonite buffer-water system in an assumed steady state of heat and mass transport, using critical data derived from the experimental work done in Switzerland. Results of the theory are compared to distributions of absorbed water in buffers observed in the Swedish BMT experiments. 9 refs., 2 figs
Liu, Hong; Zhu, Jingping; Wang, Kai
2015-08-24
The geometrical attenuation model given by Blinn was widely used in geometrical optics bidirectional reflectance distribution function (BRDF) models. Blinn's geometrical attenuation model, based on a symmetrical V-groove assumption and ray scalar theory, causes obvious inaccuracies in BRDF curves and neglects the effects of polarization. To address these issues, a modified polarized geometrical attenuation model based on random surface microfacet theory is presented by combining masking and shadowing effects with polarization effects. The p-polarized, s-polarized and unpolarized geometrical attenuation functions are given in separate expressions and are validated with experimental data from two samples. It is shown that the modified polarized geometrical attenuation function achieves better physical rationality, improves the precision of the BRDF model, and widens its applicability to different polarizations.
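For reference, the unpolarized V-groove attenuation of Blinn that the modified model generalizes is commonly written G = min(1, 2(N·H)(N·V)/(V·H), 2(N·H)(N·L)/(V·H)); a minimal sketch (the direction vectors are chosen arbitrarily):

```python
import numpy as np

# Blinn's geometric attenuation for a symmetric V-groove microfacet:
# the three branches are "no occlusion", "masking" (toward the viewer),
# and "shadowing" (toward the light).
def blinn_G(n, l, v):
    h = (l + v) / np.linalg.norm(l + v)      # half vector
    nh, nv, nl, vh = n @ h, n @ v, n @ l, v @ h
    return min(1.0, 2 * nh * nv / vh, 2 * nh * nl / vh)

n = np.array([0.0, 0.0, 1.0])                # surface normal
l = np.array([0.0, 0.6, 0.8])                # light direction (unit)
v = np.array([0.0, -0.6, 0.8])               # view direction (unit)
g = blinn_G(n, l, v)
print(g)
```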
Political consumerism: a regulatory innovation in the era of globalization
Directory of Open Access Journals (Sweden)
Marie-France Turcotte
2006-04-01
Full Text Available This article examines the regulatory potential of political consumerism through the analysis of two new economic social movements: responsible finance and responsible consumption. The new generation of social movements, of which these two innovations are a manifestation, presides over the institutionalization of novel mechanisms which aim to regulate the market according to social and environmental criteria. But these mechanisms are prone to a commercial drift likely to destroy their transformational potential. They nevertheless deserve attention insofar as they reveal a social compromise on the content of the social responsibility of economic actors, and thus on regulatory markers in the era of globalization.
Directory of Open Access Journals (Sweden)
Hana Silvana
2018-02-01
…plagiarism; the use of anti-plagiarism applications is still minimal, and awareness-raising about the issue of plagiarism still falls short of the information needs that students should know about. Workshop activities or training in final-project writing have not yet been carried out as needed.
An analytical transport theory method for calculating flux distribution in slab cells
International Nuclear Information System (INIS)
Abdel Krim, M.S.
2001-01-01
A transport theory method for calculating flux distributions in a slab fuel cell is described. Two coupled integral equations for the flux in fuel and moderator are obtained, assuming partial reflection at the moderator external boundaries. The Galerkin technique is used to solve these equations. Numerical results for the average fluxes in fuel and moderator and the disadvantage factor are given. Comparison with exact numerical methods, that is, for totally reflecting moderator outer boundaries, shows that the Galerkin technique gives accurate results for the disadvantage factor and average fluxes. (orig.)
Stochastic Growth Theory of Spatially-Averaged Distributions of Langmuir Fields in Earth's Foreshock
Boshuizen, Christopher R.; Cairns, Iver H.; Robinson, P. A.
2001-01-01
Langmuir-like waves in the foreshock of Earth are characteristically bursty and irregular, and are the subject of a number of recent studies. Averaged over the foreshock, it is observed that the probability distribution P̄(log E) in the wave field E is power-law, with the bar denoting this averaging over position. In this paper it is shown that stochastic growth theory (SGT) can explain a power-law spatially-averaged distribution P̄(log E), when the observed power-law variations of the mean and standard deviation of log E with position are combined with the lognormal statistics predicted by SGT at each location.
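The mechanism can be sketched by pooling lognormal field statistics whose mean and spread vary with position (the variation laws below are arbitrary stand-ins for the observed power-law fits, not the paper's parameters):

```python
import numpy as np

# SGT picture: at each position x, log E is normally distributed with
# position-dependent mean mu(x) and spread sigma(x).  Pooling over
# positions broadens P-bar(log E) well beyond any single-site
# distribution, flattening it toward a power-law-like tail in E.
rng = np.random.default_rng(3)
positions = np.linspace(1.0, 10.0, 50)
mu = -2.0 * np.log(positions)        # assumed power-law variation of the mean
sigma = 0.5 + 0.1 * positions        # assumed variation of the spread

samples = np.concatenate(
    [rng.normal(m, s, 2000) for m, s in zip(mu, sigma)]
)
pooled_std = samples.std()           # exceeds the typical single-site spread
print(pooled_std, sigma.mean())
```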
Analysis of fission-fragment mass distribution within the quantum-mechanical fragmentation theory
Energy Technology Data Exchange (ETDEWEB)
Singh, Pardeep; Kaur, Harjeet [Guru Nanak Dev University, Department of Physics, Amritsar (India)
2016-11-15
The fission-fragment mass distribution is analysed for the ²⁰⁸Pb(¹⁸O, f) reaction within the quantum-mechanical fragmentation theory (QMFT). The reaction potential has been calculated by taking the binding energies, Coulomb potential and proximity potential of all possible decay channels and a stationary Schroedinger equation has been solved numerically to calculate the fission-fragment yield. The overall results for mass distribution are compared with those obtained in experiment. Fine structure dips in yield, corresponding to fragment shell closures at Z = 50 and N = 82, which are observed by Bogachev et al., are reproduced successfully in the present calculations. These calculations will help to estimate the formation probabilities of fission fragments and to understand many related phenomena occurring in the fission process. (orig.)
Matthews, Thomas J; Whittaker, Robert J
2014-01-01
Published in 2001, The Unified Neutral Theory of Biodiversity and Biogeography (UNTB) emphasizes the importance of stochastic processes in ecological community structure, and has challenged the traditional niche-based view of ecology. While neutral models have since been applied to a broad range of ecological and macroecological phenomena, the majority of research relating to neutral theory has focused exclusively on the species abundance distribution (SAD). Here, we synthesize the large body of work on neutral theory in the context of the species abundance distribution, with a particular focus on integrating ideas from neutral theory with traditional niche theory. First, we summarize the basic tenets of neutral theory; both in general and in the context of SADs. Second, we explore the issues associated with neutral theory and the SAD, such as complications with fitting and model comparison, the underlying assumptions of neutral models, and the difficulty of linking pattern to process. Third, we highlight the advances in understanding of SADs that have resulted from neutral theory and models. Finally, we focus consideration on recent developments aimed at unifying neutral- and niche-based approaches to ecology, with a particular emphasis on what this means for SAD theory, embracing, for instance, ideas of emergent neutrality and stochastic niche theory. We put forward the argument that the prospect of the unification of niche and neutral perspectives represents one of the most promising future avenues of neutral theory research. PMID:25360266
May, Eric F; Lim, Vincent W; Metaxas, Peter J; Du, Jianwei; Stanwix, Paul L; Rowland, Darren; Johns, Michael L; Haandrikman, Gert; Crosby, Daniel; Aman, Zachary M
2018-03-13
Gas hydrate formation is a stochastic phenomenon of considerable significance for any risk-based approach to flow assurance in the oil and gas industry. In principle, well-established results from nucleation theory offer the prospect of predictive models for hydrate formation probability in industrial production systems. In practice, however, heuristics are relied on when estimating formation risk for a given flowline subcooling or when quantifying kinetic hydrate inhibitor (KHI) performance. Here, we present statistically significant measurements of formation probability distributions for natural gas hydrate systems under shear, which are quantitatively compared with theoretical predictions. Distributions with over 100 points were generated using low-mass, Peltier-cooled pressure cells, cycled in temperature between 40 and -5 °C at up to 2 K·min⁻¹ and analyzed with robust algorithms that automatically identify hydrate formation and initial growth rates from dynamic pressure data. The application of shear had a significant influence on the measured distributions: at 700 rpm, mass-transfer limitations were minimal, as demonstrated by the kinetic growth rates observed. The formation probability distributions measured at this shear rate had mean subcoolings consistent with theoretical predictions and steel-hydrate-water contact angles of 14-26°. However, the experimental distributions were substantially wider than predicted, suggesting that phenomena acting on macroscopic length scales are responsible for much of the observed stochastic formation. Performance tests of a KHI provided new insights into how such chemicals can reduce the risk of hydrate blockage in flowlines. Our data demonstrate that the KHI not only reduces the probability of formation (by both shifting and sharpening the distribution) but also reduces hydrate growth rates by a factor of 2.
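The empirical formation-probability distributions described here can be sketched as a simple survival-curve computation over repeated cooling cycles. A minimal sketch in Python; the subcooling values below are synthetic placeholders drawn from a normal distribution, not measurements from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for ~100 cooling cycles: the subcooling (K) at which
# hydrate formation was detected in each cycle (illustrative values only).
subcoolings = rng.normal(loc=12.0, scale=2.5, size=100)

# Empirical formation-probability curve: fraction of cycles that have
# formed by a given subcooling (an empirical CDF over the trials).
grid = np.linspace(0.0, 25.0, 251)
p_form = np.array([(subcoolings <= t).mean() for t in grid])

mean_sc = subcoolings.mean()     # mean subcooling of the distribution
width = subcoolings.std(ddof=1)  # spread, i.e. the stochastic width
```

The mean subcooling and width of such a curve are the quantities the paper compares against nucleation-theory predictions; a KHI that "shifts and sharpens" the distribution increases `mean_sc` while decreasing `width`.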
International Nuclear Information System (INIS)
Mantry, Sonny; Petriello, Frank
2010-01-01
We derive a factorization theorem for the Higgs boson transverse momentum (p_T) and rapidity (Y) distributions at hadron colliders, using the soft-collinear effective theory (SCET), for m_h ≫ p_T ≫ Λ_QCD, where m_h denotes the Higgs mass. In addition to the factorization of the various scales involved, the perturbative physics at the p_T scale is further factorized into two collinear impact-parameter beam functions (IBFs) and an inverse soft function (ISF). These newly defined functions are of a universal nature for the study of differential distributions at hadron colliders. The additional factorization of the p_T-scale physics simplifies the implementation of higher order radiative corrections in α_s(p_T). We derive formulas for factorization in both momentum and impact parameter space and discuss the relationship between them. Large logarithms of the relevant scales in the problem are summed using the renormalization group equations of the effective theories. Power corrections to the factorization theorem in p_T/m_h and Λ_QCD/p_T can be systematically derived. We perform multiple consistency checks on our factorization theorem, including a comparison with known fixed-order QCD results. We compare the SCET factorization theorem with the Collins-Soper-Sterman approach to low-p_T resummation.
Phase transitions and flux distributions of SU(2) lattice gauge theory
International Nuclear Information System (INIS)
Peng, Yingcai.
1993-01-01
The strong interactions between quarks are believed to be described by Quantum Chromodynamics (QCD), a non-abelian SU(3) gauge theory. QCD is known to undergo a deconfining phase transition: at low temperatures it is in the confined phase, while at sufficiently high temperatures it is in the deconfined phase. Quark confinement, in turn, is believed to be due to string formation. In this dissertation the author studied SU(2) gauge theory using numerical methods of lattice gauge theory (LGT), which provides insight into the properties of QCD because SU(2) is similar to SU(3). The flux distributions of a q̄q pair were measured at various temperatures in different volumes. In the limit of infinite volume the flux distribution differs between the two phases: in the confined phase strong evidence is found for string formation, whereas in the unconfined phase there is no string formation. In the limit of zero temperature and finite volume, a clear signal for string formation is found in the large-volume region; however, the string tension measured in intermediate volumes is due to finite-volume effects, and there is no intrinsic string formation. The color flux energies (action) of the q̄q pair are described by Michael sum rules. The original Michael sum rules treat a static q̄q pair at zero temperature in infinite volume. To check these sum rules against flux data at finite temperature, a complete derivation of the sum rules is presented, generalizing them to account for finite-temperature effects. The flux data are found to be consistent with the predictions of the generalized sum rules. The study elucidates the rich structure of QCD and provides evidence for quark confinement and string formation, supporting the belief that QCD is a correct theory of the strong interactions and that quark confinement can be explained by QCD
Asymptotic distribution of ∆AUC, NRIs, and IDI based on theory of U-statistics.
Demler, Olga V; Pencina, Michael J; Cook, Nancy R; D'Agostino, Ralph B
2017-09-20
The change in area under the curve (∆AUC), the integrated discrimination improvement (IDI), and the net reclassification index (NRI) are commonly used measures of risk prediction model performance. Some authors have reported good validity of associated methods of estimating their standard errors (SE) and constructing confidence intervals, whereas others have questioned their performance. To address these issues, we unite the ∆AUC, IDI, and three versions of the NRI under the umbrella of the U-statistics family. We rigorously show that the asymptotic behavior of ∆AUC, NRIs, and IDI fits the asymptotic distribution theory developed for U-statistics. We prove that the ∆AUC, NRIs, and IDI are asymptotically normal, unless they compare nested models under the null hypothesis. In the latter case, asymptotic normality and existing SE estimates cannot be applied to ∆AUC, NRIs, or IDI. In the former case, SE formulas proposed in the literature are equivalent to SE formulas obtained from U-statistics theory if we ignore adjustment for estimated parameters. We use the Sukhatme-Randles-deWet condition to determine when adjustment for estimated parameters is necessary. We show that adjustment is not necessary for SEs of the ∆AUC and two versions of the NRI when added predictor variables are significant and normally distributed. The SEs of the IDI and three-category NRI should always be adjusted for estimated parameters. These results allow us to define when existing formulas for SE estimates can be used and when resampling methods such as the bootstrap should be used instead when comparing nested models. We also use the U-statistic theory to develop a new SE estimate of ∆AUC. Copyright © 2017 John Wiley & Sons, Ltd.
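The bridge to U-statistics starts from the fact that the AUC is itself a two-sample U-statistic: the Mann-Whitney kernel averaged over all positive-negative score pairs. A minimal sketch of that identity, without the authors' SE or adjustment machinery:

```python
import numpy as np

def auc_u_statistic(scores_pos, scores_neg):
    """AUC as the two-sample Mann-Whitney U-statistic: the fraction of
    (positive, negative) score pairs ranked correctly, ties counted 1/2."""
    pos = np.asarray(scores_pos, dtype=float)[:, None]
    neg = np.asarray(scores_neg, dtype=float)[None, :]
    kernel = (pos > neg) + 0.5 * (pos == neg)
    return float(kernel.mean())

perfect = auc_u_statistic([0.9, 0.8], [0.1, 0.2])  # 1.0: full separation
chance = auc_u_statistic([0.5], [0.5])             # 0.5: a single tie
```

∆AUC between two models is then a difference of two such averages over the same pairs, which is what places it inside the U-statistics family analyzed in the paper.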
Sass, D. A.; Schmitt, T. A.; Walker, C. M.
2008-01-01
Item response theory (IRT) procedures have been used extensively to study normal latent trait distributions and have been shown to perform well; however, less is known concerning the performance of IRT with non-normal latent trait distributions. This study investigated the degree of latent trait estimation error under normal and non-normal…
Feil, Dirk
1992-01-01
Quantum chemistry and the concepts used daily in chemistry are increasingly growing apart. Among the concepts that are able to bridge the gap between theory and experimental practice, electron density distribution has an important place. The study of this distribution has led to new developments in
Anderson, D. E., Jr.; Meier, R. R.; Hodges, R. R., Jr.; Tinsley, B. A.
1987-01-01
The H Balmer alpha nightglow is investigated by using Monte Carlo models of asymmetric geocoronal atomic hydrogen distributions as input to a radiative transfer model of solar Lyman-beta radiation in the thermosphere and atmosphere. It is shown that it is essential to include multiple scattering of Lyman-beta radiation in the interpretation of Balmer alpha airglow data. Observations of diurnal variation in the Balmer alpha airglow showing slightly greater intensities in the morning relative to evening are consistent with theory. No evidence is found for anything other than a single sinusoidal diurnal variation of exobase density. Dramatic changes in effective temperature derived from the observed Balmer alpha line profiles are expected on the basis of changing illumination conditions in the thermosphere and exosphere as different regions of the sky are scanned.
Energy Technology Data Exchange (ETDEWEB)
Vlah, Zvonimir; Seljak, Uroš [Institute for Theoretical Physics, University of Zürich, Zürich (Switzerland); Okumura, Teppei [Institute for the Early Universe, Ewha Womans University, Seoul, S. Korea (Korea, Republic of); Desjacques, Vincent, E-mail: zvlah@physik.uzh.ch, E-mail: seljak@physik.uzh.ch, E-mail: teppei@ewha.ac.kr, E-mail: Vincent.Desjacques@unige.ch [Département de Physique Théorique and Center for Astroparticle Physics (CAP), Université de Genève, Genève (Switzerland)]
2013-10-01
Numerical simulations show that redshift space distortions (RSD) introduce strong scale dependence in the power spectra of halos, with ten percent deviations relative to linear theory predictions even on relatively large scales (k < 0.1h/Mpc) and even in the absence of satellites (which induce Fingers-of-God, FoG, effects). If unmodeled, these effects prevent one from extracting cosmological information from RSD surveys. In this paper we use Eulerian perturbation theory (PT) with an Eulerian halo biasing model and apply it to the distribution function approach to RSD, in which RSD is decomposed into several correlators of density-weighted velocity moments. We model each of these correlators using PT and compare the results to simulations over a wide range of halo masses and redshifts. We find that with the introduction of a physically motivated halo biasing, and using dark matter power spectra from simulations, we can reproduce the simulation results at a percent level on scales up to k ∼ 0.15h/Mpc at z = 0, without the need for free FoG parameters in the model.
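The linear-theory baseline against which these deviations are measured is the standard Kaiser limit for biased tracers, which any RSD expansion must recover on large scales. A sketch of that limiting formula; the bias b = 2 and growth rate f = 0.5 below are illustrative placeholders, not fitted values from the paper:

```python
def kaiser_power(b, f, mu, p_lin):
    """Linear-theory (Kaiser) redshift-space power spectrum of biased
    tracers, P_s(k, mu) = (b + f * mu**2)**2 * P_lin(k): the large-scale
    limit that the distribution-function expansion must recover."""
    return (b + f * mu ** 2) ** 2 * p_lin

# Along the line of sight (mu = 1) the linear boost over the transverse
# direction (mu = 0) is (2 + 0.5)**2 / 2**2 = 1.5625 for b=2, f=0.5.
boost = kaiser_power(2.0, 0.5, 1.0, 1.0) / kaiser_power(2.0, 0.5, 0.0, 1.0)
```

The scale-dependent corrections the paper models are deviations of the measured halo power spectra from this μ-dependent linear amplification.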
Roles of water in protein structure and function studied by molecular liquid theory.
Imai, Takashi
2009-01-01
The roles of water in the structure and function of proteins have not been completely elucidated. Although molecular simulation has been widely used for the investigation of protein structure and function, it is not always useful for elucidating the roles of water because the effect of water ranges from atomic to thermodynamic level. The three-dimensional reference interaction site model (3D-RISM) theory, which is a statistical-mechanical theory of molecular liquids, can yield the solvation structure at the atomic level and calculate the thermodynamic quantities from the intermolecular potentials. In the last few years, the author and coworkers have succeeded in applying the 3D-RISM theory to protein aqueous solution systems and demonstrated that the theory is useful for investigating the roles of water. This article reviews some of the recent applications and findings, which are concerned with molecular recognition by protein, protein folding, and the partial molar volume of protein which is related to the pressure effect on protein.
The Density Functional Theory of Flies: Predicting distributions of interacting active organisms
Kinkhabwala, Yunus; Valderrama, Juan; Cohen, Itai; Arias, Tomas
On October 2nd, 2016, 52 people were crushed in a stampede when a crowd panicked at a religious gathering in Ethiopia. The ability to predict the state of a crowd and whether it is susceptible to such transitions could help prevent such catastrophes. While current techniques such as agent-based models can predict transitions in the emergent behaviors of crowds, the assumptions used to describe the agents are often ad hoc and the simulations are computationally expensive, making their application to real-time crowd prediction challenging. Here, we pursue an orthogonal approach and ask whether a reduced set of variables, such as the local densities, is sufficient to describe the state of a crowd. Inspired by the theoretical framework of Density Functional Theory, we have developed a system that uses only measurements of local densities to extract two independent crowd behavior functions: (1) preferences for locations and (2) interactions between individuals. With these two functions, we have accurately predicted how a model system of walking Drosophila melanogaster distributes itself in an arbitrary 2D environment. In addition, this density-based approach measures properties of the crowd from observations of the crowd itself, without any knowledge of the detailed interactions, and can thus make real-time predictions about the resulting distributions of these flies in arbitrary environments. This research was supported in part by ARO W911NF-16-1-0433.
The Global Experience of Development of the Theory of Spatial Distribution of Productive Forces
Directory of Open Access Journals (Sweden)
Heiman Oleh A.
2016-01-01
Full Text Available The publication is aimed at a theoretical generalization of the global experience of development of the theory of spatial distribution of productive forces as the basis of regional economy. Considering the evolution of scientific views on the spatial development of territories, and taking account of the particularities of the distribution of production, one can identify several paradigms that replaced each other while preserving their connection with the placement of productive forces. Each of these paradigms, and all of them taken together, reflect a single historical process associated with the productive forces. Characteristic of a methodology based on the spatiotemporal paradigm is the consideration of both time and space factors, which, in substance, take on the qualities of economic categories. As for the use of theoretical developments in the practice of regional development, programs, strategies and other regulations must take into account the linkage between progressive and negative trends as well as the cyclical nature of economic development, including the global economy; identify the factors that accelerate or retard the passage of every evolutionary spiral; and maintain consistency of the productive forces of the region with the technological patterns of production.
Non-Gaussianities in the topological charge distribution of the SU(3) Yang-Mills theory
Cè, Marco; Consonni, Cristian; Engel, Georg P.; Giusti, Leonardo
2015-10-01
We study the topological charge distribution of the SU(3) Yang-Mills theory with high precision in order to be able to detect deviations from Gaussianity. The computation is carried out on the lattice with high-statistics Monte Carlo simulations by implementing a naive discretization of the topological charge evolved with the Yang-Mills gradient flow. This definition is far less demanding than the one suggested from Neuberger's fermions and, as shown in this paper, in the continuum limit its cumulants coincide with those of the universal definition appearing in the chiral Ward identities. Thanks to the range of lattice volumes and spacings considered, we can extrapolate the results for the second and fourth cumulant of the topological charge distribution to the continuum limit with confidence by keeping finite volume effects negligible with respect to the statistical errors. Our best result for the topological susceptibility is t_0^2 χ = 6.67(7) × 10^{-4}, where t_0 is a standard reference scale, while for the ratio of the fourth cumulant over the second we obtain R = 0.233(45). The latter is compatible with the expectations from the large-N_c expansion, while it rules out the θ behavior of the vacuum energy predicted by the dilute instanton model. Its large distance from 1 implies that, in the ensemble of gauge configurations that dominate the path integral, the fluctuations of the topological charge are of quantum nonperturbative nature.
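The cumulant combinations quoted above can be sketched directly from a sample of topological charges; the Gaussian test data below are illustrative, standing in for actual Monte Carlo configurations:

```python
import numpy as np

def charge_cumulants(q):
    """Second and fourth cumulants of a zero-mean, parity-symmetric
    topological charge sample: c2 = <Q^2>, c4 = <Q^4> - 3 <Q^2>**2."""
    q = np.asarray(q, dtype=float)
    m2 = np.mean(q ** 2)
    m4 = np.mean(q ** 4)
    return m2, m4 - 3.0 * m2 ** 2

# For Gaussian fluctuations c4 vanishes, so the ratio R = c4/c2 -> 0;
# the nonzero measured R = 0.233(45) quantifies the non-Gaussianity.
rng = np.random.default_rng(1)
c2, c4 = charge_cumulants(rng.normal(0.0, 1.0, size=200_000))
```

On unit-variance Gaussian samples `c2` comes out near 1 and `c4` near 0, which is the benchmark the measured distribution deviates from.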
Density-functional theory based on the electron distribution on the energy coordinate
Takahashi, Hideaki
2018-03-01
We developed an electronic density functional theory utilizing a novel electron distribution n(ε) as a basic variable to compute the ground state energy of a system. n(ε) is obtained by projecting the electron density n(r), defined on the space coordinate r, onto the energy coordinate ε specified with the external potential υ_ext(r) of interest. It was demonstrated that the Kohn-Sham equation can also be formulated with the exchange-correlation functional E_xc[n(ε)] that employs the density n(ε) as an argument. It turned out that an exchange functional proposed in our preliminary development suffices to describe properly the potential energies of several types of chemical bonds, with accuracies comparable to the corresponding functional based on the local density approximation. As a remarkable feature, the distribution n(ε) inherently involves the spatially non-local information of the exchange hole at the bond dissociation limit, in contrast to conventional approximate functionals. By taking advantage of this property we also developed a prototype of the static correlation functional E_sc including no empirical parameters, which showed marked improvements in describing the dissociation of covalent bonds in H₂, C₂H₄ and CH₄ molecules.
Directory of Open Access Journals (Sweden)
Sarminah Samad
2018-05-01
Full Text Available This study examined the relationship between the Theory of Planned Behavior and knowledge sharing among nurses in a Patient Computer Management System. Consequently, it determined the moderating effect of distributive justice on the relationship between the Theory of Planned Behavior and knowledge sharing. A quantitative approach was employed in this study. The research was based on a correlational and cross-sectional study which involved a total of 336 nurses. Data were collected based on random sampling via self-administered questionnaires. Partial Least Squares (PLS, Version 3.0) analysis was used to analyze the data. The study revealed that the Theory of Planned Behavior components were significantly related to knowledge sharing. These components were also found to have a significant and positive influence on knowledge sharing. The study further revealed that distributive justice significantly moderated the relationship between two components of the Theory of Planned Behavior (attitude and subjective norm) and knowledge sharing.
Theory for Deducing Volcanic Activity From Size Distributions in Plinian Pyroclastic Fall Deposits
Iriyama, Yu; Toramaru, Atsushi; Yamamoto, Tetsuo
2018-03-01
Stratigraphic variation in the grain size distribution (GSD) of plinian pyroclastic fall deposits reflects volcanic activity. To extract information on volcanic activity from the analyses of deposits, we propose a one-dimensional theory that provides a formula connecting the sediment GSD to the source GSD. As the simplest case, we develop a constant-source model (CS model), in which the source GSD and the source height are constant during the duration of release of particles. We assume power laws of particle radii for the terminal fall velocity and the source GSD. The CS model can describe an overall (i.e., entire vertically variable) feature of the GSD structure of the sediment. It is shown that the GSD structure is characterized by three parameters, that is, the duration of supply of particles to the source scaled by the fall time of the largest particle, t_s/t_M, and the power indices of the terminal fall velocity p and of the source GSD q. We apply the CS model to samples of the Worzel D ash layer and compare the sediment GSD structure calculated by using the CS model to the observed structure. The results show that the CS model reproduces the overall structure of the observed GSD. We estimate the duration of the eruption and the q value of the source GSD. Furthermore, a careful comparison of the observed and calculated GSDs reveals a new interpretation of the original sediment GSD structure of the Worzel D ash layer.
Force-Field Functor Theory: Classical Force-Fields which Reproduce Equilibrium Quantum Distributions
Directory of Open Access Journals (Sweden)
Ryan Babbush
2013-10-01
Full Text Available Feynman and Hibbs were the first to variationally determine an effective potential whose associated classical canonical ensemble approximates the exact quantum partition function. We examine the existence of a map between the local potential and an effective classical potential which matches the exact quantum equilibrium density and partition function. The usefulness of such a mapping rests in its ability to readily improve Born-Oppenheimer potentials for use with classical sampling. We show that such a map is unique and must exist. To explore the feasibility of using this result to improve classical molecular mechanics, we numerically produce a map from a library of randomly generated one-dimensional potential/effective potential pairs then evaluate its performance on independent test problems. We also apply the map to simulate liquid para-hydrogen, finding that the resulting radial pair distribution functions agree well with path integral Monte Carlo simulations. The surprising accessibility and transferability of the technique suggest a quantitative route to adapting Born-Oppenheimer potentials, with a motivation similar in spirit to the powerful ideas and approximations of density functional theory.
Estimating the Grain Size Distribution of Mars based on Fragmentation Theory and Observations
Charalambous, C.; Pike, W. T.; Golombek, M.
2017-12-01
We present here a fundamental extension to the fragmentation theory [1] which yields estimates of the distribution of particle sizes of a planetary surface. The model is valid within the size regimes of surfaces whose genesis is best reflected by the evolution of fragmentation phenomena governed by either the process of meteoritic impacts, or by a mixture with aeolian transportation at the smaller sizes. The key parameter of the model, the regolith maturity index, can be estimated as an average of that observed at a local site using cratering size-frequency measurements, orbital and surface image-detected rock counts and observations of sub-mm particles at landing sites. Through validation of ground truth from previous landed missions, the basis of this approach has been used at the InSight landing ellipse on Mars to extrapolate rock size distributions in HiRISE images down to 5 cm rock size, both to determine the landing safety risk and the subsequent probability of obstruction by a rock of the deployed heat flow mole down to 3-5 m depth [2]. Here we focus on a continuous extrapolation down to 600 µm coarse sand particles, the upper size limit that may be present through aeolian processes [3]. The parameters of the model are first derived for the fragmentation process that has produced the observable rocks via meteorite impacts over time, and therefore extrapolation into a size regime that is affected by aeolian processes has limited justification without further refinement. Incorporating thermal inertia estimates, size distributions observed by the Spirit and Opportunity Microscopic Imager [4] and Atomic Force and Optical Microscopy from the Phoenix Lander [5], the model's parameters in combination with synthesis methods are quantitatively refined further to allow transition within the aeolian transportation size regime. In addition, due to the nature of the model emerging in fractional mass abundance, the percentage of material by volume or mass that resides
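A generic illustration of the kind of size statistics such fragmentation models produce (this is not the authors' maturity-index model): impact-comminuted populations are commonly summarized by a power-law cumulative count, N(>d) ∝ d^(−D_f), and sampling from that law reproduces the assumed exponent. The cutoff and exponent below are arbitrary choices for the sketch:

```python
import numpy as np

rng = np.random.default_rng(2)

# Draw fragment sizes from N(>d) ∝ d**(-D_f) by inverse-transform
# sampling of a Pareto distribution with lower cutoff d_min.
d_min, D_f = 0.05, 2.5            # 5 cm cutoff, assumed exponent
u = rng.random(100_000)
sizes = d_min * u ** (-1.0 / D_f)

# Ratio of counts above 10 cm to counts above 5 cm; the power law
# predicts (0.10 / 0.05)**(-D_f) = 2**(-2.5) ≈ 0.177.
ratio = (sizes > 0.10).sum() / (sizes > 0.05).sum()
```

Extrapolating a rock-count distribution to smaller sizes, as done for the landing ellipse, amounts to evaluating such a cumulative law below the smallest directly imaged diameter, under whatever refinement of the exponent the maturity model supplies.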
Revisiting the theory of the evolution of pick-up ion distributions: magnetic or adiabatic cooling?
Directory of Open Access Journals (Sweden)
H. J. Fahr
2007-01-01
Full Text Available We study the phase space behaviour of heliospheric pick-up ions after the time of their injection as newly created ions into the solar wind bulk flow from either charge exchange or photoionization of interplanetary neutral atoms. As interaction with the ambient MHD wave fields we allow for rapid pitch angle diffusion, but for the beginning of this paper we neglect the effect of quasilinear or nonlinear energy diffusion (Fermi-2 acceleration induced by counterflowing ambient waves). In the literature to date on the convection of pick-up ions by the solar wind, only adiabatic cooling of these ions is considered, which in the solar wind frame takes care of filling the gap between the injection energy and the energies of the thermal bulk of solar wind ions. Here we reinvestigate the basics of the theory behind this assumption of adiabatic pick-up ion reactions and the predictions derived from it. We then compare it with the new assumption of a purely magnetic cooling of pick-up ions, resulting simply from their being convected in an interplanetary magnetic field whose magnitude decreases with increasing solar distance. We compare the pick-up ion distribution functions derived along both ways and point out essential differences of observational and diagnostic relevance. Furthermore we then include stochastic acceleration processes by wave-particle interactions. As we show, magnetic cooling in conjunction with diffusive acceleration by wave-particle interaction allows for an unbroken power law with the unique power index γ=−5, beginning from the lowest velocities up to the highest energy particles of about 100 keV, which just marginally can be in resonance with magnetoacoustic turbulences. Consequences for the resulting pick-up ion pressures are also analysed.
Interplay of charge distribution and conformation in peptides: comparison of theory and experiment.
Makowska, Joanna; Bagińska, Katarzyna; Kasprzykowski, F; Vila, Jorge A; Jagielska, Anna; Liwo, Adam; Chmurzyński, Lech; Scheraga, Harold A
2005-01-01
We assessed the correlation between charge distribution and conformation of flexible peptides by comparing the theoretically calculated potentiometric-titration curves of two model peptides, Ac-Lys₅-NHMe (a model of poly-L-lysine) and Ac-Lys-Ala₁₁-Lys-Gly₂-Tyr-NH₂ (P1), in water and methanol with the experimental curves. The calculation procedure consisted of three steps: (i) global conformational search of the peptide under study using the electrostatically driven Monte Carlo (EDMC) method with the empirical conformational energy program for peptides (ECEPP)/3 force field plus the surface-hydration (SRFOPT) or the generalized Born surface area (GBSA) solvation model, as well as a molecular dynamics method with the assisted model building and energy refinement (AMBER)99/GBSA force field; (ii) reevaluation of the energy in the pH range considered by using the modified Poisson-Boltzmann approach and taking into account all possible protonation microstates of each conformation; and (iii) calculation of the average degree of protonation of the peptide at a given pH value by Boltzmann averaging over conformations. For Ac-Lys₅-NHMe, the computed titration curve agrees qualitatively with the experimental curve of poly-L-lysine in 95% methanol. The experimental titration curves of peptide P1 in water and methanol indicate a remarkable downshift of the first pKₐ value compared to the values for reference compounds (n-butylamine and phenol, respectively), suggesting the presence of a hydrogen bond between the tyrosine hydroxyl oxygen and the Hε proton of a protonated lysine side chain. The theoretical titration curves agree well with the experimental curves if conformations with such hydrogen bonds constitute a significant part of the ensemble; otherwise, the theory predicts too small a downward pH shift. Copyright 2005 Wiley Periodicals, Inc
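Step (iii), the Boltzmann average of the degree of protonation, reduces for a single independent site to the familiar Henderson-Hasselbalch form; a one-site sketch (the paper's average additionally runs over conformations and protonation microstates, which is what shifts the apparent pKₐ):

```python
import numpy as np

def degree_of_protonation(pH, pKa):
    """Average protonation of one independent titratable site,
    theta = 1 / (1 + 10**(pH - pKa))."""
    return 1.0 / (1.0 + 10.0 ** (np.asarray(pH, dtype=float) - pKa))

# A lysine-like site with pKa = 10.5: half protonated at pH = pKa,
# and about 91% protonated one pH unit below.
theta = degree_of_protonation([10.5, 9.5], 10.5)
```

A hydrogen bond that stabilizes the deprotonated form effectively lowers the site's pKₐ, shifting this curve left, which is the downshift the experimental titration of P1 exhibits.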
Radial distributions of arm-gas offsets as an observational test of spiral theories
Baba, Junichi; Morokuma-Matsui, Kana; Egusa, Fumi
2015-01-01
Theories of stellar spiral arms in disk galaxies can be grouped into two classes based on the longevity of a spiral arm. Although the quasi-stationary density wave theory supposes that spirals are rigidly-rotating, long-lived patterns, the dynamic spiral theory predicts that spirals are differentially-rotating, transient, recurrent patterns. In order to distinguish between the two spiral models from observations, we performed hydrodynamic simulations with steady and dynamic spiral models. Hyd...
Energy Technology Data Exchange (ETDEWEB)
Nakatsuka, Takao [Okayama Shoka University, Laboratory of Information Science, Okayama (Japan); Okei, Kazuhide [Kawasaki Medical School, Dept. of Information Sciences, Kurashiki (Japan); Iyono, Atsushi [Okayama university of Science, Dept. of Fundamental Science, Faculty of Science, Okayama (Japan); Bielajew, Alex F. [Univ. of Michigan, Dept. Nuclear Engineering and Radiological Sciences, Ann Arbor, MI (United States)
2015-12-15
The simultaneous distribution of the deflection angle and the lateral displacement of fast charged particles traversing matter is derived by applying numerical inverse Fourier transforms to the Fourier spectral density solved analytically under the Molière theory of multiple scattering, taking account of ionization loss. Our results show a simultaneous Gaussian distribution in the region of both small deflection angle and small lateral displacement, though they show the characteristic contour patterns of probability density specific to single and double scattering in the regions of large deflection angle and/or lateral displacement. The influences of ionization loss on the distribution are also investigated. An exact simultaneous distribution is derived under the fixed-energy condition based on a well-known model of screened single scattering, which indicates the limit of validity of the Molière theory applied to the simultaneous distribution. The simultaneous distribution will be valuable for improving the accuracy and the efficiency of experimental analyses and simulation studies relating to charged-particle transport. (orig.)
Grabner, Peter
2017-01-01
This volume is dedicated to Robert F. Tichy on the occasion of his 60th birthday. Presenting 22 research and survey papers written by leading experts in their respective fields, it focuses on areas that align with Tichy’s research interests and which he significantly shaped, including Diophantine problems, asymptotic counting, uniform distribution and discrepancy of sequences (in theory and application), dynamical systems, prime numbers, and actuarial mathematics. Offering valuable insights into recent developments in these areas, the book will be of interest to researchers and graduate students engaged in number theory and its applications.
Direct simulation of groundwater transit-time distributions using the reservoir theory
Etcheverry, David; Perrochet, Pierre
Groundwater transit times are of interest for the management of water resources, assessment of pollution from non-point sources, and quantitative dating of groundwaters by the use of environmental isotopes. The age of water is the time water has spent in an aquifer since it entered the system, whereas the transit time is the age of water as it exits the system. Water at the outlet of an aquifer is a mixture of water elements with different transit times, as a consequence of the different flow-line lengths. In this paper, transit-time distributions are calculated by coupling two existing methods, the reservoir theory and a recent age-simulation method. Based on the derivation of the cumulative age distribution over the whole domain, the approach accounts for the whole hydrogeological framework. The method is tested using an analytical example and its applicability illustrated for a regional layered aquifer. Results show the asymmetry and multimodality of the transit-time distribution even in advection-only conditions, due to the aquifer geometry and to the velocity-field heterogeneity.
Directory of Open Access Journals (Sweden)
Casault Sébastien
2016-05-01
Oil and gas exploration and production firms have return profiles that are not easily explained by current financial theory: the variation in their market returns is non-Gaussian. In this paper, the nature of and underlying reason for these significant deviations from expected behavior are considered. Understanding these differences in financial market behavior is important for a wide range of reasons, including assessing investments, investor relations, decisions to raise capital, and assessment of firm and management performance. We show that a "thicker-tailed" mixture of two normal distributions offers a significantly more accurate model than the traditional Gaussian approach in describing the behavior of the value of oil and gas firms. This mixture of normal distributions is also more effective in bridging the gap between management theory and practice without the need to introduce complex time-sensitive GARCH and/or jump-diffusion dynamics. The mixture distribution is consistent with ambidexterity theory, which suggests firms operate in two distinct states driven by the primary focus of the firm: an exploration state with high uncertainty and an exploitation (or production) state with lower uncertainty. The findings have direct implications for improving the accuracy of real-option pricing techniques and futures analysis of risk management. Traditional option pricing models assume that commercial returns from these assets are described by a normal random walk. However, a normal random walk model discounts the possibility of large changes to the marketplace from events such as the discovery of important reserves or the introduction of new technology. The mixture distribution proves to be well suited to describing the unusually large risks and opportunities associated with oil and gas production and exploration. A significance testing study of 554 oil and gas exploration and production firms empirically supports using a mixture
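The two-state mixture described above can be illustrated numerically. The following sketch (not from the paper; the data, parameters, and EM recipe are illustrative assumptions) fits a two-component normal mixture to synthetic "returns" drawn from a calm state and a volatile state, using a basic expectation-maximization iteration:

```python
import math
import random

def normal_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def fit_two_normal_mixture(data, iters=200):
    """Basic EM for a two-component normal mixture (illustrative, not the paper's code)."""
    n = len(data)
    mu1 = mu2 = sum(data) / n
    s = math.sqrt(sum((x - mu1) ** 2 for x in data) / n)
    sig1, sig2 = 0.5 * s, 2.0 * s          # start with one narrow and one wide component
    w = 0.5                                 # weight of component 1
    for _ in range(iters):
        # E-step: responsibility of component 1 for each observation
        r = []
        for x in data:
            p1 = w * normal_pdf(x, mu1, sig1)
            p2 = (1.0 - w) * normal_pdf(x, mu2, sig2)
            r.append(p1 / (p1 + p2))
        n1 = sum(r)
        n2 = n - n1
        # M-step: re-estimate weight, means, and standard deviations
        w = n1 / n
        mu1 = sum(ri * x for ri, x in zip(r, data)) / n1
        mu2 = sum((1.0 - ri) * x for ri, x in zip(r, data)) / n2
        sig1 = math.sqrt(sum(ri * (x - mu1) ** 2 for ri, x in zip(r, data)) / n1)
        sig2 = math.sqrt(sum((1.0 - ri) * (x - mu2) ** 2 for ri, x in zip(r, data)) / n2)
    return w, (mu1, sig1), (mu2, sig2)

random.seed(1)
# synthetic daily "returns": a calm production-like state plus a volatile exploration-like state
returns = ([random.gauss(0.0005, 0.01) for _ in range(800)]
           + [random.gauss(0.0, 0.05) for _ in range(200)])
w, (m1, s1), (m2, s2) = fit_two_normal_mixture(returns)
```

The recovered components separate mainly by variance, mirroring the low-uncertainty and high-uncertainty states the abstract postulates.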
Pimpinelli, Alberto; Einstein, T. L.; González, Diego Luis; Sathiyanarayanan, Rajesh; Hamouda, Ajmi Bh.
2011-03-01
Earlier we showed [PRL 99, 226102 (2007)] that the capture-zone distribution (CZD) in growth could be well described by P(s) = a s^β exp(−b s²), where s is the CZ area divided by its average value. Painstaking simulations by Amar's [PRE 79, 011602 (2009)] and Evans's [PRL 104, 149601 (2010)] groups showed inadequacies in our mean-field Fokker-Planck argument relating β to the critical nucleus size. We refine our derivation to retrieve their β ~ i + 2 [PRL 104, 149602 (2010)]. We discuss applications of this formula and methodology to experiments on Ge/Si(001) and on various organics on SiO2, as well as to kinetic Monte Carlo studies of homoepitaxial growth on Cu(100) with codeposited impurities of different sorts. In contrast to theory, there can be significant changes to β with coverage. Some experiments also show temperature dependence. Supported by NSF-MRSEC at UMD, Grant DMR 05-20471.
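For P(s) = a s^β exp(−b s²) with s normalized by its average, the constants a and b are fixed by requiring unit area and unit mean, which gives b = [Γ((β+2)/2)/Γ((β+1)/2)]² and a = 2 b^((β+1)/2)/Γ((β+1)/2). A short numerical check of those two constraints (illustrative code, not from the paper):

```python
import math

def gwd_constants(beta):
    """Normalization of P(s) = a * s**beta * exp(-b * s**2) with unit area and unit mean."""
    g1 = math.gamma((beta + 1) / 2.0)
    g2 = math.gamma((beta + 2) / 2.0)
    b = (g2 / g1) ** 2
    a = 2.0 * b ** ((beta + 1) / 2.0) / g1
    return a, b

def gwd(s, beta):
    a, b = gwd_constants(beta)
    return a * s ** beta * math.exp(-b * s * s)

# numerical check of the two constraints for beta = i + 2 with i = 1
beta = 3.0
ds = 1e-4
area = sum(gwd(k * ds, beta) * ds for k in range(1, 100000))
mean = sum((k * ds) * gwd(k * ds, beta) * ds for k in range(1, 100000))
```

Both Riemann sums should come out close to 1, confirming the closed-form constants.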
Qin, Jiahu; Fu, Weiming; Gao, Huijun; Zheng, Wei Xing
2016-03-03
This paper is concerned with developing a distributed k-means algorithm and a distributed fuzzy c-means algorithm for wireless sensor networks (WSNs) in which each node is equipped with sensors. The underlying topology of the WSN is assumed to be strongly connected. The consensus algorithm from multiagent consensus theory is utilized to exchange the measurement information of the sensors in the WSN. To obtain a faster convergence speed as well as a higher possibility of reaching the global optimum, a distributed k-means++ algorithm is first proposed to find the initial centroids before executing the distributed k-means and distributed fuzzy c-means algorithms. The proposed distributed k-means algorithm is capable of partitioning the data observed by the nodes into measure-dependent groups which have small in-group and large out-group distances, while the proposed distributed fuzzy c-means algorithm partitions the data into measure-dependent groups with degrees of membership ranging from 0 to 1. Simulation results show that the proposed distributed algorithms can achieve almost the same results as those given by centralized clustering algorithms.
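The consensus step that underpins such distributed clustering can be sketched in isolation: with doubly stochastic (Metropolis) mixing weights on a connected undirected graph, repeated local averaging drives every node's value to the network-wide mean. This is a minimal illustration of the averaging primitive, not the paper's full algorithm; the graph and measurements are made up:

```python
def metropolis_weights(adj):
    """Doubly stochastic mixing weights for average consensus on an undirected graph."""
    n = len(adj)
    deg = {i: len(adj[i]) for i in adj}
    W = [[0.0] * n for _ in range(n)]
    for i in adj:
        for j in adj[i]:
            W[i][j] = 1.0 / (1 + max(deg[i], deg[j]))
        W[i][i] = 1.0 - sum(W[i])
    return W

def consensus(values, adj, rounds=200):
    """Each node repeatedly replaces its value by a weighted average over its
    neighbourhood; all values converge to the network-wide mean."""
    W = metropolis_weights(adj)
    x = list(values)
    for _ in range(rounds):
        x = [sum(W[i][j] * x[j] for j in range(len(x))) for i in range(len(x))]
    return x

# ring of 6 sensor nodes, each holding one local measurement
adj = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
vals = [3.0, 1.0, 4.0, 1.0, 5.0, 9.0]
est = consensus(vals, adj, rounds=500)
```

In a distributed k-means, the same primitive would be run on per-cluster sums and counts so each node can recompute the global centroids locally.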
International Nuclear Information System (INIS)
Johnson, E.
1977-01-01
A theory for site-site pair distribution functions of molecular fluids is derived from the Ornstein-Zernike equation. Atom-atom pair distribution functions of this theory which were obtained by using different approximations for the Percus-Yevick site-site direct correlation functions are compared
International Nuclear Information System (INIS)
Takamatsu, Kuniyoshi; Shimakawa, Satoshi; Nojiri, Naoki; Fujimoto, Nozomu
2003-10-01
In evaluating the highest fuel temperatures in the HTTR, it is very important to predict the power density distributions accurately; therefore, it is necessary to improve the analytical model with neutron diffusion and burn-up theory. The power density distributions are analyzed in terms of two models, one mixing the fuels and the burnable poisons homogeneously and the other modeling them heterogeneously. Moreover, these analytical power density distributions are compared with the ones derived from gross gamma-ray measurements and a continuous-energy Monte Carlo calculational code. As a result, the homogeneous mixed model is not sufficient to predict the power density distributions of the core in the axial direction; on the other hand, the heterogeneous model improves the accuracy. (author)
International Nuclear Information System (INIS)
Batistic, Benjamin; Robnik, Marko
2010-01-01
In this work we study the level spacing distribution in the classically mixed-type quantum systems (which are generic), exhibiting regular motion on invariant tori for some initial conditions and chaotic motion for the complementary initial conditions. In the asymptotic regime of the sufficiently deep semiclassical limit (sufficiently small effective Planck constant) the Berry and Robnik (1984 J. Phys. A: Math. Gen. 17 2413) picture applies, which is very well established. We present a new quasi-universal semiempirical theory of the level spacing distribution in a regime away from the Berry-Robnik regime (the near semiclassical limit), by describing both the dynamical localization effects of chaotic eigenstates, and the tunneling effects which couple regular and chaotic eigenstates. The theory works extremely well in the 2D mixed-type billiard system introduced by Robnik (1983 J. Phys. A: Math. Gen. 16 3971) and is also tested in other systems (mushroom billiard and Prosen billiard).
Directory of Open Access Journals (Sweden)
Zhe Zhang
2014-01-01
In order to solve the problems of the existing wide-area backup protection (WABP) algorithms, the paper proposes a novel WABP algorithm based on the distribution characteristics of fault component current and improved Dempster/Shafer (D-S) evidence theory. When a fault occurs, slave substations transmit to the master substation the amplitudes of the fault component currents of the transmission lines closest to the fault element. The master substation then identifies suspicious faulty lines according to the distribution characteristics of fault component current. After that, the master substation identifies the actual faulty line with improved D-S evidence theory, based on the action states of traditional protections and the direction components of these suspicious faulty lines. Simulation examples based on the IEEE 10-generator 39-bus system show that the proposed WABP algorithm has excellent performance. The algorithm has a low requirement of sampling synchronization, small wide-area communication flow, and high fault tolerance.
Zhang, Zhe; Kong, Xiangping; Yin, Xianggen; Yang, Zengli; Wang, Lijun
2014-01-01
In order to solve the problems of the existing wide-area backup protection (WABP) algorithms, the paper proposes a novel WABP algorithm based on the distribution characteristics of fault component current and improved Dempster/Shafer (D-S) evidence theory. When a fault occurs, slave substations transmit to the master substation the amplitudes of the fault component currents of the transmission lines closest to the fault element. The master substation then identifies suspicious faulty lines according to the distribution characteristics of fault component current. After that, the master substation identifies the actual faulty line with improved D-S evidence theory, based on the action states of traditional protections and the direction components of these suspicious faulty lines. Simulation examples based on the IEEE 10-generator 39-bus system show that the proposed WABP algorithm has excellent performance. The algorithm has a low requirement of sampling synchronization, small wide-area communication flow, and high fault tolerance. PMID:25050399
2016-06-02
Retrieval of droplet-size density distribution from multiple-field-of-view cross-polarized lidar signals: theory and experimental validation
Gilles Roy, Luc Bissonnette, Christian Bastille, and Gilles Vallee
Multiple-field-of-view (MFOV) secondary-polarization lidar signals are used to ... use secondary polarization. A mathematical relation among the PSD, the lidar fields of view, the scattering angles, and the angular depolarization ...
Larwin, Karen H.; Larwin, David A.
2011-01-01
Bootstrapping methods and random distribution methods are increasingly recommended as better approaches for teaching students about statistical inference in introductory-level statistics courses. The authors examined the effect of teaching undergraduate business statistics students using random distribution and bootstrapping simulations. It is the…
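The bootstrap idea taught in such courses can be shown in a few lines: resample the observed data with replacement, recompute the statistic each time, and read a confidence interval off the percentiles of the resampled statistics. A minimal sketch with made-up sample data (not from the study):

```python
import random

def bootstrap_ci_mean(sample, n_boot=5000, alpha=0.05, rng=None):
    """Percentile bootstrap confidence interval for the population mean."""
    rng = rng or random.Random(0)
    means = []
    for _ in range(n_boot):
        # resample the data with replacement and record the resample mean
        resample = [rng.choice(sample) for _ in range(len(sample))]
        means.append(sum(resample) / len(resample))
    means.sort()
    lo = means[int(alpha / 2 * n_boot)]
    hi = means[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

sample = [12.1, 9.8, 11.4, 10.2, 13.0, 9.5, 10.9, 11.7, 10.4, 12.6]
lo, hi = bootstrap_ci_mean(sample)
```

Because no sampling-distribution formula is invoked, the procedure makes the logic of inference visible to introductory students.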
Active integration of electric vehicles in the distribution network - theory, modelling and practice
DEFF Research Database (Denmark)
Knezovic, Katarina
an attractive asset for the distribution system operator (DSO). This thesis investigates how EVs can mitigate the self-induced adverse effects and actively help the distribution grid operation, either autonomously or in coordination, e.g., with an EV aggregator. The general framework for EV integration...
Maximum entropy theory of recoil charge distributions in electron-capture collisions
International Nuclear Information System (INIS)
Aberg, T.; Blomberg, A.; Tulkki, J.; Goscinski, O.
1984-01-01
A generalized Fermi-Dirac distribution is derived and applied to charge-state distributions in single collisions between multiply charged ions and rare-gas atoms. It relates multiple electron loss in single-electron capture to multiple ionization in multiphoton absorption and discloses inner-shell vacancy formation in double- and triple-electron capture
Directory of Open Access Journals (Sweden)
Robert M. Solow
2000-05-01
The paper surveys the neoclassical theory of growth. As a preliminary, the meaning of the adjective "neoclassical" is discussed. The basic model is then sketched, and the conditions ensuring a stationary state are illustrated. The issue of the convergence to a stationary state (and that of the speed of convergence) is further considered. A discussion of "primary factors" opens the way to the "new" theory of growth, with endogenous technical progress. A number of extensions of the basic model are then recalled: two-sector and multi-sectoral models, overlapping-generations models, and the role of money in growth models.
Molecular theory of partial molar volume and its applications to biomolecular systems
Directory of Open Access Journals (Sweden)
T.Imai
2007-09-01
The partial molar volume (PMV) is a thermodynamic quantity which contains important information about the solute-solvent interactions as well as the solute structure in solution. Additionally, the PMV is the most essential quantity in the analysis of the pressure effect on chemical reactions. This article reviews the recent developments in molecular theories of the PMV, especially the reference interaction site model (RISM) theory of molecular liquids and its three-dimensional generalization (3D-RISM), which are combined with the Kirkwood-Buff solution theory to calculate the PMV. This article also introduces our recent applications of the theory to some interesting issues concerning the PMV of biomolecules. In addition, theoretical representations of the effects of intramolecular fluctuation on the PMV, which are significant for biomacromolecules, are briefly discussed.
International Nuclear Information System (INIS)
Tarasov, Yu.A.
1992-01-01
A hydrodynamic model for the collisions of gluon clusters is used to calculate the charged-particle multiplicity distributions in collisions of nucleons at ISR and collider energies. The separation temperature of the hydrodynamic system is calculated as a function of the rapidity, T_k(y_1), for each value of the inelasticity coefficient K. In the central region, this temperature is higher at collider energies than at the ISR energy. The average number of resonance clusters which decay into various (fixed) numbers of charged hadrons is found for each value of K. The number of these clusters fluctuates in accordance with a Poisson distribution. A hadron multiplicity distribution which incorporates these fluctuations is found. This distribution is averaged over the inelasticity coefficient. The distributions P(n_ch) and the KNO functions Ψ(z) are calculated for the overall and central regions of rapidity, |y| ≤ 1.5. The broadening of the distributions and the violation of KNO scaling at collider energies result from increased contributions from the decays of resonances. The forward-backward multiplicity correlations are also studied, with the decay of resonances taken into account. The distributions and slope coefficients of the correlation function found for the various energies agree with experimental data
Torricelli, F.
2012-01-01
An extended theory of carrier hopping transport in organic transistors is proposed. According to many experimental studies, the density of localized states in organic thin-film transistors can be described by a double-exponential function. In this work, using a percolation model of hopping, the
Directory of Open Access Journals (Sweden)
Zhong-fu Tan
2018-01-01
The installed capacity of wind and solar photovoltaic power is continually increasing, which makes renewable-energy grid connection and power generation an important link in the optimization of China's power structure. A virtual power plant (VPP) is an important way to help distributed energy resources connect to the grid and to promote the development of the renewable energy industry. To study the economic scheduling problem of various distributed energy resources and the profit distribution problem of a VPP alliance, this study builds a separate operation scheduling model for an individual VPP and a joint operation scheduling model for a VPP alliance, as well as a profit distribution model. The case study verifies the feasibility and effectiveness of the proposed models. The sensitivity analysis provides information about VPP decision-making in accordance with the development trend of the policy environment.
Straub, K. M.; Ganti, V. K.; Paola, C.; Foufoula-Georgiou, E.
2010-12-01
Stratigraphy preserved in alluvial basins houses the most complete record of information necessary to reconstruct past environmental conditions. Indeed, the character of the sedimentary record is inextricably related to the surface processes that formed it. In this presentation we explore how the signals of surface processes are recorded in stratigraphy through the use of physical and numerical experiments. We focus on linking surface processes to stratigraphy in 1D by quantifying the probability distributions of processes that govern the evolution of depositional systems to the probability distribution of preserved bed thicknesses. In this study we define a bed as a package of sediment bounded above and below by erosional surfaces. In a companion presentation we document heavy-tailed statistics of erosion and deposition from high-resolution temporal elevation data recorded during a controlled physical experiment. However, the heavy tails in the magnitudes of erosional and depositional events are not preserved in the experimental stratigraphy. Similar to many bed thickness distributions reported in field studies we find that an exponential distribution adequately describes the thicknesses of beds preserved in our experiment. We explore the generation of exponential bed thickness distributions from heavy-tailed surface statistics using 1D numerical models. These models indicate that when the full distribution of elevation fluctuations (both erosional and depositional events) is symmetrical, the resulting distribution of bed thicknesses is exponential in form. Finally, we illustrate that a predictable relationship exists between the coefficient of variation of surface elevation fluctuations and the scale-parameter of the resulting exponential distribution of bed thicknesses.
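A toy version of the 1D reasoning above can be simulated directly. In the sketch below (an illustrative model, not the authors' code), a surface elevation fluctuates with symmetric noise plus a small net aggradation; the preserved stratigraphic surface at time t is taken as the minimum of all later elevations (anything higher is eventually eroded), and each positive jump of that running minimum is counted as one preserved bed:

```python
import random

def preserved_bed_thicknesses(elevations):
    """Preserved surface at time t = min of all later elevations; each positive
    jump of that running minimum is one preserved bed thickness."""
    n = len(elevations)
    surf = [0.0] * n
    m = elevations[-1]
    for t in range(n - 1, -1, -1):
        m = min(m, elevations[t])
        surf[t] = m
    # surf is non-decreasing in t; its positive increments are the bed thicknesses
    return [surf[t + 1] - surf[t] for t in range(n - 1) if surf[t + 1] > surf[t]]

random.seed(7)
# symmetric elevation fluctuations (deposition + / erosion -) with slow net aggradation
eta, elevations = 0.0, []
for _ in range(20000):
    eta += random.gauss(0.02, 1.0)
    elevations.append(eta)
beds = preserved_bed_thicknesses(elevations)
mean_bed = sum(beds) / len(beds)
```

The preserved thicknesses can then be histogrammed and compared against an exponential with the same mean, in the spirit of the experiment described above.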
International Nuclear Information System (INIS)
Vaz, L.C.; Alexander, J.M.
1983-01-01
Fission angular distributions have been studied for years and have been treated as classic examples of transition-state theory. Early work involving composite nuclei of relatively low excitation energy E* ... Values of K₀² (K₀² = P_eff T/ħ²) are presented along with comparisons of P_eff to moments of inertia for saddle-point nuclei from the rotating liquid drop model. This model gives an excellent guide for the intermediate-spin zone (30 ≲ I ≲ 65), while strong shell and/or pairing effects are evident for excitations below about 35 MeV. Observations of strong anisotropies for very high-spin systems signal the demise of certain approximations commonly made in the theory, and suggestions are made toward this end. (orig.)
International Nuclear Information System (INIS)
Abril, J.M.
1998-01-01
Recently much experimental effort has been focused on determining those factors which affect the kinetics and the final equilibrium conditions for the uptake of radionuclides from the aqueous phase by particulate matter. At present, some of these results appear to be either surprising or contradictory and introduce some uncertainty in which parameter values are most appropriate for environmental modelling. In this paper, we study the ionic exchange between the dissolved phase and suspended particles from a microscopic viewpoint, developing a mathematical description of the kinetic transfer and the k_d distribution coefficients. The most relevant contribution is the assumption that the exchange of radionuclides occurs in a specific surface layer on the particles, with a non-zero thickness. A wide range of experimental findings can be explained with this theory. (Copyright (c) 1998 Elsevier Science B.V., Amsterdam. All rights reserved.)
Isar, Aurelian
1995-01-01
The harmonic oscillator with dissipation is studied within the framework of the Lindblad theory for open quantum systems. By using the Wang-Uhlenbeck method, the Fokker-Planck equation, obtained from the master equation for the density operator, is solved for the Wigner distribution function, subject to either the Gaussian type or the delta-function type of initial conditions. The obtained Wigner functions are two-dimensional Gaussians with different widths. Then a closed expression for the density operator is extracted. The entropy of the system is subsequently calculated and its temporal behavior shows that this quantity relaxes to its equilibrium value.
McGill, Brian J.; Etienne, Rampal S.; Gray, John S.; Alonso, David; Anderson, Marti J.; Benecha, Habtamu Kassa; Dornelas, Maria; Enquist, Brian J.; Green, Jessica L.; He, Fangliang; Hurlbert, Allen H.; Magurran, Anne E.; Marquet, Pablo A.; Maurer, Brian A.; Ostling, Annette; Soykan, Candan U.; Ugland, Karl I.; White, Ethan P.
2007-01-01
Species abundance distributions (SADs) follow one of ecology's oldest and most universal laws - every community shows a hollow curve or hyperbolic shape on a histogram with many rare species and just a few common species. Here, we review theoretical, empirical and statistical developments in the
Air method measurements of apple vessel length distributions with improved apparatus and theory
Shabtal Cohen; John Bennink; Mel Tyree
2003-01-01
Studies showing that rootstock dwarfing potential is related to plant hydraulic conductance led to the hypothesis that xylem properties are also related. Vessel length distribution and other properties of apple wood from a series of varieties were measured using the 'air method' in order to test this hypothesis. Apparatus was built to measure and monitor...
Bhamidi, S.; Van der Hofstad, R.; Hooghiemstra, G.
2010-01-01
We study first passage percolation (FPP) on the configuration model (CM) having power-law degrees with exponent τ ∈ [1, 2) and exponential edge weights. We derive the distributional limit of the minimal weight of a path between typical vertices in the network and the number of edges on the
Design of air distribution system in operating rooms - theory versus practice
Melhado, M.A.; Loomans, M.G.L.C.; Hensen, J.L.M.; Lamberts, R.
2016-01-01
Air distribution systems need to secure a good indoor air quality in operating rooms (ORs), minimize the risk of surgical site infections, and establish suitable working conditions for the surgical team through the thermal comfort. The paper presents an overview of the design and decision process of
Angular distribution of Xe 5s→εp photoelectrons: disagreement between experiment and theory
International Nuclear Information System (INIS)
Fahlman, A.; Carlson, T.A.; Krause, M.O.
1983-01-01
The angular asymmetry parameter β for the Xe 5s→εp photoelectrons has been studied with the use of synchrotron radiation (hν = 28–65 eV). The present results show that the relativistic random-phase approximation theory does not satisfactorily describe the Xe 5s photoionization process close to the Cooper minimum and thus requires a renewed theoretical approach. The 5s partial photoionization cross section was obtained over the same photon region, and the results agree with experimental values found in the literature
International Nuclear Information System (INIS)
Barabash, R.I.; Krivoglaz, M.A.; AN Ukrainskoj SSR, Kiev. Inst. Metallofiziki)
1981-01-01
The X-ray scattering by strongly distorted heterogeneous alloys containing inclusions of new-phase particles is discussed. Two models describing the lamellar structure with various orientations of the inclusion axes in different layers are studied. In the first model the dimensions of the inclusions are small in comparison with the layer thickness and they are randomly distributed in it; in the second model lamellar inclusions stretch through the whole layer. It is shown that in both models the Debye broadened-line intensity distribution consists of overlapping Lorentz curves. The case of inclusions oriented along the [100] directions with layers perpendicular to the [110] axes is analyzed in detail. The results obtained for this case are compared with experimental results for the Cu-Be alloy
Energy Technology Data Exchange (ETDEWEB)
Chen, Y W [Surface Physics Laboratory and Department of Physics, Fudan University, Shanghai 200433 (China); Zhang, L F [Surface Physics Laboratory and Department of Physics, Fudan University, Shanghai 200433 (China); Huang, J P [Surface Physics Laboratory and Department of Physics, Fudan University, Shanghai 200433 (China)
2007-07-20
By using theoretical analysis and computer simulations, we develop the Watts-Strogatz network model by including degree distribution, in an attempt to improve the comparison between characteristic path lengths and clustering coefficients predicted by the original Watts-Strogatz network model and those of the real networks with the small-world property. Good agreement between the predictions of the theoretical analysis and those of the computer simulations has been shown. It is found that the developed Watts-Strogatz network model can fit the real small-world networks more satisfactorily. Some other interesting results are also reported by adjusting the parameters in a model degree-distribution function. The developed Watts-Strogatz network model is expected to help in the future analysis of various social problems as well as financial markets with the small-world property.
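The two quantities compared in the abstract, characteristic path length and clustering coefficient, can be computed for a basic Watts-Strogatz construction in pure Python. The sketch below (an illustrative re-implementation of the standard ring-lattice-plus-rewiring model, not the authors' extended model) builds a network and measures both:

```python
import random
from collections import deque

def watts_strogatz(n, k, p, rng):
    """Ring lattice of n nodes, each linked to its k nearest neighbours (k even);
    every edge is rewired to a random target with probability p."""
    edges = set()
    for i in range(n):
        for j in range(1, k // 2 + 1):
            edges.add((i, (i + j) % n))
    adj = {i: set() for i in range(n)}
    for (u, v) in sorted(edges):
        w = v
        if rng.random() < p:                  # rewire this edge
            w = rng.randrange(n)
            while w == u or w in adj[u]:      # avoid self-loops and multi-edges
                w = rng.randrange(n)
        adj[u].add(w); adj[w].add(u)
    return adj

def clustering(adj):
    """Average local clustering coefficient."""
    total = 0.0
    for i, nb in adj.items():
        nb = list(nb)
        links = sum(1 for a in range(len(nb)) for b in range(a + 1, len(nb))
                    if nb[b] in adj[nb[a]])
        pairs = len(nb) * (len(nb) - 1) / 2
        total += links / pairs if pairs else 0.0
    return total / len(adj)

def avg_path_length(adj):
    """Mean shortest-path length over ordered vertex pairs, via BFS from each node."""
    n, total = len(adj), 0
    for s in adj:
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(dist.values())
    return total / (n * (n - 1))

rng = random.Random(42)
lattice = watts_strogatz(60, 4, 0.0, rng)       # no rewiring: regular ring lattice
small_world = watts_strogatz(60, 4, 0.1, rng)   # a few shortcuts: small-world regime
```

Already at p = 0.1 the few shortcuts cut the average path length sharply while the clustering stays comparatively high, which is the small-world signature the abstract refers to.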
Input modeling with phase-type distributions and Markov models theory and applications
Buchholz, Peter; Felko, Iryna
2014-01-01
Containing a summary of several recent results on Markov-based input modeling in a coherent notation, this book introduces and compares algorithms for parameter fitting and gives an overview of available software tools in the area. Due to progress made in recent years with respect to new algorithms to generate PH distributions and Markovian arrival processes from measured data, the models outlined are useful alternatives to other distributions or stochastic processes used for input modeling. Graduate students and researchers in applied probability, operations research and computer science along with practitioners using simulation or analytical models for performance analysis and capacity planning will find the unified notation and up-to-date results presented useful. Input modeling is the key step in model based system analysis to adequately describe the load of a system using stochastic models. The goal of input modeling is to find a stochastic model to describe a sequence of measurements from a real system...
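A phase-type distribution, the central object of such input modeling, is the absorption time of a finite continuous-time Markov chain. The toy sampler below (an illustrative sketch, not from the book; the state names and example are assumptions) simulates absorption times for the simplest case, an Erlang distribution built from two sequential exponential phases:

```python
import random

def sample_phase_type(alpha, T, rng):
    """Absorption time of a CTMC with initial distribution alpha over the transient
    states and sub-generator matrix T (the exit rate to absorption is minus the row sum)."""
    # choose the initial transient state from alpha
    u, state, acc = rng.random(), 0, 0.0
    for i, a in enumerate(alpha):
        acc += a
        if u < acc:
            state = i
            break
    t = 0.0
    while state is not None:
        rate = -T[state][state]              # total outflow rate from this state
        t += rng.expovariate(rate)           # holding time in this state
        # pick the next transient state; falling through the loop means absorption
        u, acc, nxt = rng.random() * rate, 0.0, None
        for j in range(len(T)):
            if j == state:
                continue
            acc += T[state][j]
            if u < acc:
                nxt = j
                break
        state = nxt
    return t

# Erlang(2, rate=3) as a phase-type: two sequential exponential phases, mean 2/3
alpha = [1.0, 0.0]
T = [[-3.0, 3.0],
     [0.0, -3.0]]
rng = random.Random(0)
times = [sample_phase_type(alpha, T, rng) for _ in range(20000)]
mean_t = sum(times) / len(times)
```

Fitting, as described in the book, runs the other way: given measured inter-arrival times, algorithms estimate alpha and T so that the sampled law above matches the data.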
Monte Carlo Calculation of Sensitivities to Secondary Angular Distributions. Theory and Validation
International Nuclear Information System (INIS)
Perell, R. L.
2002-01-01
The basic methods for solution of the transport equation in practical use today are the discrete ordinates (SN) method and the Monte Carlo method. While the SN method typically consumes less computation time, the Monte Carlo method is often preferred for detailed and general descriptions of three-dimensional geometries, and for calculations using cross sections that are point-wise energy dependent. For the analysis of experimental and calculated results, sensitivities are needed. Sensitivities to material parameters in general, and to the angular distribution of the secondary (scattered) neutrons in particular, can be calculated by well-known SN methods, using the fluxes obtained from solution of the direct and adjoint transport equations. Algorithms to calculate sensitivities to cross sections with Monte Carlo methods have been known for quite some time. However, only recently have we developed a general Monte Carlo algorithm for the calculation of sensitivities to the angular distribution of the secondary neutrons
Verron, E.; Gros, A.
2017-09-01
Most network models for soft materials, e.g. elastomers and gels, are dedicated to idealized materials: all chains admit the same number of Kuhn segments. Nevertheless, such standard models are not appropriate for materials involving multiple networks, and some specific constitutive equations devoted to these materials have been derived in the last few years. In nearly all cases, idealized networks of different chain lengths are assembled following an equal strain assumption; only a few papers adopt an equal stress assumption, although some authors argue that such a hypothesis would reflect the equilibrium of the different networks in contact. In this work, a full-network model with an arbitrary chain length distribution is derived by considering that chains of different lengths satisfy the equal force assumption in each direction of the unit sphere. The derivation is restricted to non-Gaussian freely jointed chains and to affine deformation of the sphere. Firstly, after a proper definition of the undeformed configuration of the network, we demonstrate that the equal force assumption leads to the equality of a normalized stretch in chains of different lengths. Secondly, we establish that the network with a chain length distribution behaves as an idealized full-network whose chain length and chain density are both provided by the chain length distribution. This approach is finally illustrated with two examples: the derivation of a new expression for the Young modulus of bimodal interpenetrated polymer networks, and the prediction of the change in fluorescence during deformation of mechanochemically responsive elastomers.
Papalexiou, Simon Michael
2018-05-01
Hydroclimatic processes come in all "shapes and sizes". They are characterized by different spatiotemporal correlation structures and probability distributions that can be continuous, mixed-type, discrete or even binary. Simulating such processes by reproducing precisely their marginal distribution and linear correlation structure, including features like intermittency, can greatly improve hydrological analysis and design. Traditionally, modelling schemes are case specific and typically attempt to preserve few statistical moments providing inadequate and potentially risky distribution approximations. Here, a single framework is proposed that unifies, extends, and improves a general-purpose modelling strategy, based on the assumption that any process can emerge by transforming a specific "parent" Gaussian process. A novel mathematical representation of this scheme, introducing parametric correlation transformation functions, enables straightforward estimation of the parent-Gaussian process yielding the target process after the marginal back transformation, while it provides a general description that supersedes previous specific parameterizations, offering a simple, fast and efficient simulation procedure for every stationary process at any spatiotemporal scale. This framework, also applicable for cyclostationary and multivariate modelling, is augmented with flexible parametric correlation structures that parsimoniously describe observed correlations. Real-world simulations of various hydroclimatic processes with different correlation structures and marginals, such as precipitation, river discharge, wind speed, humidity, extreme events per year, etc., as well as a multivariate example, highlight the flexibility, advantages, and complete generality of the method.
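The "parent Gaussian" strategy this abstract unifies can be illustrated with a minimal sketch: draw an autocorrelated Gaussian (AR(1)) series and back-transform it through the marginal of interest, here an exponential distribution chosen purely for illustration. The paper's parametric correlation transformation functions, which map target correlations to parent-Gaussian correlations, are not reproduced:

```python
import math
import random

def phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def simulate_exponential_ar1(n, rho, scale=1.0, seed=0):
    """Sample an autocorrelated series with an exponential marginal by
    transforming a 'parent' AR(1) Gaussian process (lag-1 correlation rho)."""
    rng = random.Random(seed)
    z = rng.gauss(0.0, 1.0)
    innov = math.sqrt(1.0 - rho * rho)
    out = []
    for _ in range(n):
        u = phi(z)                       # Gaussian -> uniform(0, 1)
        x = -scale * math.log(1.0 - u)   # uniform -> exponential(scale)
        out.append(x)
        z = rho * z + innov * rng.gauss(0.0, 1.0)
    return out
```

The back transformation preserves the exponential marginal exactly, while the autocorrelation of the output series is a distorted version of the parent's; correcting for that distortion is precisely what the paper's correlation transformation functions address.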
The complete two-loop integrated jet thrust distribution in soft-collinear effective theory
International Nuclear Information System (INIS)
Manteuffel, Andreas von; Schabinger, Robert M.; Zhu, Hua Xing
2014-01-01
In this work, we complete the calculation of the soft part of the two-loop integrated jet thrust distribution in e+e− annihilation. This jet mass observable is based on the thrust cone jet algorithm, which involves a veto scale for out-of-jet radiation. The previously uncomputed part of our result depends in a complicated way on the jet cone size, r, and at intermediate stages of the calculation we actually encounter a new class of multiple polylogarithms. We employ an extension of the coproduct calculus to systematically exploit functional relations and represent our results concisely. In contrast to the individual contributions, the sum of all global terms can be expressed in terms of classical polylogarithms. Our explicit two-loop calculation enables us to clarify the small r picture discussed in earlier work. In particular, we show that the resummation of the logarithms of r that appear in the previously uncomputed part of the two-loop integrated jet thrust distribution is inextricably linked to the resummation of the non-global logarithms. Furthermore, we find that the logarithms of r which cannot be absorbed into the non-global logarithms in the way advocated in earlier work have coefficients fixed by the two-loop cusp anomalous dimension. We also show that in many cases one can straightforwardly predict potentially large logarithmic contributions to the integrated jet thrust distribution at L loops by making use of analogous contributions to the simpler integrated hemisphere soft function
Distribution of Schmidt-like eigenvalues for Gaussian ensembles of the random matrix theory
Pato, Mauricio P.; Oshanin, Gleb
2013-03-01
We study the probability distribution function P_n^{(β)}(w) of the Schmidt-like random variable w = x_1^2/(∑_{j=1}^n x_j^2/n), where the x_j (j = 1, 2, …, n) are unordered eigenvalues of a given n × n β-Gaussian random matrix, β being the Dyson symmetry index. This variable, by definition, can be considered as a measure of how any individual (randomly chosen) eigenvalue deviates from the arithmetic mean value of all eigenvalues of a given random matrix, and its distribution is calculated with respect to the ensemble of such β-Gaussian random matrices. We show that in the asymptotic limit n → ∞ and for arbitrary β the distribution P_n^{(β)}(w) converges to the Marčenko-Pastur form, i.e. P_n^{(β)}(w) ∼ √((4 − w)/w) for w ∈ [0, 4] and equals zero outside of the support, despite the fact that formally w is defined on the interval [0, n]. Furthermore, for Gaussian unitary ensembles (β = 2) we present exact explicit expressions for P_n^{(β=2)}(w) which are valid for arbitrary n and analyse their behaviour.
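As a numerical sanity check on the limiting form quoted above: with the normalization factor 1/(2π), an assumption of this sketch rather than something stated in the abstract, the density √((4 − w)/w) on [0, 4] integrates to one and has unit mean, consistent with w measuring deviation from the arithmetic mean of the eigenvalues:

```python
import math

def mp_density(w):
    """Marchenko-Pastur-type density sqrt((4 - w)/w)/(2*pi) on [0, 4].
    The 1/(2*pi) normalization is an assumption of this sketch."""
    if w <= 0.0 or w >= 4.0:
        return 0.0
    return math.sqrt((4.0 - w) / w) / (2.0 * math.pi)

def midpoint_integral(f, a, b, n=400000):
    """Midpoint rule; avoids the integrable 1/sqrt(w) endpoint singularity."""
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

total = midpoint_integral(mp_density, 0.0, 4.0)                      # ~ 1
mean_w = midpoint_integral(lambda w: w * mp_density(w), 0.0, 4.0)    # ~ 1
```

The unit mean also follows exactly from the definition of w: averaging w over the n eigenvalues of any single matrix gives exactly 1.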
Distribution of Schmidt-like eigenvalues for Gaussian ensembles of the random matrix theory
International Nuclear Information System (INIS)
Pato, Mauricio P; Oshanin, Gleb
2013-01-01
We study the probability distribution function P_n^{(β)}(w) of the Schmidt-like random variable w = x_1^2/(∑_{j=1}^n x_j^2/n), where the x_j (j = 1, 2, …, n) are unordered eigenvalues of a given n × n β-Gaussian random matrix, β being the Dyson symmetry index. This variable, by definition, can be considered as a measure of how any individual (randomly chosen) eigenvalue deviates from the arithmetic mean value of all eigenvalues of a given random matrix, and its distribution is calculated with respect to the ensemble of such β-Gaussian random matrices. We show that in the asymptotic limit n → ∞ and for arbitrary β the distribution P_n^{(β)}(w) converges to the Marčenko–Pastur form, i.e. P_n^{(β)}(w) ∼ √((4 − w)/w) for w ∈ [0, 4] and equals zero outside of the support, despite the fact that formally w is defined on the interval [0, n]. Furthermore, for Gaussian unitary ensembles (β = 2) we present exact explicit expressions for P_n^{(β=2)}(w) which are valid for arbitrary n and analyse their behaviour. (paper)
Probability distribution of distance in a uniform ellipsoid: Theory and applications to physics
International Nuclear Information System (INIS)
Parry, Michelle; Fischbach, Ephraim
2000-01-01
A number of authors have previously found the probability P_n(r) that two points uniformly distributed in an n-dimensional sphere are separated by a distance r. This result greatly facilitates the calculation of self-energies of spherically symmetric matter distributions interacting by means of an arbitrary radially symmetric two-body potential. We present here the analogous results for P_2(r;ε) and P_3(r;ε), which respectively describe an ellipse and an ellipsoid whose major and minor axes are 2a and 2b. It is shown that for ε = (1 − b²/a²)^{1/2} ≤ 1, P_2(r;ε) and P_3(r;ε) can be obtained as an expansion in powers of ε, and our results are valid through order ε⁴. As an application of these results we calculate the Coulomb energy of an ellipsoidal nucleus, and compare our result to an earlier result quoted in the literature. (c) 2000 American Institute of Physics
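In the spherical limit ε = 0, P_3(r) implies the known mean pair distance 36a/35 for a uniform ball of radius a. A quick Monte Carlo check of that spherical case (illustrative only, not the paper's ε-expansion) is:

```python
import math
import random

def random_point_in_ball(rng, radius=1.0):
    """Uniform point in a 3-ball: isotropic direction via Gaussians,
    radius drawn so that r^3 is uniform."""
    while True:
        x, y, z = rng.gauss(0, 1), rng.gauss(0, 1), rng.gauss(0, 1)
        norm = math.sqrt(x * x + y * y + z * z)
        if norm > 1e-12:
            break
    r = radius * rng.random() ** (1.0 / 3.0)
    return (r * x / norm, r * y / norm, r * z / norm)

def mean_pair_distance(n_pairs=100000, seed=0):
    """Monte Carlo estimate of the mean distance between two uniform
    points in the unit 3-ball; the exact value is 36/35."""
    rng = random.Random(seed)
    s = 0.0
    for _ in range(n_pairs):
        p = random_point_in_ball(rng)
        q = random_point_in_ball(rng)
        s += math.dist(p, q)
    return s / n_pairs
```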
Tang, Jinjun; Zhang, Shen; Chen, Xinqiang; Liu, Fang; Zou, Yajie
2018-03-01
Understanding the Origin-Destination (OD) distribution of taxi trips is very important for improving the effects of transportation planning and enhancing the quality of taxi services. This study proposes a new method based on Entropy-Maximizing theory to model the OD distribution in Harbin city using large-scale taxi GPS trajectories. Firstly, a K-means clustering method is utilized to partition the raw pick-up and drop-off locations into different zones, and trips are assumed to start from and end at zone centers. A generalized cost function is further defined by considering the travel distance, time and fee between each OD pair. GPS data collected from more than 1000 taxis at an interval of 30 s during one month are divided into two parts: data from the first twenty days are treated as the training dataset and data from the last ten days as the testing dataset. The training dataset is used to calibrate the model while the testing dataset is used to validate it. Furthermore, three indicators, the mean absolute error (MAE), root mean square error (RMSE) and mean percentage absolute error (MPAE), are applied to evaluate the training and testing performance of the Entropy-Maximizing model versus the Gravity model. The results demonstrate that the Entropy-Maximizing model is superior to the Gravity model. The findings validate the feasibility of deriving the OD distribution from taxi GPS data in an urban system.
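The zone-partitioning step can be sketched with plain Lloyd's K-means on (x, y) pick-up/drop-off coordinates. The function below is an illustrative assumption, not the authors' implementation, which presumably operates on projected GPS coordinates:

```python
import math
import random

def kmeans(points, k, iters=50, seed=0):
    """Plain Lloyd's algorithm: partition (x, y) locations into k zones.
    Returns (centers, labels)."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    labels = [0] * len(points)
    for _ in range(iters):
        # assignment step: nearest center by Euclidean distance
        for i, p in enumerate(points):
            labels[i] = min(range(k), key=lambda c: math.dist(p, centers[c]))
        # update step: move each center to the mean of its members
        for c in range(k):
            members = [points[i] for i in range(len(points)) if labels[i] == c]
            if members:
                centers[c] = (sum(m[0] for m in members) / len(members),
                              sum(m[1] for m in members) / len(members))
    return centers, labels
```

Trips are then aggregated between the resulting zone centers, which is the OD matrix the entropy-maximizing model is calibrated against.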
Dekkers, Petrus J; Friedlander, Sheldon K
2002-04-15
Gas-phase synthesis of fine solid particles leads to fractal-like structures whose transport and light scattering properties differ from those of their spherical counterparts. Self-preserving size distribution theory provides a useful methodology for analyzing the asymptotic behavior of such systems. Apparent inconsistencies in previous treatments of the self-preserving size distributions in the free molecule regime are resolved. Integro-differential equations for fractal-like particles in the continuum and near-continuum regimes are derived and used to calculate the self-preserving and quasi-self-preserving size distributions for agglomerates formed by Brownian coagulation. The results for the limiting case (the continuum regime) were compared with the results of other authors, and the finite difference method was in good agreement with previous calculations in that regime. A new analysis of aerosol agglomeration for the entire Knudsen number range was developed and compared with a monodisperse model; higher agglomeration rates were found for lower fractal dimensions, as expected from previous studies. Effects of fractal dimension, pressure, volume loading and temperature on agglomerate growth were investigated. The agglomeration rate can be reduced by decreasing the volumetric loading or by increasing the pressure. In laminar flow, an increase in pressure can be used to control particle growth and polydispersity. For D_f = 2, an increase in pressure from 1 to 4 bar reduces the collision radius by about 30%. Varying the temperature has a much smaller effect on agglomerate coagulation.
The analysis of linear partial differential operators I distribution theory and Fourier analysis
Hörmander, Lars
2003-01-01
The main change in this edition is the inclusion of exercises with answers and hints. This is meant to emphasize that this volume has been written as a general course in modern analysis on a graduate student level and not only as the beginning of a specialized course in partial differen tial equations. In particular, it could also serve as an introduction to harmonic analysis. Exercises are given primarily to the sections of gen eral interest; there are none to the last two chapters. Most of the exercises are just routine problems meant to give some familiarity with standard use of the tools introduced in the text. Others are extensions of the theory presented there. As a rule rather complete though brief solutions are then given in the answers and hints. To a large extent the exercises have been taken over from courses or examinations given by Anders Melin or myself at the University of Lund. I am grateful to Anders Melin for letting me use the problems originating from him and for numerous valuable comm...
Li, H. W.; Pan, Z. Y.; Ren, Y. B.; Wang, J.; Gan, Y. L.; Zheng, Z. Z.; Wang, W.
2018-03-01
In view of the radial operating characteristics of distribution systems, this paper proposes a new method based on the minimum spanning tree for optimal capacitor switching. Firstly, taking minimal active power loss as the objective function and ignoring the capacity constraints of the capacitors and the source, the paper uses the Prim algorithm to determine the power supply ranges of the capacitors and the source. Then, with the capacity constraints of the capacitors considered, the capacitors are ranked by breadth-first search. In descending order of this ranking, the compensation capacity of each capacitor is calculated from its power supply range. Finally, the IEEE 69-bus system is adopted to test the accuracy and practicality of the proposed algorithm.
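The Prim step referred to above is standard; a minimal heap-based sketch (illustrative names, not the authors' code) is:

```python
import heapq

def prim_mst(n, edges):
    """Prim's algorithm. n nodes labelled 0..n-1, edges as (u, v, weight)
    triples. Returns (total_weight, list of tree edges)."""
    adj = {u: [] for u in range(n)}
    for u, v, w in edges:
        adj[u].append((w, v))
        adj[v].append((w, u))
    visited = [False] * n
    tree, total = [], 0.0
    heap = [(0.0, 0, -1)]            # (edge weight, node, parent)
    while heap:
        w, u, parent = heapq.heappop(heap)
        if visited[u]:
            continue
        visited[u] = True
        if parent >= 0:
            tree.append((parent, u, w))
            total += w
        for wv, v in adj[u]:
            if not visited[v]:
                heapq.heappush(heap, (wv, v, u))
    return total, tree
```

In the paper's setting the tree growth would additionally be seeded at the source and capacitor buses and weighted so as to delimit each device's supply range; the sketch shows only the generic algorithm.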
Multiobjective Optimization of Water Distribution Networks Using Fuzzy Theory and Harmony Search
Directory of Open Access Journals (Sweden)
Zong Woo Geem
2015-07-01
Thus far, various phenomenon-mimicking algorithms, such as the genetic algorithm, simulated annealing, tabu search, shuffled frog-leaping, ant colony optimization, harmony search, cross entropy, scatter search, and honey-bee mating, have been proposed to optimally design water distribution networks with respect to design cost. However, the flow velocity constraint, which is critical for structural robustness against water hammer and for flow circulation against substance sedimentation, was seldom considered in the optimization formulation because of computational complexity. Thus, this study proposes a novel fuzzy-based velocity reliability index, which is to be maximized while the design cost is simultaneously minimized. The velocity reliability index is included in the existing cost optimization formulation and this extended multiobjective formulation is applied to two benchmark problems. Results show that the model successfully found a Pareto set of multiobjective design solutions in terms of cost minimization and reliability maximization.
Directory of Open Access Journals (Sweden)
Mingqi Qiao
2017-01-01
We performed an epidemiological investigation of subjects with premenstrual dysphoric disorder (PMDD to identify the clinical distribution of the major syndromes and symptoms. The pathogenesis of PMDD mainly involves the dysfunction of liver conveyance and dispersion. Excessive liver conveyance and dispersion are associated with liver-qi invasion syndrome, while insufficient liver conveyance and dispersion are expressed as liver-qi depression syndrome. Additionally, a nonconditional logistic regression was performed to analyze the symptomatic features of liver-qi invasion and liver-qi depression. As a result of this analysis, two subtypes of PMDD are proposed, namely, excessive liver conveyance and dispersion (liver-qi invasion syndrome and insufficient liver conveyance and dispersion (liver-qi depression syndrome. Our findings provide an epidemiological foundation for the clinical diagnosis and treatment of PMDD based on the identification of different types.
Morgenthaler, George W.
1989-01-01
The ability to launch-on-time and to send payloads into space has progressed dramatically since the days of the earliest missile and space programs. Causes for delay during launch, i.e., unplanned 'holds', are attributable to several sources: weather, range activities, vehicle conditions, human performance, etc. Recent developments in space programs, particularly the need for highly reliable logistic support of space construction and the subsequent planned operation of space stations, large unmanned space structures, lunar and Mars bases, and the necessity of providing 'guaranteed' commercial launches, have placed increased emphasis on understanding and mastering every aspect of launch vehicle operations. The Center of Space Construction has acquired historical launch vehicle data and is applying these data to the analysis of space launch vehicle logistic support of space construction. This analysis includes the development of a better understanding of launch-on-time capability and the simulation of required support systems for vehicle assembly and launch which are necessary to support national space program construction schedules. In this paper, the author presents actual launch data on unscheduled 'hold' distributions of various launch vehicles. The data have been supplied by industrial associate companies of the Center for Space Construction. The paper seeks to determine suitable probability models that describe these historical data and that can be used for several purposes, such as providing inputs to broader simulations of launch vehicle logistic space construction support processes, and determining which launch operations sources cause the majority of the unscheduled 'holds', hence suggesting changes which might improve launch-on-time capability. In particular, the paper investigates the ability of a compound distribution probability model to fit actual data, versus alternative models, and recommends the most productive avenues for future statistical work.
Regnier, D.; Dubray, N.; Schunck, N.; Verrière, M.
2016-05-01
Background: Accurate knowledge of fission fragment yields is an essential ingredient of numerous applications ranging from the formation of elements in the r process to fuel cycle optimization for nuclear energy. The need for a predictive theory applicable where no data are available, together with the variety of potential applications, is an incentive to develop a fully microscopic approach to fission dynamics. Purpose: In this work, we calculate the pre-neutron emission charge and mass distributions of the fission fragments formed in the neutron-induced fission of 239Pu using a microscopic method based on nuclear density functional theory (DFT). Methods: Our theoretical framework is the nuclear energy density functional (EDF) method, where large-amplitude collective motion is treated adiabatically by using the time-dependent generator coordinate method (TDGCM) under the Gaussian overlap approximation (GOA). In practice, the TDGCM is implemented in two steps. First, a series of constrained EDF calculations map the configuration and potential-energy landscape of the fissioning system for a small set of collective variables (in this work, the axial quadrupole and octupole moments of the nucleus). Then, nuclear dynamics is modeled by propagating a collective wave packet on the potential-energy surface. Fission fragment distributions are extracted from the flux of the collective wave packet through the scission line. Results: We find that the main characteristics of the fission charge and mass distributions can be well reproduced by existing energy functionals even in two-dimensional collective spaces. Theory and experiment agree typically within two mass units for the position of the asymmetric peak. As expected, calculations are sensitive to the structure of the initial state and the prescription for the collective inertia. We emphasize that results are also sensitive to the continuity of the collective landscape near scission. Conclusions: Our analysis confirms
Using reactor network for global identification based on residence time distribution theory
Energy Technology Data Exchange (ETDEWEB)
Hocine, S.; Pibouleau, L.; Azzaro-Pantel, C.; Domenech, S. [Laboratoire de Genie Chimique - UMR 5503 CNRS/ INPT ENSIACET, 31 - Toulouse (France)
2006-07-01
In ventilation systems, the control of contaminant transfer is one of the principal problems during the design and control phases. The installation of a suitable ventilation system for the control of contaminant transfer is essential in industry, because it makes it possible to detect and to prevent chemical and radiological risks. Research on air distribution in ventilated rooms traditionally involves full-scale experiments, scale-model experiments and application of computational fluid dynamics (CFD) tools. Most of the time, particularly in our case of large and cluttered enclosures, the predictive approach based on CFD codes cannot be used. The solution retained here is the establishment of a model based on the well-known residence time distribution. This model is widely used in chemical engineering to treat non-ideal flows. The proposed method is based on the experimental determination of the residence time distribution curve, generally obtained through the response of the system to a tracer release. A superstructure involving the set of all the possible solutions corresponding to the physical reactor is then defined, and the model is selected from this superstructure according to its simulated response. The superstructure is identified as a combination of elementary systems representing ideal flow patterns, such as perfectly mixed flows, plug flows, continuous stirred tank reactors, etc. The selected model is derived from the comparison between the simulated response to a stimulus and the experimental response. The structure and parameters of the model are simultaneously optimized in order to fit the experimental curve with a minimal number of elementary units, constituting a key point for future control purposes of the process. This problem is a dynamic MINLP (Mixed Integer Non Linear Programming) problem with bilinear equality constraints. Generally, these constraints lead to numerical difficulties for reaching an optimum solution (even a
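The ideal flow patterns mentioned above compose into analytic residence time distributions. A standard textbook example (illustrative only, not the authors' superstructure) is the tanks-in-series RTD, a gamma density whose zeroth moment is 1 and whose first moment is the total mean residence time τ:

```python
import math

def tanks_in_series_rtd(t, n_tanks, tau):
    """Residence time distribution E(t) of n_tanks equal CSTRs in series
    with total mean residence time tau (gamma density, shape n_tanks)."""
    ti = tau / n_tanks               # per-tank time constant
    return (t ** (n_tanks - 1) * math.exp(-t / ti)
            / (math.factorial(n_tanks - 1) * ti ** n_tanks))

def moment(k, n_tanks, tau, t_max=None, steps=200000):
    """Numerical k-th moment of E(t) by the midpoint rule."""
    if t_max is None:
        t_max = 20.0 * tau           # tail beyond this is negligible
    h = t_max / steps
    return h * sum((((i + 0.5) * h) ** k)
                   * tanks_in_series_rtd((i + 0.5) * h, n_tanks, tau)
                   for i in range(steps))
```

Fitting a measured tracer curve then amounts to choosing the number and arrangement of such units, and their time constants, which is the identification problem the abstract formulates as a dynamic MINLP.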
Directory of Open Access Journals (Sweden)
O. Klemp
2006-01-01
In order to satisfy the stringent demand for accurate prediction of MIMO channel capacity and diversity performance in wireless communications, more effective and suitable models that account for real antenna radiation behavior are required. One of the main challenges is the accurate modeling of antenna correlation, which is directly related to the channel capacity or diversity gain that might be achieved in multi-element antenna configurations. Spherical wave theory in electromagnetics is a well-known technique for expressing antenna far fields by means of a compact field expansion with a reduced number of unknowns, and it was recently applied to derive an analytical approach for the computation of antenna pattern correlation. In this paper we present a novel and efficient computational technique to determine antenna pattern correlation based on the evaluation of the surface current distribution by means of a spherical mode expansion.
A theory of power-law distributions in financial market fluctuations.
Gabaix, Xavier; Gopikrishnan, Parameswaran; Plerou, Vasiliki; Stanley, H Eugene
2003-05-15
Insights into the dynamics of a complex system are often gained by focusing on large fluctuations. For the financial system, huge databases now exist that facilitate the analysis of large fluctuations and the characterization of their statistical behaviour. Power laws appear to describe histograms of relevant financial fluctuations, such as fluctuations in stock price, trading volume and the number of trades. Surprisingly, the exponents that characterize these power laws are similar for different types and sizes of markets, for different market trends and even for different countries--suggesting that a generic theoretical basis may underlie these phenomena. Here we propose a model, based on a plausible set of assumptions, which provides an explanation for these empirical power laws. Our model is based on the hypothesis that large movements in stock market activity arise from the trades of large participants. Starting from an empirical characterization of the size distribution of those large market participants (mutual funds), we show that the power laws observed in financial data arise when the trading behaviour is performed in an optimal way. Our model additionally explains certain striking empirical regularities that describe the relationship between large fluctuations in prices, trading volume and the number of trades.
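Tail exponents of the kind discussed here are routinely estimated from data with the Hill estimator. The sketch below uses that standard tool (not the paper's model) and checks it against synthetic Pareto samples:

```python
import math
import random

def hill_estimator(data, k):
    """Hill estimate of the tail exponent alpha from the k largest
    observations of a heavy-tailed sample."""
    xs = sorted(data, reverse=True)
    threshold = xs[k]                # the (k+1)-th largest value
    logs = [math.log(x / threshold) for x in xs[:k]]
    return k / sum(logs)

def pareto_sample(alpha, n, seed=0):
    """Pareto(alpha) samples with x_min = 1 via inverse-CDF sampling."""
    rng = random.Random(seed)
    return [(1.0 - rng.random()) ** (-1.0 / alpha) for _ in range(n)]
```

For real financial data the choice of k (how deep into the tail to go) is the delicate part; for exact Pareto samples, as here, the estimate is insensitive to it.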
Soft gluon resummation of Drell-Yan rapidity distributions: Theory and phenomenology
International Nuclear Information System (INIS)
Bonvini, Marco; Forte, Stefano; Ridolfi, Giovanni
2011-01-01
We examine critically the theoretical underpinnings and phenomenological implications of soft gluon (threshold) resummation of rapidity distributions at a hadron collider, taking Drell-Yan production at the Tevatron and the LHC as a reference test case. First, we show that in perturbative QCD soft gluon resummation is necessary whenever the partonic (rather than the hadronic) center-of-mass energy is close enough to threshold, and we provide tools to assess when resummation is relevant for a given process. Then, we compare different prescriptions for handling the divergent nature of the series of resummed perturbative corrections, specifically the minimal and Borel prescriptions. We assess the intrinsic ambiguities of resummed results, both due to the asymptotic nature of their perturbative expansion, and to the treatment of subleading terms. Turning to phenomenology, we introduce a fast and accurate method for the implementation of resummation with the minimal and Borel prescriptions using an expansion on a basis of Chebyshev polynomials. We then present results for W and Z production as well as both high- and low-mass dilepton pairs at the LHC, and show that soft gluon resummation effects are generally comparable in size to NNLO corrections, but sometimes affected by substantial ambiguities.
Directory of Open Access Journals (Sweden)
Oreiro José Luis
2013-01-01
This article analyzes the relationship between economic growth, income distribution and the real exchange rate within the neo-Kaleckian literature, through the construction of a nonlinear macrodynamic model for an open economy in which investment in fixed capital is assumed to be a quadratic function of the real exchange rate. The model demonstrates that the prevailing regime of accumulation in a given economy depends on the type of currency misalignment: if the real exchange rate is overvalued, then the regime of accumulation will be profit-led, but if the exchange rate is undervalued, then the accumulation regime is wage-led. Subsequently, the adherence of the theoretical model to data is tested for Brazil in the period 1994/Q3-2008/Q4. The econometric results are consistent with the theoretical non-linear specification of the investment function used in the model, so that we can define the existence of a real exchange rate that maximizes the rate of capital accumulation for the Brazilian economy. From the estimate of this optimal rate we show that the real exchange rate was overvalued in 1994/Q3-2001/Q1 and 2005/Q4-2008/Q4 and undervalued in the period 2001/Q2-2005/Q3. As a direct corollary of this result, it follows that the prevailing regime of accumulation in the Brazilian economy after the last quarter of 2005 is profit-led.
International Nuclear Information System (INIS)
Choy, C.W.; Xiao, J.J.; Yu, K.W.
2007-01-01
The recent Green function formalism (GFF) has been used to study the local field distribution near a periodic interface separating two homogeneous media of different dielectric constants. In the GFF, the integral equations can be solved conveniently because of the existence of an analytic expression for the kernel (Greenian). However, due to a severe singularity in the Greenian, the formalism was formerly applied to compute the electric fields away from the interface region. In this work, we have succeeded in extending the GFF to compute the electric field inside the interface region by taking advantage of a sum rule. To our surprise, the strengths of the electric fields are quite similar in both media across the interface, despite the large difference in dielectric constants. Moreover, we propose a simple effective medium approximation (EMA) to compute the electric field inside the interface region. We show that the EMA can indeed give an excellent description of the electric field, except near a surface plasmon resonance
Talla Mbé, Jimmi Hervé; Woafo, Paul
2018-03-01
We report on a simple way to generate complex optical waveforms with very cheap and accessible equipment. The general idea consists in modulating a laser diode with an autonomous electronic oscillator; in the case of this study, we use a distributed feedback (DFB) laser diode pumped with an electronic Chua's circuit. Based on the adiabatic P-I characteristics of the laser diode at low frequencies, we show that when the total pump is greater than the laser threshold, it is possible to convert the electrical waveforms of the Chua's circuit into optical carriers. If that is not the case, the on-off dynamical behavior of the laser makes it possible to obtain many other optical waveform signals, mainly pulses. Our numerical results are consistent with experimental measurements. The work presents the advantage of extending the range of possible chaotic dynamics of laser diodes into time domains (milliseconds) where it is not usually expected with conventional modulation techniques. Moreover, this new technique of laser diode modulation brings a general benefit in terms of physical equipment, reducing cost and bulk, so that it can constitute a step towards photonic integrated circuits.
Directory of Open Access Journals (Sweden)
Ze Yuan
2017-11-01
The grid structures, load levels, and running states of distribution networks in different supply regions are known influencing factors of energy loss. In this paper, a case library of energy loss is constructed to differentiate the crucial factors of energy loss in the different supply regions. First of all, characteristic state values are selected as the representation of the cases based on an analysis of energy loss under various voltage classes and in different types of regions. Then, the methods of Grey Relational Analysis and the K-Nearest Neighbor are utilized to implement the critical technologies of case library construction, including case representation, processing, analysis, and retrieval. Moreover, the analysis software of the case library is designed based on the case library construction technology. Case studies show that there are many differences and similarities concerning the factors that influence the energy loss in different types of regions. In addition, the most relevant sample case can be retrieved from the case library. Compared with traditional techniques, constructing a case library provides a new way to identify the characteristics of energy loss in different supply regions and to formulate differentiated loss-reduction programs.
John R. Jones
1985-01-01
Quaking aspen is the most widely distributed native North American tree species (Little 1971, Sargent 1890). It grows in a great diversity of regions, environments, and communities (Harshberger 1911). Only one deciduous tree species in the world, the closely related Eurasian aspen (Populus tremula), has a wider range (Weigle and Frothingham 1911)....
Directory of Open Access Journals (Sweden)
Zhao Hao
2016-01-01
Full Text Available The problem of multifault rush repair in distribution networks (DNs) is a multiobjective dynamic combinatorial problem with topology constraints. It consists of achieving an optimal allocation of faults to repair squads and an admissible multifault rush-repair strategy with coordinated switch operations. In this article, utility theory is introduced to solve the first problem, and a new discrete bacterial colony chemotaxis (DBCC) algorithm is proposed for the second, determining the optimal sequence in which each squad repairs faults and the corresponding switch operations. This solution is called the two-stage approach. Additionally, a double mathematical optimization model based on fault level is proposed in the second stage to minimize outage loss and total repair time. A real-time adjustment multiagent system (RA-MAS) is proposed to facilitate online multifault rush-repair strategies in DNs during emergencies after natural disasters. The two-stage approach is illustrated with an example from a real urban distribution network, and simulation results show its effectiveness.
Cazé, Ana Luiza R; Mäder, Geraldo; Nunes, Teonildes S; Queiroz, Luciano P; de Oliveira, Guilherme; Diniz-Filho, José Alexandre F; Bonatto, Sandro L; Freitas, Loreta B
2016-08-01
The Atlantic Forest is one of the most species-rich ecoregions in the world. The historical origins of this richness and the evolutionary processes that produced diversification and promoted speciation in this ecosystem remain poorly understood. In this context, focusing on Passiflora contracta, a species endemic to the Atlantic Forest and distributed exclusively at sea level along forest edges, this study aimed to characterize patterns of genetic variability and to explore two hypotheses for the possible causes of genetic diversity in this region: the refuge and riverine barrier theories. We employed Bayesian methods combined with niche modeling to identify genetically homogeneous groups, to determine the diversification age, and to identify areas of long-term climate stability suitable for species survival. The analyses used molecular markers from the nuclear and plastid genomes, with samples collected throughout the entire geographic distribution of the species, and comparisons with congeneric species. The results indicated that populations were genetically structured and provided evidence of demographic stability. The molecular markers revealed a clear structure with five homogeneous groups. Interestingly, the separation of the groups coincides with the geographical locations of local rivers, corroborating the hypothesis that rivers act as barriers to gene flow in this species. The highest levels of genetic diversity and the areas identified as having long-term climate stability were found in the same region reported for other species as a possible refuge area during the climatic changes of the Quaternary. Copyright © 2016 Elsevier Inc. All rights reserved.
International Nuclear Information System (INIS)
Cai, Yuan; Wang, Jian-zhou; Tang, Yun; Yang, Yu-chen
2011-01-01
This paper presents a neural network based on adaptive resonance theory, combining distributed ART (adaptive resonance theory) and HS-ARTMAP (hyper-spherical ARTMAP network), applied to the electric load forecasting problem. Distributed ART combines the stable fast-learning capabilities of winner-take-all ART systems with the noise tolerance and code-compression capabilities of multi-layer perceptrons. HS-ARTMAP, a hybrid of an RBF (radial basis function)-network-like module that substitutes hyper-spherical basis functions for Gaussian basis functions and an ART-like module, provides incremental learning for function approximation. HS-ARTMAP receives only the compressed distributed coding produced by distributed ART, which mitigates the category proliferation problem that ARTMAP (adaptive resonance theory map) architectures often encounter, while still performing well in electric load forecasting. To demonstrate the performance of the methodology, data from New South Wales and Victoria in Australia are used. Results show that the developed method is much better than traditional BP networks and a single HS-ARTMAP neural network. -- Research highlights: → The processing of the presented network is based on compressed distributed data, an innovation among adaptive resonance theory architectures. → The presented network reduces the proliferation that Fuzzy ARTMAP architectures usually encounter. → The network forecasts electrical load online, accurately and stably. → Both one-period and multi-period load forecasting are executed using data from different cities.
Acidity in DMSO from the embedded cluster integral equation quantum solvation model.
Heil, Jochen; Tomazic, Daniel; Egbers, Simon; Kast, Stefan M
2014-04-01
The embedded cluster reference interaction site model (EC-RISM) is applied to the prediction of acidity constants of organic molecules in dimethyl sulfoxide (DMSO) solution. EC-RISM is based on a self-consistent treatment of the solute's electronic structure and the solvent's structure by coupling quantum-chemical calculations with three-dimensional (3D) RISM integral equation theory. We compare available DMSO force fields with reference calculations obtained using the polarizable continuum model (PCM). The results are evaluated statistically using two different approaches to eliminating the proton contribution: a linear regression model and an analysis of pKa shifts for compound pairs. Suitable levels of theory for the integral equation methodology are benchmarked. The results are further analyzed and illustrated by visualizing solvent site distribution functions and comparing them with an aqueous environment.
International Nuclear Information System (INIS)
Saichev, A.; Sornette, D.
2005-01-01
Using the epidemic-type aftershock sequence (ETAS) branching model of triggered seismicity, we apply the formalism of generating probability functions to calculate exactly the average difference between the magnitude of a mainshock and the magnitude of its largest aftershock over all generations. This average magnitude difference is found empirically to be independent of the mainshock magnitude and equal to 1.2, a universal behavior known as Baath's law. Our theory shows that Baath's law holds only sufficiently close to the critical regime of the ETAS branching process. Allowing for error bars of ±0.1 on Baath's constant value of 1.2, our exact analytical treatment of Baath's law provides new constraints on the productivity exponent α and the branching ratio n: 0.9 ≲ α ≤ 1 and 0.8 ≲ n ≤ 1. We propose a method for measuring α based on the predicted renormalization of the Gutenberg-Richter distribution of the magnitudes of the largest aftershock. We also introduce the 'second Baath law for foreshocks': the probability that a main earthquake turns out to be a foreshock does not depend on its magnitude.
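A toy Monte Carlo of the ETAS cascade makes the quantities in this abstract concrete. All parameter values below (b = 1, α = 0.9, m_min = 2, and k chosen so the branching ratio n = k·b/(b − α) equals 0.5) are illustrative assumptions; being far from the critical regime, the simulated magnitude gap will generally not reproduce Baath's empirical value of 1.2, but its weak dependence on the mainshock magnitude can be observed.

```python
import numpy as np

LN10 = np.log(10.0)

def largest_aftershock(m_main, rng, b=1.0, alpha=0.9, k=0.05,
                       m_min=2.0, cap=200_000):
    """Largest aftershock magnitude over all generations of one
    ETAS cascade started by a mainshock of magnitude m_main."""
    queue = [m_main]
    largest = -np.inf
    produced = 0
    while queue and produced < cap:   # cap guards rare runaway cascades
        m = queue.pop()
        # productivity law: mean number of direct aftershocks of event m
        n_child = rng.poisson(k * 10.0 ** (alpha * (m - m_min)))
        if n_child == 0:
            continue
        produced += n_child
        # Gutenberg-Richter magnitudes: exponential with rate b*ln(10)
        children = m_min + rng.exponential(1.0 / (b * LN10), n_child)
        largest = max(largest, children.max())
        queue.extend(children.tolist())
    return largest

def mean_gap(m_main, n_seq=100, seed=0):
    """Average of (mainshock magnitude - largest aftershock magnitude)."""
    rng = np.random.default_rng(seed)
    gaps = [m_main - largest_aftershock(m_main, rng) for _ in range(n_seq)]
    gaps = [g for g in gaps if np.isfinite(g)]   # drop childless cascades
    return float(np.mean(gaps))

# the gap varies only weakly with the mainshock magnitude; matching the
# empirical 1.2 would require tuning the parameters near criticality
print(mean_gap(5.0), mean_gap(6.0))
```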
Ghadiri, Majid; Shafiei, Navvab
2016-04-01
In this study, the thermal vibration of a rotary functionally graded (FG) Timoshenko microbeam is analyzed based on modified couple stress theory, considering temperature change for four types of temperature distribution in the thermal environment. Material properties of the FG microbeam are assumed to be temperature dependent and to vary continuously along the thickness according to the power-law form. Axial forces are also included in the model, arising from the thermal load and the true spatial variation due to rotation. Governing equations and boundary conditions are derived by employing Hamilton's principle, and the differential quadrature method is employed to solve them for cantilever and propped-cantilever boundary conditions. Validation is performed by comparing the obtained results with the available literature, which indicates the accuracy of the applied method. The results show the effects of temperature change, boundary conditions, nondimensional angular velocity, length scale parameter, FG index, and beam thickness on the fundamental, second, and third nondimensional frequencies. They also determine critical values of temperature change and other essential parameters, which can be applied to the design of micromachines such as micromotors and microturbines.
Distributed Leadership: A Good Theory but What if Leaders Won't, Don't Know How, or Can't Lead?
McKenzie, Kathryn Bell; Locke, Leslie Ann
2014-01-01
This article presents the results from an empirical qualitative study of the challenges faced by teacher leaders in their attempts to work directly with their colleagues to change instructional strategies and improve student success. Additionally, it offers a challenge to the utility of a naïvely espoused theory of distributed leadership, which…
Kido, Kentaro; Kasahara, Kento; Yokogawa, Daisuke; Sato, Hirofumi
2015-07-01
In this study, we report the development of a new quantum mechanics/molecular mechanics (QM/MM)-type framework to describe chemical processes in solution by combining standard molecular-orbital calculations with a three-dimensional formalism of integral equation theory for molecular liquids (the multi-center molecular Ornstein-Zernike (MC-MOZ) method). The theoretical procedure is very similar to the 3D reference interaction site model self-consistent field (RISM-SCF) approach. Since the MC-MOZ method is highly parallelized, the present approach has the potential to be one of the most efficient procedures for treating chemical processes in solution. Benchmark tests to check the validity of this approach were performed for two solute systems (water and formaldehyde) and a simple SN2 reaction (Cl- + CH3Cl → ClCH3 + Cl-) in aqueous solution. The solute molecular properties and solvation structures obtained with the present approach were in reasonable agreement with those obtained by other hybrid frameworks and by experiment. In particular, the results of the proposed approach are in excellent agreement with those of 3D-RISM-SCF.
International Nuclear Information System (INIS)
Kawamura, Hiroyuki; Tanaka, Kazuhiro
2010-01-01
The B-meson distribution amplitude (DA) is defined as the matrix element of a quark-antiquark bilocal light-cone operator in the heavy-quark effective theory, corresponding to a long-distance component in the factorization formula for exclusive B-meson decays. The evolution equation for the B-meson DA is governed by the cusp anomalous dimension as well as the Dokshitzer-Gribov-Lipatov-Altarelli-Parisi-type anomalous dimension, and these anomalous dimensions give the "quasilocal" kernel in the coordinate-space representation. We show that this evolution equation can be solved analytically in coordinate space, accomplishing the relevant Sudakov resummation at next-to-leading logarithmic accuracy. The quasilocal nature leads to a quite simple form of our solution, which determines the B-meson DA with a quark-antiquark light-cone separation t in terms of the DA at a lower renormalization scale μ with smaller interquark separations zt (z ≤ 1). This formula allows us to present a rigorous calculation of the B-meson DA at the factorization scale ∼ √(m_b Λ_QCD) for t ≲ 1 GeV⁻¹, using the recently obtained operator product expansion of the DA as input at μ ∼ 1 GeV. We also derive a master formula that re-expresses the integrals of the DA at μ ∼ √(m_b Λ_QCD) appearing in the factorization formula in terms of compact integrals of the DA at μ ∼ 1 GeV.
Akbardin, J.; Parikesit, D.; Riyanto, B.; TMulyono, A.
2018-05-01
Zones producing land-based fishery commodities have limited distribution capability because of the available infrastructure conditions. High demand for fishery commodities has led to distribution over inefficiently long distances. Developing gravity theory with a constraint on movement generation from the production zone can increase inter-zone interaction by making distribution distances effective and efficient, i.e., shorter. A regression analysis with multiple variables describing transportation infrastructure condition, based on service level and quantitative capacity, is used to estimate the 'mass' of movement generation that is formed. The resulting movement distribution model has the equation Tid = 27.04 − 0.49 tid, based on a power-model barrier function with calibration value β = 0.0496. Bounding the movement-generation 'mass' at the production zone shortens the distribution distances effectively; shorter distribution distances increase the accessibility between zones, allowing them to interact according to the magnitude of the movement-generation 'mass'.
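A minimal sketch of a gravity-model trip distribution with the power-model barrier function f(t) = t^(−β), using the β = 0.0496 calibration quoted above. The zone productions, attractions, and distance matrix are invented for illustration, and the doubly-constrained Furness balancing is a standard textbook choice, not necessarily the authors' exact formulation.

```python
import numpy as np

def gravity_distribution(origins, destinations, t, beta=0.0496, n_iter=50):
    """Doubly-constrained gravity model with a power deterrence
    function f(t) = t**(-beta), balanced by Furness iteration."""
    f = t ** (-beta)                 # barrier (deterrence) function
    A = np.ones(len(origins))        # origin balancing factors
    B = np.ones(len(destinations))   # destination balancing factors
    for _ in range(n_iter):
        A = 1.0 / (f * B * destinations).sum(axis=1)
        B = 1.0 / (f.T * A * origins).sum(axis=1)
    # trip matrix: T_ij = A_i O_i B_j D_j f(t_ij)
    return (A * origins)[:, None] * (B * destinations)[None, :] * f

# hypothetical 3-zone example: trip productions O, attractions D
# (totals must match), and distribution distances tid (km)
O = np.array([100.0, 150.0, 80.0])
D = np.array([120.0, 90.0, 120.0])
t = np.array([[1.0, 5.0, 12.0],
              [5.0, 1.0, 7.0],
              [12.0, 7.0, 1.0]])
T = gravity_distribution(O, D, t)
print(T.round(1))   # row sums ≈ O, column sums ≈ D after balancing
```

Because β is so small here, the deterrence is nearly flat and the balancing constraints dominate; a larger β would concentrate trips on short distances.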
Sergiievskyi, Volodymyr P; Jeanmairet, Guillaume; Levesque, Maximilien; Borgis, Daniel
2014-06-05
Molecular density functional theory (MDFT) offers an efficient implicit-solvent method to estimate molecular solvation free energies while conserving a fully molecular representation of the solvent. Even within a second-order approximation for the free-energy functional, the so-called homogeneous reference fluid approximation, we show that the hydration free energies computed for a data set of 500 organic compounds are of similar quality to those obtained from molecular dynamics free-energy perturbation simulations, at a computational cost reduced by 2-3 orders of magnitude. This requires introducing the proper partial volume correction to transform the results from the grand canonical to the isobaric-isothermal ensemble that is pertinent to experiments. We show that this correction can be extended to 3D-RISM calculations, giving a sound theoretical justification to empirical partial molar volume corrections that have been proposed recently.
Li, Qifan; Chen, Yajie; Harris, Vincent G.
2018-05-01
This letter reports an extended effective medium theory (EMT) that includes particle-size distribution functions to maximize the magnetic properties of magneto-dielectric composites. It is verified experimentally with Co-Ti substituted barium ferrite (BaCoxTixFe12-2xO19)/wax composites with specifically designed particle-size distributions. In the form of an integral equation, the extended EMT formula takes the size-dependent parameters of the magnetic particle fillers into account. It predicts the effective permeability of magneto-dielectric composites with various particle-size distributions, indicating an optimal distribution for a population of magnetic particles. The improvement in the optimized effective permeability is significant for magnetic particles whose properties are strongly size dependent.
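The abstract does not reproduce the extended EMT formula itself, so the following is only a schematic stand-in: a Bruggeman-type symmetric mixing rule with the filler fraction discretized over a particle-size distribution and a hypothetical size-dependent permeability mu_particle(d). It illustrates the qualitative point that two distributions with the same mean size can yield different effective permeabilities.

```python
import numpy as np

def mu_particle(d):
    """Hypothetical size-dependent particle permeability; stands in
    for the measured size dependence of the ferrite fillers."""
    return 1.0 + 8.0 * d / (1.0 + 0.5 * d)        # d in micrometres

def effective_permeability(d_bins, weights, fill=0.4, mu_host=1.0):
    """Bruggeman-type EMT over a discretized size distribution:
    solve sum_i phi_i (mu_i - mu_e)/(mu_i + 2 mu_e)
        + (1 - fill)(mu_host - mu_e)/(mu_host + 2 mu_e) = 0."""
    w = np.asarray(weights, float)
    phi = fill * w / w.sum()                      # volume fraction per bin
    mu_i = mu_particle(np.asarray(d_bins, float))

    def residual(mu_e):
        host = (1.0 - fill) * (mu_host - mu_e) / (mu_host + 2.0 * mu_e)
        return host + np.sum(phi * (mu_i - mu_e) / (mu_i + 2.0 * mu_e))

    lo, hi = 1e-6, mu_i.max() + mu_host           # residual changes sign here
    for _ in range(80):                           # plain bisection
        mid = 0.5 * (lo + hi)
        if residual(lo) * residual(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# same mean size, narrow vs broad distribution: the effective
# permeability differs, so the distribution itself is a design knob
d = np.array([1.0, 3.0, 5.0])
mu_narrow = effective_permeability(d, [0.05, 0.90, 0.05])
mu_broad = effective_permeability(d, [0.45, 0.10, 0.45])
print(mu_narrow, mu_broad)
```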
Tian, Meng; Risku, Mika; Collin, Kaija
2016-01-01
This article provides a meta-analysis of research conducted on distributed leadership from 2002 to 2013. It continues the review of distributed leadership commissioned by the English National College for School Leadership (NCSL) ("Distributed Leadership: A Desk Study," Bennett et al., 2003), which identified two gaps in the research…
International Nuclear Information System (INIS)
Aleksanyan, V.T.; Samvelyan, S.Kh.
1984-01-01
General principles for constructing the parametric theory of IR spectrum intensities of polyatomic molecules are outlined. The development of the effective charges model in this theory is considered, and the mathematical formalism of the first approximation of the method of effective atom charges is described in detail. The results of calculations of charge distribution in Mo(CO)6, W(CO)6, Cp2V, Cp2Ru and others (Cp = cyclopentadienyl), performed in the framework of the outlined scheme, are presented. It is shown that in the investigated carbonyls the effective charge on the oxygen and metal atoms is negative and on the carbon atom positive. In the dicyclopentadienyl complexes the effective charge on the metal atom is positive and does not exceed 0.6e; charge values on the hydrogen and carbon atoms do not exceed 0.10-0.15e. The notions of 'electrovalence' of the coordination bond and of charge distribution are not correlated in the case of metallocenes.
Vlad, Marcel Ovidiu; Tsuchiya, Masa; Oefner, Peter; Ross, John
2002-01-01
We investigate the statistical properties of systems with random chemical composition and try to obtain a theoretical derivation of the self-similar Dirichlet distribution, which is used empirically in molecular biology, environmental chemistry, and geochemistry. We consider a system made up of many chemical species and assume that the statistical distribution of the abundance of each chemical species in the system is the result of a succession of a variable number of random dilution events, which can be described by using the renormalization-group theory. A Bayesian approach is used for evaluating the probability density of the chemical composition of the system in terms of the probability densities of the abundances of the different chemical species. We show that for large cascades of dilution events, the probability density of the composition vector of the system is given by a self-similar probability density of the Dirichlet type. We also give an alternative formal derivation for the Dirichlet law based on the maximum entropy approach, by assuming that the average values of the chemical potentials of different species, expressed in terms of molar fractions, are constant. Although the maximum entropy approach leads formally to the Dirichlet distribution, it does not clarify the physical origin of the Dirichlet statistics and has serious limitations. The random theory of dilution provides a physical picture for the emergence of Dirichlet statistics and makes it possible to investigate its validity range. We discuss the implications of our theory in molecular biology, geochemistry, and environmental science.
Peng, Bo; Yu, Yang-Xin
2009-10-07
The structural and thermodynamic properties of charge-symmetric and asymmetric electrolytes, as well as a mixed electrolyte system, inside a charged cylindrical nanopore are investigated using a partially perturbative density functional theory. The electrolytes are treated in the restricted primitive model, and the internal surface of the cylindrical nanopore is considered to have a uniform charge density. The proposed theory is directly applicable to arbitrary mixed electrolyte solutions containing ions with equal diameters and different valences. A large amount of simulation data for ion density distributions, separation factors, and exclusion coefficients is used to determine the range of validity of the partially perturbative density functional theory for monovalent and multivalent counterion systems. The proposed theory is found to be in good agreement with the simulations for both mono- and multivalent counterion systems. In contrast, the classical Poisson-Boltzmann equation only provides reasonable descriptions of the monovalent counterion system at low bulk density and is qualitatively and quantitatively wrong in its predictions for the multivalent counterion systems because it neglects the strong interionic correlations in these systems. The proposed density functional theory has also been applied to an electrolyte absorbed into a pore that is a model of the filter of a physiological calcium channel.
Clevenger, Shelly L; Navarro, Jordana N; Jasinski, Jana L
2016-09-01
This study examined differences in the demographic and background characteristics of those arrested for child pornography (CP) possession (only), CP production/distribution, or an attempted or completed sexual exploitation of a minor (SEM) that involved the Internet in some capacity, within the context of self-control theory, using data from the second wave of the National Juvenile Online Victimization Study (N-JOV2). Results indicate few demographic similarities, which suggests these are largely heterogeneous groupings of individuals. Results also indicate CP producers/distributors engaged in a greater number of behaviors indicative of low self-control compared with CP possessors. Specifically, offenders arrested for CP production/distribution were more likely to have (a) had problems with drugs/alcohol at the time of the crime and (b) been previously violent. In contrast, the only indicator of low self-control that reached statistical significance for CP possessors was the previous use of violence. Moreover, in contrast to CP producers/distributors, full-time employment and marital status may be important factors to consider in the likelihood of arrest for CP possessors, which is congruent with the tenets of self-control theory. © The Author(s) 2014.
Morrison, James L.
A computerized delivery system in consumer economics developed at the University of Delaware uses the PLATO system to provide a basis for analyzing consumer behavior in the marketplace. The 16 sequential lessons, part of the Consumer in the Marketplace Series (CMS), demonstrate consumer economic theory in layman's terms and are structured to focus…
Casault, Sébastien; Groen, Arend J.; Linton, Jonathan D.; Linton, Jonathan
2015-01-01
Oil and gas exploration and production firms have return profiles that are not easily explained by current financial theory – the variation in their market returns is non-Gaussian. In this paper, the nature and underlying reason for these significant deviations from expected behavior are considered.
International Nuclear Information System (INIS)
Liu, Guoliang; Zhang, Feng; Hao, Lizhen
2012-01-01
We previously introduced a time-record model for studying the duration of sand–dust storms. In the model, X is the normalized wind speed and Xr is the normalized wind-speed threshold for a sand–dust storm; X is represented by a random signal with a Gaussian (normal) distribution, and storms occur when X ≥ Xr. From this model, a time-interval distribution N = A·exp(−bt) can be deduced, where N is the number of time intervals of length greater than t, A and b are constants, and b is related to Xr. In this study, sand–dust storm data recorded in spring at the Yanchi meteorological station in China were analysed to verify whether the observed time-interval distribution agrees with this exponential form. We found that the distribution of the time intervals between successive sand–dust storms in April agrees well with the exponential equation. However, the interval distribution for the sand–dust storm data for the entire spring period fit the Weibull equation better and depended on the variation of the sand–dust storm threshold wind speed. (paper)
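A quick numerical check of the interval law under the model's own assumptions (storms whenever X ≥ Xr, with X here taken as uncorrelated Gaussian samples): for an uncorrelated record the decay constant should come out near −ln(1 − P(X ≥ Xr)). The threshold and record length below are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

def interval_survival(x, xr):
    """Intervals between successive threshold exceedances and the
    count N(t) of intervals longer than t."""
    storm_times = np.flatnonzero(x >= xr)
    gaps = np.diff(storm_times)               # interval lengths (samples)
    t = np.arange(1, gaps.max() + 1)
    N = np.array([(gaps > ti).sum() for ti in t])
    return t, N

# normalized wind-speed record: white Gaussian noise stands in for X
x = rng.standard_normal(200_000)
t, N = interval_survival(x, xr=2.0)

# fit log N = log A - b t over the well-sampled range
mask = N > 10
b_fit = -np.polyfit(t[mask], np.log(N[mask]), 1)[0]
# for iid samples the theoretical decay is -ln(1 - p), p = P(X >= Xr)
print(round(b_fit, 4))
```

A real wind-speed record is temporally correlated, which is one reason the observed seasonal data can drift from the pure exponential toward a Weibull form.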
Wilson, William G; Lundberg, Per
2004-09-22
Theoretical interest in the distributions of species abundances observed in ecological communities has focused recently on the results of models that assume all species are identical in their interactions with one another, and rely upon immigration and speciation to promote coexistence. Here we examine a one-trophic level system with generalized species interactions, including species-specific intraspecific and interspecific interaction strengths, and density-independent immigration from a regional species pool. Comparisons between results from numerical integrations and an approximate analytic calculation for random communities demonstrate good agreement, and both approaches yield abundance distributions of nearly arbitrary shape, including bimodality for intermediate immigration rates.
Danel, J.-F.; Kazandjian, L.
2018-06-01
It is shown that the equation of state (EOS) and the radial distribution functions obtained by density-functional theory molecular dynamics (DFT-MD) obey a simple scaling law: at a given temperature, the thermodynamic properties and radial distribution functions given by a DFT-MD simulation remain unchanged if the mole fractions of nuclei of given charge and the average volume per atom remain unchanged. One practical use of this scaling law is to obtain an EOS table for a fluid from one already computed for another fluid with the right characteristics. Another is that an asymmetric mixture made up of light and heavy atoms, which requires very different time steps, can be replaced by a mixture of atoms of equal mass, facilitating the exploration of configuration space in a DFT-MD simulation. The scaling law is illustrated by numerical results.
Yamamoto, Takuya; Nishigaki, Shinsuke M.
2018-02-01
We compute individual distributions of low-lying eigenvalues of a chiral random matrix ensemble interpolating symplectic and unitary symmetry classes by the Nyström-type method of evaluating the Fredholm Pfaffian and resolvents of the quaternion kernel. The one-parameter family of these distributions is shown to fit excellently the Dirac spectra of SU(2) lattice gauge theory with a constant U(1) background or dynamically fluctuating U(1) gauge field, which weakly breaks the pseudoreality of the unperturbed SU(2) Dirac operator. The observed linear dependence of the crossover parameter with the strength of the U(1) perturbations leads to precise determination of the pseudo-scalar decay constant, as well as the chiral condensate in the effective chiral Lagrangian of the AI class.
Drusano, George L.
1991-01-01
The optimal sampling theory is evaluated in applications to studies related to the distribution and elimination of several drugs (including ceftazidime, piperacillin, and ciprofloxacin), using the SAMPLE module of the ADAPT II package of programs developed by D'Argenio and Schumitzky (1979, 1988) and comparing the pharmacokinetic parameter values with results obtained by traditional ten-sample design. The impact of the use of optimal sampling was demonstrated in conjunction with NONMEM (Sheiner et al., 1977) approach, in which the population is taken as the unit of analysis, allowing even fragmentary patient data sets to contribute to population parameter estimates. It is shown that this technique is applicable in both the single-dose and the multiple-dose environments. The ability to study real patients made it possible to show that there was a bimodal distribution in ciprofloxacin nonrenal clearance.
McCarthy, S.
2014-02-01
This paper describes the theory and application of a perceptually inspired video processing technology that was recently incorporated into professional video encoders now being used by major cable, IPTV, satellite, and internet video service providers. We present data showing that this perceptual video processing (PVP) technology can improve video compression efficiency by up to 50% for MPEG-2, H.264, and High Efficiency Video Coding (HEVC). The PVP technology described in this paper works by forming predicted eye-tracking attractor maps that indicate how likely it is that a free-viewing person would look at a particular area of an image or video. We introduce the novel model and supporting theory used to calculate the eye-tracking attractor maps, show how the underlying perceptual model was inspired by electrophysiological studies of the vertebrate retina, and explain how the model incorporates statistical expectations about natural scenes as well as a novel method for predicting error in signal-estimation tasks. Finally, we describe how the eye-tracking attractor maps are created in real time and used to modify video prior to encoding so that it is more compressible but not noticeably different from the original unmodified video.
Quijano Silva, Nicanor; Ocampo-Martínez, Carlos; Barreiro Gómez, Julian; Obando, Germán; Pantoja, Andres; Mojica Nava, Eduardo
2017-01-01
Recently, there has been an increasing interest in the control community in studying large-scale distributed s...
Directory of Open Access Journals (Sweden)
Wanxing Sheng
2016-05-01
Full Text Available In this paper, a reactive power optimization method based on historical data is investigated to solve the dynamic reactive power optimization problem in distribution networks. To reflect the variation of loads, network loads are represented in the form of a random matrix. Load similarity (LS) is defined to measure the degree of similarity between the loads on different days, and a calculation method for the load similarity of the load random matrix (LRM) is presented. By calculating the load similarity between the forecast random matrix and the random matrices of historical loads, the historical reactive power optimization dispatching scheme that best matches the forecast load can be found and used for reactive power control. The differences between daily load curves on working days and weekends in different seasons are considered in the proposed method. The proposed method is tested on a standard 14-node distribution network with three different types of load. The computational results demonstrate that the proposed method for reactive power optimization is fast, feasible, and effective in distribution networks.
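The matching step can be sketched as follows. The paper's precise LS definition is not given in the abstract, so the similarity measure below (one over one plus the mean element-wise relative deviation between load matrices) is a hypothetical stand-in, used only to illustrate retrieving the best-matching historical day; the load data are synthetic.

```python
import numpy as np

def load_similarity(m1, m2):
    """Stand-in load-similarity measure between two load matrices
    (rows: time points, columns: load nodes). NOT the paper's LS
    definition; any monotone similarity would serve the sketch."""
    eps = 1e-9
    rel = np.abs(m1 - m2) / (np.abs(m2) + eps)   # relative deviation
    return 1.0 / (1.0 + rel.mean())

def best_historical_day(forecast, history):
    """Index of the historical-day load matrix most similar to the
    forecast; its stored dispatch scheme would then be reused."""
    scores = [load_similarity(forecast, h) for h in history]
    return int(np.argmax(scores))

rng = np.random.default_rng(1)
# synthetic daily profile: 24 hours x 3 load nodes, 30 historical days
base = 50.0 + 10.0 * np.sin(np.linspace(0, 2 * np.pi, 24))[:, None]
history = [base * (1.0 + 0.05 * rng.standard_normal((24, 3)))
           for _ in range(30)]
# forecast constructed close to day 17's load matrix
forecast = history[17] * (1.0 + 0.01 * rng.standard_normal((24, 3)))
print(best_historical_day(forecast, history))
```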
Maggiano, Corey M; Maggiano, Isabel S; Tiesler, Vera G; Chi-Keb, Julio R; Stout, Sam D
2016-01-01
This study compares two novel methods quantifying bone shaft tissue distributions, and relates observations on human humeral growth patterns for applications in anthropological and anatomical research. Microstructural variation in compact bone occurs due to developmental and mechanically adaptive circumstances that are 'recorded' by forming bone and are important for interpretations of growth, health, physical activity, adaptation, and identity in the past and present. Those interpretations hinge on a detailed understanding of the modeling process by which bones achieve their diametric shape, diaphyseal curvature, and general position relative to other elements. Bone modeling is a complex aspect of growth, potentially causing the shaft to drift transversely through formation and resorption on opposing cortices. Unfortunately, the specifics of modeling drift are largely unknown for most skeletal elements. Moreover, bone modeling has seen little quantitative methodological development compared with secondary bone processes, such as intracortical remodeling. The techniques proposed here, starburst point-count and 45° cross-polarization hand-drawn histomorphometry, permit the statistical and populational analysis of human primary tissue distributions and provide similar results despite being suitable for different applications. This analysis of a pooled archaeological and modern skeletal sample confirms the importance of extreme asymmetry in bone modeling as a major determinant of microstructural variation in diaphyses. Specifically, humeral drift is posteromedial in the human humerus, accompanied by a significant rotational trend. In general, results encourage the usage of endocortical primary bone distributions as an indicator and summary of bone modeling drift, enabling quantitative analysis by direction and proportion in other elements and populations. © 2015 Anatomical Society.
International Nuclear Information System (INIS)
Hanot, C.; Riaud, P.; Absil, O.; Mennesson, B.; Martin, S.; Liewer, K.; Loya, F.; Mawet, D.; Serabyn, E.
2011-01-01
A new 'self-calibrated' statistical analysis method has been developed for the reduction of nulling interferometry data. The idea is to use the statistical distributions of the fluctuating null depth and beam intensities to retrieve the astrophysical null depth (or equivalently the object's visibility) in the presence of fast atmospheric fluctuations. The approach yields an accuracy much better (about an order of magnitude) than is presently possible with standard data reduction methods, because the astrophysical null depth accuracy is no longer limited by the magnitude of the instrumental phase and intensity errors but by uncertainties on their probability distributions. This approach was tested on the sky with the two-aperture fiber nulling instrument mounted on the Palomar Hale telescope. Using our new data analysis approach alone, with no observations of calibrators, we find that error bars on the astrophysical null depth as low as a few 10^-4 can be obtained in the near-infrared, which means that null depths lower than 10^-3 can be reliably measured. This statistical analysis is not specific to our instrument and may be applicable to other interferometers.
Russo, Lucia; Russo, Paola; Siettos, Constantinos I
2016-01-01
Based on complex network theory, we propose a computational methodology which addresses the spatial distribution of fuel breaks for the inhibition of the spread of wildland fires on heterogeneous landscapes. This is a two-level approach where the dynamics of fire spread are modeled as a random Markov field process on a directed network whose edge weights are determined by a Cellular Automata model that integrates detailed GIS, landscape and meteorological data. Within this framework, the spatial distribution of fuel breaks is reduced to the problem of finding network nodes (small land patches) which favour fire propagation. Here, this is accomplished by exploiting network centrality statistics. We illustrate the proposed approach through (a) an artificial forest of randomly distributed density of vegetation, and (b) a real-world case concerning the island of Rhodes in Greece whose major part of its forest was burned in 2008. Simulation results show that the proposed methodology outperforms the benchmark/conventional policy of fuel reduction as this can be realized by selective harvesting and/or prescribed burning based on the density and flammability of vegetation. Interestingly, our approach reveals that patches with sparse density of vegetation may act as hubs for the spread of the fire.
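The node-ranking idea above can be illustrated with a toy graph. The calibrated Cellular Automata edge weights and GIS inputs are replaced here by a small uniformly weighted graph, and betweenness centrality stands in for whichever centrality statistic is chosen; this is a sketch of the idea, not the paper's model.

```python
import networkx as nx

def fuel_break_candidates(G: nx.Graph, k: int) -> list:
    """Rank land patches (nodes) by weighted betweenness centrality and
    return the top-k as candidate fuel-break locations."""
    centrality = nx.betweenness_centrality(G, weight="weight")
    return sorted(centrality, key=centrality.get, reverse=True)[:k]

# Toy landscape: two dense clusters of patches joined through one patch.
G = nx.barbell_graph(5, 1)  # cliques 0-4 and 6-10, bridged by node 5
for u, v in G.edges:
    G[u][v]["weight"] = 1.0  # uniform spread rate for the illustration

print(fuel_break_candidates(G, 1))  # the bridge patch dominates the ranking
```

Patches that mediate many shortest spread paths between regions rank highest, matching the intuition that removing fuel there severs the network along which fire propagates.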
Directory of Open Access Journals (Sweden)
Lucia Russo
Full Text Available Based on complex network theory, we propose a computational methodology which addresses the spatial distribution of fuel breaks for the inhibition of the spread of wildland fires on heterogeneous landscapes. This is a two-level approach where the dynamics of fire spread are modeled as a random Markov field process on a directed network whose edge weights are determined by a Cellular Automata model that integrates detailed GIS, landscape and meteorological data. Within this framework, the spatial distribution of fuel breaks is reduced to the problem of finding network nodes (small land patches) which favour fire propagation. Here, this is accomplished by exploiting network centrality statistics. We illustrate the proposed approach through (a) an artificial forest of randomly distributed density of vegetation, and (b) a real-world case concerning the island of Rhodes in Greece whose major part of its forest was burned in 2008. Simulation results show that the proposed methodology outperforms the benchmark/conventional policy of fuel reduction as this can be realized by selective harvesting and/or prescribed burning based on the density and flammability of vegetation. Interestingly, our approach reveals that patches with sparse density of vegetation may act as hubs for the spread of the fire.
Bhatnagar, Gaurav; Chatterjee, Sayantan; Chapman, Walter G.; Dugan, Brandon; Dickens, Gerald R.; Hirasaki, George J.
2011-03-01
We develop a theory that relates gas hydrate saturation in marine sediments to the depth of the sulfate-methane transition (SMT) zone below the seafloor using steady state, analytical expressions. These expressions are valid for systems in which all methane transported into the gas hydrate stability zone (GHSZ) comes from deeper external sources (i.e., advective systems). This advective constraint causes anaerobic oxidation of methane to be the only sulfate sink, allowing us to link SMT depth to net methane flux. We also develop analytical expressions that define the gas hydrate saturation profile based on SMT depth and site-specific parameters such as sedimentation rate, methane solubility, and porosity. We evaluate our analytical model at four drill sites along the Cascadia Margin where methane sources from depth dominate. With our model, we calculate average gas hydrate saturations across the GHSZ and the top occurrence of gas hydrate at these sites as 0.4% and 120 mbsf (Site 889), 1.9% and 70 mbsf (Site U1325), 4.7% and 40 mbsf (Site U1326), and 0% (Site U1329), mbsf being meters below seafloor. These values compare favorably with average saturations and top occurrences computed from resistivity log and chloride data. The analytical expressions thus provide a fast and convenient method to calculate gas hydrate saturation and first-order occurrence at a given geologic setting where vertically upward advection dominates the methane flux.
Kataoka, Hajime
2017-07-01
Body fluid volume regulation is a complex process involving the interaction of various afferent (sensory) and neurohumoral efferent (effector) mechanisms. Historically, most studies have focused on body fluid dynamics in heart failure (HF) through control of the balance of sodium, potassium, and water in the body, and maintaining arterial circulatory integrity is central to a unifying hypothesis of body fluid regulation in HF pathophysiology. The pathophysiologic background of the biochemical determinants of vascular volume in HF, however, has remained unknown. I recently demonstrated that changes in vascular and red blood cell volumes are independently associated with the serum chloride concentration, but not the serum sodium concentration, during worsening HF and its recovery. Based on these observations and the established central role of chloride in the renin-angiotensin-aldosterone system, I propose a unifying "chloride theory" of HF pathophysiology, which states that changes in the serum chloride concentration are the primary determinant of changes in plasma volume and the renin-angiotensin-aldosterone system during worsening HF and its therapeutic resolution. Copyright © 2017 Elsevier Ltd. All rights reserved.
International Nuclear Information System (INIS)
Granger, S.; Perotin, L.
1997-01-01
Maintaining the PWR components under reliable operating conditions requires a complex design to prevent various damaging processes, including fatigue and wear problems due to flow-induced vibration. In many practical situations, it is difficult, if not impossible, to perform direct measurements or calculations of the external forces acting on vibrating structures. Instead, vibrational responses can often be conveniently measured. This paper presents an inverse method for estimating a distributed random excitation from the measurement of the structural response at a number of discrete points. This paper is devoted to the presentation of the theoretical development. The force identification method is based on a modal model for the structure and a spatial orthonormal decomposition of the excitation field. The estimation of the Fourier coefficients of this orthonormal expansion is presented. As this problem turns out to be ill-posed, a regularization process is introduced. The minimization problem associated with this process is then formulated and its solution is developed. (author)
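As a hedged illustration of the regularization step above (the paper's exact functional and modal model are not reproduced here), a generic Tikhonov-regularized least-squares estimate of the expansion coefficients, given a response matrix H built from the modal model, might look like:

```python
import numpy as np

def identify_excitation(H: np.ndarray, y: np.ndarray, alpha: float) -> np.ndarray:
    """Estimate expansion coefficients c from measured responses y = H c + noise.

    Minimizes ||H c - y||^2 + alpha ||c||^2 (Tikhonov regularization), which
    stabilizes the otherwise ill-posed inversion. H and alpha are assumed
    inputs; the paper's own regularization scheme may differ in detail.
    """
    n = H.shape[1]
    return np.linalg.solve(H.T @ H + alpha * np.eye(n), H.T @ y)
```

Increasing alpha trades fidelity to the measurements for robustness against noise amplification, which is the standard cure for ill-posedness in such inverse problems.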
International Nuclear Information System (INIS)
Bhattacharyya, Pratip; Chakrabarti, Bikas K
2008-01-01
We study different ways of determining the mean distance ⟨r_n⟩ between a reference point and its nth neighbour among random points distributed with uniform density in a D-dimensional Euclidean space. First, we present a heuristic method; though this method provides only a crude mathematical result, it shows a simple way of estimating ⟨r_n⟩. Next, we describe two alternative means of deriving the exact expression of ⟨r_n⟩: we review the method using absolute probability and develop an alternative method using conditional probability. Finally, we obtain an approximation to ⟨r_n⟩ from the mean volume between the reference point and its nth neighbour and compare it with the heuristic and exact results.
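For a Poisson process of uniform density ρ in D dimensions, the exact result is ⟨r_n⟩ = [Γ(n + 1/D)/Γ(n)] (ρ V_D)^(-1/D), where V_D = π^(D/2)/Γ(D/2 + 1) is the unit-ball volume. A short sketch comparing this expression with a Monte Carlo estimate (finite box with a centre reference point, a simplifying assumption that ignores edge effects):

```python
import math
import random

def mean_rn_exact(n: int, D: int, rho: float) -> float:
    """Exact <r_n> for uniform density rho in D dimensions."""
    v_unit = math.pi ** (D / 2) / math.gamma(D / 2 + 1)  # unit-ball volume
    return math.gamma(n + 1 / D) / math.gamma(n) * (rho * v_unit) ** (-1 / D)

def mean_rn_mc(n: int, D: int, rho: float,
               trials: int = 2000, side: float = 10.0, seed: int = 1) -> float:
    """Monte Carlo estimate: average distance from the box centre to the
    nth nearest of round(rho * side**D) uniformly placed points."""
    rng = random.Random(seed)
    npts = round(rho * side ** D)
    centre = [side / 2.0] * D
    total = 0.0
    for _ in range(trials):
        dists = sorted(
            math.dist(centre, [rng.uniform(0.0, side) for _ in range(D)])
            for _ in range(npts)
        )
        total += dists[n - 1]
    return total / trials

# In 2D with unit density the nearest-neighbour mean is 1/(2 sqrt(rho)).
print(mean_rn_exact(1, 2, 1.0))  # ≈ 0.5
```

The heuristic and approximate methods discussed in the abstract can be checked against either value in the same way.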
International Nuclear Information System (INIS)
Noriega-Crespo, A.; Bohm, K.H.; Raga, A.C.
1989-01-01
In this paper, it is shown that most of the spatial intensity distribution of 11 selected emission lines for Herbig-Haro 1 (including the forbidden S II emission lines at 6731 Å and 4069 Å, the forbidden O III line at 5007 Å, and the forbidden O II line at 3727 Å) can be explained by a bow shock with a shock velocity of about 150-200 km/sec at the stagnation point, and under the assumption that the gas entering the shock is fully preionized. The results are based on three spectrograms (with a total exposure time of 180 min) obtained consecutively. Specifically, the ratios of each of the forbidden lines to H-alpha were studied, which permitted a critical test of the model. The agreement between the theoretical predictions and the observations was found to be remarkable, considering the complex geometry that a bow shock could have. 38 refs
Kerins, E; Evans, N W; Baillon, Paul; Carr, B J; Giraud-Héraud, Yannick; Gould, A; Hewett, P C; Kaplan, J; Paulin-Henriksson, S; Smartt, S J; Tsapras, Y; Valls-Gabaud, D
2003-01-01
The POINT-AGAPE collaboration is currently searching for massive compact halo objects (MACHOs) towards the Andromeda galaxy (M31). The survey aims to exploit the high inclination of the M31 disk, which causes an asymmetry in the spatial distribution of M31 MACHOs. Here, we investigate the effects of halo velocity anisotropy and flattening on the asymmetry signal using simple halo models. For a spherically symmetric and isotropic halo, we find that the underlying pixel-lensing rate in far-disk M31 MACHOs is more than 5 times the rate of near-disk events. We find that the asymmetry is increased further by about 30% if the MACHOs occupy radial orbits rather than tangential orbits, but is substantially reduced if the MACHOs lie in a flattened halo. However, even for haloes with a minor-to-major axis ratio q = 0.3, the numbers of M31 MACHOs in the far-side outnumber those in the near-side by a factor of ~2. We show that, if positional information is exploited in addition to number counts, then the number of candid...
Liu, Baoding
2015-01-01
When no samples are available to estimate a probability distribution, we have to invite some domain experts to evaluate the belief degree that each event will happen. Perhaps some people think that the belief degree should be modeled by subjective probability or fuzzy set theory. However, it is usually inappropriate because both of them may lead to counterintuitive results in this case. In order to rationally deal with belief degrees, uncertainty theory was founded in 2007 and subsequently studied by many researchers. Nowadays, uncertainty theory has become a branch of axiomatic mathematics for modeling belief degrees. This is an introductory textbook on uncertainty theory, uncertain programming, uncertain statistics, uncertain risk analysis, uncertain reliability analysis, uncertain set, uncertain logic, uncertain inference, uncertain process, uncertain calculus, and uncertain differential equation. This textbook also shows applications of uncertainty theory to scheduling, logistics, networks, data mining, c...
Energy Technology Data Exchange (ETDEWEB)
Golovin, A V [Photon Factory, Institute of Materials Structure Science, Tsukuba 305-0801 (Japan); Institute of Physics, St Petersburg State University, 198504 St Petersburg (Russian Federation); Adachi, J [Photon Factory, Institute of Materials Structure Science, Tsukuba 305-0801 (Japan); Graduate School of Science, University of Tokyo, Bunkyo-ku, Tokyo 113-0033 (Japan); Motoki, S [Graduate School of Science, University of Tokyo, Bunkyo-ku, Tokyo 113-0033, (Japan); Takahashi, M [Institute for Molecular Science, Okazaki 444-8585 (Japan); Yagishita, A [Photon Factory, Institute of Materials Structure Science, Tsukuba 305-0801 (Japan); Graduate School of Science, University of Tokyo, Bunkyo-ku, Tokyo 113-0033 (Japan)
2005-10-28
Photoelectron angular distributions (PADs) for O 1s, C 1s and S 2p1/2, 2p3/2 ionization of OCS molecules have been measured in shape resonance regions. These PAD results are compared with the results for O 1s and C 1s ionization of CO molecules, and with multi-scattering Xα (MSXα) calculations. The mechanism of PAD formation for both parallel and perpendicular transitions differs very significantly in these molecules, and the step from a two-centre potential (CO) to a three-centre potential (OCS) plays a principal role in electron scattering and the formation of the resulting PAD. For parallel transitions, it is found that for the S 2p and O 1s ionization the photoelectrons are emitted preferentially in a hemisphere directed to the ionized S and O atom, respectively. In OCS O 1s ionization, the S-C fragment plays the role of a strong 'scatterer' for photoelectrons, and in the shape resonance region most of the PAD intensity is concentrated in the region directed to the O atom. The MSXα calculations for perpendicular transitions reproduce the experimental data, but not as well as in the case of parallel transitions. The PAD results, calculated with different l_max on different atomic centres, reveal the important role of the d (l = 2) partial wave for the S atom in the partial wave decompositions of photoelectron wavefunctions.
Partial distribution strategy of local conflict in evidence theory
Institute of Scientific and Technical Information of China (English)
曹洁; 孟兴
2012-01-01
In order to solve the problem of conflict evidence combination in standard evidence theory, this paper proposes a partial distribution strategy for local conflict. Assuming that the evidences have consistent reliability, the method sets a threshold and proportionally distributes any local conflict greater than that threshold among the focal elements that produced the conflict; the standard combination rule is then used to fuse the evidence, making the combined result more reliable. Simulation results demonstrate that the proposed strategy can effectively solve the problem of conflict evidence combination.
International Nuclear Information System (INIS)
Hambrock, Christian
2011-04-01
In my thesis I present our work on the bottom-baryon light-cone distribution amplitudes (LCDAs) and on the [bq][anti-b anti-q] tetraquarks. For the former we extended the known LCDAs for the ground state baryon Λ_b to the entire b-baryon ground state multiplets and included s-quark mass-breaking effects. The LCDAs form crucial input for the calculations of characteristic properties of b-baryon decays. In this context they can for example be used in the calculation of form factors for semileptonic flavor-changing neutral-current (FCNC) decays. For the [bq][anti-b anti-q] tetraquarks, we calculated the tetraquark mass spectrum for all quarks q=u,d,s,c in a constituent Hamiltonian quark model. We estimated the electronic width by introducing a generalized Van Royen-Weisskopf formula for the tetraquarks, and evaluated the partial hadronic two-body and total decay widths for the tetraquarks with quantum numbers J^PC = 1^--. With this input, we performed a Breit-Wigner fit, including the tetraquark contributions, to the inclusive R_b spectrum measured by BaBar. The obtained χ²/d.o.f. of the BaBar R_b-scan data is fairly good. The resulting fits are suggestive of tetraquark states but not conclusive. We developed a model to describe the transitions e⁺e⁻ → Y_b → Υ(nS)(π⁺π⁻, K⁺K⁻, ηπ⁰), in which Y_b is a 1^-- tetraquark state. The model includes the exchange of light tetraquark and meson states. We used this model to fit the invariant-mass and helicity spectra for the dipionic final state measured by Belle and used the results to estimate the spectra of the channels e⁺e⁻ → Y_b → Υ(nS)(K⁺K⁻, ηπ⁰). The spectra are enigmatic in shape and magnitude and defy an interpretation in the framework of the standard bottomonia, requesting either an interpretation in terms of exotic states, such as tetraquarks, or a radical alteration of the, otherwise successful, QCD-based bottomonium model. The tetraquark hypothesis describes the current data well
Ikeguchi, Mitsunori; Doi, Junta
1995-09-01
The Ornstein-Zernike integral equation (OZ equation) has been used to evaluate the distribution function of solvents around solutes, but its numerical solution is difficult for molecules with a complicated shape. This paper proposes a numerical method to solve the OZ equation directly by introducing a 3D lattice. The method employs none of the approximations that the reference interaction site model (RISM) equation employs. The method enables one to obtain the spatial distribution of spherical solvents around solutes with an arbitrary shape. Numerical accuracy is sufficient when the grid spacing is less than 0.5 Å for solvent water. The spatial water distribution around a propane molecule is demonstrated as an example of a nonspherical hydrophobic molecule using iso-value surfaces. The water model proposed by Pratt and Chandler is used. The distribution agrees with the molecular dynamics simulation. The distribution increases offshore of molecular concavities. The spatial distribution of water around 5α-cholest-2-ene (C27H46) is visualized using computer graphics techniques and a similar trend is observed.
Kodaira, Kunihiko
2017-01-01
This book deals with the classical theory of Nevanlinna on the value distribution of meromorphic functions of one complex variable, based on minimum prerequisites for complex manifolds. The theory was extended to several variables by S. Kobayashi, T. Ochiai, J. Carleson, and P. Griffiths in the early 1970s. K. Kodaira took up this subject in his course at The University of Tokyo in 1973 and gave an introductory account of this development in the context of his final paper, contained in this book. The first three chapters are devoted to holomorphic mappings from C to complex manifolds. In the fourth chapter, holomorphic mappings between higher dimensional manifolds are covered. The book is a valuable treatise on the Nevanlinna theory, of special interest to those who want to understand Kodaira's unique approach to basic questions on complex manifolds.
DEFF Research Database (Denmark)
Jensen, Lotte Groth; Bossen, Claus
2016-01-01
different socio-technical systems (paper-based and electronic patient records). Drawing on the theory of distributed cognition and narrative theory, primarily inspired by the work done within health care by Cheryl Mattingly, we propose that the creation of overview may be conceptualised as ‘distributed plot-making’. Distributed cognition focuses on the role of artefacts, humans and their interaction in information processing, while narrative theory focuses on how humans create narratives through plot construction. Hence, the concept of distributed plot-making highlights the distribution of information processing...
DEFF Research Database (Denmark)
Midtgaard, Søren Flinch
2012-01-01
Thomas Pogge’s ingenious and influential Rawlsian theory of global justice asserts that principles of justice such as the difference principle or, alternatively, a universal criterion of human rights consisting of a subset of the principles of social justice apply to the global basic structure...
Heavy flavours: theory summary
Corcella, Gennaro
2005-01-01
I summarize the theory talks given in the Heavy Flavours Working Group. In particular, I discuss heavy-flavour parton distribution functions, threshold resummation for heavy-quark production, progress in fragmentation functions, quarkonium production, heavy-meson hadroproduction.
Energy Technology Data Exchange (ETDEWEB)
Romero, V.J.
1994-03-01
CIRCE2 is a computer code for modeling the optical performance of three-dimensional dish-type solar energy concentrators. Statistical methods are used to evaluate the directional distribution of reflected rays from any given point on the concentrator. Given concentrator and receiver geometries, sunshape (angular distribution of incident rays from the sun), and concentrator imperfections such as surface roughness and random deviation in slope, the code predicts the flux distribution and total power incident upon the target. Great freedom exists in the variety of concentrator and receiver configurations that can be modeled. Additionally, provisions for shading and receiver aperturing are included. DEKGEN2 is a preprocessor designed to facilitate input of geometry, error distributions, and sun models. This manual describes the optical model, user inputs, code outputs, and operation of the software package. A user tutorial is included in which several collectors are built and analyzed in step-by-step examples.
Introductory photoemission theory
International Nuclear Information System (INIS)
Arai, Hiroko; Fujikawa, Takashi
2010-01-01
An introductory review is presented on the basis of many-body scattering theory. Some fundamental aspects of photoemission theory are discussed in detail. A few applications are also discussed briefly: photoelectron diffraction, the depth distribution function, and multi-atom resonant photoemission. (author)
Ahsanullah, Mohammad
2016-01-01
The aim of the book is to give a thorough account of the basic theory of extreme value distributions, covering a wide range of the material available to date. The central ideas and results of extreme value distributions are presented in a self-contained treatment of both theory and applications. The book will be useful to applied statisticians as well as statisticians interested in working in the area of extreme value distributions.
Directory of Open Access Journals (Sweden)
Marchesi Julian R
2007-03-01
Full Text Available Abstract Background The question of how a circle or line segment becomes covered when random arcs are marked off has arisen repeatedly in bioinformatics. The number of uncovered gaps is of particular interest. Approximate distributions for the number of gaps have been given in the literature, one motivation being ease of computation. Error bounds for these approximate distributions have not been given. Results We give bounds on the probability distribution of the number of gaps when a circle is covered by fragments of fixed size. The absolute error in the approximation is typically on the order of 0.1% at 10× coverage depth. The method can be applied to coverage problems on the interval, including edge effects, and applications are given to metagenomic libraries and shotgun sequencing.
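The classical fixed-arc result behind such gap counts: for n arcs of length a (as a fraction of the circumference) placed uniformly at random on a circle of unit circumference, the expected number of uncovered gaps is n(1 - a)^(n - 1). A minimal sketch, counting gaps for equal-length arcs and checking the mean by simulation (function names are illustrative, not the paper's):

```python
import random

def count_gaps(starts: list, arc_len: float) -> int:
    """Number of uncovered gaps on a unit-circumference circle covered by
    equal arcs [s, s + arc_len) mod 1. With equal-length arcs, a gap sits
    between consecutive sorted starts exactly when the previous arc ends
    before the next one begins (wrapping around at index 0)."""
    s = sorted(starts)
    gaps = 0
    for i in range(len(s)):
        prev_reach = s[i - 1] + arc_len - (1.0 if i == 0 else 0.0)
        if prev_reach < s[i]:
            gaps += 1
    return gaps

def expected_gaps(n: int, arc_len: float) -> float:
    """Classical expectation for n random arcs of fixed fractional length."""
    return n * (1.0 - arc_len) ** (n - 1)

def mc_mean_gaps(n: int, arc_len: float, trials: int = 3000, seed: int = 7) -> float:
    rng = random.Random(seed)
    total = sum(count_gaps([rng.random() for _ in range(n)], arc_len)
                for _ in range(trials))
    return total / trials
```

The distribution of the gap count (not just its mean) is what the abstract's bounds address; the simulation above gives an easy empirical reference for it.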
Caffarel, Michel; Giner, Emmanuel; Scemama, Anthony; Ramírez-Solís, Alejandro
2014-12-09
We present a comparative study of the spatial distribution of the spin density of the ground state of CuCl2 using Density Functional Theory (DFT), quantum Monte Carlo (QMC), and post-Hartree-Fock wave function theory (WFT). A number of studies have shown that an accurate description of the electronic structure of the lowest-lying states of this molecule is particularly challenging due to the interplay between the strong dynamical correlation effects in the 3d shell and the delocalization of the 3d hole over the chlorine atoms. More generally, this problem is representative of the difficulties encountered when studying open-shell metal-containing molecular systems. Here, it is shown that qualitatively different results for the spin density distribution are obtained from the various quantum-mechanical approaches. At the DFT level, the spin density distribution is found to be very dependent on the functional employed. At the QMC level, Fixed-Node Diffusion Monte Carlo (FN-DMC) results are strongly dependent on the nodal structure of the trial wave function. Regarding wave function methods, most approaches not including a very high amount of dynamic correlation effects lead to a much too high localization of the spin density on the copper atom, in sharp contrast with DFT. To shed some light on these conflicting results Full CI-type (FCI) calculations using the 6-31G basis set and based on a selection process of the most important determinants, the so-called CIPSI approach (Configuration Interaction with Perturbative Selection done Iteratively) are performed. Quite remarkably, it is found that for this 63-electron molecule and a full CI space including about 10^18 determinants, the FCI limit can almost be reached. Putting all results together, a natural and coherent picture for the spin distribution is proposed.
Energy Technology Data Exchange (ETDEWEB)
Jin, Jae Sik [Chosun College of Science and Technology, Gwangju (Korea, Republic of)
2017-03-15
Phonon dynamics in nanostructure is critically important to thermoelectric and optoelectronic devices because it determines the transport and other crucial properties. However, accurately evaluating the phonon lifetimes is extremely difficult. This study reports on the development of a new semi-empirical method to estimate the full-spectrum phonon lifetimes in thin silicon films at room temperature based on the experimental data on the phonon mean-free-path spectrum in bulk silicon and a phenomenological consideration of phonon transport in thin films. The bulk of this work describes the theory and the validation; then, we discuss the trend of the phonon lifetimes in thin silicon films when their thicknesses decrease.
DEFF Research Database (Denmark)
Wæver, Ole
2009-01-01
-empiricism and anti-positivism of his position. Followers and critics alike have treated Waltzian neorealism as if it were at bottom a formal proposition about cause-effect relations. The extreme case of Waltz being so victorious in the discipline, and yet being consistently misinterpreted on the question of theory, shows the power of a dominant philosophy of science in US IR, and thus the challenge facing any ambitious theorising. The article suggests a possible movement of fronts away from the ‘fourth debate' between rationalism and reflectivism towards one of theory against empiricism. To help this new agenda...
Statistical theory and inference
Olive, David J
2014-01-01
This text is for a one semester graduate course in statistical theory and covers minimal and complete sufficient statistics, maximum likelihood estimators, method of moments, bias and mean square error, uniform minimum variance estimators and the Cramer-Rao lower bound, an introduction to large sample theory, likelihood ratio tests and uniformly most powerful tests and the Neyman Pearson Lemma. A major goal of this text is to make these topics much more accessible to students by using the theory of exponential families. Exponential families, indicator functions and the support of the distribution are used throughout the text to simplify the theory. More than 50 "brand name" distributions are used to illustrate the theory with many examples of exponential families, maximum likelihood estimators and uniformly minimum variance unbiased estimators. There are many homework problems with over 30 pages of solutions.
Matching theory for wireless networks
Han, Zhu; Saad, Walid
2017-01-01
This book provides the fundamental knowledge of classical matching theory problems. It builds a bridge between matching theory and 5G wireless communication resource allocation problems. The potential and challenges of implementing the semi-distributive matching theory framework in wireless resource allocation are analyzed both theoretically and through implementation examples. Academics, researchers, and engineers interested in efficient distributive wireless resource allocation solutions will find this book to be an exceptional resource.
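The classical matching-theory problems the book builds on are typified by the Gale-Shapley deferred-acceptance algorithm. A minimal sketch with hypothetical two-sided preference lists (not an example from the book) might look like:

```python
def gale_shapley(proposer_prefs, reviewer_prefs):
    """Deferred-acceptance (Gale-Shapley) matching over two-sided
    preference lists; returns a stable proposer -> reviewer matching."""
    free = list(proposer_prefs)                 # proposers not yet matched
    next_choice = {p: 0 for p in proposer_prefs}
    match = {}                                  # reviewer -> proposer
    rank = {r: {p: i for i, p in enumerate(prefs)}
            for r, prefs in reviewer_prefs.items()}
    while free:
        p = free.pop()
        r = proposer_prefs[p][next_choice[p]]   # p proposes to next reviewer
        next_choice[p] += 1
        if r not in match:
            match[r] = p
        elif rank[r][p] < rank[r][match[r]]:    # r prefers the new proposer
            free.append(match[r])
            match[r] = p
        else:
            free.append(p)                      # r rejects p
    return {p: r for r, p in match.items()}

prefs_p = {"a": ["X", "Y"], "b": ["X", "Y"]}
prefs_r = {"X": ["b", "a"], "Y": ["a", "b"]}
# X prefers b, so a ends up with Y: the result is a stable matching.
assert gale_shapley(prefs_p, prefs_r) == {"a": "Y", "b": "X"}
```

In the wireless setting the book addresses, "proposers" and "reviewers" would map to users and resource blocks; this sketch only shows the underlying algorithmic skeleton.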
Energy Technology Data Exchange (ETDEWEB)
Cassen, B. [Department of Biophysics and Nuclear Medicine, University of California, Los Angeles, CA (United States)
1964-10-15
In a published paper, the author presented a simplified basic theory of the quantitative performance characteristics of radioisotope imaging and scanning systems. A "figure of merit" is derived, depending partly on the instrument characteristics and partly on the methodology utilized. The factors involved in the figure of merit are: the count-rate sensitivity of the detector per unit solid angle subtended from a resolution element, the solid angle subtended from a resolution element, the number of resolution elements viewed simultaneously, the number of resolution elements in the total field scanned or imaged, the dose concentration, the total time taken to produce an image or scan, and the background count-rate. In the present presentation the basic theory is further refined and discussed to take into account factors such as "depth-of-focus" resolution, variable-speed or variable-dwell scanning, possibilities of variable or automatically variable resolution, relative resolvability of "cold" and "hot" nodules, field size, and the use of special radioisotopes and types of detectors. A discussion is also presented on the handling and processing of data arising from scanning and imaging techniques. The relative merits of background eliminators and contrast-enhancement procedures such as photoscanning and colour scanning are analysed. The advantages and possible disadvantages of tape-recording original data and repeatedly playing it back through contrast and background processing circuits are discussed. Some experimentally determined figure-of-merit data are presented on a widely used commercial scanner and on some new developments, especially a new high-quantum-utilization scanner, which is briefly described in the above-mentioned article and has since undergone further development, improvement and testing. (author)
Directory of Open Access Journals (Sweden)
Gongcheng Zhang
2015-01-01
Full Text Available Taking a hydrocarbon zone or a basin group as a unit, this paper analyzed the vertical hydrocarbon generation regularity of onshore and offshore oil and gas fields in China, based on the theory of co-control of source and heat. The results demonstrated that the hydrocarbon generation modes of oil and gas fields in China are orderly. First, the hydrocarbon zones in the southeastern China offshore area, including the East and South China Sea basins, are dominated by a single hydrocarbon generation mode, which displays as either single oil generation near shore or single gas generation offshore, controlled by both source and heat. Second, the eastern hydrocarbon zones, including the Bohai Bay, Songliao and Jianghan basins and the North and South Yellow Sea basins, are dominated by a two-layer hydrocarbon generation mode of "upper oil and lower gas". Third, the central hydrocarbon zones, including the Ordos, Sichuan and Chuxiong basins, are also dominated by the "upper oil and lower gas" two-layer hydrocarbon generation mode. In the Ordos Basin, gas is mainly generated in the Triassic, and oil is predominantly generated in the Paleozoic. In the Sichuan Basin, oil was discovered in the Jurassic, and gas was mostly discovered in the Sinian and Triassic. Fourth, the western hydrocarbon zones, such as the Junggar, Tarim and Qaidam basins, are dominated by a "sandwich" multi-layer mode. In summary, the theory of co-control of source and heat will be widely applied to oil and gas exploration all over China. Oil targets should be focused on the near-shore areas of the southeastern China sea, the upper strata in the eastern and central hydrocarbon zones, and the Ordovician, Permian and Paleogene strata in the western hydrocarbon zone, while gas targets should be focused on the offshore areas of the southeastern China sea, and the Cambrian, Carboniferous, Jurassic and Quaternary strata in the western hydrocarbon zone. A pattern of
Quasihomogeneous distributions
von Grudzinski, O
1991-01-01
This is a systematic exposition of the basics of the theory of quasihomogeneous (in particular, homogeneous) functions and distributions (generalized functions). A major theme is the method of taking quasihomogeneous averages. It serves as the central tool for the study of the solvability of quasihomogeneous multiplication equations and of quasihomogeneous partial differential equations with constant coefficients. Necessary and sufficient conditions for solvability are given. Several examples are treated in detail, among them the heat and the Schrödinger equation. The final chapter is devoted to quasihomogeneous wave front sets and their application to the description of singularities of quasihomogeneous distributions, in particular to quasihomogeneous fundamental solutions of the heat and of the Schrödinger equation.
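For orientation, the underlying notion can be stated in standard textbook form (the book's own conventions may differ): a distribution $u\in\mathcal{D}'(\mathbb{R}^n)$ is homogeneous of degree $m$ when

```latex
\langle u,\ \varphi(\cdot/t)\rangle \;=\; t^{\,m+n}\,\langle u,\varphi\rangle
\qquad \text{for all } t>0,\ \varphi\in\mathcal{D}(\mathbb{R}^n),
```

mirroring $u(tx)=t^{m}u(x)$ for ordinary functions; quasihomogeneity generalizes this by scaling each coordinate $x_j$ as $t^{\sigma_j}x_j$ with fixed weights $\sigma_j$.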
Directory of Open Access Journals (Sweden)
S. WEINTRAUB
2013-12-01
Full Text Available The work presents a linear model of the private market sector of the economy built out of the main blocks of Keynes, with some pieces furnished by Kalecki, Kaldor, and Robinson. The author argues that such a model is rich in promise by virtue of its scope, and that it strips much of the mystery from econometric models and demonstrates the circumstances in which they are likely to perform well or badly. The consistent relations offer hospitable shelter for the theory of income, employment, price level, and income shares in a succinct design. Pedagogically, the elemental ideas are capable of transmission at an early stage in economic studies. JEL: E12
Directory of Open Access Journals (Sweden)
Tkáč Štefan
2015-11-01
Full Text Available To achieve smart growth and equitable development in a region, urban planners should also consider the lateral energies represented by energy urban models such as the proposed EEPGC, which focuses on energy distribution via connections among micro-urban structures, their on-site renewable resources, and the perception of micro-urban structures as decentralized energy carriers, as in the pre-industrial era. These structures remain variously bound when part of greater patterns. After the industrial revolution, the main traded good became energy in its various forms. The EEPGC focuses on sustainable energy transportation distances between villages and the city, described by virtual "energy circles". This more human-scale urbanization boosts the economy of micro-urban areas, which rise along with clean energy available in situ; this offers a different perspective on quality of life compared with overcrowded multicultural mega-urban structures facing generations of problems and struggling to survive as a whole.
Jara, Pascual; Torrecillas, Blas
1988-01-01
The papers in this proceedings volume are selected research papers in different areas of ring theory, including graded rings, differential operator rings, K-theory of noetherian rings, torsion theory, regular rings, cohomology of algebras, local cohomology of noncommutative rings. The book will be important for mathematicians active in research in ring theory.
DEFF Research Database (Denmark)
Hendricks, Vincent F.
Game Theory is a collection of short interviews based on five questions presented to some of the most influential and prominent scholars in game theory. We hear their views on game theory, its aim, scope, and use, the future direction of game theory, and how their work fits in these respects.
International Nuclear Information System (INIS)
Chan Hongmo.
1987-10-01
The paper traces the development of string theory, and was presented at Professor Sir Rudolf Peierls' 80th Birthday Symposium. String theory is discussed with respect to the interaction of strings, the inclusion of both gauge theory and gravitation, inconsistencies in the theory, and the role of space-time. The physical principles underlying string theory are also outlined. (U.K.)
White, Claire E; Provis, John L; Proffen, Thomas; Riley, Daniel P; van Deventer, Jannie S J
2010-04-07
Understanding the atomic structure of complex metastable (including glassy) materials is of great importance in research and industry, however, such materials resist solution by most standard techniques. Here, a novel technique combining thermodynamics and local structure is presented to solve the structure of the metastable aluminosilicate material metakaolin (calcined kaolinite) without the use of chemical constraints. The structure is elucidated by iterating between least-squares real-space refinement using neutron pair distribution function data, and geometry optimisation using density functional modelling. The resulting structural representation is both energetically feasible and in excellent agreement with experimental data. This accurate structural representation of metakaolin provides new insight into the local environment of the aluminium atoms, with evidence of the existence of tri-coordinated aluminium. By the availability of this detailed chemically feasible atomic description, without the need to artificially impose constraints during the refinement process, there exists the opportunity to tailor chemical and mechanical processes involving metakaolin and other complex metastable materials at the atomic level to obtain optimal performance at the macro-scale.
Energy Technology Data Exchange (ETDEWEB)
Lecocq, F
2000-07-15
Because of the inertia of the climate system, policy makers cannot avoid making early decisions regarding climate change in a sea of uncertainties. In this context, the very legitimacy of economic analysis to tackle such questions, and in particular the underlying equity issues (who pays for climate mitigation? when?), faces widespread skepticism. This thesis aims at demonstrating how public economics remains a powerful tool for bringing some rationale into the debate, by checking the internal consistency of the different discourses and by providing robust insights, if not definitive answers, into climate decisions. We use a set of compact integrated climate policy optimization models to progressively introduce, articulate, and assess numerically the prominent issues at stake. We obtain three main results. We first demonstrate that the so-called timing debate between short-term and long-term action cannot be reduced to a mere dispute over the discount rate. Given the high uncertainties surrounding climate change, the margins of freedom we pass on to future generations, and in particular the technical and institutional systems we transmit, become more important than the discount rate value. Secondly, we apply the various emission quota allocation rules proposed in the literature for the enlargement of Annex B to developing economies. We show that the distributive outcome of these rules depends critically on ex ante assumptions about future economic and emission growth. From this we conclude that a careful design of the institutions surrounding the tradable permits market is a necessary condition to enhance the system's robustness. Last, on a broader perspective, this thesis illustrates the complementarity between ethics and economics: though the economist does not per se have a superior word about what is fair, his toolbox is powerful enough to show how some intuitively appealing ideas, such as a zero discount rate to take care of both present and future
String theory or field theory?
International Nuclear Information System (INIS)
Marshakov, A.V.
2002-01-01
The status of string theory is reviewed, and major recent developments - especially those in going beyond perturbation theory in the string theory and quantum field theory frameworks - are analyzed. This analysis helps better understand the role and place of string theory in the modern picture of the physical world. Even though quantum field theory describes a wide range of experimental phenomena, it is emphasized that there are some insurmountable problems inherent in it - notably the impossibility to formulate the quantum theory of gravity on its basis - which prevent it from being a fundamental physical theory of the world of microscopic distances. It is this task, the creation of such a theory, which string theory, currently far from completion, is expected to solve. In spite of its somewhat vague current form, string theory has already led to a number of serious results and greatly contributed to progress in the understanding of quantum field theory. It is these developments which are our concern in this review. [ru]
Energy Technology Data Exchange (ETDEWEB)
Prata, Bruno de Athayde; Arruda, Joao Bosco Furtado [Universidade Federal do Ceara (UFC), Fortaleza, CE (Brazil). Nucleo de Pesquisa em Logistica, Transporte e Desenvolvimento
2004-07-01
The use of natural gas is currently increasing in Brazil, a fact that shows the need for effective planning in that sector. In the case of natural gas vehicle (NGV) distribution, one faces conflicts between the points of view of the actors involved (distributor, retailers, customers and non-users), and fuel stations expand in most Brazilian urban areas in an uncontrolled way, disregarding county regulations on land use. This paper reports a study using a model based on game theory concepts to determine key variables such as the number of fuel stations that should deliver NGV in a given study area. Although some information was not available, the simulation results show the usefulness of this approach for solving distribution questions in the NGV sector. The model was applied to the case of a district in the city of Fortaleza, which is the study area of a project entitled Projeto GASLOG, currently in progress under the sponsorship of the Brazilian Government, PETROBRAS and the Brazilian GasEnergy Research Network. (author)
International Nuclear Information System (INIS)
Uehara, S.
1985-01-01
Of all supergravity theories, the maximal theory, i.e., N = 8 in 4 dimensions or N = 1 in 11 dimensions, should achieve unification since it possesses the highest degree of symmetry. As to the N = 1, d = 11 theory, it has been investigated how to compactify it to d = 4 theories. From the phenomenological point of view, local SUSY GUTs, i.e., N = 1 SUSY GUTs with soft breaking terms, have been studied from various angles. The structures of extended supergravity theories are less well understood than those of N = 1 supergravity theories, and matter couplings in N = 2 extended supergravity theories are under investigation. The harmonic superspace was recently proposed, which may be useful for investigating the quantum effects of extended supersymmetry and supergravity theories. As to so-called Kaluza-Klein supergravity, there is another possibility. (Mori, K.)
Johnstone, PT
2014-01-01
Focusing on topos theory's integration of geometric and logical ideas into the foundations of mathematics and theoretical computer science, this volume explores internal category theory, topologies and sheaves, geometric morphisms, other subjects. 1977 edition.
Dorogovtsev, A Ya; Skorokhod, A V; Silvestrov, D S; Skorokhod, A V
1997-01-01
This book of problems is intended for students in pure and applied mathematics. There are problems in traditional areas of probability theory and problems in the theory of stochastic processes, which has wide applications in the theory of automatic control, queuing and reliability theories, and in many other modern science and engineering fields. Answers to most of the problems are given, and the book provides hints and solutions for more complicated problems.
International Nuclear Information System (INIS)
Lee, B.W.
1976-01-01
Some introductory remarks to Yang-Mills fields are given and the problem of the Coulomb gauge is considered. The perturbation expansion for quantized gauge theories is discussed and a survey of renormalization schemes is made. The role of Ward-Takahashi identities in gauge theories is discussed. The author then discusses the renormalization of pure gauge theories and theories with spontaneously broken symmetry. (B.R.H.)
Information theory of molecular systems
Nalewajski, Roman F
2006-01-01
As well as providing a unified outlook on physics, Information Theory (IT) has numerous applications in chemistry and biology owing to its ability to provide a measure of the entropy/information contained within probability distributions and criteria of their information "distance" (similarity) and independence. Information Theory of Molecular Systems applies standard IT to classical problems in the theory of electronic structure and chemical reactivity. The book starts by introducing the basic concepts of modern electronic structure/reactivity theory based upon the Density Functional Theory
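The information "distance" between probability distributions mentioned here is commonly quantified by the Kullback-Leibler divergence; the sketch below uses that standard IT measure for illustration (the book's own similarity criteria may differ):

```python
import math

def kl_divergence(p, q):
    """Kullback-Leibler divergence D(p || q) between two discrete
    probability distributions, in nats."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

p = [0.5, 0.5]
q = [0.9, 0.1]
assert kl_divergence(p, p) == 0.0   # zero "distance" from itself
assert kl_divergence(p, q) > 0.0    # nonnegative, zero only when p == q
```

Note that D(p || q) is not symmetric, which is why IT texts call it an information "distance" in quotes rather than a metric.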
Loring, FH
2014-01-01
Summarising the most novel facts and theories which were coming into prominence at the time, particularly those which had not yet been incorporated into standard textbooks, this important work was first published in 1921. The subjects treated cover a wide range of research that was being conducted into the atom, and include Quantum Theory, the Bohr Theory, the Sommerfeld extension of Bohr's work, the Octet Theory and Isotopes, as well as Ionisation Potentials and Solar Phenomena. Because much of the material of Atomic Theories lies on the boundary between experimentally verified fact and spec
Harris, Tina
2015-04-29
Grounded theory is a popular research approach in health care and the social sciences. This article provides a description of grounded theory methodology and its key components, using examples from published studies to demonstrate practical application. It aims to demystify grounded theory for novice nurse researchers, by explaining what it is, when to use it, why they would want to use it and how to use it. It should enable nurse researchers to decide if grounded theory is an appropriate approach for their research, and to determine the quality of any grounded theory research they read.
Number theory via Representation theory
Indian Academy of Sciences (India)
2014-11-09
Number theory via Representation theory. Eknath Ghate. November 9, 2014. Eightieth Annual Meeting, Chennai. Indian Academy of Sciences. This is a non-technical 20-minute talk intended for a general Academy audience.
International Nuclear Information System (INIS)
Schwarz, J.H.
1985-01-01
Dual string theories, initially developed as phenomenological models of hadrons, now appear more promising as candidates for a unified theory of fundamental interactions. Type I superstring theory (SST I) is a ten-dimensional theory of interacting open and closed strings, with one supersymmetry, that is free from ghosts and tachyons. It requires that an SO(n) or Sp(2n) gauge group be used. A light-cone-gauge string action with space-time supersymmetry automatically incorporates the superstring restrictions and leads to the discovery of type II superstring theory (SST II). SST II is an interacting theory of closed strings only, with two D=10 supersymmetries, that is also free from ghosts and tachyons. By taking six of the spatial dimensions to form a compact space, it becomes possible to reconcile the models with our four-dimensional perception of spacetime and to define low-energy limits in which SST I reduces to N=4, D=4 super Yang-Mills theory and SST II reduces to N=8, D=4 supergravity theory. The superstring theories can be described by a light-cone-gauge action principle based on fields that are functionals of string coordinates. With this formalism any physical quantity should be calculable. There is some evidence that, unlike any conventional field theory, the superstring theories provide perturbatively renormalizable (SST I) or finite (SST II) unifications of gravity with other interactions
Dependence theory via game theory
Grossi, D.; Turrini, P.
2011-01-01
In the multi-agent systems community, dependence theory and game theory are often presented as two alternative perspectives on the analysis of social interaction. Up till now no research has been done relating these two approaches. The unification presented provides dependence theory with the sort
Fractal tracer distributions in turbulent field theories
DEFF Research Database (Denmark)
Hansen, J. Lundbek; Bohr, Tomas
1998-01-01
We study the motion of passive tracers in a two-dimensional turbulent velocity field generated by the Kuramoto-Sivashinsky equation. By varying the direction of the velocity-vector with respect to the field-gradient we can continuously vary the two Lyapunov exponents for the particle motion and t...
The theory of syntactic domains
Kracht, M.
In this essay we develop a mathematical theory of syntactic domains with special attention to the theory of government and binding. Starting from an intrinsic characterization of command relations as defined in [Ba 90] we determine the structure of the distributive lattice of command relations.
International Nuclear Information System (INIS)
Souza, Manoelito M. de
1997-01-01
We discuss the physical meaning and the geometric interpretation of implementation in classical field theories. The origin of infinities and other inconsistencies in field theories is traced to fields defined with support on the light cone; a finite and consistent field theory requires a light-cone generator as the field support. We then introduce a classical field theory with support on the light-cone generators. It results in a description of discrete (point-like) interactions in terms of localized particle-like fields. We find the propagators of these particle-like fields and discuss their physical meaning, properties and consequences. They are conformally invariant, singularity-free, and describe a manifestly covariant (1 + 1)-dimensional dynamics in a (3 + 1) spacetime. Remarkably, this conformal symmetry remains even for the propagation of a massive field in four spacetime dimensions. We apply this formalism to classical electrodynamics and to the general relativity theory. The standard formalism with its distributed fields is retrieved in terms of spacetime averages of the discrete field. Singularities are the by-products of the averaging process. This new formalism enlightens the meaning and the problems of field theory, and may allow a softer transition to a quantum theory. (author)
The Weibull distribution a handbook
Rinne, Horst
2008-01-01
The most comprehensive book on the subject, this handbook chronicles the development of the Weibull distribution in statistical theory and applied statistics. Exploring one of the most important distributions in statistics, The Weibull Distribution: A Handbook focuses on its origin, statistical properties, and related distributions. The book also presents various approaches to estimate the parameters of the Weibull distribution under all possible situations of sampling data, as well as approaches to parameter and goodness-of-fit testing. It describes the statistical methods, concepts, theories, and applications of t
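The Weibull distribution discussed in the handbook has closed-form density and distribution functions; the small sketch below (standard two-parameter form, with illustrative shape and scale values) verifies two textbook properties:

```python
import math

def weibull_pdf(x, k, lam):
    """Density of the two-parameter Weibull with shape k and scale lam."""
    return (k / lam) * (x / lam) ** (k - 1) * math.exp(-((x / lam) ** k))

def weibull_cdf(x, k, lam):
    """Distribution function of the two-parameter Weibull."""
    return 1.0 - math.exp(-((x / lam) ** k))

# For k = 1 the Weibull reduces to the exponential distribution.
assert abs(weibull_cdf(2.0, 1.0, 2.0) - (1 - math.exp(-1))) < 1e-12
# The median is lam * (ln 2)^(1/k).
k, lam = 1.5, 3.0
median = lam * math.log(2) ** (1 / k)
assert abs(weibull_cdf(median, k, lam) - 0.5) < 1e-12
```

Parameter estimation under censored or grouped sampling, the handbook's main topic, builds on exactly these closed forms.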
Chang, CC
2012-01-01
Model theory deals with a branch of mathematical logic showing connections between a formal language and its interpretations or models. This is the first and most successful textbook in logical model theory. Extensively updated and corrected in 1990 to accommodate developments in model theoretic methods - including classification theory and nonstandard analysis - the third edition added entirely new sections, exercises, and references. Each chapter introduces an individual method and discusses specific applications. Basic methods of constructing models include constants, elementary chains, Sko
Aubin, Jean-Pierre; Saint-Pierre, Patrick
2011-01-01
Viability theory designs and develops mathematical and algorithmic methods for investigating the adaptation to viability constraints of evolutions governed by complex systems under uncertainty that are found in many domains involving living beings, from biological evolution to economics, from environmental sciences to financial markets, from control theory and robotics to cognitive sciences. It involves interdisciplinary investigations spanning fields that have traditionally developed in isolation. The purpose of this book is to present an initiation to applications of viability theory, explai
Hierarchical species distribution models
Hefley, Trevor J.; Hooten, Mevin B.
2016-01-01
Determining the distribution pattern of a species is important to increase scientific knowledge, inform management decisions, and conserve biodiversity. To infer spatial and temporal patterns, species distribution models have been developed for use with many sampling designs and types of data. Recently, it has been shown that count, presence-absence, and presence-only data can be conceptualized as arising from a point process distribution. Therefore, it is important to understand properties of the point process distribution. We examine how the hierarchical species distribution modeling framework has been used to incorporate a wide array of regression and theory-based components while accounting for the data collection process and making use of auxiliary information. The hierarchical modeling framework allows us to demonstrate how several commonly used species distribution models can be derived from the point process distribution, highlight areas of potential overlap between different models, and suggest areas where further research is needed.
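The abstract's key observation that count and presence-absence data both arise from a point process can be made concrete for a homogeneous Poisson process; the sketch below (illustrative intensity and area values, not from the paper) shows the two derived quantities:

```python
import math

def expected_count(intensity, area):
    """Mean count of a homogeneous Poisson point process over a region."""
    return intensity * area

def presence_probability(intensity, area):
    """Presence-absence data as a derived quantity: P(count > 0)
    under the same point process."""
    return 1.0 - math.exp(-intensity * area)

# Count and presence-absence data are both functions of one underlying
# intensity, which is what lets the hierarchical framework unify them.
lam, area = 0.5, 4.0
assert expected_count(lam, area) == 2.0
assert abs(presence_probability(lam, area) - (1 - math.exp(-2.0))) < 1e-12
```

In practice the intensity would itself be modeled as a regression on spatial covariates; this sketch only shows why the different data types share one underlying distribution.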
Cox, David A
2012-01-01
Praise for the First Edition: ". . . will certainly fascinate anyone interested in abstract algebra: a remarkable book!" (Monatshefte für Mathematik). Galois theory is one of the most established topics in mathematics, with historical roots that led to the development of many central concepts in modern algebra, including groups and fields. Covering classic applications of the theory, such as solvability by radicals, geometric constructions, and finite fields, Galois Theory, Second Edition delves into novel topics like Abel's theory of Abelian equations, casus irreducibilis, and the Galo
Dufwenberg, Martin
2011-03-01
Game theory is a toolkit for examining situations where decision makers influence each other. I discuss the nature of game-theoretic analysis, the history of game theory, why game theory is useful for understanding human psychology, and why game theory has played a key role in the recent explosion of interest in the field of behavioral economics. WIREs Cogn Sci 2011 2 167-173 DOI: 10.1002/wcs.119 For further resources related to this article, please visit the WIREs website. Copyright © 2010 John Wiley & Sons, Ltd.
Hashiguchi, Koichi
2009-01-01
This book details the mathematics and continuum mechanics necessary as a foundation of elastoplasticity theory. It explains physical backgrounds with illustrations and provides descriptions of detailed derivation processes.
Spatial data modelling and maximum entropy theory
Czech Academy of Sciences Publication Activity Database
Klimešová, Dana; Ocelíková, E.
2005-01-01
Vol. 51, No. 2 (2005), pp. 80-83. ISSN 0139-570X. Institutional research plan: CEZ:AV0Z10750506. Keywords: spatial data classification; distribution function; error distribution. Subject RIV: BD - Theory of Information
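The maximum-entropy principle referenced in this record can be illustrated with the Shannon entropy of a discrete distribution (a generic sketch, not taken from the paper):

```python
import math

def shannon_entropy(p):
    """Shannon entropy (in bits) of a discrete probability distribution."""
    return -sum(pi * math.log2(pi) for pi in p if pi > 0)

# Among all distributions on n outcomes, the uniform one maximizes entropy;
# maximum-entropy modelling picks the least-committal distribution
# consistent with the observed constraints.
uniform = [0.25] * 4
skewed = [0.7, 0.1, 0.1, 0.1]
assert abs(shannon_entropy(uniform) - 2.0) < 1e-12
assert shannon_entropy(skewed) < shannon_entropy(uniform)
```

In spatial data classification, the same quantity scores how informative a candidate distribution function is relative to the observed error distribution.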
Magnetic confinement theory summary
International Nuclear Information System (INIS)
Connor, J.W.
2005-01-01
A total of 93 papers under the theory, TH, heading were presented at the conference, although a number of experimental papers also contained significant theory elements: only the former are reviewed here. A novel development was the inclusion of a Theory Overview paper, presented by P H Diamond, on the subject of zonal flows, currently a topic of great interest to the fusion community. The remainder of the theory papers were distributed amongst oral presentations (32, with 11 rapporteured and one a post-deadline submission) and 58 posters, one of which was post-deadline. A number of themes, or trends, are evident, all springing from the growing use of numerical approaches to plasma theory. These are: (i) the use of direct numerical simulations to calculate and provide insights into turbulent transport (indeed there were about 30 papers with contributions on this topic), although analytic modelling plays a role in interpreting these 'numerical experiments'; (ii) increasing realism in modelling of geometry and physics in areas such as macroscopic MHD phenomena and radio-frequency heating and current drive, both of which involve modelling of fast-particle distributions; and (iii) a growing emphasis on integrated modelling, bringing together modules that describe interacting aspects of plasma behaviour
International Nuclear Information System (INIS)
Bartlett, R.; Kirtman, B.; Davidson, E.R.
1978-01-01
After noting some advantages of using perturbation theory, the various types are related on a chart and described, including many-body nonlinear summations, quartic force-field fit for geometry, fourth-order correlation approximations, and a survey of some recent work. Alternative initial approximations in perturbation theory are also discussed. 25 references
R. Veenhoven (Ruut)
2014-01-01
__Abstract__ Need theory of happiness is linked to affect theory, which holds that happiness is a reflection of how well we feel generally. In this view, we do not "calculate" happiness but rather "infer" it, the typical heuristic being "I feel good most of the time, hence …
Bouwkamp, C.J.
1954-01-01
A critical review is presented of recent progress in classical diffraction theory. Both scalar and electromagnetic problems are discussed. The report may serve as an introduction to general diffraction theory although the main emphasis is on diffraction by plane obstacles. Various modifications of
LeVeque, William J
1996-01-01
This excellent textbook introduces the basics of number theory, incorporating the language of abstract algebra. A knowledge of such algebraic concepts as group, ring, field, and domain is not assumed, however; all terms are defined and examples are given - making the book self-contained in this respect. The author begins with an introductory chapter on number theory and its early history. Subsequent chapters deal with unique factorization and the GCD, quadratic residues, number-theoretic functions and the distribution of primes, sums of squares, quadratic equations and quadratic fields, diophantine …
Lukeš, Jaroslav; Netuka, Ivan; Veselý, Jiří
1988-01-01
Within the tradition of meetings devoted to potential theory, a conference on potential theory took place in Prague on 19-24 July 1987. The Conference was organized by the Faculty of Mathematics and Physics, Charles University, with the collaboration of the Institute of Mathematics, Czechoslovak Academy of Sciences, the Department of Mathematics, Czech University of Technology, the Union of Czechoslovak Mathematicians and Physicists, the Czechoslovak Scientific and Technical Society, and supported by IMU. During the Conference, 69 scientific communications from different branches of potential theory were presented; the majority of them are included in the present volume. (Papers based on survey lectures delivered at the Conference, its program as well as a collection of problems from potential theory will appear in a special volume of the Lecture Notes Series published by Springer-Verlag.) Topics of these communications truly reflect the vast scope of contemporary potential theory. Some contributions deal …
DEFF Research Database (Denmark)
Bjerg, Ole; Presskorn-Thygesen, Thomas
2017-01-01
The paper is a contribution to current debates about conspiracy theories within philosophy and cultural studies. Wittgenstein’s understanding of language is invoked to analyse the epistemological effects of designating particular questions and explanations as a ‘conspiracy theory’. It is demonstrated how such a designation relegates these questions and explanations beyond the realm of meaningful discourse. In addition, Agamben’s concept of sovereignty is applied to explore the political effects of using the concept of conspiracy theory. The exceptional epistemological status assigned to alleged conspiracy theories within our prevalent paradigms of knowledge and truth is compared to the exceptional legal status assigned to individuals accused of terrorism under the War on Terror. The paper concludes by discussing the relation between conspiracy theory and ‘the paranoid style’ …
1999-11-08
In these lectures I will build up the concept of field theory using the language of Feynman diagrams. As a starting point, field theory in zero spacetime dimensions is used as a vehicle to develop all the necessary techniques: path integral, Feynman diagrams, Schwinger-Dyson equations, asymptotic series, effective action, renormalization, etc. The theory is then extended to more dimensions, with emphasis on the combinatorial aspects of the diagrams rather than their particular mathematical structure. The concept of unitarity is then used to arrive at the various Feynman rules in an actual, four-dimensional theory. The concept of gauge invariance is developed, and the structure of a non-abelian gauge theory is discussed, again on the level of Feynman diagrams and Feynman rules.
DEFF Research Database (Denmark)
Hjørland, Birger
2009-01-01
Concept theory is an extremely broad, interdisciplinary and complex field of research related to many deep fields with very long historical traditions without much consensus. However, information science and knowledge organization cannot avoid relating to theories of concepts. Knowledge organizing systems (e.g. classification systems, thesauri and ontologies) should be understood as systems basically organizing concepts and their semantic relations. The same is the case with information retrieval systems. Different theories of concepts have different implications for how to construe, evaluate and use such systems. Based on "a post-Kuhnian view" of paradigms, this paper puts forward arguments that the best understanding and classification of theories of concepts is to view and classify them in accordance with epistemological theories (empiricism, rationalism, historicism and pragmatism) …
Dillon, Joshua V.; Langmore, Ian; Tran, Dustin; Brevdo, Eugene; Vasudevan, Srinivas; Moore, Dave; Patton, Brian; Alemi, Alex; Hoffman, Matt; Saurous, Rif A.
2017-01-01
The TensorFlow Distributions library implements a vision of probability theory adapted to the modern deep-learning paradigm of end-to-end differentiable computation. Building on two basic abstractions, it offers flexible building blocks for probabilistic computation. Distributions provide fast, numerically stable methods for generating samples and computing statistics, e.g., log density. Bijectors provide composable volume-tracking transformations with automatic caching. Together these enable...
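The two abstractions described, Distributions and Bijectors, can be illustrated without TensorFlow. The following pure-Python sketch (class names chosen merely to mirror the library's vocabulary, not its actual API) builds a log-normal distribution by pushing a standard normal through an exp bijector, with the bijector supplying the volume-change (log-det-Jacobian) correction:

```python
import math

class Normal:
    """Minimal stand-in for a distribution object: here only log density."""
    def __init__(self, loc, scale):
        self.loc, self.scale = loc, scale
    def log_prob(self, x):
        z = (x - self.loc) / self.scale
        return -0.5 * z * z - math.log(self.scale) - 0.5 * math.log(2 * math.pi)

class Exp:
    """Bijector y = exp(x); tracks the volume change via log|dx/dy|."""
    def forward(self, x):
        return math.exp(x)
    def inverse(self, y):
        return math.log(y)
    def inverse_log_det_jacobian(self, y):
        return -math.log(y)          # d(log y)/dy = 1/y, so log|dx/dy| = -log y

class TransformedDistribution:
    """Change of variables: log p_Y(y) = log p_X(inv(y)) + log|d inv/dy|."""
    def __init__(self, base, bijector):
        self.base, self.bijector = base, bijector
    def log_prob(self, y):
        x = self.bijector.inverse(y)
        return self.base.log_prob(x) + self.bijector.inverse_log_det_jacobian(y)

lognormal = TransformedDistribution(Normal(0.0, 1.0), Exp())
```

Because the transformed density is assembled compositionally, stacking further bijectors only adds further log-det-Jacobian terms, which is what makes the design a natural fit for end-to-end differentiable computation.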
Distributed Cognition and Distributed Morality: Agency, Artifacts and Systems.
Heersmink, Richard
2017-04-01
There are various philosophical approaches and theories describing the intimate relation people have to artifacts. In this paper, I explore the relation between two such theories, namely distributed cognition and distributed morality theory. I point out a number of similarities and differences in these views regarding the ontological status they attribute to artifacts and the larger systems they are part of. Having evaluated and compared these views, I continue by focussing on the way cognitive artifacts are used in moral practice. I specifically conceptualise how such artifacts (a) scaffold and extend moral reasoning and decision-making processes, (b) have a certain moral status which is contingent on their cognitive status, and (c) whether responsibility can be attributed to distributed systems. This paper is primarily written for those interested in the intersection of cognitive and moral theory as it relates to artifacts, but also for those independently interested in philosophical debates in extended and distributed cognition and ethics of (cognitive) technology.
Andrews, George E
1994-01-01
Although mathematics majors are usually conversant with number theory by the time they have completed a course in abstract algebra, other undergraduates, especially those in education and the liberal arts, often need a more basic introduction to the topic.In this book the author solves the problem of maintaining the interest of students at both levels by offering a combinatorial approach to elementary number theory. In studying number theory from such a perspective, mathematics majors are spared repetition and provided with new insights, while other students benefit from the consequent simpl
Schmidli, Hanspeter
2017-01-01
This book provides an overview of classical actuarial techniques, including material that is not readily accessible elsewhere such as the Ammeter risk model and the Markov-modulated risk model. Other topics covered include utility theory, credibility theory, claims reserving and ruin theory. The author treats both theoretical and practical aspects and also discusses links to Solvency II. Written by one of the leading experts in the field, these lecture notes serve as a valuable introduction to some of the most frequently used methods in non-life insurance. They will be of particular interest to graduate students, researchers and practitioners in insurance, finance and risk management.
DEFF Research Database (Denmark)
Smith, Shelley
This paper came about within the context of a 13-month research project, Focus Area 1 - Method and Theory, at the Center for Public Space Research at the Royal Academy of the Arts School of Architecture in Copenhagen, Denmark. This project has been funded by RealDania. The goals of the research project, Focus Area 1 - Method and Theory, which forms the framework for this working paper, are: * To provide a basis from which to discuss the concept of public space in a contemporary architectural and urban context - specifically relating to theory and method * To broaden the discussion of the concept …
Lubliner, Jacob
2008-01-01
The aim of Plasticity Theory is to provide a comprehensive introduction to the contemporary state of knowledge in basic plasticity theory and to its applications. It treats several areas not commonly found between the covers of a single book: the physics of plasticity, constitutive theory, dynamic plasticity, large-deformation plasticity, and numerical methods, in addition to a representative survey of problems treated by classical methods, such as elastic-plastic problems, plane plastic flow, and limit analysis; the problem discussed come from areas of interest to mechanical, structural, and
DEFF Research Database (Denmark)
Linder, Stefan; Foss, Nicolai Juul
2015-01-01
Agency theory studies the problems and solutions linked to delegation of tasks from principals to agents in the context of conflicting interests between the parties. Beginning from clear assumptions about rationality, contracting, and informational conditions, the theory addresses problems of ex ante (‘hidden characteristics’) as well as ex post information asymmetry (‘hidden action’), and examines conditions under which various kinds of incentive instruments and monitoring arrangements can be deployed to minimize the welfare loss. Its clear predictions and broad applicability have allowed agency theory to enjoy considerable scientific impact on social science; however, it has also attracted considerable criticism.
International Nuclear Information System (INIS)
Wilkinson, D.H.
1992-11-01
Although the central limit theorem and the Gaussian approximation are useful for describing the usual behaviour of statistical systems, they are useless for discussing very small probabilities, i.e. for quantifying the likelihood of very rare events. For this latter purpose the ruin theory of F. Esscher is well adapted; it is expounded, and some applications are presented in detail, for the case that the influences to be summed are all positive definite, with their arising governed by the Poisson distribution; the case that influences of both signs are involved is also considered, as is the alternative impact of the Pólya distribution. (author) 4 refs., 1 tab., 15 figs
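The setting described, a sum of positive influences whose number is Poisson-distributed, is a compound Poisson sum, and the rare-event probabilities in question can at least be estimated by simulation. A hedged sketch (the exponential severity and all names are my own choices for illustration; this is plain Monte Carlo, not Esscher's exponential-tilting method itself):

```python
import math
import random

def poisson_draw(mean, rng):
    # Knuth's method: count uniforms until their product drops below e^-mean.
    L, k, p = math.exp(-mean), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def compound_poisson_tail(threshold, rate, severity, n=20000, rng=None):
    """Monte Carlo estimate of P(S > threshold), where S is the sum of
    N positive 'influences', N ~ Poisson(rate), each drawn by severity(rng)."""
    rng = rng or random.Random()
    hits = 0
    for _ in range(n):
        s = sum(severity(rng) for _ in range(poisson_draw(rate, rng)))
        if s > threshold:
            hits += 1
    return hits / n
```

With rate 5 and unit-mean exponential severities, E[S] = 5; plain Monte Carlo works near the mean but needs enormous sample sizes far out in the tail, which is exactly the regime where Esscher's tilting (and not the Gaussian approximation) is the right tool.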
Equilibrium theory of island biogeography: A review
Angela D. Yu; Simon A. Lei
2001-01-01
The topography, climatic pattern, location, and origin of islands generate unique patterns of species distribution. The equilibrium theory of island biogeography creates a general framework in which the study of taxon distribution and broad island trends may be conducted. Critical components of the equilibrium theory include the species-area relationship, island-...
International Nuclear Information System (INIS)
Nielsen, H.B.; Ninomiya, Masao
1989-12-01
We give an elementary review of the so-called 'theory of baby universes', a series of ideas or speculations about some effects in quantum gravity, viz. the effect of a certain type of wormholes representing the exchange of small 3-space universes called baby universes. We consider this 'theory' as being physically and scientifically a very promising candidate for a theory of everything. It is, however, mathematically lacking any strong foundation at all. It solves several fine-tuning problems: first of all the cosmological constant problem, and also the strong CP problem and the hierarchy problem. We also speculate that it might predict the possibility of influencing the probability distributions of the outcome of quantum mechanical measurements at one time by acts at a later time. (orig.)
Theory of vibration protection
Karnovsky, Igor A
2016-01-01
This text is an advancement of the theory of vibration protection of mechanical systems with lumped and distributed parameters. The book offers various concepts and methods of solving vibration protection problems, discusses the advantages and disadvantages of different methods, and the fields of their effective applications. Fundamental approaches of vibration protection, which are considered in this book, are the passive, parametric and optimal active vibration protection. The passive vibration protection is based on vibration isolation, vibration damping and dynamic absorbers. Parametric vibration protection theory is based on the Shchipanov-Luzin invariance principle. Optimal active vibration protection theory is based on the Pontryagin principle and the Krein moment method. The book also contains special topics such as suppression of vibrations at the source of their occurrence and the harmful influence of vibrations on humans. Numerous examples, which illustrate the theoretical ideas of each chapter, are …
Analytical theory of noncollinear amorphous metallic magnetism
International Nuclear Information System (INIS)
Kakehashi, Y.; Uchida, T.
2001-01-01
Analytical theory of noncollinear magnetism in amorphous metals is proposed on the basis of the Gaussian model for the distribution of the interatomic distance and the saddle-point approximation. The theory removes the numerical difficulty in the previous theory based on the Monte-Carlo sampling method, and reasonably describes the magnetic properties of amorphous transition metals
Nel, Louis
2016-01-01
This book presents a detailed, self-contained theory of continuous mappings. It is mainly addressed to students who have already studied these mappings in the setting of metric spaces, as well as multidimensional differential calculus. The needed background facts about sets, metric spaces and linear algebra are developed in detail, so as to provide a seamless transition between students' previous studies and new material. In view of its many novel features, this book will be of interest also to mature readers who have studied continuous mappings from the subject's classical texts and wish to become acquainted with a new approach. The theory of continuous mappings serves as infrastructure for more specialized mathematical theories like differential equations, integral equations, operator theory, dynamical systems, global analysis, topological groups, topological rings and many more. In light of the centrality of the topic, a book of this kind fits a variety of applications, especially those that contribute to ...
Hodges, Wilfrid
1993-01-01
An up-to-date and integrated introduction to model theory, designed to be used for graduate courses (for students who are familiar with first-order logic), and as a reference for more experienced logicians and mathematicians.
Lunardi, Alessandra
2018-01-01
This book is the third edition of the 1999 lecture notes of the courses on interpolation theory that the author delivered at the Scuola Normale in 1998 and 1999. In the mathematical literature there are many good books on the subject, but none of them is very elementary, and in many cases the basic principles are hidden below great generality. In this book the principles of interpolation theory are illustrated aiming at simplification rather than at generality. The abstract theory is reduced as far as possible, and many examples and applications are given, especially to operator theory and to regularity in partial differential equations. Moreover the treatment is self-contained, the only prerequisite being the knowledge of basic functional analysis.
International Nuclear Information System (INIS)
1989-06-01
This report discusses concepts in nuclear theory such as: neutrino nucleosynthesis; double beta decay; neutrino oscillations; chiral symmetry breaking; T invariance; quark propagator; cold fusion; and other related topics
R. Veenhoven (Ruut)
2014-01-01
__Abstract__ Assumptions: Livability theory involves the following six key assumptions: 1. Like all animals, humans have innate needs, such as for food, safety, and companionship. 2. Gratification of needs manifests in hedonic experience. 3. Hedonic experience determines how …
Hyperfinite representation of distributions
Indian Academy of Sciences (India)
A nonstandard treatment of the theory of distributions in terms of a hyperfinite representation, where the representing object is an (internal) hyperfinite set of hyperreal numbers with internal cardinality … The factor space is a ℂ-vector space which may be …
Tensions in Distributed Leadership
Ho, Jeanne; Ng, David
2017-01-01
Purpose: This article proposes the utility of using activity theory as an analytical lens to examine the theoretical construct of distributed leadership, specifically to illuminate tensions encountered by leaders and how they resolved these tensions. Research Method: The study adopted the naturalistic inquiry approach of a case study of an…
SAIDANI Lassaad
2015-01-01
The nokton theory is an attempt to construct a theory adapted to every physical phenomenon. Space and time have been discretized. Its laws are iterative and precise. Probability plays an important role here. At first I defined the notion of image function and its mathematical framework. The notion of nokton and its state are the basis of several definitions. I later defined the canonical image function and the canonical contribution. Two constants have been necessary to define the dynam...
Gould, Ronald
2012-01-01
This introduction to graph theory focuses on well-established topics, covering primary techniques and including both algorithmic and theoretical problems. The algorithms are presented with a minimum of advanced data structures and programming details. This thoroughly corrected 1988 edition provides insights to computer scientists as well as advanced undergraduates and graduate students of topology, algebra, and matrix theory. Fundamental concepts and notation and elementary properties and operations are the first subjects, followed by examinations of paths and searching, trees, and networks. S
Wolpert, David H.
2005-01-01
Probability theory governs the outcome of a game; there is a distribution over mixed strategies, not a single "equilibrium". To predict a single mixed strategy one must use a loss function (external to the game's players). This provides a quantification of any strategy's rationality. It is proved that rationality falls as the cost of computation rises (for players who have not previously interacted). All of this extends to games with varying numbers of players.
Introduction to electromagnetic theory
Owen, George E
2003-01-01
A direct, stimulating approach to electromagnetic theory, this text employs matrices and matrix methods for the simple development of broad theorems. The author uses vector representation throughout the book, with numerous applications of Poisson's equation and the Laplace equation (the latter occurring in both electronics and magnetic media). Contents include the electrostatics of point charges, distributions of charge, conductors and dielectrics, currents and circuits, and the Lorentz force and the magnetic field. Additional topics comprise the magnetic field of steady currents, induced ele
ESR spectroscopy and electron distribution
International Nuclear Information System (INIS)
Davies, A.G.
1997-01-01
EPR spectroscopy can map out the electron distribution in a molecule, in much the same way as proton NMR spectroscopy can map out the proton distribution, and it provides some of the most direct evidence for the principal concepts underlying the electronic theory of organic structure and mechanism. This is illustrated for the phenomena of conjugation, hyperconjugation, substituent effects in annulenes, Hückel theory, ring strain, the Mills-Nixon effect, and ion pairing. (author)
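Hückel theory, mentioned above, reduces the π-electron distribution to a small eigenvalue problem; for a monocyclic ring of n carbons the orbital energies even have the closed form E_k = α + 2β cos(2πk/n) (the Frost-circle result). A brief sketch (function names are mine; β is negative, and energies are reported in units of |β| with α = 0):

```python
import math

def huckel_ring_energies(n, alpha=0.0, beta=-1.0):
    """Hückel pi-orbital energies E_k = alpha + 2*beta*cos(2*pi*k/n)
    for a monocyclic ring of n carbons, sorted from most bonding upward."""
    return sorted(alpha + 2.0 * beta * math.cos(2.0 * math.pi * k / n)
                  for k in range(n))

def total_pi_energy(energies, n_electrons):
    """Fill orbitals from the bottom, two electrons each (closed shell assumed)."""
    total, left = 0.0, n_electrons
    for e in energies:
        occ = min(2, left)
        total += occ * e
        left -= occ
        if left == 0:
            break
    return total
```

For benzene (n = 6, six π electrons) this gives energies [-2, -1, -1, 1, 1, 2] and a total π energy of -8; three isolated ethylene units would give -6, so the delocalization stabilization comes out as 2|β|, the textbook result.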
Distributed Language and Dialogism
DEFF Research Database (Denmark)
Steffensen, Sune Vork
2015-01-01
This article takes a starting point in Per Linell’s (2013) review article on the book Distributed Language (Cowley, 2011a) and other contributions to the field of ‘Distributed Language’, including Cowley et al. (2010) and Hodges et al. (2012). The Distributed Language approach is a naturalistic and anti-representational approach to language that builds on recent developments in the cognitive sciences. With a starting point in Linell’s discussion of the approach, the article aims to clarify four aspects of a distributed view of language vis-à-vis the tradition of Dialogism, as presented by Linell, and addresses Linell’s critique of Distributed Language as rooted in biosemiotics and in theories of organism-environment systems. It is argued that Linell’s sense-based approach entails an individualist view of how conspecific Others acquire their status as prominent parts of the sense-maker’s environment …
Forgotten and neglected theories of Poincare
International Nuclear Information System (INIS)
Arnol'd, Vladimir I
2006-01-01
This paper describes a number of published and unpublished works of Henri Poincare that await continuation by the next generations of mathematicians: works on celestial mechanics, on topology, on the theory of chaos and dynamical systems, and on homology, intersections and links. Also discussed are the history of the theory of relativity and the theory of generalized functions (distributions) and the connection between the Poincare conjecture and the theory of knot invariants.
International Nuclear Information System (INIS)
Kenyon, I.R.
1986-01-01
Modern theories of the interactions between fundamental particles are all gauge theories. In the case of gravitation, application of this principle to space-time leads to Einstein's theory of general relativity. All the other interactions involve the application of the gauge principle to internal spaces. Electromagnetism serves to introduce the idea of a gauge field, in this case the electromagnetic field. The next example, the strong force, shows unique features at long and short range which have their origin in the self-coupling of the gauge fields. Finally the unification of the description of the superficially dissimilar electromagnetic and weak nuclear forces completes the picture of successes of the gauge principle. (author)
Stewart, Ian
2003-01-01
Ian Stewart's Galois Theory has been in print for 30 years. Resoundingly popular, it still serves its purpose exceedingly well. Yet mathematics education has changed considerably since 1973, when theory took precedence over examples, and the time has come to bring this presentation in line with more modern approaches.To this end, the story now begins with polynomials over the complex numbers, and the central quest is to understand when such polynomials have solutions that can be expressed by radicals. Reorganization of the material places the concrete before the abstract, thus motivating the g
International Nuclear Information System (INIS)
Sitenko, A.
1991-01-01
This book emerged out of graduate lectures given by the author at the University of Kiev and is intended as a graduate text. The fundamentals of non-relativistic quantum scattering theory are covered, including some topics, such as the phase-function formalism, separable potentials, and inverse scattering, which are not always covered in textbooks on scattering theory. Criticisms of the text are minor, but the reviewer feels the index is inadequate and that the citing of references in the Russian language is a hindrance in a graduate text.
Sferra, Bobbie A.; Paddock, Susan C.
This booklet describes various theoretical aspects of leadership, including the proper exercise of authority, effective delegation, goal setting, exercise of control, assignment of responsibility, performance evaluation, and group process facilitation. It begins by describing the evolution of general theories of leadership from historic concepts…
Hall, Marshall
2011-01-01
Includes proof of van der Waerden's 1926 conjecture on permanents, Wilson's theorem on asymptotic existence, and other developments in combinatorics since 1967. Also covers coding theory and its important connection with designs, problems of enumeration, and partition. Presents fundamentals in addition to latest advances, with illustrative problems at the end of each chapter. Enlarged appendixes include a longer list of block designs.
Toso, Robert B.
2000-01-01
Inspired by William Glasser's Reality Therapy ideas, Control Theory (CT) is a disciplinary approach that stresses people's ability to control only their own behavior, based on internal motivations to satisfy five basic needs. At one North Dakota high school, CT-trained teachers are the program's best recruiters. (MLH)
de Vreese, C.H.; Lecheler, S.; Mazzoleni, G.; Barnhurst, K.G.; Ikeda, K.; Maia, R.C.M.; Wessler, H.
2016-01-01
Political issues can be viewed from different perspectives and they can be defined differently in the news media by emphasizing some aspects and leaving others aside. This is at the core of news framing theory. Framing originates within sociology and psychology and has become one of the most used
International Nuclear Information System (INIS)
Gong, Ha Soung
2006-12-01
The textbook is composed of five parts: a summary of the book and an arrangement of electricity theory covering electricity and magnetism, direct current, and alternating current. It has two dictionaries of electricity terms with synonyms. The last part is an appendix. It is intended for those preparing for examinations for officer, electrical engineer, and fire-fighting engineer.
DEFF Research Database (Denmark)
Monthoux, Pierre Guillet de; Statler, Matt
2014-01-01
The recent Carnegie report (Colby, et al., 2011) characterizes the goal of business education as the development of practical wisdom. In this chapter, the authors reframe Scharmer’s Theory U as an attempt to develop practical wisdom by applying certain European philosophical concepts. Specifically...
International Nuclear Information System (INIS)
Tang, W.M.
2001-01-01
This is a summary of the advances in magnetic fusion energy theory research presented at the 17th International Atomic Energy Agency Fusion Energy Conference, held 19-24 October 1998 in Yokohama, Japan. Theory and simulation results from this conference provided encouraging evidence of significant progress in understanding the physics of thermonuclear plasmas. Indeed, the grand challenge for this field is to acquire the basic understanding that can readily enable the innovations which would make fusion energy practical. In this sense, research in fusion energy is increasingly able to be categorized as fitting well the 'Pasteur's Quadrant' paradigm, where the research strongly couples basic science ('Bohr's Quadrant') to technological impact ('Edison's Quadrant'). As supported by some of the work presented at this conference, this trend will be further enhanced by advanced simulations. Eventually, realistic three-dimensional modeling capabilities, when properly combined with rapid and complete data interpretation of results from both experiments and simulations, can contribute to a greatly enhanced cycle of understanding and innovation. Plasma science theory and simulation have provided reliable foundations for this improved modeling capability, and the exciting advances in high-performance computational resources have further accelerated progress. There were 68 papers presented at this conference in the area of magnetic fusion energy theory.
Penland, Patrick R.
Three papers are presented which delineate the foundation of theory and principles which underlie the research and instructional approach to communications at the Graduate School of Library and Information Science, University of Pittsburgh. Cybernetic principles provide the integration, and validation is based in part on a situation-producing…
Lee, William H K.
2016-01-01
A complex system consists of many interacting parts, generates new collective behavior through self-organization, and adaptively evolves through time. Many theories have been developed to study complex systems, including chaos, fractals, cellular automata, self-organization, stochastic processes, turbulence, and genetic algorithms.
Plummer, MD
1986-01-01
This study of matching theory deals with bipartite matching and network flows, and presents fundamental results for the non-bipartite case. It goes on to study elementary bipartite graphs and elementary graphs in general. Further discussed are 2-matchings, general matching problems as linear programs, the Edmonds Matching Algorithm (and other algorithmic approaches), f-factors and vertex packing.
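The bipartite case that opens the book can be sketched in a few lines. The following is an illustrative augmenting-path implementation (Kuhn's algorithm), our own sketch rather than an example from the text, with an invented toy graph:

```python
# Augmenting-path maximum bipartite matching (Kuhn's algorithm).
# adj maps each left-vertex to the right-vertices it is adjacent to;
# the graph below is a hypothetical example, not taken from the book.

def max_bipartite_matching(adj, n_left, n_right):
    match_right = [-1] * n_right  # match_right[v] = left vertex matched to v

    def try_augment(u, seen):
        for v in adj.get(u, []):
            if not seen[v]:
                seen[v] = True
                # v is free, or v's current partner can be rematched elsewhere
                if match_right[v] == -1 or try_augment(match_right[v], seen):
                    match_right[v] = u
                    return True
        return False

    return sum(try_augment(u, [False] * n_right) for u in range(n_left))

adj = {0: [0, 1], 1: [0], 2: [1, 2]}
print(max_bipartite_matching(adj, 3, 3))  # -> 3
```

Each call to `try_augment` searches for an alternating path that enlarges the matching, which is exactly the mechanism generalized by the Edmonds algorithm in the non-bipartite case.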
DEFF Research Database (Denmark)
Bertelsen, Olav Wedege; Bødker, Susanne
2003-01-01
the young HCI research tradition. But HCI was already facing problems: lack of consideration for other aspects of human behavior, for interaction with other people, for culture. Cognitive science-based theories lacked means to address several issues that came out of the empirical projects....
A nonlinear theory of generalized functions
1990-01-01
This book provides a simple introduction to a nonlinear theory of generalized functions introduced by J.F. Colombeau, which gives a meaning to any multiplication of distributions. This theory extends from pure mathematics (it presents a faithful generalization of the classical theory of C^∞ functions and provides a synthesis of most existing multiplications of distributions) to physics (it permits the resolution of ambiguities that appear in products of distributions), passing through the theory of partial differential equations both from the theoretical viewpoint (it furnishes a concept of weak solution of pde's leading to existence-uniqueness results in many cases where no distributional solution exists) and the numerical viewpoint (it introduces new and efficient methods developed recently in elastoplasticity, hydrodynamics and acoustics). This text presents basic concepts and results which until now were only published in article form. It is intended for mathematicians but, since the theory and applicati...
Robert Nozick's entitlement theory of justice: a critique | Nnajiofor ...
African Journals Online (AJOL)
The burden of this paper is to critique Robert Nozick's entitlement theory of justice, which was drafted as an argument against traditional distribution theories. Nozick's theory of justice claims that whether a distribution is just or not depends entirely on how it came about. By contrast, justice according to equality, need, desert or ...
International Nuclear Information System (INIS)
Agrachev, A.A.
2002-01-01
contains thirteen contributions divided into two parts. The volume, as well as the school it is based on, pursues primarily educational and instructive goals, and we have tried to distribute the material accordingly. The volume starts with Linear Control Systems, then turns to Nonlinear Systems and Optimal Control Theory. Basic elementary courses are intended to help the study of subsequent, more specific ones. The volume finishes with some real-world applications. We believe that the volume as a whole and its parts can serve both for self-study and for teaching, as a kind of contemporary textbook in Mathematical Control Theory. (author)
Energy Technology Data Exchange (ETDEWEB)
Agrachev, A A [Steklov Mathematical Institute, Moscow (Russian Federation); SISSA, Trieste (Italy)]; ed.
2002-07-15
thirteen contributions divided into two parts. The volume, as well as the school it is based on, pursues primarily educational and instructive goals, and we have tried to distribute the material accordingly. The volume starts with Linear Control Systems, then turns to Nonlinear Systems and Optimal Control Theory. Basic elementary courses are intended to help the study of subsequent, more specific ones. The volume finishes with some real-world applications. We believe that the volume as a whole and its parts can serve both for self-study and for teaching, as a kind of contemporary textbook in Mathematical Control Theory. (author)
Kitt, R.; Kalda, J.
2006-03-01
The question of the optimal portfolio is addressed. The conventional Markowitz portfolio optimisation is discussed and the shortcomings due to non-Gaussian security returns are outlined. A method is proposed to minimise the likelihood of extreme non-Gaussian drawdowns of the portfolio value. The theory is called leptokurtic because it minimises the effects of the “fat tails” of returns. The leptokurtic portfolio theory provides an optimal portfolio for investors who define their risk-aversion as unwillingness to experience sharp drawdowns in asset prices. Two types of risk in asset returns are defined: a fluctuation risk, which has a Gaussian distribution, and a drawdown risk, which deals with the distribution tails. These risks are quantitatively measured by defining the “noise kernel”: an ellipsoidal cloud of points in the space of asset returns. The size of the ellipse is controlled with the threshold parameter: the larger the threshold parameter, the larger the returns that are accepted as normal fluctuations. The return vectors falling into the kernel are used for the calculation of the fluctuation risk. Analogously, the data points falling outside the kernel are used for the calculation of the drawdown risk. As a result, the portfolio optimisation problem becomes three-dimensional: in addition to the return, there are two types of risk involved. The optimal portfolio for drawdown-averse investors is the portfolio minimising the variance outside the noise kernel. The theory has been tested with MSCI North America, Europe and Pacific total return stock indices.
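The kernel construction described in this abstract can be sketched numerically. The following is a schematic illustration under our own assumptions (an ellipsoidal Mahalanobis threshold on fat-tailed synthetic returns), not the authors' calibration:

```python
import numpy as np

# Schematic sketch of the "noise kernel" idea: returns inside an
# ellipsoidal (Mahalanobis) threshold count toward Gaussian-like
# fluctuation risk; points outside contribute to drawdown risk.
# All numbers here are invented for illustration.

rng = np.random.default_rng(0)
returns = rng.standard_t(df=3, size=(1000, 2)) * 0.01  # fat-tailed returns, 2 assets

cov = np.cov(returns, rowvar=False)
inv_cov = np.linalg.inv(cov)
d = returns - returns.mean(axis=0)
mahalanobis2 = np.einsum('ij,jk,ik->i', d, inv_cov, d)  # squared ellipsoidal distance

threshold = 2.0  # kernel size: larger threshold -> more returns treated as noise
inside = mahalanobis2 <= threshold**2

fluctuation_risk = returns[inside].var(axis=0)  # variance inside the kernel
drawdown_risk = returns[~inside].var(axis=0)    # variance of the tail points
print(inside.mean(), fluctuation_risk, drawdown_risk)
```

Minimising `drawdown_risk` rather than total variance is, in this toy rendering, the distinction between the leptokurtic and the Markowitz objectives.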
Interferometric Computation Beyond Quantum Theory
Garner, Andrew J. P.
2018-03-01
There are quantum solutions for computational problems that make use of interference at some stage in the algorithm. These stages can be mapped into the physical setting of a single particle travelling through a many-armed interferometer. There has been recent foundational interest in theories beyond quantum theory. Here, we present a generalized formulation of computation in the context of a many-armed interferometer, and explore how theories can differ from quantum theory and still perform distributed calculations in this set-up. We shall see that quaternionic quantum theory proves a suitable candidate, whereas box-world does not. We also find that a classical hidden-variable model first presented by Spekkens (Phys. Rev. A 75(3):032110, 2007) can also be used for this type of computation due to the epistemic restriction placed on the hidden variable.
Nuclear theory. 1998 progress report
International Nuclear Information System (INIS)
1998-01-01
Summaries of progress made on the following topics are given: (1) nonresonant contributions to inelastic N→Δ(1232) parity violation; (2) neutron distribution effects in elastic nuclear parity violation; (3) Wilson RG for scalar-plus-fermion field theories at finite density; (4) Perturbation theory for spin ladders using angular momentum coupled bases; (5) mean-field theory for spin ladders using angular momentum density; (6) finite temperature renormalization group effective potentials for the linear Sigma model; (7) negative-parity baryon resonances from lattice QCD; (8) the N→Δ electromagnetic transition amplitudes from QCD sum rules; and (9) higher nucleon resonances in exclusive reactions (γ, πN) on nuclei
Mathematical game theory and applications
Mazalov, Vladimir
2014-01-01
An authoritative and quantitative approach to modern game theory with applications from diverse areas including economics, political science, military science, and finance. Explores areas which are not covered in current game theory texts, including a thorough examination of zero-sum games. Provides introductory material on game theory, including bargaining, parlour games, sport, networking games and dynamic games. Explores bargaining models, discussing new results such as resource distributions, buyer-seller interactions and reputation in bargaining models. Theoretical results are presented along
Mathematical foundations of transport theory
International Nuclear Information System (INIS)
Ershov, Yu.I.; Shikhov, S.B.
1985-01-01
The main applications of operator-equation methods of analysis to transport theory problems are considered. The mathematical theory of a reactor critical state is presented. Theorems on the existence of positive solutions of non-linear non-stationary equations taking into account the temperature and xenon feedbacks are proved. Conditions for stability and asymptotic stability of steady-state regimes for different distributed models of a nuclear reactor are obtained on the basis of modern operator perturbation theory; certain problems on control using an absorber are also considered
Differential algebras in field theory
International Nuclear Information System (INIS)
Stora, R.
1988-01-01
The applications of differential algebras, as mathematical tools, in field theory are reviewed. The Yang-Mills theories are recalled and the free bosonic string model is treated. Moreover, in the scope of the work, the following topics are discussed: the Faddeev-Popov gauge-fixed action, in a Feynman-like gauge; the structure of local anomalies, including the algebraic and the topological theories; the problem of quantizing a degenerate state; and the zero-mode problem in the treatment of the bosonic string conformal gauge. The analysis leads to the conclusion that not much is known about situations where a non-involutive distribution is involved
On Distributed Port-Hamiltonian Process Systems
Lopezlena, Ricardo; Scherpen, Jacquelien M.A.
2004-01-01
In this paper we use the term distributed port-Hamiltonian Process Systems (DPHPS) to refer to the result of merging the theory of distributed port-Hamiltonian systems (DPHS) with the theory of process systems (PS). Such a concept is useful for combining the systematic interconnection of PHS with the
New generalized functions and multiplication of distributions
International Nuclear Information System (INIS)
Colombeau, J.F.
1984-01-01
Since its conception, Quantum Field Theory has been based on 'heuristic' computations (in particular products of distributions) that, despite lots of effort, remained meaningless from a mathematical viewpoint. In this book the author presents a new mathematical theory giving a rigorous mathematical sense to these heuristic computations and, from a mathematical viewpoint, to all products of distributions. This new mathematical theory is a new theory of Generalized Functions defined on any open subset Ω of R^n, which are much more general than the distributions on Ω. (Auth.)
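The classical obstruction that such a theory must overcome can be stated in one line; the following is the standard Schwartz-type example (our illustration, not taken from the book), showing that any associative product consistent with the usual operations is impossible on distributions alone:

```latex
% Associativity fails for a naive product of distributions:
% x \cdot \delta = 0 and x \cdot \operatorname{vp}\tfrac{1}{x} = 1, yet
\Bigl(\operatorname{vp}\tfrac{1}{x}\cdot x\Bigr)\cdot \delta
   \;=\; 1\cdot\delta \;=\; \delta,
\qquad
\operatorname{vp}\tfrac{1}{x}\cdot\bigl(x\cdot \delta\bigr)
   \;=\; \operatorname{vp}\tfrac{1}{x}\cdot 0 \;=\; 0 .
```

Colombeau's generalized functions evade this by embedding distributions in a larger differential algebra where equality is weakened to association.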
Information Theory - The Bridge Connecting Bounded Rational Game Theory and Statistical Physics
Wolpert, David H.
2005-01-01
A long-running difficulty with conventional game theory has been how to modify it to accommodate the bounded rationality of all real-world players. A recurring issue in statistical physics is how best to approximate joint probability distributions with decoupled (and therefore far more tractable) distributions. This paper shows that the same information-theoretic mathematical structure, known as Product Distribution (PD) theory, addresses both issues. In this, PD theory not only provides a principled formulation of bounded rationality and a set of new types of mean field theory in statistical physics; it also shows that those topics are fundamentally one and the same.
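The decoupling idea at the heart of this abstract can be made concrete on a toy joint distribution. The table below is invented for illustration and this is not Wolpert's formalism itself, only the product-of-marginals approximation it builds on:

```python
import numpy as np

# Approximate a correlated joint distribution p(x, y) by the product
# of its marginals, and measure the information lost as KL(p || q).
# The joint table is a hypothetical example.

p = np.array([[0.4, 0.1],
              [0.1, 0.4]])    # correlated joint p(x, y)

px = p.sum(axis=1)            # marginal over x
py = p.sum(axis=0)            # marginal over y
q = np.outer(px, py)          # decoupled product approximation q(x, y)

kl = np.sum(p * np.log(p / q))  # nats of coupling information discarded
print(q, kl)
```

A strictly positive `kl` is the price of tractability: the product distribution ignores exactly the correlations that, in the game-theoretic reading, couple one player's strategy to another's.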
International Nuclear Information System (INIS)
Chiu, Hueihuang.
1989-01-01
A theoretical method is being developed by which the structure of a radiation field can be predicted by a radiation potential theory, similar to a classical potential theory. The introduction of a scalar potential is justified on the grounds that the spectral intensity vector is irrotational. The vector is also solenoidal in the limits of a radiation field in complete radiative equilibrium or in a vacuum. This method provides an exact, elliptic type equation that will upgrade the accuracy and the efficiency of the current CFD programs required for the prediction of radiation and flow fields. A number of interesting results emerge from the present study. First, a steady state radiation field exhibits an optically modulated inverse square law distribution character. Secondly, the unsteady radiation field is structured with two conjugate scalar potentials. Each is governed by a Klein-Gordon equation with a frictional force and a restoring force. This steady potential field structure and the propagation of radiation potentials are consistent with the well known results of classical electromagnetic theory. The extension of the radiation potential theory for spray combustion and hypersonic flow is also recommended
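The two conjugate potentials mentioned above are each said to obey a Klein-Gordon equation with a frictional force and a restoring force; in generic notation (our reconstruction, since the paper's symbols are not reproduced here), such an equation for a potential φ with source S reads:

```latex
% Damped Klein--Gordon form assumed for each radiation potential \phi:
% \gamma\,\partial_t\phi is the frictional term, \mu^2\phi the restoring term.
\frac{1}{c^2}\frac{\partial^2 \phi}{\partial t^2}
  + \gamma\,\frac{\partial \phi}{\partial t}
  - \nabla^2 \phi + \mu^2 \phi = S
```

In the steady-state limit the time derivatives vanish and the equation reduces to an elliptic (Helmholtz-type) problem, consistent with the inverse-square-law behaviour the abstract describes.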
DEFF Research Database (Denmark)
Stein, Irene F.; Stelter, Reinhard
2011-01-01
Communication theory covers a wide variety of theories related to the communication process (Littlejohn, 1999). Communication is not simply an exchange of information, in which we have a sender and a receiver. This very technical concept of communication is clearly outdated; a human being...... is not a data processing device. In this chapter, communication is understood as a process of shared meaning-making (Bruner, 1990). Human beings interpret their environment, other people, and themselves on the basis of their dynamic interaction with the surrounding world. Meaning is essential because people...... ascribe specific meanings to their experiences, their actions in life or work, and their interactions. Meaning is reshaped, adapted, and transformed in every communication encounter. Furthermore, meaning is cocreated in dialogues or in communities of practice, such as in teams at a workplace or in school...
2015-01-01
A one-sentence definition of operator theory could be: The study of (linear) continuous operations between topological vector spaces, these being in general (but not exclusively) Fréchet, Banach, or Hilbert spaces (or their duals). Operator theory is thus a very wide field, with numerous facets, both applied and theoretical. There are deep connections with complex analysis, functional analysis, mathematical physics, and electrical engineering, to name a few. Fascinating new applications and directions regularly appear, such as operator spaces, free probability, and applications to Clifford analysis. In our choice of the sections, we tried to reflect this diversity. This is a dynamic ongoing project, and more sections are planned, to complete the picture. We hope you enjoy the reading, and profit from this endeavor.
Helms, Lester L
2014-01-01
Potential Theory presents a clear path from calculus to classical potential theory and beyond, with the aim of moving the reader into the area of mathematical research as quickly as possible. The subject matter is developed from first principles using only calculus. Commencing with the inverse square law for gravitational and electromagnetic forces and the divergence theorem, the author develops methods for constructing solutions of Laplace's equation on a region with prescribed values on the boundary of the region. The latter half of the book addresses more advanced material aimed at those with the background of a senior undergraduate or beginning graduate course in real analysis. Starting with solutions of the Dirichlet problem subject to mixed boundary conditions on the simplest of regions, methods of morphing such solutions onto solutions of Poisson's equation on more general regions are developed using diffeomorphisms and the Perron-Wiener-Brelot method, culminating in application to Brownian motion. In ...
DEFF Research Database (Denmark)
Jensen, Klaus Bruhn
2016-01-01
This article revisits the place of normative and other practical issues in the wider conceptual architecture of communication theory, building on the tradition of philosophical pragmatism. The article first characterizes everyday concepts of communication as the accumulated outcome of natural...... evolution and history: practical resources for human existence and social coexistence. Such practical concepts have served as the point of departure for diverse theoretical conceptions of what communication is. The second part of the article highlights the past neglect and current potential of normative...... communication theories that ask, in addition, what communication ought to be, and what it could be, taking the relationship between communication and justice as a case in point. The final section returns to empirical conceptualizations of different institutions, practices and discourses of communication...
International Nuclear Information System (INIS)
Jarlskog, C.
An introduction to the unified gauge theories of weak and electromagnetic interactions is given. The ingredients of gauge theories and symmetries and conservation laws lead to discussion of local gauge invariance and QED, followed by weak interactions and quantum flavor dynamics. The construction of the standard SU(2)xU(1) model precedes discussion of the unification of weak and electromagnetic interactions and weak neutral current couplings in this model. Presentation of spontaneous symmetry breaking and spontaneous breaking of a local symmetry leads to a spontaneous breaking scheme for the standard SU(2)xU(1) model. Consideration of quarks, leptons, masses and the Cabibbo angles, of the four quark and six quark models and CP violation lead finally to grand unification, followed by discussion of mixing angles in the Georgi-Glashow model, the Higgses of the SU(5) model and proton/neutron decay in SU(5). (JIW)
International Nuclear Information System (INIS)
Perjes, Z.
1982-01-01
Particle models in twistor theory are reviewed, starting with an introduction into the kinematical-twistor formalism which describes massive particles in Minkowski space-time. The internal transformations of constituent twistors are then discussed. The quantization rules available from a study of twistor scattering situations are used to construct quantum models of fundamental particles. The theory allows the introduction of an internal space with a Kaehlerian metric where hadron structure is described by spherical states of bound constituents. It is conjectured that the spectrum of successive families of hadrons might approach an accumulation point in energy. Above this threshold energy, the Kaehlerian analog of ionization could occur wherein the zero-mass constituents (twistors) of the particle break free. (Auth.)
DEFF Research Database (Denmark)
Carroll, Joseph; Clasen, Mathias; Jonsson, Emelie
2017-01-01
Biocultural theory is an integrative research program designed to investigate the causal interactions between biological adaptations and cultural constructions. From the biocultural perspective, cultural processes are rooted in the biological necessities of the human life cycle: specifically human...... of research as contributions to a coherent, collective research program. This article argues that a mature biocultural paradigm needs to be informed by at least 7 major research clusters: (a) gene-culture coevolution; (b) human life history theory; (c) evolutionary social psychology; (d) anthropological...... forms of birth, growth, survival, mating, parenting, and sociality. Conversely, from the biocultural perspective, human biological processes are constrained, organized, and developed by culture, which includes technology, culturally specific socioeconomic and political structures, religious...
Weber, Rebecca
2012-01-01
What can we compute--even with unlimited resources? Is everything within reach? Or are computations necessarily drastically limited, not just in practice, but theoretically? These questions are at the heart of computability theory. The goal of this book is to give the reader a firm grounding in the fundamentals of computability theory and an overview of currently active areas of research, such as reverse mathematics and algorithmic randomness. Turing machines and partial recursive functions are explored in detail, and vital tools and concepts including coding, uniformity, and diagonalization are described explicitly. From there the material continues with universal machines, the halting problem, parametrization and the recursion theorem, and thence to computability for sets, enumerability, and Turing reduction and degrees. A few more advanced topics round out the book before the chapter on areas of research. The text is designed to be self-contained, with an entire chapter of preliminary material including re...
Hashiguchi, Koichi
2014-01-01
This book was written to serve as the standard textbook of elastoplasticity for students, engineers and researchers in the field of applied mechanics. The present second edition has been thoroughly improved over the first by selecting, from various formulations and models, the standard theories required to study the essentials of elastoplasticity steadily and effectively. It opens with an explanation of vector-tensor analysis and continuum mechanics as a foundation for the study of elastoplasticity theory, extending over various strain and stress tensors and their rates. Subsequently, constitutive equations of elastoplastic and viscoplastic deformations for monotonic, cyclic and non-proportional loading behavior in a general rate form and their applications to metals and soils are described in detail, and constitutive equations of friction behavior between solids and their application to the prediction of stick-slip phenomena are delineated. In additi...
Sheaves of Schwartz distributions
International Nuclear Information System (INIS)
Damyanov, B.P.
1991-09-01
The theory of sheaves is a relevant mathematical language for describing the localization principle, known to be valid for the Schwartz distributions (generalized functions). After introducing some fundamentals of sheaves and the basic facts about distribution spaces, the distribution sheaf D_Ω of topological C-vector spaces over an open set Ω in R^n is systematically studied. A sheaf D_M of distributions on a C^∞-manifold M is then introduced, following a definition of Hoermander's for its particular elements. Further, a general definition of sheaves on a manifold that are locally isomorphic to (or modelled on) a sheaf on R^n is proposed. The sheaf properties of D_M are studied and this sheaf is shown to be locally isomorphic to D_Ω, as a sheaf of topological vector spaces. (author). 14 refs
Veenhoven, Ruut
2014-01-01
Assumptions: Livability theory involves the following six key assumptions: 1. Like all animals, humans have innate needs, such as for food, safety, and companionship. 2. Gratification of needs manifests in hedonic experience. 3. Hedonic experience determines how much we like the life we live (happiness). Hence, happiness depends on need gratification. 4. Need gratification depends on both external living conditions and inner abilities to use these. Hence, bad living...
International Nuclear Information System (INIS)
Casten, R F
2015-01-01
This paper discusses some simple issues that arise in testing models, with a focus on models for low energy nuclear structure. By way of simplified examples, we illustrate some dangers in blind statistical assessments, pointing out especially the need to include theoretical uncertainties, the danger of over-weighting precise or physically redundant experimental results, the need to assess competing theories with independent and physically sensitive observables, and the value of statistical tests properly evaluated. (paper)
Diestel, Reinhard
2017-01-01
This standard textbook of modern graph theory, now in its fifth edition, combines the authority of a classic with the engaging freshness of style that is the hallmark of active mathematics. It covers the core material of the subject with concise yet reliably complete proofs, while offering glimpses of more advanced methods in each field by one or two deeper results, again with proofs given in full detail. The book can be used as a reliable text for an introductory course, as a graduate text, and for self-study. From the reviews: “This outstanding book cannot be substituted with any other book on the present textbook market. It has every chance of becoming the standard textbook for graph theory.” (Acta Scientiarum Mathematicarum) “Deep, clear, wonderful. This is a serious book about the heart of graph theory. It has depth and integrity.” (Persi Diaconis & Ron Graham, SIAM Review) “The book has received a very enthusiastic reception, which it amply deserves. A masterly elucidation of modern graph theo...
Friedrich, Harald
2016-01-01
This corrected and updated second edition of "Scattering Theory" presents a concise and modern coverage of the subject. In the present treatment, special attention is given to the role played by the long-range behaviour of the projectile-target interaction, and a theory is developed, which is well suited to describe near-threshold bound and continuum states in realistic binary systems such as diatomic molecules or molecular ions. It is motivated by the fact that experimental advances have shifted and broadened the scope of applications where concepts from scattering theory are used, e.g. to the field of ultracold atoms and molecules, which has been experiencing enormous growth in recent years, largely triggered by the successful realization of Bose-Einstein condensates of dilute atomic gases in 1995. The book contains sections on special topics such as near-threshold quantization, quantum reflection, Feshbach resonances and the quantum description of scattering in two dimensions. The level of abstraction is k...
Fleeson, William; Jayawickreme, Eranda
2014-01-01
Personality researchers should modify models of traits to include mechanisms of differential reaction to situations. Whole Trait Theory does so via five main points. First, the descriptive side of traits should be conceptualized as density distributions of states. Second, it is important to provide an explanatory account of the Big 5 traits. Third, adding an explanatory account to the Big 5 creates two parts to traits, an explanatory part and a descriptive part, and these two parts should be recognized as separate entities that are joined into whole traits. Fourth, Whole Trait Theory proposes that the explanatory side of traits consists of social-cognitive mechanisms. Fifth, social-cognitive mechanisms that produce Big 5 states should be identified. PMID:26097268
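The first point, traits as density distributions of states, can be illustrated with simulated data. The sampling scheme and all numbers below are our invention, not from the article:

```python
import numpy as np

# "Traits as density distributions of states": simulate momentary
# state reports (e.g., extraversion on a 1-5 scale) for one person,
# then summarize the descriptive trait as the distribution's center
# and spread. All values here are invented for illustration.

rng = np.random.default_rng(1)
states = rng.normal(loc=3.5, scale=0.8, size=60)  # 60 momentary reports
states = states.clip(1, 5)                        # keep on the rating scale

trait_level = states.mean()       # the classic trait score: distribution mean
trait_variability = states.std()  # within-person spread, also trait-like
print(round(trait_level, 2), round(trait_variability, 2))
```

On this view the single trait score is just one summary statistic of the density; the spread carries the differential reaction to situations that the abstract emphasizes.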
Wilde, Mark M
2017-01-01
Developing many of the major, exciting, pre- and post-millennium developments from the ground up, this book is an ideal entry point for graduate students into quantum information theory. Significant attention is given to quantum mechanics for quantum information theory, and careful studies of the important protocols of teleportation, superdense coding, and entanglement distribution are presented. In this new edition, readers can expect to find over 100 pages of new material, including detailed discussions of Bell's theorem, the CHSH game, Tsirelson's theorem, the axiomatic approach to quantum channels, the definition of the diamond norm and its interpretation, and a proof of the Choi–Kraus theorem. Discussion of the importance of the quantum dynamic capacity formula has been completely revised, and many new exercises and references have been added. This new edition will be welcomed by the upcoming generation of quantum information theorists and the already established community of classical information theo...
Ethical principles and theories.
Schultz, R C
1993-01-01
Ethical theory about what is right and good in human conduct lies behind the issues practitioners face and the codes they turn to for guidance; it also provides guidance for actions, practices, and policies. Principles of obligation, such as egoism, utilitarianism, and deontology, offer general answers to the question, "Which acts/practices are morally right?" A re-emerging alternative to using such principles to assess individual conduct is to center normative theory on personal virtues. For structuring society's institutions, principles of social justice offer alternative answers to the question, "How should social benefits and burdens be distributed?" But human concerns about right and good call for more than just theoretical responses. Some critics (e.g., the postmodernists and the feminists) charge that normative ethical theorizing is a misguided enterprise. However, that charge should be taken as a caution and not as a refutation of normative ethical theorizing.
Kroese, A.H.; van der Meulen, E.A.; Poortema, Klaas; Schaafsma, W.
1995-01-01
The making of statistical inferences in distributional form is conceptually complicated because the epistemic 'probabilities' assigned are mixtures of fact and fiction. In this respect they are essentially different from 'physical' or 'frequency-theoretic' probabilities. The distributional form is
Distributed photovoltaic grid transformers
Shertukde, Hemchandra Madhusudan
2014-01-01
The demand for alternative energy sources fuels the need for electric power and controls engineers to possess a practical understanding of transformers suitable for solar energy. Meeting that need, Distributed Photovoltaic Grid Transformers begins by explaining the basic theory behind transformers in the solar power arena, and then progresses to describe the development, manufacture, and sale of distributed photovoltaic (PV) grid transformers, which help boost the electric DC voltage (generally at 30 volts) harnessed by a PV panel to a higher level (generally at 115 volts or higher) once it is
Distributional Watson transforms
Dijksma, A.; Snoo, H.S.V. de
1974-01-01
For all Watson transforms W in L^2(R_+) a triple of Hilbert spaces L_G ⊂ L^2(R_+) ⊂ L'_G is constructed such that W may be extended to L'_G. These results allow the construction of a triple L ⊂ L^2(R_+) ⊂ L', where L is a Gelfand-Fréchet space. This leads to a theory of distributional Watson transforms.
The theory of electromagnetism
Jones, D S
1964-01-01
The Theory of Electromagnetism covers the behavior of electromagnetic fields and those parts of applied mathematics necessary to discover this behavior. This book is composed of 11 chapters that emphasize Maxwell's equations. The first chapter is concerned with the general properties of solutions of Maxwell's equations in matter, which has certain macroscopic properties. The succeeding chapters consider specific problems in electromagnetism, including the determination of the field produced by a variable charge, first in isolation and then in the surface distributions of an antenna. The
Unified kinetic theory in toroidal systems
International Nuclear Information System (INIS)
Hitchcock, D.A.; Hazeltine, R.D.
1980-12-01
The kinetic theory of toroidal systems has been characterized by two approaches: neoclassical theory, which ignores instabilities, and quasilinear theory, which ignores collisions. In this paper we construct a kinetic theory for toroidal systems which includes both effects. This yields a pair of evolution equations: one for the spectrum and one for the distribution function. In addition, this theory yields a toroidal generalization of the usual collision operator, which is shown to have many properties (conservation laws, an H theorem) similar to those of the usual collision operator
Goldie, Charles M
1991-01-01
This book is an introduction, for mathematics students, to the theories of information and codes. They are usually treated separately but, as both address the problem of communication through noisy channels (albeit from different directions), the authors have been able to exploit the connection to give a reasonably self-contained treatment, relating the probabilistic and algebraic viewpoints. The style is discursive and, as befits the subject, plenty of examples and exercises are provided. Some examples of computer codes are given to provide concrete illustrations of abstract ideas.
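The connection between the two subjects can be shown with the simplest possible error-correcting code. This toy rate-1/3 repetition code is our illustration, not one of the book's examples:

```python
# A rate-1/3 repetition code: each bit is sent three times, and the
# decoder takes a majority vote, so any single bit flip per block is
# corrected. This toy example is ours, not taken from the book.

def encode(bits):
    return [b for b in bits for _ in range(3)]

def decode(coded):
    return [int(sum(coded[i:i + 3]) >= 2) for i in range(0, len(coded), 3)]

msg = [1, 0, 1, 1]
sent = encode(msg)
sent[4] ^= 1                # the noisy channel flips one bit in block 2
print(decode(sent) == msg)  # -> True
```

The probabilistic side of the theory asks how often such a channel defeats the majority vote; the algebraic side asks for codes that correct more errors at less cost in rate.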
2009-01-01
This book deals with the basic subjects of design theory. It begins with balanced incomplete block designs, various constructions of which are described in ample detail. In particular, finite projective and affine planes, difference sets and Hadamard matrices, as tools to construct balanced incomplete block designs, are included. Orthogonal latin squares are also treated in detail. Zhu's simpler proof of the falsity of Euler's conjecture is included. The construction of some classes of balanced incomplete block designs, such as Steiner triple systems and Kirkman triple systems, are also given.
On the operation of composition of distributions
Energy Technology Data Exchange (ETDEWEB)
Kaminski, A; Sorek, S [Institute of Mathematics, University of Rzeszow, Rejtana 16A, 35-310 Rzeszow (Poland)
2006-02-28
The proofs of the results of P. Antosik [3] on the distributional composition of distributions (in the sense of Mikusinski's theory of irregular operations), which contained essential gaps, are completed by means of measure-theoretic techniques, and the results are generalized. The obtained theorems can be applied to prove some formulas, which may be interesting to physicists, concerning the substitution of measures (in particular, the Dirac delta distribution) into continuous functions.
International business theory and marketing theory
Soldner, Helmut
1984-01-01
International business theory and marketing theory: elements for international marketing theory building. In: Marketing aspects of international business / Gerald M. Hampton ... (eds.). Boston et al.: Kluwer, 1984, pp. 25-57
MOLECULAR DESCRIPTION OF ELECTROLYTE SOLUTION IN A CARBON AEROGEL ELECTRODE
Directory of Open Access Journals (Sweden)
A.Kovalenko
2003-01-01
We develop a molecular theory of aqueous electrolyte solution sorbed in a nanoporous carbon aerogel electrode, based on the replica reference interaction site model (replica RISM) for realistic molecular quenched-annealed systems. We also briefly review applications of carbon aerogels for supercapacitor and electrochemical separation devices, as well as theoretical and computer modelling of disordered porous materials. The replica RISM integral equation theory yields the microscopic properties of the electrochemical double layer formed at the surface of carbon aerogel nanopores, with due account of the chemical specificities of both the sorbed electrolyte and the carbon aerogel material. The theory allows for spatial disorder of aerogel pores on scales ranging from microscopic to macroscopic. We consider an ambient aqueous solution of 1 M sodium chloride sorbed in two model nanoporous carbon aerogels with carbon nanoparticles either arranged into branched chains or randomly distributed. The long-range correlations of the carbon aerogel nanostructure substantially affect the properties of the electrochemical double layer formed by the solution sorbed in the nanopores.
Stochastic theory of grain growth
International Nuclear Information System (INIS)
Hu Haiyun; Xing Xiusan.
1990-11-01
The purpose of this note is to set up a stochastic theory of grain growth and to derive the statistical distribution function and the average value of the grain radius, so as to match them with experiment. 8 refs, 1 fig
Neutrons moderation theory; Theorie du ralentissement des neutrons
Energy Technology Data Exchange (ETDEWEB)
Vigier, J P
1949-07-01
This report gives a summarized presentation of the theory of the diffusion and moderation of fast neutrons in a given medium, as elaborated by M. Langevin, E. Fermi, R. Marshak and others. This statistical theory is based on three assumptions: there is no inelastic scattering, the elastic scattering has spherical symmetry with respect to the centre of gravity of the neutron-nucleus system (s-scattering), and the effects of chemical bonds and thermal agitation of nuclei are neglected. The first chapter analyzes the Boltzmann equation of moderation, its first approximate solution (the age-velocity equation) and its domain of validity, the extension of the age-velocity theory (general solution) and the boundary conditions, the higher-order approximations (spherical harmonics method and Laplace transformation), the asymptotic solutions, and the theory of spatial moments. The second chapter analyzes the energy distribution of delayed neutrons (stationary and non-stationary cases). (J.S.)
A Field Theory with Curvature and Anticurvature
Directory of Open Access Journals (Sweden)
M. I. Wanas
2014-01-01
The present work is an attempt to construct a unified field theory in a space with curvature and anticurvature, the PAP-space. The theory is derived from an action principle and a Lagrangian density using a symmetric linear parameterized connection. Three different methods are used to explore physical contents of the theory obtained. Poisson’s equations for both material and charge distributions are obtained, as special cases, from the field equations of the theory. The theory is a pure geometric one in the sense that material distribution, charge distribution, gravitational and electromagnetic potentials, and other physical quantities are defined in terms of pure geometric objects of the structure used. In the case of pure gravity in free space, the spherical symmetric solution of the field equations gives the Schwarzschild exterior field. The weak equivalence principle is respected only in the case of pure gravity in free space; otherwise it is violated.
International Nuclear Information System (INIS)
Markland, J.T.
1992-01-01
Techniques used in conventional project appraisal are mathematically very simple in comparison with those used in reservoir modelling and in the geosciences. Clearly it would be possible to value assets in mathematically more sophisticated ways if it were meaningful and worthwhile to do so. The DCF approach in common use has recognized limitations, the inability to select a meaningful discount rate being particularly significant. Financial theory has advanced enormously over the last few years, along with computational techniques, and methods are beginning to appear which may change the way we do project evaluations in practice. The starting point for all of this was a paper by Black and Scholes, which asserts that almost all corporate liabilities can be viewed as options of varying degrees of complexity. Although the financial presentation may be unfamiliar to engineers and geoscientists, some of the concepts used will not be. This paper outlines, in plain English, the basis of option-pricing theory for assessing the market value of a project. It also attempts to assess the future role of this type of approach in practical Petroleum Exploration and Engineering economics. Reference is made to relevant published Natural Resource literature
Exclusion Statistics in Conformal Field Theory Spectra
International Nuclear Information System (INIS)
Schoutens, K.
1997-01-01
We propose a new method for investigating the exclusion statistics of quasiparticles in conformal field theory (CFT) spectra. The method leads to one-particle distribution functions, which generalize the Fermi-Dirac distribution. For the simplest SU(n)-invariant CFTs we find a generalization of Gentile parafermions, and we obtain new distributions for the simplest Z_N-invariant CFTs. In special examples, our approach reproduces distributions based on 'fractional exclusion statistics' in the sense of Haldane. We comment on applications to fractional quantum Hall effect edge theories. copyright 1997 The American Physical Society
Workshop III – Cosmology: Observations versus theories
Indian Academy of Sciences (India)
599–601. Workshop III – Cosmology: Observations versus theories. T R SESHADRI ... The gravitational lens image separation distribution function in the presence of evolving models of ... Restoration of local electroweak symmetry is achieved.
Distribution of values of holomorphic mappings
Shabat, B V
1985-01-01
A vast literature has grown up around the value distribution theory of meromorphic functions, synthesized by Rolf Nevanlinna in the 1920s and singled out by Hermann Weyl as one of the greatest mathematical achievements of this century. The multidimensional aspect, involving the distribution of inverse images of analytic sets under holomorphic mappings of complex manifolds, has not been fully treated in the literature. This volume thus provides a valuable introduction to multivariate value distribution theory and a survey of some of its results, rich in relations to both algebraic and differential geometry and surely one of the most important branches of the modern geometric theory of functions of a complex variable. Since the book begins with preparatory material from the contemporary geometric theory of functions, only a familiarity with the elements of multidimensional complex analysis is necessary background to understand the topic. After proving the two main theorems of value distribution theory, the auth...
Distributed Decision Making and Control
Rantzer, Anders
2012-01-01
Distributed Decision Making and Control is a mathematical treatment of relevant problems in distributed control, decision and multiagent systems. The research reported was prompted by the recent rapid development in large-scale networked and embedded systems and communications. One of the main reasons for the growing complexity in such systems is the dynamics introduced by computation and communication delays. Reliability, predictability, and efficient utilization of processing power and network resources are central issues, and the new theory and design methods presented here are needed to analyze and optimize the complex interactions that arise between controllers, plants and networks. The text also helps to meet requirements arising from industrial practice for a more systematic approach to the design of distributed control structures and corresponding information interfaces. Theory for coordination of many different control units is closely related to economics and game theory, network uses being dictated by...
Wavelet theory and its applications
Energy Technology Data Exchange (ETDEWEB)
Faber, V.; Bradley, JJ.; Brislawn, C.; Dougherty, R.; Hawrylycz, M.
1996-07-01
This is the final report of a three-year, Laboratory-Directed Research and Development (LDRD) project at the Los Alamos National Laboratory (LANL). We investigated the theory of wavelet transforms and their relation to Laboratory applications. The investigators have had considerable success in the past applying wavelet techniques to the numerical solution of optimal control problems for distributed-parameter systems, nonlinear signal estimation, and compression of digital imagery and multidimensional data. Wavelet theory involves ideas from the fields of harmonic analysis, numerical linear algebra, digital signal processing, approximation theory, and numerical analysis, and the new computational tools arising from wavelet theory are proving to be ideal for many Laboratory applications. 10 refs.
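As an illustration of the wavelet techniques this report alludes to (not code from the LDRD project itself), here is a minimal one-level Haar wavelet transform and its exact inverse: the signal is split into pairwise averages (coarse approximation) and pairwise differences (detail), the simplest instance of the multiresolution idea behind the compression applications mentioned. Normalizing by sqrt(2) makes the transform orthonormal, so signal energy is preserved.

```python
from math import sqrt

def haar_forward(x):
    """One level of the orthonormal Haar transform: averages and differences."""
    assert len(x) % 2 == 0
    approx = [(a + b) / sqrt(2) for a, b in zip(x[0::2], x[1::2])]
    detail = [(a - b) / sqrt(2) for a, b in zip(x[0::2], x[1::2])]
    return approx, detail

def haar_inverse(approx, detail):
    """Exact reconstruction of the original signal from the two bands."""
    x = []
    for s, d in zip(approx, detail):
        x += [(s + d) / sqrt(2), (s - d) / sqrt(2)]
    return x

signal = [4.0, 6.0, 10.0, 12.0, 14.0, 14.0, 2.0, 0.0]
a, d = haar_forward(signal)
assert all(abs(u - v) < 1e-12 for u, v in zip(haar_inverse(a, d), signal))
```

On smooth signals most detail coefficients are small, which is what makes thresholding them an effective compression step.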
Intrinsic irreversibility in quantum theory
International Nuclear Information System (INIS)
Prigogine, I.; Petrosky, T.Y.
1987-01-01
Quantum theory has a dual structure: while solutions of the Schroedinger equation evolve in a deterministic and time reversible way, measurement introduces irreversibility and stochasticity. This presents a contrast to Bohr-Sommerfeld-Einstein theory, in which transitions between quantum states are associated with spontaneous and induced transitions, defined in terms of stochastic processes. A new form of quantum theory is presented here, which contains an intrinsic form of irreversibility, independent of observation. This new form applies to situations corresponding to a continuous spectrum and to quantum states with finite life time. The usual non-commutative algebra associated to quantum theory is replaced by more general algebra, in which operators are also non-distributive. Our approach leads to a number of predictions, which hopefully may be verified or refuted in the next years. (orig.)
Large-order perturbation theory
International Nuclear Information System (INIS)
Wu, T.T.
1982-01-01
The original motivation for studying the asymptotic behavior of the coefficients of perturbation series came from quantum field theory. An overview is given of some of the attempts to understand quantum field theory beyond finite-order perturbation series. At least in the case of the Thirring model, and probably in general, the full content of a relativistic quantum field theory cannot be recovered from its perturbation series. This difficulty, however, does not occur in quantum mechanics, and the anharmonic oscillator is used to illustrate the methods used in large-order perturbation theory. Two completely different methods are discussed, the first one using the WKB approximation, and a second one involving the statistical analysis of Feynman diagrams. The first one is well developed and gives detailed information about the desired asymptotic behavior, while the second one is still in its infancy and gives instead information about the distribution of vertices of the Feynman diagrams
Hallin, M.; Piegorsch, W.; El Shaarawi, A.
2012-01-01
The random variable X taking values 0, 1, 2, …, x, … with probabilities p_λ(x) = e^(−λ) λ^x / x!, where λ > 0, is called a Poisson variable, and its distribution a Poisson distribution with parameter λ. The Poisson distribution with parameter λ can be obtained as the limit of the binomial distribution as n → ∞ and p → 0 in such a way that np → λ.
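The limit statement in this entry (binomial → Poisson with np held fixed) can be checked numerically; a minimal sketch, independent of the handbook entry itself:

```python
from math import comb, exp, factorial

def binomial_pmf(x, n, p):
    """P(X = x) for X ~ Binomial(n, p)."""
    return comb(n, x) * p**x * (1 - p)**(n - x)

def poisson_pmf(x, lam):
    """P(X = x) for X ~ Poisson(lam)."""
    return exp(-lam) * lam**x / factorial(x)

# Keep n*p = lambda fixed while n grows: the binomial pmf approaches Poisson.
lam = 3.0
for n in (10, 100, 10000):
    p = lam / n
    print(n, abs(binomial_pmf(2, n, p) - poisson_pmf(2, lam)))
```

The printed discrepancies shrink roughly like 1/n, consistent with the classical Poisson limit theorem.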
National Aeronautics and Space Administration — Distributed Visualization allows anyone, anywhere, to see any simulation, at any time. Development focuses on algorithms, software, data formats, data systems and...
International Nuclear Information System (INIS)
Gong, Ha Seong
2006-02-01
This book explains electric theory and is divided into four chapters. The first chapter covers electricity and materials, the electric field, capacitance, the magnetic field and electromagnetic force, and inductance. The second chapter covers electric circuit analysis, electric resistance, heating and power, and the chemical action of current and batteries with electrolysis. The third chapter deals with alternating-current circuits: the basics of an AC circuit; the behaviour of resistance, inductance and capacitance; series and parallel RLC circuits; three-phase alternating current; two-terminal-pair networks; and voltage and current in non-linear circuits. The last chapter explains transient phenomena in RC series circuits, RL series circuits, alternating-current circuits and RLC series circuits.
International Nuclear Information System (INIS)
Nobile, G.
1993-07-01
With reference to the highly debated sustainable growth strategies proposed to counter pressing, interrelated global environmental and socio-economic problems, this paper reviews the economic and resource development theories proposed by classical and neoclassical economists. The review highlights the growing debate among public administration decision makers regarding appropriate methods to assess the worth of natural resources and ecosystems; the proposed methods tend to be biased either towards environmental protection or towards economic development. Two major difficulties in the effective implementation of sustainable growth strategies are also identified: the management of such strategies would require appropriate revisions to national accounting systems, and the dynamic flow of energy and materials between an economic system and the environment would generate a sequence of unstable structures evolving in a chaotic and unpredictable way
Theory of multiphoton ionization of atoms
International Nuclear Information System (INIS)
Szoeke, A.
1986-03-01
A non-perturbative approach to the theory of multiphoton ionization is reviewed. Adiabatic Floquet theory is its first approximation. It explains qualitatively the energy and angular distribution of photoelectrons. In many-electron atoms it predicts collective and inner shell excitation. 14 refs
A Future of Communication Theory: Systems Theory.
Lindsey, Georg N.
Concepts of general systems theory, cybernetics and the like may provide the methodology for communication theory to move from a level of technology to a level of pure science. It was the purpose of this paper to (1) demonstrate the necessity of applying systems theory to the construction of communication theory, (2) review relevant systems…
International Nuclear Information System (INIS)
Golubov, B I
2007-01-01
On the basis of the concept of the pointwise dyadic derivative, dyadic distributions are introduced as continuous linear functionals on the linear space D_d(R_+) of infinitely differentiable functions compactly supported in the positive half-axis R_+ together with all their dyadic derivatives. The completeness of the space D'_d(R_+) of dyadic distributions is established. It is shown that a locally integrable function on R_+ generates a dyadic distribution. In addition, the space S_d(R_+) of infinitely dyadically differentiable functions on R_+ rapidly decreasing in the neighbourhood of +∞ is defined. The space S'_d(R_+) of dyadic distributions of slow growth is introduced as the space of continuous linear functionals on S_d(R_+). The completeness of the space S'_d(R_+) is established; it is proved that each integrable function on R_+ with polynomial growth at +∞ generates a dyadic distribution of slow growth. Bibliography: 25 titles.
2010-12-02
Comparing Theory and Practice: An Application of Complexity Theory to General Ridgway's... Approved for Public Release; Distribution is Unlimited. Subject terms: Complexity Theory, History, Practice, Military Theory, Leadership. The report discusses key subcomponents of complexity theory - scale, adaptive leadership, and bottom-up feedback from the agents (the soldiers in the field) - and the uncertain future such forces will face.
International Nuclear Information System (INIS)
Maillard, S.; Skorek, R.; Maugis, P.; Dumont, M.
2015-01-01
This chapter presents the basic principles of cluster dynamics as a particular case of the mesoscopic rate-theory models developed to investigate fuel behaviour under irradiation, such as in UO2. Because this method simulates the evolution of the concentration of every type of point or aggregated defect in a grain of material, it produces rich information that sheds light on mechanisms of microstructure evolution and gas behaviour that are not accessible through conventional models, and can thus provide improvements to those models. Cluster dynamics parameters are mainly the energetic values governing the basic evolution mechanisms of the material (diffusion, trapping and thermal resolution). In this sense, the model is applicable to very different operational situations (irradiation, ion-beam implantation, annealing) provided that they rely on the same basic mechanisms, without requiring the additional data fitting needed by more empirical conventional models. Applied to krypton-implanted and annealed samples, this technique yields a precise interpretation of the release curves and helps assess migration mechanisms and the krypton diffusion coefficient, for which data are very difficult to obtain owing to the low solubility of the gas. (authors)
Alencar, Marcelo S
2014-01-01
During the last decade we have witnessed rapid developments of computer networks and Internet technologies along with dramatic improvements in the processing power of personal computers. These developments make Interactive Distance Education a reality. By designing and deploying distributed and collaborative applications running on computers disseminated over the Internet, distance educators can reach remote learners, overcoming the time and distance constraints. Besides the necessary theoretical base provided by lectures and written materials, hands-on experience provided by physical laboratories is a vital part for engineering education. It helps engineering students become effective professionals. Such instruction not only provides the students with the knowledge of the physical equipment but also adds the important dimension of group work and collaboration. However, laboratories are expensive to setup, to maintain and provide long hours of daily staffing. Due to budget limitations, many universities and c...
Independent production and Poisson distribution
International Nuclear Information System (INIS)
Golokhvastov, A.I.
1994-01-01
The well-known statement that inclusive cross-sections factorize in the case of independent production of particles (or clusters, jets, etc.), and the conclusion drawn from it that their multiplicity follows a Poisson distribution, do not follow from probability theory in any way. Applying the theorem on the product of independent probabilities accurately, quite different equations are obtained, and no conclusions about multiplicity distributions follow. 11 refs
DEFF Research Database (Denmark)
Borregaard, Michael Krabbe; Hendrichsen, Ditte Katrine; Nachman, Gøsta Støger
2008-01-01
Living organisms are distributed over the entire surface of the planet. The distribution of the individuals of each species is not random; on the contrary, it is strongly dependent on the biology and ecology of the species, and varies over different spatial scales. The structure of whole populations reflects the location and fragmentation pattern of the habitat types preferred by the species, and the complex dynamics of migration, colonization, and population growth taking place over the landscape. Within these, individuals are distributed among each other in regular or clumped patterns, depending on the nature of intraspecific interactions between them: while the individuals of some species repel each other and partition the available area, others form groups of varying size, determined by the fitness of each group member. The spatial distribution pattern of individuals again strongly...
Denning, Peter J.
1989-01-01
Sparse distributed memory was proposed by Pentti Kanerva as a realizable architecture that could store large patterns and retrieve them based on partial matches with patterns representing current sensory inputs. This memory exhibits behaviors, both in theory and in experiment, that resemble those previously unapproached by machines - e.g., rapid recognition of faces or odors, discovery of new connections between seemingly unrelated ideas, continuation of a sequence of events when given a cue from the middle, knowing that one doesn't know, or getting stuck with an answer on the tip of one's tongue. These behaviors are now within reach of machines that can be incorporated into the computing systems of robots capable of seeing, talking, and manipulating. Kanerva's theory is a break with the Western rationalistic tradition, allowing a new interpretation of learning and cognition that respects biology and the mysteries of individual human beings.
International Nuclear Information System (INIS)
Gruenemeyer, D.
1991-01-01
This paper reports on a Distribution Automation (DA) system, which enhances the efficiency and productivity of a utility and also provides intangible benefits such as improved public image and market advantages. A utility should evaluate the benefits and costs of such a system before committing funds. The expenditure for distribution automation is economical when justified by the deferral of a capacity increase, a decrease in peak power demand, or a reduction in O and M requirements
Vaginal drug distribution modeling.
Katz, David F; Yuan, Andrew; Gao, Yajing
2015-09-15
This review presents and applies fundamental mass transport theory describing the diffusion and convection driven mass transport of drugs to the vaginal environment. It considers sources of variability in the predictions of the models. It illustrates use of model predictions of microbicide drug concentration distribution (pharmacokinetics) to gain insights about drug effectiveness in preventing HIV infection (pharmacodynamics). The modeling compares vaginal drug distributions after different gel dosage regimens, and it evaluates consequences of changes in gel viscosity due to aging. It compares vaginal mucosal concentration distributions of drugs delivered by gels vs. intravaginal rings. Finally, the modeling approach is used to compare vaginal drug distributions across species with differing vaginal dimensions. Deterministic models of drug mass transport into and throughout the vaginal environment can provide critical insights about the mechanisms and determinants of such transport. This knowledge, and the methodology that obtains it, can be applied and translated to multiple applications, involving the scientific underpinnings of vaginal drug distribution and the performance evaluation and design of products, and their dosage regimens, that achieve it. Copyright © 2015 Elsevier B.V. All rights reserved.
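As a sketch of the diffusion-driven transport modeling this review describes, here is a minimal explicit finite-difference solver for one-dimensional diffusion of drug out of a gel layer. All parameters are hypothetical illustrations, not values from the paper:

```python
import numpy as np

# Minimal 1D diffusion sketch (hypothetical parameters, illustration only):
# drug initially confined to a gel layer diffuses into the adjacent region.
L = 1.0                    # domain length, cm
nx = 200                   # grid cells
D = 1e-5                   # diffusion coefficient, cm^2/s
dx = L / nx
dt = 0.4 * dx * dx / D     # explicit scheme is stable for dt <= dx^2 / (2 D)

c = np.zeros(nx)
c[: nx // 10] = 1.0        # drug starts in the first 10% of the domain (the gel)

def step(c):
    """One explicit finite-difference step with no-flux (reflective) boundaries."""
    ce = np.concatenate(([c[0]], c, [c[-1]]))    # ghost cells mirror the edges
    return c + D * dt / dx**2 * (ce[2:] - 2.0 * ce[1:-1] + ce[:-2])

for _ in range(1000):
    c = step(c)
```

The reflective ghost cells make the scheme conserve total drug mass exactly, a useful sanity check; convection and uptake terms, which the review also treats, would add first-order and sink terms to the same stencil.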
Sparse distributed memory overview
Raugh, Mike
1990-01-01
The Sparse Distributed Memory (SDM) project is investigating the theory and applications of a massively parallel computing architecture, called sparse distributed memory, that will support the storage and retrieval of sensory and motor patterns characteristic of autonomous systems. The immediate objectives of the project are centered in studies of the memory itself and in the use of the memory to solve problems in speech, vision, and robotics. Investigation of methods for encoding sensory data is an important part of the research. Examples of NASA missions that may benefit from this work are Space Station, planetary rovers, and solar exploration. Sparse distributed memory offers promising technology for systems that must learn through experience and be capable of adapting to new circumstances, and for operating any large complex system requiring automatic monitoring and control. Sparse distributed memory is a massively parallel architecture motivated by efforts to understand how the human brain works. Sparse distributed memory is an associative memory, able to retrieve information from cues that only partially match patterns stored in the memory. It is able to store long temporal sequences derived from the behavior of a complex system, such as progressive records of the system's sensory data and correlated records of the system's motor controls.
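A toy version of the partial-match retrieval described above, heavily simplified from Kanerva's design (the word length, location count, and Hamming radius below are illustrative choices, not the project's parameters): writes distribute a pattern into the counters of all "hard locations" within a Hamming radius of the address, and reads sum those counters and threshold each bit.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sparse distributed memory (simplified sketch of Kanerva's architecture).
n, m, r = 256, 2000, 115   # word length, number of hard locations, Hamming radius

hard = rng.integers(0, 2, size=(m, n))     # fixed random hard-location addresses
counters = np.zeros((m, n), dtype=int)     # one up/down counter per location bit

def active(addr):
    """Hard locations within Hamming distance r of the address."""
    return (hard != addr).sum(axis=1) <= r

def write(addr, word):
    counters[active(addr)] += 2 * word - 1  # +1 where the bit is 1, -1 where 0

def read(addr):
    """Sum counters over the active set and threshold each bit at zero."""
    return (counters[active(addr)].sum(axis=0) > 0).astype(int)

pattern = rng.integers(0, 2, size=n)
write(pattern, pattern)                     # autoassociative storage

noisy = pattern.copy()
noisy[rng.choice(n, size=20, replace=False)] ^= 1   # corrupt 20 of 256 bits
recovered = read(noisy)                     # partial match recovers the pattern
```

Because the active neighborhoods of nearby addresses overlap heavily, a cue with many flipped bits still reads back the stored word, which is the "retrieval from partial matches" behavior the abstract describes.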
Arifin; Puripat, Maneeporn; Yokogawa, Daisuke; Parasuk, Vudhichai; Irle, Stephan
2016-01-30
Isomerization and transformation of glucose and fructose to 5-hydroxymethylfurfural (HMF) in both ionic liquids (ILs) and water have been studied by the reference interaction site model self-consistent field with spatial electron density distribution (RISM-SCF-SEDD) method coupled with ab initio electronic structure theory, namely coupled-cluster singles, doubles, and perturbative triples (CCSD(T)). Glucose isomerization to fructose has been investigated via cyclic and open-chain mechanisms. In water, the calculations support the cyclic mechanism of glucose isomerization, with a predicted activation free energy of 23.8 kcal mol(-1) under experimental conditions. Conversely, the open-ring mechanism is more favorable in ILs, with an energy barrier of 32.4 kcal mol(-1). Moreover, the transformation of fructose into HMF via the cyclic mechanism is reasonable; the calculated activation barriers are 16.0 and 21.5 kcal mol(-1) in aqueous and IL solutions, respectively. The solvent effects of ILs can be explained by the decomposition of free energies and the solute-solvent radial distribution functions produced by RISM-SCF-SEDD. © 2015 Wiley Periodicals, Inc.
Chance and stability stable distributions and their applications
Uchaikin, Vladimir V
1999-01-01
An introduction to the theory of stable distributions and their applications, containing a modern outlook on the mathematical aspects of the theory. The authors explain numerous peculiarities of stable distributions and describe the principal concepts of probability theory and functional analysis. A significant part of the book is devoted to applications of stable distributions. Another notable feature is the material on the interconnection of stable laws with fractals, chaos and anomalous transport processes.
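The stability property that gives these laws their name can be demonstrated numerically. A sketch (not from the book) using the Cauchy distribution, the symmetric stable law with index α = 1: the average of n i.i.d. standard Cauchy variables is again standard Cauchy, so averaging does not concentrate the distribution at all, unlike the Gaussian case where the spread shrinks like 1/sqrt(n).

```python
import numpy as np

rng = np.random.default_rng(2)

# Draw single standard Cauchy variables and averages of 10 of them.
single = rng.standard_cauchy(100_000)
avg10 = rng.standard_cauchy((100_000, 10)).mean(axis=1)

def iqr(x):
    """Interquartile range, a spread measure that exists even without moments."""
    q1, q3 = np.percentile(x, [25, 75])
    return q3 - q1

print(iqr(single), iqr(avg10))   # both close to 2, the standard Cauchy IQR
```

The interquartile range is used because the Cauchy law has no mean or variance, a typical feature of the heavy-tailed stable laws the book treats.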
MACCIA, ELIZABETH S.; AND OTHERS
An annotated bibliography of 20 items and a discussion of its significance were presented to describe current utilization of subject theories in the construction of an educational theory. Also, a theory model was used to demonstrate construction of a scientific educational theory. The theory model incorporated set theory (S), information theory…
Analyzing Test-Taking Behavior: Decision Theory Meets Psychometric Theory.
Budescu, David V; Bo, Yuanchao
2015-12-01
We investigate the implications of penalizing incorrect answers to multiple-choice tests, from the perspective of both test-takers and test-makers. To do so, we use a model that combines a well-known item response theory model with prospect theory (Kahneman and Tversky, Prospect theory: An analysis of decision under risk, Econometrica 47:263-91, 1979). Our results reveal that when test-takers are fully informed of the scoring rule, the use of any penalty has detrimental effects for both test-takers (they are always penalized in excess, particularly those who are risk averse and loss averse) and test-makers (the bias of the estimated scores, as well as the variance and skewness of their distribution, increase as a function of the severity of the penalty).
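The usual rationale for penalty ("formula") scoring, which this paper's prospect-theory analysis complicates, is that the penalty is calibrated so blind guessing has expected score zero. That calibration can be stated in a few lines (this illustrates only the classical scoring rule, not the authors' model):

```python
from fractions import Fraction

def expected_guess_score(k):
    """Expected score of blind guessing on a k-option item under formula
    scoring: +1 for a correct answer, -1/(k-1) for an incorrect one."""
    penalty = Fraction(1, k - 1)
    return Fraction(1, k) * 1 + Fraction(k - 1, k) * (-penalty)

for k in (2, 4, 5):
    print(k, expected_guess_score(k))   # 0 for every k
```

The paper's point is that this risk-neutral calibration breaks down for real test-takers, whose risk and loss aversion make the penalty bite harder than the expected-value calculation suggests.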
Statistical methods in nuclear theory
International Nuclear Information System (INIS)
Shubin, Yu.N.
1974-01-01
The paper outlines statistical methods which are widely used for describing the properties of excited states of nuclei and of nuclear reactions. It discusses the physical assumptions underlying the known distributions of spacings between levels (Wigner and Poisson distributions) and of the widths of highly excited states (Porter-Thomas distribution), as well as assumptions used in the statistical theory of nuclear reactions and in fluctuation analysis. The author considers the random matrix method, which consists in replacing the matrix elements of a residual interaction by random variables with a simple statistical distribution. Experimental data are compared with results of calculations using the statistical model. The superfluid nucleus model is considered with regard to superconducting-type pair correlations
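The two level-spacing distributions mentioned here are easy to sample by inverting their cumulative distribution functions; a sketch, not taken from the paper. The Wigner surmise p(s) = (pi/2) s exp(-pi s^2/4) has CDF F(s) = 1 - exp(-pi s^2/4), while the Poisson case is p(s) = exp(-s); both are normalized to unit mean spacing.

```python
import numpy as np

rng = np.random.default_rng(1)

u = rng.random(200_000)
# Inverse-CDF sampling of unit-mean level spacings.
wigner = np.sqrt(-4.0 * np.log(1.0 - u) / np.pi)   # level repulsion at s -> 0
poisson = -np.log(1.0 - u)                         # no repulsion

print(wigner.mean(), poisson.mean())   # both close to 1
```

The qualitative difference the paper exploits shows up immediately: small spacings are strongly suppressed in the Wigner sample (level repulsion) but common in the Poisson sample.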
Müller, Gert; Sacks, Gerald
1990-01-01
These proceedings contain research and survey papers from many subfields of recursion theory, with emphasis on degree theory, in particular the development of frameworks for current techniques in this field. Other topics covered include computational complexity theory, generalized recursion theory, proof theoretic questions in recursion theory, and recursive mathematics.
Eliciting Subjective Probability Distributions with Binary Lotteries
DEFF Research Database (Denmark)
Harrison, Glenn W.; Martínez-Correa, Jimmy; Swarthout, J. Todd
2015-01-01
We test in a laboratory experiment the theoretical prediction that risk attitudes have a surprisingly small role in distorting reports from true belief distributions. We find evidence consistent with theory in our experiment....
K-theory and representation theory
International Nuclear Information System (INIS)
Kuku, A.O.
2003-01-01
This contribution includes K-theory of orders, group-rings and modules over EI categories, equivariant higher algebraic K-theory for finite, profinite and compact Lie group actions together with their relative generalisations and applications
International Nuclear Information System (INIS)
Sugama, H.
1999-08-01
The Lagrangian formulation of gyrokinetic theory is generalized in order to describe the particles' dynamics as well as the self-consistent behavior of the electromagnetic fields. The gyrokinetic equation for the particle distribution function and the gyrokinetic Maxwell's equations for the electromagnetic fields are both derived from the variational principle for a Lagrangian consisting of parts for the particles, the fields, and their interaction. In this generalized Lagrangian formulation, the energy conservation property for the total nonlinear gyrokinetic system of equations follows directly from Noether's theorem. This formulation can be utilized to derive the nonlinear gyrokinetic system of equations and the rigorously conserved total energy for fluctuations of arbitrary frequency. (author)
Gravity, general relativity theory and alternative theories
International Nuclear Information System (INIS)
Zel'dovich, Ya.B.; Grishchuk, L.P.; Moskovskij Gosudarstvennyj Univ.
1986-01-01
The main steps in the construction of the current theory of gravitation and some prospects for its subsequent development are reviewed. Attention is concentrated on a comparison of the relativistic gravitational field with other physical fields. Two equivalent formulations of general relativity (GR), geometrical and field-theoretical, are considered in detail. It is shown that some theories of gravity constructed as field theories on a flat background space-time are in fact just different formulations of GR and not alternative theories
Generalizability theory and item response theory
Glas, Cornelis A.W.; Eggen, T.J.H.M.; Veldkamp, B.P.
2012-01-01
Item response theory is usually applied to items with a selected-response format, such as multiple choice items, whereas generalizability theory is usually applied to constructed-response tasks assessed by raters. However, in many situations, raters may use rating scales consisting of items with a selected-response format. This chapter presents a short overview of how item response theory and generalizability theory were integrated to model such assessments. Further, the precision of the esti...
Foundations of compositional model theory
Czech Academy of Sciences Publication Activity Database
Jiroušek, Radim
2011-01-01
Roč. 40, č. 6 (2011), s. 623-678 ISSN 0308-1079 R&D Projects: GA MŠk 1M0572; GA ČR GA201/09/1891; GA ČR GEICC/08/E010 Institutional research plan: CEZ:AV0Z10750506 Keywords : multidimensional probability distribution * conditional independence * graphical Markov model * composition of distributions Subject RIV: IN - Informatics, Computer Science Impact factor: 0.667, year: 2011 http://library.utia.cas.cz/separaty/2011/MTR/jirousek-foundations of compositional model theory.pdf
Wealth distribution on complex networks
Ichinomiya, Takashi
2012-12-01
We study the wealth distribution of the Bouchaud-Mézard model on complex networks. It is known from numerical simulations that this distribution depends on the topology of the network; however, no one has succeeded in explaining it. Using "adiabatic" and "independent" assumptions along with the central-limit theorem, we derive equations that determine the probability distribution function. The results are compared to those of simulations for various networks. We find good agreement between our theory and the simulations, except for the case of Watts-Strogatz networks with a low rewiring rate, where the independence assumption breaks down.
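As a rough illustration, the Bouchaud-Mézard dynamics analysed above can be simulated directly. The sketch below (network size, coupling, volatility, time step and step count are all illustrative choices, not values from the paper) runs an Euler-Maruyama integration of the wealth-exchange stochastic differential equation on a complete graph:

```python
import random

def bm_step(w, adj, J=0.05, sigma=0.1, dt=0.01):
    """One Euler-Maruyama step of the Bouchaud-Mezard wealth dynamics:
    dw_i = J * sum_j (w_j - w_i) dt + sqrt(2) * sigma * w_i dB_i,
    where the sum runs over the neighbours of node i."""
    new_w = []
    for i, wi in enumerate(w):
        exchange = J * sum(w[j] - wi for j in adj[i])
        noise = (2.0 ** 0.5) * sigma * wi * random.gauss(0.0, dt ** 0.5)
        new_w.append(wi + exchange * dt + noise)
    return new_w

# Complete graph on 5 nodes; run a short trajectory from equal wealth.
random.seed(0)
n = 5
adj = [[j for j in range(n) if j != i] for i in range(n)]
w = [1.0] * n
for _ in range(1000):
    w = bm_step(w, adj)
mean_w = sum(w) / n
```

Swapping in a different `adj` (e.g. a Watts-Strogatz wiring) reproduces the topology dependence the paper studies; the exchange term conserves total wealth, so only the multiplicative noise moves the mean.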
Probability theory a foundational course
Pakshirajan, R P
2013-01-01
This book shares the dictum of J. L. Doob in treating Probability Theory as a branch of Measure Theory and establishes this relation early. Probability measures in product spaces are introduced right at the start by way of laying the groundwork to later claim the existence of stochastic processes with prescribed finite dimensional distributions. Other topics analysed in the book include supports of probability measures, zero-one laws in product measure spaces, Erdos-Kac invariance principle, functional central limit theorem and functional law of the iterated logarithm for independent variables, Skorohod embedding, and the use of analytic functions of a complex variable in the study of geometric ergodicity in Markov chains. This book is offered as a textbook for students pursuing graduate programs in Mathematics and/or Statistics. The book aims to help the teacher present the theory with ease, and to help the student sustain his interest and joy in learning the subject.
Kinetic theory and transport phenomena
Soto, Rodrigo
2016-01-01
This textbook presents kinetic theory, which is a systematic approach to describing nonequilibrium systems. The text is balanced between the fundamental concepts of kinetic theory (irreversibility, transport processes, separation of time scales, conservations, coarse graining, distribution functions, etc.) and the results and predictions of the theory, where the relevant properties of different systems are computed. The book is organised in thematic chapters where different paradigmatic systems are studied. The specific features of these systems are described, building and analysing the appropriate kinetic equations. Specifically, the book considers the classical transport of charges, the dynamics of classical gases, Brownian motion, plasmas, and self-gravitating systems, quantum gases, the electronic transport in solids and, finally, semiconductors. Besides these systems that are studied in detail, concepts are applied to some modern examples including the quark–gluon plasma, the motion of bacterial suspen...
Light Meson Distribution Amplitudes
Arthur, R.; Brommel, D.; Donnellan, M.A.; Flynn, J.M.; Juttner, A.; de Lima, H.Pedroso; Rae, T.D.; Sachrajda, C.T.; Samways, B.
2010-01-01
We calculated the first two moments of the light-cone distribution amplitudes for the pseudoscalar mesons ($\\pi$ and $K$) and the longitudinally polarised vector mesons ($\\rho$, $K^*$ and $\\phi$) as part of the UKQCD and RBC collaborations' $N_f=2+1$ domain-wall fermion phenomenology programme. These quantities were obtained with a good precision and, in particular, the expected effects of $SU(3)$-flavour symmetry breaking were observed. Operators were renormalised non-perturbatively and extrapolations to the physical point were made, guided by leading order chiral perturbation theory. The main results presented are for two volumes, $16^3\\times 32$ and $24^3\\times 64$, with a common lattice spacing. Preliminary results for a lattice with a finer lattice spacing, $32^3\\times64$, are discussed and a first look is taken at the use of twisted boundary conditions to extract distribution amplitudes.
Distributed picture compilation demonstration
Alexander, Richard; Anderson, John; Leal, Jeff; Mullin, David; Nicholson, David; Watson, Graham
2004-08-01
A physical demonstration of distributed surveillance and tracking is described. The demonstration environment is an outdoor car park overlooked by a system of four rooftop cameras. The cameras extract moving objects from the scene, and these objects are tracked in a decentralized way, over a real communication network, using the information form of the standard Kalman filter. Each node therefore has timely access to the complete global picture and, because there is no single point of failure in the system, it is robust. The demonstration system and its main components are described here, with an emphasis on some of the lessons we have learned as a result of applying a corpus of distributed data fusion theory and algorithms in practice. Initial results are presented and future plans to scale up the network are also outlined.
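The decentralized tracking described above works because fusion is additive in the information form of the Kalman filter: each node contributes an information state y_i = z_i/r_i and an information matrix Y_i = 1/r_i, and these simply sum. A minimal scalar sketch (the camera readings and variances are invented for illustration) shows two nodes' contributions reproducing the centralized estimate:

```python
def fuse(measurements):
    """Fuse (value, variance) pairs in information form:
    total information Y = sum(1/r_i), information state y = sum(z_i / r_i).
    Returns the fused estimate y/Y and its variance 1/Y."""
    Y = sum(1.0 / r for _, r in measurements)
    y = sum(z / r for z, r in measurements)
    return y / Y, 1.0 / Y

# Two camera nodes observe the same target coordinate (illustrative numbers).
node_a = (10.2, 4.0)   # (estimate, variance)
node_b = (9.8, 1.0)
est, var = fuse([node_a, node_b])
# est = (10.2/4 + 9.8/1) / (1/4 + 1/1) = 12.35 / 1.25 = 9.88, var = 0.8
```

Because the sums can be accumulated in any order, each node can broadcast its increments and every receiver ends up with the same global picture, which is the property the demonstration exploits.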
Sequential approach to Colombeau's theory of generalized functions
International Nuclear Information System (INIS)
Todorov, T.D.
1987-07-01
J.F. Colombeau's generalized functions are constructed as equivalence classes of the elements of a specially chosen ultrapower of the class of C∞-functions. The elements of this ultrapower are considered as sequences of C∞-functions, so in a sense the sequential construction presented here refers to the original Colombeau theory just as, for example, the Mikusinski sequential approach to distribution theory refers to the original Schwartz theory of distributions. The paper could be used as an elementary introduction to the Colombeau theory, in which a solution was recently found to the problem of multiplication of Schwartz distributions. (author)
The coherence problem with the Unified Neutral Theory of biodiversity
James S. Clark
2012-01-01
The Unified Neutral Theory of Biodiversity (UNTB), proposed as an alternative to niche theory, has been viewed as a theory that species coexist without niche differences, without fitness differences, or with equal probability of success. Support is claimed when models lacking species differences predict highly aggregated metrics, such as species abundance distributions...
Becker, Katrin; Becker, Melanie; Schwarz, John H.
String theory is one of the most exciting and challenging areas of modern theoretical physics. This book guides the reader from the basics of string theory to recent developments. It introduces the basics of perturbative string theory, world-sheet supersymmetry, space-time supersymmetry, conformal field theory and the heterotic string, before describing modern developments, including D-branes, string dualities and M-theory. It then covers string geometry and flux compactifications, applications to cosmology and particle physics, black holes in string theory and M-theory, and the microscopic origin of black-hole entropy. It concludes with Matrix theory, the AdS/CFT duality and its generalizations. This book is ideal for graduate students and researchers in modern string theory, and will make an excellent textbook for a one-year course on string theory. It contains over 120 exercises with solutions, and over 200 homework problems with solutions available on a password protected website for lecturers at www.cambridge.org/9780521860697.
DEFF Research Database (Denmark)
Glaveanu, Vlad Petre
This book challenges the standard view that creativity comes only from within an individual by arguing that creativity also exists ‘outside’ of the mind or, more precisely, that the human mind extends through the means of action into the world. The notion of ‘distributed creativity’ is not commonly used within the literature and yet it has the potential to revolutionise the way we think about creativity, from how we define and measure it to what we can practically do to foster and develop creativity. Drawing on cultural psychology, ecological psychology and advances in cognitive science, this book offers a basic framework for the study of distributed creativity that considers three main dimensions of creative work: sociality, materiality and temporality. Starting from the premise that creativity is distributed between people, between people and objects and across time, the book reviews...
Boundary feedback stabilization of distributed parameter systems
DEFF Research Database (Denmark)
Pedersen, Michael
1988-01-01
The author introduces the method of pseudo-differential stabilization. He notes that the theory of pseudo-differential boundary operators is a fruitful approach to problems arising in control and stabilization theory of distributed-parameter systems. The basic pseudo-differential calculus can...
On a connection between Stieltjes continued fraction, KAM theory and E-infinity theory
International Nuclear Information System (INIS)
Marek-Crnjac, L.
2004-01-01
In the present work we establish a connection between El Naschie's E-infinity theory and Stieltjes solution of the problem on the distribution of mass along a line and KAM theory following Gantmacher and Krein mechanical interpretation of Stieltjes' classical research on the subject
Van Steen, Maarten
2017-01-01
For this third edition of "Distributed Systems," the material has been thoroughly revised and extended, integrating principles and paradigms into nine chapters: 1. Introduction 2. Architectures 3. Processes 4. Communication 5. Naming 6. Coordination 7. Replication 8. Fault tolerance 9. Security A separation has been made between basic material and more specific subjects. The latter have been organized into boxed sections, which may be skipped on first reading. To assist in understanding the more algorithmic parts, example programs in Python have been included. The examples in the book leave out many details for readability, but the complete code is available through the book's Website, hosted at www.distributed-systems.net.
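In the spirit of the book's Python examples (the sketch below is an illustration of the coordination chapter's subject matter, not code from the book or its website), Lamport's logical clocks, a staple of the coordination material, can be condensed to a few lines:

```python
class LamportClock:
    """Minimal Lamport logical clock (illustrative sketch)."""
    def __init__(self):
        self.time = 0

    def tick(self):
        """Local event: advance the clock."""
        self.time += 1
        return self.time

    def send(self):
        """Timestamp an outgoing message (counts as a local event)."""
        return self.tick()

    def receive(self, ts):
        """Merge rule on receipt: max(local, received) + 1."""
        self.time = max(self.time, ts) + 1
        return self.time

p, q = LamportClock(), LamportClock()
t1 = p.tick()          # p advances to 1
msg = p.send()         # p advances to 2, message carries timestamp 2
t2 = q.receive(msg)    # q jumps to max(0, 2) + 1 = 3
```

The merge rule guarantees that causally related events get increasing timestamps, which is all many coordination algorithms need; the book's boxed sections develop the stronger vector-clock variant.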
Mathematical theory of sedimentation analysis
Fujita, Hiroshi; Van Rysselberghe, P
1962-01-01
Mathematical Theory of Sedimentation Analysis presents the flow equations for the ultracentrifuge. This book is organized into two parts encompassing six chapters that evaluate the systems of reacting components, the differential equations for the ultracentrifuge, and the case of negligible diffusion. The first chapters consider the Archibald method for molecular weight determination; pressure-dependent sedimentation; expressions for the refractive index and its gradient; relation between refractive index and concentration; and the analysis of Gaussian distribution. Other chapters deal with th
Stochastic theory of fatigue corrosion
Hu, Haiyun
1999-10-01
A stochastic theory of corrosion has been constructed. The stochastic equations are described, giving the transportation corrosion rate and the fluctuation corrosion coefficient. In addition, the pit diameter distribution function, the average pit diameter and the most probable pit diameter, together with other related empirical formulae, have been derived. In order to clarify the effect of stress range on the initiation and growth behaviour of pitting corrosion, round smooth specimens were tested under cyclic loading in 3.5% NaCl solution.
Thermodynamic theory of equilibrium fluctuations
International Nuclear Information System (INIS)
Mishin, Y.
2015-01-01
The postulational basis of classical thermodynamics has been expanded to incorporate equilibrium fluctuations. The main additional elements of the proposed thermodynamic theory are the concept of quasi-equilibrium states, a definition of non-equilibrium entropy, a fundamental equation of state in the entropy representation, and a fluctuation postulate describing the probability distribution of macroscopic parameters of an isolated system. Although these elements introduce a statistical component that does not exist in classical thermodynamics, the logical structure of the theory is different from that of statistical mechanics and represents an expanded version of thermodynamics. Based on this theory, we present a regular procedure for calculations of equilibrium fluctuations of extensive parameters, intensive parameters and densities in systems with any number of fluctuating parameters. The proposed fluctuation formalism is demonstrated by four applications: (1) derivation of the complete set of fluctuation relations for a simple fluid in three different ensembles; (2) fluctuations in finite-reservoir systems interpolating between the canonical and micro-canonical ensembles; (3) derivation of fluctuation relations for excess properties of grain boundaries in binary solid solutions, and (4) derivation of the grain boundary width distribution for pre-melted grain boundaries in alloys. The last two applications offer an efficient fluctuation-based approach to calculations of interface excess properties and extraction of the disjoining potential in pre-melted grain boundaries. Possible future extensions of the theory are outlined.
On the seagull effect in semi-inclusive distributions
International Nuclear Information System (INIS)
Ernst, W.
1978-01-01
Taking into account that pions contain two constituents, the semi-inclusive distributions are derived from information theory. From these distributions, the $p_L$-$p_T$ correlations in particular are calculated, and the results are compared with experimental data
Distributed Collaborative Learning Communities Enabled by Information Communication Technology
H.L. Alvarez (Heidi Lee)
2006-01-01
How and why can Information Communication Technology (ICT) contribute to enhancing learning in distributed Collaborative Learning Communities (CLCs)? Drawing from relevant theories concerned with the phenomenon of ICT-enabled distributed collaborative learning, this book identifies gaps in
Matrix-exponential distributions in applied probability
Bladt, Mogens
2017-01-01
This book contains an in-depth treatment of matrix-exponential (ME) distributions and their sub-class of phase-type (PH) distributions. Loosely speaking, an ME distribution is obtained through replacing the intensity parameter in an exponential distribution by a matrix. The ME distributions can also be identified as the class of non-negative distributions with rational Laplace transforms. If the matrix has the structure of a sub-intensity matrix for a Markov jump process we obtain a PH distribution which allows for nice probabilistic interpretations facilitating the derivation of exact solutions and closed form formulas. The full potential of ME and PH unfolds in their use in stochastic modelling. Several chapters on generic applications, like renewal theory, random walks and regenerative processes, are included together with some specific examples from queueing theory and insurance risk. We emphasize our intention towards applications by including an extensive treatment on statistical methods for PH distribu...
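The construction described above can be made concrete: a PH distribution has density f(x) = α exp(Tx) t with exit vector t = -T·1. The sketch below (a hand-rolled series-based matrix exponential, adequate only for small, well-scaled matrices; a production code would use a scaling-and-squaring routine) evaluates the density of an Erlang(2) distribution written as a PH distribution:

```python
import math

def mat_mul(A, B):
    """Dense matrix product for small list-of-lists matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def mat_exp(A, terms=60):
    """exp(A) via truncated Taylor series; fine for small well-scaled A."""
    n = len(A)
    result = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    power = [row[:] for row in result]            # accumulates A^k / k!
    for k in range(1, terms):
        power = mat_mul(power, A)
        power = [[x / k for x in row] for row in power]
        result = [[result[i][j] + power[i][j] for j in range(n)]
                  for i in range(n)]
    return result

def ph_density(alpha, T, x):
    """PH density f(x) = alpha * exp(T x) * t, with exit vector t = -T * 1."""
    n = len(T)
    t = [-sum(T[i]) for i in range(n)]
    E = mat_exp([[T[i][j] * x for j in range(n)] for i in range(n)])
    return sum(alpha[i] * E[i][j] * t[j] for i in range(n) for j in range(n))

# Erlang(2, rate 1) as a PH distribution: start in phase 0, pass through
# phase 1, then exit; its density is f(x) = x * exp(-x).
alpha = [1.0, 0.0]
T = [[-1.0, 1.0], [0.0, -1.0]]
f1 = ph_density(alpha, T, 1.0)   # analytic value: 1 * exp(-1)
```

The same `ph_density` works for any sub-intensity matrix T, which is what makes the class so convenient for the closed-form results the book develops.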
Introduction to lattice theory with computer science applications
Garg, Vijay K
2015-01-01
A computational perspective on partial order and lattice theory, focusing on algorithms and their applications This book provides a uniform treatment of the theory and applications of lattice theory. The applications covered include tracking dependency in distributed systems, combinatorics, detecting global predicates in distributed systems, set families, and integer partitions. The book presents algorithmic proofs of theorems whenever possible. These proofs are written in the calculational style advocated by Dijkstra, with arguments explicitly spelled out step by step. The author's intent
International Nuclear Information System (INIS)
Bergmann, P.G.
1980-01-01
The problem of constructing a unified field theory is discussed. The preconditions of the theory are briefly described. The main attention is paid to the geometrical interpretation of physical fields. The meaning of the concepts of manifold and fibration is elucidated. Two unified field theories are described: Weyl's conformal geometry and Kaluza's five-dimensional theory. It is proposed to consider supersymmetric theories as a new approach to the problem of a unified field theory. It is noted that the supergravity theories are really unified theories, since the fields figuring in them do not admit an invariant decomposition
Boley, Bruno A
1997-01-01
Highly regarded text presents detailed discussion of fundamental aspects of theory, background, problems with detailed solutions. Basics of thermoelasticity, heat transfer theory, thermal stress analysis, more. 1985 edition.
International Nuclear Information System (INIS)
Marciano, W.J.
1984-12-01
The present state of the art in elementary particle theory is reviewed. Topics include quantum electrodynamics, weak interactions, electroweak unification, quantum chromodynamics, and grand unified theories. 113 references
Employing Theories Far beyond Their Limits - Linear Dichroism Theory.
Mayerhöfer, Thomas G
2018-05-15
Using linearly polarized light, it is possible in the case of ordered structures, such as stretched polymers or single crystals, to determine the orientation of the transition moments of electronic and vibrational transitions. This not only helps to resolve overlapping bands, but also to assign the symmetry species of the transitions and to elucidate the structure. For quantitative spectral evaluation, an approach sometimes called "linear dichroism theory" is very often used. This approach links the relative orientation of the transition moment and the polarization direction to the quantity absorbance. This linkage is highly questionable for several reasons. First of all, absorbance is a quantity that is by its definition not compatible with Maxwell's equations. Furthermore, absorbance does not seem to be a quantity that is generally compatible with linear dichroism theory. In addition, linear dichroism theory disregards that it is not only the angle between the transition moment and the polarization direction, but also the angle between the sample surface and the transition moment, that influences band shape and intensity. Accordingly, the often-invoked "magic angle" has never existed, and the orientation distribution influences spectra to a much higher degree than it would if linear dichroism theory held strictly. A last point that is completely ignored by linear dichroism theory is the fact that partially oriented or randomly oriented samples usually consist of ordered domains. It is their size relative to the wavelength of light that can also greatly influence a spectrum. All these findings can help to elucidate orientation by optical methods to a much higher degree than the users of linear dichroism theory currently think possible. Hence, it is the goal of this contribution to point out these shortcomings of linear dichroism theory to its users, to stimulate efforts to overcome the long-lasting stagnation of this important field.
Jardine, John F
2015-01-01
This monograph on the homotopy theory of topologized diagrams of spaces and spectra gives an expert account of a subject at the foundation of motivic homotopy theory and the theory of topological modular forms in stable homotopy theory. Beginning with an introduction to the homotopy theory of simplicial sets and topos theory, the book covers core topics such as the unstable homotopy theory of simplicial presheaves and sheaves, localized theories, cocycles, descent theory, non-abelian cohomology, stacks, and local stable homotopy theory. A detailed treatment of the formalism of the subject is interwoven with explanations of the motivation, development, and nuances of ideas and results. The coherence of the abstract theory is elucidated through the use of widely applicable tools, such as Barr's theorem on Boolean localization, model structures on the category of simplicial presheaves on a site, and cocycle categories. A wealth of concrete examples convey the vitality and importance of the subject in topology, n...
Geometry of lattice field theory
International Nuclear Information System (INIS)
Honan, T.J.
1986-01-01
Using some tools of algebraic topology, a general formalism for lattice field theory is presented. The lattice is taken to be a simplicial complex that is also a manifold and is referred to as a simplicial manifold. The fields on this lattice are cochains, which are called lattice forms to emphasize the connections with differential forms in the continuum. This connection provides a new bridge between lattice and continuum field theory. A metric can be put onto this simplicial manifold by assigning lengths to every link or 1-simplex of the lattice. Regge calculus is a way of defining general relativity on this lattice. A geometric discussion of Regge calculus is presented. The Regge action, which is a discrete form of the Hilbert action, is derived from the Hilbert action using distribution-valued forms. This is a new derivation that emphasizes the underlying geometry. Kramers-Wannier duality in statistical mechanics is discussed in this general setting. Nonlinear field theories, which include gauge theories and nonlinear sigma models, are discussed in the continuum and then are put onto a lattice. The main new result here is the generalization to curved spacetime, which consists of making the theory compatible with Regge calculus
Nuclear structure theory. Annual technical progress report, October 1, 1979-August 31, 1980
International Nuclear Information System (INIS)
French, J.B.; Koltun, D.S.
1980-01-01
This report summarizes progress during the past year in the following areas of nuclear structure and reaction theory: statistical spectroscopy (including random matrix methods, with applications to fluctuations in spectra and in strength distributions, and to problems of ergodicity; group symmetries in spectral-distribution theory; electromagnetic and β transitions); meson scattering and absorption by nuclei (including general scattering theory with absorption, multiple scattering theory and its reactive content, statistical theory of absorption); and meson currents in electromagnetic transitions
Structure and thermodynamics of core-softened models for alcohols
International Nuclear Information System (INIS)
Munaò, Gianmarco; Urbic, Tomaz
2015-01-01
The phase behavior and the fluid structure of coarse-grain models for alcohols are studied by means of reference interaction site model (RISM) theory and Monte Carlo simulations. Specifically, we model ethanol and 1-propanol as linear rigid chains constituted by three (trimers) and four (tetramers) partially fused spheres, respectively. Thermodynamic properties of these models are examined in the RISM context, by employing closed formulae for the calculation of free energy and pressure. Gas-liquid coexistence curves for trimers and tetramers are reported and compared with already existing data for a dimer model of methanol. Critical temperatures slightly increase with the number of CH$_2$ groups in the chain, while critical pressures and densities decrease. Such a behavior qualitatively reproduces the trend observed in experiments on methanol, ethanol, and 1-propanol and suggests that our coarse-grain models, despite their simplicity, can reproduce the essential features of the phase behavior of such alcohols. The fluid structure of these models is investigated by computing the radial distribution function $g_{ij}(r)$ and the static structure factor $S_{ij}(k)$; the latter shows the presence of a low-$k$ peak at intermediate-high packing fractions and low temperatures, suggesting the presence of aggregates for both trimers and tetramers
Uncertainties and reliability theories for reactor safety
International Nuclear Information System (INIS)
Veneziano, D.
1975-01-01
What makes the safety problem of nuclear reactors particularly challenging is the demand for high levels of reliability and the limitation of statistical information. The latter is an unfortunate circumstance, which forces deductive theories of reliability to use models and parameter values with weak factual support. The uncertainty about probabilistic models and parameters which are inferred from limited statistical evidence can be quantified and incorporated rationally into inductive theories of reliability. In such theories, the starting point is the information actually available, as opposed to an estimated probabilistic model. But, while the necessity of introducing inductive uncertainty into reliability theories has been recognized by many authors, no satisfactory inductive theory is presently available. The paper presents: a classification of uncertainties and of reliability models for reactor safety; a general methodology to include these uncertainties into reliability analysis; a discussion about the relative advantages and the limitations of various reliability theories (specifically, of inductive and deductive, parametric and nonparametric, second-moment and full-distribution theories). For example, it is shown that second-moment theories, which were originally suggested to cope with the scarcity of data, and which have been proposed recently for the safety analysis of secondary containment vessels, are the least capable of incorporating statistical uncertainty. The focus is on reliability models for external threats (seismic accelerations and tornadoes). As an application example, the effect of statistical uncertainty on seismic risk is studied using parametric full-distribution models
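The second-moment theories discussed above reduce, in their simplest form, to a reliability index computed from means and variances alone. A minimal sketch with invented numbers follows; note that converting the index into a failure probability requires assuming the resistance and load are normal, which is exactly the kind of full-distribution information second-moment theory does not carry:

```python
import math

def std_normal_cdf(x):
    """Standard normal CDF via the complementary error function."""
    return 0.5 * math.erfc(-x / math.sqrt(2.0))

def second_moment_reliability(mu_r, sig_r, mu_s, sig_s):
    """Second-moment reliability index for the safety margin M = R - S:
    beta = E[M] / std(M), assuming R and S are independent.
    P_f = Phi(-beta) is exact only if R and S are normally distributed."""
    beta = (mu_r - mu_s) / math.hypot(sig_r, sig_s)
    return beta, std_normal_cdf(-beta)

# Illustrative numbers: resistance R ~ (mean 10, sd 1.5), load S ~ (mean 5, sd 2).
beta, pf = second_moment_reliability(10.0, 1.5, 5.0, 2.0)
# beta = 5 / sqrt(1.5**2 + 2**2) = 5 / 2.5 = 2.0
```

Two distributions with the same first two moments but different tails would share this beta while having very different true failure probabilities, which is the limitation the paper presses against second-moment theories.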
Rationality, Theory Acceptance and Decision Theory
Directory of Open Access Journals (Sweden)
J. Nicolas Kaufmann
1998-06-01
Following Kuhn's main thesis, according to which theory revision and acceptance is always paradigm-relative, I propose to outline some possible consequences of such a view. First, asking in what sense Bayesian decision theory could serve as the appropriate (normative) theory of rationality when examined from the point of view of the epistemology of theory acceptance, I argue that Bayesianism leads to a narrow conception of theory acceptance. Second, regarding the different types of theory revision, i.e. expansion, contraction, replacement and residual shifts, I extract from Kuhn's view a series of indications showing that theory replacement cannot be rationalized within the framework of Bayesian decision theory, not even within a more sophisticated version of that model. Third, and finally, I point to the need for a more comprehensive model of rationality than the Bayesian expected-utility maximization model, one which could better deal with the different aspects of theory replacement. I show that Kuhn's distinction between normal and revolutionary science gives us several hints for a more adequate theory of rationality in science. I also show that Kuhn is not in a position to fully articulate his main ideas and that he may well be confronted with a serious problem concerning the collective choice of a paradigm.
Quantum theory of massive Yang-Mills fields, 3
International Nuclear Information System (INIS)
Fukuda, Takashi; Matsuda, Hiroaki; Seki, Yoshinori; Yokoyama, Kan-ichi
1983-01-01
The renormalizable structure of a massive Yang-Mills field theory proposed previously is revealed in view of nonpolynomial Lagrangian theories. Analytic properties of several relevant superpropagators are elucidated in the sense of distributions. It is shown that these regularized superpropagators exhibit a strong infinity-suppression mechanism making the theory renormalizable. There appears a divergence-free model as a subcase of the present theory. (author)
Lindström, Robin; Rosvall, Tobias
2013-01-01
A performance analysis was carried out on a SAAB 2000 as a reference object. Various methods for powering aircraft in a more environmentally friendly way were evaluated together with distributed propulsion. After these investigations, electric motors combined with zinc-air batteries were chosen to power the SAAB 2000 with distributed propulsion. A performance analysis was carried out for this aircraft in the same way as for the original SAAB 2000. The results were compared, and the conclusion was that the range was too short for the configuration to...
From chaos to unification: U theory vs. M theory
International Nuclear Information System (INIS)
Ye, Fred Y.
2009-01-01
A unified physical theory called U theory, that is different from M theory, is defined and characterized. U theory, which includes spinor and twistor theory, loop quantum gravity, causal dynamical triangulations, E-infinity unification theory, and Clifford-Finslerian unifications, is based on physical tradition and experimental foundations. In contrast, M theory pays more attention to mathematical forms. While M theory is characterized by supersymmetry string theory, U theory is characterized by non-supersymmetry unified field theory.
Binns, Lewis A.; Valachis, Dimitris; Anderson, Sean; Gough, David W.; Nicholson, David; Greenway, Phil
2002-07-01
Previously, we have developed techniques for Simultaneous Localization and Map Building based on the augmented state Kalman filter. Here we report the results of experiments conducted over multiple vehicles each equipped with a laser range finder for sensing the external environment, and a laser tracking system to provide highly accurate ground truth. The goal is simultaneously to build a map of an unknown environment and to use that map to navigate a vehicle that otherwise would have no way of knowing its location, and to distribute this process over several vehicles. We have constructed an on-line, distributed implementation to demonstrate the principle. In this paper we describe the system architecture, the nature of the experimental set up, and the results obtained. These are compared with the estimated ground truth. We show that distributed SLAM has a clear advantage in the sense that it offers a potential super-linear speed-up over single vehicle SLAM. In particular, we explore the time taken to achieve a given quality of map, and consider the repeatability and accuracy of the method. Finally, we discuss some practical implementation issues.
The Properties of Model Selection when Retaining Theory Variables
DEFF Research Database (Denmark)
Hendry, David F.; Johansen, Søren
Economic theories are often fitted directly to data to avoid possible model selection biases. We show that when a theory model specifying the correct set of m relevant exogenous variables, x_t, is embedded within the larger set of m+k candidate variables, (x_t, w_t), selection over the second set by statistical significance can be undertaken without affecting the estimator distribution of the theory parameters. This strategy returns the theory-parameter estimates when the theory is correct, yet protects against the theory being under-specified because some w_t are relevant.
Contemporary theories of democracy
Directory of Open Access Journals (Sweden)
Mladenović Ivan
2008-01-01
Full Text Available The aim of this paper is two-fold: first, to analyze several contemporary theories of democracy, and second, to propose a theoretical framework for further investigations based on the analyzed theories. The following four theories will be analyzed: pluralism, social choice theory, deliberative democracy and participatory democracy.
Moschovakis, YN
1987-01-01
Now available in paperback, this monograph is a self-contained exposition of the main results and methods of descriptive set theory. It develops all the necessary background material from logic and recursion theory, and treats both classical descriptive set theory and the effective theory developed by logicians.
't Hooft, Gerardus; Witten, Edward
2005-01-01
In his later years, Einstein sought a unified theory that would extend general relativity and provide an alternative to quantum theory. There is now talk of a "theory of everything"; fifty years after his death, how close are we to such a theory? (3 pages)
de Bruin, B.P.
2005-01-01
Game theory is the mathematical study of strategy and conflict. It has wide applications in economics, political science, sociology, and, to some extent, in philosophy. Where rational choice theory or decision theory is concerned with individual agents facing games against nature, game theory deals
The Impact of Education on Income Distribution.
Tinbergen, Jan
The author's previously developed theory on income distribution, in which two of the explanatory variables are the average level and the distribution of education, is refined and tested on data selected and processed by the author and data from three studies by Americans. The material consists of data on subdivisions of three countries, the United…
Hidden variable interpretation of spontaneous localization theory
Energy Technology Data Exchange (ETDEWEB)
Bedingham, Daniel J, E-mail: d.bedingham@imperial.ac.uk [Blackett Laboratory, Imperial College, London SW7 2BZ (United Kingdom)
2011-07-08
The spontaneous localization theory of Ghirardi, Rimini, and Weber (GRW) is a theory in which wavepacket reduction is treated as a genuine physical process. Here it is shown that the mathematical formalism of GRW can be given an interpretation in terms of an evolving distribution of particles on configuration space similar to Bohmian mechanics (BM). The GRW wavefunction acts as a pilot wave for the set of particles. In addition, a continuous stream of noisy information concerning the precise whereabouts of the particles must be specified. Nonlinear filtering techniques are used to determine the dynamics of the distribution of particles conditional on this noisy information and consistency with the GRW wavefunction dynamics is demonstrated. Viewing this development as a hybrid BM-GRW theory, it is argued that, besides helping to clarify the relationship between the GRW theory and BM, its merits make it worth considering in its own right.
Institute of Scientific and Technical Information of China (English)
(no author listed)
2003-01-01
The basic ideas of game theory originated from the problems of maxima and minima posed by J. von Neumann in 1928. Later, wars accelerated the study of game theory, and many developments contributed to its advancement as problems of optimization appeared in the course of economic development. Scientists applied mathematical methods to game theory to make the theory more profound and complete. The axiomatic structure of game theory was nearly complete by 1944. Game theory developed from the finite to the infinite, from two players to many players, from expressing gains quantitatively to describing game outcomes abstractly, and from deterministic problems to random problems. The development of game theory is thus closely related to economic development. In recent years, research on the non-differentiability of the Shapley value, posed by the Belgian mathematician Mertens, has been one of the advanced topics in game theory.
Nonrelativistic superstring theories
International Nuclear Information System (INIS)
Kim, Bom Soo
2007-01-01
We construct a supersymmetric version of the critical nonrelativistic bosonic string theory [B. S. Kim, Phys. Rev. D 76, 106007 (2007).] with its manifest global symmetry. We introduce the anticommuting bc conformal field theory (CFT) which is the super partner of the βγ CFT. The conformal weights of the b and c fields are both 1/2. The action of the fermionic sector can be transformed into that of the relativistic superstring theory. We explicitly quantize the theory with manifest SO(8) symmetry and find that the spectrum is similar to that of type IIB superstring theory. There is one notable difference: the fermions are nonchiral. We further consider noncritical generalizations of the supersymmetric theory using the superspace formulation. There is an infinite range of possible string theories similar to the supercritical string theories. We comment on the connection between the critical nonrelativistic string theory and the lightlike linear dilaton theory
Nonrelativistic closed string theory
International Nuclear Information System (INIS)
Gomis, Jaume; Ooguri, Hirosi
2001-01-01
We construct a Galilean invariant nongravitational closed string theory whose excitations satisfy a nonrelativistic dispersion relation. This theory can be obtained by taking a consistent low energy limit of any of the conventional string theories, including the heterotic string. We give a finite first order worldsheet Hamiltonian for this theory and show that this string theory has a sensible perturbative expansion, interesting high energy behavior of scattering amplitudes and a Hagedorn transition of the thermal ensemble. The strong coupling duals of the Galilean superstring theories are considered and are shown to be described by an eleven-dimensional Galilean invariant theory of light membrane fluctuations. A new class of Galilean invariant nongravitational theories of light-brane excitations are obtained. We exhibit dual formulations of the strong coupling limits of these Galilean invariant theories and show that they exhibit many of the conventional dualities of M theory in a nonrelativistic setting
Gauge theory loop operators and Liouville theory
International Nuclear Information System (INIS)
Drukker, Nadav; Teschner, Joerg
2009-10-01
We propose a correspondence between loop operators in a family of four dimensional N=2 gauge theories on S 4 - including Wilson, 't Hooft and dyonic operators - and Liouville theory loop operators on a Riemann surface. This extends the beautiful relation between the partition function of these N=2 gauge theories and Liouville correlators found by Alday, Gaiotto and Tachikawa. We show that the computation of these Liouville correlators with the insertion of a Liouville loop operator reproduces Pestun's formula capturing the expectation value of a Wilson loop operator in the corresponding gauge theory. We prove that our definition of Liouville loop operators is invariant under modular transformations, which given our correspondence, implies the conjectured action of S-duality on the gauge theory loop operators. Our computations in Liouville theory make an explicit prediction for the exact expectation value of 't Hooft and dyonic loop operators in these N=2 gauge theories. The Liouville loop operators are also found to admit a simple geometric interpretation within quantum Teichmueller theory as the quantum operators representing the length of geodesics. We study the algebra of Liouville loop operators and show that it gives evidence for our proposal as well as providing definite predictions for the operator product expansion of loop operators in gauge theory. (orig.)
Identity theory and personality theory: mutual relevance.
Stryker, Sheldon
2007-12-01
Some personality psychologists have found a structural symbolic interactionist frame and identity theory relevant to their work. This frame and theory, developed in sociology, are first reviewed. Emphasized in the review are a multiple identity conception of self, identities as internalized expectations derived from roles embedded in organized networks of social interaction, and a view of social structures as facilitators in bringing people into networks or constraints in keeping them out, subsequently, attention turns to a discussion of the mutual relevance of structural symbolic interactionism/identity theory and personality theory, looking to extensions of the current literature on these topics.
Towards a theory of spacetime theories
Schiemann, Gregor; Scholz, Erhard
2017-01-01
This contributed volume is the result of a July 2010 workshop at the University of Wuppertal Interdisciplinary Centre for Science and Technology Studies which brought together world-wide experts from physics, philosophy and history, in order to address a set of questions first posed in the 1950s: How do we compare spacetime theories? How do we judge, objectively, which is the “best” theory? Is there even a unique answer to this question? The goal of the workshop, and of this book, is to contribute to the development of a meta-theory of spacetime theories. Such a meta-theory would reveal insights about specific spacetime theories by distilling their essential similarities and differences, deliver a framework for a class of theories that could be helpful as a blueprint to build other meta-theories, and provide a higher level viewpoint for judging which theory most accurately describes nature. But rather than drawing a map in broad strokes, the focus is on particularly rich regions in the “space of spaceti...
Gauge theory loop operators and Liouville theory
Energy Technology Data Exchange (ETDEWEB)
Drukker, Nadav [Humboldt Univ. Berlin (Germany). Inst. fuer Physik; Gomis, Jaume; Okuda, Takuda [Perimeter Inst. for Theoretical Physics, Waterloo, ON (Canada); Teschner, Joerg [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany)
2009-10-15
We propose a correspondence between loop operators in a family of four dimensional N=2 gauge theories on S{sup 4} - including Wilson, 't Hooft and dyonic operators - and Liouville theory loop operators on a Riemann surface. This extends the beautiful relation between the partition function of these N=2 gauge theories and Liouville correlators found by Alday, Gaiotto and Tachikawa. We show that the computation of these Liouville correlators with the insertion of a Liouville loop operator reproduces Pestun's formula capturing the expectation value of a Wilson loop operator in the corresponding gauge theory. We prove that our definition of Liouville loop operators is invariant under modular transformations, which given our correspondence, implies the conjectured action of S-duality on the gauge theory loop operators. Our computations in Liouville theory make an explicit prediction for the exact expectation value of 't Hooft and dyonic loop operators in these N=2 gauge theories. The Liouville loop operators are also found to admit a simple geometric interpretation within quantum Teichmueller theory as the quantum operators representing the length of geodesics. We study the algebra of Liouville loop operators and show that it gives evidence for our proposal as well as providing definite predictions for the operator product expansion of loop operators in gauge theory. (orig.)
DEFF Research Database (Denmark)
Andersen, Jack
2015-01-01
Purpose To provide a small overview of genre theory and its associated concepts and to show how genre theory has had its antecedents in certain parts of the social sciences and not in the humanities. Findings The chapter argues that the explanatory force of genre theory may be explained by its emphasis on everyday genres, de facto genres. Originality/value By providing an overview of genre theory, the chapter demonstrates the wealth and richness of forms of explanation in genre theory.
Conlon, Joseph
2016-01-01
Is string theory a fraud or one of the great scientific advances? Why do so many physicists work on string theory if it cannot be tested? This book provides insight into why such a theory, with little direct experimental support, plays such a prominent role in theoretical physics. The book gives a modern and accurate account of string theory and science, explaining what string theory is, why it is regarded as so promising, and why it is hard to test.
Teaching Theory X and Theory Y in Organizational Communication
Noland, Carey
2014-01-01
The purpose of the activity described here is to integrate McGregor's Theory X and Theory Y into a group application: design a syllabus that embodies either Theory X or Theory Y tenets. Students should be able to differentiate between Theory X and Theory Y, create a syllabus based on Theory X or Theory Y tenets, evaluate the different syllabi…
Probability theory and mathematical statistics for engineers
Pugachev, V S
1984-01-01
Probability Theory and Mathematical Statistics for Engineers focuses on the concepts of probability theory and mathematical statistics for finite-dimensional random variables. The publication first underscores the probabilities of events, random variables, and numerical characteristics of random variables. Discussions focus on canonical expansions of random vectors, second-order moments of random vectors, generalization of the density concept, entropy of a distribution, direct evaluation of probabilities, and conditional probabilities. The text then examines projections of random vector
The theory of hybrid stochastic algorithms
International Nuclear Information System (INIS)
Duane, S.; Kogut, J.B.
1986-01-01
The theory of hybrid stochastic algorithms is developed. A generalized Fokker-Planck equation is derived and is used to prove that the correct equilibrium distribution is generated by the algorithm. Systematic errors following from the discrete time-step used in the numerical implementation of the scheme are computed. Hybrid algorithms which simulate lattice gauge theory with dynamical fermions are presented. They are optimized in computer simulations and their systematic errors and efficiencies are studied. (orig.)
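The hybrid scheme described above alternates deterministic molecular-dynamics trajectories with stochastic momentum refreshment, and a Metropolis accept/reject step removes the systematic error from the discrete time step. A minimal illustrative sketch in Python for the toy action S(x) = x²/2, whose equilibrium distribution is a unit Gaussian, rather than a lattice gauge theory with dynamical fermions; all names and parameters here are illustrative, not from the paper:

```python
import math
import random

def hybrid_mc(n_samples, n_steps=10, dt=0.2, seed=0):
    """Hybrid (Hamiltonian) Monte Carlo for the toy action S(x) = x^2 / 2,
    whose equilibrium distribution exp(-S) is a standard Gaussian."""
    rng = random.Random(seed)
    S = lambda x: 0.5 * x * x
    grad_S = lambda x: x                      # dS/dx
    x = 0.0
    samples = []
    for _ in range(n_samples):
        p = rng.gauss(0.0, 1.0)               # stochastic momentum refreshment
        x_new, p_new = x, p
        # leapfrog integration of the fictitious Hamiltonian dynamics
        p_new -= 0.5 * dt * grad_S(x_new)
        for _ in range(n_steps - 1):
            x_new += dt * p_new
            p_new -= dt * grad_S(x_new)
        x_new += dt * p_new
        p_new -= 0.5 * dt * grad_S(x_new)
        # Metropolis step removes the finite-time-step systematic error
        dH = (S(x_new) + 0.5 * p_new**2) - (S(x) + 0.5 * p**2)
        if dH < 0 or rng.random() < math.exp(-dH):
            x = x_new
        samples.append(x)
    return samples

samples = hybrid_mc(20000)
var = sum(s * s for s in samples) / len(samples)   # should be close to 1
```

Without the Metropolis step the sampled variance would carry an O(dt²) bias; with it, the algorithm is exact for any step size.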
Introduction to the theory of games
McKinsey, John C C
1952-01-01
One of the classic early monographs on game theory, this comprehensive overview illustrates the theory's applications to situations involving conflicts of interest, including economic, social, political, and military contexts. Contents include a survey of rectangular games; a method of approximating the value of a game; games in extensive form and those with infinite strategies; distribution functions; Stieltjes integrals; the fundamental theorem for continuous games; separable games; games with convex payoff functions; applications to statistical inference; and much more. Appropriate for adva
Gravitational lensing in metric theories of gravity
International Nuclear Information System (INIS)
Sereno, Mauro
2003-01-01
Gravitational lensing in metric theories of gravity is discussed. I introduce a generalized approximate metric element, inclusive of both post-post-Newtonian contributions and a gravitomagnetic field. Following Fermat's principle and standard hypotheses, I derive the time delay function and deflection angle caused by an isolated mass distribution. Several astrophysical systems are considered. In most of the cases, the gravitomagnetic correction offers the best perspectives for an observational detection. Actual measurements distinguish only marginally different metric theories from each other
Distributions with given marginals and statistical modelling
Fortiana, Josep; Rodriguez-Lallena, José
2002-01-01
This book contains a selection of the papers presented at the meeting `Distributions with given marginals and statistical modelling', held in Barcelona (Spain), July 17-20, 2000. In 24 chapters, this book covers topics such as the theory of copulas and quasi-copulas, the theory and compatibility of distributions, models for survival distributions and other well-known distributions, time series, categorical models, definition and estimation of measures of dependence, monotonicity and stochastic ordering, shape and separability of distributions, hidden truncation models, diagonal families, orthogonal expansions, tests of independence, and goodness of fit assessment. These topics share the use and properties of distributions with given marginals, this being the fourth specialised text on this theme. The innovative aspect of the book is the inclusion of statistical aspects such as modelling, Bayesian statistics, estimation, and tests.
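The copula machinery discussed above separates a joint distribution into its marginals and a dependence structure, so a joint sample can be built from any pair of marginal quantile functions. A minimal sketch of that construction, assuming a Gaussian copula; the function names and parameters are illustrative, not taken from the book:

```python
import math
import random

def gaussian_copula_sample(n, rho, inv_cdf_x, inv_cdf_y, seed=1):
    """Draw (X, Y) pairs with the given marginals, coupled by a Gaussian copula."""
    rng = random.Random(seed)
    # standard normal CDF via the error function
    phi = lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    out = []
    for _ in range(n):
        z1 = rng.gauss(0.0, 1.0)
        z2 = rho * z1 + math.sqrt(1.0 - rho * rho) * rng.gauss(0.0, 1.0)
        u, v = phi(z1), phi(z2)           # uniform marginals: the copula itself
        out.append((inv_cdf_x(u), inv_cdf_y(v)))
    return out

# Exponential(1) and Uniform(0,1) marginals, positively dependent
pairs = gaussian_copula_sample(
    5000, rho=0.8,
    inv_cdf_x=lambda u: -math.log(1.0 - u),   # Exp(1) quantile function
    inv_cdf_y=lambda v: v)                    # Uniform(0,1) quantile function
```

The marginals come out exactly as specified while the dependence is controlled entirely by rho.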
Beare, Brendan K.
2009-01-01
Suppose that X and Y are random variables. We define a replicating function to be a function f such that f(X) and Y have the same distribution. In general, the set of replicating functions for a given pair of random variables may be infinite. Suppose we have some objective function, or cost function, defined over the set of replicating functions, and we seek to estimate the replicating function with the lowest cost. We develop an approach to estimating the cheapest replicating function that i...
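One concrete member of the set of replicating functions is the quantile-transform composition f = F_Y^{-1} ∘ F_X: for continuous X, F_X(X) is uniform, so f(X) has the distribution of Y. A small self-contained check of this construction; the specific marginals chosen here are illustrative, not from the paper:

```python
import math
import random

# Replicating function for X ~ Uniform(0,1) and Y ~ Exponential(1):
# f = F_Y^{-1} o F_X, where F_X(x) = x on (0,1) and F_Y^{-1}(u) = -ln(1-u).
def f(x):
    return -math.log(1.0 - x)

rng = random.Random(42)
fx = [f(rng.random()) for _ in range(50000)]
mean_fx = sum(fx) / len(fx)   # Exp(1) has mean 1, so this should be close to 1
```

Any measure-preserving rearrangement gives another replicating function, which is why the set is generally infinite and a cost criterion is needed to pick one out.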
2007-01-01
Please note that starting from 1 March 2007, the mail distribution and collection times will be modified for the following buildings: 6, 8, 9, 10, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 29, 69, 40, 70, 101, 102, 109, 118, 152, 153, 154, 155, 166, 167, 169, 171, 174, 261, 354, 358, 576, 579 and 580. Further information on the new times will be posted on the entry doors and left in the mail boxes of each building. TS/FM Group
Stewart, Stan
2004-01-01
Switchgear plays a fundamental role within the power supply industry. It is required to isolate faulty equipment, divide large networks into sections for repair purposes, reconfigure networks in order to restore power supplies and control other equipment. This book begins with the general principles of the switchgear function and leads on to discuss topics such as interruption techniques, fault level calculations, switching transients and electrical insulation, making this an invaluable reference source. Solutions to practical problems associated with distribution switchgear are also included.
Distributed Leadership and Shared Governance in Post-Secondary Education
Burke, Kenneth M.
2010-01-01
Leadership in education and traditional management theory face the challenges of advancing a theory and practice for twenty-first century organization. This article approaches the potential for new frames of reference on leadership through the correlations between the emergent theory of distributed leadership and the philosophy of shared…
Gyrocenter-gauge kinetic theory
International Nuclear Information System (INIS)
Qin, H.; Tang, W.M.; Lee, W.W.
2000-01-01
Gyrocenter-gauge kinetic theory is developed as an extension of the existing gyrokinetic theories. In essence, the formalism introduced here is a kinetic description of magnetized plasmas in the gyrocenter coordinates which is fully equivalent to the Vlasov-Maxwell system in the particle coordinates. In particular, provided the gyroradius is smaller than the scale-length of the magnetic field, it can treat high frequency range as well as the usual low frequency range normally associated with gyrokinetic approaches. A significant advantage of this formalism is that it enables the direct particle-in-cell simulations of compressional Alfven waves for MHD applications and of RF waves relevant to plasma heating in space and laboratory plasmas. The gyrocenter-gauge kinetic susceptibility for arbitrary wavelength and arbitrary frequency electromagnetic perturbations in a homogeneous magnetized plasma is shown to recover exactly the classical result obtained by integrating the Vlasov-Maxwell system in the particle coordinates. This demonstrates that all the waves supported by the Vlasov-Maxwell system can be studied using the gyrocenter-gauge kinetic model in the gyrocenter coordinates. This theoretical approach is so named to distinguish it from the existing gyrokinetic theory, which has been successfully developed and applied to many important low-frequency and long parallel wavelength problems, where the conventional meaning of gyrokinetic has been standardized. Besides the usual gyrokinetic distribution function, the gyrocenter-gauge kinetic theory emphasizes as well the gyrocenter-gauge distribution function, which sometimes contains all the physics of the problems being studied, and whose importance has not been realized previously. The gyrocenter-gauge distribution function enters Maxwell's equations through the pull-back transformation of the gyrocenter transformation, which depends on the perturbed fields. The efficacy of the gyrocenter-gauge kinetic approach is
Pullback Transformations in Gyrokinetic Theory
International Nuclear Information System (INIS)
Qin, H.; Tang, W.M.
2003-01-01
The pullback transformation of the distribution function is a key component of gyrokinetic theory. In this paper, a systematic treatment of this subject is presented, and results from applications of the uniform framework developed are reviewed. The focus is on providing a clear exposition of the basic formalism, which arises from the existence of three distinct coordinate systems in gyrokinetic theory. The familiar gyrocenter coordinate system, where the gyromotion is decoupled from the rest of the particle's dynamics, is non-canonical and non-fabric. On the other hand, Maxwell's equations, which are needed to complete a kinetic system, are initially defined only in the fabric laboratory phase-space coordinate system. The pullback transformations provide a rigorous connection between the distribution functions in gyrocenter coordinates and Maxwell's equations in laboratory phase-space coordinates. This involves the generalization of the usual moment integrals, originally defined on the cotangent fiber of the phase space, to moment integrals on a general 6D symplectic manifold, which is shown to be an important step in the proper formulation of gyrokinetic theory; the resultant systematic treatment of the moment integrals is enabled by the pullback transformation. Without this vital element, a number of prominent physics features, such as the presence of the compressional Alfven wave and a proper description of the gyrokinetic equilibrium, cannot be readily recovered
Direction: unified theory of interactions
International Nuclear Information System (INIS)
Valko, P.
1987-01-01
Briefly characterized are the individual theories, namely, the general relativity theory, the Kaluza-Klein theory, the Weyl theory, the unified theory of electromagnetic and weak interactions, the supergravity theory, and the superstring theory. The history is recalled of efforts aimed at creating a unified theory of interactions, and future prospects are outlined. (M.D.). 2 figs
On the stochastic quantization of gauge theories
Energy Technology Data Exchange (ETDEWEB)
Jona-Lasinio, G.; Parrinello, C.
1988-11-03
The non-gradient stochastic quantization scheme for gauge theories proposed by Zwanziger is analyzed in the semiclassical limit. Using ideas from the theory of small random perturbations of dynamical systems we derive a lower bound for the equilibrium distribution in a neighbourhood of a stable critical point of the drift. In this approach the calculation of the equilibrium distribution is reduced to the problem of finding a minimum for the large fluctuation functional associated to the Langevin equation. Our estimate follows from a simple upper bound for this minimum; in addition to the Yang-Mills action a gauge-fixing term which tends to suppress Gribov copies appears.
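The equilibrium-distribution statement above can be illustrated in the simplest gradient case: an Euler-discretized Langevin equation with drift -S'(x) equilibrates to a distribution proportional to exp(-S), up to a small step-size bias. A toy sketch for a quadratic action; this is not the Zwanziger non-gradient scheme the paper analyzes, and all parameters are illustrative:

```python
import math
import random

def langevin_samples(n, dt=0.01, burn=1000, seed=3):
    """Euler-discretized Langevin equation dx = -S'(x) dt + sqrt(2 dt) * eta
    for S(x) = x^2 / 2; the equilibrium distribution exp(-S) is a unit Gaussian."""
    rng = random.Random(seed)
    x = 0.0
    out = []
    for i in range(burn + n):
        x += -x * dt + math.sqrt(2.0 * dt) * rng.gauss(0.0, 1.0)
        if i >= burn:                 # discard pre-equilibrium transient
            out.append(x)
    return out

xs = langevin_samples(200000)
var = sum(v * v for v in xs) / len(xs)   # close to 1, with O(dt) bias
```

For this linear drift the discrete chain's stationary variance is 1/(1 - dt/2), so the bias vanishes as the time step goes to zero.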
Phase-space quantization of field theory
International Nuclear Information System (INIS)
Curtright, T.; Zachos, C.
1999-01-01
In this lecture, a limited introduction of gauge invariance in phase-space is provided, predicated on canonical transformations in quantum phase-space. Exact characteristic trajectories are also specified for the time-propagating Wigner phase-space distribution function: they are especially simple--indeed, classical--for the quantized simple harmonic oscillator. This serves as the underpinning of the field theoretic Wigner functional formulation introduced. Scalar field theory is thus reformulated in terms of distributions in field phase-space. This is a pedagogical selection from work published and reported at the Yukawa Institute Workshop ''Gauge Theory and Integrable Models'', 26-29 January, 1999
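For the quantized simple harmonic oscillator mentioned above, the ground-state Wigner function has the closed form W0(x, p) = exp(-(x² + p²))/π in units where hbar = m = omega = 1, and harmonic evolution merely rotates it rigidly along the classical phase-space trajectories. A small numerical check of its normalization; the grid parameters are arbitrary:

```python
import math

# Wigner function of the SHO ground state (hbar = m = omega = 1):
# W0(x, p) = exp(-(x^2 + p^2)) / pi, normalized so its phase-space integral is 1.
def w0(x, p):
    return math.exp(-(x * x + p * p)) / math.pi

# check normalization by midpoint integration over the box [-6, 6] x [-6, 6]
h = 0.05
norm = sum(
    w0(-6.0 + (i + 0.5) * h, -6.0 + (j + 0.5) * h) * h * h
    for i in range(240) for j in range(240)
)
```

Unlike a classical density, a general Wigner distribution can go negative; the Gaussian ground state is the special case that stays non-negative everywhere.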
The enhancon mechanism in string theory
International Nuclear Information System (INIS)
Jarv, Laur
2002-01-01
The enhancon mechanism is a specific phenomenon in string theory which resolves a certain naked spacetime singularity arising in the supergravity description related to N = 2 supersymmetric pure gauge theory. After reviewing the problem of singularities in general relativity as well as in string theory, and discussing the prototypical enhancon example constructed by wrapping D6-branes on a K3 surface, the thesis presents three generalisations to this static spherically symmetric case pertaining to large N SU(N) gauge theory. First we will use orientifolds to show how the enhancon mechanism also works in similar situations related to SO(2N+1), USp(2N) and SO(2N) gauge theories. Second we will wrap D-brane distributions on K3 to obtain the enhancon in oblate, toroidal and prolate shapes. Third we will study a rotating enhancon configuration and consider its implications for the black hole entropy and the second law of thermodynamics. (author)
Building theory through design
DEFF Research Database (Denmark)
Markussen, Thomas
2017-01-01
This chapter deals with a fundamental matter of concern in research through design: how can design work lead to the building of new theory? Controversy exists about the balance between theory and design work in research through design. While some researchers see theory production as the scientific hallmark of this type of research, others argue for design work being the primary achievement, with theory serving the auxiliary function of inspiring new designs. This paper demonstrates how design work and theory can be appreciated as two equally important outcomes of research through design. To set the scene, it starts out by briefly examining ideas on this issue presented in existing research literature. Hereafter, it introduces three basic forms in which design work can lead to theory, referred to as extending theories, scaffolding theories and blending theories. Finally, it is discussed how...
Compressed sensing for distributed systems
Coluccia, Giulio; Magli, Enrico
2015-01-01
This book presents a survey of the state-of-the art in the exciting and timely topic of compressed sensing for distributed systems. It has to be noted that, while compressed sensing has been studied for some time now, its distributed applications are relatively new. Remarkably, such applications are ideally suited to exploit all the benefits that compressed sensing can provide. The objective of this book is to provide the reader with a comprehensive survey of this topic, from the basic concepts to different classes of centralized and distributed reconstruction algorithms, as well as a comparison of these techniques. This book collects different contributions on these aspects. It presents the underlying theory in a complete and unified way for the first time, presenting various signal models and their use cases. It contains a theoretical part collecting latest results in rate-distortion analysis of distributed compressed sensing, as well as practical implementations of algorithms obtaining performance close to...
Generalizability Theory and Classical Test Theory
Brennan, Robert L.
2011-01-01
Broadly conceived, reliability involves quantifying the consistencies and inconsistencies in observed scores. Generalizability theory, or G theory, is particularly well suited to addressing such matters in that it enables an investigator to quantify and distinguish the sources of inconsistencies in observed scores that arise, or could arise, over…
Generalizability theory and item response theory
Glas, Cornelis A.W.; Eggen, T.J.H.M.; Veldkamp, B.P.
2012-01-01
Item response theory is usually applied to items with a selected-response format, such as multiple choice items, whereas generalizability theory is usually applied to constructed-response tasks assessed by raters. However, in many situations, raters may use rating scales consisting of items with a
Davis, Joe M
2011-10-28
General equations are derived for the distribution of minimum resolution between two chromatographic peaks, when peak heights in a multi-component chromatogram follow a continuous statistical distribution. The derivation draws on published theory by relating the area under the distribution of minimum resolution to the area under the distribution of the ratio of peak heights, which in turn is derived from the peak-height distribution. Two procedures are proposed for the equations' numerical solution. The procedures are applied to the log-normal distribution, which recently was reported to describe the distribution of component concentrations in three complex natural mixtures. For published statistical parameters of these mixtures, the distribution of minimum resolution is similar to that for the commonly assumed exponential distribution of peak heights used in statistical-overlap theory. However, these two distributions of minimum resolution can differ markedly, depending on the scale parameter of the log-normal distribution. Theory for the computation of the distribution of minimum resolution is extended to other cases of interest. With the log-normal distribution of peak heights as an example, the distribution of minimum resolution is computed when small peaks are lost due to noise or detection limits, and when the height of at least one peak is less than an upper limit. The distribution of minimum resolution shifts slightly to lower resolution values in the first case and to markedly larger resolution values in the second one. The theory and numerical procedure are confirmed by Monte Carlo simulation.
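The derivation above routes the minimum-resolution distribution through the distribution of the ratio of peak heights. For log-normal peak heights that intermediate step has a simple closed form: the ratio of two i.i.d. log-normal heights is itself log-normal, with ln(h1/h2) ~ N(0, 2σ²). A Monte Carlo check of just this step, not of the full resolution theory; the scale parameter sigma is illustrative, not taken from the paper:

```python
import math
import random

# If ln h ~ N(mu, sigma^2) for independent peak heights h1, h2, then
# ln(h1/h2) ~ N(0, 2 sigma^2): the height ratio is log-normal with median 1.
rng = random.Random(7)
mu, sigma = 0.0, 0.75          # hypothetical log-normal parameters
log_ratios = []
for _ in range(100000):
    h1 = math.exp(rng.gauss(mu, sigma))
    h2 = math.exp(rng.gauss(mu, sigma))
    log_ratios.append(math.log(h1 / h2))

mean_lr = sum(log_ratios) / len(log_ratios)                       # near 0
var_lr = sum((v - mean_lr) ** 2 for v in log_ratios) / len(log_ratios)
# var_lr should be near 2 * sigma^2 = 1.125
```

Note that mu cancels from the ratio, so only the scale parameter sigma survives, consistent with the abstract's observation that the minimum-resolution distribution depends on the log-normal scale parameter.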