WorldWideScience

Sample records for principles field methods

  1. Complementary variational principle method applied to thermal conductivities of a plasma in a uniform magnetic field

    Energy Technology Data Exchange (ETDEWEB)

    Sehgal, A K; Gupta, S C [Punjabi Univ., Patiala (India). Dept. of Physics]

    1982-12-14

    The complementary variational principles method (CVP) is applied to the thermal conductivities of a plasma in a uniform magnetic field. Computations show that the CVP-derived results are very useful.

  2. Microwave measurement of electrical fields in different media – principles, methods and instrumentation

    International Nuclear Information System (INIS)

    Dankov, Plamen I. (Sofia University St. Kliment Ohridski, Faculty of Physics, James Bourchier blvd., Sofia 1164 (Bulgaria))

    2014-01-01

    This paper, presented in the framework of the 4th International Workshop and Summer School on Plasma Physics (IWSSPP'2010, Kiten, Bulgaria), is a brief review of the principles, methods and instrumentation of microwave measurements of electrical fields in different media. The main part of the paper is devoted to the description of the basic features of many field sensors and antennas – narrow-band, broadband and ultra-wideband, miniaturized, reconfigurable and active sensors, etc. The main features and the applicability of these sensors for the determination of electric fields in different media are discussed. The last part of the paper presents the basic principles for the utilization of electromagnetic 3-D simulators for E-field measurement purposes. Two illustrative examples are given – the determination of the dielectric anisotropy of multi-layer materials and a discussion of the selectivity of a hairpin probe for the determination of the electron density in dense gaseous plasmas.

  3. Quantum principles in field interactions

    International Nuclear Information System (INIS)

    Shirkov, D.V.

    1986-01-01

    The concept of a quantum principle is introduced as a principle whose formulation is based on specific quantum ideas and notions. We consider three such principles, viz. those of quantizability, local gauge symmetry, and supersymmetry, and their role in the development of quantum field theory (QFT). Concerning the first of these, we analyze the formal aspects and physical content of the renormalization procedure in QFT and its relation to ultraviolet divergences and the renormalization group. The quantizability principle is formulated as an existence condition for a self-consistent quantum version with a given mechanism of the field interaction. It is shown that the consecutive (from a historical point of view) use of these quantum principles places ever larger limitations on the possible forms of field interactions.

  4. Comments on field equivalence principles

    DEFF Research Database (Denmark)

    Appel-Hansen, Jørgen

    1987-01-01

    It is pointed out that often-used arguments based on a short-circuit concept in presentations of field equivalence principles are not correct. An alternative presentation based on the uniqueness theorem is given. It does not contradict the results obtained by using the short-circuit concept...

  5. Physical acoustics principles and methods

    CERN Document Server

    Mason, Warren P

    2012-01-01

    Physical Acoustics: Principles and Methods, Volume IV, Part B: Applications to Quantum and Solid State Physics provides an introduction to the various applications of quantum mechanics to acoustics by describing several processes for which such considerations are essential. This book discusses the transmission of sound waves in molten metals. Comprised of seven chapters, this volume starts with an overview of the interactions that can happen between electrons and acoustic waves when magnetic fields are present. This text then describes acoustic and plasma waves in ionized gases wherein oscillations are subject to hydrodynamic as well as electromagnetic forces. Other chapters examine the resonances and relaxations that can take place in polymer systems. This book discusses as well the general theory of the interaction of a weak sinusoidal field with matter. The final chapter describes the sound velocities in the rocks composing the Earth. This book is a valuable resource for physicists and engineers.

  6. Physical acoustics v.8 principles and methods

    CERN Document Server

    Mason, Warren P

    1971-01-01

    Physical Acoustics: Principles and Methods, Volume VIII discusses a number of themes on physical acoustics that are divided into seven chapters. Chapter 1 describes the principles and applications of a tool for investigating phonons in dielectric crystals, the spin phonon spectrometer. The next chapter discusses the use of ultrasound in investigating Landau quantum oscillations in the presence of a magnetic field and their relation to the strain dependence of the Fermi surface of metals. The third chapter focuses on the ultrasonic measurements that are made by pulsing methods with velo

  7. Generalized field quantization and the Pauli principle

    International Nuclear Information System (INIS)

    Govorkov, A.B.

    1990-01-01

    This work attempts to prove that the generalized Pauli principle (i.e. Fermi statistics) for the half-integer-spin fields and Bose statistics for the integer-spin fields, with allowance for the existence of internal gauge symmetries, are consequences of more general assumptions of local quantum field theory. 32 refs.; 1 tab

  8. Physical acoustics principles and methods

    CERN Document Server

    Mason, Warren P

    1964-01-01

    Physical Acoustics: Principles and Methods, Volume l-Part A focuses on high frequency sound waves in gases, liquids, and solids that have been proven as powerful tools in analyzing the molecular, defect, domain wall, and other types of motions. The selection first tackles wave propagation in fluids and normal solids and guided wave propagation in elongated cylinders and plates. Discussions focus on fundamentals of continuum mechanics; small-amplitude waves in a linear viscoelastic medium; representation of oscillations and waves; and special effects associated with guided elastic waves in plat

  9. The common principles established for experts' preparation by remote methods in the Earth sciences field, and their solution

    Science.gov (United States)

    Kudzh, S.; Trofimov, S.

    The modern socio-economic situation in the country and in the education system is such that traditional forms of education and training cannot satisfy all needs for educational services, which are usually concentrated in the big cities; as a result, the increased interest in new, progressive specialities has found its outlet in electronic training systems. The attitude toward education on the part of states, governments and societies has also changed: education has come to be regarded as a major factor in the economic growth and social development of countries and in the solution of a number of global problems connected with the survival of mankind. In this connection, technologies of remote and open education have recently been developed and put into practice in different countries, with special attention given to systems capable of holding, transferring and analyzing huge streams of information. The experience accumulated by foreign colleagues shows that the resolution of this technological conflict lies, broadly, in the creation of a wide network of distance training and, more narrowly, in both the quality and the quantity of the content; the choice of electronic training systems suited to the various subject areas must not be forgotten either. The arrival of computer equipment at the user's end, the development of existing means of data transmission, the functional expansion of existing hardware-software complexes and the creation of entirely new ones, and many other factors have given rise to new scientific directions in such a basic area of science as the Earth sciences (geoinformation systems, the study of natural resources by space methods, the organization and technology of data protection in geoinformation systems, etc.). Clearly, the new specialities impose certain conditions on the preparation of experts, and, carrying out the analysis of already existing electronic training systems in the

  10. Principles of power frequency magnetic field management

    International Nuclear Information System (INIS)

    Fugate, D.; Feero, W.

    1995-01-01

    At the most general level, magnetic field management is the creation, elimination, or modification of sources in order to alter the spatial distribution of magnetic fields over some region of space. The two main options for magnetic field management are source modification (elimination or modification of original sources) and cancellation (creation of new sources). Source modification includes any changes in the layout or location of field sources, elimination of ground paths, or any options that increase the distance between sources and regions of interest. Cancellation involves the creation of new magnetic field sources, passive and/or active, that produce magnetic fields opposite to the original fields in the region of interest. Shielding using materials of high conductivity and/or high permeability falls under the cancellation option. Strategies for magnetic field management, whether source modification or cancellation, typically vary on a case-by-case basis depending on the regions of interest, the types of sources and the resulting complexity of the field structure, the field levels, and the attenuation requirements. This paper gives an overview of magnetic field management based on fundamental concepts. Low-field design principles are described, followed by a structured discussion of cancellation and shielding. The two basic material shielding mechanisms, induced-current shielding and flux shunting, are discussed

  11. Neutron activation analysis: principle and methods

    International Nuclear Information System (INIS)

    Reddy, A.V.R.; Acharya, R.

    2006-01-01

    Neutron activation analysis (NAA) is a powerful isotope-specific nuclear analytical technique for the simultaneous determination of the elemental composition of major, minor and trace elements in diverse matrices. The technique is capable of yielding high analytical sensitivity and low detection limits (ppm to ppb). Due to the high penetration power of neutrons and gamma rays, NAA experiences negligible matrix effects in samples of different origins. Depending on the sample matrix and the element of interest, the NAA technique is used non-destructively, which is known as instrumental neutron activation analysis (INAA), or through chemical NAA methods. The present article describes the principle of NAA and the different methods, and gives an overview of some applications in fields such as environment, biology, geology, material sciences, nuclear technology and forensic sciences. (author)
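
    For orientation, the induced activity in a standard single-irradiation NAA measurement is often written in the following textbook form (a generic relation, not quoted from this article; symbols as usually defined):

    $$A = \frac{m\,N_A\,\theta}{M}\,\sigma\,\varphi\,\bigl(1 - e^{-\lambda t_i}\bigr)\,e^{-\lambda t_d},$$

    where m is the mass of the element, N_A Avogadro's number, θ the isotopic abundance, M the atomic mass, σ the activation cross-section, φ the neutron flux, λ the decay constant of the product nuclide, t_i the irradiation time and t_d the decay (cooling) time before counting.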

  12. Schwinger's quantum action principle from Dirac’s formulation through Feynman’s path integrals, the Schwinger-Keldysh method, quantum field theory, to source theory

    CERN Document Server

    Milton, Kimball A

    2015-01-01

    Starting from the earlier notions of stationary action principles, these tutorial notes show how Schwinger's Quantum Action Principle descended from Dirac's formulation, which independently led Feynman to his path-integral formulation of quantum mechanics. Part I brings out in more detail the connection between the two formulations, and applications are discussed. Then, the Keldysh-Schwinger time-cycle method of extracting matrix elements is described. Part II discusses the variational formulation of quantum electrodynamics and the development of source theory.
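
    As a reminder of the central statement (standard textbook form, not quoted from these notes), Schwinger's quantum action principle expresses the variation of a transformation function through the variation of the action operator,

    $$\delta\,\langle a', t_1 \,|\, b', t_2\rangle \;=\; \frac{i}{\hbar}\,\langle a', t_1 \,|\, \delta W_{12} \,|\, b', t_2\rangle,$$

    from which the operator equations of motion follow and to which the path-integral formulation is closely related.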

  13. Nanowire field effect transistors principles and applications

    CERN Document Server

    Jeong, Yoon-Ha

    2014-01-01

    “Nanowire Field Effect Transistor: Basic Principles and Applications” places an emphasis on the application aspects of nanowire field effect transistors (NWFET). Device physics and electronics are discussed in a compact manner, together with the p-n junction diode and MOSFET, the former as an essential element in NWFET and the latter as a general background of the FET. During this discussion, the photo-diode, solar cell, LED, LD, DRAM, flash EEPROM and sensors are highlighted to pave the way for similar applications of NWFET. Modeling is discussed in close analogy and comparison with MOSFETs. Contributors focus on processing, electrostatic discharge (ESD) and application of NWFET. This includes coverage of solar and memory cells, biological and chemical sensors, displays and atomic scale light emitting diodes. Appropriate for scientists and engineers interested in acquiring a working knowledge of NWFET as well as graduate students specializing in this subject.

  14. Variational principles for particles and fields in Heisenberg matrix mechanics

    International Nuclear Information System (INIS)

    Klein, A.; Li, C.T.; Vassanji, M.

    1980-01-01

    For many years we have advocated a form of quantum mechanics based on the application of sum rule methods (completeness) to the equations of motion and to the commutation relations, i.e., to Heisenberg matrix mechanics. Sporadically we have discussed or alluded to a variational foundation for this method. In this paper we present a series of variational principles applicable to a range of systems from one-dimensional quantum mechanics to quantum fields. The common thread is that the stationary quantity is the trace of the Hamiltonian over Hilbert space (or over a subspace of interest in an approximation) expressed as a functional of matrix elements of the elementary operators of the theory. These parameters are constrained by the kinematical relations of the theory introduced by the method of Lagrange multipliers. For the field theories, variational principles in which matrix elements of the density operators are chosen as fundamental are also developed. A qualitative discussion of applications is presented

  15. Gauge principle for hyper(para) fields

    Energy Technology Data Exchange (ETDEWEB)

    Govorkov, A.B. (Joint Inst. for Nuclear Research, Dubna (USSR))

    1983-04-01

    A special representation for parafields is considered which is based on the use of Clifford hypernumbers. The principle of gauge invariance under hypercomplex phase transformations of parafields is formulated. The special role of quaternion hyperfields and the corresponding Yang-Mills Lagrangian with gauge SO(3) symmetry is pointed out.

  16. Rigorous force field optimization principles based on statistical distance minimization

    Energy Technology Data Exchange (ETDEWEB)

    Vlcek, Lukas, E-mail: vlcekl1@ornl.gov [Chemical Sciences Division, Geochemistry & Interfacial Sciences Group, Oak Ridge National Laboratory, Oak Ridge, Tennessee 37831-6110 (United States); Joint Institute for Computational Sciences, University of Tennessee, Oak Ridge National Laboratory, Oak Ridge, Tennessee 37831-6173 (United States)]; Chialvo, Ariel A. [Chemical Sciences Division, Geochemistry & Interfacial Sciences Group, Oak Ridge National Laboratory, Oak Ridge, Tennessee 37831-6110 (United States)]

    2015-10-14

    We use the concept of statistical distance to define a measure of distinguishability between a pair of statistical mechanical systems, i.e., a model and its target, and show that its minimization leads to general convergence of the model’s static measurable properties to those of the target. We exploit this feature to define a rigorous basis for the development of accurate and robust effective molecular force fields that are inherently compatible with coarse-grained experimental data. The new model optimization principles and their efficient implementation are illustrated through selected examples, whose outcome demonstrates the higher robustness and predictive accuracy of the approach compared to other currently used methods, such as force matching and relative entropy minimization. We also discuss relations between the newly developed principles and established thermodynamic concepts, which include the Gibbs-Bogoliubov inequality and the thermodynamic length.
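
    For context, one standard definition of the statistical distance between two discrete probability distributions p and q is the Bhattacharyya-angle (Wootters) form below; the paper's working measure of distinguishability may differ in detail, so this is only an orienting formula:

    $$s(p,q) \;=\; \arccos\Bigl(\sum_i \sqrt{p_i\,q_i}\Bigr),$$

    which vanishes exactly when the model and target distributions coincide, so driving it to a minimum pushes the model's measurable static properties toward those of the target.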

  17. Molecular physics. Theoretical principles and experimental methods

    International Nuclear Information System (INIS)

    Demtroeder, W.

    2005-01-01

    This advanced textbook comprehensively explains important principles of diatomic and polyatomic molecules and their spectra in two separate, distinct parts. The first part concentrates on the theoretical aspects of molecular physics, whereas the second part of the book covers experimental techniques, i.e. laser, Fourier, NMR, and ESR spectroscopies, used in the fields of physics, chemistry, biology, and materials science. It is appropriate for undergraduate and graduate students in physics and chemistry with a knowledge of atomic physics who are familiar with the basics of quantum mechanics. From the contents: - Electronic States of Molecules, - Rotation, Oscillation and Potential Curves of Diatomic Molecules, - The Spectra of Diatomic Molecules, - Molecule Symmetries and Group Theory, - Rotation and Oscillations of Polyatomic Molecules, - Electronic States of Polyatomic Molecules, - The Spectra of Polyatomic Molecules, - Breakdown of the Born-Oppenheimer Approximation, Disturbances in Molecular Spectra, - Molecules in Disturbing Fields, - Van der Waals Molecules and Clusters, - Experimental Techniques in Molecular Physics. (orig.)

  18. Field Research in Political Science Practices and Principles

    DEFF Research Database (Denmark)

    Gravier, Magali

    2017-01-01

    Book review of: Kapiszewski (Diana), Maclean (Lauren M.), Read (Benjamin L.), Field Research in Political Science: Practices and Principles. Cambridge, Cambridge University Press, 2015 (Strategies for Social Inquiry). XIV + 456 p. Figures. Annexe. Bibliogr. Index.

  19. Xeroradiography. Principles, characteristics and method indications

    International Nuclear Information System (INIS)

    Uemura, L.; Forni, S.S.; Abreu Neto, B.P.; Broccoli Neto, J.; Abilio, S.O.

    1987-01-01

    Xeroradiography is an entirely photoelectric process for obtaining the radiographic image, employing a selenium plate instead of the conventional radiologic film. A review of the physical principles and technical aspects of the method is presented, and the characteristics of xeroradiographic images are explained. The applications of xeroradiography include evaluation of the breast, of structures of the neck, of soft-tissue tumors and of some bone tumors for soft-tissue components, foreign-body detection in soft tissues, and visualization of the smaller skeletal structures of the extremities. (Author) [pt

  20. Expanding Uncertainty Principle to Certainty-Uncertainty Principles with Neutrosophy and Quad-stage Method

    Directory of Open Access Journals (Sweden)

    Fu Yuhua

    2015-03-01

    The most famous contribution of Heisenberg is the uncertainty principle, but the original uncertainty principle is improper. Considering all possible situations (including the case that people can create laws) and applying Neutrosophy and the Quad-stage Method, this paper presents "certainty-uncertainty principles" in a general form and in a variable-dimension fractal form. According to the classification of Neutrosophy, the "certainty-uncertainty principles" can be divided into three principles under different conditions: the "certainty principle", namely that a particle's position and momentum can be known simultaneously; the "uncertainty principle", namely that a particle's position and momentum cannot be known simultaneously; and the neutral (fuzzy) "indeterminacy principle", namely that whether or not a particle's position and momentum can be known simultaneously is undetermined. The special cases of the "certainty-uncertainty principles" include the original uncertainty principle and the Ozawa inequality. In addition, in accordance with the original uncertainty principle, discussing a high-speed particle's speed and track with Newtonian mechanics is unreasonable; but according to the "certainty-uncertainty principles", Newtonian mechanics can be used to discuss the problem of gravitational deflection of a photon orbit around the Sun (it gives the same result for the deflection angle as general relativity). Finally, because principles and laws that disregard the principle (law) of conservation of energy may be invalid, the "certainty-uncertainty principles" should be restricted (or constrained) by the principle (law) of conservation of energy, so that they satisfy the principle (law) of conservation of energy.
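
    For reference, the preparation uncertainty relation and the Ozawa error-disturbance inequality mentioned above read, in their standard textbook forms (σ denotes standard deviations, ε the measurement error and η the disturbance):

    $$\sigma_x\,\sigma_p \ \ge\ \frac{\hbar}{2}, \qquad \varepsilon(A)\,\eta(B) + \varepsilon(A)\,\sigma(B) + \sigma(A)\,\eta(B) \ \ge\ \tfrac{1}{2}\bigl|\langle[A,B]\rangle\bigr|.$$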

  1. Complex networks principles, methods and applications

    CERN Document Server

    Latora, Vito; Russo, Giovanni

    2017-01-01

    Networks constitute the backbone of complex systems, from the human brain to computer communications, transport infrastructures to online social systems and metabolic reactions to financial markets. Characterising their structure improves our understanding of the physical, biological, economic and social phenomena that shape our world. Rigorous and thorough, this textbook presents a detailed overview of the new theory and methods of network science. Covering algorithms for graph exploration, node ranking and network generation, among others, the book allows students to experiment with network models and real-world data sets, providing them with a deep understanding of the basics of network theory and its practical applications. Systems of growing complexity are examined in detail, challenging students to increase their level of skill. An engaging presentation of the important principles of network science makes this the perfect reference for researchers and undergraduate and graduate students in physics, ...

  2. Reciprocity principle for scattered fields from discontinuities in waveguides.

    Science.gov (United States)

    Pau, Annamaria; Capecchi, Danilo; Vestroni, Fabrizio

    2015-01-01

    This study investigates the scattering of guided waves from a discontinuity exploiting the principle of reciprocity in elastodynamics, written in a form that applies to waveguides. The coefficients of reflection and transmission for an arbitrary mode can be derived as long as the principle of reciprocity is satisfied at the discontinuity. Two elastodynamic states are related by the reciprocity. One is the response of the waveguide in the presence of the discontinuity, with the scattered fields expressed as a superposition of wave modes. The other state is the response of the waveguide in the absence of the discontinuity oscillating according to an arbitrary mode. The semi-analytical finite element method is applied to derive the needed dispersion relation and wave mode shapes. An application to a solid cylinder with a symmetric double change of cross-section is presented. This model is assumed to be representative of a damaged rod. The coefficients of reflection and transmission of longitudinal waves are investigated for selected values of notch length and varying depth. Copyright © 2014 Elsevier B.V. All rights reserved.
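
    As background, the elastodynamic reciprocity relation between two time-harmonic states 1 and 2 of the same body, in the absence of body forces, can be written in the standard Betti-Rayleigh surface-integral form below; the waveguide version used in the paper specializes the surface to cross-sections of the guide:

    $$\oint_S \bigl( t_i^{(1)} u_i^{(2)} - t_i^{(2)} u_i^{(1)} \bigr)\, dS = 0,$$

    where u_i and t_i are the displacement and traction components of each state.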

  3. A Maximum Principle for SDEs of Mean-Field Type

    Energy Technology Data Exchange (ETDEWEB)

    Andersson, Daniel, E-mail: danieand@math.kth.se; Djehiche, Boualem, E-mail: boualem@math.kth.se [Royal Institute of Technology, Department of Mathematics (Sweden)]

    2011-06-15

    We study the optimal control of a stochastic differential equation (SDE) of mean-field type, where the coefficients are allowed to depend on some functional of the law as well as the state of the process. Moreover the cost functional is also of mean-field type, which makes the control problem time inconsistent in the sense that the Bellman optimality principle does not hold. Under the assumption of a convex action space a maximum principle of local form is derived, specifying the necessary conditions for optimality. These are also shown to be sufficient under additional assumptions. This maximum principle differs from the classical one, where the adjoint equation is a linear backward SDE, since here the adjoint equation turns out to be a linear mean-field backward SDE. As an illustration, we apply the result to the mean-variance portfolio selection problem.

  4. A Maximum Principle for SDEs of Mean-Field Type

    International Nuclear Information System (INIS)

    Andersson, Daniel; Djehiche, Boualem

    2011-01-01

    We study the optimal control of a stochastic differential equation (SDE) of mean-field type, where the coefficients are allowed to depend on some functional of the law as well as the state of the process. Moreover the cost functional is also of mean-field type, which makes the control problem time inconsistent in the sense that the Bellman optimality principle does not hold. Under the assumption of a convex action space a maximum principle of local form is derived, specifying the necessary conditions for optimality. These are also shown to be sufficient under additional assumptions. This maximum principle differs from the classical one, where the adjoint equation is a linear backward SDE, since here the adjoint equation turns out to be a linear mean-field backward SDE. As an illustration, we apply the result to the mean-variance portfolio selection problem.
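
    To fix ideas, a controlled SDE of mean-field type together with a mean-field cost can be written in the following generic form (an illustrative special case in which the law enters only through an expectation; the two records above allow more general functionals of the law):

    $$dX_t = b\bigl(t, X_t, \mathbb{E}[X_t], u_t\bigr)\,dt + \sigma\bigl(t, X_t, \mathbb{E}[X_t], u_t\bigr)\,dW_t, \qquad J(u) = \mathbb{E}\Bigl[\int_0^T f\bigl(t, X_t, \mathbb{E}[X_t], u_t\bigr)\,dt + g\bigl(X_T, \mathbb{E}[X_T]\bigr)\Bigr].$$

    The dependence of b, σ, f and g on the law of X_t is what makes the problem time-inconsistent and turns the adjoint equation into a mean-field backward SDE.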

  5. Principle of coincidence method and application in activity measurement

    International Nuclear Information System (INIS)

    Li Mou; Dai Yihua; Ni Jianzhong

    2008-01-01

    The basic principle of the coincidence method is discussed. The principle is generalized by analysing an actual example, and the theoretical condition for the coincidence method is put forward. The cause of the variation of the efficiency curve and the effect of dead time in activity measurement are explained using the above principle and condition. This principle of the coincidence method provides the theoretical foundation for activity measurement. (authors)
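
    To illustrate the principle in its simplest 4πβ-γ form (a textbook idealization that ignores dead time and decay-scheme corrections), the three measured rates determine the source activity without knowledge of the detection efficiencies:

    $$N_\beta = \varepsilon_\beta A, \qquad N_\gamma = \varepsilon_\gamma A, \qquad N_c = \varepsilon_\beta\,\varepsilon_\gamma A \;\;\Longrightarrow\;\; A = \frac{N_\beta\,N_\gamma}{N_c}.$$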

  6. Field Synergy Principle for Energy Conservation Analysis and Application

    Directory of Open Access Journals (Sweden)

    Qun Chen

    2010-01-01

    Optimization of mass and energy transfer processes is critical to improve energy efficiency. In this contribution we introduce the field synergy principle as a unified principle for analyzing and improving the performance of transfer processes. Three field synergy numbers are introduced for heat, mass, and momentum transfer, respectively, and three cases are demonstrated for validation. The results indicate that the field synergy numbers increase when the angle between the velocity vector and the temperature gradient or the species concentration gradient field is reduced in convective heat or mass transfer, and the overall heat or mass transfer capability is therefore enhanced. In fluid flows, the flow drag is reduced by decreasing the synergy number between the velocity and the velocity gradient fields over the entire domain and by simultaneously decreasing the product of the fluid viscosity and the velocity gradient at the boundary.
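
    A minimal way to state the synergy idea for convective heat transfer (standard in the field synergy literature; the notation here is assumed, not taken from the article) is through the local angle β between the velocity vector and the temperature gradient:

    $$\int_\Omega \mathbf{U}\cdot\nabla T\,dV \;=\; \int_\Omega |\mathbf{U}|\,|\nabla T|\,\cos\beta\,dV,$$

    so that, for given magnitudes of velocity and temperature gradient, reducing β (improving the synergy) increases the integrated convective source term and hence the overall heat transfer.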

  7. Methodical ecologization principles in construction management

    OpenAIRE

    Nuzhina Irina Pavlovna; Yudakhina Olga Borisovna

    2015-01-01

    The article presents the subject of ecologization of the construction sector and argues for the necessity of ecologization of technologies and technological processes. The article also presents principles of ecologically friendly management of construction and investment activities and describes these principles in detail.

  8. The renormalized action principle in quantum field theory

    International Nuclear Information System (INIS)

    Balasin, H.

    1990-03-01

    The renormalized action principle holds a central position in field theory, since it offers a variety of applications. The main concern of this work is the proof of the action principle within the so-called BPHZ scheme of renormalization. Following the classical proof given by Lam and Lowenstein, some loopholes are detected and closed. The second part of the work deals with the application of the action principle to pure Yang-Mills theories within the axial gauge (n² ≠ 0). With the help of the action principle we investigate the decoupling of the Faddeev-Popov ghost fields from the gauge field. The consistency of this procedure, suggested by the tree-graph approximation, is proven to survive quantization. Finally we deal with the breaking of Lorentz symmetry caused by the presence of the gauge direction n. Using BRST-like techniques and the semi-simplicity of the Lorentz group, it is shown that no new breakings arise from quantization. Again the main step of the proof is provided by the action principle. (Author, shortened by G.Q.)

  9. The Principle and the Method of the Radioimmunoassay

    Energy Technology Data Exchange (ETDEWEB)

    Kurata, Kunio [Dainabot Radioisotope Laboratory, Tokyo (Japan)]

    1970-03-15

    The measurement of the amounts of various hormones in the body is one of the most important subjects in the field of endocrinology. The results obtained are not only helpful for basic studies, such as studies of the function of each organ or of their interrelationships, but are also valuable for routine clinical diagnosis. For most peptide or protein hormones, chemical measurement is very difficult. Biological methods are mostly utilized for this purpose but are not always satisfactory. The use of a labeled hormone in combination with its antiserum has led to a highly specific and sensitive measurement of the hormone in human plasma. This method is based essentially on the principle of the isotope dilution method and is called radioimmunoassay. From the nuclear medicine aspect, this is now one of the major fields of in vitro assay with radioisotopes. With this method, less than 1 microunit of insulin per ml of serum can be detected. In this lecture, I would like to talk about the principle and the method of radioimmunoassay.

  10. Noncommutative Common Cause Principles in algebraic quantum field theory

    International Nuclear Information System (INIS)

    Hofer-Szabó, Gábor; Vecsernyés, Péter

    2013-01-01

    States in algebraic quantum field theory “typically” establish correlation between spacelike separated events. Reichenbach's Common Cause Principle, generalized to the quantum field theoretical setting, offers an apt tool to causally account for these superluminal correlations. In the paper we motivate first why commutativity between the common cause and the correlating events should be abandoned in the definition of the common cause. Then we show that the Noncommutative Weak Common Cause Principle holds in algebraic quantum field theory with locally finite degrees of freedom. Namely, for any pair of projections A, B supported in spacelike separated regions V_A and V_B, respectively, there is a local projection C, not necessarily commuting with A and B, such that C is supported within the union of the backward light cones of V_A and V_B and the set {C, C^⊥} screens off the correlation between A and B.

  11. Local conservation laws for principle chiral fields (d=1)

    International Nuclear Information System (INIS)

    Cherednik, I.V.

    1979-01-01

    The Bäcklund transformation for chiral fields in two-dimensional Minkowski space is found. As a result, an infinite series of conservation laws for principal chiral O(n) fields (d=1) has been built. It is shown that these laws are local, and that an infinite series of global invariants, which do not depend on ξ and η, is connected with these laws for solutions that decrease sufficiently rapidly along ξ (or along η) (ξ, η are coordinates of the light cone). It is noted that, with the help of the proposed construction, it is possible to obtain conservation laws for principal chiral G-fields by including G in suitable orthogonal groups. The symmetry permits the exchange of ξ and η. The construction of conservation laws may be carried out without the supposition that λ has multiplicity equal to 1; however, the proof of locality applied here does not carry over to the laws so obtained
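
    For orientation, the principal chiral field equations in light-cone coordinates ξ, η read, in a standard form whose conventions may differ from the paper's,

    $$\partial_\xi\bigl(g^{-1}\partial_\eta g\bigr) + \partial_\eta\bigl(g^{-1}\partial_\xi g\bigr) = 0, \qquad g(\xi,\eta)\in G,$$

    and the local conservation laws discussed above are built from the currents g^{-1}∂_ξ g and g^{-1}∂_η g and their derivatives.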

  12. The short-circuit concept used in field equivalence principles

    DEFF Research Database (Denmark)

    Appel-Hansen, Jørgen

    1990-01-01

    In field equivalence principles, electric and magnetic surface currents are specified and considered as impressed currents. Often the currents are placed on perfect conductors. It is shown that these currents can be treated through two approaches. The first approach is decomposition of the total field into partial fields caused by the individual impressed currents. When this approach is used, it is shown that, on a perfect electric (magnetic) conductor, impressed electric (magnetic) surface currents are short-circuited. The second approach is to note that, since Maxwell's equations and the boundary conditions are satisfied, none of the impressed currents is short-circuited and no currents are induced on the perfect conductors. Since all currents and field quantities are considered at the same time, this approach is referred to as the total-field approach. The partial-field approach leads...

  13. Cosmological equivalence principle and the weak-field limit

    International Nuclear Information System (INIS)

    Wiltshire, David L.

    2008-01-01

    The strong equivalence principle is extended in application to averaged dynamical fields in cosmology to include the role of the average density in the determination of inertial frames. The resulting cosmological equivalence principle is applied to the problem of synchronization of clocks in the observed universe. Once density perturbations grow to give density contrasts of order 1 on scales of tens of megaparsecs, the integrated deceleration of the local background regions of voids relative to galaxies must be accounted for in the relative synchronization of clocks of ideal observers who measure an isotropic cosmic microwave background. The relative deceleration of the background can be expected to represent a scale in which weak-field Newtonian dynamics should be modified to account for dynamical gradients in the Ricci scalar curvature of space. This acceleration scale is estimated using the best-fit nonlinear bubble model of the universe with backreaction. Although the relative deceleration of the background, of order 10^-10 m s^-2, is small, when integrated over the lifetime of the universe it amounts to an accumulated relative difference of 38% in the rate of average clocks in galaxies as compared to volume-average clocks in the emptiness of voids. A number of foundational aspects of the cosmological equivalence principle are also discussed, including its relation to Mach's principle, the Weyl curvature hypothesis, and the initial conditions of the universe.

  14. Study of field assessment methods and worker risks for processing alternatives to support principles for FUSRAP waste materials. Part 1: Treatment methods and comparative risks of thorium removal from waste residues

    Energy Technology Data Exchange (ETDEWEB)

    Porter, R.D.; Hamby, D.M.; Martin, J.E.

    1997-07-01

    This study was done to examine the risks of remediation and the effectiveness of removal methods for thorium and its associated radioactive decay products from various soils and wastes associated with DOE's Formerly Utilized Sites Remedial Action Program (FUSRAP). Its purpose was to provide information to the Environmental Management Advisory Board's FUSRAP Committee for use in its deliberation of guiding principles for FUSRAP sites, in particular the degree to which treatment should be considered in the FUSRAP Committee's recommendations. Treatment of FUSRAP wastes to remove thorium could be beneficial to management of lands that contain thorium if such treatment were effective and cost efficient. It must be recognized, however, that treatment methods invariably require workers to process residues and waste materials usually with bulk handling techniques. These processes expose workers to the radioactivity in the materials, therefore, workers would incur radiological risks in addition to industrial accident risks. An important question is whether the potential reduction of future radiological risks to members of the public justifies the risks that are incurred by remediation workers due to handling materials. This study examines, first, the effectiveness of treatment and then the risks that would be associated with remediation. Both types of information should be useful for decisions on whether and how to apply thorium removal methods to FUSRAP waste materials.

  15. Study of field assessment methods and worker risks for processing alternatives to support principles for FUSRAP waste materials. Part 1: Treatment methods and comparative risks of thorium removal from waste residues

    International Nuclear Information System (INIS)

    Porter, R.D.; Hamby, D.M.; Martin, J.E.

    1997-07-01

    This study was done to examine the risks of remediation and the effectiveness of removal methods for thorium and its associated radioactive decay products from various soils and wastes associated with DOE's Formerly Utilized Sites Remedial Action Program (FUSRAP). Its purpose was to provide information to the Environmental Management Advisory Board's FUSRAP Committee for use in its deliberation of guiding principles for FUSRAP sites, in particular the degree to which treatment should be considered in the FUSRAP Committee's recommendations. Treatment of FUSRAP wastes to remove thorium could be beneficial to management of lands that contain thorium if such treatment were effective and cost efficient. It must be recognized, however, that treatment methods invariably require workers to process residues and waste materials usually with bulk handling techniques. These processes expose workers to the radioactivity in the materials, therefore, workers would incur radiological risks in addition to industrial accident risks. An important question is whether the potential reduction of future radiological risks to members of the public justifies the risks that are incurred by remediation workers due to handling materials. This study examines, first, the effectiveness of treatment and then the risks that would be associated with remediation. Both types of information should be useful for decisions on whether and how to apply thorium removal methods to FUSRAP waste materials

  16. Hot gauge field properties from the thermal variational principle

    International Nuclear Information System (INIS)

    Schroeder, Y.; Schulz, H.

    1995-10-01

    A Feynman-Jensen version of the thermal variational principle is applied to hot gauge fields, abelian as well as nonabelian: scalar electrodynamics (without scalar self-coupling) and the gluon plasma. The perturbatively known self-energies are shown to derive by variation from a free quadratic ("Gaussian") trial Lagrangian. Independence of the covariant gauge fixing parameter is reached (within the order-g² studies and for scalar ED) after a reformulation of the partition function such that it depends on only even powers of the gauge field. This way, however, the potential non-perturbative power of the calculus seems to be ruined. (orig.)
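
    The inequality underlying this approach, in its Feynman-Jensen (Gibbs-Bogoliubov) form for a trial action S_0 with free energy F_0, is the standard statement

    $$F \;\le\; F_0 + \frac{1}{\beta}\,\bigl\langle S - S_0 \bigr\rangle_0,$$

    and making the right-hand side stationary with respect to the parameters of the quadratic ("Gaussian") trial Lagrangian is what reproduces the perturbatively known self-energies by variation.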

  17. A homogeneous static gravitational field and the principle of equivalence

    International Nuclear Information System (INIS)

    Chernikov, N.A.

    2001-01-01

    In this paper any gravitational field (both in the Einsteinian case and in the Newtonian case) is described by a connection, called the gravitational connection. A homogeneous static gravitational field is considered in the four-dimensional region z>0 of a space-time with Cartesian coordinates x, y, z, and t. Such a field can be created by masses disposed outside the region z>0 with a density distribution independent of x, y, and t. Remarkably, in the four-dimensional region z>0, the primitive gravitational connection has been derived together with the primitive background connection. In accordance with the Principle of Equivalence, all components of this gravitational connection are equal to zero in the uniformly accelerated frame, in which the gravitational force of attraction is balanced by the inertial force. All components of the background connection, however, are equal to zero in the resting frame, but not in the accelerated frame

  18. Principles of crystallization, and methods of single crystal growth

    International Nuclear Information System (INIS)

    Chacra, T.

    2010-01-01

    Most single crystals (monocrystals) have distinctive optical, electrical, or magnetic properties, which make single crystals key elements in most modern technical devices: they may be used as lenses, prisms, or gratings in optical devices, as filters in X-ray and spectrographic devices, or as conductors and semiconductors in the electronics and computer industries. Furthermore, single crystals are used in transducer devices. Moreover, they are indispensable elements in laser and maser emission technology. Crystal Growth Technology (CGT) has started and developed in international universities and scientific institutions, aiming at single crystals which may have significant properties and industrial applications that can attract the attention of international crystal growth centers to adopt the industrial production and marketing of such crystals. Unfortunately, Arab universities in general, and Syrian universities in particular, do not give even minimal attention to this field of science. The purpose of this work is to attract the attention of crystallographers, physicists and chemists in Arab universities and research centers to the importance of crystal growth, and, as a first stage, to work on establishing simple, uncomplicated laboratories for the growth of single crystals. Such laboratories can be supplied with equipment which is partly available or can be manufactured in the local market. Many references (articles, papers, diagrams, etc.) have been studied in order to extract the most important theoretical principles of phase transitions, especially of crystallization. The conclusions of this study are summarized in three principles: the thermodynamic, morphologic, and kinetic principles. The study is completed by a brief description of the main single crystal growth methods, with sketches of the equipment used in each method, which can be considered as preliminary designs for the equipment of a new crystal growth laboratory. (author)

  19. The Lattice Boltzmann method principles and practice

    CERN Document Server

    Krüger, Timm; Kuzmin, Alexandr; Shardt, Orest; Silva, Goncalo; Viggen, Erlend Magnus

    2017-01-01

    This book is an introduction to the theory, practice, and implementation of the Lattice Boltzmann (LB) method, a powerful computational fluid dynamics method that is steadily gaining attention due to its simplicity, scalability, extensibility, and simple handling of complex geometries. The book contains chapters on the method's background, fundamental theory, advanced extensions, and implementation. To aid beginners, the most essential paragraphs in each chapter are highlighted, and the introductory chapters on various LB topics are front-loaded with special "in a nutshell" sections that condense the chapter's most important practical results. Together, these sections can be used to quickly get up and running with the method. Exercises are integrated throughout the text, and frequently asked questions about the method are dealt with in a special section at the beginning. In the book itself and through its web page, readers can find example codes showing how the LB method can be implemented efficiently on a va...
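
    As a concrete illustration of the method's basic collide-and-stream loop, the following is a minimal, self-contained D2Q9 BGK sketch in Python for a decaying shear wave on a periodic box. It is not taken from the book; the grid size, relaxation time and initial condition are arbitrary illustrative choices.

import numpy as np

# D2Q9 lattice: discrete velocities c_i and weights w_i
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)

nx, ny, tau = 64, 64, 0.8        # grid size and BGK relaxation time (arbitrary choices)
nu = (tau - 0.5) / 3.0           # lattice kinematic viscosity, c_s^2 = 1/3

def equilibrium(rho, ux, uy):
    """Second-order (low-Mach) BGK equilibrium distribution."""
    cu = np.einsum('id,dxy->ixy', c, np.array([ux, uy]))   # c_i . u at every node
    usq = ux**2 + uy**2
    return rho * w[:, None, None] * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)

# initial condition: uniform density, small sinusoidal shear wave u_x(y)
y = np.arange(ny)
rho = np.ones((nx, ny))
ux = 0.01 * np.sin(2*np.pi*y/ny)[None, :] * np.ones((nx, ny))
uy = np.zeros((nx, ny))
f = equilibrium(rho, ux, uy)

for step in range(2000):
    # macroscopic moments
    rho = f.sum(axis=0)
    ux = np.einsum('i,ixy->xy', c[:, 0], f) / rho
    uy = np.einsum('i,ixy->xy', c[:, 1], f) / rho
    # collide (BGK), then stream on the periodic lattice
    f += -(f - equilibrium(rho, ux, uy)) / tau
    for i in range(9):
        f[i] = np.roll(np.roll(f[i], c[i, 0], axis=0), c[i, 1], axis=1)

print("shear-wave amplitude after 2000 steps:", np.abs(ux).max())

    With periodic boundaries the wave amplitude should decay approximately as exp(-ν k² t) with k = 2π/ny, which gives a quick check that the loop reproduces viscous hydrodynamics.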

  20. Detailed balance principle and finite-difference stochastic equation in a field theory

    International Nuclear Information System (INIS)

    Kozhamkulov, T.A.

    1986-01-01

    A finite-difference equation, which is a generalization of the Langevin equation in field theory, has been obtained based on the principle of detailed balance for the Markov chain. Advantages of the present approach as compared with the conventional Parisi-Wu method are shown for examples of an exactly solvable problem in zero-dimensional quantum theory and a simple numerical simulation

  1. Principle of detailed balance and the finite-difference stochastic equation in field theory

    International Nuclear Information System (INIS)

    Kozhamkulov, T.A.

    1986-01-01

    The principle of detailed balance for the Markov chain is used to obtain a finite-difference equation which generalizes the Langevin equation in field theory. The advantages of using this approach compared to the conventional Parisi-Wu method are demonstrated for the examples of an exactly solvable problem in zero-dimensional quantum theory and a simple numerical simulation
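
    For comparison, the conventional Parisi-Wu scheme that both records take as their starting point can be sketched in a few lines for a zero-dimensional theory; the Euler-Maruyama Langevin update below (with an arbitrarily chosen quartic action) is the baseline that the detailed-balance finite-difference equation generalizes, not the papers' own scheme.

import numpy as np

rng = np.random.default_rng(0)

# zero-dimensional "field theory": action S(phi) = (1/2) m^2 phi^2 + (lam/24) phi^4
m2, lam = 1.0, 0.4

def dS(phi):
    """Derivative of the action: the drift of the Langevin (Parisi-Wu) equation."""
    return m2 * phi + lam * phi**3 / 6.0

eps, n_steps, n_therm = 1e-2, 400_000, 40_000   # fictitious-time step and run lengths (illustrative)
phi, acc, count = 0.0, 0.0, 0

for n in range(n_steps):
    # Euler-Maruyama discretization of  d(phi)/d(tau) = -S'(phi) + eta(tau)
    phi += -eps * dS(phi) + np.sqrt(2.0 * eps) * rng.standard_normal()
    if n >= n_therm:
        acc += phi * phi
        count += 1

print("Langevin estimate of <phi^2> :", acc / count)

# independent check: direct quadrature of the zero-dimensional "path integral"
x = np.linspace(-10.0, 10.0, 20001)
wgt = np.exp(-(0.5 * m2 * x**2 + lam * x**4 / 24.0))
print("quadrature value of <phi^2>  :", np.sum(x * x * wgt) / np.sum(wgt))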

  2. Generalization of the ERIT Principle and Method

    International Nuclear Information System (INIS)

    Ruggiero, A.

    2008-01-01

    The paper describes the generalization of the method to produce secondary particles with a low-energy and low-intensity primary beam circulating in a storage ring with Emittance Recovery by Internal Target (ERIT).

  3. Principles of physics from quantum field theory to classical mechanics

    CERN Document Server

    Jun, Ni

    2014-01-01

    This book starts from a set of common basic principles to establish the formalisms in all areas of fundamental physics, including quantum field theory, quantum mechanics, statistical mechanics, thermodynamics, general relativity, electromagnetic field, and classical mechanics. Instead of the traditional pedagogic way, the author arranges the subjects and formalisms in a logical-sequential way, i.e. all the formulas are derived from the formulas before them. The formalisms are also kept self-contained. Most of the required mathematical tools are also given in the appendices. Although this book covers all the disciplines of fundamental physics, the book is concise and can be treated as an integrated entity. This is consistent with the aphorism that simplicity is beauty, unification is beauty, and thus physics is beauty. The book may be used as an advanced textbook by graduate students. It is also suitable for physicists who wish to have an overview of fundamental physics. Readership: This is an advanced gradua...

  4. The Principle-Based Method of Practical Ethics.

    Science.gov (United States)

    Spielthenner, Georg

    2017-09-01

    This paper is about the methodology of doing practical ethics. There is a variety of methods employed in ethics. One of them is the principle-based approach, which has an established place in ethical reasoning. In everyday life, we often judge the rightness and wrongness of actions by their conformity to principles, and the appeal to principles plays a significant role in practical ethics, too. In this paper, I try to provide a better understanding of the nature of principle-based reasoning. To accomplish this, I show in the first section that these principles can be applied to cases in a meaningful and sufficiently precise way. The second section discusses the question how relevant applying principles is to the resolution of ethical issues. This depends on their nature. I argue that the principles under consideration in this paper should be interpreted as presumptive principles and I conclude that although they cannot be expected to bear the weight of definitely resolving ethical problems, these principles can nevertheless play a considerable role in ethical research.

  5. THE PRINCIPLES OF METHODICAL SYSTEM OF TEACHING GEOMETRY

    Directory of Open Access Journals (Sweden)

    Svetlana Feodosevna Miteneva

    2016-01-01

    The article deals with the main components of a methodical system of teaching geometry, including the development of spatial thinking, the process of creating images, and the conditions for the organization of students' cognitive activity. The author describes the scheme of formation of spatial understanding of geometric objects, specifies the conditions for organizing students' activities aimed at creating a spatial image of the studied object, lists the research skills needed to solve geometric problems and the ways to implement these skills in practice, and gives a summary of methods of teaching geometry, including the principle of the priority of a holistic approach, the principle of open multi-valuedness, and the principle of taking subjective experience into account.

  6. A Stochastic Maximum Principle for General Mean-Field Systems

    International Nuclear Information System (INIS)

    Buckdahn, Rainer; Li, Juan; Ma, Jin

    2016-01-01

    In this paper we study the optimal control problem for a class of general mean-field stochastic differential equations, in which the coefficients depend, nonlinearly, on both the state process and its law. In particular, we assume that the control set is a general open set that is not necessarily convex, and the coefficients are only continuous in the control variable without any further regularity or convexity. We validate the approach of Peng (SIAM J Control Optim 2(4):966–979, 1990) by considering the second order variational equations and the corresponding second order adjoint process in this setting, and we extend the Stochastic Maximum Principle of Buckdahn et al. (Appl Math Optim 64(2):197–216, 2011) to this general case.

  7. A Stochastic Maximum Principle for General Mean-Field Systems

    Energy Technology Data Exchange (ETDEWEB)

    Buckdahn, Rainer, E-mail: Rainer.Buckdahn@univ-brest.fr [Université de Bretagne-Occidentale, Département de Mathématiques (France)]; Li, Juan, E-mail: juanli@sdu.edu.cn [Shandong University, Weihai, School of Mathematics and Statistics (China)]; Ma, Jin, E-mail: jinma@usc.edu [University of Southern California, Department of Mathematics (United States)]

    2016-12-15

    In this paper we study the optimal control problem for a class of general mean-field stochastic differential equations, in which the coefficients depend, nonlinearly, on both the state process and its law. In particular, we assume that the control set is a general open set that is not necessarily convex, and the coefficients are only continuous in the control variable without any further regularity or convexity. We validate the approach of Peng (SIAM J Control Optim 2(4):966–979, 1990) by considering the second order variational equations and the corresponding second order adjoint process in this setting, and we extend the Stochastic Maximum Principle of Buckdahn et al. (Appl Math Optim 64(2):197–216, 2011) to this general case.

  8. Methods for magnetostatic field calculation

    International Nuclear Information System (INIS)

    Vorozhtsov, S.B.

    1984-01-01

    Two methods for magnetostatic field calculation, differential and integral, are considered. Both approaches are shown to have certain merits and drawbacks; the choice of method depends on the type of problem being solved. The possibility of combining these two methods in one algorithm (a hybrid method) is considered
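
    As a small illustration of the integral approach mentioned above (a generic Biot-Savart quadrature for a single circular current loop, not code from this report), the field can be computed by summing the contributions of discrete segments and checked against the analytic on-axis value:

import numpy as np

mu0 = 4e-7 * np.pi          # vacuum permeability [T m / A]
I, R = 1.0, 0.1             # loop current [A] and loop radius [m] (illustrative values)

def loop_field(point, n_seg=2000):
    """Biot-Savart sum over straight segments of a circular loop in the z=0 plane."""
    phi = np.linspace(0.0, 2.0 * np.pi, n_seg, endpoint=False)
    dphi = 2.0 * np.pi / n_seg
    src = np.stack([R * np.cos(phi), R * np.sin(phi), np.zeros_like(phi)], axis=1)
    dl = np.stack([-R * np.sin(phi), R * np.cos(phi), np.zeros_like(phi)], axis=1) * dphi
    r = point - src                                  # vectors from source segments to field point
    rnorm = np.linalg.norm(r, axis=1, keepdims=True)
    dB = mu0 * I / (4.0 * np.pi) * np.cross(dl, r) / rnorm**3
    return dB.sum(axis=0)

z = 0.05
numeric = loop_field(np.array([0.0, 0.0, z]))
analytic = mu0 * I * R**2 / (2.0 * (R**2 + z**2) ** 1.5)
print("numeric  B_z on axis:", numeric[2])
print("analytic B_z on axis:", analytic)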

  9. Derivation of the phase field equations from the thermodynamic extremal principle

    International Nuclear Information System (INIS)

    Svoboda, J.; Fischer, F.D.; McDowell, D.L.

    2012-01-01

    Thermodynamics employs quantities that characterize the state of the system and provides driving forces for system evolution. These quantities can be applied by means of the thermodynamic extremal principle to obtain models and consequently constitutive equations for the evolution of the thermodynamic systems. The phase field method is a promising tool for simulation of the microstructure evolution in complex systems but introduces several parameters that are not standard in thermodynamics. The purpose of this paper is to show how the phase field method equations can be derived from the thermodynamic extremal principle, allowing the common treatment of the phase field parameters together with standard thermodynamic parameters in future applications. Fixed values of the phase field parameters may, however, not guarantee fixed values of thermodynamic parameters. Conditions are determined, for which relatively stable values of the thermodynamic parameters are guaranteed during phase field method simulations of interface migration. Finally, analytical relations between the thermodynamic and phase field parameters are found and verified for these simulations. A slight dependence of the thermodynamic parameters on the driving force is determined for the cases examined.
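
    For readers new to the method, the simplest non-conserved (Allen-Cahn-type) phase field model pairs a free-energy functional with gradient-flow dynamics; the generic textbook form is shown below, whereas the paper derives such equations, and the meaning of their parameters, from the thermodynamic extremal principle rather than postulating them:

    $$F[\phi] = \int_V \Bigl( f(\phi) + \frac{\varepsilon^2}{2}\,|\nabla\phi|^2 \Bigr)\, dV, \qquad \frac{\partial\phi}{\partial t} = -M\,\frac{\delta F}{\delta\phi} = -M\bigl( f'(\phi) - \varepsilon^2\nabla^2\phi \bigr),$$

    where φ is the phase field, ε sets the interface width and M is a mobility.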

  10. Weak principle of equivalence and gauge theory of tetrad gravitational field

    International Nuclear Information System (INIS)

    Tunyak, V.N.

    1978-01-01

    It is shown that, unlike the tetrad formulation of general relativity derived from the requirement of localization of the Poincaré group, the tetrad gravitation theory corresponding to the Treder formulation of the weak equivalence principle, in which the nongravitational-matter Lagrangian is the direct covariant generalization of its special-relativistic expression to the Riemann space-time, is incompatible with the known method for deriving a gauge theory of the tetrad gravitational field

  11. The Plastic Tension Field Method

    DEFF Research Database (Denmark)

    Hansen, Thomas

    2005-01-01

    This paper describes a calculation method for steel plate girders with transverse web stiffeners subjected to shear. It may be used for predicting the failure load or, as a design method, to determine the optimal amount of internal web stiffeners. The new method is called the plastic tension field method. The method is based on the theory of plasticity and is analogous to the so-called diagonal compression field method developed for reinforced concrete beams with transverse stirrups, which is adopted in the common European concrete code (Eurocode 2). Many other theories have been developed...

  12. First principles molecular dynamics without self-consistent field optimization

    International Nuclear Information System (INIS)

    Souvatzis, Petros; Niklasson, Anders M. N.

    2014-01-01

    We present a first principles molecular dynamics approach that is based on time-reversible extended Lagrangian Born-Oppenheimer molecular dynamics [A. M. N. Niklasson, Phys. Rev. Lett. 100, 123004 (2008)] in the limit of vanishing self-consistent field optimization. The optimization-free dynamics keeps the computational cost to a minimum and typically provides molecular trajectories that closely follow the exact Born-Oppenheimer potential energy surface. Only one single diagonalization and Hamiltonian (or Fockian) construction are required in each integration time step. The proposed dynamics is derived for a general free-energy potential surface valid at finite electronic temperatures within hybrid density functional theory. Even in the event of irregular functional behavior that may cause a dynamical instability, the optimization-free limit represents a natural starting guess for force calculations that may require a more elaborate iterative electronic ground state optimization. Our optimization-free dynamics thus represents a flexible theoretical framework for a broad and general class of ab initio molecular dynamics simulations

  13. Speckle imaging using the principle value decomposition method

    International Nuclear Information System (INIS)

    Sherman, J.W.

    1978-01-01

    Obtaining diffraction-limited images in the presence of atmospheric turbulence is a topic of current interest. Two types of approaches have evolved: real-time correction and speckle imaging. A speckle imaging reconstruction method was developed by use of an "optimal" filtering approach. This method is based on a nonlinear integral equation which is solved by principle value decomposition. The method was implemented on a CDC 7600 for study. The restoration algorithm is discussed and its performance is illustrated. 7 figures
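
    Although the report's exact algorithm is not reproduced here, the core idea of a principal-value (singular value) decomposition restoration can be sketched generically: an ill-conditioned discretized integral equation Ax = b is inverted by keeping only the well-determined singular components. The test problem and truncation threshold below are illustrative assumptions, not the report's data.

import numpy as np

rng = np.random.default_rng(1)

# toy ill-conditioned forward operator: discretized Gaussian blurring kernel
n = 200
t = np.linspace(-1.0, 1.0, n)
A = np.exp(-((t[:, None] - t[None, :]) ** 2) / (2.0 * 0.05**2))
A /= A.sum(axis=1, keepdims=True)

x_true = (np.abs(t) < 0.3).astype(float)            # object to be restored
b = A @ x_true + 1e-3 * rng.standard_normal(n)      # blurred, noisy measurement

# truncated (principal/singular value) decomposition: keep only well-determined components
U, s, Vt = np.linalg.svd(A)
k = int(np.sum(s > 1e-2 * s[0]))                    # noise-level-dependent truncation
x_rec = Vt[:k].T @ ((U[:, :k].T @ b) / s[:k])

rel_err = np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true)
print(f"kept {k} of {n} singular values, relative restoration error {rel_err:.3f}")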

  14. Thermodynamic properties of organic compounds estimation methods, principles and practice

    CERN Document Server

    Janz, George J

    1967-01-01

    Thermodynamic Properties of Organic Compounds: Estimation Methods, Principles and Practice, Revised Edition focuses on the progression of practical methods in computing the thermodynamic characteristics of organic compounds. Divided into two parts with eight chapters, the book concentrates first on the methods of estimation. Topics presented are statistical and combined thermodynamic functions; free energy change and equilibrium conversions; and estimation of thermodynamic properties. The next discussions focus on the thermodynamic properties of simple polyatomic systems by statistical the

  15. Achieving Integration in Mixed Methods Designs—Principles and Practices

    OpenAIRE

    Fetters, Michael D; Curry, Leslie A; Creswell, John W

    2013-01-01

    Mixed methods research offers powerful tools for investigating complex processes and systems in health and health care. This article describes integration principles and practices at three levels in mixed methods research and provides illustrative examples. Integration at the study design level occurs through three basic mixed method designs—exploratory sequential, explanatory sequential, and convergent—and through four advanced frameworks—multistage, intervention, case study, and participato...

  16. Solid state nuclear track detection principles, methods and applications

    CERN Document Server

    Durrani, S A; ter Haar, D

    1987-01-01

    Solid State Nuclear Track Detection: Principles, Methods and Applications is the second book written by the authors after Nuclear Tracks in Solids: Principles and Applications. The book is meant as an introduction to the subject of solid state nuclear track detection. The text covers the interactions of charged particles with matter; the nature of the charged-particle track; the methodology and geometry of track etching; thermal fading of latent damage trails on tracks; the use of dielectric track recorders in particle identification; radiation dosimetry; and solid state nuclear track detecti

  17. Perfect Form: Variational Principles, Methods, and Applications in Elementary Physics

    International Nuclear Information System (INIS)

    Isenberg, C

    1997-01-01

    This short book is concerned with the physical applications of variational principles of the calculus. It is intended for undergraduate students who have taken some introductory lectures on the subject and have been exposed to Lagrangian and Hamiltonian mechanics. Throughout the book the author emphasizes the historical background to the subject and provides numerous problems, mainly from the fields of mechanics and optics. Some of these problems are provided with an answer, while others, regretfully, are not. It would have been an added help to the undergraduate reader if complete solutions could have been provided in an appendix. The introductory chapter is concerned with Fermat's Principle and image formation. This is followed by the derivation of the Euler-Lagrange equation. The third chapter returns to the subject of optical paths without making the link with a mechanical variational principle - that comes later. Chapters on the subjects of minimum potential energy, least action and Hamilton's principle follow. This volume provides an 'easy read' for a student keen to learn more about the subject. It is well illustrated and will make a useful addition to all undergraduate physics libraries. (book review)

  18. Enhancing Food Processing by Pulsed and High Voltage Electric Fields: Principles and Applications.

    Science.gov (United States)

    Wang, Qijun; Li, Yifei; Sun, Da-Wen; Zhu, Zhiwei

    2018-02-02

    Improvements in living standards result in a growing demand for food with high quality attributes including freshness, nutrition and safety. However, current industrial processing relies on traditional thermal and chemical methods, such as sterilization and solvent extraction, which can have negative effects on food quality and safety. Electric fields (EFs), comprising pulsed electric fields (PEFs) and high voltage electric fields (HVEFs), have been studied and developed for assisting and enhancing various food processes. In this review, the principles and applications of pulsed and high voltage electric fields are described in detail for a range of food processes, including microbial inactivation, component extraction, winemaking, thawing, drying, freezing and enzymatic inactivation. Moreover, the advantages and limitations of electric field related technologies are discussed to foresee future developments in the food industry. This review demonstrates that electric field technology has great potential to enhance food processing by supplementing or replacing the conventional methods employed in different food manufacturing processes. Successful industrial applications of electric field treatments have been achieved in some areas such as microbial inactivation and extraction. However, investigations of HVEFs are still at an early stage, and translating the technology into industrial applications needs further research effort.

  19. Thermopower switching by magnetic field: first-principles calculations

    DEFF Research Database (Denmark)

    Maslyuk, Volodymyr V.; Achilles, Steven; Sandratskii, Leonid

    2013-01-01

    We present first-principles studies of the thermopower of the organometallic V4Bz5 molecule attached between Co electrodes with noncollinear magnetization directions. Different regimes in the formation of the noncollinear magnetic state of the molecule lead to a remarkable nonmonotonous dependence...

  20. Variational methods for field theories

    Energy Technology Data Exchange (ETDEWEB)

    Ben-Menahem, S.

    1986-09-01

    Four field theory models are studied: Periodic Quantum Electrodynamics (PQED) in (2 + 1) dimensions, free scalar field theory in (1 + 1) dimensions, the Quantum XY model in (1 + 1) dimensions, and the (1 + 1) dimensional Ising model in a transverse magnetic field. The last three parts deal exclusively with variational methods; the PQED part involves mainly the path-integral approach. The PQED calculation results in a better understanding of the connection between electric confinement through monopole screening, and confinement through tunneling between degenerate vacua. This includes a better quantitative agreement for the string tensions in the two approaches. Free field theory is used as a laboratory for a new variational blocking-truncation approximation, in which the high-frequency modes in a block are truncated to wave functions that depend on the slower background modes (Born-Oppenheimer approximation). This "adiabatic truncation" method gives very accurate results for ground-state energy density and correlation functions. Various adiabatic schemes, with one variable kept per site and then two variables per site, are used. For the XY model, several trial wave functions for the ground state are explored, with an emphasis on the periodic Gaussian. A connection is established with the vortex Coulomb gas of the Euclidean path integral approach. The approximations used are taken from the realms of statistical mechanics (mean field approximation, transfer-matrix methods) and of quantum mechanics (iterative blocking schemes). In developing blocking schemes based on continuous variables, problems due to the periodicity of the model were solved. Our results exhibit an order-disorder phase transition. The transfer-matrix method is used to find a good (non-blocking) trial ground state for the Ising model in a transverse magnetic field in (1 + 1) dimensions.

  1. Achieving integration in mixed methods designs-principles and practices.

    Science.gov (United States)

    Fetters, Michael D; Curry, Leslie A; Creswell, John W

    2013-12-01

    Mixed methods research offers powerful tools for investigating complex processes and systems in health and health care. This article describes integration principles and practices at three levels in mixed methods research and provides illustrative examples. Integration at the study design level occurs through three basic mixed method designs-exploratory sequential, explanatory sequential, and convergent-and through four advanced frameworks-multistage, intervention, case study, and participatory. Integration at the methods level occurs through four approaches. In connecting, one database links to the other through sampling. With building, one database informs the data collection approach of the other. When merging, the two databases are brought together for analysis. With embedding, data collection and analysis link at multiple points. Integration at the interpretation and reporting level occurs through narrative, data transformation, and joint display. The fit of integration describes the extent the qualitative and quantitative findings cohere. Understanding these principles and practices of integration can help health services researchers leverage the strengths of mixed methods. © Health Research and Educational Trust.

  2. Achieving Integration in Mixed Methods Designs—Principles and Practices

    Science.gov (United States)

    Fetters, Michael D; Curry, Leslie A; Creswell, John W

    2013-01-01

    Mixed methods research offers powerful tools for investigating complex processes and systems in health and health care. This article describes integration principles and practices at three levels in mixed methods research and provides illustrative examples. Integration at the study design level occurs through three basic mixed method designs—exploratory sequential, explanatory sequential, and convergent—and through four advanced frameworks—multistage, intervention, case study, and participatory. Integration at the methods level occurs through four approaches. In connecting, one database links to the other through sampling. With building, one database informs the data collection approach of the other. When merging, the two databases are brought together for analysis. With embedding, data collection and analysis link at multiple points. Integration at the interpretation and reporting level occurs through narrative, data transformation, and joint display. The fit of integration describes the extent the qualitative and quantitative findings cohere. Understanding these principles and practices of integration can help health services researchers leverage the strengths of mixed methods. PMID:24279835

  3. SYSTEMATIC PRINCIPLES AND METHODS OF SYMBOLIC APPROACHES IN URBAN DESIGN

    Directory of Open Access Journals (Sweden)

    BULAKH I. V

    2015-12-01

    Full Text Available Formulation of the problem. The low level of expressiveness and individuality in the mass architecture of the second half of the twentieth century is connected with the spread of industrial technology and, to an even greater extent, with a mechanistic, traditionally functional attitude toward the average person as an abstract consumer of architecture. A way out of this critical situation is to focus on aesthetic questions, artistic understanding and the creation of a harmonious image of the environment. The problem of raising the architectural and artistic level of architectural and urban planning solutions, overcoming the monotony of planning and development, and creating an aesthetically expressive urban environment has not lost its relevance over the past decades. Understanding and accepting the enigmatic nature and dynamic development of cities encourages architects to seek new design techniques able to provide a reasonable possibility of forming the artistic and aesthetic image of the modern city. Purpose. To define and systematize the principles of symbolization of architectural and planning images, and to propose methods of symbolism in the architectural planning of the image of the urban environment. Conclusion. Based on an analysis of the extended concept of symbolizing the architectural and planning image, the place, role and trends of symbolization are identified at all levels of the urban environment (planning, three-dimensional composition and improvement of urban areas); the main stages and levels of symbolization (analogization, schematization and allegorization), together with their features and characteristics, are identified for the first time; and the basic principles of symbolization of the architectural and planning image are formulated, namely the principles of connection between figurative analogies, transformation of subsequent schemes, and switching of allegorical groupings and metamorphism, with the ultimate goal of the birth of a symbol.

  4. Properties of gases, liquids, and solutions principles and methods

    CERN Document Server

    Mason, Warren P

    2013-01-01

    Physical Acoustics: Principles and Methods, Volume II, Part A: Properties of Gases, Liquids, and Solutions focuses on high frequency sound waves in gases, liquids, and solids, which have proven to be effective tools for examining molecular, domain wall, and other types of motion. The selection first offers information on the transmission of sound waves in gases at very low pressures and the phenomenological theory of relaxation phenomena in gases. Topics include free molecule propagation, phenomenological thermodynamics of irreversible processes, and simultaneous multiple relaxation pro

  5. Michelson wide-field stellar interferometry : Principles and experimental verification

    NARCIS (Netherlands)

    Montilla, I.; Pereira, S.F.; Braat, J.J.M.

    2005-01-01

    A new interferometric technique for Michelson wide-field interferometry is presented that consists of a Michelson pupil-plane combination scheme in which a wide field of view can be achieved in one shot. This technique uses a stair-shaped mirror in the intermediate image plane of each telescope in

  6. First-Principles Lattice Dynamics Method for Strongly Anharmonic Crystals

    Science.gov (United States)

    Tadano, Terumasa; Tsuneyuki, Shinji

    2018-04-01

    We review our recent development of a first-principles lattice dynamics method that can treat anharmonic effects nonperturbatively. The method is based on the self-consistent phonon theory, and temperature-dependent phonon frequencies can be calculated efficiently by incorporating recent numerical techniques to estimate anharmonic force constants. The validity of our approach is demonstrated through applications to cubic strontium titanate, where overall good agreement with experimental data is obtained for phonon frequencies and lattice thermal conductivity. We also show the feasibility of highly accurate calculations based on a hybrid exchange-correlation functional within the present framework. Our method provides a new way of studying lattice dynamics in severely anharmonic materials where the standard harmonic approximation and the perturbative approach break down.

  7. Generalized uncertainty principle as a consequence of the effective field theory

    Energy Technology Data Exchange (ETDEWEB)

    Faizal, Mir, E-mail: mirfaizalmir@gmail.com [Irving K. Barber School of Arts and Sciences, University of British Columbia – Okanagan, Kelowna, British Columbia V1V 1V7 (Canada); Department of Physics and Astronomy, University of Lethbridge, Lethbridge, Alberta T1K 3M4 (Canada); Ali, Ahmed Farag, E-mail: ahmed.ali@fsc.bu.edu.eg [Department of Physics, Faculty of Science, Benha University, Benha, 13518 (Egypt); Netherlands Institute for Advanced Study, Korte Spinhuissteeg 3, 1012 CG Amsterdam (Netherlands); Nassar, Ali, E-mail: anassar@zewailcity.edu.eg [Department of Physics, Zewail City of Science and Technology, 12588, Giza (Egypt)

    2017-02-10

    We will demonstrate that the generalized uncertainty principle exists because of the derivative expansion in the effective field theories. This is because in the framework of the effective field theories, the minimum measurable length scale has to be integrated away to obtain the low energy effective action. We will analyze the deformation of a massive free scalar field theory by the generalized uncertainty principle, and demonstrate that the minimum measurable length scale corresponds to a second more massive scale in the theory, which has been integrated away. We will also analyze CFT operators dual to this deformed scalar field theory, and observe that scaling of the new CFT operators indicates that they are dual to this more massive scale in the theory. We will use holographic renormalization to explicitly calculate the renormalized boundary action with counter terms for this scalar field theory deformed by generalized uncertainty principle, and show that the generalized uncertainty principle contributes to the matter conformal anomaly.
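
    For orientation only, a standard textbook form of the generalized uncertainty principle (not an equation quoted from this paper) follows from a deformed commutator and implies a minimum measurable length, which is the scale that, according to the abstract, has to be integrated away in the effective field theory:

      [x, p] = i\hbar\,(1 + \beta p^2)
      \quad\Longrightarrow\quad
      \Delta x\,\Delta p \ge \frac{\hbar}{2}\left(1 + \beta(\Delta p)^2\right)
      \quad\Longrightarrow\quad
      \Delta x_{\min} \simeq \hbar\sqrt{\beta}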

  8. Generalized uncertainty principle as a consequence of the effective field theory

    Directory of Open Access Journals (Sweden)

    Mir Faizal

    2017-02-01

    Full Text Available We will demonstrate that the generalized uncertainty principle exists because of the derivative expansion in the effective field theories. This is because in the framework of the effective field theories, the minimum measurable length scale has to be integrated away to obtain the low energy effective action. We will analyze the deformation of a massive free scalar field theory by the generalized uncertainty principle, and demonstrate that the minimum measurable length scale corresponds to a second more massive scale in the theory, which has been integrated away. We will also analyze CFT operators dual to this deformed scalar field theory, and observe that scaling of the new CFT operators indicates that they are dual to this more massive scale in the theory. We will use holographic renormalization to explicitly calculate the renormalized boundary action with counter terms for this scalar field theory deformed by generalized uncertainty principle, and show that the generalized uncertainty principle contributes to the matter conformal anomaly.

  9. Principles of spectroscopic diagnostics of a plasma with oscillating electric fields

    International Nuclear Information System (INIS)

    Oks, E.A.

    1986-01-01

    Three types of main principles of spectroscopic diagnostics of a plasma with quasimonochromatic electric fields (QEF) are considered. Principles based on effects that depend jointly on the parameters of the QEF and of the plasma medium are considered; the appearance of depressions or dips in the profiles of spectral lines is the most important of these effects. Principles based on the nonlinear theory of plasma and on laser satellites of spectral lines, as well as laser-spectroscopic diagnostics of QEF in the plasma, are also considered.

  10. Principles of planar near-field antenna measurements

    CERN Document Server

    Gregson, Stuart; Parini, Clive

    2007-01-01

    This single volume provides a comprehensive introduction and explanation of both the theory and practice of 'Planar Near-Field Antenna Measurement' from its basic postulates and assumptions, to the intricacies of its deployment in complex and demanding measurement scenarios.

  11. On Huygens' principle for Dirac operators associated to electromagnetic fields

    Directory of Open Access Journals (Sweden)

    CHALUB FABIO A.C.C.

    2001-01-01

    Full Text Available We study the behavior of massless Dirac particles, i.e., solutions of the Dirac equation with m = 0 in the presence of an electromagnetic field. Our main result (Theorem 1 is that for purely real or imaginary fields any Huygens type (in Hadamard's sense Dirac operators is equivalent to the free Dirac operator, equivalence given by changes of variables and multiplication (right and left by nonzero functions.

  12. [Case-non case studies: Principles, methods, bias and interpretation].

    Science.gov (United States)

    Faillie, Jean-Luc

    2017-10-31

    Case-non case studies belong to the methods for assessing drug safety by analyzing the disproportionality of notifications of adverse drug reactions in pharmacovigilance databases. First used in the 1980s, this design has seen a significant increase in use over the last few decades. The principle of the case-non case study is to compare drug exposure in cases of a studied adverse reaction with that in cases of other reported adverse reactions, called "non cases". Results are presented in the form of a reporting odds ratio (ROR), the interpretation of which makes it possible to identify drug safety signals. This article describes the principle of the case-non case study, the method of calculating the ROR and its confidence interval, the different modalities of analysis, and how to interpret the results with regard to the advantages and limitations of this design. Copyright © 2017 Société française de pharmacologie et de thérapeutique. Published by Elsevier Masson SAS. All rights reserved.
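
    As a hedged numerical sketch of the calculation the article describes (these are the standard disproportionality formulas, not code from the article; the counts are invented):

      import math

      def reporting_odds_ratio(a, b, c, d):
          # a: cases (studied reaction) exposed to the drug
          # b: cases not exposed to the drug
          # c: non-cases exposed to the drug
          # d: non-cases not exposed to the drug
          # Returns the ROR and its 95% confidence interval.
          ror = (a * d) / (b * c)
          se_log = math.sqrt(1/a + 1/b + 1/c + 1/d)
          lo = math.exp(math.log(ror) - 1.96 * se_log)
          hi = math.exp(math.log(ror) + 1.96 * se_log)
          return ror, lo, hi

      print(reporting_odds_ratio(20, 80, 100, 900))  # toy counts; ROR = 2.25

    A signal is typically suspected when the lower bound of the confidence interval exceeds 1, subject to the biases discussed in the article.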

  13. First principle DFT study of electric field effects on the characteristics of bilayer graphene

    Energy Technology Data Exchange (ETDEWEB)

    Sabzyan, Hassan; Sadeghpour, Narges [Isfahan Univ. (Iran, Islamic Republic of). Dept. of Chemistry

    2017-04-01

    First principle density functional theory methods, local density and Perdew-Burke-Ernzerhof generalized gradient approximations with Goedecker pseudopotential (LDA-G and PBE-G), are used to study the electric field effects on the binding energy and atomic charges of bilayer graphene (BLG) at the Γ point of the Brillouin zone based on two types of unit cells (α and β) containing n{sub C}=8-32 carbon atoms. Results show that application of electric fields of 4-24 V/nm strengths reduces the binding energies and induces charge transfer between the two layers. The transferred charge increases almost linearly with the strength of the electric field for all sizes of the two types of unit cells. Furthermore, the charge transfer calculated with the α-type unit cells is more sensitive to the electric field strength. The calculated field-dependent contour plots of the differential charge densities of the two layers show details of charge density redistribution under the influence of the electric field.

  14. Ore reserve estimation: a summary of principles and methods

    International Nuclear Information System (INIS)

    Marques, J.P.M.

    1985-01-01

    The mining industry has experienced substantial improvements with the increasing utilization of computerized and electronic devices throughout the last few years. In the ore reserve estimation field, the main methods have undergone recent advances in order to improve their overall efficiency. This paper presents the three main groups of ore reserve estimation methods presently used worldwide: Conventional, Statistical and Geostatistical, and gives a detailed description and comparative analysis of each. The Conventional Methods are the oldest, least complex and most widely employed ones. The Geostatistical Methods are the most recent, most precise and most complex ones. The Statistical Methods are intermediate to the others in complexity, diffusion and chronological order. (D.J.M.) [pt

  15. Semiclassical methods in field theories

    International Nuclear Information System (INIS)

    Ventura, I.

    1978-10-01

    A new scheme is proposed for semi-classical quantization in field theory - the expansion about the charge (EAC) - which is developed within the canonical formalism. This method is suitable for quantizing theories that are invariant under global gauge transformations. It is used in the treatment of the non-relativistic logarithmic theory that was proposed by Bialynicki-Birula and Mycielski - a theory we can formulate in any number of spatial dimensions. The non-linear Schroedinger equation is also quantized by means of the EAC. The classical logarithmic theories - both the non-relativistic and the relativistic one - are studied in detail. It is shown that the Bohr-Sommerfeld quantization rule (BSQR) in field theory is, in many cases, equivalent to charge quantization. This rule is then applied to the massive Thirring Model and the logarithmic theories. The BSQR can be seen as a simplified and non-local version of the EAC [pt

  16. X-ray microprobe analysis of platelets. Principles, methods and review of the literature.

    Science.gov (United States)

    Yarom, R

    1983-01-01

    Platelets are well suited to X-ray microanalysis as there is no need for chemical fixation or sectioning, and the concentrations of calcium and phosphorus are above 10^-3. The principles of the technique, the methods of specimen preparation, instrumental conditions during analysis and ways of quantitation are described. This is followed by a review of published reports and a brief summary of the author's own work in the field.

  17. Task-space separation principle: a force-field approach to motion planning for redundant manipulators.

    Science.gov (United States)

    Tommasino, Paolo; Campolo, Domenico

    2017-02-03

    In this work, we address human-like motor planning in redundant manipulators. Specifically, we want to capture postural synergies such as Donders' law, experimentally observed in humans during kinematically redundant tasks, and infer a minimal set of parameters to implement similar postural synergies in a kinematic model. For the model itself, although the focus of this paper is to solve redundancy by implementing postural strategies derived from experimental data, we also want to ensure that such postural control strategies do not interfere with other possible forms of motion control (in the task-space), i.e. solving the posture/movement problem. The redundancy problem is framed as a constrained optimization problem, traditionally solved via the method of Lagrange multipliers. The posture/movement problem can be tackled via the separation principle which, derived from experimental evidence, posits that the brain processes static torques (i.e. posture-dependent, such as gravitational torques) separately from dynamic torques (i.e. velocity-dependent). The separation principle has traditionally been applied at a joint torque level. Our main contribution is to apply the separation principle to Lagrange multipliers, which act as task-space force fields, leading to a task-space separation principle. In this way, we can separate postural control (implementing Donders' law) from various types of tasks-space movement planners. As an example, the proposed framework is applied to the (redundant) task of pointing with the human wrist. Nonlinear inverse optimization (NIO) is used to fit the model parameters and to capture motor strategies displayed by six human subjects during pointing tasks. The novelty of our NIO approach is that (i) the fitted motor strategy, rather than raw data, is used to filter and down-sample human behaviours; (ii) our framework is used to efficiently simulate model behaviour iteratively, until it converges towards the experimental human strategies.

  18. Electric field effect of GaAs monolayer from first principles

    Directory of Open Access Journals (Sweden)

    Jiongyao Wu

    2017-03-01

    Full Text Available Using first-principle calculations, we investigate two-dimensional (2D) honeycomb monolayer structures composed of group III-V binary elements. It is found that a compound such as GaAs should have a buckled structure, which is more stable than the graphene-like flat structure. This results in a polar system with out-of-plane dipoles arising from the non-planar structure. Here, we optimize the GaAs monolayer structure and then calculate the electronic band structure and the change of buckling height under an external electric field within density functional theory, using the generalized gradient approximation method. We find that the band gap changes proportionally with the electric field magnitude. When spin-orbit coupling (SOC) is considered, we reveal fine spin-splitting at different points in the reciprocal space. Furthermore, the valence and conduction band spin-splitting energies due to SOC at the K point of buckled GaAs monolayers are found to be only weakly dependent on the electric field strength. Finally, electric field effects on the spin texture and second harmonic generation are discussed. The present work sheds light on the control of the physical properties of the GaAs monolayer by an applied electric field.

  19. A General Stochastic Maximum Principle for SDEs of Mean-field Type

    International Nuclear Information System (INIS)

    Buckdahn, Rainer; Djehiche, Boualem; Li Juan

    2011-01-01

    We study the optimal control for stochastic differential equations (SDEs) of mean-field type, in which the coefficients depend on the state of the solution process as well as of its expected value. Moreover, the cost functional is also of mean-field type. This makes the control problem time inconsistent in the sense that the Bellman optimality principle does not hold. For a general action space a Peng’s-type stochastic maximum principle (Peng, S.: SIAM J. Control Optim. 2(4), 966–979, 1990) is derived, specifying the necessary conditions for optimality. This maximum principle differs from the classical one in the sense that here the first order adjoint equation turns out to be a linear mean-field backward SDE, while the second order adjoint equation remains the same as in Peng’s stochastic maximum principle.

  20. THE PROCEDURE OF REALIZATION OF THE DIDACTIC PRINCIPLE OF VISUAL METHOD IN AN EDUCATIONAL LABORATORY

    Directory of Open Access Journals (Sweden)

    Anatolii H. Protasov

    2010-08-01

    Full Text Available This paper is devoted to a procedure for realizing the main didactic principle, the use of visual methods, which is an essential factor in students' perception of educational material. The procedure is realized through a series of laboratory works based on the "device-computer-software" principle. Transducers that convert a physical quantity into an electrical signal are used in the laboratory works; the combination of these transducers and a computer forms a device that can measure the physical quantity. The software makes it possible to reconstruct the virtual field distribution of this quantity over an area and to observe its history. MATLAB is used as the software and provides computation of different physical processes. The proposed procedure supports both a direct visual method and an indirect one, which promotes the formation of future specialists' professional competence.

  1. Variational principles for Ginzburg-Landau equation by He's semi-inverse method

    International Nuclear Information System (INIS)

    Liu, W.Y.; Yu, Y.J.; Chen, L.D.

    2007-01-01

    Via the semi-inverse method of establishing variational principles proposed by He, a generalized variational principle is established for the Ginzburg-Landau equation. The present theory provides a quite straightforward tool in the search for various variational principles for physical problems. This paper aims at providing a more complete theoretical basis for applications using the finite element and other direct variational methods.

  2. Application of the Method Risk Matrix to Radiotherapy. Main Principles

    International Nuclear Information System (INIS)

    2012-08-01

    The published fundamental safety principles and the basic international safety standards for ionizing radiation contain requirements for the protection of patients undergoing medical exposure. In accordance with these requirements, and fulfilling its responsibility to provide for their application, the IAEA has been working intensively on the prevention of accidental exposures in radiotherapy. This work has resulted in a series of technical reports on the lessons learned from the investigation of very serious events, as well as in teaching materials used in regional courses and accessible on the patient protection website. The lessons learned are necessary but not sufficient, since information about new types of accidental exposure keeps arriving and not all events may have been published. A more proactive approach is needed, systematic, comprehensive and structured, to find out in advance what other errors may happen and to prevent or detect them early. Among these approaches is the 'risk matrix' method, which, because of its relative simplicity, can be applied to any radiotherapy service.
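
    A generic, hypothetical illustration of how a risk matrix combines scores into a risk level (the scales and thresholds below are invented for illustration and are not the IAEA's):

      # hypothetical 4-level scales; an event is scored and mapped to a risk level
      FREQUENCY = {"very low": 1, "low": 2, "medium": 3, "high": 4}
      SEVERITY = {"minor": 1, "moderate": 2, "severe": 3, "catastrophic": 4}

      def risk_level(frequency, severity):
          score = FREQUENCY[frequency] * SEVERITY[severity]
          if score >= 12:
              return "very high - additional barriers required"
          if score >= 6:
              return "high - reinforce existing controls"
          if score >= 3:
              return "medium - monitor"
          return "low - acceptable"

      print(risk_level("low", "catastrophic"))  # -> "high - reinforce existing controls"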

  3. Efficient Training Methods for Conditional Random Fields

    National Research Council Canada - National Science Library

    Sutton, Charles A

    2008-01-01

    .... In this thesis, I investigate efficient training methods for conditional random fields with complex graphical structure, focusing on local methods which avoid propagating information globally along the graph...

  4. Apparatuses and methods for generating electric fields

    Science.gov (United States)

    Scott, Jill R; McJunkin, Timothy R; Tremblay, Paul L

    2013-08-06

    Apparatuses and methods relating to generating an electric field are disclosed. An electric field generator may include a semiconductive material configured in a physical shape substantially different from a shape of an electric field to be generated thereby. The electric field is generated when a voltage drop exists across the semiconductive material. A method for generating an electric field may include applying a voltage to a shaped semiconductive material to generate a complex, substantially nonlinear electric field. The shape of the complex, substantially nonlinear electric field may be configured for directing charged particles to a desired location. Other apparatuses and methods are disclosed.

  5. Hierarchical Coupling of First-Principles Molecular Dynamics with Advanced Sampling Methods.

    Science.gov (United States)

    Sevgen, Emre; Giberti, Federico; Sidky, Hythem; Whitmer, Jonathan K; Galli, Giulia; Gygi, Francois; de Pablo, Juan J

    2018-05-14

    We present a seamless coupling of a suite of codes designed to perform advanced sampling simulations, with a first-principles molecular dynamics (MD) engine. As an illustrative example, we discuss results for the free energy and potential surfaces of the alanine dipeptide obtained using both local and hybrid density functionals (DFT), and we compare them with those of a widely used classical force field, Amber99sb. In our calculations, the efficiency of first-principles MD using hybrid functionals is augmented by hierarchical sampling, where hybrid free energy calculations are initiated using estimates obtained with local functionals. We find that the free energy surfaces obtained from classical and first-principles calculations differ. Compared to DFT results, the classical force field overestimates the internal energy contribution of high free energy states, and it underestimates the entropic contribution along the entire free energy profile. Using the string method, we illustrate how these differences lead to different transition pathways connecting the metastable minima of the alanine dipeptide. In larger peptides, those differences would lead to qualitatively different results for the equilibrium structure and conformation of these molecules.

  6. Principle features of metal magnetic memory method and inspection tools as compared to known magnetic NDT methods

    International Nuclear Information System (INIS)

    Dubov, A.

    2006-01-01

    The principal features of the metal magnetic memory (MMM) method, as compared to known magnetic NDT methods, are considered. A basic feature of the MMM method is that it uses the product's own self-magnetic leakage field (SMLF), which arises in ferromagnetic and paramagnetic products at accumulations of high-density dislocations. The magnetodislocation hysteresis underlying the metal magnetic memory effect occurs both during manufacture, when internal stresses form, and during operation under working loads. An information source of this kind, a self-magnetic field, cannot be obtained under any conditions by artificial magnetization of working structures. Such information is formed, and can be obtained, only in a small external field such as the Earth's magnetic field, in loaded structures where the deformation energy considerably exceeds the energy of the external magnetic field. The features and uniqueness of the magnetometric instruments, which have no analogues worldwide, are also considered. The possibilities of the MMM method for solving practical NDT problems include: 100% in-line quality control of machine-building products and of the heterogeneity of the metal structure; express quality control of welded joints within the combined system of factors 'structural-mechanical heterogeneity - weld defects - structural and technological stress concentrators'; and early diagnostics of metal fatigue damage for the estimation and forecasting of equipment lifetime. (author)

  7. First-principles calculation of transport property in nano-devices under an external magnetic field

    International Nuclear Information System (INIS)

    Chen Jingzhe; Zhang Jin; Han Rushan

    2008-01-01

    The mesoscopic quantum interference phenomenon (QIP) can be observed as an oscillation of the conductance of nano-devices when the external magnetic field changes. Excluding the influence of impurities or defects, the specific QIP is determined by the sample geometry. We have developed an improved first-principles method based on the matrix Green's function and density functional theory to simulate the transport behaviour of such systems under a magnetic field. We have studied two kinds of QIP: universal conductance fluctuation (UCF) and the Aharonov–Bohm effect (A–B effect). We find that the amplitude of the UCF is much smaller than the previous theoretical prediction. We discuss the origin of this difference and conclude that, owing to the failure of the ergodic hypothesis, ensemble statistics are not applicable and the conductance fluctuation is determined by the flux-dependent density of states (DOS). We have also studied the relation between the UCF and the structure of the sample. For a specific structure, an atomic ring, the A–B effect is observed and the origin of the oscillation is also discussed.
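
    For context (a textbook relation rather than a result of this paper), the Aharonov-Bohm phase picked up around a loop enclosing magnetic flux Φ is

      \Delta\varphi = \frac{e}{\hbar}\oint \mathbf{A}\cdot d\mathbf{l} = 2\pi\,\frac{\Phi}{\Phi_0},
      \qquad \Phi_0 = \frac{h}{e}

    so the conductance of a ring-like structure oscillates with a period of one flux quantum through the ring.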

  8. An Engineering Method of Civil Jet Requirements Validation Based on Requirements Project Principle

    Science.gov (United States)

    Wang, Yue; Gao, Dan; Mao, Xuming

    2018-03-01

    A method of requirements validation is developed and defined to meet the needs of civil jet requirements validation in product development. Based on the requirements project principle, this method will not affect the conventional design elements and can effectively connect the requirements with the design. It realizes the modern civil jet development concept that "requirement is the origin, design is the basis". So far, the method has been successfully applied in civil jet aircraft development in China. Taking takeoff field length as an example, the validation process and the validation method for the requirements are introduced in detail in the study, in the hope of providing experience for other civil jet product designs.

  9. Principles and foundation: national standards on quantities and units in nuclear science field

    International Nuclear Information System (INIS)

    Chen Lishu

    1993-11-01

    The main contents of the National Standards Quantities and units of atomic and nuclear physics (GB 3102.9) and Quantities and units of nuclear reactions and ionizing radiations (GB 3102.10) are presented, in which the most important quantities in the nuclear scientific field are given together with their symbols and definitions. The principles and foundation underlying the setting of these National Standards, including the International System of Units (SI) and its application to the nuclear scientific field, are explained.

  10. Introduction to First-Principles Electronic Structure Methods: Application to Actinide Materials

    International Nuclear Information System (INIS)

    Klepeis, J E

    2005-01-01

    The purpose of this paper is to provide an introduction for non-experts to first-principles electronic structure methods that are widely used in the field of condensed-matter physics, including applications to actinide materials. The methods I describe are based on density functional theory (DFT) within the local density approximation (LDA) and the generalized gradient approximation (GGA). In addition to explaining the meaning of this terminology I also describe the underlying theory itself in some detail in order to enable a better understanding of the relative strengths and weaknesses of the methods. I briefly mention some particular numerical implementations of DFT, including the linear muffin-tin orbital (LMTO), linear augmented plane wave (LAPW), and pseudopotential methods, as well as general methodologies that go beyond DFT and specifically address some of the weaknesses of the theory. The last third of the paper is devoted to a few selected applications that illustrate the ideas discussed in the first two-thirds. In particular, I conclude by addressing the current controversy regarding magnetic DFT calculations for actinide materials. Throughout this paper particular emphasis is placed on providing the appropriate background to enable the non-expert to gain a better appreciation of the application of first-principles electronic structure methods to the study of actinide and other materials
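
    For readers outside the field, the working equations behind such DFT calculations are the Kohn-Sham equations (standard form, given here only for orientation, not quoted from the paper):

      \left[ -\frac{\hbar^2}{2m}\nabla^2 + v_{\mathrm{ext}}(\mathbf{r}) + v_{\mathrm{H}}[n](\mathbf{r}) + v_{\mathrm{xc}}[n](\mathbf{r}) \right] \psi_i(\mathbf{r}) = \varepsilon_i\,\psi_i(\mathbf{r}),
      \qquad
      n(\mathbf{r}) = \sum_{i \in \mathrm{occ}} |\psi_i(\mathbf{r})|^2

    where v_H is the Hartree potential of the density n and v_xc is the exchange-correlation potential, approximated by the LDA or GGA; the LMTO, LAPW and pseudopotential schemes mentioned above are different numerical ways of solving these equations.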

  11. Methods of thermal field theory

    Energy Technology Data Exchange (ETDEWEB)

    Mallik, S [Saha Institute of Nuclear Physics, Calcutta (India)

    1998-11-01

    We introduce the basic ideas of thermal field theory and review its path integral formulation. We then discuss the problems of QCD theory at high and at low temperatures. At high temperature the naive perturbation expansion breaks down and is cured by resummation. We illustrate this improved perturbation expansion with the g²φ⁴ theory and then sketch its application to find the gluon damping rate in QCD theory. At low temperature the hadronic phase is described systematically by the chiral perturbation theory. The results obtained from this theory for the quark and the gluon condensates are discussed. (author) 22 refs., 6 figs.

  12. METHODOLOGICAL PRINCIPLES AND METHODS OF TERMS OF TRADE STATISTICAL EVALUATION

    Directory of Open Access Journals (Sweden)

    N. Kovtun

    2014-09-01

    Full Text Available The paper studies the methodological principles and guidance for the statistical evaluation of terms of trade under the United Nations classification model, the Harmonized Commodity Description and Coding System (HS). The proposed three-stage model of index analysis and estimation of terms of trade is implemented in practice for Ukraine's commodity trade for the period 2011-2012.

  13. Recent Progress in First-Principles Methods for Computing the Electronic Structure of Correlated Materials

    Directory of Open Access Journals (Sweden)

    Fredrik Nilsson

    2018-03-01

    Full Text Available Substantial progress has been achieved in the last couple of decades in computing the electronic structure of correlated materials from first principles. This progress has been driven by parallel development in theory and numerical algorithms. Theoretical development in combining ab initio approaches and many-body methods is particularly promising. A crucial role is also played by a systematic method for deriving a low-energy model, which bridges the gap between real and model systems. In this article, an overview is given tracing the development from the LDA+U to the latest progress in combining the GW method and (extended) dynamical mean-field theory (GW+EDMFT). The emphasis is on conceptual and theoretical aspects rather than technical ones.

  14. Classical field theory in the space of reference frames. [Space-time manifold, action principle

    Energy Technology Data Exchange (ETDEWEB)

    Toller, M [Dipartimento di Matematica e Fisica, Libera Universita, Trento (Italy)

    1978-03-11

    The formalism of classical field theory is generalized by replacing the space-time manifold M by the ten-dimensional manifold S of all the local reference frames. The geometry of the manifold S is determined by ten vector fields corresponding to ten operationally defined infinitesimal transformations of the reference frames. The action principle is written in terms of a differential 4-form in the space S (the Lagrangian form). Densities and currents are represented by differential 3-forms in S. The field equations and the connection between symmetries and conservation laws (Noether's theorem) are derived from the action principle. Einstein's theory of gravitation and Maxwell's theory of electromagnetism are reformulated in this language. The general formalism can also be used to formulate theories in which charge, energy and momentum cannot be localized in space-time and even theories in which a space-time manifold cannot be defined exactly in any useful way.

  15. Clinical Supervision in Alcohol and Drug Abuse Counseling: Principles, Models, Methods.

    Science.gov (United States)

    Powell, David J.

    A case is made for professionalism in clinical training as substance abuse counseling becomes a unique field. Part 1, "Principles," includes: (1) "A Historical Review of Supervision"; (2) "A Working Definition of Supervision"; (3) "Leadership Principles for Supervisors" and; (4) "Traits of an Effective Clinical Supervisor." Part 2, "Models,"…

  16. Principles, Methods of Participatory Research: Proposal for Draft Animal Power

    Directory of Open Access Journals (Sweden)

    E. Chia

    2004-03-01

    Full Text Available When researchers question the efficiency of their actions while accompanying stakeholders through change processes, the opportunity arises to reflect on the research methods to be developed when working together with stakeholders: participative research, research-action, research-intervention... The author proposes to present the research-action approach as a new one. While all three phases of research-action are important, the negotiation phase is essential, because it allows contracts to be formalized among partners (the ethical aspect), a common language to be developed, and the joint work between researchers of various specialties and stakeholders to be structured. In the research-action approach, the steering bodies (scientific committees, etc.) play a major role: they guarantee at the same time the solution of problems, production, and the legitimacy of the scientific knowledge produced. In conclusion, the author suggests ways to develop research-action in the field of animal traction in order to design new socio-technical and organizational innovations that will make the use of this technique easier.

  17. First-principles simulations of Graphene/Transition-metal-Dichalcogenides/Graphene Field-Effect Transistor

    Science.gov (United States)

    Li, Xiangguo; Wang, Yun-Peng; Zhang, X.-G.; Cheng, Hai-Ping

    A prototype field-effect transistor (FET) with fascinating properties can be made by assembling graphene and two-dimensional insulating crystals into three-dimensional stacks with atomic layer precision. Transition metal dichalcogenides (TMDCs) such as WS2 and MoS2 are good candidates for the atomically thin barrier between two layers of graphene in the vertical FET due to their sizable bandgaps. We investigate the electronic properties of the Graphene/TMDCs/Graphene sandwich structure using a first-principles method. We find that the effective tunnel barrier height of the TMDC layers in contact with the graphene electrodes has a layer dependence and can be modulated by a gate voltage. Consequently, a very high ON/OFF ratio can be achieved with an appropriate number of TMDC layers and a suitable range of the gate voltage. The spin-orbit coupling in the TMDC layers is also layer dependent but unaffected by the gate voltage. These properties can be important in future nanoelectronic device designs. DOE/BES-DE-FG02-02ER45995; NERSC.

  18. Introduction to the background field method

    International Nuclear Information System (INIS)

    Abbott, L.F.; Brandeis Univ., Waltham, MA

    1982-01-01

    The background field approach to calculations in gauge field theories is presented. Conventional functional techniques are reviewed and the background field method is introduced. Feynman rules and renormalization are discussed and, as an example, the Yang-Mills β function is computed. (author)

  19. Numerical simulation of turbulent flow and heat transfer in a parallel channel. Verification of the field synergy principle

    International Nuclear Information System (INIS)

    Tian Wenxi; Su, G.H.; Qiu Suizheng; Jia Dounan

    2004-01-01

    The field synergy principle was proposed by Guo (1998) on the basis of 2-D laminar boundary-layer flow and resulted from a second look at the mechanism of convective heat transfer. Numerical verification of this principle's validity for turbulent flow has been carried out by very few researchers, mostly using commercial software such as FLUENT and CFX. In this paper, a numerical simulation of turbulent flow with recirculation was developed using the SIMPLE algorithm with a two-equation k-ε model. The extended computational region method and the wall function method were adopted to treat the whole computational region geometrically. With the inlet Reynolds number kept constant at 10000 and the height of the solid obstacle varied, the simulations showed that the wall heat flux decreased as the angle between the velocity vector and the temperature gradient increased. Thus it is validated that the field synergy principle, originally based on 2-D laminar boundary-layer flow, can also be applied to complex turbulent flow, even with recirculation. (author)
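
    As an illustrative sketch only (this is the standard pointwise definition used in field synergy analyses, not code from the paper), the local synergy angle between the velocity vector and the temperature gradient can be evaluated as follows:

      import numpy as np

      def synergy_angle(u, v, dTdx, dTdy):
          # Local field synergy angle (degrees) between velocity and temperature gradient.
          dot = u * dTdx + v * dTdy
          norm = np.sqrt(u**2 + v**2) * np.sqrt(dTdx**2 + dTdy**2) + 1e-30
          return np.degrees(np.arccos(np.clip(dot / norm, -1.0, 1.0)))

      # smaller angles mean better synergy and, other things being equal, higher wall heat flux
      print(synergy_angle(1.0, 0.0, 0.5, 0.5))  # ~45 degrees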

  20. Field penetration induced charge redistribution effects on the field emission properties of carbon nanotubes - a first-principle study

    International Nuclear Information System (INIS)

    Chen, C.-W.; Lee, M.-H.; Clark, S.J.

    2004-01-01

    The effect of field-penetration-induced charge redistribution on the field emission properties of carbon nanotubes (CNTs) has been studied by first-principle calculations. It is found that the carbon nanotube becomes polarized under an external electric field, leading to a charge redistribution. The resulting band bending induced by field penetration into the nanotube tip surface can further reduce the effective workfunction of the carbon nanotubes. The magnitude of the redistributed charge ΔQ is found to be nearly linear in the applied external field strength. In addition, we found that the capped (9, 0) zigzag nanotube demonstrates better field emission properties than the capped (5, 5) armchair nanotube because the charge redistribution of π electrons along the zigzag-like tube axis is easier than for the armchair-like tube. The density of states (DOS) of the capped region of the nanotube is found to be enhanced, with a value 30% higher than that of the sidewall part for the capped (5, 5) nanotube and 40% for the capped (9, 0) nanotube under an electric field of 0.33 V/Å. Such enhancements of the DOS at the carbon nanotube tip show that electrons near the Fermi level will emit more easily due to the change of the surface band structure resulting from the field penetration in a high field.
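
    For orientation, the practical significance of a lower effective workfunction follows from Fowler-Nordheim-type emission (a commonly quoted proportionality, not an equation taken from this paper):

      J \;\propto\; \frac{F^2}{\phi}\,\exp\!\left( -\,\frac{B\,\phi^{3/2}}{F} \right)

    with J the emitted current density, F the local field, φ the workfunction and B a constant, so a reduction of φ or an enhancement of F raises the current exponentially.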

  1. First-principles method for electron-phonon coupling and electron mobility

    DEFF Research Database (Denmark)

    Gunst, Tue; Markussen, Troels; Stokbro, Kurt

    2016-01-01

    We present density functional theory calculations of the phonon-limited mobility in n-type monolayer graphene, silicene, and MoS2. The material properties, including the electron-phonon interaction, are calculated from first principles. We provide a detailed description of the normalized full-band relaxation time approximation for the linearized Boltzmann transport equation (BTE) that includes inelastic scattering processes. The bulk electron-phonon coupling is evaluated by a supercell method. The method employed is fully numerical and therefore does not require a semianalytic treatment of part of the problem and, importantly, it keeps the anisotropy information stored in the coupling as well as the band structure. In addition, we perform calculations of the low-field mobility and its dependence on carrier density and temperature to obtain a better understanding of transport in graphene, silicene...
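
    As the roughest orienting estimate only (the paper solves the full linearized BTE numerically; this ignores anisotropy and the energy dependence of the scattering), the phonon-limited drift mobility scales as

      \mu \;\simeq\; \frac{e\,\langle\tau\rangle}{m^{*}}

    with ⟨τ⟩ an averaged relaxation time and m* an effective mass, which is why the calculated electron-phonon scattering rates directly control the mobilities reported.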

  2. Physical principles, geometrical aspects, and locality properties of gauge field theories

    International Nuclear Information System (INIS)

    Mack, G.; Hamburg Univ.

    1981-01-01

    Gauge field theories, particularly Yang - Mills theories, are discussed at a classical level from a geometrical point of view. The introductory chapters are concentrated on physical principles and mathematical tools. The main part is devoted to locality problems in gauge field theories. Examples show that locality problems originate from two sources in pure Yang - Mills theories (without matter fields). One is topological and the other is related to the existence of degenerated field configurations of the infinitesimal holonomy groups on some extended region of space or space-time. Nondegenerate field configurations in theories with semisimple gauge groups can be analysed with the help of the concept of a local gauge. Such gauges play a central role in the discussion. (author)

  3. Field differential equations for a potential flow from a Hamilton type variational principle

    International Nuclear Information System (INIS)

    Fierros Palacios, A.

    1992-01-01

    The same theoretical frame that was used to solve the problem of the field equations for a viscous fluid is utilized in this work. The purpose is to obtain the differential field equations for a potential flow from the Lagrangian formalism as in classical field theory. An action functional is introduced as a space-time integral over a region of three-dimensional Euclidean space, of a Lagrangian density as a function of certain field variables. A Hamilton type extremum action principle is postulated with adequate boundary conditions, and a set of differential field equations is derived. A particular Lagrangian density of the T-V type leads to the wave equation for the velocity potential. (Author)

  4. Application of the principle of supramolecular chemistry in the fields of radiochemistry and radiation chemistry

    International Nuclear Information System (INIS)

    Shen Xinghai; Chen Qingde; Gao Hongcheng

    2008-01-01

    Supramolecular chemistry, one of the front fields in chemistry, is defined as 'chemistry beyond the molecule', bearing on the organized entities of higher complexity that result from the association of two or more chemical species held together by intermolecular forces. This article focuses on the application of the principle of supramolecular chemistry in the fields of radiochemistry and radiation chemistry. The following aspects are concerned: (1) the recent progress of supramolecular chemistry; (2) the application of the principle of supramolecular chemistry and the functions of supramolecular system, i.e., recognition, assembly and translocation, in the extraction of nuclides; (3) the application of microemulsion, ionic imprinted polymers, ionic liquids and cloud point extraction in the enrichment of nuclides; (4) the radiation effect of supramolecular systems. (authors)

  5. A Stochastic Maximum Principle for Risk-Sensitive Mean-Field Type Control

    KAUST Repository

    Djehiche, Boualem; Tembine, Hamidou; Tempone, Raul

    2015-01-01

    In this paper we study mean-field type control problems with risk-sensitive performance functionals. We establish a stochastic maximum principle (SMP) for optimal control of stochastic differential equations (SDEs) of mean-field type, in which the drift and the diffusion coefficients as well as the performance functional depend not only on the state and the control but also on the mean of the distribution of the state. Our result extends the risk-sensitive SMP (without mean-field coupling) of Lim and Zhou (2005), derived for feedback (or Markov) type optimal controls, to optimal control problems for non-Markovian dynamics which may be time-inconsistent in the sense that the Bellman optimality principle does not hold. In our approach to the risk-sensitive SMP, the smoothness assumption on the value-function imposed in Lim and Zhou (2005) needs not be satisfied. For a general action space a Peng's type SMP is derived, specifying the necessary conditions for optimality. Two examples are carried out to illustrate the proposed risk-sensitive mean-field type SMP under linear stochastic dynamics with exponential quadratic cost function. Explicit solutions are given for both mean-field free and mean-field models.

  6. A Stochastic Maximum Principle for Risk-Sensitive Mean-Field Type Control

    KAUST Repository

    Djehiche, Boualem

    2015-02-24

    In this paper we study mean-field type control problems with risk-sensitive performance functionals. We establish a stochastic maximum principle (SMP) for optimal control of stochastic differential equations (SDEs) of mean-field type, in which the drift and the diffusion coefficients as well as the performance functional depend not only on the state and the control but also on the mean of the distribution of the state. Our result extends the risk-sensitive SMP (without mean-field coupling) of Lim and Zhou (2005), derived for feedback (or Markov) type optimal controls, to optimal control problems for non-Markovian dynamics which may be time-inconsistent in the sense that the Bellman optimality principle does not hold. In our approach to the risk-sensitive SMP, the smoothness assumption on the value function imposed in Lim and Zhou (2005) need not be satisfied. For a general action space a Peng-type SMP is derived, specifying the necessary conditions for optimality. Two examples are carried out to illustrate the proposed risk-sensitive mean-field type SMP under linear stochastic dynamics with exponential quadratic cost function. Explicit solutions are given for both mean-field free and mean-field models.

  7. A systematic first principle method to study magnetic properties of FeMo, CoMo and NiMo

    International Nuclear Information System (INIS)

    Bhattacharjee, Ashis Kumar; Touheed, Md.; Ahmed, Mesbahuddin; Halder, A.; Mookerjee, A.

    2003-06-01

    We use the first-principles TB-LMTO (Tight-Binding Linear Muffin Tin Orbital) method combined with the ASM (Augmented Space Method) to take care of disorder beyond the mean-field (CPA - Coherent Potential Approximation) approximation. We analyze binary alloys between the magnetic transition metals Fe, Co, Ni and non-magnetic Mo to find out the effect of disorder on the electronic structure and consequently on the magnetic properties of the alloys. (author)

  8. Quantum Field Theoretic Derivation of the Einstein Weak Equivalence Principle Using Emqg Theory

    OpenAIRE

    Ostoma, Tom; Trushyk, Mike

    1999-01-01

    We provide a quantum field theoretic derivation of Einstein's Weak Equivalence Principle of general relativity using a new quantum gravity theory proposed by the authors called Electro-Magnetic Quantum Gravity or EMQG (ref. 1). EMQG is based on a new theory of inertia (ref. 5) proposed by R. Haisch, A. Rueda, and H. Puthoff (which we modified and called Quantum Inertia). Quantum Inertia states that classical Newtonian Inertia is a property of matter due to the strictly local electrical force ...

  9. Variational principles are a powerful tool also for formulating field theories

    OpenAIRE

    Dell'Isola , Francesco; Placidi , Luca

    2012-01-01

    Variational principles and the calculus of variations have always been an important tool for formulating mathematical models of physical phenomena. Variational methods give an efficient and elegant way to formulate and solve mathematical problems that are of interest to scientists and engineers, and they are the main tool for the axiomatization of physical theories.

  10. Implementation of the Principles of Tpm in Field of Maintenance Preparations

    Directory of Open Access Journals (Sweden)

    Vladimíra Schindlerová

    2016-09-01

    Total Productive Maintenance (TPM) is one of the ways to ensure efficient production processes. TPM is primarily associated with the management of maintenance of production equipment. This article deals with the possible implementation of total productive maintenance in another field of maintenance of working means, namely the maintenance of preparations. Experience from practice shows that TPM approaches may be suitable for maintenance management in this field. The article states the conclusions drawn from the implementation of the principles of TPM in a concrete enterprise having about 14,400 preparations at its disposal.

  11. Multigrid Methods for the Computation of Propagators in Gauge Fields

    Science.gov (United States)

    Kalkreuter, Thomas

    Multigrid methods were invented for the solution of discretized partial differential equations in order to overcome the slowness of traditional algorithms by updates on various length scales. In the present work generalizations of multigrid methods for propagators in gauge fields are investigated. Gauge fields are incorporated in algorithms in a covariant way. The kernel C of the restriction operator which averages from one grid to the next coarser grid is defined by projection on the ground-state of a local Hamiltonian. The idea behind this definition is that the appropriate notion of smoothness depends on the dynamics. The ground-state projection choice of C can be used in arbitrary dimension and for arbitrary gauge group. We discuss proper averaging operations for bosons and for staggered fermions. The kernels C can also be used in multigrid Monte Carlo simulations, and for the definition of block spins and blocked gauge fields in Monte Carlo renormalization group studies. Actual numerical computations are performed in four-dimensional SU(2) gauge fields. We prove that our proposals for block spins are “good”, using renormalization group arguments. A central result is that the multigrid method works in arbitrarily disordered gauge fields, in principle. It is proved that computations of propagators in gauge fields without critical slowing down are possible when one uses an ideal interpolation kernel. Unfortunately, the idealized algorithm is not practical, but it was important to answer questions of principle. Practical methods are able to outperform the conjugate gradient algorithm in case of bosons. The case of staggered fermions is harder. Multigrid methods give considerable speed-ups compared to conventional relaxation algorithms, but on lattices up to 18^4 conjugate gradient is superior.

  12. The interpolation method of stochastic functions and the stochastic variational principle

    International Nuclear Information System (INIS)

    Liu Xianbin; Chen Qiu

    1993-01-01

    -order stochastic finite element equations are not very reasonable. On the other hand, the Galerkin method is promising; along with it, the projection principle has been advanced to solve stochastic operator equations. In the Galerkin method, the stochasticity of the structural physical properties and the loads is treated reasonably by projecting the stochastic solution functions into a subspace of the solution function space. However, the construction or selection of this subspace, which is a Hilbert space of stochastic functions, is difficult, and there is no satisfactory rule for measuring how well the subspace approximates the solution function space. In the stochastic finite element method the discretization of stochastic functions in space and time is very important; so far, the discretization schemes comprise the local average theory, the interpolation method and the orthogonal expansion method. Although the local average theory has been successful for stationary random fields, it is not suitable for non-stationary ones. For general stochastic functions, stationary or not, the interpolation method is available. In the present paper the authors show that the error between the true solution function and its approximation (its projection into the subspace) depends continuously on the errors between the stochastic functions and their interpolation functions, which in turn depend continuously on the scales of the discrete elements; hence the interpolation method of stochastic functions is convergent. That is, the approximate solution functions tend to the true solution functions as the scales of the discrete elements become smaller and smaller. Using the interpolation method, a basis of a subspace of the solution function space is constructed in this paper, and by means of combining the projection principle and

  13. Human Systems Interface Design Methods Using Ecological Interface Design Principles

    International Nuclear Information System (INIS)

    Hong, Seung Kweon; Park, Jung Chul; Kim, Sun Su; Sim, Kwang Pyo; Yuk, Seung Yul; Choi, Jae Hyeon; Yoon, Seung Hyun

    2009-12-01

    The results of this study are categorized into two parts. The first part is a set of guidelines for EID designs. The procedure to follow for EID design is composed of six steps: 1) define a target system, 2) build an abstraction hierarchy model, 3) check the link structure among the components included in the layers of the abstraction hierarchy model, 4) transform information requirements into variables, 5) construct the graphs related to each variable, 6) check the graphs against visual display design principles and heuristic rules. The second part is an EID design alternative for a nuclear power plant. The EID for the high-level function represents the energy balance and energy flow in each loop of the nuclear power plant. The EID for the middle-level function represents the performance indicators of each piece of equipment involved in all the processes by which coolant is changed to steam. The EID for the low-level function represents the values measured for each piece of equipment, such as temperature, pressure, water level and so on.

  14. The generally covariant locality principle - a new paradigm for local quantum field theory

    International Nuclear Information System (INIS)

    Brunetti, R.; Fredenhagen, K.; Verch, R.

    2002-05-01

    A new approach to the model-independent description of quantum field theories will be introduced in the present work. The main feature of this new approach is to incorporate in a local sense the principle of general covariance of general relativity, thus giving rise to the concept of a locally covariant quantum field theory. Such locally covariant quantum field theories will be described mathematically in terms of covariant functors between the categories, on one side, of globally hyperbolic spacetimes with isometric embeddings as morphisms and, on the other side, of *-algebras with unital injective *-endomorphisms as morphisms. Moreover, locally covariant quantum fields can be described in this framework as natural transformations between certain functors. The usual Haag-Kastler framework of nets of operator algebras over a fixed spacetime background manifold, together with covariant automorphic actions of the isometry group of the background spacetime, can be regained from this new approach as a special case. Examples of this new approach are also outlined. If a locally covariant quantum field theory obeys the time-slice axiom, one can naturally associate to it certain automorphic actions, called ''relative Cauchy-evolutions'', which describe the dynamical reaction of the quantum field theory to a local change of spacetime background metrics. The functional derivative of a relative Cauchy-evolution with respect to the spacetime metric is found to be a divergence-free quantity which has, as will be demonstrated in an example, the significance of an energy-momentum tensor for the locally covariant quantum field theory. Furthermore, we discuss the functorial properties of state spaces of locally covariant quantum field theories that entail the validity of the principle of local definiteness. (orig.)

  15. Operation safety of control systems. Principles and methods

    International Nuclear Information System (INIS)

    Aubry, J.F.; Chatelet, E.

    2008-01-01

    This article presents the main operation safety methods that can be implemented to design safe control systems, taking into account how the different components behave with each other (binary 'operation/failure' behaviours, inconsistent behaviours and 'hidden' failures, dynamical behaviours and temporal aspects, etc.). To take these different behaviours into account, advanced qualitative and quantitative methods have to be used, which are described in this article: 1 - qualitative methods of analysis: functional analysis, preliminary risk analysis, failure mode and effects analyses; 2 - quantitative study of systems operation safety: binary representation models, state space-based methods, event space-based methods; 3 - application to the design of control systems: safe specifications of a control system, qualitative analysis of operation safety, quantitative analysis, example of application; 4 - conclusion. (J.S.)

  16. Historic Methods for Capturing Magnetic Field Images

    Science.gov (United States)

    Kwan, Alistair

    2016-01-01

    I investigated two late 19th-century methods for capturing magnetic field images from iron filings for historical insight into the pedagogy of hands-on physics education methods, and to flesh out teaching and learning practicalities tacit in the historical record. Both methods offer opportunities for close sensory engagement in data-collection…

  17. Computational methods for reversed-field equilibrium

    International Nuclear Information System (INIS)

    Boyd, J.K.; Auerbach, S.P.; Willmann, P.A.; Berk, H.L.; McNamara, B.

    1980-01-01

    Investigating the temporal evolution of reversed-field equilibrium caused by transport processes requires the solution of the Grad-Shafranov equation and computation of field-line-averaged quantities. The technique for field-line averaging and the computation of the Grad-Shafranov equation are presented. Application of Green's function to specify the Grad-Shafranov equation boundary condition is discussed. Hill's vortex formulas used to verify certain computations are detailed. Use of computer software to implement computational methods is described
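
    For context (a standard form of the equation named above, not reproduced from the paper), the Grad-Shafranov equation for the poloidal flux \psi(R, Z) reads

        \Delta^{*}\psi \equiv R\,\frac{\partial}{\partial R}\!\left(\frac{1}{R}\,\frac{\partial \psi}{\partial R}\right) + \frac{\partial^{2}\psi}{\partial Z^{2}}
        = -\mu_0 R^{2}\,\frac{dp}{d\psi} - F\,\frac{dF}{d\psi},

    where p(\psi) is the pressure profile and F(\psi) = R B_\phi; the transport studies described above then require flux-surface (field-line) averages of quantities defined on the resulting equilibrium.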

  18. Geostatistical methods applied to field model residuals

    DEFF Research Database (Denmark)

    Maule, Fox; Mosegaard, K.; Olsen, Nils

    consists of measurement errors and unmodelled signal), and is typically assumed to be uncorrelated and Gaussian distributed. We have applied geostatistical methods to analyse the residuals of the Oersted(09d/04) field model [http://www.dsri.dk/Oersted/Field_models/IGRF_2005_candidates/], which is based...

  19. Teaching Geographic Field Methods Using Paleoecology

    Science.gov (United States)

    Walsh, Megan K.

    2014-01-01

    Field-based undergraduate geography courses provide numerous pedagogical benefits including an opportunity for students to acquire employable skills in an applied context. This article presents one unique approach to teaching geographic field methods using paleoecological research. The goals of this course are to teach students key geographic…

  20. Scattering theory in quantum mechanics. Physical principles and mathematical methods

    International Nuclear Information System (INIS)

    Amrein, W.O.; Jauch, J.M.; Sinha, K.B.

    1977-01-01

    A contemporary approach is given to the classical topics of physics. The purpose is to explain the basic physical concepts of quantum scattering theory, to develop the necessary mathematical tools for their description, to display the interrelation between the three methods (the Schroedinger equation solutions, stationary scattering theory, and time dependence), and to derive the properties of various quantities of physical interest with mathematically rigorous methods

  1. Electromagnetic field computation by network methods

    CERN Document Server

    Felsen, Leopold B; Russer, Peter

    2009-01-01

    This monograph proposes a systematic and rigorous treatment of electromagnetic field representations in complex structures. The book presents new strong models by combining important computational methods. This is the last book of the late Leopold Felsen.

  2. Tidal fields in general relativity: D'Alembert's principle and the test rigid rod

    International Nuclear Information System (INIS)

    Faulkner, J.; Flannery, B.P.

    1978-01-01

    To the general relativist, tidal forces are a manifestation of the Riemann tensor; the relativist therefore uses the Riemann tensor to calculate the effects of such forces. In contrast, we show that the introduction of gravitational ''probes'' (or ''test rigid rods'') and the adoption of a viewpoint closely allied to d'Alembert's principle give an enormous simplification in cases of interest. No component of the Riemann tensor needs to be calculated as such. In the corotating orbital case (or Roche problem) the calculation of the relevant distortional field becomes trivial. As a by-product of this investigation, there emerges an illuminating strong-field generalization of de Sitter's weak-field precession for slowly spinning gyroscopes

  3. Measures of lifetime detriment from radiation exposures: Principles and methods

    International Nuclear Information System (INIS)

    Darby, Sarah; Fagnani, Francis; Hubert, Philippe; Schneider, Thierry; Thomas, Duncan; Vaeth, Michael; Weiss, Ken

    1990-06-01

    This report presents the work initiated at the 'Workshop on the comparison of methods for deriving life long risk indices for the effects of ionizing radiations', organized by CEPN in Fontenay-aux-Roses (France), on August 7-11, 1989. It has been written in collaboration by participants during the following years

  4. Measures of lifetime detriment from radiation exposures: Principles and methods

    Energy Technology Data Exchange (ETDEWEB)

    Darby, Sarah [University of Oxford (United Kingdom); Fagnani, Francis [INSERM (France); Hubert, Philippe [CEA/IPSN (France); Schneider, Thierry [CEPN, Fontenay-aux-Roses (France); Thomas, Duncan [University of Southern California (United States); Vaeth, Michael [University of Aarhus (Denmark); Weiss, Ken [University of Penn State (United States)

    1990-06-01

    This report presents the work initiated at the 'Workshop on the comparison of methods for deriving life long risk indices for the effects of ionizing radiations', organized by CEPN in Fontenay-aux-Roses (France), on August 7-11, 1989. It has been written in collaboration by participants during the following years.

  5. New Principles of Process Control in Geotechnics by Acoustic Methods

    OpenAIRE

    Leššo, I.; Flegner, P.; Pandula, B.; Horovčák, P.

    2007-01-01

    The contribution describes a new solution for the control of the rotary drilling process, an elementary process in geotechnics. The article presents the first results of research on the use of acoustic methods for process identification in the optimal control of rotary drilling.

  6. New Principles of Process Control in Geotechnics by Acoustic Methods

    Directory of Open Access Journals (Sweden)

    Leššo, I.

    2007-01-01

    The contribution describes a new solution for the control of the rotary drilling process, an elementary process in geotechnics. The article presents the first results of research on the use of acoustic methods for process identification in the optimal control of rotary drilling.

  7. Principles of Vibrational Spectroscopic Methods and their Application to Bioanalysis

    DEFF Research Database (Denmark)

    Moore, David S.; Jepsen, Peter Uhd; Volka, Karel

    2014-01-01

    imaging, fiber optic probes for in vivo and in vitro analysis, and methods to obtain depth profile information. The issue of fluorescence interference will be considered from the perspectives of excitation wavelength selection and data treatment. Methods to optimize signal to noise with minimized...... excitation laser irradiance to avoid sample damage are also discussed. This chapter then reviews applications of Raman spectroscopy to bioanalysis. Areas discussed include pathology, cytopathology, single-cell analysis, in vivo and in vitro tissue characterization, chemical composition of cell components...... as conformation of DNA and proteins), vibrations of inter- and intramolecular hydrogen bonds in solid-state materials, as well as picosecond dynamics in liquid solutions. This chapter reviews modern instrumentation and techniques for THz spectroscopy, with emphasis on applications in bioanalysis....

  8. The Basic Principles and Methods of the System Approach to Compression of Telemetry Data

    Science.gov (United States)

    Levenets, A. V.

    2018-01-01

    The task of compressing measurement data is still urgent for information-measurement systems. This paper offers the basic principles necessary for designing highly effective systems for the compression of telemetry information. The basis of the offered principles is the representation of a telemetry frame as a whole information space in which existing correlations can be found. The methods of data transformation and the compression algorithms realizing the offered principles are described. The compression ratio for the offered compression algorithm is about 1.8 times higher than for a classic algorithm. Thus, the results of the research on these methods and algorithms show their good prospects.
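
    The paper's own transformation and coding scheme is not reproduced here; as a hedged illustration of the general idea (exploiting intra-frame correlation by a reversible transform before a general-purpose coder), the following Python sketch delta-encodes a telemetry channel and then compresses it. All names are illustrative.

        import zlib
        import numpy as np

        def delta_transform(frame: np.ndarray) -> np.ndarray:
            """Replace each sample by its difference from the previous one.

            Correlated telemetry samples produce small deltas, which a
            general-purpose coder compresses better than the raw values.
            """
            out = frame.astype(np.int32).copy()
            out[1:] -= frame[:-1]
            return out

        def compress_frame(frame: np.ndarray) -> bytes:
            return zlib.compress(delta_transform(frame).tobytes())

        # Toy frame: a slowly drifting sensor reading (strong sample-to-sample correlation).
        frame = (1000 + np.cumsum(np.random.randint(-2, 3, size=4096))).astype(np.int32)
        raw = zlib.compress(frame.tobytes())
        print(len(raw), len(compress_frame(frame)))  # the transformed frame usually compresses smaller

    Comparing the two sizes gives a feel for how much a correlation-aware transform buys before entropy coding; the paper's frame-wide treatment generalizes this single-channel example to the whole telemetry frame.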

  9. Method of valuation of water field capacity

    International Nuclear Information System (INIS)

    Dancette, C.; Maertens, C.

    1973-01-01

    A method is described that allows an approximation of field capacity to be obtained from only the determination of water retention at pF = 3. In alluvial soils, the accuracy of this method appears sufficient to satisfy current needs in agricultural problems. [fr]

  10. Compressible cavitation with stochastic field method

    Science.gov (United States)

    Class, Andreas; Dumond, Julien

    2012-11-01

    Non-linear phenomena can often be well described using probability density functions (pdf) and pdf transport models. Traditionally the simulation of pdf transport requires Monte-Carlo codes based on Lagrange particles or prescribed pdf assumptions including binning techniques. Recently, in the field of combustion, a novel formulation called the stochastic field method, solving pdf transport based on Euler fields, has been proposed which eliminates the necessity to mix Euler and Lagrange techniques or to prescribe pdf assumptions. In the present work, part of the PhD project 'Design and analysis of a Passive Outflow Reducer relying on cavitation', a first application of the stochastic field method to multi-phase flow, and in particular to cavitating flow, is presented. The application considered is a nozzle subjected to high-velocity flow such that sheet cavitation is observed near the nozzle surface in the divergent section. It is demonstrated that the stochastic field formulation captures the wide range of pdf shapes present at different locations. The method is compatible with finite-volume codes, and all existing physical models available for Lagrange techniques, presumed pdf or binning methods can be easily extended to the stochastic field formulation.

  11. Principles and methods of managerial cost-accounting systems.

    Science.gov (United States)

    Suver, J D; Cooper, J C

    1988-01-01

    An introduction to cost-accounting systems for pharmacy managers is provided; terms are defined and examples of specific applications are given. Cost-accounting systems determine, record, and report the resources consumed in providing services. An effective cost-accounting system must provide the information needed for both internal and external reports. In accounting terms, cost is the value given up to secure an asset. In determining how volumes of activity affect costs, fixed costs and variable costs are calculated; applications include pricing strategies, cost determinations, and break-even analysis. Also discussed are the concepts of direct and indirect costs, opportunity costs, and incremental and sunk costs. For most pharmacy department services, process costing, an accounting of intermediate outputs and homogeneous units, is used; in determining the full cost of providing a product or service (e.g., patient stay), job-order costing is used. Development of work-performance standards is necessary for monitoring productivity and determining product costs. In allocating pharmacy department costs, a ratio of costs to charges can be used; this method is convenient, but microcosting (specific identification of the costs of products) is more accurate. Pharmacy managers can use cost-accounting systems to evaluate the pharmacy's strategies, policies, and services and to improve budgets and reports.
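
    As a minimal numerical illustration of the break-even analysis mentioned above (a standard textbook relation, not specific to this article), the break-even volume is

        Q_{\text{break-even}} = \frac{\text{fixed costs}}{\text{price per unit} - \text{variable cost per unit}},

    so, for example, fixed costs of $50,000 with a $25 price and a $15 variable cost per unit imply a break-even volume of 5,000 units.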

  12. Myocardial perfusion scintigraphy with thallium-201 - principle and method

    International Nuclear Information System (INIS)

    Dressler, J.

    1981-01-01

    Since from the cardiological and cardio-surgical aspects non-invasive methods practicable in the diagnostics of regional myocardial blood perfusion are claiming priority, myocardial perfusion scintigraphy with thallium-201 has gained more and more importance in the diagnostics of coronary heart diseases. Although radiothallium, because of its nucleo-physical characteristics, is not regarded as an ideal radiopharmaceutical, it is at present, because of its potassium-analogue biokinetics, the best radiopharmaceutical to represent the regional coronary perfusion distribution, the vitality and the configuration of the heart muscle non-invasively. With careful clinical indication and under consideration of the physico-technical limitations, the informative value provided by serial scintigraphy with thallium-201 is greater than that provided by the exercise ECG. Various possibilities for solving the problem of quantitative analysis of the myocardial scintigrams have been given. Up to the present day a standardised evaluation procedure corresponding to that of the visual scintigram interpretation has not yet found general acceptance. (orig.) [de]

  13. North Sea oil taxation: principles, methods and evidence

    International Nuclear Information System (INIS)

    Rutledge, I.; Wright, P.

    1998-01-01

    The analysis of fiscal systems in the world oil and gas industry has become rather narrowly abstract and technical over the past decade. Wider issues of political economy have often been ignored while attention has focused on forecasting the effects of fiscal systems on ex ante profitability and investment decisions using increasingly sophisticated computer models. At a time when the UK government has been considering reform of the UK oil and gas fiscal system, it is appropriate to remind ourselves that there is a rich literature on the subject stretching back to the nineteen sixties and beyond, and that many of the insights of these earlier writers are well worth our close consideration. At the same time some more recent contributors have injected new elements into the debate which also depart from what we would see as 'neo-classical' orthodoxy on this subject. Their work is also briefly reviewed in this paper. The related questions of methods and evidence have also, in our view, become focused in a rather narrow direction, with little consideration being given to either ex post historical data or the considerable body of empirical material to be found in oil and gas companies. This paper therefore also seeks to widen our horizons about the different sorts of 'raw material' which might be drawn upon in determining the right direction for current UK fiscal policy, or indeed oil and gas fiscal policy in general. (author)

  14. On Corestriction Principle in non-abelian Galois cohomology over local and global fields. II: Characteristic p > 0

    International Nuclear Information System (INIS)

    Nguyen Quoc Thang

    2004-08-01

    We show the validity of the Corestriction Principle for non-abelian cohomology of connected reductive groups over local and global fields of characteristic p > 0, by extending some results of Kneser and Douai. (author)

  15. Nonlinear Hebbian Learning as a Unifying Principle in Receptive Field Formation.

    Science.gov (United States)

    Brito, Carlos S N; Gerstner, Wulfram

    2016-09-01

    The development of sensory receptive fields has been modeled in the past by a variety of models including normative models such as sparse coding or independent component analysis and bottom-up models such as spike-timing dependent plasticity or the Bienenstock-Cooper-Munro model of synaptic plasticity. Here we show that the above variety of approaches can all be unified into a single common principle, namely nonlinear Hebbian learning. When nonlinear Hebbian learning is applied to natural images, receptive field shapes were strongly constrained by the input statistics and preprocessing, but exhibited only modest variation across different choices of nonlinearities in neuron models or synaptic plasticity rules. Neither overcompleteness nor sparse network activity are necessary for the development of localized receptive fields. The analysis of alternative sensory modalities such as auditory models or V2 development leads to the same conclusions. In all examples, receptive fields can be predicted a priori by reformulating an abstract model as nonlinear Hebbian learning. Thus nonlinear Hebbian learning and natural statistics can account for many aspects of receptive field formation across models and sensory modalities.
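
    As a hedged, minimal sketch of the rule family named above (nonlinear Hebbian learning) rather than the authors' simulations, the Python code below applies a pointwise nonlinearity to the neuron output before the Hebbian weight change; the nonlinearity, learning rate and normalization are illustrative choices.

        import numpy as np

        def nonlinear_hebbian(X, eta=0.01, epochs=20, f=lambda y: y**3):
            """Minimal nonlinear Hebbian rule: dw ~ f(w.x) x, with a norm constraint.

            X: (n_samples, n_features) zero-mean (ideally whitened) inputs.
            f: pointwise nonlinearity applied to the neuron output.
            """
            rng = np.random.default_rng(0)
            w = rng.standard_normal(X.shape[1])
            w /= np.linalg.norm(w)
            for _ in range(epochs):
                for x in X:
                    y = w @ x                  # neuron output
                    w += eta * f(y) * x        # Hebbian update with a nonlinearity
                    w /= np.linalg.norm(w)     # keep the weight vector bounded
            return w

        # Toy data: heavy-tailed iid inputs; w tends toward a coordinate axis,
        # i.e. one of the most non-Gaussian directions of the input.
        X = np.random.default_rng(1).laplace(size=(2000, 16))
        w = nonlinear_hebbian(X)

    Run on whitened natural-image patches instead of the toy data, such a rule tends to converge to localized, oriented filters, which is the phenomenon the abstract describes.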

  16. Nonlinear Hebbian Learning as a Unifying Principle in Receptive Field Formation.

    Directory of Open Access Journals (Sweden)

    Carlos S N Brito

    2016-09-01

    The development of sensory receptive fields has been modeled in the past by a variety of models including normative models such as sparse coding or independent component analysis and bottom-up models such as spike-timing dependent plasticity or the Bienenstock-Cooper-Munro model of synaptic plasticity. Here we show that the above variety of approaches can all be unified into a single common principle, namely nonlinear Hebbian learning. When nonlinear Hebbian learning is applied to natural images, receptive field shapes were strongly constrained by the input statistics and preprocessing, but exhibited only modest variation across different choices of nonlinearities in neuron models or synaptic plasticity rules. Neither overcompleteness nor sparse network activity are necessary for the development of localized receptive fields. The analysis of alternative sensory modalities such as auditory models or V2 development leads to the same conclusions. In all examples, receptive fields can be predicted a priori by reformulating an abstract model as nonlinear Hebbian learning. Thus nonlinear Hebbian learning and natural statistics can account for many aspects of receptive field formation across models and sensory modalities.

  17. Perspective: Ab initio force field methods derived from quantum mechanics

    Science.gov (United States)

    Xu, Peng; Guidez, Emilie B.; Bertoni, Colleen; Gordon, Mark S.

    2018-03-01

    It is often desirable to accurately and efficiently model the behavior of large molecular systems in the condensed phase (thousands to tens of thousands of atoms) over long time scales (from nanoseconds to milliseconds). In these cases, ab initio methods are difficult due to the increasing computational cost with the number of electrons. A more computationally attractive alternative is to perform the simulations at the atomic level using a parameterized function to model the electronic energy. Many empirical force fields have been developed for this purpose. However, the functions that are used to model interatomic and intermolecular interactions contain many fitted parameters obtained from selected model systems, and such classical force fields cannot properly simulate important electronic effects. Furthermore, while such force fields are computationally affordable, they are not reliable when applied to systems that differ significantly from those used in their parameterization. They also cannot provide the information necessary to analyze the interactions that occur in the system, making the systematic improvement of the functional forms that are used difficult. Ab initio force field methods aim to combine the merits of both types of methods. The ideal ab initio force fields are built on first principles and require no fitted parameters. Ab initio force field methods surveyed in this perspective are based on fragmentation approaches and intermolecular perturbation theory. This perspective summarizes their theoretical foundation, key components in their formulation, and discusses key aspects of these methods such as accuracy and formal computational cost. The ab initio force fields considered here were developed for different targets, and this perspective also aims to provide a balanced presentation of their strengths and shortcomings. Finally, this perspective suggests some future directions for this actively developing area.

  18. Second-principles method for materials simulations including electron and lattice degrees of freedom

    Science.gov (United States)

    García-Fernández, Pablo; Wojdeł, Jacek C.; Íñiguez, Jorge; Junquera, Javier

    2016-05-01

    We present a first-principles-based (second-principles) scheme that permits large-scale materials simulations including both atomic and electronic degrees of freedom on the same footing. The method is based on a predictive quantum-mechanical theory—e.g., density functional theory—and its accuracy can be systematically improved at a very modest computational cost. Our approach is based on dividing the electron density of the system into a reference part—typically corresponding to the system's neutral, geometry-dependent ground state—and a deformation part—defined as the difference between the actual and reference densities. We then take advantage of the fact that the bulk part of the system's energy depends on the reference density alone; this part can be efficiently and accurately described by a force field, thus avoiding explicit consideration of the electrons. Then, the effects associated to the difference density can be treated perturbatively with good precision by working in a suitably chosen Wannier function basis. Further, the electronic model can be restricted to the bands of interest. All these features combined yield a very flexible and computationally very efficient scheme. Here we present the basic formulation of this approach, as well as a practical strategy to compute model parameters for realistic materials. We illustrate the accuracy and scope of the proposed method with two case studies, namely, the relative stability of various spin arrangements in NiO (featuring complex magnetic interactions in a strongly-correlated oxide) and the formation of a two-dimensional electron gas at the interface between band insulators LaAlO3 and SrTiO3 (featuring subtle electron-lattice couplings and screening effects). We conclude by discussing ways to overcome the limitations of the present approach (most notably, the assumption of a fixed bonding topology), as well as its many envisioned possibilities and future extensions.
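
    In compact form (illustrative notation, restating the division described above rather than the paper's full formalism), the electron density and energy are split as

        n(\mathbf{r}) = n_0(\mathbf{r}) + \delta n(\mathbf{r}),
        \qquad
        E \simeq E^{(0)}[n_0] + \delta E[n_0, \delta n],

    where the reference part E^{(0)} is captured by a force field that depends only on the atomic geometry, and the deformation part \delta E is treated perturbatively in a Wannier-function basis restricted to the bands of interest.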

  19. Principle and methods for measurement of snow water equivalent by detection of natural gamma radiation

    Energy Technology Data Exchange (ETDEWEB)

    Endrestoel, G O [Institutt for Atomenergi, Kjeller (Norway)

    1979-01-01

    The underlying principles for snow cover determination by use of terrestrial gamma radiation are presented. Several of the methods that have been proposed to exploit the effect are discussed, and some of the more important error sources for the different methods are listed. In conclusion an estimate of the precision that can be obtained by these methods is given.

  20. Principle and methods for measurement of snow water equivalent by detection of natural gamma radiation

    Energy Technology Data Exchange (ETDEWEB)

    Endrestol, G O

    1979-01-01

    The underlying principles for snow cover determination by use of terrestrial gamma radiation are presented. Several of the methods that have been proposed to exploit the effect are discussed, and some of the more important error sources for the different methods are listed. In conclusion estimates of the precision that can be obtained by these methods are given.

  1. Principles of nuclear magnetic resonance imaging using an inhomogeneous polarizing field

    International Nuclear Information System (INIS)

    Briguet, A.; Chaillout, J.; Goldman, M.

    1985-01-01

    In this paper it is indicated how to reconstruct nuclear magnetic resonance images acquired in an inhomogeneous static magnetic field without prior knowledge of its spatial distribution. The method also provides a map of the static magnetic field throughout the sample volume; furthermore, it allows the use of non-uniform but spatially controlled encoding gradients. [fr]

  2. Hybrid variational principles and synthesis method for finite element neutron transport calculations

    International Nuclear Information System (INIS)

    Ackroyd, R.T.; Nanneh, M.M.

    1990-01-01

    A family of hybrid variational principles is derived using a generalised least squares method. Neutron conservation is automatically satisfied for the hybrid principles employing two trial functions. No interface or reflection conditions need to be imposed on the independent even-parity trial function. For some hybrid principles a single trial function can be employed by relating one parity trial function to the other, using one of the parity transport equations in relaxed form. For other hybrid principles the trial functions can be employed sequentially. Synthesis of transport solutions, starting with the diffusion theory approximation, has been used as a way of reducing the scale of the computation that arises with established finite element methods for neutron transport. (author)

  3. Design principles for high–pressure force fields: Aqueous TMAO solutions from ambient to kilobar pressures

    Energy Technology Data Exchange (ETDEWEB)

    Hölzl, Christoph; Horinek, Dominik, E-mail: dominik.horinek@ur.de [Institut für Physikalische und Theoretische Chemie, Universität Regensburg, 93040 Regensburg (Germany); Kibies, Patrick; Frach, Roland; Kast, Stefan M., E-mail: stefan.kast@tu-dortmund.de [Physikalische Chemie III, Technische Universität Dortmund, 44227 Dortmund (Germany); Imoto, Sho, E-mail: sho.imoto@theochem.rub.de; Marx, Dominik [Lehrstuhl für Theoretische Chemie, Ruhr-Universität Bochum, 44780 Bochum (Germany); Suladze, Saba; Winter, Roland [Physikalische Chemie I, Technische Universität Dortmund, 44227 Dortmund (Germany)

    2016-04-14

    Accurate force fields are one of the major pillars on which successful molecular dynamics simulations of complex biomolecular processes rest. They have been optimized for ambient conditions, whereas high-pressure simulations become increasingly important in pressure perturbation studies, using pressure as an independent thermodynamic variable. Here, we explore the design of non-polarizable force fields tailored to work well in the realm of kilobar pressures – while avoiding complete reparameterization. Our key is to first compute the pressure-induced electronic and structural response of a solute by combining an integral equation approach to include pressure effects on solvent structure with a quantum-chemical treatment of the solute within the embedded cluster reference interaction site model (EC-RISM) framework. Next, the solute’s response to compression is taken into account by introducing pressure-dependence into selected parameters of a well-established force field. In our proof-of-principle study, the full machinery is applied to N,N,N-trimethylamine-N-oxide (TMAO) in water being a potent osmolyte that counteracts pressure denaturation. EC-RISM theory is shown to describe well the charge redistribution upon compression of TMAO(aq) to 10 kbar, which is then embodied in force field molecular dynamics by pressure-dependent partial charges. The performance of the high pressure force field is assessed by comparing to experimental and ab initio molecular dynamics data. Beyond its broad usefulness for designing non-polarizable force fields for extreme thermodynamic conditions, a good description of the pressure-response of solutions is highly recommended when constructing and validating polarizable force fields.

  4. Design principles for high-pressure force fields: Aqueous TMAO solutions from ambient to kilobar pressures.

    Science.gov (United States)

    Hölzl, Christoph; Kibies, Patrick; Imoto, Sho; Frach, Roland; Suladze, Saba; Winter, Roland; Marx, Dominik; Horinek, Dominik; Kast, Stefan M

    2016-04-14

    Accurate force fields are one of the major pillars on which successful molecular dynamics simulations of complex biomolecular processes rest. They have been optimized for ambient conditions, whereas high-pressure simulations become increasingly important in pressure perturbation studies, using pressure as an independent thermodynamic variable. Here, we explore the design of non-polarizable force fields tailored to work well in the realm of kilobar pressures--while avoiding complete reparameterization. Our key is to first compute the pressure-induced electronic and structural response of a solute by combining an integral equation approach to include pressure effects on solvent structure with a quantum-chemical treatment of the solute within the embedded cluster reference interaction site model (EC-RISM) framework. Next, the solute's response to compression is taken into account by introducing pressure-dependence into selected parameters of a well-established force field. In our proof-of-principle study, the full machinery is applied to N,N,N-trimethylamine-N-oxide (TMAO) in water being a potent osmolyte that counteracts pressure denaturation. EC-RISM theory is shown to describe well the charge redistribution upon compression of TMAO(aq) to 10 kbar, which is then embodied in force field molecular dynamics by pressure-dependent partial charges. The performance of the high pressure force field is assessed by comparing to experimental and ab initio molecular dynamics data. Beyond its broad usefulness for designing non-polarizable force fields for extreme thermodynamic conditions, a good description of the pressure-response of solutions is highly recommended when constructing and validating polarizable force fields.

  5. Design principles for high–pressure force fields: Aqueous TMAO solutions from ambient to kilobar pressures

    International Nuclear Information System (INIS)

    Hölzl, Christoph; Horinek, Dominik; Kibies, Patrick; Frach, Roland; Kast, Stefan M.; Imoto, Sho; Marx, Dominik; Suladze, Saba; Winter, Roland

    2016-01-01

    Accurate force fields are one of the major pillars on which successful molecular dynamics simulations of complex biomolecular processes rest. They have been optimized for ambient conditions, whereas high-pressure simulations become increasingly important in pressure perturbation studies, using pressure as an independent thermodynamic variable. Here, we explore the design of non-polarizable force fields tailored to work well in the realm of kilobar pressures – while avoiding complete reparameterization. Our key is to first compute the pressure-induced electronic and structural response of a solute by combining an integral equation approach to include pressure effects on solvent structure with a quantum-chemical treatment of the solute within the embedded cluster reference interaction site model (EC-RISM) framework. Next, the solute’s response to compression is taken into account by introducing pressure-dependence into selected parameters of a well-established force field. In our proof-of-principle study, the full machinery is applied to N,N,N-trimethylamine-N-oxide (TMAO) in water being a potent osmolyte that counteracts pressure denaturation. EC-RISM theory is shown to describe well the charge redistribution upon compression of TMAO(aq) to 10 kbar, which is then embodied in force field molecular dynamics by pressure-dependent partial charges. The performance of the high pressure force field is assessed by comparing to experimental and ab initio molecular dynamics data. Beyond its broad usefulness for designing non-polarizable force fields for extreme thermodynamic conditions, a good description of the pressure-response of solutions is highly recommended when constructing and validating polarizable force fields.

  6. Numerical simulation of conjugate heat transfer in electronic cooling and analysis based on field synergy principle

    International Nuclear Information System (INIS)

    Cheng, Y.P.; Lee, T.S.; Low, H.T.

    2008-01-01

    In this paper, the conjugate heat transfer in electronic cooling is numerically simulated with the newly proposed algorithm CLEARER on a collocated grid. Because the solid heat source and substrate are isolated from the boundary, special attention is given to the treatment of the velocity and temperature in the solid region in the full-field computation. The influence of openings on the substrate, of the heat source height, and of the heat source distribution along the substrate on the maximum temperature and the overall Nusselt number is investigated. The numerical results show that openings on the substrate, as well as an increased heat source height, can enhance the heat transfer; moreover, by arranging the heat sources coarsely in the front part and densely in the rear part of the substrate, the thermal performance can also be improved. The results are then analyzed from the viewpoint of the field synergy principle, and it is shown that the heat transfer improvement can in all cases be attributed to better synergy between the velocity field and the temperature field, which may offer some guidance in the design of electronic devices.
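
    For reference, the field synergy principle invoked above is commonly stated as follows (a standard formulation, not quoted from this paper): the strength of convective heat transfer scales with the domain integral of the convective term,

        \int_{\Omega} \rho c_p\, \mathbf{U}\cdot\nabla T \, dV
        = \int_{\Omega} \rho c_p\, |\mathbf{U}|\,|\nabla T|\,\cos\beta \, dV ,

    where \beta is the local angle between the velocity vector and the temperature gradient; for given magnitudes of velocity and temperature gradient, a smaller \beta (better synergy) yields a larger Nusselt number.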

  7. A method for characterizing photon radiation fields

    International Nuclear Information System (INIS)

    Whicker, J.J.; Hsu, H.H.; Hsieh, F.H.; Borak, T.B.

    1999-01-01

    Uncertainty in dosimetric and exposure rate measurements can increase in areas where multi-directional and low-energy photons (< 100 keV) exist because of variations in energy and angular measurement response. Also, accurate measurement of external exposures in spatially non-uniform fields may require multiple dosimetry. Therefore, knowledge of the photon fields in the workplace is required for full understanding of the accuracy of dosimeters and instruments, and for determining the need for multiple dosimeters. This project was designed to develop methods to characterize photon radiation fields in the workplace, and to test the methods in a plutonium facility. The photon field at selected work locations was characterized using TLDs and a collimated NaI(Tl) detector from which spatial variations in photon energy distributions were calculated from measured spectra. Laboratory results showed the accuracy and utility of the method. Field measurement results combined with observed work patterns suggested the following: (1) workers are exposed from all directions, but not isotropically, (2) photon energy distributions were directionally dependent, (3) stuffing nearby gloves into the glovebox reduced exposure rates significantly, (4) dosimeter placement on the front of the chest provided for a reasonable estimate of the average dose equivalent to workers' torsos, (5) justifiable conclusions regarding the need for multiple dosimetry can be made using this quantitative method, and (6) measurements of the exposure rates with ionization chambers pointed with open beta windows toward the glovebox provided the highest measured rates, although absolute accuracy of the field measurements still needs to be assessed

  8. Hamiltonian lattice field theory: Computer calculations using variational methods

    International Nuclear Information System (INIS)

    Zako, R.L.

    1991-01-01

    I develop a variational method for systematic numerical computation of physical quantities -- bound state energies and scattering amplitudes -- in quantum field theory. An infinite-volume, continuum theory is approximated by a theory on a finite spatial lattice, which is amenable to numerical computation. I present an algorithm for computing approximate energy eigenvalues and eigenstates in the lattice theory and for bounding the resulting errors. I also show how to select basis states and choose variational parameters in order to minimize errors. The algorithm is based on the Rayleigh-Ritz principle and Kato's generalizations of Temple's formula. The algorithm could be adapted to systems such as atoms and molecules. I show how to compute Green's functions from energy eigenvalues and eigenstates in the lattice theory, and relate these to physical (renormalized) coupling constants, bound state energies and Green's functions. Thus one can compute approximate physical quantities in a lattice theory that approximates a quantum field theory with specified physical coupling constants. I discuss the errors in both approximations. In principle, the errors can be made arbitrarily small by increasing the size of the lattice, decreasing the lattice spacing and computing sufficiently long. Unfortunately, I do not understand the infinite-volume and continuum limits well enough to quantify errors due to the lattice approximation. Thus the method is currently incomplete. I apply the method to real scalar field theories using a Fock basis of free particle states. All needed quantities can be calculated efficiently with this basis. The generalization to more complicated theories is straightforward. I describe a computer implementation of the method and present numerical results for simple quantum mechanical systems
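
    The Rayleigh-Ritz step named above can be illustrated on a trivial quantum-mechanical system (a hedged sketch, not the author's lattice field theory code): diagonalizing the Hamiltonian in a truncated basis yields eigenvalues that are variational upper bounds and improve as the basis grows.

        import numpy as np

        def anharmonic_ground_state(n_basis=40, lam=0.1):
            """Rayleigh-Ritz bound for H = p^2/2 + x^2/2 + lam*x^4.

            Matrix elements are evaluated exactly in a harmonic-oscillator basis
            (x^4 is formed in a slightly enlarged basis before truncation), so the
            lowest eigenvalue is a variational upper bound on the ground-state
            energy that improves monotonically as n_basis grows.
            """
            m = n_basis + 4                    # pad so <i|x^4|j> is exact for i, j < n_basis
            n = np.arange(m)
            a = np.diag(np.sqrt(n[1:]), k=1)   # annihilation operator
            x = (a + a.T) / np.sqrt(2.0)       # position operator
            x4 = np.linalg.matrix_power(x, 4)[:n_basis, :n_basis]
            H = np.diag(n[:n_basis] + 0.5) + lam * x4
            return np.linalg.eigvalsh(H)[0]

        print(anharmonic_ground_state(10), anharmonic_ground_state(40))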

  9. Hamiltonian lattice field theory: Computer calculations using variational methods

    International Nuclear Information System (INIS)

    Zako, R.L.

    1991-01-01

    A variational method is developed for systematic numerical computation of physical quantities-bound state energies and scattering amplitudes-in quantum field theory. An infinite-volume, continuum theory is approximated by a theory on a finite spatial lattice, which is amenable to numerical computation. An algorithm is presented for computing approximate energy eigenvalues and eigenstates in the lattice theory and for bounding the resulting errors. It is shown how to select basis states and choose variational parameters in order to minimize errors. The algorithm is based on the Rayleigh-Ritz principle and Kato's generalizations of Temple's formula. The algorithm could be adapted to systems such as atoms and molecules. It is shown how to compute Green's functions from energy eigenvalues and eigenstates in the lattice theory, and relate these to physical (renormalized) coupling constants, bound state energies and Green's functions. Thus one can compute approximate physical quantities in a lattice theory that approximates a quantum field theory with specified physical coupling constants. The author discusses the errors in both approximations. In principle, the errors can be made arbitrarily small by increasing the size of the lattice, decreasing the lattice spacing and computing sufficiently long. Unfortunately, the author does not understand the infinite-volume and continuum limits well enough to quantify errors due to the lattice approximation. Thus the method is currently incomplete. The method is applied to real scalar field theories using a Fock basis of free particle states. All needed quantities can be calculated efficiently with this basis. The generalization to more complicated theories is straightforward. The author describes a computer implementation of the method and present numerical results for simple quantum mechanical systems

  10. Minimum current principle and variational method in theory of space charge limited flow

    Energy Technology Data Exchange (ETDEWEB)

    Rokhlenko, A. [Department of Mathematics, Rutgers University, Piscataway, New Jersey 08854-8019 (United States)

    2015-10-21

    We work in the spirit of the principle of least action: when a perturbation is applied to a physical system, the system reacts by modifying its state to “agree” with the perturbation through a minimal change of its initial state. In particular, electron field emission should produce the minimum current consistent with the boundary conditions. This current can be found theoretically by solving the corresponding equations using different techniques. We apply here the variational method for the current calculation, which can be quite effective even with a short set of trial functions. Progress towards a better result can be monitored through the total current, which should decrease when we are on the right track. Here we present only an illustration for simple geometries of devices with electron flow. The development of these methods can be useful when the emitter and/or anode shapes make standard approaches difficult to use. Although direct numerical calculations, including the particle-in-cell technique, are very effective, theoretical calculations can provide important insight into the general features of flow formation and can sometimes even be carried out with simpler routines.
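
    As background (a standard result for the planar diode, not derived in the abstract above), the benchmark for space-charge-limited flow against which variational estimates of the minimum current can be checked is the Child-Langmuir law

        J_{\mathrm{CL}} = \frac{4\,\varepsilon_0}{9}\sqrt{\frac{2e}{m_e}}\;\frac{V^{3/2}}{d^{2}},

    where V is the anode-cathode voltage and d the gap spacing; more complicated emitter and anode shapes, for which no such closed form exists, are exactly the cases where the variational approach is intended to help.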

  11. An Aesthetic Interpretation of the Pilates Method: its principles and convergences with somatic education

    Directory of Open Access Journals (Sweden)

    Odilton José

    2014-12-01

    The Pilates method, originally called contrology, has been gaining a significant following in Brazil. This article discusses the method’s principles and convergences with somatic education by analyzing the original works of Joseph Pilates using an aesthetic-philosophical approach. It seems implicit that the Pilates method can be instrumental for the performing arts, and the article accordingly points to some connections in this regard. However, the article also argues that, in the absence of the guiding principles proposed by Pilates, the method ceases to be an art of control and instead is reduced to something not much different from other physical exercises.

  12. A least squares principle unifying finite element, finite difference and nodal methods for diffusion theory

    International Nuclear Information System (INIS)

    Ackroyd, R.T.

    1987-01-01

    A least squares principle is described which uses a penalty function treatment of boundary and interface conditions. Appropriate choices of the trial functions and vectors employed in a dual representation of an approximate solution establish complementary principles for the diffusion equation. A geometrical interpretation of the principles provides weighted residual methods for diffusion theory, thus establishing a unification of least squares, variational and weighted residual methods. The complementary principles are used with either a trial function for the flux or a trial vector for the current to establish, for regular meshes, a connection between finite element, finite difference and nodal methods, which can be exact if the mesh pitches are chosen appropriately. Whereas the coefficients in the usual nodal equations have to be determined iteratively, those derived via the complementary principles are given explicitly in terms of the data. For the further development of the connection between finite element, finite difference and nodal methods, some hybrid variational methods are described which employ both a trial function and a trial vector. (author)

  13. Eigenstates with the auxiliary field method

    Energy Technology Data Exchange (ETDEWEB)

    Semay, Claude [Service de Physique Nucleaire et Subnucleaire, Universite de Mons-UMONS, 20 Place du Parc, 7000 Mons (Belgium); Silvestre-Brac, Bernard, E-mail: claude.semay@umons.ac.b, E-mail: silvestre@lpsc.in2p3.f [LPSC Universite Joseph Fourier, Grenoble 1, CNRS/IN2P3, Institut Polytechnique de Grenoble, Avenue des Martyrs 53, F-38026 Grenoble-Cedex (France)

    2010-07-02

    The auxiliary field method is a powerful technique to obtain approximate closed-form energy formulas for eigenequations in quantum mechanics. Very good results can be obtained for Schroedinger and semirelativistic Hamiltonians with various potentials, even in the case of many-body problems. This method can also provide approximate eigenstates in terms of well-known wavefunctions, for instance harmonic oscillator or hydrogen-like states, but with a characteristic size which depends on quantum numbers. In this paper, we consider two-body Schroedinger equations with linear, logarithmic and exponential potentials and show that analytical approximations of the corresponding eigenstates can be obtained with the auxiliary field method, with very good accuracy in some cases.

  14. Eigenstates with the auxiliary field method

    International Nuclear Information System (INIS)

    Semay, Claude; Silvestre-Brac, Bernard

    2010-01-01

    The auxiliary field method is a powerful technique to obtain approximate closed-form energy formulas for eigenequations in quantum mechanics. Very good results can be obtained for Schroedinger and semirelativistic Hamiltonians with various potentials, even in the case of many-body problems. This method can also provide approximate eigenstates in terms of well-known wavefunctions, for instance harmonic oscillator or hydrogen-like states, but with a characteristic size which depends on quantum numbers. In this paper, we consider two-body Schroedinger equations with linear, logarithmic and exponential potentials and show that analytical approximations of the corresponding eigenstates can be obtained with the auxiliary field method, with very good accuracy in some cases.

  15. Physical Principles of the Method for Determination of Geometrical Characteristics and Particle Recognition in Digital Holography

    Science.gov (United States)

    Dyomin, V. V.; Polovtsev, I. G.; Davydova, A. Yu.

    2018-03-01

    The physical principles of a method for determining the geometrical characteristics of particles and for particle recognition are reported. The method is based on the concepts of digital holography, followed by processing of the particle images reconstructed from the digital hologram using a morphological parameter. An example of the application of this method to fast plankton particle recognition is given.

  16. Electric Field Quantitative Measurement System and Method

    Science.gov (United States)

    Generazio, Edward R. (Inventor)

    2016-01-01

    A method and system are provided for making a quantitative measurement of an electric field. A plurality of antennas separated from one another by known distances are arrayed in a region that extends in at least one dimension. A voltage difference between at least one selected pair of antennas is measured. Each voltage difference is divided by the known distance associated with the selected pair of antennas corresponding thereto to generate a resulting quantity. The plurality of resulting quantities defined over the region quantitatively describe an electric field therein.
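
    The measurement recipe in this abstract reduces to a few lines of arithmetic. The antenna positions and voltage readings below are made-up numbers, not data from the patent; the sketch only shows the divide-by-known-distance step.

        import numpy as np

        positions = np.array([0.00, 0.05, 0.10, 0.15, 0.20])   # antenna positions along one axis (m)
        voltages  = np.array([0.00, 0.62, 1.21, 1.83, 2.41])   # measured potentials (V)

        for i in range(len(positions) - 1):
            distance = positions[i + 1] - positions[i]          # known separation of the pair
            e_field = (voltages[i + 1] - voltages[i]) / distance
            midpoint = 0.5 * (positions[i] + positions[i + 1])
            print(f"E({midpoint:.3f} m) ~ {e_field:.1f} V/m")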

  17. First-Principles Propagation of Geoelectric Fields from Ionosphere to Ground using LANLGeoRad

    Science.gov (United States)

    Jeffery, C. A.; Woodroffe, J. R.; Henderson, M. G.

    2017-12-01

    A notable deficiency in the current SW forecasting chain is the propagation of geoelectric fields from ionosphere to ground using Biot-Savart integrals, which ignore the localized complexity of lithospheric electrical conductivity and the relatively high conductivity of ocean water compared to the lithosphere. Three-dimensional models of Earth conductivity with mesoscale spatial resolution are being developed, but a new approach is needed to incorporate this information into the SW forecast chain. We present initial results from a first-principles geoelectric propagation model called LANLGeoRad, which solves Maxwell's equations on an unstructured geodesic grid. Challenges associated with the disparate response times of millisecond electromagnetic propagation and 10-second geomagnetic fluctuations are highlighted, and a novel rescaling of the ionosphere/ground system is presented that renders this geoelectric system computationally tractable.

  18. Babinet’s principle for scalar complex objects in the far field

    Science.gov (United States)

    Rodriguez-Zurita, G.; Rickenstorff, C.; Pastrana-Sánchez, R.; Vázquez-Castillo, J. F.; Robledo-Sanchez, C.; Meneses-Fabian, C.; Toto-Arellano, N. I.

    2014-10-01

    Babinet’s principle is briefly reviewed, especially regarding the zeroth diffraction order of the far field diffraction pattern associated with a given aperture. The pattern is basically described by the squared modulus of the Fourier transform of its amplitude distribution (scalar case). In this paper, complementary objects are defined with respect to complex values and not only with respect to unity in order to include phase objects and phase modulation. It is shown that the difference in complementary patterns can be sometimes a bright spot at the zero order location as is widely known, but also, it can be a gray spot or even a dark one. Conditions of occurrence for each case are given as well as some numerical and experimental examples.
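
    A hedged numerical sketch of the idea, assuming a circular phase object and an arbitrarily chosen complex value c for the complementarity: the far-field patterns of the object and its complement are computed as the squared modulus of an FFT and the zero-order intensities are compared.

        import numpy as np

        N = 512
        x = np.linspace(-1.0, 1.0, N)
        X, Y = np.meshgrid(x, x)

        support = (X**2 + Y**2 < 0.2**2).astype(complex)
        t = support * np.exp(1j * 0.8)                # phase object on a circular support
        c = np.exp(1j * 0.3)                          # complex value defining complementarity
        t_complement = c - t                          # complementary object in the complex sense

        def far_field(amplitude):
            """Squared modulus of the Fourier transform: the scalar far-field pattern."""
            return np.abs(np.fft.fftshift(np.fft.fft2(amplitude)))**2

        I_obj, I_comp = far_field(t), far_field(t_complement)
        centre = N // 2
        # Depending on c and on the object phase, the zero order of the complement can be
        # brighter than, comparable to, or darker than that of the original object.
        print("zero-order intensity, object:    ", I_obj[centre, centre])
        print("zero-order intensity, complement:", I_comp[centre, centre])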

  19. Babinet’s principle for scalar complex objects in the far field

    International Nuclear Information System (INIS)

    Rodriguez-Zurita, G; Rickenstorff, C; Pastrana-Sánchez, R; Vázquez-Castillo, J F; Robledo-Sanchez, C; Meneses-Fabian, C; Toto-Arellano, N I

    2014-01-01

    Babinet’s principle is briefly reviewed, especially regarding the zeroth diffraction order of the far field diffraction pattern associated with a given aperture. The pattern is basically described by the squared modulus of the Fourier transform of its amplitude distribution (scalar case). In this paper, complementary objects are defined with respect to complex values and not only with respect to unity in order to include phase objects and phase modulation. It is shown that the difference in complementary patterns can be sometimes a bright spot at the zero order location as is widely known, but also, it can be a gray spot or even a dark one. Conditions of occurrence for each case are given as well as some numerical and experimental examples. (paper)

  20. Review of laser-induced fluorescence methods for measuring rf- and microwave electric fields in discharges

    International Nuclear Information System (INIS)

    Gavrilenko, V.; Oks, E.

    1994-01-01

    Development of methods for measuring rf- or μ-wave electric fields E(t) = E_0 cos ωt in discharge plasmas is of great practical importance. First, these are fields used for producing rf- or μ-wave discharges. Second, the fields E(t) may represent electromagnetic waves penetrating into a plasma from the outside. This paper reviews methods for diagnostics of the fields E(t) in low temperature plasmas based on Laser-Induced Fluorescence (LIF). Compared to emission (passive) methods, LIF-methods have a higher sensitivity as well as higher spatial and temporal resolutions. Underlying physical effects may be highlighted by an example of LIF of hydrogen atoms in a plasma. After a presentation of the underlying physical principles, the review focuses on key experiments where these principles were implemented for measurements of rf- and μ-wave electric fields in various discharges

  1. Review: Janice M. Morse & Linda Niehaus (2009). Mixed Method Design: Principles and Procedures

    OpenAIRE

    Öhlen, Joakim

    2010-01-01

    Mixed method design, which relies on a combination of methods, usually quantitative and qualitative, is increasingly used for the investigation of complex phenomena. This review discusses the book, "Mixed Method Design: Principles and Procedures," by Janice M. MORSE and Linda NIEHAUS. A distinctive feature of their approach is the view of a mixed methods design as consisting of a core and a supplemental component. In order to define these components they emphasize the overall conceptual dir...

  2. First-principles calculation of electric field gradients in metals, semiconductors, and insulators

    Energy Technology Data Exchange (ETDEWEB)

    Zwanziger, J.W. [Dalhousie Univ, Dept Chem, Halifax, NS (Canada); Dalhousie Univ, Inst Res Mat, Halifax, NS (Canada); Torrent, M. [CEA Bruyeres-le-Chatel, Dept Phys Theor and Appl, Bruyeres 91 (France)

    2008-07-01

    A scheme for computing electric field gradients within the projector augmented wave (PAW) formalism of density functional theory is presented. On the basis of earlier work (M. Profeta, F. Mauri, C.J. Pickard, J. Am. Chem. Soc. 125, 541, 2003) the present implementation handles metallic cases as well as insulators and semiconductors with equal efficiency. Details of the implementation, as well as applications and the discussion of the limitations of the PAW method for computing electric field gradients are presented. (authors)

  3. On the impossibility of a small violation of the Pauli principle within the local quantum field theory

    International Nuclear Information System (INIS)

    Govorkov, A.B.

    1988-01-01

    It is shown that the local quantum field theory of free fields allows only those generalizations of the conventional quantizations (corresponding to the Fermi and Bose statistics) that correspond to the para-Fermi and para-Bose statistics, and does not permit a "small" violation of the Pauli principle

  4. Spectral methods in quantum field theory

    International Nuclear Information System (INIS)

    Graham, Noah; Quandt, Markus; Weigel, Herbert

    2009-01-01

    This concise text introduces techniques from quantum mechanics, especially scattering theory, to compute the effects of an external background on a quantum field in general, and on the properties of the quantum vacuum in particular. This approach can be successfully used in an increasingly large number of situations, ranging from the study of solitons in field theory and cosmology to the determination of Casimir forces in nano-technology. The method introduced and applied in this book is shown to give an unambiguous connection to perturbation theory, implementing standard renormalization conditions even for non-perturbative backgrounds. It both gives new theoretical insights, for example illuminating longstanding questions regarding Casimir stresses, and also provides an efficient analytic and numerical tool well suited to practical calculations. Last but not least, it elucidates in a concrete context many of the subtleties of quantum field theory, such as divergences, regularization and renormalization, by connecting them to more familiar results in quantum mechanics. While addressed primarily at young researchers entering the field and nonspecialist researchers with backgrounds in theoretical and mathematical physics, introductory chapters on the theoretical aspects of the method make the book self-contained and thus suitable for advanced graduate students. (orig.)

  5. A Stochastic Maximum Principle for a Stochastic Differential Game of a Mean-Field Type

    Energy Technology Data Exchange (ETDEWEB)

    Hosking, John Joseph Absalom, E-mail: j.j.a.hosking@cma.uio.no [University of Oslo, Centre of Mathematics for Applications (CMA) (Norway)

    2012-12-15

    We construct a stochastic maximum principle (SMP) which provides necessary conditions for the existence of Nash equilibria in a certain form of N-agent stochastic differential game (SDG) of a mean-field type. The information structure considered for the SDG is of a possible asymmetric and partial type. To prove our SMP we take an approach based on spike-variations and adjoint representation techniques, analogous to that of S. Peng (SIAM J. Control Optim. 28(4):966-979, 1990) in the optimal stochastic control context. In our proof we apply adjoint representation procedures at three points. The first-order adjoint processes are defined as solutions to certain mean-field backward stochastic differential equations, and second-order adjoint processes of a first type are defined as solutions to certain backward stochastic differential equations. Second-order adjoint processes of a second type are defined as solutions of certain backward stochastic equations of a type that we introduce in this paper, and which we term conditional mean-field backward stochastic differential equations. From the resulting representations, we show that the terms relating to these second-order adjoint processes of the second type are of an order such that they do not appear in our final SMP equations. A comparable situation exists in an article by R. Buckdahn, B. Djehiche, and J. Li (Appl. Math. Optim. 64(2):197-216, 2011) that constructs a SMP for a mean-field type optimal stochastic control problem; however, the approach we take of using these second-order adjoint processes of a second type to deal with the type of terms that we refer to as the second form of quadratic-type terms represents an alternative to a development, to our setting, of the approach used in their article for their analogous type of term.

  6. A Stochastic Maximum Principle for a Stochastic Differential Game of a Mean-Field Type

    International Nuclear Information System (INIS)

    Hosking, John Joseph Absalom

    2012-01-01

    We construct a stochastic maximum principle (SMP) which provides necessary conditions for the existence of Nash equilibria in a certain form of N-agent stochastic differential game (SDG) of a mean-field type. The information structure considered for the SDG is of a possible asymmetric and partial type. To prove our SMP we take an approach based on spike-variations and adjoint representation techniques, analogous to that of S. Peng (SIAM J. Control Optim. 28(4):966–979, 1990) in the optimal stochastic control context. In our proof we apply adjoint representation procedures at three points. The first-order adjoint processes are defined as solutions to certain mean-field backward stochastic differential equations, and second-order adjoint processes of a first type are defined as solutions to certain backward stochastic differential equations. Second-order adjoint processes of a second type are defined as solutions of certain backward stochastic equations of a type that we introduce in this paper, and which we term conditional mean-field backward stochastic differential equations. From the resulting representations, we show that the terms relating to these second-order adjoint processes of the second type are of an order such that they do not appear in our final SMP equations. A comparable situation exists in an article by R. Buckdahn, B. Djehiche, and J. Li (Appl. Math. Optim. 64(2):197–216, 2011) that constructs a SMP for a mean-field type optimal stochastic control problem; however, the approach we take of using these second-order adjoint processes of a second type to deal with the type of terms that we refer to as the second form of quadratic-type terms represents an alternative to a development, to our setting, of the approach used in their article for their analogous type of term.

  7. Renormalization using the background-field method

    International Nuclear Information System (INIS)

    Ichinose, S.; Omote, M.

    1982-01-01

    Renormalization using the background-field method is examined in detail. The subtraction mechanism of subdivergences is described with reference to multi-loop diagrams, and one- and two-loop counter-term formulae are explicitly given. The original one-loop counter-term formula of 't Hooft is thereby improved. The present method of renormalization is far easier to manage than the usual one owing to the fact that only gauge-invariant quantities need to be considered when working in an appropriate gauge. Gravity and Yang-Mills theories are studied as examples. (orig.)

  8. Analytical method for analysis of electromagnetic scattering from inhomogeneous spherical structures using duality principles

    Science.gov (United States)

    Kiani, M.; Abdolali, A.; Safari, M.

    2018-03-01

    In this article, an analytical approach is presented for the analysis of electromagnetic (EM) scattering from radially inhomogeneous spherical structures (RISSs) based on the duality principle. According to the spherical symmetry, similar angular dependencies in all the regions are considered using spherical harmonics. To extract the radial dependency, the system of differential equations of wave propagation toward the inhomogeneity direction is equated with the dual planar ones. A general duality between electromagnetic fields and parameters and scattering parameters of the two structures is introduced. The validity of the proposed approach is verified through a comprehensive example. The presented approach replaces a complicated problem in spherical coordinates with an easy, well-posed, and previously solved problem in planar geometry. This approach is valid for all continuously varying inhomogeneity profiles. One of the major advantages of the proposed method is the capability of studying two general and applicable types of RISSs. As an interesting application, a class of lens antennas based on the physical concept of gradient refractive index materials is introduced. The approach is used to analyze the EM scattering from the structure and to validate the strong performance of the lens.

  9. Who has to pay for measures in the field of water management? A proposal for applying the polluter pays principle.

    Science.gov (United States)

    Grünebaum, Thomas; Schweder, Heinrich; Weyand, Michael

    2009-01-01

    There is no doubt that the implementation of the European Water Framework Directive (WFD) and the pursuit of its goal of good ecological status will give rise to measures in different fields of water management. However, a conclusive and transparent method of financing these measures is still missing. Measures in the water management sector are no mere end in themselves; instead, they serve specific ends directed at human activities or they serve general environmental objectives. Following the WFD's integrative approach of looking at river basins as a whole and its requirement to observe the polluter pays principle, the different groups within a river basin should contribute to the costs according to their cost-bearer roles as polluters, stakeholders with vested interests or beneficiaries, via relevant yardsticks. In order to quantify the financial expenditure of each cost bearer, a special algorithm was developed and tested in the river basin of a small tributary of the Ruhr River. It proved to be generally practicable with regard to its handling and the comprehension of the results. Therefore, the application of a cost-bearer system based on the polluter-pays principle, and thus in line with the WFD's requirements, appears feasible for financing future measures.

  10. Single-field consistency relations of large scale structure part III: test of the equivalence principle

    Energy Technology Data Exchange (ETDEWEB)

    Creminelli, Paolo [Abdus Salam International Centre for Theoretical Physics, Strada Costiera 11, Trieste, 34151 (Italy); Gleyzes, Jérôme; Vernizzi, Filippo [CEA, Institut de Physique Théorique, Gif-sur-Yvette cédex, F-91191 France (France); Hui, Lam [Physics Department and Institute for Strings, Cosmology and Astroparticle Physics, Columbia University, New York, NY, 10027 (United States); Simonović, Marko, E-mail: creminel@ictp.it, E-mail: jerome.gleyzes@cea.fr, E-mail: lhui@astro.columbia.edu, E-mail: msimonov@sissa.it, E-mail: filippo.vernizzi@cea.fr [SISSA, via Bonomea 265, Trieste, 34136 (Italy)

    2014-06-01

    The recently derived consistency relations for Large Scale Structure do not hold if the Equivalence Principle (EP) is violated. We show it explicitly in a toy model with two fluids, one of which is coupled to a fifth force. We explore the constraints that galaxy surveys can set on EP violation by looking at the squeezed limit of the 3-point function involving two populations of objects. We find that one can explore EP violations of order 10^(-3)-10^(-4) on cosmological scales. Chameleon models are already very constrained by the requirement of screening within the Solar System and only a very tiny region of the parameter space can be explored with this method. We show that no violation of the consistency relations is expected in Galileon models.

  11. Dating by fission tracks in archaeology. 1. Principles and experimental methods

    International Nuclear Information System (INIS)

    Poupeau, G.; Zuleta, E.

    1984-01-01

    The principles of the uranium fission-track dating method are briefly presented. The conditions for its application in archaeology are discussed, in particular for volcanic glasses, where the fossil fission tracks are often affected by the onset of fading. (L.C.) [pt

  12. Relaxation methods for gauge field equilibrium equations

    International Nuclear Information System (INIS)

    Adler, S.L.; Piran, T.

    1984-01-01

    This article gives a pedagogical introduction to relaxation methods for the numerical solution of elliptic partial differential equations, with particular emphasis on treating nonlinear problems with delta-function source terms and axial symmetry, which arise in the context of effective Lagrangian approximations to the dynamics of quantized gauge fields. The authors present a detailed theoretical analysis of three models which are used as numerical examples: the classical Abelian Higgs model (illustrating charge screening), the semiclassical leading logarithm model (illustrating flux confinement within a free boundary or "bag"), and the axially symmetric Bogomol'nyi-Prasad-Sommerfield monopoles (illustrating the occurrence of topological quantum numbers in non-Abelian gauge fields). They then proceed to a self-contained introduction to the theory of relaxation methods and allied iterative numerical methods and to the practical aspects of their implementation, with attention to general issues which arise in the three examples. The authors conclude with a brief discussion of details of the numerical solution of the models, presenting sample numerical results
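
    A minimal sketch of the kind of relaxation iteration discussed here, reduced to its simplest setting: Gauss-Seidel sweeps with over-relaxation for a two-dimensional Poisson problem with a point (delta-function) source. The grid size, source strength and relaxation factor are illustrative choices, not values from the article.

        import numpy as np

        # -laplacian(u) = rho on the unit square, u = 0 on the boundary, with a
        # point source at the centre; plain Gauss-Seidel sweeps with over-relaxation.
        n, h, omega = 65, 1.0 / 64, 1.7
        u = np.zeros((n, n))
        rho = np.zeros((n, n))
        rho[n // 2, n // 2] = 1.0 / h**2              # discrete delta-function source

        for sweep in range(300):
            for i in range(1, n - 1):
                for j in range(1, n - 1):
                    gs = 0.25 * (u[i + 1, j] + u[i - 1, j] + u[i, j + 1] + u[i, j - 1]
                                 + h**2 * rho[i, j])
                    u[i, j] += omega * (gs - u[i, j])  # successive over-relaxation update

        print("potential one cell away from the source:", u[n // 2 + 1, n // 2])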

  13. Basing of principles and methods of operation of radiometric control and measurement systems

    International Nuclear Information System (INIS)

    Onishchenko, A.M.

    1995-01-01

    Six basic stages of the optimization of radiometric systems, the methods of defining the preset components of the total error, and the choice of measurement principles and methods are described in succession. The possibility of optimizing several stages simultaneously, with returns to stages already completed, is shown. It is suggested that the methodical, instrumental, random and representativeness components of the total error should be preset as equal, and that the largest of the components should be reduced first. A comparative table of 64 radiometric measurement methods rated by 11 indices of method quality is presented. 2 refs., 1 tab

  14. Principles of the radiosity method versus radiative transfer for canopy reflectance modeling

    Science.gov (United States)

    Gerstl, Siegfried A. W.; Borel, Christoph C.

    1992-01-01

    The radiosity method is introduced to plant canopy reflectance modeling. We review the physics principles of the radiosity method which originates in thermal radiative transfer analyses when hot and cold surfaces are considered within a given enclosure. The radiosity equation, which is an energy balance equation for discrete surfaces, is described and contrasted with the radiative transfer equation, which is a volumetric energy balance equation. Comparing the strengths and weaknesses of the radiosity method and the radiative transfer method, we conclude that both methods are complementary to each other. Results of sample calculations are given for canopy models with up to 20,000 discrete leaves.
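
    The radiosity balance for discrete surfaces can be written as B_i = E_i + rho_i * sum_j F_ij B_j and solved as a small linear system. The sketch below uses made-up form factors, reflectances and emissions for three surfaces, not canopy data, just to show the structure of the computation.

        import numpy as np

        E   = np.array([1.0, 0.0, 0.0])               # emitted / directly illuminated flux
        rho = np.array([0.4, 0.6, 0.5])               # surface reflectances
        F = np.array([[0.0, 0.3, 0.2],                # F[i, j]: fraction of flux leaving
                      [0.3, 0.0, 0.4],                # surface i that reaches surface j
                      [0.2, 0.4, 0.0]])               # (the remainder escapes the canopy)

        # Radiosity balance B = E + diag(rho) @ F @ B, rearranged into a linear system.
        B = np.linalg.solve(np.eye(3) - np.diag(rho) @ F, E)
        print("radiosities of the three surfaces:", B)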

  15. Remote sensing of suspended sediment water research: principles, methods, and progress

    Science.gov (United States)

    Shen, Ping; Zhang, Jing

    2011-12-01

    In this paper, we review the principles, data, methods and steps of suspended sediment research using remote sensing, summarize some representative models and methods, and analyze the deficiencies of existing methods. Drawing on recent progress in remote sensing theory and its application to suspended sediment research, we introduce data processing methods such as atmospheric correction and adjacency effect correction, together with intelligent algorithms such as neural networks, genetic algorithms and support vector machines, into suspended sediment inversion. Combined with other geographic information and based on Bayesian theory, we improve the inversion precision of suspended sediment and aim to provide a reference for related researchers.

  16. Integration of first-principles methods and crystallographic database searches for new ferroelectrics: Strategies and explorations

    International Nuclear Information System (INIS)

    Bennett, Joseph W.; Rabe, Karin M.

    2012-01-01

    In this concept paper, the development of strategies for the integration of first-principles methods with crystallographic database mining for the discovery and design of novel ferroelectric materials is discussed, drawing on the results and experience derived from exploratory investigations on three different systems: (1) the double perovskite Sr(Sb1/2Mn1/2)O3 as a candidate semiconducting ferroelectric; (2) polar derivatives of schafarzikite MSb2O4; and (3) ferroelectric semiconductors with formula M2P2(S,Se)6. A variety of avenues for further research and investigation are suggested, including automated structure type classification, low-symmetry improper ferroelectrics, and high-throughput first-principles searches for additional representatives of structural families with desirable functional properties. - Graphical abstract: Integration of first-principles methods with crystallographic database mining, for the discovery and design of novel ferroelectric materials, could potentially lead to new classes of multifunctional materials. Highlights: ► Integration of first-principles methods and database mining. ► Minor structural families with desirable functional properties. ► Survey of polar entries in the Inorganic Crystal Structural Database.

  17. Review: Janice M. Morse & Linda Niehaus (2009). Mixed method design: principles and procedures

    OpenAIRE

    Öhlen, Joakim

    2010-01-01

    Mixed-method designs, in which quantitative and qualitative methods are used, are enjoying growing popularity for the investigation of complex phenomena. In this context, the present review discusses the book "Mixed Method Design: Principles and Procedures" by Janice M. MORSE and Linda NIEHAUS, who attempt to identify core and supplementary components for such designs. To this end, they distinguish between projects that follow a more deductive or...

  18. Bootstrapping conformal field theories with the extremal functional method.

    Science.gov (United States)

    El-Showk, Sheer; Paulos, Miguel F

    2013-12-13

    The existence of a positive linear functional acting on the space of (differences between) conformal blocks has been shown to rule out regions in the parameter space of conformal field theories (CFTs). We argue that at the boundary of the allowed region the extremal functional contains, in principle, enough information to determine the dimensions and operator product expansion (OPE) coefficients of an infinite number of operators appearing in the correlator under analysis. Based on this idea we develop the extremal functional method (EFM), a numerical procedure for deriving the spectrum and OPE coefficients of CFTs lying on the boundary (of solution space). We test the EFM by using it to rederive the low lying spectrum and OPE coefficients of the two-dimensional Ising model based solely on the dimension of a single scalar quasiprimary--no Virasoro algebra required. Our work serves as a benchmark for applications to more interesting, less known CFTs in the near future.

  19. First-principles study of anharmonic phonon effects in tetrahedral semiconductors via an external electric field

    Energy Technology Data Exchange (ETDEWEB)

    Dabiri, Zohreh, E-mail: z.dabiri@stu.yazd.ac.ir [Physics Department, Yazd University, P.O. Box 89195-741, Yazd (Iran, Islamic Republic of); Kazempour, Ali [Department of Physics, Payame Noor University, P.O. BOX 119395-3697, Tehran (Iran, Islamic Republic of); Nano Structured Coatings Institute of Yazd Payame Noor University, P.O. Code 89431-74559, Yazd (Iran, Islamic Republic of); Sadeghzadeh, Mohammad Ali [Physics Department, Yazd University, P.O. Box 89195-741, Yazd (Iran, Islamic Republic of)

    2016-11-15

    The strength of phonon anharmonicity is investigated in the framework of Density Functional Perturbation Theory via an applied constant electric field. In contrast to routine approaches, we employ the electric field as an effective probe of the quasi-harmonic and anharmonic effects. Two typical tetrahedral semiconductors (diamond and silicon) have been selected to test the efficiency of this approach. In this scheme the applied field is responsible for establishing the perturbation and also for inducing the anharmonicity in the systems. The induced polarization results from the change of the electronic density while the ions remain at their ground-state coordinates or at a specified strain. Employing this method, physical quantities of the semiconductors are calculated in the presence of the electron–phonon interaction directly and of the phonon–phonon interaction indirectly. The present approach, which is in good agreement with previous theoretical and experimental studies, can serve as a benchmark for a simple investigation of anharmonicity and its consequences in materials.

  20. The principles of high voltage electric field and its application in food processing: A review.

    Science.gov (United States)

    Dalvi-Isfahan, Mohsen; Hamdami, Nasser; Le-Bail, Alain; Xanthakis, Epameinondas

    2016-11-01

    Food processing is a major part of the modern global industry and it will certainly remain an important sector of the industry in the future. Several processes for different purposes are involved in food processing, aimed at the development of new products by combining and/or transforming raw materials, at the extension of food shelf-life, and at the recovery, exploitation and further use of valuable compounds, among others. During the last century several new food processes have arisen and most of the traditional ones have evolved. The future food factory will require innovative approaches to food processing that combine increased sustainability, efficiency and quality. Herein, the objective of this review is to explore the multiple applications of high voltage electric field (HVEF) and its potential within the food industry. These applications include processes such as drying, refrigeration, freezing, thawing, extension of food shelf-life, and extraction of biocompounds. In addition, the principles, mechanism of action and influence of specific parameters are discussed comprehensively. Copyright © 2016 Elsevier Ltd. All rights reserved.

  1. Design principles and field performance of a solar spectral irradiance meter

    Energy Technology Data Exchange (ETDEWEB)

    Tatsiankou, V.; Hinzer, K.; Haysom, J.; Schriemer, H.; Emery, K.; Beal, R.

    2016-08-01

    A solar spectral irradiance meter (SSIM), designed for measuring the direct normal irradiance (DNI) in six wavelength bands, has been combined with models to determine key atmospheric transmittances and the resulting spectral irradiance distribution of DNI under all sky conditions. The design principles of the SSIM, implementation of a parameterized transmittance model, and field performance comparisons of modeled solar spectra with reference radiometer measurements are presented. Two SSIMs were tested and calibrated at the National Renewable Energy Laboratory (NREL) against four spectroradiometers and an absolute cavity radiometer. The SSIMs' DNI was on average within 1% of the DNI values reported by one of NREL's primary absolute cavity radiometers. An additional SSIM was installed at the SUNLAB Outdoor Test Facility in September 2014, with ongoing collection of environmental and spectral data. The SSIM's performance in Ottawa was compared against a commercial pyrheliometer and a spectroradiometer over an eight month study. The difference in integrated daily spectral irradiance between the SSIM and the ASD spectroradiometer was found to be less than 1%. The cumulative energy density collected by the SSIM over this duration agreed with that measured by an Eppley model NIP pyrheliometer to within 0.5%. No degradation was observed.

  2. A principled dimension-reduction method for the population density approach to modeling networks of neurons with synaptic dynamics.

    Science.gov (United States)

    Ly, Cheng

    2013-10-01

    The population density approach to neural network modeling has been utilized in a variety of contexts. The idea is to group many similar noisy neurons into populations and track the probability density function for each population that encompasses the proportion of neurons with a particular state rather than simulating individual neurons (i.e., Monte Carlo). It is commonly used for both analytic insight and as a time-saving computational tool. The main shortcoming of this method is that when realistic attributes are incorporated in the underlying neuron model, the dimension of the probability density function increases, leading to intractable equations or, at best, computationally intensive simulations. Thus, developing principled dimension-reduction methods is essential for the robustness of these powerful methods. As a more pragmatic tool, it would be of great value for the larger theoretical neuroscience community. For exposition of this method, we consider a single uncoupled population of leaky integrate-and-fire neurons receiving external excitatory synaptic input only. We present a dimension-reduction method that reduces a two-dimensional partial differential-integral equation to a computationally efficient one-dimensional system and gives qualitatively accurate results in both the steady-state and nonequilibrium regimes. The method, termed modified mean-field method, is based entirely on the governing equations and not on any auxiliary variables or parameters, and it does not require fine-tuning. The principles of the modified mean-field method have potential applicability to more realistic (i.e., higher-dimensional) neural networks.
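
    For orientation, the sketch below is the Monte Carlo baseline that the population density approach replaces (it is not the author's modified mean-field reduction): a population of noisy leaky integrate-and-fire neurons with excitatory synaptic input is simulated and the empirical membrane-potential density is accumulated. All parameter values are illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(0)
        n_neurons, dt, t_end = 5000, 1.0e-4, 1.0      # population size, time step (s), duration (s)
        tau, v_reset, v_thresh = 0.02, 0.0, 1.0       # membrane time constant and voltage bounds
        rate_in, jump = 800.0, 0.05                   # Poisson input rate (Hz) and synaptic jump

        v = np.zeros(n_neurons)
        spikes = 0
        for _ in range(int(t_end / dt)):
            kicks = rng.poisson(rate_in * dt, n_neurons) * jump   # excitatory synaptic events
            v += dt * (-v / tau) + kicks                          # leaky integration
            fired = v >= v_thresh
            spikes += fired.sum()
            v[fired] = v_reset                                    # fire-and-reset rule

        density, edges = np.histogram(v, bins=40, range=(v_reset, v_thresh), density=True)
        print("population firing rate (Hz):", spikes / (n_neurons * t_end))
        print("empirical voltage density in the first bins:", density[:5])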

  3. Principles and applications of polymerase chain reaction in medical diagnostic fields: a review.

    Science.gov (United States)

    Valones, Marcela Agne Alves; Guimarães, Rafael Lima; Brandão, Lucas André Cavalcanti; de Souza, Paulo Roberto Eleutério; de Albuquerque Tavares Carvalho, Alessandra; Crovela, Sergio

    2009-01-01

    Recent developments in molecular methods have revolutionized the detection and characterization of microorganisms in a broad range of medical diagnostic fields, including virology, mycology, parasitology, microbiology and dentistry. Among these methods, Polymerase Chain Reaction (PCR) has generated great benefits and allowed scientific advancements. PCR is an excellent technique for the rapid detection of pathogens, including those difficult to culture. Along with conventional PCR techniques, Real-Time PCR has emerged as a technological innovation and is playing an ever-increasing role in clinical diagnostics and research laboratories. Due to its capacity to generate both qualitative and quantitative results, Real-Time PCR is considered a fast and accurate platform. The aim of the present literature review is to explore the clinical usefulness and potential of both conventional PCR and Real-Time PCR assays in diverse medical fields, addressing its main uses and advances.

  4. First-principle optimal local pseudopotentials construction via optimized effective potential method

    International Nuclear Information System (INIS)

    Mi, Wenhui; Zhang, Shoutao; Wang, Yanchao; Ma, Yanming; Miao, Maosheng

    2016-01-01

    The local pseudopotential (LPP) is an important component of orbital-free density functional theory, a promising large-scale simulation method that can maintain information on a material’s electron state. The LPP is usually extracted from solid-state density functional theory calculations, which makes it difficult to assess its transferability to cases involving very different chemical environments. Here, we reveal a fundamental relation between the first-principles norm-conserving pseudopotential (NCPP) and the LPP. On the basis of this relationship, we demonstrate that the LPP can be constructed optimally from the NCPP for a large number of elements using the optimized effective potential method. Specifically, our method provides a unified scheme for constructing and assessing the LPP within the framework of first-principles pseudopotentials. Our experience reveals that the existence of a valid LPP with high transferability may strongly depend on the element.

  5. Principles of Single-Laboratory Validation of Analytical Methods for Testing the Chemical Composition of Pesticides

    Energy Technology Data Exchange (ETDEWEB)

    Ambrus, A. [Hungarian Food Safety Office, Budapest (Hungary)

    2009-07-15

    Underlying theoretical and practical approaches towards pesticide formulation analysis are discussed, i.e. general principles, performance characteristics, applicability of validation data, verification of method performance, and adaptation of validated methods by other laboratories. The principles of single laboratory validation of analytical methods for testing the chemical composition of pesticides are outlined. Also the theoretical background is described for performing pesticide formulation analysis as outlined in ISO, CIPAC/AOAC and IUPAC guidelines, including methodological characteristics such as specificity, selectivity, linearity, accuracy, trueness, precision and bias. Appendices I–III hereof give practical and elaborated examples on how to use the Horwitz approach and formulae for estimating the target standard deviation towards acceptable analytical repeatability. The estimation of trueness and the establishment of typical within-laboratory reproducibility are treated in greater detail by means of worked-out examples. (author)
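
    One concrete ingredient mentioned above is the Horwitz-type target standard deviation. The sketch below uses the commonly quoted form RSD(%) = 2^(1 - 0.5 log10 C), with C the analyte concentration expressed as a mass fraction; the exact form and any repeatability factor should be checked against the guideline before use, and the observed RSD is an invented number.

        import math

        def horwitz_rsd_percent(c_mass_fraction):
            """Predicted among-laboratory relative standard deviation, in percent."""
            return 2.0 ** (1.0 - 0.5 * math.log10(c_mass_fraction))

        c = 0.25                                      # active ingredient declared at 25 % (m/m)
        target = horwitz_rsd_percent(c)
        observed = 1.1                                # illustrative observed RSD, percent
        print(f"Horwitz target RSD: {target:.2f} %")
        print(f"HorRat (observed / target): {observed / target:.2f}")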

  6. Extraordinary Magnetic Field Enhancement with Metallic Nanowire: Role of Surface Impedance in Babinet's Principle for Sub-Skin-Depth Regime

    Science.gov (United States)

    Koo, Sukmo; Kumar, M. Sathish; Shin, Jonghwa; Kim, Daisik; Park, Namkyoo

    2009-12-01

    We propose and analyze the “complementary” structure of a metallic nanogap, namely, the metallic nanowire for magnetic field enhancement. A huge enhancement of the field up to a factor of 300 was achieved. Introducing the surface impedance concept, we also develop and numerically confirm a new analytic theory which successfully predicts the field enhancement factors for metal nanostructures. Compared to the predictions of the classical Babinet principle applied to a nanogap, an order of magnitude difference in the field enhancement factor was observed for the sub-skin-depth regime nanowire.

  7. MiTEP's Collaborative Field Course Design Process Based on Earth Science Literacy Principles

    Science.gov (United States)

    Engelmann, C. A.; Rose, W. I.; Huntoon, J. E.; Klawiter, M. F.; Hungwe, K.

    2010-12-01

    Michigan Technological University has developed a collaborative process for designing summer field courses for teachers as part of their National Science Foundation funded Math Science Partnership program, called the Michigan Teacher Excellence Program (MiTEP). This design process was implemented and then piloted during two two-week courses: Earth Science Institute I (ESI I) and Earth Science Institute II (ESI II). Participants consisted of a small group of Michigan urban science teachers who are members of the MiTEP program. The Earth Science Literacy Principles (ESLP) served as the framework for course design in conjunction with input from participating MiTEP teachers as well as research done on common teacher and student misconceptions in Earth Science. Research on the Earth Science misconception component, aligned to the ESLP, is more fully addressed in GSA Abstracts with Programs Vol. 42, No. 5. “Recognizing Earth Science Misconceptions and Reconstructing Knowledge through Conceptual-Change-Teaching”. The ESLP were released to the public in January 2009 by the Earth Science Literacy Organizing Committee and can be found at http://www.earthscienceliteracy.org/index.html. Each day of the first nine days of both Institutes was focused on one of the nine ESLP Big Ideas; the tenth day emphasized integration of concepts across all of the ESLP Big Ideas. Throughout each day, Michigan Tech graduate student facilitators and professors from Michigan Tech and Grand Valley State University consistently focused teaching and learning on the day's Big Idea. Many Earth Science experts from Michigan Tech and Grand Valley State University joined the MiTEP teachers in the field or on campus, giving presentations on the latest research in their area that was related to that Big Idea. Field sites were chosen for their unique geological features as well as for the “sense of place” each site provided. Preliminary research findings indicate that this collaborative design

  8. Path planning in uncertain flow fields using ensemble method

    KAUST Repository

    Wang, Tong

    2016-08-20

    An ensemble-based approach is developed to conduct optimal path planning in unsteady ocean currents under uncertainty. We focus our attention on two-dimensional steady and unsteady uncertain flows, and adopt a sampling methodology that is well suited to operational forecasts, where an ensemble of deterministic predictions is used to model and quantify uncertainty. In an operational setting, much about dynamics, topography, and forcing of the ocean environment is uncertain. To address this uncertainty, the flow field is parametrized using a finite number of independent canonical random variables with known densities, and the ensemble is generated by sampling these variables. For each of the resulting realizations of the uncertain current field, we predict the path that minimizes the travel time by solving a boundary value problem (BVP), based on the Pontryagin maximum principle. A family of backward-in-time trajectories starting at the end position is used to generate suitable initial values for the BVP solver. This allows us to examine and analyze the performance of the sampling strategy and to develop insight into extensions dealing with general circulation ocean models. In particular, the ensemble method enables us to perform a statistical analysis of travel times and consequently develop a path planning approach that accounts for these statistics. The proposed methodology is tested for a number of scenarios. We first validate our algorithms by reproducing simple canonical solutions, and then demonstrate our approach in more complex flow fields, including idealized, steady and unsteady double-gyre flows.
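
    A stripped-down illustration of the ensemble idea (it does not solve the Pontryagin boundary value problem of the paper): each ensemble member is a sampled realization of an uncertain steady flow, the travel time of a fixed candidate path is evaluated crudely for each member, and the resulting statistics are reported. The jet parametrization, path and sampling densities are assumptions.

        import numpy as np

        rng = np.random.default_rng(1)
        n_members = 200                               # ensemble size

        def flow(xy, strength, y_jet):
            """Uncertain steady current: an eastward jet with random strength and position."""
            u = strength * np.exp(-((xy[1] - y_jet) / 0.3) ** 2)
            return np.array([u, 0.0])

        # Fixed candidate path from (0, 0) to (1, 1), discretized into straight segments.
        waypoints = np.linspace([0.0, 0.0], [1.0, 1.0], 50)
        speed = 1.0                                   # vehicle speed through the water

        times = np.empty(n_members)
        for m in range(n_members):
            strength, y_jet = rng.normal(0.5, 0.15), rng.normal(0.5, 0.1)
            t = 0.0
            for p, q in zip(waypoints[:-1], waypoints[1:]):
                seg = q - p
                dist = np.linalg.norm(seg)
                along = flow(0.5 * (p + q), strength, y_jet) @ (seg / dist)
                t += dist / max(speed + along, 1e-3)  # crude along-track travel-time estimate
            times[m] = t

        print("mean travel time:", times.mean())
        print("5th / 95th percentiles:", np.percentile(times, [5, 95]))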

  9. Longitudinal bunch diagnostics using coherent transition radiation spectroscopy. Physical principles, multichannel spectrometer, experimental results, mathematical methods

    International Nuclear Information System (INIS)

    Schmidt, Bernhard; Wesch, Stephan; Behrens, Christopher; Koevener, Toke; Hass, Eugen; Casalbuoni, Sara

    2018-03-01

    The generation and properties of transition radiation (TR) are thoroughly treated. The spectral energy density, as described by the Ginzburg-Frank formula, is computed analytically, and the modifications caused by the finite size of the TR screen and by near-field diffraction effects are carefully analyzed. The principles of electron bunch shape reconstruction using coherent transition radiation are outlined. Spectroscopic measurements yield only the magnitude of the longitudinal form factor but not its phase. Two phase retrieval methods are investigated and illustrated with model calculations: analytic phase computation by means of the Kramers-Kronig dispersion relation, and iterative phase retrieval. Particular attention is paid to the ambiguities which are unavoidable in the reconstruction of longitudinal charge density profiles from spectroscopic data. The origin of these ambiguities has been identified and a thorough mathematical analysis is presented. The experimental part of the paper comprises a description of our multichannel infrared and THz spectrometer and a selection of measurements at FLASH, comparing the bunch profiles derived from spectroscopic data with those determined with a transversely deflecting microwave structure. A rigorous derivation of the Kramers-Kronig phase formula is presented in Appendix A. Numerous analytic model calculations can be found in Appendix B. The differences between normal and truncated Gaussians are discussed in Appendix C. Finally, Appendix D contains a short description of the propagation of an electromagnetic wave front by two-dimensional fast Fourier transformation. This is the basis of a powerful numerical Mathematica trademark code THzTransport, which permits the propagation of electromagnetic wave fronts through a beam line consisting of drift spaces, lenses, mirrors and apertures.
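
    A sketch of the Kramers-Kronig (minimal) phase step on a synthetic example, using the commonly quoted dispersion integral phi(w) = -(2w/pi) P∫ ln(|F(w')|/|F(w)|)/(w'^2 - w^2) dw'; the sign convention and the principal-value treatment should be checked against Appendix A of the paper, and the Gaussian test form factor and frequency grid are assumptions.

        import numpy as np

        w = np.linspace(0.01, 10.0, 2000)             # frequency grid (arbitrary units)
        sigma_t = 0.5
        mag = np.exp(-0.5 * (w * sigma_t) ** 2)       # |F| of a Gaussian test profile

        def kk_phase(w, mag):
            """Minimal phase from the magnitude via the Kramers-Kronig dispersion integral."""
            ln_mag = np.log(mag)
            dw = w[1] - w[0]
            phi = np.empty_like(w)
            for i, wi in enumerate(w):
                denom = w**2 - wi**2
                denom[i] = 1.0                        # placeholder; the point is zeroed below
                integrand = (ln_mag - ln_mag[i]) / denom
                integrand[i] = 0.0                    # crude principal-value treatment
                phi[i] = -(2.0 * wi / np.pi) * np.trapz(integrand, dx=dw)
            return phi

        phi = kk_phase(w, mag)
        # For a Gaussian magnitude the reconstructed phase comes out essentially linear in
        # frequency, i.e. a pure shift of the bunch in time that leaves the shape unchanged.
        slope = np.polyfit(w, phi, 1)[0]
        print("deviation from a purely linear phase:", np.abs(phi - slope * w).max())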

  10. Longitudinal bunch diagnostics using coherent transition radiation spectroscopy. Physical principles, multichannel spectrometer, experimental results, mathematical methods

    Energy Technology Data Exchange (ETDEWEB)

    Schmidt, Bernhard; Wesch, Stephan; Behrens, Christopher [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany); Koevener, Toke [Hamburg Univ. (Germany); European Organization for Nuclear Research (CERN), Geneva (Switzerland); Hass, Eugen [Hamburg Univ. (Germany); Casalbuoni, Sara [Karlsruhe Institute of Technology (Germany). Inst. for Beam Physics and Technology; Schmueser, Peter [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany); Hamburg Univ. (Germany)

    2018-03-15

    The generation and properties of transition radiation (TR) are thoroughly treated. The spectral energy density, as described by the Ginzburg-Frank formula, is computed analytically, and the modifications caused by the finite size of the TR screen and by near-field diffraction effects are carefully analyzed. The principles of electron bunch shape reconstruction using coherent transition radiation are outlined. Spectroscopic measurements yield only the magnitude of the longitudinal form factor but not its phase. Two phase retrieval methods are investigated and illustrated with model calculations: analytic phase computation by means of the Kramers-Kronig dispersion relation, and iterative phase retrieval. Particular attention is paid to the ambiguities which are unavoidable in the reconstruction of longitudinal charge density profiles from spectroscopic data. The origin of these ambiguities has been identified and a thorough mathematical analysis is presented. The experimental part of the paper comprises a description of our multichannel infrared and THz spectrometer and a selection of measurements at FLASH, comparing the bunch profiles derived from spectroscopic data with those determined with a transversely deflecting microwave structure. A rigorous derivation of the Kramers-Kronig phase formula is presented in Appendix A. Numerous analytic model calculations can be found in Appendix B. The differences between normal and truncated Gaussians are discussed in Appendix C. Finally, Appendix D contains a short description of the propagation of an electromagnetic wave front by two-dimensional fast Fourier transformation. This is the basis of a powerful numerical Mathematica trademark code THzTransport, which permits the propagation of electromagnetic wave fronts through a beam line consisting of drift spaces, lenses, mirrors and apertures.

  11. Development and Field Testing of a Model to Simulate a Demonstration of Le Chatelier's Principle Using the Wheatstone Bridge Circuit.

    Science.gov (United States)

    Vickner, Edward Henry, Jr.

    An electronic simulation model was designed, constructed, and then field tested to determine student opinion of its effectiveness as an instructional aid. The model was designated as the Equilibrium System Simulator (ESS). The model was built on the principle of electrical symmetry applied to the Wheatstone bridge and was constructed from readily…
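
    The record does not describe the internals of the Equilibrium System Simulator, so the sketch below only shows the bridge relation the analogy rests on: the output vanishes when the arm ratios balance, a perturbation of one arm drives it away from zero, and adjusting the opposite arm restores balance. The resistor values are arbitrary.

        def bridge_output(v_in, r1, r2, r3, r4):
            """Detector voltage of a Wheatstone bridge with divider arms R1-R2 and R3-R4."""
            return v_in * (r2 / (r1 + r2) - r4 / (r3 + r4))

        v_in = 10.0
        print("balanced:   ", bridge_output(v_in, 100.0, 200.0, 50.0, 100.0))
        # Perturbing one arm ("stressing" the system) drives the output away from zero ...
        print("perturbed:  ", bridge_output(v_in, 100.0, 200.0, 50.0, 120.0))
        # ... and adjusting the opposite arm restores the balance, the behaviour used as
        # an electrical analogue of Le Chatelier's principle.
        print("re-balanced:", bridge_output(v_in, 100.0, 240.0, 50.0, 120.0))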

  12. Narrow field electromagnetic sensor system and method

    International Nuclear Information System (INIS)

    McEwan, T.E.

    1996-01-01

    A narrow field electromagnetic sensor system and method of sensing a characteristic of an object provide the capability to realize a characteristic of an object such as density, thickness, or presence, for any desired coordinate position on the object. One application is imaging. The sensor can also be used as an obstruction detector or an electronic trip wire with a narrow field without the disadvantages of impaired performance when exposed to dirt, snow, rain, or sunlight. The sensor employs a transmitter for transmitting a sequence of electromagnetic signals in response to a transmit timing signal, a receiver for sampling only the initial direct RF path of the electromagnetic signal while excluding all other electromagnetic signals in response to a receive timing signal, and a signal processor for processing the sampled direct RF path electromagnetic signal and providing an indication of the characteristic of an object. Usually, the electromagnetic signal is a short RF burst and the obstruction must provide a substantially complete eclipse of the direct RF path. By employing time-of-flight techniques, a timing circuit controls the receiver to sample only the initial direct RF path of the electromagnetic signal while not sampling indirect path electromagnetic signals. The sensor system also incorporates circuitry for ultra-wideband spread spectrum operation that reduces interference to and from other RF services while allowing co-location of multiple electronic sensors without the need for frequency assignments. 12 figs

  13. Analysis of calculating methods for failure distribution function based on maximal entropy principle

    International Nuclear Information System (INIS)

    Guo Chunying; Lin Yuangen; Jiang Meng; Wu Changli

    2009-01-01

    The computation of the failure distribution functions of electronic devices exposed to gamma rays is discussed. First, the possible device failure distribution models are determined through tests of statistical hypotheses using the test data. The results show that the device failure distribution can be described by several distribution models when the test data are scarce. In order to select the optimum failure distribution model, the maximal entropy principle is used and the elementary failure models are determined. Then, the Bootstrap estimation method is used to simulate the interval estimation of the mean and the standard deviation. On this basis, the maximal entropy principle is applied again and the simulated annealing method is used to find the optimum values of the mean and the standard deviation. Accordingly, the optimum failure distributions of the electronic devices are finally determined and the survival probabilities are calculated. (authors)
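
    The bootstrap step described above is easy to sketch; the sample values, resample count and confidence level below are illustrative, and the subsequent maximum-entropy and simulated-annealing selection is not reproduced.

        import numpy as np

        rng = np.random.default_rng(7)
        data = np.array([12.3, 15.1, 9.8, 14.2, 11.7, 13.5, 10.9, 16.0])   # made-up failure doses

        n_boot = 10000
        means = np.empty(n_boot)
        stds = np.empty(n_boot)
        for b in range(n_boot):
            resample = rng.choice(data, size=data.size, replace=True)       # resample with replacement
            means[b] = resample.mean()
            stds[b] = resample.std(ddof=1)

        print("95% bootstrap interval for the mean:", np.percentile(means, [2.5, 97.5]))
        print("95% bootstrap interval for the std :", np.percentile(stds, [2.5, 97.5]))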

  14. A maximum-principle preserving finite element method for scalar conservation equations

    KAUST Repository

    Guermond, Jean-Luc

    2014-04-01

    This paper introduces a first-order viscosity method for the explicit approximation of scalar conservation equations with Lipschitz fluxes using continuous finite elements on arbitrary grids in any space dimension. Provided the lumped mass matrix is positive definite, the method is shown to satisfy the local maximum principle under a usual CFL condition. The method is independent of the cell type; for instance, the mesh can be a combination of tetrahedra, hexahedra, and prisms in three space dimensions. © 2014 Elsevier B.V.
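
    As a minimal stand-in for the scheme analyzed in the paper, the sketch below uses the simplest first-order viscosity method (global Lax-Friedrichs on a periodic finite-difference grid for Burgers' equation) rather than continuous finite elements on arbitrary meshes; under the CFL condition each update is a convex combination of neighbouring values, so the discrete maximum principle holds.

        import numpy as np

        n, cfl = 200, 0.4
        x = np.linspace(0.0, 1.0, n, endpoint=False)
        dx = 1.0 / n
        u = np.where((x > 0.3) & (x < 0.6), 1.0, 0.0)     # discontinuous initial data in [0, 1]

        def flux(u):
            return 0.5 * u**2                             # Burgers' flux

        t, t_end = 0.0, 0.3
        while t < t_end:
            dt = cfl * dx / max(np.abs(u).max(), 1e-12)   # CFL-limited time step
            up, um = np.roll(u, -1), np.roll(u, 1)        # periodic neighbours
            # First-order (Lax-Friedrichs) viscosity update: a convex combination of
            # neighbouring values under the CFL condition.
            u = 0.5 * (up + um) - dt / (2.0 * dx) * (flux(up) - flux(um))
            t += dt

        # The bounds of the initial data are never violated (discrete maximum principle).
        print("min and max of the computed solution:", u.min(), u.max())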

  15. A maximum-principle preserving finite element method for scalar conservation equations

    KAUST Repository

    Guermond, Jean-Luc; Nazarov, Murtazo

    2014-01-01

    This paper introduces a first-order viscosity method for the explicit approximation of scalar conservation equations with Lipschitz fluxes using continuous finite elements on arbitrary grids in any space dimension. Provided the lumped mass matrix is positive definite, the method is shown to satisfy the local maximum principle under a usual CFL condition. The method is independent of the cell type; for instance, the mesh can be a combination of tetrahedra, hexahedra, and prisms in three space dimensions. © 2014 Elsevier B.V.

  16. Principles of Developing Multi-Pesticide Methods Based on HPLC Determination

    Energy Technology Data Exchange (ETDEWEB)

    Dudar, E. [Plant Protection & Soil Conservation Service of Budapest, Budapest (Hungary)

    2009-07-15

    Principles for the development of multi-pesticide methods based on HPLC determination are outlined. Flow charts and block diagrams give guidance on how to proceed stepwise in the set-up of respective analytical methods. Detailed information is provided on what to take into consideration for setting up a pesticide formulation analysis method. HPLC variables like the types of column, solvents and their strength, pH value, eluent modifiers, column temperature, etc., and their influence on the separation and resolution of chromatographic peaks are discussed as well as the necessity and benefits of internal standardization. Examples of system suitability testing experiments are given for illustration. (author)

  17. THE PRINCIPLES AND METHODS OF INFORMATION AND EDUCATIONAL SPACE SEMANTIC STRUCTURING BASED ON ONTOLOGIC APPROACH REALIZATION

    Directory of Open Access Journals (Sweden)

    Yurij F. Telnov

    2014-01-01

    Full Text Available This article presents the principles of semantic structuring of the information and educational space of knowledge objects and of scientific and educational services, using methods of ontological engineering. The novelty of the proposed approach lies in interfacing the content ontology with the ontology of scientific and educational services, which allows an effective composition of services and knowledge objects according to models of professional competences and the requirements of learners. As a result of applying these methods of semantic structuring of the information and educational space, educational institutions can make integrated use of diverse, distributed scientific and educational content for conducting research, developing teaching materials and training.

  18. System principles, mathematical models and methods to ensure high reliability of safety systems

    Science.gov (United States)

    Zaslavskyi, V.

    2017-04-01

    Modern safety and security systems are composed of a large number of components designed for the detection, localization, tracking, collection, and processing of information from monitoring, telemetry and control systems. They are required to be highly reliable in order to correctly perform data aggregation, processing and analysis for subsequent decision-making support. In the design and construction phases of such systems, various types of components (elements, devices, and subsystems) are considered and used to ensure highly reliable signal detection, noise isolation, and reduction of erroneous commands. When generating design solutions for highly reliable systems, a number of restrictions and conditions, such as the available component types and various constraints on resources, should be considered. Different component types perform identical functions; however, they are implemented using diverse principles and approaches and have distinct technical and economic indicators such as cost or power consumption. The systematic use of different component types increases the probability of task completion and eliminates common-cause failures. We consider the type-variety principle as an engineering principle of system analysis, mathematical models based on this principle, and algorithms for solving optimization problems in the design of highly reliable safety and security systems. The mathematical models are formalized as a class of two-level discrete optimization problems of large dimension. The proposed approach, mathematical models and algorithms can be used to solve optimal redundancy problems based on a variety of methods and control devices for fault and defect detection in technical systems, telecommunication networks, and energy systems.

  19. Beyond utilitarianism: a method for analyzing competing ethical principles in a decision analysis of liver transplantation.

    Science.gov (United States)

    Volk, Michael L; Lok, Anna S F; Ubel, Peter A; Vijan, Sandeep

    2008-01-01

    The utilitarian foundation of decision analysis limits its usefulness for many social policy decisions. In this study, the authors examine a method to incorporate competing ethical principles in a decision analysis of liver transplantation for a patient with acute liver failure (ALF). A Markov model was constructed to compare the benefit of transplantation for a patient with ALF versus the harm caused to other patients on the waiting list and to determine the lowest acceptable 5-y posttransplant survival for the ALF patient. The weighting of the ALF patient and other patients was then adjusted using a multiattribute variable incorporating utilitarianism, urgency, and other principles such as fair chances. In the base-case analysis, the strategy of transplanting the ALF patient resulted in a 0.8% increase in the risk of death and a utility loss of 7.8 quality-adjusted days of life for each of the other patients on the waiting list. These harms cumulatively outweighed the benefit of transplantation for an ALF patient having a posttransplant survival of less than 48% at 5 y. However, the threshold for an acceptable posttransplant survival for the ALF patient ranged from 25% to 56% at 5 y, depending on the ethical principles involved. The results of the decision analysis vary depending on the ethical perspective. This study demonstrates how competing ethical principles can be numerically incorporated in a decision analysis.

  20. Ethical standards and regulations principles of professional conduct in the field of mediation

    Directory of Open Access Journals (Sweden)

    Yulia I. Melnychuk

    2016-01-01

    Full Text Available The emergence of conflict in the course of human life is an inevitable phenomenon. A subject of contention gives rise to a conflict situation, which may manifest itself as an offence. Mediation is an alternative method of conflict resolution aimed at settling and resolving conflicts through direct communication between the offender and the victim. The basis of the mediation process is cooperation, through which the causes of the disagreement are identified and the opposing sides search for mutually acceptable ways of resolving the situation. The third participant in this process is the mediator, whose role is to regulate and manage the communicative process. The institutionalization of codes of conduct, which embody certain kinds of moral relations between people, is optimal for the exercise of this professional activity. A social-philosophical interpretation of the cultural and humanistic principles of the restorative process is fixed in the ethical standards of mediation. Ethical norms are key to achieving genuine proficiency, which is why there is a clear need for an ethical evaluation of the mediator's practice in order to maintain moral and legal norms and to improve its functioning. The professional practice of the mediator is based on an awareness of its ethical aspects and social duty in the prevention of recidivism, the observance of ethical rules and standards, the relevant European legislation and national traditions.

  1. A Case Study of Academic Writing Development Through Principled Versus Standard Clt Method at Binus University

    Directory of Open Access Journals (Sweden)

    Almodad Biduk Asmani

    2013-10-01

    Full Text Available The purpose of the research project is to investigate how far the academic writing skills of Binus University students can be developed through two conflicting CLT methods: standard and principled. The research project is expected to result in a computer-animated format which can be used as one of the main tools in teaching and learning grammar at Binus University. The research project uses the qualitative approach and thus uses verbal data. It involves two subject groups (experimental and control). The experimental group receives grammar instruction using the Principled CLT approach, while the control group receives the standard CLT approach. A survey is then conducted with the two groups to find out their comments on the two teaching methods. From the results of the questionnaires, it is found that the Principled CLT method is favored for its knowledge and accuracy factors, while the Standard CLT is preferred for its fun and independence factors.

  2. Mathematical methods in geometrization of coal field

    Science.gov (United States)

    Shurygin, D. N.; Kalinchenko, V. M.; Tkachev, V. A.; Tretyak, A. Ya

    2017-10-01

    This work considers an approach to increasing the overall performance of collieries by improving the accuracy of the geometrization of coal seams. A sequence of stages is proposed for the mathematical modelling of the spatial placement of deposit indicators, taking into account the delineation of homogeneous sections of the seam and the establishment of quantitative interrelations between the mining-geological indicators of coal layers. As a uniform mathematical method for modelling these various interrelations, the group method of data handling (MGUA), a variant of regression analysis, is proposed. This approach can be applied to drawing boundaries between geologically homogeneous sections of coal seams in the form of a linear discriminant function. Using the example of dividing a mine field into districts under the conditions of the "Sadkinsky" mine (East Donbass), the use of the combined approach, based on discriminant analysis and MGUA, for forecasting zones of small-amplitude disturbance of a coal layer is shown.

  3. First principles density functional calculation of magnetic moment and hyperfine fields of dilute transition metal impurities in Gd host

    International Nuclear Information System (INIS)

    Mohanta, S.K.; Mishra, S.N.; Srivastava, S.K.

    2014-01-01

    We present first principles calculations of electronic structure and magnetic properties of dilute transition metal (3d, 4d and 5d) impurities in a Gd host. The calculations have been performed within density functional theory using the full potential linearized augmented plane wave technique and the GGA+U method. The spin and orbital contributions to the magnetic moment and the hyperfine fields have been computed. We find large magnetic moments for 3d (Ti–Co), 4d (Nb–Ru) and 5d (Ta–Os) impurities with magnitudes significantly different from the values estimated from an earlier mean field calculation [J. Magn. Magn. Mater. 320 (2008) e446–e449]. The exchange interaction between the impurity and host Gd moments is found to be positive for early 3d elements (Sc–V), while in all other cases an anti-ferromagnetic coupling is observed. The trends for the magnetic moment and hyperfine field of d-impurities in Gd show qualitative differences with respect to their behavior in Fe, Co and Ni. The calculated total hyperfine field, in most cases, shows excellent agreement with the experimental results. A detailed analysis of the Fermi contact hyperfine field has been made, revealing striking differences for impurities with less or more than half-filled d-shells. The impurity-induced perturbations in host moments and the change in the global magnetization of the unit cell have also been computed. The variation within each of the d-series is found to correlate with the d–d hybridization strength between the impurity and host atoms. - Highlights: • Detailed study of transition metal impurities in ferromagnetic Gd has been carried out. • The trends in impurity magnetic moment are qualitatively different from Fe, Co and Ni. • The variation within each of the d-series is found to correlate with the d–d hybridization strength between the impurity and host atoms. • The experimental trend in the hyperfine field is reproduced successfully

  4. Low-frequency electrical and magnetic fields: The precautionary principle for national authorities. Guidance for decision-makers

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1996-12-31

    This publication is intended as support for making decisions on health hazards and electromagnetic fields. It is based on the strength of scientific findings hitherto, while technical and economic aspects of possible measures are considered in the light of limited community resources. The national authorities recommend a precautionary principle based primarily on non-discountable cancer risks. Similar principles should also be applied to other suspected effects on health. The guide offers supporting documentation for decision-makers' task of assessing what is reasonable in each individual case, balancing possible hazards against technical and economic considerations. 6 refs

  5. Foreign Experience of Applying the Principle of "Pump or Pay" in the Field of Pipeline Transportation

    Directory of Open Access Journals (Sweden)

    Valeriy I. Salygin

    2015-01-01

    Full Text Available This article reviews the practice of the "ship or pay" principle in the US, Canada and Europe. The authors analyze the practice of concluding contracts for oil and petroleum product transportation, and the procedures, terms and conditions stipulated in such contracts. The "take or pay" principle is common practice in developed countries such as the US, Canada and the UK. A specific feature of the United States is that pipelines are built not for one shipper only, but for the whole market, which gave rise to the "open season" tradition. In Canada, the "take or pay" principle is applied to cover the capital costs of the carrier. The main reason for using "take or pay" terms is to minimize the risks of a carrier building or expanding its pipeline network, by guaranteeing the shipper's payments once the pipeline is put into operation. "Take or pay" contracts cover the carrier's obligation to provide transportation of an agreed minimum amount of petroleum for the consignor within a certain period. In turn, the shipper is obliged to tender that minimum amount and to pay for it, regardless of whether the oil is actually shipped. The "take or pay" principle is a kind of risk-sharing mechanism which allows the risks of non-fulfillment of the contract to be shifted to the shipper. In addition, the "take or pay" principle can serve as an indirect guarantee in the context of project financing and can therefore facilitate financing. The article emphasizes the main advantages of applying this principle and the opportunities for its use in Russia.
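
    As a purely illustrative aid to the payment mechanics described above, the sketch below computes a monthly "take or pay" settlement: the shipper is billed for at least its committed volume even when it ships less. The volumes and tariff are invented.

      def take_or_pay_charge(shipped_volume, committed_volume, tariff):
          # The shipper pays for at least the committed volume, even if it ships less.
          return max(shipped_volume, committed_volume) * tariff

      # Hypothetical settlement: 800 kt shipped against a 1,000 kt commitment at 2 $/t.
      print(take_or_pay_charge(shipped_volume=800_000, committed_volume=1_000_000, tariff=2.0))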

  6. A method of orbital analysis for large-scale first-principles simulations

    International Nuclear Information System (INIS)

    Ohwaki, Tsukuru; Otani, Minoru; Ozaki, Taisuke

    2014-01-01

    An efficient method of calculating the natural bond orbitals (NBOs) based on a truncation of the entire density matrix of a whole system is presented for large-scale density functional theory calculations. The method recovers an orbital picture for O(N) electronic structure methods which directly evaluate the density matrix without using Kohn-Sham orbitals, thus enabling quantitative analysis of chemical reactions in large-scale systems in the language of localized Lewis-type chemical bonds. With the density matrix calculated by either an exact diagonalization or O(N) method, the computational cost is O(1) for the calculation of NBOs associated with a local region where a chemical reaction takes place. As an illustration of the method, we demonstrate how an electronic structure in a local region of interest can be analyzed by NBOs in a large-scale first-principles molecular dynamics simulation for a liquid electrolyte bulk model (propylene carbonate + LiBF4).

  7. Preliminary research on finite difference method to solve radon field distribution over sandstone-type uranium ore body

    International Nuclear Information System (INIS)

    Li Bihong; Shuang Na; Liu Qingcheng

    2006-01-01

    The principle of the finite difference method is introduced, and the radon field distribution over a sandstone-type uranium deposit is described. The governing equation of the radon field distribution is established. Solving the radon field distribution equation with a finite difference algorithm provides a numerical method for the forward calculation of the radon field over a sandstone-type uranium mine. A study of the 2-D finite difference method, centred on high-anomaly radon fields and taking into account the character of the radon field over sandstone-type uranium deposits, provides an algorithm for further research. (authors)
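
    The abstract does not give the governing equation or grid, so purely as an illustration the sketch below solves a 2-D steady-state diffusion equation with radioactive decay, D*laplacian(C) - lambda*C + S = 0, by Jacobi iteration on a uniform grid; this is one common way a finite difference forward model of a radon field could be set up, and all parameters are invented.

      import numpy as np

      # Hypothetical parameters: diffusion coefficient, Rn-222 decay constant, grid spacing.
      D, lam, h = 1.0e-6, 2.1e-6, 1.0          # m^2/s, 1/s, m
      nx, ny, n_iter = 60, 40, 5000

      C = np.zeros((ny, nx))                   # radon concentration, held at zero on the boundary
      S = np.zeros((ny, nx))
      S[5:10, 25:35] = 1.0e-3                  # buried source region standing in for the ore body

      # Jacobi iteration for the 5-point stencil of D*laplacian(C) - lam*C + S = 0.
      for _ in range(n_iter):
          neighbours = (D / h**2) * (C[1:-1, :-2] + C[1:-1, 2:] + C[:-2, 1:-1] + C[2:, 1:-1])
          C[1:-1, 1:-1] = (neighbours + S[1:-1, 1:-1]) / (4.0 * D / h**2 + lam)

      print("peak concentration:", C.max())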

  8. Rapid Methods for the Detection of Foodborne Bacterial Pathogens: Principles, Applications, Advantages and Limitations

    Directory of Open Access Journals (Sweden)

    Law eJodi Woan-Fei

    2015-01-01

    Full Text Available The incidence of foodborne diseases has increased over the years and has become a major public health problem globally. Foodborne pathogens can be found in various foods, and it is important to detect them in order to provide a safe food supply and to prevent foodborne diseases. The conventional methods used to detect foodborne pathogens are time-consuming and laborious. Hence, a variety of methods have been developed for the rapid detection of foodborne pathogens, as required in many food analyses. Rapid detection methods can be categorized into nucleic acid-based, biosensor-based and immunological-based methods. This review emphasizes the principles and application of recent rapid methods for the detection of foodborne bacterial pathogens. The detection methods included are simple polymerase chain reaction (PCR), multiplex PCR, real-time PCR, nucleic acid sequence-based amplification (NASBA), loop-mediated isothermal amplification (LAMP) and oligonucleotide DNA microarrays, classified as nucleic acid-based methods; optical, electrochemical and mass-based biosensors, classified as biosensor-based methods; and enzyme-linked immunosorbent assay (ELISA) and lateral flow immunoassay, classified as immunological-based methods. In general, rapid detection methods are time-efficient, sensitive, specific and labor-saving. The development of rapid detection methods is vital for the prevention and treatment of foodborne diseases.

  9. Efficient Calculation of Near Fields in the FDTD Method

    DEFF Research Database (Denmark)

    Franek, Ondrej

    2011-01-01

    When calculating frequency-domain near fields by the FDTD method, almost 50 % reduction in memory and CPU operations can be achieved if only E-fields are stored during the main time-stepping loop and H-fields are computed later. An improved method of obtaining the H-fields from Faraday's law is presented.
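
    A minimal sketch of the kind of post-processing step this record refers to, under the assumption that the stored quantities are frequency-domain E-field phasors on a uniform grid: the magnetic field then follows from Faraday's law, H = curl(E)/(-j*omega*mu). The grid spacing, frequency and random test field below are invented.

      import numpy as np

      def h_from_e(Ex, Ey, Ez, dx, dy, dz, freq, mu=4e-7 * np.pi):
          # Frequency-domain Faraday's law: curl(E) = -j*omega*mu*H  =>  H = curl(E) / (-j*omega*mu).
          # Field arrays are indexed [ix, iy, iz]; derivatives use central differences.
          omega = 2.0 * np.pi * freq
          curl_x = np.gradient(Ez, dy, axis=1) - np.gradient(Ey, dz, axis=2)
          curl_y = np.gradient(Ex, dz, axis=2) - np.gradient(Ez, dx, axis=0)
          curl_z = np.gradient(Ey, dx, axis=0) - np.gradient(Ex, dy, axis=1)
          scale = 1.0 / (-1j * omega * mu)
          return curl_x * scale, curl_y * scale, curl_z * scale

      # Tiny demo on a random complex phasor field sampled on a 3 mm grid at 1 GHz.
      rng = np.random.default_rng(0)
      E = rng.normal(size=(3, 20, 20, 20)) + 1j * rng.normal(size=(3, 20, 20, 20))
      Hx, Hy, Hz = h_from_e(*E, dx=3e-3, dy=3e-3, dz=3e-3, freq=1e9)
      print(Hx.shape)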

  10. The significance of a new correspondence principle for electromagnetic interaction in strong optical field ionisation

    International Nuclear Information System (INIS)

    Boreham, B. W.; Hora, H.

    1997-01-01

    We have recently developed a correspondence principle for electromagnetic interaction. When applied to laser interactions with electrons this correspondence principle identifies a critical laser intensity I*. This critical intensity is a transition intensity separating classical mechanical and quantum mechanical interaction regimes. In this paper we discuss the further application of I* to the interaction of bound electrons in atoms. By comparing I* with the ionisation threshold intensities as calculated from a cycle-averaged simple-atom model we conclude that I* can be usefully interpreted as a lower bound to the classical regime in studies of ionisation of gas atoms by intense laser beams

  11. Introduction to First-Principles Electronic Structure Methods: Application to Actinide Materials

    International Nuclear Information System (INIS)

    Klepeis, J E

    2006-01-01

    This paper provides an introduction for non-experts to first-principles electronic structure methods that are widely used in condensed-matter physics. Particular emphasis is placed on giving the appropriate background information needed to better appreciate the use of these methods to study actinide and other materials. Specifically, I describe the underlying theory sufficiently to enable an understanding of the relative strengths and weaknesses of the methods. I also explain the meaning of commonly used terminology, including density functional theory (DFT), local density approximation (LDA), and generalized gradient approximation (GGA), as well as linear muffin-tin orbital (LMTO), linear augmented plane wave (LAPW), and pseudopotential methods. I also briefly discuss methodologies that extend the basic theory to address specific limitations. Finally, I describe a few illustrative applications, including quantum molecular dynamics (QMD) simulations and studies of surfaces, impurities, and defects. I conclude by addressing the current controversy regarding magnetic calculations for actinide materials

  12. Selected Tools and Methods from Quality Management Field

    Directory of Open Access Journals (Sweden)

    Kateřina BRODECKÁ

    2009-06-01

    Full Text Available The following paper describes selected tools and methods from the quality management field and their practical application to defined examples. The solved examples were elaborated in the form of an electronic support. This electronic support, elaborated in detail, gives students the opportunity to thoroughly practise specific issues, helps them prepare for exams and consequently leads to an improvement in education. Students of the combined form of study in particular will appreciate this support. The paper specifies the project objectives, the subjects that will be covered by the mentioned support, the target groups, and the structure and manner of elaboration of the electronic exercise book. The emphasis is placed not only on the manual solution of the selected examples, which may help students to understand the principles and relationships, but also on solving the examples and interpreting the results using software support. The statistical software Statgraphics Plus v 5.0 is used throughout the support, because it is free to use for all students of the faculty. An illustrative example from the subject Basic Statistical Methods of Quality Management is also part of this paper.

  13. Edge-Corrected Mean-Field Hubbard Model: Principle and Applications in 2D Materials

    Directory of Open Access Journals (Sweden)

    Xi Zhang

    2017-05-01

    Full Text Available This work reviews the current progress of tight-binding methods and the recent edge-modified mean-field Hubbard model. Undercoordinated atoms (atoms not fully coordinated) exist at a high rate in nanomaterials, and their impact is often overlooked. A quantum theory was proposed to calculate the electronic structure of nanomaterials by incorporating the bond order-length-strength (BOLS) correlation into the mean-field Hubbard model, i.e., BOLS-HM. Consistency between the BOLS-HM calculation and density functional theory (DFT) calculations on 2D materials verified that (i) bond contractions and potential well depression occur at the edge of graphene, phosphorene, and antimonene nanoribbons; (ii) the physical origin of the band gap opening of graphene, phosphorene, and antimonene nanoribbons lies in the enhancement of edge potentials and hopping integrals due to the shorter and stronger bonds between undercoordinated atoms; (iii) the band gap of 2D material nanoribbons expands as the width decreases due to the increasing under-coordination effects of the edges, which modulate the conductive behavior; and (iv) non-bonding electrons at the edges and atomic vacancies of 2D materials, together with the broken bonds, contribute to the Dirac-Fermi polaron (DFP) with a local magnetic moment.
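
    To make the kind of model discussed above concrete, the sketch below runs a generic self-consistent mean-field Hubbard calculation on a short chain whose end bonds are (arbitrarily) given a stronger hopping, loosely mimicking the shorter, stronger edge bonds; it is not the BOLS-HM code, and every parameter is an assumption chosen for illustration.

      import numpy as np

      N, U, t, t_edge = 10, 2.0, 1.0, 1.2          # sites, on-site U, bulk hopping, strengthened edge hopping
      n_up = 0.5 + 0.05 * (-1.0) ** np.arange(N)   # small antiferromagnetic seed
      n_dn = 1.0 - n_up
      N_up = N_dn = N // 2                         # half filling

      H0 = np.zeros((N, N))
      for i in range(N - 1):
          hop = t_edge if i in (0, N - 2) else t   # stronger bonds at the two edges
          H0[i, i + 1] = H0[i + 1, i] = -hop

      for _ in range(500):                         # self-consistency loop: H_sigma = H0 + U*diag(n_-sigma)
          e_up, v_up = np.linalg.eigh(H0 + U * np.diag(n_dn))
          e_dn, v_dn = np.linalg.eigh(H0 + U * np.diag(n_up))
          new_up = (v_up[:, :N_up] ** 2).sum(axis=1)   # occupations from the lowest filled orbitals
          new_dn = (v_dn[:, :N_dn] ** 2).sum(axis=1)
          if max(abs(new_up - n_up).max(), abs(new_dn - n_dn).max()) < 1e-9:
              n_up, n_dn = new_up, new_dn
              break
          n_up, n_dn = 0.5 * (n_up + new_up), 0.5 * (n_dn + new_dn)   # simple linear mixing

      print("site-resolved magnetic moments:", np.round(n_up - n_dn, 3))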

  14. Cost Finding Principles and Procedures. Preliminary Field Review Edition. Technical Report 26.

    Science.gov (United States)

    Ziemer, Gordon; And Others

    This report is part of the Larger Cost Finding Principles Project designed to develop a uniform set of standards, definitions, and alternative procedures that will use accounting and statistical data to find the full cost of resources utilized in the process of producing institutional outputs. This technical report describes preliminary procedures…

  15. PHASE GRADIENT METHOD OF MAGNETIC FIELD MEASUREMENTS IN ELECTRIC VEHICLES

    Directory of Open Access Journals (Sweden)

    N. G. Ptitsyna

    2013-01-01

    Full Text Available The operation of electric and hybrid vehicles demands real-time magnetic field control, for instance for fire and electromagnetic safety. The article deals with a method of magnetic field measurement on board electric cars that takes into account the peculiar features of these fields. The method is based on differential measurements and minimizes the number of magnetic sensors required.

  16. Optimization principles for preparation methods and properties of fine ferrite materials

    Science.gov (United States)

    Borisova, N. M.; Golubenko, Z. V.; Kuz'micheva, T. G.; Ol'khovik, L. P.; Shabatin, V. P.

    1992-08-01

    The paper is devoted to the development of fine materials based on Ba-ferrite, in particular for vertical magnetic recording. Taking the analogue BaFe12-2xCoxTexO19, we have optimized the melt co-precipitation method and shown a new way to ensure the chemical homogeneity of microcrystallites by means of cryotechnology. Magnetic characteristics of an experimental magnetic tape sample for digital video recording are presented. A series of principles for the consistent control of ferrite powder properties are formulated and illustrated with specific developments.

  17. Direct methods of solution for problems in mechanics from invariance principles

    International Nuclear Information System (INIS)

    Rajan, M.

    1986-01-01

    Direct solutions to problems in mechanics are developed from variational statements derived from the principle of invariance of the action integral under a one-parameter family of infinitesimal transformations. Exact, direct solution procedures for linear systems are developed by a careful choice of the arbitrary functions used to generate the infinitesimal transformations. It is demonstrated that the well-known methods for the solution of differential equations can be directly adapted to the required variational statements. Examples in particle and continuum mechanics are presented

  18. Generalized multivalued equilibrium-like problems: auxiliary principle technique and predictor-corrector methods

    Directory of Open Access Journals (Sweden)

    Vahid Dadashi

    2016-02-01

    Full Text Available This paper introduces a new class of equilibrium problems, named generalized multivalued equilibrium-like problems, which includes the classes of hemiequilibrium problems, equilibrium-like problems, equilibrium problems, hemivariational inequalities, and variational inequalities as special cases. By utilizing the auxiliary principle technique, some new predictor-corrector iterative algorithms for solving them are suggested and analyzed. The convergence analysis of the proposed iterative methods requires either partially relaxed monotonicity or joint pseudomonotonicity of the bifunctions involved in the generalized multivalued equilibrium-like problem. The results obtained in this paper include several new and known results as special cases.

  19. Axial Field Electric Motor and Method

    National Research Council Canada - National Science Library

    Cho, Chahee P

    2007-01-01

    .... A hybrid field, brushless, permanent magnet electric motor utilizing a rotor with two sets of permanent magnets oriented such that the flux produced by the two sets of magnets is perpendicular to each...

  20. Magnetic field transfer device and method

    Science.gov (United States)

    Wipf, S.L.

    1990-02-13

    A magnetic field transfer device includes a pair of oppositely wound inner coils which each include at least one winding around an inner coil axis, and an outer coil which includes at least one winding around an outer coil axis. The windings may be formed of superconductors. The axes of the two inner coils are parallel and laterally spaced from each other so that the inner coils are positioned in side-by-side relation. The outer coil is outwardly positioned from the inner coils and rotatable relative to the inner coils about a rotational axis substantially perpendicular to the inner coil axes to generate a hypothetical surface which substantially encloses the inner coils. The outer coil rotates relative to the inner coils between a first position in which the outer coil axis is substantially parallel to the inner coil axes and the outer coil augments the magnetic field formed in one of the inner coils, and a second position 180° from the first position, in which the augmented magnetic field is transferred into the other inner coil and reoriented 180° from the original magnetic field. The magnetic field transfer device allows a magnetic field to be transferred between volumes with negligible work being required to rotate the outer coil with respect to the inner coils. 16 figs.

  1. Physical principles underlying the experimental methods for studying the orientational order of liquid crystals

    International Nuclear Information System (INIS)

    Limmer, S.

    1989-01-01

    The basic physical principles underlying different experimental methods frequently used for the determination of orientational order parameters of liquid crystals are reviewed. The methods that are dealt with here include the anisotropy of the diamagnetic susceptibility, birefringence, linear dichroism, Raman scattering, fluorescence depolarization, electron paramagnetic resonance (EPR), and nuclear magnetic resonance (NMR). The fundamental assertions that can be obtained by the different methods, as well as their advantages, drawbacks and limitations, are inspected. Typical sources of uncertainties and inaccuracies are discussed. To quantitatively evaluate the experimental data with reference to the orientational order, the general tensor formalism developed by Schmiedel was employed throughout, according to which the order matrix comprises 25 real elements. Within this context, the interplay of orientational ordering and molecular conformation is scrutinized. (author)

  2. Calculation of braking radiation (bremsstrahlung) dose fields in heterogeneous media by a method of transformation of the axial distribution

    International Nuclear Information System (INIS)

    Mil'shtejn, R.S.

    1988-01-01

    Analysis of dose fields in a heterogeneous tissue-equivalent medium has shown that the dose distributions have radial symmetry and can be described by a curve of axial distribution with renormalization of the depth of maximum ionization. A method for calculating a dose field in a heterogeneous medium using the principle of radial symmetry is presented.

  3. The four-principle formulation of common morality is at the core of bioethics mediation method.

    Science.gov (United States)

    Ahmadi Nasab Emran, Shahram

    2015-08-01

    Bioethics mediation is increasingly used as a method in clinical ethics cases. My goal in this paper is to examine the implicit theoretical assumptions of the bioethics mediation method developed by Dubler and Liebman. According to them, the distinguishing feature of bioethics mediation is that the method is useful in most cases of clinical ethics in which conflict is the main issue, which implies that either there is no real ethical issue or, if there is, it is not the key to finding a resolution. I question the tacit assumption of the non-normativity of the mediation method in bioethics by examining the various senses in which bioethics mediation might be non-normative or neutral. The major normative assumption of the mediation method is the existence of a common morality. In addition, the four-principle formulation of the theory articulated by Beauchamp and Childress implicitly provides the normative content for the method. Full acknowledgement of the theoretical and normative assumptions of bioethics mediation helps clinical ethicists better understand the nature of their job. In addition, the need for a robust philosophical background, even in what appears to be a purely practical method of mediation, cannot be overemphasized. Acknowledgement of the normative nature of the bioethics mediation method necessitates a more critical attitude of bioethics mediators towards the norms they usually take for granted as valid.

  4. Field theoretical methods in chemical physics

    International Nuclear Information System (INIS)

    Paul, R.

    1982-01-01

    Field theory will become an important tool for the chemist, and this book presents a clear and thorough account of the theory itself and its applications for solving a wide variety of chemical problems. The author has brought together the foundations upon which the many and varied applications of field theory have been built, giving more intermediate steps than is usual in the derivations. This makes the book easily accessible to anyone with a background of calculus, statistical thermodynamics and elementary quantum chemistry. (orig./HK)

  5. The MIND method: A decision support for optimization of industrial energy systems - Principles and case studies

    International Nuclear Information System (INIS)

    Karlsson, Magnus

    2011-01-01

    Changes in complex industrial energy systems require adequate tools to be evaluated satisfactorily. The MIND method (Method for analysis of INDustrial energy systems) is a flexible method constructed as decision support for different types of analyses of industrial energy systems. It is based on Mixed Integer Linear Programming (MILP) and developed at Linkoeping University in Sweden. Several industries, ranging from the food industry to the pulp and paper industry, have hitherto been modelled and analyzed using the MIND method. In this paper the principles regarding the use of the method and the creation of constraints of the modelled system are presented. Two case studies are also included, a dairy and a pulp and paper mill, that focus some measures that can be evaluated using the MIND method, e.g. load shaping, fuel conversion and introduction of energy efficiency measures. The case studies illustrate the use of the method and its strengths and weaknesses. The results from the case studies are related to the main issues stated by the European Commission, such as reduction of greenhouse gas emissions, improvements regarding security of supply and increased use of renewable energy, and show great potential as regards both cost reductions and possible load shifting.
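
    The MIND method itself rests on mixed integer linear programming; as a loose, hedged illustration of what such an optimization looks like on a toy scale, the sketch below uses an ordinary linear program (scipy's linprog, without integer variables) to allocate a plant's hourly heat demand between a fuel boiler and an electric heater at hourly spot prices. None of the numbers, units or variable names come from the MIND method or its case studies.

      import numpy as np
      from scipy.optimize import linprog

      # Hypothetical 4-hour horizon: heat demand met by a fuel boiler or an electric heater.
      demand = np.array([8.0, 10.0, 12.0, 9.0])        # MWh heat per hour
      fuel_cost = 30.0                                 # EUR/MWh, constant
      elec_price = np.array([20.0, 45.0, 60.0, 25.0])  # EUR/MWh, hourly spot price
      cap_boiler, cap_elec = 9.0, 6.0                  # MW

      T = len(demand)
      # Decision vector: [boiler_0..boiler_3, elec_0..elec_3]
      c = np.concatenate([np.full(T, fuel_cost), elec_price])
      A_eq = np.hstack([np.eye(T), np.eye(T)])         # boiler_t + elec_t = demand_t
      bounds = [(0, cap_boiler)] * T + [(0, cap_elec)] * T

      res = linprog(c, A_eq=A_eq, b_eq=demand, bounds=bounds, method="highs")
      print("boiler heat:", res.x[:T], "electric heat:", res.x[T:], "total cost:", res.fun)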

  6. Principles of dynamics

    CERN Document Server

    Hill, Rodney

    2013-01-01

    Principles of Dynamics presents classical dynamics primarily as an exemplar of scientific theory and method. This book is divided into three major parts concerned with gravitational theory of planetary systems; general principles of the foundations of mechanics; and general motion of a rigid body. Some of the specific topics covered are Keplerian Laws of Planetary Motion; gravitational potential and potential energy; and fields of axisymmetric bodies. The principles of work and energy, fictitious body-forces, and inertial mass are also looked into. Other specific topics examined are kinematics

  7. Economic evaluations and Randomized trials in spinal disorders: Principles and methods

    DEFF Research Database (Denmark)

    Korthals-de Bos, I; Van Tulder, M; Van Dieten, H

    2004-01-01

    Study Design. Descriptive methodologic recommendations. Objective. To help researchers designing, conducting, and reporting economic evaluations in the field of back and neck pain. Summary of Background Data. Economic evaluations of both existing and new therapeutic interventions are becoming increasingly important. There is a need to improve the methods of economic evaluations in the field of spinal disorders. Materials and Methods. To improve the methods of economic evaluations in the field of spinal disorders, this article describes the various steps in an economic evaluation, using as example a study on the cost-effectiveness of manual therapy, physiotherapy, and usual care provided by the general practitioner for patients with neck pain. Results. An economic evaluation is a study in which two or more interventions are systematically compared with regard to both costs and effects...
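
    One quantity such comparisons of costs and effects typically boil down to is the incremental cost-effectiveness ratio (ICER); the sketch below computes it for invented cost and QALY figures loosely inspired by the neck pain example, purely as an illustration of the arithmetic rather than any result of the cited study.

      def icer(cost_new, effect_new, cost_ref, effect_ref):
          # Incremental cost-effectiveness ratio: extra cost per extra unit of effect (e.g. per QALY).
          return (cost_new - cost_ref) / (effect_new - effect_ref)

      # Hypothetical comparison of manual therapy versus usual GP care for neck pain:
      print(icer(cost_new=750.0, effect_new=0.82, cost_ref=600.0, effect_ref=0.77))   # 3000.0 per QALY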

  8. Variational method for integrating radial gradient field

    Science.gov (United States)

    Legarda-Saenz, Ricardo; Brito-Loeza, Carlos; Rivera, Mariano; Espinosa-Romero, Arturo

    2014-12-01

    We propose a variational method for integrating information obtained from a circular fringe pattern. The proposed method is a suitable choice for objects with radial symmetry. First, we analyze the information contained in the fringe pattern captured by the experimental setup, and then we formulate the problem of recovering the wavefront using techniques from the calculus of variations. The performance of the method is demonstrated by numerical experiments with both synthetic and real data.
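
    For contrast with the variational formulation, a naive baseline for the same task is direct cumulative integration of the radial derivative; the sketch below reconstructs a synthetic radially symmetric wavefront W(r) = r**2 from its gradient by trapezoidal integration. The test function and grid are assumptions for illustration only.

      import numpy as np

      r = np.linspace(0.0, 1.0, 200)
      dW_dr = 2.0 * r                     # synthetic radial gradient of the test wavefront W(r) = r**2

      # Cumulative trapezoidal integration of dW/dr from the center outwards.
      W = np.concatenate(([0.0], np.cumsum(0.5 * (dW_dr[1:] + dW_dr[:-1]) * np.diff(r))))
      print("max reconstruction error:", np.abs(W - r**2).max())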

  9. Accurate method of the magnetic field measurement of quadrupole magnets

    International Nuclear Information System (INIS)

    Kumada, M.; Sakai, I.; Someya, H.; Sasaki, H.

    1983-01-01

    We present an accurate method for measuring the magnetic field of quadrupole magnets. The way of obtaining information on the field gradient and the effective focusing length is given. A new scheme to obtain information on the skew field components is also proposed. The relative accuracy of the measurement was 1 x 10^-4 or less. (author)
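
    As a small, hedged illustration of the two quantities named above, the sketch below takes an invented on-axis gradient profile G(z) and derives the integrated gradient and an effective length defined as the integral of G divided by the peak gradient; the profile, units and definitions are assumptions for illustration, not the paper's measurement scheme.

      import numpy as np

      # Toy on-axis gradient profile of a quadrupole: flat top with soft fringe-field ends (invented).
      z = np.linspace(-0.5, 0.5, 201)                          # m
      G = 10.0 / (1.0 + np.exp((np.abs(z) - 0.25) / 0.02))     # T/m

      integrated_gradient = np.trapz(G, z)                     # integral of G dz, in T
      L_eff = integrated_gradient / G.max()                    # effective (focusing) length, in m
      print(f"GL = {integrated_gradient:.3f} T, L_eff = {L_eff:.3f} m")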

  10. Field evaporation of ZnO: A first-principles study

    Energy Technology Data Exchange (ETDEWEB)

    Xia, Yu, E-mail: yuxia@dal.ca; Karahka, Markus; Kreuzer, H. J. [Department of Physics and Atmospheric Science, Dalhousie University, Halifax, Nova Scotia B3H 3J5 (Canada)

    2015-07-14

    With recent advances in atom probe tomography of insulators and semiconductors, there is a need to understand high electrostatic field effects in these materials as well as the details of field evaporation. We use density functional theory to study field effects in ZnO clusters calculating the potential energy curves, the local field distribution, the polarizability, and the dielectric constant as a function of field strength. We confirm that, as in MgO, the HOMO-LUMO gap of a ZnO cluster closes at the evaporation field strength signaling field-induced metallization of the insulator. Following the structural changes in the cluster at the evaporation field strength, we can identify the field evaporated species, in particular, we show that the most abundant ion, Zn^2+, is NOT post-ionized but leaves the surface as 2+ largely confirming the experimental observations. Our results also help to explain problems related to stoichiometry in the mass spectra measured in atom probe tomography.

  11. Damped time advance methods for particles and EM fields

    International Nuclear Information System (INIS)

    Friedman, A.; Ambrosiano, J.J.; Boyd, J.K.; Brandon, S.T.; Nielsen, D.E. Jr.; Rambo, P.W.

    1990-01-01

    Recent developments in the application of damped time advance methods to plasma simulations include the synthesis of implicit and explicit "adjustably damped" second-order accurate methods for particle motion and electromagnetic field propagation. This paper discusses these methods.

  12. Coexistence of different vacua in the effective quantum field theory and multiple point principle

    International Nuclear Information System (INIS)

    Volovik, G.E.

    2004-01-01

    According to the multiple point principle, our Universe is on the coexistence curve of two or more phases of the quantum vacuum. The coexistence of different quantum vacua can be regulated by the exchange of global fermionic charges between the vacua. If the coexistence is regulated by the baryonic charge, all the coexisting vacua exhibit baryonic asymmetry. Due to the exchange of baryonic charge between the vacuum and matter, which occurs above the electroweak transition, the baryonic asymmetry of the vacuum induces the baryonic asymmetry of matter in our Standard-Model phase of the quantum vacuum.

  13. Proof-of-principle study of a small animal PET/field-cycled MRI combined system using conventional PMT technology

    International Nuclear Information System (INIS)

    Peng Hao; Handler, William B.; Scholl, Timothy J.; Simpson, P.J.; Chronik, Blaine A.

    2010-01-01

    There are currently several approaches to the development of combined PET/MRI systems, all of which need to address adverse interactions between the two systems. Of particular relevance to the majority of proposed PET/MRI systems is the effect that static and dynamic magnetic fields have on the performance of PET detection systems based on photomultiplier tubes (PMTs). In the work reported in this paper, performance of two conventional PMTs has been systematically investigated and characterized as a function of magnetic field exposure conditions. Detector gain, energy resolution, time resolution, and efficiency were measured for static field exposures between 0 and 6.3 mT. Additionally, the short-term recovery and long-term stability of gain and energy resolution were measured in the presence of repeatedly applied dynamic magnetic fields changing at 4 T/s. It was found that the detectors recovered normal operation within several milliseconds following the end of large pulsed magnetic fields. In addition, the repeated applications of large pulsed magnetic fields did not significantly affect detector stability. Based on these results, we implemented a proof-of-principle PET/field-cycled MRI (FCMRI) system for small animal imaging using commercial PMT-based PET detectors. The first PET images acquired within the PET/FCMRI system are presented. The image quality, in terms of spatial resolution, was compared between standalone PET and the PET/FCMRI system. Finally, the relevance of these results to various aspects of PET/MRI system design is discussed.

  14. A method of orbital analysis for large-scale first-principles simulations

    Energy Technology Data Exchange (ETDEWEB)

    Ohwaki, Tsukuru [Advanced Materials Laboratory, Nissan Research Center, Nissan Motor Co., Ltd., 1 Natsushima-cho, Yokosuka, Kanagawa 237-8523 (Japan); Otani, Minoru [Nanosystem Research Institute, National Institute of Advanced Industrial Science and Technology (AIST), Tsukuba, Ibaraki 305-8568 (Japan); Ozaki, Taisuke [Research Center for Simulation Science (RCSS), Japan Advanced Institute of Science and Technology (JAIST), 1-1 Asahidai, Nomi, Ishikawa 923-1292 (Japan)

    2014-06-28

    An efficient method of calculating the natural bond orbitals (NBOs) based on a truncation of the entire density matrix of a whole system is presented for large-scale density functional theory calculations. The method recovers an orbital picture for O(N) electronic structure methods which directly evaluate the density matrix without using Kohn-Sham orbitals, thus enabling quantitative analysis of chemical reactions in large-scale systems in the language of localized Lewis-type chemical bonds. With the density matrix calculated by either an exact diagonalization or O(N) method, the computational cost is O(1) for the calculation of NBOs associated with a local region where a chemical reaction takes place. As an illustration of the method, we demonstrate how an electronic structure in a local region of interest can be analyzed by NBOs in a large-scale first-principles molecular dynamics simulation for a liquid electrolyte bulk model (propylene carbonate + LiBF4).

  15. Introduction to basic immunological methods : Generalities, Principles, Protocols and Variants of basic protocols

    International Nuclear Information System (INIS)

    Mejri, Naceur

    2013-01-01

    This manuscript is dedicated to students of the biological sciences. It provides the information necessary to perform the practical work most commonly used in immunology. During my doctoral and post-doctoral periods, a panoply of methods was employed across the diverse subjects of my research. The technical means used in my investigations were diverse enough that I could extract a set of techniques covering most of the basic immunological methods. Each chapter of this manuscript contains a fairly complete description of an immunological method. In each topic the basic protocol and its variants are preceded by background information provided in paragraphs concerning the principle and generalities. The emphasis is placed on describing the situations in which each method and its variants were used. These basic immunological methods are useful for students and researchers studying the immune system of humans, mice and other species. The different subjects include not only detailed protocols but also photos and/or schemas used to illustrate the underlying knowledge and practical know-how. I hope that students will find this manual interesting and easy to use, and that it contains the information necessary to acquire skills in immunological practice. (Author)

  16. Calibration and uncertainty in electromagnetic fields measuring methods

    International Nuclear Information System (INIS)

    Anglesio, L.; Crotti, G.; Borsero, M.; Vizio, G.

    1999-01-01

    Calibration and reliability of electromagnetic field measuring methods are assured by the calibration of measuring instruments. In this work, systems for the generation of electromagnetic fields at low and high frequency, calibration standards and accuracy are illustrated.

  17. Mean field methods for cortical network dynamics

    DEFF Research Database (Denmark)

    Hertz, J.; Lerchner, Alexander; Ahmadi, M.

    2004-01-01

    We review the use of mean field theory for describing the dynamics of dense, randomly connected cortical circuits. For a simple network of excitatory and inhibitory leaky integrate-and-fire neurons, we can show how the firing irregularity, as measured by the Fano factor, increases with the strength of the synapses in the network and with the value to which the membrane potential is reset after a spike. Generalizing the model to include conductance-based synapses gives insight into the connection between the firing statistics and the high-conductance state observed experimentally in visual...
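
    The Fano factor mentioned above is simply the variance of the spike count across repeated trials divided by its mean; the sketch below computes it for synthetic Poisson spike counts (for which the expected value is 1), with the rate and trial count chosen arbitrarily for illustration.

      import numpy as np

      rng = np.random.default_rng(0)
      # Spike counts over repeated identical trials; Poisson-like firing gives a Fano factor near 1.
      counts = rng.poisson(lam=5.0, size=1000)

      fano = counts.var(ddof=1) / counts.mean()
      print(f"Fano factor = {fano:.2f}")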

  18. Machine Learning methods in fitting first-principles total energies for substitutionally disordered solid

    Science.gov (United States)

    Gao, Qin; Yao, Sanxi; Widom, Michael

    2015-03-01

    Density functional theory (DFT) provides an accurate, first-principles description of solid structures and total energies. However, it is highly time-consuming to calculate structures with hundreds of atoms in the unit cell and practically impossible to treat thousands of atoms. We apply and adapt machine learning algorithms, including compressive sensing, support vector regression and artificial neural networks, to fit the DFT total energies of substitutionally disordered boron carbide. A nonparametric kernel method is also included in our models. Our fitted total energy model reproduces the DFT energies with a prediction error of around 1 meV/atom. The assumptions of these machine learning models and applications of the fitted total energies will also be discussed. Financial support from the McWilliams Fellowship and the ONR-MURI under Grant No. N00014-11-1-0678 is gratefully acknowledged.
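
    A hedged sketch of one of the fitting approaches mentioned above (a kernel method) on stand-in data: kernel ridge regression with an RBF kernel maps binary occupation-like features to a synthetic "total energy". The features, target and hyperparameters are invented and bear no relation to the boron carbide dataset.

      import numpy as np
      from sklearn.kernel_ridge import KernelRidge
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(1)
      # Stand-in data: 500 configurations, 20 occupation-like features, synthetic energies in eV.
      X = rng.integers(0, 2, size=(500, 20)).astype(float)
      y = X @ rng.normal(size=20) + 0.005 * rng.normal(size=500)

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
      model = KernelRidge(kernel="rbf", alpha=1e-3, gamma=0.05).fit(X_tr, y_tr)

      mae = np.abs(model.predict(X_te) - y_te).mean()
      print(f"mean absolute prediction error: {mae * 1000:.1f} meV per configuration")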

  19. Study on the intrinsic defects in tin oxide with first-principles method

    Science.gov (United States)

    Sun, Yu; Liu, Tingyu; Chang, Qiuxiang; Ma, Changmin

    2018-04-01

    First-principles and thermodynamic methods are used to study the contribution of vibrational entropy to the defect formation energy and the stability of the intrinsic point defects in the SnO2 crystal. According to the thermodynamic calculations, the contribution of vibrational entropy to the defect formation energy is significant and should not be neglected, especially at high temperatures. The calculated results indicate that the oxygen vacancy is the major point defect in undoped SnO2, with a higher concentration than the other point defects. Negative-U behavior in the SnO2 crystal is put forward. In order to determine the most stable defects more clearly under different conditions, the most stable intrinsic defect is described as a function of Fermi level, oxygen partial pressure and temperature in three-dimensional defect formation enthalpy diagrams. The diagrams visually show the most stable point defects under different conditions.
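
    To make the temperature dependence discussed above concrete, the sketch below evaluates an illustrative formation free energy for a doubly charged oxygen vacancy, G_f(T) = E_def - E_perfect + mu_O + q*(E_VBM + E_F) - T*dS_vib, with the valence band maximum taken as the energy zero; every number is an invented placeholder, not a result for SnO2.

      k_B = 8.617e-5     # Boltzmann constant, eV/K

      def formation_free_energy(E_def, E_perf, mu_O, q, E_VBM, E_F, dS_vib, T):
          # Formation free energy of an oxygen vacancy with charge q, including a vibrational entropy term.
          return E_def - E_perf + mu_O + q * (E_VBM + E_F) - T * dS_vib

      for T in (300.0, 900.0, 1500.0):
          G = formation_free_energy(E_def=-496.5, E_perf=-500.0, mu_O=-4.5,
                                    q=2, E_VBM=0.0, E_F=1.0, dS_vib=3.0 * k_B, T=T)
          print(f"T = {T:6.0f} K: G_f = {G:.2f} eV")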

  20. Emanation thermal analysis. Principle of the method, preparation of samples and apparatus

    International Nuclear Information System (INIS)

    Balek, V.; Pentinghaus, H.J.

    1993-12-01

    Principles of the title method are outlined and the sample preparation procedures and instrumental designs are described. The publication is divided into chapters as follows: (I) Introduction; (II) Sample labelling: (II.1) Introducing parent nuclides as a source of inert gas in solid; Distribution of inert gas in the sample; (II.2) Introducing inert gases without parent nuclides (using the recoil effect of nuclear reactions and using ion bombardment); (II.3) Choice of the suitable labelling technique; (III) Equipment for emanation thermal analysis: (III.1) Inert gas detection and measurement of inert gas release rate; (III.2) System of carrier gas flow and stabilization; (IV) Determination of the optimal conditions for radon release rate measurement; (V) Example of ETA measurement. (P.A.). 1 tab., 10 figs. 5 refs

  1. Pseudo-potential method for taking into account the Pauli principle in cluster systems

    International Nuclear Information System (INIS)

    Krasnopol'skii, V.M.; Kukulin, V.I.

    1975-01-01

    In order to take account of the Pauli principle in cluster systems (such as 3α, α + α + n) a convenient method of renormalization of the cluster-cluster deep attractive potentials with forbidden states is suggested. The renormalization consists of adding projectors upon the occupied states with an infinite coupling constant to the initial deep potential which means that we pass to pseudo-potentials. The pseudo-potential approach in projecting upon the noneigenstates is shown to be equivalent to the orthogonality condition model of Saito et al. The orthogonality of the many-particle wave function to the forbidden states of each two-cluster sub-system is clearly demonstrated
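
    A toy matrix-model sketch of the renormalization described above: adding a projector onto a "forbidden" state with a large coupling constant pushes the ground state of the pseudo-potential Hamiltonian into the subspace orthogonal to that state. The 8x8 random Hamiltonian is an illustrative stand-in, not a cluster-cluster potential.

      import numpy as np

      rng = np.random.default_rng(2)
      A = rng.normal(size=(8, 8))
      H = 0.5 * (A + A.T)                                   # symmetric stand-in for the deep attractive Hamiltonian

      phi0 = np.linalg.eigh(H)[1][:, 0]                     # lowest ("Pauli-forbidden") state
      for lam in (0.0, 10.0, 1.0e4):
          H_pseudo = H + lam * np.outer(phi0, phi0)         # pseudo-potential: projector added with coupling lam
          ground_state = np.linalg.eigh(H_pseudo)[1][:, 0]
          print(f"lambda = {lam:8.1f}: |<phi0|ground state>| = {abs(phi0 @ ground_state):.2e}")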

  2. Recent advances of mid-infrared compact, field deployable sensors: principles and applications

    Science.gov (United States)

    Tittel, Frank; Gluszek, Aleksander; Hudzikowski, Arkadiusz; Dong, Lei; Li, Chunguang; Patimisco, Pietro; Sampaolo, Angelo; Spagnolo, Vincenzo; Wojtas, Jacek

    2016-04-01

    The recent development of compact interband cascade laser (ICL) and quantum cascade laser (QCL) based trace gas sensors permits the targeting of strong fundamental rotational-vibrational transitions in the mid-infrared, which are one to two orders of magnitude more intense than transitions in the overtone and combination bands in the near-infrared. This has led to the design and fabrication of mid-infrared compact, field deployable sensors for use in the petrochemical industry, environmental monitoring and atmospheric chemistry. Specifically, the spectroscopic detection and monitoring of four molecular species, methane (CH4) [1], ethane (C2H6), formaldehyde (H2CO) [2] and hydrogen sulphide (H2S) [3], will be described. CH4, C2H6 and H2CO can be detected using two detection techniques: mid-infrared tunable laser absorption spectroscopy (TDLAS) using a compact multi-pass gas cell, and quartz enhanced photoacoustic spectroscopy (QEPAS). Both techniques utilize state-of-the-art mid-IR, continuous wave (CW), distributed feedback (DFB) ICLs and QCLs. TDLAS was performed with an ultra-compact, innovative spherical multipass gas cell with a 54.6 m effective optical path length, capable of 435 passes between two concave mirrors separated by 12.5 cm. QEPAS used a small robust absorption detection module (ADM) which consists of a quartz tuning fork (QTF), two optical windows, gas inlet/outlet ports and a low-noise pre-amplifier. Wavelength modulation and second harmonic detection were employed for spectral data processing. TDLAS and QEPAS can achieve minimum detectable absorption losses in the range from 10^-8 to 10^-11 cm^-1/Hz^1/2. Several recent examples of real-world applications of field deployable gas sensors will be described. For example, an ICL based TDLAS sensor system is capable of detecting CH4 and C2H6 concentration levels of 1 ppb in a 1 sec sampling time, using an ultra-compact, robust sensor architecture. H2S detection was realized with a THz QEPAS sensor

  3. Methods to assess bioavailability of hydrophobic organic contaminants: Principles, operations, and limitations

    International Nuclear Information System (INIS)

    Cui Xinyi; Mayer, Philipp; Gan, Jay

    2013-01-01

    Many important environmental contaminants are hydrophobic organic contaminants (HOCs), which include PCBs, PAHs, PBDEs, DDT and other chlorinated insecticides, among others. Owing to their strong hydrophobicity, HOCs have their final destination in soil or sediment, where their ecotoxicological effects are closely regulated by sorption and thus bioavailability. The last two decades have seen a dramatic increase in research efforts in developing and applying partitioning based methods and biomimetic extractions for measuring HOC bioavailability. However, the many variations of both analytical methods and associated measurement endpoints are often a source of confusion for users. In this review, we distinguish the most commonly used analytical approaches based on their measurement objectives, and illustrate their practical operational steps, strengths and limitations using simple flowcharts. This review may serve as guidance for new users on the selection and use of established methods, and a reference for experienced investigators to identify potential topics for further research. - This review summarizes the principles and operations of bioavailability prediction methods, discusses their strengths and limitations, and highlights issues for future research.

  4. Optimization of capillary zone electrophoresis for charge heterogeneity testing of biopharmaceuticals using enhanced method development principles.

    Science.gov (United States)

    Moritz, Bernd; Locatelli, Valentina; Niess, Michele; Bathke, Andrea; Kiessig, Steffen; Entler, Barbara; Finkler, Christof; Wegele, Harald; Stracke, Jan

    2017-12-01

    CZE is a well-established technique for charge heterogeneity testing of biopharmaceuticals. It is based on differences in the ratio of net charge to hydrodynamic radius. In an extensive intercompany study, it was recently shown that CZE is very robust and can be easily implemented in labs that did not perform it before. However, individual characteristics of some examined proteins resulted in suboptimal resolution. Therefore, enhanced method development principles were applied here to investigate possibilities for further method optimization. For this purpose, a large number of different method parameters was evaluated with the aim of improving CZE separation. For the relevant parameters, design of experiments (DoE) models were generated and optimized in several ways for different sets of responses such as resolution, peak width and number of peaks. In spite of product-specific DoE optimization, it was found that the resulting combination of optimized parameters produced a significant improvement of separation for 13 out of 16 different antibodies and other molecule formats. These results clearly demonstrate the generic applicability of the optimized CZE method. Adaptation to individual molecular properties may sometimes still be required in order to achieve optimal separation, but the adjustment parameters discussed in this study [mainly pH, the identity of the polymer additive (HPC versus HPMC) and the concentrations of additives like acetonitrile, butanolamine and TETA] are expected to significantly reduce the effort for specific optimization. 2017 The Authors. Electrophoresis published by Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  5. ESSENCE AND PRINCIPLES OF THE PRAXEOLOGICAL APPROACH FOR METHODICAL PREPARATION OF A FUTURE TEACHER OF TECHNOLOGY

    Directory of Open Access Journals (Sweden)

    Андрій Малихін

    2014-10-01

    Full Text Available The article deals with the problem of using the ideas of praxeology in the methodical preparation of future teachers of technology. The author explores the theoretical and practical aspects of the praxeological approach to the training of future teachers in the educational branch «Technology», and describes the systemic, activity-related, competence-based, personality-directed, technological and thesaurus functions of the praxeological approach, which ensure the effective interaction of the objective and subjective factors that contribute to students' successful mastery of methodical knowledge and skills. The article also reveals the essence of the principles of praxeology: the diagnosticity of objectives and learning outcomes; stimulating and motivating a positive attitude of students to learning, with a focus on their needs and interests; the choice of effective methods, means and forms of activity; the interrelation of the stages of training; the importance of learning results; and support for creating conditions for the individual self-regulation of the cognitive activity of the subjects of training. Compliance with these principles makes it possible to effectively realize the praxeological approach to the methodical training of future teachers of technology.

  6. A radiation-electric-field combination principle for SO2-oxidation in Ar-mixtures

    International Nuclear Information System (INIS)

    Leonhardt, J.; Krueger, H.; Popp, P.; Boes, J.

    1981-01-01

    A simple model for radiation-induced SO2 oxidation in Ar using SO2/O2/Ar mixtures has been described by Leonhardt et al. It is possible to improve the efficiency of the radiation-induced SO2 oxidation in such mixtures if the electrons produced by the ionizing radiation are accelerated by means of an electric field. The energy of the field-accelerated electrons must be high enough to form reactive SO2 radicals but not high enough to ionize the gas mixture. Such an arrangement is described. The relation between the rate of SO3 formation and the electric field, and the relation between SO3 formation and the decrease of the O2 concentration in the reaction chamber, were determined experimentally. Furthermore, the G-values attained by means of the radiation-electric-field combination are discussed. (author)

  7. Efficient Training Methods for Conditional Random Fields

    Science.gov (United States)

    2008-02-01


  8. Implementation of the Principles of Tpm in Field of Maintenance Preparations

    OpenAIRE

    Vladimíra Schindlerová; Ivana Šajdlerová; Pavel Zmeškal

    2016-01-01

    Total Productive Maintenance (TPM) is one of the ways to ensure efficient production processes. TPM is primarily associated with the management of the maintenance of production equipment. This article deals with the possible implementation of total productive maintenance in another field of maintenance of working means, namely the maintenance of preparations. Experience from practice shows that TPM approaches may be suitable for maintenance management in this field. In this article are stated t...

  9. Methods for production of aluminium powders and their application fields

    Energy Technology Data Exchange (ETDEWEB)

    Gopienko, V.G.; Kiselev, V.P.; Zobnina, N.S. (Vsesoyuznyj Nauchno-Issledovatel' skij i Proektnyj Inst. Alyuminievoj, magnievoj i ehlektrodnoj promyshlennosti (USSR))

    1984-12-01

    Different types of powder products made of aluminium and its alloys (powders, fine powders, granules and pastes), as well as their basic physicochemical properties, are briefly characterized. The principal methods of aluminium powder production are outlined: physicochemical methods, among which melt spraying by compressed gas is the most developed, and physico-mechanical ones. The main fields of application of powder products of aluminium and its alloys are briefly reported.

  10. Methods for production of aluminium powders and their application fields

    International Nuclear Information System (INIS)

    Gopienko, V.G.; Kiselev, V.P.; Zobnina, N.S.

    1984-01-01

    Different types of powder products made of aluminium and its alloys (powders, fine powders, granules and pastes) as well as their basic physicochemical properties are briefly characterized. The principal methods of aluminium powder production are outlined: physicochemical methods, among which melt spraying by compressed gas is the most developed, and physico-mechanical ones. The main application spheres for powder products of aluminium and its alloys are reported in short.

  11. Developing "Personality" Taxonomies: Metatheoretical and Methodological Rationales Underlying Selection Approaches, Methods of Data Generation and Reduction Principles.

    Science.gov (United States)

    Uher, Jana

    2015-12-01

    Taxonomic "personality" models are widely used in research and applied fields. This article applies the Transdisciplinary Philosophy-of-Science Paradigm for Research on Individuals (TPS-Paradigm) to scrutinise the three methodological steps that are required for developing comprehensive "personality" taxonomies: 1) the approaches used to select the phenomena and events to be studied, 2) the methods used to generate data about the selected phenomena and events and 3) the reduction principles used to extract the "most important" individual-specific variations for constructing "personality" taxonomies. Analyses of some currently popular taxonomies reveal frequent mismatches between the researchers' explicit and implicit metatheories about "personality" and the abilities of previous methodologies to capture the particular kinds of phenomena toward which they are targeted. Serious deficiencies that preclude scientific quantifications are identified in standardised questionnaires, psychology's established standard method of investigation. These mismatches and deficiencies derive from the lack of an explicit formulation and critical reflection on the philosophical and metatheoretical assumptions being made by scientists and from the established practice of radically matching the methodological tools to researchers' preconceived ideas and to pre-existing statistical theories rather than to the particular phenomena and individuals under study. These findings raise serious doubts about the ability of previous taxonomies to appropriately and comprehensively reflect the phenomena towards which they are targeted and the structures of individual-specificity occurring in them. The article elaborates and illustrates with empirical examples methodological principles that allow researchers to appropriately meet the metatheoretical requirements and that are suitable for comprehensively exploring individuals' "personality".

  12. Geometric derivation of string field theory from first principles: Closed strings and modular invariance

    International Nuclear Information System (INIS)

    Kaku, M.

    1988-01-01

    We present an entirely new approach to closed-string field theory, called "geometric string field theory", which avoids the complications found in Becchi-Rouet-Stora-Tyutin string field theory (e.g., ghost counting, infinite overcounting of diagrams, midpoints, lack of modular invariance). Following the analogy with general relativity and Yang-Mills theory, we define a new infinite-dimensional local gauge group, called the unified string group, which uniquely specifies the connection fields, the curvature tensor, the measure and tensor calculus, and finally the action itself. Geometric field theory, when gauge fixed, yields an entirely new class of gauges called the interpolating gauge which allows us to smoothly interpolate between the midpoint gauge and the end-point gauge ("covariantized light-cone gauge"). We can show that geometric string field theory reproduces one copy of the Shapiro-Virasoro model. Surprisingly, after the gauge is broken, a new "closed four-string interaction" emerges as the counterpart of the instantaneous four-fermion Coulomb term in QED. This term restores modular invariance and precisely fills the missing region of the complex plane.

  13. APPLYING THE PRINCIPLES OF ACCOUNTING IN

    OpenAIRE

    NAGY CRISTINA MIHAELA; SABĂU CRĂCIUN; ”Tibiscus” University of Timişoara, Faculty of Economic Science

    2015-01-01

    The application of accounting principles (the accrual-basis accounting principle; the principle of business continuity; the method consistency principle; the prudence principle; the independence principle; the principle of separate valuation of assets and liabilities; the intangibility principle; the non-compensation principle; the principle of substance over form; the principle of threshold significance) to companies that are in bankruptcy proceedings has a number of particularities. Thus, some principl...

  14. Pairing interaction method in crystal field theory

    International Nuclear Information System (INIS)

    Dushin, R.B.

    1989-01-01

    Expressions are given that permit the matrix elements of the secular equation for metal-ligand pairs to be described via the parameters of the pairing-interaction method, genealogical coefficients and Clebsch-Gordan coefficients. The expressions are applicable to any level or term of f^n and d^n configurations; matrix elements for the terms of maximum multiplicity of the f^n and d^n configurations, as well as for the main levels of f^n configurations, are tabulated.

  15. Review of design principles for ITER VV remote inspection in magnetic field

    International Nuclear Information System (INIS)

    Izard, Jean-Baptiste; Perrot, Yann; Friconneau, Jean-Pierre

    2009-01-01

    Because the ITER magnet system has a limited number of mechanical and thermal stress cycles, the number of toroidal-field shut-downs is limited during the lifetime of ITER. Any inspection device able to withstand the toroidal field between two plasma shots will enhance the inspection frequency capacity of ITER during the operation phase. In addition to the high magnetic field, the system should also cope with high temperature, ultra-high vacuum and high radiation, in order to keep the reactor availability high. The radiation, ultra-high vacuum and temperature constraints are already addressed by ongoing R and D activities within Europe, considering that the required level of radiation is to date the highest encountered in remote handling and that facing all these constraints at once is an additional issue to overcome. Operating remote handling systems in a high magnetic field, by contrast, is quite a new field of investigation. This paper aims to be a guideline for future designers to help them choose the adequate solution for an ITER-relevant inspection device among the available options. It provides the designer with an objective view of the different effects that stem from technical choices and helps them decide whether a technology is relevant or not depending on the task's requirements. We have selected a set of technologies and products available for structural design, actuation, sensing and data transmission in order to design inspection remote handling equipment for ITER within the given constraints. These different solutions are commented on with specific considerations and directions to make them fit the specifications. Different design strategies to cope with the magnetic field are then discussed, which imply either insensitive design or using the magnetic field as a potential energy source and as a positioning aid. This analysis is the first result of one of the projects in the PREFIT partnership, part of the European Fusion Training Scheme.

  16. Comparison of different dose calculation methods for irregular photon fields

    International Nuclear Information System (INIS)

    Zakaria, G.A.; Schuette, W.

    2000-01-01

    In this work, four calculation methods (the Wrede method, the Clarkson method of sector integration, the beam-zone method of Quast and the pencil-beam method of Ahnesjoe) are introduced to calculate point doses in different irregular photon fields. The calculations cover a typical mantle field, an inverted Y-field and different blocked fields for 4 and 10 MV photon energies. The results are compared to those of measurements in a water phantom. The Clarkson and the pencil-beam method proved to be of equal standard with regard to accuracy. Both of these methods are distinguished by minimal deviations and are applied in our routine clinical work. The Wrede and beam-zone methods deliver useful results on the central axis, yet show larger deviations when calculating points off the central axis. (orig.)
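
    The Clarkson sector-integration method named above estimates the scatter dose at a point by dividing the irregular field into angular sectors centred on the calculation point and averaging tabulated scatter data over the sector radii. The following sketch illustrates that averaging step only, under stated assumptions: the field outline, the scatter-air-ratio curve `toy_sar` and the sector count are hypothetical placeholders, not data from the paper.

```python
import numpy as np
from matplotlib.path import Path

def sector_radius(outline, point, angle, dr=0.05, r_max=100.0):
    """Distance (cm) from `point` to the edge of the irregular field along `angle`.
    `outline` is an (N, 2) array of polygon vertices describing the field shape."""
    path = Path(outline)
    direction = np.array([np.cos(angle), np.sin(angle)])
    r = 0.0
    while r < r_max and path.contains_point(point + (r + dr) * direction):
        r += dr
    return r

def clarkson_sar(outline, point, sar_of_r, n_sectors=36):
    """Scatter-air ratio at `point` by Clarkson sector integration:
    the SAR of a circular field of radius r_i is averaged over all sectors."""
    angles = np.linspace(0.0, 2.0 * np.pi, n_sectors, endpoint=False)
    radii = [sector_radius(outline, np.asarray(point, float), a) for a in angles]
    return float(np.mean([sar_of_r(r) for r in radii]))

# Purely illustrative field outline (cm) and SAR(r) curve -- not data from the paper.
field_outline = np.array([[-10, -15], [10, -15], [10, 5], [4, 5],
                          [4, 15], [-4, 15], [-4, 5], [-10, 5]], float)
toy_sar = lambda r: 0.45 * (1.0 - np.exp(-r / 7.0))
print(f"average SAR at the field centre: {clarkson_sar(field_outline, (0.0, 0.0), toy_sar):.3f}")
```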

  17. Stream temperature investigations: field and analytic methods

    Science.gov (United States)

    Bartholow, J.M.

    1989-01-01

    This document provides guidance to the user of the U.S. Fish and Wildlife Service’s Stream Network Temperature Model (SNTEMP). Planning a temperature study is discussed in terms of understanding the management objectives and ensuring that the questions will be accurately answered with the modeling approach being used. A sensitivity analysis of SNTEMP is presented to illustrate which input variables are most important in predicting stream temperatures. This information helps prioritize data collection activities, highlights the need for quality control, focuses on which parameters can be estimated rather than measured, and offers a broader perspective on management options in terms of knowing where the biggest temperature response will be felt. All of the major input variables for stream geometry, meteorology, and hydrology are discussed in detail. Each variable is defined, with guidance given on how to measure it, what kind of equipment to use, where to obtain it from another agency, and how to calculate it if the data are in a form other than that required by SNTEMP. Examples are presented for the various forms in which water temperature, discharge, and meteorological data are commonly found. Ranges of values for certain input variables that are difficult to measure or estimate are given. Particular attention is given to those variables not commonly understood by field biologists likely to be involved in a stream temperature study. Pertinent literature is cited for each variable, with emphasis on how other people have treated particular problems and on results they have found.

  18. General principles of control method of passenger car bodies bending vibration parameters

    Science.gov (United States)

    Skachkov, A. N.; Samoshkin, S. L.; Korshunov, S. D.; Kobishchanov, V. V.; Antipin, D. Ya

    2018-03-01

    Weight reduction of passenger cars is a promising direction for reducing the cost of their production and increasing transportation profitability. One way to reduce the weight of passenger cars is lightweight metal body design by means of high-strength aluminum alloys, low-alloy and stainless steels. However, it has been found that the limit of lightweight metal body design is determined not by the total mode of deformation but by the body's flexural rigidity, as the latter influences the natural frequencies of body bending vibrations. With the introduction of mandatory certification for compliance with the Customs Union technical regulations, the following index was confirmed: “first natural frequency of body bending vibrations in the vertical plane”. This is due to the fact that vibration, noise and car motion depend on this index. To define the required indexes, the principles of a control method for the bending vibration parameters of passenger car bodies are proposed in this paper. This method covers all stages of car design: development of design documentation, manufacturing and testing of experimental and pilot models, and launching the production. The authors also developed evaluation criteria and the procedure for using the results to introduce the control method of bending vibration parameters of passenger car bodies.
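
    The certification index referred to above is the first natural frequency of body bending vibration, which is governed by the body's flexural rigidity and mass distribution. As a rough illustration of that dependence (not the authors' method), the sketch below evaluates the classical free-free Euler-Bernoulli beam formula with purely hypothetical values.

```python
import math

def first_bending_frequency(E, I, mass_per_length, length):
    """First free-free bending natural frequency (Hz) of a uniform Euler-Bernoulli beam.
    E: Young's modulus [Pa], I: area moment of inertia [m^4],
    mass_per_length: [kg/m], length: [m]."""
    beta1_L = 4.730  # first root of cos(x)*cosh(x) = 1 (free-free boundary conditions)
    return (beta1_L ** 2 / (2.0 * math.pi * length ** 2)) * math.sqrt(E * I / mass_per_length)

# Hypothetical car-body-like numbers: the body treated as a uniform beam.
E = 2.1e11      # steel, Pa
I = 5.0e-3      # m^4, hypothetical equivalent bending section
mu = 1500.0     # kg/m, hypothetical mass per unit length
L = 25.0        # m, coach length
print(f"f1 = {first_bending_frequency(E, I, mu, L):.1f} Hz")
```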

  19. Analyzed method for calculating the distribution of electrostatic field

    International Nuclear Information System (INIS)

    Lai, W.

    1981-01-01

    An analyzed method for calculating the distribution of electrostatic field under any given axial gradient in tandem accelerators is described. This method possesses satisfactory accuracy compared with the results of numerical calculation

  20. On the principle of gauge invariance in the field theory with curved momentum space

    International Nuclear Information System (INIS)

    Mir-Kasimov, R.M.

    1990-11-01

    The gauge transformations consistent with the hypothesis of a curved momentum space are considered. In this case the components of the electromagnetic field do not commute. The finite-difference analogue of the d'Alembert equation is derived. (author). 5 refs

  1. Conductive plastic film electrodes for Pulsed Electric Field (PEF) treatment : A proof of principle

    NARCIS (Netherlands)

    Roodenburg, B.; Haan, S.W.H. de; Boxtel, L.B.J. van; Hatt, V.; Wouters, P.C.; Coronel, P.; Ferreira, J.A.

    2010-01-01

    Nowadays, Pulsed Electric Field (PEF) treatment of food needs to be performed prior to packaging; either hygienic or aseptic packaging is then necessary. New techniques for PEF treatment after packaging can be considered when plastic conductive (film) electrodes can be integrated within the package, so

  2. Effects of an additional small group discussion to cognitive achievement and retention in basic principles of bioethics teaching methods

    Directory of Open Access Journals (Sweden)

    Dedi Afandi

    2009-03-01

    Full Text Available Aim: The place of ethics in undergraduate medical curricula is essential, but the methods of teaching medical ethics have not shown substantial changes. “Basic principles of bioethics” is the best knowledge for developing students’ reasoning analysis in medical ethics. In this study, we investigate the effects of adding a small group discussion to the conventional lecture method for teaching basic principles of bioethics on cognitive achievement and retention. This study was a randomized controlled trial with parallel design. Cognitive scores of the basic principles of bioethics were measured using the basic principles of bioethics (Kaidah Dasar Bioetika, KDB) test. Both groups attended conventional lectures; the intervention group then received an additional small group discussion. Results: Conventional lectures with or without small group discussion significantly increased cognitive achievement of basic principles of bioethics (P = 0.001 and P = 0.000, respectively), and there were significant differences in cognitive achievement and retention between the two groups (P = 0.000 and P = 0.000, respectively). Conclusion: The additional small group discussion method improved cognitive achievement and retention of basic principles of bioethics. (Med J Indones 2009; 18: 48-52) Keywords: lecture, specification checklist, multiple choice questions

  3. Methods and Principles of Determining the Footwear and Floor Tribological Characteristics

    OpenAIRE

    D. Stamenković; M. Banić; M. Nikolić; M. Mijajlović; M. Milošević

    2017-01-01

    There are many standards relating to the anti-slip properties of footwear and flooring. These standards describe the different test methods and procedures for determining footwear and floor slip resistance in different conditions. In this paper the authors systematize the standards in this field applied in the EU and in Serbia and cite the Serbian institutes which are certified for this type of testing. In addition, the authors have carried out an analysis and comparison of the tests that are...

  4. Field radiometric methods of prospecting and exploration for uranium ores

    International Nuclear Information System (INIS)

    Gorbushina, L.V.; Savenko, E.I.; Serdyukova, A.S.

    1978-01-01

    The textbook includes two main chapters which describe gamma and emanation field radiometric methods. The textbook is intended for geology and geophysics students undergoing practical training in field radiometric methods and is supplementary to the course of lectures. The textbook can also be used in the 'Radiometry' course taught in the relevant geological and technical colleges.

  5. LOMEGA: a low frequency, field implicit method for plasma simulation

    International Nuclear Information System (INIS)

    Barnes, D.C.; Kamimura, T.

    1982-04-01

    Field implicit methods for low frequency plasma simulation by the LOMEGA (Low OMEGA) codes are described. These implicit field methods may be combined with particle pushing algorithms using either Lorentz force or guiding center force models to study two-dimensional, magnetized, electrostatic plasmas. Numerical results for ω_e Δt >> 1 are described. (author)

  6. Fair and efficient tariffs for wind energy : principles, method, proposal, data and potential consequences in France

    International Nuclear Information System (INIS)

    Chabot, B.

    2001-01-01

    In 2000, the government of France announced a national energy plan that included the installation of 5,000 to 10,000 MW of wind power by 2010. It also announced a new system based on fixed tariffs that would replace the EOLE 2005 calls for tenders for projects under 12 MW. This paper describes the principles and methods used to develop this fair and efficient tariff system for wind energy in France. The Agence de l'Environnement et de la Maitrise de l'Energie (ADEME) uses the Profitability Index Method to help define a wind energy tariff system for wind power plants under 12 MW. This paper presents some figures for the related over-cost incurred with the new tariff system, which makes it possible for energy developers in France to develop the huge wind potential at a pace equal to that of other countries with fixed premium prices. The over-cost of the new tariff system is not too high, and it could be spread equally over all consumers of electricity. The tariff system will help France comply with its national, European and international commitments regarding climate change and with the future European directive on electricity generated from renewable energy sources. 8 refs., 1 tab., 4 figs

  7. Transferable Force Field for Metal–Organic Frameworks from First-Principles: BTW-FF

    Science.gov (United States)

    2014-01-01

    We present an ab-initio derived force field to describe the structural and mechanical properties of metal–organic frameworks (or coordination polymers). The aim is a transferable interatomic potential that can be applied to MOFs regardless of metal or ligand identity. The initial parametrization set includes MOF-5, IRMOF-10, IRMOF-14, UiO-66, UiO-67, and HKUST-1. The force field describes the periodic crystal and considers effective atomic charges based on topological analysis of the Bloch states of the extended materials. Transferable potentials were developed for the four organic ligands comprising the test set and for the associated Cu, Zn, and Zr metal nodes. The predicted materials properties, including bulk moduli and vibrational frequencies, are in agreement with explicit density functional theory calculations. The modal heat capacity and lattice thermal expansion are also predicted. PMID:25574157

  8. The Background-Field Method and Noninvariant Renormalization

    International Nuclear Information System (INIS)

    Avdeev, L.V.; Kazakov, D.I.; Kalmykov, M.Yu.

    1994-01-01

    We investigate the consistency of the background-field formalism when applying various regularizations and renormalization schemes. Using the example of a two-dimensional σ model, it is demonstrated that the background-field method gives incorrect results when the regularization (and/or renormalization) is noninvariant. In particular, it is found that the cut-off regularization and the differential renormalization belong to this class and are incompatible with the background-field method in theories with nonlinear symmetries. 17 refs

  9. Gene probes : principles and protocols [Methods in molecular biology, v. 179

    National Research Council Canada - National Science Library

    Rapley, Ralph; Aquino de Muro, Marilena

    2002-01-01

    ... of labeled DNA has allowed genes to be mapped to single chromosomes and in many cases to a single chromosome band, promoting significant advance in human genome mapping. Gene Probes: Principles and Protocols presents the principles for gene probe design, labeling, detection, target format, and hybridization conditions together with detailed protocols, accom...

  10. Modeling methods of MEMS micro-speaker with electrostatic working principle

    Science.gov (United States)

    Tumpold, D.; Kaltenbacher, M.; Glacer, C.; Nawaz, M.; Dehé, A.

    2013-05-01

    The market for mobile devices like tablets, laptops or mobile phones is increasing rapidly. Device housings get thinner and energy efficiency is more and more important. Micro-Electro-Mechanical-System (MEMS) loudspeakers, fabricated in complementary metal oxide semiconductor (CMOS) compatible technology, merge energy-efficient driving technology with cost-economical fabrication processes. In most cases, the fabrication of such devices within the design process is a lengthy and costly task. Therefore, the need for computer modeling tools capable of precisely simulating the multi-field interactions is increasing. The accurate modeling of such MEMS devices results in a system of coupled partial differential equations (PDEs) describing the interaction between the electric, mechanical and acoustic fields. For the efficient and accurate solution we apply the Finite Element (FE) method. Thereby, we fully take the nonlinear effects into account: electrostatic force, charged moving body (loaded membrane) in an electric field, geometric nonlinearities and mechanical contact during snap-in between the loaded membrane and the stator. To efficiently handle the coupling between the mechanical and acoustic fields, we apply Mortar FE techniques, which allow different grid sizes along the coupling interface. Furthermore, we present a recently developed PML (Perfectly Matched Layer) technique, which allows limiting the acoustic computational domain even in the near field without getting spurious reflections. For computations towards the acoustic far field we use a Kirchhoff-Helmholtz integral (e.g., to compute the directivity pattern). We will present simulations of a MEMS speaker system based on a single-sided driving mechanism as well as an outlook on MEMS speakers using double stator systems (pull-pull systems), and discuss their efficiency (SPL) and quality (THD) towards the generated acoustic sound.
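
    The abstract highlights the nonlinear electrostatic force and the snap-in between membrane and stator. As a minimal lumped-parameter illustration of those two effects (not the authors' coupled FE model), the sketch below balances the parallel-plate electrostatic force against a linear spring and evaluates the classical pull-in voltage; all parameter values are hypothetical.

```python
import math

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def electrostatic_force(V, gap, x, area):
    """Attractive parallel-plate force on a membrane displaced by x towards the stator."""
    return EPS0 * area * V ** 2 / (2.0 * (gap - x) ** 2)

def pull_in_voltage(k, gap, area):
    """Classical pull-in voltage of a spring-suspended parallel-plate actuator.
    Above this voltage the electrostatic force always exceeds the spring restoring
    force and the membrane snaps onto the stator (snap-in)."""
    return math.sqrt(8.0 * k * gap ** 3 / (27.0 * EPS0 * area))

# Hypothetical MEMS-speaker-like values.
k = 50.0       # N/m, effective membrane stiffness
gap = 5e-6     # m, electrode gap
area = 1e-6    # m^2, membrane area
print(f"force at V = 5 V, x = 0: {electrostatic_force(5.0, gap, 0.0, area):.2e} N")
print(f"pull-in voltage ~ {pull_in_voltage(k, gap, area):.1f} V")
```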

  11. ["Glare vision". I. Physiological principles of vision change with increased test field luminance].

    Science.gov (United States)

    Hauser, B; Ochsner, H; Zrenner, E

    1992-02-01

    Clinical tests of visual acuity are an important measure of visual function. However, visual acuity is usually determined only in a narrow range of luminance levels between 160 and 320 cd/m2; therefore losses of visual acuity in other ranges of light intensity cannot be detected. At a distance of 80 cm from the patient's eyes, Landolt rings of varying sizes were presented on a small test field whose light intensity can be varied between 0.1 and 30,000 cd/m2. Thereby an acuity-luminance function can be obtained. We studied such functions under different conditions of exposure time, both with constant and with increasing luminance of the test field. We found that persons with normal vision can increase their visual acuity with increasing test field luminance up to a range of 5000 cd/m2. The maximum values of visual acuity under optimal lighting conditions lie (varying with age) between 2.2 and 0.9. Under pathological conditions visual acuity falls at high luminances, accompanied by sensations of glare. Tests of glare sensitivity as a function of exposure time showed 4 sec to be a critical time of exposure, since after 4 sec normal persons just reach their maximum visual acuity at high luminances. The underlying physiological mechanisms lead us to suppose that patients with neuronal light adaptation disturbances display a greater visual loss as a result of decreased time of exposure than those with disturbances in the ocular media. Visual acuity as well as the capacity to increase the patient's visual acuity under optimal conditions of lighting were both found to be strongly age-dependent. (ABSTRACT TRUNCATED AT 250 WORDS)

  12. The gauge principle vs. the equivalence principle

    International Nuclear Information System (INIS)

    Gates, S.J. Jr.

    1984-01-01

    Within the context of field theory, it is argued that the role of the equivalence principle may be replaced by the principle of gauge invariance to provide a logical framework for theories of gravitation

  13. Core principles of evolutionary medicine

    Science.gov (United States)

    Grunspan, Daniel Z; Nesse, Randolph M; Barnes, M Elizabeth; Brownell, Sara E

    2018-01-01

    Abstract Background and objectives: Evolutionary medicine is a rapidly growing field that uses the principles of evolutionary biology to better understand, prevent and treat disease, and that uses studies of disease to advance basic knowledge in evolutionary biology. Over-arching principles of evolutionary medicine have been described in publications, but our study is the first to systematically elicit core principles from a diverse panel of experts in evolutionary medicine. These principles should be useful to advance recent recommendations made by The Association of American Medical Colleges and the Howard Hughes Medical Institute to make evolutionary thinking a core competency for pre-medical education. Methodology: The Delphi method was used to elicit and validate a list of core principles for evolutionary medicine. The study included four surveys administered in sequence to 56 expert panelists. The initial open-ended survey created a list of possible core principles; the three subsequent surveys winnowed the list and assessed the accuracy and importance of each principle. Results: Fourteen core principles led at least 80% of the panelists to agree or strongly agree that they were important core principles for evolutionary medicine. These principles overlapped with concepts discussed in other articles on key concepts in evolutionary medicine. Conclusions and implications: This set of core principles will be helpful for researchers and instructors in evolutionary medicine. We recommend that evolutionary medicine instructors use the list of core principles to construct learning goals. Evolutionary medicine is a young field, so this list of core principles will likely change as the field develops further. PMID:29493660

  14. Method of regulating magnetic field of magnetic pole center

    International Nuclear Information System (INIS)

    Watanabe, Masao; Yamada, Teruo; Kato, Norihiko; Toda, Yojiro; Kaneda, Yasumasa.

    1978-01-01

    Purpose: To provide a method that uses a plurality of magnetic metal pieces of different thicknesses, so that the symmetry of the field at the magnetic pole center can be regulated very easily through the combination of said metal pieces, thereby obtaining a magnetic field of high precision. Method: Regulation of the field at the central part of the magnetic field does not depend only on machining of the center plug or axial movement of the trim coil and ion source; by providing a magnetic metal piece such as an iron ring, the primary higher harmonics of the field at the center can be regulated simply, while the position of the ion source slit remains on an equipotential surface in the field. (Yoshihara, H.)

  15. A new sensitive method of dissociation constants determination based on the isohydric solutions principle.

    Science.gov (United States)

    Michałowski, Tadeusz; Pilarski, Bogusław; Asuero, Agustin G; Dobkowska, Agnieszka

    2010-10-15

    The paper provides a new formulation and analytical proposals based on the isohydric solutions concept. It is particularly stated that a mixture formed, in the titrimetric mode, from a weak acid (HX, C0 mol/L) and a strong acid (HB, C mol/L) solution assumes a constant pH, independently of the volumes of the solutions mixed, provided that the relation C0 = C + C²·10^(pK1) is valid, where pK1 = -log K1 and K1 is the dissociation constant of HX. The generalized formulation, referring to the isohydric solutions thus obtained, was also extended to more complex acid-base systems. In particular, in the (HX, HB) system the titration occurs at a constant ionic strength (I), not resulting from the presence of a basal electrolyte. This very advantageous conjunction of properties provides, among others, a new, very sensitive method for verification of the pK1 value. The new method is particularly useful for weak acids HX characterized by low pK1 values. The method was tested experimentally on four acid-base systems (HX, HB), in aqueous and mixed-solvent media, and compared with literature data. Some useful (linear and hyperbolic) correlations were stated and applied for validation of pK1 values. Finally, some practical applications of analytical interest of the isohydricity (pH constancy) principle as formulated in this paper are enumerated, proving the usefulness of a property which has its remote roots in the Arrhenius concept. Copyright © 2010 Elsevier B.V. All rights reserved.
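
    The isohydricity condition C0 = C + C²·10^(pK1) can be checked numerically: for any mixing ratio of the two solutions, solving the charge balance returns the same pH. The sketch below does this with purely illustrative concentrations and neglects water autoionization; it is a verification of the quoted relation, not the authors' experimental procedure.

```python
import math
from scipy.optimize import brentq

def mixture_pH(C0, C, K1, f):
    """pH of a mixture of a weak acid HX (C0 mol/L in its own solution) and a strong
    acid HB (C mol/L), mixed with volume fraction f of the HX solution.
    Water autoionization is neglected (acidic range)."""
    Ca, Cb = C0 * f, C * (1.0 - f)                      # concentrations after mixing
    balance = lambda h: h - Ca * K1 / (K1 + h) - Cb     # charge balance: [H+] = [X-] + [B-]
    return -math.log10(brentq(balance, 1e-14, 1.0))

# Illustrative numbers only: pK1 = 3 (K1 = 1e-3), C = 0.01 mol/L,
# so the isohydricity condition gives C0 = C + C**2 * 10**pK1 = 0.11 mol/L.
K1, C = 1e-3, 0.01
C0 = C + C ** 2 * 10 ** 3
for f in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"HX volume fraction {f:.2f} -> pH {mixture_pH(C0, C, K1, f):.4f}")
```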

  16. Near-field acoustic holography using sparse regularization and compressive sampling principles.

    Science.gov (United States)

    Chardon, Gilles; Daudet, Laurent; Peillot, Antoine; Ollivier, François; Bertin, Nancy; Gribonval, Rémi

    2012-09-01

    Regularization of the inverse problem is a complex issue when using near-field acoustic holography (NAH) techniques to identify the vibrating sources. This paper shows that, for convex homogeneous plates with arbitrary boundary conditions, alternative regularization schemes can be developed based on the sparsity of the normal velocity of the plate in a well-designed basis, i.e., the possibility to approximate it as a weighted sum of few elementary basis functions. In particular, these techniques can handle discontinuities of the velocity field at the boundaries, which can be problematic with standard techniques. This comes at the cost of a higher computational complexity to solve the associated optimization problem, though it remains easily tractable with out-of-the-box software. Furthermore, this sparsity framework allows us to take advantage of the concept of compressive sampling; under some conditions on the sampling process (here, the design of a random array, which can be numerically and experimentally validated), it is possible to reconstruct the sparse signals with significantly less measurements (i.e., microphones) than classically required. After introducing the different concepts, this paper presents numerical and experimental results of NAH with two plate geometries, and compares the advantages and limitations of these sparsity-based techniques over standard Tikhonov regularization.
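
    The regularization described above exploits the sparsity of the plate velocity in a well-designed basis and, via compressive sampling, reconstructs it from fewer microphones than classical sampling requires. The sketch below shows the generic ingredient, an l1-regularized least-squares reconstruction solved by plain ISTA on synthetic data; it does not use the authors' plate bases, array design or regularization parameters.

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of the l1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, b, lam, n_iter=500):
    """Iterative shrinkage-thresholding for  min_x 0.5*||A x - b||^2 + lam*||x||_1."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2     # 1 / Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x - step * A.T @ (A @ x - b), step * lam)
    return x

# Synthetic compressive-sampling toy problem: 40 "microphones", 120 basis coefficients,
# only 5 of them nonzero (sparse field), small additive noise.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 120)) / np.sqrt(40)
x_true = np.zeros(120)
x_true[rng.choice(120, 5, replace=False)] = rng.standard_normal(5)
b = A @ x_true + 0.01 * rng.standard_normal(40)

x_hat = ista(A, b, lam=0.02)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```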

  17. Sensitivity-based virtual fields for the non-linear virtual fields method

    Science.gov (United States)

    Marek, Aleksander; Davis, Frances M.; Pierron, Fabrice

    2017-09-01

    The virtual fields method is an approach to inversely identify material parameters using full-field deformation data. In this manuscript, a new set of automatically-defined virtual fields for non-linear constitutive models has been proposed. These new sensitivity-based virtual fields reduce the influence of noise on the parameter identification. The sensitivity-based virtual fields were applied to a numerical example involving small strain plasticity; however, the general formulation derived for these virtual fields is applicable to any non-linear constitutive model. To quantify the improvement offered by these new virtual fields, they were compared with stiffness-based and manually defined virtual fields. The proposed sensitivity-based virtual fields were consistently able to identify plastic model parameters and outperform the stiffness-based and manually defined virtual fields when the data was corrupted by noise.

  18. Solar radiation measurements in forests - II. methods based on the principle of hemispherical photography

    International Nuclear Information System (INIS)

    Diaci, J.; Kolar, U.; Thormann, J.-J.

    1999-01-01

    Knowledge of the solar radiation distribution in forests is important for basic ecological investigations and silvicultural practice. Three methods of solar radiation assessment based on hemispherical canopy photography are compared in the present article: hemispherical photography by means of a fish-eye lens, the horizontoscope and the LAI-2000 Plant Canopy Analyzer. Experiences, improved methods and a drawing of a horizontoscope stand, which was elaborated at the Chair of Silviculture, are presented. Fairly good results, with some limitations, can be achieved with the improved stable horizontoscope in silvicultural work. Hemispherical photography is appropriate for the assessment of light conditions in all stand types and can thus be used in research work. The method has recently been undergoing intensive development, and digitalization of the entire system will speed up standardization. The LAI-2000 instrument is highly suitable for regeneration research under conditions of abundant plant vegetation, in rich sites and modified stands.

  19. Research of principles for estimating the freshness of meat products by color analysis method

    Science.gov (United States)

    Gorbunova, Elena V.; Chertov, Aleksandr N.; Petukhova, Daria B.; Alekhin, Artem A.; Korotaev, Valery V.

    2015-03-01

    Color is one of the most important metrics of foodstuff quality. It gives an indication of freshness and ingredient composition, as well as of the presence or absence of falsification. Most often, the color is estimated visually, and thus the evaluation is subjective. By automating the color analysis, a wide application for this method could be found. The aim of this research is to study the principles of color analysis as applied to the task of evaluating the freshness of meat products using modern machine vision systems. From a scientific point of view, the color of meat depends on the proportion of myoglobin and its derivatives. Myoglobin is the main pigment that characterizes the freshness of meat, and the color of meat can change further due to oxidation of myoglobin during storage. Myoglobin exists in three forms: the oxygenated form, the oxidized form and the form without oxygen. The meat color changes not only due to the conversion of one form into another. The content of amino acids and ammonia is another characteristic and constant sign of meat product spoilage. The paper presents the results of computer simulation of meat color based on data on the content of various forms of myoglobin in different proportions. The spectral characteristic of the light source used to illuminate the meat sample is taken into account. Experimental studies were also conducted using samples of beef. As a result, correlations were found between the said biochemical quality indicators and the meat color obtained with the help of the machine vision system.

  20. A variational principle giving gravitational 'superpotentials', the affine connection, Riemann tensor, and Einstein field equations

    International Nuclear Information System (INIS)

    Stachel, J.

    1977-01-01

    A first-order Lagrangian is given, from which follow the definitions of the fully covariant form of the Riemann tensor R_μνκλ in terms of the affine connection and metric; the definition of the affine connection in terms of the metric; the Einstein field equations; and the definition of a set of gravitational 'superpotentials' closely connected with the Komar conservation laws (Phys. Rev. 113:934 (1959)). Substitution of the definition of the affine connection into this Lagrangian results in a second-order Lagrangian, from which follow the definition of the fully covariant Riemann tensor in terms of the metric, the Einstein equations, and the definition of the gravitational 'superpotentials'. (author)

  1. Advanced Energy Storage Devices: Basic Principles, Analytical Methods, and Rational Materials Design

    Science.gov (United States)

    Liu, Jilei; Wang, Jin; Xu, Chaohe; Li, Chunzhong; Lin, Jianyi

    2017-01-01

    Abstract Tremendous efforts have been dedicated to the development of high‐performance energy storage devices with nanoscale design and hybrid approaches. The boundary between electrochemical capacitors and batteries becomes less distinctive. The same material may display capacitive or battery‐like behavior depending on the electrode design and the charge storage guest ions. Therefore, the underlying mechanisms and the electrochemical processes occurring upon charge storage may be confusing for researchers who are new to the field as well as some of the chemists and material scientists already in the field. This review provides fundamentals of the similarities and differences between electrochemical capacitors and batteries from a kinetic and materials point of view. Basic techniques and analysis methods to distinguish the capacitive and battery‐like behavior are discussed. Furthermore, guidelines for material selection, the state‐of‐the‐art materials, and electrode design rules for advanced electrodes are proposed. PMID:29375964

  2. Local causal structures, Hadamard states and the principle of local covariance in quantum field theory

    Energy Technology Data Exchange (ETDEWEB)

    Dappiaggi, Claudio [Erwin Schroedinger Institut fuer Mathematische Physik, Wien (Austria); Pinamonti, Nicola [Hamburg Univ. (Germany). 2. Inst. fuer Theoretische Physik; Porrmann, Martin [KwaZulu-Natal Univ. (South Africa). Quantum Research Group, School of Physics; National Institute for Theoretical Physics, Durban (South Africa)

    2010-01-15

    In the framework of the algebraic formulation, we discuss and analyse some new features of the local structure of a real scalar quantum field theory in a strongly causal spacetime. In particular we use the properties of the exponential map to set up a local version of a bulk-to-boundary correspondence. The bulk is a suitable subset of a geodesic neighbourhood of any but fixed point p of the underlying background, while the boundary is a part of the future light cone having p as its own tip. In this regime, we provide a novel notion for the extended *-algebra of Wick polynomials on the said cone and, on the one hand, we prove that it contains the information of the bulk counterpart via an injective *-homomorphism while, on the other hand, we associate to it a distinguished state whose pull-back in the bulk is of Hadamard form. The main advantage of this point of view arises if one uses the universal properties of the exponential map and of the light cone in order to show that, for any two given backgrounds M and M' and for any two subsets of geodesic neighbourhoods of two arbitrary points, it is possible to engineer the above procedure such that the boundary extended algebras are related via a restriction homomorphism. This allows for the pull-back of boundary states in both spacetimes and, thus, to set up a machinery which permits the comparison of expectation values of local field observables in M and M'. (orig.)

  3. Local causal structures, Hadamard states and the principle of local covariance in quantum field theory

    International Nuclear Information System (INIS)

    Dappiaggi, Claudio; Pinamonti, Nicola

    2010-01-01

    In the framework of the algebraic formulation, we discuss and analyse some new features of the local structure of a real scalar quantum field theory in a strongly causal spacetime. In particular we use the properties of the exponential map to set up a local version of a bulk-to-boundary correspondence. The bulk is a suitable subset of a geodesic neighbourhood of any but fixed point p of the underlying background, while the boundary is a part of the future light cone having p as its own tip. In this regime, we provide a novel notion for the extended *-algebra of Wick polynomials on the said cone and, on the one hand, we prove that it contains the information of the bulk counterpart via an injective *-homomorphism while, on the other hand, we associate to it a distinguished state whose pull-back in the bulk is of Hadamard form. The main advantage of this point of view arises if one uses the universal properties of the exponential map and of the light cone in order to show that, for any two given backgrounds M and M' and for any two subsets of geodesic neighbourhoods of two arbitrary points, it is possible to engineer the above procedure such that the boundary extended algebras are related via a restriction homomorphism. This allows for the pull-back of boundary states in both spacetimes and, thus, to set up a machinery which permits the comparison of expectation values of local field observables in M and M'. (orig.)

  4. Gas adsorption in Mg-porphyrin-based porous organic frameworks: A computational simulation by first-principles derived force field.

    Science.gov (United States)

    Pang, Yujia; Li, Wenliang; Zhang, Jingping

    2017-09-15

    A novel type of porous organic framework based on Mg-porphyrin, with diamond-like topology, named POF-Mgs, is computationally designed, and the gas uptakes of CO2, H2, N2, and H2O in POF-Mgs are investigated by grand canonical Monte Carlo simulations based on a first-principles derived force field (FF). The FF, which describes the interactions between POF-Mgs and the gases, is fitted to dispersion-corrected double-hybrid density functional theory, B2PLYP-D3. The good agreement between the obtained FF and the first-principles energy data confirms the reliability of the FF. Furthermore, our simulations show that the presence of a small amount of H2O (≤ 0.01 kPa) does not much affect the adsorption quantity of CO2, but the presence of a higher partial pressure of H2O (≥ 0.1 kPa) decreases the CO2 adsorption significantly. The good performance of POF-Mgs in the simulations inspires us to design novel porous materials experimentally for gas adsorption and purification. © 2017 Wiley Periodicals, Inc.

  5. Field testing, comparison, and discussion of five aeolian sand transport measuring devices operating on different measuring principles

    Science.gov (United States)

    Goossens, Dirk; Nolet, Corjan; Etyemezian, Vicken; Duarte-Campos, Leonardo; Bakker, Gerben; Riksen, Michel

    2018-06-01

    Five types of sediment samplers designed to measure aeolian sand transport were tested during a wind erosion event on the Sand Motor, an area on the west coast of the Netherlands prone to severe wind erosion. Each of the samplers operates on a different principle. The MWAC (Modified Wilson And Cooke) is a passive segmented trap. The modified Leatherman sampler is a passive vertically integrating trap. The Saltiphone is an acoustic sampler that registers grain impacts on a microphone. The Wenglor sampler is an optical sensor that detects particles as they pass through a laser beam. The SANTRI (Standalone AeoliaN Transport Real-time Instrument) detects particles travelling through an infrared beam, but in different channels each associated with a particular grain size spectrum. A procedure is presented to transform the data output, which is different for each sampler, to a common standard so that the samplers can be objectively compared and their relative efficiency calculated. Results show that the efficiency of the samplers is comparable despite the differences in operating principle and the instrumental and environmental uncertainties associated to working with particle samplers in field conditions. The ability of the samplers to register the temporal evolution of a wind erosion event is investigated. The strengths and weaknesses of the samplers are discussed. Some problems inherent to optical sensors are looked at in more detail. Finally, suggestions are made for further improvement of the samplers.

  6. The Virtual Fields Method Extracting Constitutive Mechanical Parameters from Full-field Deformation Measurements

    CERN Document Server

    Pierron, Fabrice

    2012-01-01

    The Virtual Fields Method: Extracting Constitutive Mechanical Parameters from Full-field Deformation Measurements is the first book on the Virtual Fields Method (VFM), a technique to identify materials mechanical properties from full-field measurements. Firmly rooted with extensive theoretical description of the method, the book presents numerous examples of application to a wide range of materials (composites, metals, welds, biomaterials) and situations (static, vibration, high strain rate). The authors give a detailed training section with examples of progressive difficulty to lead the reader to program the VFM and include a set of commented Matlab programs as well as GUI Matlab-based software for more general situations. The Virtual Fields Method: Extracting Constitutive Mechanical Parameters from Full-field Deformation Measurements is an ideal book for researchers, engineers, and students interested in applying the VFM to new situations motivated by their research.  

  7. Two numerical methods for mean-field games

    KAUST Repository

    Gomes, Diogo A.

    2016-01-09

    Here, we consider numerical methods for stationary mean-field games (MFG) and investigate two classes of algorithms. The first one is a gradient flow method based on the variational characterization of certain MFG. The second one uses monotonicity properties of MFG. We illustrate our methods with various examples, including one-dimensional periodic MFG, congestion problems, and higher-dimensional models.

  8. Two numerical methods for mean-field games

    KAUST Repository

    Gomes, Diogo A.

    2016-01-01

    Here, we consider numerical methods for stationary mean-field games (MFG) and investigate two classes of algorithms. The first one is a gradient flow method based on the variational characterization of certain MFG. The second one uses monotonicity properties of MFG. We illustrate our methods with various examples, including one-dimensional periodic MFG, congestion problems, and higher-dimensional models.

  9. Overlay control methodology comparison: field-by-field and high-order methods

    Science.gov (United States)

    Huang, Chun-Yen; Chiu, Chui-Fu; Wu, Wen-Bin; Shih, Chiang-Lin; Huang, Chin-Chou Kevin; Huang, Healthy; Choi, DongSub; Pierson, Bill; Robinson, John C.

    2012-03-01

    Overlay control in advanced integrated circuit (IC) manufacturing is becoming one of the leading lithographic challenges in the 3x and 2x nm process nodes. Production overlay control can no longer meet the stringent emerging requirements based on linear composite wafer and field models with sampling of 10 to 20 fields and 4 to 5 sites per field, which was the industry standard for many years. Methods that have emerged include overlay metrology in many or all fields, including the high order field model method called high order control (HOC), and field-by-field control (FxFc) methods, also called correction per exposure. The HOC and FxFc methods were initially introduced as relatively infrequent scanner qualification activities meant to supplement linear production schemes. More recently, however, it is clear that production control also requires intense sampling and similar high-order and FxFc methods. The added control benefits of high-order and FxFc overlay methods need to be balanced with the increased metrology requirements, however, without putting material at risk. Of critical importance is the proper control of edge fields, which requires intensive sampling in order to minimize signatures. In this study we compare various methods of overlay control, including the performance levels that can be achieved.
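
    A "linear composite wafer model" of the kind mentioned above typically expresses the overlay errors (dx, dy) at wafer coordinates (x, y) as translation, magnification and rotation terms fitted by least squares; high-order and field-by-field schemes extend the same idea with more terms and per-exposure corrections. The sketch below is a generic six-parameter linear fit on synthetic measurements, not the specific model set used in this study.

```python
import numpy as np

def fit_linear_overlay_model(x, y, dx, dy):
    """Least-squares fit of a 6-parameter linear overlay model
        dx = Tx + Mx*x - Ry*y
        dy = Ty + My*y + Rx*x
    Returns (Tx, Ty, Mx, My, Rx, Ry)."""
    n = len(x)
    A = np.zeros((2 * n, 6))
    A[:n, 0] = 1.0; A[:n, 2] = x; A[:n, 5] = -y     # dx equations
    A[n:, 1] = 1.0; A[n:, 3] = y; A[n:, 4] = x      # dy equations
    rhs = np.concatenate([dx, dy])
    params, *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return params

# Synthetic overlay data (coordinates in mm, errors in nm) -- purely illustrative.
rng = np.random.default_rng(1)
x = rng.uniform(-150, 150, 60)
y = rng.uniform(-150, 150, 60)
true = dict(Tx=2.0, Ty=-1.0, Mx=0.03, My=0.02, Rx=0.01, Ry=0.015)
dx = true["Tx"] + true["Mx"] * x - true["Ry"] * y + rng.normal(0, 0.5, 60)
dy = true["Ty"] + true["My"] * y + true["Rx"] * x + rng.normal(0, 0.5, 60)
print(np.round(fit_linear_overlay_model(x, y, dx, dy), 4))
```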

  10. Development of a first-principles code based on the screened KKR method for large super-cells

    International Nuclear Information System (INIS)

    Doi, S; Ogura, M; Akai, H

    2013-01-01

    The procedures for performing first-principles electronic structure calculations using the Korringa-Kohn-Rostoker (KKR) and the screened KKR methods are reviewed with an emphasis on their numerical efficiency. It is shown that an iterative matrix inversion combined with a suitable preconditioning greatly improves the computational time of the screened KKR method. The method is well parallelized and also has an O(N) scaling property.
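
    The speed-up referred to above comes from replacing direct inversion of the large but short-ranged (hence sparse) screened-KKR matrix by a preconditioned iterative solver. The sketch below illustrates that generic numerical idea with SciPy, using GMRES and an incomplete-LU preconditioner on a hypothetical sparse matrix; it is not the authors' code or their actual KKR matrices.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Hypothetical sparse, diagonally dominant "screened" matrix (short-ranged coupling).
n = 2000
rng = np.random.default_rng(2)
diagonals = [rng.standard_normal(n) + 6.0, rng.standard_normal(n - 1), rng.standard_normal(n - 1)]
A = sp.diags(diagonals, [0, -1, 1], format="csc")
b = rng.standard_normal(n)

# Incomplete LU factorization used as a preconditioner for GMRES.
ilu = spla.spilu(A, drop_tol=1e-4)
M = spla.LinearOperator(A.shape, ilu.solve)

x, info = spla.gmres(A, b, M=M)
print("converged:", info == 0, " residual:", np.linalg.norm(A @ x - b))
```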

  11. Corestriction principle for non-Abelian cohomology of reductive group schemes over Dedekind rings of integers of local and global fields

    International Nuclear Information System (INIS)

    Nguyen Quoc Thang

    2006-12-01

    We prove some new results on the corestriction principle for the non-abelian cohomology of group schemes over the rings of integers of local and global fields. Some connections with the Grothendieck-Serre conjecture are indicated, and applications to the study of class groups of algebraic groups over global fields are given. (author)

  12. Economic evaluations and randomized trials in spinal disorders: principles and methods.

    Science.gov (United States)

    Korthals-de Bos, Ingeborg; van Tulder, Maurits; van Dieten, Hiske; Bouter, Lex

    2004-02-15

    Descriptive methodologic recommendations. To help researchers designing, conducting, and reporting economic evaluations in the field of back and neck pain. Economic evaluations of both existing and new therapeutic interventions are becoming increasingly important. There is a need to improve the methods of economic evaluations in the field of spinal disorders. To improve the methods of economic evaluations in the field of spinal disorders, this article describes the various steps in an economic evaluation, using as example a study on the cost-effectiveness of manual therapy, physiotherapy, and usual care provided by the general practitioner for patients with neck pain. An economic evaluation is a study in which two or more interventions are systematically compared with regard to both costs and effects. There are four types of economic evaluations, based on analysis of: (1) cost-effectiveness, (2) cost-utility, (3) cost-minimization, and (4) cost-benefit. The cost-utility analysis is a special case of cost-effectiveness analysis. The first step in all these economic evaluations is to identify the perspective of the study. The choice of the perspective will have consequences for the identification of costs and effects. Secondly, the alternatives that will be compared should be identified. Thirdly, the relevant costs and effects should be identified. Economic evaluations are usually performed from a societal perspective and include consequently direct health care costs, direct nonhealth care costs, and indirect costs. Fourthly, effect data are collected by means of questionnaires or interviews, and relevant cost data with regard to effect measures and health care utilization, work absenteeism, travel expenses, use of over-the-counter medication, and help from family and friends, are collected by means of cost diaries, questionnaires, or (telephone) interviews. Fifthly, real costs are calculated, or the costs are estimated on the basis of real costs, guideline prices

  13. Method of using triaxial magnetic fields for making particle structures

    Science.gov (United States)

    Martin, James E.; Anderson, Robert A.; Williamson, Rodney L.

    2005-01-18

    A method of producing three-dimensional particle structures with enhanced magnetic susceptibility in three dimensions by applying a triaxial energetic field to a magnetic particle suspension and subsequently stabilizing said particle structure. Combinations of direct current and alternating current fields in three dimensions produce particle gel structures, honeycomb structures, and foam-like structures.

  14. Measuring methods, registration and signal processing for magnetic field research

    International Nuclear Information System (INIS)

    Nagiello, Z.

    1981-01-01

    Some measuring methods and signal processing systems based on analogue and digital technics, which have been applied in magnetic field research using magnetometers with ferromagnetic transducers, are presented. (author)

  15. [Mo2(CN)11]5-: A detailed description of ligand-field spectra and magnetic properties by first-principles calculations.

    Science.gov (United States)

    Hendrickx, Marc F A; Clima, S; Chibotaru, L F; Ceulemans, A

    2005-10-06

    An ab initio multiconfigurational approach has been used to calculate the ligand-field spectrum and magnetic properties of the title cyano-bridged dinuclear molybdenum complex. The rather large magnetic coupling parameter J for a single cyano bridge, as derived experimentally for this complex by susceptibility measurements, is confirmed to a high degree of accuracy by our CASPT2 calculations. Its electronic structure is rationalized in terms of spin-spin coupling between the two constituent hexacyano-monomolybdate complexes. An in-depth analysis on the basis of Anderson's kinetic exchange theory provides a qualitative picture of the calculated CASSCF antiferromagnetic ground-state eigenvector in the Mo dimer. Dynamic electron correlations as incorporated into our first-principles calculations by means of the CASPT2 method are essential to obtain quantitative agreement between theory and experiment.

  16. New Method for Solving Inductive Electric Fields in the Ionosphere

    Science.gov (United States)

    Vanhamäki, H.

    2005-12-01

    We present a new method for calculating inductive electric fields in the ionosphere. It is well established that on large scales the ionospheric electric field is a potential field. This is understandable, since the temporal variations of large scale current systems are generally quite slow, on timescales of several minutes, so inductive effects should be small. However, studies of Alfven wave reflection have indicated that in some situations inductive phenomena could well play a significant role in the reflection process, and thus modify the nature of ionosphere-magnetosphere coupling. The inputs to our calculation method are the time series of the potential part of the ionospheric electric field together with the Hall and Pedersen conductances. The output is the time series of the induced rotational part of the ionospheric electric field. The calculation method works in the time domain and can be used with non-uniform, time-dependent conductances. In addition, no particular symmetry requirements are imposed on the input potential electric field. The presented method makes use of special non-local vector basis functions called Cartesian Elementary Current Systems (CECS). This vector basis offers a convenient way of representing curl-free and divergence-free parts of 2-dimensional vector fields and makes it possible to solve the induction problem using simple linear algebra. The new calculation method is validated by comparing it with previously published results for Alfven wave reflection from a uniformly conducting ionosphere.

  17. Leaf nitrogen from first principles: field evidence for adaptive variation with climate

    Science.gov (United States)

    Dong, Ning; Prentice, Iain Colin; Evans, Bradley J.; Caddy-Retalic, Stefan; Lowe, Andrew J.; Wright, Ian J.

    2017-01-01

    Nitrogen content per unit leaf area (Narea) is a key variable in plant functional ecology and biogeochemistry. Narea comprises a structural component, which scales with leaf mass per area (LMA), and a metabolic component, which scales with Rubisco capacity. The co-ordination hypothesis, as implemented in LPJ and related global vegetation models, predicts that Rubisco capacity should be directly proportional to irradiance but should decrease with increases in ci : ca and temperature because the amount of Rubisco required to achieve a given assimilation rate declines with increases in both. We tested these predictions using LMA, leaf δ13C, and leaf N measurements on complete species assemblages sampled at sites on a north-south transect from tropical to temperate Australia. Partial effects of mean canopy irradiance, mean annual temperature, and ci : ca (from δ13C) on Narea were all significant and their directions and magnitudes were in line with predictions. Over 80 % of the variance in community-mean (ln) Narea was accounted for by these predictors plus LMA. Moreover, Narea could be decomposed into two components, one proportional to LMA (slightly steeper in N-fixers), and the other to Rubisco capacity as predicted by the co-ordination hypothesis. Trait gradient analysis revealed ci : ca to be perfectly plastic, while species turnover contributed about half the variation in LMA and Narea. Interest has surged in methods to predict continuous leaf-trait variation from environmental factors, in order to improve ecosystem models. Coupled carbon-nitrogen models require a method to predict Narea that is more realistic than the widespread assumptions that Narea is proportional to photosynthetic capacity, and/or that Narea (and photosynthetic capacity) are determined by N supply from the soil. Our results indicate that Narea has a useful degree of predictability, from a combination of LMA and ci : ca - themselves in part environmentally determined - with Rubisco activity

  18. USE PRINCIPLES AND METHODS OF FRANCHISING SYSTEM EFFECTIVE DISTRIBUTION OF GOODS

    OpenAIRE

    Раменська, С.Є.; Сабірова, І.М.

    2013-01-01

    The article presents the results of an analysis of the factors influencing the development of franchising relationships in Ukraine. It examines the use of franchising principles and methods to increase the effectiveness of the distribution of goods and services.

  19. A different method for calculation of the deflection angle of light passing close to a massive object by Fermat's principle

    Science.gov (United States)

    Akkus, Harun

    2013-12-01

    We introduce a method, based on Fermat's principle, for calculating the deflection angle of light passing close to a massive object. The varying refractive index of the medium around the massive object is obtained from the Buckingham pi-theorem.

  20. Comparison between Two Linear Supervised Learning Machines' Methods with Principle Component Based Methods for the Spectrofluorimetric Determination of Agomelatine and Its Degradants.

    Science.gov (United States)

    Elkhoudary, Mahmoud M; Naguib, Ibrahim A; Abdel Salam, Randa A; Hadad, Ghada M

    2017-05-01

    Four accurate, sensitive and reliable stability-indicating chemometric methods were developed for the quantitative determination of Agomelatine (AGM), whether in pure form or in pharmaceutical formulations. Two supervised learning machine methods, linear artificial neural networks preceded by principal component analysis (PC-linANN) and linear support vector regression (linSVR), were compared with two principal-component-based methods, principal component regression (PCR) and partial least squares (PLS), for the spectrofluorimetric determination of AGM and its degradants. The results showed the benefits of using linear learning machine methods and the inherent merits of their algorithms in handling overlapped noisy spectral data, especially during the challenging determination of AGM alkaline and acidic degradants (DG1 and DG2). Root mean squared errors of prediction (RMSEP) for the proposed models in the determination of AGM were 1.68, 1.72, 0.68 and 0.22 for PCR, PLS, SVR and PC-linANN, respectively. The results showed the superiority of supervised learning machine methods over principal-component-based methods. Besides, the results suggested that linANN is the method of choice for the determination of components in low amounts with similar overlapped spectra and a narrow linearity range. Comparison between the proposed chemometric models and a reported HPLC method revealed the comparable performance and quantification power of the proposed models.
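
    A minimal sketch of the kind of PCR-versus-PLS comparison described above, written in Python with scikit-learn on synthetic overlapped "spectra" (the study used measured spectrofluorimetric data; the band positions, sample sizes and noise level below are invented):

      import numpy as np
      from sklearn.cross_decomposition import PLSRegression
      from sklearn.decomposition import PCA
      from sklearn.linear_model import LinearRegression
      from sklearn.pipeline import make_pipeline

      rng = np.random.default_rng(0)
      wl = np.linspace(300, 500, 200)                      # emission "wavelengths"
      band = lambda centre, width: np.exp(-0.5*((wl - centre)/width)**2)
      pure = np.vstack([band(380, 25), band(395, 30), band(410, 20)])   # three overlapped components
      conc = rng.uniform(0.1, 1.0, size=(60, 3))           # synthetic mixture concentrations
      X = conc @ pure + rng.normal(0, 0.01, size=(60, wl.size))
      y = conc[:, 0]                                       # target: first component

      X_tr, X_te, y_tr, y_te = X[:45], X[45:], y[:45], y[45:]
      pcr = make_pipeline(PCA(n_components=3), LinearRegression()).fit(X_tr, y_tr)
      pls = PLSRegression(n_components=3).fit(X_tr, y_tr)

      for name, model in (("PCR", pcr), ("PLS", pls)):
          pred = np.ravel(model.predict(X_te))
          rmsep = float(np.sqrt(np.mean((pred - y_te)**2)))
          print(f"{name}: RMSEP = {rmsep:.4f}")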

  1. The scope of the principle of non-discrimination of people with disability in the field of higher education in Colombia

    Directory of Open Access Journals (Sweden)

    Kelly Viviana Aristizábal Gómez

    2015-10-01

    Full Text Available This article aims to identify the scope of the principles of non-discrimination of people with disabilities in the field of higher education in Colombia, based on a thorough analysis that starts specifying the scope and content of the principle of non-discrimination, continues with the characterization of the concept of disability and finally, develops the incorporation of the international rules in the Colombian context and the appropriate discussion about an inclusive higher education.

  2. 3D electric field calculation with surface charge method

    International Nuclear Information System (INIS)

    Yamada, S.

    1992-01-01

    This paper describes an outline and some examples of three-dimensional electric field calculations with a computer code developed at NIRS. In the code, a surface charge method is adopted because of its simplicity in the mesh-establishing procedure. The charge density in a triangular mesh is assumed to vary as a linear function of position. The electric field distribution is calculated for a pair of drift tubes with focusing fingers on the opposing surfaces. The field distribution in an acceleration gap is analyzed with a Fourier-Bessel series expansion method. The calculated results excellently reproduce the data measured with a magnetic model. (author)

  3. Field Method for Integrating the First Order Differential Equation

    Institute of Scientific and Technical Information of China (English)

    JIA Li-qun; ZHENG Shi-wang; ZHANG Yao-yu

    2007-01-01

    An important modern method in analytical mechanics for finding integrals, called the field method, is used to study the solution of a first-order differential equation. First, by introducing an intermediate variable, a more complicated first-order differential equation can be expressed by two simpler first-order differential equations; the field method of analytical mechanics is then introduced to solve these two first-order equations. The conclusion shows that the field method of analytical mechanics can be fully used to find the solutions of a first-order differential equation, thus providing a new method for finding such solutions.

  4. Force-free magnetic fields - The magneto-frictional method

    Science.gov (United States)

    Yang, W. H.; Sturrock, P. A.; Antiochos, S. K.

    1986-01-01

    The problem under discussion is that of calculating magnetic field configurations in which the Lorentz force j x B is everywhere zero, subject to specified boundary conditions. We choose to represent the magnetic field in terms of Clebsch variables in the form B = grad alpha x grad beta. These variables are constant on any field line so that each field line is labeled by the corresponding values of alpha and beta. When the field is described in this way, the most appropriate choice of boundary conditions is to specify the values of alpha and beta on the bounding surface. We show that such field configurations may be calculated by a magneto-frictional method. We imagine that the field lines move through a stationary medium, and that each element of magnetic field is subject to a frictional force parallel to and opposing the velocity of the field line. This concept leads to an iteration procedure for modifying the variables alpha and beta, that tends asymptotically towards the force-free state. We apply the method first to a simple problem in two rectangular dimensions, and then to a problem of cylindrical symmetry that was previously discussed by Barnes and Sturrock (1972). In one important respect, our new results differ from the earlier results of Barnes and Sturrock, and we conclude that the earlier article was in error.

  5. Reporting emissions on EMEP grid. Methods and principles; Rapportering af luftemissioner paa grid. Metoder og principper

    Energy Technology Data Exchange (ETDEWEB)

    Tranekjaer Jensen, M.; Nielsen, Ole-Kenneth; Hjorth Mikkelsen, M.; Winther, M.; Gyldenkaerne, S.; Viuf OErby, P.; Boll Illerup, J.

    2008-03-15

    This report explains the methods for reporting emissions on the EMEP grid with a resolution of 50 km x 50 km for the reporting years 1990, 1995, 2000 and 2005. The applied, geographically distributed emission data on the grid represent the latest delivery (per March 2007) to UNECE LRTAP (United Nations Economic Commission for Europe Long-range Transboundary Air Pollution). Thus, the data represent the latest recalculation of historical values. The reporting of emissions on the EMEP grid with a resolution of 50 km x 50 km is part of the Danish submission under the above-mentioned convention (UNECE LRTAP). Emission inventories on the grid are reported every fifth year and involve all sectors under UNECE LRTAP. The reporting of emissions on the grid includes 14 mandatory emission components: SO{sub 2}, NOx, NH{sub 3}, NMVOC, CO, TSP, PM{sub 10}, PM{sub 2.5}, Pb, Cd, Hg, Dioxin, PAH and HCB. It is furthermore possible to make additional reporting for a range of components. The report summarizes the most crucial principles and considerations in the work of distributing air emissions on the grid within the predefined categories for gridding defined by the United Nations (UN 2003). For each of the reported categories, the report gives a detailed explanation of the specific level at which emissions are distributed spatially. For most reporting categories the process of distributing emissions has been carried out at the highly detailed SNAP level, whereas for others it has been necessary to aggregate several SNAP categories for the spatial distribution. The report presents final maps for selected air pollutants (SOx, NOx and NH{sub 3}) and briefly discusses possible reasons for variations in time and space. Based on current experience, the report finally gives some recommendations for improving future reporting of gridded emission data. The recommendations point out that the EMEP programme should provide harmonized and well-documented basic spatial data sets for gridding, to encourage each
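
    As a simple illustration of the gridding step, the sketch below aggregates point emissions onto regular 50 km x 50 km cells with pandas (coordinates, SNAP codes and emission values are invented; the real EMEP grid is defined in a projected coordinate system and involves area sources as well):

      import pandas as pd

      cell = 50_000.0                                      # 50 km cell size in projected metres
      sources = pd.DataFrame({
          "x_m":   [12_000, 431_000, 455_000, 780_000],
          "y_m":   [95_000, 110_000, 620_000, 605_000],
          "snap":  ["01", "03", "10", "07"],               # SNAP sector codes (illustrative)
          "nox_t": [120.0, 45.5, 3.2, 18.9],               # tonnes NOx per year (made up)
      })

      sources["col"] = (sources["x_m"] // cell).astype(int)
      sources["row"] = (sources["y_m"] // cell).astype(int)
      gridded = sources.groupby(["row", "col", "snap"], as_index=False)["nox_t"].sum()
      print(gridded)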

  6. Main principles of the development of the life quality evaluation methods in the case of vitreoretinal pathology

    Directory of Open Access Journals (Sweden)

    I. G. Ovechkin

    2015-01-01

    Full Text Available The aim of the study was to develop a methodology for assessing the quality of life in patients with different types of vitreoretinal pathology. The first stage consists of defining the target group, collecting data on the diseases, and choosing the key features and valuation principles of the questionnaire being developed. The detailed list of questions, the type of questionnaire, the method of data collection, as well as (if necessary) a scaling (indexing) system for the answers, are established at the second stage. A preliminary version of the questionnaire is prepared on the basis of the first two stages. At the final stage of development, the reliability, validity and sensitivity of the constructed method should be assessed, with the possibility of some modification of both the questionnaire and the psychometric parameters of the test. We conducted a broad survey of 28 expert ophthalmologists (mean age 42.4 years) with extensive experience both in general clinical practice (average work experience 20.4 years) and in surgery in the field of medical and surgical treatment of patients with various types of vitreoretinal pathology. Each expert was presented with the list of suggested questions for patients (37 questions in total) and the criteria for psychometric scaling of the responses by time of symptom occurrence. The choice of the psychometric rating scale is (along with the definition of the specific questions) the basic methodological decision in developing the questionnaire. Scaling over a one-month period proved most appropriate; the answers were therefore presented in the form of «permanent», «once a day», «every week», «every month», «absent». A quantitative estimation of the responses is extremely important from the practical point of view.

  7. Generalized stress field in granular soils heap with Rayleigh–Ritz method

    Directory of Open Access Journals (Sweden)

    Gang Bi

    2017-02-01

    Full Text Available The stress field in a granular soils heap (including piled coal) has a non-negligible impact on the settlement of the underlying soils. It is usually obtained by measurements and numerical simulations. Because the former method is not reliable (pressure cells instrumented on the interface between piled coal and the underlying soft soil do not work well), results from numerical methods alone need to be double-checked against one more method before they are extended to more complex cases. The generalized stress field in a granular soils heap is analyzed with the Rayleigh–Ritz method. The problem is divided into two cases: case A without horizontal constraint on the base and case B with horizontal constraint on the base. In both cases, the displacement functions u(x, y) and v(x, y) are assumed to be cubic polynomials with 12 undetermined parameters, which satisfy the Cauchy partial differential equations, generalized Hooke's law and the boundary equations. A functional is built with the Rayleigh–Ritz method according to the principle of minimum potential energy, and the problem is converted into solving for two undetermined parameters through the variation of the functional, while the other parameters are expressed in terms of these two. By comparing results from the Rayleigh–Ritz method and numerical simulations, it is demonstrated that the Rayleigh–Ritz method is feasible for studying the generalized stress field in a granular soils heap. Solutions from numerical methods are thus verified before being extended to more complicated cases.
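
    A minimal 1-D Rayleigh–Ritz sketch (an axially loaded bar rather than the plane problem treated in the paper): assume a polynomial displacement field that satisfies the kinematic boundary condition, build the total potential energy, and make it stationary with respect to the undetermined parameters. Python with SymPy; all symbols are generic.

      import sympy as sp

      x, L, EA, q = sp.symbols("x L EA q", positive=True)
      a1, a2 = sp.symbols("a1 a2")                         # undetermined parameters

      u = a1*x + a2*x**2                                   # trial field, u(0) = 0 built in
      Pi = sp.integrate(EA/2*sp.diff(u, x)**2 - q*u, (x, 0, L))   # total potential energy

      sol = sp.solve([sp.diff(Pi, a1), sp.diff(Pi, a2)], [a1, a2])  # stationarity conditions
      print(sp.simplify(u.subs(sol)))                      # the exact solution q*x*(2*L - x)/(2*EA)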

  8. Comparison of electric field exposure measurement methods under power lines

    International Nuclear Information System (INIS)

    Korpinen, L.; Kuisti, H.; Tarao, H.; Paeaekkoenen, R.; Elovaara, J.

    2014-01-01

    The object of the study was to investigate extremely low frequency (ELF) electric field exposure measurement methods under power lines. The authors compared two different methods under power lines: in Method A, the sensor was placed on a tripod; and Method B required the measurer to hold the meter horizontally so that the distance from him/her was at least 1.5 m. The study includes 20 measurements in three places under 400 kV power lines. The authors used two commercial three-axis meters, EFA-3 and EFA-300. In statistical analyses, they did not find significant differences between Methods A and B. However, in the future, it is important to take into account that measurement methods can, in some cases, influence ELF electric field measurement results, and it is important to report the methods used so that it is possible to repeat the measurements. (authors)

  9. Wave field restoration using three-dimensional Fourier filtering method.

    Science.gov (United States)

    Kawasaki, T; Takai, Y; Ikuta, T; Shimizu, R

    2001-11-01

    A wave field restoration method in transmission electron microscopy (TEM) was mathematically derived based on a three-dimensional (3D) image formation theory. Wave field restoration using this method together with spherical aberration correction was experimentally confirmed in through-focus images of amorphous tungsten thin film, and the resolution of the reconstructed phase image was successfully improved from the Scherzer resolution limit to the information limit. In an application of this method to a crystalline sample, the surface structure of Au(110) was observed in a profile-imaging mode. The processed phase image showed quantitatively the atomic relaxation of the topmost layer.

  10. Extensions of the auxiliary field method to solve Schroedinger equations

    International Nuclear Information System (INIS)

    Silvestre-Brac, Bernard; Semay, Claude; Buisseret, Fabien

    2008-01-01

    It has recently been shown that the auxiliary field method is an interesting tool to compute approximate analytical solutions of the Schroedinger equation. This technique can generate the spectrum associated with an arbitrary potential V(r) starting from the analytically known spectrum of a particular potential P(r). In the present work, general important properties of the auxiliary field method are proved, such as scaling laws and independence of the results on the choice of P(r). The method is extended in order to find accurate analytical energy formulae for radial potentials of the form aP(r) + V(r), and several explicit examples are studied. Connections existing between the perturbation theory and the auxiliary field method are also discussed

  11. Extensions of the auxiliary field method to solve Schroedinger equations

    Energy Technology Data Exchange (ETDEWEB)

    Silvestre-Brac, Bernard [LPSC Universite Joseph Fourier, Grenoble 1, CNRS/IN2P3, Institut Polytechnique de Grenoble, Avenue des Martyrs 53, F-38026 Grenoble-Cedex (France); Semay, Claude; Buisseret, Fabien [Groupe de Physique Nucleaire Theorique, Universite de Mons-Hainaut, Academie universitaire Wallonie-Bruxelles, Place du Parc 20, B-7000 Mons (Belgium)], E-mail: silvestre@lpsc.in2p3.fr, E-mail: claude.semay@umh.ac.be, E-mail: fabien.buisseret@umh.ac.be

    2008-10-24

    It has recently been shown that the auxiliary field method is an interesting tool to compute approximate analytical solutions of the Schroedinger equation. This technique can generate the spectrum associated with an arbitrary potential V(r) starting from the analytically known spectrum of a particular potential P(r). In the present work, general important properties of the auxiliary field method are proved, such as scaling laws and independence of the results on the choice of P(r). The method is extended in order to find accurate analytical energy formulae for radial potentials of the form aP(r) + V(r), and several explicit examples are studied. Connections existing between the perturbation theory and the auxiliary field method are also discussed.

  12. Sheet metals characterization using the virtual fields method

    Science.gov (United States)

    Marek, Aleksander; Davis, Frances M.; Pierron, Fabrice

    2018-05-01

    In this work, a characterisation method involving a deep-notched specimen subjected to tensile loading is introduced. This specimen leads to heterogeneous states of stress and strain, the latter being measured using a stereo DIC system (MatchID). This heterogeneity enables the identification of multiple material parameters in a single test. In order to identify material parameters from the DIC data, an inverse method called the Virtual Fields Method is employed. The method, combined with recently developed sensitivity-based virtual fields, makes it possible to optimally locate the areas in the test where information about each material parameter is encoded, improving the accuracy of the identification over traditional user-defined virtual fields. It is shown that a single test performed at 45° to the rolling direction is sufficient to obtain all anisotropic plastic parameters, thus reducing the experimental effort involved in characterisation. The paper presents the methodology and some numerical validation.

  13. Assessing Financial Education Methods: Principles vs. Rules-of-Thumb Approaches

    Science.gov (United States)

    Skimmyhorn, William L.; Davies, Evan R.; Mun, David; Mitchell, Brian

    2016-01-01

    Despite thousands of programs and tremendous public and private interest in improving financial decision-making, little is known about how best to teach financial education. Using an experimental approach, the authors estimated the effects of two different education methodologies (principles-based and rules-of-thumb) on the knowledge,…

  14. Principles of European Law on Service Contracts: background, genesis, and drafting method

    NARCIS (Netherlands)

    Jansen, C.E.C.; Zimmermann, R.

    2010-01-01

    The Principles of European Law on Service Contracts (PEL SC) were drafted between 1999 and 2006 by the Tilburg Team of the Study Group on a European Civil Code (SGECC). A slightly modified version of the PEL SC has recently been implemented in Book IV.C of the Draft Common Frame of Reference (DCFR).

  15. Accuracy limits of the equivalent field method for irregular photon fields

    International Nuclear Information System (INIS)

    Sanz, Dario Esteban

    2002-01-01

    A mathematical approach is developed to evaluate the accuracy of the equivalent field method using basic clinical photon beam data. This paper presents an analytical calculation of dose errors arising when field equivalencies, calculated at a certain reference depth, are translated to other depths. The phantom scatter summation is expressed as a Riemann-Stieltjes integral and two categories of irregular fields are introduced: uniform and multiform. It is shown that multiform fields produce errors whose magnitudes are nearly twice those corresponding to uniform fields in extreme situations. For uniform field shapes, the maximum, local, relative dose errors, when the equivalencies are calculated at 10 cm depth on the central axis and translated to a depth of 30 cm, are 3.8% and 8.8% for 6 MV and cobalt-60 photon beams, respectively. In terms of maximum dose those errors are within 1-2%. This supports the conclusion that the equivalencies between rectangular fields, which are examples of uniform fields, are applicable to dose ratio functions irrespective of beam energy. However, the magnitude of such errors could be of importance when assessing the exit dose for in vivo monitoring. This work provides a better understanding of the influence of the irregular field shapes on the accuracy of the equivalent field method. (author)
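
    For reference, the commonly used area-to-perimeter rule behind such field equivalencies can be written in a few lines (this is the generic rule of thumb, not the error analysis of the paper):

      def equivalent_square(a_cm: float, b_cm: float) -> float:
          """Side of the square field with the same area/perimeter ratio as an a x b rectangle."""
          return 4.0 * (a_cm * b_cm) / (2.0 * (a_cm + b_cm))

      print(equivalent_square(5.0, 20.0))                  # 8.0 cm for an elongated 5 cm x 20 cm field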

  16. Multiple coil pulsed magnetic resonance method for measuring cold SSC dipole magnet field quality

    International Nuclear Information System (INIS)

    Clark, W.G.; Moore, J.M.; Wong, W.H.

    1990-01-01

    The operating principles and system architecture for a method to measure the magnetic field multipole expansion coefficients are described in the context of the needs of SSC dipole magnets. The operation of an 8-coil prototype system is discussed. Several of the most important technological issues that influence the design are identified and the basis of their resolution is explained. The new features of a 32-coil system presently under construction are described, along with estimates of its requirements for measurement time and data storage capacity

  17. Exploiting of the Compression Methods for Reconstruction of the Antenna Far-Field Using Only Amplitude Near-Field Measurements

    Directory of Open Access Journals (Sweden)

    J. Puskely

    2010-06-01

    Full Text Available The novel approach exploits the principle of conventional two-plane amplitude measurements for the reconstruction of the unknown electric field distribution on the antenna aperture. The method combines a global optimization with a compression method. The global optimization method (GO) is used to minimize the functional, and the compression method is used to reduce the number of unknown variables. The algorithm employs the Real Coded Genetic Algorithm (RCGA) as the global optimization approach. The Discrete Cosine Transform (DCT) and the Discrete Wavelet Transform (DWT) are applied to reduce the number of unknown variables. Pros and cons of the methods are investigated and reported for the solution of the problem. In order to make the algorithm faster, exploitation of amplitudes from a single scanning plane is also discussed. First, the algorithm is used to obtain an initial estimate. Subsequently, the common Fourier iterative algorithm is used to reach the global minimum with sufficient accuracy. The method is examined by measuring a dish antenna.

  18. Practical methods for generating alternating magnetic fields for biomedical research

    Science.gov (United States)

    Christiansen, Michael G.; Howe, Christina M.; Bono, David C.; Perreault, David J.; Anikeeva, Polina

    2017-08-01

    Alternating magnetic fields (AMFs) cause magnetic nanoparticles (MNPs) to dissipate heat while leaving surrounding tissue unharmed, a mechanism that serves as the basis for a variety of emerging biomedical technologies. Unfortunately, the challenges and costs of developing experimental setups commonly used to produce AMFs with suitable field amplitudes and frequencies present a barrier to researchers. This paper first presents a simple, cost-effective, and robust alternative for small AMF working volumes that uses soft ferromagnetic cores to focus the flux into a gap. As the experimental length scale increases to accommodate animal models (working volumes of 100s of cm3 or greater), poor thermal conductivity and volumetrically scaled core losses render that strategy ineffective. Comparatively feasible strategies for these larger volumes instead use low loss resonant tank circuits to generate circulating currents of 1 kA or greater in order to produce the comparable field amplitudes. These principles can be extended to the problem of identifying practical routes for scaling AMF setups to humans, an infrequently acknowledged challenge that influences the extent to which many applications of MNPs may ever become clinically relevant.
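
    A back-of-the-envelope sketch of the resonant-tank strategy mentioned above (all component values are invented, not taken from the paper): at resonance a tank circuit multiplies the drive current by roughly its quality factor Q, which is how circulating currents of the order of 1 kA are reached.

      import math

      L_coil = 10e-6          # coil inductance, H (assumed)
      C_tank = 250e-9         # tank capacitance, F (assumed)
      R_esr  = 50e-3          # effective series resistance, ohm (assumed)
      I_src  = 20.0           # drive current delivered to the tank, A (assumed)

      f0 = 1.0 / (2.0*math.pi*math.sqrt(L_coil*C_tank))    # resonant frequency
      Q  = 2.0*math.pi*f0*L_coil / R_esr                   # quality factor
      print(f"f0 = {f0/1e3:.0f} kHz, Q = {Q:.0f}, circulating current ~ {Q*I_src/1e3:.1f} kA")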

  19. GENERAL PRINCIPLES OF EU (CRIMINAL LAW: LEGALITY, EQUALITY, NON-DISCRIMINATION, SPECIALTY AND NE BIS IN IDEM IN THE FIELD OF THE EUROPEAN ARREST WARRANT

    Directory of Open Access Journals (Sweden)

    NOREL NEAGU

    2012-05-01

    Full Text Available This article deals with the case law of the Court of Justice of the European Union in the field of the European arrest warrant, critically analysing the principles invoked in several decisions validating the European legislation in the field: legality, equality and non-discrimination, specialty, and ne bis in idem. The author concludes that an area of freedom, security and justice could be built on these principles, but further harmonisation of legislation needs to be achieved to avoid a "journey to the unknown" for European citizens with respect to the legislation of other EU member states.

  20. A regularization method for extrapolation of solar potential magnetic fields

    Science.gov (United States)

    Gary, G. A.; Musielak, Z. E.

    1992-01-01

    The mathematical basis of a Tikhonov regularization method for extrapolating the chromospheric-coronal magnetic field using photospheric vector magnetograms is discussed. The basic techniques show that the Cauchy initial value problem can be formulated for potential magnetic fields. The potential field analysis considers a set of linear, elliptic partial differential equations. It is found that, by introducing an appropriate smoothing of the initial data of the Cauchy potential problem, an approximate Fourier integral solution is found, and an upper bound to the error in the solution is derived. This specific regularization technique, which is a function of magnetograph measurement sensitivities, provides a method to extrapolate the potential magnetic field above an active region into the chromosphere and low corona.
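
    A minimal 1-D sketch of the regularization idea (Python/NumPy): continuation of a potential field away from the measurement plane multiplies each Fourier mode by exp(-|k| z), so continuation towards the sources (z < 0) amplifies high-wavenumber noise exponentially; smoothing the initial data keeps the amplification bounded. The grid, the extrapolation distance and the smoothing width are invented, and the real problem is 2-D over a magnetogram rather than 1-D.

      import numpy as np

      rng = np.random.default_rng(1)
      n, dx, z = 256, 1.0, -3.0                            # continue 3 grid units towards the sources
      x = np.arange(n) * dx
      measured = np.exp(-((x - 128.0)/10.0)**2) + 0.01*rng.normal(size=n)   # signal + noise

      k = np.abs(2.0*np.pi*np.fft.fftfreq(n, d=dx))
      F = np.fft.fft(measured)

      naive = np.fft.ifft(F * np.exp(-k*z)).real           # unregularized: noise blows up
      sigma = 1.5                                          # smoothing width (assumed)
      smoothed = np.fft.ifft(F * np.exp(-(sigma*k)**2) * np.exp(-k*z)).real

      print("max |naive| =", float(np.abs(naive).max()),
            " max |smoothed| =", float(np.abs(smoothed).max()))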

  1. Interferometric methods for mapping static electric and magnetic fields

    DEFF Research Database (Denmark)

    Pozzi, Giulio; Beleggia, Marco; Kasama, Takeshi

    2014-01-01

    The mapping of static electric and magnetic fields using electron probes with a resolution and sensitivity that are sufficient to reveal nanoscale features in materials requires the use of phase-sensitive methods such as the shadow technique, coherent Foucault imaging and the Transport of Intensity equation, and relies on theoretical models that form the basis of the quantitative interpretation of electron holographic data. We review the application of electron holography to a variety of samples (including electric fields associated with p–n junctions in semiconductors, quantized magnetic flux in superconductors, ...) and the model-independent determination of the locations and magnitudes of field sources (electric charges and magnetic dipoles) directly from electron holographic data.

  2. About solution of multipoint boundary problem of static analysis of deep beam with the use of combined application of finite element method and discrete-continual finite element method. part 1: formulation of the problem and general principles of approximation

    Directory of Open Access Journals (Sweden)

    Lyakhovich Leonid

    2017-01-01

    Full Text Available This paper is devoted to the formulation and general principles of approximation of the multipoint boundary problem of static analysis of a deep beam with the combined application of the finite element method (FEM) and the discrete-continual finite element method (DCFEM). The field of application of DCFEM comprises structures with regular physical and geometrical parameters in some dimension (the "basic" dimension). DCFEM presupposes a finite element approximation in the non-basic dimension, while in the basic dimension the problem remains continual. DCFEM is based on analytical solutions of the resulting multipoint boundary problems for systems of ordinary differential equations with piecewise-constant coefficients.

  3. The Fundamental Principles Drawn from the Court of Justice of the European Union in the Field of Public Procurement and Concessions

    Directory of Open Access Journals (Sweden)

    Catalin-Silviu SARARU

    2010-11-01

    Full Text Available This article aims to present the major guidelines in the case-law of the Court of Justice of the European Union (EU) in the field of public procurement and concessions. The Court, with its mission to enforce EU law in the interpretation and uniform application of the Treaties, has contributed to establishing the content of the principles which apply to the award, conclusion, amendment and termination of public procurement contracts and concessions, and to shaping the principles applicable to review against abuses carried out by the contracting entity in the award procedure. This article analyses the principles of transparency and impartiality in the award of these contracts and describes the means by which these goals are achieved in practice: non-discriminatory description of the subject-matter of the contract, equal treatment of operators involved in awarding the contract, mutual recognition of diplomas, certificates and other evidence, the principle of equal treatment of public and private operators, and appropriate time-limits in which the undertakings concerned of any Member State are able to prepare their offers. Ensuring the application of EU rules in the field of public contracts cannot be achieved without the existence of an effective judicial review based on the principle of effectiveness of legal means of action and the principle of equivalence. Knowledge of the content of these principles is particularly important for a uniform application of EU law on public contracts in all Member States.

  4. Lattice field theories: non-perturbative methods of analysis

    International Nuclear Information System (INIS)

    Weinstein, M.

    1978-01-01

    A lecture is given on the possible extraction of interesting physical information from quantum field theories by studying their semiclassical versions. From the beginning, the problem of solving for the spectrum of states of any given continuum quantum field theory is considered as a giant Schroedinger problem, and then some nonperturbative methods for diagonalizing the Hamiltonian of the theory are explained without recourse to semiclassical approximations. The notion of a lattice appears as an artifice to handle the problems associated with the familiar infrared and ultraviolet divergences of continuum quantum field theory, and in fact for all but gauge theories. 18 references

  5. Multigrid methods for the computation of propagators in gauge fields

    International Nuclear Information System (INIS)

    Kalkreuter, T.

    1992-11-01

    In the present work generalizations of multigrid methods for propagators in gauge fields are investigated. We discuss proper averaging operations for bosons and for staggered fermions. An efficient algorithm for computing C numerically is presented. The averaging kernels C can be used not only in deterministic multigrid computations, but also in multigrid Monte Carlo simulations, and for the definition of block spins and blocked gauge fields in Monte Carlo renormalization group studies of gauge theories. Actual numerical computations of kernels and propagators are performed in compact four-dimensional SU(2) gauge fields. (orig./HSI)

  6. Natural science methods in field archaeology, with the case study of Crimea

    Science.gov (United States)

    Smekalova, T. N.; Yatsishina, E. B.; Garipov, A. S.; Pasumanskii, A. E.; Ketsko, R. S.; Chudin, A. V.

    2016-07-01

    The natural science methods applied in archaeological field survey are briefly reviewed. They are classified into several groups: remote sensing (analysis of space and aerial photographs, viewshed analysis, study of detailed topographic and special maps, and three-dimensional photogrammetry), geophysical survey, and analysis of cultural layer elements (by geochemical, paleosol, and other methods). The most important principle is the integration of complementary, nondestructive and fast natural science methods in order to obtain the most complete and reliable results. Emphasis is placed on the geophysical methods of study, primarily magnetic exploration. A multidisciplinary study of the monuments of ancient Chersonesos and its "barbarian" environment is described as an example of successful application of a complex technique.

  7. Methodological and Methodical Principles of the Empirical Study of Spiritual Development of a Personality

    Directory of Open Access Journals (Sweden)

    Olga Klymyshyn

    2017-06-01

    Full Text Available The article reveals the essence of the methodological principles of the spiritual development of a personality. The results of a theoretical analysis of the psychological content of spirituality are taken into consideration from the standpoint of a systemic-structural approach to the study of personality, the age patterns of mental development, the sacramental nature of the human person, and the mechanisms of human spiritual development. An interpretation of spirituality and of the spiritual development of a personality is given. The initial principles of the organization of empirical research on the spiritual development of a personality (ontogenetic, sociocultural, self-determination, systemic) are presented. Such parameters of the estimation of a personality's spiritual development as the general index of the development of spiritual potential, the indexes of the development of the ethical, aesthetical, cognitive and existential components of spirituality, and the index of religiousness of a personality are described. The methodological support of the psychological diagnostic research is defined.

  8. Thermodynamic properties of Mg2Si and Mg2Ge investigated by first principles method

    International Nuclear Information System (INIS)

    Wang, Hanfu; Jin, Hao; Chu, Weiguo; Guo, Yanjun

    2010-01-01

    The lattice dynamics and thermodynamic properties of Mg 2 Si and Mg 2 Ge are studied based on the first principles calculations. We obtain the phonon dispersion curves and phonon density of states spectra using the density functional perturbation theory with local density approximations. By employing the quasi-harmonic approximation, we calculate the temperature dependent Helmholtz free energy, bulk modulus, thermal expansion coefficient, specific heat, Debye temperature and overall Grueneisen coefficient. The results are in good agreement with available experimental data and previous theoretical studies. The thermal conductivities of both compounds are then estimated with the Slack's equation. By carefully choosing input parameters, especially the acoustic Debye temperature, we find that the calculated thermal conductivities agree fairly well with the experimental values above 80 K for both compounds. This demonstrates that the lattice thermal conductivity of simple cubic semiconductors may be estimated with satisfactory accuracy by combining the Slack's equation with the necessary thermodynamics parameters derived completely from the first principles calculations.
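
    A minimal sketch of a Slack-type estimate of the lattice thermal conductivity, in the spirit of the closing step described above (one common form of the equation; the constant A and all input values below are rough illustrative numbers for an Mg2Si-like compound, not the parameters computed in the paper):

      def slack_kappa(M_avg, theta_a, delta_ang, gamma, n_atoms, T, A=3.1e-6):
          """Lattice thermal conductivity in W/m/K (M_avg in amu, theta_a in K, delta in Angstrom)."""
          return A * M_avg * theta_a**3 * delta_ang / (gamma**2 * n_atoms**(2.0/3.0) * T)

      M_avg   = (2*24.31 + 28.09) / 3      # mean atomic mass of Mg2Si, amu
      theta_a = 380.0                      # acoustic Debye temperature, K (assumed)
      delta   = 2.77                       # cube root of the volume per atom, Angstrom (assumed)
      gamma   = 1.3                        # Grueneisen parameter (assumed)
      for T in (300, 500, 800):
          print(T, "K ->", round(slack_kappa(M_avg, theta_a, delta, gamma, 3, T), 1), "W/m/K")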

  9. Constructing the principles: Method and metaphysics in the progress of theoretical physics

    Science.gov (United States)

    Glass, Lawrence C.

    This thesis presents a new framework for the philosophy of physics focused on methodological differences found in the practice of modern theoretical physics. The starting point for this investigation is the longstanding debate over scientific realism. Some philosophers have argued that it is the aim of science to produce an accurate description of the world including explanations for observable phenomena. These scientific realists hold that our best confirmed theories are approximately true and that the entities they propose actually populate the world, whether or not they have been observed. Others have argued that science achieves only frameworks for the prediction and manipulation of observable phenomena. These anti-realists argue that truth is a misleading concept when applied to empirical knowledge. Instead, focus should be on the empirical adequacy of scientific theories. This thesis argues that the fundamental distinction at issue, a division between true scientific theories and ones which are empirically adequate, is best explored in terms of methodological differences. In analogy with the realism debate, there are at least two methodological strategies. Rather than focusing on scientific theories as wholes, this thesis takes as units of analysis physical principles which are systematic empirical generalizations. The first possible strategy, the conservative, takes the assumption that the empirical adequacy of a theory in one domain serves as good evidence for such adequacy in other domains. This then motivates the application of the principle to new domains. The second strategy, the innovative, assumes that empirical adequacy in one domain does not justify the expectation of adequacy in other domains. New principles are offered as explanations in the new domain. The final part of the thesis is the application of this framework to two examples. On the first, Lorentz's use of the aether is reconstructed in terms of the conservative strategy with respect to

  10. Principles for the development of Aboriginal health interventions: culturally appropriate methods through systemic empathy.

    Science.gov (United States)

    Kendall, Elizabeth; Barnett, Leda

    2015-01-01

    To increase Aboriginal participation in mainstream health services, it is necessary to understand the factors that influence health service usage. This knowledge can contribute to the development of culturally appropriate health services that respect Aboriginal ways of being. We used a community-based participatory approach to examine the reasons for underutilization of health services by Aboriginal Australians. Based on three focus groups and 18 interviews with Aboriginal health professionals, leaders, and community members in rural, regional, and urban settings, we identified five factors that influenced usage, including (1) negative historical experiences, (2) cultural incompetence, (3) inappropriate communication, (4) a collective approach to health, and (5) a more holistic approach to health. Given that these factors have shaped negative Aboriginal responses to health interventions, they are likely to be principles by which more appropriate solutions are generated. Although intuitively sensible and well known, these principles remain poorly understood by non-Aboriginal health systems and even less well implemented. We have conceptualized these principles as the foundation of an empathic health system. Without empathy, health systems in Australia, and internationally, will continue to face the challenge of building effective services to improve the state of health for all minority populations.

  11. Principles of the Kenzan Method for Robotic Cell Spheroid-Based Three-Dimensional Bioprinting.

    Science.gov (United States)

    Moldovan, Nicanor I; Hibino, Narutoshi; Nakayama, Koichi

    2017-06-01

    Bioprinting is a technology with the prospect to change the way many diseases are treated, by replacing the damaged tissues with live de novo created biosimilar constructs. However, after more than a decade of incubation and many proofs of concept, the field is still in its infancy. The current stagnation is the consequence of its early success: the first bioprinters, and most of those that followed, were modified versions of the three-dimensional printers used in additive manufacturing, redesigned for layer-by-layer dispersion of biomaterials. In all variants (inkjet, microextrusion, or laser assisted), this approach is material ("scaffold") dependent and energy intensive, making it hardly compatible with some of the intended biological applications. Instead, the future of bioprinting may benefit from the use of gentler scaffold-free bioassembling methods. A substantial body of evidence has accumulated, indicating this is possible by use of preformed cell spheroids, which have been assembled in cartilage, bone, and cardiac muscle-like constructs. However, a commercial instrument capable to directly and precisely "print" spheroids has not been available until the invention of the microneedles-based ("Kenzan") spheroid assembling and the launching in Japan of a bioprinter based on this method. This robotic platform laces spheroids into predesigned contiguous structures with micron-level precision, using stainless steel microneedles ("kenzans") as temporary support. These constructs are further cultivated until the spheroids fuse into cellular aggregates and synthesize their own extracellular matrix, thus attaining the needed structural organization and robustness. This novel technology opens wide opportunities for bioengineering of tissues and organs.

  12. Methods and Principles of Determining the Footwear and Floor Tribological Characteristics

    Directory of Open Access Journals (Sweden)

    D. Stamenković

    2017-09-01

    Full Text Available There are many standards relating to the anti-slip properties of footwear and flooring. These standards describe the different test methods and procedures for determining footwear and floor slip resistance under different conditions. In this paper the authors systematize the standards in this field applied in the EU and in Serbia and cite the Serbian institutes that are certified for this type of testing. In addition, the authors have carried out an analysis and comparison of the tests defined in these standards, indicating their advantages and disadvantages. The importance of static and kinetic friction testing in determining the anti-slip properties of footwear and flooring is specifically indicated. Considering the current standards in the area of slip resistance of footwear and floor coverings, the authors have determined the testing conditions for laboratory measurement of the friction forces of different floor and footwear materials. The laboratory measurements were carried out at the Faculty of Mechanical Engineering in Niš. The measurement results and their analysis are presented in the paper as well.

  13. Improvement of vector compensation method for vehicle magnetic distortion field

    Energy Technology Data Exchange (ETDEWEB)

    Pang, Hongfeng, E-mail: panghongfeng@126.com; Zhang, Qi; Li, Ji; Luo, Shitu; Chen, Dixiang; Pan, Mengchun; Luo, Feilu

    2014-03-15

    Magnetic distortions such as the eddy-current field and the low-frequency magnetic field have not been considered in vector compensation methods. A new compensation method is proposed to suppress these magnetic distortions and improve compensation performance, in which the magnetic distortions related to the measurement vectors and to time are considered. The experimental system mainly consists of a three-axis fluxgate magnetometer (DM-050), an underwater vehicle and a proton magnetometer; the scalar value of the magnetic field is obtained with the proton magnetometer and considered to be the true value. Compared with traditional compensation methods, experimental results show that the magnetic distortions can be further reduced by a factor of two. After compensation, the error intensity and RMS error are reduced from 11684.013 nT and 7794.604 nT to 16.219 nT and 5.907 nT, respectively. This suggests an effective way to improve the compensation of magnetic distortions. - Highlights: • A new vector compensation method is proposed for vehicle magnetic distortion. • The proposed model not only includes the magnetometer error but also considers magnetic distortion. • The compensation parameters are computed directly by solving nonlinear equations. • In contrast with traditional methods, the proposed method does not depend on the rotation angle rate. • The error intensity and RMS error can be reduced to 1/2 of the errors obtained with traditional methods.

  14. Improvement of vector compensation method for vehicle magnetic distortion field

    International Nuclear Information System (INIS)

    Pang, Hongfeng; Zhang, Qi; Li, Ji; Luo, Shitu; Chen, Dixiang; Pan, Mengchun; Luo, Feilu

    2014-01-01

    Magnetic distortions such as the eddy-current field and the low-frequency magnetic field have not been considered in vector compensation methods. A new compensation method is proposed to suppress these magnetic distortions and improve compensation performance, in which the magnetic distortions related to the measurement vectors and to time are considered. The experimental system mainly consists of a three-axis fluxgate magnetometer (DM-050), an underwater vehicle and a proton magnetometer; the scalar value of the magnetic field is obtained with the proton magnetometer and considered to be the true value. Compared with traditional compensation methods, experimental results show that the magnetic distortions can be further reduced by a factor of two. After compensation, the error intensity and RMS error are reduced from 11684.013 nT and 7794.604 nT to 16.219 nT and 5.907 nT, respectively. This suggests an effective way to improve the compensation of magnetic distortions. - Highlights: • A new vector compensation method is proposed for vehicle magnetic distortion. • The proposed model not only includes the magnetometer error but also considers magnetic distortion. • The compensation parameters are computed directly by solving nonlinear equations. • In contrast with traditional methods, the proposed method does not depend on the rotation angle rate. • The error intensity and RMS error can be reduced to 1/2 of the errors obtained with traditional methods.

  15. An augmented space recursive method for the first principles study of concentration profiles at CuNi alloy surfaces

    International Nuclear Information System (INIS)

    Dasgupta, I.; Mookerjee, A.

    1995-07-01

    We present here a first-principles method for the calculation of the effective cluster interactions for semi-infinite solid alloys required for the study of surface segregation and surface ordering on disordered surfaces. Our method is based on the augmented space recursion coupled with the orbital peeling method of Burke, in the framework of the TB-LMTO. Our study of surface segregation in CuNi alloys demonstrates strong copper segregation and a monotonic concentration profile throughout the concentration range. (author). 35 refs, 4 figs, 2 tabs

  16. Forecasting experiments of a dynamical-statistical model of the sea surface temperature anomaly field based on the improved self-memorization principle

    Science.gov (United States)

    Hong, Mei; Chen, Xi; Zhang, Ren; Wang, Dong; Shen, Shuanghe; Singh, Vijay P.

    2018-04-01

    With the objective of tackling the problem of inaccurate long-term El Niño-Southern Oscillation (ENSO) forecasts, this paper develops a new dynamical-statistical forecast model of the sea surface temperature anomaly (SSTA) field. To avoid single initial prediction values, a self-memorization principle is introduced to improve the dynamical reconstruction model, thus making the model more appropriate for describing such chaotic systems as ENSO events. The improved dynamical-statistical model of the SSTA field is used to predict SSTA in the equatorial eastern Pacific and during El Niño and La Niña events. The long-term step-by-step forecast results and cross-validated retroactive hindcast results of time series T1 and T2 are found to be satisfactory, with a Pearson correlation coefficient of approximately 0.80 and a mean absolute percentage error (MAPE) of less than 15 %. The corresponding forecast SSTA field is accurate in that not only is the forecast shape similar to the actual field but also the contour lines are essentially the same. This model can also be used to forecast the ENSO index. The temporal correlation coefficient is 0.8062, and the MAPE value of 19.55 % is small. The difference between forecast results in spring and those in autumn is not high, indicating that the improved model can overcome the spring predictability barrier to some extent. Compared with six mature models published previously, the present model has an advantage in prediction precision and length, and is a novel exploration of the ENSO forecast method.

  17. Eddy Covariance Method for CO2 Emission Measurements: CCS Applications, Principles, Instrumentation and Software

    Science.gov (United States)

    Burba, George; Madsen, Rod; Feese, Kristin

    2013-04-01

    The Eddy Covariance method is a micrometeorological technique for direct high-speed measurements of the transport of gases, heat, and momentum between the earth's surface and the atmosphere. Gas fluxes, emission and exchange rates are carefully characterized from single-point in-situ measurements using permanent or mobile towers, or moving platforms such as automobiles, helicopters, airplanes, etc. Since the early 1990s, this technique has been widely used by micrometeorologists across the globe for quantifying CO2 emission rates from various natural, urban and agricultural ecosystems [1,2], including areas of agricultural carbon sequestration. Presently, over 600 eddy covariance stations are in operation in over 120 countries. In the last 3-5 years, advancements in instrumentation and software have reached the point when they can be effectively used outside the area of micrometeorology, and can prove valuable for geological carbon capture and sequestration, landfill emission measurements, high-precision agriculture and other non-micrometeorological industrial and regulatory applications. In the field of geological carbon capture and sequestration, the magnitude of CO2 seepage fluxes depends on a variety of factors. Emerging projects utilize eddy covariance measurement to monitor large areas where CO2 may escape from the subsurface, to detect and quantify CO2 leakage, and to assure the efficiency of CO2 geological storage [3,4,5,6,7,8]. Although Eddy Covariance is one of the most direct and defensible ways to measure and calculate turbulent fluxes, the method is mathematically complex, and requires careful setup, execution and data processing tailor-fit to a specific site and a project. With this in mind, step-by-step instructions were created to introduce a novice to the conventional Eddy Covariance technique [9], and to assist in further understanding the method through more advanced references such as graduate-level textbooks, flux networks guidelines, journals
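
    The core of the flux calculation itself is short; the sketch below computes the covariance of vertical wind and gas concentration fluctuations over one averaging interval on synthetic 10 Hz data (real processing adds coordinate rotation, despiking, density and spectral corrections, none of which are shown here):

      import numpy as np

      rng = np.random.default_rng(0)
      n = 10 * 60 * 30                                     # 30 min of 10 Hz samples
      w = 0.3 * rng.normal(size=n)                         # vertical wind fluctuations, m/s
      c = 400.0 + 5.0*rng.normal(size=n) + 2.0*w           # gas concentration correlated with w

      w_prime = w - w.mean()                               # Reynolds decomposition
      c_prime = c - c.mean()
      flux = np.mean(w_prime * c_prime)                    # eddy covariance = turbulent flux term
      print(f"w'c' = {flux:.3f} (scale by air density for a mass flux)")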

  18. Reviewing the research methods literature: principles and strategies illustrated by a systematic overview of sampling in qualitative research.

    Science.gov (United States)

    Gentles, Stephen J; Charles, Cathy; Nicholas, David B; Ploeg, Jenny; McKibbon, K Ann

    2016-10-11

    Overviews of methods are potentially useful means to increase clarity and enhance collective understanding of specific methods topics that may be characterized by ambiguity, inconsistency, or a lack of comprehensiveness. This type of review represents a distinct literature synthesis method, although to date, its methodology remains relatively undeveloped despite several aspects that demand unique review procedures. The purpose of this paper is to initiate discussion about what a rigorous systematic approach to reviews of methods, referred to here as systematic methods overviews, might look like by providing tentative suggestions for approaching specific challenges likely to be encountered. The guidance offered here was derived from experience conducting a systematic methods overview on the topic of sampling in qualitative research. The guidance is organized into several principles that highlight specific objectives for this type of review given the common challenges that must be overcome to achieve them. Optional strategies for achieving each principle are also proposed, along with discussion of how they were successfully implemented in the overview on sampling. We describe seven paired principles and strategies that address the following aspects: delimiting the initial set of publications to consider, searching beyond standard bibliographic databases, searching without the availability of relevant metadata, selecting publications on purposeful conceptual grounds, defining concepts and other information to abstract iteratively, accounting for inconsistent terminology used to describe specific methods topics, and generating rigorous verifiable analytic interpretations. Since a broad aim in systematic methods overviews is to describe and interpret the relevant literature in qualitative terms, we suggest that iterative decision making at various stages of the review process, and a rigorous qualitative approach to analysis are necessary features of this review type

  19. New numerical methods for quantum field theories on the continuum

    Energy Technology Data Exchange (ETDEWEB)

    Emirdag, P.; Easter, R.; Guralnik, G.S.; Hahn, S.C

    2000-03-01

    The Source Galerkin Method is a new numerical technique that is being developed to solve Quantum Field Theories on the continuum. It is not based on Monte Carlo techniques and has a measure to evaluate relative errors. It promises to increase the accuracy and speed of calculations, and takes full advantage of symmetries of the theory. The application of this method to the non-linear sigma model is outlined.

  20. A novel background field removal method for MRI using projection onto dipole fields (PDF).

    Science.gov (United States)

    Liu, Tian; Khalidov, Ildar; de Rochefort, Ludovic; Spincemaille, Pascal; Liu, Jing; Tsiouris, A John; Wang, Yi

    2011-11-01

    For optimal image quality in susceptibility-weighted imaging and accurate quantification of susceptibility, it is necessary to isolate the local field generated by local magnetic sources (such as iron) from the background field that arises from imperfect shimming and variations in magnetic susceptibility of surrounding tissues (including air). Previous background removal techniques have limited effectiveness depending on the accuracy of model assumptions or information input. In this article, we report an observation that the magnetic field for a dipole outside a given region of interest (ROI) is approximately orthogonal to the magnetic field of a dipole inside the ROI. Accordingly, we propose a nonparametric background field removal technique based on projection onto dipole fields (PDF). In this PDF technique, the background field inside an ROI is decomposed into a field originating from dipoles outside the ROI using the projection theorem in Hilbert space. This novel PDF background removal technique was validated on a numerical simulation and a phantom experiment and was applied in human brain imaging, demonstrating substantial improvement in background field removal compared with the commonly used high-pass filtering method.
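
    A small 2-D toy version of the projection idea (Python/NumPy; the published method works in 3-D with the true dipole kernel, whereas the kernel, grid and source positions below are invented): the field inside the region of interest is fitted, in the least-squares sense, as a superposition of fields from unit dipoles placed outside the region, and that fit is subtracted to leave the local field.

      import numpy as np

      ny, nx = 64, 64
      Y, X = np.mgrid[0:ny, 0:nx]
      roi = np.hypot(X - nx/2, Y - ny/2) < 20              # circular region of interest

      def dipole_like(y0, x0):
          """Illustrative 2-D dipole-like kernel centred at (y0, x0), not the 3-D MRI kernel."""
          dy, dx = Y - y0, X - x0
          r2 = dy**2 + dx**2 + 1e-6
          return (2*dy**2 - dx**2) / r2**2

      local = dipole_like(32, 32) * roi                    # "tissue" source inside the ROI
      background = 3.0*dipole_like(5, 10) + 2.0*dipole_like(60, 55)   # sources outside the ROI
      total = local + background

      outside = np.argwhere(~roi)[::40]                    # subsampled candidate source positions
      A = np.stack([dipole_like(y, x)[roi] for y, x in outside], axis=1)
      coef, *_ = np.linalg.lstsq(A, total[roi], rcond=None)           # projection onto dipole fields
      background_fit = sum(c*dipole_like(y, x) for c, (y, x) in zip(coef, outside))

      local_est = (total - background_fit) * roi
      print("RMS error of recovered local field:",
            float(np.sqrt(np.mean((local_est - local)[roi]**2))))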

  1. Field Science Ethnography: Methods For Systematic Observation on an Expedition

    Science.gov (United States)

    Clancey, William J.; Clancy, Daniel (Technical Monitor)

    2001-01-01

    The Haughton-Mars expedition is a multidisciplinary project, exploring an impact crater in an extreme environment to determine how people might live and work on Mars. The expedition seeks to understand and field test Mars facilities, crew roles, operations, and computer tools. I combine an ethnographic approach, to establish a baseline understanding of how scientists prefer to live and work when relatively unencumbered, with a participatory design approach of experimenting with procedures and tools in the context of use. This paper focuses on field methods for systematically recording and analyzing the expedition's activities. Systematic photography and time-lapse video are combined with concept mapping to organize and present information. This hybrid approach is generally applicable to the study of modern field expeditions having a dozen or more multidisciplinary participants, spread over a large terrain during multiple field seasons.

  2. Towards quantitative accuracy in first-principles transport calculations: The GW method applied to alkane/gold junctions

    DEFF Research Database (Denmark)

    Strange, Mikkel; Thygesen, Kristian Sommer

    2011-01-01

    The calculation of the electronic conductance of nanoscale junctions from first principles is a long-standing problem in the field of charge transport. Here we demonstrate excellent agreement with experiments for the transport properties of the gold/alkanediamine benchmark system when electron-electron interactions are described by the many-body GW approximation. The conductance follows an exponential length dependence: Gn = Gc exp(-βn). The main difference from standard density functional theory (DFT) calculations is a significant reduction of the contact conductance, Gc, due to an improved alignment...

  3. Mathematical methods of many-body quantum field theory

    CERN Document Server

    Lehmann, Detlef

    2004-01-01

    Mathematical Methods of Many-Body Quantum Field Theory offers a comprehensive, mathematically rigorous treatment of many-body physics. It develops the mathematical tools for describing quantum many-body systems and applies them to the many-electron system. These tools include the formalism of second quantization, field theoretical perturbation theory, functional integral methods, bosonic and fermionic, and estimation and summation techniques for Feynman diagrams. Among the physical effects discussed in this context are BCS superconductivity, s-wave and higher l-wave, and the fractional quantum Hall effect. While the presentation is mathematically rigorous, the author does not focus solely on precise definitions and proofs, but also shows how to actually perform the computations. Presenting many recent advances and clarifying difficult concepts, this book provides the background, results, and detail needed to further explore the issue of when the standard approximation schemes in this field actually work and wh...

  4. The Grasshopper and the Taxonomer. Use of Song and Structure in Orthoptera Saltatoria for Teaching the Principles of Taxonomy. Part 1. Field and Laboratory Exercises

    Science.gov (United States)

    Broughton, W. B.

    1972-01-01

    Describes the coordinated study of European grasshoppers as living specimens in the field and as permanent laboratory preparations for introducing taxonomic principles. Provides details for the preparation of specimens and sample instructions provided to students. Part I of a three-part series. (AL)

  5. Numerical simulation of turbulent flow and heat transfer in parallel channel with an obstacle and verification of the field synergy principle

    Energy Technology Data Exchange (ETDEWEB)

    Tian, W.; Aye, M.; Qiu, S.; Jia, D. [Xi' an Jiaotong Univ., Dept. of Nuclear and Thermal Power Engineering, Xi' an (China)]. E-mail: wxtian_xjtu@163.com

    2004-07-01

    The field synergy principle was proposed by Guo Z. Y. on the basis of 2-D laminar boundary-layer flow, and it resulted from a second look at the mechanism of convective heat transfer. The objective of this paper is to numerically verify the applicability of this theory under turbulent flow, or even recirculating flow, conditions. (author)

  6. Advances in computational methods for Quantum Field Theory calculations

    NARCIS (Netherlands)

    Ruijl, B.J.G.

    2017-01-01

    In this work we describe three methods to improve the performance of Quantum Field Theory calculations. First, we simplify large expressions to speed up numerical integrations. Second, we design Forcer, a program for the reduction of four-loop massless propagator integrals. Third, we extend the R*

  7. The Diagonal Compression Field Method using Circular Fans

    DEFF Research Database (Denmark)

    Hansen, Thomas

    2006-01-01

    The presented method is a modification of the traditional method, the modification consisting of the introduction of circular fan stress fields. To ensure proper behaviour for the service load, the chosen value of the cotangent of the angle between the uniaxial concrete compression and the beam axis should not be too large...

  8. Field test of a new Australian method of rangeland monitoring

    Science.gov (United States)

    Suzanne Mayne; Neil West

    2001-01-01

    Managers need more efficient means of monitoring changes on the lands they manage. Accordingly, a new Australian approach was field tested and compared to the Daubenmire method of assessing plant cover, litter, and bare soil. The study area was a 2 mile wide by 30.15 mile long strip, mostly covered by salt desert shrub ecosystem types, centered along the SE boundary of...

  9. Three-dimensional wake field analysis by boundary element method

    International Nuclear Information System (INIS)

    Miyata, K.

    1987-01-01

    A computer code HERTPIA was developed for the calculation of electromagnetic wake fields excited by charged particles travelling through arbitrarily shaped accelerating cavities. This code solves transient wave problems for a Hertz vector. The numerical analysis is based on the boundary element method. This program is validated by comparing its results with analytical solutions in a pill-box cavity
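
    For reference, the pill-box benchmark mentioned above has closed-form resonances; the sketch below evaluates the TM_mnp frequencies of an ideal cylindrical cavity. The dimensions in the example are made up for illustration and are not taken from the record.

    ```python
    import numpy as np
    from scipy.special import jn_zeros

    C = 299792458.0   # speed of light (m/s)

    def pillbox_tm_frequency(radius, length, m=0, n=1, p=0):
        """Resonant frequency (Hz) of the TM_mnp mode of an ideal cylindrical
        (pill-box) cavity: f = c/(2*pi) * sqrt((x_mn/R)^2 + (p*pi/L)^2)."""
        x_mn = jn_zeros(m, n)[-1]            # n-th zero of the Bessel function J_m
        return (C / (2 * np.pi)) * np.sqrt((x_mn / radius)**2 + (p * np.pi / length)**2)

    # TM010 mode of a 10 cm radius, 5 cm long cavity: about 1.15 GHz
    print(pillbox_tm_frequency(0.10, 0.05))
    ```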

  10. General Anisotropy Identification of Paperboard with Virtual Fields Method

    Science.gov (United States)

    J.M. Considine; F. Pierron; K.T. Turner; D.W. Vahey

    2014-01-01

    This work extends previous efforts in plate bending of Virtual Fields Method (VFM) parameter identification to include a general 2-D anisotropic material. Such an extension was needed for instances in which material principal directions are unknown or when specimen orientation is not aligned with material principal directions. A new fixture with a multiaxial force...

  11. Setting up Information Literacy Workshops in School Libraries: Imperatives, Principles and Methods

    Directory of Open Access Journals (Sweden)

    Reza Mokhtarpour

    2010-09-01

    Full Text Available While much of the professional literature has discussed at length the importance of addressing information literacy in school libraries in the ICT-dominated era, few works have dealt with the nature and mode of implementation or offered a road map. The strategy emphasized in this paper is to hold information literacy sessions through effective workshops. While explaining the reasons why such workshops are essential for enhancing information literacy skills, the most important principles and stages for setting up such workshops are offered in a step-by-step manner.

  12. Use of economics methods in the field of radioactive wastes

    International Nuclear Information System (INIS)

    Lepine, Jacques.

    1981-01-01

    The broad principles of the discounted cash flow approach, which consists in introducing the time factor into economic calculations, are presented. The discounted cash flow (DCF) rate of return corresponds to the global balance between the supply of and demand for capital, or between savings and investments. Examples of applications are given: the DCF average cost of the nuclear kWh, of the cubic metre of stored waste, and of the cubic metre saved by a reduction in volume. Optimisation is also considered, that is to say, minimising the total DCF cost of the waste processing, transport and storage chain. The method is limited by other criteria: safety, radiation protection and political aspects. Nevertheless, it is useful to know their economic impact to avoid reaching prohibitive costs and to ensure that the decisions to come are consistent with those taken in the past [fr
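
    As a rough illustration of the discounting idea described above, the sketch below computes a discounted (levelized) average cost per cubic metre of stored waste. The figures, the 5% rate and the function name are illustrative assumptions, not values from the paper.

    ```python
    def discounted_average_cost(costs, volumes, rate):
        """Levelized (DCF) cost per unit volume of stored waste.

        costs[t]   -- total expenditure in year t (processing, transport, storage)
        volumes[t] -- waste volume handled in year t (m^3)
        rate       -- annual discount rate, e.g. 0.05 for 5 %
        """
        disc = [(1.0 + rate) ** -t for t in range(len(costs))]
        return sum(c * d for c, d in zip(costs, disc)) / \
               sum(v * d for v, d in zip(volumes, disc))

    # Illustrative figures only (not from the paper):
    print(discounted_average_cost(
        costs=[120e6, 80e6, 80e6, 60e6],      # currency units per year
        volumes=[4000, 5000, 5000, 4500],     # m^3 per year
        rate=0.05))
    ```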

  13. Role of Logic and Mentality as the Basics of Wittgenstein's Picture Theory of Language and Extracting Educational Principles and Methods According to This Theory

    Science.gov (United States)

    Heshi, Kamal Nosrati; Nasrabadi, Hassanali Bakhtiyar

    2016-01-01

    The present paper attempts to recognize principles and methods of education based on Wittgenstein's picture theory of language. This qualitative research utilized inferential analytical approach to review the related literature and extracted a set of principles and methods from his theory on picture language. Findings revealed that Wittgenstein…

  14. Study of blood flow inside the stenosis vessel under the effect of solenoid magnetic field using ferrohydrodynamics principles

    Science.gov (United States)

    Badfar, Homayoun; Motlagh, Saber Yekani; Sharifi, Abbas

    2017-10-01

    In this paper, biomagnetic blood flow in a stenosed vessel under the effect of a solenoid magnetic field is studied using the ferrohydrodynamics (FHD) model. A parabolic profile is prescribed at the inlet of the axisymmetric stenosed vessel. Blood is modeled as an electrically non-conducting, Newtonian and homogeneous fluid. The finite volume method and the SIMPLE (Semi-Implicit Method for Pressure Linked Equations) algorithm are utilized to discretize the governing equations. The investigation covers different magnetic numbers (MnF = 164, 328, 1640 and 3280) and numbers of coil loops (three, five and nine loops). Results indicate an increase in heat transfer, wall shear stress and energy loss (pressure drop) with an increment in the magnetic number (the ratio of the Kelvin force to the dynamic pressure force), arising from the FHD, and in the number of solenoid loops. Furthermore, the flow pattern is affected by the magnetic field, and the temperature of the blood can be decreased by up to 1.48 °C under the effect of the solenoid magnetic field with nine loops and a reference magnetic field (B0) of 2 tesla.

  15. International survey of methods used in health technology assessment (HTA: does practice meet the principles proposed for good research?

    Directory of Open Access Journals (Sweden)

    Stephens JM

    2012-08-01

    Full Text Available Jennifer M Stephens,1 Bonnie Handke,2 Jalpa A Doshi3 On behalf of the HTA Principles Working Group, part of the International Society for Pharmacoeconomics and Outcomes Research (ISPOR) HTA Special Interest Group (SIG); 1Pharmerit International, Bethesda, MD, USA; 2Medtronic Neuromodulation, Minneapolis, MN, USA; 3Center for Evidence-Based Practice and Center for Health Incentives and Behavioral Economics, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA. Objective: To describe research methods used internationally in health technology assessment (HTA) and health-care reimbursement policies; compare the survey findings on research methods and processes to published HTA principles; and discuss important issues/trends reported by HTA bodies related to current research methods and applications of the HTA process. Methods: Representatives from HTA bodies worldwide were recruited to complete an online survey consisting of 47 items within four topics: (1) organizational information and process, (2) primary HTA methodologies and importance of attributes, (3) HTA application and dissemination, and (4) quality of HTA, including key issues. Results were presented as a comparison of current HTA practices and research methods to published HTA principles. Results: The survey was completed by 30 respondents representing 16 countries in five major regions: Australia (n = 3), Canada (n = 2), Europe (n = 17), Latin America (n = 2), and the United States (n = 6). The most common methodologies used were systematic review, meta-analysis, and economic modeling. The most common attributes evaluated were effectiveness (more commonly than efficacy), cost-effectiveness, safety, and quality of life. The attributes assessed, relative importance of the attributes, and conformance with HTA principles varied by region/country. Key issues and trends facing HTA bodies included standardizing methods for economic evaluations and grading of evidence, lack of evidence

  16. Determination of the optimal method for the field-in-field technique in breast tangential radiotherapy

    International Nuclear Information System (INIS)

    Tanaka, Hidekazu; Hayashi, Shinya; Hoshi, Hiroaki

    2014-01-01

    Several studies have reported the usefulness of the field-in-field (FIF) technique in breast radiotherapy. However, the methods for the FIF technique used in these studies vary. These methods were classified into three categories. We simulated a radiotherapy plan with each method and analyzed the outcomes. In the first method, a pair of subfields was added to each main field: the single pair of subfields method (SSM). In the second method, three pairs of subfields were added to each main field: the multiple pairs of subfields method (MSM). In the third method, subfields were alternately added: the alternate subfields method (ASM). A total of 51 patients were enrolled in this study. The maximum dose to the planning target volume (PTV) (Dmax) and the volumes of the PTV receiving 100% of the prescription dose (V100%) were calculated. The thickness of the breast between the chest wall and skin surface was measured, and patients were divided into two groups according to the median. In the overall series, the average V100% with ASM (60.3%) was significantly higher than with SSM (52.6%) and MSM (48.7%). In the thin breast group as well, the average V100% with ASM (57.3%) and SSM (54.2%) was significantly higher than that with MSM (43.3%). In the thick breast group, the average V100% with ASM (63.4%) was significantly higher than that with SSM (51.0%) and MSM (54.4%). ASM resulted in better dose distribution, regardless of the breast size. Moreover, planning for ASM required a relatively short time. ASM was considered the most preferred method. (author)

  17. Four Methods for LIDAR Retrieval of Microscale Wind Fields

    Directory of Open Access Journals (Sweden)

    Thomas Naini

    2012-08-01

    Full Text Available This paper evaluates four wind retrieval methods for micro-scale meteorology applications with volume and time resolutions on the order of 30 m3 and 5 s. Wind field vectors are estimated using sequential time-lapse volume images of aerosol density fluctuations. Suitably designed mono-static scanning backscatter LIDAR systems, which are sensitive to atmospheric aerosol density fluctuations, are expected to be ideal for this purpose. An important application is wind farm siting and evaluation. In this case, it is necessary to look at the complicated region between the earth's surface and the boundary layer, where the wind can be turbulent and fractal, scaling from millimetres to kilometres. The methods are demonstrated first using a simple randomized moving hard target, and then with a physics-based stochastic space-time dynamic turbulence model. In the latter case the actual vector wind field is known, allowing complete space-time error analysis. Two of the methods, the semblance method and the spatio-temporal method, are found to be most suitable for wind field estimation.
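
    To make the time-lapse idea concrete, the sketch below estimates a single bulk displacement between two aerosol-density volumes by locating the peak of their circular cross-correlation and dividing by the frame interval. It is a generic illustration, not the semblance or spatio-temporal estimator of the paper; sub-voxel refinement and local windowing (needed for a spatially varying wind field) are omitted, and all names are illustrative.

    ```python
    import numpy as np

    def bulk_wind_estimate(vol0, vol1, dt, voxel_size):
        """Estimate one bulk wind vector (m/s) from two time-lapse aerosol volumes.

        The integer-voxel shift maximizing the circular cross-correlation of the
        mean-removed density fluctuations is converted to a velocity.
        """
        a = vol0 - vol0.mean()
        b = vol1 - vol1.mean()
        corr = np.real(np.fft.ifftn(np.conj(np.fft.fftn(a)) * np.fft.fftn(b)))
        shift = np.array(np.unravel_index(np.argmax(corr), corr.shape), dtype=float)
        dims = np.array(vol0.shape)
        shift = np.where(shift > dims / 2, shift - dims, shift)   # wrap to negative shifts
        return shift * np.asarray(voxel_size) / dt

    # Synthetic check: a blob advected by one voxel per axis between frames
    z, y, x = np.meshgrid(*[np.arange(32)] * 3, indexing="ij")
    blob = np.exp(-((z - 16)**2 + (y - 16)**2 + (x - 16)**2) / 20.0)
    print(bulk_wind_estimate(blob, np.roll(blob, (1, 1, 1), axis=(0, 1, 2)),
                             dt=5.0, voxel_size=3.0))   # ~[0.6, 0.6, 0.6] m/s
    ```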

  18. Multiresolution and Explicit Methods for Vector Field Analysis and Visualization

    Science.gov (United States)

    Nielson, Gregory M.

    1997-01-01

    This is a request for a second renewal (3d year of funding) of a research project on the topic of multiresolution and explicit methods for vector field analysis and visualization. In this report, we describe the progress made on this research project during the second year and give a statement of the planned research for the third year. There are two aspects to this research project. The first is concerned with the development of techniques for computing tangent curves for use in visualizing flow fields. The second aspect of the research project is concerned with the development of multiresolution methods for curvilinear grids and their use as tools for visualization, analysis and archiving of flow data. We report on our work on the development of numerical methods for tangent curve computation first.
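
    Since the report centres on computing tangent curves of flow fields, here is a minimal, generic streamline tracer using classical fourth-order Runge-Kutta integration. It illustrates the basic task only; it is not the report's multiresolution or curvilinear-grid machinery, and the function names are illustrative.

    ```python
    import numpy as np

    def tangent_curve(vector_field, seed, h=0.01, n_steps=500):
        """Trace a tangent curve (streamline) of a steady 2-D vector field.

        vector_field(p) -> np.array([u, v]) at point p; classic RK4 integration.
        """
        pts = [np.asarray(seed, dtype=float)]
        for _ in range(n_steps):
            p = pts[-1]
            k1 = vector_field(p)
            k2 = vector_field(p + 0.5 * h * k1)
            k3 = vector_field(p + 0.5 * h * k2)
            k4 = vector_field(p + h * k3)
            pts.append(p + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4))
        return np.array(pts)

    # Example: circular flow around the origin
    curve = tangent_curve(lambda p: np.array([-p[1], p[0]]), seed=(1.0, 0.0))
    ```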

  19. The Diagonal Compression Field Method using Circular Fans

    DEFF Research Database (Denmark)

    Hansen, Thomas

    2005-01-01

    This paper presents a new design method, which is a modification of the diagonal compression field method, the modification consisting of the introduction of circular fan stress fields. The traditional method does not allow changes of the concrete compression direction throughout a given beam...... if equilibrium is strictly required. This is conservative, since it is not possible to fully utilize the concrete strength in regions with low shear stresses. The larger the inclination of the uniaxial concrete stress (i.e. the smaller the value of the cotangent of its angle to the beam axis), the more transverse shear reinforcement is needed; hence it would be optimal...... if this value for a given beam could be set low in regions with high shear stresses and thereafter increased in regions with low shear stresses. Thus the shear reinforcement would be reduced and the concrete strength would be utilized in a better way. In the paper it is shown how circular fan stress...

  20. Method for solving quantum field theory in the Heisenberg picture

    International Nuclear Information System (INIS)

    Nakanishi, Noboru

    2004-01-01

    This paper is a review of the method for solving quantum field theory in the Heisenberg picture, developed by Abe and Nakanishi since 1991. Starting from field equations and canonical (anti) commutation relations, one sets up a (q-number) Cauchy problem for the totality of d-dimensional (anti) commutators between the fundamental fields, where d is the number of spacetime dimensions. Solving this Cauchy problem, one obtains the operator solution of the theory. Then one calculates all multiple commutators. A representation of the operator solution is obtained by constructing the set of all Wightman functions for the fundamental fields; the truncated Wightman functions are constructed so as to be consistent with all vacuum expectation values of the multiple commutators mentioned above and with the energy-positivity condition. By applying the method described above, exact solutions to various 2-dimensional gauge-theory and quantum-gravity models are found explicitly. The validity of these solutions is confirmed by comparing them with the conventional perturbation-theoretical results. However, a new anomalous feature, called the ''field-equation anomaly'', is often found to appear, and its perturbation-theoretical counterpart, unnoticed previously, is discussed. The conventional notion of an anomaly with respect to symmetry is reconsidered on the basis of the field-equation anomaly, and the derivation of the critical dimension in the BRS-formulated bosonic string theory is criticized. The method outlined above is applied to more realistic theories by expanding everything in powers of the relevant parameter, but this expansion is not equivalent to the conventional perturbative expansion. The new expansion is BRS-invariant at each order, in contrast to that in the conventional perturbation theory. Higher-order calculations are generally extremely laborious to perform explicitly. (author)

  1. The Factors and Transversal Reorganizations Principles of Romanian Textile Industry Enterprises using Activity-Based Costing Method

    Directory of Open Access Journals (Sweden)

    Sorinel Capusneanu

    2007-04-01

    Full Text Available This article describes the factors and the principles of the transversal reorganization of enterprises in the Romanian textile industry by adapting the Activity-Based Costing (ABC) method to its specifics. The real possibilities for reorganizing enterprises in Romania are presented and analyzed through the elaboration of the methodological phases to be covered up to the implementation of their transversal organization. Are we ready to adapt the Activity-Based Costing method to the specifics of the Romanian textile industry, and beyond? This is the question whose answer we will find in this article.

  2. Effective-field renormalization-group method for Ising systems

    Science.gov (United States)

    Fittipaldi, I. P.; De Albuquerque, D. F.

    1992-02-01

    A new applicable effective-field renormalization-group (EFRG) scheme for computing critical properties of Ising spin systems is proposed and used to study the phase diagrams of a quenched bond-mixed spin Ising model on square and Kagomé lattices. The present EFRG approach yields results that improve substantially on those obtained from the standard mean-field renormalization-group (MFRG) method. In particular, it is shown that the EFRG scheme correctly distinguishes the geometry of the lattice structure even when working with the smallest possible clusters, namely N'=1 and N=2.
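
    For orientation, the sketch below evaluates the standard MFRG baseline that the abstract compares against: for the smallest clusters (N' = 1, N = 2) the linearized recursion reduces to z K = (z - 1) K (1 + tanh K), so the critical coupling satisfies tanh K_c = 1/(z - 1), where z is the coordination number. This is the textbook MFRG estimate under those assumptions, not the EFRG scheme of the paper.

    ```python
    import numpy as np
    from scipy.optimize import brentq

    def mfrg_critical_coupling(z):
        """MFRG (N'=1, N=2 clusters) estimate of K_c = J/(k_B*T_c) for an Ising
        lattice with coordination number z: solves tanh(K_c) = 1/(z - 1)."""
        return brentq(lambda k: np.tanh(k) - 1.0 / (z - 1), 1e-9, 10.0)

    print(mfrg_critical_coupling(4))   # square (and Kagome) lattice: ~0.347; exact square value 0.4407
    print(mfrg_critical_coupling(3))   # honeycomb lattice: ~0.549; exact value 0.6585
    ```

    Because this baseline depends only on the coordination number z, it cannot distinguish the square lattice from the Kagomé lattice (both have z = 4), which is precisely the shortcoming the EFRG scheme is reported to overcome.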

  3. Teaching Methods Utilizing a Field Theory Viewpoint in the Elementary Reading Program.

    Science.gov (United States)

    LeChuga, Shirley; Lowry, Heath

    1980-01-01

    Suggests and lists sources of information on reading instruction that discuss the promotion and enrichment of the interactive learning process between children and their environment based on principles underlying the cognitive-field theory of learning. (MKM)

  4. Possible experiments to distinguish between different methods of treating the Pauli principle in nuclear potential models

    International Nuclear Information System (INIS)

    Krolle, D.; Assenbaum, H.J.; Funck, C.; Langanke, K.

    1987-01-01

    The finite Pauli repulsion model of Walliser and Nakaichi-Maeda and the orthogonality condition model are two microscopically motivated potential models for the description of nuclear collisions which, however, differ from each other in the way they incorporate antisymmetrization effects into the nucleus-nucleus interaction. We have used α+α scattering at low energies as a tool to distinguish between the two different treatments of the Pauli principle. Both models are consistent with the presently available on-shell (elastic) and off-shell (bremsstrahlung) data. We suggest further measurements of α+α bremsstrahlung including the coplanar laboratory differential cross section in Harvard geometry at α-particle angles of around 27° and the γ-decay width of the 4+ resonance at E_c.m. = 11.4 MeV, because in both cases the two models make significantly different predictions

  5. Process system and method for fabricating submicron field emission cathodes

    Science.gov (United States)

    Jankowski, Alan F.; Hayes, Jeffrey P.

    1998-01-01

    A process method and system for making field emission cathodes exists. The deposition source divergence is controlled to produce field emission cathodes with height-to-base aspect ratios that are uniform over large substrate surface areas while using very short source-to-substrate distances. The rate of hole closure is controlled from the cone source. The substrate surface is coated in well defined increments. The deposition source is apertured to coat pixel areas on the substrate. The entire substrate is coated using a manipulator to incrementally move the whole substrate surface past the deposition source. Either collimated sputtering or evaporative deposition sources can be used. The position of the aperture and its size and shape are used to control the field emission cathode size and shape.

  6. Methods for studying plasma charge transport across a magnetic field

    International Nuclear Information System (INIS)

    Popovich, A.S.

    1978-01-01

    A comparative analysis is carried out of experimental methods for studying the diffusive transport of charged plasma particles across the magnetic field, used to assess confinement effectiveness and the effect of instabilities. The methods considered are based on the analysis of the particle balance in the discharge and on the possibility of determining the diffusion coefficient from measured parameters: the density gradient and particle flux to the wall, the rate of plasma decay after the ionization source is switched off, and the radial profile of plasma density outside the active region of a stationary discharge. Much attention is paid to research methods for diffusive transport connected with the study of the propagation of periodic and aperiodic density perturbations in a plasma. The Golubev and Granovsky method of diffusion waves and its various modifications are analysed, as are the phase-analysis method of 'test charge' motion and different modifications of correlation methods. The physical preconditions of the latter are considered, and the one-sided interpretation of correlation measurements made in a number of works is criticized. The possibility of studying the independent (non-ambipolar) diffusion of electrons and ions in a plasma in a magnetic field is also analysed

  7. The reduced basis method for the electric field integral equation

    International Nuclear Information System (INIS)

    Fares, M.; Hesthaven, J.S.; Maday, Y.; Stamm, B.

    2011-01-01

    We introduce the reduced basis method (RBM) as an efficient tool for parametrized scattering problems in computational electromagnetics for problems where field solutions are computed using a standard Boundary Element Method (BEM) for the parametrized electric field integral equation (EFIE). This combination enables an algorithmic cooperation which results in a two step procedure. The first step consists of a computationally intense assembling of the reduced basis, that needs to be effected only once. In the second step, we compute output functionals of the solution, such as the Radar Cross Section (RCS), independently of the dimension of the discretization space, for many different parameter values in a many-query context at very little cost. Parameters include the wavenumber, the angle of the incident plane wave and its polarization.

  8. A geologic approach to field methods in fluvial geomorphology

    Science.gov (United States)

    Fitzpatrick, Faith A.; Thornbush, Mary J; Allen, Casey D; Fitzpatrick, Faith A.

    2014-01-01

    A geologic approach to field methods in fluvial geomorphology is useful for understanding causes and consequences of past, present, and possible future perturbations in river behavior and floodplain dynamics. Field methods include characterizing river planform and morphology changes and floodplain sedimentary sequences over long periods of time along a longitudinal river continuum. Techniques include topographic and bathymetric surveying of fluvial landforms in valley bottoms and describing floodplain sedimentary sequences through coring, trenching, and examining pits and exposures. Historical sediment budgets that include floodplain sedimentary records can characterize past and present sources and sinks of sediment along a longitudinal river continuum. Describing paleochannels and floodplain vertical accretion deposits, estimating long-term sedimentation rates, and constructing historical sediment budgets can assist in management of aquatic resources, habitat, sedimentation, and flooding issues.

  9. Modeling Enzymatic Transition States by Force Field Methods

    DEFF Research Database (Denmark)

    Hansen, Mikkel Bo; Jensen, Hans Jørgen Aagaard; Jensen, Frank

    2009-01-01

    The SEAM method, which models a transition structure as a minimum on the seam of two diabatic surfaces represented by force field functions, has been used to generate 20 transition structures for the decarboxylation of orotidine by the orotidine-5'-monophosphate decarboxylase enzyme. The dependence...... of the TS geometry on the flexibility of the system has been probed by fixing layers of atoms around the active site and using increasingly larger nonbonded cutoffs. The variability over the 20 structures is found to decrease as the system is made more flexible. Relative energies have been calculated...... by various electronic structure methods, where part of the enzyme is represented by a force field description and the effects of the solvent are represented by a continuum model. The relative energies vary by several hundreds of kJ/mol between the transition structures, and tests showed that a large part...

  10. Method for imaging with low frequency electromagnetic fields

    Science.gov (United States)

    Lee, Ki H.; Xie, Gan Q.

    1994-01-01

    A method for imaging with low frequency electromagnetic fields, and for interpreting the electromagnetic data using ray tomography, in order to determine the earth conductivity with high accuracy and resolution. The imaging method includes the steps of placing one or more transmitters at various positions in a plurality of transmitter holes, and placing a plurality of receivers in a plurality of receiver holes. The transmitters generate electromagnetic signals which diffuse through a medium, such as earth, toward the receivers. The measured diffusion field data H are then transformed into wavefield data U. The traveltimes corresponding to the wavefield data U are then obtained by charting the wavefield data U, using a different regularization parameter α for each transform. The desired property of the medium, such as conductivity, is then derived from the velocity, which in turn is constructed from the wavefield data U using ray tomography.

  11. Multi-phase-field method for surface tension induced elasticity

    Science.gov (United States)

    Schiedung, Raphael; Steinbach, Ingo; Varnik, Fathollah

    2018-01-01

    A method, based on the multi-phase-field framework, is proposed that adequately accounts for the effects of a coupling between surface free energy and elastic deformation in solids. The method is validated via a number of analytically solvable problems. In addition to stress states at mechanical equilibrium in complex geometries, the underlying multi-phase-field framework naturally allows us to account for the influence of surface energy induced stresses on phase transformation kinetics. This issue, which is of fundamental importance on the nanoscale, is demonstrated in the limit of fast diffusion for a solid sphere, which melts due to the well-known Gibbs-Thomson effect. This melting process is slowed down when coupled to surface energy induced elastic deformation.

  12. The virtual fields method applied to spalling tests on concrete

    Directory of Open Access Journals (Sweden)

    Forquin P.

    2012-08-01

    Full Text Available For one decade, spalling techniques based on the use of a metallic Hopkinson bar put in contact with a concrete sample have been widely employed to characterize the dynamic tensile strength of concrete at strain-rates ranging from a few tens to two hundred s−1. However, the processing method, mainly based on the use of the velocity profile measured on the rear free surface of the sample (Novikov formula), remains quite basic, and an identification of the whole softening behaviour of the concrete is out of reach. In the present paper a new processing method is proposed based on the use of the Virtual Fields Method (VFM). First, a digital high-speed camera is used to record pictures of a grid glued on the specimen. Next, full-field measurements are used to obtain the axial displacement field at the surface of the specimen. Finally, a specific virtual field has been defined in the VFM equation to use the acceleration map as an alternative 'load cell'. This method, applied to three spalling tests, allowed the identification of Young's modulus during the test. It was shown that this modulus is constant during the initial compressive part of the test and decreases in the tensile part when micro-damage exists. It was also shown that in such a simple inertial test it was possible to reconstruct average axial stress profiles using only the acceleration data. Then, it was possible to construct local stress-strain curves and derive a tensile strength value.
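
    The "acceleration map as a load cell" idea mentioned above boils down to a rigid-body force balance: the mean axial stress at a cross-section equals the density times the length of material between that section and the free end times the mean axial acceleration of that material. The sketch below applies this balance to a discretized acceleration map; the array layout, names and uniform-section assumption are illustrative, not the paper's implementation.

    ```python
    import numpy as np

    def mean_axial_stress_profile(accel_field, rho, dx):
        """Average axial stress at each cross-section of a 1-D specimen,
        reconstructed from a full-field axial acceleration map.

        accel_field : 2-D array (n_sections, n_points_per_section) of axial
                      acceleration; section 0 at the impacted end, last section
                      at the free end.
        rho         : mass density (kg/m^3)
        dx          : spacing between cross-sections (m)
        """
        a_section = accel_field.mean(axis=1)          # mean acceleration per section
        n = len(a_section)
        stress = np.empty(n)
        for i in range(n):
            beyond = a_section[i:]                    # material between x_i and the free end
            stress[i] = rho * (len(beyond) * dx) * beyond.mean()
        return stress
    ```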

  13. Method of a covering space in quantum field theory

    International Nuclear Information System (INIS)

    Serebryanyj, E.M.

    1982-01-01

    To construct the Green function of the Laplace operator in a domain M bounded by conducting surfaces, the generalized method of images is used. It is based on the replacement of the domain M by its discrete bundle, which is why the term 'method of covering space' is used. By continuing one of the coordinates to imaginary values, the Euclidean Green function is transformed into the causal one. This allows one to compute the vacuum stress-energy tensor of the massless scalar field if the vacuum is stable [ru
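
    The covering-space construction generalizes the familiar image-charge trick. As a reminder of the basic idea it builds on, the sketch below evaluates the Dirichlet Green function of the Laplacian for the simplest conducting geometry, a grounded plane, from a single mirror source; the geometry and function name are illustrative and are not taken from the paper.

    ```python
    import numpy as np

    def green_dirichlet_halfspace(r, r_src):
        """Green function of the Laplacian in the half-space z > 0 with a grounded
        conducting plane at z = 0, built from a single image source."""
        r = np.asarray(r, float)
        r_src = np.asarray(r_src, float)
        r_img = r_src * np.array([1.0, 1.0, -1.0])      # mirror the source through z = 0
        d_src = np.linalg.norm(r - r_src)
        d_img = np.linalg.norm(r - r_img)
        return 1.0 / (4 * np.pi * d_src) - 1.0 / (4 * np.pi * d_img)

    # The Green function vanishes on the conductor: a point with z = 0 gives 0
    print(green_dirichlet_halfspace([0.3, 0.2, 0.0], [0.0, 0.0, 1.0]))
    ```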

  14. Investigation of drag effect using the field signature method

    International Nuclear Information System (INIS)

    Wan, Zhengjun; Liao, Junbi; Tian, Gui Yun; Cheng, Liang

    2011-01-01

    The potential drop (PD) method is an established non-destructive evaluation (NDE) technique. The monitoring of internal corrosion, erosion and cracks in piping systems, based on electrical field mapping or direct current potential drop arrays, is also known as the field signature method (FSM). The FSM has been applied in the field of submarine pipe monitoring and land-based oil and gas transmission pipes and containers. In the experimental studies, to detect and calculate the degree of pipe corrosion, the FSM analyses the relationships between the electrical resistance and pipe thickness using an electrode matrix. The relevant drag effect, or trans-resistance, will cause a large margin of error in the application of resistance arrays. In this paper, the drag effect in resistance networks is investigated and analysed for the first time with the help of the FSM. Subsequently, a method to calculate the drag factors and eliminate the associated errors is proposed and presented. Theoretical analysis, simulation and experimental results show that the measurement accuracy can be improved by eliminating the errors caused by the drag effect

  15. Design of Edible Oil Degradation Tool by Using Electromagnetic Field Absorbtion Principle which was Characterized to Peroxide Number

    Science.gov (United States)

    Isnen, M.; Nasution, T. I.; Perangin-angin, B.

    2016-08-01

    The identification of changes in oil quality has been conducted by detecting the change in dielectric constant indicated by the sensor voltage. The sensor was formed from two parallel plates working on the principle of electromagnetic wave propagation. By measuring the attenuation of the electromagnetic wave amplitude caused by the interaction between the edible oil samples and the sensor, the dielectric constant, and hence the peroxide number, could be identified and estimated. In this case, the parallel plates were connected to a 700 kHz electric oscillator. The sensor system showed measurable voltage differences between the different samples. The testing was carried out on five oil samples after an oxidation treatment at a fixed temperature of 235 °C for 0, 5, 10, 15 and 20 minutes. Iodometric testing showed peroxide values of about 1.99, 9.95, 5.96, 11.86, and 15.92 meq/kg, respectively, with a rising trend. The sensor system showed voltage values of 1.139, 1.147, 1.165, 1.173, and 1.176 volts, respectively, also with a rising trend. This means that a higher sensor voltage indicates a greater degree of damage to the edible oil: heating damages the molecular structure of the oil, the more damaged structure is harder to polarize, and this is indicated by a smaller dielectric constant. Therefore the electric current is smaller, and the sensor voltage higher, for the more damaged oil. In other words, a higher sensor voltage corresponds to a smaller dielectric constant and a higher peroxide number.

  16. Design of Edible Oil Degradation Tool by Using Electromagnetic Field Absorbtion Principle which was Characterized to Peroxide Number

    International Nuclear Information System (INIS)

    Isnen, M; Nasution, T I; Perangin-angin, B

    2016-01-01

    The identification of changes in oil quality has been conducted by detecting the change in dielectric constant indicated by the sensor voltage. The sensor was formed from two parallel plates working on the principle of electromagnetic wave propagation. By measuring the attenuation of the electromagnetic wave amplitude caused by the interaction between the edible oil samples and the sensor, the dielectric constant, and hence the peroxide number, could be identified and estimated. In this case, the parallel plates were connected to a 700 kHz electric oscillator. The sensor system showed measurable voltage differences between the different samples. The testing was carried out on five oil samples after an oxidation treatment at a fixed temperature of 235 °C for 0, 5, 10, 15 and 20 minutes. Iodometric testing showed peroxide values of about 1.99, 9.95, 5.96, 11.86, and 15.92 meq/kg, respectively, with a rising trend. The sensor system showed voltage values of 1.139, 1.147, 1.165, 1.173, and 1.176 volts, respectively, also with a rising trend. This means that a higher sensor voltage indicates a greater degree of damage to the edible oil: heating damages the molecular structure of the oil, the more damaged structure is harder to polarize, and this is indicated by a smaller dielectric constant. Therefore the electric current is smaller, and the sensor voltage higher, for the more damaged oil. In other words, a higher sensor voltage corresponds to a smaller dielectric constant and a higher peroxide number. (paper)

  17. Field Sample Preparation Method Development for Isotope Ratio Mass Spectrometry

    International Nuclear Information System (INIS)

    Leibman, C.; Weisbrod, K.; Yoshida, T.

    2015-01-01

    Non-proliferation and International Security (NA-241) established a working group of researchers from Los Alamos National Laboratory (LANL), Pacific Northwest National Laboratory (PNNL) and Savannah River National Laboratory (SRNL) to evaluate the utilization of in-field mass spectrometry for safeguards applications. The survey of commercial off-the-shelf (COTS) mass spectrometers (MS) revealed no instrumentation existed capable of meeting all the potential safeguards requirements for performance, portability, and ease of use. Additionally, fieldable instruments are unlikely to meet the International Target Values (ITVs) for accuracy and precision for isotope ratio measurements achieved with laboratory methods. The major gaps identified for in-field actinide isotope ratio analysis were in the areas of: 1. sample preparation and/or sample introduction, 2. size reduction of mass analyzers and ionization sources, 3. system automation, and 4. decreased system cost. Development work in 2 through 4, numerated above continues, in the private and public sector. LANL is focusing on developing sample preparation/sample introduction methods for use with the different sample types anticipated for safeguard applications. Addressing sample handling and sample preparation methods for MS analysis will enable use of new MS instrumentation as it becomes commercially available. As one example, we have developed a rapid, sample preparation method for dissolution of uranium and plutonium oxides using ammonium bifluoride (ABF). ABF is a significantly safer and faster alternative to digestion with boiling combinations of highly concentrated mineral acids. Actinides digested with ABF yield fluorides, which can then be analyzed directly or chemically converted and separated using established column chromatography techniques as needed prior to isotope analysis. The reagent volumes and the sample processing steps associated with ABF sample digestion lend themselves to automation and field

  18. Grassmann phase space methods for fermions. II. Field theory

    Energy Technology Data Exchange (ETDEWEB)

    Dalton, B.J., E-mail: bdalton@swin.edu.au [Centre for Quantum and Optical Science, Swinburne University of Technology, Melbourne, Victoria 3122 (Australia); Jeffers, J. [Department of Physics, University of Strathclyde, Glasgow G4ONG (United Kingdom); Barnett, S.M. [School of Physics and Astronomy, University of Glasgow, Glasgow G12 8QQ (United Kingdom)

    2017-02-15

    In both quantum optics and cold atom physics, the behaviour of bosonic photons and atoms is often treated using phase space methods, where mode annihilation and creation operators are represented by c-number phase space variables, with the density operator equivalent to a distribution function of these variables. The anti-commutation rules for fermion annihilation, creation operators suggests the possibility of using anti-commuting Grassmann variables to represent these operators. However, in spite of the seminal work by Cahill and Glauber and a few applications, the use of Grassmann phase space methods in quantum-atom optics to treat fermionic systems is rather rare, though fermion coherent states using Grassmann variables are widely used in particle physics. This paper presents a phase space theory for fermion systems based on distribution functionals, which replace the density operator and involve Grassmann fields representing anti-commuting fermion field annihilation, creation operators. It is an extension of a previous phase space theory paper for fermions (Paper I) based on separate modes, in which the density operator is replaced by a distribution function depending on Grassmann phase space variables which represent the mode annihilation and creation operators. This further development of the theory is important for the situation when large numbers of fermions are involved, resulting in too many modes to treat separately. Here Grassmann fields, distribution functionals, functional Fokker–Planck equations and Ito stochastic field equations are involved. Typical applications to a trapped Fermi gas of interacting spin 1/2 fermionic atoms and to multi-component Fermi gases with non-zero range interactions are presented, showing that the Ito stochastic field equations are local in these cases. For the spin 1/2 case we also show how simple solutions can be obtained both for the untrapped case and for an optical lattice trapping potential.

  19. Grassmann phase space methods for fermions. II. Field theory

    International Nuclear Information System (INIS)

    Dalton, B.J.; Jeffers, J.; Barnett, S.M.

    2017-01-01

    In both quantum optics and cold atom physics, the behaviour of bosonic photons and atoms is often treated using phase space methods, where mode annihilation and creation operators are represented by c-number phase space variables, with the density operator equivalent to a distribution function of these variables. The anti-commutation rules for fermion annihilation, creation operators suggests the possibility of using anti-commuting Grassmann variables to represent these operators. However, in spite of the seminal work by Cahill and Glauber and a few applications, the use of Grassmann phase space methods in quantum-atom optics to treat fermionic systems is rather rare, though fermion coherent states using Grassmann variables are widely used in particle physics. This paper presents a phase space theory for fermion systems based on distribution functionals, which replace the density operator and involve Grassmann fields representing anti-commuting fermion field annihilation, creation operators. It is an extension of a previous phase space theory paper for fermions (Paper I) based on separate modes, in which the density operator is replaced by a distribution function depending on Grassmann phase space variables which represent the mode annihilation and creation operators. This further development of the theory is important for the situation when large numbers of fermions are involved, resulting in too many modes to treat separately. Here Grassmann fields, distribution functionals, functional Fokker–Planck equations and Ito stochastic field equations are involved. Typical applications to a trapped Fermi gas of interacting spin 1/2 fermionic atoms and to multi-component Fermi gases with non-zero range interactions are presented, showing that the Ito stochastic field equations are local in these cases. For the spin 1/2 case we also show how simple solutions can be obtained both for the untrapped case and for an optical lattice trapping potential.

  20. The experimental field work as practical learning method

    Directory of Open Access Journals (Sweden)

    Nicolás Fernández Losa

    2014-11-01

    Full Text Available This paper describes a teaching experience with experimental field work as a practical learning method, implemented in the subject of Organizational Behaviour. With this teaching experience we aim to change the practical training, as well as its evaluation process, in order to favour the development of students' transversal skills. For this purpose, the use of a practice plan, tackled through experimental field work and carried out with the collaboration of a business organization within a work team (as an organic unit of learning), arises as an alternative to the traditional method of practical teaching; it brings business reality into the classroom and actively promotes the use of transversal skills. In particular, we develop the experience in three phases. Initially, the students, after forming a working group and defining a field work project, must obtain the collaboration of a nearby business organization from which to gather data on one or more functional areas of organizational behaviour. Subsequently, the students carry out the field work, making the scheduled visits and preparing a report that establishes a diagnosis of the strategy followed by the company in these functional areas, in order to propose and justify alternative actions that improve on existing ones. Finally, the teachers assess the different field work reports and their public presentations according to evaluation rubrics, which try to objectify and unify the evaluation criteria as much as possible and serve to guide the learning process of the students. The results of the implementation of this teaching experience, measured through a Likert questionnaire, are very satisfactory for the students.

  1. Cosmological principles. II. Physical principles

    International Nuclear Information System (INIS)

    Harrison, E.R.

    1974-01-01

    The discussion of cosmological principle covers the uniformity principle of the laws of physics, the gravitation and cognizability principles, and the Dirac creation, chaos, and bootstrap principles. (U.S.)

  2. Laplace transform overcoming principle drawbacks in application of the variational iteration method to fractional heat equations

    Directory of Open Access Journals (Sweden)

    Wu Guo-Cheng

    2012-01-01

    Full Text Available This note presents a Laplace transform approach to the determination of the Lagrange multiplier when the variational iteration method is applied to the time-fractional heat diffusion equation. The presented approach is more straightforward and allows some simplification in the application of the variational iteration method to fractional differential equations, thus improving the convergence of the successive iterations.

  3. Field Methods for the Study of Slope and Fluvial Processes

    Science.gov (United States)

    Leopold, Luna Bergere; Leopold, Luna Bergere

    1967-01-01

    In Belgium during the summer of 1966 the Commission on Slopes and the Commission on Applied Geomorphology of the International Geographical Union sponsored a joint symposium, with field excursions, and meetings of the two commissions. As a result of the conference and associated discussions, the participants expressed the view that it would be a contribution to scientific work relating to the subject area if the Commission on Applied Geomorphology could prepare a small manual describing the methods of field investigation being used by research scientists throughout the world in the study of various aspects of slope development and fluvial processes. The Commission then assumed this responsibility and asked as many persons as were known to be working on this subject to contribute whatever they wished in the way of descriptions of methods being employed. The purpose of the present manual is to show the variety of study methods now in use, to describe from the experience gained the limitations and advantages of different techniques, and to give pertinent detail which might be useful to other investigators. Some details that would be useful to know are not included in scientific publications, but in a manual on methods the details of how best to use a method have a place. Various persons have learned certain things which cannot be done, as well as some methods that are successful. It is our hope that comparison of methods tried will give the reader suggestions as to how a particular method might best be applied to his own circumstance. The manual does not purport to include methods used by all workers. In particular, it does not interfere with a more systematic treatment of the subject (1) or with various papers already published in the present journal. In fact we are sure that there are pertinent research methods that we do not know of and the Commission would be glad to receive additions and other ideas from those who find they have something to contribute. Also, the

  4. Local Field Response Method Phenomenologically Introducing Spin Correlations

    Science.gov (United States)

    Tomaru, Tatsuya

    2018-03-01

    The local field response (LFR) method is a way of searching for the ground state in a similar manner to quantum annealing. However, the LFR method operates on a classical machine, and quantum effects are introduced through a priori information and through phenomenological means reflecting the states during the computations. The LFR method has been treated with a one-body approximation, and therefore, the effect of entanglement has not been sufficiently taken into account. In this report, spin correlations are phenomenologically introduced as one of the effects of entanglement, by which multiple tunneling at anticrossing points is taken into account. As a result, the accuracy of solutions for a 128-bit system increases by 31% compared with that without spin correlations.
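
    The paper's specific correlation-corrected update is not reproduced here, but the flavour of a classical local-field search can be illustrated with a generic soft-spin iteration in which each spin repeatedly responds to its instantaneous local field while the response is gradually hardened. All parameters and names below are illustrative assumptions, and the sketch contains no spin-correlation (entanglement) correction.

    ```python
    import numpy as np

    def local_field_search(J, h, n_sweeps=200, beta0=0.5, beta1=5.0, seed=0):
        """Generic local-field ground-state search for
        H = -sum_{ij} J_ij s_i s_j - sum_i h_i s_i,  s_i = +/-1.

        Soft spins respond to their local field through tanh(beta * field); beta
        is ramped up so the configuration gradually freezes into +/-1 values.
        """
        rng = np.random.default_rng(seed)
        n = len(h)
        s = rng.uniform(-0.1, 0.1, size=n)                 # soft spins in (-1, 1)
        for sweep in range(n_sweeps):
            beta = beta0 + (beta1 - beta0) * sweep / max(n_sweeps - 1, 1)
            for i in rng.permutation(n):
                field = J[i] @ s - J[i, i] * s[i] + h[i]   # local field at spin i
                s[i] = np.tanh(beta * field)               # respond to it
        return np.where(s >= 0, 1.0, -1.0)

    # Tiny ferromagnetic chain with a pinning field on the first spin
    J = np.diag(np.ones(4), 1) + np.diag(np.ones(4), -1)
    print(local_field_search(J, h=np.array([1.0, 0.0, 0.0, 0.0, 0.0])))   # -> all +1
    ```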

  5. Field effect transistors based on phosphorene nanoribbon with selective edge-adsorption: A first-principles study

    Science.gov (United States)

    Hu, Mengli; Yang, Zhixiong; Zhou, Wenzhe; Li, Aolin; Pan, Jiangling; Ouyang, Fangping

    2018-04-01

    By using density functional theory (DFT) and the nonequilibrium Green's function (NEGF) method, field effect transistors (FETs) based on zigzag-shaped phosphorene nanoribbons (ZPNRs) are investigated. The FETs are constructed with bare-edged ZPNRs as electrodes and H-, Cl- or OH-adsorbed ZPNRs as the channel. It is found that FETs with the three kinds of channel show similar transport properties. The FET is p-type with a maximum current on/off ratio of 10^4 and a minimum off-current of 1 nA. The working mode of the FETs depends on the parity of the channel length: it can be either enhancement mode or depletion mode, and the off-state current shows an even-odd oscillation. The current oscillations are interpreted with density of states (DOS) analysis and with evolution-operator and tight-binding-Hamiltonian methods. The operating mechanism of the designed FETs is also presented with projected local density of states and band diagrams.

  6. Probabilistic methods in the field of reactor safety in Germany

    Energy Technology Data Exchange (ETDEWEB)

    Birkhofer, A [Technische Univ. Muenchen (Germany, F.R.). Lehrstuhl fuer Reaktordynamik und Reaktorsicherheit

    1979-01-01

    The present status and future prospects in Germany of reliability, as well as risk analysis, in the field of reactor safety are examined. The development of analytical methods with respect to the available data base is reviewed with consideration of the roles of reliability codes, component data, common mode failures, human influence, structural analysis and process computers. Some examples of the application of probability assessments are discussed and the extension of reliability analysis beyond the loss-of-coolant accident is considered. In the case of risk analysis, the object is to determine not only the probability of failure of systems but also the probability and extent of possible consequences. Some risk studies under investigation in Germany and the methodology of risk analysis are discussed. Reliability and risk analysis are involved to an increasing extent in safety research and licensing procedures and their influence in other fields such as the public perception of risk is also discussed.
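
    As a minimal illustration of the kind of reliability arithmetic underlying the system analyses described above, the sketch below combines independent component failure probabilities through OR ("any failure fails the gate") and AND ("all must fail") gates. The component values and the toy system layout are invented for illustration and are unrelated to any study mentioned in the abstract.

    ```python
    def or_gate(probs):
        """Probability that a gate fails when ANY independent input fails."""
        q = 1.0
        for p in probs:
            q *= 1.0 - p
        return 1.0 - q

    def and_gate(probs):
        """Probability that a redundant group fails (ALL independent inputs fail)."""
        q = 1.0
        for p in probs:
            q *= p
        return q

    # Toy example: two redundant trains (each fails if its pump OR its power
    # supply fails) feeding one common valve. Numbers are purely illustrative.
    train = or_gate([1e-3, 5e-4])                        # single train unavailability
    system = or_gate([and_gate([train, train]), 2e-4])   # both trains fail, or the valve fails
    print(f"system unavailability ~ {system:.2e}")
    ```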

  7. Precise magnetostatic field using the finite element method

    International Nuclear Information System (INIS)

    Nascimento, Francisco Rogerio Teixeira do

    2013-01-01

    The main objective of this work is to simulate electromagnetic fields using the Finite Element Method. Even in the simplest cases of electrostatic and magnetostatic numerical simulation, some problems appear when the nodal finite element is used. It is difficult to model vector fields with scalar functions, mainly in non-homogeneous materials. With the aim of solving these problems, two types of techniques are tried: adaptive remeshing using nodal elements, and edge finite elements, which ensure the continuity of the tangential components. Some numerical analyses of simple electromagnetic problems with homogeneous and non-homogeneous materials are performed using, first, adaptive remeshing based on various error indicators and, second, the numerical solution of waveguides using edge finite elements. (author)

  8. Magnetic field adjustment structure and method for a tapered wiggler

    Science.gov (United States)

    Halbach, Klaus

    1988-01-01

    An improved method and structure is disclosed for adjusting the magnetic field generated by a group of electromagnet poles spaced along the path of a charged particle beam to compensate for energy losses in the charged particles which comprises providing more than one winding on at least some of the electromagnet poles; connecting one respective winding on each of several consecutive adjacent electromagnet poles to a first power supply, and the other respective winding on the electromagnet pole to a different power supply in staggered order; and independently adjusting one power supply to independently vary the current in one winding on each electromagnet pole in a group whereby the magnetic field strength of each of a group of electromagnet poles may be changed in smaller increments.

  9. METHOD AND APPARATUS FOR TRAPPING IONS IN A MAGNETIC FIELD

    Science.gov (United States)

    Luce, J.S.

    1962-04-17

    A method and apparatus are described for trapping ions within an evacuated container and within a magnetic field utilizing dissociation and/or ionization of molecular ions to form atomic ions and energetic neutral particles. The atomic ions are magnetically trapped as a result of a change of charge-to- mass ratio. The molecular ions are injected into the container and into the path of an energetic carbon arc discharge which dissociates and/or ionizes a portion of the molecular ions into atomic ions and energetic neutrals. The resulting atomic ions are trapped by the magnetic field to form a circulating beam of atomic ions, and the energetic neutrals pass out of the system and may be utilized in a particle accelerator. (AEC)

  10. Design principles for target stations and methods of remote handling at PSI

    International Nuclear Information System (INIS)

    Steiner, E.W.

    1992-01-01

    Two design concepts for target stations used at Paul Scherrer Institute (PSI) are shown. The method of the remote handling of activated elements is described and some conclusions with respect to a radioactive beam facility are given

  11. 40Ar-39Ar method for age estimation: principles, technique and application in orogenic regions

    International Nuclear Information System (INIS)

    Dalmejer, R.

    1984-01-01

    A recently developed variant of the K-Ar age estimation method, the 40Ar/39Ar method, is described. This method does not require a direct analysis of potassium; its content is calculated from the 39Ar formed from 39K under neutron activation. Errors resulting from the interactions of potassium and calcium nuclei with neutrons are considered. Attention is paid to the technique of gradual heating used in the 40Ar-39Ar method and to obtaining the age spectrum. The applicability of the isochron diagram is discussed for the case of excess argon present in a sample. Examples of the application of the 40Ar-39Ar method for dating events in orogenic regions are presented
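
    Since the record describes the method only qualitatively, here is the standard 40Ar/39Ar age equation in runnable form: t = (1/λ) ln(1 + J·(40Ar*/39Ar_K)), with J the neutron-fluence parameter obtained from a co-irradiated standard of known age and λ the total 40K decay constant. The numerical inputs in the example are illustrative, not data from the paper.

    ```python
    import numpy as np

    def ar_ar_age(ar40_star_over_ar39, J, lam=5.543e-10):
        """Apparent 40Ar/39Ar age (years) from t = (1/lam) * ln(1 + J * 40Ar*/39Ar_K),
        with lam the total 40K decay constant in 1/yr (Steiger & Jaeger value)."""
        return np.log(1.0 + J * ar40_star_over_ar39) / lam

    # Illustrative values only:
    print(ar_ar_age(ar40_star_over_ar39=20.0, J=0.01) / 1e6, "Ma")   # ~329 Ma
    ```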

  12. On the representation of the diffracted field of Hermite-Gaussian modes in an alien basis and the young diffraction principle

    International Nuclear Information System (INIS)

    Smirnov, V.N.; Strokovskii, G.A.

    1994-01-01

    An analytical form of the expansion coefficients of the diffracted field for an arbitrary Hermite-Gaussian beam in an alien Hermite-Gaussian basis is obtained. A possible physical interpretation of the well-known Young phenomenological diffraction principle and experiments on diffraction of Hermite-Gaussian beams of the lowest types (n = 0 - 5) from a half-plane are discussed. The case of nearly homogeneous expansion, corresponding to misalignment and mismatch of optical systems, is also analyzed. 7 refs., 2 figs

  13. Operation safety of control systems. Principles and methods; Surete de fonctionnement des systemes de commande. Principes et methodes

    Energy Technology Data Exchange (ETDEWEB)

    Aubry, J.F. [Institut National Polytechnique, 54 - Nancy (France); Chatelet, E. [Universite de Technologie de Troyes, 10 (France)

    2008-09-15

    This article presents the main operation safety methods that can be implemented to design safe control systems, taking into account the behaviour of the different components with each other (binary 'operation/failure' behaviours, non-consistent behaviours and 'hidden' failures, dynamical behaviours and temporal aspects, etc.). To take into account these different behaviours, advanced qualitative and quantitative methods have to be used, which are described in this article: 1 - qualitative methods of analysis: functional analysis, preliminary risk analysis, failure mode and failure effects analyses; 2 - quantitative study of systems operation safety: binary representation models, state space-based methods, event space-based methods; 3 - application to the design of control systems: safe specifications of a control system, qualitative analysis of operation safety, quantitative analysis, example of application; 4 - conclusion. (J.S.)

  14. Field testing, comparison, and discussion of five aeolian sand transport measuring devices operating on different measuring principles

    NARCIS (Netherlands)

    Goossens, Dirk; Nolet, Corjan; Etyemezian, Vicken; Duarte-campos, Leonardo; Bakker, Gerben; Riksen, Michel

    2018-01-01

    Five types of sediment samplers designed to measure aeolian sand transport were tested during a wind erosion event on the Sand Motor, an area on the west coast of the Netherlands prone to severe wind erosion. Each of the samplers operates on a different principle. The MWAC (Modified Wilson And

  15. Chemometrics Methods for Specificity, Authenticity and Traceability Analysis of Olive Oils: Principles, Classifications and Applications

    Directory of Open Access Journals (Sweden)

    Habib Messai

    2016-11-01

    Background. Olive oils (OOs) show high chemical variability due to several factors of genetic, environmental and anthropic types. Genetic and environmental factors are responsible for natural compositions and polymorphic diversification resulting in different varietal patterns and phenotypes. Anthropic factors, however, are at the origin of different blends’ preparation leading to normative, labelled or adulterated commercial products. Control of complex OO samples requires their (i) characterization by specific markers; (ii) authentication by fingerprint patterns; and (iii) monitoring by traceability analysis. Methods. These quality control and management aims require the use of several multivariate statistical tools: specificity highlighting requires ordination methods; authentication checking calls for classification and pattern recognition methods; traceability analysis implies the use of network-based approaches able to separate or extract mixed information and memorized signals from complex matrices. Results. This chapter presents a review of different chemometrics methods applied for the control of OO variability from metabolic and physical-chemical measured characteristics. The different chemometrics methods are illustrated by different study cases on monovarietal and blended OO originated from different countries. Conclusion. Chemometrics tools offer multiple ways for quantitative evaluations and qualitative control of complex chemical variability of OO in relation to several intrinsic and extrinsic factors.

  16. Chemometrics Methods for Specificity, Authenticity and Traceability Analysis of Olive Oils: Principles, Classifications and Applications

    Science.gov (United States)

    Messai, Habib; Farman, Muhammad; Sarraj-Laabidi, Abir; Hammami-Semmar, Asma; Semmar, Nabil

    2016-01-01

    Background. Olive oils (OOs) show high chemical variability due to several factors of genetic, environmental and anthropic types. Genetic and environmental factors are responsible for natural compositions and polymorphic diversification resulting in different varietal patterns and phenotypes. Anthropic factors, however, are at the origin of different blends’ preparation leading to normative, labelled or adulterated commercial products. Control of complex OO samples requires their (i) characterization by specific markers; (ii) authentication by fingerprint patterns; and (iii) monitoring by traceability analysis. Methods. These quality control and management aims require the use of several multivariate statistical tools: specificity highlighting requires ordination methods; authentication checking calls for classification and pattern recognition methods; traceability analysis implies the use of network-based approaches able to separate or extract mixed information and memorized signals from complex matrices. Results. This chapter presents a review of different chemometrics methods applied for the control of OO variability from metabolic and physical-chemical measured characteristics. The different chemometrics methods are illustrated by different study cases on monovarietal and blended OO originated from different countries. Conclusion. Chemometrics tools offer multiple ways for quantitative evaluations and qualitative control of complex chemical variability of OO in relation to several intrinsic and extrinsic factors. PMID:28231172

  17. Methods to assess bioavailability of hydrophobic organic contaminants: Principles, operations, and limitations.

    Science.gov (United States)

    Cui, Xinyi; Mayer, Philipp; Gan, Jay

    2013-01-01

    Many important environmental contaminants are hydrophobic organic contaminants (HOCs), which include PCBs, PAHs, PBDEs, DDT and other chlorinated insecticides, among others. Owing to their strong hydrophobicity, HOCs ultimately accumulate in soil or sediment, where their ecotoxicological effects are closely regulated by sorption and thus bioavailability. The last two decades have seen a dramatic increase in research efforts in developing and applying partitioning-based methods and biomimetic extractions for measuring HOC bioavailability. However, the many variations of both analytical methods and associated measurement endpoints are often a source of confusion for users. In this review, we distinguish the most commonly used analytical approaches based on their measurement objectives, and illustrate their practical operational steps, strengths and limitations using simple flowcharts. This review may serve as guidance for new users on the selection and use of established methods, and a reference for experienced investigators to identify potential topics for further research. Copyright © 2012 Elsevier Ltd. All rights reserved.

  18. A field method for monitoring thoron-daughter working level

    International Nuclear Information System (INIS)

    Khan, A.H.; Dhandayatham, R.; Raghavayya, M.; Nambiar, P.P.V.J.

    1975-01-01

    The concept of working level, generally used for radon daughters, has been extended to the daughter products of thoron. Accordingly, the thoron-daughter working level (TWL) has been defined as the alpha energy released from the ultimate decay of 100 pCi/l each of the short-lived decay products of thoron. In order to facilitate the evaluation of the inhalation hazard in thorium handling areas, a simple field method has been suggested to measure the thoron-daughter working level. A comparison of the potential alpha energies from radon daughters and from thoron daughters is included. (K.B.)

  19. Algebraic methods in statistical mechanics and quantum field theory

    CERN Document Server

    Emch, Dr Gérard G

    2009-01-01

    This systematic algebraic approach concerns problems involving a large number of degrees of freedom. It extends the traditional formalism of quantum mechanics, and it eliminates conceptual and mathematical difficulties common to the development of statistical mechanics and quantum field theory. Further, the approach is linked to research in applied and pure mathematics, offering a reflection of the interplay between formulation of physical motivations and self-contained descriptions of the mathematical methods.The four-part treatment begins with a survey of algebraic approaches to certain phys

  20. The large deviation principle and steady-state fluctuation theorem for the entropy production rate of a stochastic process in magnetic fields

    International Nuclear Information System (INIS)

    Chen, Yong; Ge, Hao; Xiong, Jie; Xu, Lihu

    2016-01-01

    The fluctuation theorem is one of the major achievements in the field of nonequilibrium statistical mechanics during the past two decades. Owing to technical difficulties, very few results exist on the steady-state fluctuation theorem for the sample entropy production rate, formulated as a large deviation principle for diffusion processes. Here we give a proof of the steady-state fluctuation theorem for a diffusion process in magnetic fields, with explicit expressions for the free energy function and the rate function. The proof is based on the Karhunen-Loève expansion of a complex-valued Ornstein-Uhlenbeck process.

  1. Obtaining the lattice energy of the anthracene crystal by modern yet affordable first-principles methods

    Science.gov (United States)

    Sancho-García, J. C.; Aragó, J.; Ortí, E.; Olivier, Y.

    2013-05-01

    The non-covalent interactions in organic molecules are known to drive their self-assembly into molecular crystals. We compare, for the case of anthracene and against the experimental (electronic-only) sublimation energy, how modern quantum-chemical methods calculate this cohesive energy, taking into account all the interactions between dimers occurring in both the first and second shells. These include O(N^6)- and O(N^5)-scaling methods, namely Local Pair Natural Orbital-parameterized Coupled-Cluster Singles and Doubles and Spin-Component-Scaled Møller-Plesset perturbation theory at second order, respectively, as well as the most recently conceived family of density functionals: double-hybrid expressions in several variants (B2-PLYP, mPW2-PLYP, PWPB95) with customized dispersion corrections (-D3 and -NL). All in all, these methods are shown to behave very accurately, producing errors in the 1-2 kJ/mol range with respect to the experimental value, taking into account the experimental uncertainty. These methods are thus confirmed as excellent tools for studying all kinds of interactions in chemical systems.

  2. A Modified Fluorimetric Method for Determination of Hydrogen Peroxide Using Homovanillic Acid Oxidation Principle

    Directory of Open Access Journals (Sweden)

    Biswaranjan Paital

    2014-01-01

    Hydrogen peroxide (H2O2) level in biological samples is used as an important index in various studies. Quantification of H2O2 level in tissue fractions in presence of H2O2 metabolizing enzymes may always provide an incorrect result. A modification is proposed for the spectrofluorimetric determination of H2O2 in homovanillic acid (HVA) oxidation method. The modification was included to precipitate biological samples with cold trichloroacetic acid (TCA, 5% w/v) followed by its neutralization with K2HPO4 before the fluorimetric estimation of H2O2 is performed. TCA was used to precipitate the protein portions contained in the tissue fractions. After employing the above modification, it was observed that H2O2 content in tissue samples was ≥2 fold higher than the content observed in unmodified method. Minimum 2 h incubation of samples in reaction mixture was required for completion of the reaction. The stability of the HVA dimer as reaction product was found to be >12 h. The method was validated by using known concentrations of H2O2 and catalase enzyme that quenches H2O2 as substrate. This method can be used efficiently to determine more accurate tissue H2O2 level without using internal standard and multiple samples can be processed at a time with additional low cost reagents such as TCA and K2HPO4.

  3. Collecting and analyzing qualitative data: Hermeneutic principles, methods and case examples

    Science.gov (United States)

    Michael E. Patterson; Daniel R. Williams

    2002-01-01

    Over the past three decades, the use of qualitative research methods has become commonplace in social science as a whole and increasingly represented in tourism and recreation research. In tourism, for example, Markwell and Basche (1998) recently noted the emergence of a pluralistic perspective on science and the growth of research employing qualitative frameworks....

  4. All-union Conference. Principles and methods of regional and geochemical investigations into radionuclide migration

    International Nuclear Information System (INIS)

    Khitrov, L.M.

    1989-01-01

    The collection presents abstracts of papers concerning landscape-geochemical research on radionuclide migration; aspects of the study of 'hot particles'; and the forms and behaviour of radionuclides in soils, in soil-plant and soil-natural water systems, as well as in aquatic ecosystems. Methods for studying the artificial radioactivity of natural objects are reviewed. The distribution of natural radionuclides in soils, natural waters, etc. is discussed

  5. Thermodynamic assessment of the Pd−Rh−Ru system using calphad and first-principles methods

    Energy Technology Data Exchange (ETDEWEB)

    Gossé, S., E-mail: stephane.gosse@cea.fr [DEN-Service de la Corrosion et du Comportement des Matériaux dans leur Environnement (SCCME), CEA, Université Paris-Saclay, F-91191, Gif-sur-Yvette (France); Dupin, N. [Calcul Thermodynamique, Rue de l' avenir, 63670, Orcet (France); Guéneau, C. [DEN-Service de la Corrosion et du Comportement des Matériaux dans leur Environnement (SCCME), CEA, Université Paris-Saclay, F-91191, Gif-sur-Yvette (France); Crivello, J.-C.; Joubert, J.-M. [Chimie Métallurgique des Terres Rares, Université Paris Est, ICMPE (UMR 7182), CNRS, UPEC, F-94320, Thiais (France)

    2016-06-15

    Palladium, rhodium and ruthenium are abundant fission products that form in oxide fuels in nuclear reactors. Under operating conditions, these Platinum-Group Metal (PGM) fission products accumulate in high concentrations at the rim of the oxide fuel and mainly precipitate into metallic solid solutions. Their thermochemistry is of significant interest to predict the high temperature chemical interactions between the fuel and the cladding or the possible precipitation of PGM phases in high level nuclear waste glasses. To predict the thermodynamic properties of these PGM fission products, a thermodynamic model of the ternary Pd−Rh−Ru system is being developed using the Calphad method. Because experimental thermodynamic data are scarce, Special Quasirandom Structures coupled with Density Functional Theory methods were used to calculate mixing enthalpy data in the solid solutions. The resulting thermodynamic description, based on only binary interaction parameters, is in good agreement with the few data on the ternary system. - Highlights: • The mixing enthalpy of solid solutions in the Pd−Rh−Ru system was calculated using the DFT and SQS methods. • A thermodynamic assessment of the Pd−Rh−Ru ternary system was performed using the Calphad method. • The extrapolation based on only binary interaction parameters leads to a good agreement with the data on the ternary system.

  6. Field method for determining thorium-230 in soils

    International Nuclear Information System (INIS)

    Dechant, G.

    1989-05-01

    The US Department of Energy (DOE) Office of Remedial Action and Waste Technology established the Technical Measurements Center at the DOE Grand Junction, Colorado, Projects Office to develop and/or recommend measurement methods for use in support of its remedial action programs. This report describes a field method for the determination of Th-230 in soils for remedial action projects. The method involves a cold acid digestion, organic extraction, and precipitation of Th-230 for counting by alpha spectrometry. An internal Th-228 spike is added early in the method to eliminate the need to determine losses through the separation steps. A step-by-step procedure is included in the report. The method requires a small portable or mobile laboratory equipped with electrical power and ventilation. All additional equipment is commercially available and no special equipment is required. Chemical wastes for each analysis are stored for appropriate disposal off site. A chemist or chemical technician can complete 15--30 analyses per day depending primarily on the number of alpha spectrometers available. 8 refs., 2 figs., 7 tabs

  7. Scattering in an intense radiation field: Time-independent methods

    International Nuclear Information System (INIS)

    Rosenberg, L.

    1977-01-01

    The standard time-independent formulation of nonrelativistic scattering theory is here extended to take into account the presence of an intense external radiation field. In the case of scattering by a static potential the extension is accomplished by the introduction of asymptotic states and intermediate-state propagators which account for the absorption and induced emission of photons by the projectile as it propagates through the field. Self-energy contributions to the propagator are included by a systematic summation of forward-scattering terms. The self-energy analysis is summarized in the form of a modified perturbation expansion of the type introduced by Watson some time ago in the context of nuclear-scattering theory. This expansion, which has a simple continued-fraction structure in the case of a single-mode field, provides a generally applicable successive approximation procedure for the propagator and the asymptotic states. The problem of scattering by a composite target is formulated using the effective-potential method. The modified perturbation expansion which accounts for self-energy effects is applicable here as well. A discussion of a coupled two-state model is included to summarize and clarify the calculational procedures

  8. Novel method for detecting weak magnetic fields at low frequencies

    Science.gov (United States)

    González-Martínez, S.; Castillo-Torres, J.; Mendoza-Santos, J. C.; Zamorano-Ulloa, R.

    2005-06-01

    A low-level-intensity magnetic field detection system has been designed and developed based on an amplification-selection process of signals. This configuration is also very sensitive to magnetic field changes produced by harmonic-like electrical currents transported in finite-length wires. Experimental and theoretical results on the detection of magnetic fields as low as 10^-9 T at 120 Hz are also presented, with an accuracy of around 13%. The assembled equipment is designed to measure the electromotive force induced in a free-magnetic-core coil in order to recover signals which are previously selected, despite the fact that their intensities are much lower than the ambient electromagnetic radiation. The prototype has a signal-to-noise ratio of 60 dB. This system also has the advantage of being usable as a portable measurement unit. The concept and prototype may be applied, for example, as a nondestructive method to analyze corrosion formation in metallic oil pipelines which are subjected to cathodic protection.
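
    The abstract does not specify the signal-processing algorithm used; purely as an illustration of how a weak harmonic component at a known frequency (here 120 Hz) can be pulled out of a noisy coil voltage, the sketch below uses synchronous (lock-in style) demodulation with invented signal and noise levels.

    # Recovering a weak 120 Hz component buried in noise by synchronous demodulation.
    # All signal and noise parameters are invented for illustration.
    import numpy as np

    fs = 10_000.0                               # sampling rate, Hz
    t = np.arange(0.0, 10.0, 1.0 / fs)          # 10 s record
    f0 = 120.0                                  # known signal frequency, Hz
    a_true = 1e-3                               # true amplitude (arbitrary units)

    rng = np.random.default_rng(0)
    v = a_true * np.sin(2 * np.pi * f0 * t + 0.3) + 0.01 * rng.standard_normal(t.size)

    # Multiply by quadrature references and average: broadband noise averages out,
    # leaving the in-phase and quadrature projections of the 120 Hz component.
    i_comp = 2.0 * np.mean(v * np.sin(2 * np.pi * f0 * t))
    q_comp = 2.0 * np.mean(v * np.cos(2 * np.pi * f0 * t))
    print(f"estimated amplitude: {np.hypot(i_comp, q_comp):.2e} (true {a_true:.2e})")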

  9. Fields Institute International Symposium on Asymptotic Methods in Stochastics

    CERN Document Server

    Kulik, Rafal; Haye, Mohamedou; Szyszkowicz, Barbara; Zhao, Yiqiang

    2015-01-01

    This book contains articles arising from a conference in honour of mathematician-statistician Miklós Csörgő on the occasion of his 80th birthday, held in Ottawa in July 2012. It comprises research papers and overview articles, which provide a substantial glimpse of the history and state of the art of the field of asymptotic methods in probability and statistics, written by leading experts. The volume consists of twenty articles on topics including limit theorems for self-normalized processes, planar processes, the central limit theorem and laws of large numbers, change-point problems, short and long range dependent time series, applied probability and stochastic processes, and the theory and methods of statistics. It also includes Csörgő's list of publications during more than 50 years, since 1962.

  10. Antiferromagnetic spintronics of Mn2Au: An experiment, first principle, mean field and series expansions calculations study

    Energy Technology Data Exchange (ETDEWEB)

    Masrour, R., E-mail: rachidmasrour@hotmail.com [Laboratory of Materials, Processes, Environment and Quality, Cady Ayyed University, National School of Applied Sciences, 63 46000, Safi (Morocco); LMPHE (URAC 12), Faculty of Science, Mohammed V-Agdal University, Rabat (Morocco); Hlil, E.K. [Institut Néel, CNRS et Université Joseph Fourier, BP 166, F-38042 Grenoble Cedex 9 (France); Hamedoun, M. [Institute of Nanomaterials and Nanotechnologies, MAScIR, Rabat (Morocco); Benyoussef, A. [LMPHE (URAC 12), Faculty of Science, Mohammed V-Agdal University, Rabat (Morocco); Institute of Nanomaterials and Nanotechnologies, MAScIR, Rabat (Morocco); Hassan II Academy of Science and Technology, Rabat (Morocco); Boutahar, A.; Lassri, H. [LPMMAT, Université Hassan II-Casablanca, Faculté des Sciences, BP 5366 Maârif (Morocco)

    2015-11-01

    Self-consistent ab initio calculations, based on the DFT (Density Functional Theory) approach and using the FLAPW (Full Potential Linear Augmented Plane Wave) method, are performed to investigate both the electronic and the magnetic properties of Mn2Au. Polarized spin and spin–orbit coupling are included in the calculations within the framework of an antiferromagnetic state between two adjacent Mn planes. Magnetic moments, considered to lie along the (110) axes, are computed. The data obtained from the ab initio calculations are used as input for high temperature series expansion (HTSE) calculations to compute other magnetic parameters. The exchange interactions between the magnetic Mn–Mn atoms in Mn2Au are obtained using the experimental results and mean field theory. The high temperature series expansion (HTSE) of the magnetic susceptibility with the magnetic moments in Mn2Au (m_Mn) is given up to tenth order in 1/k_BT. The Néel temperature T_N is obtained by HTSE combined with the Padé approximant method. The critical exponent associated with the magnetic susceptibility is deduced as well. - Highlights: • Both the electronic and the magnetic properties of Mn2Au are studied. • The exchange interactions between the magnetic Mn–Mn atoms in Mn2Au are given. • The Néel temperature T_N of Mn2Au is obtained by the HTSE method. • The critical exponent associated with the magnetic susceptibility is deduced.

  11. σ-SCF: A direct energy-targeting method to mean-field excited states.

    Science.gov (United States)

    Ye, Hong-Zhou; Welborn, Matthew; Ricke, Nathan D; Van Voorhis, Troy

    2017-12-07

    The mean-field solutions of electronic excited states are much less accessible than ground state (e.g., Hartree-Fock) solutions. Energy-based optimization methods for excited states, like Δ-SCF (self-consistent field), tend to fall into the lowest solution consistent with a given symmetry, a problem known as "variational collapse." In this work, we combine the ideas of direct energy-targeting and variance-based optimization in order to describe excited states at the mean-field level. The resulting method, σ-SCF, has several advantages. First, it allows one to target any desired excited state by specifying a single parameter: a guess of the energy of that state. It can therefore, in principle, find all excited states. Second, it avoids variational collapse by using a variance-based, unconstrained local minimization. As a consequence, all states, ground or excited, are treated on an equal footing. Third, it provides an alternate approach to locate Δ-SCF solutions that are otherwise hardly accessible by the usual non-aufbau configuration initial guess. We present results for this new method for small atoms (He, Be) and molecules (H2, HF). We find that σ-SCF is very effective at locating excited states, including individual, high energy excitations within a dense manifold of excited states. Like all single determinant methods, σ-SCF shows prominent spin-symmetry breaking for open shell states and our results suggest that this method could be further improved with spin projection.

  12. σ-SCF: A direct energy-targeting method to mean-field excited states

    Science.gov (United States)

    Ye, Hong-Zhou; Welborn, Matthew; Ricke, Nathan D.; Van Voorhis, Troy

    2017-12-01

    The mean-field solutions of electronic excited states are much less accessible than ground state (e.g., Hartree-Fock) solutions. Energy-based optimization methods for excited states, like Δ-SCF (self-consistent field), tend to fall into the lowest solution consistent with a given symmetry—a problem known as "variational collapse." In this work, we combine the ideas of direct energy-targeting and variance-based optimization in order to describe excited states at the mean-field level. The resulting method, σ-SCF, has several advantages. First, it allows one to target any desired excited state by specifying a single parameter: a guess of the energy of that state. It can therefore, in principle, find all excited states. Second, it avoids variational collapse by using a variance-based, unconstrained local minimization. As a consequence, all states—ground or excited—are treated on an equal footing. Third, it provides an alternate approach to locate Δ-SCF solutions that are otherwise hardly accessible by the usual non-aufbau configuration initial guess. We present results for this new method for small atoms (He, Be) and molecules (H2, HF). We find that σ-SCF is very effective at locating excited states, including individual, high energy excitations within a dense manifold of excited states. Like all single determinant methods, σ-SCF shows prominent spin-symmetry breaking for open shell states and our results suggest that this method could be further improved with spin projection.
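
    The energy-targeting idea can be illustrated outside the self-consistent-field context with a toy matrix Hamiltonian: minimizing the variance-like functional <(H - w)^2> over normalized states selects the eigenstate whose energy lies closest to the guess w, which is the property σ-SCF exploits at the mean-field level. The sketch below uses an arbitrary random symmetric matrix and is not the authors' implementation.

    # Toy illustration of energy targeting via variance minimization:
    # minimizing <psi|(H - w)^2|psi> over normalized psi selects the eigenstate of H
    # whose eigenvalue is closest to the target w (a random matrix stands in for a Hamiltonian).
    import numpy as np

    rng = np.random.default_rng(1)
    A = rng.standard_normal((8, 8))
    H = 0.5 * (A + A.T)                         # arbitrary symmetric "Hamiltonian"

    w = 1.0                                     # target energy guess
    M = (H - w * np.eye(8)) @ (H - w * np.eye(8))
    vals, vecs = np.linalg.eigh(M)
    psi = vecs[:, 0]                            # minimizer of the variance-like functional

    energy = psi @ H @ psi
    closest = min(np.linalg.eigvalsh(H), key=lambda e: abs(e - w))
    print(f"target w = {w}, selected energy = {energy:.6f}, closest eigenvalue = {closest:.6f}")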

  13. Methods for experimental design principles and applications for physicists and chemists

    CERN Document Server

    Goupy, J

    1993-01-01

    A method for organizing and conducting scientific experiments is described in this volume which enables experimenters to reduce the number of trials run, while retaining all the parameters that may influence the result. The choice of ideal experiments is based on mathematical concepts, but the author adopts a practical approach and uses theory only when necessary. Written for experimenters by an experimenter, it is an introduction to the philosophy of scientific investigation. Researchers with limited time and resources at their disposal will find this text a valuable guide for solving specific problems efficiently. The presentation makes extensive use of examples, and the approach and methods are graphical rather than numerical. All calculations can be performed on a personal computer; readers are assumed to have no previous knowledge of the subject. The presentation is such that the beginner may acquire a thorough understanding of the basic concepts. However, there is also sufficient material to challenge t...

  14. Principles and methods of neutron activation analysis (NAA) in improved water resources development

    International Nuclear Information System (INIS)

    Dim, L. A.

    2000-01-01

    The method of neutron activation analysis (NAA), as it applies to water resources exploration, exploitation and management, has been reviewed and its capabilities demonstrated. NAA has been found to be superior to, and to offer higher sensitivity than, many other analytical techniques for the analysis of water. The implications of the chemical and elemental concentrations determined in water (water pollution and quality) for environmental impact assessment, aquatic life and human health are briefly highlighted

  15. Transient plant transformation mediated by Agrobacterium tumefaciens: Principles, methods and applications.

    Science.gov (United States)

    Krenek, Pavel; Samajova, Olga; Luptovciak, Ivan; Doskocilova, Anna; Komis, George; Samaj, Jozef

    2015-11-01

    Agrobacterium tumefaciens is widely used as a versatile tool for the development of stably transformed model plants and crops. However, the development of Agrobacterium-based transient plant transformation methods has attracted substantial attention in recent years. Transient transformation methods offer several applications with advantages over stable transformation, such as rapid and scalable recombinant protein production and in planta functional genomics studies. Herein, we highlight Agrobacterium and plant genetic factors affecting the transfer of T-DNA from Agrobacterium into the plant cell nucleus and subsequent transient transgene expression. We also review recent methods concerning Agrobacterium-mediated transient transformation of model plants and crops and outline key physical, physiological and genetic factors leading to their successful establishment. Of particular interest are Agrobacterium-based reverse genetics studies in economically important crops relying on the use of RNA interference (RNAi) or virus-induced gene silencing (VIGS) technology. The applications of Agrobacterium-based transient plant transformation technology in the biotech industry are presented in thorough detail. These involve the production of recombinant proteins (plantibodies, vaccines and therapeutics) and effectoromics-assisted breeding of late blight resistance in potato. In addition, we also discuss the biotechnological potential of recombinant GFP technology and present our own examples of successful Agrobacterium-mediated transient plant transformations. Copyright © 2015 Elsevier Inc. All rights reserved.

  16. Principles and Overview of Sampling Methods for Modeling Macromolecular Structure and Dynamics.

    Science.gov (United States)

    Maximova, Tatiana; Moffatt, Ryan; Ma, Buyong; Nussinov, Ruth; Shehu, Amarda

    2016-04-01

    Investigation of macromolecular structure and dynamics is fundamental to understanding how macromolecules carry out their functions in the cell. Significant advances have been made toward this end in silico, with a growing number of computational methods proposed yearly to study and simulate various aspects of macromolecular structure and dynamics. This review aims to provide an overview of recent advances, focusing primarily on methods proposed for exploring the structure space of macromolecules in isolation and in assemblies for the purpose of characterizing equilibrium structure and dynamics. In addition to surveying recent applications that showcase current capabilities of computational methods, this review highlights state-of-the-art algorithmic techniques proposed to overcome challenges posed in silico by the disparate spatial and time scales accessed by dynamic macromolecules. This review is not meant to be exhaustive, as such an endeavor is impossible, but rather aims to balance breadth and depth of strategies for modeling macromolecular structure and dynamics for a broad audience of novices and experts.

  17. A hybrid method of incorporating extended priority list into equal incremental principle for energy-saving generation dispatch of thermal power systems

    International Nuclear Information System (INIS)

    Cheng, Chuntian; Li, Shushan; Li, Gang

    2014-01-01

    The energy-saving generation dispatch (ESGD) policy released by the Chinese Government in 2007 is a new code for optimally dispatching the electric power generation portfolio in the country, with the dual objectives of improving energy efficiency and reducing environmental pollution. ESGD is substantially different from the competitive markets of the developed economies, from traditional economic dispatch, and from the rational dispatching principle implemented in China prior to the new policy. This paper develops a hybrid method that integrates the extended priority list (EPL), the equal incremental principle (EIP) and a heuristic method to optimize daily generation schedules under ESGD. The EPL is used to search for a desirable unit set that satisfies the complicated duration-period requirements, based on the thermal unit generation priority list. The EIP is developed to allocate load among the committed units within the combined set. A heuristic method is proposed to deal with the inequality constraints, which usually make power allocation difficult, and is used to improve these results. The algorithm has been embedded into a newly developed decision support system that is currently being used by operators of the Guizhou Province Power Grid to make day-ahead quarter-hourly generation schedules. - Highlights: • The electric power industry is one of the key fields for energy conservation and emission reduction in China. • The energy-saving generation dispatch policy was released by the Chinese Government in 2007. • A hybrid algorithm for energy-saving generation dispatch scheduling of thermal power systems is presented. • The algorithm has been embedded into a newly developed decision support system
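
    The equal incremental principle used here for load allocation can be sketched with a standard lambda iteration on quadratic fuel-cost curves: each committed unit is loaded so that all incremental costs are equal, subject to its output limits. The unit data and demand below are invented, and the priority-list, duration and emission considerations handled elsewhere in the paper are ignored.

    # Equal incremental principle (lambda iteration) for sharing a load among
    # committed thermal units with quadratic costs C_i(P) = a_i + b_i*P + c_i*P^2.
    # Unit data and the demand are invented placeholders.
    import numpy as np

    b = np.array([7.00, 7.85, 7.97])                 # linear cost coefficients
    c = np.array([0.00156, 0.00194, 0.00482])        # quadratic cost coefficients
    p_min = np.array([100.0, 100.0, 50.0])           # MW limits
    p_max = np.array([600.0, 400.0, 200.0])
    demand = 850.0                                   # MW to be allocated

    def dispatch(lam):
        """Unit outputs when each incremental cost b + 2cP equals lam, clipped to limits."""
        return np.clip((lam - b) / (2.0 * c), p_min, p_max)

    lo, hi = 0.0, 50.0                               # bracket for the incremental cost
    for _ in range(60):                              # bisection until total output meets demand
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if dispatch(mid).sum() < demand else (lo, mid)

    p = dispatch(0.5 * (lo + hi))
    print(f"lambda = {0.5 * (lo + hi):.3f}, outputs = {np.round(p, 1)}, total = {p.sum():.1f} MW")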

  18. Core principles of evolutionary medicine: A Delphi study.

    Science.gov (United States)

    Grunspan, Daniel Z; Nesse, Randolph M; Barnes, M Elizabeth; Brownell, Sara E

    2018-01-01

    Evolutionary medicine is a rapidly growing field that uses the principles of evolutionary biology to better understand, prevent and treat disease, and that uses studies of disease to advance basic knowledge in evolutionary biology. Over-arching principles of evolutionary medicine have been described in publications, but our study is the first to systematically elicit core principles from a diverse panel of experts in evolutionary medicine. These principles should be useful to advance recent recommendations made by The Association of American Medical Colleges and the Howard Hughes Medical Institute to make evolutionary thinking a core competency for pre-medical education. The Delphi method was used to elicit and validate a list of core principles for evolutionary medicine. The study included four surveys administered in sequence to 56 expert panelists. The initial open-ended survey created a list of possible core principles; the three subsequent surveys winnowed the list and assessed the accuracy and importance of each principle. For fourteen core principles, at least 80% of the panelists agreed or strongly agreed that they were important core principles for evolutionary medicine. These principles overlapped with concepts discussed in other articles on key concepts in evolutionary medicine. This set of core principles will be helpful for researchers and instructors in evolutionary medicine. We recommend that evolutionary medicine instructors use the list of core principles to construct learning goals. Evolutionary medicine is a young field, so this list of core principles will likely change as the field develops further.

  19. A New Method for Coronal Magnetic Field Reconstruction

    Science.gov (United States)

    Yi, Sibaek; Choe, Gwang-Son; Cho, Kyung-Suk; Kim, Kap-Sung

    2017-08-01

    A precise way of reconstructing (extrapolating) the coronal magnetic field is an indispensable tool for understanding various solar activities. A variety of reconstruction codes have been developed and are available to researchers, but each has its shortcomings. In this paper, a new efficient method for coronal magnetic field reconstruction is presented. The method imposes only the normal components of the magnetic field and current density at the bottom boundary, to avoid overspecification of the reconstruction problem, and employs vector potentials to guarantee divergence-freeness. In our method, the normal component of the current density is imposed not by adjusting the tangential components of A, but by adjusting its normal component. This allows us to avoid a possible numerical instability that occasionally arises in codes using A. In real reconstruction problems, information for the lateral and top boundaries is absent. The arbitrariness of the boundary conditions imposed there, as well as various preprocessing steps, brings about a diversity of resulting solutions. We impose a source surface condition at the top boundary to accommodate flux imbalance, which always shows up in magnetograms. To enhance the convergence rate, we equip our code with a gradient-method type accelerator. Our code is tested on two analytical force-free solutions. When the solution is given only at the bottom boundary, our result surpasses competitors in most of the figures of merit devised by Schrijver et al. (2006). We have also applied our code to the real active region NOAA 11974, in which two M-class flares and a halo CME took place. The EUV observations show the sudden appearance of an erupting loop before the first flare. Our numerical solutions show that two entwining flux tubes exist before the flare and that their entanglement is released after the CME, with one of them opened up. We suggest that the erupting loop is created by magnetic reconnection between

  20. The Applicability of Traditional Protection Methods to Lines Emanating from VSC-HVDC Interconnectors and a Novel Protection Principle

    Directory of Open Access Journals (Sweden)

    Shimin Xue

    2016-05-01

    Voltage source converter (VSC)-based high voltage direct current (VSC-HVDC) interconnectors can realize accurate and fast control of power transmission among AC networks, and provide emergency power support for AC networks. VSC-HVDC interconnectors bring exclusive fault characteristics to AC networks, thus influencing the performance of traditional protections. Since fault characteristics are related to the control schemes of interconnectors, a fault ride-through (FRT) strategy, which is applicable to the interconnector operating characteristic of working in four quadrants and capable of eliminating negative-sequence currents under unbalanced fault conditions, is proposed first. Then, the additional terms of measured impedances of distance relays caused by fault resistances are derived using a symmetrical component method. Theoretical analysis shows the output currents of interconnectors are controllable after faults, which may cause malfunctions in distance protections installed on lines emanating from interconnectors under the effect of fault resistances. Pilot protection is also inapplicable to lines emanating from interconnectors. Furthermore, a novel pilot protection principle based on the ratio between phase currents and the ratio between negative-sequence currents flowing through both sides is proposed for lines emanating from the interconnectors whose control scheme aims at eliminating negative-sequence currents. The validity of theoretical analysis and the protection principle is verified by PSCAD/EMTDC simulations.
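
    To make the compared quantities concrete, the sketch below extracts the negative-sequence component of the phase currents at the two line ends with the standard Fortescue transform and forms their magnitude ratio; the phasor values are invented, and the actual criteria and thresholds of the proposed protection principle are not reproduced.

    # Negative-sequence current extraction (Fortescue transform) and the side-to-side
    # magnitude ratio compared by ratio-based pilot protection criteria.
    # The phase-current phasors are invented for illustration.
    import numpy as np

    a = np.exp(2j * np.pi / 3)                        # 120-degree rotation operator

    def negative_sequence(ia, ib, ic):
        """I2 = (Ia + a^2 * Ib + a * Ic) / 3."""
        return (ia + a**2 * ib + a * ic) / 3.0

    def phasor(mag, deg):
        return mag * np.exp(1j * np.deg2rad(deg))

    # Phase currents (A) at the two ends M and N of the protected line during an
    # unbalanced fault; values are purely illustrative.
    i2_m = negative_sequence(phasor(1200, -20), phasor(400, -140), phasor(420, 100))
    i2_n = negative_sequence(phasor(300, 160), phasor(110, 40), phasor(100, -80))
    print(f"|I2_M| = {abs(i2_m):.1f} A, |I2_N| = {abs(i2_n):.1f} A, ratio = {abs(i2_m) / abs(i2_n):.2f}")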

  1. A novel method for passing cerebrospinal fluid shunt tubing: a proof of principle study.

    Science.gov (United States)

    Tubbs, R Shane; Goodrich, Dylan; Tubbs, Isaiah; Loukas, Marios; Cohen-Gadol, Aaron A

    2014-12-01

    Few innovations in the method of tunneling shunt tubing for cerebrospinal fluid (CSF) shunt diversion have been made since this treatment of hydrocephalus was first developed. Therefore, this feasibility study was performed with the hope of identifying an improved technique that could potentially carry fewer complications. On 10 cadaver sides and when placed in the supine position, small skin incisions were made at the clavicle and ipsilateral subcostal region, and magnets were used to pass standard shunt tubing between the two incisions. Nickel-plated magnets were less effective in pulling the shunt tubing below the skin compared with ceramic magnets. Of these, magnets with pull strengths of 150-200 lbs were the most effective in dragging the subcutaneous tubing between the two incisions. No obvious damage to the skin from the overlying magnet was seen in any specimen. Few options exist for tunneling distal shunt tubing for CSF shunt procedures. Future patient studies are needed to determine if the technique described herein is superior to current methods, particularly when examining patient groups that are at a greater risk for injury during tunneling shunt catheters.

  2. Methodical principles of assessment of financial compensation for clinical trial volunteer participants

    Directory of Open Access Journals (Sweden)

    V. Ye. Dobrova

    2013-10-01

    Introduction. Due to the necessity to obtain reliable results from a clinical trial and to generalize them to the overall population of patients, the problem of recruiting an adequate number of individuals to participate in the study as subjects of observation, whether in the group receiving the investigational medicinal product or in the control group, has to be solved. Aim of study. The aim of our study was to investigate and practically justify the methodological approaches to determining financial compensation for the participation of volunteers in clinical trials and the appropriate methods of its calculation. Material and methods. For the purpose of determining the baseline factors for calculating the hourly compensation, surveys of healthy volunteers and of expert professionals were carried out and their results analyzed. Healthy volunteers were questioned regarding their attitudes towards inconvenience and discomfort during participation in clinical trials at Ukrainian clinical research centers; the 99 survey participants were healthy volunteers who had taken part in phase I clinical trials or bioequivalence studies. The expert survey included 193 professionals from Ukrainian clinical research centers, CROs, pharmaceutical manufacturers (research sponsors) and collaborators of the State Expert Center of the Ministry of Health of Ukraine, who were involved in the planning, organization, implementation and evaluation of clinical trials as well as their regulatory control. Results of study. Using the method of pairwise comparisons and iterative refinement procedures, a collective estimate of the expert questionnaire results was performed; nine indicators were identified and the importance of each of them as a unit of discomfort was established. Motivational factors of voluntary participation in clinical trials have been studied. Motivation system for

  3. A fast point-cloud computing method based on spatial symmetry of Fresnel field

    Science.gov (United States)

    Wang, Xiangxiang; Zhang, Kai; Shen, Chuan; Zhu, Wenliang; Wei, Sui

    2017-10-01

    Computer-generated holography (CGH) faces a great challenge because a high space-bandwidth product (SBP) is required in real-time holographic video display systems. This paper is based on the point-cloud method and takes advantage of the reversibility of Fresnel diffraction along the propagation direction and of the spatial symmetry of the fringe pattern of a point source (the Gabor zone plate), which can therefore be used as a basis for fast calculation of the diffracted field in CGH. A fast Fresnel CGH method based on the novel look-up table (N-LUT) method is proposed: first, the principal fringe patterns (PFPs) at the virtual plane are pre-calculated by the acceleration algorithm and stored; secondly, the Fresnel diffraction fringe pattern at the dummy plane is obtained; finally, the field is propagated from the dummy plane to the hologram plane. Simulation experiments and optical experiments based on Liquid Crystal on Silicon (LCOS) are set up to demonstrate the validity of the proposed method; while ensuring the quality of the 3D reconstruction, the proposed method shortens the computation time and improves computational efficiency.
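
    A stripped-down sketch of the look-up-table idea described above: the Fresnel (Gabor-zone-plate-like) fringe pattern of a single point at a given depth is pre-computed once and re-used for every object point at that depth by a lateral shift. Grid size, wavelength, depth and object points are placeholders, and the acceleration details of the paper are not reproduced.

    # Point-cloud CGH with a pre-computed principal fringe pattern (PFP): the Fresnel
    # fringe of one point at depth z is computed once and shifted laterally for every
    # object point at that depth.  All parameters are placeholders.
    import numpy as np

    N = 256                        # hologram of N x N samples
    pitch = 8e-6                   # sampling pitch (m)
    wl = 532e-9                    # wavelength (m)
    z = 0.2                        # depth of the point layer (m)

    x = (np.arange(N) - N // 2) * pitch
    X, Y = np.meshgrid(x, x)
    pfp = np.exp(1j * np.pi * (X**2 + Y**2) / (wl * z))   # principal fringe pattern (Gabor zone plate)

    points = [(60, 80, 1.0), (128, 128, 0.7), (190, 100, 0.5)]   # (row, col, amplitude) placeholders

    holo = np.zeros((N, N), dtype=complex)
    for r, c, amp in points:
        # Re-use the PFP by shifting it to the point position; the circular shift stands
        # in for the indexed look-up of the N-LUT scheme.
        holo += amp * np.roll(pfp, (r - N // 2, c - N // 2), axis=(0, 1))

    fringe = np.real(holo)         # e.g. the real part as an amplitude hologram
    print(fringe.shape, float(fringe.min()), float(fringe.max()))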

  4. Handbook of thin film deposition processes and techniques principles, methods, equipment and applications

    CERN Document Server

    Seshan, Krishna

    2002-01-01

    New second edition of the popular book on deposition (first edition by Klaus Schruegraf) for engineers, technicians, and plant personnel in the semiconductor and related industries. This book traces the technology behind the spectacular growth in the silicon semiconductor industry and the continued trend in miniaturization over the last 20 years. This growth has been fueled in large part by improved thin film deposition techniques and the development of highly specialized equipment to enable this deposition. The book includes much cutting-edge material. Entirely new chapters on contamination and contamination control describe the basics and the issues: as feature sizes shrink to sub-micron dimensions, cleanliness and particle elimination have to keep pace. A new chapter on metrology explains the growth of sophisticated, automatic tools capable of measuring thickness and spacing of sub-micron dimensions. The book also covers PVD, laser and e-beam assisted deposition, MBE, and ion beam methods to bring together a...

  5. Principles of disaster management. Lesson 7: Management leadership styles and methods.

    Science.gov (United States)

    Cuny, F C

    2000-01-01

    This lesson explores the use of different management leadership styles and methods as applied to disaster management situations. Leadership and command are differentiated. Mechanisms that can be used to influence others include: 1) coercion; 2) reward; 3) position; 4) knowledge; and 5) admiration. Factors that affect leadership include: 1) individual characteristics; 2) competence; 3) experience; 4) self-confidence; 5) judgment; 6) decision-making; and 8) style. Experience and understanding of the task are important factors for leadership. Four styles of leadership are developed: 1) directive; 2) supportive; 3) participative; and 4) achievement-oriented. The application of each of these styles is discussed. The styles are discussed further as they relate to the various stages of a disaster. The effects of interpersonal relationships and the effects of the environment are stressed. Lastly, leadership does not just happen because a person is appointed as a manager; it must be earned.

  6. Principle of diffraction enhanced imaging (DEI) and computed tomography based on DEI method

    International Nuclear Information System (INIS)

    Zhu Peiping; Huang Wanxia; Yuan Qingxi; Wang Junyue; Zheng Xin; Shu Hang; Chen Bo; Liu Yijin; Li Enrong; Wu Ziyu; Yu Jian

    2006-01-01

    In the first part of this article, a more general DEI equation is derived using simple concepts. The new DEI equation not only explains everything that can be explained by the DEI equation proposed by Chapman, but also explains effects that the old DEI equation cannot, such as the noise background caused by small-angle scattering reflected by the analyzer. In the second part, a DEI-PI-CT formula is proposed and the contour contrast caused by the extinction of the refracted beam is qualitatively explained; then, based on the work of Ando's group, two formulae for refraction CT with the DEI method are proposed. Combining one refraction CT formula proposed by Dilmanian with the two refraction CT formulae proposed by us, a complete CT algorithm framework can be constructed to reconstruct the three components of the gradient of the refractive index. (authors)

  7. Nomenclature and principle of neutron humidistats design and methods of their checking

    International Nuclear Information System (INIS)

    Chaladze, A.P.; Melkumyan, V.E.

    1980-01-01

    The state of neutron hydrometry in ferrous metallurgy is considered. The nomenclature and technical characteristics of neutron humidistats and the methods of their testing are presented, as well as the local testing diagram for imitator certification and the testing of devices. According to their design, neutron humidistats can be classified as two- or three-channel. As regards their structural realization, humidistats are classified into devices of the external type, designed for measuring humidity in technological vessels, and devices of the superposition type, designed for measuring the humidity of material on a moving conveyor. The design of imitators for all types of humidistats is similar, namely the use of neutron moderators and absorbers displaced relative to each other

  8. Human-Automation Integration: Principle and Method for Design and Evaluation

    Science.gov (United States)

    Billman, Dorrit; Feary, Michael

    2012-01-01

    Future space missions will increasingly depend on integration of complex engineered systems with their human operators. It is important to ensure that the systems that are designed and developed do a good job of supporting the needs of the work domain. Our research investigates methods for needs analysis. We included analysis of work products (plans for regulation of the space station) as well as work processes (tasks using current software), in a case study of Attitude Determination and Control Officers (ADCO) planning work. This allows comparing how well different designs match the structure of the work to be supported. Redesigned planning software that better matches the structure of work was developed and experimentally assessed. The new prototype enabled substantially faster and more accurate performance in plan revision tasks. This success suggests the approach to needs assessment and use in design and evaluation is promising, and merits investigation in future research.

  9. First-principle study on bonding mechanism of ZnO by LDA+U method

    International Nuclear Information System (INIS)

    Zhou, G.C.; Sun, L.Z.; Zhong, X.L.; Chen Xiaoshuang; Wei Lu; Wang, J.B.

    2007-01-01

    The electronic structure and the bonding mechanism of ZnO have been studied using the Full-Potential Linear Augmented Plane Wave (FP-LAPW) method within density-functional theory (DFT) with the LDA+U exchange-correlation potential. The valence and bonding charge densities are calculated and compared with those derived from LDA and GGA to describe the bonding mechanism. The charge transfer accompanying the bonding process is analyzed using the theory of Atoms in Molecules (AIM). The bonding, the topological characteristics and the p-d coupling effects on the bonding mechanism of ZnO are shown quantitatively through the critical points (CPs) along the bonding trajectory and the charge in the atomic basins. Meanwhile, the bonding characteristics of the wurtzite, zinc blende and rocksalt phases of ZnO are discussed systematically in the present paper

  10. Embedded and real time system development a software engineering perspective concepts, methods and principles

    CERN Document Server

    Saeed, Saqib; Darwish, Ashraf; Abraham, Ajith

    2014-01-01

    Nowadays embedded and real-time systems contain complex software. The complexity of embedded systems is increasing, and the amount and variety of software in embedded products are growing. This creates a big challenge for embedded and real-time software development processes, and there is a need to develop separate metrics and benchmarks. “Embedded and Real Time System Development: A Software Engineering Perspective: Concepts, Methods and Principles” presents practical as well as conceptual knowledge of the latest tools, techniques and methodologies of embedded software engineering and real-time systems. Each chapter includes an in-depth investigation regarding the actual or potential role of software engineering tools in the context of the embedded system and real-time system. The book presents state-of-the-art and future perspectives with industry experts, researchers, and academicians sharing ideas and experiences including surrounding frontier technologies, breakthroughs, innovative solutions and...

  11. Teaching geometrical principles to design students

    NARCIS (Netherlands)

    Feijs, L.M.G.; Bartneck, C.

    2009-01-01

    We propose a new method of teaching the principles of geometry to design students. The students focus on a field of design in which geometry is the design: tessellation. We review different approaches to geometry and the field of tessellation before we discuss the setup of the course. Instead of

  12. A Trustworthiness Evaluation Method for Software Architectures Based on the Principle of Maximum Entropy (POME) and the Grey Decision-Making Method (GDMM)

    Directory of Open Access Journals (Sweden)

    Rong Jiang

    2014-09-01

    As the early design decision-making structure, a software architecture plays a key role in the final software product quality and in the whole project. In the software design and development process, an effective evaluation of the trustworthiness of a software architecture can help in making scientific and reasonable decisions on the architecture, which are necessary for the construction of highly trustworthy software. In view of the lack of trustworthiness evaluation and measurement studies for software architectures, this paper provides a trustworthy attribute model of software architecture. Based on this model, the paper proposes to use the Principle of Maximum Entropy (POME) and the Grey Decision-Making Method (GDMM) as the trustworthiness evaluation method for a software architecture, proves the scientific soundness and rationality of this method, and verifies its feasibility through case analysis.

  13. Beam Based Measurements of Field Multipoles in the RHIC Low Beta Insertions and Extrapolation of the Method to the LHC

    CERN Document Server

    Koutchouk, Jean-Pierre; Ptitsyn, V I

    2001-01-01

    The multipolar content of the dipoles and quadrupoles is known to limit the stability of the beam dynamics in super-conducting machines like RHIC and even more in the LHC. The low-beta quadrupoles are thus equipped with correcting coils up to the dodecapole order. The correction is planned to rely on magnetic measurements. We show that a relatively simple method allows an accurate measurement of the multipolar field aberrations using the beam. The principle is to displace the beam in the non-linear fields by local closed orbit bumps and to measure the variation of sensitive beam observables. The resolution and robustness of the method are found appropriate. Experimentation at RHIC showed clearly the presence of normal and skew sextupolar field components in addition to a skew quadrupolar component in the interaction regions. Higher-order components up to decapole order appear as well.

  14. Quantum statistical field theory an introduction to Schwinger's variational method with Green's function nanoapplications, graphene and superconductivity

    CERN Document Server

    Morgenstern Horing, Norman J

    2017-01-01

    This book provides an introduction to the methods of coupled quantum statistical field theory and Green's functions. The methods of coupled quantum field theory have played a major role in the extensive development of nonrelativistic quantum many-particle theory and condensed matter physics. This introduction to the subject is intended to facilitate delivery of the material in an easily digestible form to advanced undergraduate physics majors at a relatively early stage of their scientific development. The main mechanism to accomplish this is the early introduction of variational calculus and the Schwinger Action Principle, accompanied by Green's functions. Important achievements of the theory in condensed matter and quantum statistical physics are reviewed in detail to help develop research capability. These include the derivation of coupled field Green's function equations-of-motion for a model electron-hole-phonon system, extensive discussions of retarded, thermodynamic and nonequilibrium Green's functions...

  15. Electronic structure and lattice dynamics of CaPd3B studied by first-principles methods

    International Nuclear Information System (INIS)

    Music, Denis; Ahuja, Rajeev; Schneider, Jochen M.

    2006-01-01

    Using first-principles methods, we have studied the electronic structure and lattice dynamics of CaPd3B and compared them to isostructural MgNi3C. CaPd3B possesses fewer electronic states at the Fermi level, but more phonon modes at low frequencies, than MgNi3C. According to the phonon density of states, low-frequency acoustic modes are dominated by Pd states, corresponding to Ni in MgNi3C. Furthermore, these Pd modes show soft phonons, which may be significant for second-order phase transitions. Based on the comparison to MgNi3C, we suggest that the properties of these two compounds may be similar

  16. Spatio-temporal spike train analysis for large scale networks using the maximum entropy principle and Monte Carlo method

    International Nuclear Information System (INIS)

    Nasser, Hassan; Cessac, Bruno; Marre, Olivier

    2013-01-01

    Understanding the dynamics of neural networks is a major challenge in experimental neuroscience. For that purpose, a modelling of the recorded activity that reproduces the main statistics of the data is required. In the first part, we present a review on recent results dealing with spike train statistics analysis using maximum entropy models (MaxEnt). Most of these studies have focused on modelling synchronous spike patterns, leaving aside the temporal dynamics of the neural activity. However, the maximum entropy principle can be generalized to the temporal case, leading to Markovian models where memory effects and time correlations in the dynamics are properly taken into account. In the second part, we present a new method based on Monte Carlo sampling which is suited for the fitting of large-scale spatio-temporal MaxEnt models. The formalism and the tools presented here will be essential to fit MaxEnt spatio-temporal models to large neural ensembles. (paper)
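    As an illustration only, the sketch below fits the synchronous (purely spatial) special case of such a model, a pairwise maximum-entropy model, by Monte Carlo sampling and moment matching; the spike raster is surrogate data, and the spatio-temporal (Markovian) extension discussed in the record is not reproduced here:

        import numpy as np

        rng = np.random.default_rng(0)
        n_neurons, n_bins = 5, 2000
        data = (rng.random((n_bins, n_neurons)) < 0.2).astype(float)   # surrogate spike raster

        mean_data = data.mean(axis=0)                  # target firing rates
        corr_data = data.T @ data / n_bins             # target pairwise moments

        h = np.zeros(n_neurons)                        # fields
        J = np.zeros((n_neurons, n_neurons))           # couplings (zero diagonal)

        def sample(h, J, n_samples=500, n_updates=50):
            """Single-site Gibbs sampling from the pairwise MaxEnt model."""
            s = (rng.random(n_neurons) < 0.5).astype(float)
            out = np.empty((n_samples, n_neurons))
            for k in range(n_samples):
                for _ in range(n_updates):
                    i = rng.integers(n_neurons)
                    p = 1.0 / (1.0 + np.exp(-(h[i] + J[i] @ s)))
                    s[i] = float(rng.random() < p)
                out[k] = s
            return out

        eta = 0.1
        for _ in range(30):                            # gradient ascent by moment matching
            model = sample(h, J)
            h += eta * (mean_data - model.mean(axis=0))
            dJ = eta * (corr_data - model.T @ model / len(model))
            np.fill_diagonal(dJ, 0.0)
            J += dJ

        print(np.round(mean_data - sample(h, J).mean(axis=0), 3))   # residual rate mismatch (small for this toy case)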

  17. A method for real time detecting of non-uniform magnetic field

    Science.gov (United States)

    Marusenkov, Andriy

    2015-04-01

    The principle of measuring magnetic signatures for observing diverse objects is widely used in Near Surface work (unexploded ordnance (UXO); engineering & environmental; archaeology) as well as in security and vehicle detection systems. As a rule, the magnitude of the signals to be measured is much lower than that of the quasi-uniform Earth magnetic field. Usually magnetometers for these purposes contain two or more spatially separated sensors to estimate the full tensor gradient of the magnetic field or, more frequently, only partial gradient components. Both types of magnetic sensors (scalar and vector) can be used. The identity of the scale factors and the proper alignment of the sensitivity axes of the vector sensors are very important for deep suppression of the ambient field and detection of weak target signals. As a rule, a periodic calibration procedure is used to keep the sensors' parameters matched as closely as possible. In the present report we propose a technique for detecting magnetic anomalies which is almost insensitive to imperfect matching of the sensors. This method is based on the idea that the difference signal between the two sensors behaves very differently when the instrument is rotated or moved in a uniform field and in a non-uniform field. Due to the mismatch of the calibration parameters, the difference signal observed during rotation in a uniform field is similar to the total signal - the sum of the signals of both sensors. Zero change of both the difference and the total signal is expected if the instrument moves along a straight line in a uniform field. In contrast, the same move in a non-uniform field produces a response in each of the sensors. In case one measures dB/dx and moves along the x direction, the sensor signals are shifted in time, with a lag determined by the distance between the sensors and the speed of motion. This means that the difference signal looks like the derivative of the total signal when moving in a non-uniform field. So, using quite simple
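    A short numerical sketch of this statement, with illustrative numbers that are not taken from the instrument described above: for motion along x, the two-sensor difference signal tracks the spatial derivative of the total (summed) signal, whereas both stay flat in a uniform field.

        import numpy as np

        x = np.linspace(0.0, 10.0, 2000)           # position of the leading sensor, m
        baseline = 0.5                             # sensor separation, m

        def field(pos):
            # uniform background plus a local anomaly, nT (illustrative values)
            return 50000.0 + 200.0 * np.exp(-((pos - 5.0) / 1.0) ** 2)

        s1, s2 = field(x), field(x - baseline)     # leading / trailing sensor readings
        total, diff = s1 + s2, s1 - s2

        # diff is approximately (baseline / 2) * d(total)/dx in the non-uniform region
        approx = 0.5 * baseline * np.gradient(total, x)
        print(float(np.max(np.abs(diff - approx))))   # small compared with the peak of the difference signal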

  18. Principles of development of the industry of technogenic waste processing

    Directory of Open Access Journals (Sweden)

    Maria A. Bayeva

    2014-01-01

    Objective: to identify and substantiate the principles of development of the industry of technogenic waste processing. Methods: systemic analysis, synthesis, and the method of analogy. Results: basing on the analysis of Russian and foreign experience in the field of waste management and environmental protection, the basic principles of developing technogenic waste processing activities are formulated: the principle of legal regulation, the principle of efficiency of technologies, the principle of ecological safety, and the principle of economic support. The importance of each principle is substantiated by describing the situation in this area and identifying the main problems and the ways of solving them. Scientific novelty: the fundamental principles of development of the industrial waste processing industry are revealed, and measures of state support are proposed. Practical value: the presented theoretical conclusions and proposals are aimed primarily at the theoretical and methodological substantiation of, and practical solutions to, modern problems in the development of the industry of technogenic waste processing.

  19. Method and apparatus for measuring weak magnetic fields

    DEFF Research Database (Denmark)

    1995-01-01

    When measuring weak magnetic fields, a container containing a medium, such as a solution containing a stable radical, is placed in a polarising magnetic field, which is essentially at right angles to the field to be measured. The polarising field is interrupted rapidly, the interruption being...

  20. Microhydrodynamics principles and selected applications

    CERN Document Server

    Kim, Sangtae; Brenner, Howard

    1991-01-01

    Microhydrodynamics: Principles and Selected Applications presents analytical and numerical methods for describing motion of small particles suspended in viscous fluids. The text first covers the fundamental principles of low-Reynolds-number flow, including the governing equations and fundamental theorems; the dynamics of a single particle in a flow field; and hydrodynamic interactions between suspended particles. Next, the book deals with the advances in the mathematical and computational aspects of viscous particulate flows that point to innovations for large-scale simulations on parallel co

  1. Fourier transformation methods in the field of gamma spectrometry

    Indian Academy of Sciences (India)

    The basic principles of a new version of Fourier transformation are presented. This new version was applied to solve some main problems, such as smoothing and denoising, in gamma spectroscopy. The mathematical procedures were first tested on simulated data and then on actual experimental data.
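    The record does not spell out the algorithm; as a generic illustration of Fourier-domain smoothing of a noisy counting spectrum (synthetic data, arbitrary cutoff), one might write:

        import numpy as np

        rng = np.random.default_rng(1)
        channels = np.arange(1024)
        ideal = 20.0 + 500.0 * np.exp(-0.5 * ((channels - 400) / 6.0) ** 2)   # flat background plus one photopeak
        spectrum = rng.poisson(ideal).astype(float)                           # Poisson counting noise

        spec_f = np.fft.rfft(spectrum)
        spec_f[60:] = 0.0                                 # crude low-pass: keep only the slow components
        smoothed = np.fft.irfft(spec_f, n=channels.size)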

  2. Path planning in uncertain flow fields using ensemble method

    KAUST Repository

    Wang, Tong; Le Maître, Olivier P.; Hoteit, Ibrahim; Knio, Omar

    2016-01-01

    , we predict the path that minimizes the travel time by solving a boundary value problem (BVP), based on the Pontryagin maximum principle. A family of backward-in-time trajectories starting at the end position is used to generate suitable initial values

  3. Low field orientation magnetic separation methods for magnetotactic bacteria

    International Nuclear Information System (INIS)

    Moeschler, F.D.

    1999-01-01

    Microbial biomineralisation of iron often results in a biomass that is magnetic and can be separated from water systems by the application of a magnetic field. Magnetotactic bacteria form magnetic membrane-bound crystals within their structure, generally of magnetite. In nature, this enables magnetotactic bacteria to orientate themselves with respect to the local geomagnetic field. The bacteria then migrate with flagellar-driven motion towards their preferred environment. This property has been harnessed to produce a process in which metal-loaded magnetotactic bacteria can be recovered from a waste stream. This process is known as orientation magnetic separation. Several methods exist which permit the unique magnetic properties of individual magnetotactic bacteria to be studied, such as U-turn analysis, transmission electron microscopy and single-wire cell studies. In this work an extension of U-turn analysis was developed. The bacteria were rendered non-motile by the addition of specific metal ions, and the resulting 'flip time' which occurs during a field reversal enabled the magnetic moment of individual bacteria to be determined. This method proved to be much faster and more accurate than previous methods. For a successful process to be developed, large-scale culturing of magnetotactic bacteria is required. Experiments showed that culture vessel geometry was an important factor for high-density growth. Despite intensive studies, reproducible culturing at volumes exceeding one litre was not achieved. This work showed that numerous metal ions rendered magnetotactic bacteria non-motile at concentrations below 10 ppm. Sequential adaptation raised typical levels to in excess of 100 ppm for a number of ions, such as zinc and tin. However, specific ions, such as copper or nickel, remained motility-inhibiting at lower concentrations. To achieve separation using orientation magnetic separation, motile, field-susceptible MTB are required. Despite successful adaptation, the

  4. Bernoulli's Principle

    Science.gov (United States)

    Hewitt, Paul G.

    2004-01-01

    Some teachers have difficulty understanding Bernoulli's principle particularly when the principle is applied to the aerodynamic lift. Some teachers favor using Newton's laws instead of Bernoulli's principle to explain the physics behind lift. Some also consider Bernoulli's principle too difficult to explain to students and avoid teaching it…

  5. Zymography Principles.

    Science.gov (United States)

    Wilkesman, Jeff; Kurz, Liliana

    2017-01-01

    Zymography, the detection, identification, and even quantification of enzyme activity fractionated by gel electrophoresis, has received increasing attention in recent years, as revealed by the number of articles published. A number of enzymes are routinely detected by zymography, especially those of clinical interest. This introductory chapter reviews the major principles behind zymography. New advances of this method are focused mainly on two-dimensional zymography and transfer zymography, as will be explained in the rest of the chapters. Some general considerations when performing the experiments are outlined, as well as the major troubleshooting and safety issues necessary for correct development of the electrophoresis.

  6. Introduction to Maxxam All-Season Passive Sampling System and Principles of Proper Use of Passive Samplers in the Field Study

    Directory of Open Access Journals (Sweden)

    Hongmao Tang

    2001-01-01

    The Maxxam all-season passive sampling system (PASS) is introduced in this paper. The PASS can be used to quantitatively and accurately monitor SO2, NO2, O3, and H2S in air in all weather conditions, with flexible exposure times from several hours to several months. The air pollution detection limits of the PASS are very low: they can range from sub-ppb to ppt levels. The principles of proper use of passive samplers in field studies are discussed using the PASS as an example.

  7. Campbell's MSV method for neutron-gamma discrimination in the mixed field of a nuclear reactor

    International Nuclear Information System (INIS)

    Stankovic, S. J.; Loncar, B.; Avramovic, I.; Osmokrovic, P.

    2003-10-01

    This paper analyses some capabilities of a Campbell's MSV (Mean Square Value) measuring chain, based on the principles derived from Campbell's theorem. Measurements were performed with a digitized MSV method and the results were compared with those obtained with a classic measuring chain, in which the mean value of the detector output signal is measured. In our case, the detector element was an uncompensated ionization chamber for mixed n-gamma fields. The thermal neutron flux, absorbed dose rate, equivalent dose rate and exposure rate around the reactor vessel of the HERBE system, at the RB nuclear reactor of the 'VINCA' Institute, were determined. The discrimination of the gamma component relative to the neutron component in the detector output signal was examined both experimentally and by calculation with a linear theoretical model. The dependences of the variance and the mean value of the detector output signal on a four-decade change of the fission reactor power, in the range from 10 mW to 22 W, were obtained. The advantage of the MSV method is confirmed, and it is concluded that the n-gamma discrimination achieved with MSV signal processing is around fifty times better than with the classical measuring method. (author)
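    A toy calculation of why the mean-square (Campbell) mode favours the neutron component: each event contributes its charge q to the mean signal but q squared to the variance, so heavy-charge neutron events are weighted far more strongly than gamma events. The charges and rates below are illustrative only, not values from the HERBE measurements.

        q_neutron, q_gamma = 50.0, 1.0        # relative charge deposited per event
        r_neutron, r_gamma = 1.0e4, 1.0e6     # event rates, 1/s

        mean_ratio = (r_neutron * q_neutron) / (r_gamma * q_gamma)         # neutron/gamma share of the mean value
        msv_ratio = (r_neutron * q_neutron**2) / (r_gamma * q_gamma**2)    # neutron/gamma share of the variance

        print(mean_ratio)               # 0.5  -> the mean-value signal is dominated by the gamma component
        print(msv_ratio / mean_ratio)   # 50.0 -> the MSV mode gains a factor of q_neutron / q_gamma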

  8. On-line and real-time diagnosis method for proton membrane fuel cell (PEMFC) stack by the superposition principle

    Science.gov (United States)

    Lee, Young-Hyun; Kim, Jonghyeon; Yoo, Seungyeol

    2016-09-01

    A critical cell voltage drop in a stack can be followed by a stack defect. One method of detecting a defective cell is cell voltage monitoring; other methods are based on the nonlinear frequency response. In this paper, the superposition principle for the diagnosis of a PEMFC stack is introduced. If critical cell voltage drops exist, the stack behaves as a nonlinear system. This nonlinearity can appear explicitly in the ohmic overpotential region of a voltage-current curve. To detect a critical cell voltage drop, the stack is excited by two direct test currents which have smaller amplitude than the operating stack current and are equally spaced about the operating current. If the difference between the voltage excited by one test current and the voltage excited by the load current is not equal to the difference between the other voltage response and the voltage excited by the load current, the stack acts as a nonlinear system. This means that there is a critical cell voltage drop. The deviation of this difference from zero reflects the degree of the system nonlinearity. A simulation model for the stack diagnosis is developed based on the superposition principle (SPP) and experimentally validated.
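    A minimal sketch of the diagnostic check described above, with hypothetical voltage readings; the indicator is the deviation from zero of the sum of the two voltage differences about the operating point, and any defect threshold would be application-specific.

        def nonlinearity_indicator(v_plus, v_minus, v_load):
            """Sum of the two voltage differences about the load point (volts).

            v_plus  -- stack voltage at the load current plus the test offset
            v_minus -- stack voltage at the load current minus the test offset
            v_load  -- stack voltage at the operating (load) current
            A value near zero suggests locally linear behaviour; a larger magnitude
            points to the nonlinearity associated with a critical cell voltage drop.
            """
            return (v_plus - v_load) + (v_minus - v_load)

        print(nonlinearity_indicator(62.1, 63.9, 63.0))   # ~0   -> roughly linear response
        print(nonlinearity_indicator(60.8, 63.9, 63.0))   # -1.3 -> possible critical cell voltage drop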

  9. Principle and application of low energy inverse photoemission spectroscopy: A new method for measuring unoccupied states of organic semiconductors

    Energy Technology Data Exchange (ETDEWEB)

    Yoshida, Hiroyuki, E-mail: hyoshida@chiba-u.jp

    2015-10-01

    Highlights: • Principle of low energy inverse photoemission spectroscopy is described. • Instruments including electron sources and photon detectors are shown. • Recent results about organic devices and fundamental studies are reviewed. • Electron affinities of typical organic semiconductors are compiled. - Abstract: Information about the unoccupied states is crucial to both fundamental and applied physics of organic semiconductors. However, there were no experimental methods available that met the requirements of such research. In this review, we describe a new experimental method to examine the unoccupied states, called low-energy inverse photoemission spectroscopy (LEIPS). An electron having a kinetic energy lower than the damage threshold of organic molecules is introduced to a sample film, and an emitted photon in the near-ultraviolet range is detected with high resolution and sensitivity. Unlike previous inverse photoemission spectroscopy, the sample damage is negligible and the overall resolution is improved by a factor of two, to 0.25 eV. Using LEIPS, the electron affinity of an organic semiconductor can be determined with the same precision as photoemission spectroscopy achieves for the ionization energy. The instruments, including an electron source and photon detectors, as well as applications to organic semiconductors are presented.

  10. Principle and application of low energy inverse photoemission spectroscopy: A new method for measuring unoccupied states of organic semiconductors

    International Nuclear Information System (INIS)

    Yoshida, Hiroyuki

    2015-01-01

    Highlights: • Principle of low energy inverse photoemission spectroscopy is described. • Instruments including electron sources and photon detectors are shown. • Recent results about organic devices and fundamental studies are reviewed. • Electron affinities of typical organic semiconductors are compiled. - Abstract: Information about the unoccupied states is crucial to both fundamental and applied physics of organic semiconductors. However, there were no experimental methods available that met the requirements of such research. In this review, we describe a new experimental method to examine the unoccupied states, called low-energy inverse photoemission spectroscopy (LEIPS). An electron having a kinetic energy lower than the damage threshold of organic molecules is introduced to a sample film, and an emitted photon in the near-ultraviolet range is detected with high resolution and sensitivity. Unlike previous inverse photoemission spectroscopy, the sample damage is negligible and the overall resolution is improved by a factor of two, to 0.25 eV. Using LEIPS, the electron affinity of an organic semiconductor can be determined with the same precision as photoemission spectroscopy achieves for the ionization energy. The instruments, including an electron source and photon detectors, as well as applications to organic semiconductors are presented.

  11. Evaluation of different field methods for measuring soil water infiltration

    Science.gov (United States)

    Pla-Sentís, Ildefonso; Fonseca, Francisco

    2010-05-01

    Soil infiltrability, together with rainfall characteristics, is the most important hydrological parameter for the evaluation and diagnosis of the soil water balance and soil moisture regime. Those balances and regimes are the main regulating factors of the on-site water supply to plants and other soil organisms and of other important processes like runoff, surface and mass erosion, drainage, etc., affecting sedimentation, flooding, soil and water pollution, water supply for different purposes (population, agriculture, industries, hydroelectricity), etc. Therefore, the direct measurement of water infiltration rates, or its indirect deduction from other soil characteristics or properties, has become indispensable for the evaluation and modelling of the previously mentioned processes. Indirect deductions from other soil characteristics measured under laboratory conditions in the same soils, or in other soils, through so-called "pedo-transfer" functions, have demonstrated to be of limited value in most cases. Direct "in situ" field evaluations are to be preferred in any case. In this contribution we present the results of past experience in the measurement of soil water infiltration rates in many different soils and land conditions, and their use for deducing soil water balances under variable climates. Recent results obtained in comparing different methods, using double- and single-ring infiltrometers, rainfall simulators, and disc permeameters of different sizes, in soils with very contrasting surface and profile characteristics and conditions, including stony soils and very sloping lands, are also presented and discussed. It is concluded that there are no methods universally applicable to every soil and land condition, and that in many cases the results are significantly influenced by the way we use a particular method or instrument, and by the alterations in the soil conditions caused by land management, but also by the manipulation of the surface

  12. On the Momentum Transported by the Radiation Field of a Long Transient Dipole and Time Energy Uncertainty Principle

    Directory of Open Access Journals (Sweden)

    Vernon Cooray

    2016-11-01

    The paper describes the net momentum transported by the transient electromagnetic radiation field of a long transient dipole in free space. In the dipole, a current is initiated at one end and propagates towards the other end, where it is absorbed. The results show that the net momentum transported by the radiation is directed along the axis of the dipole along which the currents are propagating. In general, the net momentum P transported by the electromagnetic radiation of the dipole is less than the quantity U/c, where U is the total energy radiated by the dipole and c is the speed of light in free space. In the case of a Hertzian dipole, the net momentum transported by the radiation field is zero because of the spatial symmetry of the radiation field. As the effective wavelength of the current decreases with respect to the length of the dipole (or the duration of the current decreases with respect to the travel time of the current along the dipole), the net momentum transported by the radiation field becomes closer and closer to U/c, and for effective wavelengths which are much shorter than the length of the dipole, P ≈ U/c. The results show that when the condition P ≈ U/c is satisfied, the radiated fields satisfy the condition ΔtΔU ≥ h/4π, where Δt is the duration of the radiation, ΔU is the uncertainty in the dissipated energy and h is the Planck constant.

  13. Dark matter and the equivalence principle

    Science.gov (United States)

    Frieman, Joshua A.; Gradwohl, Ben-Ami

    1993-01-01

    A survey is presented of the current understanding of dark matter invoked by astrophysical theory and cosmology. Einstein's equivalence principle asserts that local measurements cannot distinguish a system at rest in a gravitational field from one that is in uniform acceleration in empty space. Recent test-methods for the equivalence principle are presently discussed as bases for testing of dark matter scenarios involving the long-range forces between either baryonic or nonbaryonic dark matter and ordinary matter.

  14. Electric field control methods for foil coils in high-voltage linear actuators

    NARCIS (Netherlands)

    Beek, van T.A.; Jansen, J.W.; Lomonova, E.A.

    2015-01-01

    This paper describes multiple electric field control methods for foil coils in high-voltage coreless linear actuators. The field control methods are evaluated using 2-D and 3-D boundary element methods. A comparison is presented between the field control methods and their ability to mitigate

  15. [Development and application of electroanalytical methods in biomedical fields].

    Science.gov (United States)

    Kusu, Fumiyo

    2015-01-01

    To summarize our electroanalytical research in the biomedical field over the past 43 years, this review describes studies on specular reflection measurement, redox potential determination, amperometric acid sensing, HPLC with electrochemical detection, and potential oscillation across a liquid membrane. The specular reflection method was used for clarifying the adsorption of neurotransmitters and their related drugs onto a gold electrode and the interaction between dental alloys and compound iodine glycerin. A voltammetric screening test using a redox potential for the antioxidative effect of flavonoids was proposed. Amperometric acid sensing based on the measurement of the reduction prepeak current of 2-methyl-1,4-naphthoquinone (VK3) or 3,5-di-tert-butyl-1,2-benzoquinone (DBBQ) was applied to determine acid values of fats and oils, titratable acidity of coffee, and enzyme activity of lipase, free fatty acids (FFAs) in serum, short-chain fatty acids in feces, etc. The electrode reactions of phenothiazines, catechins, and cholesterol were applied to biomedical analysis using HPLC with electrochemical detection. A three-channel electrochemical detection system was utilized for the sensitive determination of redox compounds in Chinese herbal medicines. The behavior of barbituric acid derivatives was examined based on potential oscillation measurements.

  16. Electrodeless plasma acceleration system using rotating magnetic field method

    Directory of Open Access Journals (Sweden)

    T. Furukawa

    2017-11-01

    We have proposed the Rotating Magnetic Field (RMF) acceleration method as one of the electrodeless plasma acceleration schemes. In our experimental scheme, plasma generated by an rf (radio frequency) antenna is accelerated by RMF antennas, which consist of two pairs of opposed, facing coils, and these antennas are outside of the discharge tube. Therefore, there is no electrode wear degrading the propulsion performance. Here, we introduce the RMF acceleration system we have developed, including the experimental device, e.g., external antennas, a tapered quartz tube, a vacuum chamber, external magnets, and a pumping system. In addition, we can change the RMF operation parameters (RMF applied current IRMF and RMF current phase difference ϕ), focusing on the RMF current frequency fRMF, by adjusting the RMF matching conditions, and investigate the dependencies of the plasma parameters (electron density ne and ion velocity vi); e.g., larger increases of ne and vi (∼360% and 55%, respectively) than in previous experimental results were obtained by decreasing fRMF from 5 MHz to 0.7 MHz, for which the RMF penetration condition was better according to Milroy's expression. Moreover, the time-varying component of the RMF has been measured directly to survey the penetration condition experimentally.

  17. Semi-empirical equivalent field method for dose determination in midline block fields for cobalt - 60 beam

    International Nuclear Information System (INIS)

    Tagoe, S.N.A.; Nani, E.K.; Yarney, J.; Edusa, C.; Quayson-Sackey, K.; Nyamadi, K.M.; Sasu, E.

    2012-01-01

    For teletherapy treatment time calculations, midline block fields are resolved into two fields; neglecting scattering from the other fields, the effective equivalent square field size of the midline block field is assumed to be that of the resultant field. Such an approach is an underestimation and may be detrimental to achieving the recommended uncertainty of ±5% for patient radiation dose delivery. By comparison, the deviations between calculated and experimental effective equivalent square field sizes were within 13.2% for the cobalt-60 beams of a GWGP80 cobalt-60 teletherapy unit. Therefore, a modified method incorporating the scatter contributions was adopted to estimate the effective equivalent square field size for midline block fields. The measured outputs of radiation beams with the block were compared with the outputs of square fields without the block (only the block tray) at depths of 5 and 10 cm for the teletherapy machine employing the isocentric technique, and the accuracy was within ±3% for the cobalt-60 beams. (au)
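    The modified, scatter-corrected formula of the paper is not reproduced here; as a point of reference, the conventional starting point resolves the blocked field into two open strips and applies the standard area-over-perimeter (4A/P) rule to each, as in this sketch with hypothetical field sizes:

        def equivalent_square(width_cm, length_cm):
            """Side of the equivalent square of a rectangular field, 4A/P (2ab/(a+b)) rule."""
            return 2.0 * width_cm * length_cm / (width_cm + length_cm)

        # A 20 cm x 20 cm field with a 4 cm wide midline block leaves two 8 cm x 20 cm open strips.
        print(round(equivalent_square(8.0, 20.0), 2))   # ~11.43 cm per strip, before any scatter correction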

  18. Application of field synergy principle for optimization fluid flow and convective heat transfer in a tube bundle of a pre-heater

    International Nuclear Information System (INIS)

    Hamid, Mohammed O.A.; Zhang, Bo; Yang, Luopeng

    2014-01-01

    The big problems facing solar-assisted MED (multiple-effect distillation) desalination units are their low efficiency and bulky heat exchangers, which worsen their economic feasibility. In an attempt to develop heat transfer technologies with high energy efficiency, a mathematical study is established and an optimization analysis using the FSP (field synergy principle) is proposed to support heat transfer enhancement of a pre-heater in a solar-assisted MED desalination unit. Numerical simulations are performed on the fluid flow and heat transfer characteristics in circular and elliptical tube bundles. The numerical results are analyzed using the concepts of synergy angle and synergy number as an indication of the synergy between the velocity vector and temperature gradient fields. Heat transfer in the elliptical tube bundle is enhanced significantly with increasing initial velocity of the feed seawater and field synergy number, and with decreasing synergy angle. Under the same operating conditions of the two designs, the total average synergy angle is 78.97° and 66.31° in the circular and elliptical tube bundles, respectively. Optimization of the pre-heater by the FSP shows that, in the case of the elliptical tube bundle design, the average synergy number and heat transfer rate are increased by 22.68% and 35.98%, respectively. - Highlights: • FSP (field synergy principle) is used to investigate heat transfer enhancement. • Numerical simulations are performed in a circular and an elliptical tube pre-heater. • Numerical results are analyzed using the concepts of synergy angle and synergy number. • Optimization of the elliptical tube bundle by FSP gives better performance
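    The synergy angle referred to above is the local intersection angle between the velocity vector and the temperature gradient; the field-averaged values quoted in the record come from averaging such local angles over the whole domain. A pointwise sketch with illustrative vectors (not values from the simulations) is:

        import numpy as np

        def synergy_angle_deg(velocity, grad_temperature):
            """Local intersection angle between velocity and temperature gradient, in degrees."""
            cos_beta = np.dot(velocity, grad_temperature) / (
                np.linalg.norm(velocity) * np.linalg.norm(grad_temperature))
            return np.degrees(np.arccos(np.clip(cos_beta, -1.0, 1.0)))

        print(synergy_angle_deg(np.array([1.0, 0.1, 0.0]), np.array([0.8, 0.3, 0.0])))   # small angle: good synergy
        print(synergy_angle_deg(np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])))   # 90 degrees: no synergy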

  19. Efficiency principles of consulting entrepreneurship

    OpenAIRE

    Moroz Yustina S.; Drozdov Igor N.

    2015-01-01

    The article reviews the primary goals and problems of consulting entrepreneurship. The principles defining efficiency of entrepreneurship in the field of consulting are generalized. The special attention is given to the importance of ethical principles of conducting consulting entrepreneurship activity.

  20. Method and apparatus for scanning a transverse field

    International Nuclear Information System (INIS)

    Stoddart, H.F.

    1978-01-01

    A transverse radionuclide scan-field imaging apparatus is described for use in scanning with particular reference to the brain. It comprises a plurality of highly focussed collimators surrounding and being focussed inwardly with respect to the scan-field and means for imparting movement to the collimators. Adjacent collimators can be stepped in radially opposite directions after each tangential scan, so that the focal point of each collimator scans at least one half of the scan-field. Each collimator is associated with a scintillator crystal and photodetector whose output is used to calculate the radioactive emission intensity at a number of points in the scan-field. (author)

  1. On the uncertainty principle. V

    International Nuclear Information System (INIS)

    Halpern, O.

    1976-01-01

    The treatment of ideal experiments connected with the uncertainty principle is continued. The author analyzes successively measurements of momentum and position, and discusses the common reason why the results in all cases differ from the conventional ones. A similar difference exists for the measurement of field strengths. The interpretation given by Weizsaecker, who tried to interpret Bohr's complementarity principle by introducing a multi-valued logic, is analyzed. The treatment of the uncertainty principle ΔE Δt is deferred to a later paper, as is the interpretation of the method of variation of constants. Every ideal experiment discussed shows various lower limits for the value of the uncertainty product; these limits depend on the experimental arrangement and are always (considerably) larger than h. (Auth.)

  2. Method and apparatus for steady-state magnetic measurement of poloidal magnetic field near a tokamak plasma

    Science.gov (United States)

    Woolley, Robert D.

    1998-01-01

    A method and apparatus for the steady-state measurement of poloidal magnetic field near a tokamak plasma, where the tokamak is configured with respect to a cylindrical coordinate system having z, phi (toroidal), and r axes. The method is based on combining the two magnetic field principles of induction and torque. The apparatus includes a rotor assembly having a pair of inductive magnetic field pickup coils which are concentrically mounted, orthogonally oriented in the r and z directions, and coupled to remotely located electronics which include electronic integrators for determining magnetic field changes. The rotor assembly includes an axle oriented in the toroidal direction, with the axle mounted on pivot support brackets which in turn are mounted on a baseplate. First and second springs are located between the baseplate and the rotor assembly restricting rotation of the rotor assembly about its axle, the second spring providing a constant tensile preload in the first spring. A strain gauge is mounted on the first spring, and electronic means to continually monitor strain gauge resistance variations is provided. Electronic means for providing a known current pulse waveform to be periodically injected into each coil to create a time-varying torque on the rotor assembly in the toroidal direction causes mechanical strain variations proportional to the torque in the mounting means and springs so that strain gauge measurement of the variation provides periodic magnetic field measurements independent of the magnetic field measured by the electronic integrators.

  3. Coal Field Fire Fighting - Practiced methods, strategies and tactics

    Science.gov (United States)

    Wündrich, T.; Korten, A. A.; Barth, U. H.

    2009-04-01

    achieved. For effective and efficient fire fighting, optimal tactics are required; they can be divided into four fundamental tactics to control fire hazards: - Defense (digging away the coal so that the coal cannot begin to burn, or forming a barrier so that the fire cannot reach the unburned coal), - Rescue the coal (coal mining of a seam that is not burning), - Attack (active and direct cooling of the burning seam), - Retreat (only monitoring until self-extinction of a burning seam). The last one is used when a fire exceeds the organizational and/or technical scope of a mission. In other words, "to control a coal fire" does not automatically and in all situations mean "to extinguish a coal fire". Best-practice tactics, or a combination of them, can be selected for control of a particular coal fire. For the extinguishing works, different extinguishing agents are available. They can be applied by different application techniques and with varying operating expenses. One application method may be the drilling of boreholes from the surface or covering the surface with low-permeability soils. The most commonly used extinguishing agents for coal field fires are as follows: water (with or without additives), slurry, foaming mud/slurry, inert gases, dry chemicals and materials, and cryogenic agents. Because of its tremendous dimension and its complexity, the worldwide challenge of coal fires is absolutely unique - it can only be solved with functional application methods, best-fitting strategies and tactics, organisation and research, as well as the dedication of the involved fire fighters, who work under extreme individual risks on the burning coal fields.

  4. Field Deployable Method for Arsenic Speciation in Water.

    Science.gov (United States)

    Voice, Thomas C; Flores Del Pino, Lisveth V; Havezov, Ivan; Long, David T

    2011-01-01

    Contamination of drinking water supplies by arsenic is a world-wide problem. Total arsenic measurements are commonly used to investigate and regulate arsenic in water, but it is well understood that arsenic occurs in several chemical forms, and these exhibit different toxicities. It is problematic to use laboratory-based speciation techniques to assess exposure, as it has been suggested that the distribution of species is not stable during transport in some types of samples. A method was developed in this study for the on-site speciation of the most toxic dissolved arsenic species: As (III), As (V), monomethylarsonic acid (MMA) and dimethylarsenic acid (DMA). Development criteria included ease of use under field conditions, applicability at levels of concern for drinking water, and analytical performance. The approach is based on selective retention of arsenic species on specific ion-exchange chromatography cartridges, followed by selective elution and quantification using graphite furnace atomic absorption spectroscopy. Water samples can be delivered to a set of three cartridges using either syringes or peristaltic pumps. The species distribution is stable at this point, and the cartridges can be transported to the laboratory for elution and quantitative analysis. A set of ten replicate spiked samples of each compound, having concentrations between 1 and 60 µg/L, was analyzed. Arsenic recoveries ranged from 78-112% and relative standard deviations were generally below 10%. Resolution between species was shown to be outstanding, with the only limitation being that the capacity for As (V) was limited to approximately 50 µg/L. This could be easily remedied by changes in either the cartridge design or the extraction procedure. Recoveries were similar for two spiked hard groundwater samples, indicating that dissolved minerals are not likely to be problematic. These results suggest that this methodology can be used for analysis of the four primary arsenic species of concern in

  5. Estimating the diffuseness of sound fields: A wavenumber analysis method

    DEFF Research Database (Denmark)

    Nolan, Melanie; Davy, John L.; Brunskog, Jonas

    2017-01-01

    The concept of a diffuse sound field is widely used in the analysis of sound in enclosures. The diffuse sound field is generally described as composed of plane waves with random phases, whose wavenumber vectors are uniformly distributed over all angles of incidence. In this study, an interpretat...

  6. A different method for calculation of the deflection angle of light passing close to a massive object by Fermat’s principle

    Energy Technology Data Exchange (ETDEWEB)

    Akkus, Harun, E-mail: physicisthakkus@gmail.com

    2013-12-15

    We introduce a method for calculating the deflection angle of light passing close to a massive object. It is based on Fermat’s principle. The varying refractive index of the medium around the massive object is obtained from the Buckingham pi-theorem. Highlights: •A different and simpler method for the calculation of the deflection angle of light. •Not a curved space, only 2-D Euclidean space. •Getting a varying refractive index from the Buckingham pi-theorem. •Obtaining some results of general relativity from Fermat’s principle.

  7. A different method for calculation of the deflection angle of light passing close to a massive object by Fermat’s principle

    International Nuclear Information System (INIS)

    Akkus, Harun

    2013-01-01

    We introduce a method for calculating the deflection angle of light passing close to a massive object. It is based on Fermat’s principle. The varying refractive index of the medium around the massive object is obtained from the Buckingham pi-theorem. Highlights: •A different and simpler method for the calculation of the deflection angle of light. •Not a curved space, only 2-D Euclidean space. •Getting a varying refractive index from the Buckingham pi-theorem. •Obtaining some results of general relativity from Fermat’s principle

  8. Time-optimal path planning in uncertain flow fields using ensemble method

    KAUST Repository

    Wang, Tong

    2016-01-06

    An ensemble-based approach is developed to conduct time-optimal path planning in unsteady ocean currents under uncertainty. We focus our attention on two-dimensional steady and unsteady uncertain flows, and adopt a sampling methodology that is well suited to operational forecasts, where a set of deterministic predictions is used to model and quantify uncertainty in the predictions. In the operational setting, much about the dynamics, topography and forcing of the ocean environment is uncertain, and as a result a single path produced by a model simulation has limited utility. To overcome this limitation, we rely on a finite-size ensemble of deterministic forecasts to quantify the impact of variability in the dynamics. The uncertainty of the flow field is parametrized using a finite number of independent canonical random variables with known densities, and the ensemble is generated by sampling these variables. For each of the resulting realizations of the uncertain current field, we predict the optimal path by solving a boundary value problem (BVP), based on the Pontryagin maximum principle. A family of backward-in-time trajectories starting at the end position is used to generate suitable initial values for the BVP solver. This allows us to examine and analyze the performance of the sampling strategy, and to develop insight into extensions dealing with regional or general circulation models. In particular, the ensemble method enables us to perform a statistical analysis of travel times, and consequently to develop a path planning approach that accounts for these statistics. The proposed methodology is tested for a number of scenarios. We first validate our algorithms by reproducing simple canonical solutions, and then demonstrate our approach in more complex flow fields, including idealized, steady and unsteady double-gyre flows.
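    A stripped-down illustration of the ensemble statistics step, assuming a single canonical random variable for the along-path current and a fixed candidate path; the per-realization optimal-path BVP solve of the actual method is not reproduced here.

        import numpy as np

        rng = np.random.default_rng(2)
        path_length = 100.0e3                       # m, fixed candidate path
        vehicle_speed = 2.0                         # m/s relative to the water

        xi = rng.normal(0.0, 1.0, size=500)         # samples of the canonical random variable
        along_path_current = 0.3 * xi               # m/s, uncertain current realizations

        travel_time_h = path_length / (vehicle_speed + along_path_current) / 3600.0
        print(travel_time_h.mean(), travel_time_h.std())   # travel-time statistics over the ensemble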

  9. ALTERNATIVE FIELD METHODS TO TREAT MERCURY IN SOIL

    Energy Technology Data Exchange (ETDEWEB)

    Ernest F. Stine Jr; Steven T. Downey

    2002-08-14

    The U.S. Department of Energy (DOE) used large quantities of mercury in the uranium separating process from the 1950s until the late 1980s in support of national defense. Some of this mercury, as well as other hazardous metals and radionuclides, found its way into, and under, several buildings, soils and subsurface soils, and into some of the surface waters. Several of these areas may pose potential health or environmental risks and must be dealt with under current environmental regulations. DOE's National Energy Technology Laboratory (NETL) awarded the contract "Alternative Field Methods to Treat Mercury in Soil" to IT Group, Knoxville, TN (IT) and its subcontractor NFS, Erwin, TN, to identify remedial methods to clean up mercury-contaminated high-clay content soils using proven treatment chemistries. The sites of interest were the Y-12 National Security Complex located in Oak Ridge, Tennessee, the David Witherspoon properties located in Knoxville, Tennessee, and other similarly contaminated sites. The primary laboratory-scale contract objectives were (1) to safely retrieve and test samples of contaminated soil in an approved laboratory and (2) to determine an acceptable treatment method to ensure that the mercury does not leach from the soil above regulatory levels. The leaching requirements were to meet the TC (0.2 mg/l) and UTS (0.025 mg/l) TCLP criteria. In-situ treatments were preferred to control the potential mercury vapor emissions and liquid mercury spills associated with ex-situ treatments. All laboratory work was conducted in IT's and NFS's laboratories. Mercury-contaminated nonradioactive soil from under the Alpha 2 building in the Y-12 complex was used. This soil contained insufficient levels of leachable mercury and resulted in TCLP mercury concentrations that were similar to the applicable LDR limits. The soil was spiked at multiple levels with metallic (up to 6000 mg/l) and soluble mercury compounds (up to 500 mg/kg) to

  10. Principles of Mechanical Excavation

    International Nuclear Information System (INIS)

    Lislerud, A.

    1997-12-01

    Mechanical excavation of rock today includes several methods such as tunnel boring, raiseboring, roadheading and various continuous mining systems. Of these, raiseboring is one potential technique for excavating shafts in the repository for spent nuclear fuel, and dry blind boring is a promising technique for excavation of deposition holes, as demonstrated in the Research Tunnel at Olkiluoto. In addition, there is potential for use of other mechanical excavation techniques in different parts of the repository. One of the main objectives of this study was to analyze the factors which affect the feasibility of mechanical rock excavation in hard rock conditions and to enhance the understanding of factors which affect rock cutting so as to provide an improved basis for excavator performance prediction modeling. The study included the following four main topics: (a) phenomenological model based on similarity analysis for roller disk cutting, (b) rock mass properties which affect rock cuttability and tool life, (c) principles for linear and field cutting tests and performance prediction modeling and (d) cutter head lacing design procedures and principles. As a conclusion of this study, a test rig was constructed, field tests were planned and started up. The results of the study can be used to improve the performance prediction models used to assess the feasibility of different mechanical excavation techniques at various repository investigation sites. (orig.)

  11. Principles of Mechanical Excavation

    Energy Technology Data Exchange (ETDEWEB)

    Lislerud, A. [Tamrock Corp., Tampere (Finland)

    1997-12-01

    Mechanical excavation of rock today includes several methods such as tunnel boring, raiseboring, roadheading and various continuous mining systems. Of these, raiseboring is one potential technique for excavating shafts in the repository for spent nuclear fuel, and dry blind boring is a promising technique for excavation of deposition holes, as demonstrated in the Research Tunnel at Olkiluoto. In addition, there is potential for use of other mechanical excavation techniques in different parts of the repository. One of the main objectives of this study was to analyze the factors which affect the feasibility of mechanical rock excavation in hard rock conditions and to enhance the understanding of factors which affect rock cutting so as to provide an improved basis for excavator performance prediction modeling. The study included the following four main topics: (a) phenomenological model based on similarity analysis for roller disk cutting, (b) rock mass properties which affect rock cuttability and tool life, (c) principles for linear and field cutting tests and performance prediction modeling and (d) cutter head lacing design procedures and principles. As a conclusion of this study, a test rig was constructed, field tests were planned and started up. The results of the study can be used to improve the performance prediction models used to assess the feasibility of different mechanical excavation techniques at various repository investigation sites. (orig.). 21 refs.

  12. A Carleman estimate and the balancing principle in the quasi-reversibility method for solving the Cauchy problem for the Laplace equation

    International Nuclear Information System (INIS)

    Cao Hui; Pereverzev, Sergei V; Klibanov, Michael V

    2009-01-01

    The quasi-reversibility method of solving the Cauchy problem for the Laplace equation in a bounded domain Ω is considered. With the help of the Carleman estimation technique, improved error and stability bounds in a subdomain Ω_σ ⊂ Ω are obtained. This paves the way for the use of the balancing principle for an a posteriori choice of the regularization parameter ε in the quasi-reversibility method. As an adaptive regularization parameter choice strategy, the balancing principle does not require a priori knowledge of either the solution smoothness or the constant K appearing in the stability bound estimate. Nevertheless, this principle allows an a posteriori parameter choice that, up to a controllable constant, achieves the best accuracy guaranteed by the Carleman estimate.
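    A generic sketch of a Lepskii-type balancing choice of the regularization parameter, written around a hypothetical solve(eps) helper and an assumed noise_level/eps error model; the actual bound in the paper comes from the Carleman estimate and is not reproduced here.

        import numpy as np

        def balancing_choice(solve, eps_grid, noise_level, kappa=4.0):
            """Keep the largest eps whose solution stays within kappa * noise_level / eps_smaller
            of every solution computed with a smaller parameter (assumed error model).
            solve(eps) is a hypothetical helper returning the regularized solution as an array."""
            eps_grid = sorted(eps_grid, reverse=True)            # from largest to smallest
            solutions = [solve(e) for e in eps_grid]
            for i, eps in enumerate(eps_grid):
                if all(np.linalg.norm(solutions[i] - solutions[j]) <= kappa * noise_level / eps_grid[j]
                       for j in range(i + 1, len(eps_grid))):
                    return eps, solutions[i]
            return eps_grid[-1], solutions[-1]

        # Toy usage with a dummy solver whose output varies smoothly with eps.
        eps, _ = balancing_choice(lambda e: np.array([1.0 + e, 2.0 - e]),
                                  eps_grid=[0.5, 0.25, 0.125, 0.0625], noise_level=0.01)
        print(eps)   # 0.25 for this toy example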

  13. The principle of the indistinguishability of identical particles and the Lie algebraic approach to the field quantisation

    International Nuclear Information System (INIS)

    Govorkov, A.B.

    1980-01-01

    The density matrix, rather than the wavefunction describing the system of a fixed number of non-relativistic identical particles, is subject to the second quantisation. Here the bilinear operators which move a particle from a given state to another appear and satisfy the Lie algebraic relations of the unitary group SU(rho) when the dimension rho→infinity. The drawing into consideration of the system with a variable number of particles implies the extension of this algebra into one of the simple Lie algebras of classical (orthogonal, symplectic or unitary) groups in the even-dimensional spaces. These Lie algebras correspond to the para-Fermi-, para-Bose- and para-uniquantisation of fields, respectively. (author)

  14. Muscles, Ligaments and Tendons Journal – Basic principles and recommendations in clinical and field Science Research: 2016 Update

    Science.gov (United States)

    Padulo, Johnny; Oliva, Francesco; Frizziero, Antonio; Maffulli, Nicola

    2016-01-01

    Summary The proper design and implementation of a study, as well as a balanced and well-supported evaluation and interpretation of its main findings, are of crucial importance when reporting and disseminating research. Accountability, funding acknowledgement and adequately declaring any conflict of interest also play a major role in science. Since the Muscles, Ligaments and Tendons Journal (MLTJ) is committed to the highest scientific and ethical standards, we encourage all Authors to take into account and to comply, as much as possible, with the contents and issues discussed in this official editorial. This could be useful for improving the quality of the manuscripts, as well as for stimulating interest and debate and promoting constructive change, reflecting upon uses and misuses within our disciplines belonging to the field of “Clinical and Sport - Science Research”. PMID:27331026

  15. Muscles, Ligaments and Tendons Journal - Basic principles and recommendations in clinical and field Science Research: 2016 Update.

    Science.gov (United States)

    Padulo, Johnny; Oliva, Francesco; Frizziero, Antonio; Maffulli, Nicola

    2016-01-01

    The proper design and implementation of a study, as well as a balanced and well-supported evaluation and interpretation of its main findings, are of crucial importance when reporting and disseminating research. Accountability, funding acknowledgement and adequately declaring any conflict of interest also play a major role in science. Since the Muscles, Ligaments and Tendons Journal (MLTJ) is committed to the highest scientific and ethical standards, we encourage all Authors to take into account and to comply, as much as possible, with the contents and issues discussed in this official editorial. This could be useful for improving the quality of the manuscripts, as well as for stimulating interest and debate and promoting constructive change, reflecting upon uses and misuses within our disciplines belonging to the field of "Clinical and Sport - Science Research".

  16. Field calculations. Part I: Choice of variables and methods

    International Nuclear Information System (INIS)

    Turner, L.R.

    1981-01-01

    Magnetostatic calculations can involve (in order of increasing complexity) conductors only, material with constant or infinite permeability, or material with variable permeability. We consider here only the most general case, calculations involving ferritic material with variable permeability. Variables suitable for magnetostatic calculations are the magnetic field, the magnetic vector potential, and the magnetic scalar potential. For two-dimensional calculations the potentials, which each have only one component, have advantages over the field, which has two components. Because it is a single-valued variable, the vector potential is perhaps the best variable for two-dimensional calculations. In three dimensions, both the field and the vector potential have three components; the scalar potential, with only one component,provides a much smaller system of equations to be solved. However the scalar potential is not single-valued. To circumvent this problem, a calculation with two scalar potentials can be performed. The scalar potential whose source is the conductors can be calculated directly by the Biot-Savart law, and the scalar potential whose source is the magnetized material is single valued. However in some situations, the fields from the two potentials nearly cancel; and the numerical accuracy is lost. The 3-D magnetostatic program TOSCA employs a single total scalar potential; the program GFUN uses the magnetic field as its variable

  17. DO TIE LABORATORY BASED ASSESSMENT METHODS REALLY PREDICT FIELD EFFECTS?

    Science.gov (United States)

    Sediment Toxicity Identification and Evaluation (TIE) methods have been developed for both porewaters and whole sediments. These relatively simple laboratory methods are designed to identify specific toxicants or classes of toxicants in sediments; however, the question of whethe...

  18. DO TIE LABORATORY BASED METHODS REALLY REFLECT FIELD CONDITIONS

    Science.gov (United States)

    Sediment Toxicity Identification and Evaluation (TIE) methods have been developed for both interstitial waters and whole sediments. These relatively simple laboratory methods are designed to identify specific toxicants or classes of toxicants in sediments; however, the question ...

  19. Fusion research principles

    CERN Document Server

    Dolan, Thomas James

    2013-01-01

    Fusion Research, Volume I: Principles provides a general description of the methods and problems of fusion research. The book contains three main parts: Principles, Experiments, and Technology. The Principles part describes the conditions necessary for a fusion reaction, as well as the fundamentals of plasma confinement, heating, and diagnostics. The Experiments part details about forty plasma confinement schemes and experiments. The last part explores various engineering problems associated with reactor design, vacuum and magnet systems, materials, plasma purity, fueling, blankets, neutronics

  20. Neutron optics using transverse field neutron spin echo method

    International Nuclear Information System (INIS)

    Achiwa, Norio; Hino, Masahiro; Yamauchi, Yoshihiro; Takakura, Hiroyuki; Tasaki, Seiji; Akiyoshi, Tsunekazu; Ebisawa, Toru.

    1993-01-01

    A neutron spin echo (NSE) spectrometer with a magnetic field perpendicular to the neutron scattering plane, using an iron-yoke-type electromagnet, has been developed. A combination of a cold neutron guide, a supermirror neutron polarizer of the double-reflection type and a supermirror neutron analyser was adopted for the spectrometer. The first application of the NSE spectrometer to neutron optics, by passing Larmor-precessing neutrons through gas, solid and liquid materials of several different lengths inserted in one of the precession fields, has been examined. Preliminary NSE spectra for this sample geometry are discussed. (author)

  1. Inverse transformation algorithm of transient electromagnetic field and its high-resolution continuous imaging interpretation method

    International Nuclear Information System (INIS)

    Qi, Zhipeng; Li, Xiu; Lu, Xushan; Zhang, Yingying; Yao, Weihua

    2015-01-01

    We introduce a new and potentially useful method for wave field inverse transformation and its application in transient electromagnetic method (TEM) 3D interpretation. The diffusive EM field is known to have a unique integral representation in terms of a fictitious wave field that satisfies a wave equation. The continuous imaging of TEM can be accomplished using the imaging methods of seismic interpretation after the diffusion equation is transformed into a fictitious wave equation. The interpretation method based on the imaging of a fictitious wave field could be used as a fast 3D inversion method. Moreover, the fictitious wave field possesses some wave field features, making it possible to apply a wave field interpretation method in TEM to improve the prospecting resolution. Wave field transformation is a key issue in the migration imaging of a fictitious wave field. The equation in the wave field transformation is a Fredholm integral equation of the first kind, which is a typical ill-posed equation. Additionally, TEM has a large dynamic time range, which aggravates this ill-posedness. The wave field transformation is implemented using a pre-conditioned regularized conjugate gradient method. The continuous imaging of the fictitious wave field is implemented using Kirchhoff integration. A synthetic aperture and deconvolution algorithm is also introduced to improve the interpretation resolution. We interpreted field data with the method proposed in this paper and obtained a satisfactory interpretation result. (paper)

  2. First-principles calculations on strain and electric field induced band modulation and phase transition of bilayer WSe2–MoS2 heterostructure

    Science.gov (United States)

    Lei, Xiang; Yu, Ke

    2018-04-01

    A purposeful modulation of the physical properties of a material by changing external conditions has long captured people's interest and can provide many opportunities to improve the specific performance of electronic devices. In this work, a comprehensive first-principles survey was performed to elucidate that the bandgap and electronic properties of the WSe2–MoS2 heterostructure exhibit an unusual response to external strain and electric field in comparison with the pristine structures. It demonstrates that WSe2–MoS2 is a typical type-II heterostructure, and thus the electron-hole pairs can be effectively spatially separated. The external effects can trigger an electronic phase transition from the semiconducting to the metallic state, which originates from the energy-level shift induced by the internal electric evolution. Interestingly, the applied strain shows no direction-dependent character in the modulation of the bandgap of the WSe2–MoS2 heterostructure, whereas such a dependence exists in the electric field tuning process and strongly depends on the direction of the electric field. Our findings elucidate the tunable electronic properties of the bilayer WSe2–MoS2 heterostructure and would provide a valuable reference for designing electronic nanodevices.

  3. Three-dimensional numerical study of heat transfer characteristics of plain plate fin-and-tube heat exchangers from view point of field synergy principle

    International Nuclear Information System (INIS)

    He, Y.L.; Tao, W.Q.; Song, F.Q.; Zhang, W.

    2005-01-01

    In this paper, 3-D numerical simulations were performed for the laminar heat transfer and fluid flow characteristics of plate fin-and-tube heat exchangers. The effects of five factors were examined: Reynolds number, fin pitch, tube row number, and spanwise and longitudinal tube pitch. The Reynolds number based on the tube diameter varied from 288 to 5000, the non-dimensional fin pitch based on the tube diameter varied from 0.04 to 0.5, the tube row number from 1 to 4, the spanwise tube pitch S1/d from 1.2 to 3, and the longitudinal tube pitch S2/d from 1.0 to 2.4. The numerical results were analyzed from the viewpoint of the field synergy principle, which states that reducing the intersection angle between the velocity and the fluid temperature gradient is the basic mechanism for enhancing convective heat transfer. It is found that the effects of the five parameters on the heat transfer performance of the finned tube banks can be well described by the field synergy principle, i.e., the enhancement or deterioration of convective heat transfer across the finned tube banks is inherently related to the variation of the intersection angle between the velocity and the fluid temperature gradient. It is also recommended that, to further enhance convective heat transfer, enhancement techniques such as slotting the fin should be applied mainly in the rear part of the fin, where the synergy between the local velocity and the temperature gradient becomes worse.
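
    The local synergy angle referred to above can be evaluated directly on discrete velocity and temperature fields; a minimal Python sketch follows. The grid, the toy fields and the helper name synergy_angle are illustrative assumptions, not taken from the paper.

      import numpy as np

      def synergy_angle(u, v, T, dx, dy):
          # Local intersection (synergy) angle, in degrees, between the velocity
          # vector (u, v) and the temperature gradient on a 2-D grid.
          dTdy, dTdx = np.gradient(T, dy, dx)      # np.gradient returns axis-0 then axis-1 derivatives
          dot = u * dTdx + v * dTdy
          norm = np.hypot(u, v) * np.hypot(dTdx, dTdy) + 1e-30
          return np.degrees(np.arccos(np.clip(dot / norm, -1.0, 1.0)))

      # Toy check: uniform flow in x over a temperature field varying only in x
      ny, nx, dx, dy = 40, 80, 0.01, 0.01
      u = np.ones((ny, nx)); v = np.zeros((ny, nx))
      T = np.tile(np.linspace(300.0, 350.0, nx), (ny, 1))
      theta = synergy_angle(u, v, T, dx, dy)       # ~0 deg everywhere: best possible synergy

    A small angle means the flow carries fluid along the temperature gradient, which is exactly the condition the field synergy principle associates with enhanced convective heat transfer.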

  4. Enhancing Field Research Methods with Mobile Survey Technology

    Science.gov (United States)

    Glass, Michael R.

    2015-01-01

    This paper assesses the experience of undergraduate students using mobile devices and a commercial application, iSurvey, to conduct a neighborhood survey. Mobile devices offer benefits for enhancing student learning and engagement. This field exercise created the opportunity for classroom discussions on the practicalities of urban research, the…

  5. Computational methods in several fields of radiation dosimetry

    International Nuclear Information System (INIS)

    Paretzke, Herwig G.

    2010-01-01

    Full text: Radiation dosimetry has to cope with a wide spectrum of applications and requirements in time and size. The ubiquitous presence of various radiation fields or radionuclides in the human home, working, urban or agricultural environment can lead to a variety of dosimetric tasks, from radioecology, retrospective and predictive dosimetry and personal dosimetry up to measurements of radionuclide concentrations in environmental and food products and, finally, in persons and their excreta. In all these fields, measurements and computational models for the interpretation or understanding of observations are employed explicitly or implicitly. In this lecture some examples of our own computational models will be given from the various dosimetric fields, including a) radioecology (e.g. the code systems based on ECOSYS, which was developed well before the Chernobyl reactor accident and tested thoroughly afterwards), b) internal dosimetry (improved metabolism models based on our own data), c) external dosimetry (with the new ICRU-ICRP voxel phantom developed by our lab), d) radiation therapy (with GEANT IV applied to mixed reactor radiation incident on individualized voxel phantoms), and e) some aspects of nanodosimetric track structure computations (not dealt with in the other presentation by this author). Finally, some general remarks will be made on the high explicit or implicit importance of computational models in radiation protection and other research fields dealing with large systems, as well as on the good scientific practices that should be followed when developing and applying such computational models.

  6. Effective and efficient method of calculating Bessel beam fields

    CSIR Research Space (South Africa)

    Litvin, IA

    2005-01-01

    Full Text Available Bessel beams have gathered much interest of late due to their near-diffraction-free propagation and their self-reconstruction after obstacles. Such laser beams have already found applications in fields such as optical tweezers and as pump...

  7. Grassmann methods in lattice field theory and statistical mechanics

    International Nuclear Information System (INIS)

    Bilgici, E.; Gattringer, C.; Huber, P.

    2006-01-01

    Full text: In two dimensions, models of loops can be represented as simple Grassmann integrals. In our work we explore the generalization of these techniques to lattice field theories and statistical mechanics systems in three and four dimensions. We discuss possible strategies and applications for representations of loop and surface models as Grassmann integrals. (author)

  8. Killing vector fields in three dimensions: a method to solve massive gravity field equations

    Energy Technology Data Exchange (ETDEWEB)

    Guerses, Metin, E-mail: gurses@fen.bilkent.edu.t [Department of Mathematics, Faculty of Sciences, Bilkent University, 06800 Ankara (Turkey)

    2010-10-21

    Killing vector fields in three dimensions play an important role in the construction of the related spacetime geometry. In this work we show that when a three-dimensional geometry admits a Killing vector field then the Ricci tensor of the geometry is determined in terms of the Killing vector field and its scalars. In this way we can generate all products and covariant derivatives at any order of the Ricci tensor. Using this property we give ways to solve the field equations of topologically massive gravity (TMG) and new massive gravity (NMG) introduced recently. In particular when the scalars of the Killing vector field (timelike, spacelike and null cases) are constants then all three-dimensional symmetric tensors of the geometry, the Ricci and Einstein tensors, their covariant derivatives at all orders, and their products of all orders are completely determined by the Killing vector field and the metric. Hence, the corresponding three-dimensional metrics are strong candidates for solving all higher derivative gravitational field equations in three dimensions.
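
    For reference, the Killing condition assumed in this record is the standard textbook one (not a result of the paper); in LaTeX notation:

      \nabla_{\mu}\,\xi_{\nu} + \nabla_{\nu}\,\xi_{\mu} = 0, \qquad \text{equivalently} \qquad \mathcal{L}_{\xi}\, g_{\mu\nu} = 0,

    i.e. the metric has vanishing Lie derivative along the Killing vector field \xi, which is what allows the Ricci tensor and its derivatives to be expressed through \xi and its scalars.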

  9. Field test of available methods to measure remotely SOx and NOx emissions from ships

    Science.gov (United States)

    Balzani Lööv, J. M.; Alfoldy, B.; Gast, L. F. L.; Hjorth, J.; Lagler, F.; Mellqvist, J.; Beecken, J.; Berg, N.; Duyzer, J.; Westrate, H.; Swart, D. P. J.; Berkhout, A. J. C.; Jalkanen, J.-P.; Prata, A. J.; van der Hoff, G. R.; Borowiak, A.

    2014-08-01

    Methods for determining ship fuel sulphur content and NOx emission factors from remote measurements have been intercompared in the harbour of Rotterdam and compared to direct stack emission measurements on the ferry Stena Hollandica. The methods were selected based on a review of the available literature on ship emission measurements. They were either optical (LIDAR, Differential Optical Absorption Spectroscopy (DOAS), UV camera), combined with model-based estimates of fuel consumption, or based on the so-called "sniffer" principle, where SO2 or NOx emission factors are determined from simultaneous measurement of the increase of CO2 and SO2 or NOx concentrations in the plume of the ship relative to the background. The measurements were performed from stations on land, from a boat and from a helicopter. Mobile measurement platforms were found to have important advantages over land-based ones because they allow optimizing the sampling conditions and sampling from ships on the open sea. Although optical methods can provide reliable results, it was found that, at the current state of the art, the "sniffer" approach is the most convenient technique for determining both SO2 and NOx emission factors remotely. The average random error in the determination of SO2 emission factors, comparing two identical instrumental set-ups, was 6%. However, it was found that apparently minor differences in the instrumental characteristics, such as response time, could cause significant differences between the emission factors determined. Direct stack measurements showed that about 14% of the fuel sulphur content was not emitted as SO2. This was supported by the remote measurements and is in agreement with the results of other field studies.
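
    A minimal Python sketch of the arithmetic behind the "sniffer" principle used in this and the following record: the plume increments of SO2 and CO2 above background are converted to an emission factor via the carbon content of the fuel. The carbon mass fraction, the assumption that all fuel carbon leaves as CO2 (and all sulphur as SO2), and the function names are illustrative simplifications; the record itself notes that roughly 14% of the fuel sulphur is not emitted as SO2.

      # Molar masses (g/mol) and an assumed residual-fuel carbon mass fraction
      M_C, M_SO2, M_S = 12.011, 64.06, 32.06
      CARBON_FRACTION = 0.87          # typical assumption for marine fuel

      def so2_emission_factor(d_so2_ppb, d_co2_ppm):
          # SO2 emission factor in g per kg fuel from plume increments above background:
          # mole ratio of the two increments scaled by the fuel carbon content.
          mol_ratio = (d_so2_ppb * 1e-9) / (d_co2_ppm * 1e-6)   # mol SO2 per mol CO2
          mol_co2_per_kg_fuel = 1000.0 * CARBON_FRACTION / M_C   # all C assumed -> CO2
          return mol_ratio * mol_co2_per_kg_fuel * M_SO2

      def fuel_sulphur_percent(d_so2_ppb, d_co2_ppm):
          # Apparent fuel sulphur content (mass %) assuming all S is emitted as SO2.
          ef = so2_emission_factor(d_so2_ppb, d_co2_ppm)
          return ef * (M_S / M_SO2) / 10.0                       # g S per kg fuel -> %

      # Example: a 40 ppb SO2 rise accompanying a 10 ppm CO2 rise in the plume
      print(fuel_sulphur_percent(40.0, 10.0))                    # about 0.93 % sulphur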

  10. Field test of available methods to measure remotely SO2 and NOx emissions from ships

    Science.gov (United States)

    Balzani Lööv, J. M.; Alfoldy, B.; Beecken, J.; Berg, N.; Berkhout, A. J. C.; Duyzer, J.; Gast, L. F. L.; Hjorth, J.; Jalkanen, J.-P.; Lagler, F.; Mellqvist, J.; Prata, F.; van der Hoff, G. R.; Westrate, H.; Swart, D. P. J.; Borowiak, A.

    2013-11-01

    Methods for determining ship fuel sulphur content and NOx emission factors from remote measurements have been intercompared in the harbour of Rotterdam and compared to direct stack emission measurements on the ferry Stena Hollandica. The methods were selected based on a review of the available literature on ship emission measurements. They were either optical (LIDAR, DOAS, UV camera), combined with model-based estimates of fuel consumption, or based on the so-called "sniffer" principle, where SO2 or NOx emission factors are determined from simultaneous measurement of the increase of CO2 and SO2 or NOx concentrations in the plume of the ship relative to the background. The measurements were performed from stations on land, from a boat, and from a helicopter. Mobile measurement platforms were found to have important advantages over land-based ones because they allow optimizing the sampling conditions and sampling from ships on the open sea. Although optical methods can provide reliable results, it was found that, at the current state of the art, the "sniffer" approach is the most convenient technique for determining both SO2 and NOx emission factors remotely. The average random error in the determination of SO2 emission factors, comparing two identical instrumental set-ups, was 6%. However, it was found that apparently minor differences in the instrumental characteristics, such as response time, could cause significant differences between the emission factors determined. Direct stack measurements showed that about 14% of the fuel sulphur content was not emitted as SO2. This was supported by the remote measurements and is in agreement with the results of other field studies.

  11. Power spectrum of the geomagnetic field by the maximum entropy method

    International Nuclear Information System (INIS)

    Kantor, I.J.; Trivedi, N.B.

    1980-01-01

    Monthly mean values of the geomagnetic field at Vassouras (state of Rio de Janeiro) are analyzed using the maximum entropy method. The method is described and compared with other methods of spectral analysis, and its advantages and disadvantages are presented. (Author) [pt
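
    A compact Python sketch of Burg's recursion, the usual numerical realization of the maximum entropy method for power spectra. The toy monthly series, the chosen model order and the helper names are illustrative assumptions, not the Vassouras data or the authors' implementation.

      import numpy as np

      def burg_ar(x, order):
          # Burg (maximum entropy) estimate of AR coefficients a (a[0] = 1)
          # and residual power E from a real time series x.
          x = np.asarray(x, dtype=float)
          a = np.array([1.0])
          E = np.dot(x, x) / len(x)
          f = x[1:].copy()     # forward prediction errors
          b = x[:-1].copy()    # backward prediction errors
          for _ in range(order):
              k = -2.0 * np.dot(f, b) / (np.dot(f, f) + np.dot(b, b))
              a = np.concatenate([a, [0.0]])
              a = a + k * a[::-1]
              E *= 1.0 - k * k
              f, b = f[1:] + k * b[1:], b[:-1] + k * f[:-1]
          return a, E

      def mem_psd(a, E, dt=1.0, nfft=1024):
          # One-sided power spectral density of the fitted AR model.
          freqs = np.fft.rfftfreq(nfft, dt)
          denom = np.abs(np.fft.rfft(a, nfft)) ** 2
          return freqs, E * dt / denom

      # Toy series: two sinusoids plus noise, sampled monthly (dt = 1 month)
      rng = np.random.default_rng(1)
      n = np.arange(480)
      x = (np.sin(2 * np.pi * n / 12) + 0.5 * np.sin(2 * np.pi * n / 27)
           + 0.3 * rng.standard_normal(n.size))
      a, E = burg_ar(x - x.mean(), order=30)
      freqs, psd = mem_psd(a, E)     # spectral peaks near 1/12 and 1/27 cycles per month

    The maximum entropy spectrum is the spectrum of the fitted autoregressive model, which is why short, monthly-mean records can still resolve sharp spectral peaks better than a plain periodogram.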

  12. On multiplying methods in the field of research evaluation

    Energy Technology Data Exchange (ETDEWEB)

    Derrick, G.; Molas-Gallart, J.; De Rijcke, S.; Meijer, I.; Van der Weijden, I.; Wouters, P.

    2016-07-01

    This special session forms part of a larger program aimed at the multiplication and integration of methodological approaches in the research evaluation and innovation policy field. The session builds on previous initiatives by Gemma Derrick and colleagues at CWTS, INGENIO, the Rathenau Instituut and SPRU, exploring the advantages of qualitative methodological tools at the STI/ENID conference in Lugano, and an international workshop in London in October 2015. The program is highly topical: the research evaluation field is currently reconsidering its methodological foundations in light of new research questions arising from policy initiatives regarding a) the move toward open science; b) a reconceptualization of research excellence to include societal relevance; c) diversification of academic careers, and d) the search for indicators showcasing responsible research behavior and innovation. This new special session at STI2016 will advance and broaden the scope of previous initiatives by building bridges between cutting edge research involving quantitative, qualitative, and mixed methodological research designs. Bringing together leading experts and promising researchers with distinctive methodological skill-sets, the session will demonstrate the advantages of cross-fertilization between ‘core’ and ‘peripheral’ methodological approaches for the research evaluation and science indicators field. (Author)

  13. XAFS Spectroscopy : Fundamental Principles and Data Analysis

    NARCIS (Netherlands)

    Koningsberger, D.C.; Mojet, B.L.; Dorssen, G.E. van; Ramaker, D.E.

    2000-01-01

    The physical principles of XAFS spectroscopy are given at a sufficiently basic level to enable scientists working in the field of catalysis to critically evaluate articles dealing with XAFS studies on catalytic materials. The described data-analysis methods provide the basic tools for studying the

  14. Measurement and data analysis methods for field-scale wind erosion studies and model validation

    NARCIS (Netherlands)

    Zobeck, T.M.; Sterk, G.; Funk, R.F.; Rajot, J.L.; Stout, J.E.; Scott Van Pelt, R.

    2003-01-01

    Accurate and reliable methods of measuring windblown sediment are needed to confirm, validate, and improve erosion models, assess the intensity of aeolian processes and related damage, determine the source of pollutants, and for other applications. This paper outlines important principles to

  15. ACCOUNTING PRINCIPLES: EVOLUTION, CONTENT, CONSEQUENCES

    Directory of Open Access Journals (Sweden)

    Liliana LAZARI

    2017-04-01

    Full Text Available Accounting principles are rules that help producers of financial information to recognize, evaluate, classify and present information. At the same time, they are very general rules that can be implemented in several ways, generating multiple accounting treatments. Although the foreign literature presents numerous classifications of these principles, this study deals with the terms that have the status of accounting principles, their evolution in the Republic of Moldova, and their influence on bookkeeping and financial reporting. The research uses the methods of comparison, analysis and deduction, as well as the historical method. The results of the research on accounting principles will be useful both to those studying and researching the field of accounting and to those who apply them in practice at all stages of accounting work: elaboration of accounting policies; recognition, evaluation and recording of economic transactions; and preparation of the entity's financial statements.

  16. Electrostatic field in inhomogeneous dielectric media. I. Indirect boundary element method

    International Nuclear Information System (INIS)

    Goel, N.S.; Gang, F.; Ko, Z.

    1995-01-01

    A computationally fast method is presented for calculating the electrostatic field in arbitrary inhomogeneous dielectric media with open boundary conditions. The method involves dividing the whole space into cubical cells and then finding effective dielectric parameters for interfacial cells consisting of several dielectrics. The electrostatic problem is then solved using either the indirect boundary element method described in this paper or the so-called volume element method described in the companion paper. Both methods are tested for accuracy by comparing the numerically calculated electrostatic fields against those obtained analytically for a dielectric sphere and a dielectric ellipsoid in a uniform field and for a dielectric sphere in a point-charge field
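
    One of the analytic benchmarks mentioned above is the uniform field inside a dielectric sphere placed in a uniform external field, a classic electrostatics result. A minimal Python sketch (function name and numerical values are illustrative, not from the paper):

      def sphere_interior_field(E0, eps_sphere, eps_medium=1.0):
          # Uniform field inside a dielectric sphere of permittivity eps_sphere
          # embedded in a medium of permittivity eps_medium, external field E0.
          return 3.0 * eps_medium / (eps_sphere + 2.0 * eps_medium) * E0

      # Example: a sphere with relative permittivity 5 in vacuum, E0 = 1 V/m
      print(sphere_interior_field(1.0, 5.0))   # 3/7, i.e. about 0.4286 V/m

    Comparing a numerically computed interior field against this closed-form value is a simple way to quantify the accuracy of a boundary or volume element discretization.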

  17. Contribution of surface analysis spectroscopic methods to the lubrication field

    International Nuclear Information System (INIS)

    Blanc, C.

    1979-01-01

    Surface analysis techniques such as ESCA, AES and SIMS are tested for application to a particular lubrication problem. The case studied is a 100C6 steel surface immersed in tricresyl phosphate at 110 °C for 15 days. The nature of the first layers is studied after appropriate solvent cleaning. An iron oxide layer, namely α-Fe2O3, is produced on the bearing surface. ESCA, AES and SIMS studies show an overlayer of iron phosphate. The exact nature of the iron phosphate is not clearly established, but the formation of a ferrous phosphate coating can be assumed from the ESCA analysis [fr

  18. Electron interaction and spin effects in quantum wires, quantum dots and quantum point contacts: a first-principles mean-field approach

    International Nuclear Information System (INIS)

    Zozoulenko, I V; Ihnatsenka, S

    2008-01-01

    We have developed a mean-field first-principles approach for studying electronic and transport properties of low dimensional lateral structures in the integer quantum Hall regime. The electron interactions and spin effects are included within the spin density functional theory in the local density approximation where the conductance, the density, the effective potentials and the band structure are calculated on the basis of the Green's function technique. In this paper we present a systematic review of the major results obtained on the energetics, spin polarization, effective g factor, magnetosubband and edge state structure of split-gate and cleaved-edge overgrown quantum wires as well as on the conductance of quantum point contacts (QPCs) and open quantum dots. In particular, we discuss how the spin-resolved subband structure, the current densities, the confining potentials, as well as the spin polarization of the electron and current densities in quantum wires and antidots evolve when an applied magnetic field varies. We also discuss the role of the electron interaction and spin effects in the conductance of open systems focusing our attention on the 0.7 conductance anomaly in the QPCs. Special emphasis is given to the effect of the electron interaction on the conductance oscillations and their statistics in open quantum dots as well as to interpretation of the related experiments on the ultralow temperature saturation of the coherence time in open dots

  19. Implementation of visual programming methods for numerical techniques used in electromagnetic field theory

    Directory of Open Access Journals (Sweden)

    Metin Varan

    2017-08-01

    Full Text Available Field theory is one of the two sub-fields of electrical and electronics engineering theory that create difficulties for undergraduate students. At the undergraduate level, field theory is taught as the theory of electromagnetic fields, which is described using partial differential equations and integral methods. Analytical solution of field problems on the basis of a mathematical model can cause comprehension difficulties for undergraduate students because of the mathematical and physical background it requires. Analytical methods that can be applied to simple models lose their applicability for more complex ones; in such cases, numerical methods are used to solve the more complex equations. In this study, web-based graphical user interfaces for numerical-method applications in field theory were prepared with the aim of improving how well undergraduate and graduate students learn field theory problems, while taking their computer programming capabilities into account.
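
    As a generic example of the kind of numerical technique such a graphical interface might wrap (this sketch is not taken from the paper), here is a Jacobi relaxation solver for Laplace's equation on a 2-D grid; the grid size, boundary values and helper name are illustrative.

      import numpy as np

      def solve_laplace(potential, fixed, n_iter=5000, tol=1e-6):
          # Jacobi relaxation for Laplace's equation on a 2-D grid.
          # `potential` holds the prescribed values; `fixed` is True where
          # the potential is held constant (electrodes / outer boundary).
          V = potential.copy()
          for _ in range(n_iter):
              V_new = V.copy()
              V_new[1:-1, 1:-1] = 0.25 * (V[:-2, 1:-1] + V[2:, 1:-1] +
                                          V[1:-1, :-2] + V[1:-1, 2:])
              V_new[fixed] = potential[fixed]      # re-impose fixed potentials
              if np.max(np.abs(V_new - V)) < tol:
                  return V_new
              V = V_new
          return V

      # Parallel-plate-like example: top edge at 1 V, remaining edges grounded
      V0 = np.zeros((60, 60)); V0[0, :] = 1.0
      fixed = np.zeros_like(V0, dtype=bool)
      fixed[0, :] = fixed[-1, :] = fixed[:, 0] = fixed[:, -1] = True
      V = solve_laplace(V0, fixed)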

  20. First-principles method for calculating the rate constants of internal-conversion and intersystem-crossing transitions.

    Science.gov (United States)

    Valiev, R R; Cherepanov, V N; Baryshnikov, G V; Sundholm, D

    2018-02-28

    A method for calculating the rate constants for internal-conversion (kIC) and intersystem-crossing (kISC) processes within the adiabatic and Franck-Condon (FC) approximations is proposed. The applicability of the method is demonstrated by calculation of kIC and kISC for a set of organic and organometallic compounds with experimentally known spectroscopic properties. The studied molecules were pyrromethene-567 dye, psoralene, hetero[8]circulenes, free-base porphyrin, naphthalene, and larger polyacenes. We also studied fac-Alq3 and fac-Ir(ppy)3, which are important molecules in organic light emitting diodes (OLEDs). The excitation energies were calculated at the multi-configuration quasi-degenerate second-order perturbation theory (XMC-QDPT2) level, which is found to yield excitation energies in good agreement with experimental data. Spin-orbit coupling matrix elements, non-adiabatic coupling matrix elements, Huang-Rhys factors, and vibrational energies were calculated at the time-dependent density functional theory (TDDFT) and complete active space self-consistent field (CASSCF) levels. The computed fluorescence quantum yields for the pyrromethene-567 dye, psoralene, hetero[8]circulenes, fac-Alq3 and fac-Ir(ppy)3 agree well with experimental data, whereas for the free-base porphyrin, naphthalene, and the polyacenes, the obtained quantum yields significantly differ from the experimental values, because the FC and adiabatic approximations are not accurate for these molecules.
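
    The fluorescence quantum yields mentioned above follow from the computed rate constants via the standard kinetic branching ratio between the radiative and non-radiative decay channels. A minimal Python sketch with illustrative rate-constant values (not taken from the paper):

      def fluorescence_quantum_yield(k_r, k_ic, k_isc):
          # Quantum yield from competing first-order decay channels:
          # radiative (k_r), internal conversion (k_ic), intersystem crossing (k_isc).
          return k_r / (k_r + k_ic + k_isc)

      # Example with hypothetical rate constants in s^-1
      print(fluorescence_quantum_yield(2e8, 5e7, 1e7))   # about 0.77

    Accurate kIC and kISC therefore translate directly into accurate predicted quantum yields, which is why the breakdown of the FC and adiabatic approximations for porphyrin, naphthalene and the polyacenes shows up as large errors in the predicted yields.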