The SPH homogeneization method
International Nuclear Information System (INIS)
Kavenoky, Alain
1978-01-01
The homogeneization of a uniform lattice is a rather well understood topic, while difficult problems arise if the lattice becomes irregular. The SPH homogeneization method is an attempt to generate homogeneized cross sections for an irregular lattice. Section 1 summarizes the treatment of an isolated cylindrical cell with an entering surface current (in one-velocity theory); Section 2 is devoted to the extension of the SPH method to assembly problems. Finally, Section 3 presents the generalisation to general multigroup problems. Numerical results are obtained for a PWR rod bundle assembly in Section 4.
Research on reactor physics analysis method based on Monte Carlo homogenization
International Nuclear Information System (INIS)
Ye Zhimin; Zhang Peng
2014-01-01
In order to meet the demands of the nuclear energy market in the future, many new concepts of nuclear energy systems have been put forward. The traditional deterministic neutronics analysis method has been challenged in two aspects: one is the ability to process generic geometry; the other is the multi-spectrum applicability of the multigroup cross section libraries. Due to its strong geometry modeling capability and its use of continuous-energy cross section libraries, the Monte Carlo method has been widely used in reactor physics calculations, and more and more research on the Monte Carlo method has been carried out. Neutronics-thermal hydraulics coupling analysis based on the Monte Carlo method has been realized. However, it still faces the problems of long computation time and slow convergence, which make it inapplicable to reactor core fuel management simulations. Drawing on the deterministic core analysis method, a new two-step core analysis scheme is proposed in this work. First, Monte Carlo simulations are performed for each assembly, and the assembly-homogenized multigroup cross sections are tallied at the same time. Second, the core diffusion calculations are done with these multigroup cross sections. The new scheme can achieve high efficiency while maintaining acceptable precision, so it can be used as an effective tool for the design and analysis of innovative nuclear energy systems. Numerical tests have been done in this work to verify the new scheme. (authors)
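The second step of the proposed scheme can be sketched in miniature: once the assembly-homogenized multigroup cross sections have been tallied (step 1), a core solver consumes them. Below, a minimal stand-in for that second step, the standard two-group infinite-medium multiplication factor; the cross-section values are hypothetical, not from any real assembly:

```python
# Step 2 of a two-step scheme: k-infinity from assembly-homogenized
# two-group constants (group 1 fast, group 2 thermal).
def k_inf_two_group(nu_sf1, nu_sf2, sa1, sa2, s12):
    """Standard two-group infinite-medium balance:
    k_inf = (nu_Sf1 + nu_Sf2 * S12 / Sa2) / (Sa1 + S12),
    i.e. fast fissions plus thermal fissions fed by down-scattering."""
    return (nu_sf1 + nu_sf2 * s12 / sa2) / (sa1 + s12)

# hypothetical homogenized constants (cm^-1)
k = k_inf_two_group(nu_sf1=0.005, nu_sf2=0.100, sa1=0.010, sa2=0.080, s12=0.020)
```

In a real two-step calculation these constants would of course feed a spatial diffusion solver rather than an infinite-medium balance.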
Physical applications of homogeneous balls
Scarr, Tzvi
2005-01-01
One of the mathematical challenges of modern physics lies in the development of new tools to efficiently describe different branches of physics within one mathematical framework. This text introduces precisely such a broad mathematical model, one that gives a clear geometric expression of the symmetry of physical laws and is entirely determined by that symmetry. The first three chapters discuss the occurrence of bounded symmetric domains (BSDs) or homogeneous balls and their algebraic structure in physics. The book further provides a discussion of how to obtain a triple algebraic structure ass
Fluid-structure interaction in tube bundles: homogenization methods, physical analysis
International Nuclear Information System (INIS)
Broc, D.; Sigrist, J.F.
2009-01-01
It is well known that the movements of a structure may be strongly influenced by fluid. This topic, called 'fluid-structure interaction', is important in many industrial applications. Tube bundles immersed in fluid are found in many cases, especially in the nuclear industry (reactor cores, steam generators, ...). The fluid leads to 'inertial effects' (with a decrease of the vibration frequencies) and 'dissipative effects' (with higher damping). The paper first presents the methods used for the simulation of the dynamic behaviour of tube bundles immersed in a fluid, with industrial examples. The methods used are based on the Euler equations for the fluid (perfect fluid), which make it possible to take the inertial effects into account. Dissipative effects can also be taken into account by using Rayleigh damping. The conclusion focuses on improvements of the methods, in order to account more accurately for the influence of the fluid, mainly the dissipative effects, which may be very important, especially in the case of a global fluid flow. (authors)
ISOTOPE METHODS IN HOMOGENEOUS CATALYSIS.
Energy Technology Data Exchange (ETDEWEB)
BULLOCK,R.M.; BENDER,B.R.
2000-12-01
The use of isotope labels has had a fundamentally important role in the determination of mechanisms of homogeneously catalyzed reactions. Mechanistic data are valuable since they can assist in the design and rational improvement of homogeneous catalysts. There are several ways to use isotopes in mechanistic chemistry. Isotopes can be introduced into controlled experiments and followed to see where they go or don't go; in this way, Libby, Calvin, Taube and others used isotopes to elucidate mechanistic pathways for very different, yet important chemistries. Another important isotope method is the study of kinetic isotope effects (KIEs) and equilibrium isotope effects (EIEs). Here the mere observation of where a label winds up is no longer enough - what matters is how much slower (or faster) a labeled molecule reacts than the unlabeled material. The most careful studies essentially involve the measurement of isotope fractionation between a reference ground state and the transition state. Thus kinetic isotope effects provide unique data unavailable from other methods, since information about the transition state of a reaction is obtained. Because getting an experimental glimpse of transition states is really tantamount to understanding catalysis, kinetic isotope effects are very powerful.
Borden, Brett; Luscombe, James
2017-10-01
Physics is expressed in the language of mathematics; this is deeply ingrained in how physics is taught and how it is practiced. A study of the mathematics used in science is thus a sound intellectual investment for training as scientists and engineers. This first volume of two is centered on methods of solving partial differential equations and the special functions introduced along the way. This text is based on a course offered at the Naval Postgraduate School (NPS) and, while produced for NPS needs, it will serve other universities well.
A second stage homogenization method
International Nuclear Information System (INIS)
Makai, M.
1981-01-01
A second homogenization is needed before the diffusion calculation of the core of large reactors. Such a second-stage homogenization is outlined here. Our starting point is the Floquet theorem, since it states that the diffusion equation for a periodic core always has a particular solution of the form e^(jBx) u(x). It is pointed out that the perturbation series expansion of the function u can be derived by solving eigenvalue problems, and the eigenvalues serve to define homogenized cross sections. With the help of these eigenvalues a homogenized diffusion equation can be derived, whose solution is cos(Bx), the macroflux. It is shown that the flux can be expressed as a series in the buckling. The leading term in this series is the well-known Wigner-Seitz formula. Finally three examples are given: periodic absorption, a cell with an absorber pin in the cell centre, and a cell of three regions. (orig.)
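Written out, the Floquet ansatz and its buckling expansion run as follows (a sketch in the abstract's notation; p denotes the lattice period):

```latex
% Floquet form of the periodic-core diffusion solution:
\phi(x) = e^{jBx}\, u(x), \qquad u(x + p) = u(x),
% perturbation expansion of u in powers of the buckling B, each
% order defining an eigenvalue problem on the cell:
u(x) = u_0(x) + jB\, u_1(x) + (jB)^2 u_2(x) + \cdots
% combining the e^{+jBx} and e^{-jBx} solutions yields the macroflux
\Phi(x) = \cos(Bx).
```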
Homogenization methods for heterogeneous assemblies
International Nuclear Information System (INIS)
Wagner, M.R.
1980-01-01
The third session of the IAEA Technical Committee Meeting is concerned with the problem of homogenization of heterogeneous assemblies. Six papers will be presented on the theory of homogenization and on practical procedures for deriving homogenized group cross sections and diffusion coefficients. The problem of finding so-called ''equivalent'' diffusion theory parameters for use in global reactor calculations is of great practical importance. In spite of this, it is fair to say that the present state of the theory of second homogenization is far from satisfactory. In fact, there is not even a uniquely accepted approach to the problem of deriving equivalent group diffusion parameters. Common agreement exists only about the fact that the conventional flux-weighting technique provides only a first approximation, which might lead to acceptable results in certain cases, but certainly does not guarantee the basic requirement of conservation of reaction rates.
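The flux-weighting technique named above as the common first approximation can be stated in a few lines. A minimal sketch, assuming region volumes and region-averaged fluxes are already available from a lattice calculation; the numbers are illustrative only:

```python
# Conventional flux-volume weighting, the "first approximation"
# referred to above (one energy group; apply per group for multigroup).
def flux_volume_weight(volumes, fluxes, sigmas):
    """sigma_hom = sum_i(sigma_i * phi_i * V_i) / sum_i(phi_i * V_i)"""
    num = sum(s * p * v for s, p, v in zip(sigmas, fluxes, volumes))
    den = sum(p * v for p, v in zip(fluxes, volumes))
    return num / den

# two-region toy cell: absorber (flux depressed) plus moderator
sigma_hom = flux_volume_weight(volumes=[1.0, 3.0],
                               fluxes=[0.5, 1.0],
                               sigmas=[0.40, 0.10])
```

Reaction rates are conserved only for the flux used in the weighting, which is exactly why the session treats this as a baseline rather than a solution.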
Homogenization versus homogenization-free method to measure muscle glycogen fractions.
Mojibi, N; Rasouli, M
2016-12-01
Glycogen is extracted from animal tissues with or without homogenization, using cold perchloric acid. Three methods were compared for the determination of glycogen in rat muscle at different physiological states. Two groups of five rats were kept at rest or subjected to 45 minutes of muscular activity. The glycogen fractions were extracted and measured using the three methods. The data of the homogenization method show that total glycogen decreased following 45 min of physical activity and that the change occurred entirely in acid-soluble glycogen (ASG), while acid-insoluble glycogen (AIG) did not change significantly. Similar results were obtained using the "total glycogen fractionation" method. The findings of the "homogenization-free" method indicate that the acid-insoluble fraction (AIG) was the main portion of muscle glycogen and that the majority of changes occurred in the AIG fraction. The results of the "homogenization" method are identical with those of "total glycogen fractionation", but differ from the "homogenization-free" protocol. The ASG fraction is the major portion of muscle glycogen and is the more metabolically active form.
Hybrid diffusion–transport spatial homogenization method
International Nuclear Information System (INIS)
Kooreman, Gabriel; Rahnema, Farzad
2014-01-01
Highlights: • A new hybrid diffusion–transport homogenization method. • An extension of the consistent spatial homogenization (CSH) transport method. • An auxiliary cross section makes the homogenized diffusion equation consistent with heterogeneous diffusion. • An on-the-fly re-homogenization in transport. • The method is faster than fine-mesh transport by 6–8 times. - Abstract: A new hybrid diffusion–transport homogenization method has been developed by extending the consistent spatial homogenization (CSH) transport method to include diffusion theory. As in the CSH method, an “auxiliary cross section” term is introduced into the source term, making the resulting homogenized diffusion equation consistent with its heterogeneous counterpart. The method then utilizes an on-the-fly re-homogenization in transport theory at the assembly level in order to correct for core environment effects on the homogenized cross sections and the auxiliary cross section. The method has been derived in general geometry and tested in a 1-D boiling water reactor (BWR) core benchmark problem for both controlled and uncontrolled configurations. The method has been shown to converge to the reference solution with less than 1.7% average flux error in less than one third of the computational time of the CSH method – 6 to 8 times faster than fine-mesh transport.
Computational Method for Atomistic-Continuum Homogenization
National Research Council Canada - National Science Library
Chung, Peter
2002-01-01
The homogenization method is used as a framework for developing a multiscale system of equations involving atoms at zero temperature at the small scale and continuum mechanics at the very large scale...
Layout optimization using the homogenization method
Suzuki, Katsuyuki; Kikuchi, Noboru
1993-01-01
A generalized layout problem involving sizing, shape, and topology optimization is solved by using the homogenization method for three-dimensional linearly elastic shell structures, in order to explore the possibility of establishing an integrated design system for automotive car bodies, as an extension of the previous work by Bendsoe and Kikuchi. A formulation of a three-dimensional homogenized shell, a solution algorithm, and several examples of computing the optimum layout are presented in this first of two articles.
DNA Dynamics Studied Using the Homogeneous Balance Method
International Nuclear Information System (INIS)
Zayed, E. M. E.; Arnous, A. H.
2012-01-01
We employ the homogeneous balance method to construct the traveling waves of the nonlinear vibrational dynamics modeling of DNA. Some new explicit forms of traveling waves are given. It is shown that this method provides us with a powerful mathematical tool for solving nonlinear evolution equations in mathematical physics. Strengths and weaknesses of the proposed method are discussed. (general)
Statistical methods for assessment of blend homogeneity
DEFF Research Database (Denmark)
Madsen, Camilla
2002-01-01
In this thesis the use of various statistical methods to address some of the problems related to assessment of the homogeneity of powder blends in tablet production is discussed. It is not straightforward to assess the homogeneity of a powder blend. The reason is partly that in bulk materials..., it is shown how to set up parametric acceptance criteria for the batch that give a high confidence that future samples will, with a probability larger than a specified value, pass the USP three-class criteria. Properties and robustness of proposed changes to the USP test for content uniformity are investigated...
Core homogenization method for pebble bed reactors
International Nuclear Information System (INIS)
Kulik, V.; Sanchez, R.
2005-01-01
This work presents a core homogenization scheme for treating a stochastic pebble bed loading in pebble bed reactors. The reactor core is decomposed into macro-domains that contain several pebble types characterized by different degrees of burnup. A stochastic description is introduced to account for pebble-to-pebble and pebble-to-helium interactions within a macro-domain as well as for interactions between macro-domains. Performance of the proposed method is tested for the PROTEUS and ASTRA critical reactor facilities. Numerical simulations accomplished with the APOLLO2 transport lattice code show good agreement with the experimental data for the PROTEUS reactor facility and with the TRIPOLI4 Monte Carlo simulations for the ASTRA reactor configuration. The difference between the proposed method and the traditional volume-averaged homogenization technique is negligible when only one type of fuel pebble is present in the system, but it grows rapidly with the level of pebble heterogeneity. (authors)
Investigation of methods for hydroclimatic data homogenization
Steirou, E.; Koutsoyiannis, D.
2012-04-01
We investigate the methods used for the adjustment of inhomogeneities of temperature time series covering the last 100 years. Based on a systematic study of the scientific literature, we classify and evaluate the observed inhomogeneities in historical and modern time series, as well as their adjustment methods. It turns out that these methods are mainly statistical, not well justified by experiments, and are rarely supported by metadata. In many of the cases studied the proposed corrections are not even statistically significant. From the global database GHCN-Monthly Version 2, we examine all stations containing both raw and adjusted data that satisfy certain criteria of continuity and distribution over the globe. In the United States of America, because of the large number of available stations, stations were chosen after a suitable sampling. In total we analyzed 181 stations globally. For these stations we calculated the differences between the adjusted and non-adjusted linear 100-year trends. It was found that in two-thirds of the cases, the homogenization procedure increased the positive or decreased the negative temperature trends. One of the most common homogenization methods, 'SNHT for single shifts', was applied to synthetic time series with selected statistical characteristics, occasionally with offsets. The method was satisfactory when applied to independent, normally distributed data, but not to data with long-term persistence. The above results cast some doubt on the use of homogenization procedures and tend to indicate that the global temperature increase during the last century is between 0.4°C and 0.7°C, where these two values are the estimates derived from raw and adjusted data, respectively.
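The 'SNHT for single shifts' test used in the study can be sketched compactly. A minimal Python version of Alexandersson's single-shift statistic; the synthetic series below is illustrative, not one of the paper's:

```python
# Alexandersson's SNHT for a single shift: standardize the series, then
# maximize T(k) = k*z1^2 + (n-k)*z2^2 over candidate break points k,
# where z1, z2 are the means of the standardized segments.
import statistics

def snht_single_shift(series):
    n = len(series)
    mean = statistics.fmean(series)
    std = statistics.pstdev(series)
    z = [(q - mean) / std for q in series]
    best_T, best_k = 0.0, 0
    for k in range(1, n):
        z1 = sum(z[:k]) / k            # mean of first standardized segment
        z2 = sum(z[k:]) / (n - k)      # mean of second standardized segment
        T = k * z1 ** 2 + (n - k) * z2 ** 2
        if T > best_T:
            best_T, best_k = T, k
    return best_T, best_k

# synthetic series with a clean shift after sample 10 (illustrative)
T0, k0 = snht_single_shift([0.0] * 10 + [1.0] * 10)
```

A shift is declared when T0 exceeds a critical value that depends on the series length n.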
A new concept of equivalent homogenization method
Energy Technology Data Exchange (ETDEWEB)
Kim, Young Jin; Pogoskekyan, Leonid; Kim, Young Il; Ju, Hyung Kook; Chang, Moon Hee [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)
1996-07-01
A new concept of equivalent homogenization is proposed. The concept employs a new set of homogenized parameters: homogenized cross sections (XS) and an interface matrix (IM), which relates partial currents at the cell interfaces. The idea of the interface matrix generalizes the idea of discontinuity factors (DFs), proposed and developed by K. Koebke and K. Smith. The offered concept covers both those of K. Koebke and K. Smith; both of them can be simulated within the framework of the new concept. The offered concept also covers the Siemens KWU approach for baffle/reflector simulation, where the equivalent homogenized reflector XS are derived from the conservation of the response matrix at the interface in 1D semi-infinite slab geometry. The IM and XS of the new concept satisfy the same assumption about response matrix conservation in 1D semi-infinite slab geometry. It is expected that the new concept provides a more accurate approximation of the heterogeneous cell, especially in the case of steep flux gradients at the cell interfaces. The attractive features of the new concept are: improved accuracy, simplicity of incorporation in existing codes, and numerical expense equal to that of K. Smith's approach. The new concept is useful for: (a) explicit reflector/baffle simulation; (b) control blade simulation; (c) mixed UO{sub 2}/MOX core simulation. The offered model has been incorporated in a finite difference code and in the nodal code PANDOX. The numerical results show good accuracy of core calculations and insensitivity of the homogenized parameters with respect to in-core conditions. 9 figs., 7 refs. (Author).
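For orientation, the quantities involved can be written in the usual one-group diffusion convention (a sketch, not the paper's full multigroup formalism):

```latex
% Partial currents at a cell surface (diffusion-theory convention):
J^{\pm} = \tfrac{1}{4}\phi \pm \tfrac{1}{2}J .
% Koebke/Smith discontinuity factors relate heterogeneous and homogeneous
% surface fluxes through a factor f at each interface s:
f_s = \phi^{\mathrm{het}}_s \big/ \phi^{\mathrm{hom}}_s ,
% while the interface matrix of the new concept relates the full vectors
% of outgoing and incoming partial currents at the cell surface:
\mathbf{J}^{\mathrm{out}} = \mathbf{T}\,\mathbf{J}^{\mathrm{in}} .
```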
A Modified Homogeneous Balance Method and Its Applications
International Nuclear Information System (INIS)
Liu Chunping
2011-01-01
A modified homogeneous balance method is proposed by improving some key steps in the homogeneous balance method. Bilinear equations of some nonlinear evolution equations are derived by using the modified homogeneous balance method. Generalized Boussinesq equation, KP equation, and mKdV equation are chosen as examples to illustrate our method. This approach is also applicable to a large variety of nonlinear evolution equations. (general)
Computational Method for Atomistic-Continuum Homogenization
National Research Council Canada - National Science Library
Chung, Peter
2002-01-01
...." Physical Review Letters. vol. 61, no. 25, pp. 2879-2882, 19 December 1988; Brenner, D. W. "Empirical Potential for Hydrocarbons for Use in Simulating the Chemical Vapor Deposition of Diamond Films...
On some methods in homogenization and their applications
International Nuclear Information System (INIS)
Allaire, Gregoire
1993-01-01
This report (which reproduces an 'Habilitation' thesis) is concerned with homogenization theory, which can be defined as the union of all mathematical techniques that allow one to pass from a microscopic behavior to a macroscopic (or averaged, or effective) behavior of a physical phenomenon modeled by one or several partial differential equations. Some new results are discussed, both from the point of view of methods and from that of applications. The first chapter deals with viscous incompressible fluid flows in porous media and, in particular, contains a derivation of Darcy's and Brinkman's laws. The second chapter is dedicated to the two-scale convergence method. The third chapter focuses on the problem of optimal bounds for the effective properties of composite materials. Finally, in the fourth chapter the previous results are applied to the optimal design problem for elastic shapes. (author) [fr
International Nuclear Information System (INIS)
Takeda, T.; Uto, N.
1988-01-01
Several methods to determine cell-averaged group cross sections and anisotropic diffusion coefficients which consider the interaction effect between core fuel cells and control rods or control rod followers have been compared, to discuss the physical meaning included in cell homogenization. The cell homogenization methods considered are the commonly used flux-weighting method, the reaction rate preservation method and the reactivity preservation method. These homogenization methods have been applied to control rod worth calculations in 1-D slab cores to investigate their applicability. (author). 6 refs, 2 figs, 9 tabs
[Methods for enzymatic determination of triglycerides in liver homogenates].
Höhn, H; Gartzke, J; Burck, D
1987-10-01
An enzymatic method is described for the determination of triacylglycerols in liver homogenate. In contrast to usual methods, higher reliability and selectivity are achieved by omitting the extraction step.
Hydrogen storage materials and method of making by dry homogenation
Jensen, Craig M.; Zidan, Ragaiy A.
2002-01-01
Dry homogenized metal hydrides, in particular aluminum hydride compounds, are provided as a material for reversible hydrogen storage. The reversible hydrogen storage material comprises a dry homogenized material having transition metal catalytic sites on a metal aluminum hydride compound, or mixtures of metal aluminum hydride compounds. A method of making such reversible hydrogen storage materials by dry doping is also provided, comprising the steps of dry homogenizing metal hydrides by mechanical mixing, such as by crushing or ball milling a powder of a metal aluminum hydride with a transition metal catalyst. In another aspect of the invention, a method of powering a vehicle apparatus with the reversible hydrogen storage material is provided.
Applications of a systematic homogenization theory for nodal diffusion methods
International Nuclear Information System (INIS)
Zhang, Hong-bin; Dorning, J.J.
1992-01-01
The authors have recently developed a self-consistent and systematic lattice cell and fuel bundle homogenization theory, based on a multiple spatial scales asymptotic expansion of the transport equation in the ratio of the mean free path to the reactor characteristic dimension, for use with nodal diffusion methods. The mathematical development leads naturally to self-consistent analytical expressions for homogenized diffusion coefficients and cross sections and for flux discontinuity factors to be used in nodal diffusion calculations. The expressions for the homogenized nuclear parameters that follow from the systematic homogenization theory (SHT) are different from those for the traditional flux- and volume-weighted (FVW) parameters. The calculations summarized here show that the systematic homogenization theory developed recently for nodal diffusion methods yields accurate values for k_eff and assembly powers even when compared with the results of a fine-mesh transport calculation. Thus, it provides a practical alternative to equivalence theory and GET (Ref. 3) and to simplified equivalence theory, which requires auxiliary fine-mesh calculations for assemblies embedded in a typical environment to determine the discontinuity factors and the equivalent diffusion coefficient for a homogenized assembly.
Spatial homogenization method based on the inverse problem
International Nuclear Information System (INIS)
Tóta, Ádám; Makai, Mihály
2015-01-01
Highlights: • We derive a spatial homogenization method in slab and cylindrical geometries. • The fluxes and the currents on the boundary are preserved. • The reaction rates and the integral of the fluxes are preserved. • We present verification computations utilizing two and four energy groups. - Abstract: We present a method for deriving homogeneous multi-group cross sections to replace a heterogeneous region's multi-group cross sections, providing that the fluxes, the currents on the external boundary, the reaction rates and the integral of the fluxes are preserved. We consider one-dimensional geometries: a symmetric slab and a homogeneous cylinder. Assuming that the boundary fluxes are given, two response matrices (RMs) can be defined concerning the current and the flux integral. The first one derives the boundary currents from the boundary fluxes, while the second one derives the flux integrals from the boundary fluxes. Further RMs can be defined that connect reaction rates to the boundary fluxes. Assuming that these matrices are known, we present formulae that reconstruct the multi-group diffusion cross-section matrix, the diffusion coefficients and the reaction cross sections in the case of one-dimensional (1D) homogeneous regions. We apply these formulae to 1D heterogeneous regions and thus obtain a homogenization method. This method produces such an equivalent homogeneous material that the fluxes and the currents on the external boundary, the reaction rates and the integral of the fluxes are preserved for any boundary fluxes. We carry out the exact derivations in 1D slab and cylindrical geometries. Verification computations for the presented homogenization method were performed using two- and four-group material cross sections, both in a slab and in a cylindrical geometry.
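The inversion idea can be illustrated in the simplest setting, a one-group homogeneous slab of half-width a (a sketch; the paper performs the multigroup version):

```latex
% With boundary flux phi_b, the source-free diffusion solution in |x|<=a is
\phi(x) = \phi_b\,\frac{\cosh(\kappa x)}{\cosh(\kappa a)},
\qquad \kappa = \sqrt{\Sigma_a / D},
% so the current and flux-integral responses per unit boundary flux are
r_J = D\kappa\,\tanh(\kappa a), \qquad
r_V = \frac{1}{\phi_b}\int_{-a}^{a}\phi\,\mathrm{d}x
    = \frac{2\tanh(\kappa a)}{\kappa}.
% Given r_J and r_V measured on a heterogeneous region, these two
% relations are inverted for the equivalent homogeneous pair (D, Sigma_a).
```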
International Nuclear Information System (INIS)
Khidirov, I.; Khajdarov, T.
1995-01-01
The elasticity characteristics of the cubic and tetragonal phases of titanium nitride in the homogeneity range were studied for the first time by the ultrasonic resonance method. It is established that the Young's modulus and the shear and bulk moduli of cubic titanium nitride in the homogeneity range change nonlinearly with decreasing nitrogen concentration and correlate with the concentration dependences of other physical properties. 15 refs., 2 figs.
Tidal Dissipation in a Homogeneous Spherical Body. 1. Methods
2014-11-01
mantle (with χ = χ_lmpq ≡ |ω_lmpq| being the physical forcing frequency). The dependency J̄(χ) follows from the rheological model. Evidently, the... current paper. Key words: planets and satellites: dynamical evolution and stability - planets and satellites: formation - planets and satellites: general... modeling the body with a homogeneous sphere of a certain rheology. However, the simplistic nature of the approach limits the precision of the ensuing
Homogenized description and retrieval method of nonlinear metasurfaces
Liu, Xiaojun; Larouche, Stéphane; Smith, David R.
2018-03-01
A patterned, plasmonic metasurface can strongly scatter incident light, functioning as an extremely low-profile lens, filter, reflector or other optical device. When the metasurface is patterned uniformly, its linear optical properties can be expressed using effective surface electric and magnetic polarizabilities obtained through a homogenization procedure. The homogenized description of a nonlinear metasurface, however, presents challenges both because of the inherent anisotropy of the medium and because of the much larger set of potential wave interactions available, making it challenging to assign effective nonlinear parameters to the otherwise inhomogeneous layer of metamaterial elements. Here we show that a homogenization procedure can be developed to describe nonlinear metasurfaces, which derive their nonlinear response from the enhanced local fields arising within the structured plasmonic elements. With the proposed homogenization procedure, we are able to assign effective nonlinear surface polarization densities to a nonlinear metasurface, and link these densities to the effective nonlinear surface susceptibilities and averaged macroscopic pumping fields across the metasurface. These effective nonlinear surface polarization densities are further linked to macroscopic nonlinear fields through the generalized sheet transition conditions (GSTCs). By inverting the GSTCs, the effective nonlinear surface susceptibilities of the metasurfaces can be solved for, leading to a generalized retrieval method for nonlinear metasurfaces. The application of the homogenization procedure and the GSTCs is demonstrated by retrieving the nonlinear susceptibilities of a SiO2 nonlinear slab. As an example, we investigate a nonlinear metasurface which presents nonlinear magnetoelectric coupling in the near-infrared regime. The method is expected to apply to any patterned metasurface whose thickness is much smaller than the wavelengths of operation, with inclusions of arbitrary geometry.
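Schematically, and hedged as to sign conventions and normalization (this is a commonly quoted form, not necessarily the paper's exact notation), the GSTCs tie the jumps of the tangential macroscopic fields across the sheet to the surface polarization densities:

```latex
% GSTCs, schematic form (time convention e^{j\omega t}; Delta denotes the
% jump of a field across the sheet, subscript t the tangential part):
\hat{z}\times\Delta\mathbf{H}
  = j\omega\,\mathbf{P}_{t} - \hat{z}\times\nabla_{t} M_{z},
\qquad
\Delta\mathbf{E}\times\hat{z}
  = j\omega\mu_0\,\mathbf{M}_{t}
    - \nabla_{t}\!\left(\frac{P_{z}}{\varepsilon_0}\right)\times\hat{z}.
```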
Analysis of spectral methods for the homogeneous Boltzmann equation
Filbet, Francis
2011-04-01
The development of accurate and fast algorithms for the Boltzmann collision integral and their analysis represent a challenging problem in scientific computing and numerical analysis. Recently, several works were devoted to the derivation of spectrally accurate schemes for the Boltzmann equation, but very few of them were concerned with the stability analysis of the method. In particular there was no result of stability except when the method was modified in order to enforce the positivity preservation, which destroys the spectral accuracy. In this paper we propose a new method to study the stability of homogeneous Boltzmann equations perturbed by smoothed balanced operators which do not preserve positivity of the distribution. This method takes advantage of the "spreading" property of the collision, together with estimates on regularity and entropy production. As an application we prove stability and convergence of spectral methods for the Boltzmann equation, when the discretization parameter is large enough (with explicit bound). © 2010 American Mathematical Society.
Gao, Kai
2015-06-05
The development of reliable methods for upscaling fine-scale models of elastic media has long been an important topic for rock physics and applied seismology. Several effective medium theories have been developed to provide elastic parameters for materials such as finely layered media or randomly oriented or aligned fractures. In such cases, the analytic solutions for upscaled properties can be used for accurate prediction of wave propagation. However, such theories cannot be applied directly to homogenize elastic media with more complex, arbitrary spatial heterogeneity. Therefore, we have proposed a numerical homogenization algorithm based on multiscale finite-element methods for simulating elastic wave propagation in heterogeneous, anisotropic elastic media. Specifically, our method used multiscale basis functions obtained from a local linear elasticity problem with appropriately defined boundary conditions. Homogenized, effective medium parameters were then computed using these basis functions, and the approach applied a numerical discretization that was similar to the rotated staggered-grid finite-difference scheme. Comparisons of the results from our method and from conventional, analytical approaches for finely layered media showed that the homogenization reliably estimated elastic parameters for this simple geometry. Additional tests examined anisotropic models with arbitrary spatial heterogeneity in which the average size of the heterogeneities ranged from several centimeters to several meters, and the ratio between the dominant wavelength and the average size of the arbitrary heterogeneities ranged from 10 to 100. Comparisons to finite-difference simulations proved that the numerical homogenization was equally accurate for these complex cases.
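Among the analytic effective-medium results that such a numerical homogenization can be checked against for finely layered media is Backus averaging. A minimal sketch (thickness-weighted averages of isotropic Lamé parameters into the five constants of the effective VTI medium; the layer values are illustrative):

```python
# Backus (1962) averaging: upscaling of thin isotropic layers into an
# effective transversely isotropic (VTI) medium.
def backus(layers):
    """layers: list of (thickness, lam, mu) with isotropic Lame parameters.
    Returns (C11, C13, C33, C44, C66) of the effective medium."""
    H = sum(h for h, _, _ in layers)
    def avg(f):  # thickness-weighted layer average
        return sum(h * f(lam, mu) for h, lam, mu in layers) / H
    inv_M = avg(lambda lam, mu: 1.0 / (lam + 2.0 * mu))       # <1/(lam+2mu)>
    lam_over_M = avg(lambda lam, mu: lam / (lam + 2.0 * mu))  # <lam/(lam+2mu)>
    C33 = 1.0 / inv_M
    C13 = lam_over_M / inv_M
    C11 = avg(lambda lam, mu: 4.0 * mu * (lam + mu) / (lam + 2.0 * mu)) \
          + lam_over_M ** 2 / inv_M
    C44 = 1.0 / avg(lambda lam, mu: 1.0 / mu)
    C66 = avg(lambda lam, mu: mu)
    return C11, C13, C33, C44, C66

# sanity check: identical layers must reproduce the isotropic constants
C11, C13, C33, C44, C66 = backus([(1.0, 2.0, 1.0), (3.0, 2.0, 1.0)])
```

For genuinely different layers the averages produce C11 != C33, i.e. the anisotropy that a numerical scheme must also recover.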
Developing a multi-physics solver in APOLLO3 and applications to cross section homogenization
International Nuclear Information System (INIS)
Dugan, Kevin-James
2016-01-01
Multi-physics coupling is attracting broad interest in the nuclear engineering and computational science fields. The ability to obtain accurate solutions to realistic models is important to the design and licensing of novel reactor designs, especially in design basis accident situations. The physical models involved in calculating accident behavior in nuclear reactors include neutron transport, thermal conduction/convection, thermo-mechanics in the fuel and support structure, and fuel stoichiometry, among others. This thesis, however, focuses on the coupling between two models: neutron transport and thermal conduction/convection. The goal of this thesis is to develop a multi-physics solver for simulating accidents in nuclear reactors. The focus is both on the simulation environment and on the data treatment used in such simulations. This work discusses the development of a multi-physics framework based on the Jacobian-Free Newton-Krylov (JFNK) method. The framework includes linear and nonlinear solvers, along with interfaces, through the computation of residuals, to existing numerical codes that solve the neutron transport and thermal hydraulics models (APOLLO3 and MCTH, respectively). A new formulation for the neutron transport residual is explored, which reduces the solution size and search space by a large factor: instead of being based on the angular flux, the residual is based on the fission source. The question of whether using a fundamental-mode distribution of the neutron flux for cross section homogenization is sufficiently accurate during fast transients is also explored. It is shown that in an infinite homogeneous medium, homogenized cross sections produced with a fundamental-mode flux differ significantly from a reference solution. The error is remedied by using an alternative weighting flux taken from a time-dependent calculation: either a time-integrated flux or an asymptotic solution. The time-integrated flux comes from the multi-physics solution of the
A simple method to evaluate linac beam homogeneity
International Nuclear Information System (INIS)
Monti, A.F.; Ostinelli, A.; Gelosa, S.; Frigerio, M.
1995-01-01
Quality Control (QC) tests in radiotherapy are a basic requirement for assessing treatment unit performance and treatment quality. Since they are generally time consuming, it is worthwhile to introduce procedures and methods which can be carried out more easily and quickly. Since 1994, the Radiotherapy Department of S. Anna Hospital has employed a commercially available solid phantom (PRECITRON) with a 10-diode array to investigate beam homogeneity (symmetry and flatness). In particular, global symmetry percentage indexes were defined which consider pairs of corresponding points along each axis (x and y) and compare the readings of the respective diodes, following the formula: I_gs = 200 * ((X_d + X_-d) - (Y_d + Y_-d)) / ((X_d + X_-d) + (Y_d + Y_-d)), where X_d and X_-d are readings at points equally spaced 8 or 10 cm from the beam centre along the x axis, and likewise Y_d and Y_-d along the y axis. Even if it does not meet international protocol requirements in full, this parameter gives important information about beam homogeneity when only a few measurement points are available in a plane, and it can be determined daily, thus fulfilling the aim of immediately highlighting any situation liable to compromise treatment accuracy and effectiveness. In this poster we report the results concerning this parameter for a linear accelerator (Varian Clinac 1800) from September 1994 to September 1995
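The global symmetry index is straightforward to compute from four diode readings. A minimal sketch (function name and readings are hypothetical):

```python
def global_symmetry_index(x_pos, x_neg, y_pos, y_neg):
    """Global symmetry percentage index from paired diode readings.

    I_gs = 200 * ((X_d + X_-d) - (Y_d + Y_-d)) / ((X_d + X_-d) + (Y_d + Y_-d))
    A perfectly symmetric beam gives 0; the sign tells which axis reads high.
    """
    sx = x_pos + x_neg  # sum of the two x-axis diodes
    sy = y_pos + y_neg  # sum of the two y-axis diodes
    return 200.0 * (sx - sy) / (sx + sy)

# Identical readings on all four diodes -> index of 0 (ideal homogeneity).
ideal = global_symmetry_index(50.0, 50.0, 50.0, 50.0)
```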
Cluster-cell calculation using the method of generalized homogenization
International Nuclear Information System (INIS)
Laletin, N.I.; Boyarinov, V.F.
1988-01-01
The generalized homogenization method (GHM), used for solving the neutron transport equation, was applied to calculating the neutron distribution in a cluster cell containing a series of cylindrical cells with coaxial cylindrical zones. Single-group calculations of the technological channel of an RBMK reactor cell were performed using GHM. The technological channel was understood to be the reactor channel comprising the zirconium rod, the water or steam-water mixture, the uranium dioxide fuel element, and the zirconium tube, together with the adjacent graphite layer. Calculations were performed for channels with no internal sources and unit incoming current at the external boundary, as well as for channels with internal sources and zero current at the external boundary. The PRAKTINETs program was used to calculate the symmetric neutron distributions in the microcell and in channels with homogenized annular zones. The ORAR-TsM program was used to calculate the antisymmetric distribution in the microcell. The accuracy of the calculations was compared for the two channel versions
Continuous energy Monte Carlo method based homogenization multi-group constants calculation
International Nuclear Information System (INIS)
Li Mancang; Wang Kan; Yao Dong
2012-01-01
The efficiency of the standard two-step reactor physics calculation relies on the accuracy of the multi-group constants from the assembly-level homogenization process. In contrast to traditional deterministic methods, generating the homogenized cross sections via the Monte Carlo method overcomes the difficulties in geometry and treats energy as a continuum, thus providing more accurate parameters. Besides, the same code and data bank can be used for a wide range of applications, making Monte Carlo codes versatile tools for homogenization. As the first stage in realizing Monte Carlo based lattice homogenization, the track length scheme, which is straightforward, is used as the foundation of cross section generation. The scattering matrix and Legendre components, however, require special techniques; the Scattering Event method was proposed to solve this problem. There are no continuous-energy counterparts in the Monte Carlo calculation for neutron diffusion coefficients, so P1 cross sections were used to calculate the diffusion coefficients for diffusion reactor simulator codes. BN theory is applied to take the leakage effect into account when an infinite lattice of identical symmetric motives is assumed. The MCMC code was developed and applied in four assembly configurations to assess its accuracy and applicability. At the core level, a PWR prototype core is examined. The results show that the Monte Carlo based multi-group constants behave well on average. The method could be applied to nuclear reactor cores with complicated configurations to gain higher accuracy. (authors)
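The track-length scheme tallies flux-weighted reaction rates along particle tracks and divides by the track-length flux estimate, group by group. A toy sketch of that collapse, not the MCMC code itself (names and sample values are illustrative):

```python
def collapse_xs(tracks, sigma, group_edges):
    """Track-length estimator of homogenized group constants.

    tracks: iterable of (energy, track_length) samples from a transport run.
    sigma:  continuous-energy cross section, callable sigma(E).
    Returns Sigma_g = sum(l * sigma(E)) / sum(l) over tracks with E in group g.
    """
    n_groups = len(group_edges) - 1
    num = [0.0] * n_groups  # reaction-rate tally per group
    den = [0.0] * n_groups  # track-length (flux) tally per group
    for energy, length in tracks:
        for g in range(n_groups):
            if group_edges[g] <= energy < group_edges[g + 1]:
                num[g] += length * sigma(energy)
                den[g] += length
                break
    return [n / d if d > 0.0 else 0.0 for n, d in zip(num, den)]

# Three hypothetical tracks collapsed onto two groups with sigma(E) = 4E.
xs = collapse_xs([(0.5, 1.0), (0.25, 3.0), (1.5, 2.0)],
                 lambda e: 4.0 * e, [0.0, 1.0, 2.0])
```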
Study on critical effect in lattice homogenization via Monte Carlo method
International Nuclear Information System (INIS)
Li Mancang; Wang Kan; Yao Dong
2012-01-01
In contrast to traditional deterministic lattice codes, generating the homogenized multigroup constants via the Monte Carlo method overcomes the difficulties in geometry and treats energy as a continuum, thus providing more accurate parameters. An infinite lattice of identical symmetric motives is usually assumed when performing the homogenization. However, a real reactor is finite in size, and this should influence the lattice calculation. In practice, when homogenizing with the Monte Carlo method, BN theory is applied to take the leakage effect into account. The fundamental mode with buckling B is used as a measure of the finite size. The critical spectrum obtained from the solution of the 0-dimensional fine-group B1 equations is used to correct the weighting spectrum for homogenization. A PWR prototype core is examined to verify that the presented method indeed generates few-group constants effectively. In addition, a zero-power physics experiment verification is performed. The results show that BN theory is adequate for leakage correction in generating multigroup constants via the Monte Carlo method. (authors)
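The leakage correction can be illustrated with a one-group analogue of the BN search: find the buckling at which the leakage-corrected multiplication factor equals unity. This is a hedged, one-group sketch only; the paper solves fine-group B1 equations:

```python
def k_eff(k_inf, migration_area, b2):
    """One-group leakage-corrected multiplication factor:
    k_eff = k_inf / (1 + M^2 * B^2)."""
    return k_inf / (1.0 + migration_area * b2)

def critical_buckling(k_inf, migration_area):
    """Buckling B^2 at which k_eff = 1 (the one-group critical-spectrum
    condition); in one group this has a closed form."""
    return (k_inf - 1.0) / migration_area

# Hypothetical lattice with k_inf = 1.2 and M^2 = 50 cm^2: the critical
# buckling brings k_eff back to exactly 1.
b2_crit = critical_buckling(1.2, 50.0)
```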
Iterative and variational homogenization methods for filled elastomers
Goudarzi, Taha
Elastomeric composites have increasingly proved invaluable in commercial technological applications due to their unique mechanical properties, especially their ability to undergo large reversible deformation in response to a variety of stimuli (e.g., mechanical forces, electric and magnetic fields, changes in temperature). Modern advances in organic materials science have revealed that elastomeric composites also hold tremendous potential to enable new high-end technologies, especially as the next generation of sensors and actuators, characterized by their low cost together with their biocompatibility and processability into arbitrary shapes. This potential calls for an in-depth investigation of the macroscopic mechanical/physical behavior of elastomeric composites directly in terms of their microscopic behavior, with the objective of creating the knowledge base needed to guide their bottom-up design. The purpose of this thesis is to generate a mathematical framework to describe, explain, and predict the macroscopic nonlinear elastic behavior of filled elastomers, arguably the most prominent class of elastomeric composites, directly in terms of the behavior of their constituents --- i.e., the elastomeric matrix and the filler particles --- and their microstructure --- i.e., the content, size, shape, and spatial distribution of the filler particles. This is accomplished via a combination of novel iterative and variational homogenization techniques capable of accounting for interphasial phenomena and finite deformations. Exact and approximate analytical solutions for the fundamental nonlinear elastic response of dilute suspensions of rigid spherical particles (either firmly bonded or bonded through finite-size interphases) in Gaussian rubber are first generated. These results are in turn utilized to construct approximate solutions for the nonlinear elastic response of non-Gaussian elastomers filled with a random distribution of rigid particles (again, either firmly
Vignolles, M L; Lopez, C; Madec, M N; Ehrhardt, J J; Méjean, S; Schuck, P; Jeantet, R
2009-01-01
Changes in fat properties were studied before, during, and after the drying process (including during storage) to determine the consequences on powder physical properties. Several methods were combined to characterize changes in fat structure and thermal properties as well as the physical properties of powders. Emulsion droplet size and droplet aggregation depended on the homogenizing pressures and were also affected by spray atomization. Aggregation was usually greater after spray atomization, resulting in greater viscosities. These processes did not have the same consequences on the stability of fat in the powders. The quantification of free fat is a pertinent indicator of fat instability in the powders. Confocal laser scanning microscopy permitted the characterization of the structure of fat in situ in the powders. Powders from unhomogenized emulsions showed greater free fat content. Surface fat was always overrepresented, regardless of the composition and process parameters. Differential scanning calorimetry melting experiments showed that fat was partially crystallized in situ in the powders stored at 20 degrees C, and that it was unstable on a molecular scale. Thermal profiles were also related to the supramolecular structure of fat in the powder particle matrix. Powder physical properties depended on both composition and process conditions. The free fat content seemed to have a greater influence than surface fat on powder physical properties, except for wettability. This study clearly showed that an understanding of fat behavior is essential for controlling and improving the physical properties of fat-filled dairy powders and their overall quality.
Methods of experimental physics
Williams, Dudley
1962-01-01
Methods of Experimental Physics, Volume 3: Molecular Physics focuses on molecular theory, spectroscopy, resonance, molecular beams, and electric and thermodynamic properties. The manuscript first considers the origins of molecular theory, molecular physics, and molecular spectroscopy, as well as microwave spectroscopy, electronic spectra, and the Raman effect. The text then ponders diffraction methods of molecular structure determination and resonance studies. Topics include techniques of electron, neutron, and x-ray diffraction and nuclear magnetic, nuclear quadrupole, and electron spin resonance
Method of the characteristics for calculation of VVER without homogenization
Energy Technology Data Exchange (ETDEWEB)
Suslov, I.R.; Komlev, O.G.; Novikova, N.N.; Zemskov, E.A.; Tormyshev, I.V.; Melnikov, K.G.; Sidorov, E.B. [Institute of Physics and Power Engineering, Obninsk (Russian Federation)
2005-07-01
The first stage of the development of the characteristics code MCCG3D for calculation of VVER-type reactors without homogenization is presented. A parallel version of the code using MPI was developed and tested on a PC cluster running LINUX. Further development of the MCCG3D code for design-level calculations with full-scale space-distributed feedbacks is discussed. For validation of the MCCG3D code we use the critical assembly VENUS-2. Geometrical models with and without homogenization have been used. With both models the MCCG3D results agree well with the experimental power distribution and with results generated by other codes, but the model without homogenization provides better results. The perturbation theory for the MCCG3D code is developed and implemented in the module KEFSFGG. The calculations with KEFSFGG are in good agreement with direct calculations. (authors)
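The basic operation of a characteristics code such as MCCG3D is the flat-source transport sweep along each track segment. A minimal single-segment sketch (this is the generic method-of-characteristics step, not MCCG3D itself; names and values are illustrative):

```python
import math

def moc_segment(psi_in, sigma_t, q, length):
    """Flat-source characteristic sweep over one segment:

    psi_out = psi_in * exp(-sigma_t * L) + (q / sigma_t) * (1 - exp(-sigma_t * L))

    psi_in:  incoming angular flux, sigma_t: total cross section,
    q: flat source in the region, length: segment chord length.
    """
    att = math.exp(-sigma_t * length)
    return psi_in * att + (q / sigma_t) * (1.0 - att)

# If the incoming flux already equals the saturation value q/sigma_t,
# the segment leaves it unchanged (transport equilibrium).
psi_eq = moc_segment(2.0, 1.0, 2.0, 5.0)
```

A full solver chains such segments over all tracks and angles and iterates on the scattering source; the point here is only the per-segment attenuation/build-up balance.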
Directory of Open Access Journals (Sweden)
Dieisson Pivoto
2016-04-01
Full Text Available ABSTRACT: The study aimed to (i) quantify the measurement uncertainty in the physical tests of rice and beans for a hypothetical defect, (ii) verify whether homogenization and sample reduction in the physical classification tests of rice and beans are effective in reducing the measurement uncertainty of the process, and (iii) determine whether increasing the size of the bean sample significantly increases accuracy and reduces measurement uncertainty. Hypothetical defects in rice and beans with different damage levels were simulated according to the testing methodology determined by the Normative Ruling for each product. The homogenization and sample reduction in the physical classification of rice and beans are not effective, transferring a high measurement uncertainty to the final test result. The sample size indicated by the Normative Ruling did not allow appropriate homogenization and should be increased.
Some variance reduction methods for numerical stochastic homogenization.
Blanc, X; Le Bris, C; Legoll, F
2016-04-28
We give an overview of a series of recent studies devoted to variance reduction techniques for numerical stochastic homogenization. Numerical homogenization requires that a set of problems is solved at the microscale, the so-called corrector problems. In a random environment, these problems are stochastic and therefore need to be repeatedly solved, for several configurations of the medium considered. An empirical average over all configurations is then performed using the Monte Carlo approach, so as to approximate the effective coefficients necessary to determine the macroscopic behaviour. Variance severely affects the accuracy and the cost of such computations. Variance reduction approaches, borrowed from other contexts in the engineering sciences, can be useful. Some of these variance reduction techniques are presented, studied and tested here. © 2016 The Author(s).
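One of the simplest variance reduction techniques in this family is the use of antithetic variables: each random configuration is paired with its "mirror" so that fluctuations partially cancel in the empirical average. A generic sketch of the estimator, not tied to any specific corrector problem (function names are illustrative):

```python
import random

def plain_mc(f, n, rng):
    """Crude Monte Carlo mean of f over n independent uniform draws."""
    return sum(f(rng.random()) for _ in range(n)) / n

def antithetic_mc(f, n, rng):
    """Antithetic-variable estimator: pair each draw u with 1 - u.

    The pair average 0.5 * (f(u) + f(1 - u)) has lower variance than a
    single draw whenever f is monotone, at the same sampling cost.
    """
    pairs = n // 2
    return sum(0.5 * (f(u) + f(1.0 - u))
               for u in (rng.random() for _ in range(pairs))) / pairs

# For the monotone test function f(u) = u, every antithetic pair averages
# to exactly 0.5, so the variance of the estimator is zero.
est = antithetic_mc(lambda u: u, 1000, random.Random(0))
```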
International Nuclear Information System (INIS)
Shin, Y.W.; Wiedermann, A.H.
1979-10-01
A solution method is presented for transient, homogeneous, equilibrium, two-phase flows of a single-component fluid in one space dimension. The method combines a direct finite-difference procedure and the method of characteristics. The finite-difference procedure solves the interior points of the computing domain; the boundary information is provided by a separate procedure based on the characteristics theory. The solution procedure for boundary points requires information in addition to the physical boundary conditions. This additional information is obtained by a new procedure involving integration of characteristics in the hodograph plane. Sample problems involving various combinations of basic boundary types are calculated for two-phase water/steam mixtures and single-phase nitrogen gas, and compared with independent method-of-characteristics solutions using very fine characteristic mesh. In all cases, excellent agreement is demonstrated
Cell homogenization methods for pin-by-pin core calculations tested in slab geometry
International Nuclear Information System (INIS)
Yamamoto, Akio; Kitamura, Yasunori; Yamane, Yoshihiro
2004-01-01
In this paper, the performance of spatial homogenization methods for fuel and non-fuel cells is compared in slab geometry in order to facilitate pin-by-pin core calculations. Since the spatial homogenization methods were mainly developed for fuel assemblies, no systematic study of their performance for cell-level homogenization had been carried out. The importance of cell-level homogenization is increasing, since pin-by-pin mesh core calculation in actual three-dimensional geometry, a less approximate approach than the current advanced nodal methods, is becoming feasible. Four homogenization methods were investigated in this paper: flux-volume weighting, the generalized equivalence theory, the superhomogenization (SPH) method, and the nonlinear iteration method. The last, the nonlinear iteration method, was tested as a homogenization method for the first time. The calculations were carried out in simplified colorset assembly configurations of a PWR, simulated by slab geometries, and homogenization performance was evaluated through comparison with reference cell-heterogeneous calculations. The calculation results revealed that the generalized equivalence theory showed the best performance. Though the nonlinear iteration method can significantly reduce the homogenization error, its performance was not as good as that of the generalized equivalence theory. Through comparison of the results obtained by the generalized equivalence theory and the superhomogenization method, an important byproduct was obtained: a deficiency of the current superhomogenization method, which could be remedied by incorporating a 'cell-level discontinuity factor between assemblies', was clarified
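The first of the four methods, flux-volume weighting, is the simplest to state: the homogenized cross section is the flux- and volume-weighted average over the heterogeneous regions. A minimal sketch (sample values are illustrative):

```python
def flux_volume_homogenize(sigma, flux, volume):
    """Flux-volume weighted homogenized cross section:

    Sigma_hom = sum(Sigma_i * phi_i * V_i) / sum(phi_i * V_i)

    This preserves the total reaction rate of the heterogeneous cell
    under the heterogeneous flux, but not interface currents (which is
    what equivalence theory and SPH factors are introduced to fix).
    """
    num = sum(s * p * v for s, p, v in zip(sigma, flux, volume))
    den = sum(p * v for p, v in zip(flux, volume))
    return num / den

# With a flat flux the result reduces to the plain volume average.
flat = flux_volume_homogenize([1.0, 3.0], [1.0, 1.0], [1.0, 1.0])
```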
Energy Technology Data Exchange (ETDEWEB)
Cadilhac, M [Commissariat a l' Energie Atomique, Saclay (France). Centre d' Etudes Nucleaires
1963-11-15
After a general survey of the theory of neutron thermalization in homogeneous media, a simplified model is introduced, through a proper formulation, generalizing both the Horowitz model (generalized heavy free gas approximation) and the proton gas model. When this model is used, the calculation of spectra reduces to the solution of linear second-order differential equations. Since it depends on two arbitrary functions, the model gives a good approximation of any usual moderator for reactor physics purposes. The choice of these functions is discussed from a theoretical point of view; a method based on consideration of the first two moments of the scattering law is investigated. Finally, the possibility of discriminating between models by using experimental information is considered. (author)
An homogeneization method applied to the seismic analysis of LMFBR cores
International Nuclear Information System (INIS)
Brochard, D.; Hammami, L.
1991-01-01
Important structures such as nuclear reactor cores and steam generator bundles are schematically composed of a great number of beams immersed in a fluid. Fluid-structure interaction is an important phenomenon influencing the dynamical response of the bundle. Studying this interaction with classical methods would require a refined model at the scale of the individual beams and lead to problems of prohibitive size. The homogeneization method constitutes an alternative approach if we are mainly interested in the global behaviour of the bundle. Similar approaches have already been used for other types of industrial structures (Sanchez-Palencia 1980, Bergman et al. 1985, Theodory 1984, Benner et al. 1981). This method consists in replacing the physically heterogeneous medium by a homogeneous medium whose characteristics are determined from the resolution of a set of problems on the elementary cell. In the first part of this paper the main assumptions of the method are summarized. Moreover, other important phenomena may contribute to the dynamical behaviour of the industrial structures mentioned above: impacts between the beams. These impacts could be due to supports limiting the displacements of the beams or to differences in the vibratory characteristics of the various beams. The second part of the paper concerns the way impacts are taken into account in the linear homogeneous formalism. Finally, an application to the seismic analysis of the FBR core mock-up RAPSODIE is presented
Geostatistical Analysis Methods for Estimation of Environmental Data Homogeneity
Directory of Open Access Journals (Sweden)
Aleksandr Danilov
2018-01-01
Full Text Available The study considers a methodology for assessing the spatial homogeneity of ecosystems, with the possibility of subsequent zoning of territories by degree of environmental disturbance. The degree of pollution of a water body was reconstructed on the basis of hydrochemical monitoring data and information on the level of technogenic load in one year. As a result, the zones of greatest environmental stress were isolated, and the correctness of zoning using geostatistical analysis techniques was demonstrated. The computational algorithms were implemented in the object-oriented language C#. The resulting software application allows a quick assessment of the scale and spatial localization of pollution during the initial analysis of an environmental situation.
Statistical homogeneity tests applied to large data sets from high energy physics experiments
Trusina, J.; Franc, J.; Kůs, V.
2017-12-01
Homogeneity tests are used in high energy physics for the verification of simulated Monte Carlo samples, i.e., to check whether they have the same distribution as measured data from a particle detector. The Kolmogorov-Smirnov, χ², and Anderson-Darling tests are the most widely used techniques to assess the samples' homogeneity. Since MC generators produce plenty of entries from different models, each entry has to be re-weighted to obtain the same sample size as the measured data. One approach to homogeneity testing is through binning. If we do not want to lose any information, we can apply generalized tests based on weighted empirical distribution functions. In this paper, we propose such generalized weighted homogeneity tests and introduce some of their asymptotic properties. We present results based on a numerical analysis which focuses on estimation of the type-I error and the power of the test. Finally, we present an application of our homogeneity tests to data from the DØ experiment at Fermilab.
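A generalized test of this kind replaces the empirical distribution functions in the Kolmogorov-Smirnov statistic with weighted ones. A brute-force sketch of the weighted statistic only (not the authors' implementation; critical values and asymptotics are a separate matter):

```python
def weighted_ks(x, wx, y, wy):
    """Sup-distance between two weighted empirical distribution functions.

    Each sample point contributes its weight (e.g. an MC event weight)
    instead of 1/n; the weighted EDF at t is the normalized weight of
    points <= t. O(n^2) on purpose, for clarity.
    """
    tx, ty = sum(wx), sum(wy)
    d = 0.0
    for t in sorted(set(x) | set(y)):
        fx = sum(w for v, w in zip(x, wx) if v <= t) / tx
        fy = sum(w for v, w in zip(y, wy) if v <= t) / ty
        d = max(d, abs(fx - fy))
    return d

# Identical unit-weight samples -> statistic 0; disjoint samples -> 1.
d_same = weighted_ks([1.0, 2.0], [1.0, 1.0], [1.0, 2.0], [1.0, 1.0])
```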
Methods of statistical physics
Akhiezer, Aleksandr I
1981-01-01
Methods of Statistical Physics is an exposition of the tools of statistical mechanics, which evaluates the kinetic equations of classical and quantized systems. The book also analyzes the equations of macroscopic physics, such as the equations of hydrodynamics for normal and superfluid liquids and macroscopic electrodynamics. The text gives particular attention to the study of quantum systems. This study begins with a discussion of problems of quantum statistics with a detailed description of the basics of quantum mechanics along with the theory of measurement. An analysis of the asymptotic behavior
Nuclear physics mathematical methods
International Nuclear Information System (INIS)
Balian, R.; Gervois, A.; Giannoni, M.J.; Levesque, D.; Maille, M.
1984-01-01
The mathematical methods of nuclear physics, applied to collective motion theory, to the reduction of the degrees of freedom, and to order and disorder phenomena, are investigated. Within the scope of the study, the following aspects are discussed: the entropy of an ensemble of collective variables; the interpretation of dissipation using information theory; chaos and universality; the Monte Carlo method applied to classical statistical mechanics and quantum mechanics; the finite element method; and classical ergodicity [fr
Directory of Open Access Journals (Sweden)
Zhenzhong Pan
2015-01-01
Full Text Available A nanosuspension of 5% lambda-cyhalothrin with 0.2% surfactants was prepared by the melt emulsification-high pressure homogenization method. The surfactant composition, content, and homogenization process were optimized. An anionic surfactant (1-dodecanesulfonic acid sodium salt) and a polymeric surfactant (maleic rosin-polyoxypropylene-polyoxyethylene ether sulfonate), screened from 12 types of commonly used commercial surfactants, were used to prepare a lambda-cyhalothrin nanosuspension with high dispersity and stability. The mean particle size and polydispersity index of the nanosuspension were 16.01 ± 0.11 nm and 0.266 ± 0.002, respectively. The high zeta potential value of −41.7 ± 1.3 mV and the stable crystalline state of the nanoparticles indicated excellent physical and chemical stability. The method could be widely used for preparing nanosuspensions of various pesticides with melting points below the boiling point of water. This formulation may avoid the use of organic solvents, reduce surfactant use, and is promising for improving bioavailability and reducing residual pesticide pollution in agricultural products and the environment.
Modelling of the dynamical behaviour of LWR internals by homogeneization methods
International Nuclear Information System (INIS)
Brochard, D.; Lepareux, M.; Gibert, R.J.; Delaigue, D.; Planchard, J.
1987-01-01
The upper plenum of the internals of a PWR, the steam generator bundle, and the nuclear reactor core may be schematically represented by a beam bundle immersed in a fluid. The dynamical study of such a system needs to take fluid-structure interaction into account. A refined model at the scale of the tubes can be used, but it leads to a problem of very large size, difficult to solve even on the biggest computers. The homogeneization method provides an approximation of the fluid-structure interaction for the global behaviour of the bundle. It consists in replacing the heterogeneous physical medium (tubes and fluid) by an equivalent homogeneous medium whose characteristics are determined from the resolution of a set of problems on the elementary cell. The aim of this paper is to present the main steps of the determination of this equivalent medium in the case of small displacements (acoustic behaviour of the fluid), using displacement variables for both fluid and tubes. Some details about the implementation of this method in computer codes are then given. (orig.)
Malinsky, Michelle Duval; Jacoby, Cliffton B; Reagen, William K
2011-01-10
We report herein a simple protein precipitation extraction-liquid chromatography tandem mass spectrometry (LC/MS/MS) method, validation, and application for the analysis of perfluorinated carboxylic acids (C7-C12), perfluorinated sulfonic acids (C4, C6, and C8), and perfluorooctane sulfonamide (FOSA) in fish fillet tissue. The method combines a rapid homogenization and protein precipitation tissue extraction procedure using stable-isotope internal standard (IS) calibration. Method validation in bluegill (Lepomis macrochirus) fillet tissue evaluated the following: (1) method accuracy and precision in both extracted matrix-matched calibration and solvent (unextracted) calibration, (2) quantitation of mixed branched and linear isomers of perfluorooctanoate (PFOA) and perfluorooctanesulfonate (PFOS) with linear isomer calibration, (3) quantitation of low level (ppb) perfluorinated compounds (PFCs) in the presence of high level (ppm) PFOS, and (4) specificity from matrix interferences. Both calibration techniques produced method accuracy of at least 100±13% with a precision (%RSD) ≤18% for all target analytes. Method accuracy and precision results for fillet samples from nine different fish species taken from the Mississippi River in 2008 and 2009 are also presented. Copyright © 2010 Elsevier B.V. All rights reserved.
Feldenkrais, Moshé
1981-01-01
Moshe Feldenkrais is known from the textbooks as a collaborator of Joliot-Curie, Langevin, and Kowarski who participated in the first nuclear fission experiments. During the war he went to Great Britain and worked on the development of submarine detection devices. From experimental physics, following a suggestion of Lew Kowarski, he finally turned his interest to neurophysiology and neuropsychology. He studied the cybernetic organisation between human body dynamics and the mind. He developed the methods known as "Functional Integration" and "Awareness Through Movement". They have been applied with surprising results to post-traumatic rehabilitation, psychotherapy, re-education of the mentally or physically handicapped, and improvement of performance in sports, and can be used by anybody who wants to discover their natural grace of movement.
Fluid structure interaction in LMFBR cores modelling by an homogenization method
International Nuclear Information System (INIS)
Brochard, D.
1988-01-01
The upper plenum of the internals of a PWR, the steam generator bundle, and the nuclear reactor core may be schematically represented as a beam bundle immersed in a fluid. The dynamical study of such a system needs to take fluid-structure interaction into account. A refined model at the scale of the tubes can be used, but it leads to a problem that is very difficult to solve even on the largest computers. The homogenization method provides an approximation of the fluid-structure interaction for the global behaviour of the bundle. It consists of replacing the heterogeneous physical medium (tubes and fluid) by an equivalent homogeneous medium whose characteristics are determined from the resolution of a set of problems on the elementary cell. The aim of this paper is to present the main steps of the determination of this equivalent medium in the case of small displacements (acoustic behaviour of the fluid). An application to LMFBR core geometry is then presented, which shows the lowering effect of the fluid on the eigenfrequencies. Some comparisons with test results are also presented. 6 refs, 7 figs, 2 tabs
Homogenization of metamaterials: Parameters retrieval methods and intrinsic problems
DEFF Research Database (Denmark)
Andryieuski, Andrei; Malureanu, Radu; Lavrinenko, Andrei
2010-01-01
Metamaterials (MTMs) attract a lot of attention worldwide. Description of MTMs in terms of effective parameters is a simple and useful tool for characterisation of their electromagnetic properties, so a reliable method for restoring effective parameters is in demand. In this paper we report on...
Nguyen, Van-Dung; Wu, Ling; Noels, Ludovic
2017-03-01
This work provides a unified treatment of the arbitrary kinds of microscopic boundary conditions usually considered in the multi-scale computational homogenization method for nonlinear multi-physics problems. An efficient procedure is developed to enforce the multi-point linear constraints arising from the microscopic boundary condition, either by direct constraint elimination or by Lagrange multiplier elimination. The macroscopic tangent operators are computed in an efficient way from a linear system with multiple right-hand sides, whose left-hand-side matrix is the stiffness matrix of the microscopic linearized system at the converged solution. The number of right-hand-side vectors equals the number of macroscopic kinematic variables used to formulate the microscopic boundary condition. As the resolution of the microscopic linearized system often follows a direct factorization procedure, the macroscopic tangent operators can then be computed with this factorized matrix at a reduced computational cost.
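The "one factorization, many right-hand sides" idea described above can be sketched numerically: the matrix below stands in for the converged microscopic stiffness matrix and each column of the right-hand-side block for one macroscopic kinematic variable. The data are random stand-ins, not an actual micro-scale model.

```python
# Sketch: solving a multiple-right-hand-side linear system with a single
# factorization, as in the tangent-operator computation described above.
import numpy as np

rng = np.random.default_rng(0)
n, n_macro = 50, 6                     # system size, macroscopic variables
A = rng.standard_normal((n, n))
K = A @ A.T + n * np.eye(n)            # SPD "stiffness" matrix (stand-in)
B = rng.standard_normal((n, n_macro))  # one RHS column per macro variable

# LAPACK factorizes K once and back-substitutes for all columns of B,
# which is much cheaper than n_macro independent solves.
X = np.linalg.solve(K, B)
print(X.shape, bool(np.allclose(K @ X, B)))
```

The same pattern applies with an explicit LU or Cholesky factorization retained from the last Newton iteration of the microscopic problem.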
The colour analysis method applied to homogeneous rocks
Directory of Open Access Journals (Sweden)
Halász Amadé
2015-12-01
Computer-aided colour analysis can facilitate cyclostratigraphic studies. Here we report on a case study involving the development of a digital colour analysis method for examination of the Boda Claystone Formation, the formation in Hungary most suitable for the disposal of high-level radioactive waste. Rock type colours are reddish brown or brownish red, or any shade between brown and red. The method presented here can be used to differentiate similar colours and to identify gradual transitions between them; the latter are of great importance in a cyclostratigraphic analysis of the succession. Geophysical well-logging has demonstrated the existence of characteristic cyclic units, as detected by colour and natural gamma. Based on our research, colour, natural gamma and lithology correlate well. For core Ib-4, these features reveal the presence of orderly cycles with thicknesses of roughly 0.64 to 13 metres. Once the core has been scanned, this is a time- and cost-effective method.
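The detection of cyclic units from a scanned colour log can be illustrated with a Fourier spectrum of a colour channel sampled along depth. The depth series below is synthetic (a 2.5 m cycle plus noise), not core Ib-4 data, and the single-peak detection is a simplified stand-in for a full cyclostratigraphic analysis.

```python
# Sketch: recovering the dominant cycle thickness from a colour-versus-
# depth log via the FFT. The "redness" series is synthetic.
import numpy as np

dz = 0.01                          # sample spacing along the core (m)
depth = np.arange(6000) * dz       # 60 m of scanned core
cycle = 2.5                        # imposed cycle thickness (m)
rng = np.random.default_rng(1)
redness = np.sin(2 * np.pi * depth / cycle) \
          + 0.2 * rng.standard_normal(depth.size)

spec = np.abs(np.fft.rfft(redness - redness.mean()))
freq = np.fft.rfftfreq(depth.size, d=dz)   # cycles per metre
dominant = 1.0 / freq[np.argmax(spec)]     # dominant cycle thickness (m)
print(round(float(dominant), 2))           # 2.5
```

In practice the spectrum would be inspected for several peaks in the reported 0.64-13 m band rather than a single maximum.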
Method to study the effect of blend flowability on the homogeneity of acetaminophen.
Llusá, Marcos; Pingali, Kalyana; Muzzio, Fernando J
2013-02-01
Excipient selection is key to product development because it affects the processability and physical properties of formulations, which ultimately affect the quality attributes of the pharmaceutical product. The objective was to study how the flowability of lubricated formulations affects acetaminophen (APAP) homogeneity. The formulations studied here contain one of two types of cellulose (Avicel 102 or Ceollus KG-802), one of three grades of Mallinckrodt APAP (fine, semi-fine, or micronized), lactose (Fast-Flo) and magnesium stearate. These components are mixed in a 300-liter bin blender. Blend flowability is assessed with the Gravitational Displacement Rheometer. APAP homogeneity is assessed with off-line NIR. Excluding blends dominated by segregation, there is a trend between APAP homogeneity and blend flow index. Blend flowability is affected by the type of microcrystalline cellulose and by the APAP grade. The preliminary results suggest that the methodology used in this paper is adequate for studying the effect of blend flow index on APAP homogeneity.
Lugovtsova, Y. D.; Soldatov, A. I.
2016-01-01
Three different methods for pile integrity testing are compared on a cylindrical homogeneous polyamide specimen: low-strain pile integrity testing, multichannel pile integrity testing, and testing with a shaker system. Since low-strain pile integrity testing is a well-established and standardized method, its results are used as a reference for the other two methods.
Pan, Zhenzhong; Cui, Bo; Zeng, Zhanghua; Feng, Lei; Liu, Guoqiang; Cui, Haixin; Pan, Hongyu
2015-01-01
The nanosuspension of 5% lambda-cyhalothrin with 0.2% surfactants was prepared by the melt emulsification-high pressure homogenization method. The surfactant composition, content, and homogenization process were optimized. An anionic surfactant (1-dodecanesulfonic acid sodium salt) and a polymeric surfactant (maleic rosin-polyoxypropylene-polyoxyethylene ether sulfonate), screened from 12 types of commonly used commercial surfactants, were used to prepare the lambda-cyhalothrin nanosuspension with ...
A Proposal on the Quantitative Homogeneity Analysis Method of SEM Images for Material Inspections
Energy Technology Data Exchange (ETDEWEB)
Kim, Song Hyun; Kim, Jong Woo; Shin, Chang Ho [Hanyang University, Seoul (Korea, Republic of); Choi, Jung-Hoon; Cho, In-Hak; Park, Hwan Seo [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)
2015-05-15
A scanning electron microscope (SEM) inspects the surface microstructure of materials. The SEM uses electron beams to image material surfaces at high magnification, so various chemical analyses can be performed from SEM images; it is therefore widely used for material inspection, chemical characterization, and biological analysis. In the nuclear criticality analysis field, the homogeneity of a compound material is an important parameter for its use in a nuclear system. In our previous study, SEM was applied to the homogeneity analysis of materials. In this study, a quantitative homogeneity analysis method for SEM images is proposed for material inspections. The method is based on a stochastic analysis of the grayscale information in the SEM images.
A Proposal on the Quantitative Homogeneity Analysis Method of SEM Images for Material Inspections
International Nuclear Information System (INIS)
Kim, Song Hyun; Kim, Jong Woo; Shin, Chang Ho; Choi, Jung-Hoon; Cho, In-Hak; Park, Hwan Seo
2015-01-01
A scanning electron microscope (SEM) inspects the surface microstructure of materials. The SEM uses electron beams to image material surfaces at high magnification, so various chemical analyses can be performed from SEM images; it is therefore widely used for material inspection, chemical characterization, and biological analysis. In the nuclear criticality analysis field, the homogeneity of a compound material is an important parameter for its use in a nuclear system. In our previous study, SEM was applied to the homogeneity analysis of materials. In this study, a quantitative homogeneity analysis method for SEM images is proposed for material inspections. The method is based on a stochastic analysis of the grayscale information in the SEM images.
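The idea of scoring homogeneity from the grayscale statistics of an image can be illustrated with a simple tile-wise index. The paper's actual stochastic method is not specified here, so the coefficient of variation of tile-mean grayscales below is a simplified stand-in, applied to a synthetic array rather than a real SEM image.

```python
# Sketch: a grayscale-statistics homogeneity index for a synthetic "SEM
# image". Lower values indicate a more homogeneous material.
import numpy as np

def homogeneity_index(img, tile=16):
    """Coefficient of variation of tile-mean grayscales."""
    h, w = img.shape
    means = np.asarray([img[i:i + tile, j:j + tile].mean()
                        for i in range(0, h - tile + 1, tile)
                        for j in range(0, w - tile + 1, tile)])
    return means.std() / means.mean()

rng = np.random.default_rng(2)
uniform = rng.normal(128.0, 5.0, (128, 128))   # well-mixed material
clumpy = uniform.copy()
clumpy[:64, :64] += 60.0                       # a segregated phase

print(homogeneity_index(uniform) < homogeneity_index(clumpy))  # True
```

A real pipeline would first normalize contrast and brightness between images so that the index compares materials rather than imaging conditions.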
International Nuclear Information System (INIS)
Mecozzi, M.; Cicero, A.M.
1995-01-01
In this paper a comparison method to verify the homogeneity/inhomogeneity of environmental monitoring data is described. The comparison method is based on the simultaneous application of three statistical tests: one-way ANOVA, Kruskal-Wallis and one-way IANOVA. Robust tests such as IANOVA and Kruskal-Wallis can be more efficient than the usual ANOVA methods because they are resistant to the presence of outliers and to divergences from the normal distribution of the data. The study shows that a result about the presence or absence of homogeneity in the data set is validated when it is confirmed by at least two of the tests.
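Two of the three tests named above can be sketched directly from their textbook statistics. The robust IANOVA variant is not available in common libraries and is omitted here; the groups of "monitoring data" are illustrative, and only the test statistics (not p-values) are computed.

```python
# Sketch: one-way ANOVA F statistic and Kruskal-Wallis H statistic on
# groups of (illustrative) monitoring data.

def anova_F(groups):
    """One-way ANOVA F statistic (between/within variance ratio)."""
    k = len(groups)
    N = sum(len(g) for g in groups)
    grand = sum(x for g in groups for x in g) / N
    means = [sum(g) / len(g) for g in groups]
    ssb = sum(len(g) * (m - grand) ** 2 for g, m in zip(groups, means))
    ssw = sum(sum((x - m) ** 2 for x in g) for g, m in zip(groups, means))
    return (ssb / (k - 1)) / (ssw / (N - k))

def kruskal_H(groups):
    """Kruskal-Wallis H statistic (ties averaged, no tie correction)."""
    pooled = sorted(x for g in groups for x in g)
    rank, i = {}, 0
    while i < len(pooled):
        j = i
        while j < len(pooled) and pooled[j] == pooled[i]:
            j += 1
        rank[pooled[i]] = (i + 1 + j) / 2   # average of 1-based ranks
        i = j
    N = len(pooled)
    return 12 / (N * (N + 1)) * sum(
        len(g) * (sum(rank[x] for x in g) / len(g)) ** 2 for g in groups
    ) - 3 * (N + 1)

homog = [[5.1, 5.4, 5.0], [5.2, 5.3, 5.1], [5.0, 5.2, 5.4]]
shifted = [[5.1, 5.4, 5.0], [5.2, 5.3, 5.1], [8.0, 8.3, 8.1]]

print(anova_F(homog) < anova_F(shifted))      # True
print(kruskal_H(homog) < kruskal_H(shifted))  # True
```

Both statistics are small for the homogeneous groups and large when one group diverges, which is the agreement-between-tests behaviour the comparison method exploits.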
Methods study of homogeneity and stability test from cerium oxide CRM candidate
International Nuclear Information System (INIS)
Samin; Susanna TS
2016-01-01
Methods for homogeneity and stability testing of a cerium oxide CRM candidate have been studied based on ISO 13528 and KAN DP.01.34. The purpose of this study was to select robust homogeneity and stability test methods for the preparation of a cerium oxide CRM. Ten subsamples of cerium oxide were prepared, with randomly selected analytes representing two compounds, namely CeO_2 and La_2O_3. In the 10 subsamples, the CeO_2 and La_2O_3 contents were analyzed in duplicate with the same analytical method, by the same analyst, and in the same laboratory. The analysis results were evaluated statistically according to ISO 13528 and KAN DP.01.34. According to ISO 13528, a cerium oxide sample is considered homogeneous if Ss ≤ 0.3 σ and stable if |Xr - Yr| ≤ 0.3 σ. In this study, the homogeneity test for CeO_2 gave Ss = 2.073 × 10⁻⁴, smaller than 0.3 σ (0.5476), and the stability test gave |Xr - Yr| = 0.225, which is < 0.3 σ. For La_2O_3, the homogeneity test gave Ss = 1.649 × 10⁻⁴, smaller than 0.3 σ (0.4865), and the stability test gave |Xr - Yr| = 0.2185, which is < 0.3 σ. Evaluated with the KAN method, the cerium oxide sample is likewise homogeneous, since Fcalc < Ftable, and stable, since |Xi - Xhm| < 0.3 × n IQR. Given that the homogeneity and stability test data of the CeO_2 CRM candidate processed with the statistical methods of ISO 13528 do not differ significantly from those processed with the statistical methods of KAN DP.01.34, and both meet the requirements of a homogeneous and stable material, the homogeneity and stability test methods based on ISO 13528 can be used to produce a cerium oxide CRM. (author)
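The ISO 13528-style check Ss ≤ 0.3 σ can be sketched for a duplicate design: the between-sample standard deviation is estimated from the spread of subsample means after subtracting the within-sample (repeatability) contribution. The duplicate assay values and σ below are illustrative assumptions, not the CeO_2/La_2O_3 data of the study.

```python
# Sketch: between-sample standard deviation ss from g subsamples
# analyzed in duplicate, compared against the 0.3*sigma criterion.
# All data are hypothetical.
import math

def between_sample_sd(duplicates):
    """ss from duplicate results (sample-mean variance minus half the
    within-sample variance; clipped at zero)."""
    g = len(duplicates)
    means = [(a + b) / 2 for a, b in duplicates]
    grand = sum(means) / g
    sx2 = sum((m - grand) ** 2 for m in means) / (g - 1)
    sw2 = sum((a - b) ** 2 for a, b in duplicates) / (2 * g)
    return math.sqrt(max(sx2 - sw2 / 2, 0.0))

# Ten subsamples, duplicate assay results (hypothetical, wt%)
dups = [(99.1, 99.2), (99.0, 99.1), (99.2, 99.1), (99.1, 99.1),
        (99.0, 99.0), (99.2, 99.2), (99.1, 99.0), (99.1, 99.2),
        (99.0, 99.1), (99.2, 99.1)]
sigma = 0.55  # standard deviation for assessment (assumed)
ss = between_sample_sd(dups)
print(ss <= 0.3 * sigma)  # True: sample passes the homogeneity check
```

The stability check |Xr - Yr| ≤ 0.3 σ is the same comparison applied to the difference between mean results before and after the stability period.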
Homogeneous SiGe crystal growth in microgravity by the travelling liquidus-zone method
International Nuclear Information System (INIS)
Kinoshita, K; Arai, Y; Inatomi, Y; Sakata, K; Takayanagi, M; Yoda, S; Miyata, H; Tanaka, R; Sone, T; Yoshikawa, J; Kihara, T; Shibayama, H; Kubota, Y; Shimaoka, T; Warashina, Y
2011-01-01
Homogeneous SiGe crystal growth experiments will be performed on board the ISS 'Kibo' using a gradient heating furnace (GHF). A new crystal growth method invented for growing homogeneous mixed crystals, named the 'travelling liquidus-zone (TLZ) method', is evaluated by the growth of Si_0.5Ge_0.5 crystals in space. We have already succeeded in growing homogeneous 2 mm diameter Si_0.5Ge_0.5 crystals on the ground, but large-diameter homogeneous crystals are difficult to grow due to convection in the melt. In microgravity, larger-diameter crystals can be grown while convection is suppressed. Radial as well as axial concentration profiles in the microgravity-grown crystals will be measured and compared with our two-dimensional TLZ growth model equation, and the compositional variation will be analyzed. The results will be beneficial for growing large-diameter mixed crystals by the TLZ method on the ground. Here, we report on the principle of the TLZ method for homogeneous crystal growth, the results of preparatory experiments on the ground, and the plan for the microgravity experiments.
Multigrid Finite Element Method in Calculation of 3D Homogeneous and Composite Solids
Directory of Open Access Journals (Sweden)
A.D. Matveev
2016-12-01
In the present paper, a method of multigrid finite elements to calculate elastic three-dimensional homogeneous and composite solids under static loading is suggested. The method has been developed based on finite element method algorithms using homogeneous and composite three-dimensional multigrid finite elements (MFE). The procedures for construction of MFE of both rectangular parallelepiped and complex shapes are shown. The advantages of MFE are that, following the rules of the microapproach, they take into account the heterogeneous and microhomogeneous structures of the bodies, describe the three-dimensional stress-strain state (without any simplifying hypotheses) in homogeneous and composite solids, and generate discrete models of small dimension and numerical solutions of high accuracy.
International Nuclear Information System (INIS)
Dorning, J.J.
1991-01-01
A simultaneous pin lattice cell and fuel bundle homogenization theory has been developed for use with nodal diffusion calculations of practical reactors. The theoretical development of the homogenization theory, which is based on multiple-scales asymptotic expansion methods carried out through fourth order in a small parameter, starts from the transport equation and systematically yields: a cell-homogenized bundle diffusion equation with self-consistent expressions for the cell-homogenized cross sections and diffusion tensor elements; and a bundle-homogenized global reactor diffusion equation with self-consistent expressions for the bundle-homogenized cross sections and diffusion tensor elements. The continuity of the angular flux at cell and bundle interfaces also systematically yields jump conditions for the scalar flux, or so-called flux discontinuity factors, on the cell and bundle interfaces in terms of the two adjacent cell or bundle eigenfunctions. The expressions required for the reconstruction of the angular flux, or the 'de-homogenization' theory, were obtained as an integral part of the development; hence the leading-order transport theory angular flux is easily reconstructed throughout the reactor, including the interiors of the fuel bundles or computational nodes and the interiors of the pin lattice cells. The theoretical development shows that the exact transport theory angular flux, obtained to first order from the whole-reactor nodal diffusion calculations done using the homogenized nuclear data and discontinuity factors, is a product of three computed quantities: a ''cell shape function''; a ''bundle shape function''; and a ''global shape function''. 10 refs
Chou, Ching-Yu; Ferrage, Fabien; Aubert, Guy; Sakellariou, Dimitris
2015-07-17
Standard Magnetic Resonance magnets produce a single homogeneous field volume, where the analysis is performed. Nonetheless, several modern applications could benefit from the generation of multiple homogeneous field volumes along the axis and inside the bore of the magnet. In this communication, we propose a straightforward method using a combination of ring structures of permanent magnets in order to cancel the gradient of the stray field in a series of distinct volumes. These concepts were demonstrated numerically on an experimentally measured magnetic field profile. We discuss advantages and limitations of our method and present the key steps required for an experimental validation.
Homogenization of Periodic Masonry Using Self-Consistent Scheme and Finite Element Method
Kumar, Nitin; Lambadi, Harish; Pandey, Manoj; Rajagopal, Amirtham
2016-01-01
Masonry is a heterogeneous anisotropic continuum, made up of the brick and mortar arranged in a periodic manner. Obtaining the effective elastic stiffness of the masonry structures has been a challenging task. In this study, the homogenization theory for periodic media is implemented in a very generic manner to derive the anisotropic global behavior of the masonry, through rigorous application of the homogenization theory in one step and through a full three-dimensional behavior. We have considered the periodic Eshelby self-consistent method and the finite element method. Two representative unit cells that represent the microstructure of the masonry wall exactly are considered for calibration and numerical application of the theory.
Homogenized parameters of light water fuel elements computed by a perturbative (perturbation) method
International Nuclear Information System (INIS)
Koide, Maria da Conceicao Michiyo
2000-01-01
A new analytic formulation for material parameters homogenization of the two dimensional and two energy-groups diffusion model has been successfully used as a fast computational tool for recovering the detailed group fluxes in full reactor cores. The homogenization method which has been proposed does not require the solution of the diffusion problem by a numerical method. As it is generally recognized that currents at assembly boundaries must be computed accurately, a simple numerical procedure designed to improve the values of currents obtained by nodal calculations is also presented. (author)
Frequency-dependent homogenized properties of composite using spectral analysis method
International Nuclear Information System (INIS)
Ben Amor, M; Ben Ghozlen, M H; Lanceleur, P
2010-01-01
An inverse procedure is proposed to determine the material constants of multilayered composites using a spectral analysis homogenization method. A recursive process gives the interfacial displacement perpendicular to the layers as a function of depth. A fast Fourier transform (FFT) procedure is used to extract the wavenumbers propagating in the multilayer. The upper frequency bound of the homogenization domain is estimated. Inside the homogenization domain, at most three plane waves can propagate in the medium. A consistent algorithm is adopted to develop an inverse procedure for the determination of the material constants of a multidirectional composite. The extracted wavenumbers are used as the inputs for the procedure; the outputs are the elastic constants of the multidirectional composite. Using this method, the frequency-dependent effective elastic constants are obtained, and an example for [0/90] composites is given.
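The FFT extraction step described above can be sketched on a synthetic displacement profile containing two plane-wave components: the strongest spectral lines recover the wavenumber content that would feed the inverse procedure. The sampling and the two spatial frequencies are illustrative assumptions, chosen to fall on exact FFT bins.

```python
# Sketch: extracting the spatial frequencies present in an interfacial
# displacement-versus-depth profile with an FFT. The profile is
# synthetic, not a computed laminate response.
import numpy as np

N = 4096
dz = 1.0 / N                    # 1 m of profile sampled at N points
z = np.arange(N) * dz
f1, f2 = 80.0, 190.0            # spatial frequencies (cycles per metre)
u = np.cos(2 * np.pi * f1 * z) + 0.5 * np.cos(2 * np.pi * f2 * z)

spec = np.abs(np.fft.rfft(u))
freqs = np.fft.rfftfreq(N, d=dz)
# the two strongest spectral lines recover the imposed wave content
top = np.sort(freqs[np.argsort(spec)[-2:]])
print(float(top[0]), float(top[1]))  # 80.0 190.0
```

In the actual method the recovered wavenumbers at each frequency (up to three propagating waves in the homogenization domain) would be inverted for the elastic constants.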
Multivariate analysis methods in physics
International Nuclear Information System (INIS)
Wolter, M.
2007-01-01
A review of multivariate methods based on statistical training is given. Several multivariate methods useful in high-energy physics analysis are discussed, with selected examples from current research in particle physics, both from on-line trigger selection and from off-line analysis. Statistical training methods are also presented and some new applications are suggested.
Methods of experimental physics
Pergament, M I
2014-01-01
Introduction; Indirect Data and Inverse Problems; Experiment and Stochasticity of the Physical World; General Properties of Measuring-Recording Systems; Linear Measuring-Recording Systems; Transfer Function and Convolution Equation; Transfer Ratio, Amplitude-Frequency and Phase-Frequency Characteristics, and Relation Between Input and Output Signals in Fourier Space; Some Consequences; Discretization; Communication Theory Approach; Determination of the Measuring-Recording System Parameters; Studying Pulse Processes...
Homogeneity of Inorganic Glasses
DEFF Research Database (Denmark)
Jensen, Martin; Zhang, L.; Keding, Ralf
2011-01-01
Homogeneity of glasses is a key factor determining their physical and chemical properties and overall quality. However, quantification of the homogeneity of a variety of glasses is still a challenge for glass scientists and technologists. Here, we show a simple approach by which the homogeneity of different glass products can be quantified and ranked. This approach is based on determination of both the optical intensity and the dimension of the striations in glasses. These two characteristic values are obtained using the image processing method established recently. The logarithmic ratio between...
International Nuclear Information System (INIS)
Sigrist, Jean-Francois; Laine, Christian; Broc, Daniel
2006-01-01
The present paper presents a homogenization method developed in order to perform the seismic analysis of a nuclear reactor with the internal structures modelled and fluid-structure interaction effects taken into account. The numerical resolution of fluid-structure interactions has made tremendous progress over the past decades, and applications of the various developed techniques in the industrial field can be found in the literature. As a builder of nuclear naval propulsion reactors (ground prototype reactors or reactors embarked on submarines), DCN Propulsion has been working with the French atomic energy commission (CEA) for several years in order to integrate fluid-structure analysis in the design stage of current projects. In previous papers, modal and seismic analyses of a nuclear reactor with fluid-structure interaction effects were presented. Those studies highlighted the importance of fluid-structure coupling phenomena in the industrial case and focussed on added mass and added stiffness effects. The numerical model used in the previous studies did not take into account the presence of internal structures within the pressure vessel. The present study aims at improving the numerical model of the nuclear reactor to take the internal structures into account. As the internal structures are periodic within the inner and outer structures of the pressure vessel, the proposed model is based on the development of a homogenization method: the presence of the internal structure and its effect on the physical fluid-structure interaction is taken into account, although the structures are not geometrically modelled. The basic theory of the proposed homogenization method is recalled, leading to the modification of the fluid-structure coupling operator in the finite element model. The physical consistency of the method is proved by an evaluation of the system mass with the various mass operators (structure, fluid and fluid-structure operators). The method is presented and validated in a 2D case.
International Nuclear Information System (INIS)
Jeong, Yang Su; Oh, Byeong Seong
2010-05-01
This book introduces measurement and error, statistics of experimental data, populations, sample variables, distribution functions, propagation of error, means and the measurement of error, fitting to a rectilinear equation, common sense about error, experimental method, and records and statements. It also explains the importance of error estimation, systematic error, random error, treatment of a single variable, significant figures, deviation, mean value, median, mode, sample mean, sample standard deviation, the binomial distribution, the Gauss distribution, and the method of least squares.
Sokołowski, Damian; Kamiński, Marcin
2018-01-01
This study proposes a framework for determination of the basic probabilistic characteristics of the orthotropic homogenized elastic properties of a periodic composite reinforced with ellipsoidal particles, with a high stiffness contrast between the reinforcement and the matrix. The homogenization problem, solved by the Iterative Stochastic Finite Element Method (ISFEM), is implemented according to the stochastic perturbation, Monte Carlo simulation and semi-analytical techniques with the use of a cubic Representative Volume Element (RVE) of this composite containing a single particle. The given input Gaussian random variable is the Young modulus of the matrix, while the 3D homogenization scheme is based on numerical determination of the strain energy of the RVE under uniform unit stretches carried out in the FEM system ABAQUS. A series of deterministic solutions with varying matrix Young modulus serves for the Weighted Least Squares Method (WLSM) recovery of polynomial response functions, finally used in the stochastic Taylor expansions inherent to the ISFEM. A numerical example consists of High Density Polyurethane (HDPU) reinforced with Carbon Black particles. It is numerically investigated (1) whether the resulting homogenized characteristics are also Gaussian and (2) how the uncertainty in the matrix Young modulus affects the effective stiffness tensor components and their probability density functions (PDF).
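The response-function-plus-perturbation workflow described above can be sketched with a simple stand-in for the FEM solver: a Voigt rule of mixtures plays the role of the "deterministic solution", a polynomial is fitted to a series of such solutions, and second-order perturbation formulas give the output statistics. The volume fraction, moduli and 10% coefficient of variation are illustrative assumptions.

```python
# Sketch: polynomial response function recovered from a series of
# deterministic solutions, then second-order perturbation estimates of
# the homogenized-modulus statistics. All inputs are illustrative.
import numpy as np

v_f, E_f = 0.15, 80.0            # particle volume fraction, modulus (GPa)

def homogenized_E(E_m):
    """Stand-in 'deterministic solution' (Voigt average, GPa)."""
    return v_f * E_f + (1.0 - v_f) * E_m

mu, cov = 4.0, 0.10              # matrix Young modulus: mean (GPa), CoV
sigma = cov * mu

# 1) a series of deterministic solutions around the mean input value
E_m_grid = np.linspace(0.8 * mu, 1.2 * mu, 9)
responses = np.array([homogenized_E(E) for E in E_m_grid])

# 2) least-squares polynomial response function (cf. the WLSM recovery)
p = np.poly1d(np.polyfit(E_m_grid, responses, deg=2))
dp, ddp = p.deriv(1), p.deriv(2)

# 3) second-order perturbation estimates of the output statistics
mean_out = p(mu) + 0.5 * ddp(mu) * sigma ** 2
std_out = abs(dp(mu)) * sigma
print(round(float(mean_out), 3), round(float(std_out), 3))
```

Because the stand-in response is linear, the output here is exactly Gaussian; with a genuine FEM response function the same machinery quantifies how far the homogenized stiffness departs from Gaussianity.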
Probabilistic methods for physics
International Nuclear Information System (INIS)
Cirier, G
2013-01-01
We present an asymptotic method giving a probability of presence of the iterated spots of R^d under a polynomial function f. We use the well-known Perron-Frobenius (PF) operator, which leaves certain sets and measures invariant under f. Probabilistic solutions can exist for the deterministic iteration. While the theoretical result is already known, here we quantify these probabilities. This approach seems interesting for computing situations where the deterministic methods fail. Among the examined applications are asymptotic solutions of the Lorenz, Navier-Stokes and Hamilton equations. In this approach, linearity induces many difficult problems, not all of which we have yet resolved.
A homogenization method for ductile-brittle composite laminates at large deformations
DEFF Research Database (Denmark)
Poulios, Konstantinos; Niordson, Christian Frithiof
2018-01-01
This paper presents a high-fidelity homogenization method for periodically layered composite structures that accounts for plasticity in the matrix material and quasi-brittle damage in the reinforcing layers, combined with strong geometrical nonlinearities. A set of deliberately chosen internal... -elastic behavior in the reinforcement as well as for the bending stiffness of the reinforcement layers. In addition to previously proposed models, the present method includes Lemaitre-type damage for the reinforcement, making it applicable to a wider range of engineering applications. The capability of the proposed method in representing the combined effect of plasticity, damage and buckling at the microlevel within a homogenized setting is demonstrated by means of direct comparisons to a reference discrete model.
Evolving self-assembly in autonomous homogeneous robots: experiments with two physical robots.
Ampatzis, Christos; Tuci, Elio; Trianni, Vito; Christensen, Anders Lyhne; Dorigo, Marco
2009-01-01
This research work illustrates an approach to the design of controllers for self-assembling robots in which the self-assembly is initiated and regulated by perceptual cues that are brought forth by the physical robots through their dynamical interactions. More specifically, we present a homogeneous control system that can achieve assembly between two modules (two fully autonomous robots) of a mobile self-reconfigurable system without a priori introduced behavioral or morphological heterogeneities. The controllers are dynamic neural networks evolved in simulation that directly control all the actuators of the two robots. The neurocontrollers cause the dynamic specialization of the robots by allocating roles between them based solely on their interaction. We show that the best evolved controller proves to be successful when tested on a real hardware platform, the swarm-bot. The performance achieved is similar to the one achieved by existing modular or behavior-based approaches, also due to the effect of an emergent recovery mechanism that was neither explicitly rewarded by the fitness function, nor observed during the evolutionary simulation. Our results suggest that direct access to the orientations or intentions of the other agents is not a necessary condition for robot coordination: Our robots coordinate without direct or explicit communication, contrary to what is assumed by most research works in collective robotics. This work also contributes to strengthening the evidence that evolutionary robotics is a design methodology that can tackle real-world tasks demanding fine sensory-motor coordination.
Directory of Open Access Journals (Sweden)
Abbas Jafarizad
2017-08-01
Background: Mitoxantrone (MXT) is a drug for cancer therapy and a pharmaceutical hazardous to the environment, which must be removed from contaminated waste streams. In this work, the removal of MXT by the electro-Fenton process over heterogeneous and homogeneous catalysts is reported. Methods: The effects of the operational conditions (reaction medium pH, catalyst concentration and applied current intensity) were studied. The electrodes used were carbon cloth (CC) without any processing (homogeneous process), graphene oxide (GO)-coated carbon cloth (GO/CC) (homogeneous process) and Fe3O4@GO nanocomposite-coated carbon cloth (Fe3O4@GO/CC) (heterogeneous process). The characteristic properties of the electrodes were determined by atomic force microscopy (AFM), field emission scanning electron microscopy (FE-SEM) and cathode polarization. MXT concentrations were determined using an ultraviolet-visible (UV-Vis) spectrophotometer. Results: In the homogeneous reaction, a high concentration of the Fe catalyst (>0.2 mM) decreased the MXT degradation rate. The results showed that the Fe3O4@GO/CC electrode had the largest contact surface. The optimum operational conditions were pH 3.0 and a current intensity of 450 mA, which resulted in the highest removal efficiency (96.9%) over the Fe3O4@GO/CC electrode in the heterogeneous process, compared with the other two electrodes in the homogeneous process. The kinetics of the MXT degradation followed a pseudo-first-order reaction. Conclusion: The results confirmed the high potential of the developed method to purify wastewaters contaminated by MXT.
Kim, Sung-Hou [Moraga, CA; Kim, Rosalind [Moraga, CA; Jancarik, Jamila [Walnut Creek, CA
2012-01-31
An optimum solubility screen in which a panel of buffers and many additives are provided in order to obtain the most homogeneous and monodisperse protein condition for protein crystallization. The present methods are useful for proteins that aggregate and cannot be concentrated prior to setting up crystallization screens. A high-throughput method using the hanging-drop vapor-diffusion technique and a panel of twenty-four buffers is further provided. Using the present methods, 14 poorly behaving proteins were screened; 11 of them showed highly improved dynamic light scattering results, allowing the proteins to be concentrated, and 9 were crystallized.
Synthesis and Characterization of Anatase TiO_2 Powder using a Homogeneous Precipitation Method
International Nuclear Information System (INIS)
Choi, Soon Ok; Cho, Jee Hee; Lim, Sung Hwan; Chung, Eun Young
2011-01-01
This paper studies an experimental method that uses homogeneous precipitation to prepare mica flakes coated with anatase-type titania pearlescent pigment, with urea as the precipitant. The optimum technological parameters, the chemical composition, the microstructure, and the color properties of the resulting pigments are discussed. The coating principle of titania-coated mica with various coating thicknesses is analyzed by X-ray diffraction (XRD), scanning electron microscopy (SEM) and transmission electron microscopy (TEM), and tested by spectrophotometer analysis. Colored nanocrystalline pigments with different morphologies and coating thicknesses of 45-170 nm were prepared by homogeneous precipitation treatment of TiOSO4 (titanium oxysulfate) aqueous solutions. Characterization of the pigments shows that their pearlescent effects depend mainly on the mica size, the thickness of the metal oxide deposit, its chemical composition, and its crystal structure.
DEFF Research Database (Denmark)
Lupp, Daniel; Christensen, Niels Johan; Fristrup, Peter
2014-01-01
In this Perspective, we will focus on the use of both experimental and theoretical methods in the exploration of reaction mechanisms in homogeneous transition metal catalysis. We briefly introduce the use of Hammett studies and kinetic isotope effects (KIE). Both of these techniques can be complemented by computational chemistry, in particular in cases where interpretation of the experimental results is not straightforward. The good correspondence between experiment and theory is only possible due to recent advances within the applied theoretical framework. We therefore also highlight...
Jaws calibration method to get a homogeneous distribution of dose in the junction of hemi fields
International Nuclear Information System (INIS)
Cenizo de Castro, E.; Garcia Pareja, S.; Moreno Saiz, C.; Hernandez Rodriguez, R.; Bodineau Gil, C.; Martin-Viera Cueto, J. A.
2011-01-01
Hemi-field treatments are widely used in radiotherapy. Because the tolerance established for the positioning of each jaw is 1 mm, overlaps or separations of up to 2 mm can occur. This implies dose heterogeneities of up to 40% in the junction area. This paper presents an accurate method for calibrating the jaws so as to obtain homogeneous dose distributions when using this type of treatment. (Author)
Methods of modern mathematical physics
Reed, Michael
1980-01-01
This book is the first of a multivolume series devoted to an exposition of functional analysis methods in modern mathematical physics. It describes the fundamental principles of functional analysis and is essentially self-contained, although there are occasional references to later volumes. We have included a few applications when we thought that they would provide motivation for the reader. Later volumes describe various advanced topics in functional analysis and give numerous applications in classical physics, modern physics, and partial differential equations.
The extensions of space-time. Physics in the 8-dimensional homogeneous space D = SU(2,2)/K
International Nuclear Information System (INIS)
Barut, A.O.
1993-07-01
The Minkowski space-time is only a boundary of a bigger homogeneous space of the conformal group. The conformal group is the symmetry group of our most fundamental massless wave equations. These extended groups and spaces have many remarkable properties and physical implications. (author). 36 refs
Computational Methods in Plasma Physics
Jardin, Stephen
2010-01-01
Assuming no prior knowledge of plasma physics or numerical methods, Computational Methods in Plasma Physics covers the computational mathematics and techniques needed to simulate magnetically confined plasmas in modern magnetic fusion experiments and future magnetic fusion reactors. Largely self-contained, the text presents the basic concepts necessary for the numerical solution of partial differential equations. Along with discussing numerical stability and accuracy, the author explores many of the algorithms used today in enough depth so that readers can analyze their stability, efficiency,
An infrared small target detection method based on multiscale local homogeneity measure
Nie, Jinyan; Qu, Shaocheng; Wei, Yantao; Zhang, Liming; Deng, Lizhen
2018-05-01
Infrared (IR) small target detection plays an important role in the field of image detection owing to its intrinsic characteristics. This paper presents a multiscale local homogeneity measure (MLHM) for infrared small target detection, which can enhance the performance of an IR small target detection system. Firstly, the intra-patch homogeneity of the target itself and the inter-patch heterogeneity between the target and the local background regions are integrated to enhance the significance of the small target. Secondly, a multiscale measure based on local regions is proposed to obtain the most appropriate response. Finally, an adaptive threshold method is applied for small target segmentation. Experimental results on three different scenarios indicate that the MLHM performs well under the interference of strong noise.
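The three-step pipeline described above (patch contrast, multiscale maximum, adaptive threshold) can be illustrated with a deliberately simplified stand-in. This is not the authors' exact MLHM, just a patch-mean contrast evaluated at two scales with a mean-plus-3-sigma threshold:

```python
import numpy as np

def local_contrast(img, r):
    """Difference between each pixel's (2r+1)^2 patch mean and the mean of the
    surrounding ring, at a single scale r (simplified, not the exact MLHM)."""
    h, w = img.shape
    out = np.zeros_like(img, dtype=float)
    for i in range(2 * r, h - 2 * r):
        for j in range(2 * r, w - 2 * r):
            inner = img[i - r:i + r + 1, j - r:j + r + 1]
            outer = img[i - 2 * r:i + 2 * r + 1, j - 2 * r:j + 2 * r + 1]
            ring_mean = (outer.sum() - inner.sum()) / (outer.size - inner.size)
            out[i, j] = inner.mean() - ring_mean
    return out

def multiscale_response(img, scales=(1, 2)):
    """Maximum single-scale response, then a simple adaptive threshold."""
    resp = np.max([local_contrast(img, r) for r in scales], axis=0)
    thr = resp.mean() + 3.0 * resp.std()
    return resp, resp > thr

# A dim point 'target' on a flat background
img = np.zeros((21, 21))
img[10, 10] = 5.0
resp, mask = multiscale_response(img)
print(mask[10, 10])  # → True
```

Bright, compact spots score high (homogeneous inside, different from their ring), while flat background and large structures score near zero, which is the intuition the paper builds on.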
Monte Carlo Methods in Physics
International Nuclear Information System (INIS)
Santoso, B.
1997-01-01
The method of Monte Carlo integration is reviewed briefly and some of its applications in physics are explained. A numerical experiment on the random generators used in Monte Carlo techniques is carried out to show the randomness behavior of the various generation methods. To account for the weight function involved in the Monte Carlo calculation, the Metropolis method is used. The results of the experiment show no regular pattern in the generated numbers, indicating that the program generators are reasonably good, while the experimental results follow the expected statistical distribution law. Some applications of Monte Carlo methods in physics are then given. The physical problems are chosen such that the models have available solutions, either exact or approximate, against which the Monte Carlo calculations can be compared. The comparisons show that, for the models considered, good agreement is obtained.
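The plain Monte Carlo integration the abstract reviews can be demonstrated in a few lines: a uniform random generator estimates pi from the fraction of points falling inside the unit quarter-circle, and increasing n shows the characteristic 1/sqrt(n) error decay (a textbook illustration, not code from the review):

```python
import random

def mc_pi(n, seed=0):
    """Plain Monte Carlo estimate of pi: the fraction of uniform points in the
    unit quarter-circle, times 4. Statistical error shrinks like 1/sqrt(n)."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n)
               if rng.random() ** 2 + rng.random() ** 2 <= 1.0)
    return 4.0 * hits / n

for n in (100, 10_000, 200_000):
    print(n, mc_pi(n))
```

For integrands concentrated in a small region, uniform sampling like this wastes points; that is exactly where the Metropolis method mentioned above comes in, sampling according to the weight function instead.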
Directory of Open Access Journals (Sweden)
Bolin Lian
2017-01-01
The aim of this study was to prepare 10-hydroxycamptothecin nanocolloidal particles (HCPTNPs) to increase the solubility of the drug, reduce its toxicity, improve its stability, and so forth. HCPTNPs were prepared by the antisolvent precipitation (AP) method combined with high-pressure homogenization (HPH), followed by lyophilization. The main parameters of the antisolvent process, including the volume ratio of dimethyl sulfoxide (DMSO) to H2O and the dripping speed, were optimized, and their effects on the mean particle size (MPS) and yield of the HCPT primary particles were investigated. In the high-pressure homogenization procedure, the type of surfactant, amount of surfactant, and homogenization pressure (HP) were optimized, and their influences on the MPS, zeta potential (ZP), and morphology were analyzed. The optimum conditions for the HCPTNPs were as follows: 0.2 mg/mL HCPT aqueous suspension, 1% of ASS, 1000 bar of HP, and 20 passes. Finally, the HCPTNPs obtained via lyophilization using glucose as lyoprotectant under the optimum conditions had an MPS of 179.6 nm and a ZP of 28.79 ± 1.97 mV. The short-term stability of the HCPTNPs indicated that the MPS changed within a small range.
Statistical methods for physical science
Stanford, John L
1994-01-01
This volume of Methods of Experimental Physics provides an extensive introduction to probability and statistics in many areas of the physical sciences, with an emphasis on the emerging area of spatial statistics. The scope of topics covered is wide-ranging: the text discusses a variety of the most commonly used classical methods and addresses newer methods that are applicable or potentially important. The chapter authors motivate readers with their insightful discussions. Key features include an examination of basic probability, with coverage of standard distributions, time s...
Alayoubi, Alaadin; Abu-Fayyad, Ahmed; Rawas-Qalaji, Mutasem M; Sylvester, Paul W; Nazzal, Sami
2015-01-01
Recently there has been a growing interest in vitamin E for its potential use in cancer therapy. The objective of this work was therefore to formulate a physically stable parenteral lipid emulsion to deliver higher doses of vitamin E than commonly used in commercial products. Specifically, the objectives were to study the effects of homogenization pressure, number of homogenizing cycles, viscosity of the oil phase, and oil content on the physical stability of emulsions fortified with high doses of vitamin E (up to 20% by weight). This was done by the use of a 27-run, 4-factor, 3-level Box-Behnken statistical design. Viscosity, homogenization pressure, and number of cycles were found to have a significant effect on particle size, which ranged from 213 to 633 nm, and on the percentage of vitamin E remaining emulsified after storage, which ranged from 17 to 100%. Increasing oil content from 10 to 20% had insignificant effect on the responses. Based on the results it was concluded that stable vitamin E rich emulsions could be prepared by repeated homogenization at higher pressures and by lowering the viscosity of the oil phase, which could be adjusted by blending the viscous vitamin E with medium-chain triglycerides (MCT).
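The 27-run, 4-factor, 3-level Box-Behnken design used above has a simple construction: every pair of factors takes all ±1 combinations while the remaining factors sit at the center level, plus replicated center runs. A sketch (three center points are assumed here, which is what brings a 4-factor design to 27 runs):

```python
from itertools import combinations, product

def box_behnken(k, center_points=3):
    """Box-Behnken design for k factors: +/-1 on each factor pair with the
    other factors held at 0, plus replicated center runs."""
    runs = []
    for i, j in combinations(range(k), 2):
        for a, b in product((-1, 1), repeat=2):
            row = [0] * k
            row[i], row[j] = a, b
            runs.append(row)
    runs.extend([[0] * k for _ in range(center_points)])
    return runs

design = box_behnken(4)
print(len(design))  # → 27, matching the 27-run design described above
```

Because no run places all factors at their extremes simultaneously, the design avoids corner conditions (e.g. maximum pressure, cycles, viscosity and oil content at once) while still supporting a quadratic response-surface fit.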
Quinn, Matt; Smith, Lincoln; Mayley, Giles; Husbands, Phil
2003-10-15
We report on recent work in which we employed artificial evolution to design neural network controllers for small, homogeneous teams of mobile autonomous robots. The robots were evolved to perform a formation-movement task from random starting positions, equipped only with infrared sensors. The dual constraints of homogeneity and minimal sensors make this a non-trivial task. We describe the behaviour of a successful system in which robots adopt and maintain functionally distinct roles in order to achieve the task. We believe this to be the first example of the use of artificial evolution to design coordinated, cooperative behaviour for real robots.
Qualitative methods in theoretical physics
Maslov, Dmitrii
2018-01-01
This book comprises a set of tools which allow researchers and students to arrive at a qualitatively correct answer without undertaking lengthy calculations. In general, Qualitative Methods in Theoretical Physics is about combining approximate mathematical methods with fundamental principles of physics: conservation laws and symmetries. Readers will learn how to simplify problems, how to estimate results, and how to apply symmetry arguments and conduct dimensional analysis. A comprehensive problem set is included. The book will appeal to a wide range of students and researchers.
Synthesis of CuO-NiO core-shell nanoparticles by homogeneous precipitation method
International Nuclear Information System (INIS)
Bayal, Nisha; Jeevanandam, P.
2012-01-01
Highlights: ► CuO-NiO core-shell nanoparticles have been synthesized using a simple homogeneous precipitation method for the first time. ► The mechanism of formation of the core-shell nanoparticles has been investigated. ► The synthesis route may be extended to the synthesis of other mixed metal oxide core-shell nanoparticles. - Abstract: Core-shell CuO-NiO mixed metal oxide nanoparticles, in which CuO is the core and NiO is the shell, have been successfully synthesized using the homogeneous precipitation method. This simple synthetic method first produces a layered double hydroxide precursor with core-shell morphology, which on calcination at 350 °C yields the mixed metal oxide nanoparticles with retention of the core-shell morphology. The CuO-NiO mixed metal oxide precursor and the core-shell nanoparticles were characterized by powder X-ray diffraction, FT-IR spectroscopy, thermogravimetric analysis, elemental analysis, scanning electron microscopy, transmission electron microscopy, and diffuse reflectance spectroscopy. The chemical reactivity of the core-shell nanoparticles was tested using the catalytic reduction of 4-nitrophenol with NaBH4. The possible growth mechanism of the particles with core-shell morphology has also been investigated.
Geometric Methods in Physics XXXV
Odzijewicz, Anatol; Previato, Emma
2018-01-01
This book features a selection of articles based on the XXXV Białowieża Workshop on Geometric Methods in Physics, 2016. The series of Białowieża workshops, attended by a community of experts at the crossroads of mathematics and physics, is a major annual event in the field. The works in this book, based on presentations given at the workshop, are previously unpublished, at the cutting edge of current research, typically grounded in geometry and analysis, and with applications to classical and quantum physics. In 2016 the special session "Integrability and Geometry" in particular attracted pioneers and leading specialists in the field. Traditionally, the Białowieża Workshop is followed by a School on Geometry and Physics, for advanced graduate students and early-career researchers, and the book also includes extended abstracts of the lecture series.
International Nuclear Information System (INIS)
Ohmori, Shinji; Nose, Hideko
1985-01-01
Exposure of the dialyzed supernatant of bovine lens homogenate to ultraviolet (UV) light led to increases in its turbidity, pigmentation, and viscosity. These photochemically induced alterations of lens proteins were prevented by glutathione, cysteine and N-acetylcysteine, but not by ascorbic acid, S-(1,2-dicarboxyethyl)-glutathione or dulcitol. (author)
Gao, Kai; Chung, Eric T.; Gibson, Richard L.; Fu, Shubin; Efendiev, Yalchin R.
2015-01-01
The development of reliable methods for upscaling fine-scale models of elastic media has long been an important topic for rock physics and applied seismology. Several effective medium theories have been developed to provide elastic parameters
The necessity of improvement for the current LWR fuel assembly homogenization method
International Nuclear Information System (INIS)
Tang Chuntao; Huang Hao; Zhang Shaohong
2007-01-01
When the modern LWR core analysis method is used for core nuclear design and in-core fuel management calculations, accurately obtaining the fuel assembly homogenized parameters is a crucial issue. In this paper, taking the NEA C5G7-MOX benchmark problem as a severe test problem, which involves low-enriched uranium assemblies interspersed with MOX assemblies, we have re-examined the applicability of the two major assumptions of the modern equivalence theory for fuel assembly homogenization, i.e. the isolated-assembly spatial spectrum assumption and the condensed two-group representation assumption. Numerical results have demonstrated that for LWR cores with strong spectrum interaction, both of these assumptions are no longer applicable and improvement of the homogenization method is necessary: the current two-group representation should be improved to a multigroup representation, and the current reflective assembly boundary condition should be replaced by the 'real' assembly boundary condition. This is a research project supported by the National Natural Science Foundation of China (10605016). (authors)
A Method for Ferulic Acid Production from Rice Bran Oil Soapstock Using a Homogenous System
Directory of Open Access Journals (Sweden)
Hoa Thi Truong
2017-08-01
Ferulic acid (FA) is widely used as an antioxidant, e.g., as an ultraviolet (UV) protectant in cosmetics and in various medical applications. It has been produced by the hydrolysis of γ-oryzanol found in rice bran oil soapstock. In this study, the base-catalyzed homogeneous hydrolysis of γ-oryzanol was conducted using various ratios of potassium hydroxide (KOH) to γ-oryzanol, initial concentrations of γ-oryzanol in the reaction mixture, and ratios of ethanol (EtOH, as cosolvent) to the ethyl acetate (EtOAc) γ-oryzanol solution. Acceleration of the reaction using a planar-type ultrasound sonicator (78 and 130 kHz) at different reaction temperatures was explored. Using a heating method, an 80% yield of FA was attained at 75 °C in 4 h under homogeneous conditions (initial concentration of γ-oryzanol 12 mg/mL, KOH/γ-oryzanol ratio (wt/wt) 10/1, and EtOH/EtOAc ratio (v/v) 5/1). With the assistance of 78 and 130 kHz irradiation, the yields reached 90%. The heating method was applied to the γ-oryzanol-containing extract prepared from rice bran oil soapstock, from which a 74.3% yield of FA was obtained; however, 20% of the trans-FA in the reaction mixture was transformed into the cis-form within one month.
A method to improve the B0 homogeneity of the heart in vivo.
Jaffer, F A; Wen, H; Balaban, R S; Wolff, S D
1996-09-01
A homogeneous static (B0) magnetic field is required for many NMR experiments such as echo-planar imaging, localized spectroscopy, and spiral-scan imaging. Although semi-automated techniques have been described to improve B0 field homogeneity, none had been applied to the in vivo heart. The acquisition of cardiac field maps is complicated by motion, blood flow, and chemical-shift artifact from epicardial fat. To overcome these problems, an ungated three-dimensional (3D) chemical shift image (CSI) was collected to generate a time- and motion-averaged B0 field map. B0 heterogeneity in the heart was minimized by using a previous algorithm that solves for the optimal shim coil currents for an input field map, using up to third-order current-bounded shims (1). The method improved the B0 homogeneity of the heart in all 11 normal volunteers studied. After application of the algorithm to the unshimmed cardiac field maps, the standard deviation of the proton frequency decreased by 43%, the magnitude 1H spectral linewidth decreased by 24%, and the peak-to-peak gradient decreased by 35%. Simulations of the high-order (second- and third-order) shims in B0 field correction of the heart show that high-order shims are important, accounting for nearly half of the improvement in homogeneity for several subjects. The T2* of the left ventricular anterior wall before and after field correction was determined at 4.0 Tesla. Finally, the results show that cardiac shimming is of benefit in cardiac 31P NMR spectroscopy and cardiac echo-planar imaging.
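The core of the shimming step above, solving for shim coil currents from an input field map, reduces in its unconstrained form to linear least squares: each coil contributes a known field shape per unit current, and the currents are chosen to cancel the measured map. A toy 1-D sketch (the referenced algorithm additionally bounds the currents, which this omits; all field values are hypothetical):

```python
import numpy as np

def shim_currents(field_map, shim_maps):
    """Least-squares currents that cancel a measured off-resonance map.

    field_map : (n,) measured B0 offsets at n voxels
    shim_maps : (n, m) field produced per unit current by each of m shim coils
    Unconstrained sketch; a practical solver also bounds each current.
    """
    currents, *_ = np.linalg.lstsq(shim_maps, -field_map, rcond=None)
    residual = field_map + shim_maps @ currents
    return currents, residual

# Toy example: one linear and one quadratic shim term along a 1-D profile
z = np.linspace(-1.0, 1.0, 51)
shims = np.stack([z, z ** 2], axis=1)
b0 = 30.0 * z + 10.0 * z ** 2          # hypothetical inhomogeneity, in Hz
currents, residual = shim_currents(b0, shims)
print(currents)                         # → close to [-30, -10]
print(float(np.abs(residual).max()))    # → ~0 for this idealized map
```

In the real cardiac case the residual is nonzero because the anatomy-induced field is not exactly spanned by the shim shapes; the reported 43% drop in frequency standard deviation is the in vivo analogue of shrinking this residual.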
Nuclear methods in medical physics
International Nuclear Information System (INIS)
Jeraj, R.
2003-01-01
A common ground for both, reactor and medical physics is a demand for high accuracy of particle transport calculations. In reactor physics, safe operation of nuclear power plants has been asking for high accuracy of calculation methods. Similarly, dose calculation in radiation therapy for cancer has been requesting high accuracy of transport methods to ensure adequate dosimetry. Common to both problems has always been a compromise between achievable accuracy and available computer power leading into a variety of calculation methods developed over the decades. On the other hand, differences of subjects (nuclear reactor vs. humans) and radiation types (neutron/photon vs. photon/electron or ions) are calling for very field-specific approach. Nevertheless, it is not uncommon to see drift of researches from one field to another. Several examples from both fields will be given with the aim to compare the problems, indicating their similarities and discussing their differences. As examples of reactor physics applications, both deterministic and Monte Carlo calculations will be presented for flux distributions of the VENUS and TRIGA Mark II benchmark. These problems will be paralleled to medical physics applications in linear accelerator radiation field determination and dose distribution calculations. Applicability of the adjoint/forward transport will be discussed in the light of both transport problems. Boron neutron capture therapy (BNCT) as an example of the close collaboration between the fields will be presented. At last, several other examples from medical physics, which can and cannot find corresponding problems in reactor physics, will be discussed (e.g., beam optimisation in inverse treatment planning, imaging applications). (author)
A Homogeneous Time-Resolved Fluorescence Immunoassay Method for the Measurement of Compound W.
Huang, Biao; Yu, Huixin; Bao, Jiandong; Zhang, Manda; Green, William L; Wu, Sing-Yung
2018-01-01
Using compound W (a 3,3'-diiodothyronine sulfate [T2S] immuno-crossreactive material)-specific polyclonal antibodies and a homogeneous time-resolved fluorescence immunoassay technique (AlphaLISA), an indirect competitive compound W (ICW) quantitative detection method was established. Photosensitive particles (donor beads) coated with compound W or T2S and rabbit anti-W antibody were incubated with biotinylated goat anti-rabbit antibody; together with streptavidin-coated acceptor particles, this constitutes the detection system. We optimized the test conditions and evaluated the detection performance. The sensitivity of the method was 5 pg/mL, and the detection range was 5 to 10 000 pg/mL. The intra-assay coefficient of variation averages ... W levels were measured in extracts of maternal serum samples. This may have clinical application in screening for congenital hypothyroidism in utero.
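Competitive immunoassays of this kind are typically quantified through a four-parameter logistic (4PL) calibration curve, with unknown concentrations back-calculated from the measured signal. A generic sketch with made-up constants (the paper does not give its fitted parameters; `a`, `b`, `c`, `d` below are purely illustrative):

```python
def fourpl(x, a, b, c, d):
    """Four-parameter logistic: a = top signal, d = bottom, c = midpoint
    concentration (EC50), b = slope factor."""
    return d + (a - d) / (1.0 + (x / c) ** b)

def inverse_fourpl(y, a, b, c, d):
    """Back-calculate concentration from a measured signal on the 4PL curve."""
    return c * ((a - d) / (y - d) - 1.0) ** (1.0 / b)

# Hypothetical calibration constants, for illustration only
a, b, c, d = 100.0, 1.0, 500.0, 5.0    # signal units; c in pg/mL
signal = fourpl(250.0, a, b, c, d)
print(round(inverse_fourpl(signal, a, b, c, d), 1))  # → 250.0
```

In a competitive format the signal decreases with analyte, so zero analyte returns the top asymptote `a`; the quoted 5-10 000 pg/mL working range corresponds to the usable stretch of such a curve between its asymptotes.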
An analysis of the Rose's shim method for improvement of magnetic field homogeneity
International Nuclear Information System (INIS)
Ban, Etsuo
1981-01-01
The well-known Rose method has been applied to magnets requiring high homogeneity (e.g. for magnetic resonance). The analysis of the Rose shim is based on conformal mapping, and it is applicable to poles of any form obtained by a combination of polygons. It provides rims for magnetic poles with 90° edges. In this paper, the solution is determined by elliptic functions, giving the magnetic field at any point in space by direct integration via the Schwarz-Christoffel transformation, instead of the approximate numerical integration employed by Rose, and it is compared with an example applying it to a cylindrical pole. For the conditions of Rose's optimum correction, the exact solution is given as the case in which the parameter of Jacobi's elliptic function of the third kind equals half of the complete elliptic integral of the first kind. Since Rose depended on approximate numerical integration, his diagram showed slightly insufficient correction. It was found that a pole shape giving an excess correction of about 10^-4 produced a good result for a cylindrical magnetic pole with a pole-diameter-to-gap-length ratio of 2.5. In order to obtain a correction for which the change in homogeneity remains small up to considerably intense fields, the pole edges are required to be curved surfaces. (Wakatsuki, Y.)
Directory of Open Access Journals (Sweden)
Trujillo-Cayado, L. A.
2015-09-01
The goal of this work was to investigate the influence of the emulsification method on the rheological properties, droplet size distribution and physical stability of O/W green emulsions formulated with an eco-friendly surfactant derived from coconut oil. The methodology used can be applied to other emulsions. Polyoxyethylene glycerol esters are non-ionic surfactants obtained from a renewable source which fulfill the environmental and toxicological requirements for use as eco-friendly emulsifying agents. Likewise, N,N-dimethyloctanamide and α-pinene (the solvents used as the oil phase) can be considered green solvents. Emulsions with submicron mean diameters and slightly shear-thinning behavior were obtained regardless of the homogenizer, pressure or number of passes used. All emulsions exhibited destabilization by creaming and a further coalescence process, the latter applying to the coarse emulsion prepared with a rotor-stator homogenizer. The emulsion obtained with high-pressure homogenization at 15000 psi and one pass was the most stable.
Durner, Wolfgang; Huber, Magdalena; Yangxu, Li; Steins, Andi; Pertassek, Thomas; Göttlein, Axel; Iden, Sascha C.; von Unold, Georg
2017-04-01
The particle-size distribution (PSD) is one of the main properties of soils. To determine the proportions of the fine fractions, silt and clay, sedimentation experiments are used; most common are the Pipette and Hydrometer methods. Both need manual sampling at specific times, and both are thus time-demanding and rely on experienced operators. Durner et al. (Durner, W., S.C. Iden, and G. von Unold (2017): The integral suspension pressure method (ISP) for precise particle-size analysis by gravitational sedimentation, Water Resources Research, doi:10.1002/2016WR019830) recently developed the integral suspension pressure (ISP) method, which is implemented in the METER Group device PARIO™. This new method estimates continuous PSDs from sedimentation experiments by recording the temporal evolution of the suspension pressure at a certain measurement depth in a sedimentation cylinder. It requires no manual interaction after the start and thus no specialized training of the lab personnel. The aim of this study was to test the precision and accuracy of the new method with a variety of materials, to answer the following research questions: (1) Are the results obtained by PARIO reliable and stable? (2) Are the results affected by the initial mixing technique used to homogenize the suspension, or by the presence of sand in the experiment? (3) Are the results identical to those obtained with the Pipette method as the reference method? The experiments were performed with a pure quartz silt material and four real soil materials. PARIO measurements were done repetitively on the same samples in a temperature-controlled lab to characterize the repeatability of the measurements. Subsequently, the samples were investigated by the Pipette method to validate the results. We found that the statistical error in the silt fraction from replicate and repetitive measurements was in the range of 1% for the quartz material to 3% for the soil materials. Since the sand fractions, as in any sedimentation method, must ...
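All the sedimentation methods compared above (Pipette, Hydrometer, ISP) rest on Stokes' law, which links particle diameter to settling velocity and thereby fixes the sampling or measurement times. A sketch with standard constants for quartz-like particles in water at about 20 °C (the depth and diameter below are illustrative, not the study's settings):

```python
G = 9.81          # gravitational acceleration, m s^-2
MU = 1.002e-3     # dynamic viscosity of water at ~20 degC, Pa s
RHO_S = 2650.0    # typical mineral particle density, kg m^-3
RHO_F = 998.2     # density of water at ~20 degC, kg m^-3

def stokes_velocity(d):
    """Terminal settling velocity (m/s) of a sphere of diameter d (m)."""
    return G * (RHO_S - RHO_F) * d ** 2 / (18.0 * MU)

def settling_time(d, depth):
    """Time (s) for a particle of diameter d to settle through `depth` metres."""
    return depth / stokes_velocity(d)

# A 2 um particle (clay/silt boundary) settling past a 0.1 m measurement depth
t = settling_time(2e-6, 0.1)
print(t / 3600.0)  # settling time in hours
```

The quadratic dependence on diameter is why clay-sized particles take hours to clear the measurement depth while sand settles in seconds, and why a pressure transducer that integrates over the whole run, as in the ISP method, is attractive compared with catching discrete sampling times by hand.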
Hybrid design method for air-core solenoid with axial homogeneity
Energy Technology Data Exchange (ETDEWEB)
Huang, Li; Lee, Sang Jin [Uiduk University, Gyeongju (Korea, Republic of); Choi, Suk Jin [Institute for Basic Science, Daejeon (Korea, Republic of)
2016-03-15
In this paper, a hybrid method is proposed to design an air-core superconducting solenoid system providing a 6 T axially uniform magnetic field using niobium-titanium (NbTi) superconducting wire. In order to minimize the volume of conductor, a hybrid optimization method combining linear programming and nonlinear programming was adopted. The feasible space of the solenoid is divided into several grids, and the magnetic field at the target point is approximated by the sum of the magnetic fields generated by an ideal current loop at the center of each grid. Using linear programming, a globally optimal current distribution in the feasible space can be indicated by the non-zero current grids. Furthermore, the clusters of non-zero current grids also give information about probable solenoids in the feasible space, such as their number, shape, and so on. Applying these probable solenoids as the initial model, the final practical configuration of solenoids with integer numbers of layers can be obtained by nonlinear programming. The design result illustrates the efficiency and flexibility of the hybrid method. This method can also be used for magnet designs requiring homogeneity within several ppm (parts per million).
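The linear step described above, approximating the target field as a superposition of ideal current loops at grid centers, can be sketched with the on-axis field formula for a circular loop. Unconstrained least squares stands in here for the paper's linear programming, and every geometry and target value is invented for illustration:

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability, T m / A

def loop_axial_field(current, radius, dz):
    """On-axis field (T) of a circular current loop at axial distance dz."""
    return MU0 * current * radius ** 2 / (2.0 * (radius ** 2 + dz ** 2) ** 1.5)

# Candidate loops on a coarse grid (radius, axial position), all values invented
loops = [(0.10, z) for z in np.linspace(-0.2, 0.2, 9)]
targets = np.linspace(-0.05, 0.05, 11)        # axial target points, m

# Influence matrix: field per unit current of each loop at each target point
A = np.array([[loop_axial_field(1.0, R, zt - zl) for (R, zl) in loops]
              for zt in targets])

B_target = np.full_like(targets, 6.0)          # want 6 T over the target region
currents, *_ = np.linalg.lstsq(A, B_target, rcond=None)
ppm = (A @ currents - B_target) / 6.0 * 1e6
print(float(np.abs(ppm).max()))                # residual inhomogeneity, ppm
```

The paper's linear-programming formulation differs in that it minimizes conductor volume subject to field constraints (and yields the sparse, clustered current pattern used to seed the nonlinear stage), but the loop-superposition field model is the same.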
Physical acoustics principles and methods
Mason, Warren P
2012-01-01
Physical Acoustics: Principles and Methods, Volume IV, Part B: Applications to Quantum and Solid State Physics provides an introduction to the various applications of quantum mechanics to acoustics by describing several processes for which such considerations are essential. This book discusses the transmission of sound waves in molten metals. Comprised of seven chapters, this volume starts with an overview of the interactions that can happen between electrons and acoustic waves when magnetic fields are present. This text then describes acoustic and plasma waves in ionized gases wherein oscillations are subject to hydrodynamic as well as electromagnetic forces. Other chapters examine the resonances and relaxations that can take place in polymer systems. This book discusses as well the general theory of the interaction of a weak sinusoidal field with matter. The final chapter describes the sound velocities in the rocks composing the Earth. This book is a valuable resource for physicists and engineers.
Environment-based pin-power reconstruction method for homogeneous core calculations
International Nuclear Information System (INIS)
Leroyer, H.; Brosselard, C.; Girardi, E.
2012-01-01
Core calculation schemes are usually based on a classical two-step approach combining assembly and core calculations. During the first step, infinite-lattice assembly calculations relying on a fundamental-mode approach are used to generate cross-section libraries for PWR core calculations. This fundamental-mode hypothesis may be questioned when dealing with loading patterns involving several types of assemblies (UOX, MOX), burnable poisons, control rods and burn-up gradients. This paper proposes a calculation method able to take into account the heterogeneous environment of the assemblies when using homogeneous core calculations and an appropriate pin-power reconstruction. This methodology is applied to MOX assemblies computed within an environment of UOX assemblies. The new environment-based pin-power reconstruction is then used on various clusters of 3x3 assemblies showing burn-up gradients and UOX/MOX interfaces, and compared to reference calculations performed with APOLLO-2. The results show that UOX/MOX interfaces are calculated much better with the environment-based calculation scheme than with the usual pin-power reconstruction method. The power peak is always better located and calculated with the environment-based pin-power reconstruction method in every cluster configuration studied. This study shows that taking the environment into account in transport calculations can significantly improve the pin-power reconstruction insofar as it is consistent with the core loading pattern. (authors)
Physical acoustics principles and methods
Mason, Warren P
1964-01-01
Physical Acoustics: Principles and Methods, Volume 1, Part A focuses on high-frequency sound waves in gases, liquids, and solids, which have proven to be powerful tools for analyzing molecular, defect, domain wall, and other types of motion. The selection first tackles wave propagation in fluids and normal solids and guided wave propagation in elongated cylinders and plates. Discussions focus on fundamentals of continuum mechanics; small-amplitude waves in a linear viscoelastic medium; representation of oscillations and waves; and special effects associated with guided elastic waves in plates.
A New and Simple Method for Crosstalk Estimation in Homogeneous Trench-Assisted Multi-Core Fibers
DEFF Research Database (Denmark)
Ye, Feihong; Tu, Jiajing; Saitoh, Kunimasa
2014-01-01
A new and simple method for inter-core crosstalk estimation in homogeneous trench-assisted multi-core fibers is presented. The crosstalk calculated by this method agrees well with experimental measurement data for two kinds of fabricated 12-core fibers.
Theory of homogeneous condensation from small nuclei. I. Modified Mayer theory of physical clusters
International Nuclear Information System (INIS)
Lockett, A.M. III
1980-01-01
A theory of physical clusters is developed within the framework of the Theory of Imperfect Gases. Physical monomers and clusters are redefined diagrammatically, thereby removing the unphysical nature of the usual Mayer clusters while retaining essentially all of the desirable features of the Mayer theory. The resulting formulation is simple, unambiguous, and well suited for incorporation into a kinetic theory of condensation which is computationally tractable.
Directory of Open Access Journals (Sweden)
FERNANDO L. LOBO B. CARNEIRO
2000-12-01
Einstein, in 1911, published an article on the application of the principle of dimensional homogeneity to three problems of the physics of solids: the characteristic frequency of the atomic lattices of crystalline solids as a function of their moduli of compressibility or of their melting points, and the thermal conductivity of crystalline insulators. Recognizing that the physical dimensions of temperature are not the same as those of energy and heat, Einstein resorted to the artifice of replacing that physical parameter by its product with the Boltzmann constant, thereby obtaining correct results. Nowadays, however, with the new base quantities "thermodynamic temperature theta (unit: kelvin)", "electric current I (unit: ampere)" and "amount of substance (unit: mole)", incorporated into the SI International System of Units in 1960 and 1971, the same results are obtained in a more direct and coherent way. At the time of Einstein's article only three base physical quantities were considered: length L, mass M, and time T. He did not use the pi theorem of dimensional analysis, diffused by Buckingham three years later, and obtained the "pi numbers" by trial and error. The present paper revisits Einstein's article using the modern methodology of dimensional analysis and the theory of physical similitude.
Group theoretical methods in Physics
International Nuclear Information System (INIS)
Olmo, M.A. del; Santander, M.; Mateos Guilarte, J.M.
1993-01-01
The meeting comprised 102 papers, distributed across the following areas: Quantum groups; Integrable systems; Physical applications of group theory; Mathematical results; Geometry, topology and quantum field theory; Superphysics; Supermathematics; Atomic, molecular and condensed matter physics; Nuclear and particle physics; Symmetry and foundations of classical and quantum mechanics.
Estimation Methods for Non-Homogeneous Regression - Minimum CRPS vs Maximum Likelihood
Gebetsberger, Manuel; Messner, Jakob W.; Mayr, Georg J.; Zeileis, Achim
2017-04-01
Non-homogeneous regression models are widely used to statistically post-process numerical weather prediction models. Such regression models correct for errors in mean and variance and are capable of forecasting a full probability distribution. To estimate the corresponding regression coefficients, CRPS minimization has been performed in many meteorological post-processing studies over the last decade. In contrast to maximum likelihood estimation, CRPS minimization is claimed to yield better-calibrated forecasts. Theoretically, both scoring rules used as optimization scores should be able to locate a similar, unknown optimum; discrepancies might result from a wrong distributional assumption about the observed quantity. To address this theoretical question, this study compares maximum likelihood and minimum CRPS estimation for different distributional assumptions. First, a synthetic case study shows that, for an appropriate distributional assumption, both estimation methods yield similar regression coefficients, with the log-likelihood estimator being slightly more efficient. A real-world case study for surface temperature forecasts at different sites in Europe confirms these results but shows that surface temperature does not always follow the classical assumption of a Gaussian distribution. KEYWORDS: ensemble post-processing, maximum likelihood estimation, CRPS minimization, probabilistic temperature forecasting, distributional regression models
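A minimal sketch of the two estimators compared in this abstract: a constant Gaussian forecast distribution is fitted to synthetic data, once by maximum likelihood and once by minimizing the closed-form CRPS of a Gaussian forecast. The data and parameter values are hypothetical; the study's actual regression setup (predictors, sites, ensembles) is not reproduced.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(0)
y = rng.normal(2.0, 1.5, size=5000)  # synthetic "observations", correctly Gaussian

def crps_gaussian(mu, sigma, y):
    # Closed-form CRPS of a Gaussian forecast N(mu, sigma^2)
    z = (y - mu) / sigma
    return sigma * (z * (2 * norm.cdf(z) - 1) + 2 * norm.pdf(z) - 1 / np.sqrt(np.pi))

def neg_loglik(theta):
    mu, log_sigma = theta
    return -norm.logpdf(y, mu, np.exp(log_sigma)).sum()

def mean_crps(theta):
    mu, log_sigma = theta
    return crps_gaussian(mu, np.exp(log_sigma), y).mean()

x0 = np.array([0.0, 0.0])       # start at mu = 0, sigma = 1
ml = minimize(neg_loglik, x0).x
cr = minimize(mean_crps, x0).x

# With the correct distributional assumption, both proper scores locate
# nearly the same optimum (mu ~ 2.0, sigma ~ 1.5), as the abstract notes.
print(ml[0], np.exp(ml[1]))
print(cr[0], np.exp(cr[1]))
```

Both scores are proper, so with a correctly specified family their minimizers converge to the same parameters; differences only emerge under a misspecified distribution, which is the study's point.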
DEFF Research Database (Denmark)
Arslanagic, Samel; Hansen, Troels Vejle; Mortensen, N. Asger
2013-01-01
The scattering-parameter extraction method of metamaterial homogenization is reviewed to show that the only ambiguity is that related to the choice of the branch of the complex logarithmic function (or the complex inverse cosine function). It is shown that the method has no ambiguity for the sign...
Capdeville, Yann; Métivier, Ludovic
2018-05-01
Seismic imaging is an efficient tool to investigate the Earth's interior. Many of the imaging techniques currently used, including so-called full waveform inversion (FWI), are based on limited-frequency-band data. Such data are not sensitive to the true Earth model, but to a smooth version of it. This smooth version can be related to the true model by the homogenization technique. Homogenization for wave propagation in deterministic media with no scale separation, such as geological media, has recently been developed. With such an asymptotic theory, it is possible to compute an effective medium valid for a given frequency band such that effective waveforms and true waveforms are the same up to a controlled error. In this work we make the link between limited-frequency-band inversion, mainly FWI, and homogenization. We establish the relation between a true model and an FWI result model. This relation is important for a proper interpretation of FWI images. We numerically illustrate, in the 2-D case, that an FWI result is at best the homogenized version of the true model. Moreover, it appears that the homogenized FWI model is quite independent of the FWI parametrization, as long as it has enough degrees of freedom. In particular, inverting for the full elastic tensor was, in each of our tests, always a good choice. We show how homogenization can help to understand FWI behaviour and to improve its robustness and convergence by efficiently constraining the solution space of the inverse problem.
International Nuclear Information System (INIS)
Ilic, R.D.; Vojvodic, V.I.; Orlic, M.P.
1981-01-01
The stochastic nature of photon interactions with matter and the characteristics of photon transport through real materials are very well suited to applications of the Monte Carlo method in calculations of the energy-space distribution of photons. Starting from general principles of the Monte Carlo method, a physical-mathematical model of photon transport from a pulsed source is given for a homogeneous air environment. Based on this model, a computer program is written and applied to calculations of the delay spectra of scattered photons and of changes in the photon energy spectrum. The results obtained provide an estimate of the time-space function of the electromagnetic field generated by photons from a pulsed source. (author)
International Nuclear Information System (INIS)
Mora, Rolando
2013-01-01
A statistical method was used to delineate homogeneous layers of volcanic soils at two sites where dynamic penetration soundings have been carried out. The study includes two soundings (DPL 1A and DPL 1B) with a dynamic penetrometer light (DPL), performed in the canton of La Union, Cartago. The blow-count data as a function of depth for the DPL soundings were used to calculate the intraclass correlation coefficient (IR) and to determine clearly the limits of homogeneous layers in the volcanic soils. The physical and mechanical properties of each identified layer were established with the help of computer programs, as well as the variation of its allowable bearing capacity with depth. With the results obtained it has been possible to determine the most suitable site for the foundation of a potable water storage tank.
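The layer-delineation idea described above can be illustrated with a simple intraclass-correlation scan over a blow-count profile: each candidate boundary splits the profile into two layers, and the depth that best separates two internally homogeneous layers maximizes the coefficient. This is a generic sketch with made-up blow counts, not Mora's exact formulation or data.

```python
import numpy as np

def intraclass_correlation(profile, split):
    # IR = 1 - pooled within-layer variance / total variance; values near 1
    # indicate the split separates two internally homogeneous layers.
    a, b = profile[:split], profile[split:]
    within = (a.var(ddof=1) * (len(a) - 1) + b.var(ddof=1) * (len(b) - 1)) \
             / (len(profile) - 2)
    return 1.0 - within / profile.var(ddof=1)

# Synthetic DPL blow counts: a soft layer over a stiffer one (hypothetical data)
rng = np.random.default_rng(1)
blows = np.concatenate([rng.normal(5, 1, 20), rng.normal(15, 2, 20)])

scores = [intraclass_correlation(blows, s) for s in range(2, len(blows) - 2)]
best = int(np.argmax(scores)) + 2
print(best)  # boundary detected near index 20, where the stiffness jumps
```

In practice the scan would be repeated recursively on each detected layer until no split raises the coefficient appreciably.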
DEFF Research Database (Denmark)
Sokoler, Leo Emil; Frison, Gianluca; Edlund, Kristian
2013-01-01
In this paper, we develop an efficient interior-point method (IPM) for the linear programs arising in economic model predictive control of linear systems. The novelty of our algorithm is that it combines a homogeneous and self-dual model, and a specialized Riccati iteration procedure. We test...
Using a homogenization-evaporation method, beta-carotene (BC) loaded nanoparticles were prepared with different ratios of food-grade sodium caseinate (SC), whey protein isolate (WPI), or soy protein isolate (SPI) to BC and evaluated for their physicochemical stability, in vitro cytotoxicity, and cel...
Matsuda, Tetsuya; Okumura, Dai
2015-01-01
This volume presents a collection of contributions on materials modeling, which were written to celebrate the 65th birthday of Prof. Nobutada Ohno. The book follows Prof. Ohno’s scientific topics, starting with creep damage problems and ending with homogenization methods.
Constitutive modeling of two phase materials using the Mean Field method for homogenization
Perdahcioglu, Emin Semih; Geijselaers, Hubertus J.M.
2010-01-01
A Mean-Field homogenization framework for constitutive modeling of materials involving two distinct elastic-plastic phases is presented. With this approach it is possible to compute the macroscopic mechanical behavior of this type of material based on the constitutive models of the constituent phases.
A method to eliminate wetting during the homogenization of HgCdTe
Su, Ching-Hua; Lehoczky, S. L.; Szofran, F. R.
1986-01-01
Adhesion of HgCdTe samples to fused silica ampoule walls, or 'wetting', during the homogenization process was eliminated by adopting a slower heating rate. The idea is to decrease Cd activity in the sample so as to reduce the rate of reaction between Cd and the silica wall.
International Nuclear Information System (INIS)
Chae, Byung Gon; Choi, Jung Hae; Ichikawa, Yasuaki; Seo, Yong Seok
2012-01-01
To compute a permeability coefficient along a rough fracture that takes into account the fracture geometry, this study performed detailed measurements of fracture roughness using a confocal laser scanning microscope, a quantitative analysis of roughness using spectral analysis, and a homogenization analysis to calculate the permeability coefficient on the micro- and macro-scale. The homogenization analysis is a type of perturbation theory that characterizes the behavior of a microscopically inhomogeneous material with a periodic boundary condition in the microstructure. It is therefore possible to analyze accurately the permeability characteristics produced by the local effect of the fracture geometry. The permeability coefficients calculated using the homogenization analysis for each rough fracture model exhibit an irregular distribution and do not follow the relationship of the cubic law. This distribution suggests that the permeability characteristics strongly depend on the geometric conditions of the fractures, such as the roughness and the aperture variation. The homogenization analysis may allow us to produce more accurate results than are possible with the pre-existing equations for calculating permeability.
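The cubic law that the homogenized permeabilities are compared against is the idealized parallel-plate result: equivalent permeability k = a^2/12 for aperture a, so fracture flow scales with a^3. A minimal sketch of that baseline (values illustrative only):

```python
def cubic_law_permeability(aperture_m):
    # Smooth parallel-plate fracture of aperture a: k = a^2 / 12 (SI units, m^2)
    return aperture_m ** 2 / 12.0

def volumetric_flow(aperture_m, width_m, dp_dx, mu=1.0e-3):
    # Q = -(w * a^3 / (12 mu)) * dp/dx : laminar flow of water (mu ~ 1e-3 Pa s)
    # between parallel plates; the a^3 dependence is the "cubic law".
    return -(width_m * aperture_m ** 3 / (12.0 * mu)) * dp_dx

k = cubic_law_permeability(1e-4)  # 100-micron aperture
print(k)  # ~8.33e-10 m^2
```

Rough fractures with varying aperture deviate from this idealization, which is exactly the irregularity the homogenization analysis in the abstract resolves.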
van der Burg, Eeke; de Leeuw, Jan; Verdegaal, Renée
1988-01-01
Homogeneity analysis, or multiple correspondence analysis, is usually applied to k separate variables. In this paper we apply it to sets of variables by using sums within sets. The resulting technique is called OVERALS. It uses the notion of optimal scaling, with transformations that can be multiple
Watanabe, Masahito; Eguchi, Minoru; Hibiya, Taketoshi
1999-07-01
A novel method for controlling and homogenizing the oxygen distribution in silicon crystals, using electromagnetic force (EMF) to rotate the melt without crucible rotation, has been developed. We call this the electromagnetic Czochralski method. An EMF in the azimuthal direction is generated in the melt by the interaction between an electric current through the melt in the radial direction and a vertical magnetic field (B). The rotation rate (ωm) of the silicon melt is continuously changed from 0 to over 105 rpm under I = 0 to 8 A and B = 0 to 0.1 T. Thirty-mm-diameter silicon single crystals free of dislocations could be grown under several conditions. The oxygen concentration in the crystals was continuously changed from 1 × 10^17 to 1 × 10^18 atoms/cm^3 with increasing melt rotation driven by the electromagnetic force. Homogeneous oxygen distributions in the radial direction were achieved. The continuous change of oxygen concentration and the homogenization of the oxygen distribution along the radial direction are attributed to control of the diffusion boundary layer at both the melt/crucible and crystal/melt interfaces by the forced flow due to the EMF. This new method should be useful for the growth of large-diameter silicon crystals with a homogeneous distribution of oxygen.
Mathematical methods of classical physics
Cortés, Vicente
2017-01-01
This short primer, geared towards students with a strong interest in mathematically rigorous approaches, introduces the essentials of classical physics, briefly points out its place in the history of physics and its relation to modern physics, and explains what benefits can be gained from a mathematical perspective. As a starting point, Newtonian mechanics is introduced and its limitations are discussed. This leads to and motivates the study of different formulations of classical mechanics, such as Lagrangian and Hamiltonian mechanics, which are the subjects of later chapters. In the second part, a chapter on classical field theories introduces more advanced material. Numerous exercises are collected in the appendix.
International Nuclear Information System (INIS)
Leyva, Ana G.; Vega, Daniel R.; Trimarco, Veronica G.; Marchi, Daniel E.
1999-01-01
(U,Gd)O2 sintered pellets are fabricated by different methods. Characterisation of the homogeneity of the Gd content seems necessary as a production control to qualify the process and the final product. The micrographic technique is the most common method used to analyse the homogeneity of these samples; this method requires time and expertise to obtain good results. In this paper, we propose an analysis of X-ray powder diffraction patterns through the Rietveld method, in which the differences between the experimental data and a pattern calculated from a proposed crystal structure model are evaluated. This analysis allows determination of the cell parameters, which can be correlated with the Gd concentration, and of the existence of other phases with different Gd ratios. (author)
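The final step of the abstract, correlating Rietveld-refined cell parameters with Gd concentration, amounts to inverting a calibration line (a Vegard-like linear trend is assumed here purely for illustration). All numbers below are hypothetical, not refined values from the paper:

```python
import numpy as np

# Hypothetical calibration: refined cell parameter a (angstrom) vs Gd fraction x
# in (U,Gd)O2; a linear Vegard-like trend is an assumption for this sketch.
x_cal = np.array([0.00, 0.02, 0.04, 0.06, 0.08])
a_cal = np.array([5.470, 5.466, 5.462, 5.458, 5.454])  # made-up values

slope, intercept = np.polyfit(x_cal, a_cal, 1)  # least-squares calibration line

def gd_fraction_from_cell(a):
    # Invert the calibration line to estimate Gd content from a refined cell parameter
    return (a - intercept) / slope

print(round(gd_fraction_from_cell(5.460), 3))  # ~0.05 for a = 5.460 angstrom
```

A second phase with a different Gd ratio would show up in the refinement as a second set of cell parameters, mapping to a second point on this calibration.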
Debussche, A.; Dubois, T.; Temam, R.
1993-01-01
Using results of Direct Numerical Simulation (DNS) in the case of two-dimensional homogeneous isotropic flows, the behavior of the small and large scales of Kolmogorov-like flows at moderate Reynolds numbers is first analyzed in detail. Several estimates of the time variations of the small eddies and of the nonlinear interaction terms are derived; those terms play the role of the Reynolds stress tensor in the case of LES. Since the time step of a numerical scheme is determined as a function of the energy-containing eddies of the flow, the variations of the small scales and of the nonlinear interaction terms over one iteration can become negligible compared with the accuracy of the computation. Based on this remark, a multilevel scheme which treats the small and the large eddies differently is proposed. Using mathematical developments, estimates are derived of all the parameters involved in the algorithm, which then becomes a completely self-adaptive procedure. Finally, realistic simulations of (Kolmogorov-like) flows over several eddy-turnover times are performed. The results are analyzed in detail and a parametric study of the nonlinear Galerkin method is carried out.
International Nuclear Information System (INIS)
Ngoc Tam, Nguyen; Nakamura, Yasunori; Terao, Toshihiro; Kuramae, Hiroyuki; Nakamachi, Eiji; Sakamoto, Hidetoshi; Morimoto, Hideo
2007-01-01
Recently, asymmetric rolling (ASR) has been applied to the processing of aluminum alloy sheet to control the micro-crystal structure and texture in order to improve the mechanical properties. Several previous studies aimed at generating high-formability sheet have been carried out experimentally, but finite element simulations to predict the deformation-induced texture evolution of asymmetrically rolled sheet metals have not been investigated rigorously. In this study, crystallographic homogenized finite element (FE) codes are developed and applied to analyze asymmetric rolling processes. The textures of the sheet metals were measured by electron backscatter diffraction (EBSD) and compared with the FE simulations. The results from the dynamic explicit crystallographic homogenization FEM code show that this type of simulation is a comprehensive tool to predict plasticity-induced texture evolution
Statistical methods in radiation physics
Turner, James E; Bogard, James S
2012-01-01
This statistics textbook, with particular emphasis on radiation protection and dosimetry, deals with statistical solutions to problems inherent in health physics measurements and decision making. The authors begin with a description of our current understanding of the statistical nature of physical processes at the atomic level, including radioactive decay and interactions of radiation with matter. Examples are taken from problems encountered in health physics, and the material is presented such that health physicists and most other nuclear professionals will more readily understand the application of statistical principles in the familiar context of the examples. Problems are presented at the end of each chapter, with solutions to selected problems provided online. In addition, numerous worked examples are included throughout the text.
Statistical methods in physical mapping
International Nuclear Information System (INIS)
Nelson, D.O.
1995-05-01
One of the great success stories of modern molecular genetics has been the ability of biologists to isolate and characterize the genes responsible for serious inherited diseases like fragile X syndrome, cystic fibrosis and myotonic muscular dystrophy. This dissertation concentrates on constructing high-resolution physical maps. It demonstrates how probabilistic modeling and statistical analysis can aid molecular geneticists in the tasks of planning, execution, and evaluation of physical maps of chromosomes and large chromosomal regions. The dissertation is divided into six chapters. Chapter 1 provides an introduction to the field of physical mapping, describing the role of physical mapping in gene isolation and past efforts at mapping chromosomal regions. The next two chapters review and extend known results on predicting progress in large mapping projects. Such predictions help project planners decide between various approaches and tactics for mapping large regions of the human genome. Chapter 2 shows how probability models have been used in the past to predict progress in mapping projects. Chapter 3 presents new results, based on stationary point process theory, for progress measures for mapping projects based on directed mapping strategies. Chapter 4 describes in detail the construction of an initial high-resolution physical map for human chromosome 19. This chapter introduces the probability and statistical models involved in map construction in the context of a large, ongoing physical mapping project. Chapter 5 concentrates on one such model, the trinomial model. This chapter contains new results on the large-sample behavior of this model, including distributional results, asymptotic moments, and detection error rates. In addition, it contains an optimality result concerning experimental procedures based on the trinomial model. The last chapter explores unsolved problems and describes future work
Statistical methods in physical mapping
Energy Technology Data Exchange (ETDEWEB)
Nelson, David O. [Univ. of California, Berkeley, CA (United States)
1995-05-01
One of the great success stories of modern molecular genetics has been the ability of biologists to isolate and characterize the genes responsible for serious inherited diseases like fragile X syndrome, cystic fibrosis and myotonic muscular dystrophy. This dissertation concentrates on constructing high-resolution physical maps. It demonstrates how probabilistic modeling and statistical analysis can aid molecular geneticists in the tasks of planning, execution, and evaluation of physical maps of chromosomes and large chromosomal regions. The dissertation is divided into six chapters. Chapter 1 provides an introduction to the field of physical mapping, describing the role of physical mapping in gene isolation and past efforts at mapping chromosomal regions. The next two chapters review and extend known results on predicting progress in large mapping projects. Such predictions help project planners decide between various approaches and tactics for mapping large regions of the human genome. Chapter 2 shows how probability models have been used in the past to predict progress in mapping projects. Chapter 3 presents new results, based on stationary point process theory, for progress measures for mapping projects based on directed mapping strategies. Chapter 4 describes in detail the construction of an initial high-resolution physical map for human chromosome 19. This chapter introduces the probability and statistical models involved in map construction in the context of a large, ongoing physical mapping project. Chapter 5 concentrates on one such model, the trinomial model. This chapter contains new results on the large-sample behavior of this model, including distributional results, asymptotic moments, and detection error rates. In addition, it contains an optimality result concerning experimental procedures based on the trinomial model. The last chapter explores unsolved problems and describes future work.
Some mathematical methods of physics
Goertzel, Gerald
2014-01-01
This well-rounded, thorough treatment for advanced undergraduates and graduate students introduces basic concepts of mathematical physics involved in the study of linear systems. The text emphasizes eigenvalues, eigenfunctions, and Green's functions. Prerequisites include differential equations and a first course in theoretical physics.The three-part presentation begins with an exploration of systems with a finite number of degrees of freedom (described by matrices). In part two, the concepts developed for discrete systems in previous chapters are extended to continuous systems. New concepts u
International Nuclear Information System (INIS)
Österreicher, Johannes Albert; Kumar, Manoj; Schiffl, Andreas; Schwarz, Sabine; Hillebrand, Daniel; Bourret, Gilles Remi
2016-01-01
Characterization of Mg-Si precipitates is crucial for optimizing the homogenization heat treatment of Al-Mg-Si alloys. Although sample preparation is key for high quality scanning electron microscopy imaging, most common methods lead to dealloying of Mg-Si precipitates. In this article we systematically evaluate different sample preparation methods: mechanical polishing, etching with various reagents, and electropolishing using different electrolytes. We demonstrate that the use of a nitric acid and methanol electrolyte for electropolishing a homogenized Al-Mg-Si alloy prevents the dissolution of Mg-Si precipitates, resulting in micrographs of higher quality. This preparation method is investigated in depth and the obtained scanning electron microscopy images are compared with transmission electron micrographs: the shape and size of Mg-Si precipitates appear very similar in either method. The scanning electron micrographs allow proper identification and measurement of the Mg-Si phases including needles with lengths of roughly 200 nm. These needles are β″ precipitates as confirmed by high resolution transmission electron microscopy. - Highlights: •Secondary precipitation in homogenized 6xxx Al alloys is crucial for extrudability. •Existing sample preparation methods for SEM are improvable. •Electropolishing with nitric acid/methanol yields superior quality in SEM. •The obtained micrographs are compared to TEM micrographs.
Energy Technology Data Exchange (ETDEWEB)
Österreicher, Johannes Albert; Kumar, Manoj [LKR Light Metals Technologies Ranshofen, Austrian Institute of Technology, Postfach 26, 5282 Ranshofen (Austria); Schiffl, Andreas [Hammerer Aluminium Industries Extrusion GmbH, Lamprechtshausener Straße 69, 5282 Ranshofen (Austria); Schwarz, Sabine [University Service Centre for Transmission Electron Microscopy, Vienna University of Technology, Wiedner Hauptstr. 8-10, 1040 Wien (Austria); Hillebrand, Daniel [Hammerer Aluminium Industries Extrusion GmbH, Lamprechtshausener Straße 69, 5282 Ranshofen (Austria); Bourret, Gilles Remi, E-mail: gilles.bourret@sbg.ac.at [Department of Materials Science and Physics, University of Salzburg, Hellbrunner Straße 34, 5020 Salzburg (Austria)
2016-12-15
Characterization of Mg-Si precipitates is crucial for optimizing the homogenization heat treatment of Al-Mg-Si alloys. Although sample preparation is key for high quality scanning electron microscopy imaging, most common methods lead to dealloying of Mg-Si precipitates. In this article we systematically evaluate different sample preparation methods: mechanical polishing, etching with various reagents, and electropolishing using different electrolytes. We demonstrate that the use of a nitric acid and methanol electrolyte for electropolishing a homogenized Al-Mg-Si alloy prevents the dissolution of Mg-Si precipitates, resulting in micrographs of higher quality. This preparation method is investigated in depth and the obtained scanning electron microscopy images are compared with transmission electron micrographs: the shape and size of Mg-Si precipitates appear very similar in either method. The scanning electron micrographs allow proper identification and measurement of the Mg-Si phases including needles with lengths of roughly 200 nm. These needles are β″ precipitates as confirmed by high resolution transmission electron microscopy. - Highlights: •Secondary precipitation in homogenized 6xxx Al alloys is crucial for extrudability. •Existing sample preparation methods for SEM are improvable. •Electropolishing with nitric acid/methanol yields superior quality in SEM. •The obtained micrographs are compared to TEM micrographs.
Reactor physics methods development at Westinghouse
International Nuclear Information System (INIS)
Mueller, E.; Mayhue, L.; Zhang, B.
2007-01-01
The current state of reactor physics methods development at Westinghouse is discussed. The focus is on the methods that have been or are under development within the NEXUS project which was launched a few years ago. The aim of this project is to merge and modernize the methods employed in the PWR and BWR steady-state reactor physics codes of Westinghouse. (author)
Method of producing homogeneous mixed metal oxides and metal--metal oxide mixtures
International Nuclear Information System (INIS)
Quinby, T.C.
1978-01-01
Metal powders, metal oxide powders, and mixtures thereof of controlled particle size are provided by reacting an aqueous solution containing dissolved metal values with excess urea. Upon heating, urea reacts with water from the solution to leave a molten urea solution containing the metal values. The molten urea solution is heated to above about 180 °C, whereupon the metal values precipitate homogeneously as a powder. The powder is reduced to metal or calcined to form oxide particles. One or more metal oxides in a mixture can be selectively reduced to produce metal particles or a mixture of metal and metal oxide particles.
A Method for Ferulic Acid Production from Rice Bran Oil Soapstock Using a Homogenous System
Hoa Thi Truong; Manh Do Van; Long Duc Huynh; Linh Thi Nguyen; Anh Do Tuan; Thao Le Xuan Thanh; Hung Duong Phuoc; Norimichi Takenaka; Kiyoshi Imamura; Yasuaki Maeda
2017-01-01
Ferulic acid (FA) is widely used as an antioxidant, e.g., as an ultraviolet (UV) protectant in cosmetics and in various medical applications. It has been produced by the hydrolysis of γ-oryzanol found in rice bran oil soapstock. In this study, the base-catalyzed, homogeneous hydrolysis of γ-oryzanol was conducted using various ratios of potassium hydroxide (KOH) to γ-oryzanol, initial concentrations of γ-oryzanol in the reaction mixture, and ratios of ethanol (EtOH) (as cosolvent)/ethyl acetate...
Development of triple scale finite element analyses based on crystallographic homogenization methods
International Nuclear Information System (INIS)
Nakamachi, Eiji
2004-01-01
A crystallographic homogenization procedure is implemented in the piezoelectric and elastic-crystalline plastic finite element (FE) code to assess the macro-continuum properties of piezoelectric ceramics and of BCC and FCC sheet metals. The triple-scale hierarchical structure consists of an atom cluster, a crystal aggregation and a macro-continuum. In this paper, we focus on a triple-scale numerical analysis for piezoelectric material and apply it to assess a macro-continuum material property. First, we calculate the material properties of the perovskite crystal of the piezoelectric material XYO3 (such as BaTiO3 and PbTiO3) by employing the ab initio molecular analysis code CASTEP. Next, measured results from SEM and EBSD observations of the crystal orientation distributions, shapes and boundaries of a real material (BaTiO3) are employed to define the inhomogeneity of the crystal aggregation, which corresponds to a unit cell of the microstructure and satisfies the periodicity condition. This procedure constitutes a first scaling up, from the molecular level to the crystal aggregation. Finally, the conventional homogenization procedure is implemented in the FE code to evaluate a macro-continuum property; this constitutes a second scaling up, from the crystal aggregation (unit cell) to the macro-continuum. This triple-scale analysis is applied to the design of piezoelectric ceramics and finds an optimum crystal orientation distribution, in which the macroscopic piezoelectric constant d33 has a maximum value
Directory of Open Access Journals (Sweden)
Juin-Ling Tseng
2016-01-01
Facial animation is one of the most popular 3D animation topics researched in recent years. However, when using facial animation, a 3D facial animation model has to be stored. This model requires many triangles to accurately describe and demonstrate facial expression animation, because the face often presents a number of different expressions. Consequently, the costs associated with facial animation have increased rapidly. In an effort to reduce storage costs, researchers have sought to simplify 3D animation models using techniques such as Deformation Sensitive Decimation and Feature Edge Quadric. These studies have examined the problems of the homogeneity of the local coordinate system between different expression models and of retaining the characteristics of the simplified model. This paper proposes a method that applies a Homogeneous Coordinate Transformation Matrix to solve the problem of homogeneity of the local coordinate system, and a Maximum Shape Operator to detect shape changes in facial animation so as to properly preserve the features of facial expressions. Further, root mean square error and perceived quality error are used to compare the errors generated by different simplification methods in experiments. Experimental results show that, compared with Deformation Sensitive Decimation and Feature Edge Quadric, our method can not only reduce the errors caused by simplification of facial animation, but also retain more facial features.
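Two of the ingredients named in this abstract are generic enough to sketch: a 4x4 homogeneous coordinate transformation matrix for aligning the local coordinate systems of two expression models, and the root-mean-square error used to compare a model with its simplified version. The toy vertices and rotation below are hypothetical; the paper's Maximum Shape Operator is not reproduced here.

```python
import numpy as np

def homogeneous_transform(rotation, translation):
    # Build a 4x4 homogeneous coordinate transformation matrix [R | t; 0 | 1]
    T = np.eye(4)
    T[:3, :3] = rotation
    T[:3, 3] = translation
    return T

def apply_transform(T, points):
    # points: (n, 3) vertices -> homogeneous (n, 4), transform, drop the w row
    homo = np.hstack([points, np.ones((len(points), 1))])
    return (homo @ T.T)[:, :3]

def rmse(a, b):
    # Root mean square vertex-to-vertex error between two models
    return np.sqrt(np.mean(np.sum((a - b) ** 2, axis=1)))

# Toy vertices of an expression model (hypothetical data)
verts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
Rz = np.array([[0.0, -1.0, 0.0],
               [1.0,  0.0, 0.0],
               [0.0,  0.0, 1.0]])  # 90-degree rotation about z
T = homogeneous_transform(Rz, np.array([1.0, 2.0, 0.0]))
moved = apply_transform(T, verts)
print(rmse(moved, apply_transform(T, verts)))  # 0.0 for identical models
```

Expressing every expression model in one shared frame via such a matrix is what makes vertex-wise error measures like RMSE meaningful across models.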
Nebbak, A; El Hamzaoui, B; Berenger, J-M; Bitam, I; Raoult, D; Almeras, L; Parola, P
2017-12-01
Ticks and fleas are vectors for numerous human and animal pathogens. Controlling them, which is important in combating such diseases, requires accurate identification, to distinguish between vector and non-vector species. Recently, matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF-MS) was applied to the rapid identification of arthropods. The growth of this promising tool, however, requires guidelines to be established. To this end, standardization protocols were applied to species of Rhipicephalus sanguineus (Ixodida: Ixodidae) Latreille and Ctenocephalides felis felis (Siphonaptera: Pulicidae) Bouché, including the automation of sample homogenization using two homogenizer devices, and varied sample preservation modes for a period of 1-6 months. The MS spectra were then compared with those obtained from manual pestle grinding, the standard homogenization method. Both automated methods generated intense, reproducible MS spectra from fresh specimens. Frozen storage methods appeared to represent the best preservation mode, for up to 6 months, while storage in ethanol is also possible, with some caveats for tick specimens. Carnoy's buffer, however, was shown to be less compatible with MS analysis for the purpose of identifying ticks or fleas. These standard protocols for MALDI-TOF MS arthropod identification should be complemented by additional MS spectrum quality controls, to generalize their use in monitoring arthropods of medical interest. © 2017 The Royal Entomological Society.
A review of the physics methods for advanced gas-cooled reactors
International Nuclear Information System (INIS)
Buckler, A.N.
1982-01-01
A review is given of steady-state reactor physics methods and associated codes used in AGR design and operation. These range from the basic lattice codes (ARGOSY, WIMS), through homogeneous-diffusion theory fuel management codes (ODYSSEUS, MOPSY) to a fully heterogeneous code (HET). The current state of development of the methods is discussed, together with illustrative examples of their application. (author)
Energy Technology Data Exchange (ETDEWEB)
Griebel, M. [Technische Universitaet Muenchen (Germany)
1996-12-31
For problems which model locally strongly varying phenomena on the micro-scale level, the grid for numerical simulation cannot be chosen sufficiently fine, for reasons of storage requirements and numerical complexity. A typical example of such a problem is the diffusion equation with strongly varying diffusion coefficients, as arises from Darcy's law in reservoir simulation and related problems of flow in porous media. Therefore, on the macro-scale level it is necessary to work with averaged equations which directly describe the large-scale behavior of the problem under consideration. In the numerical simulation of reservoir performance this is achieved e.g. by renormalization or homogenization, since simpler approaches like the arithmetic, geometric or harmonic mean turn out to be invalid for systems with strong permeability variations.
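The inadequacy of the simple means is easy to verify numerically: for a strongly varying permeability field the three averages disagree by orders of magnitude, so none of them can be a universally valid effective value (the permeabilities below are illustrative):

```python
import math

k = [1.0, 10.0, 100.0, 1000.0]  # strongly varying permeabilities

arith = sum(k) / len(k)                                   # upper bound (layers parallel to flow)
geom = math.exp(sum(math.log(v) for v in k) / len(k))     # log-average
harm = len(k) / sum(1.0 / v for v in k)                   # lower bound (layers in series)

print(arith, geom, harm)      # the three means span orders of magnitude
assert harm <= geom <= arith  # classical mean inequality
```

Renormalization and homogenization produce an effective coefficient between the harmonic and arithmetic bounds that depends on the actual spatial arrangement, not just the value distribution.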
Methods in relativistic nuclear physics
International Nuclear Information System (INIS)
Danos, M.; Gillet, V.; Cauvin, M.
1984-01-01
This book is intended to provide the methods and tools for performing actual calculations for finite many-body systems of bound relativistic constituent particles. The aim is to cover thoroughly the methodological aspects of the relativistic many-body problem for bound states while avoiding the presentation of specific models. The many examples contained in the later part of the work are meant to give concrete illustrations of how to actually apply the methods which are given in the first part. The basic framework of the approach is Lagrangian field theory solved in the time-independent Schroedinger picture. (Auth.)
Nuclear physics methods in materials research
International Nuclear Information System (INIS)
Bethge, K.; Baumann, H.; Jex, H.; Rauch, F.
1980-01-01
Proceedings of the seventh divisional conference of the Nuclear Physics Division held at Darmstadt, Germany, from 23rd through 26th of September, 1980. The scope of this conference was defined as follows: i) to inform solid state physicists and materials scientists about the application of nuclear physics methods; ii) to show to nuclear physicists open questions and problems in solid state physics and materials science to which their methods can be applied. According to the intentions of the conference, the various nuclear physics methods utilized in solid state physics and materials science and especially new developments were reviewed by invited speakers. Detailed aspects of the methods and typical examples extending over a wide range of applications were presented as contributions in poster sessions. The Proceedings contain all the invited papers and about 90% of the contributed papers. (orig./RW)
Directory of Open Access Journals (Sweden)
Yoshio Kobayashi
2015-09-01
Full Text Available The present work proposes a method to fabricate indium tin oxide (ITO) particles using precursor particles synthesized by a combination of a homogeneous precipitation method and a seeding technique, and it also describes their electronic conductivity properties. Seed nanoparticles were produced by a co-precipitation method from aqueous solutions of indium(III) chloride, tin(IV) chloride and sodium hydroxide. Three types of ITO nanoparticles were fabricated. The first type was fabricated using the co-precipitation method (c-ITO). The second and third types were fabricated using a homogeneous precipitation method with the seed nanoparticles (s-ITO) and without seeds (n-ITO). The as-prepared precursor particles were annealed in air at 500 °C, and their crystal structure was that of cubic ITO. The c-ITO nanoparticles formed irregularly shaped agglomerates. The n-ITO nanoparticles had a rectangular-parallelepiped or quasi-cubic shape. Most s-ITO nanoparticles had a quasi-cubic shape, and their size was larger than that of the n-ITO particles. The volume resistivities of the c-ITO, n-ITO and s-ITO powders decreased in that order, because the regularly shaped particles made stronger contact with each other.
Energy Technology Data Exchange (ETDEWEB)
Zhao, Haiqiang; Qi, Weihong, E-mail: qiwh216@csu.edu.cn; Ji, Wenhai; Wang, Tianran; Peng, Hongcheng; Wang, Qi; Jia, Yanlin; He, Jieting [Central South University, School of Materials Science and Engineering (China)
2017-05-15
Fivefold symmetry appears only in small particles and quasicrystals because internal stress in the particles increases with the particle size. However, a typical Marks decahedron with five re-entrant grooves located at the ends of the twin boundaries can further reduce the strain energy. During hydrothermal synthesis, it is difficult to stir the reaction solution contained in a digestion high-pressure tank because of the relatively small size and high-temperature and high-pressure sealed environment. In this work, we optimized a hydrothermal reaction system by replacing the conventional drying oven with a homogeneous reactor to shift the original static reaction solution into a full mixing state. Large Marks-decahedral Pd nanoparticles (~90 nm) have been successfully synthesized in the optimized hydrothermal synthesis system. Additionally, in the products, round Marks-decahedral Pd particles were also found for the first time. While it remains a challenge to understand the growth mechanism of the fivefold twinned structure, we proposed a plausible growth-mediated mechanism for Marks-decahedral Pd nanoparticles based on observations of the synthesis process.
Methods for assessing homogeneity in ThO2--UO2 fuels (LWBR Development Program)
International Nuclear Information System (INIS)
Berman, R.M.
1978-06-01
ThO2-UO2 solid solutions fabricated as LWBR fuel pellets are examined for uniform uranium distribution by means of autoradiography. Kodak NTA plates are used. Images of inhomogeneities are 29 ± 10 microns larger in diameter than the high-urania segregations that caused them, owing to the range of alpha particles in the emulsion, and an appropriate correction must be made. Photographic density is approximately linear with urania content in the region between underexposure and overexposure, but the slope of the calibration curve varies with aging and the growth of alpha activity from the parasitic 232U and its decay products. A calibration must therefore be performed using two known points: the average photographic density (corresponding to the average composition) and an extrapolated background (corresponding to zero urania). As part of production pellet inspection, plates are evaluated by inspectors, who count segregations by size classes. This is supplemented by microdensitometer scans of the autoradiograph and by electron probe studies of the original sample if apparent homogeneity is marginal.
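The two-point calibration described above amounts to a linear interpolation between the extrapolated background (zero urania) and the average density (average composition). A sketch with hypothetical plate readings (the density and composition values are illustrative, not from the report):

```python
def urania_content(d, d_bg, d_avg, c_avg):
    """Linear two-point calibration: photographic density d -> urania content.

    d_bg: extrapolated background density (zero urania)
    d_avg: average photographic density (average composition c_avg)
    Valid only in the linear region between under- and overexposure.
    """
    return c_avg * (d - d_bg) / (d_avg - d_bg)

# hypothetical plate readings (density units) and average composition (wt% UO2)
d_bg, d_avg, c_avg = 0.20, 0.80, 5.0
print(urania_content(0.80, d_bg, d_avg, c_avg))  # average density -> average composition
print(urania_content(0.20, d_bg, d_avg, c_avg))  # background -> zero urania
```

Because the curve's slope drifts with plate aging and 232U activity growth, both anchor points must be re-measured for each plate rather than taken from a fixed calibration.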
Delvaux, Elaine; Mastroeni, Diego; Nolz, Jennifer; Coleman, Paul D
2016-06-01
We describe a novel method for assessing the "open" or "closed" state of chromatin at selected locations within the genome. This method combines the use of Benzonase, which can digest DNA in the presence of actin, with qPCR to define digested regions. We demonstrate the application of this method in brain homogenates and laser captured cells. We also demonstrate application to selected sites within more than one gene and multiple sites within one gene. We demonstrate the validity of the method by treating cells with valproate, known to render chromatin more permissive, and by comparison with classical digestion with DNase I in an in vitro preparation. Although we demonstrate the use of this method in brain tissue we also recognize its applicability to other tissue types.
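The qPCR readout of such a digestion assay is commonly reduced to a surviving-template fraction via the exponential amplification model. This sketch assumes perfect doubling per cycle and is an illustration of that standard reduction, not a protocol detail from the paper:

```python
def fraction_protected(ct_digested, ct_undigested, efficiency=2.0):
    """Fraction of template surviving nuclease digestion, inferred from
    qPCR threshold cycles; a larger surviving fraction suggests more
    'closed' (protected) chromatin at the probed site."""
    return efficiency ** -(ct_digested - ct_undigested)

print(fraction_protected(24.0, 24.0))  # no template loss -> 1.0
print(fraction_protected(26.0, 24.0))  # threshold two cycles later -> 0.25
```

In practice the amplification efficiency is calibrated per primer pair rather than assumed to be exactly 2.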
Directory of Open Access Journals (Sweden)
Elaine Delvaux
2016-06-01
Full Text Available We describe a novel method for assessing the “open” or “closed” state of chromatin at selected locations within the genome. This method combines the use of Benzonase, which can digest DNA in the presence of actin, with quantitative polymerase chain reaction to define digested regions. We demonstrate the application of this method in brain homogenates and laser captured cells. We also demonstrate application to selected sites within more than 1 gene and multiple sites within 1 gene. We demonstrate the validity of the method by treating cells with valproate, known to render chromatin more permissive, and by comparison with classical digestion with DNase I in an in vitro preparation. Although we demonstrate the use of this method in brain tissue, we also recognize its applicability to other tissue types.
Czech Academy of Sciences Publication Activity Database
Pauporté, T.; Jirka, Ivan
2009-01-01
Roč. 54, č. 28 (2009), s. 7558-7564 ISSN 0013-4686 R&D Projects: GA AV ČR IAA400400909 Institutional research plan: CEZ:AV0Z40400503 Keywords: electrodeposition * ZnO * room temperature * photoluminescence Subject RIV: CF - Physical; Theoretical Chemistry Impact factor: 3.325, year: 2009
Joyce, Duncan; Parnell, William J; Assier, Raphaël C; Abrahams, I David
2017-05-01
In Parnell & Abrahams (2008 Proc. R. Soc. A 464 , 1461-1482. (doi:10.1098/rspa.2007.0254)), a homogenization scheme was developed that gave rise to explicit forms for the effective antiplane shear moduli of a periodic unidirectional fibre-reinforced medium where fibres have non-circular cross section. The explicit expressions are rational functions in the volume fraction. In that scheme, a (non-dilute) approximation was invoked to determine leading-order expressions. Agreement with existing methods was shown to be good except at very high volume fractions. Here, the theory is extended in order to determine higher-order terms in the expansion. Explicit expressions for effective properties can be derived for fibres with non-circular cross section, without recourse to numerical methods. Terms appearing in the expressions are identified as being associated with the lattice geometry of the periodic fibre distribution, fibre cross-sectional shape and host/fibre material properties. Results are derived in the context of antiplane elasticity but the analogy with the potential problem illustrates the broad applicability of the method to, e.g. thermal, electrostatic and magnetostatic problems. The efficacy of the scheme is illustrated by comparison with the well-established method of asymptotic homogenization where for fibres of general cross section, the associated cell problem must be solved by some computational scheme.
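For orientation, the classical Maxwell-Garnett (Clausius-Mossotti-type) estimate for circular fibres in antiplane shear is itself a rational function of the volume fraction, of the kind the scheme above generalizes to non-circular cross sections and higher order. The snippet below is that textbook estimate, not the paper's scheme:

```python
def mu_effective(mu0, mu1, phi):
    """2D Maxwell-Garnett estimate of the effective antiplane shear modulus
    for circular fibres (modulus mu1, volume fraction phi) in a host mu0."""
    beta = (mu1 - mu0) / (mu1 + mu0)
    return mu0 * (1 + phi * beta) / (1 - phi * beta)

print(mu_effective(1.0, 10.0, 0.0))  # no fibres -> host modulus
print(mu_effective(1.0, 10.0, 0.3))  # stiff fibres raise the effective modulus
```

By the analogy noted in the abstract, the same formula applies verbatim to effective conductivity, permittivity and permeability in the corresponding potential problems.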
Shi-Ying, Jin; Jin, Han; Shi-Xiao, Jin; Qing-Yuan, Lv; Jin-Xia, Bai; Chen, Hong-Ge; Rui-Sheng, Li; Wei, Wu; Hai-Long, Yuan
2014-01-01
To improve the absorption and bioavailability of baicalin using a nanocrystal (or nanosuspension) drug delivery system. A tandem ultrasonic-homogenization-fluid-bed-drying technology was applied to prepare baicalin-nanocrystal dried powders, and the physicochemical properties of the baicalin nanocrystals were characterized by scanning electron microscopy, photon correlation spectroscopy, powder X-ray diffraction, physical stability, and solubility experiments. Furthermore, in situ single-pass intestinal perfusion experiments and pharmacokinetics in rats were performed to compare the baicalin nanocrystals with baicalin microcrystals and pure baicalin in their absorption properties and in vivo bioavailability. The mean particle size of the baicalin nanocrystals was 236 nm, with a polydispersity index of 0.173 and a zeta potential of -34.8 mV, which guaranteed the stability of the reconstituted nanosuspension. X-ray diffraction results indicated that the crystallinity of baicalin was decreased through the ultrasonic-homogenization process. Physical stability experiments showed that the prepared baicalin nanocrystals were sufficiently stable. The solubility of baicalin in the form of nanocrystals, at 495 μg·mL(-1), was much higher than that of the baicalin microcrystals and the physical mixture (135 and 86.4 μg·mL(-1), respectively). In situ intestinal perfusion experiments demonstrated a clear advantage in the dissolution and absorption characteristics of the baicalin nanocrystals compared to the other formulations. In addition, after oral administration to rats, the decrease in particle size from the micron to the nanometer range produced much higher in vivo bioavailability (AUC(0-t) values of 206.96 ± 21.23 and 127.95 ± 14.41 mg·L(-1)·h(-1), respectively). The nanocrystal drug delivery system prepared by an ultrasonic-homogenization-fluid-bed-drying process is thus able to improve the absorption and in vivo bioavailability of baicalin, compared with pure baicalin.
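The reported AUC(0-t) values translate directly into the exposure gain from the size reduction:

```python
# Ratio of the reported AUC(0-t) values for the nanocrystals versus the
# comparator formulation (units mg·L^-1·h^-1, as given in the abstract).
auc_nano, auc_comp = 206.96, 127.95
gain = auc_nano / auc_comp
print(round(gain, 2))  # about a 1.6-fold increase in systemic exposure
```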
International Nuclear Information System (INIS)
Jayanthi, S.; Kutty, T.R.N.
2004-01-01
Ca-substituted BaTiO3 with an extended homogeneity range up to ~50 mol% CaTiO3 has been prepared by three different chemical routes, namely carbonate-oxalate (COBCT), gel-carbonate (GCBCT), and gel-to-crystallite conversion (GHBCT), followed by heat treatment above 1150 deg. C. X-ray powder diffraction (XRD) data show a continuous decrease in the tetragonal unit cell parameters as well as in the c0/a0 ratio with CaTiO3 content, in accordance with the substitution of the smaller Ca2+ ions at the barium sites. The microstructure as well as the dielectric properties are greatly influenced by the cationic ratio, α=(Ba+Ca)/Ti. The grain size decreases with CaTiO3 content for the stoichiometric samples (α=1), whereas an ultrafine microstructure is observed for the off-stoichiometric samples (α>1) over the whole range of CaTiO3 concentrations. For α=1, sharper εr-T characteristics are observed at lower calcium content and broader εr-T characteristics with decreased εmax in the higher calcium range, whereas nanometer-grained ceramics exhibiting diffuse εr-T characteristics are obtained for α>1. The positive temperature coefficient of resistivity (PTCR) is realized for barium calcium titanate ceramics having 0.3 at.% Sb as the donor dopant at higher CaTiO3 content (typically 30 mol%) for α=1, indicating that Ca2+ ions do not behave as acceptors, as they would if they substituted at the Ti4+ sites; the off-stoichiometric (α>1) ceramics instead retained high resistivity, indicative of Ti-site occupancy by Ca2+ in fine-grain ceramics.
Industrial applications of neutron physics methods
International Nuclear Information System (INIS)
Gozani, T.
1994-01-01
Three areas where nuclear-based techniques have significant impact are briefly described. These are: nuclear material control and non-proliferation, on-line elemental analysis of coal and minerals, and non-intrusive detection of explosives and other contraband. The underlying nuclear physics and the role of reactor physics methods are highlighted. (author). 5 refs., 10 figs., 5 tabs
Li, Y; Joyner, H S; Carter, B G; Drake, M A
2018-04-01
Fluid milk may be pasteurized by high-temperature short-time pasteurization (HTST) or ultrapasteurization (UP). Literature suggests that UP increases milk astringency, but definitive studies have not demonstrated this effect. Thus, the objective of this study was to determine the effects of pasteurization method, fat content, homogenization pressure, and storage time on milk sensory and mechanical behaviors. Raw skim, 2%, and 5% fat milk was pasteurized in duplicate by indirect UP (140°C, 2.3 s) or by HTST pasteurization (78°C, 15 s), homogenized at 20.7 MPa, and stored at 4°C for 8 wk. Additionally, 2% fat milk was processed by indirect UP and homogenized at 13.8, 20.7, and 27.6 MPa and stored at 4°C for 8 wk. Sensory profiling, instrumental viscosity, and friction profiles of all milks were evaluated at 25°C after storage times of 1, 4, and 8 wk. Sodium dodecyl sulfate PAGE and confocal laser scanning microscopy were used to determine protein structural changes in milk at these time points. Fresh HTST milk was processed at wk 7 for wk 8 evaluations. Ultrapasteurization increased milk sensory and instrumental viscosity compared with HTST pasteurization. Increased fat content increased sensory and instrumental viscosity and decreased astringency and friction profiles. Astringency, mixed-regimen friction profiles, and sensory viscosity also increased for UP versus HTST milk. Increased storage time showed no effect on sensory or mechanical viscosity. However, increased storage time generally resulted in increased friction profiles and astringency. Sodium dodecyl sulfate PAGE and confocal laser scanning microscopy showed increased denatured whey protein in UP milk compared with HTST milk. The aggregates or network formed by these proteins and casein micelles likely caused the increase in viscosity and friction profiles during storage. Homogenization pressure did not significantly affect friction behaviors, mechanical viscosity, or astringency; however
Energy Technology Data Exchange (ETDEWEB)
Sugino, Kazuteru [Power Reactor and Nuclear Fuel Development Corp., Oarai, Ibaraki (Japan). Oarai Engineering Center; Iwai, Takehiko
1998-07-01
A standard database for FBR core nuclear design is under development in order to improve the accuracy of FBR design calculations. As a part of the development, we investigated an improved treatment of double-heterogeneity and a method to calculate homogenized control rod cross sections in a commercial reactor geometry, to improve the analytical accuracy of commercial FBR core characteristics. As an improved treatment of double-heterogeneity, we derived a new method (the direct method) and compared both this and the conventional method with continuous-energy Monte Carlo calculations. In addition, we investigated the applicability of the reaction rate ratio preservation method as an advanced method to calculate homogenized control rod cross sections. The present studies gave the following results: (1) Improved treatment of double-heterogeneity: for criticality, the conventional method showed good agreement with the Monte Carlo result within one standard deviation, and the direct method was consistent with the conventional one. Preliminary evaluation of effects on core characteristics other than criticality showed that the effect of double-heterogeneity on sodium void reactivity (coolant reactivity) was large. (2) Advanced method to calculate homogenized control rod cross sections: the control rod worths from the reaction rate ratio preservation method agreed with those produced by calculations with the control rod heterogeneity included in the core geometry; in the Monju control rod worth analysis, the present method overestimated control rod worths by 1 to 2% compared with the conventional method, but these differences were caused by the more accurate model used in the present method, and it is considered that this method is more reliable than the conventional one. The two methods investigated in this study can be directly applied to core characteristics other than criticality or control rod worth. Thus it is concluded that these methods will
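Homogenized cross sections of the kind discussed above are conventionally defined by flux-volume weighting, which preserves the region-integrated reaction rate. The report's reaction rate ratio preservation method is more elaborate, but the basic weighting can be sketched as:

```python
def homogenize(sigmas, fluxes, volumes):
    """Flux-volume-weighted cross section: preserves the total reaction
    rate sum(sigma*phi*V) when paired with the flux-volume-averaged flux."""
    rate = sum(s * f * v for s, f, v in zip(sigmas, fluxes, volumes))
    norm = sum(f * v for f, v in zip(fluxes, volumes))
    return rate / norm

# two regions, e.g. coolant and absorber (illustrative values)
sig = homogenize(sigmas=[0.10, 0.50], fluxes=[1.0, 0.2], volumes=[2.0, 1.0])
print(sig)  # lies between the region cross sections, weighted toward high-flux regions
```

The control rod homogenization problem is harder precisely because the heterogeneous flux used as the weight depends on the rod environment, which is what the reaction rate ratio preservation method addresses.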
Ultrasonic methods in solid state physics
Truell, John; Elbaum, Charles
1969-01-01
Ultrasonic Methods in Solid State Physics is devoted to studies of the energy loss and velocity of ultrasonic waves which have a bearing on present-day problems in solid-state physics. The discussion is particularly concerned with the type of investigation that can be carried out in the megacycle range of frequencies, from a few megacycles to kilomegacycles; it deals almost entirely with short-duration pulse methods rather than with standing-wave methods. The book opens with a chapter on the classical treatment of wave propagation in solids. This is followed by separate chapters on methods and techniques
Energy Technology Data Exchange (ETDEWEB)
Ortiz-Ramírez, Pablo, E-mail: rapeitor@ug.uchile.cl; Ruiz, Andrés [Departamento de Física, Facultad de Ciencias, Universidad de Chile (Chile)
2016-07-07
Monte Carlo simulation of gamma spectroscopy systems is common practice these days, the most popular software packages being the MCNP and Geant4 codes. The intrinsic spatial efficiency method is a general and absolute method to determine the absolute efficiency of a spectroscopy system for any extended source, but it has only been demonstrated experimentally for cylindrical sources. Given the difficulty of preparing sources of arbitrary shape, the simplest way to test it is by simulating the spectroscopy system and the source. In this work we present the validation of the intrinsic spatial efficiency method for sources with different geometries and for photons with an energy of 661.65 keV. The simulation does not consider matrix effects (self-attenuation), so these results are only preliminary. The MC simulation is carried out using the FLUKA code, and the absolute efficiency of the detector is determined using two methods: the statistical count of the full energy peak (FEP) area (the traditional method) and the intrinsic spatial efficiency method. The results show total agreement between the absolute efficiencies determined by the two methods. The relative bias is less than 1% in all cases.
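The "traditional method" referred to is the net full-energy-peak area divided by the number of photons emitted during the measurement. A sketch with illustrative numbers (the 85.1% branching is the standard value for the 661.7 keV Cs-137 line; the counts and activity are hypothetical):

```python
def fep_efficiency(net_counts, activity_bq, branching, live_time_s):
    """Absolute full-energy-peak efficiency from the net peak area:
    counts in the peak / photons emitted during the live time."""
    emitted = activity_bq * branching * live_time_s
    return net_counts / emitted

eff = fep_efficiency(net_counts=8510,       # hypothetical net peak area
                     activity_bq=1000.0,    # hypothetical source activity
                     branching=0.851,       # Cs-137, 661.7 keV gamma
                     live_time_s=100.0)
print(eff)
```

The relative bias quoted in the abstract is then simply |eff_intrinsic - eff_traditional| / eff_traditional for each geometry.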
The Effect of Homogenization Pressures on Extraction of Avocado Oil by Wet Method
Basuni Hamzah
2013-01-01
Avocado trees in Indonesia are usually planted by rural people on a small scale. In modern, large-scale industry, especially in companies with large avocado farms, avocado oil is mostly extracted by vacuum drying at low temperature. In rural areas, however, avocado trees are spread out in small numbers, so an alternative method of avocado oil extraction is needed. In this experiment, a wet method of avocado oil extraction was applied, similar to the traditional extraction of coconut o...
Mathematical methods for physical and analytical chemistry
Goodson, David Z
2011-01-01
Mathematical Methods for Physical and Analytical Chemistry presents mathematical and statistical methods to students of chemistry at the intermediate, post-calculus level. The content includes a review of general calculus; a review of numerical techniques often omitted from calculus courses, such as cubic splines and Newton's method; a detailed treatment of statistical methods for experimental data analysis; complex numbers; extrapolation; linear algebra; and differential equations. With numerous example problems and helpful anecdotes, this text gives chemistry students the mathematical
Measuring the homogeneity of Bi(2223)/Ag tapes by four-probe method and a Hall probe array
International Nuclear Information System (INIS)
Kovac, P.
1999-01-01
The nature of the BSCCO compound and the application of the powder-in-tube technique usually lead to non-uniform quality across and/or along the ceramic fibres, and thus to variations in the critical current and its irregular distribution in the Bi(2223)/Ag tape. Therefore, the gliding four-probe method and contactless field-monitoring measurements have been used for homogeneity studies. The gliding potential contacts moved along the tape surface, and a sensitive system based on an integrated Hall probe array containing 16 or 19 in-line probes, supported by PC-compatible electronics and software, allowed us to compare contact and contactless measurements at any element of the Bi(2223)/Ag sample. The results of both methods show very good correlation and the possibility of using a sensitive Hall probe array for monitoring the final quality of Bi(2223)/Ag tapes. (author)
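Critical currents extracted from such four-probe scans are conventionally defined at an electric-field criterion through the superconductor power law E = Ec(I/Ic)^n. The segment parameters below are illustrative, not measured values from the paper:

```python
def current_at_field(e, ec, ic, n):
    """Invert the power law E = Ec*(I/Ic)**n for the current producing field e."""
    return ic * (e / ec) ** (1.0 / n)

ec = 1e-6        # V/cm: the standard 1 uV/cm electric-field criterion
ic, n = 40.0, 20 # A and power-law index for one hypothetical tape segment

# at E = Ec the definition returns Ic itself
assert abs(current_at_field(ec, ec, ic, n) - ic) < 1e-9
print(current_at_field(4e-6, ec, ic, n))  # above the criterion, I exceeds Ic
```

Repeating the extraction segment by segment along the tape yields the Ic(x) profile whose variations the gliding contacts and the Hall probe array both resolve.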
Directory of Open Access Journals (Sweden)
Changwei Zhou
2017-02-01
Full Text Available In this article, the analytical homogenization method of periodic discrete media (HPDM) and the numerical condensed wave finite element method (CWFEM) are employed to study the longitudinal and transverse vibrations of framed structures. The valid frequency range of the HPDM is re-evaluated using the wave propagation features identified by the CWFEM. The relative error of the wavenumber by the HPDM compared to that by the CWFEM is illustrated as a function of frequency and scale ratio. A parametric study on the thickness of the structure is carried out, and the dispersion relation and the relative error are given for three different thicknesses. The dynamics of a finite structure, such as natural frequencies and forced response, are also investigated using the HPDM and the CWFEM.
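How a homogenized dispersion relation degrades with frequency can be illustrated on the simplest periodic medium, a spring-mass chain with exact dispersion w = (2c/a)sin(ka/2). The chain stands in for the framed structures of the paper; comparing the long-wavelength wavenumber k = w/c against the exact one mimics how the HPDM's valid frequency range is judged against a reference method:

```python
import math

def k_exact(w, c, a):
    """Exact wavenumber of a monatomic spring-mass chain, w = (2c/a)*sin(k*a/2)."""
    return (2.0 / a) * math.asin(w * a / (2.0 * c))

def k_hom(w, c):
    """Homogenized (long-wavelength) wavenumber, k = w/c."""
    return w / c

c, a = 1.0, 1.0  # wave speed and cell length (illustrative)
for w in (0.1, 0.5, 1.0):
    err = abs(k_hom(w, c) - k_exact(w, c, a)) / k_exact(w, c, a)
    print(w, err)  # relative error grows with frequency (i.e. with scale ratio)
```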
Advanced analysis methods in particle physics
Energy Technology Data Exchange (ETDEWEB)
Bhat, Pushpalatha C.; /Fermilab
2010-10-01
Each generation of high energy physics experiments is grander in scale than the previous - more powerful, more complex and more demanding in terms of data handling and analysis. The spectacular performance of the Tevatron and the beginning of operations of the Large Hadron Collider, have placed us at the threshold of a new era in particle physics. The discovery of the Higgs boson or another agent of electroweak symmetry breaking and evidence of new physics may be just around the corner. The greatest challenge in these pursuits is to extract the extremely rare signals, if any, from huge backgrounds arising from known physics processes. The use of advanced analysis techniques is crucial in achieving this goal. In this review, I discuss the concepts of optimal analysis, some important advanced analysis methods and a few examples. The judicious use of these advanced methods should enable new discoveries and produce results with better precision, robustness and clarity.
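A first-pass version of "extracting rare signals from huge backgrounds" is the counting-experiment significance s/sqrt(s+b), the figure of merit that any selection or multivariate method tries to improve. The event counts below are illustrative:

```python
import math

def significance(s, b):
    """Naive expected significance of s signal events over background b."""
    return s / math.sqrt(s + b)

print(significance(10.0, 90.0))  # small signal on large background
print(significance(50.0, 50.0))  # a better selection: fewer background events kept
```

Advanced analysis methods (likelihoods, neural networks, boosted decision trees) amount to finding selections or discriminants that maximize exactly this kind of figure of merit, refined with systematic uncertainties.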
Application of cybernetic methods in physics
Energy Technology Data Exchange (ETDEWEB)
Fradkov, Aleksandr L [Institute of Problems of Mechanical Engineering, Russian Academy of Sciences, St.-Petersburg (Russian Federation)
2005-02-28
Basic aspects of the subject and methodology for a new and rapidly developing area of research that has emerged at the intersection of physics and control theory (cybernetics) and emphasizes the application of cybernetic methods to the study of physical systems are reviewed. Speed-gradient and Hamiltonian solutions for energy control problems in conservative and dissipative systems are presented. Application examples such as the Kapitza pendulum, controlled overcoming of a potential barrier, and controlling coupled oscillators and molecular systems are presented. A speed-gradient approach to modeling the dynamics of physical systems is discussed. (reviews of topical problems)
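The speed-gradient energy control mentioned in the review has a compact illustration for a pendulum: since the control power is Hdot = u*theta_dot, the speed-gradient law u = -gamma*(H - H*)*theta_dot drives the energy H toward the goal value H*. The parameters and the simple integrator below are illustrative, not taken from the review:

```python
import math

m = l = g = 1.0
Hstar = 2 * m * g * l        # goal: the upright (separatrix) energy
gamma, dt = 5.0, 1e-3

def energy(th, w):
    """Pendulum energy: kinetic plus potential above the hanging position."""
    return 0.5 * m * l**2 * w**2 + m * g * l * (1 - math.cos(th))

th, w = 0.1, 0.0             # start near the bottom, almost at rest
for _ in range(20000):       # 20 s of semi-implicit Euler integration
    u = -gamma * (energy(th, w) - Hstar) * w   # speed-gradient control torque
    w += dt * (-(g / l) * math.sin(th) + u / (m * l**2))
    th += dt * w

print(abs(energy(th, w) - Hstar))  # energy error pumped close to zero
```

This is the mechanism behind the swing-up of the Kapitza-type pendulum examples: the controller injects or removes energy depending on the sign of H - H*, without tracking any particular trajectory.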
Advanced Analysis Methods in High Energy Physics
Energy Technology Data Exchange (ETDEWEB)
Pushpalatha C. Bhat
2001-10-03
During the coming decade, high energy physics experiments at the Fermilab Tevatron and around the globe will use very sophisticated equipment to record unprecedented amounts of data in the hope of making major discoveries that may unravel some of Nature's deepest mysteries. The discovery of the Higgs boson and signals of new physics may be around the corner. The use of advanced analysis techniques will be crucial in achieving these goals. The author discusses some of the novel methods of analysis that could prove to be particularly valuable for finding evidence of any new physics, for improving precision measurements and for exploring parameter spaces of theoretical models.
Homogenization of Mammalian Cells.
de Araújo, Mariana E G; Lamberti, Giorgia; Huber, Lukas A
2015-11-02
Homogenization is the name given to the methodological steps necessary for releasing organelles and other cellular constituents as a free suspension of intact individual components. Most homogenization procedures used for mammalian cells (e.g., cavitation pump and Dounce homogenizer) rely on mechanical force to break the plasma membrane and may be supplemented with osmotic or temperature alterations to facilitate membrane disruption. In this protocol, we describe a syringe-based homogenization method that does not require specialized equipment, is easy to handle, and gives reproducible results. The method may be adapted for cells that require hypotonic shock before homogenization. We routinely use it as part of our workflow to isolate endocytic organelles from mammalian cells. © 2015 Cold Spring Harbor Laboratory Press.
International Nuclear Information System (INIS)
Sanchez, R.; Ragusa, J.; Santandrea, S.
2004-01-01
The problem of determining a homogeneous reflector that preserves a set of prescribed albedos is considered. Duality is used for a direct estimation of the derivatives needed in the iterative calculation of the optimal homogeneous cross sections. The calculation is based on the preservation of collapsed multigroup albedos obtained from detailed reference calculations and depends on the low-order operator used for core calculations. In this work we analyze diffusion and transport as low-order operators and argue that the P0 transfers are the best choice for the unknown cross sections to be adjusted. Numerical results illustrate the new approach for SPN core calculations. (Author)
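The one-group diffusion case can be sketched numerically. The illustration below is not the authors' duality-based scheme: it simply inverts the standard one-group diffusion albedo of a semi-infinite homogeneous reflector, β = (1 − 2Dκ)/(1 + 2Dκ) with κ = √(Σa/D), by bisection to find the absorption cross section that preserves a prescribed albedo. The diffusion coefficient and the target albedo are assumed values.

```python
import math

def albedo(sigma_a, D):
    """One-group diffusion albedo of a semi-infinite homogeneous reflector."""
    kappa = math.sqrt(sigma_a / D)          # inverse diffusion length, 1/cm
    return (1.0 - 2.0 * D * kappa) / (1.0 + 2.0 * D * kappa)

def fit_sigma_a(beta_target, D, lo=1e-6, hi=1.0, tol=1e-10):
    """Bisection on Sigma_a so the homogeneous albedo matches beta_target."""
    # albedo decreases monotonically with Sigma_a, so bisection is safe
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if albedo(mid, D) > beta_target:
            lo = mid        # still too reflective -> needs more absorption
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

D = 1.2          # cm, assumed diffusion coefficient of the reflector
target = 0.85    # prescribed reference albedo (assumed)
sigma_a = fit_sigma_a(target, D)
```

In a multigroup setting the same idea becomes a vector-valued fixed-point problem, which is where the derivative estimates via duality pay off.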
Directory of Open Access Journals (Sweden)
R Dinarvand
2008-09-01
Background: The inherent shortcomings of conventional drug delivery systems containing estrogens and the potential of nanoparticles (NPs) have offered tremendous scope for investigation. Although polymeric NPs have been used as drug carriers for many active agents, the choice of an appropriate polymer and method of NP preparation to overcome different challenges is very important. Materials and methods: Poly(lactide-co-glycolide) (PLGA) NPs containing estradiol valerate were prepared by the modified spontaneous emulsification solvent diffusion method. Several parameters, including the drug/polymer ratio in the range of 2.5-10%, poly(vinyl alcohol) (PVA) at concentrations of 0-4% as stabilizer, and the internal phase volume and composition, were examined to optimize the formulation. The size distribution and morphology of the NPs, the encapsulation efficiency and the in vitro release profile in phosphate buffer medium (pH 7.4) over 12 hrs were then investigated. Results: The NPs prepared in this study were spherical with a relatively monodispersed size distribution. By adjustment of the process parameters, the size and the drug encapsulation efficiency as well as the drug release kinetics can be optimally controlled. The mean particle size of the best formulation, with an encapsulation efficiency of 100%, was 175 ± 19 nm; its release profile was best fitted by Higuchi's model, indicating that the release mechanism was mainly controlled by diffusion of the drug into the release medium. Conclusion: According to the size and surface properties of the prepared particles, it may be concluded that they are a good formulation for non-parenteral routes of administration.
International Nuclear Information System (INIS)
Koczorowska, E.
1995-01-01
Radiotracer methods have been worked out for the analysis of physico-chemical phenomena. The analyses were carried out on rubber mixes under technological conditions and played a deciding role in the quality of the manufactured products. 35S and 65Zn were used as radiotracers. Analysis of the influence of different technological parameters on the behaviour of soluble and polymerized sulfur, zinc compounds and other components of the rubber mix makes it possible to formulate conclusions important for the rubber industry and its technology from the viewpoint of product quality
Nanophase materials produced by physical methods
International Nuclear Information System (INIS)
Noda, Shoji
1992-01-01
A nanophase material is mainly characterized by its components' size and large interface area. Some nanophase materials are briefly described. Ion implantation and oblique vapor deposition are taken as methods of producing nanophase materials, and their features are described. These physical methods are non-equilibrium material processes, and it is demonstrated that they can provide unique nanophase materials with little thermodynamic restriction. (author)
Method of producing homogeneous mixed metal oxides and metal-metal oxide mixtures
International Nuclear Information System (INIS)
Quinby, T.C.
1980-01-01
A method for preparing particulate metal or metal oxide of controlled particle size comprises contacting an aqueous solution containing dissolved metal values with excess urea at a temperature sufficient to cause the urea to react with water, providing a molten urea solution containing the metal values; heating the molten urea solution to cause the metal values to precipitate, forming a mixture containing precipitated metal values; and heating this mixture to evaporate volatile material, leaving a dry powder containing said metal values. The dry powder can be calcined to provide particulate metal oxide or reduced to provide particulate metal. Oxide mixtures are obtained when the aqueous solution contains values of more than one metal. Homogeneous metal-metal oxide mixtures for preparing cermets can be prepared by selectively reducing at least one of the metal oxides. (auth)
Numerical methods in physical and economic sciences
International Nuclear Information System (INIS)
Lions, J.L.; Marchouk, G.I.
1974-01-01
This book is the first of a series to be published simultaneously in French and Russian. It presents results obtained in the framework of a French-Soviet scientific collaboration agreement in the field of information processing. In the first part, iterative methods for solving linear systems are studied, and new methods are compared with already known ones. Iterative methods for the minimization of quadratic functionals are then studied. In the second part, optimization problems with one or many criteria, arising from problems in physics and economics, are considered, and splitting and decentralizing methods are systematically studied [fr]
International Nuclear Information System (INIS)
Patra, A.; Saha Ray, S.
2014-01-01
Highlights: • A stationary transport equation has been solved using the Haar wavelet collocation method. • This paper intends to demonstrate the great utility of Haar wavelets for nuclear science problems. • In the present paper, two-dimensional Haar wavelets are applied. • The proposed method is mathematically very simple, easy and fast. - Abstract: This paper focuses on finding the solution of a stationary transport equation using the Haar wavelet collocation method (HWCM). The Haar wavelet collocation method is efficient and powerful for solving a wide class of linear and nonlinear differential equations, and the Haar wavelet transform has recently gained a reputation as a very effective tool for many practical applications. This paper intends to demonstrate the great utility of Haar wavelets for nuclear science problems. In the present paper, two-dimensional Haar wavelets are applied to the solution of the stationary neutron transport equation in a homogeneous isotropic medium. The proposed method is mathematically very simple, easy and fast. To demonstrate the efficiency of the method, one test problem is discussed. It can be observed from the computational simulation that the numerical approximate solution is much closer to the exact solution.
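The collocation idea can be illustrated on a one-dimensional toy problem (not the paper's two-dimensional transport solver): expand the derivative of the unknown in a Haar basis, integrate the basis to recover the unknown, and collocate the equation u' + u = 0, u(0) = 1 at interval midpoints.

```python
import numpy as np

def haar(i, x):
    """Haar function h_i on [0, 1) (i = 1 is the constant scaling function)."""
    if i == 1:
        return np.ones_like(x)
    j = int(np.floor(np.log2(i - 1)))        # resolution level
    m = 2 ** j
    k = i - 1 - m                            # translation, 0 <= k < m
    a, b, c = k / m, (k + 0.5) / m, (k + 1) / m
    return (np.where((x >= a) & (x < b), 1.0, 0.0)
            - np.where((x >= b) & (x < c), 1.0, 0.0))

def haar_int(i, x):
    """Integral of h_i from 0 to x (needed to recover u from u')."""
    if i == 1:
        return x.copy()
    j = int(np.floor(np.log2(i - 1)))
    m = 2 ** j
    k = i - 1 - m
    a, b, c = k / m, (k + 0.5) / m, (k + 1) / m
    out = np.zeros_like(x)
    lower = (x >= a) & (x < b)
    upper = (x >= b) & (x < c)
    out[lower] = x[lower] - a
    out[upper] = c - x[upper]
    return out

N = 32                                       # number of Haar terms (power of 2)
xc = (np.arange(N) + 0.5) / N                # midpoint collocation points
H = np.array([haar(i + 1, xc) for i in range(N)]).T      # h_i(x_j)
P = np.array([haar_int(i + 1, xc) for i in range(N)]).T  # p_i(x_j)
# u'(x) = sum_i a_i h_i(x); u(x) = 1 + sum_i a_i p_i(x); u' + u = 0 gives:
a = np.linalg.solve(H + P, -np.ones(N))
u = 1.0 + P @ a                              # approximate solution at x_j
```

The exact solution is e^(-x); the midpoint collocation makes the scheme second-order accurate even though the basis is only piecewise constant.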
The method of sections in molecular physics
International Nuclear Information System (INIS)
Natarajan, P.; Zygelman, B.
1993-01-01
In the standard Born-Oppenheimer theory, the nuclear wave function for a bound, rotating diatomic system is described by the Wigner functions. Unlike the spherical harmonics, the Wigner functions exhibit cusp singularities at the poles of the space-fixed coordinate system. These singularities are identical to the ones encountered in the quantum-mechanical treatment of a charged particle under the influence of a magnetic monopole. In the latter case the method of sections was introduced to eliminate the singularities. The method of sections was subsequently introduced into molecular physics. We discuss here, in detail, its properties and the advantages of using this construction in molecular physics
Physical acoustics v.8 principles and methods
Mason, Warren P
1971-01-01
Physical Acoustics: Principles and Methods, Volume VIII discusses a number of themes on physical acoustics that are divided into seven chapters. Chapter 1 describes the principles and applications of a tool for investigating phonons in dielectric crystals, the spin phonon spectrometer. The next chapter discusses the use of ultrasound in investigating Landau quantum oscillations in the presence of a magnetic field and their relation to the strain dependence of the Fermi surface of metals. The third chapter focuses on the ultrasonic measurements that are made by pulsing methods with velo
Geometric Methods in Physics : XXXIII Workshop
Bieliavsky, Pierre; Odzijewicz, Anatol; Schlichenmaier, Martin; Voronov, Theodore
2015-01-01
This book presents a selection of papers based on the XXXIII Białowieża Workshop on Geometric Methods in Physics, 2014. The Białowieża Workshops are among the most important meetings in the field and attract researchers from both mathematics and physics. The articles gathered here are mathematically rigorous and have important physical implications, addressing the application of geometry in classical and quantum physics. Despite their long tradition, the workshops remain at the cutting edge of ongoing research. For the last several years, each Białowieża Workshop has been followed by a School on Geometry and Physics, where advanced lectures for graduate students and young researchers are presented; some of the lectures are reproduced here. The unique atmosphere of the workshop and school is enhanced by its venue, framed by the natural beauty of the Białowieża forest in eastern Poland. The volume will be of interest to researchers and graduate students in mathematical physics, theoretical physics and m...
International Nuclear Information System (INIS)
Cooling, C.M.; Williams, M.M.R.; Nygaard, E.T.; Eaton, M.D.
2013-01-01
Highlights: • A point kinetics model for the Medical Isotope Production Reactor is formulated. • Reactivity insertions are simulated using this model. • Polynomial chaos is used to simulate uncertainty in reactor parameters. • The computational efficiency of polynomial chaos is compared to that of Monte Carlo. -- Abstract: This paper models a conceptual Medical Isotope Production Reactor (MIPR) using a point kinetics model which is used to explore power excursions in the event of a reactivity insertion. The effect of uncertainty of key parameters is modelled using intrusive polynomial chaos. It is found that the system is stable against reactivity insertions and power excursions are all bounded and tend towards a new equilibrium state due to the negative feedbacks inherent in Aqueous Homogeneous Reactors (AHRs). The Polynomial Chaos Expansion (PCE) method is found to be much more computationally efficient than that of Monte Carlo simulation in this application
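The efficiency gap between polynomial chaos and Monte Carlo can be illustrated with a toy, non-intrusive version of the idea. The paper uses intrusive PCE on the full point-kinetics model; here we only propagate a normally distributed decay constant through an exponential, where a handful of Gauss-Hermite nodes matches the analytic mean far more cheaply than brute-force sampling (all parameter values are assumed for illustration).

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, t = 0.1, 0.01, 5.0        # assumed mean/std of the decay constant

# Quantity of interest: N(t)/N0 = exp(-lam*t) with lam ~ Normal(mu, sigma)
exact = np.exp(-mu * t + 0.5 * (sigma * t) ** 2)      # analytic mean

# Polynomial-chaos-style quadrature: 5 Gauss-Hermite nodes suffice
x, w = np.polynomial.hermite.hermgauss(5)
pce = np.sum(w * np.exp(-(mu + np.sqrt(2.0) * sigma * x) * t)) / np.sqrt(np.pi)

# Brute-force Monte Carlo needs many samples for comparable accuracy
mc = np.exp(-rng.normal(mu, sigma, 100_000) * t).mean()
```

Five deterministic model evaluations versus a hundred thousand random ones is the kind of cost ratio the abstract refers to, though the advantage shrinks as the number of uncertain parameters grows.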
Particle identification methods in High Energy Physics
Energy Technology Data Exchange (ETDEWEB)
Va' Vra, J.
2000-01-27
This paper deals with two major particle identification methods: dE/dx and Cherenkov detection. For the first method, the authors systematically compare existing dE/dx data with various predictions available in the literature, such as the Particle Data Group recommendation, and judge the overall consistency. To the author's knowledge, such a comparison has not yet been published for the gaseous detectors used in high-energy physics. As for the second method, there are two major Cherenkov light detection techniques: the threshold and the ring imaging methods. The authors discuss recent trends in these techniques.
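The dE/dx separation that such comparisons quantify can be sketched with a simplified Bethe mean-energy-loss formula. Gas parameters for argon are assumed, and density and shell corrections of the full PDG formula are omitted; the point is only that a kaon, being slower than a pion at the same momentum, loses more energy per unit path.

```python
import math

K   = 0.307075      # MeV mol^-1 cm^2, standard Bethe-formula constant
ME  = 0.511         # electron mass, MeV
Z_A = 0.45          # Z/A for argon (18/39.95), a typical drift-gas choice
I   = 188e-6        # mean excitation energy of argon, MeV

def dedx(p, m):
    """Simplified Bethe mean energy loss (MeV cm^2/g), momentum p and mass m in MeV."""
    beta2 = p**2 / (p**2 + m**2)
    gamma2 = 1.0 / (1.0 - beta2)
    tmax = 2.0 * ME * beta2 * gamma2          # heavy-particle approximation
    arg = 2.0 * ME * beta2 * gamma2 * tmax / I**2
    return K * Z_A / beta2 * (0.5 * math.log(arg) - beta2)

p = 500.0                       # MeV/c, a momentum where the bands are well apart
pi_loss = dedx(p, 139.57)       # charged pion
k_loss  = dedx(p, 493.68)       # charged kaon: lower beta-gamma, higher dE/dx
```

At higher momenta both particles approach the minimum-ionizing plateau and the separation collapses, which is why dE/dx identification is mainly a low-momentum tool.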
Software piracy: Physical and legal protection methods
Energy Technology Data Exchange (ETDEWEB)
Orlandi, E
1991-02-01
Advantages and disadvantages, in terms of reliability and cost, are assessed for different physical and legal methods of protection of computer software, e.g., encryption and key management. The paper notes, however, that no protection system is 100% safe; the best approach is to implement a sufficient amount of protection such as to make piracy uneconomical relative to the risks involved.
Resonance interference method in lattice physics code stream
International Nuclear Information System (INIS)
Choi, Sooyoung; Khassenov, Azamat; Lee, Deokjung
2015-01-01
A newly developed resonance interference model has been implemented in the lattice physics code STREAM, and the model shows a significant improvement in computing accurate eigenvalues. Equivalence theory is widely used in production calculations to generate the effective multigroup (MG) cross sections (XS) for commercial reactors. Although many methods have been developed to enhance the accuracy of effective XS computations, current resonance treatment methods still lack a clear resonance interference model. The conventional resonance interference model simply adds the absorption XSs of the resonance isotopes to the background XS. However, the conventional models show non-negligible errors in computing effective XSs and eigenvalues. In this paper, a resonance interference factor (RIF) library method is proposed. This method interpolates the RIFs in a pre-generated RIF library and corrects the effective XS, rather than solving the time-consuming slowing-down calculation. The RIF library method is verified for homogeneous and heterogeneous problems. The verification results using the proposed method show significant improvements in the accuracy of treating the interference effect. (author)
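The table-lookup step at the heart of the RIF library idea can be sketched as follows. The library grid and factor values here are entirely hypothetical, invented for illustration; a real library would be tabulated per isotope pair, energy group, temperature and background cross section.

```python
import numpy as np

# Hypothetical pre-generated RIF library for one resonance-isotope pair:
# interference correction factor tabulated against background XS (barns).
sigma0_grid = np.array([10.0, 50.0, 100.0, 500.0, 1000.0])  # assumed grid
rif_grid    = np.array([0.92, 0.95, 0.97, 0.99, 1.00])      # assumed factors

def corrected_xs(sigma_eff, sigma0):
    """Apply an interpolated resonance interference factor to an effective XS."""
    # simple table lookup instead of a time-consuming slowing-down solve
    rif = np.interp(sigma0, sigma0_grid, rif_grid)
    return sigma_eff * rif

xs = corrected_xs(20.0, 75.0)   # effective XS of 20 b corrected at sigma_0 = 75 b
```

Because the correction is a cheap interpolation, it can be applied inside every resonance-group sweep without changing the cost profile of the lattice calculation.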
International Nuclear Information System (INIS)
Dybczynski, R.; Polkowska-Motrenko, H.; Samczynski, Z.; Szopa, Z.
1994-01-01
The long-run aim of this project has been the preparation of a new biological reference material, tobacco leaves of the 'Virginia' type, and its certification for the content of as great a number of trace elements as possible. Further aims have been the development of suitable methods for checking homogeneity, with special emphasis on the homogeneity of small samples, and a critical analysis of the performance of various analytical techniques
PHYSICAL METHODS IN AGRO-FOOD CHAIN
Directory of Open Access Journals (Sweden)
ANNA ALADJADJIYAN
2009-06-01
Chemical additives (fertilizers and plant protection preparations) are largely used for improving the yield of food production. Their application often causes contamination of raw materials for food production, which can be dangerous for the health of consumers. Alternative methods are being developed and implemented to improve and ensure the safety of on-farm production. The substitution of chemical fertilizers and soil additives with alternative treatment methods, such as irradiation, ultrasound and the use of electromagnetic energy, is discussed. Successful application of physical methods at different stages of food preparation is recommended.
Evaluation of methods to assess physical activity
Leenders, Nicole Y. J. M.
Epidemiological evidence has accumulated demonstrating that the amount of physical activity-related energy expenditure during a week reduces the incidence of cardiovascular disease, diabetes, obesity, and all-cause mortality. To further understand the amount of daily physical activity and related energy expenditure necessary to maintain or improve functional health status and quality of life, instruments that estimate total (TDEE) and physical activity-related energy expenditure (PAEE) under free-living conditions should be determined to be valid and reliable. Without evaluation of the various methods that estimate TDEE and PAEE against the doubly labeled water (DLW) method in females, there will be significant limitations on assessing the efficacy of physical activity interventions on health status in this population. A triaxial accelerometer (Tritrac-R3D (TT)), a uniaxial activity monitor (Computer Science and Applications Inc. (CSA)), a Yamax Digiwalker-500 step counter (YX), heart rate responses (HR method) and a 7-d Physical Activity Recall questionnaire (7-d PAR) were compared with the "criterion method" of DLW during a 7-d period in female adults. DLW-TDEE was underestimated on average by 9, 11 and 15% using the 7-d PAR, HR method and TT. The underestimation of DLW-PAEE by the 7-d PAR was 21%, compared to 47% and 67% for the TT and YX step counter. Approximately 56% of the variance in DLW-PAEE per kg is explained by the registration of body movement with accelerometry. A larger proportion of the variance in DLW-PAEE per kg was explained by jointly incorporating information from the vertical and horizontal movement measured with the CSA and Tritrac-R3D (r² = 0.87). Although only a small amount of variance in DLW-PAEE per kg is explained by the number of steps taken per day, because of its low cost and ease of use, the Yamax step counter is useful in studies promoting daily walking. Thus, studies involving the
Mathematical methods in engineering and physics
Felder, Gary N
2016-01-01
This text is intended for the undergraduate course in math methods, with an audience of physics and engineering majors. As a required course in most departments, the text relies heavily on explained examples, real-world applications and student engagement. Supporting the use of active learning, a strong focus is placed upon physical motivation combined with a versatile coverage of topics that can be used as a reference after students complete the course. Each chapter begins with an overview that includes a list of prerequisite knowledge, a list of skills that will be covered in the chapter, and an outline of the sections. Next comes the motivating exercise, which steps the students through a real-world physical problem that requires the techniques taught in each chapter.
Directory of Open Access Journals (Sweden)
KUDRYAVTSEV Pavel Gennadievich
2015-02-01
The paper deals with the possibilities of using a quasi-homogeneous approximation to describe the properties of dispersed systems. The authors applied the statistical polymer method, based on the consideration of averaged structures of all possible macromolecules of the same weight. Equations were deduced that allow the evaluation of many additive parameters of macromolecules and the systems containing them. The statistical polymer method makes it possible to model branched, cross-linked macromolecules and the systems containing them in equilibrium or non-equilibrium states. Fractal analysis of statistical polymers allows modeling different types of random fractals and other objects examined with the methods of fractal theory. The fractal polymer method can be applied not only to polymers but also to composites, gels, associates in polar liquids and other packed systems. There is also a description of the states of colloid solutions of silica from the point of view of statistical physics. This approach is based on the idea that a colloid solution of silica dioxide – a silica sol – consists of an enormous number of interacting particles which are in constant motion. The paper is devoted to the study of an idealized system of colliding but non-interacting sol particles. The behavior of the silica sol was analyzed according to the Maxwell-Boltzmann distribution, and the free path length was calculated. Using these data, the number of particles able to overcome the potential barrier in a collision was calculated. Different approaches to modeling the kinetics of the sol-gel transition were studied.
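The barrier-crossing estimate from the Maxwell-Boltzmann distribution can be sketched as follows. The barrier height (in units of kT) is an assumed illustrative value; the numerical integral of the Maxwell-Boltzmann energy distribution is checked against the standard closed form for the fraction of particles with energy above the barrier.

```python
import math

def frac_above(x, n=200_000, upper=60.0):
    """Fraction of Maxwell-Boltzmann particles with E/kT above x (midpoint rule)."""
    # energy distribution: f(e) = (2/sqrt(pi)) * sqrt(e) * exp(-e), e = E/kT
    h = (upper - x) / n
    s = 0.0
    for i in range(n):
        e = x + (i + 0.5) * h
        s += math.sqrt(e) * math.exp(-e)
    return 2.0 / math.sqrt(math.pi) * s * h

def frac_above_exact(x):
    """Closed form: erfc(sqrt(x)) + 2*sqrt(x/pi)*exp(-x)."""
    return math.erfc(math.sqrt(x)) + 2.0 * math.sqrt(x / math.pi) * math.exp(-x)

barrier = 5.0            # assumed barrier height in units of kT
f_num = frac_above(barrier)
```

Only a percent-level fraction of collisions clears a 5 kT barrier, which is why sol-gel kinetics is so sensitive to temperature and to anything that lowers the barrier.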
Monte Carlo method applied to medical physics
International Nuclear Information System (INIS)
Oliveira, C.; Goncalves, I.F.; Chaves, A.; Lopes, M.C.; Teixeira, N.; Matos, B.; Goncalves, I.C.; Ramalho, A.; Salgado, J.
2000-01-01
The main application of the Monte Carlo method in medical physics is dose calculation. This paper shows some results of two dose calculation studies and two other applications: optimization of a neutron field for boron neutron capture therapy and optimization of a beam-tube filter for several purposes. The computation time required for Monte Carlo calculations - the main barrier to its intensive use - is being overcome by faster and cheaper computers. (author)
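The core of any such Monte Carlo transport calculation is free-flight sampling. A minimal sketch (a monoenergetic pencil beam through a purely absorbing slab, with an assumed attenuation coefficient) estimates the transmitted fraction, which for this trivial geometry is known analytically as exp(−μt):

```python
import math
import random

random.seed(1)
mu = 0.2        # cm^-1, assumed linear attenuation coefficient (illustrative)
slab = 10.0     # cm, slab thickness
n = 200_000     # photon histories

# Sample exponential free paths s = -ln(xi)/mu and count photons that
# cross the slab without interacting.
transmitted = sum(
    1 for _ in range(n)
    if -math.log(1.0 - random.random()) / mu > slab
)
estimate = transmitted / n      # MC estimate of the transmission exp(-mu*slab)
```

Real dose codes add scattering, energy loss and tallying on a voxel grid, but the statistical convergence - error shrinking like 1/sqrt(n) - is exactly what this toy exhibits.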
The generator coordinate method in nuclear physics
International Nuclear Information System (INIS)
Giraud, B.G.
1981-01-01
The generator coordinate method is introduced as a physical description of an N-body system in a subspace of a reduced number of degrees of freedom. Special attention is paid to the identification of these special, 'collective' degrees of freedom. It is shown in particular that the method has close links with the Born-Oppenheimer approximation, and also that considerations of differential geometry are useful in the theory. A set of applications is discussed, in particular the case of nuclear collisions. (Author) [pt]
Geometric Methods in Physics : XXXII Workshop
Bieliavsky, Pierre; Odesskii, Alexander; Odzijewicz, Anatol; Schlichenmaier, Martin; Voronov, Theodore; Geometric Methods in Physics
2014-01-01
The Białowieża Workshops on Geometric Methods in Physics, which are hosted in the unique setting of the Białowieża natural forest in Poland, are among the most important meetings in the field. Every year some 80 to 100 participants from both the mathematics and physics world join to discuss new developments and to exchange ideas. The current volume was produced on the occasion of the 32nd meeting in 2013. It is now becoming a tradition that the Workshop is followed by a School on Geometry and Physics, which consists of advanced lectures for graduate students and young researchers. Selected speakers at the 2013 Workshop were asked to contribute to this book, and their work was supplemented by additional review articles. The selection shows that, despite its now long tradition, the workshop remains at the cutting edge of research. The 2013 Workshop also celebrated the 75th birthday of Daniel Sternheimer, and on this occasion the discussion mainly focused on his contributions to mathematical physics such as ...
Montes de Oca-Ávalos, J M; Candal, R J; Herrera, M L
2017-10-01
Nanoemulsions stabilized by sodium caseinate (NaCas) were prepared using a combination of high-energy homogenization and evaporative ripening methods. The effects of protein concentration and sucrose addition on physical properties were analyzed by dynamic light scattering (DLS), Turbiscan analysis, confocal laser scanning microscopy (CLSM) and small-angle X-ray scattering (SAXS). Droplet sizes were smaller (~100 nm in diameter) than those obtained by other methods (200 to 2000 nm in diameter), and the stability behavior was also different: these emulsions were not destabilized by creaming. As the droplets were so small, gravitational forces were negligible; when the emulsions did destabilize, the main mechanism was flocculation. Stability of the nanoemulsions increased with increasing protein concentration. Nanoemulsions with 3 or 4 wt% NaCas were slightly turbid systems that remained stable for at least two months. According to the SAXS and Turbiscan results, aggregates remained in the nano range, showing little tendency to aggregation; in those systems, interactive forces were weak due to the small diameter of the flocs. Copyright © 2017 Elsevier Ltd. All rights reserved.
High homogeneity powder of Tl-Ba-Ca-Cu-O (2223) prepared by freeze-drying method
International Nuclear Information System (INIS)
Al-Shakarchi, Emad Kh.; Toma, Ziad A.
1999-01-01
Homogeneous high-temperature superconductor ceramic powder of Tl-Ba-Ca-Cu-O with transition temperature Tc = 123 K has been successfully prepared from a mixture of nitrate salts [TlNO3, Ba(NO3)2, Ca(NO3)2·4H2O and Cu(NO3)2·3H2O] using the freeze-drying method. The freeze-dryer used in this work was designed locally in our laboratory. This technique is considered a better way to obtain fine ceramic powders, relying on the freezing of droplets in liquid nitrogen. SEM pictures showed grain sizes of about 0.8 μm. We conclude that a high sintering temperature applied for a long time [120 hrs] to the powders prepared by this technique will increase the interdiffusion between the grains, which decreased the density of the sample; this may give better results than those obtained in previous works
Pratt, D. T.
1984-01-01
Conventional algorithms for the numerical integration of ordinary differential equations (ODEs) are based on the use of polynomial functions as interpolants. However, the exact solutions of stiff ODEs behave like decaying exponential functions, which are poorly approximated by polynomials. An obvious choice of interpolant is the exponential functions themselves, or their low-order diagonal Padé (rational function) approximants. A number of explicit, A-stable integration algorithms were derived from the use of a three-parameter exponential function as interpolant, and their relationship to low-order polynomial-based and rational-function-based implicit and explicit methods was shown by examining their low-order diagonal Padé approximants. A robust implicit formula was derived by exponentially fitting the trapezoidal rule. Application of these algorithms to the integration of the ODEs governing homogeneous, gas-phase chemical kinetics was demonstrated in a developmental code, CREK1D, which compares favorably with the Gear-Hindmarsh code LSODE in spite of the use of a primitive step-size control strategy.
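The motivation for exponential interpolants shows up already on the scalar test problem y' = −λy. With λh well beyond the explicit-Euler stability limit, the polynomial-based step diverges while the exponential step reproduces the decaying solution. This is a minimal illustration of the principle, not the CREK1D algorithm itself:

```python
import math

# Stiff linear test problem y' = -lam*y, y(0) = 1, taken with a large step.
lam, h, steps = 50.0, 0.1, 10        # lam*h = 5, far beyond Euler's limit of 2
y_euler = 1.0                        # polynomial (linear) interpolant
y_expo = 1.0                         # exponential interpolant
for _ in range(steps):
    y_euler += h * (-lam * y_euler)  # explicit Euler: amplification 1 - 5 = -4
    y_expo *= math.exp(-lam * h)     # exact propagator for the linear problem

exact = math.exp(-lam * h * steps)
```

For nonlinear kinetics λ becomes a locally fitted parameter, which is where the three-parameter exponential interpolant of the abstract comes in.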
Homogenization approach in engineering
International Nuclear Information System (INIS)
Babuska, I.
1975-10-01
Homogenization is an approach which studies the macrobehavior of a medium through its microproperties. Problems with a microstructure play an essential role in fields such as mechanics, chemistry, physics, and reactor engineering. Attention is concentrated on a simple specific model problem to illustrate results and problems typical of the homogenization approach. Only the diffusion problem is treated here, but some statements are made about the elasticity of composite materials. The differential equation is solved for linear cases with and without boundaries and for the nonlinear case. 3 figures, 1 table
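For the one-dimensional diffusion problem the homogenized result is classical: a layered medium with coefficients in series conducts with the harmonic mean of the layer coefficients, since the steady flux through −(D u')' = 0 with u(0) = 0, u(1) = 1 is J = 1/∫dx/D. A quick numerical check, with layer values assumed for illustration:

```python
import numpy as np

# Periodic two-phase medium: D(x) alternates between D1 and D2 in equal layers.
D1, D2 = 1.0, 10.0
n = 10_000                                  # fine grid cells over [0, 1]
x = (np.arange(n) + 0.5) / n
D = np.where((x * 20).astype(int) % 2 == 0, D1, D2)   # 10 periods, 20 layers

# Uniform steady flux through the fine structure: J = 1 / integral(1/D).
J_fine = 1.0 / np.mean(1.0 / D)
D_harm = 2.0 / (1.0 / D1 + 1.0 / D2)        # harmonic mean of the two phases
```

Note that the naive arithmetic mean (5.5 here) would overestimate the effective coefficient threefold; getting the correct averaging rule is precisely what homogenization theory delivers in higher dimensions, where no simple closed form exists.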
New trends in reactor physics design methods
International Nuclear Information System (INIS)
Jagannathan, V.
1993-01-01
Reactor physics design methods are aimed at the safe and efficient management of nuclear materials in a reactor core. The design methodologies require a high level of integration of the calculational modules of many key areas, such as neutronics, thermal hydraulics and radiation transport, in order to follow different 3-D phenomena under normal and transient operating conditions. The evolution of computer hardware technology has been far more rapid than software development and has rendered such integration a meaningful and realizable proposition. The aim of this paper is to assess the state of the art of the physics design codes used in Indian thermal power reactor applications with respect to meeting design, operational and safety requirements. (author). 50 refs
Mechanical Homogenization Increases Bacterial Homogeneity in Sputum
Stokell, Joshua R.; Khan, Ammad
2014-01-01
Sputum obtained from patients with cystic fibrosis (CF) is highly viscous and often heterogeneous in bacterial distribution. Adding dithiothreitol (DTT) is the standard method for liquefaction prior to processing sputum for molecular detection assays. To determine if DTT treatment homogenizes the bacterial distribution within sputum, we measured the difference in mean total bacterial abundance and abundance of Burkholderia multivorans between aliquots of DTT-treated sputum samples with and without a mechanical homogenization (MH) step using a high-speed dispersing element. Additionally, we measured the effect of MH on bacterial abundance. We found a significant difference between the mean bacterial abundances in aliquots that were subjected to only DTT treatment and those of the aliquots which included an MH step (all bacteria, P = 0.04; B. multivorans, P = 0.05). There was no significant effect of MH on bacterial abundance in sputum. Although our results are from a single CF patient, they indicate that mechanical homogenization increases the homogeneity of bacteria in sputum. PMID:24759710
Benchmarking monthly homogenization algorithms
Venema, V. K. C.; Mestre, O.; Aguilar, E.; Auer, I.; Guijarro, J. A.; Domonkos, P.; Vertacnik, G.; Szentimrey, T.; Stepanek, P.; Zahradnicek, P.; Viarre, J.; Müller-Westermeier, G.; Lakatos, M.; Williams, C. N.; Menne, M.; Lindau, R.; Rasol, D.; Rustemeier, E.; Kolokythas, K.; Marinova, T.; Andresen, L.; Acquaotta, F.; Fratianni, S.; Cheval, S.; Klancar, M.; Brunetti, M.; Gruber, C.; Prohom Duran, M.; Likso, T.; Esteban, P.; Brandsma, T.
2011-08-01
The COST (European Cooperation in Science and Technology) Action ES0601: Advances in homogenization methods of climate series: an integrated approach (HOME) has executed a blind intercomparison and validation study for monthly homogenization algorithms. Time series of monthly temperature and precipitation were evaluated because of their importance for climate studies and because they represent two important types of statistics (additive and multiplicative). The algorithms were validated against a realistic benchmark dataset. The benchmark contains real inhomogeneous data as well as simulated data with inserted inhomogeneities. Random break-type inhomogeneities were added to the simulated datasets modeled as a Poisson process with normally distributed breakpoint sizes. To approximate real world conditions, breaks were introduced that occur simultaneously in multiple station series within a simulated network of station data. The simulated time series also contained outliers, missing data periods and local station trends. Further, a stochastic nonlinear global (network-wide) trend was added. Participants provided 25 separate homogenized contributions as part of the blind study as well as 22 additional solutions submitted after the details of the imposed inhomogeneities were revealed. These homogenized datasets were assessed by a number of performance metrics including (i) the centered root mean square error relative to the true homogeneous value at various averaging scales, (ii) the error in linear trend estimates and (iii) traditional contingency skill scores. The metrics were computed both using the individual station series as well as the network average regional series. The performance of the contributions depends significantly on the error metric considered. Contingency scores by themselves are not very informative. Although relative homogenization algorithms typically improve the homogeneity of temperature data, only the best ones improve precipitation data
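The break-detection task underlying these homogenization algorithms can be illustrated with a toy mean-shift detector on a simulated series with one inserted break. Real algorithms such as SNHT use properly normalized test statistics and difference series against reference stations; this sketch only locates the most likely breakpoint by maximizing the mean difference between the two segments (all simulation parameters are assumed):

```python
import numpy as np

rng = np.random.default_rng(42)
n, true_break, shift = 120, 60, 1.5      # 120 "months", break after index 60
series = rng.normal(0.0, 0.5, n)         # homogeneous climate-like noise
series[true_break:] += shift             # inserted break-type inhomogeneity

def detect_break(y):
    """Most likely breakpoint: maximize |mean difference| between segments."""
    best_k, best_stat = None, -1.0
    for k in range(10, len(y) - 10):     # enforce a minimum segment length
        stat = abs(y[:k].mean() - y[k:].mean())
        if stat > best_stat:
            best_k, best_stat = k, stat
    return best_k

k_hat = detect_break(series)
```

The benchmark's harder cases - multiple simultaneous breaks across a network, trends, and outliers - are exactly what separates the contributions ranked in the study.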
Surface physics theoretical models and experimental methods
Mamonova, Marina V; Prudnikova, I A
2016-01-01
The demands of production, such as thin films in microelectronics, rely on consideration of factors influencing the interaction of dissimilar materials that make contact with their surfaces. Bond formation between surface layers of dissimilar condensed solids, termed adhesion, depends on the nature of the contacting bodies. Thus, it is necessary to determine the characteristics of adhesion interaction of different materials from both applied and fundamental perspectives of surface phenomena. Given the difficulty in obtaining reliable experimental values of the adhesion strength of coatings, the theoretical approach to determining adhesion characteristics becomes more important. Surface Physics: Theoretical Models and Experimental Methods presents straightforward and efficient approaches and methods developed by the authors that enable the calculation of surface and adhesion characteristics for a wide range of materials: metals, alloys, semiconductors, and complex compounds. The authors compare results from the ...
BLUES function method in computational physics
Indekeu, Joseph O.; Müller-Nedebock, Kristian K.
2018-04-01
We introduce a computational method in physics that goes ‘beyond linear use of equation superposition’ (BLUES). A BLUES function is defined as a solution of a nonlinear differential equation (DE) with a delta source that is at the same time a Green’s function for a related linear DE. For an arbitrary source, the BLUES function can be used to construct an exact solution to the nonlinear DE with a different, but related source. Alternatively, the BLUES function can be used to construct an approximate piecewise analytical solution to the nonlinear DE with an arbitrary source. For this alternative use the related linear DE need not be known. The method is illustrated in a few examples using analytical calculations and numerical computations. Areas for further applications are suggested.
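The Green's-function ingredient can be illustrated numerically for a simple linear DE, u' + u = f with causal Green's function G(t) = exp(-t)·θ(t) (a sketch of the linear building block only; the full BLUES construction for the related nonlinear DE is not reproduced here):

```python
import numpy as np

def green_solve(f, t):
    """Solve u' + u = f on a uniform grid t (starting at 0) by discrete
    convolution with the causal Green's function G(t) = exp(-t)."""
    dt = t[1] - t[0]
    G = np.exp(-t)
    # truncate the full convolution to the grid; rectangle-rule weight dt
    return np.convolve(f, G)[: len(t)] * dt
```

For f(t) = exp(-2t) the exact solution is u = exp(-t) - exp(-2t), which the sketch reproduces to within the quadrature error.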
Breakthrough Propulsion Physics Project: Project Management Methods
Millis, Marc G.
2004-01-01
To leap past the limitations of existing propulsion, the NASA Breakthrough Propulsion Physics (BPP) Project seeks further advancements in physics from which new propulsion methods can eventually be derived. Three visionary breakthroughs are sought: (1) propulsion that requires no propellant, (2) propulsion that circumvents existing speed limits, and (3) breakthrough methods of energy production to power such devices. Because these propulsion goals are presumably far from fruition, a special emphasis is to identify credible research that will make measurable progress toward these goals in the near-term. The management techniques to address this challenge are presented, with a special emphasis on the process used to review, prioritize, and select research tasks. This selection process includes these key features: (a) research tasks are constrained to only address the immediate unknowns, curious effects or critical issues, (b) reliability of assertions is more important than the implications of the assertions, which includes the practice where the reviewers judge credibility rather than feasibility, and (c) total scores are obtained by multiplying the criteria scores rather than by adding. Lessons learned and revisions planned are discussed.
Lizotte, Todd E.
2011-03-01
Over the years, technological achievements within the laser medical diagnostic, treatment, and therapy markets have led to ever increasing requirements for greater control of critical laser beam parameters. Increased laser power/energy stabilization, temporal and spatial beam shaping, and flexible laser beam delivery systems with ergonomic focusing or imaging lens systems are sought by leading medical laser system producers. In medical procedures that utilize laser energy, there is a constant emphasis on reducing adverse effects caused by the laser itself or its optical system, but even when these variables are well controlled the medical professional must still deal with the multivariate nature of the human body. Focusing on the variables that can be controlled, such as accurate placement of the laser beam on the surface being treated as well as laser beam shape and uniformity, is critical to minimizing adverse conditions. This paper covers the use of fiber optic beam delivery as a means of defining the beam shape (intensity/power distribution uniformity) at the target plane, as well as the use of fiber delivery as a means to allow more flexible articulation of the laser beam over the surface being treated. The paper presents a new concept of using a square-core fiber beam delivery design utilizing a unique micro lens array (MLA) launch method that improves the overall stability of the system by minimizing the impact of laser instability. The resulting performance of the prototype is presented to demonstrate its stability in comparison with simple lens launch techniques, with an emphasis on homogenization and articulated fiber delivery.
Directory of Open Access Journals (Sweden)
T. Deshler
2017-06-01
≥ 30 hPa, and thus the transfer function can be characterized as a simple ratio of the less sensitive to the more sensitive method. This ratio is 0.96 for both solution concentration and ozonesonde type. The ratios differ at pressures < 30 hPa, such that OZ0.5%/OZ1.0% = 0.90 + 0.041 · log10(p) and OZSciencePump/OZENSCI = 0.764 + 0.133 · log10(p), for p in units of hPa. For the manufacturer-recommended solution concentrations, the dispersion of the ratio (SP-1.0% / EN-0.5%), while significant, is generally within 3% and centered near 1.0, such that no changes are recommended. For stations which have used multiple ozonesonde types with solution concentrations different from the WMO's and manufacturer's recommendations, this work suggests that a reasonably homogeneous data set can be created if the quantitative relationships specified above are applied to the non-standard measurements. This result is illustrated here in an application to the Nairobi data set.
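Applying these transfer functions to homogenize non-standard measurements can be sketched as follows (assuming the relationships quoted above, with the 0.96 ratio at pressures ≥ 30 hPa):

```python
import numpy as np

def ratio_solution_conc(p_hPa):
    """Transfer ratio OZ(0.5%)/OZ(1.0%): 0.96 at p >= 30 hPa,
    pressure-dependent below 30 hPa."""
    p = np.asarray(p_hPa, dtype=float)
    return np.where(p >= 30.0, 0.96, 0.90 + 0.041 * np.log10(p))

def ratio_sonde_type(p_hPa):
    """Transfer ratio OZ(Science Pump)/OZ(ENSCI), same structure."""
    p = np.asarray(p_hPa, dtype=float)
    return np.where(p >= 30.0, 0.96, 0.764 + 0.133 * np.log10(p))
```

Both pressure-dependent branches evaluate to roughly 0.96 at 30 hPa, so each piecewise form is continuous to within rounding.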
Kyazimova, V. K.; Alekperov, A. I.; Zakhrabekova, Z. M.; Azhdarov, G. Kh.
2014-05-01
The distributions of Al and In impurities in Ge1-xSix crystals (0 ≤ x ≤ 0.3) grown by a modified Czochralski method (with continuous feeding of the melt using a Si rod) have been studied experimentally and theoretically. Experimental Al and In concentrations along the homogeneous crystals have been determined from Hall measurements. The problem of Al and In impurity distribution in homogeneous Ge-Si single crystals grown in this way is solved within the Pfann approximation. The set of dependences of Al and In concentrations on crystal length obtained within this approximation demonstrates good correspondence between the experimental and theoretical data.
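For reference, the classical Pfann normal-freezing relation that such analyses build on is C_s(g) = k·C0·(1-g)^(k-1) (the paper solves a feeding-modified variant; this sketch is only the textbook baseline, with illustrative parameters):

```python
def pfann_normal_freezing(c0, k, g):
    """Solute concentration in the solid at solidified fraction g
    (0 <= g < 1), for initial melt concentration c0 and equilibrium
    distribution coefficient k."""
    return k * c0 * (1.0 - g) ** (k - 1.0)
```

For k < 1 (typical of Al and In in Ge-Si), the solute accumulates toward the tail of the crystal; continuous feeding of the melt counteracts this, which is how the modified method yields homogeneous crystals.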
Tonti, Marco; Salvatore, Sergio
2015-01-01
The problem of the measurement of emotion is a widely debated one. In this article we propose an instrument, the Homogenization of Classification Functions Measure (HOCFUN), designed for assessing the influence of emotional arousal on a rating task consisting of the evaluation of a sequence of images. The instrument defines an indicator (κ) that measures the degree of homogenization of the ratings given over 2 rating scales (pleasant-unpleasant and relevant-irrelevant). Such a degree of homogenization is interpreted as the effect of emotional arousal on thinking and therefore lends itself to be used as a marker of emotional arousal. A preliminary study of validation was implemented. The association of the κ indicator with 3 additional indicators was analyzed. Consistent with the hypotheses, the κ indicator proved to be associated, even if weakly and nonlinearly, with a marker of the homogenization of classification functions derived from a separate rating task and with 2 indirect indicators of emotional activation: the speed of performance on the HOCFUN task and an indicator of mood intensity. Taken as a whole, such results provide initial evidence supporting the HOCFUN construct validity.
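As a rough illustration of what such an index captures, the collapse of two rating scales onto each other can be proxied by their absolute correlation (an illustrative proxy of ours, not the κ indicator actually defined by the authors):

```python
import numpy as np

def homogenization_proxy(scale_a, scale_b):
    """Absolute Pearson correlation between two rating vectors:
    1.0 = the two scales are used interchangeably (fully homogenized),
    0.0 = the two scales are used independently."""
    return abs(np.corrcoef(scale_a, scale_b)[0, 1])
```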
Energy Technology Data Exchange (ETDEWEB)
Point, David; Davis, W.C.; Christopher, Steven J.; Ellisor, Michael B.; Pugh, Rebecca S.; Becker, Paul R. [Hollings Marine Laboratory, National Institute of Standards and Technology, Analytical Chemistry Division, Charleston, SC (United States); Donard, Olivier F.X. [Laboratoire de Chimie Analytique BioInorganique et Environnement UMR 5034 du CNRS, Pau (France); Porter, Barbara J.; Wise, Stephen A. [National Institute of Standards and Technology, Analytical Chemistry Division, Gaithersburg, MD (United States)
2007-04-15
An accurate, ultra-sensitive and robust method for speciation of mono-, di-, and tributyltin (MBT, DBT, and TBT) by speciated isotope-dilution gas chromatography-inductively coupled plasma-mass spectrometry (SID-GC-ICPMS) has been developed for quantification of butyltin concentrations in cryogenic biological materials maintained in an uninterrupted cryo-chain from storage conditions through homogenization and bottling. The method significantly reduces the detection limits, to the low pg g⁻¹ level (as Sn), and was validated by using the European reference material (ERM) CE477, mussel tissue, produced by the Institute for Reference Materials and Measurements. It was applied to three different cryogenic biological materials - a fresh-frozen mussel tissue (SRM 1974b) together with complex materials, a protein-rich material (whale liver control material, QC03LH03), and a lipid-rich material (whale blubber, SRM 1945) containing up to 72% lipids. The commutability between frozen and freeze-dried materials with regard to spike equilibration/interaction, extraction efficiency, and the absence of detectable transformations was carefully investigated by applying complementary methods and by varying extraction conditions and spiking strategies. The inter-method results enabled assignment of reference concentrations of butyltins in cryogenic SRMs and control materials for the first time. The reference concentrations of MBT, DBT, and TBT in SRM 1974b were 0.92 ± 0.06, 2.7 ± 0.4, and 6.58 ± 0.19 ng g⁻¹ as Sn (wet-mass), respectively; in SRM 1945 they were 0.38 ± 0.06, 1.19 ± 0.26, and 3.55 ± 0.44 ng g⁻¹, respectively, as Sn (wet-mass). In QC03LH03, DBT and TBT concentrations were 30.0 ± 2.7 and 2.26 ± 0.38 ng g⁻¹ as Sn (wet-mass). The concentration range of butyltins in these materials is one to three orders of magnitude lower than in ERM CE477. This study demonstrated that cryogenically processed and stored biological materials are
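Concentrations reported "as Sn" can be converted to butyltin-cation mass with a molar-mass ratio, which is useful when comparing with literature values reported as cation (a convenience sketch using standard atomic masses, not values from the paper):

```python
# Molar-mass conversion for tributyltin reported "as Sn".
M_SN = 118.71                        # g/mol, Sn (standard atomic weight)
M_BUTYL = 4 * 12.011 + 9 * 1.008     # g/mol, one C4H9 group
M_TBT_CATION = M_SN + 3 * M_BUTYL    # g/mol, (C4H9)3Sn+

def as_sn_to_tbt_cation(conc_as_sn):
    """Convert a TBT concentration expressed as Sn to cation mass."""
    return conc_as_sn * M_TBT_CATION / M_SN
```

For example, the 6.58 ng g⁻¹ (as Sn) TBT value for SRM 1974b corresponds to roughly 16 ng g⁻¹ expressed as TBT cation.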
Assessment of physical protection systems: EVA method
International Nuclear Information System (INIS)
Bernard, J.-L.; Lamotte, C.; Jorda, A.
2001-01-01
CEA's missions in various sectors of activity such as nuclear, defence and industrial contracts, and the associated regulatory requirements, make it necessary to develop a strategy in the field of physical protection. In particular, firms holding nuclear materials are subject to the July 25, 1980 law no. 80-572 on the protection and control of nuclear materials. A holding permit delivered by the regulatory authority is conditional on the operator's protection of the nuclear materials used. In France it is the nuclear operator who must demonstrate, in the form of a security study, that potential aggressors would be neutralised before they could escape with the material. To meet these requirements, we have developed methods to assess the vulnerability of our facilities. The EVA method, from the French acronym for 'Evaluation de la Vulnerabilite des Acces' (access vulnerability assessment), addresses internal and external threats involving forcible actions. In scenarios relating to the external threat, the intruders get past the various barriers of our protection system, attempting to steal a large volume of material in a single action and then escape. In the case of the internal threat, the goal is the same; however, as the intruder usually has access to the material in the scope of his activities, the action begins at the level of the target. Our protection system is based on defense in depth, where the intruders are detected and then delayed in their advance towards their target to allow time for intervention forces to intercept them.
International Nuclear Information System (INIS)
Lee, Yoon Hee; Cho, Bum Hee; Cho, Nam Zin
2016-01-01
As a type of accident-tolerant fuel, fully ceramic microencapsulated (FCM) fuel was proposed after the Fukushima accident in Japan. The FCM fuel consists of tristructural isotropic particles randomly dispersed in a silicon carbide (SiC) matrix. For a fuel element with such high heterogeneity, we have proposed a two-temperature homogenized model using the particle transport Monte Carlo method for the heat conduction problem. This model distinguishes between fuel-kernel and SiC matrix temperatures. Moreover, the obtained temperature profiles are more realistic than those of other models. In Part I of the paper, homogenized parameters for the FCM fuel in which tristructural isotropic particles are randomly dispersed in the fine lattice stochastic structure are obtained by (1) matching steady-state analytic solutions of the model with the results of particle transport Monte Carlo method for heat conduction problems, and (2) preserving total enthalpies in fuel kernels and SiC matrix. The homogenized parameters have two desirable properties: (1) they are insensitive to boundary conditions such as coolant bulk temperatures and thickness of cladding, and (2) they are independent of operating power density. By performing the Monte Carlo calculations with the temperature-dependent thermal properties of the constituent materials of the FCM fuel, temperature-dependent homogenized parameters are obtained
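A generic two-temperature homogenized conduction model takes the following coupled form (a schematic sketch in our own notation; the coefficients below stand in for the homogenized parameters that the Monte Carlo matching described above actually determines):

```latex
\begin{aligned}
\rho_f c_f \,\partial_t T_f &= \nabla\cdot\bigl(k_f^{\mathrm{hom}}\nabla T_f\bigr) + q''' - h\,(T_f - T_m),\\
\rho_m c_m \,\partial_t T_m &= \nabla\cdot\bigl(k_m^{\mathrm{hom}}\nabla T_m\bigr) + h\,(T_f - T_m),
\end{aligned}
```

where $T_f$ and $T_m$ are the fuel-kernel and matrix temperatures, $q'''$ is the power density deposited in the kernels, and $h$ is the inter-phase coupling coefficient; steady-state solutions of such a system are what get matched against the particle transport Monte Carlo reference.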
Synthesis of homogeneous Ca0.5Sr0.5FeO2.5+δ compound using a mirror furnace method
International Nuclear Information System (INIS)
Mahboub, M.S.; Zeroual, S.; Boudjada, A.
2012-01-01
Graphical abstract: X-ray diffraction pattern indexing of the Ca0.5Sr0.5FeO2.5+δ powder sample obtained by the mirror furnace method after thermal treatment. Highlights: ► A homogeneous compound Ca0.5Sr0.5FeO2.5+δ has been synthesized for the first time by a mirror furnace method. ► The Ca0.5Sr0.5FeO2.5+δ powder sample is perfectly homogeneous, as confirmed by X-ray diffraction, Raman spectroscopy and the EDS technique. ► Thermal treatment of the Ca0.5Sr0.5FeO2.5+δ powder sample can increase its average grain size. -- Abstract: A new synthesis method using the melting-zone technique via a double mirror furnace at around 1600 °C is used to obtain homogeneous brownmillerite compounds Ca1-xSrxFeO2.5+δ in the range 0.3 ≤ x ≤ 0.7. These compounds play an important role in understanding the mystery of oxygen diffusion in the perovskite-related oxides. We have successfully solved the miscibility-gap problem by synthesizing good-quality homogeneous powder samples of the Ca0.5Sr0.5FeO2.5+δ compound. Our result was confirmed by X-ray diffraction, Raman spectroscopy and energy-dispersive spectroscopy analysis. Thermal treatment was also applied up to 800 °C under vacuum to confirm again the homogeneity of the powder samples, improve their quality and show that no decomposition or separation into Ca- and Sr-enriched microdomains takes place as a result of phase separation.
Blanchard, Philippe
2015-01-01
The second edition of this textbook presents the basic mathematical knowledge and skills that are needed for courses on modern theoretical physics, such as those on quantum mechanics, classical and quantum field theory, and related areas. The authors stress that learning mathematical physics is not a passive process and include numerous detailed proofs, examples, and over 200 exercises, as well as hints linking mathematical concepts and results to the relevant physical concepts and theories. All of the material from the first edition has been updated, and five new chapters have been added on such topics as distributions, Hilbert space operators, and variational methods. The text is divided into three main parts. Part I is a brief introduction to distribution theory, in which elements from the theories of ultradistributions and hyperfunctions are considered in addition to some deeper results for Schwartz distributions, thus providing a comprehensive introduction to the theory of generalized functions. P...
Applying the Socratic Method to Physics Education
Corcoran, Ed
2005-04-01
We have restructured University Physics I and II in accordance with methods that physics education research (PER) has shown to be effective, including a more interactive discussion- and activity-based curriculum based on the premise that developing understanding requires an interactive process in which students have the opportunity to talk through and think through ideas with both other students and the teacher. Studies have shown that in classes implementing this approach to teaching, as compared to classes using a traditional approach, students have significantly higher gains on the Force Concept Inventory (FCI). This has been true in UPI. However, UPI FCI results seem to suggest that there is a significant conceptual hole in students' understanding of Newton's Second Law. Two UPI labs that teach Newton's Second Law will be redesigned to include more activity in which students, as a group, talk through, think through, and answer conceptual questions posed by the TA. The results will be measured by comparing FCI results with those from previous semesters, coupled with interviews. The results will be analyzed, and we will attempt to understand why gains were or were not made.
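FCI improvements of this kind are conventionally reported as Hake's normalized gain (a standard PER metric rather than something defined in this abstract; the scores in the example are hypothetical):

```python
def normalized_gain(pre_pct, post_pct):
    """Hake's normalized gain g = (post - pre) / (100 - pre):
    the fraction of the possible improvement actually achieved."""
    return (post_pct - pre_pct) / (100.0 - pre_pct)
```

A class moving from 40% to 70% on the FCI achieves g = 0.5, i.e. half of the available headroom.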
Directory of Open Access Journals (Sweden)
Pastor, J.
2002-02-01
The topic of this work is the theoretical study of a disc-shaped acoustic transducer made with a 1-3 piezoelectric composite material. This material consists of PZT ceramic rods embedded in a polymer matrix. A model of this ideal transversally periodic structure is proposed. It is based on a finite element approach derived from the homogenization techniques mainly used in composite material studies. The analysis focuses on a representative unit cell with specific boundary conditions on the lateral surfaces, accurately taking into account the periodicity of the structure. The first step proposed is the development of a three-dimensional Fortran code with complex variables, especially adapted for this problem. Using the Lee-Mandel correspondence principle, this technique allows the prediction of the damping properties of the transducer from the complex moduli of the constituents. Both the versatility of the method and the rigorous character of the model are pointed out through various boundary conditions and mixed loadings. An interesting result is that, despite the lossy polymer matrix, a 1-3 composite can advantageously replace a much heavier massive transducer in terms of efficiency and loss factor.
Homogeneous turbulence dynamics
Sagaut, Pierre
2018-01-01
This book provides state-of-the-art results and theories in homogeneous turbulence, including anisotropy and compressibility effects with extension to quantum turbulence, magnetohydrodynamic turbulence and turbulence in non-Newtonian fluids. Each chapter is devoted to a given type of interaction (strain, rotation, shear, etc.), and presents and compares experimental data, numerical results, analysis of the Reynolds stress budget equations and advanced multipoint spectral theories. The role of both linear and non-linear mechanisms is emphasized. The link between the statistical properties and the dynamics of coherent structures is also addressed. Despite its restriction to homogeneous turbulence, the book is of interest to all people working in turbulence, since the basic physical mechanisms which are present in all turbulent flows are explained. The reader will find a unified presentation of the results and a clear presentation of existing controversies. Special attention is given to bridge the results obta...
Electro-magnetostatic homogenization of bianisotropic metamaterials
Fietz, Chris
2012-01-01
We apply the method of asymptotic homogenization to metamaterials with microscopically bianisotropic inclusions to calculate a full set of constitutive parameters in the long wavelength limit. Two different implementations of electromagnetic asymptotic homogenization are presented. We test the homogenization procedure on two different metamaterial examples. Finally, the analytical solution for long wavelength homogenization of a one dimensional metamaterial with microscopically bi-isotropic i...
Barkaoui, Abdelwahed; Chamekh, Abdessalem; Merzouki, Tarek; Hambli, Ridha; Mkaddem, Ali
2014-03-01
The complexity and heterogeneity of bone tissue require multiscale modeling to understand its mechanical behavior and its remodeling mechanisms. In this paper, a novel multiscale hierarchical approach including the microfibril scale, based on hybrid neural network (NN) computation and homogenization equations, was developed to link the nanoscopic and macroscopic scales and estimate the elastic properties of human cortical bone. The multiscale model is divided into three main phases: (i) in step 0, the elastic constants of the collagen-water and mineral-water composites are calculated by averaging the upper and lower Hill bounds; (ii) in step 1, the elastic properties of the collagen microfibril are computed using a trained NN simulation, with finite element calculations performed at the nanoscopic level to provide a database for training an in-house NN program; and (iii) in steps 2-10, from fibril to continuum cortical bone tissue, homogenization equations are used to perform the computation at the higher scales. The NN outputs (elastic properties of the microfibril) are used as inputs for the homogenization computation to determine the properties of the mineralized collagen fibril. The mechanical and geometrical properties of the bone constituents (mineral, collagen, and cross-links) as well as the porosity were taken into consideration. This paper aims to predict analytically the effective elastic constants of cortical bone by modeling its elastic response at these different scales, ranging from the nanostructural to the mesostructural level. The outputs of the lowest scale were well integrated with the higher levels and served as inputs for the modeling at the next higher scale. Good agreement was obtained between our predicted results and literature data. Copyright © 2013 John Wiley & Sons, Ltd.
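Step 0's averaging of the upper and lower Hill bounds is, for a two-phase mixture, the Voigt-Reuss-Hill estimate (a sketch; the moduli and volume fractions in the example are illustrative, not bone data):

```python
def voigt_reuss_hill(f1, m1, m2):
    """Hill estimate of the effective modulus of a two-phase composite:
    the mean of the Voigt (iso-strain) and Reuss (iso-stress) bounds."""
    f2 = 1.0 - f1
    voigt = f1 * m1 + f2 * m2            # upper bound
    reuss = 1.0 / (f1 / m1 + f2 / m2)    # lower bound
    return 0.5 * (voigt + reuss)
```

The estimate always lies between the two bounds, which differ strongly when the phase moduli contrast is large (e.g. mineral versus water).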
Molecular physics. Theoretical principles and experimental methods
International Nuclear Information System (INIS)
Demtroeder, W.
2005-01-01
This advanced textbook comprehensively explains important principles of diatomic and polyatomic molecules and their spectra in two separate, distinct parts. The first part concentrates on the theoretical aspects of molecular physics, whereas the second part covers experimental techniques, i.e. laser, Fourier, NMR, and ESR spectroscopies, used in the fields of physics, chemistry, biology, and materials science. It is appropriate for undergraduate and graduate students in physics and chemistry who have a knowledge of atomic physics and are familiar with the basics of quantum mechanics. From the contents: - Electronic States of Molecules, - Rotation, Oscillation and Potential Curves of Diatomic Molecules, - The Spectra of Diatomic Molecules, - Molecule Symmetries and Group Theory, - Rotation and Oscillations of Polyatomic Molecules, - Electronic States of Polyatomic Molecules, - The Spectra of Polyatomic Molecules, - Breakdown of the Born-Oppenheimer Approximation, Perturbations in Molecular Spectra, - Molecules in External Fields, - Van der Waals Molecules and Clusters, - Experimental Techniques in Molecular Physics. (orig.)
Introduction to mathematical physics methods and concepts
Wong, Chun Wa
2013-01-01
Mathematical physics provides physical theories with their logical basis and the tools for drawing conclusions from hypotheses. Introduction to Mathematical Physics explains to the reader why and how mathematics is needed in the description of physical events in space. For undergraduates in physics, it is a classroom-tested textbook on vector analysis, linear operators, Fourier series and integrals, differential equations, special functions and functions of a complex variable. Strongly correlated with core undergraduate courses on classical and quantum mechanics and electromagnetism, it helps the student master these necessary mathematical skills. It contains advanced topics of interest to graduate students on relativistic square-root spaces and nonlinear systems. It contains many tables of mathematical formulas and references to useful materials on the Internet. It includes short tutorials on basic mathematical topics to help readers refresh their mathematical knowledge. An appendix on Mathematica encourages...
Computing and physical methods to calculate Pu
International Nuclear Information System (INIS)
Mohamed, Ashraf Elsayed Mohamed
2013-01-01
The main limitations due to enhanced plutonium content are related to the coolant void effect: as the spectrum becomes faster, the neutron flux in the thermal region tends towards zero and is concentrated in the region from 10 keV to 1 MeV. Thus, all captures by 240Pu and 242Pu in the thermal and epithermal resonances disappear, and the 240Pu and 242Pu contributions to the void effect become positive. The higher the Pu content and the poorer the Pu quality, the larger the void effect. Regarding core control in nominal or transient conditions, Pu enrichment leads to a decrease in βeff and in the efficiency of soluble boron and control rods. Also, the Doppler effect tends to decrease when Pu replaces U, so that in case of transients the core could diverge again if the control is not effective enough. As for the voiding effect, plutonium degradation and the accumulation of 240Pu and 242Pu after multiple recycling lead to spectrum hardening and to a decrease in control efficiency. One solution would be to use enriched boron in the soluble boron and shutdown rods. In this paper, I discuss the advanced computing and physical methods used to calculate Pu inside nuclear reactors and gloveboxes, present the solutions available to overcome the difficulties affecting safety parameters and reactor performance, and analyse the consequences of plutonium management on the whole fuel cycle, such as raw-material savings and the fraction of nuclear electric power involved in Pu management. Two types of scenario are considered: one involving a low fraction of the nuclear park dedicated to plutonium management, the other involving dilution of the plutonium across the whole nuclear park. (author)
Applied Mathematical Methods in Theoretical Physics
Masujima, Michio
2005-04-01
All there is to know about functional analysis, integral equations and calculus of variations in a single volume. This advanced textbook is divided into two parts: the first on integral equations and the second on the calculus of variations. It begins with a short introduction to functional analysis, including a short review of complex analysis, before continuing with a systematic discussion of different types of equations, such as Volterra integral equations, singular integral equations of Cauchy type, and integral equations of the Fredholm type, with a special emphasis on Wiener-Hopf integral equations and Wiener-Hopf sum equations. After a few remarks on the historical development, the second part starts with an introduction to the calculus of variations and the relationship between integral equations and applications of the calculus of variations. It further covers applications of the calculus of variations developed in the second half of the 20th century in the fields of quantum mechanics, quantum statistical mechanics and quantum field theory. Throughout the book, the author presents over 150 problems and exercises - many from such branches of physics as quantum mechanics, quantum statistical mechanics, and quantum field theory - together with outlines of the solutions in each case. Detailed solutions are given, supplementing the materials discussed in the main text, allowing problems to be solved making direct use of the method illustrated. The original references are given for difficult problems. The result is complete coverage of the mathematical tools and techniques used by physicists and applied mathematicians. Intended for senior undergraduates and first-year graduates in science and engineering, this is equally useful as a reference and self-study guide.
Fathy, Alaa; Sabry, Yasser M.; Khalil, Diaa A.
2017-10-01
Multimode fibers (MMF) have many applications in illumination, spectroscopy, sensing and even in optical communication systems. In this work, we present a model for the MMF output field assuming the fiber end acts as a quasi-homogeneous source. The fiber end is modeled by a group of partially coherent elementary sources, spatially shifted and uncorrelated with each other. The elementary source distribution is derived from the far-field intensity measurement, while the weighting function of the sources is derived from the fiber-end intensity measurement. The model is compared with practical measurements for fibers with different core/cladding diameters at different propagation distances and for different input excitations: laser, white light and LED. The obtained results show a normalized root mean square error of less than 8% in the intensity profile in most cases, even when the fiber end surface is not perfectly cleaved. Also, the comparison with the Gaussian-Schell model results shows a better agreement with the measurement. In addition, the complex degree of coherence derived from the model is compared with the theoretical predictions of the modified van Cittert-Zernike theorem, showing very good agreement, which strongly supports the assumption that the large-core MMF can be considered a quasi-homogeneous source.
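The Gaussian-Schell comparison assumes a degree of coherence of Gaussian form (the standard textbook model; the coherence width used below is an illustrative parameter, not a fitted value from the paper):

```python
import numpy as np

def gsm_degree_of_coherence(dx, sigma_mu):
    """Gaussian Schell-model spectral degree of coherence between two
    points separated by dx, for coherence width sigma_mu."""
    return np.exp(-dx**2 / (2.0 * sigma_mu**2))
```

The degree of coherence is 1 at zero separation and decays over the coherence width, which for a quasi-homogeneous source is much smaller than the intensity width.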
Coarse mesh finite element method for boiling water reactor physics analysis
International Nuclear Information System (INIS)
Ellison, P.G.
1983-01-01
A coarse mesh method is formulated for the solution of Boiling Water Reactor physics problems using two-group diffusion theory. No fuel assembly cross-section homogenization is required; water gaps, control blades and fuel pins of varying enrichments are treated explicitly. The method combines constrained finite element discretization with infinite-lattice supercell trial functions to obtain coarse mesh solutions for which the only approximations are along the boundaries between fuel assemblies. The method is applied to benchmark Boiling Water Reactor problems to obtain both the eigenvalue and detailed flux distributions. The solutions to these problems indicate the method is useful in predicting detailed power distributions and eigenvalues for Boiling Water Reactor physics problems.
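The kind of k-eigenvalue problem being solved can be illustrated with a minimal one-group, 1-D finite-difference solver (a sketch only; this is not the constrained finite element scheme of the paper, and the cross sections are illustrative). For a bare slab with zero-flux boundaries the result should approach the analytic value k = νΣf / (Σa + D(π/L)²):

```python
import numpy as np

def slab_keff(D, sig_a, nu_sig_f, L, n=200, iters=400):
    """One-group, 1-D finite-difference k-eigenvalue for a bare slab
    with zero-flux boundaries, solved by power (source) iteration."""
    h = L / (n + 1)
    main = np.full(n, 2.0 * D / h**2 + sig_a)    # leakage + absorption
    off = np.full(n - 1, -D / h**2)
    A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
    phi, k = np.ones(n), 1.0
    for _ in range(iters):
        phi_new = np.linalg.solve(A, nu_sig_f * phi / k)
        k *= phi_new.sum() / phi.sum()
        phi = phi_new / phi_new.max()            # keep the flux normalized
    return k
```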
Energy Technology Data Exchange (ETDEWEB)
Hiraki, A [Osaka Univ., Japan; Imura, T; Iwami, M; Kim, S C; Ushita, K; Okamoto, H; Hamakawa, Y
1979-09-01
Auger spectra of Si LMM transitions induced by keV Ar⁺ ion bombardment of Si alloy systems have been studied. The spectra observed are composed of two well-defined peaks, termed elsewhere the atomic-like and bulk-like peaks, respectively. A clear correlation has been found between the intensity of the atomic-like peak lying at 88 eV and the content of the foreign atoms alloyed with the Si. Experiments were carried out on metallic silicides, i.e. Si alloys with Au, Cu, Pd and Ni, and covalently bonded non-metallic Si alloys with C and H. From these studies, we propose that ion-induced Auger electron spectroscopy might be a useful tool for the determination of alloyed foreign atoms as well as for the study of their compositional homogeneity in binary alloy systems of silicon.
Sung, Heungsup; Yong, Dongeun; Ki, Chang Seok; Kim, Jae Seok; Seong, Moon Woo; Lee, Hyukmin; Kim, Mi Na
2016-09-01
Real-time reverse transcription PCR (rRT-PCR) of sputum samples is commonly used to diagnose Middle East respiratory syndrome coronavirus (MERS-CoV) infection. Owing to the difficulty of extracting RNA from sputum containing mucus, sputum homogenization is desirable prior to nucleic acid isolation. We determined optimal homogenization methods for isolating viral nucleic acids from sputum. We evaluated the following three sputum-homogenization methods: proteinase K and DNase I (PK-DNase) treatment, phosphate-buffered saline (PBS) treatment, and N-acetyl-L-cysteine and sodium citrate (NALC) treatment. Sputum samples were spiked with inactivated MERS-CoV culture isolates. RNA was extracted from pretreated, spiked samples using the easyMAG system (bioMérieux, France). Extracted RNAs were then subjected to rRT-PCR for MERS-CoV diagnosis (DiaPlex Q MERS-coronavirus, SolGent, Korea). While analyzing 15 spiked sputum samples prepared in technical duplicate, false-negative results were obtained with five (16.7%) and four samples (13.3%), respectively, by using the PBS and NALC methods. The range of threshold cycle (Ct) values observed when detecting upE in sputum samples was 31.1-35.4 with the PK-DNase method, 34.7-39.0 with the PBS method, and 33.9-38.6 with the NALC method. Compared with the control, which was prepared by adding a one-tenth volume of 1:1,000-diluted viral culture to PBS solution, the ranges of Ct values obtained by the PBS and NALC methods differed significantly from the mean control Ct of 33.2 (both P values significant). These results support the use of the PK-DNase method for homogenizing sputum samples prior to RNA extraction.
Project as an education method in teaching of physics
ŽAHOUREK, Martin
2011-01-01
The diploma thesis "Project as an educational method for teaching physics" deals with the possibilities of using the project-based method for teaching physics at primary schools. Not only does it contain the theoretical background of project-based teaching, but it also deals with practical issues in the form of an implementation of a chosen project, "Physics and physical education". The aim of said project was to evaluate the efficiency of project-based teaching as far as the knowledge of pupils and ...
Atmospheric Physics Background – Methods – Trends
2012-01-01
On the occasion of the 50th anniversary of the Institute of Atmospheric Physics of the German Aerospace Center (DLR), this book presents more than 50 chapters highlighting results of the institute’s research. The book provides an up-to-date, in-depth survey across the entire field of atmospheric science, including atmospheric dynamics, radiation, cloud physics, chemistry, climate, numerical simulation, remote sensing, instruments and measurements, as well as atmospheric acoustics. The authors have provided a readily comprehensible and self-contained presentation of the complex field of atmospheric science. The topics are of direct relevance for aerospace science and technology. Future research challenges are identified.
At-tank Low-Activity Feed Homogeneity Analysis Verification
International Nuclear Information System (INIS)
DOUGLAS, J.G.
2000-01-01
This report evaluates the merit of selecting sodium, aluminum, and cesium-137 as analytes to indicate homogeneity of soluble species in low-activity waste (LAW) feed and recommends possible analytes and physical properties that could serve as rapid screening indicators for LAW feed homogeneity. The three analytes are adequate as screening indicators of soluble species homogeneity for tank waste when a mixing pump is used to thoroughly mix the waste in the waste feed staging tank and when all dissolved species are present at concentrations well below their solubility limits. If either of these conditions is violated, then the three indicators may not be sufficiently chemically representative of other waste constituents to reliably indicate homogeneity in the feed supernatant. Additional homogeneity indicators that should be considered are anions such as fluoride, sulfate, and phosphate, total organic carbon/total inorganic carbon, and total alpha to estimate the transuranic species. Physical property measurements such as gamma profiling, conductivity, specific gravity, and total suspended solids are recommended as possible at-tank methods for indicating homogeneity. Indicators of LAW feed homogeneity are needed to reduce the U.S. Department of Energy, Office of River Protection (ORP) Program's contractual risk by assuring that the waste feed is within the contractual composition and can be supplied to the waste treatment plant within the schedule requirements
Nuclear physics methods in materials research
International Nuclear Information System (INIS)
1980-01-01
The brochure contains the abstracts of the papers presented at the 7th EPS meeting 1980 in Darmstadt. The main subjects were: a) Neutron scattering and Moessbauer effect in materials research, b) ion implantation in micrometallurgy, c) applications of nuclear reactions and radioisotopes in research on solids, d) recent developments in activation analysis and e) pions, positrons, and heavy ions applied in solid state physics. (RW) [de
Directory of Open Access Journals (Sweden)
Петро Джуринський
2014-12-01
The paper presents methodical approaches and recommendations for integrating the training of future Physical Culture teachers for the physical education of high school students into the study process at a higher educational institution. The role of the approbated study discipline "Theory and methods of physical education at high school" is determined in this research. It is also established that a future Physical Culture teacher's training for the physical education of high school students is a system of organizational and educational measures ensuring the formation of the future teacher's professional knowledge and skills. The article defines the tasks, criteria, tools, forms, pedagogical conditions and stages of students' training for teaching Physical Education classes to high school students. Approbation of these methodical approaches to future Physical Culture teachers' training demonstrated their efficacy
Mathematical methods in physics and engineering
Dettman, John W
2011-01-01
Intended for college-level physics, engineering, or mathematics students, this volume offers an algebraically based approach to various topics in applied math. It is accessible to undergraduates with a good course in calculus which includes infinite series and uniform convergence. Exercises follow each chapter to test the student's grasp of the material; however, the author has also included exercises that extend the results to new situations and lay the groundwork for new concepts to be introduced later. A list of references for further reading will be found at the end of each chapter. For t
Methods of the physics of porous media
Wong, Po-Zen; De Graef, Marc
1999-01-01
Over the past 25 years, the field of VUV physics has undergone significant developments as new powerful spectroscopic tools, VUV lasers, and optical components have become available. This volume is aimed at experimentalists who are in need of choosing the best type of modern instrumentation in this applied field. In particular, it contains a detailed chapter on laboratory sources. This volume provides an up-to-date description of state-of-the-art equipment and techniques, and a broad reference bibliography. It treats phenomena from the standpoint of an experimental physicist, whereby such topi
Prediction of Solvent Physical Properties using the Hierarchical Clustering Method
Recently a QSAR (Quantitative Structure Activity Relationship) method, the hierarchical clustering method, was developed to estimate acute toxicity values for large, diverse datasets. This methodology has now been applied to the estimation of solvent physical properties including sur...
Scattering theory in quantum mechanics. Physical principles and mathematical methods
International Nuclear Information System (INIS)
Amrein, W.O.; Jauch, J.M.; Sinha, K.B.
1977-01-01
A contemporary approach is given to the classical topics of physics. The purpose is to explain the basic physical concepts of quantum scattering theory, to develop the necessary mathematical tools for their description, to display the interrelation between the three methods (the Schroedinger equation solutions, stationary scattering theory, and time dependence) to derive the properties of various quantities of physical interest with mathematically rigorous methods
Using the Case Study Method in Teaching College Physics
Burko, Lior M.
2016-01-01
The case study teaching method has a long history (starting at least with Socrates) and wide current use in business schools, medical schools, law schools, and a variety of other disciplines. However, relatively little use is made of it in the physical sciences, specifically in physics or astronomy. The case study method should be considered by…
Renormalization methods in solid state physics
Energy Technology Data Exchange (ETDEWEB)
Nozieres, P [Institut Max von Laue - Paul Langevin, 38 - Grenoble (France)
1976-01-01
Renormalization methods in various solid state problems (e.g., the Kondo effect) are analyzed from a qualitative vantage point. Our goal is to show how the renormalization procedure works, and to uncover a few simple general ideas (universality, phenomenological descriptions, etc.).
International Nuclear Information System (INIS)
Jung, Cheulhee; Mun, Hyo Young; Li, Taihua; Park, Hyun Gyu
2009-01-01
A simple, highly efficient immobilization method for fabricating DNA microarrays that utilizes gold nanoparticles as the mediator has been developed. The fabrication method begins with electrostatic attachment of amine-modified DNA to gold nanoparticles. The resulting gold-DNA complexes are immobilized on conventional amine or aldehyde functionalized glass slides. By employing gold nanoparticles as the immobilization mediator, implementation of this procedure yields highly homogeneous microarrays that have higher binding capacities than those produced by conventional methods. This outcome is due to the increased three-dimensional immobilization surface provided by the gold nanoparticles as well as the intrinsic effects of gold on emission properties. This novel immobilization strategy gives microarrays that produce more intense hybridization signals for the complementary DNA. Furthermore, the silver enhancement technique, made possible only in the case of immobilized gold nanoparticles on the microarrays, enables simple monitoring of the integrity of the immobilized DNA probe.
International Nuclear Information System (INIS)
Tullett, J.D.
1990-01-01
P Benoist has developed a method for calculating cross-sections for Fast Reactor control rods and their followers described by a single homogenised region (the Equivalent Parameter Method). When used in a diffusion theory calculation, these equivalent cross-sections should give the same rod worth as one would obtain from a transport theory calculation with a heterogeneous description of the control rod and the follower. In this report, Benoist's theory is described, and a comprehensive set of tests is presented. These tests show that the method gives very good results over a range of geometries and control rod positions for a model fast reactor core. (author)
International Nuclear Information System (INIS)
Beron, Rodolphe
1988-01-01
In this work, we introduce an experimental method for determining parameters characteristic of the transfer functions of building walls, using simple equipment whose operation imposes no significant constraints. This method, called the 'reference plates method', is based on processing the thermograms that result from an arbitrary thermal perturbation. The first part of our work deals with the theoretical development of the measurement-processing methods used for both single-layered and multi-layered walls. The second part discusses the application of the method to a single-layered sample; the thermophysical characteristics of the wall are obtained by processing the heat equation written in terms of an integral transformation, which we take to be the Laplace transform. The case of the multi-layered wall, discussed in the third part, leads to the determination of the 'z-transform coefficients' of the transfer functions of the studied wall. In addition to the theoretical study, we analyse the results of prospective experiments in the fourth part of our work and show the usefulness of such a measurement method. The last part is devoted to applying our work to the determination of thermal parameters of more general wall configurations. (author) [fr
Directory of Open Access Journals (Sweden)
KUDRYAVTSEV Pavel Gennadievich
2015-04-01
The paper deals with possibilities of using the quasi-homogeneous approximation to describe the properties of dispersed systems. The authors applied the statistical polymer method, based on the consideration of averaged structures of all possible macromolecules of the same weight. Equations were derived that allow evaluation of many additive parameters of macromolecules and of systems containing them. The statistical polymer method makes it possible to model branched, cross-linked macromolecules and systems containing them in equilibrium or non-equilibrium states. Fractal analysis of the statistical polymer allows modeling of different types of random fractals and other objects examined by the methods of fractal theory. The statistical polymer method can also be applied not only to polymers but also to composites, gels, associates in polar liquids and other packed systems. There is also a description of the states of colloid solutions of silica oxide from the standpoint of statistical physics. This approach is based on the idea that a colloid solution of silica dioxide (a silica sol) consists of an enormous number of interacting particles in constant motion. The paper is devoted to the study of an ideal system of colliding but non-interacting sol particles. The behavior of the silica sol was analyzed according to the Maxwell-Boltzmann distribution, and the free path length was calculated. Using these data, the number of particles able to overcome the potential barrier in a collision was calculated. To model the kinetics of the sol-gel transition, different approaches were studied.
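The barrier-crossing analysis described in this abstract can be illustrated numerically. The sketch below (all parameter values are illustrative, not from the paper) samples particle velocities from the Maxwell-Boltzmann distribution and estimates the fraction of particles whose kinetic energy exceeds a potential barrier:

```python
import numpy as np

rng = np.random.default_rng(0)

def barrier_fraction(kT, m, e_barrier, n=200_000):
    """Monte Carlo estimate of the fraction of particles whose kinetic
    energy exceeds e_barrier, with each velocity component drawn from
    the Maxwell-Boltzmann (Gaussian) distribution at temperature kT."""
    v = rng.normal(0.0, np.sqrt(kT / m), size=(n, 3))
    e_kin = 0.5 * m * (v ** 2).sum(axis=1)
    return (e_kin > e_barrier).mean()

# Illustrative units: unit mass, kT = 1, two hypothetical barrier heights.
f_low = barrier_fraction(kT=1.0, m=1.0, e_barrier=1.0)
f_high = barrier_fraction(kT=1.0, m=1.0, e_barrier=5.0)
print(f_low, f_high)  # the fraction drops sharply as the barrier rises
```

The estimate can be checked against the analytic survival function of the kinetic-energy distribution, Q(3/2, E/kT) for three degrees of freedom.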
Homogeneity spoil spectroscopy
International Nuclear Information System (INIS)
Hennig, J.; Boesch, C.; Martin, E.; Grutter, R.
1987-01-01
One of the problems of in vivo MR spectroscopy of P-31 is spectral localization. Surface coil spectroscopy, the method of choice for clinical applications, suffers from the high-intensity signal from subcutaneous muscle tissue, which masks the spectrum of interest from deeper structures. In order to suppress this signal while maintaining the simplicity of surface coil spectroscopy, the authors introduced a small sheet of ferromagnetically dotted plastic between the surface coil and the body. This sheet locally destroys the field homogeneity and therefore suppresses all signal from structures around the coil. The very high reproducibility of this simple experimental procedure allows the long-term studies important for monitoring tumor therapy
Spinor structures on homogeneous spaces
International Nuclear Information System (INIS)
Lyakhovskii, V.D.; Mudrov, A.I.
1993-01-01
For multidimensional models of the interaction of elementary particles, the problem of constructing and classifying spinor fields on homogeneous spaces is exceptionally important. An algebraic criterion for the existence of spinor structures on homogeneous spaces used in multidimensional models is developed. A method of explicit construction of spinor structures is proposed, and its effectiveness is demonstrated in examples. The results are of particular importance for harmonic decomposition of spinor fields
Energy Technology Data Exchange (ETDEWEB)
Lukkassen, D.
1996-12-31
When partial differential equations are set up to model physical processes in strongly heterogeneous materials, effective parameters for heat transfer, electric conductivity etc. are usually required. Averaging methods often lead to convergence problems and in homogenization theory one is therefore led to study how certain integral functionals behave asymptotically. This mathematical doctoral thesis discusses (1) means and bounds connected to homogenization of integral functionals, (2) reiterated homogenization of integral functionals, (3) bounds and homogenization of some particular partial differential operators, (4) applications and further results. 154 refs., 11 figs., 8 tabs.
Solar energy utilization by physical methods.
Wolf, M
1974-04-19
On the basis of the estimated contributions of these differing methods of the utilization of solar energy, their total energy delivery impact on the projected U.S. energy economy (9) can be evaluated (Fig. 5). Despite this late energy impact, the actual sales of solar energy utilization equipment will be significant at an early date. Potential sales in photovoltaic arrays alone could exceed $400 million by 1980, in order to meet the projected capacity buildup (10). Ultimately, the total energy utilization equipment industry should attain an annual sales volume of several tens of billions of dollars in the United States, comparable to that of several other energy related industries. Varying amounts of technology development are required to assure the technical and economic feasibility of the different solar energy utilization methods. Several of these developments are far enough along that the paths can be analyzed from the present time to the time of demonstration of technical and economic feasibility, and from there to production and marketing readiness. After that point, a period of market introduction will follow, which will differ in duration according to the type of market addressed. It may be noted that the present rush to find relief from the current energy problem, or to be an early leader in entering a new market, can entail shortcuts in sound engineering practice, particularly in the areas of design for durability and easy maintenance, or of proper application engineering. The result can be loss of customer acceptance, as has been experienced in the past with various products, including solar water heaters. Since this could cause considerable delay in achieving the expected total energy impact, it will be important to spend adequate time at this stage for thorough development. Two other aspects are worth mentioning. The first is concerned with the economic impacts. Upon reflection on this point, one will observe that large-scale solar energy utilization will
Using the Case Study Method in Teaching College Physics
Burko, Lior M.
2016-10-01
The case study teaching method has a long history (starting at least with Socrates) and wide current use in business schools, medical schools, law schools, and a variety of other disciplines. However, relatively little use is made of it in the physical sciences, specifically in physics or astronomy. The case study method should be considered by physics faculty as part of the effort to transition the teaching of college physics from the traditional frontal-lecture format to other formats that enhance active student participation. In this paper we endeavor to interest physics instructors in the case study method, and hope that it would also serve as a call for more instructors to produce cases that they use in their own classes and that can also be adopted by other instructors.
Physical Model Method for Seismic Study of Concrete Dams
Directory of Open Access Journals (Sweden)
Bogdan Roşca
2008-01-01
The study of the dynamic behaviour of concrete dams by means of the physical model method is very useful for understanding the failure mechanism of these structures under the action of strong earthquakes. The physical model method consists of two main processes. First, a study model must be designed through a physical modeling process using dynamic modeling theory; the result is a system of equations for dimensioning the physical model. After the construction and instrumentation of the scale physical model, a structural analysis based on experimental means is performed, and the experimental results are gathered and made available for analysis. Depending on the aim of the research, either an elastic or a failure physical model may be designed. The requirements for constructing an elastic model are easier to meet than those for a failure model, but the results obtained provide only limited information. To study the behaviour of concrete dams under strong seismic action, failure physical models are required that can accurately simulate the possible opening of joints, sliding between concrete blocks and the cracking of concrete. The design relations for both elastic and failure physical models are based on dimensional analysis and consist of similitude relations among the physical quantities involved in the phenomenon. The use of large or medium-size physical models, together with their instrumentation, offers great advantages, but involves a large amount of financial, logistic and time resources.
Effective Teaching Methods--Project-based Learning in Physics
Holubova, Renata
2008-01-01
The paper presents results of the research of new effective teaching methods in physics and science. It is found out that it is necessary to educate pre-service teachers in approaches stressing the importance of the own activity of students, in competences how to create an interdisciplinary project. Project-based physics teaching and learning…
Some method for teaching physics to residents in radiation therapy
International Nuclear Information System (INIS)
Hughes, D.B.
A method is presented for teaching physics to residents in radiation therapy. Some of the various responsibilities of a hospital physicist are listed, with particular reference to radiation therapy departments [pt
Inverse operator theory method and its applications in nonlinear physics
International Nuclear Information System (INIS)
Fang Jinqing
1993-01-01
The inverse operator theory method, developed by G. Adomian in recent years, and its applications in nonlinear physics are described systematically. The method can serve as a unified, effective procedure for solving nonlinear and/or stochastic continuous dynamical systems without the usual restrictive assumptions. We have realized it by means of mathematical mechanization. It will have a profound impact on the modelling of problems in physics, mathematics, engineering, economics, biology, and so on. Some typical examples of its application are given and reviewed
Unfolding methods in high-energy physics experiments
International Nuclear Information System (INIS)
Blobel, V.
1985-01-01
Distributions measured in high-energy physics experiments are often distorted or transformed by the limited acceptance and finite resolution of the detectors. The unfolding of measured distributions is an important but, due to inherent instabilities, very difficult problem. Methods for unfolding applicable to the analysis of high-energy physics experiments, and their properties, are discussed. An introduction is given to the method of regularization. (orig.)
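The instability the abstract refers to, and the role of regularization, can be seen on a toy unfolding problem. In the sketch below (all numbers are illustrative), the detector response is modeled as Gaussian bin-to-bin smearing; a naive matrix inversion amplifies the measurement noise, while Tikhonov regularization with a curvature penalty stabilizes the result:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 40
x = np.linspace(-3, 3, n)
true = np.exp(-0.5 * x ** 2)                      # toy "true" distribution

# Detector response: Gaussian bin-to-bin smearing (toy resolution of 0.3).
R = np.exp(-0.5 * ((x[:, None] - x[None, :]) / 0.3) ** 2)
R /= R.sum(axis=1, keepdims=True)

measured = R @ true + rng.normal(0.0, 0.01, n)    # smeared + noisy data

# Naive inversion: noise is amplified through the tiny singular values of R.
naive = np.linalg.solve(R, measured)

# Tikhonov regularization: minimize ||R u - y||^2 + tau * ||L u||^2,
# with L a second-difference (curvature) operator.
L = np.diff(np.eye(n), n=2, axis=0)
tau = 1e-3
A = np.vstack([R, np.sqrt(tau) * L])
b = np.concatenate([measured, np.zeros(n - 2)])
unfolded, *_ = np.linalg.lstsq(A, b, rcond=None)

print(np.abs(naive - true).max())                 # huge: unstable
print(np.abs(unfolded - true).max())              # small: stabilized
```

The regularization strength tau trades bias against variance; in practice it must be chosen from the data, which is part of what makes unfolding difficult.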
Computer methods in physics 250 problems with guided solutions
Landau, Rubin H
2018-01-01
Our future scientists and professionals must be conversant in computational techniques. In order to facilitate the integration of computer methods into existing physics courses, this textbook offers a large number of worked examples and problems with fully guided solutions in Python as well as other languages (Mathematica, Java, C, Fortran, and Maple). It is also intended as a self-study guide for learning how to use computer methods in physics. The authors include an introductory chapter on numerical tools and an indication of the computational and physics difficulty level of each problem.
Modern methodic of power cardio training in students’ physical education
Directory of Open Access Journals (Sweden)
A.Yu. Osipov
2016-12-01
Purpose: to significantly increase students' physical condition and health level through the application of a modern power cardio training method. Material: 120 students (60 boys and 60 girls) participated in the research; the participants were 19 years old, and the research took one year. We used a method of power and functional impact on the trainees' organism (HOT IRON), a system of physical exercises with weights (mini-barbells) performed to the accompaniment of specially selected music. Results: we showed the advantages of power-cardio and fitness training for improving students' health and eliminating obesity. Control tests showed that the experimental group students achieved significantly higher physical indicators: boys demonstrated increased physical strength and general endurance, while girls had significantly better indicators of physical strength, flexibility and general endurance. The increase in control group students' body mass can be explained by insufficient physical activity in training conducted under the traditional program. Conclusions: student training by the power-cardio method with HOT IRON exercises facilitates the development of strength and endurance in boys, and of strength, flexibility and endurance in girls. It was also found that such exercise systems facilitate normalization of boys' body mass and correction of girls' constitution.
Multi-Level iterative methods in computational plasma physics
International Nuclear Information System (INIS)
Knoll, D.A.; Barnes, D.C.; Brackbill, J.U.; Chacon, L.; Lapenta, G.
1999-01-01
Plasma physics phenomena occur on a wide range of spatial scales and on a wide range of time scales. When attempting to model plasma physics problems numerically, the authors are inevitably faced with the need for both fine spatial resolution (fine grids) and implicit time integration methods. Fine grids can tax the efficiency of iterative methods, and large time steps can challenge their robustness. To meet these challenges they are developing a hybrid approach in which multigrid methods are used as preconditioners for Krylov subspace based iterative methods such as conjugate gradients or GMRES. For nonlinear problems they apply multigrid preconditioning to a matrix-free Newton-GMRES method. Results are presented for the application of these multilevel iterative methods to the field solves in implicit moment method PIC and to multidimensional nonlinear Fokker-Planck problems, along with their initial efforts in particle MHD
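The core idea of the abstract, accelerating a Krylov method with a preconditioner, can be sketched in a few lines. The example below uses a simple Jacobi (diagonal) preconditioner on a badly scaled SPD test matrix as a stand-in for the multigrid preconditioners the authors describe; the matrix and right-hand side are illustrative, not a plasma field solve:

```python
import numpy as np

def cg(A, b, M_inv=None, tol=1e-10, max_iter=1000):
    """Preconditioned conjugate gradients; M_inv applies the inverse of
    the preconditioner (identity if None). Returns (solution, iterations)."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv(r) if M_inv else r.copy()
    p = z.copy()
    rz = r @ z
    for k in range(1, max_iter + 1):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            return x, k
        z = M_inv(r) if M_inv else r.copy()
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, max_iter

# Badly scaled SPD system: 1D Laplacian plus a diagonal spanning 4 decades.
n = 200
A = (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
     + np.diag(np.logspace(0, 4, n)))
b = np.ones(n)

x0, it_plain = cg(A, b)
d = np.diag(A)
x1, it_jacobi = cg(A, b, M_inv=lambda r: r / d)   # Jacobi stand-in
print(it_plain, it_jacobi)  # preconditioned CG needs far fewer iterations
```

A multigrid V-cycle would replace the `lambda r: r / d` with a coarse-grid correction, which is what makes the approach scale to the fine grids the abstract mentions.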
Homogenization of neutronic diffusion models
International Nuclear Information System (INIS)
Capdebosq, Y.
1999-09-01
In order to study and simulate nuclear reactor cores, one needs access to the neutron distribution in the core. In practice, this neutron density is described by a system of diffusion equations coupled by non-differential exchange terms. The strong heterogeneity of the medium constitutes a major obstacle to the numerical computation of these models at reasonable cost, so homogenization appears compulsory. Heuristic methods have been developed since the origin by nuclear physicists, under a periodicity assumption on the coefficients. They consist in performing a fine computation on a single periodicity cell, solving the system on the whole domain with homogeneous coefficients, and reconstructing the neutron density by multiplying the solutions of the two computations. The objectives of this work are to provide a mathematically rigorous basis for this factorization method, to obtain exact formulas for the homogenized coefficients, and to begin work on geometries where two periodic media are placed side by side. The first result of this thesis concerns eigenvalue problem models, which are used to characterize the criticality of the reactor, under a symmetry assumption on the coefficients. The convergence of the homogenization process is proved, and formulas for the homogenized coefficients are given. We then show that without symmetry assumptions, a drift phenomenon appears. It is characterized by means of a real Bloch wave method, which gives the homogenized limit in the general case. These results for the critical problem are then adapted to the evolution model. Finally, the homogenization of the critical problem in the case of two side-by-side periodic media is studied on a one-dimensional, one-equation model. (authors)
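The idea of computing homogenized coefficients from a single periodicity cell has a closed form in one dimension: for an elliptic diffusion operator, the effective coefficient over the cell is the harmonic mean of the local coefficient, not the arithmetic mean. A minimal sketch of this standard result, on a hypothetical two-phase cell:

```python
import numpy as np

# One periodicity cell of a two-phase medium: the coefficient a(y) takes
# the value a1 on half the cell and a2 on the other half (illustrative values).
a1, a2 = 1.0, 10.0
y = np.linspace(0.0, 1.0, 1000, endpoint=False)
a = np.where(y < 0.5, a1, a2)

# 1D elliptic homogenization: the effective coefficient is the harmonic
# mean over the cell, <1/a>^{-1}, which differs from the arithmetic mean.
a_arith = a.mean()
a_harm = 1.0 / (1.0 / a).mean()

print(a_arith)  # 5.5
print(a_harm)   # ≈ 1.8182, the harmonic mean 2/(1/a1 + 1/a2) = 20/11
```

In higher dimensions no such closed form exists in general, which is why the cell problem must be solved numerically, as the abstract's factorization method does.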
International Nuclear Information System (INIS)
Quoc-Thang Vo
2013-01-01
This work is focused on a matrix/inclusion metal composite. A simple method is proposed to evaluate the elastic properties of one phase while the properties of the other phase are assumed to be known. The method is based on both an inverse homogenization scheme and mechanical field measurements by 2D digital image correlation. The originality of the approach rests on the scale studied, i.e. the microstructure scale of the material: the characteristic size of the inclusions is about a few tens of microns. The evaluation is performed on standard uniaxial tensile tests associated with a long-distance microscope, which allows observation of the surface of a specimen at the microstructure scale during mechanical loading. First, the accuracy of the method is estimated on 'perfect' mechanical fields coming from numerical simulations for four microstructures: elastic or porous single inclusions having either spherical or cylindrical shape. Second, this accuracy is estimated on real mechanical fields for two simple microstructures: an elasto-plastic metallic matrix containing either a single cylindrical micro void or four cylindrical micro voids arranged in a square pattern. Third, the method is used to evaluate the elastic properties of αZr inclusions of arbitrary shape in an oxidized Zircaloy-4 sample of pressurized water reactor fuel cladding after a loss-of-coolant accident (LOCA). Throughout this study, the phases are assumed to have isotropic properties. (author) [fr
Directory of Open Access Journals (Sweden)
Saikawa Shinichi
2008-05-01
Background: Increased serum remnant lipoproteins are thought to predict cardiovascular disease in addition to increased LDL. A new homogeneous assay for remnant lipoprotein-cholesterol (RemL-C) has been developed as an alternative to remnant-like particle-cholesterol (RLP-C), an immunoseparation assay widely used for the measurement of remnant lipoprotein cholesterol. Methods: We evaluated the correlation and data validation between the 2 assays in 83 subjects (49 men and 34 women) without diabetes, hypertension, or medications for hyperlipidemia, diabetes, or hypertension, and investigated the characteristics of remnant lipoproteins obtained by the two methods (RLP-C and RemL-C) and their relationships with IDL-cholesterol determined by our HPLC method. Results: A significant positive correlation was found between the two methods (r = 0.853, 95% CI 0.781-0.903). RemL-C (r = 0.339, 95% CI 0.152-0.903; p = 0.0005) correlated significantly with IDL-cholesterol, but RLP-C did not (r = 0.17, 95% CI -0.047-0.372; p = 0.1237) across all samples (n = 83). Conclusion: These results suggest that there is generally a significant correlation between RemL-C and RLP-C. However, the RemL-C assay is likely to reflect IDL more closely than RLP-C.
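The reported confidence interval for the correlation between the two assays can be reproduced from r and n alone with the Fisher z-transformation, a standard formula (not specific to this paper), which serves as a quick consistency check on the reported numbers:

```python
import numpy as np

def pearson_ci(r, n, z_crit=1.96):
    """Approximate 95% confidence interval for a Pearson correlation via
    the Fisher z-transformation: z = atanh(r), SE = 1/sqrt(n - 3)."""
    z = np.arctanh(r)
    se = 1.0 / np.sqrt(n - 3)
    return np.tanh(z - z_crit * se), np.tanh(z + z_crit * se)

# Reported correlation between RemL-C and RLP-C: r = 0.853 with n = 83.
lo, hi = pearson_ci(0.853, 83)
print(round(lo, 3), round(hi, 3))  # 0.781 0.903, matching the reported CI
```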
Application of unsupervised learning methods in high energy physics
Energy Technology Data Exchange (ETDEWEB)
Koevesarki, Peter; Nuncio Quiroz, Adriana Elizabeth; Brock, Ian C. [Physikalisches Institut, Universitaet Bonn, Bonn (Germany)
2011-07-01
High energy physics is home to a variety of multivariate techniques, mainly due to the fundamentally probabilistic behaviour of nature. These methods generally require training based on some theory in order to discriminate a known signal from a background. Nevertheless, new physics can show itself in ways that no one previously thought about, and in these cases conventional methods give little or no help. A possible way to discriminate between known processes (like vector boson or top-quark production) or to look for new physics is to use unsupervised machine learning to extract the features of the data. A technique was developed, based on the combination of neural networks and the method of principal curves, to find a parametrisation of the non-linear correlations of the data. The feasibility of the method is shown on ATLAS data.
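As a minimal, linear stand-in for the principal-curve idea described above (principal component analysis is its linear special case), the sketch below extracts the dominant correlation direction from toy two-feature data; the data here are synthetic, not ATLAS events:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy "event" data: two features correlated along the direction (1, 2),
# plus Gaussian noise (illustrative only).
n = 1000
t = rng.normal(size=n)
X = np.column_stack([t, 2.0 * t]) + rng.normal(0.0, 0.1, size=(n, 2))

# PCA: principal directions are eigenvectors of the covariance matrix.
Xc = X - X.mean(axis=0)
cov = Xc.T @ Xc / (n - 1)
eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
leading = eigvecs[:, -1]

# The leading direction should align with (1, 2)/sqrt(5).
alignment = np.abs(leading @ np.array([1.0, 2.0]) / np.sqrt(5.0))
print(alignment)  # ≈ 1.0
```

A principal curve generalizes this by letting the fitted one-dimensional manifold bend, which is what the neural-network parametrisation in the abstract provides.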
Zhang, Meixiang; Ngo, Justine; Pirozzi, Filomena; Sun, Ying-Pu; Wynshaw-Boris, Anthony
2018-03-15
Embryonic stem cells (ESCs) and induced pluripotent stem cells (iPSCs) have been widely used to generate cellular models harboring specific disease-related genotypes. Of particular importance are ESC and iPSC applications capable of producing dorsal telencephalic neural progenitor cells (NPCs) that are representative of the cerebral cortex and overcome the challenges of maintaining a homogeneous population of cortical progenitors over several passages in vitro. While previous studies were able to derive NPCs from pluripotent cell types, the fraction of dorsal NPCs in this population is small and decreases over several passages. Here, we present three protocols that are highly efficient in differentiating mouse and human ESCs, as well as human iPSCs, into a homogeneous and stable population of dorsal NPCs. These protocols will be useful for modeling cerebral cortical neurological and neurodegenerative disorders in both mouse and human as well as for high-throughput drug screening for therapeutic development. We optimized three different strategies for generating dorsal telencephalic NPCs from mouse and human pluripotent cell types through single or double inhibition of bone morphogenetic protein (BMP) and/or SMAD pathways. Mouse and human pluripotent cells were aggregated to form embryoid bodies in suspension and were treated with dorsomorphin alone (BMP inhibition) or combined with SB431542 (double BMP/SMAD inhibition) during neural induction. Neural rosettes were then selected from plated embryoid bodies to purify the population of dorsal NPCs. We tested the expression of key dorsal NPC markers as well as nonectodermal markers to confirm the efficiency of our three methods in comparison to published and commercial protocols. Single and double inhibition of BMP and/or SMAD during neural induction led to the efficient differentiation of dorsal NPCs, based on the high percentage of PAX6-positive cells and the NPC gene expression profile. There were no statistically
Investigations into homogenization of electromagnetic metamaterials
DEFF Research Database (Denmark)
Clausen, Niels Christian Jerichau
This dissertation encompasses homogenization methods, with special interest in their applications to metamaterial homogenization. The first method studied is the Floquet-Bloch method, which is based on the assumption of a material being infinitely periodic. Its field can then be expanded in term...
International Nuclear Information System (INIS)
Premadas, A.; Cyriac, Bincy; Kesavan, V.S.
2013-01-01
Oxalate precipitation of lanthanides in acidic medium is a widely used selective group-separation method, at percentage to trace levels, for different types of geological samples. Most procedures are based on heterogeneous oxalate precipitation of lanthanides using calcium as carrier. In heterogeneous precipitation, more impurities are co-precipitated from the matrix elements; moreover, if the pH at the time of precipitation is not monitored carefully, some of the lanthanides may be lost. In this report, we present a new homogeneous oxalate precipitation of trace-level lanthanides from different types of geological samples using calcium as carrier. In the present method, the pH adjusts itself (pH ∼1) through the hydrolysis of urea added to the sample solution. This acidic pH is essential for the complete precipitation of the lanthanides; therefore, no critical parameter adjustment is involved in the proposed method. The oxalate precipitate obtained is crystalline, which facilitates fast settling and easy filtration; in addition, far fewer matrix elements are co-precipitated than in normal heterogeneous oxalate precipitation of lanthanides. Another advantage is that a larger quantity of sample can be taken for the separation of the lanthanides, which is a limitation of other reported separation methods. The accuracy of the method was checked by analyzing nine international reference materials comprising different types of geological samples obtained from the Canadian Certified Reference Materials Project: syenite samples SY-2, SY-3 and SY-4; gabbro sample MRG-1; soil samples SO-1 and SO-2; iron formation sample FeR-2; and lake sediments LKSD-2 and LKSD-4. The lanthanide values obtained for these reference materials are comparable with the recommended values, indicating that the method is accurate. The reproducibility is characterized by a relative standard deviation (RSD) of 1 to 6% (n=4). (author)
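The reproducibility figure quoted above is simply the relative standard deviation of replicate determinations; a minimal sketch, using hypothetical replicate concentrations (not values from the reference materials):

```python
import statistics

def relative_std_dev(values):
    """Relative standard deviation (RSD, %) of replicate determinations."""
    mean = statistics.mean(values)
    return 100.0 * statistics.stdev(values) / mean  # sample SD, n-1 denominator

# Hypothetical replicate results (n = 4) for one lanthanide in one sample
replicates = [24.8, 25.1, 24.6, 25.3]
rsd = relative_std_dev(replicates)  # ≈ 1.2 %
```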
International Nuclear Information System (INIS)
Rezaei, Farahnaz; Hosseini, Mohammad-Reza Milani
2011-01-01
Highlights: → Ultrasonic assisted miniaturized matrix solid-phase dispersion combined with HLLE was developed as a new method for the extraction of OCPs in fish. → The goal of this combination was to enhance the selectivity of the HLLE procedure and to extend its application to biological samples. → The method offers the advantages of good detection limits and low consumption of reagents, and does not need any special instrumentation. - Abstract: In this study, ultrasonic assisted miniaturized matrix solid-phase dispersion (US-MMSPD) combined with homogeneous liquid-liquid extraction (HLLE) has been developed as a new method for the extraction of organochlorine pesticides (OCPs) in fish prior to gas chromatography with electron capture detection (GC-ECD). In the proposed method, OCPs (heptachlor, aldrin, DDE, DDD, lindane and endrin) were first extracted from the fish sample into acetonitrile by the US-MMSPD procedure, and the extract was then used as consolute solvent in the HLLE process. Optimal conditions for the US-MMSPD step were as follows: volume of acetonitrile, 1.5 mL; temperature of ultrasound, 40 °C; time of ultrasound, 10 min. For the HLLE step, optimal results were obtained under the following conditions: volume of chloroform, 35 μL; volume of aqueous phase, 1.5 mL; volume of double distilled water, 0.5 mL; time of centrifugation, 10 min. Under the optimum conditions, the enrichment factors for the studied compounds were in the range of 185-240, and the overall recoveries ranged from 39.1% to 81.5%. The limits of detection were 0.4-1.2 ng g⁻¹, and the relative standard deviations for 20 ng g⁻¹ of the OCPs varied from 3.2% to 8% (n = 4). Finally, the proposed method has been successfully applied to the analysis of the OCPs in a real fish sample, and satisfactory results were obtained.
Functionality and homogeneity.
2011-01-01
Functionality and homogeneity are two of the five Sustainable Safety principles. The functionality principle aims for roads to have but one exclusive function and distinguishes between traffic function (flow) and access function (residence). The homogeneity principle aims at differences in mass,
Spontaneous compactification to homogeneous spaces
International Nuclear Information System (INIS)
Mourao, J.M.
1988-01-01
The spontaneous compactification of extra dimensions to compact homogeneous spaces is studied. The methods developed within the framework of coset space dimensional reduction scheme and the most general form of invariant metrics are used to find solutions of spontaneous compactification equations
Exercises and problems in mathematical methods of physics
Cicogna, Giampaolo
2018-01-01
This book presents exercises and problems in the mathematical methods of physics with the aim of offering undergraduate students an alternative way to explore and fully understand the mathematical notions on which modern physics is based. The exercises and problems are proposed not in a random order but rather in a sequence that maximizes their educational value. Each section and subsection starts with exercises based on first definitions, followed by groups of problems devoted to intermediate and, subsequently, more elaborate situations. Some of the problems are unavoidably "routine", but others bring to the fore nontrivial properties that are often omitted or barely mentioned in textbooks. There are also problems where the reader is guided to obtain important results that are usually stated in textbooks without complete proofs. In all, some 350 solved problems covering all mathematical notions useful to physics are included. While the book is intended primarily for undergraduate students of physics, students...
Toward whole-core neutron transport without spatial homogenization
International Nuclear Information System (INIS)
Lewis, E. E.
2009-01-01
Full text of publication follows: A long-term goal of computational reactor physics is the deterministic analysis of power reactor core neutronics without incurring significant discretization errors in the energy, spatial or angular variables. In principle, given large enough parallel configurations with unlimited CPU time and memory, this goal could be achieved using existing three-dimensional neutron transport codes. In practice, however, solving the Boltzmann equation for neutrons over the six-dimensional phase space is made intractable by the nature of neutron cross sections and the complexity and size of power reactor cores. Tens of thousands of energy groups would be required for faithful cross section representation. Likewise, the numerous material interfaces present in power reactor lattices require exceedingly fine spatial mesh structures; these ubiquitous interfaces preclude effective implementation of adaptive-grid, mesh-less methods and related techniques that have been applied so successfully in other areas of engineering science. These challenges notwithstanding, substantial progress continues in the pursuit of more robust deterministic methods for whole-core neutronics analysis. This paper examines the progress over roughly the last decade, emphasizing the space-angle variables and the quest to eliminate errors attributable to spatial homogenization. As prologue we briefly assess 1990s methods used in light water reactor analysis and review the lessons learned from the C5G7 benchmark exercises, which originated in 1999 to appraise the ability of transport codes to perform core calculations without homogenization. We proceed by examining progress over the last decade, much of which falls into three areas. These may be broadly characterized as reduced homogenization, dynamic homogenization and planar-axial synthesis. In the first, homogenization in three-dimensional calculations is reduced from the fuel assembly to the pin-cell level. In the second
An introduction to computer simulation methods applications to physical systems
Gould, Harvey; Christian, Wolfgang
2007-01-01
Now in its third edition, this book teaches physical concepts using computer simulations. The text incorporates object-oriented programming techniques and encourages readers to develop good programming habits in the context of doing physics. Designed for readers at all levels, An Introduction to Computer Simulation Methods uses Java, currently the most popular programming language. Introduction, Tools for Doing Simulations, Simulating Particle Motion, Oscillatory Systems, Few-Body Problems: The Motion of the Planets, The Chaotic Motion of Dynamical Systems, Random Processes, The Dynamics of Many Particle Systems, Normal Modes and Waves, Electrodynamics, Numerical and Monte Carlo Methods, Percolation, Fractals and Kinetic Growth Models, Complex Systems, Monte Carlo Simulations of Thermal Systems, Quantum Systems, Visualization and Rigid Body Dynamics, Seeing in Special and General Relativity, Epilogue: The Unity of Physics For all readers interested in developing programming habits in the context of doing phy...
Application of econometric and ecology analysis methods in physics software
Han, Min Cheol; Hoff, Gabriela; Kim, Chan Hyeong; Kim, Sung Hun; Grazia Pia, Maria; Ronchieri, Elisabetta; Saracco, Paolo
2017-10-01
Some data analysis methods typically used in econometric studies and in ecology have been evaluated and applied in physics software environments. They concern the evolution of observables through objective identification of change points and trends, and measurements of inequality, diversity and evenness across a data set. Within each analysis area, various statistical tests and measures have been examined. This conference paper summarizes a brief overview of some of these methods.
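One of the inequality measures commonly borrowed from econometrics is the Gini coefficient; a minimal sketch of how such a measure is computed over a data set (the sample values are hypothetical):

```python
def gini(values):
    """Gini coefficient: 0 = perfect equality, approaching 1 = maximal inequality."""
    xs = sorted(values)
    n = len(xs)
    total = sum(xs)
    # Mean-difference formulation: G = sum_i (2i - n - 1) * x_i / (n * sum(x))
    weighted = sum((2 * i - n - 1) * x for i, x in enumerate(xs, start=1))
    return weighted / (n * total)

equal = gini([5, 5, 5, 5])      # perfectly even distribution -> 0.0
skewed = gini([1, 1, 1, 97])    # one dominant contributor -> 0.72
```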
Monte Carlo methods and applications in nuclear physics
International Nuclear Information System (INIS)
Carlson, J.
1990-01-01
Monte Carlo methods for studying few- and many-body quantum systems are introduced, with special emphasis given to their applications in nuclear physics. Variational and Green's function Monte Carlo methods are presented in some detail. The status of calculations of light nuclei is reviewed, including discussions of the three-nucleon interaction, charge and magnetic form factors, the Coulomb sum rule, and studies of low-energy radiative transitions. 58 refs., 12 figs
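The variational Monte Carlo idea mentioned above can be illustrated on the simplest possible system; a sketch for the 1D harmonic oscillator with a Gaussian trial wavefunction (a textbook toy, not the nuclear calculations of the paper):

```python
import math
import random

def vmc_energy(alpha, steps=20000, step_size=1.0, seed=1):
    """Variational Monte Carlo for the 1D harmonic oscillator (hbar=m=omega=1)
    with trial wavefunction psi(x) ~ exp(-alpha*x^2). The local energy is
    E_L(x) = alpha + x^2 * (1/2 - 2*alpha^2)."""
    rng = random.Random(seed)
    x, e_sum = 0.0, 0.0
    for _ in range(steps):
        x_new = x + rng.uniform(-step_size, step_size)
        # Metropolis acceptance on |psi|^2 = exp(-2*alpha*x^2)
        if rng.random() < math.exp(-2.0 * alpha * (x_new ** 2 - x ** 2)):
            x = x_new
        e_sum += alpha + x * x * (0.5 - 2.0 * alpha * alpha)
    return e_sum / steps

# alpha = 0.5 is the exact ground state: E_L is constant, so E = 0.5 exactly
energy = vmc_energy(0.5)
```

Away from the optimal alpha the sampled energy rises above 0.5, which is the variational principle at work.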
Numerical perturbative methods in the quantum theory of physical systems
International Nuclear Information System (INIS)
Adam, G.
1980-01-01
During the last two decades, the development of digital electronic computers has led to the deployment of new, distinct methods in theoretical physics. These methods, based on the advances of modern numerical analysis as well as on specific equations describing physical processes, have made it possible to perform precise calculations of high complexity which have completed and sometimes changed our image of many physical phenomena. Our efforts have concentrated on the development of numerical methods with such intrinsic performance as to allow a successful approach to some key issues in present theoretical physics on smaller computation systems. The basic principle of such methods is to translate, into the language of numerical analysis, the theory of perturbations, which is suited to numerical rather than to analytical computation. This idea has been illustrated by working out two problems which arise from the time-independent Schroedinger equation in the non-relativistic approximation, for quantum systems with a small number of particles and for systems with a large number of particles, respectively. In the first case, we are led to the numerical solution of some quadratic ordinary differential equations (first section of the thesis) and, in the second case, to the solution of some secular equations in the Brillouin zone (second section). (author)
Models and methods can theory meet the B physics challenge?
Nierste, U
2004-01-01
The B physics experiments of the next generation, BTeV and LHCb, will perform measurements with an unprecedented accuracy. Theory predictions must control hadronic uncertainties with the same precision to extract the desired short-distance information successfully. I argue that this is indeed possible, discuss those theoretical methods in which hadronic uncertainties are under control and list hadronically clean observables.
An entrepreneurial physics method and its experimental test
Brown, Robert
2012-02-01
As faculty in a master's program for entrepreneurial physics and in an applied physics PhD program, I have advised upwards of 40 master's and doctoral theses in industrial physics. I have been closely involved with four robust start-up manufacturing companies focused on physics high technology, and I have spent 30 years collaborating with industrial physicists on research and development. Thus I am in a position to reflect on many articles and advice columns centered on entrepreneurship. What about the goals, strategies, resources, skills, and the 10,000 hours needed to be an entrepreneur? What about business plans, partners, financing, patents, networking, salesmanship and regulatory affairs? What about learning new technology, how to solve problems and, in fact, learning innovation itself? At this point, I have my own method to propose to physicists in academia for incorporating entrepreneurship into their research lives. With this method, we do not start with a major invention or discovery, or even with a search for one. The method is based on the training we have, and the teaching we do (even quantum electrodynamics!), as physicists. It is based on the networking we build by 1) providing courses of continuing education for people working in industry and 2) through our undergraduate as well as graduate students who have gone on to work in industry. In fact, if we were limited to two words to describe the method, they are "former students." Data from local and international medical imaging manufacturing industry are presented.
Abe, M.; Murata, Y.; Iinuma, H.; Ogitsu, T.; Saito, N.; Sasaki, K.; Mibe, T.; Nakayama, H.
2018-05-01
A magnetic field design method for magneto-motive force (coil block (CB) and iron yoke) placements for g-2/EDM measurements has been developed, and candidate placements were designed under superconducting limitations of current density 125 A/mm² and maximum magnetic field on the CBs of less than 5.5 T. Placements of the CBs and an iron yoke with poles were determined by tuning SVD (singular value decomposition) eigenmode strengths. The SVD was applied to a response matrix from the magneto-motive forces to the magnetic field in the muon storage region, and two-dimensional (2D) placements of the magneto-motive forces were designed by tuning the magnetic-field eigenmode strengths. The tuning was performed iteratively. Magnetic field ripples in the azimuthal direction were minimized in the design. The candidate magnet design had five CBs and an iron yoke with center iron poles. The magnet satisfied the specifications of homogeneity: 0.2 ppm peak-to-peak in the 2D placements (in the cylindrical coordinates of radial position R and axial position Z) and less than 1.0 ppm ripple in the ring muon storage volume (0.318 m, 0.0 m) for the spiral muon injection through the iron yoke at the top.
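The SVD-based tuning described above can be sketched in outline: decompose the response matrix, express the field error in the eigenmode basis, and adjust only the well-conditioned modes. The matrix and field values below are random stand-ins, not the magnet data:

```python
import numpy as np

# Hypothetical linear model: rows = field-sample points in the storage
# region, columns = coil-block (CB) current adjustments.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 5))
field_error = 1e-4 * rng.standard_normal(40)   # ripple to be cancelled

U, s, Vt = np.linalg.svd(A, full_matrices=False)
strengths = U.T @ field_error                  # eigenmode strengths of the error
keep = s > 0.1 * s[0]                          # drop ill-conditioned modes
corrections = Vt.T @ np.where(keep, strengths / s, 0.0)
residual = field_error - A @ corrections       # never larger than the input error
```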
XXXIV Bialowieza Workshop on Geometric Methods in Physics
Ali, S; Bieliavsky, Pierre; Odzijewicz, Anatol; Schlichenmaier, Martin; Voronov, Theodore
2016-01-01
This book features a selection of articles based on the XXXIV Białowieża Workshop on Geometric Methods in Physics, 2015. The articles presented are mathematically rigorous, include important physical implications and address the application of geometry in classical and quantum physics. The session devoted to discussions of Gerard Emch's most important and lasting achievements in mathematical physics deserves special attention. The Białowieża workshops are among the most important meetings in the field and gather participants from mathematics and physics alike. Despite their long tradition, the Workshops remain at the cutting edge of ongoing research. For the past several years, the Białowieża Workshop has been followed by a School on Geometry and Physics, where advanced lectures for graduate students and young researchers are presented. The unique atmosphere of the Workshop and School is enhanced by the venue, framed by the natural beauty of the Białowieża forest in eastern Poland.
EFFECT OF DIFFERENT PHYSICAL ACTIVITY TRAINING METHODS ON OVERWEIGHT ADOLESCENTS
Directory of Open Access Journals (Sweden)
Shohreh Ghatrehsamani
2010-11-01
BACKGROUND: In view of the growing trend of obesity around the world, including in our country, the effect of reduced physical activity on the incidence of obesity and overweight in children and adolescents, the limitations families face in providing transport for their children to attend exercise classes, and the time limitations of students taking part in these classes, finding appropriate methods of delivering physical activity training seems essential. METHODS: This non-pharmacological clinical trial was performed over six months, from May to November 2007, on 105 children and adolescents aged 6-18 years with obesity, randomly assigned to 3 groups of thirty-five. Nutrition and treatment behavior were the same in all groups, but physical activity training was delivered in the first group by attending physical activity training classes twice a week, in the second group by providing a training CD, and in the third group via face-to-face training. Before and after the intervention, anthropometric indicators were measured and recorded. RESULTS: Mean body mass index (BMI) of participants in the group that attended physical activity training classes and in the group trained with the CD was significantly lower after the intervention than before. CONCLUSION: Our findings demonstrated that training using CDs can be as effective in reducing BMI in overweight and obese children and adolescents as face-to-face education and participation in physical training classes. Extending such interventions can be effective at the community level. Keywords: Children, adolescents, physical activity, education, obesity, treatment.
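The anthropometric outcome tracked in the trial is BMI, weight divided by the square of height; a one-line sketch with hypothetical participant data:

```python
def bmi(weight_kg, height_m):
    """Body mass index: weight (kg) divided by height (m) squared."""
    return weight_kg / height_m ** 2

# Hypothetical adolescent participant, before and after the intervention
bmi_before = bmi(78.0, 1.65)   # ≈ 28.65
bmi_after = bmi(74.0, 1.65)    # ≈ 27.18
```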
A literature review on biotic homogenization
Guangmei Wang; Jingcheng Yang; Chuangdao Jiang; Hongtao Zhao; Zhidong Zhang
2009-01-01
Biotic homogenization is the process whereby the genetic, taxonomic and functional similarity of two or more biotas increases over time. As a new research agenda for conservation biogeography, biotic homogenization has become a rapidly emerging topic of interest in ecology and evolution over the past decade. However, research on this topic is rare in China. Herein, we introduce the development of the concept of biotic homogenization, and then discuss methods to quantify its three components (...
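Taxonomic similarity, one of the three components mentioned, is often quantified with the Jaccard index; a minimal sketch with hypothetical species lists:

```python
def jaccard(a, b):
    """Jaccard similarity of two species sets (taxonomic similarity)."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b)

# Homogenization means the similarity of two biotas increases over time,
# e.g. after a widespread introduction "inv1" replaces local endemics.
before = jaccard({"sp1", "sp2", "sp3"}, {"sp3", "sp4", "sp5"})           # 0.2
after = jaccard({"sp1", "sp2", "sp3", "inv1"}, {"sp3", "sp4", "inv1"})   # 0.4
homogenized = after > before
```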
Applications of Symmetry Methods to the Theory of Plasma Physics
Directory of Open Access Journals (Sweden)
Giampaolo Cicogna
2006-02-01
The theory of plasma physics offers a number of nontrivial examples of partial differential equations, which can be successfully treated with symmetry methods. We propose three different examples which may illustrate the reciprocal advantage of this "interaction" between plasma physics and symmetry techniques. The examples include, in particular, the complete symmetry analysis of a system of two PDEs, with the determination of some conditional and partial symmetries, the construction of group-invariant solutions, and the symmetry classification of a nonlinear PDE.
Academic Training Lecture: Statistical Methods for Particle Physics
PH Department
2012-01-01
2, 3, 4 and 5 April 2012 Academic Training Lecture Regular Programme from 11:00 to 12:00 - Bldg. 222-R-001 - Filtration Plant Statistical Methods for Particle Physics by Glen Cowan (Royal Holloway) The series of four lectures will introduce some of the important statistical methods used in Particle Physics, and should be particularly relevant to those involved in the analysis of LHC data. The lectures will include an introduction to statistical tests, parameter estimation, and the application of these tools to searches for new phenomena. Both frequentist and Bayesian methods will be described, with particular emphasis on treatment of systematic uncertainties. The lectures will also cover unfolding, that is, estimation of a distribution in binned form where the variable in question is subject to measurement errors.
Salari, Marjan; Rakhshandehroo, Gholam Reza; Nikoo, Mohammad Reza
2018-09-01
The main purpose of this experimental study was to optimize homogeneous Fenton oxidation (HFO) and identify the oxidation by-products from degradation of ciprofloxacin (CIP) using hybrid AHP-PROMETHEE, response surface methodology (RSM) and high performance liquid chromatography coupled with mass spectrometry (HPLC-MS). In the first step, the performances of two catalysts (FeSO4·7H2O and FeCl2·4H2O) were assessed with the hybrid AHP-PROMETHEE decision-making method. Then, RSM was used to examine and optimize the influence of different variables, including initial CIP concentration, Fe2+ concentration, [H2O2]/[Fe2+] mole ratio and initial pH as independent variables, on CIP removal, COD removal, and the sludge-to-iron ratio (SIR) as the response functions in a reaction time of 25 min. Weights of these responses, as well as cost criteria, were determined by the AHP model based on pairwise comparison and then used as inputs to the PROMETHEE method to form the hybrid AHP-PROMETHEE. Based on the net-flow results of this hybrid model, FeCl2·4H2O was more efficient because of its lower environmental stability as well as lower SIR production. Optimization of the experiments using central composite design (CCD) under RSM was then performed with the FeCl2·4H2O catalyst. Biodegradability of the wastewater was determined in terms of the BOD5/COD ratio, showing that the HFO process is able to improve wastewater biodegradability from zero to 0.42. Finally, the main intermediates of degradation and the degradation pathways of CIP were investigated with HPLC-MS. Major degradation pathways involving hydroxylation of both the piperazine and quinolonic rings, oxidation and cleavage of the piperazine ring, and defluorination (OH/F substitution) were suggested. Copyright © 2018 Elsevier Ltd. All rights reserved.
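The PROMETHEE II net-flow ranking used to compare the two catalysts can be sketched as follows; the decision table, criterion signs and AHP-style weights below are illustrative stand-ins, not the study's data:

```python
import numpy as np

def promethee_net_flows(table, weights):
    """PROMETHEE II net flows with the 'usual' preference function
    (preference = 1 when one alternative is strictly better on a criterion)."""
    n = table.shape[0]
    pi = np.zeros((n, n))  # aggregated preference of alternative i over j
    for i in range(n):
        for j in range(n):
            if i != j:
                pi[i, j] = np.sum(weights * (table[i] > table[j]))
    phi_plus = pi.sum(axis=1) / (n - 1)    # outgoing flow
    phi_minus = pi.sum(axis=0) / (n - 1)   # incoming flow
    return phi_plus - phi_minus

# Illustrative decision table: rows = catalysts, columns = criteria.
# Cost-type criteria (SIR, cost) are entered with a negative sign so that
# "larger is better" holds for every column.
table = np.array([[88.0, 70.0, -0.30, -1.2],   # FeSO4.7H2O (hypothetical)
                  [90.0, 72.0, -0.22, -1.0]])  # FeCl2.4H2O (hypothetical)
weights = np.array([0.4, 0.3, 0.2, 0.1])       # e.g. from AHP pairwise comparisons
net = promethee_net_flows(table, weights)      # higher net flow ranks first
```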
Monte Carlo methods for medical physics a practical introduction
Schuemann, Jan; Paganetti, Harald
2018-01-01
The Monte Carlo (MC) method, established as the gold standard to predict results of physical processes, is now fast becoming a routine clinical tool for applications that range from quality control to treatment verification. This book provides a basic understanding of the fundamental principles and limitations of the MC method in the interpretation and validation of results for various scenarios. It shows how user-friendly and speed optimized MC codes can achieve online image processing or dose calculations in a clinical setting. It introduces this essential method with emphasis on applications in hardware design and testing, radiological imaging, radiation therapy, and radiobiology.
An unfolding method for high energy physics experiments
International Nuclear Information System (INIS)
Blobel, V.
2002-06-01
Finite detector resolution and limited acceptance require one to apply unfolding methods in high energy physics experiments. Information on the detector resolution is usually given by a set of Monte Carlo events. Based on the experience with a widely used unfolding program (RUN) a modified method has been developed. The first step of the method is a maximum likelihood fit of the Monte Carlo distributions to the measured distribution in one, two or three dimensions; the finite statistics of the Monte Carlo events is taken into account by the use of Barlow's method with a new method of solution. A clustering method is used before combining bins in sparsely populated areas. In the second step a regularization is applied to the solution, which introduces only a small bias. The regularization parameter is determined from the data after a diagonalization and rotation procedure. (orig.)
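The second step, regularized unfolding, can be sketched with a Tikhonov (curvature) penalty on a toy smearing problem; this is a generic illustration, not the RUN program's algorithm:

```python
import numpy as np

def unfold_tikhonov(R, measured, tau):
    """Solve min_x ||R x - measured||^2 + tau * ||L x||^2, with L a
    second-difference (curvature) operator acting as the regularizer."""
    n = R.shape[1]
    L = np.zeros((n - 2, n))
    for i in range(n - 2):
        L[i, i:i + 3] = [1.0, -2.0, 1.0]
    lhs = R.T @ R + tau * (L.T @ L)
    rhs = R.T @ measured
    return np.linalg.solve(lhs, rhs)

# Toy setup: Gaussian smearing (detector resolution) applied to a smooth
# true spectrum; all numbers are illustrative.
n = 30
xs = np.arange(n)
R = np.exp(-0.5 * ((xs[:, None] - xs[None, :]) / 2.0) ** 2)
R /= R.sum(axis=0)                      # column-normalized response matrix
true = np.exp(-0.5 * ((xs - 15) / 4.0) ** 2)
y = R @ true                            # noiseless "measured" distribution
unfolded = unfold_tikhonov(R, y, tau=1e-3)
```

The regularization parameter tau trades a small bias for suppression of the oscillations that a naive matrix inversion would amplify.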
Method for Determining the Sensitivity of a Physical Security System.
Energy Technology Data Exchange (ETDEWEB)
Speed, Ann [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Gauthier, John H. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Hoffman, Matthew John [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Wachtel, Amanda [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Kittinger, Robert Scott [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Munoz-Ramos, Karina [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
2017-05-23
Modern systems, such as physical security systems, are often designed to involve complex interactions of technological and human elements. Evaluation of the performance of these systems often overlooks the human element. A method is proposed here to expand the concept of sensitivity, as denoted by d', from signal detection theory (Green & Swets 1966; Macmillan & Creelman 2005), which came out of the field of psychophysics, to cover not only human threat detection but also other human functions plus the performance of technical systems in a physical security system, thereby including humans in the overall evaluation of system performance. New in this method is the idea that probabilities of hits (accurate identification of threats) and false alarms (saying "threat" when there is not one), which are used to calculate the d' of the system, can be applied to technologies and, furthermore, to different functions in the system beyond simple yes-no threat detection. At the most succinct level, the method returns a single number that represents the effectiveness of a physical security system; specifically, the balance between the handling of actual threats and the distraction of false alarms. The method can be automated, and the constituent parts revealed, such that given an interaction graph that indicates the functional associations of system elements and the individual probabilities of hits and false alarms for those elements, it will return the d' of the entire system as well as d' values for individual parts. The method can also return a measure of the response bias of the system. One finding of this work is that the d' for a physical security system can be relatively poor in spite of having excellent d's for each of its individual functional elements.
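The signal-detection quantities underlying the method are straightforward to compute; a sketch of d' and the response bias (criterion c) for a single functional element, with hypothetical hit and false-alarm rates:

```python
from statistics import NormalDist

def d_prime(hit_rate, false_alarm_rate):
    """Sensitivity d' = z(hit rate) - z(false-alarm rate)."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(false_alarm_rate)

def response_bias(hit_rate, false_alarm_rate):
    """Criterion c; negative values mean a liberal ('say threat') bias."""
    z = NormalDist().inv_cdf
    return -0.5 * (z(hit_rate) + z(false_alarm_rate))

# Hypothetical element: 90% of true threats flagged, 20% false-alarm rate
sensitivity = d_prime(0.90, 0.20)   # ≈ 2.12
bias = response_bias(0.90, 0.20)    # negative: slightly liberal
```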
Methods of Efficient Study Habits and Physics Learning
Zettili, Nouredine
2010-02-01
We want to discuss the methods of efficient study habits and how they can be used by students to help them improve learning physics. In particular, we deal with the most efficient techniques needed to help students improve their study skills. We focus on topics such as the skills of how to develop long term memory, how to improve concentration power, how to take class notes, how to prepare for and take exams, how to study scientific subjects such as physics. We argue that the students who conscientiously use the methods of efficient study habits achieve higher results than those students who do not; moreover, a student equipped with the proper study skills will spend much less time to learn a subject than a student who has no good study habits. The underlying issue here is not the quantity of time allocated to the study efforts by the students, but the efficiency and quality of actions so that the student can function at peak efficiency. These ideas were developed as part of Project IMPACTSEED (IMproving Physics And Chemistry Teaching in SEcondary Education), an outreach grant funded by the Alabama Commission on Higher Education. This project is motivated by a major pressing local need: A large number of high school physics teachers teach out of field.
SuPer-Homogenization (SPH) Corrected Cross Section Generation for High Temperature Reactor
Energy Technology Data Exchange (ETDEWEB)
Sen, Ramazan Sonat [Idaho National Lab. (INL), Idaho Falls, ID (United States); Hummel, Andrew John [Idaho National Lab. (INL), Idaho Falls, ID (United States); Hiruta, Hikaru [Idaho National Lab. (INL), Idaho Falls, ID (United States)
2017-03-01
Deterministic full-core simulators require homogenized group constants covering the operating and transient conditions over the entire lifetime. Traditionally, the homogenized group constants are generated using a lattice physics code over an assembly, or a block in the case of prismatic high temperature reactors (HTRs). Strong absorbers that cause strong local depressions in the flux profile require special techniques during homogenization over a large volume; fuel blocks with burnable poisons and control rod blocks are examples of such cases. Over the past several decades, a tremendous number of studies have been performed on improving the accuracy of full-core calculations through the homogenization procedure. However, those studies were mostly performed for light water reactor (LWR) analyses and thus may not be directly applicable to advanced thermal reactors such as HTRs. This report presents the application of the SuPer-Homogenization correction method to a hypothetical HTR core.
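The flux-volume weighting that underlies homogenized group constants, together with the SPH correction factor applied on top of it, can be sketched as follows (a schematic of the general technique, not the report's implementation; all names and values are illustrative):

```python
def homogenize(sigmas, fluxes, volumes):
    """Flux-volume weighted cross section over the heterogeneous regions:
    preserves the total reaction rate given the heterogeneous flux."""
    rate = sum(s * f * v for s, f, v in zip(sigmas, fluxes, volumes))
    weight = sum(f * v for f, v in zip(fluxes, volumes))
    return rate / weight

def sph_factor(phi_het, phi_hom):
    """SPH factor mu = phi_het / phi_hom; Sigma_SPH = mu * Sigma_hom then
    preserves the heterogeneous reaction rate in the homogeneous calculation."""
    return phi_het / phi_hom

# Two regions: a strong absorber with a depressed flux next to ordinary fuel.
sigma_hom = homogenize([10.0, 1.0], [0.2, 1.0], [1.0, 3.0])   # 1.5625
```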
Theoretical physics 7 quantum mechanics : methods and applications
Nolting, Wolfgang
2017-01-01
This textbook offers a clear and comprehensive introduction to methods and applications in quantum mechanics, one of the core components of undergraduate physics courses. It follows on naturally from the previous volumes in this series, thus further developing the understanding of quantized states. The first part of the book introduces the quantum theory of angular momentum and approximation methods. More complex themes are covered in the second part of the book, which describes multiple particle systems and scattering theory. Ideally suited to undergraduate students with some grounding in the basics of quantum mechanics, the book is enhanced throughout with learning features such as boxed inserts and chapter summaries, with key mathematical derivations highlighted to aid understanding. The text is supported by numerous worked examples and end of chapter problem sets. About the Theoretical Physics series Translated from the renowned and highly successful German editions, the eight volumes of this seri...
Introduction to methods of approximation in physics and astronomy
van Putten, Maurice H P M
2017-01-01
This textbook provides students with a solid introduction to the techniques of approximation commonly used in data analysis across physics and astronomy. The choice of methods included is based on their usefulness and educational value, their applicability to a broad range of problems and their utility in highlighting key mathematical concepts. Modern astronomy reveals an evolving universe rife with transient sources, mostly discovered - few predicted - in multi-wavelength observations. Our window of observations now includes electromagnetic radiation, gravitational waves and neutrinos. For the practicing astronomer, these are highly interdisciplinary developments that pose a novel challenge to be well-versed in astroparticle physics and data-analysis. The book is organized to be largely self-contained, starting from basic concepts and techniques in the formulation of problems and methods of approximation commonly used in computation and numerical analysis. This includes root finding, integration, signal dete...
Learning physics: A comparative analysis between instructional design methods
Mathew, Easow
The purpose of this research was to determine if there were differences in academic performance between students who participated in traditional versus collaborative problem-based learning (PBL) instructional design approaches to physics curricula. This study utilized a quantitative quasi-experimental design methodology to determine the significance of differences in pre- and posttest introductory physics exam performance between students who participated in traditional (i.e., control group) versus collaborative problem solving (PBL) instructional design (i.e., experimental group) approaches to physics curricula over a college semester in 2008. There were 42 student participants (N = 42) enrolled in an introductory physics course at the research site in the Spring 2008 semester who agreed to participate in this study after reading and signing informed consent documents. A total of 22 participants were assigned to the experimental group (n = 22), which participated in a PBL-based teaching methodology along with traditional lecture methods. The other 20 students were assigned to the control group (n = 20), which participated in the traditional lecture teaching methodology. Both courses were taught by experienced professors with qualifications at the doctoral level. The results indicated statistically significant differences between the traditional approach (i.e., lower physics posttest scores and lower differences between pre- and posttest scores) and the collaborative (PBL) approach (i.e., higher physics posttest scores and higher differences between pre- and posttest scores) to physics curricula. Despite some slight differences in control group and experimental group demographic characteristics (gender, ethnicity, and age), there were statistically significant (p = .04) differences in average academic improvement, with female improvement much higher (~63%) than male improvement in the control group, which may indicate that traditional teaching methods
PhySIC: a veto supertree method with desirable properties.
Ranwez, Vincent; Berry, Vincent; Criscuolo, Alexis; Fabre, Pierre-Henri; Guillemot, Sylvain; Scornavacca, Celine; Douzery, Emmanuel J P
2007-10-01
This paper focuses on veto supertree methods; i.e., methods that aim at producing a conservative synthesis of the relationships agreed upon by all source trees. We propose desirable properties that a supertree should satisfy in this framework, namely the non-contradiction property (PC) and the induction property (PI). The former requires that the supertree does not contain relationships that contradict one or a combination of the source topologies, whereas the latter requires that all topological information contained in the supertree is present in a source tree or collectively induced by several source trees. We provide simple examples to illustrate their relevance and that allow a comparison with previously advocated properties. We show that these properties can be checked in polynomial time for any given rooted supertree. Moreover, we introduce the PhySIC method (PHYlogenetic Signal with Induction and non-Contradiction). For k input trees spanning a set of n taxa, this method produces a supertree that satisfies the above-mentioned properties in O(kn^3 + n^4) computing time. The polytomies of the produced supertree are also tagged by labels indicating areas of conflict as well as those with insufficient overlap. As a whole, PhySIC enables the user to quickly summarize consensual information of a set of trees and localize groups of taxa for which the data require consolidation. Lastly, we illustrate the behaviour of PhySIC on primate data sets of various sizes, and propose a supertree covering 95% of all primate extant genera. The PhySIC algorithm is available at http://atgc.lirmm.fr/cgi-bin/PhySIC.
Applications of Monte Carlo method in Medical Physics
International Nuclear Information System (INIS)
Diez Rios, A.; Labajos, M.
1989-01-01
The basic ideas of Monte Carlo techniques are presented. Random numbers and their generation by congruential methods, which underlie Monte Carlo calculations, are shown. Monte Carlo techniques to solve integrals are discussed, and the evaluation of a simple monodimensional integral with a known answer by means of two different Monte Carlo approaches is described. The basic principles of simulating photon histories on a computer and of reducing variance, along with current applications in Medical Physics, are commented on. (Author)
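A monodimensional integral with a known answer, evaluated by two different Monte Carlo approaches as described here, can be sketched as follows (the integrand, with exact value 1/3 on [0, 1], is an illustrative choice, not necessarily the authors'):

```python
import random

def mc_mean_value(f, a, b, n=100_000, seed=1):
    """Sample-mean estimator: (b - a) times the average of f at uniform points."""
    rng = random.Random(seed)
    return (b - a) * sum(f(a + (b - a) * rng.random()) for _ in range(n)) / n

def mc_hit_or_miss(f, a, b, f_max, n=100_000, seed=1):
    """Hit-or-miss estimator: fraction of random points falling under the curve,
    scaled by the area of the enclosing box (b - a) * f_max."""
    rng = random.Random(seed)
    hits = sum(rng.random() * f_max < f(a + (b - a) * rng.random())
               for _ in range(n))
    return (b - a) * f_max * hits / n

# Both estimators approach the exact value of the integral of x^2 on [0, 1].
```

The sample-mean estimator generally has lower variance than hit-or-miss for the same number of samples, which is one motivation for variance-reduction techniques.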
Continuum methods of physical modeling continuum mechanics, dimensional analysis, turbulence
Hutter, Kolumban
2004-01-01
The book unifies classical continuum mechanics and turbulence modeling, i.e. the same fundamental concepts are used to derive model equations for material behaviour and turbulence closure and complements these with methods of dimensional analysis. The intention is to equip the reader with the ability to understand the complex nonlinear modeling in material behaviour and turbulence closure as well as to derive or invent his own models. Examples are mostly taken from environmental physics and geophysics.
An optimization method for parameters in reactor nuclear physics
International Nuclear Information System (INIS)
Jachic, J.
1982-01-01
An optimization method for two basic problems of reactor physics was developed. The first is the optimization of the plutonium critical mass and the breeding ratio of fast reactors as a function of the radial enrichment distribution of the fuel, used as the control parameter. The second is the maximization of plutonium generation and burnup by optimization of the temporal power distribution. (E.G.) [pt
Statistical Methods for Particle Physics (4/4)
CERN. Geneva
2012-01-01
The series of four lectures will introduce some of the important statistical methods used in Particle Physics, and should be particularly relevant to those involved in the analysis of LHC data. The lectures will include an introduction to statistical tests, parameter estimation, and the application of these tools to searches for new phenomena. Both frequentist and Bayesian methods will be described, with particular emphasis on treatment of systematic uncertainties. The lectures will also cover unfolding, that is, estimation of a distribution in binned form where the variable in question is subject to measurement errors.
Statistical Methods for Particle Physics (1/4)
CERN. Geneva
2012-01-01
The series of four lectures will introduce some of the important statistical methods used in Particle Physics, and should be particularly relevant to those involved in the analysis of LHC data. The lectures will include an introduction to statistical tests, parameter estimation, and the application of these tools to searches for new phenomena. Both frequentist and Bayesian methods will be described, with particular emphasis on treatment of systematic uncertainties. The lectures will also cover unfolding, that is, estimation of a distribution in binned form where the variable in question is subject to measurement errors.
Statistical Methods for Particle Physics (2/4)
CERN. Geneva
2012-01-01
The series of four lectures will introduce some of the important statistical methods used in Particle Physics, and should be particularly relevant to those involved in the analysis of LHC data. The lectures will include an introduction to statistical tests, parameter estimation, and the application of these tools to searches for new phenomena. Both frequentist and Bayesian methods will be described, with particular emphasis on treatment of systematic uncertainties. The lectures will also cover unfolding, that is, estimation of a distribution in binned form where the variable in question is subject to measurement errors.
Statistical Methods for Particle Physics (3/4)
CERN. Geneva
2012-01-01
The series of four lectures will introduce some of the important statistical methods used in Particle Physics, and should be particularly relevant to those involved in the analysis of LHC data. The lectures will include an introduction to statistical tests, parameter estimation, and the application of these tools to searches for new phenomena. Both frequentist and Bayesian methods will be described, with particular emphasis on treatment of systematic uncertainties. The lectures will also cover unfolding, that is, estimation of a distribution in binned form where the variable in question is subject to measurement errors.
Comparison of Chemical and Physical-chemical Wastewater Discoloring Methods
Directory of Open Access Journals (Sweden)
Durašević, V.
2007-11-01
Today's chemical and physical-chemical wastewater discoloration methods do not completely meet demands regarding the degree of discoloration. In this paper, discoloration was performed using the Fenton (FeSO4 · 7 H2O + H2O2 + H2SO4) and Fenton-like (FeCl3 · 6 H2O + H2O2 + HCOOH) chemical methods and a physical-chemical coagulation/flocculation method using a poly-electrolyte (POEL), combining an anion-active coagulant (modified poly-acrylamides) and a cationic flocculant (a product of nitrogen compounds), in combination with adsorption on activated carbon. The suitability of the aforementioned methods was investigated on reactive and acid dyes, these being the most commonly used in the textile industry. Investigations were also carried out on dyes of different chromogens (anthraquinone, phthalocyanine, azo and xanthene) in order to determine the importance of molecular spatial structure. The oxidative effect of the Fenton and Fenton-like reagents resulted in decomposition of the colored chromogen and a high degree of discoloration. A problem, however, is the inability to add POEL in stoichiometric ratio (also present in physical-chemical methods): coagulants are overdosed in order to obtain a higher degree of discoloration, creating a potential danger of burdening the water with POEL. Input and output water quality was controlled through spectrophotometric measurements and standard biological parameters. In addition, part of the investigation concerned industrial wastewaters obtained from dyeing cotton materials using a reactive dye (C. I. Reactive Blue 19), a process that demands the use of vast amounts of electrolytes. The investigation of industrial wastewaters was regarded as a crucial step, carried out in order to avoid serious misassumptions and false conclusions, which may arise if dyeing processes are only simulated in the laboratory.
Czech Academy of Sciences Publication Activity Database
Straňák, Vítězslav; Čada, Martin; Quaas, M.; Block, S.; Bogdanowicz, R.; Kment, Štěpán; Wulff, H.; Hubička, Zdeněk; Helm, Ch.A.; Tichý, M.; Hippler, R.
2009-01-01
Roč. 42, č. 10 (2009), 105204/1-105204/12 ISSN 0022-3727 R&D Projects: GA AV ČR KAN301370701; GA AV ČR KJB100100805; GA AV ČR KAN400720701 Institutional research plan: CEZ:AV0Z10100522 Keywords: TiO2 * high power magnetron sputtering * plasma diagnostic * film diagnostic Subject RIV: BM - Solid Matter Physics; Magnetism Impact factor: 2.083, year: 2009
International Nuclear Information System (INIS)
Zhang, H.; Rizwan-uddin; Dorning, J.J.
1995-01-01
A diffusion equation-based systematic homogenization theory and a self-consistent dehomogenization theory for fuel assemblies have been developed for use with coarse-mesh nodal diffusion calculations of light water reactors. The theoretical development is based on a multiple-scales asymptotic expansion carried out through second order in a small parameter, the ratio of the average diffusion length to the reactor characteristic dimension. By starting from the neutron diffusion equation for a three-dimensional heterogeneous medium and introducing two spatial scales, the development systematically yields an assembly-homogenized global diffusion equation with self-consistent expressions for the assembly-homogenized diffusion tensor elements and cross sections and assembly-surface-flux discontinuity factors. The reactor eigenvalue 1/k_eff is shown to be obtained to second order in the small parameter, and the heterogeneous diffusion theory flux is shown to be obtained to leading order in that parameter. The latter of these two results provides a natural procedure for the reconstruction of the local fluxes and the determination of pin powers, even though homogenized assemblies are used in the global nodal diffusion calculation.
Methods for Probing New Physics at High Energies
Denton, Peter B.
This dissertation covers two broad topics. The title, "Methods for Probing New Physics at High Energies," hopefully encompasses both of them. The first topic is located in part I of this work and is about integral dispersion relations. This is a technique to probe for new physics at energy scales near the machine energy of a collider. For example, a hadron collider taking data at a given energy is typically only sensitive to new physics occurring at energy scales about a factor of five to ten beneath the actual machine energy, due to parton distribution functions. This technique is sensitive to physics happening directly beneath the machine energy in addition to the even more interesting case: directly above. Precisely where this technique is sensitive is one of the main topics of this area of research. The other topic is located in part II and is about cosmic ray anisotropy at the highest energies. The unanswered questions about cosmic rays at the highest energies are numerous and interconnected in complicated ways. What may be the first piece of the puzzle to fall into place is determining their sources. This work looks to determine if and when the use of spherical harmonics becomes sensitive enough to determine these sources. The completed papers for this work can be found online. For part I on integral dispersion relations, see the reference published in Physical Review D. For part II on cosmic ray anisotropy, there are conference proceedings published in the Journal of Physics: Conference Series. The analysis of the effect of an experiment's location on anisotropy reconstruction and the comparison of different experiments' abilities to reconstruct anisotropies are published in The Astrophysical Journal and the Journal of High Energy Astrophysics, respectively. While this dissertation is focused on three papers completed with Tom Weiler at Vanderbilt University, other papers were completed at the same time. The first was with Nicusor Arsene, Lauretiu Caramete, and
Methods of teaching the physics of climate change in undergraduate physics courses
Sadler, Michael
2015-04-01
Although anthropogenic climate change is generally accepted in the scientific community, there is considerable skepticism among the general population and, therefore, in undergraduate students of all majors. Students are often asked by their peers, family members, and others, whether they ``believe'' climate change is occurring and what should be done about it (if anything). I will present my experiences and recommendations for teaching the physics of climate change to both physics and non-science majors. For non-science majors, the basic approach is to try to develop an appreciation for the scientific method (particularly peer-reviewed research) in a course on energy and the environment. For physics majors, the pertinent material is normally covered in their undergraduate courses in modern physics and thermodynamics. Nevertheless, it helps to review the basics, e.g. introductory quantum mechanics (discrete energy levels of atomic systems), molecular spectroscopy, and blackbody radiation. I have done this in a separate elective topics course, titled ``Physics of Climate Change,'' to help the students see how their knowledge gives them insight into a topic that is very volatile (socially and politically).
31st International Colloquium in Group Theoretical Methods in Physics
Gazeau, Jean-Pierre; Faci, Sofiane; Micklitz, Tobias; Scherer, Ricardo; Toppan, Francesco
2017-01-01
These proceedings record the 31st International Colloquium on Group Theoretical Methods in Physics (“Group 31”). Plenary-invited articles propose new approaches to moduli spaces in gauge theories (V. Pestun, 2016 Weyl Prize Awardee), the phenomenology of neutrinos in non-commutative space-time, the use of Hardy spaces in quantum physics, contradictions in the use of statistical methods on complex systems, and alternative models of supersymmetry. The volume's survey articles broaden the colloquium's scope into Majorana neutrino behavior, the dynamics of radiating charges, statistical pattern recognition of amino acids, and a variety of applications of gauge theory, among others. This year's proceedings further honor Bertram Kostant (2016 Wigner Medalist), as well as S.T. Ali and L. Boyle, for their life-long contributions to the math and physics communities. The aim of the ICGTMP is to provide a forum for physicists, mathematicians, and scientists of related disciplines who develop or apply ...
Bateev, A. B.; Filippov, V. P.
2017-01-01
The article shows the possibility, in principle, of using the computer program Univem MS for Mössbauer spectrum fitting as demonstration material when students study disciplines such as atomic and nuclear physics and numerical methods. The program is associated with nuclear-physical parameters such as the isomer (or chemical) shift of a nuclear energy level, the interaction of the nuclear quadrupole moment with an electric field, and that of the magnetic moment with the surrounding magnetic field. The basic processing algorithm in such programs is the least squares method. The deviation of experimental points on a spectrum from the theoretical dependence is examined on concrete examples; in numerical methods this value is characterized as the mean square deviation. The shape of the theoretical lines in the program is defined by Gaussian and Lorentzian distributions. The visualization of the studied material on atomic and nuclear physics can be improved by similar programs for Mössbauer spectroscopy, X-ray fluorescence analysis or X-ray diffraction analysis.
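The least-squares idea described here can be illustrated with a minimal sketch: a Lorentzian line, the mean square deviation, and a brute-force scan over the line centre (Univem MS itself uses proper iterative minimization; every name and value below is illustrative):

```python
def lorentzian(x, x0, gamma, amp):
    """Lorentzian line shape: amp * gamma^2 / ((x - x0)^2 + gamma^2)."""
    return amp * gamma ** 2 / ((x - x0) ** 2 + gamma ** 2)

def mean_square_deviation(xs, ys, x0, gamma, amp):
    """The quantity a least-squares fit minimizes."""
    return sum((y - lorentzian(x, x0, gamma, amp)) ** 2
               for x, y in zip(xs, ys)) / len(xs)

# Synthetic "spectrum" with a line centred at velocity 0.4 (an isomer-shift-like
# parameter); a real spectrum would carry counting noise as well.
xs = [0.1 * i - 3.0 for i in range(61)]
ys = [lorentzian(x, 0.4, 0.5, 1.0) for x in xs]

# Brute-force scan over candidate centres; a real fitting code iterates instead.
best_x0 = min((0.05 * i - 1.0 for i in range(41)),
              key=lambda c: mean_square_deviation(xs, ys, c, 0.5, 1.0))
```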
Physical methods in air pollution research: The second decade
International Nuclear Information System (INIS)
Cahill, T.A.
1985-01-01
The ''second decade'' in the application of physical techniques to air pollution has brought a profound change in understanding and capabilities. A great deal remains to be done with the new tools. But what about the next phase? The author feels that it will probably involve greater chemical and biological emphasis, as opposed to merely elemental analysis. But this will not be easy, and one will again need an influx of new people and ideas into the field, most likely from the biological, organic chemical, and medical communities. The author predicts that because of the inherent complexity of the problem, it will not happen in just 10 years. In the meantime, one will somehow manage to keep busy rediscovering atmospheric aerosols yet again, but with the new eyes the improved physical methods have provided.
Statistical physics and computational methods for evolutionary game theory
Javarone, Marco Alberto
2018-01-01
This book presents an introduction to Evolutionary Game Theory (EGT) which is an emerging field in the area of complex systems attracting the attention of researchers from disparate scientific communities. EGT allows one to represent and study several complex phenomena, such as the emergence of cooperation in social systems, the role of conformity in shaping the equilibrium of a population, and the dynamics in biological and ecological systems. Since EGT models belong to the area of complex systems, statistical physics constitutes a fundamental ingredient for investigating their behavior. At the same time, the complexity of some EGT models, such as those realized by means of agent-based methods, often require the implementation of numerical simulations. Therefore, beyond providing an introduction to EGT, this book gives a brief overview of the main statistical physics tools (such as phase transitions and the Ising model) and computational strategies for simulating evolutionary games (such as Monte Carlo algor...
Physical non-viral gene delivery methods for tissue engineering
Mellott, Adam J.; Forrest, M. Laird; Detamore, Michael S.
2016-01-01
The integration of gene therapy into tissue engineering to control differentiation and direct tissue formation is not a new concept; however, successful delivery of nucleic acids into primary cells, progenitor cells, and stem cells has proven exceptionally challenging. Viral vectors are generally highly effective at delivering nucleic acids to a variety of cell populations, both dividing and non-dividing, yet these viral vectors are marred by significant safety concerns. Non-viral vectors are preferred for gene therapy, despite lower transfection efficiencies, and possess many customizable attributes that are desirable for tissue engineering applications. However, there is no single non-viral gene delivery strategy that “fits-all” cell types and tissues. Thus, there is a compelling opportunity to examine different non-viral vectors, especially physical vectors, and compare their relative degrees of success. This review examines the advantages and disadvantages of physical non-viral methods (i.e., microinjection, ballistic gene delivery, electroporation, sonoporation, laser irradiation, magnetofection, and electric field-induced molecular vibration), with particular attention given to electroporation because of its versatility, with further special emphasis on Nucleofection™. In addition, attributes of cellular character that can be used to improve differentiation strategies are examined for tissue engineering applications. Ultimately, electroporation exhibits a high transfection efficiency in many cell types, which is highly desirable for tissue engineering applications, but electroporation and other physical non-viral gene delivery methods are still limited by poor cell viability. Overcoming the challenge of poor cell viability in highly efficient physical non-viral techniques is the key to using gene delivery to enhance tissue engineering applications. PMID:23099792
Physical non-viral gene delivery methods for tissue engineering.
Mellott, Adam J; Forrest, M Laird; Detamore, Michael S
2013-03-01
The integration of gene therapy into tissue engineering to control differentiation and direct tissue formation is not a new concept; however, successful delivery of nucleic acids into primary cells, progenitor cells, and stem cells has proven exceptionally challenging. Viral vectors are generally highly effective at delivering nucleic acids to a variety of cell populations, both dividing and non-dividing, yet these viral vectors are marred by significant safety concerns. Non-viral vectors are preferred for gene therapy, despite lower transfection efficiencies, and possess many customizable attributes that are desirable for tissue engineering applications. However, there is no single non-viral gene delivery strategy that "fits-all" cell types and tissues. Thus, there is a compelling opportunity to examine different non-viral vectors, especially physical vectors, and compare their relative degrees of success. This review examines the advantages and disadvantages of physical non-viral methods (i.e., microinjection, ballistic gene delivery, electroporation, sonoporation, laser irradiation, magnetofection, and electric field-induced molecular vibration), with particular attention given to electroporation because of its versatility, with further special emphasis on Nucleofection™. In addition, attributes of cellular character that can be used to improve differentiation strategies are examined for tissue engineering applications. Ultimately, electroporation exhibits a high transfection efficiency in many cell types, which is highly desirable for tissue engineering applications, but electroporation and other physical non-viral gene delivery methods are still limited by poor cell viability. Overcoming the challenge of poor cell viability in highly efficient physical non-viral techniques is the key to using gene delivery to enhance tissue engineering applications.
Implementation of statistical analysis methods for medical physics data
International Nuclear Information System (INIS)
Teixeira, Marilia S.; Pinto, Nivia G.P.; Barroso, Regina C.; Oliveira, Luis F.
2009-01-01
The objective of biomedical research with radiation of different natures is to contribute to the understanding of the basic physics and biochemistry of biological systems, disease diagnostics and the development of therapeutic techniques. The main benefits are the cure of tumors through therapy, the early detection of diseases through diagnostics, use as a prophylactic means in blood transfusion, etc. A better understanding of the biological interactions occurring after exposure to radiation is therefore necessary for the optimization of therapeutic procedures and of strategies for the reduction of radioinduced effects. The applied physics group of the Physics Institute of UERJ has been working on the characterization of biological samples (human tissues, teeth, saliva, soil, plants, sediments, air, water, organic matrices, ceramics, fossil material, among others) using X-ray diffraction and X-ray fluorescence. The application of these techniques to the measurement, analysis and interpretation of the characteristics of biological tissues is attracting considerable interest in Medical and Environmental Physics. All quantitative data analysis must begin with the calculation of descriptive statistics (means and standard deviations) in order to obtain a preliminary notion of what the analysis will reveal. It is well known that the high values of standard deviation found in experimental measurements of biological samples can be attributed to biological factors, due to the specific characteristics of each individual (age, gender, environment, alimentary habits, etc.). The main objective of this work is the development of a program applying specific statistical methods for the optimization of experimental data analysis. Since the specialized programs for this analysis are proprietary, another objective of this work is the implementation of a code which is free and can be shared with other research groups. As the program developed since the
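The descriptive first pass the abstract calls for (means and standard deviations before any further analysis) is directly available in the standard library; the sample values below are invented for illustration:

```python
from statistics import mean, stdev

# Hypothetical repeated measurements on one biological sample (arbitrary units).
sample = [12.1, 11.8, 12.5, 13.0, 11.6]

m = mean(sample)            # 12.2
s = stdev(sample)           # sample standard deviation
cv = 100 * s / m            # coefficient of variation, %: a large CV may
                            # reflect individual biological factors rather
                            # than measurement error
```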
Physical methods of treatment of complications of anti tumoral therapy
International Nuclear Information System (INIS)
Zhukovets, A.G.; Ulashchik, V.S.
1998-01-01
Numerous experimental and clinical materials on the expediency of using physical methods to lower the frequency and severity of complications of radiation therapy are reviewed. One such method, possessing the most pronounced radioprotective ability, is low-intensity laser radiation. Some authors have demonstrated that the use of this method delays the onset of early radiation reactions. Preliminary local use of laser irradiation (λ = 510 nm) makes it possible to avoid the development of epidermal disease and of such radiation reactions as ulcers and skin fibrosis in cancer patients after neutron-photon therapy. Good results have been obtained with the application of ultraviolet irradiation to the region exposed to ionizing radiation in the treatment of skin cancer. A low-frequency magnetic field can reduce the severity of radiation reactions.
International Nuclear Information System (INIS)
Cai, Li
2014-01-01
In the framework of Generation IV reactor neutronics research, new core calculation tools are being implemented in the APOLLO3 code system for the deterministic part. These calculation methods are based on the discretization of nuclear energy data (so-called multi-group constants, generally produced by deterministic codes) and should be validated and qualified against Monte Carlo reference calculations. This thesis aims to develop an alternative technique for producing multi-group nuclear properties with a Monte Carlo code (TRIPOLI-4). First, after testing the existing homogenization and condensation functionalities at the better precision obtainable nowadays, some inconsistencies are revealed. Several new multi-group parameter estimators are developed and validated for the TRIPOLI-4 code with the aid of the code itself, since it can use multi-group constants in a core calculation. Second, the scattering anisotropy effect, which is necessary for handling the neutron leakage case, is studied. A correction technique concerning the diagonal of the first-order moment of the scattering matrix is proposed. It is named the IGSC technique and is based on the use of an approximate current introduced by Todorova. An improvement of the IGSC technique is then presented for geometries with strong heterogeneity. This improvement uses a more accurate current quantity, its projection on the abscissa X; the latter current represents the real situation better but is limited to 1D geometries. Finally, a B1 leakage model is implemented in the TRIPOLI-4 code for generating multi-group cross sections with a critical spectrum based on the fundamental mode. This leakage model is analyzed and validated rigorously by comparison with other codes, Serpent and ECCO, as well as with an analytical case. The whole development work introduced in the TRIPOLI-4 code allows producing multi-group constants which can then be used in the core
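The condensation functionality being tested rests on flux-weighted collapsing of fine-group cross sections into broad groups, which can be sketched as follows (a schematic of the standard formula, not TRIPOLI-4's tally estimators; names are illustrative):

```python
def condense(sigma_fine, flux_fine, broad_groups):
    """Collapse fine-group cross sections using the fine-group flux as weight;
    each broad group is a half-open range (lo, hi) of fine-group indices."""
    collapsed = []
    for lo, hi in broad_groups:
        rate = sum(sigma_fine[g] * flux_fine[g] for g in range(lo, hi))
        flux = sum(flux_fine[g] for g in range(lo, hi))
        collapsed.append(rate / flux)
    return collapsed

# Four fine groups collapsed into two broad groups:
print(condense([4.0, 2.0, 1.0, 0.5], [1.0, 1.0, 2.0, 2.0], [(0, 2), (2, 4)]))
# [3.0, 0.75]
```

The same reaction-rate-preserving weighting underlies both condensation (in energy) and homogenization (in space).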
Dosing method of physical activity in aerobics classes for students
Directory of Open Access Journals (Sweden)
Yu.I. Beliak
2014-10-01
Full Text Available Purpose: to substantiate a method for dosing physical activity in aerobics classes for students, based on evaluating the metabolic cost of the exercises used. Material: the experiment assessed the heart-rate response of students to classical and step aerobics routines (n = 47, age 20-23 years). The routines varied the factors regulating intensity: performing combinations of basic steps, involving arm movements, holding 1-kg dumbbells, increasing the tempo of the musical accompaniment, and varying the height of the step platform. Results: based on the relationship between heart rate and oxygen consumption, the energy cost of each intensity-control technique was determined. This indicator was used to justify the intensity, duration and frequency of aerobics classes corresponding to the students' level of physical condition and motor activity deficit. Conclusions: the computational component of this dosing method makes it convenient for use in automated computer programs. It can also be easily adapted to dose the load in other types of recreational fitness.
Application of nuclear-physical methods for studies in the solid state physics area
International Nuclear Information System (INIS)
Gorlachrv, I.D.; Knyazev, B.B.; Platov, A.B.
2004-01-01
The set of nuclear-physical methods developed at the heavy-ion accelerator of the Institute of Nuclear Physics of the National Nuclear Center of the Republic of Kazakhstan allows the elemental composition of a sample to be examined, as well as the depth and surface distributions of elements to be obtained. This information can be very important for studying the integral parameters of a wide range of samples and the characteristics of sputtered layers and implanted films. The ion beam analysis methods include Rutherford backscattering spectrometry (RBS), nuclear reaction analysis (NRA) and proton-induced X-ray emission analysis (PIXE). In addition, to extend the range of analyzed elements and to increase the precision of quantitative determination of elemental composition, X-ray fluorescence analysis with isotope excitation (RFA) is used as a complement to PIXE. Modernization of the proton beam transport system at the heavy-ion accelerator allows a new analytical direction to be developed: the combination of a proton microprobe with PIXE analysis. In this case, information about the elemental composition of the examined sample is obtained with a spatial resolution of ∼10 μm. Scanning the beam over the surface allows the element distributions to be obtained in the two spatial coordinates of the surface. This information may be useful when micro-inclusions exist in the sample
Dating methods enter high-school physics curriculum
International Nuclear Information System (INIS)
Beck, L.
2002-01-01
The new curriculum of physics of the upper forms in French grammar schools includes a part dedicated to ''nuclear transformations''. One of the applications most often considered in manuals is isotopic dating and generally several methods are explained to pupils: carbon 14 dating, potassium-argon dating (used for dating ancient lava layers) and uranium-thorium dating (used for dating corals). The author reviews with a critical eye the content of manuals and laments through concrete examples the lack of exactness and accuracy of some presentations. (A.C.)
Directory of Open Access Journals (Sweden)
Yoonhee Lee
2016-06-01
Full Text Available As a type of accident-tolerant fuel, fully ceramic microencapsulated (FCM) fuel was proposed after the Fukushima accident in Japan. The FCM fuel consists of tristructural isotropic particles randomly dispersed in a silicon carbide (SiC) matrix. For a fuel element with such high heterogeneity, we have proposed a two-temperature homogenized model using the particle transport Monte Carlo method for the heat conduction problem. This model distinguishes between fuel-kernel and SiC matrix temperatures. Moreover, the obtained temperature profiles are more realistic than those of other models. In Part I of the paper, homogenized parameters for the FCM fuel, in which tristructural isotropic particles are randomly dispersed in the fine lattice stochastic structure, are obtained by (1) matching steady-state analytic solutions of the model with the results of the particle transport Monte Carlo method for heat conduction problems, and (2) preserving total enthalpies in fuel kernels and SiC matrix. The homogenized parameters have two desirable properties: (1) they are insensitive to boundary conditions such as coolant bulk temperatures and thickness of cladding, and (2) they are independent of operating power density. By performing the Monte Carlo calculations with the temperature-dependent thermal properties of the constituent materials of the FCM fuel, temperature-dependent homogenized parameters are obtained.
Dynamics of homogeneous nucleation
DEFF Research Database (Denmark)
Toxværd, Søren
2015-01-01
The classical nucleation theory for homogeneous nucleation is formulated as a theory for a density fluctuation in a supersaturated gas at a given temperature. But molecular dynamics simulations reveal that it is small cold clusters which initiate the nucleation. The temperature in the nucleating...
Homogeneous bilateral block shifts
Indian Academy of Sciences (India)
Douglas class were classified in [3]; they are unilateral block shifts of arbitrary block size (i.e. dim H(n) can be anything). However, no examples of irreducible homogeneous bilateral block shifts of block size larger than 1 were known until now.
Tignanelli, H. L.; Vazquez, R. A.; Mostaccio, C.; Gordillo, S.; Plastino, A.
1990-11-01
RESUMEN (translated from Spanish): We present a methodology for homogeneity analysis based on Information Theory, applicable to samples of observational data. ABSTRACT: Standard concepts that underlie Information Theory are employed in order to design a methodology that enables one to analyze the homogeneity of a given data sample. Keywords: DATA ANALYSIS
Homogeneous Poisson structures
International Nuclear Information System (INIS)
Shafei Deh Abad, A.; Malek, F.
1993-09-01
We provide an algebraic definition for the Schouten product and give a decomposition for any homogeneous Poisson structure in any n-dimensional vector space. A large class of n-homogeneous Poisson structures in R^k is also characterized. (author). 4 refs
NATO Advanced Study Institute on Methods in Computational Molecular Physics
Diercksen, Geerd
1992-01-01
This volume records the lectures given at a NATO Advanced Study Institute on Methods in Computational Molecular Physics held in Bad Windsheim, Germany, from 22nd July until 2nd August, 1991. This NATO Advanced Study Institute sought to bridge the quite considerable gap which exists between the presentation of molecular electronic structure theory found in contemporary monographs such as, for example, McWeeny's Methods of Molecular Quantum Mechanics (Academic Press, London, 1989) or Wilson's Electron Correlation in Molecules (Clarendon Press, Oxford, 1984) and the realization of the sophisticated computational algorithms required for their practical application. It sought to underline the relation between the electronic structure problem and the study of nuclear motion. Software for performing molecular electronic structure calculations is now being applied in an increasingly wide range of fields in both the academic and the commercial sectors. Numerous applications are reported in areas as diverse as catalysi...
Physical Fault Injection and Monitoring Methods for Programmable Devices
AUTHOR|(INSPIRE)INSPIRE-00510096; Ferencei, Jozef
A method of detecting faults for evaluating the fault cross section of any field-programmable gate array (FPGA) was developed and is described in the thesis. The incidence of single event effects in FPGAs was studied for different probe particles (proton, neutron, gamma) using this method. The existing accelerator infrastructure of the Nuclear Physics Institute in Rez was supplemented by a more sensitive beam monitoring system to ensure that the tests are done under well-defined beam conditions. The bit cross section of single event effects was measured for different types of configuration memories, clock signal phases, and beam energies and intensities. The extended infrastructure also served for radiation testing of components which are planned to be used in the new Inner Tracking System (ITS) detector of the ALICE experiment, and for selecting optimal fault mitigation techniques for securing the design of the FPGA-based ITS readout unit against faults induced by ionizing radiation.
Homogenization of High-Contrast Brinkman Flows
Brown, Donald L.; Efendiev, Yalchin R.; Li, Guanglian; Savatorova, Viktoria
2015-01-01
, Homogenization: Methods and Applications, Transl. Math. Monogr. 234, American Mathematical Society, Providence, RI, 2007, G. Allaire, SIAM J. Math. Anal., 23 (1992), pp. 1482--1518], although a powerful tool, are not applicable here. Our second point
Literature in Focus: Statistical Methods in Experimental Physics
2007-01-01
Frederick James was a high-energy physicist who became the CERN "expert" on statistics and is now well-known around the world, in part for this famous text. The first edition of Statistical Methods in Experimental Physics was originally co-written with four other authors and was published in 1971 by North Holland (now an imprint of Elsevier). It became such an important text that demand for it has continued for more than 30 years. Fred has updated it and it was released in a second edition by World Scientific in 2006. It is still a top seller and there is no exaggeration in calling it «the» reference on the subject. A full review of the title appeared in the October CERN Courier. Come and meet the author to hear more about how this book has flourished during its 35-year lifetime. Frederick James Statistical Methods in Experimental Physics Monday, 26th of November, 4 p.m. Council Chamber (Bldg. 503-1-001) The author will be introduced...
Multilevel Monte Carlo Approaches for Numerical Homogenization
Efendiev, Yalchin R.
2015-10-01
In this article, we study the application of multilevel Monte Carlo (MLMC) approaches to numerical random homogenization. Our objective is to compute the expectation of some functionals of the homogenized coefficients, or of the homogenized solutions. This is accomplished within MLMC by considering different sizes of representative volumes (RVEs). Many inexpensive computations with the smallest RVE size are combined with fewer expensive computations performed on larger RVEs. Likewise, when it comes to homogenized solutions, different levels of coarse-grid meshes are used to solve the homogenized equation. We show that, by carefully selecting the number of realizations at each level, we can achieve a speed-up in the computations in comparison to a standard Monte Carlo method. Numerical results are presented for both one-dimensional and two-dimensional test-cases that illustrate the efficiency of the approach.
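The level-coupling idea described in the abstract, namely many cheap small-RVE samples corrected by fewer expensive large-RVE samples, can be sketched for a toy 1D problem where the homogenized coefficient of a layered medium is the harmonic mean of the cell coefficients. The random field model, level sizes and sample counts below are illustrative assumptions of this sketch, not taken from the article:

```python
import math
import random

def harmonic_mean(cells):
    """Homogenized coefficient of a 1D layered medium (harmonic mean)."""
    return len(cells) / sum(1.0 / a for a in cells)

def sample_cells(n, rng):
    # Illustrative log-normal positive coefficient field (an assumption).
    return [math.exp(rng.gauss(0.0, 0.5)) for _ in range(n)]

def mlmc_estimate(levels, n0, samples_per_level, seed=0):
    """MLMC estimate of E[a*]: E[Y_0] + sum over l of E[Y_l - Y_{l-1}]."""
    rng = random.Random(seed)
    estimate = 0.0
    for level in range(levels + 1):
        n_cells = n0 * 2 ** level          # RVE size doubles per level
        total = 0.0
        for _ in range(samples_per_level[level]):
            cells = sample_cells(n_cells, rng)
            fine = harmonic_mean(cells)
            if level == 0:
                total += fine
            else:
                # The coarse term reuses half of the same realization, so the
                # difference Y_l - Y_{l-1} has small variance (the coupling).
                total += fine - harmonic_mean(cells[: n_cells // 2])
        estimate += total / samples_per_level[level]
    return estimate
```

Because the variance of the level differences decays with level, most samples are placed on the cheap coarse level (e.g. samples_per_level=[4000, 400, 40]), which is the source of the speed-up over single-level Monte Carlo.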
Benchmarking homogenization algorithms for monthly data
Venema, V. K. C.; Mestre, O.; Aguilar, E.; Auer, I.; Guijarro, J. A.; Domonkos, P.; Vertacnik, G.; Szentimrey, T.; Stepanek, P.; Zahradnicek, P.; Viarre, J.; Müller-Westermeier, G.; Lakatos, M.; Williams, C. N.; Menne, M. J.; Lindau, R.; Rasol, D.; Rustemeier, E.; Kolokythas, K.; Marinova, T.; Andresen, L.; Acquaotta, F.; Fratiannil, S.; Cheval, S.; Klancar, M.; Brunetti, M.; Gruber, C.; Prohom Duran, M.; Likso, T.; Esteban, P.; Brandsma, T.; Willett, K.
2013-09-01
The COST (European Cooperation in Science and Technology) Action ES0601: Advances in homogenization methods of climate series: an integrated approach (HOME) has executed a blind intercomparison and validation study for monthly homogenization algorithms. Time series of monthly temperature and precipitation were evaluated because of their importance for climate studies. The algorithms were validated against a realistic benchmark dataset. Participants provided 25 separate homogenized contributions as part of the blind study as well as 22 additional solutions submitted after the details of the imposed inhomogeneities were revealed. These homogenized datasets were assessed by a number of performance metrics including i) the centered root mean square error relative to the true homogeneous values at various averaging scales, ii) the error in linear trend estimates and iii) traditional contingency skill scores. The metrics were computed both using the individual station series as well as the network average regional series. The performance of the contributions depends significantly on the error metric considered. Although relative homogenization algorithms typically improve the homogeneity of temperature data, only the best ones improve precipitation data. Moreover, state-of-the-art relative homogenization algorithms developed to work with an inhomogeneous reference are shown to perform best. The study showed that currently automatic algorithms can perform as well as manual ones.
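Metric (i) above, the centered root mean square error, removes the mean offset between a homogenized series and the true series before measuring the remaining scatter. A minimal sketch of one plausible reading of that metric (the function name and exact form are our assumption, not the study's code):

```python
def centered_rmse(homogenized, truth):
    """RMSE after removing the mean difference between the two series."""
    n = len(homogenized)
    bias = sum(h - t for h, t in zip(homogenized, truth)) / n
    return (sum((h - t - bias) ** 2
                for h, t in zip(homogenized, truth)) / n) ** 0.5
```

A series that differs from the truth only by a constant shift scores zero, which suits relative homogenization, where the absolute level of a station series is not recoverable.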
Homogeneous group, research, institution
Directory of Open Access Journals (Sweden)
Francesca Natascia Vasta
2014-09-01
Full Text Available The work outlines the complex connection among empirical research, therapeutic programs and the host institution. The current state of research in Italy is considered: the Italian research field is analyzed and critical gaps are outlined, namely a lack of results regarding both the therapeutic processes and the effectiveness of group-analytic treatment of eating disorders. The work investigates a homogeneous eating-disorders group conducted in an eating-disorders outpatient service. First we present the methodological steps the research is based on, including the strong connection between theory and clinical tools. Secondly, the clinical tools are described and the results commented. Finally, our results suggest the necessity of validating some more specific hypotheses: verifying the relationship between clinical improvement (reduction of the sense of exclusion and of painful emotions) and specific group therapeutic processes; and verifying the relationship between depressive feelings, relapses and the transition through a more differentiated group field. Keywords: Homogeneous group; Eating disorders; Institutional field; Therapeutic outcome
Homogen Mur - et udviklingsprojekt
DEFF Research Database (Denmark)
Dahl, Torben; Beim, Anne; Sørensen, Peter
1997-01-01
Mølletorvet in Slagelse is the first building project in Denmark in which the outer wall is built of homogeneous load-bearing and insulating clay blocks. The project demonstrates a range of the possibilities that homogeneous block masonry offers with respect to construction, energy performance and architecture.
Homogenization of resonant chiral metamaterials
DEFF Research Database (Denmark)
Andryieuski, Andrei; Menzel, C.; Rockstuhl, Carsten
2010-01-01
Homogenization of metamaterials is a crucial issue as it allows to describe their optical response in terms of effective wave parameters as, e.g., propagation constants. In this paper we consider the possible homogenization of chiral metamaterials. We show that for meta-atoms of a certain size...... an analytical criterion for performing the homogenization and a tool to predict the homogenization limit. We show that strong coupling between meta-atoms of chiral metamaterials may prevent their homogenization at all....
International Nuclear Information System (INIS)
Figueroa-O’Farrill, José; Ungureanu, Mara
2016-01-01
Motivated by the search for new gravity duals to M2 branes with N>4 supersymmetry — equivalently, M-theory backgrounds with Killing superalgebra osp(N|4) for N>4 — we classify (except for a small gap) homogeneous M-theory backgrounds with symmetry Lie algebra so(n)⊕so(3,2) for n=5,6,7. We find that there are no new backgrounds with n=6,7 but we do find a number of new (to us) backgrounds with n=5. All backgrounds are metrically products of the form AdS_4 × P^7, with P Riemannian and homogeneous under the action of SO(5), or S^4 × Q^7, with Q Lorentzian and homogeneous under the action of SO(3,2). At least one of the new backgrounds is supersymmetric (albeit with only N=2) and we show that it can be constructed from a supersymmetric Freund-Rubin background via a Wick rotation. Two of the new backgrounds have only been approximated numerically.
Energy Technology Data Exchange (ETDEWEB)
Figueroa-O’Farrill, José [School of Mathematics and Maxwell Institute for Mathematical Sciences,The University of Edinburgh,James Clerk Maxwell Building, The King’s Buildings, Peter Guthrie Tait Road,Edinburgh EH9 3FD, Scotland (United Kingdom); Ungureanu, Mara [Humboldt-Universität zu Berlin, Institut für Mathematik,Unter den Linden 6, 10099 Berlin (Germany)
2016-01-25
Motivated by the search for new gravity duals to M2 branes with N>4 supersymmetry — equivalently, M-theory backgrounds with Killing superalgebra osp(N|4) for N>4 — we classify (except for a small gap) homogeneous M-theory backgrounds with symmetry Lie algebra so(n)⊕so(3,2) for n=5,6,7. We find that there are no new backgrounds with n=6,7 but we do find a number of new (to us) backgrounds with n=5. All backgrounds are metrically products of the form AdS_4 × P^7, with P Riemannian and homogeneous under the action of SO(5), or S^4 × Q^7, with Q Lorentzian and homogeneous under the action of SO(3,2). At least one of the new backgrounds is supersymmetric (albeit with only N=2) and we show that it can be constructed from a supersymmetric Freund-Rubin background via a Wick rotation. Two of the new backgrounds have only been approximated numerically.
Test Methods for Evaluating Solid Waste, Physical/Chemical Methods. First Update. (3rd edition)
International Nuclear Information System (INIS)
Friedman; Sellers.
1988-01-01
The proposed Update is for Test Methods for Evaluating Solid Waste, Physical/Chemical Methods, SW-846, Third Edition. Attached to the report is a list of methods included in the proposed update indicating whether each method is a new method, a partially revised method, or a totally revised method. Do not discard or replace any of the current pages in the SW-846 manual until the proposed Update I package is promulgated. Until promulgation of the update package, the methods in the update package are not officially part of the SW-846 manual and thus do not carry the status of EPA-approved methods. In addition to the proposed Update, six finalized methods are included for immediate inclusion in the Third Edition of SW-846. Four methods, originally proposed October 1, 1984, will be finalized in a soon-to-be-released rulemaking. They are, however, being submitted to subscribers for the first time in the update. These methods are 7211, 7381, 7461, and 7951. Two other methods were finalized in the 2nd Edition of SW-846. They were inadvertently omitted from the 3rd Edition and are not being proposed as new. These methods are 7081 and 7761
Acceleration methods for multi-physics compressible flow
Peles, Oren; Turkel, Eli
2018-04-01
In this work we investigate the Runge-Kutta (RK)/Implicit smoother scheme as a convergence accelerator for complex multi-physics flow problems including turbulent, reactive and also two-phase flows. The flows considered are subsonic, transonic and supersonic flows in complex geometries, and can be either steady or unsteady. All of these problems are considered to be very stiff. We then introduce an acceleration method for the compressible Navier-Stokes equations. We start with the multigrid method for pure subsonic flow, including reactive flows. We then add the Rossow-Swanson-Turkel RK/Implicit smoother that enables performing all these complex flow simulations with a reasonable CFL number. We next discuss the RK/Implicit smoother for time-dependent problems and also for low Mach numbers. The preconditioner includes an intrinsic low-Mach-number treatment inside the smoother operator. We also develop a modified Roe scheme with a corresponding flux Jacobian matrix. We then extend the method to real gas and reactive flow. Reactive flows are governed by a system of inhomogeneous Navier-Stokes equations with very stiff source terms. The extension of the RK/Implicit smoother requires an approximation of the source-term Jacobian, whose properties are very important for the stability of the method. We discuss what the theory of chemical kinetics tells us about the mathematical properties of the Jacobian matrix, focusing on the implication of Le Chatelier's principle for the sign of the diagonal entries. We present the implementation of the method for turbulent flow, using two RANS turbulence models: the one-equation Spalart-Allmaras model and the two-equation k-ω SST model. The last extension is for two-phase flows with a gas as the main phase and an Eulerian representation of a dispersed particle phase (EDP). We present some examples of such flow computations inside a ballistic evaluation
Computational Methods for Physical Model Information Management: Opening the Aperture
International Nuclear Information System (INIS)
Moser, F.; Kirgoeze, R.; Gagne, D.; Calle, D.; Murray, J.; Crowley, J.
2015-01-01
The volume, velocity and diversity of data available to analysts are growing exponentially, increasing the demands on analysts to stay abreast of developments in their areas of investigation. In parallel to the growth in data, technologies have been developed to efficiently process and store that data, and to effectively extract information suitable for the development of a knowledge base capable of supporting inferential (decision logic) reasoning over semantic spaces. These technologies and methodologies, in effect, allow for automated discovery and mapping of information to specific steps in the Physical Model (Safeguards' standard reference for the nuclear fuel cycle). This paper will describe and demonstrate an integrated service under development at the IAEA that utilizes machine learning techniques, computational natural language models, Bayesian methods and semantic/ontological reasoning capabilities to process large volumes of (streaming) information and associate relevant, discovered information with the appropriate process step in the Physical Model. The paper will detail how this capability will consume open source and controlled information sources, be integrated with other capabilities within the analysis environment, and provide the basis for a semantic knowledge base suitable for hosting future mission-focused applications. (author)
Energy Technology Data Exchange (ETDEWEB)
Cenizo de Castro, E.; Garcia Pareja, S.; Moreno Saiz, C.; Hernandez Rodriguez, R.; Bodineau Gil, C.; Martin-Viera Cueto, J. A.
2011-07-01
Hemifield treatments are widely used in radiotherapy. Because the tolerance established for the positioning of each jaw is 1 mm, cases of overlap or separation of up to 2 mm may occur. This implies dose heterogeneities of up to 40% in the junction area. This paper presents an accurate method for calibrating the jaws so as to obtain homogeneous dose distributions when this type of treatment is used. (Author)
Diagnostics and correction of disregulation states by physical methods
Gorsha, O. V.; Gorsha, V. I.
2017-01-01
Nicolaus Copernicus University, Toruń, Poland; Ukrainian Research Institute for Medicine of Transport, Odesa, Ukraine. Toruń, Odesa, 2017.
Perkó, Z.
2015-01-01
This thesis presents novel adjoint and spectral methods for the sensitivity and uncertainty (S&U) analysis of multi-physics problems encountered in the field of reactor physics. The first part focuses on the steady state of reactors and extends the adjoint sensitivity analysis methods well
POLARIZATION REMOTE SENSING PHYSICAL MECHANISM, KEY METHODS AND APPLICATION
Directory of Open Access Journals (Sweden)
B. Yang
2017-09-01
Full Text Available China's long-term major planning project, the "high-resolution earth observation system", has received an investment of nearly 100 billion, and the number of satellites will reach 100 by 2020. Since two thirds of China's area is covered by mountains, the demands on remote sensing are high. In addition to light intensity, frequency and phase, polarization is a main physical characteristic of the electromagnetic waves used in remote sensing. Polarization is an important component of the information reflected from the surface and of the atmospheric information, and the polarization effect of ground-object reflection is the basis of polarization remote sensing observation. Therefore, handling the polarization effect properly is very important for remote sensing applications. The main innovations of this paper are as follows: (1) Remote sensing observation method. It is theoretically deduced and verified that polarization can attenuate the signal in strong-light regions and thereby provide effective polarization information; conversely, polarization can strengthen the weak signal in low-light regions, again yielding effective polarization information. (2) Polarization effect of vegetation. By analyzing the structural characteristics of vegetation, polarization information is obtained; the vegetation structure in turn directly affects the absorption of the biochemical components of leaves. (3) Atmospheric polarization neutral point observation method. It is proved effective for achieving ground-atmosphere separation, which eliminates the atmospheric polarization effect and enhances the polarization effect of the ground object.
Physics methods for calculating light water reactor increased performances
International Nuclear Information System (INIS)
Vandenberg, C.; Charlier, A.
1988-01-01
The intensive use of light water reactors (LWRs) has induced modifications of their characteristics and performance in order to improve fissile material utilization and to increase their availability and flexibility under operation. From the conceptual point of view, adequate methods must be used to calculate core characteristics, taking into account present design requirements, e.g., use of burnable poison, plutonium recycling, etc. From the operational point of view, nuclear plants, which have been producing a large percentage of electricity in some countries, must adapt their planning to the needs of the electrical network and operate on a load-follow basis. Consequently, plant behavior must be predicted and accurately followed in order to improve the plant's capability within safety limits. The Belgonucleaire code system has been developed and extensively validated. It is an accurate, flexible, easily usable, fast-running tool for solving the problems related to LWR technology development. The methods and validation of the two computer codes LWR-WIMS and MICROLUX, which are the main components of the physics calculation system, are explained
A physically based catchment partitioning method for hydrological analysis
Menduni, Giovanni; Riboni, Vittoria
2000-07-01
We propose a partitioning method for the topographic surface, which is particularly suitable for hydrological distributed modelling and shallow-landslide distributed modelling. The model provides variable mesh size and appears to be a natural evolution of contour-based digital terrain models. The proposed method allows the drainage network to be derived from the contour lines. The single channels are calculated via a search for the steepest downslope lines. Then, for each network node, the contributing area is determined by means of a search for both steepest upslope and downslope lines. This leads to the basin being partitioned into physically based finite elements delimited by irregular polygons. In particular, the distributed computation of local geomorphological parameters (i.e. aspect, average slope and elevation, main stream length, concentration time, etc.) can be performed easily for each single element. The contributing area system, together with the information on the distribution of geomorphological parameters provide a useful tool for distributed hydrological modelling and simulation of environmental processes such as erosion, sediment transport and shallow landslides.
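The search for steepest downslope lines that underlies the channel derivation above can be illustrated on a raster DEM. The contour-based model in the paper works on contour lines rather than a grid, so the grid (D8-style) version below is only an analogous sketch:

```python
def steepest_downslope_path(dem, start):
    """Follow the steepest downslope direction from a start cell on a grid DEM.

    dem is a list of rows of elevations; the path ends at a local minimum.
    """
    rows, cols = len(dem), len(dem[0])
    path = [start]
    r, c = start
    while True:
        best, best_drop = None, 0.0
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                if dr == 0 and dc == 0:
                    continue
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols:
                    # Elevation drop per unit distance (diagonals are longer).
                    dist = (dr * dr + dc * dc) ** 0.5
                    drop = (dem[r][c] - dem[nr][nc]) / dist
                    if drop > best_drop:
                        best, best_drop = (nr, nc), drop
        if best is None:
            return path  # local minimum (pit): no strictly lower neighbor
        path.append(best)
        r, c = best
```

Running the same search upslope from a network node (reversing the sign of the drop) gives the divide lines that close each contributing-area polygon, which is how the paper delimits its finite elements.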
Assembly homogenization techniques for light water reactor analysis
International Nuclear Information System (INIS)
Smith, K.S.
1986-01-01
Recent progress in the development and application of advanced assembly homogenization methods for light water reactor analysis is reviewed. Practical difficulties arising from conventional flux-weighting approximations are discussed and numerical examples given. The mathematical foundations for homogenization methods are outlined. Two methods, Equivalence Theory and Generalized Equivalence Theory, which are theoretically capable of eliminating homogenization error, are reviewed. Practical means of obtaining approximate homogenized parameters are presented and numerical examples are used to contrast the two methods. Applications of these techniques to PWR baffle/reflector homogenization and BWR bundle homogenization are discussed. Nodal solutions to realistic reactor problems are compared to fine-mesh PDQ calculations, and the accuracy of the advanced homogenization methods is established. Remaining problem areas are investigated, and directions for future research are suggested. (author)
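The conventional flux-weighting approximation mentioned above computes a homogenized cross section as the flux-volume-weighted average over the heterogeneous regions, chosen so that the region-integrated reaction rate is preserved for the given heterogeneous flux. A minimal sketch with illustrative numbers (not taken from the review):

```python
def flux_volume_weighted_xs(sigma, flux, volume):
    """Sigma_hom = sum_i(sigma_i * phi_i * V_i) / sum_i(phi_i * V_i)."""
    rate = sum(s * p * v for s, p, v in zip(sigma, flux, volume))
    weight = sum(p * v for p, v in zip(flux, volume))
    return rate / weight

# Two-region example: fuel and moderator with different fluxes.
sigma_hom = flux_volume_weighted_xs(
    sigma=[0.30, 0.05],   # macroscopic cross sections (1/cm), illustrative
    flux=[0.8, 1.2],      # region-averaged scalar fluxes, illustrative
    volume=[1.0, 2.0],    # region volumes (cm^3), illustrative
)
```

By construction, sigma_hom times the total flux-volume integral reproduces the summed heterogeneous reaction rate; the homogenization error discussed in the abstract arises because the flux itself changes once the region is homogenized, which is what Equivalence Theory corrects with discontinuity factors.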
HOMOGENEOUS NUCLEAR POWER REACTOR
King, L.D.P.
1959-09-01
A homogeneous nuclear power reactor utilizing forced circulation of the liquid fuel is described. The reactor does not require fuel handling outside of the reactor vessel during any normal operation, including complete shutdown to room temperature; it is self-regulating under extreme operating conditions and controlled by the thermal expansion of the liquid fuel. The liquid fuel utilized is a solution of uranium, phosphoric acid and water which requires no gas exhaust system or independent gas recombining system, thereby eliminating the handling of radiolytic gas.
Deng, Shaoqiang
2012-01-01
"Homogeneous Finsler Spaces" is the first book to emphasize the relationship between Lie groups and Finsler geometry, and the first to show the validity in using Lie theory for the study of Finsler geometry problems. This book contains a series of new results obtained by the author and collaborators during the last decade. The topic of Finsler geometry has developed rapidly in recent years. One of the main reasons for its surge in development is its use in many scientific fields, such as general relativity, mathematical biology, and phycology (study of algae). This monograph introduc
Askerov, Bahram M
2010-01-01
This book deals with theoretical thermodynamics and the statistical physics of electron and particle gases. While treating the laws of thermodynamics from both classical and quantum theoretical viewpoints, it posits that the basis of the statistical theory of macroscopic properties of a system is the microcanonical distribution of isolated systems, from which all canonical distributions stem. To calculate the free energy, the Gibbs method is applied to ideal and non-ideal gases, and also to a crystalline solid. Considerable attention is paid to the Fermi-Dirac and Bose-Einstein quantum statistics and its application to different quantum gases, and electron gas in both metals and semiconductors is considered in a nonequilibrium state. A separate chapter treats the statistical theory of thermodynamic properties of an electron gas in a quantizing magnetic field.
Radiotracer investigation of cement raw meal homogenizers. Pt. 2
International Nuclear Information System (INIS)
Baranyai, L.
1983-01-01
Based on a radioisotopic tracer technique, a method has been worked out to study the homogenization and segregation processes of cement-industry raw meal homogenizers. On-site measurements were carried out by this method in some Hungarian cement works to determine the optimal homogenization parameters of operating homogenizers. The motion and distribution of different raw meal fractions traced with the ¹⁹⁸Au radioisotope were studied in homogenization processes proceeding with different parameters. In the first part of the publication, the change of charge homogeneity in time was discussed, which had been measured as the resultant of mixing and separating processes. In the second part, the parameters and types of homogenizers influencing the efficiency of homogenization are detailed. (orig.) [de]
An evaluation of teaching methods in the introductory physics classroom
Savage, Lauren Michelle Williams
The introductory physics mechanics course at the University of North Carolina at Charlotte has a history of relatively high DFW rates. In 2011, the course was redesigned from the traditional lecture format to the inverted (flipped) classroom format. This format inverts the classroom by introducing material in a video assigned as homework, while the instructor conducts problem-solving activities and guides discussions during the regular meetings. This format focuses on student-centered learning and is more interactive and engaging. To evaluate the effectiveness of the new method, final exam data over the past 10 years were mined and the pass rates examined. A normalization condition was developed to evaluate semesters equally. The two teaching methods were compared using a grade distribution across multiple semesters. Students in the inverted class outperformed those in the traditional class: "A"s increased by 22% and "B"s increased by 38%. The final exam pass rate increased by 12% under the inverted classroom approach. The same analysis was used to compare the written and online final exam formats. Surprisingly, no students scored "A"s on the online final; however, the percentage of "B"s increased by 136%. Combining documented best practices from a literature review with personal observations of student performance and attitudes from firsthand classroom experience as a teaching assistant in both teaching methods, reasons are given to support the continued use of the inverted classroom approach as well as the online final. Finally, specific recommendations are given to improve the course structure where weaknesses have been identified.
Department of Nuclear Methods in the Solid State Physics
International Nuclear Information System (INIS)
2002-01-01
Full text: The activity of the Department of Nuclear Methods in the Solid State Physics is focused on experimental research in condensed matter physics. Thermal neutron scattering and the Moessbauer effect are the main techniques mastered in the laboratory. Most of the studies aim at a better understanding of properties and processes observed in modern materials. Some applied research and theoretical studies were also performed. Research activities of the Department in 2001 can be summarized as follows: Neutron scattering studies concerned the magnetic ordering in TbB₁₂ and TmIn₃ and some special features of magnetic excitations in antiferromagnetic γ-Mn alloys. Some work was devoted to optimization of the neutron single-crystal monochromators and polarizers grown in the Crystal Growth Laboratory. Small-angle scattering studies on a surfactant-water ternary system were performed in cooperation with JINR Dubna. Moessbauer effect investigations of dysprosium intermetallic compounds yielded new data for Pauling-Slater curves. The same technique applied to perovskites and a ferrocene adduct to fullerene helped to resolve their structure. X-ray topographic and diffractometric studies were performed on hydrogen-implanted semiconductor surfaces employing synchrotron radiation sources. The X-ray method was also applied to investigations of the plasma spraying process and the phase composition of ceramic oxide coatings. A large part of the studies concerned the structure of biologically active, pharmacologically important organic complexes, supported by modeling of their electron structure. Crystal growth of large-size single crystals of metals and alloys was used for the preparation of specimens with mosaic structure suitable for neutron monochromator and polarizer systems. The construction work of the Neutron and Gamma Radiography Station has been completed. The results of first tests and studies proved the expected abilities of the systems. The possibility to visualize inner structures...
Benchmarking homogenization algorithms for monthly data
Directory of Open Access Journals (Sweden)
V. K. C. Venema
2012-01-01
Full Text Available The COST (European Cooperation in Science and Technology) Action ES0601: Advances in homogenization methods of climate series: an integrated approach (HOME) has executed a blind intercomparison and validation study for monthly homogenization algorithms. Time series of monthly temperature and precipitation were evaluated because of their importance for climate studies and because they represent two important types of statistics (additive and multiplicative). The algorithms were validated against a realistic benchmark dataset. The benchmark contains real inhomogeneous data as well as simulated data with inserted inhomogeneities. Random independent break-type inhomogeneities with normally distributed breakpoint sizes were added to the simulated datasets. To approximate real-world conditions, breaks were introduced that occur simultaneously in multiple station series within a simulated network of station data. The simulated time series also contained outliers, missing data periods and local station trends. Further, a stochastic nonlinear global (network-wide) trend was added.
Participants provided 25 separate homogenized contributions as part of the blind study. After the deadline at which details of the imposed inhomogeneities were revealed, 22 additional solutions were submitted. These homogenized datasets were assessed by a number of performance metrics including (i) the centered root mean square error relative to the true homogeneous value at various averaging scales, (ii) the error in linear trend estimates and (iii) traditional contingency skill scores. The metrics were computed both using the individual station series and the network-average regional series. The performance of the contributions depends significantly on the error metric considered. Contingency scores by themselves are not very informative. Although relative homogenization algorithms typically improve the homogeneity of temperature data, only the best ones improve...
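The centered-RMSE and trend-error metrics named in the abstract above can be sketched in a few lines. The series below are synthetic stand-ins (a linear trend plus noise, with one residual break left in), not HOME benchmark data:

```python
import numpy as np

def centered_rmse(homogenized, truth):
    """Centered RMSE: subtract each series' mean before comparing,
    so a constant offset between series does not count as error."""
    h = homogenized - np.mean(homogenized)
    t = truth - np.mean(truth)
    return float(np.sqrt(np.mean((h - t) ** 2)))

def trend_error(homogenized, truth):
    """Difference between the fitted linear trends (per time step)."""
    x = np.arange(len(truth))
    slope_h = np.polyfit(x, homogenized, 1)[0]
    slope_t = np.polyfit(x, truth, 1)[0]
    return float(slope_h - slope_t)

# Synthetic example: a true series plus an uncorrected break
rng = np.random.default_rng(0)
truth = 0.01 * np.arange(120) + rng.normal(0.0, 0.2, 120)
homog = truth.copy()
homog[60:] += 0.5  # residual inhomogeneity left by a (hypothetical) algorithm
```

A residual step inflates the centered RMSE and also biases the fitted trend, which is why the two metrics can rank contributions differently.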
A physics-motivated Centroidal Voronoi Particle domain decomposition method
Energy Technology Data Exchange (ETDEWEB)
Fu, Lin, E-mail: lin.fu@tum.de; Hu, Xiangyu Y., E-mail: xiangyu.hu@tum.de; Adams, Nikolaus A., E-mail: nikolaus.adams@tum.de
2017-04-15
In this paper, we propose a novel domain decomposition method for large-scale simulations in continuum mechanics by merging the concepts of Centroidal Voronoi Tessellation (CVT) and Voronoi Particle dynamics (VP). The CVT is introduced to achieve a high-level compactness of the partitioning subdomains by the Lloyd algorithm which monotonically decreases the CVT energy. The number of computational elements between neighboring partitioning subdomains, which scales the communication effort for parallel simulations, is optimized implicitly as the generated partitioning subdomains are convex and simply connected with small aspect-ratios. Moreover, Voronoi Particle dynamics employing physical analogy with a tailored equation of state is developed, which relaxes the particle system towards the target partition with good load balance. Since the equilibrium is computed by an iterative approach, the partitioning subdomains exhibit locality and the incremental property. Numerical experiments reveal that the proposed Centroidal Voronoi Particle (CVP) based algorithm produces high-quality partitioning with high efficiency, independently of computational-element types. Thus it can be used for a wide range of applications in computational science and engineering.
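The Lloyd iteration that drives the CVT part of the method can be sketched as follows. This is a plain Monte Carlo version on the unit square; the sample counts, iteration counts and seeds are arbitrary illustrative choices, not values from the paper:

```python
import numpy as np

def lloyd_cvt(generators, n_samples=20000, n_iters=30, seed=0):
    """One plain Lloyd scheme for a Centroidal Voronoi Tessellation:
    assign random sample points to their nearest generator (a sampled
    Voronoi partition), then move each generator to the centroid of
    its assigned samples. Each pass decreases the CVT energy."""
    rng = np.random.default_rng(seed)
    g = np.array(generators, dtype=float)
    for _ in range(n_iters):
        samples = rng.random((n_samples, g.shape[1]))
        # nearest-generator assignment
        d2 = ((samples[:, None, :] - g[None, :, :]) ** 2).sum(axis=2)
        owner = d2.argmin(axis=1)
        for k in range(len(g)):
            mine = samples[owner == k]
            if len(mine):
                g[k] = mine.mean(axis=0)  # centroid update
    return g

gens = lloyd_cvt(np.random.default_rng(1).random((8, 2)))
```

After a few iterations the generators spread into compact, convex-looking cells of similar size, which is the load-balance property the abstract exploits for partitioning.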
The Fe removal in pyrophyllite by physical method
Cho, Kanghee; Jo, Jiyu; Bak, GeonYeong; Choi, NagChoul; Park*, CheonYoung
2015-04-01
The presence of Fe in ingredient materials such as limestone, borax and pyrophyllite can prevent their use, mainly in the glass fiber manufacturing industry. The red to yellow pigmentation in pyrophyllite is mainly due to the associated oxides and sulfides of Fe, such as hematite, pyrite, etc. The removal of Fe from pyrophyllite was investigated using high-frequency treatment and magnetic separation for various alumina grades of pyrophyllite. Hematite and pyrite were observed in the pyrophyllite in photomicrograph and XRD analysis results. XRF analysis showed that the SiO2, Fe2O3 and TiO2 contents increased as the Al2O3 content in the pyrophyllite decreased. The high-frequency treatment experiment on the pyrophyllite showed that (1) the pyrite phase was transformed into hematite and magnetite, and (2) the sample lost mass through volatilization of the sulfur (S) contained in the pyrite. Magnetic separation of the samples treated by high frequency achieved Fe removal in the range of 97.6~98.8%. This study demonstrated that the physical method (high-frequency treatment and magnetic separation) was effective for the removal of Fe from pyrophyllite. This subject is supported by the Korea Ministry of Environment (MOE) under the "Advanced Technology Program for Environmental Industry".
Homogeneous instantons in bigravity
International Nuclear Information System (INIS)
Zhang, Ying-li; Sasaki, Misao; Yeom, Dong-han
2015-01-01
We study homogeneous gravitational instantons, conventionally called the Hawking-Moss (HM) instantons, in bigravity theory. The HM instantons describe the amplitude of quantum tunneling from a false vacuum to the true vacuum. Corrections to General Relativity (GR) are found in a closed form. Using the result, we discuss the following two issues: reduction to the de Rham-Gabadadze-Tolley (dRGT) massive gravity and the possibility of preference for a large e-folding number in the context of the Hartle-Hawking (HH) no-boundary proposal. In particular, concerning the dRGT limit, it is found that the tunneling through the so-called self-accelerating branch is exponentially suppressed relative to the normal branch, and the probability becomes zero in the dRGT limit. As far as HM instantons are concerned, this could imply that the reduction from bigravity to the dRGT massive gravity is ill-defined.
Introduction to Methods of Approximation in Physics and Astronomy
van Putten, Maurice H. P. M.
2017-04-01
Modern astronomy reveals an evolving Universe rife with transient sources, mostly discovered - few predicted - in multi-wavelength observations. Our window of observations now includes electromagnetic radiation, gravitational waves and neutrinos. For the practicing astronomer, these are highly interdisciplinary developments that pose a novel challenge to be well-versed in astroparticle physics and data analysis. In realizing the full discovery potential of these multimessenger approaches, the latter increasingly involves high-performance supercomputing. These lecture notes developed out of lectures on mathematical physics in astronomy to advanced undergraduate and beginning graduate students. They are organised to be largely self-contained, starting from basic concepts and techniques in the formulation of problems and methods of approximation commonly used in computation and numerical analysis. This includes root finding, integration, signal detection algorithms involving the Fourier transform and examples of numerical integration of ordinary differential equations and some illustrative aspects of modern computational implementation. In the applications, considerable emphasis is put on fluid dynamical problems associated with accretion flows, as these are responsible for a wealth of high-energy emission phenomena in astronomy. The topics chosen are largely aimed at phenomenological approaches, to capture main features of interest by effective methods of approximation at a desired level of accuracy and resolution. Formulated in terms of a system of algebraic, ordinary or partial differential equations, this may be pursued by perturbation theory through expansions in a small parameter or by direct numerical computation. Successful application of these methods requires a robust understanding of asymptotic behavior, errors and convergence. In some cases, the number of degrees of freedom may be reduced, e.g., for the purpose of (numerical) continuation or to identify...
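As a flavour of the elementary techniques such notes cover, root finding by bisection and a composite trapezoidal quadrature each fit in a few lines. The test functions below are illustrative choices, not worked examples from the book:

```python
import math

def bisect(f, a, b, tol=1e-10):
    """Bisection: halve the bracketing interval [a, b] until it is
    shorter than tol; requires f(a) and f(b) to have opposite signs."""
    fa = f(a)
    while b - a > tol:
        m = 0.5 * (a + b)
        if fa * f(m) <= 0.0:
            b = m              # root lies in [a, m]
        else:
            a, fa = m, f(m)    # root lies in [m, b]
    return 0.5 * (a + b)

def trapezoid(f, a, b, n=1000):
    """Composite trapezoidal rule with n uniform panels."""
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return h * s

root = bisect(math.cos, 0.0, 2.0)         # near pi/2
area = trapezoid(math.sin, 0.0, math.pi)  # near 2
```

Bisection converges linearly but unconditionally within the bracket, while the trapezoidal error shrinks as O(h²); understanding such convergence rates is exactly the "errors and convergence" point the abstract stresses.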
The cell method a purely algebraic computational method in physics and engineering
Ferretti, Elena
2014-01-01
The Cell Method (CM) is a computational tool that maintains critical multidimensional attributes of physical phenomena in analysis. This information is neglected in the differential formulations of the classical approaches of finite element, boundary element, finite volume, and finite difference analysis, often leading to numerical instabilities and spurious results. This book highlights the central theoretical concepts of the CM that preserve a more accurate and precise representation of the geometric and topological features of variables for practical problem solving. Important applications occur in fields such as electromagnetics, electrodynamics, solid mechanics and fluids. CM addresses non-locality in continuum mechanics, an especially important circumstance in modeling heterogeneous materials. Professional engineers and scientists, as well as graduate students, are offered: A general overview of physics and its mathematical descriptions; Guidance on how to build direct, discrete formulations; Coverag...
Diamond-shaped electromagnetic transparent devices with homogeneous material parameters
International Nuclear Information System (INIS)
Li Tinghua; Huang Ming; Yang Jingjing; Yu Jiang; Lan Yaozhong
2011-01-01
Based on the linear coordinate transformation method, two-dimensional and three-dimensional electromagnetic transparent devices with diamond shape composed of homogeneous and non-singular materials are proposed in this paper. The permittivity and permeability tensors of the transparent devices are derived. The performance and scattering properties of the transparent devices are confirmed by a full-wave simulation. It can physically protect electric devices such as an antenna and a radar station inside, without sacrificing their performance. This work represents important progress towards the practical realization of metamaterial-assisted transparent devices and expands the application of transformation optics.
Directory of Open Access Journals (Sweden)
Phungamngoen Chanthima
2016-01-01
Full Text Available Coconut milk is one of the most important protein-rich food sources available today. Separation of the emulsion into an aqueous phase and a cream phase commonly occurs, and this leads to an unacceptable physical defect in either fresh or processed coconut milk. Since homogenization steps are known to affect the stability of coconut milk, this work aimed to study the effect of homogenization steps on the quality of coconut milk. The samples were subjected to high-speed homogenization in the range of 5000-15000 rpm and sterilization temperatures of 120-140 °C for 15 min. The results showed that emulsion stability increased with increasing homogenization speed. Smaller fat particles were generated that dispersed easily in the continuous phase, leading to high stability. On the other hand, when a high sterilization temperature was applied, the stability of the coconut milk decreased, the fat globules grew, the L value decreased and the b value increased. Homogenization after heating led to higher stability than homogenization before heating, due to the reduced particle size of the coconut milk after aggregation during the sterilization process. The results implied that homogenization after the sterilization process might play an important role in the quality of the sterilized coconut milk.
Single-molecule experiments in biological physics: methods and applications.
Ritort, F
2006-08-16
I review single-molecule experiments (SMEs) in biological physics. Recent technological developments have provided the tools to design and build scientific instruments of high enough sensitivity and precision to manipulate and visualize individual molecules and measure microscopic forces. Using SMEs it is possible to manipulate molecules one at a time and measure distributions describing molecular properties, characterize the kinetics of biomolecular reactions and detect molecular intermediates. SMEs provide additional information about thermodynamics and kinetics of biomolecular processes. This complements information obtained in traditional bulk assays. In SMEs it is also possible to measure small energies and detect large Brownian deviations in biomolecular reactions, thereby offering new methods and systems to scrutinize the basic foundations of statistical mechanics. This review is written at a very introductory level, emphasizing the importance of SMEs to scientists interested in knowing the common playground of ideas and the interdisciplinary topics accessible by these techniques. The review discusses SMEs from an experimental perspective, first exposing the most common experimental methodologies and later presenting various molecular systems where such techniques have been applied. I briefly discuss experimental techniques such as atomic-force microscopy (AFM), laser optical tweezers (LOTs), magnetic tweezers (MTs), biomembrane force probes (BFPs) and single-molecule fluorescence (SMF). I then present several applications of SME to the study of nucleic acids (DNA, RNA and DNA condensation) and proteins (protein-protein interactions, protein folding and molecular motors). Finally, I discuss applications of SMEs to the study of the nonequilibrium thermodynamics of small systems and the experimental verification of fluctuation theorems. I conclude with a discussion of open questions and future perspectives.
Diffusion piecewise homogenization via flux discontinuity ratios
International Nuclear Information System (INIS)
Sanchez, Richard; Dante, Giorgio; Zmijarevic, Igor
2013-01-01
We analyze piecewise homogenization with flux-weighted cross sections and preservation of averaged currents at the boundary of the homogenized domain. Introduction of a set of flux discontinuity ratios (FDR) that preserve reference interface currents leads to preservation of averaged region reaction rates and fluxes. We consider the class of numerical discretizations with one degree of freedom per volume and per surface and prove that when the homogenization and computing meshes are equal there is a unique solution for the FDRs which exactly preserve interface currents. For diffusion sub-meshing we introduce a Jacobian-Free Newton-Krylov method and for all cases considered obtain an 'exact' numerical solution (eight digits for the interface currents). The homogenization is completed by extending the familiar full assembly homogenization via flux discontinuity factors to the sides of regions lying on the boundary of the piecewise homogenized domain. Finally, for the familiar nodal discretization we numerically find that the FDRs obtained with no sub-mesh (nearly at no cost) can be effectively used for whole-core diffusion calculations with sub-mesh. This is not the case, however, for cell-centered finite differences. (authors)
The relationship between continuum homogeneity and statistical homogeneity in cosmology
International Nuclear Information System (INIS)
Stoeger, W.R.; Ellis, G.F.R.; Hellaby, C.
1987-01-01
Although the standard Friedmann-Lemaitre-Robertson-Walker (FLRW) Universe models are based on the concept that the Universe is spatially homogeneous, up to the present time no definition of this concept has been proposed that could in principle be tested by observation. Such a definition is here proposed, based on a simple spatial averaging procedure, which relates observable properties of the Universe to the continuum homogeneity idea that underlies the FLRW models. It turns out that the statistical homogeneity often used to describe the distribution of matter on a large scale does not imply spatial homogeneity according to this definition, and so cannot be simply related to a FLRW Universe model. Values are proposed for the homogeneity parameter and length scale of homogeneity of the Universe. (author)
Mathematical methods for students of physics and related fields
Hassani, Sadri
2000-01-01
Intended to follow the usual introductory physics courses, this book has the unique feature of addressing the mathematical needs of sophomores and juniors in physics, engineering and other related fields. Many original, lucid, and relevant examples from the physical sciences, problems at the ends of chapters, and boxes to emphasize important concepts help guide the student through the material. Beginning with reviews of vector algebra and differential and integral calculus, the book continues with infinite series, vector analysis, complex algebra and analysis, ordinary and partial differential equations. Discussions of numerical analysis, nonlinear dynamics and chaos, and the Dirac delta function provide an introduction to modern topics in mathematical physics. This new edition has been made more user-friendly through organization into convenient, shorter chapters. Also, it includes an entirely new section on Probability and plenty of new material on tensors and integral transforms. Some praise for the previous edi...
Mathematical Methods For Students of Physics and Related Fields
Hassani, Sadri
2009-01-01
Intended to follow the usual introductory physics courses, this book has the unique feature of addressing the mathematical needs of sophomores and juniors in physics, engineering and other related fields. Many original, lucid, and relevant examples from the physical sciences, problems at the ends of chapters, and boxes to emphasize important concepts help guide the student through the material. Beginning with reviews of vector algebra and differential and integral calculus, the book continues with infinite series, vector analysis, complex algebra and analysis, ordinary and partial differential equations. Discussions of numerical analysis, nonlinear dynamics and chaos, and the Dirac delta function provide an introduction to modern topics in mathematical physics. This new edition has been made more user-friendly through organization into convenient, shorter chapters. Also, it includes an entirely new section on Probability and plenty of new material on tensors and integral transforms. Some praise for the previo...
Physical-chemical property based sequence motifs and methods regarding same
Braun, Werner [Friendswood, TX; Mathura, Venkatarajan S [Sarasota, FL; Schein, Catherine H [Friendswood, TX
2008-09-09
A data analysis system, program, and/or method, e.g., a data mining/data exploration method, using physical-chemical property motifs. For example, a sequence database may be searched for identifying segments thereof having physical-chemical properties similar to the physical-chemical property motifs.
A Synthetic Approach to the Transfer Matrix Method in Classical and Quantum Physics
Pujol, O.; Perez, J. P.
2007-01-01
The aim of this paper is to propose a synthetic approach to the transfer matrix method in classical and quantum physics. This method is an efficient tool to deal with complicated physical systems of practical importance in geometrical light or charged particle optics, classical electronics, mechanics, electromagnetics and quantum physics. Teaching…
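For a concrete illustration of the quantum-mechanical case, a minimal transfer-matrix construction (in units ħ = m = 1, for piecewise-constant potentials) can be sketched as follows. This is the generic textbook scheme, not code from the paper; the barrier parameters at the end are arbitrary:

```python
import numpy as np

def interface(k1, k2, d):
    """Propagate plane-wave amplitudes across a region of width d with
    wavevector k1, then apply the continuity of psi and psi' onto the
    neighbouring region with wavevector k2 (a 2x2 transfer matrix)."""
    p, m = np.exp(1j * k1 * d), np.exp(-1j * k1 * d)
    r = k1 / k2
    return 0.5 * np.array([[(1 + r) * p, (1 - r) * m],
                           [(1 - r) * p, (1 + r) * m]])

def transmission(E, potentials, widths):
    """Transmission probability through piecewise-constant potentials,
    with V = 0 half-spaces on both sides (hbar = m = 1)."""
    V = [0.0] + list(potentials) + [0.0]
    d = [0.0] + list(widths)  # the incoming half-space adds no phase
    k = [np.sqrt(complex(2.0 * (E - v))) for v in V]
    M = np.eye(2, dtype=complex)
    for j in range(len(V) - 1):
        M = interface(k[j], k[j + 1], d[j]) @ M  # ordered matrix product
    refl = -M[1, 0] / M[1, 1]   # no left-moving wave on the far side
    t = M[0, 0] + M[0, 1] * refl
    return abs(t) ** 2          # same medium on both sides

# Rectangular barrier in the tunnelling regime (E < V0)
T = transmission(E=1.0, potentials=[2.0], widths=[1.0])
```

The whole system is just the ordered product of 2x2 matrices, which is the efficiency the abstract refers to; for a single rectangular barrier the result reproduces the analytic tunnelling formula.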
Lectures on holographic methods for condensed matter physics
International Nuclear Information System (INIS)
Hartnoll, Sean A
2009-01-01
These notes are loosely based on lectures given at the CERN Winter School on Supergravity, Strings and Gauge theories, February 2009, and at the IPM String School in Tehran, April 2009. I have focused on a few concrete topics and also on addressing questions that have arisen repeatedly. Background condensed matter physics material is included as motivation and easy reference for the high energy physics community. The discussion of holographic techniques progresses from equilibrium, to transport and to superconductivity.
Economical preparation of extremely homogeneous nuclear accelerator targets
International Nuclear Information System (INIS)
Maier, H.J.
1983-01-01
Techniques for target preparation with a minimum consumption of isotopic material are described. The rotating substrate method, which generates extremely homogeneous targets, is discussed in some detail
International Nuclear Information System (INIS)
Zaza, Chady
2015-01-01
The numerical simulation of steam generators of pressurized water reactors is a complex problem, involving different flow regimes and a wide range of length and time scales. An accidental scenario may be associated with very fast variations of the flow at a high Mach number. In contrast, in the nominal regime the flow may be stationary, at low Mach number. Moreover, whatever the regime under consideration, the array of U-tubes is modelled by a porous medium in order to avoid taking into account the complex geometry of the steam generator, which entails the issue of the coupling conditions at the interface with the free fluid. We propose a new pressure-correction scheme for cell-centered finite volumes for solving the compressible Navier-Stokes and Euler equations at all Mach numbers. The existence of a discrete solution, the consistency of the scheme in the Lax sense and the positivity of the internal energy were proved. The scheme was then extended to the homogeneous two-phase flow models of the GENEPI code developed at CEA. Lastly, a multigrid-AMR algorithm was adapted to use our pressure-correction scheme on adaptive grids. Regarding the second issue addressed in this work, the numerical simulation of a fluid flow over a porous bed involves very different length scales. Macroscopic interface models - such as the Ochoa-Tapia-Whitaker or Beavers-Joseph law for a viscous flow - represent the transition region between the free fluid and the porous region by an interface of discontinuity associated with specific transmission conditions. An extension to the Beavers-Joseph law was proposed for the convective regime. By introducing a jump in the kinetic energy at the interface, we recover an interface condition close to the Beavers-Joseph law but with a non-linear slip coefficient, which depends on the free-fluid velocity at the interface and on the Darcy velocity. The validity of this new transmission condition was assessed with direct numerical simulations at...
Directory of Open Access Journals (Sweden)
Grėtė Brukštutė
2016-04-01
Full Text Available The article analyses research already carried out to determine the correlation between the physical environment of schools and educational paradigms. In selecting material for the analysis, attention was focused on studies conducted in the USA and European countries. Based on these studies, an attempt was made to identify methodological attitudes towards the coherence of education and spatial structures. Homogeneity and conformity between the educational character and the physical learning environment become especially important during changes of educational conceptions. The issue of how educational paradigms affect the architecture of school buildings has not yet been analysed in Lithuania; therefore, the results of this research could actualize the theme of correlation between educational paradigms and the architecture of school buildings and form initial guidelines for the development of a modern physical learning environment.
The evaporative vector: Homogeneous systems
International Nuclear Information System (INIS)
Klots, C.E.
1987-05-01
Molecular beams of van der Waals molecules are the subject of much current research. Among the methods used to form these beams, three (sputtering, laser ablation, and the sonic nozzle expansion of neat gases) yield what are now recognized to be ''warm clusters.'' They contain enough internal energy to undergo a number of first-order processes, in particular that of evaporation. Because of this evaporation and its attendant cooling, the properties of such clusters are time-dependent. The states of matter which can be arrived at via an evaporative vector on a typical laboratory time-scale are discussed. Topics include (1) temperatures, (2) metastability, (3) phase transitions, (4) kinetic energies of fragmentation, and (5) the expression of magical properties, all for evaporating homogeneous clusters.
Burn-up function of fuel management code for aqueous homogeneous reactors and its validation
International Nuclear Information System (INIS)
Wang Liangzi; Yao Dong; Wang Kan
2011-01-01
Fuel Management Code for Aqueous Homogeneous Reactors (FMCAHR) is developed based on the Monte Carlo transport method to analyze the physics characteristics of aqueous homogeneous reactors. FMCAHR has the capabilities of resonance treatment, critical rod height search, thermal-hydraulic parameter calculation, radiolytic-gas bubble calculation and burn-up calculation. This paper introduces the theoretical model and scheme of its burn-up function, and then compares its calculation results with benchmarks and with DRAGON's burn-up results, which confirms its burn-up computing precision and its applicability in burn-up calculation and analysis for aqueous solution reactors. (authors)
A non-asymptotic homogenization theory for periodic electromagnetic structures.
Tsukerman, Igor; Markel, Vadim A
2014-08-08
Homogenization of electromagnetic periodic composites is treated as a two-scale problem and solved by approximating the fields on both scales with eigenmodes that satisfy Maxwell's equations and boundary conditions as accurately as possible. Built into this homogenization methodology is an error indicator whose value characterizes the accuracy of homogenization. The proposed theory allows one to define not only bulk, but also position-dependent material parameters (e.g. in proximity to a physical boundary) and to quantify the trade-off between the accuracy of homogenization and its range of applicability to various illumination conditions.
Methods to Implement Innovation and Entrepreneurship in Physics
Arion, Douglas
2015-03-01
The physics community is beginning to become aware of the benefits of entrepreneurship and innovation education: greater enrollments, improved student satisfaction, a wider range of interesting research problems, and the potential for greater return from more successful alumni. This talk will suggest a variety of mechanisms by which physics departments can include entrepreneurship and innovation content within their programs - without necessarily requiring earth-shattering changes to the curriculum. These approaches will thus make it possible for departments to get involved with entrepreneurship and innovation, and grow those components into vibrant activities for students and faculty.
Homogenization of resonant chiral metamaterials
Andryieuski, Andrei; Menzel, Christoph; Rockstuhl, Carsten; Malureanu, Radu; Lederer, Falk; Lavrinenko, Andrei
2010-01-01
Homogenization of metamaterials is a crucial issue as it allows one to describe their optical response in terms of effective wave parameters such as propagation constants. In this paper we consider the possible homogenization of chiral metamaterials. We show that for meta-atoms of a certain size a critical density exists above which increasing coupling between neighboring meta-atoms prevents a reasonable homogenization. On the contrary, a dilution in excess will induce features reminiscent of pho...
Bilipschitz embedding of homogeneous fractals
Lü, Fan; Lou, Man-Li; Wen, Zhi-Ying; Xi, Li-Feng
2014-01-01
In this paper, we introduce a class of fractals named homogeneous sets based on some measure versions of homogeneity, uniform perfectness and doubling. This fractal class includes all Ahlfors-David regular sets, but most of its members are irregular in the sense that they may have different Hausdorff dimensions and packing dimensions. Using Moran sets as the main tool, we study the dimensions, bilipschitz embedding and quasi-Lipschitz equivalence of homogeneous fractals.
Mashayekhi, Hossein Ali
2013-01-01
Homogeneous liquid-liquid extraction via flotation assistance (HoLLE-FA) and gas chromatography-flame ionization detection (GC-FID) was presented for the extraction and determination of fenitrothion in water samples. In this work, a rapid, simple and efficient HoLLE-FA method was developed based on applying low-density organic solvents without employing centrifugation. A special extraction cell was designed to facilitate the collection of the low-density solvent extract in the determination of fenitrothion in water samples. The water sample solution was added into an extraction cell that contained an appropriate mixture of extraction and homogeneous solvents. By using air flotation, the organic solvent was collected at the conical part of the designed cell. Under the optimum conditions, the method performance was studied in terms of the linear dynamic range (LDR from 1.0 up to 100 μg L⁻¹), linearity (r² > 0.998), and precision (repeatability) for the extraction and determination of fenitrothion in three different water samples.
Application of the K-W-L Teaching and Learning Method to an Introductory Physics Course
Wrinkle, Cheryl Schaefer; Manivannan, Mani K.
2009-01-01
The K-W-L method of teaching is a simple method that actively engages students in their own learning. It has been used with kindergarten and elementary grades to teach other subjects. The authors have successfully used it to teach physics at the college level. In their introductory physics labs, the K-W-L method helped students think about what…
Czech Academy of Sciences Publication Activity Database
Kopačka, Ján; Tkachuk, A.; Gabriel, Dušan; Kolman, Radek; Bischoff, M.; Plešek, Jiří
2018-01-01
Vol. 113, No. 10 (2018), pp. 1607-1629 ISSN 0029-5981 R&D Projects: GA MŠk(CZ) EF15_003/0000493; GA ČR(CZ) GA17-22615S; GA ČR GA17-12925S; GA ČR(CZ) GA16-03823S Grant - others: AV ČR(CZ) DAAD-16-12 Program: Bilateral cooperation Institutional support: RVO:61388998 Keywords: bipenalty method * explicit time integration * finite element method * penalty method * reflection-transmission analysis * stability analysis Subject RIV: JC - Computer Hardware; Software OECD field: Applied mechanics Impact factor: 2.162, year: 2016 http://onlinelibrary.wiley.com/doi/10.1002/nme.5712/full
C.M. Cobbaert (Christa); H. Baadenhuijsen; L. Zwang (Louwerens); C.W. Weykamp; P.N. Demacker; P.G.H. Mulder (Paul)
1999-01-01
BACKGROUND: Standardization of HDL-cholesterol is needed for risk assessment. We assessed for the first time the accuracy of HDL-cholesterol testing in The Netherlands and evaluated 11 candidate reference materials (CRMs). METHODS: The total error (TE) of
Application of the group-theoretical method to physical problems
Abd-el-malek, Mina B.
1998-01-01
The concept of the theory of continuous groups of transformations has attracted the attention of applied mathematicians and engineers to solve many physical problems in the engineering sciences. Three applications are presented in this paper. The first one is the problem of time-dependent vertical temperature distribution in a stagnant lake. Two cases have been considered for the forms of the water parameters, namely water density and thermal conductivity. The second application is the unstea...
Homogeneous versus heterogeneous zeolite nucleation
Dokter, W.H.; Garderen, van H.F.; Beelen, T.P.M.; Santen, van R.A.; Bras, W.
1995-01-01
Aggregates of fractal dimension were found in the intermediate gel phases that organize prior to nucleation and crystallization of silicalite from a homogeneous reaction mixture. Small- and wide-angle X-ray scattering studies prove that for zeolites nucleation may be homogeneous or
Homogeneous crystal nucleation in polymers.
Schick, C; Androsch, R; Schmelzer, J W P
2017-11-15
The pathway of crystal nucleation significantly influences the structure and properties of semi-crystalline polymers. Crystal nucleation is normally heterogeneous at low supercooling, and homogeneous at high supercooling, of the polymer melt. Homogeneous nucleation in bulk polymers has been, so far, hardly accessible experimentally, and was even doubted to occur at all. This topical review summarizes experimental findings on homogeneous crystal nucleation in polymers. Recently developed fast scanning calorimetry, with cooling and heating rates up to 10⁶ K s⁻¹, allows for detailed investigations of nucleation near and even below the glass transition temperature, including analysis of nuclei stability. As for other materials, the maximum homogeneous nucleation rate for polymers is located close to the glass transition temperature. In the experiments discussed here, it is shown that polymer nucleation is homogeneous at such temperatures. Homogeneous nucleation in polymers is discussed in the framework of the classical nucleation theory. The majority of our observations are consistent with the theory. The discrepancies may guide further research, particularly experiments to progress theoretical development. Progress in the understanding of homogeneous nucleation is much needed, since most of the modelling approaches dealing with polymer crystallization exclusively consider homogeneous nucleation. This is also the basis for advancing theoretical approaches to the much more complex phenomena governing heterogeneous nucleation.
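In the classical nucleation theory mentioned above, the steady-state homogeneous nucleation rate is commonly written (standard symbols, not taken from the review) as:

```latex
J = J_{0}\exp\!\left(-\frac{\Delta G^{*}}{k_{B}T}\right),
\qquad
\Delta G^{*} = \frac{16\pi\,\sigma^{3}}{3\,\Delta g_{v}^{2}}
```

where $\sigma$ is the crystal-melt interfacial free energy and $\Delta g_{v}$ the bulk free-energy difference per unit volume. The competition between the driving force $\Delta g_{v}$, which grows with supercooling, and the slowing molecular kinetics near the glass transition is what produces the rate maximum close to $T_g$ noted in the review.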
Homogenization theory in reactor lattices
International Nuclear Information System (INIS)
Benoist, P.
1986-02-01
The purpose of the theory of homogenization of reactor lattices is to determine, by means of transport theory, the constants of a homogeneous medium equivalent to a given lattice, which allows one to treat the reactor as a whole by diffusion theory. In this note, the problem is presented with emphasis on simplicity, as far as possible [fr
Comments on Thermal Physical Properties Testing Methods of Phase Change Materials
Directory of Open Access Journals (Sweden)
Jingchao Xie
2013-01-01
Full Text Available There is no standard testing method for the thermal physical properties of phase change materials (PCM). This paper reviews advancements in this field. Developments and achievements in thermal physical property testing methods for PCM are discussed, including differential scanning calorimetry, T-history measurement, the water bath method, and differential thermal analysis. Testing principles, advantages and disadvantages, and important points of attention for each method are covered. A foundation for standardized testing methods for PCM was laid.
Wagner, Robert M [Knoxville, TN; Daw, Charles S [Knoxville, TN; Green, Johney B [Knoxville, TN; Edwards, Kevin D [Knoxville, TN
2008-10-07
This invention is a method of achieving stable, optimal mixtures of HCCI and SI in practical gasoline internal combustion engines comprising the steps of: characterizing the combustion process based on combustion process measurements, determining the ratio of conventional and HCCI combustion, determining the trajectory (sequence) of states for consecutive combustion processes, and determining subsequent combustion process modifications using said information to steer the engine combustion toward desired behavior.
Benchmarking lattice physics data and methods for boiling water reactor analysis
International Nuclear Information System (INIS)
Cacciapouti, R.J.; Edenius, M.; Harris, D.R.; Hebert, M.J.; Kapitz, D.M.; Pilat, E.E.; VerPlanck, D.M.
1983-01-01
The objective of the work reported was to verify the adequacy of lattice physics modeling for the analysis of the Vermont Yankee BWR using a multigroup, two-dimensional transport theory code. The BWR lattice physics methods have been benchmarked against reactor physics experiments, higher order calculations, and actual operating data
Physical activity decreases from childhood through adulthood. Among youth, teenagers (teens) achieve the lowest levels of physical activity, and high school age youth are particularly at risk of inactivity. Effective methods are needed to increase youth physical activity in a way that can be maintai...
Harlapur, M. D.; Mallapur, D. G.; Udupa, K. Rajendra
2018-04-01
In the present study, an experimental study of the volumetric wear behaviour of an aluminium (Al-25Mg2Si2Cu4Ni) alloy, in the as-cast condition and after 5 h homogenization with T6 heat treatment, is carried out at constant load. A pin-on-disc apparatus was used to carry out the sliding wear test. The Taguchi method based on an L16 orthogonal array was employed to evaluate the wear-behaviour data. Signal-to-noise ratios with the smaller-the-better objective and mean-of-means results were used. A general regression model is obtained by correlation. Lastly, a confirmation test was completed to compare the experimental results with those foreseen from the mentioned correlation. The mathematical model reveals that load has the maximum contribution to the wear rate compared to speed. A scanning electron microscope was used to analyze the worn-out wear surfaces. Wear results show that the 5 h homogenized Al-25Mg2Si2Cu4Ni alloy samples with T6 treatment had better volumetric wear resistance than the as-cast samples.
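The smaller-the-better signal-to-noise ratio used in such Taguchi analyses can be sketched as follows; the wear values below are hypothetical illustrations, not the paper's data:

```python
import math

def sn_smaller_is_better(values):
    """Taguchi S/N ratio for a 'smaller the better' response:
    S/N = -10 * log10(mean of squared responses)."""
    mean_sq = sum(v * v for v in values) / len(values)
    return -10.0 * math.log10(mean_sq)

# Hypothetical volumetric-wear replicates for one L16 run;
# a larger S/N value indicates less (better) wear.
wear = [0.12, 0.15, 0.11]
print(round(sn_smaller_is_better(wear), 2))
```

Ranking the mean S/N per factor level across the 16 runs is what identifies load as the dominant contributor to wear rate.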
Coupling of partitioned physics codes with quasi-Newton methods
CSIR Research Space (South Africa)
Haelterman, R
2017-03-01
Full Text Available , A class of methods for solving nonlinear simultaneous equations. Math. Comp. 19, pp. 577–593 (1965) [3] C.G. Broyden, Quasi-Newton methods and their applications to function minimization. Math. Comp. 21, pp. 368–381 (1967) [4] J.E. Dennis, J.J. Moré, Quasi-Newton methods: motivation and theory. SIAM Rev. 19, pp. 46–89 (1977) [5] J.E. Dennis, R.B. Schnabel, Least Change Secant Updates for quasi-Newton methods. SIAM Rev. 21, pp. 443–459 (1979) [6] G. Dhondt, CalculiX CrunchiX USER'S MANUAL Version 2...
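The Broyden-type quasi-Newton idea behind these references can be illustrated with a minimal root-finding sketch; the 2x2 test system and all names are illustrative, not taken from the paper:

```python
import numpy as np

def broyden(F, x0, tol=1e-10, max_iter=50):
    """Broyden's 'good' quasi-Newton method for solving F(x) = 0.

    The Jacobian approximation B is built once by finite differences
    and afterwards refreshed only by cheap rank-one secant updates -
    the key idea behind quasi-Newton coupling of partitioned codes,
    where exact Jacobians of the coupled residual are unavailable.
    """
    x = np.asarray(x0, dtype=float)
    n = len(x)
    f = F(x)
    # One-time finite-difference Jacobian, column by column
    B = np.empty((n, n))
    h = 1e-7
    for j in range(n):
        e = np.zeros(n)
        e[j] = h
        B[:, j] = (F(x + e) - f) / h
    for _ in range(max_iter):
        s = np.linalg.solve(B, -f)     # quasi-Newton step
        x = x + s
        f_new = F(x)
        # Broyden rank-one secant update: B += (y - B s) s^T / (s^T s)
        B += np.outer(f_new - f - B @ s, s) / (s @ s)
        f = f_new
        if np.linalg.norm(f) < tol:
            break
    return x

# Toy nonlinear system (illustrative only):
# x0^2 + x1^2 = 1 and x0 = x1, with root (1/sqrt(2), 1/sqrt(2)).
sol = broyden(lambda x: np.array([x[0]**2 + x[1]**2 - 1.0,
                                  x[0] - x[1]]), [1.0, 0.5])
```

In partitioned fluid-structure coupling, F would be the interface residual of one solver composed with the other, so each evaluation of F is one exchange between the physics codes.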
Application of nuclear-physics methods in space materials science
Novikov, L. S.; Voronina, E. N.; Galanina, L. I.; Chirskaya, N. P.
2017-07-01
The brief history of the development of investigations at the Skobeltsyn Institute of Nuclear Physics, Moscow State University (SINP MSU) in the field of space materials science is outlined. A generalized scheme of a numerical simulation of the radiation impact on spacecraft materials and elements of spacecraft equipment is examined. The results obtained by solving some of the most important problems that modern space materials science should address in studying nuclear processes, the interaction of charged particles with matter, particle detection, the protection from ionizing radiation, and the impact of particles on nanostructures and nanomaterials are presented.
Mathematical methods for mathematicians, physical scientists and engineers
Dunning-Davies, J
2003-01-01
This practical introduction encapsulates the entire content of teaching material for UK honours degree courses in mathematics, physics, chemistry and engineering, and is also appropriate for post-graduate study. It imparts the necessary mathematics for use of the techniques, with subject-related worked examples throughout. The text is supported by challenging problem exercises (and answers) to test student comprehension. Index notation used in the text simplifies manipulations in the sections on vectors and tensors. Partial differential equations are discussed, and special functions introduced
International Nuclear Information System (INIS)
Rezvani Nikabadi, H; Shahtahmasebi, N; Rezaee Rokn-Abadi, M; Bagheri Mohagheghi, M M; Goharshadi, E K
2013-01-01
In this paper, a facile method for the synthesis of gold nanoseeds on the functionalized surface of silica nanoparticles has been investigated. Mono-dispersed silica particles and gold nanoparticles were prepared by the chemical reduction method. The thickness of the Au shell was well controlled by repeating the reduction of HAuCl₄ on silica/3-aminopropyltriethoxysilane (APTES)/initial gold nanoparticles. The prepared SiO₂-gold core/shell nanoparticles were studied using x-ray diffraction, scanning electron microscopy, transmission electron microscopy (TEM), Fourier transform infrared spectroscopy and ultraviolet-visible (UV-Vis) spectroscopy. The TEM images indicated that the silica nanoparticles were spherical in shape with 100 nm diameters, and that they were functionalized with a layer of bi-functional APTES molecules and tetrakis(hydroxymethyl)phosphonium chloride. The gold nanoparticles show a narrow size distribution of up to 5 nm, and by growing gold nanoseeds over the silica cores a red shift in the maximum absorbance of UV-Vis spectroscopy from 524 to 637 nm was observed.
Perfect Form: Variational Principles, Methods, and Applications in Elementary Physics
International Nuclear Information System (INIS)
Isenberg, C
1997-01-01
This short book is concerned with the physical applications of variational principles of the calculus. It is intended for undergraduate students who have taken some introductory lectures on the subject and have been exposed to Lagrangian and Hamiltonian mechanics. Throughout the book the author emphasizes the historical background to the subject and provides numerous problems, mainly from the fields of mechanics and optics. Some of these problems are provided with an answer, while others, regretfully, are not. It would have been an added help to the undergraduate reader if complete solutions could have been provided in an appendix. The introductory chapter is concerned with Fermat's Principle and image formation. This is followed by the derivation of the Euler-Lagrange equation. The third chapter returns to the subject of optical paths without making the link with a mechanical variational principle - that comes later. Chapters on the subjects of minimum potential energy, least action and Hamilton's principle follow. This volume provides an 'easy read' for a student keen to learn more about the subject. It is well illustrated and will make a useful addition to all undergraduate physics libraries. (book review)
Energy Technology Data Exchange (ETDEWEB)
Tang, Yundong [College of Physics and Information Engineering, Fuzhou University, Fuzhou 350116 (China); Flesch, Rodolfo C.C. [Departamento de Automação e Sistemas, Universidade Federal de Santa Catarina, 88040-900 Florianópolis, SC (Brazil); Jin, Tao, E-mail: jintly@fzu.edu.cn [College of Electrical Engineering and Automation, Fuzhou University, Fuzhou 350116 (China)
2017-06-15
Highlights: • The effects of blood vessels on temperature field distribution are investigated. • The critical thermal energy of hyperthermia is computed by the Finite Element Analysis. • A treatment method is proposed by using the MNPs with low Curie temperature. • The cooling effects due to the blood flow can be controlled. - Abstract: Magnetic hyperthermia ablates tumor cells by absorbing the thermal energy from magnetic nanoparticles (MNPs) under an external alternating magnetic field. The blood vessels (BVs) within the tumor region can generally reduce treatment effectiveness due to the cooling effect of blood flow. This paper aims to investigate the cooling effect of BVs on the temperature field of malignant tumor regions using a complex geometric model and numerical simulation. For deriving the model, the Navier-Stokes equation for blood flow is combined with the Pennes bio-heat transfer equation for human tissue. The effects on treatment temperature caused by two different BV distributions inside a mammary tumor are analyzed through numerical simulation under different conditions of flow rate, considering a Fe-Cr-Nb-B alloy, which has a low Curie temperature ranging from 42 °C to 45 °C. Numerical results show that the multi-vessel system has more obvious cooling effects than the single-vessel one on the temperature field distribution for hyperthermia. Besides, simulation results show that the temperature field within the tumor area can also be influenced by the velocity and diameter of BVs. To minimize the cooling effect, this article proposes a treatment method based on an increase of the thermal energy provided to the MNPs, associated with the adoption of low Curie temperature particles recently reported in the literature. Results demonstrate that this approach noticeably improves the uniformity of the temperature field and shortens the treatment time in a Fe-Cr-Nb-B system, thus reducing the side effects to the patient.
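The tissue side of the coupled model described above rests on Pennes' bio-heat equation; in a standard form (symbols assumed here, not reproduced from the paper) it reads:

```latex
\rho c \,\frac{\partial T}{\partial t}
  = \nabla\!\cdot\!\left(k\,\nabla T\right)
  + \rho_{b} c_{b}\,\omega_{b}\,(T_{a} - T)
  + Q_{\mathrm{met}} + Q_{\mathrm{MNP}}
```

Here the perfusion term $\rho_{b} c_{b}\,\omega_{b}\,(T_{a} - T)$ models the distributed cooling by capillary blood at arterial temperature $T_{a}$, and $Q_{\mathrm{MNP}}$ is the heat source supplied by the magnetic nanoparticles; the larger vessels, whose cooling effect the paper studies, are instead resolved explicitly by coupling this equation to the Navier-Stokes equations for the blood flow.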
A generalized model for homogenized reflectors
International Nuclear Information System (INIS)
Pogosbekyan, Leonid; Kim, Yeong Il; Kim, Young Jin; Joo, Hyung Kook
1996-01-01
A new concept of equivalent homogenization is proposed. The concept employs a new set of homogenized parameters: homogenized cross sections (XS) and an interface matrix (IM), which relates partial currents at the cell interfaces. The idea of the interface matrix generalizes the idea of discontinuity factors (DFs), proposed and developed by K. Koebke and K. Smith. The method of K. Smith can be simulated within the framework of the new method, while the new method approximates a heterogeneous cell better in the case of steep flux gradients at the cell interfaces. The attractive features of the new concept are: improved accuracy, simplicity of incorporation in existing codes, and numerical expense equal to that of K. Smith's approach. The new concept is useful for: (a) explicit reflector/baffle simulation; (b) control blade simulation; (c) mixed UO₂/MOX core simulation. The offered model has been incorporated in a finite difference code and in the nodal code PANBOX. The numerical results show good accuracy of core calculations and insensitivity of the homogenized parameters with respect to in-core conditions
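Schematically, for a one-dimensional cell with faces 1 and 2, the interface-matrix idea can be written (notation illustrative, not the paper's) as a linear relation between outgoing and incoming partial currents:

```latex
\begin{pmatrix} J_{1}^{-} \\ J_{2}^{+} \end{pmatrix}
  = \Lambda
\begin{pmatrix} J_{1}^{+} \\ J_{2}^{-} \end{pmatrix}
```

where $J^{\pm}$ denote the partial currents in the two directions at each face and $\Lambda$ is the interface matrix characterizing the heterogeneous cell. Loosely speaking, a purely face-local (diagonal) relation corresponds to the discontinuity-factor description that, per the abstract, this matrix generalizes.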
Mechanized syringe homogenization of human and animal tissues.
Kurien, Biji T; Porter, Andrew C; Patel, Nisha C; Kurono, Sadamu; Matsumoto, Hiroyuki; Scofield, R Hal
2004-06-01
Tissue homogenization is a prerequisite to any fractionation schedule. A plethora of hands-on methods are available to homogenize tissues. Here we report a mechanized method for homogenizing animal and human tissues rapidly and easily. The Bio-Mixer 1200 (manufactured by Innovative Products, Inc., Oklahoma City, OK) utilizes the back-and-forth movement of two motor-driven disposable syringes, connected to each other through a three-way stopcock, to homogenize animal or human tissue. Using this method, we were able to homogenize human or mouse tissues (brain, liver, heart, and salivary glands) in 5 min. From sodium dodecyl sulfate-polyacrylamide gel electrophoresis analysis and a matrix-assisted laser desorption/ionization time-of-flight mass spectrometric enzyme assay for prolidase, we have found that the homogenates obtained were as good as or even better than those obtained using a manual glass-on-Teflon (DuPont, Wilmington, DE) homogenization protocol (all-glass tube and Teflon pestle). Use of the Bio-Mixer 1200 to homogenize animal or human tissue precludes the need to stay in the cold room, as is the case with the other hands-on homogenization methods available, in addition to freeing up time for other experiments.
A pedagogical derivation of the matrix element method in particle physics data analysis
Sumowidagdo, Suharyo
2018-03-01
The matrix element method provides a direct connection between the underlying theory of particle physics processes and detector-level physical observables. I present a pedagogically-oriented derivation of the matrix element method, drawing from elementary concepts in probability theory, statistics, and the process of experimental measurements. The level of treatment should be suitable for a beginning research student in phenomenology and experimental high-energy physics.
A Survey On Physical Methods For Deformation Modeling
Directory of Open Access Journals (Sweden)
Huda Basloom
2015-08-01
Full Text Available Much effort has been dedicated to achieving realism in the simulation of deformable objects such as cloth, hair, rubber, sea water, smoke and human soft tissue in surgical simulation. However, the deformable object in these simulations should exhibit physically correct behaviors, true to the behavior of real objects, when any force is applied to it, and sometimes this requires real-time simulation. No matter how complex the geometry is, real-time simulation is still required in some applications. Surgery simulation is an example of the need for real-time simulation. This situation has attracted the attention of a wide community of researchers, including computer scientists, mechanical engineers, biomechanics researchers and computational geometers. This paper presents a review of the existing techniques for modeling deformable objects which have been developed within the last three decades for different interactive computer graphics applications.
Statistical methods for data analysis in particle physics
Lista, Luca
2017-01-01
This concise set of course-based notes provides the reader with the main concepts and tools needed to perform statistical analyses of experimental data, in particular in the field of high-energy physics (HEP). First, the book provides an introduction to probability theory and basic statistics, mainly intended as a refresher from readers’ advanced undergraduate studies, but also to help them clearly distinguish between the Frequentist and Bayesian approaches and interpretations in subsequent applications. More advanced concepts and applications are gradually introduced, culminating in the chapter on both discoveries and upper limits, as many applications in HEP concern hypothesis testing, where the main goal is often to provide better and better limits so as to eventually be able to distinguish between competing hypotheses, or to rule out some of them altogether. Many worked-out examples will help newcomers to the field and graduate students alike understand the pitfalls involved in applying theoretical co...
Statistical methods for data analysis in particle physics
AUTHOR|(CDS)2070643
2015-01-01
This concise set of course-based notes provides the reader with the main concepts and tools to perform statistical analysis of experimental data, in particular in the field of high-energy physics (HEP). First, an introduction to probability theory and basic statistics is given, mainly as a reminder from advanced undergraduate studies, yet also with a view to clearly distinguishing the Frequentist and Bayesian approaches and interpretations in subsequent applications. More advanced concepts and applications are gradually introduced, culminating in the chapter on upper limits, as many applications in HEP concern hypothesis testing, where often the main goal is to provide better and better limits so as to be able to distinguish eventually between competing hypotheses or to rule out some of them altogether. Many worked examples will help newcomers to the field and graduate students to understand the pitfalls in applying theoretical concepts to actual data
Tokamak physics studies using x-ray diagnostic methods
International Nuclear Information System (INIS)
Hill, K.W.; Bitter, M.; von Goeler, S.
1987-03-01
X-ray diagnostic measurements have been used in a number of experiments to improve our understanding of important tokamak physics issues. The impurity content in TFTR plasmas, its sources and control have been clarified through soft x-ray pulse-height analysis (PHA) measurements. The dependence of intrinsic impurity concentrations and Z_eff on electron density, plasma current, limiter material and conditioning, and neutral-beam power have shown that the limiter is an important source of metal impurities. Neoclassical-like impurity peaking following hydrogen pellet injection into Alcator C and a strong effect of impurities on sawtooth behavior were demonstrated by x-ray imaging (XIS) measurements. Rapid inward motion of impurities and continuation of m = 1 activity following an internal disruption were demonstrated with XIS measurements on PLT using injected aluminum to enhance the signals. Ion temperatures up to 12 keV and a toroidal plasma rotation velocity up to 6 × 10⁵ m/s have been measured by an x-ray crystal spectrometer (XCS) with up to 13 MW of 85-keV neutral-beam injection in TFTR. Precise wavelengths and relative intensities of x-ray lines in several helium-like ions and neon-like ions of silver have been measured in TFTR and PLT by the XCS. The data help to identify the important excitation processes predicted in atomic physics. Wavelengths of n = 3 to 2 silver lines of interest for x-ray lasers were measured, and precise instrument calibration techniques were developed. Electron thermal conductivity and sawtooth dynamics have been studied through XIS measurements on TFTR of heat-pulse propagation and compound sawteeth. A non-Maxwellian electron distribution function has been measured, and evidence of the Parail-Pogutse instability identified by hard x-ray PHA measurements on PLT during lower-hybrid current-drive experiments
Keiner, Louis E.; Gilman, Craig
2015-01-01
This study measures the effects of increased faculty-student engagement on student learning, success rates, and perceptions in a Physical Oceanography course. The study separately implemented two teaching methods that had been shown to be successful in a different discipline, introductory physics. These methods were the use of interactive…
Proceedings, 3rd International Satellite Conference on Mathematical Methods in Physics (ICMP13)
2013-01-01
The aim of the Conference is to present the latest advances in Mathematical Methods to researchers, post-docs and graduate students working in the areas of Physics of Particles and Fields, Mathematical Physics and Applied Mathematics. Topics: Methods of Spectral and Group Theory, Differential and Algebraic Geometry and Topology in Field Theory, Quantum Gravity, String Theory and Cosmology.
Commensurability effects in holographic homogeneous lattices
International Nuclear Information System (INIS)
Andrade, Tomas; Krikun, Alexander
2016-01-01
An interesting application of the gauge/gravity duality to condensed matter physics is the description of a lattice via breaking translational invariance on the gravity side. By making use of global symmetries, it is possible to do so without sacrificing homogeneity of the pertinent bulk solutions, which we thus term "homogeneous holographic lattices." Due to their technical simplicity, these configurations have received a great deal of attention in the last few years and have been shown to correctly describe momentum relaxation and hence (finite) DC conductivities. However, it is not clear whether they are able to capture other lattice effects which are of interest in condensed matter. In this paper we investigate this question focusing our attention on the phenomenon of commensurability, which arises when the lattice scale is tuned to be equal to (an integer multiple of) another momentum scale in the system. We do so by studying the formation of spatially modulated phases in various models of homogeneous holographic lattices. Our results indicate that the onset of the instability is controlled by the near horizon geometry, which for insulating solutions does carry information about the lattice. However, we observe no sharp connection between the characteristic momentum of the broken phase and the lattice pitch, which calls into question the applicability of these models to the physics of commensurability.
Homogenized group cross sections by Monte Carlo
International Nuclear Information System (INIS)
Van Der Marck, S. C.; Kuijper, J. C.; Oppe, J.
2006-01-01
Homogenized group cross sections play a large role in making reactor calculations efficient. Because of this significance, many codes exist that can calculate these cross sections based on certain assumptions. However, for the application to the High Flux Reactor (HFR) in Petten, the Netherlands, the limitations of such codes imply that the core calculations would become less accurate when using homogenized group cross sections (HGCS). Therefore we developed a method to calculate HGCS based on a Monte Carlo program, for which we chose MCNP. The implementation involves an addition to MCNP and a set of small executables that perform suitable averaging after the MCNP run(s) have completed. Here we briefly describe the details of the method, and we report on two tests we performed to show the accuracy of the method and its implementation. By now, this method is routinely used in preparation of the cycle-to-cycle core calculations for the HFR. (authors)
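The averaging step after the Monte Carlo runs amounts to flux-weighted condensation: a broad-group cross section preserves the tallied reaction rate. A minimal sketch (the fine-group data and flux below are made-up numbers; a real workflow tallies region-wise reaction rates and fluxes from MCNP):

```python
import numpy as np

# Made-up fine-group cross sections and tallied scalar flux for one region
sigma_fine = np.array([1.2, 1.0, 0.8, 0.6, 0.5, 0.4])   # [1/cm]
phi_fine   = np.array([0.1, 0.3, 0.6, 1.0, 0.7, 0.2])   # group fluxes

# Collapse to 2 broad groups: fine groups 0-2 and 3-5
bounds = [(0, 3), (3, 6)]
sigma_broad = [(sigma_fine[a:b] * phi_fine[a:b]).sum() / phi_fine[a:b].sum()
               for a, b in bounds]
print([round(s, 4) for s in sigma_broad])   # → [0.9, 0.5421]
```

Flux weighting guarantees each collapsed value lies between the smallest and largest fine-group value it replaces, which is a convenient sanity check on the averaging executables.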
Homogeneous Spaces and Equivariant Embeddings
Timashev, DA
2011-01-01
Homogeneous spaces of linear algebraic groups lie at the crossroads of algebraic geometry, theory of algebraic groups, classical projective and enumerative geometry, harmonic analysis, and representation theory. By standard reasons of algebraic geometry, in order to solve various problems on a homogeneous space it is natural and helpful to compactify it while keeping track of the group action, i.e. to consider equivariant completions or, more generally, open embeddings of a given homogeneous space. Such equivariant embeddings are the subject of this book. We focus on the classification of equivariant embeddings.
Vickers, Ken
2005-03-01
The education and training of the workforce needed to assure the global competitiveness of American industry in high-technology areas, along with the proper role of various disciplines in that educational process, is currently being re-examined. Several academic areas in science and engineering have reported results from such studies that revealed broad themes of educational need spanning and crossing the boundaries of science and engineering. They included greater attention to and the development of team-building skills, personal or interactive skills, creative ability, and a business or entrepreneurial wherewithal. We report in this paper the results of a fall 2000 Department of Education FIPSE grant to implement changes in our graduate physics program to address these issues. The proposal's goals were to produce next-generation physics graduate students trained to evaluate and overcome complex technical problems through participation in courses emphasizing the commercialization of technology research; to produce next-generation physics graduates who have learned to work with their student colleagues for their mutual success in an industrial-like group setting; and, finally, to produce graduates who can lead interdisciplinary groups in solving complex problems in their career field.
Towards a physical interpretation of the entropic lattice Boltzmann method
Malaspinas, Orestis; Deville, Michel; Chopard, Bastien
2008-12-01
The entropic lattice Boltzmann method (ELBM) is one among several versions of the lattice Boltzmann method for the simulation of hydrodynamics. The collision term of the ELBM is characterized by a nonincreasing H function, guaranteed by a variable relaxation time. We propose here an analysis of the ELBM using the Chapman-Enskog expansion. We show that it can be interpreted as a kind of subgrid model, in which the viscosity correction scales with the strain-rate tensor. We confirm our analytical results by numerical computations of the relaxation-time modifications on the two-dimensional dipole-wall interaction benchmark.
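The variable relaxation time mentioned above comes from the entropic condition: the post-collision state must not increase the discrete H function. The sketch below evaluates H on a D2Q9 lattice and finds the "mirror state" parameter alpha by bisection; the chosen non-hydrodynamic perturbation is an arbitrary illustrative state, and for small deviations alpha sits near the BGK value 2.

```python
import numpy as np

# D2Q9 weights, squared speeds, and discrete H(f) = sum_i f_i ln(f_i / w_i)
w = np.array([4/9] + [1/9] * 4 + [1/36] * 4)
c2 = np.array([0, 1, 1, 1, 1, 2, 2, 2, 2])   # |c_i|^2 per direction

def H(f):
    return float(np.sum(f * np.log(f / w)))

# Rest-state equilibrium (rho = 1, u = 0) plus a small perturbation with
# zero mass and momentum, so f_eq stays the constrained H minimizer.
f_eq = w.copy()
s = 1e-3 * w * (c2 - 2/3)
f = f_eq + s

# Entropic condition: nontrivial alpha with H(f + alpha*(f_eq - f)) = H(f)
g = lambda a: H(f + a * (f_eq - f)) - H(f)
lo, hi = 1.5, 2.5                     # the root is near 2 for small deviations
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if g(mid) < 0 else (lo, mid)
alpha = 0.5 * (lo + hi)
print(round(alpha, 3))                # near 2; deviations from 2 modify the viscosity
```

The departure of alpha from 2 for larger deviations is exactly the relaxation-time modification that the Chapman-Enskog analysis in the paper interprets as a subgrid viscosity correction.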
Application of the selected physical methods in biological research
Directory of Open Access Journals (Sweden)
Jaromír Tlačbaba
2013-01-01
Full Text Available This paper deals with the application of acoustic emission (AE), one of the non-destructive methods, which currently has extensive applications. The method is used for detecting internal defects of materials. AE has high potential for further research and development to extend its application even to the field of process engineering. For that purpose, acoustic emission monitoring is most fully elaborated under laboratory conditions with regard to external stimuli. The aim of the project is to apply acoustic emission to recording the activity of bees in different seasons. The mission is to bring a new perspective on the behavior of colonies by means of acoustic emission, which captures sound propagation in a material. Vibration is an integral part of communication in the colony. Monitoring colonies with the support of this method serves to understand colonies' biological behavior in response to stimuli, brood rearing, colony development, etc. Simulated conditions, supported by an acoustic emission monitoring system, illustrate colony activity. The collected information will be used to build a comprehensive view of the life cycle and behavior of honey bees (Apis mellifera). Information about the activities of bees gives a comprehensive perspective on the use of acoustic emission in the field of biological research.
Integration of educational methods and physical settings: Design ...
African Journals Online (AJOL)
... setting without having an architectural background. The theoretical framework of the research allows designers to consider key features and users' possible activities in High/ Scope settings and shape their designs accordingly. Keywords: daily activity; design; High/Scope education; interior space; teaching method ...
Bogaert, Inge; De Martelaer, Kristine; Deforche, Benedicte; Clarys, Peter; Zinzen, Evert
2015-01-01
Objective: The primary aim of this study was to describe and analyse the physical activity and sedentary levels of secondary school teachers in Flanders. A secondary aim was to collect information regarding a possible worksite intervention of special relevance to secondary school teachers. Design: Mixed-methods quantitative and qualitative…
Qualitative analysis of homogeneous universes
International Nuclear Information System (INIS)
Novello, M.; Araujo, R.A.
1980-01-01
The qualitative behaviour of cosmological models is investigated in two cases: homogeneous and isotropic universes containing viscous fluids in a Stokesian non-linear regime; and rotating, expanding universes in a state in which matter is out of thermal equilibrium. (Author) [pt
Reciprocity theory of homogeneous reactions
Agbormbai, Adolf A.
1990-03-01
The reciprocity formalism is applied to the homogeneous gaseous reactions in which the structure of the participating molecules changes upon collision with one another, resulting in a change in the composition of the gas. The approach is applied to various classes of dissociation, recombination, rearrangement, ionizing, and photochemical reactions. It is shown that for the principle of reciprocity to be satisfied it is necessary that all chemical reactions exist in complementary pairs which consist of the forward and backward reactions. The backward reaction may be described by either the reverse or inverse process. The forward and backward processes must satisfy the same reciprocity equation. Because the number of dynamical variables is usually unbalanced on both sides of a chemical equation, it is necessary that this balance be established by including as many of the dynamical variables as needed before the reciprocity equation can be formulated. Statistical transformation models of the reactions are formulated. The models are classified under the titles free exchange, restricted exchange and simplified restricted exchange. The special equations for the forward and backward processes are obtained. The models are consistent with the H theorem and Le Chatelier's principle. The models are also formulated in the context of the direct simulation Monte Carlo method.
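The requirement that every reaction exist in a complementary forward/backward pair echoes the detailed-balance structure familiar from equilibrium statistical mechanics. As a toy numerical check (a three-state system with made-up energies, not one of the paper's reaction classes), rates built symmetrically from Boltzmann factors make the forward and backward equilibrium fluxes equal:

```python
import numpy as np

kT = 1.0
E = np.array([0.0, 0.5, 1.3])             # made-up state energies
pi = np.exp(-E / kT)
pi /= pi.sum()                            # equilibrium populations

# Symmetric rate construction: k_ij = A * exp(-(E_j - E_i) / (2 kT))
A = 1.0
k = A * np.exp(-(E[None, :] - E[:, None]) / (2 * kT))
np.fill_diagonal(k, 0.0)

# Detailed balance: equilibrium flux i->j equals flux j->i
flux = pi[:, None] * k
print(np.allclose(flux, flux.T))          # → True
```

Here pi_i * k_ij = exp(-(E_i + E_j)/(2 kT)) / Z is manifestly symmetric in i and j, which is the discrete analogue of requiring that forward and backward processes satisfy the same reciprocity equation.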
Method and apparatus for physical separation of different sized nanostructures
Roberts, Christopher B.; Saunders, Steven R.
2012-07-10
The present application provides apparatuses and methods for the size-selective fractionation of ligand-capped nanoparticles that utilize the tunable thermophysical properties of gas-expanded liquids. The nanoparticle size-separation processes are based on the controlled reduction of the solvent strength of an organic-phase nanoparticle dispersion through increases in the concentration of an antisolvent gas, such as CO.sub.2, via pressurization. The method of nanomaterial separation comprises preparing a vessel containing a solvent and dispersed nanoparticles, pressurizing the vessel with a gaseous antisolvent to cause a first amount of the nanoparticles to precipitate, transporting the solution to a second vessel, and pressurizing the second vessel with the gaseous antisolvent to cause further nanoparticles to separate from the solution.
Optical and Physical Methods for Mapping Flooding with Satellite Imagery
Fayne, Jessica Fayne; Bolten, John; Lakshmi, Venkat; Ahamed, Aakash
2016-01-01
Flood and surface water mapping is becoming increasingly necessary, as extreme flooding events worldwide can damage crop yields and contribute to billions of dollars in economic damages, as well as social effects including fatalities and destroyed communities (Xaio et al. 2004; Kwak et al. 2015; Mueller et al. 2016). Utilizing earth-observing satellite data to map standing water from space is indispensable to flood mapping for disaster response, mitigation, prevention, and warning (McFeeters 1996; Brakenridge and Anderson 2006). Since the early 1970s (Landsat, USGS 2013), researchers have been able to remotely sense surface processes such as extreme flood events to help offset some of these problems. Researchers have demonstrated countless methods, and modifications of those methods, to help increase knowledge of areas at risk and areas that are flooded, using remote sensing data from optical and radar systems, as well as free publicly available and costly commercial datasets.
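The optical water-mapping approach cited (McFeeters 1996) is the normalized difference water index, NDWI = (green - NIR) / (green + NIR). A minimal sketch on a synthetic green/NIR reflectance pair; thresholding NDWI > 0 as water is a common but scene-dependent choice, and the tiny tiles below are invented values:

```python
import numpy as np

# Synthetic green and near-infrared reflectance tiles (water in the left half:
# water reflects in the green but absorbs strongly in the NIR)
green = np.array([[0.20, 0.20, 0.30, 0.30],
                  [0.22, 0.21, 0.28, 0.31]])
nir   = np.array([[0.05, 0.06, 0.40, 0.45],
                  [0.04, 0.05, 0.42, 0.44]])

# McFeeters (1996): NDWI = (green - NIR) / (green + NIR)
ndwi = (green - nir) / (green + nir)
water = ndwi > 0.0             # positive NDWI flags standing water
print(int(water.sum()))        # → 4 water pixels (the left half)
```

On real imagery the same two lines run unchanged on full band arrays; only the threshold typically needs tuning per scene.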
A variational method in out-of-equilibrium physical systems.
Pinheiro, Mario J
2013-12-09
We propose a new variational principle for out-of-equilibrium dynamic systems that is fundamentally based on the method of Lagrange multipliers applied to the total entropy of an ensemble of particles. We use the fundamental equation of thermodynamics in differential forms, considering U and S as 0-forms. We obtain a set of two first-order differential equations that reveal the same formal symplectic structure shared by classical mechanics, fluid mechanics and thermodynamics. From this approach, a topological torsion current emerges, of the form ε_ijk A_j ω_k, where A_j and ω_k denote the components of the vector potential (gravitational and/or electromagnetic) and ω denotes the angular velocity of the accelerated frame. We derive a special form of the Umov-Poynting theorem for rotating gravito-electromagnetic systems. The variational method is then applied to clarify the working mechanism of particular devices.
Neurocomputing methods for pattern recognition in nuclear physics
Energy Technology Data Exchange (ETDEWEB)
Gyulassy, M.; Dong, D.; Harlander, M. [Lawrence Berkeley Lab., CA (United States)
1991-12-31
We review recent progress on the development and application of novel neurocomputing techniques for pattern recognition problems of relevance to RHIC experiments. The Elastic Tracking algorithm is shown to achieve sub-pad two-track resolution without preprocessing. A high-pass neural filter is developed for jet analysis, and singular deconvolution methods are shown to recover the primordial jet distribution to a surprisingly high degree of accuracy.
Application of heavy-light methods to B meson physics
International Nuclear Information System (INIS)
Eichten, E.; Hockney, G.; Thacker, H.B.
1989-01-01
The heavy-light method is applied to the study of the B meson spectrum, the pseudoscalar decay constant f_B, the B mixing parameter, and exclusive semileptonic B meson decays. Preliminary results are discussed for f_B and the B parameter at β = 5.7 and κ = 0.165 on a 12^3 x 24 lattice and at β = 5.9 and κ = 0.158 on a 16^3 x 32 lattice. 9 refs., 2 figs., 2 tabs
A personal view on homogenization
International Nuclear Information System (INIS)
Tartar, L.
1987-02-01
The evolution of some ideas is first described. Under the name homogenization are collected all the mathematical results which help in understanding the relations between the microstructure of a material and its macroscopic properties. Homogenization results are presented through a critically detailed bibliography. The mathematical models considered are systems of partial differential equations, supposed to describe some properties at a scale ε, and we want to understand what happens to the solutions as ε tends to 0.
Homogenization of discrete media
International Nuclear Information System (INIS)
Pradel, F.; Sab, K.
1998-01-01
Materials such as granular media and beam assemblies are easily seen as discrete media. They look like geometrical points linked together through energetic expressions. Our purpose is to extend discrete kinematics to those of an equivalent continuous material. First we explain how we build the localisation tool for periodic materials according to the assumed continuum medium type (classical Cauchy, and Cosserat media). Once the bridge between discrete and continuum media is built, we exhibit its application to two two-dimensional beam assembly structures: the honeycomb and a structurally reinforced variation. The new behavior is then applied to the simple plane shear problem in a Cosserat continuum and compared with the real discrete solution. By means of this example, we establish the agreement of our new model with real structures. The exposed method has a longer range than mechanics and can be applied to any discrete problem, such as electromagnetism, in which the relationships between geometrical points can be summed up by an energetic function. (orig.)
Homogenization of High-Contrast Brinkman Flows
Brown, Donald L.
2015-04-16
Modeling porous flow in complex media is a challenging problem. Not only is the problem inherently multiscale but, due to high contrast in permeability values, flow velocities may differ greatly throughout the medium. To avoid complicated interface conditions, the Brinkman model is often used for such flows [O. Iliev, R. Lazarov, and J. Willems, Multiscale Model. Simul., 9 (2011), pp. 1350--1372]. Instead of permeability variations and contrast being contained in the geometric media structure, this information is contained in a highly varying and high-contrast coefficient. In this work, we present two main contributions. First, we develop a novel homogenization procedure for the high-contrast Brinkman equations by constructing correctors and carefully estimating the residuals. Understanding the relationship between scales and contrast values is critical to obtaining useful estimates. Therefore, standard convergence-based homogenization techniques [G. A. Chechkin, A. L. Piatniski, and A. S. Shamev, Homogenization: Methods and Applications, Transl. Math. Monogr. 234, American Mathematical Society, Providence, RI, 2007; G. Allaire, SIAM J. Math. Anal., 23 (1992), pp. 1482--1518], although a powerful tool, are not applicable here. Our second point is that the Brinkman equations, in certain scaling regimes, are invariant under homogenization. Unlike in the case of Stokes-to-Darcy homogenization [D. Brown, P. Popov, and Y. Efendiev, GEM Int. J. Geomath., 2 (2011), pp. 281--305; E. Marusic-Paloka and A. Mikelic, Boll. Un. Mat. Ital. A (7), 10 (1996), pp. 661--671], the results presented here under certain velocity regimes yield a Brinkman-to-Brinkman upscaling that allows using a single software platform to compute on both microscales and macroscales. In this paper, we discuss the homogenized Brinkman equations. We derive auxiliary cell problems to build correctors and calculate effective coefficients for certain velocity regimes. Due to the boundary effects, we construct boundary correctors.
Cosmic homogeneity: a spectroscopic and model-independent measurement
Gonçalves, R. S.; Carvalho, G. C.; Bengaly, C. A. P., Jr.; Carvalho, J. C.; Bernui, A.; Alcaniz, J. S.; Maartens, R.
2018-03-01
Cosmology relies on the Cosmological Principle, i.e. the hypothesis that the Universe is homogeneous and isotropic on large scales. This implies in particular that the counts of galaxies should approach a homogeneous scaling with volume at sufficiently large scales. Testing homogeneity is crucial to obtain a correct interpretation of the physical assumptions underlying the current cosmic acceleration and structure formation of the Universe. In this letter, we use the Baryon Oscillation Spectroscopic Survey to make the first spectroscopic and model-independent measurements of the angular homogeneity scale θh. Applying four statistical estimators, we show that the angular distribution of galaxies in the range 0.46 < z < 0.62 is consistent with homogeneity at large scales, and that θh varies with redshift, indicating a smoother Universe in the past. These results are in agreement with the foundations of the standard cosmological paradigm.
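The counts-in-caps idea behind such homogeneity-scale estimators can be illustrated in a flat patch: for a homogeneous point process the mean count within an angle theta scales with area, so the scaled count (observed over expected) tends to 1. The sketch below uses synthetic uniform points, not BOSS data, and a flat-sky approximation:

```python
import numpy as np

rng = np.random.default_rng(0)
n, size = 5000, 10.0                       # points in a 10 x 10 (deg^2) patch
pts = rng.uniform(0.0, size, (n, 2))
density = n / size**2

def scaled_count(theta, n_centers=100):
    # Average count in a disc of radius theta around interior centers,
    # divided by the homogeneous expectation density * pi * theta^2.
    centers = rng.uniform(theta, size - theta, (n_centers, 2))
    d2 = ((pts[None, :, :] - centers[:, None, :])**2).sum(-1)
    counts = (d2 < theta**2).sum(1)
    return counts.mean() / (density * np.pi * theta**2)

print(round(scaled_count(1.0), 2))   # ≈ 1 for a homogeneous distribution
```

For a clustered distribution the scaled count exceeds 1 at small theta and approaches 1 only beyond the homogeneity scale, which is the quantity the survey estimators track as a function of redshift.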
A stochastic physical-mathematical method for reactor kinetics analysis
International Nuclear Information System (INIS)
Velickovic, Lj.
1966-01-01
The developed theoretical model is concerned with a BF3 counter placed in the core of a low-power reactor (a few MW), where statistical neutron effects are most evident. Our experiments were somewhat different: the detector used was an ionization chamber with double sampling, in the ADC and in the time analyzer. The objective of this model was not to obtain precise numerical calculations, but to explain the method and the essentials of the correlation. By introducing all six groups of delayed neutrons, and possibly photoneutrons, the model could be improved to obtain more realistic results.
Homogenization of discrete media
Energy Technology Data Exchange (ETDEWEB)
Pradel, F.; Sab, K. [CERAM-ENPC, Marne-la-Vallee (France)
1998-11-01
Materials such as granular media and beam assemblies are easily seen as discrete media. They look like geometrical points linked together through energetic expressions. Our purpose is to extend discrete kinematics to those of an equivalent continuous material. First we explain how we build the localisation tool for periodic materials according to the assumed continuum medium type (classical Cauchy, and Cosserat media). Once the bridge between discrete and continuum media is built, we exhibit its application to two two-dimensional beam assembly structures: the honeycomb and a structurally reinforced variation. The new behavior is then applied to the simple plane shear problem in a Cosserat continuum and compared with the real discrete solution. By means of this example, we establish the agreement of our new model with real structures. The exposed method has a longer range than mechanics and can be applied to any discrete problem, such as electromagnetism, in which the relationships between geometrical points can be summed up by an energetic function. (orig.) 7 refs.
Methods of Physical Recreation of Students Trained in Kickboxing Section
Directory of Open Access Journals (Sweden)
C. А. Пашкевич
2016-09-01
Full Text Available Research objective: to evaluate the effectiveness of implementing sports massage in the recreation of kickboxing students to improve their sports performance. Materials and methods. The research used: review and analysis of the literature, pedagogical observations, physiological methods (relay test, strength endurance test, fatigue intensity assessment) and statistical methods. The participants of the research were three groups (5 persons in each group). The first group of students (C1) received preliminary warming massage (20 min), the second group (C2) received recreational massage after the training (20 min), and the third group (C3) had passive rest before and after the training (20 min). Before and after the massage session, assessment of the response rate and strength endurance took place three times during the training (at the beginning, in the middle, and at the end), with regard to the level of the students' fatigue intensity during the training. For a rough evaluation of the cause-effect relationship between the influencing factor and the effect appearance, the research used the relative risk indicator (RR). Research results. The sports massage reduced the athletes' fatigue during the training (RR = 5.0, p < 0.05), i.e. the coach could increase the training load without any significant impact on the functional systems of the athletes. The preliminary massage had a more distinct positive effect on the students' response rate and endurance indicators. The recreational massage improved only the students' endurance processes during the training.
FDTD Method for Piecewise Homogeneous Dielectric Media
Directory of Open Access Journals (Sweden)
Zh. O. Dombrovskaya
2016-01-01
Full Text Available In this paper, we consider a numerical solution of Maxwell's curl equations for a piecewise uniform dielectric medium, using a one-dimensional problem as an example. To obtain second-order accuracy, the electric field grid node is placed at the permittivity discontinuity point of the medium. If the dielectric permittivity is large, the problem becomes singularly perturbed and a contrast structure appears. We propose a piecewise quasi-uniform mesh which resolves all characteristic parts of the solution (the regular part, the boundary layer, and the transition zone between them) in detail. The features of the mesh are discussed.
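The setting described can be sketched with a minimal 1D Yee-grid FDTD loop: staggered E/H updates with a permittivity jump. This naive variant places the discontinuity between grid nodes (only first-order accurate at the interface), which is exactly the limitation the paper's node placement and quasi-uniform mesh address; units are normalized (c = 1) and all sizes are illustrative.

```python
import numpy as np

nx, nt = 400, 800
eps = np.ones(nx)
eps[nx // 2:] = 4.0                  # permittivity jump at the midpoint
E = np.zeros(nx)
Hy = np.zeros(nx - 1)
S = 0.5                              # Courant number dt/dx in c = 1 units

for n in range(nt):
    Hy += S * (E[1:] - E[:-1])                     # staggered H update
    E[1:-1] += (S / eps[1:-1]) * (Hy[1:] - Hy[:-1])  # E update with local eps
    E[40] += np.exp(-((n - 60) / 15.0) ** 2)       # soft Gaussian source

# Part of the pulse is transmitted into the dense (eps = 4) half at speed c/2
print(round(float(np.abs(E[300:]).max()), 3))
```

With a large permittivity the transmitted wavelength shrinks by a factor sqrt(eps), which is the contrast structure that motivates concentrating mesh points near the interface rather than refining uniformly.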
Bayesian methods for the physical sciences learning from examples in astronomy and physics
Andreon, Stefano
2015-01-01
Statistical literacy is critical for the modern researcher in Physics and Astronomy. This book empowers researchers in these disciplines by providing the tools they will need to analyze their own data. Chapters in this book provide a statistical base from which to approach new problems, including numerical advice and a profusion of examples. The examples are engaging analyses of real-world problems taken from modern astronomical research. The examples are intended to be starting points for readers as they learn to approach their own data and research questions. Acknowledging that scientific progress now hinges on the availability of data and the possibility of improving previous analyses, data and code are distributed throughout the book. The JAGS symbolic language used throughout the book makes it easy to perform Bayesian analysis and is particularly valuable as readers may use it in a myriad of scenarios through slight modifications.
Review of multi-physics temporal coupling methods for analysis of nuclear reactors
International Nuclear Information System (INIS)
Zerkak, Omar; Kozlowski, Tomasz; Gajev, Ivan
2015-01-01
Highlights: • Review of the numerical methods used for multi-physics temporal coupling. • Review of high-order improvements to the operator-splitting coupling method. • Analysis of the truncation error due to the temporal coupling. • Recommendations on best-practice approaches for multi-physics temporal coupling. - Abstract: The advanced numerical simulation of a realistic physical system typically involves a multi-physics problem. For example, analysis of a LWR core involves the intricate simulation of neutron production and transport, heat transfer throughout the structures of the system, and the flowing, possibly two-phase, coolant. Such analysis involves the dynamic coupling of multiple simulation codes, each one devoted to solving one of the coupled physics. Multiple temporal coupling methods exist, yet the accuracy of such coupling is generally driven by the least accurate numerical scheme. The goal of this paper is to review in detail the approaches and numerical methods that can be used for multi-physics temporal coupling, including a comprehensive discussion of the issues associated with the temporal coupling, and to define approaches that can be used to perform multi-physics analysis. The paper is not limited to any particular multi-physics process or situation, but is intended to provide a generic description of multi-physics temporal coupling schemes for any development stage of the individual (single-physics) tools and methods. This includes a wide spectrum of situations, where the individual (single-physics) solvers are based on pre-existing computation codes embedded as individual components, or a new development where the temporal coupling can be developed and implemented as part of the code development. The discussed coupling methods are demonstrated in the framework of LWR core analysis
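The temporal-coupling error such a review analyzes can be seen on a toy two-physics system: Lie (first-order) operator splitting of a linear problem, where each "physics" is advanced exactly with the other's variable frozen. The system and rates below are arbitrary illustrative choices; halving the step roughly halves the splitting error, the first-order signature.

```python
import numpy as np

# Toy coupled system: u' = -u + v ("physics 1"), v' = u - 2v ("physics 2")
A = np.array([[-1.0, 1.0], [1.0, -2.0]])

def exact(y0, t):
    # Reference solution via the eigendecomposition of the symmetric matrix A
    lam, V = np.linalg.eigh(A)
    return V @ (np.exp(lam * t) * np.linalg.solve(V, y0))

def lie_split(y0, t, n):
    u, v = y0
    dt = t / n
    for _ in range(n):
        u = v + (u - v) * np.exp(-dt)                  # solve u' = -u + v, v frozen
        v = u / 2 + (v - u / 2) * np.exp(-2 * dt)      # solve v' = u - 2v, u frozen
    return np.array([u, v])

y0, T = np.array([1.0, 0.0]), 1.0
ref = exact(y0, T)
e1 = np.abs(lie_split(y0, T, 10) - ref).max()
e2 = np.abs(lie_split(y0, T, 20) - ref).max()
print(round(e1 / e2, 2))   # ≈ 2 for a first-order splitting
```

Each substep here is solved exactly, so the remaining error is purely the coupling error; this isolates the effect that dominates when the individually accurate single-physics codes of the review are chained by operator splitting.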
Butler, S. L.
2017-12-01
The electrical resistivity method is now highly developed, with 2D and even 3D surveys routinely performed and fast inversion software available. However, rules of thumb, based on simple mathematical formulas, for important quantities like depth of investigation, horizontal position and resolution have not previously been available; they would be useful for survey planning, preliminary interpretation and general education about the method. In this contribution, I will show that the sensitivity function of the resistivity method for a homogeneous half-space can be analyzed in terms of its first and second moments, which yield simple mathematical formulas. The first moment gives the sensitivity-weighted center of an apparent resistivity measurement, with the vertical center being an estimate of the depth of investigation. I will show that this depth-of-investigation estimate works at least as well as previous estimates based on the peak and median of the depth sensitivity function, which must be calculated numerically for a general four-electrode array. The vertical and horizontal first moments can also be used as pseudopositions when plotting 1D, 2D and 3D pseudosections. The appropriate horizontal plotting point for a pseudosection was not previously obvious for nonsymmetric arrays. The second moments of the sensitivity function give estimates of the spatial extent of the region contributing to an apparent resistivity measurement and hence are measures of resolution. These also have simple mathematical formulas.
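The first moment in question can also be evaluated by brute force. The sketch below does this for a Wenner array over a homogeneous half-space, using the standard point-electrode Fréchet kernel and a truncated grid integration; the grid extents, spacings, and normalization details are illustrative assumptions, and the closed-form moments derived in the paper replace this kind of numerical integration.

```python
import numpy as np

a = 1.0   # Wenner spacing: electrodes A, M, N, B at x = 0, a, 2a, 3a
A_, M_, N_, B_ = [np.array([x, 0.0, 0.0]) for x in (0.0, a, 2 * a, 3 * a)]

def kernel(r, src, rec):
    # Half-space Frechet derivative for one current/potential electrode pair
    ra, rm = r - src, r - rec
    Ra = np.linalg.norm(ra, axis=-1)
    Rm = np.linalg.norm(rm, axis=-1)
    return (ra * rm).sum(-1) / (4 * np.pi**2 * Ra**3 * Rm**3)

# Subsurface grid (z > 0); extents chosen large relative to the array
gx = np.linspace(-12.0, 15.0, 91)
gy = np.linspace(-12.0, 12.0, 81)
gz = np.linspace(0.05, 12.0, 80)
X, Y, Z = np.meshgrid(gx, gy, gz, indexing="ij")
r = np.stack([X, Y, Z], axis=-1)

# Four-electrode superposition: F = F(A,M) - F(A,N) - F(B,M) + F(B,N)
F = (kernel(r, A_, M_) - kernel(r, A_, N_)
     - kernel(r, B_, M_) + kernel(r, B_, N_))

doi = (Z * F).sum() / F.sum()   # sensitivity-weighted mean depth (first moment)
print(round(float(doi / a), 2))  # depth of investigation in units of a
```

Because the estimate is a ratio of two sums over the same uniform grid, the cell volume cancels; the price of the brute-force route is the truncation and near-electrode discretization error that the paper's analytic formulas avoid.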
Large-scale Homogenization of Bulk Materials in Mammoth Silos
Schott, D.L.
2004-01-01
This doctoral thesis concerns the large-scale homogenization of bulk materials in mammoth silos. The objective of this research was to determine the best stacking and reclaiming method for homogenization in mammoth silos. For this purpose a simulation program was developed to estimate the
7 CFR 58.920 - Homogenization.
2010-01-01
... 7 Agriculture 3 2010-01-01 2010-01-01 false Homogenization. 58.920 Section 58.920 Agriculture... Procedures § 58.920 Homogenization. Where applicable concentrated products shall be homogenized for the... homogenization and the pressure at which homogenization is accomplished will be that which accomplishes the most...
L. Demortier
Physics-wise, the CMS week in December was dominated by discussions of the analyses that will be carried out in the “next six months”, i.e. while waiting for the first LHC collisions. As presented in December, analysis approvals based on Monte Carlo simulation were re-opened, with the caveat that for this work to be helpful to the goals of CMS, it should be carried out using the new software (CMSSW_2_X) and associated samples. By the end of the week, the goal for the physics groups was set to be the porting of our physics commissioning methods and plans, as well as the early analyses (based on an integrated luminosity in the range 10-100 pb-1), into this new software. Since December, the large data samples from CMSSW_2_1 were completed. A big effort by the production group gave a significant number of events over the end-of-year break, but also gave out the first samples with the fast simulation. Meanwhile, as mentioned in December, the arrival of 2_2 meant that ...
Lee, Sang Heon
2013-05-01
BiSrCaCuO superconductor thick films were prepared at several curing temperatures, and their electro-physical properties were determined to find optimum fabrication conditions. The critical temperatures of the superconductors decreased with increasing melting temperature, which was related to the amount of equilibrium phases of the superconducting materials with temperature. The critical temperatures of the BiSrCaCuO bulk and thick-film superconductors were 107 K and 96 K, respectively. In the temperature-susceptibility curves, the susceptibility of the superconductor thick film formed at 950 degrees C had a multi-step-type curve for a 70 G externally applied field, whereas a superconductor thick film formed at 885 degrees C had a single-step-type curve like a bulk BiSrCaCuO ceramic superconductor. A partial melting at 865 degrees C is one of the optimum conditions for making a superconductor thick film with a relatively homogeneous phase.
International Nuclear Information System (INIS)
Huebel, Horst
2010-01-01
Traditionally, school-oriented treatments of quantum physics centre on the question of whether electrons and photons are particles or waves, a question often characterized by the phrase ''wave-particle dualism'', which notoriously does not exist in its original meaning. In contrast, the author, building on important preparatory work by Kueblbeck and Mueller, has proposed a new concept for the treatment of quantum physics in schools, which puts ''basic facts'' in the foreground, comparable to the Kueblbeck-Mueller ''characteristic features''. The ''basic facts'' are similar to axioms of quantum physics, by means of which a large number of experiments and phenomena can be ''explained'', at least qualitatively, in a heuristic way. Instead of the so-called ''wave-particle dualism'', uncertainty and complementarity are put in the foreground. The new concept is presented extensively, with many further materials, on the Internet at http://www.forphys.de. In the partial volumes of this publication, manifold and carefully elaborated teaching materials are presented, by means of which pupils can acquire the school-relevant parts of quantum physics through different methods such as learning at stations, short presentations, Internet research, the group puzzle, the query sheet or the card-index method, etc. The present second part prepares materials for the ''basic facts'' of quantum physics, with which modern experiments can also be interpreted. It deals with the acquisition and application of the ''basic facts''. This is pursued also through real pupil experiments, simulations and analogy tests. In this way pupils gain, more simply than is generally possible, a deeper insight into quantum physics.
Homogenization of variational inequalities for obstacle problems
International Nuclear Information System (INIS)
Sandrakov, G V
2005-01-01
Results on the convergence of solutions of variational inequalities for obstacle problems are proved. The variational inequalities are defined by a non-linear monotone operator of the second order with periodic rapidly oscillating coefficients and a sequence of functions characterizing the obstacles. Two-scale and macroscale (homogenized) limiting variational inequalities are obtained. Derivation methods for such inequalities are presented. Connections between the limiting variational inequalities and two-scale and macroscale minimization problems are established in the case of potential operators.
An evaluation of methods assessing the physical demands of manual lifting in scaffolding
Beek, van der A.J.; Mathiassen, S.E.; Windhorst, J.; Burdorf, A.
2005-01-01
Four methods assessing the physical demands of manual lifting were compared. The scaffolding job was evaluated and three distinct scaffolding tasks were ranked using: (1) the revised NIOSH lifting equation (NIOSH method), (2) lifting guidelines for the Dutch construction industry (Arbouw method),
Genetic Homogenization of Composite Materials
Directory of Open Access Journals (Sweden)
P. Tobola
2009-04-01
Full Text Available The paper is focused on numerical studies of electromagnetic properties of composite materials used for the construction of small airplanes. Discussions concentrate on the genetic homogenization of composite layers and composite layers with a slot. The homogenization aims to reduce the CPU-time demands of EMC computational models of electrically large airplanes. First, a methodology of creating a 3-dimensional numerical model of a composite material in CST Microwave Studio is proposed, focusing on a sufficient accuracy of the model. Second, a proper implementation of a genetic optimization in Matlab is discussed. Third, a coupling of the optimization script with a simplified 2-dimensional homogeneous equivalent model in Comsol Multiphysics is proposed, with EMC issues in mind. Results of computations are experimentally verified.
Song, Jin-Zi; Wan, Min; Xu, Hui; Yao, Xiu-Jun; Zhang, Bo; Wang, Jin-Hong
2009-09-01
The major idea of this article is to discuss standardization and normalization for the product standards of medical devices. It analyzes problems related to physical performance requirements and test methods that arise during the drafting of product standards and makes corresponding suggestions.
Barriers and facilitators of sports in children with physical disabilities : a mixed-method study
Jaarsma, Eva A.; Dijkstra, Pieter U.; de Blecourt, Alida C. E.; Geertzen, Jan H. B.; Dekker, Rienk
2015-01-01
Purpose: This study explored barriers and facilitators of sports participation of children with physical disabilities from the perspective of the children, their parents and their health professionals. Method: Thirty children and 38 parents completed a questionnaire, and 17 professionals were
Homogeneous Biosensing Based on Magnetic Particle Labels
Schrittwieser, Stefan
2016-06-06
The growing availability of biomarker panels for molecular diagnostics is leading to an increasing need for fast and sensitive biosensing technologies that are applicable to point-of-care testing. In that regard, homogeneous measurement principles are especially relevant as they usually do not require extensive sample preparation procedures, thus reducing the total analysis time and maximizing ease-of-use. In this review, we focus on homogeneous biosensors for the in vitro detection of biomarkers. Within this broad range of biosensors, we concentrate on methods that apply magnetic particle labels. The advantage of such methods lies in the added possibility to manipulate the particle labels by applied magnetic fields, which can be exploited, for example, to decrease incubation times or to enhance the signal-to-noise-ratio of the measurement signal by applying frequency-selective detection. In our review, we discriminate the corresponding methods based on the nature of the acquired measurement signal, which can either be based on magnetic or optical detection. The underlying measurement principles of the different techniques are discussed, and biosensing examples for all techniques are reported, thereby demonstrating the broad applicability of homogeneous in vitro biosensing based on magnetic particle label actuation.
Homogeneous Biosensing Based on Magnetic Particle Labels
Schrittwieser, Stefan; Pelaz, Beatriz; Parak, Wolfgang J.; Lentijo-Mozo, Sergio; Soulantica, Katerina; Dieckhoff, Jan; Ludwig, Frank; Guenther, Annegret; Tschöpe, Andreas; Schotter, Joerg
2016-01-01
The growing availability of biomarker panels for molecular diagnostics is leading to an increasing need for fast and sensitive biosensing technologies that are applicable to point-of-care testing. In that regard, homogeneous measurement principles are especially relevant as they usually do not require extensive sample preparation procedures, thus reducing the total analysis time and maximizing ease-of-use. In this review, we focus on homogeneous biosensors for the in vitro detection of biomarkers. Within this broad range of biosensors, we concentrate on methods that apply magnetic particle labels. The advantage of such methods lies in the added possibility to manipulate the particle labels by applied magnetic fields, which can be exploited, for example, to decrease incubation times or to enhance the signal-to-noise-ratio of the measurement signal by applying frequency-selective detection. In our review, we discriminate the corresponding methods based on the nature of the acquired measurement signal, which can either be based on magnetic or optical detection. The underlying measurement principles of the different techniques are discussed, and biosensing examples for all techniques are reported, thereby demonstrating the broad applicability of homogeneous in vitro biosensing based on magnetic particle label actuation. PMID:27275824
Optimal design of a 7 T highly homogeneous superconducting magnet for a Penning trap
International Nuclear Information System (INIS)
Wu Wei; He Yuan; Ma Lizhen; Huang Wenxue; Xia Jiawen
2010-01-01
A Penning trap system called Lanzhou Penning Trap (LPT) is now being developed for precise mass measurements at the Institute of Modern Physics (IMP). One of the key components is a 7 T actively shielded superconducting magnet with a clear warm bore of 156 mm. The required field homogeneity is 3 x 10^-7 over two 1 cubic centimeter volumes lying 220 mm apart along the magnet axis. We introduce a two-step method which combines linear programming and a nonlinear optimization algorithm for designing the multi-section superconducting magnet. This method is fast and flexible for handling arbitrarily shaped homogeneous volumes and coils. With the help of this method an optimal design for the LPT superconducting magnet has been obtained. (authors)
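The linear-programming step of such a design can be sketched as follows. The sensitivity matrix `M` below is hypothetical random data, not the LPT coil geometry; a real design would compute it from the Biot-Savart law for candidate coil sections:

```python
import numpy as np
from scipy.optimize import linprog

# Choose non-negative section current densities j so the field at a set of
# target points stays as close to B0 as possible (Chebyshev formulation).
rng = np.random.default_rng(0)
n_pts, n_sec = 4, 6
M = 1e-3 * (1.0 + 0.1 * rng.random((n_pts, n_sec)))  # T per unit density
B0 = 7.0                                             # target field, T

# Variables: [j_1 .. j_n_sec, t]; minimize t subject to |M j - B0| <= t.
c = np.zeros(n_sec + 1)
c[-1] = 1.0
A_ub = np.block([[ M, -np.ones((n_pts, 1))],
                 [-M, -np.ones((n_pts, 1))]])
b_ub = np.concatenate([np.full(n_pts, B0), np.full(n_pts, -B0)])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (n_sec + 1))
print(res.x[:-1], res.x[-1])  # current densities and peak field deviation
```

In a full design the LP solution would then seed the nonlinear optimization of the actual coil cross-sections, as the abstract's two-step scheme describes.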
Deb, Pradip
2010-07-01
As a fundamental basis of all natural science and technology, Physics is the key subject in many science teaching institutions around the world. Physics teaching and learning is an important issue today because of its complexity and fast-growing applications in many new fields. The laws of Physics are global, but the teaching and learning methods of Physics differ widely among countries and cultures. When I first came to Australia for higher education about 11 years ago, with an undergraduate and a graduate degree in Physics from a university in Bangladesh, I found that the Physics education system in Australia was very different from what I had experienced in Bangladesh. After completing two graduate degrees at two Australian universities and gaining a few years' experience teaching Physics at Australian universities, I compare these two different Physics education experiences in this paper and try to answer the question: does it all depend on resources, on the internal culture of the society, or on both? Undergraduate and graduate level Physics syllabi, resources and teaching methods, examination and assessment systems, teacher-student relationships, and research cultures in Bangladesh are discussed and compared with those in Australia.
Features of method of medical physical culture at insufficiency of aortic valve
Directory of Open Access Journals (Sweden)
S.A. Kalmykov
2013-01-01
Full Text Available The basic approaches to the application of therapeutic physical culture for aortic valve insufficiency at the different stages of physical rehabilitation are considered. More than 20 literature sources were analyzed. The mechanisms of the therapeutic action of physical exercises are specified: restorative influence, formation of temporary compensations, trophic action, and normalization of disturbed functions. It is established that the tasks, forms, means and methods of therapeutic physical culture depend on the severity of the disease, the degree of cardiovascular insufficiency and the stage of physical rehabilitation. Therapeutic physical culture is conducted in the form of morning hygienic gymnastics, therapeutic gymnastics, independent exercises, dosed walks, stair walking, and mobile and sporting games. It is noted that the sparing-training and training motor regimens contribute to the gradual conditioning of the cardiovascular system. Dosed walking of up to 5-8 km is recommended for the sparing-training regimen and up to 8-12 km for the training regimen.
Data analysis methods in physical oceanography. By Emery, W.J. and Thomson, R.E.
Digital Repository Service at National Institute of Oceanography (India)
Nayak, M.R.
729-730 September 1999 Book Reviews. DATA ANALYSIS METHODS IN PHYSICAL OCEANOGRAPHY. By William J. Emery and Richard E. Thomson. PERGAMON Elsevier Science. 1998. 400 p. U.S. $112 / NLG 177.00. The book Data Analysis Methods in Physical Oceanography provides a comprehensive and practical compilation of the essential information and analysis techniques required for the advanced processing and interpretation of digital spatio-temporal data in physical oceanography, as well as in other...
Series: Utilization of Differential Equations and Methods for Solving Them in Medical Physics (4).
Murase, Kenya
2016-01-01
Partial differential equations are often used in the field of medical physics. In this (final) issue, the methods for solving the partial differential equations were introduced, which include separation of variables, integral transform (Fourier and Fourier-sine transforms), Green's function, and series expansion methods. Some examples were also introduced, in which the integral transform and Green's function methods were applied to solving Pennes' bioheat transfer equation and the Fourier series expansion method was applied to the Navier-Stokes equation for analyzing the wall shear stress in blood vessels. Finally, the author hopes that this series will be helpful for people who engage in medical physics.
Series: Utilization of Differential Equations and Methods for Solving Them in Medical Physics (3).
Murase, Kenya
2016-01-01
In this issue, simultaneous differential equations were introduced. These differential equations are often used in the field of medical physics. The methods for solving them were also introduced, which include Laplace transform and matrix methods. Some examples were also introduced, in which Laplace transform and matrix methods were applied to solving simultaneous differential equations derived from a three-compartment kinetic model for analyzing glucose metabolism in tissues and from the Bloch equations describing the behavior of the macroscopic magnetization in magnetic resonance imaging. In the next (final) issue, partial differential equations and various methods for solving them will be introduced together with some examples in medical physics.
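The matrix method mentioned here amounts to writing the compartment system as dx/dt = Ax and evaluating the matrix exponential. A minimal sketch with illustrative rate constants (not the article's example):

```python
import numpy as np
from scipy.linalg import expm

# Toy three-compartment chain: tracer flows 1 -> 2 -> 3 and leaks from 3.
# Solution of dx/dt = A x is x(t) = expm(A t) x0.
k1, k2, k3 = 0.5, 0.3, 0.1
A = np.array([[-k1,  0.0,  0.0],
              [ k1, -k2,  0.0],
              [ 0.0,  k2, -k3]])
x0 = np.array([1.0, 0.0, 0.0])  # all tracer initially in compartment 1
t = 2.0
x_t = expm(A * t) @ x0
print(x_t)
```

Compartment 1 decays as exp(-k1*t) regardless of the rest of the chain, which gives a quick sanity check on the computed solution.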
ASCOT-1, Thermohydraulics of Axisymmetric PWR Core with Homogeneous Flow During LOCA
International Nuclear Information System (INIS)
1978-01-01
1 - Nature of the physical problem solved: ASCOT-1 is used to analyze the thermo-hydraulic behaviour in a PWR core during a loss-of-coolant accident. 2 - Method of solution: The core is assumed to be axisymmetric two-dimensional and the conservation laws are solved by the method of characteristics. For the temperature response of fuel in the annular regions into which the core is divided, the heat conduction equations are solved by an explicit method with averaged flow conditions. 3 - Restrictions on the complexity of the problem: Axisymmetric two-dimensional homogeneous flows
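The explicit heat-conduction step mentioned in the method of solution can be illustrated in one dimension. This is a generic sketch of such a scheme with assumed material numbers, not ASCOT-1's actual discretization:

```python
import numpy as np

# Explicit finite-difference step for 1-D heat conduction in a slab with
# fixed wall temperatures; stable only for r = alpha*dt/dx**2 <= 0.5.
alpha = 1e-5            # thermal diffusivity, m^2/s (assumed)
dx, dt = 1e-3, 0.04     # grid spacing (m) and time step (s)
r = alpha * dt / dx**2  # here r = 0.4
assert r <= 0.5, "explicit scheme unstable"

T = np.full(21, 600.0)  # initial fuel temperature, K
T[0] = T[-1] = 550.0    # coolant-side boundary values, held fixed
for _ in range(100):
    T[1:-1] = T[1:-1] + r * (T[2:] - 2.0 * T[1:-1] + T[:-2])
print(T.min(), T.max())
```

By the discrete maximum principle the temperatures stay between the boundary and initial values, a useful check when coupling such a step to averaged flow conditions.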
Bayesian Methods for the Physical Sciences. Learning from Examples in Astronomy and Physics.
Andreon, Stefano; Weaver, Brian
2015-05-01
Chapter 1: This chapter presents some basic steps for performing a good statistical analysis, all summarized in about one page. Chapter 2: This short chapter introduces the basics of probability theory in an intuitive fashion using simple examples. It also illustrates, again with examples, how to propagate errors and the difference between marginal and profile likelihoods. Chapter 3: This chapter introduces the computational tools and methods that we use for sampling from the posterior distribution. Since all numerical computations, and Bayesian ones are no exception, may end in errors, we also provide a few tips to check that the numerical computation is sampling from the posterior distribution. Chapter 4: Many of the concepts of building, running, and summarizing the results of a Bayesian analysis are described in this step-by-step guide using a basic (Gaussian) model. The chapter also introduces examples using Poisson and Binomial likelihoods, and how to combine repeated independent measurements. Chapter 5: All statistical analyses make assumptions, and Bayesian analyses are no exception. This chapter emphasizes that results depend on data and priors (assumptions). We illustrate this concept with examples where the prior plays greatly different roles, from major to negligible. We also provide some advice on how to look for information useful for sculpting the prior. Chapter 6: In this chapter we consider examples for which we want to estimate more than a single parameter. These common problems include estimating location and spread. We also consider examples that require the modeling of two populations (one we are interested in and a nuisance population) or averaging incompatible measurements. We also introduce quite complex examples dealing with upper limits and with a larger-than-expected scatter. Chapter 7: Rarely is a sample randomly selected from the population we wish to study. Often, samples are affected by selection effects, e.g., easier
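The posterior sampling that Chapter 3 refers to can be sketched with a minimal Metropolis sampler for the mean of a Gaussian with known sigma and a flat prior. This is an illustration of the general technique, not code from the book:

```python
import numpy as np

# Metropolis sampling of p(mu | data) for Gaussian data, known sigma = 1,
# flat prior: the log-posterior is the log-likelihood up to a constant.
rng = np.random.default_rng(1)
data = rng.normal(2.0, 1.0, size=50)
sigma = 1.0

def log_post(mu):
    return -0.5 * np.sum((data - mu) ** 2) / sigma**2

mu, chain = 0.0, []
for _ in range(5000):
    prop = mu + rng.normal(0.0, 0.5)          # symmetric random-walk proposal
    if np.log(rng.random()) < log_post(prop) - log_post(mu):
        mu = prop                             # accept; otherwise keep mu
    chain.append(mu)
post = np.array(chain[1000:])                 # discard burn-in
print(post.mean(), post.std())
```

A basic check of the kind the book recommends: the sampled posterior mean should sit near the sample mean, with spread close to sigma/sqrt(N).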
MO-DE-BRA-05: Developing Effective Medical Physics Knowledge Structures: Models and Methods
International Nuclear Information System (INIS)
Sprawls, P
2015-01-01
Purpose: Develop a method and supporting online resources to be used by medical physics educators for teaching medical imaging professionals and trainees so they develop highly-effective physics knowledge structures that can contribute to improved diagnostic image quality on a global basis. Methods: The different types of mental knowledge structures were analyzed and modeled with respect to both the learning and teaching process for their development and the functions or tasks that can be performed with the knowledge. While symbolic verbal and mathematical knowledge structures are very important in medical physics for many purposes, the tasks of applying physics in clinical imaging--especially to optimize image quality and diagnostic accuracy--requires a sensory conceptual knowledge structure, specifically, an interconnected network of visually based concepts. This type of knowledge supports tasks such as analysis, evaluation, problem solving, interacting, and creating solutions. Traditional educational methods including lectures, online modules, and many texts are serial procedures and limited with respect to developing interconnected conceptual networks. A method consisting of the synergistic combination of on-site medical physics teachers and the online resource, CONET (Concept network developer), has been developed and made available for the topic Radiographic Image Quality. This was selected as the inaugural topic, others to follow, because it can be used by medical physicists teaching the large population of medical imaging professionals, such as radiology residents, who can apply the knowledge. Results: Tutorials for medical physics educators on developing effective knowledge structures are being presented and published and CONET is available with open access for all to use. Conclusion: An adjunct to traditional medical physics educational methods with the added focus on sensory concept development provides opportunities for medical physics teachers to share
FEMB, 2-D Homogeneous Neutron Diffusion in X-Y Geometry with Keff Calculation, Dyadic Fission Matrix
International Nuclear Information System (INIS)
Misfeldt, I.B.
1987-01-01
1 - Nature of physical problem solved: The two-dimensional neutron diffusion equation (x-y geometry) is solved in the homogeneous form (K-eff calculation). The boundary conditions specify each group current as a linear homogeneous function of the group fluxes (gamma matrix concept). For each material, the fission matrix is assumed to be dyadic. 2 - Method of solution: Finite element formulation with Lagrange type elements. Solution technique: SOR with extrapolation. 3 - Restrictions on the complexity of the problem: Maximum order of the Lagrange elements is 6
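The SOR solution technique named here can be sketched on a 5-point finite-difference stencil (FEMB itself uses finite elements; this is only an illustration of the iteration):

```python
import numpy as np

# Successive over-relaxation for -Laplacian(phi) = src on the unit square,
# zero flux on the boundary, uniform source, 5-point stencil.
n, omega = 20, 1.7
phi = np.zeros((n, n))   # flux, boundary rows/columns stay zero
src = np.ones((n, n))    # uniform source
h = 1.0 / (n - 1)
for sweep in range(500):
    for i in range(1, n - 1):
        for j in range(1, n - 1):
            gs = 0.25 * (phi[i + 1, j] + phi[i - 1, j]
                         + phi[i, j + 1] + phi[i, j - 1]
                         + h**2 * src[i, j])
            phi[i, j] += omega * (gs - phi[i, j])   # over-relaxed update
print(phi.max())
```

With omega between 1 and 2 the Gauss-Seidel sweep is accelerated; the extrapolation mentioned in the abstract refers to a further acceleration of this basic iteration.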
TVEDIM, 2-D Homogeneous and Inhomogeneous Neutron Diffusion for X-Y, R-Z, R-Theta Geometry
International Nuclear Information System (INIS)
Kristiansen, G.K.
1987-01-01
1 - Nature of physical problem solved: The two-dimensional neutron diffusion equation (x-y, r-z, or r-theta geometry) is solved, either in the inhomogeneous (source calculation) or the homogeneous form (K-eff calculation or absorber adjustment). The boundary conditions specify each group current as a linear homogeneous function of the group fluxes (gamma matrix concept). For each material, the fission matrix is assumed to be dyadic. 2 - Method of solution: Finite difference formulation (5 point scheme, mesh corner variant) is used. Solution technique: multi-line SOR. Eigenvalue estimate by neutron balance
Design of SC solenoid with high homogeneity
International Nuclear Information System (INIS)
Yang Xiaoliang; Liu Zhong; Luo Min; Luo Guangyao; Kang Qiang; Tan Jie; Wu Wei
2014-01-01
A novel kind of SC (superconducting) solenoid coil is designed to satisfy the homogeneity requirement of the magnetic field. In this paper, we first calculate the current density distribution over the solenoid coil cross-section through the linear programming method. Then a traditional solenoid and a nonrectangular-section solenoid are designed to produce a central field up to 7 T with the highest homogeneity attainable. After comparing the two designs with respect to magnetic field quality, fabrication cost and other aspects, we conclude that the new nonrectangular-section solenoid coil can be realized by improving the techniques of framework fabrication and winding. Finally, the outlook and error analysis of this kind of SC magnet coil are also discussed briefly. (authors)
Smooth homogeneous structures in operator theory
Beltita, Daniel
2005-01-01
Geometric ideas and techniques play an important role in operator theory and the theory of operator algebras. Smooth Homogeneous Structures in Operator Theory builds the background needed to understand this circle of ideas and reports on recent developments in this fruitful field of research. Requiring only a moderate familiarity with functional analysis and general topology, the author begins with an introduction to infinite dimensional Lie theory with emphasis on the relationship between Lie groups and Lie algebras. A detailed examination of smooth homogeneous spaces follows. This study is illustrated by familiar examples from operator theory and develops methods that allow endowing such spaces with structures of complex manifolds. The final section of the book explores equivariant monotone operators and Kähler structures. It examines certain symmetry properties of abstract reproducing kernels and arrives at a very general version of the construction of restricted Grassmann manifolds from the theory of loo...
Shape optimization in biomimetics by homogenization modelling
International Nuclear Information System (INIS)
Hoppe, Ronald H.W.; Petrova, Svetozara I.
2003-08-01
Optimal shape design of microstructured materials has recently attracted a great deal of attention in material science. The shape and the topology of the microstructure have a significant impact on the macroscopic properties. The present work is devoted to the shape optimization of new biomorphic microcellular ceramics produced from natural wood by biotemplating. We are interested in finding the best material-and-shape combination in order to achieve the optimal prespecified performance of the composite material. The computation of the effective material properties is carried out using the homogenization method. Adaptive mesh-refinement technique based on the computation of recovered stresses is applied in the microstructure to find the homogenized elasticity coefficients. Numerical results show the reliability of the implemented a posteriori error estimator. (author)
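The homogenized coefficients referred to here are computed from cell problems. The formula below is the standard textbook form for periodic homogenization of linear elasticity; the notation is assumed, not taken from the paper:

```latex
% With Y the periodicity cell, E(y) the microscale elasticity tensor and
% chi^{kl} the Y-periodic cell solutions of
%   div_y ( E(y) ( e_y(chi^{kl}) - e^{kl} ) ) = 0  in Y,
% the homogenized (effective) elasticity coefficients are
\begin{equation}
  E^{H}_{ijkl}
  = \frac{1}{|Y|} \int_Y
    \Bigl( E_{ijkl}(y)
           - E_{ijpq}(y)\,\frac{\partial \chi^{kl}_p}{\partial y_q}
    \Bigr)\, dy .
\end{equation}
```

The adaptive mesh refinement described in the abstract serves precisely to resolve these cell problems accurately before the effective tensor is assembled.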
Detection and observation of deep vein thromboses by physical, radiological and electrical methods
International Nuclear Information System (INIS)
Bonafous, J.; Constantinesco, A.; Meyer, P.; Bidet, R.; Healy, J.C.; Chambron, J.; Lobera, A.; Chopin, J.
1975-01-01
The method of thrombosis detection by iodine-131-labelled exogenous fibrinogen was evaluated by comparison with three physical methods: an isotopic method using selenomethionine-labelled endogenous fibrinogen, phlebography, and plethysmography by electrical impedance. The results confirm the superiority of the isotopic method with regard to efficiency and comfort for the patient. Under certain conditions the frequency of positive tests was the same with selenomethionine as with exogenous fibrinogen [fr
[Series: Utilization of Differential Equations and Methods for Solving Them in Medical Physics (2)].
Murase, Kenya
2015-01-01
In this issue, symbolic methods for solving differential equations were first introduced. Among the symbolic methods, the Laplace transform method was introduced together with some examples, in which this method was applied to solving the differential equations derived from a two-compartment kinetic model and an equivalent circuit model for membrane potential. Second, series expansion methods for solving differential equations were introduced together with some examples, in which these methods were used to solve Bessel's and Legendre's differential equations. In the next issue, simultaneous differential equations and various methods for solving them will be introduced together with some examples in medical physics.
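The Laplace transform route for a compartment model can be reproduced symbolically. The one-way two-compartment system and its rate constants below are illustrative, not the article's worked example:

```python
import sympy as sp

# dC1/dt = -k1*C1,  dC2/dt = k1*C1 - k2*C2,  C1(0) = C0,  C2(0) = 0.
# Transforming termwise gives algebraic equations for X1(s), X2(s).
t, s = sp.symbols('t s', positive=True)
k1, k2, C0 = sp.symbols('k1 k2 C0', positive=True)

X1 = C0 / (s + k1)           # from s*X1 - C0 = -k1*X1
X2 = k1 * X1 / (s + k2)      # from s*X2 = k1*X1 - k2*X2
C1 = sp.inverse_laplace_transform(X1, s, t)
C2 = sp.inverse_laplace_transform(X2, s, t)
print(sp.simplify(C1), sp.simplify(C2))
```

Inverting the transforms recovers the familiar mono- and bi-exponential time courses of the two compartments.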
An overview of currently available methods and future trends for physical activity
Directory of Open Access Journals (Sweden)
Alexander Kiško
2013-12-01
Full Text Available Background: Methodological limitations make comparison of various instruments difficult, although the number of publications on physical activity assessment has increased extensively. Therefore, systematization of techniques and definitions is essential for the improvement of knowledge in the area. Objective: This paper systematically describes and compares up-to-date methods that assess habitual physical activity and discusses the main issues regarding the use and interpretation of data collected with these techniques. Methods: A general outline of the available measures and techniques is presented in review form, along with their respective definitions, usual applications, positive aspects and shortcomings. Results and conclusions: The factors to be considered in the selection of physical activity assessment methods include goals, sample size, budget, cultural and social/environmental factors, physical burden for the subject, and statistical factors such as accuracy and precision. It is concluded that no single current technique is able to quantify all aspects of physical activity under free-living conditions, so complementary methods must be combined. In the not-too-distant future, measurement devices will take advantage of consumer technologies such as mobile phones and GPS units, and will detect and respond to physical activity in real time, creating new opportunities in measurement, remote compliance monitoring, data-driven discovery and intervention.
Numerical methods for solution of some nonlinear problems of mathematical physics
International Nuclear Information System (INIS)
Zhidkov, E.P.
1981-01-01
The continuous analog of the Newton method and its computer implementation for some nonlinear problems of mathematical physics are considered. Application of this method at JINR to a wide range of nonlinear problems has demonstrated its universality and high efficiency [ru
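The continuous analog of Newton's method follows the trajectory dx/dtau = -F'(x)^{-1} F(x); integrating it with explicit Euler steps of size h < 1 gives damped Newton iterations (h = 1 recovers the classical method). A minimal scalar sketch with an assumed example equation:

```python
# Continuous-analog (damped) Newton iteration for F(x) = x**3 - 2 = 0.
F = lambda x: x**3 - 2.0
dF = lambda x: 3.0 * x**2

x, h = 1.0, 0.2          # initial guess and Euler step along the trajectory
for _ in range(200):
    x -= h * F(x) / dF(x)
print(x)                 # approaches the root 2**(1/3)
```

The small step size trades speed for robustness: the trajectory cannot overshoot the way full Newton steps can, which is part of what makes the continuous analog attractive for hard nonlinear problems.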
Effective linear two-body method for many-body problems in atomic and nuclear physics
International Nuclear Information System (INIS)
Kim, Y.E.; Zubarev, A.L.
2000-01-01
We present an equivalent linear two-body method for the many-body problem, which is based on an approximate reduction of the many-body Schroedinger equation by the use of a variational principle. The method is applied to several problems in atomic and nuclear physics. (author)
A method to measure the thermal-physical parameter of gas hydrate in porous media
Energy Technology Data Exchange (ETDEWEB)
Diao, S.B.; Ye, Y.G.; Yue, Y.J.; Zhang, J.; Chen, Q.; Hu, G.W. [Qingdao Inst. of Marine Geology, Qingdao (China)
2008-07-01
Examination of the thermal-physical parameters of hydrate-bearing sediment is important for the exploration and exploitation of gas hydrates. This paper presented a new type of simulation experiment using a device designed on the basis of time domain reflectometry (TDR) and the transient hot-wire method. A series of investigations were performed using this new device. The paper described the experimental device, materials and method. It also presented results for the thermal-physical properties: the thermal conductivity of water, dry sand and wet sand, and of wet sand under various pressures. The TDR method was used to monitor the saturation of the hydrates. Both the parallel hot-wire method and the cross hot-wire method were used to measure the thermal conductivity of gas hydrate in porous media. A TDR sensor equipped with both a cross hot-wire probe and a parallel hot-wire probe was developed so that the cell temperature could be measured with the two methods at the same time. It was concluded that the TDR probe can serve as an online measurement technique for investigating the thermal-physical properties of hydrates in porous media: the TDR sensor can monitor the hydrate formation process, while the parallel and cross hot-wire methods can effectively measure the thermal-physical properties of the hydrates. 10 refs., 7 figs.
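The transient hot-wire principle reduces to a slope fit: for an ideal line source the temperature rises as T(t) ~ (q / 4 pi k) ln(t), so the conductivity follows from dT/d(ln t). A sketch on synthetic, noiseless data with an assumed water-like conductivity:

```python
import numpy as np

# Recover thermal conductivity k from the hot-wire temperature rise.
q = 0.5                                 # heating power per unit length, W/m
k_true = 0.6                            # assumed conductivity, W/(m K)
t = np.linspace(1.0, 10.0, 50)          # s, after the early-time transient
T = 20.0 + q / (4 * np.pi * k_true) * np.log(t)   # ideal line-source rise

slope = np.polyfit(np.log(t), T, 1)[0]  # dT / d(ln t)
k_est = q / (4 * np.pi * slope)
print(k_est)
```

With real probe data the fit window matters: early times are distorted by the wire's heat capacity and late times by boundary effects, which is one reason the paper cross-checks parallel and cross hot-wire configurations.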
Dyomin, V. V.; Polovtsev, I. G.; Davydova, A. Yu.
2018-03-01
The physical principles of a method for determination of geometrical characteristics of particles and particle recognition based on the concepts of digital holography, followed by processing of the particle images reconstructed from the digital hologram, using the morphological parameter are reported. An example of application of this method for fast plankton particle recognition is given.
Improved methods for the study of hadronic physics from lattice QCD
International Nuclear Information System (INIS)
Orginos, Kostas; Richards, David
2015-01-01
The solution of quantum chromodynamics (QCD) on a lattice provides a first-principles method for understanding QCD in the low-energy regime, and is thus an essential tool for nuclear physics. The generation of gauge configurations, the starting point for lattice calculations, requires the most powerful leadership-class computers available. However, to fully exploit such leadership-class computing requires increasingly sophisticated methods for obtaining physics observables from the underlying gauge ensembles. In this paper, we describe a variety of recent methods that have been used to advance our understanding of the spectrum and structure of hadrons through lattice QCD. (paper)
A new physical method to assess handle properties of fabrics made from wood-based fibers
Abu-Rous, M.; Liftinger, E.; Innerlohinger, J.; Malengier, B.; Vasile, S.
2017-10-01
In this work, the handfeel of fabrics made of wood-based fibers such as viscose, modal and Lyocell was investigated in relation to cotton fabrics, applying the Tissue Softness Analyzer (TSA) method in comparison to other classical methods. Two different construction groups of textiles were investigated. The validity of the TSA in assessing the textile softness of these constructions was tested. TSA results were compared to human hand evaluation as well as to classical physical measurements such as the drape coefficient, ring pull-through and Handle-o-meter, as well as a newer device, the Fabric Touch Tester (FTT). The physical methods and the human hand assessments mostly agreed on the softest and smoothest range, but showed different rankings for fabrics on the harder/rougher side. The TSA rankings of softness and smoothness corresponded to the rankings by the other physical methods as well as to human hand feel for the basic textile constructions.
Matrix-dependent multigrid-homogenization for diffusion problems
Energy Technology Data Exchange (ETDEWEB)
Knapek, S. [Institut fuer Informatik tu Muenchen (Germany)
1996-12-31
We present a method to approximately determine the effective diffusion coefficient on the coarse-scale level of problems with strongly varying or discontinuous diffusion coefficients. It is based on techniques also used in multigrid, such as Dendy's matrix-dependent prolongations and the construction of coarse-grid operators by means of the Galerkin approximation. In numerical experiments, we compare our multigrid-homogenization method with homogenization, renormalization and averaging approaches.
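The Galerkin construction referred to here can be illustrated in one dimension, where a matrix-dependent prolongation reproduces the classical harmonic-mean homogenized coefficient. A minimal sketch (the 5-node grid and coefficient values are illustrative, not from the paper):

```python
import numpy as np

# Fine grid: 5 nodes (Dirichlet ends), interior nodes 1..3, unit spacing.
# The diffusion coefficient alternates between two fine-scale values.
d1, d2 = 1.0, 100.0
d = [d1, d2, d1, d2]                      # per-interval coefficients

# Assemble the interior fine-grid operator A (standard 3-point stencil).
A = np.array([[d[0] + d[1], -d[1],        0.0        ],
              [-d[1],       d[1] + d[2], -d[2]       ],
              [0.0,        -d[2],        d[2] + d[3] ]])

# Matrix-dependent prolongation (Dendy-style, 1D): the fine-only nodes
# 1 and 3 are interpolated from the single interior coarse node (fine
# node 2) so that the operator residual vanishes there.
P = np.array([[-A[0, 1] / A[0, 0]],
              [1.0],
              [-A[2, 1] / A[2, 2]]])

# Galerkin coarse-grid operator A_c = P^T A P.
A_c = (P.T @ A @ P)[0, 0]
harmonic = 2 * d1 * d2 / (d1 + d2)        # classical 1D homogenized value
print(A_c, harmonic)
```

The coarse-grid coupling produced by the Galerkin product equals the harmonic mean of the two fine-scale coefficients, which is the exact homogenized diffusion coefficient for a 1D layered medium.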
P. Sphicas
There have been three physics meetings since the last CMS week: “physics days” on March 27-29, the Physics/ Trigger week on April 23-27 and the most recent physics days on May 22-24. The main purpose of the March physics days was to finalize the list of “2007 analyses”, i.e. the few topics that the physics groups will concentrate on for the rest of this calendar year. The idea is to carry out a full physics exercise, with CMSSW, for select physics channels which test key features of the physics objects, or represent potential “day 1” physics topics that need to be addressed in advance. The list of these analyses was indeed completed and presented in the plenary meetings. As always, a significant amount of time was also spent in reviewing the status of the physics objects (reconstruction) as well as their usage in the High-Level Trigger (HLT). The major event of the past three months was the first “Physics/Trigger week” in Apri...
Group theoretical methods in physics. [Tuebingen, July 18-22, 1977
Energy Technology Data Exchange (ETDEWEB)
Kramer, P; Rieckers, A
1978-01-01
This volume comprises the proceedings of the 6th International Colloquium on Group Theoretical Methods in Physics, held at Tuebingen in July 1977. Invited papers were presented on the following topics: supersymmetry and graded Lie algebras; concepts of order and disorder arising from molecular physics; symplectic structures and many-body physics; symmetry breaking in statistical mechanics and field theory; automata and systems as examples of applied (semi-) group theory; renormalization group; and gauge theories. Summaries are given of the contributed papers, which can be grouped as follows: supersymmetry, symmetry in particles and relativistic physics; symmetry in molecular and solid state physics; broken symmetry and phase transitions; structure of groups and dynamical systems; representations of groups and Lie algebras; and general symmetries, quantization. Those individual papers in scope for the TIC data base are being entered from ATOMINDEX tapes. (RWR)
Directory of Open Access Journals (Sweden)
Kurysh N.O.
2012-02-01
Full Text Available An analysis of publications on physical activity measurement methods is provided. Three groups of physical activity measurement methods are presented: criterion, objective and subjective methods. It is established that criterion methods are the most accurate and are used to validate objective and subjective methods. Subjective methods are frequently used in investigations of young and elderly populations. The advisability of the IPAQ questionnaire for international physical activity investigations of older adults is substantiated.
Observational homogeneity of the Universe
International Nuclear Information System (INIS)
Bonnor, W.B.; Ellis, G.F.R.
1986-01-01
A new approach to observational homogeneity is presented. The observation that stars and galaxies in distant regions appear similar to those nearby may be taken to imply that matter has had a similar thermodynamic history in widely separated parts of the Universe (the Postulate of Uniform Thermal Histories, or PUTH). The supposition is now made that similar thermodynamic histories imply similar dynamical histories. Then the distant apparent similarity is evidence for spatial homogeneity of the Universe. General Relativity is used to test this idea, taking a perfect fluid model and implementing PUTH by the condition that the density and entropy per baryon shall be the same function of the proper time along all galaxy world-lines. (author)
Some Critical Points in the Methods and Philosophy of Physical Sciences
Bozdemir, Süleyman
2018-01-01
Nowadays, it seems that there are not enough studies on the philosophy and methods of physical sciences that would be attractive to the researchers in the field. However, many revolutionary inventions have come from the mechanism of the philosophical thought of the physical sciences. This is, of course, a vast and very interesting topic that must be investigated in detail by philosophers, scientists or philosopher-scientists such as physicists. In order to do justice to it one has to write a ...
Conclusions about homogeneity and devitrification
International Nuclear Information System (INIS)
Larche, F.
1997-01-01
A lot of experimental data concerning homogeneity and devitrification of R7T7 glass have been published. It appears that: - the crystallization process is very limited, - the interfaces due to bubbles and the container wall favor crystallization locally but the ratio of crystallized volume remains always below a few per cents, and - crystallization has no damaging long-term effects as far as leaching tests can be trusted. (A.C.)
Is charity a homogeneous good?
Backus, Peter
2010-01-01
In this paper I estimate income and price elasticities of donations to six different charitable causes to test the assumption that charity is a homogeneous good. In the US, charitable donations can be deducted from taxable income. This has long been recognized as producing a price, or tax-price, of giving equal to one minus the marginal tax rate faced by the donor. A substantial portion of the economic literature on giving has focused on estimating price and income elasticities of giving as th...
Methods of analysis of physical activity among persons with spinal cord injury: A review
Directory of Open Access Journals (Sweden)
Jarmila Štěpánová
2016-11-01
Full Text Available Background: A spinal cord injury is one of the most devastating acquired physical disabilities. People with spinal cord injury are usually of a productive age and often interested in sports and physical activity. Therefore it is essential to support the development of monitoring of the quality and quantity of physical activity of people with spinal cord injury. Objective: The aim of this study was to perform a systematic review of international studies from the period 2004-2014 in order to find appropriate questionnaires focused on the subjective perception of the amount of physical activity of persons with spinal cord injury (SCI) to be used in the Czech Republic. Methods: A systematic literature review incorporated the databases Medline, SPORTDiscus, Ebsco and PSYCInfo. Results: This type of questionnaire has not been used previously in the Czech Republic, yet the following international surveys have been used: 1. Physical Activity Scale for Individuals with Physical Disabilities (PASIPD), 2. The Physical Activity Recall Assessment for People with Spinal Cord Injury (PARA-SCI), 3. Leisure Time Physical Activity Questionnaire for People with Spinal Cord Injury (LTPAQ-SCI). In the database search we also found studies focusing on objective measurements of the physical activity of wheelchair users with SCI. The measurement devices used by the intact population (pedometers, accelerometers, speedometers) are adapted for this purpose. Most recent studies utilize accelerometer-based activity monitors which are attached to the wheel of the wheelchair or to the body of the wheelchair user (wrist, leg or chest). Conclusions: This study is essential for critically approaching issues of health and the active lifestyle of persons with SCI and for teaching students of adapted physical activity and physiotherapy.
Directory of Open Access Journals (Sweden)
Mariusz Lipowski
2015-02-01
Full Text Available Background: As a conscious activity of an individual, physical activity (PA) constitutes an element of the free-time dimension. The type of goal allows us to distinguish between sport and PA: sport performance vs. psychophysical health; hence the idea to develop a tool for measurement of the motivational function of an objective in physical activity and sport. Participants and procedure: The normalisation sample consisted of 2141 individuals: 1163 women aged 16-64 (M = 23.90, SD = 8.30) and 978 men aged 16-66 (M = 24.50, SD = 9.40). In the process of validation, a factor analysis, followed by validity and reliability analysis of the tool and normalisation of the scales, was performed. Results: Based on the factor analysis and the degree to which each of the given items conformed to the theory of the motivational function of an objective, the following scales were distinguished: 1) motivational value (the extent to which the objective influences the actions undertaken by an individual), 2) time management (the level of focus on planning, arranging and organizing time for PA), 3) persistence in action (efficiency and persistence of action, and the ability to deal with adversities), and 4) motivational conflict (the level of conflict: PA objectives vs. other objectives). The Cronbach's α reliability coefficient for this version reached .78. The Inventory of Physical Activity Objectives (IPAO) also includes questions that allow one to control for variables such as the variety of forms, duration and frequency of PA, and socio-demographic variables. Conclusions: The IPAO, as a new method for measuring motives for physical activity and sport, is characterized by good psychometric properties. The IPAO can serve both scientific research and as a useful tool for personal trainers, helping diagnose the motives for engaging in PA and sports. With knowledge about the purposefulness of actions, it is possible to support and shape additional motivation experienced by
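The reliability coefficient reported above (.78) is Cronbach's α, computed as α = k/(k−1) · (1 − Σ var_item / var_total). A short sketch with made-up scores, not the IPAO data:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of total scores
    return k / (k - 1) * (1 - item_vars / total_var)

# Tiny synthetic example: 4 respondents, 3 highly consistent items.
scores = [[4, 5, 4],
          [2, 2, 3],
          [5, 5, 5],
          [1, 2, 1]]
print(round(cronbach_alpha(scores), 3))
```

Consistent items give α close to 1; a value of .78, as reported for the IPAO, is conventionally taken as acceptable internal consistency.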
A method for the efficient and effective evaluation of contract health physics technicians
International Nuclear Information System (INIS)
Burkdoll, S.C.; Conley, T.A.
1992-01-01
This paper reports that Wolf Creek has developed a method for efficiently training contract health physics technicians. The time allotted for training contractors prior to a refueling and maintenance outage is normally limited to a few days. Therefore, it was necessary to develop a systematic method to evaluate prior experience as well as practical skills and knowledge. In addition, instruction in the particular methodologies used at Wolf Creek had to be included, with methods for evaluating technician comprehension
[Series: Utilization of Differential Equations and Methods for Solving Them in Medical Physics (1)].
Murase, Kenya
2014-01-01
Utilization of differential equations and methods for solving them in medical physics are presented. First, the basic concept and the kinds of differential equations were overviewed. Second, separable differential equations and well-known first-order and second-order differential equations were introduced, and the methods for solving them were described together with several examples. In the next issue, the symbolic and series expansion methods for solving differential equations will be mainly introduced.
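The separable first-order case mentioned above can be illustrated with the mono-exponential model common in medical physics, dC/dt = −kC (e.g. tracer washout): separation of variables gives C(t) = C0·exp(−kt), which a naive numerical solver should reproduce. A minimal sketch (parameter values are illustrative):

```python
import math

# Separable first-order model: dC/dt = -k*C (mono-exponential washout).
# Separation of variables gives C(t) = C0 * exp(-k*t); here we check the
# analytic solution against a simple explicit Euler integration.
k, C0, T, n = 0.5, 10.0, 4.0, 100000
dt = T / n
C = C0
for _ in range(n):
    C += dt * (-k * C)          # explicit Euler step

exact = C0 * math.exp(-k * T)
print(round(C, 3), round(exact, 3))
```

With this step size the Euler result agrees with the closed-form solution to better than one part in a thousand, illustrating why the analytic route is preferred when a separable form exists.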
Physical activity among South Asian women: a systematic, mixed-methods review
Babakus, Whitney S; Thompson, Janice L
2012-01-01
Abstract Introduction The objective of this systematic mixed-methods review is to assess what is currently known about the levels of physical activity (PA) and sedentary time (ST) and to contextualize these behaviors among South Asian women with an immigrant background. Methods A systematic search of the literature was conducted using combinations of the key words PA, ST, South Asian, and immigrant. A mixed-methods approach was used to analyze and synthesize all evidence, both quantitative an...
Supercomputer methods for the solution of fundamental problems of particle physics
International Nuclear Information System (INIS)
Moriarty, K.J.M.; Rebbi, C.
1990-01-01
The authors present motivation and methods for computer investigations in particle theory. They illustrate the computational formulation of quantum chromodynamics and selected applications to the calculation of hadronic properties. They discuss possible extensions of the methods developed for particle theory to different areas of application, such as cosmology and solid-state physics, that share common methods. Because of the commonality of methodology, advances in one area stimulate advances in other areas. They also outline future plans of research
Conformally compactified homogeneous spaces (Possible Observable Consequences)
International Nuclear Information System (INIS)
Budinich, P.
1995-01-01
Some arguments based on the possible spontaneous violation of the Cosmological Principle (represented by the observed large-scale structures of galaxies), on the Cartan geometry of simple spinors and on the Fock formulation of the hydrogen-atom wave equation in momentum space are presented in favour of the hypothesis that space-time and momentum-space should both be conformally compactified and represented by the two four-dimensional homogeneous spaces of the conformal group, both isomorphic to (S^3 x S^1)/Z_2 and correlated by conformal inversion. Within this framework, the possible common origin of the SO(4) symmetry underlying the geometrical structure of the Universe, of Kepler orbits and of the H-atom is discussed. One of the consequences of the proposed hypothesis could be that any quantum field theory should be naturally free from both infrared and ultraviolet divergences. But then physical spaces, defined as those where physical phenomena may be best described, could be different from those homogeneous spaces. A simple, exactly soluble toy model, valid for a two-dimensional space-time, is presented in which the conjectured conformally compactified space-time and momentum-space are both isomorphic to (S^1 x S^1)/Z_2, while the physical spaces are two finite lattices which are dual, since Fourier transforms, represented by finite discrete sums, may be well defined on them. Furthermore, a q-deformed SU_q(1,1) may be represented on them if q is a root of unity. (author). 22 refs, 3 figs
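The closing observation, that finite discrete sums make the Fourier transform well defined on a finite lattice, is easy to demonstrate numerically: the transform between a finite lattice and its dual is exactly invertible, with no divergences. A small sketch (the lattice size is arbitrary):

```python
import numpy as np

# On a finite lattice of N points the Fourier transform is a finite sum,
# hence always well defined; the lattice and its dual have the same size
# and the transform is exactly invertible.
N = 16
x = np.random.default_rng(0).normal(size=N)
xk = np.fft.fft(x)                 # transform to the dual lattice
x_back = np.fft.ifft(xk).real      # and back
print(np.allclose(x, x_back))
```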
Farajzadeh, Mir Ali; Feriduni, Behruz; Mogaddam, Mohammad Reza Afshar
2016-01-01
In this paper, a new extraction method based on counter-current salting-out homogeneous liquid-liquid extraction (CCSHLLE) followed by dispersive liquid-liquid microextraction (DLLME) has been developed for the extraction and preconcentration of widely used pesticides in fruit juice samples prior to their analysis by gas chromatography-flame ionization detection (GC-FID). In this method, a small column is initially filled with sodium chloride as a separation reagent, and a mixture of water (or fruit juice) and acetonitrile is passed through the column. As the mixture passes, the sodium chloride dissolves and fine droplets of acetonitrile form owing to the salting-out effect. The droplets rise through the remaining mixture and collect as a separated layer. The collected organic phase (acetonitrile) is then removed with a syringe and mixed with 1,1,2,2-tetrachloroethane (extraction solvent at µL level). In the second step, for further enrichment of the analytes, this mixture is injected into 5 mL de-ionized water placed in a test tube with a conical bottom in order to dissolve the acetonitrile into the water and to achieve a sedimented phase of µL-level volume containing the enriched analytes. Under the optimal extraction conditions (extraction solvent, 1.5 mL acetonitrile; pH, 7; flow rate, 0.5 mL min(-1); preconcentration solvent, 20 µL 1,1,2,2-tetrachloroethane; NaCl concentration, 5% w/w; and centrifugation rate and time, 5000 rpm and 5 min, respectively), the extraction recoveries and enrichment factors ranged from 87% to 96% and from 544 to 600, respectively. Repeatability of the proposed method, expressed as relative standard deviations, ranged from 2% to 6% for intra-day (n=6, C=250 or 500 µg L(-1)) and inter-day (n=4, C=250 or 500 µg L(-1)) precision. Limits of detection were between 2 and 12 µg L(-1). Finally, the proposed method was applied to the determination of the target pesticide residues in the juice samples. Copyright © 2015
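The figures of merit quoted above follow from two standard definitions: the enrichment factor EF = C_sed/C_0 and the extraction recovery ER% = 100 · (C_sed·V_sed)/(C_0·V_0). A sketch with hypothetical concentrations and volumes chosen to fall in the reported ranges (they are not measured values from the paper):

```python
def enrichment_factor(c_sed, c_0):
    """EF: analyte concentration in the sedimented phase over the initial one."""
    return c_sed / c_0

def extraction_recovery(c_sed, v_sed, c_0, v_0):
    """ER (%): fraction of analyte mass transferred to the sedimented phase."""
    return 100.0 * (c_sed * v_sed) / (c_0 * v_0)

# Hypothetical example: 12 mL of juice at 250 ug/L, concentrated into a
# 20 uL sedimented phase at 142500 ug/L.
ef = enrichment_factor(142500, 250)                  # -> 570
er = extraction_recovery(142500, 0.020, 250, 12)     # -> 95 %
print(ef, er)
```

Both values land inside the ranges reported in the abstract (EF 544-600, ER 87-96%), which is how such optimization tables are typically checked.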
Advanced resonance self-shielding method for gray resonance treatment in lattice physics code GALAXY
International Nuclear Information System (INIS)
Koike, Hiroki; Yamaji, Kazuya; Kirimura, Kazuki; Sato, Daisuke; Matsumoto, Hideki; Yamamoto, Akio
2012-01-01
A new resonance self-shielding method based on the equivalence theory is developed for general application to the lattice physics calculations. The present scope includes commercial light water reactor (LWR) design applications which require both calculation accuracy and calculation speed. In order to develop the new method, all the calculation processes from cross-section library preparation to effective cross-section generation are reviewed and reframed by adopting the current enhanced methodologies for lattice calculations. The new method is composed of the following four key methods: (1) cross-section library generation method with a polynomial hyperbolic tangent formulation, (2) resonance self-shielding method based on the multi-term rational approximation for general lattice geometry and gray resonance absorbers, (3) spatially dependent gray resonance self-shielding method for generation of intra-pellet power profile and (4) integrated reaction rate preservation method between the multi-group and the ultra-fine-group calculations. From the various verifications and validations, applicability of the present resonance treatment is totally confirmed. As a result, the new resonance self-shielding method is established, not only by extension of a past concentrated effort in the reactor physics research field, but also by unification of newly developed unique and challenging techniques for practical application to the lattice physics calculations. (author)
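The self-shielding effect that equivalence-theory methods such as this capture can be illustrated with narrow-resonance flux weighting: the effective cross section of a resonance drops as the background (escape) cross section σ_b decreases. A toy sketch, not the GALAXY algorithm (the Lorentzian resonance shape and all numbers are illustrative):

```python
import numpy as np

# Narrow-resonance approximation: phi(E) ~ 1 / (sigma_a(E) + sigma_b),
# where sigma_b is the background cross section of equivalence theory.
# sigma_eff = int(sigma_a * phi) / int(phi) decreases with sigma_b:
# a tighter lattice (smaller sigma_b) self-shields the resonance more.
E = np.linspace(-50.0, 50.0, 20001)      # offset from resonance peak, a.u.
sigma_a = 1.0e4 / (1.0 + E**2)           # Lorentzian resonance, 1e4 b peak

def sigma_eff(sigma_b):
    phi = 1.0 / (sigma_a + sigma_b)              # NR flux shape
    return (sigma_a * phi).sum() / phi.sum()     # uniform grid: weights cancel

print(sigma_eff(1e6), sigma_eff(1e2), sigma_eff(1e0))
```

The infinite-dilution limit (large σ_b) recovers the unshielded average cross section; multi-term rational approximations like those in the abstract essentially represent a heterogeneous lattice as a weighted sum of such background terms.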
Energy Technology Data Exchange (ETDEWEB)
Breton, D; Lafore, P [Commissariat a l' Energie Atomique, Saclay (France). Centre d' Etudes Nucleaires
1964-07-01
This paper is a synthesis of the various experimental methods in use with the reactors of the Commissariat a l'Energie Atomique. The main techniques used are described, with particular attention to the difficulties encountered and the accuracy obtained; the application of these various methods to reactors in order to obtain specific results is also indicated. The paper consists of five parts. I - General methods: macroscopic and microscopic flux distribution (anisotropy effect), power distribution, etc. II - Kinetic measurements: a) pulsed neutron technique: apparatus and accuracy; application to λt and to anti-reactivity measurements; application to graphite, light water and beryllium oxide; b) oscillation techniques: equipment and accuracy; application to the measurements of effective cross sections and resonance integrals; c) fluctuations: apparatus and technique of measurement. III - Poison methods: description of methods for introducing and extracting the poison, difficulties encountered with light and heavy water, measurement of temperature coefficients and anti-reactivity. IV - Spectra measurements: choice and development of foils, problems of measurement, application to spectral measurements for thermalization studies, application to dosimetry. V - Experimental shielding measurements: the techniques and apparatus recently developed in this field are presented. (authors)
Designing a questionnaire on physical activity habits and lifestyle from the Delphi method
Directory of Open Access Journals (Sweden)
Estefanía Castillo Viera
2012-02-01
Full Text Available Nowadays there are numerous questionnaires studying physical activity habits, lifestyle and health (IPAQ, SF-36, EQ-5D). The problem arises when trying to design a new tool for use in a specific population that has particular characteristics, and which seeks to explore aspects related to the area in which it develops. The main objective of this study was to design a questionnaire on the physical activity and lifestyle of the university population. To prepare it, the Delphi method was used: a procedure based on expert consultation, in which a steering group and a group of experts give their opinions on the topic in cycles until a consensus is reached. In our case, a questionnaire was developed with seven dimensions and 55 items. The results lead us to make a positive assessment of the use of the Delphi method to design the questionnaire, ensuring greater validity. Key words: Physical activity habits, lifestyle, Delphi method.
Physical foundations of image quality in nuclear medicine. Methods for its evaluation
International Nuclear Information System (INIS)
Perez Diaz, Marlen; Diaz Rizo, Oscar
2007-01-01
The present paper describes the main physical factors that characterize image quality in Nuclear Medicine, from the physical and mathematical points of view. A conceptual description of how the imaging system (gamma camera) degrades the information emitted by the object is also presented. This is followed by a critical review of some qualitative and quantitative methods for grading image quality, collateral to equipment quality control. Among these methods we present ROC analysis, clustering techniques and discriminant analysis. As part of the last two, we also analyze the main factors that determine image quality and how they produce changes in the quantitative physical variables measured on the images. A comparison among the methods is also presented, noting their utility for checking image quality as well as the main advantages and disadvantages of each one (au)
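Of the grading methods mentioned, ROC analysis is the most standard; its area under the curve equals the probability that a randomly chosen signal-present image is rated above a signal-absent one (the Mann-Whitney form). A sketch with hypothetical observer ratings:

```python
import numpy as np

def roc_auc(scores_signal, scores_noise):
    """Nonparametric AUC (Mann-Whitney): probability that a random
    signal-present image is rated higher than a signal-absent one,
    counting ties as half."""
    s = np.asarray(scores_signal, float)
    n = np.asarray(scores_noise, float)
    greater = (s[:, None] > n[None, :]).sum()
    ties = (s[:, None] == n[None, :]).sum()
    return (greater + 0.5 * ties) / (s.size * n.size)

# Hypothetical 5-point observer ratings of lesion-present vs lesion-absent images
present = [5, 4, 4, 3, 5, 2]
absent  = [1, 2, 3, 2, 1, 3]
print(round(roc_auc(present, absent), 3))
```

An AUC of 0.5 means the images are indistinguishable to the observer; values approaching 1 indicate high detectability, which is how image-quality comparisons between acquisition settings are usually summarized.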
Peripheral nerve magnetic stimulation: influence of tissue non-homogeneity
Directory of Open Access Journals (Sweden)
Papazov Sava P
2003-12-01
Full Text Available Background: Peripheral nerves are situated in a highly non-homogeneous environment, including muscles, bones, blood vessels, etc. Time-varying magnetic field stimulation of the median and ulnar nerves in the carpal region is studied, with special consideration of the influence of non-homogeneities. Methods: A detailed three-dimensional finite element model (FEM) of the anatomy of the wrist region was built to assess the induced current distribution under external magnetic stimulation. The electromagnetic field distribution in the non-homogeneous domain was defined as an internal Dirichlet problem using the finite element method. The boundary conditions were obtained by analysis of the vector potential field excited by external current-driven coils. Results: The results include evaluation and graphical representation of the induced current field distribution at various stimulation coil positions. A comparative study of the real non-homogeneous structure with anisotropic tissue conductivities and a mock homogeneous medium is also presented. The possibility of achieving selective stimulation of either of the two nerves is assessed. Conclusion: The model developed could be useful for theoretical prediction of the current distribution in the nerves during diagnostic stimulation and therapeutic procedures involving electromagnetic excitation. The errors introduced by modeling a homogeneous domain rather than the real non-homogeneous biological structure are demonstrated. The practical implications of the applied approach are valid for any arbitrary weakly conductive medium.
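The error introduced by homogeneous-domain modeling can be seen already in a one-dimensional finite-difference analogue: with a piecewise conductivity, the potential at the material interface shifts well away from the homogeneous-medium value. A toy sketch, not the wrist FEM (geometry and conductivities are illustrative):

```python
import numpy as np

# Steady conduction -d/dx(sigma du/dx) = 0 on [0,1], u(0)=0, u(1)=1,
# with sigma = 1 on the left half and sigma = 5 on the right half.
n = 200                                             # intervals
sigma = np.where(np.arange(n) < n // 2, 1.0, 5.0)   # per-interval conductivity

# Interior finite-difference system (flux-conservative stencil).
A = np.zeros((n - 1, n - 1))
b = np.zeros(n - 1)
for i in range(n - 1):
    A[i, i] = sigma[i] + sigma[i + 1]
    if i > 0:
        A[i, i - 1] = -sigma[i]
    if i < n - 2:
        A[i, i + 1] = -sigma[i + 1]
b[-1] = sigma[n - 1] * 1.0                          # boundary value u(1) = 1
u_mid = np.linalg.solve(A, b)[n // 2 - 1]           # potential at the interface

# Exact interface value is sigma2/(sigma1+sigma2) = 5/6; a homogeneous
# model would predict 0.5 there.
print(round(u_mid, 4))
```

The non-homogeneous solution puts most of the potential drop in the poorly conducting half, which is the 1D counterpart of the modeling errors quantified in the paper.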
Built environment and physical activity: a brief review of evaluation methods
Directory of Open Access Journals (Sweden)
Adriano Akira Ferreira Hino
2010-08-01
Full Text Available There is strong evidence indicating that the environment where people live has a marked influence on physical activity. The current understanding of this relationship is based on studies conducted in developed and culturally distinct countries and may not be applicable to the context of Brazil. In this respect, a better understanding of methods evaluating the relationship between the environment and physical activity may contribute to the development of new studies in this area in Brazil. The objective of the present study was to briefly describe the main methods used to assess the relationship between built environment and physical activity. Three main approaches are used to obtain information about the environment: 1) environmental perception; 2) systematic observation, and 3) geoprocessing. These methods are mainly applied to evaluate population density, mixed land use, physical activity facilities, street patterns, sidewalk/bike path coverage, public transportation, and safety/esthetics. In Brazil, studies investigating the relationship between the environment and physical activity are scarce, but the number of studies is growing. Thus, further studies are necessary and methods applicable to the context of Brazil need to be developed in order to increase the understanding of this subject.
International Nuclear Information System (INIS)
Marleau, G.; Debos, E.
1998-01-01
One of the main problems encountered in cell calculations is that of spatial homogenization, where one associates with a heterogeneous cell a homogeneous set of cross sections. The homogenization process is in fact trivial when a totally reflected cell without leakage is fully homogenized, since it involves only a flux-volume weighting of the isotropic cross sections. When anisotropic leakage models are considered, in addition to homogenizing the isotropic cross sections, the anisotropic scattering cross section must also be considered. The simple option, which consists of using the same homogenization procedure for both the isotropic and anisotropic components of the scattering cross section, leads to inconsistencies between the homogeneous and homogenized transport equations. Here we present a method for homogenizing the anisotropic scattering cross sections that resolves these inconsistencies. (author)
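The flux-volume weighting described above for the isotropic cross sections can be sketched directly (the two-region numbers are illustrative, not from the paper):

```python
def flux_volume_homogenize(sigma, phi, vol):
    """Flux-volume weighted homogenized cross section:
    sigma_hom = sum_i(sigma_i * phi_i * V_i) / sum_i(phi_i * V_i)."""
    num = sum(s * p * v for s, p, v in zip(sigma, phi, vol))
    den = sum(p * v for p, v in zip(phi, vol))
    return num / den

# Hypothetical two-region cell: fuel and moderator.
sigma = [0.40, 0.02]      # region cross sections (1/cm)
phi   = [0.8, 1.2]        # region-averaged fluxes
vol   = [1.0, 3.0]        # region volumes (cm^3)
print(round(flux_volume_homogenize(sigma, phi, vol), 4))
```

This weighting preserves the total reaction rate of the heterogeneous cell; the paper's point is that no equally simple recipe does the same for the anisotropic scattering component.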
D. Acosta
2010-01-01
A remarkable amount of progress has been made in Physics since the last CMS Week in June given the exponential growth in the delivered LHC luminosity. The first major milestone was the delivery of a variety of results to the ICHEP international conference held in Paris this July. For this conference, CMS prepared 15 Physics Analysis Summaries on physics objects and 22 Summaries on new and interesting physics measurements that exploited the luminosity recorded by the CMS detector. The challenge was incorporating the largest batch of luminosity that was delivered only days before the conference (300 nb-1 total). The physics covered from this initial running period spanned hadron production measurements, jet production and properties, electroweak vector boson production, and even glimpses of the top quark. Since then, the accumulated integrated luminosity has increased by a factor of more than 100, and all groups have been working tremendously hard on analysing this dataset. The September Physics Week was held ...
Directory of Open Access Journals (Sweden)
Olaniyi Samuel Iyiola
2014-09-01
Full Text Available In this paper, we obtain analytical solutions of homogeneous time-fractional Gardner equation and non-homogeneous time-fractional models (including Buck-master equation using q-Homotopy Analysis Method (q-HAM. Our work displays the elegant nature of the application of q-HAM not only to solve homogeneous non-linear fractional differential equations but also to solve the non-homogeneous fractional differential equations. The presence of the auxiliary parameter h helps in an effective way to obtain better approximation comparable to exact solutions. The fraction-factor in this method gives it an edge over other existing analytical methods for non-linear differential equations. Comparisons are made upon the existence of exact solutions to these models. The analysis shows that our analytical solutions converge very rapidly to the exact solutions.
Hallinen, Nicole R.; Chi, Min; Chin, Doris B.; Prempeh, Joe; Blair, Kristen P.; Schwartz, Daniel L.
2013-01-01
Cognitive developmental psychology often describes children's growing qualitative understanding of the physical world. Physics educators may be able to use the relevant methods to advantage for characterizing changes in students' qualitative reasoning. Siegler developed the "rule assessment" method for characterizing levels of qualitative understanding for two-factor situations (e.g., volume and mass for density). The method assigns children to rule levels that correspond to the degree they notice and coordinate the two factors. Here, we provide a brief tutorial plus a demonstration of how we have used this method to evaluate instructional outcomes with middle-school students who learned about torque, projectile motion, and collisions using different instructional methods with simulations.
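Rule assessment can be sketched as scoring a child's answers against the response pattern each rule predicts. Below, a sketch for Siegler's classic balance-scale task, with only Rules I and IV shown and entirely made-up items and responses:

```python
# Siegler-style "rule assessment" for the balance-scale task (two
# factors: weight and distance). Each rule predicts an answer per item;
# a child is assigned the rule matching the most responses.

def rule1(wl, dl, wr, dr):                 # Rule I: attends to weight only
    return 'left' if wl > wr else 'right' if wr > wl else 'balance'

def rule4(wl, dl, wr, dr):                 # Rule IV: integrates both (torque)
    tl, tr = wl * dl, wr * dr
    return 'left' if tl > tr else 'right' if tr > tl else 'balance'

RULES = {'I': rule1, 'IV': rule4}

def assign_rule(items, responses):
    scores = {name: sum(f(*it) == r for it, r in zip(items, responses))
              for name, f in RULES.items()}
    return max(scores, key=scores.get)

# Items: (weight_left, dist_left, weight_right, dist_right)
items = [(3, 1, 2, 1), (2, 2, 2, 3), (1, 3, 3, 1), (2, 2, 4, 1)]
child = ['left', 'balance', 'right', 'right']   # ignores distance
print(assign_rule(items, child))
```

In practice the item set is designed so that each rule produces a distinct response signature, making the level assignment diagnostic rather than a guess.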
Physical models and numerical methods of the reactor dynamic computer program RETRAN
International Nuclear Information System (INIS)
Kamelander, G.; Woloch, F.; Sdouz, G.; Koinig, H.
1984-03-01
This report describes the physical models and the numerical methods of the reactor dynamics code RETRAN, which simulates reactivity transients in light-water reactors. The neutron-physics part of RETRAN is based on the two-group diffusion equations, which are solved by a discretization similar to the TWIGL method. An exponential transformation is applied and the inner iterations are accelerated by a coarse-mesh rebalancing procedure. The thermo-hydraulic model approximates the equation of state by a built-in steam-water table and provides options for the calculation of heat-conduction coefficients and heat-transfer coefficients. (Author) [de]
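The diffusion machinery underlying such codes can be reduced to a minimal one-group, one-dimensional analogue: a finite-difference k-eigenvalue problem solved by power iteration (without the exponential transformation or coarse-mesh rebalancing; all data are illustrative):

```python
import numpy as np

# One-group 1D diffusion eigenvalue problem with zero-flux boundaries:
# -D u'' + Sigma_a u = (1/k) nu_Sigma_f u, discretized on a uniform
# mesh and solved by inverse power iteration.
n, h = 50, 1.0
D, sig_a, nu_sig_f = 1.0, 0.01, 0.012

A = np.zeros((n, n))
for i in range(n):
    A[i, i] = 2 * D / h**2 + sig_a
    if i > 0:
        A[i, i - 1] = -D / h**2
    if i < n - 1:
        A[i, i + 1] = -D / h**2

phi = np.ones(n)
k = 1.0
for _ in range(200):
    src = nu_sig_f * phi                 # fission source
    phi_new = np.linalg.solve(A, src / k)
    k *= phi_new.sum() / phi.sum()       # eigenvalue from source ratio
    phi = phi_new / phi_new.max()        # normalize the flux shape

print(round(k, 4))
```

Two-group codes like RETRAN iterate the same outer fission-source loop, but with coupled fast and thermal group equations and acceleration schemes (exponential transformation, coarse-mesh rebalancing) wrapped around it.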
Using means and methods of general physical training in education of bowlers
Directory of Open Access Journals (Sweden)
Fanigina O.U.
2011-04-01
Full Text Available The main directions in the education of bowlers are described. The means and methods of physical education that ensure the formation of the high-quality movements making up the main skill are identified. Different means of general physical training, taking into account the individual peculiarities of the bowler, are shown. The principles of choosing general developmental exercises and the main directions of influence on the development of different abilities are presented. Scientific-methodological support of physical education in the teaching-training process for children who play bowling in sports schools has been created.
An Overview of the Computational Physics and Methods Group at Los Alamos National Laboratory
Energy Technology Data Exchange (ETDEWEB)
Baker, Randal Scott [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2018-02-22
CCS Division was formed to strengthen the visibility and impact of computer science and computational physics research on strategic directions for the Laboratory. Both computer science and computational science are now central to scientific discovery and innovation. They have become indispensable tools for all other scientific missions at the Laboratory. CCS Division forms a bridge between external partners and Laboratory programs, bringing new ideas and technologies to bear on today’s important problems and attracting high-quality technical staff members to the Laboratory. The Computational Physics and Methods Group CCS-2 conducts methods research and develops scientific software aimed at the latest and emerging HPC systems.