Scattering by multiple parallel radially stratified infinite cylinders buried in a lossy half space.
Lee, Siu-Chun
2013-07-01
The theoretical solution for scattering by an arbitrary configuration of closely spaced parallel infinite cylinders buried in a lossy half space is presented in this paper. The refractive index and permeability of the half space and cylinders are complex in general. Each cylinder is radially stratified with a distinct complex refractive index and permeability. The incident radiation is an arbitrarily polarized plane wave propagating in the plane normal to the axes of the cylinders. Analytic solutions are derived for the electric and magnetic fields and the Poynting vector of backscattered radiation emerging from the half space. Numerical examples are presented to illustrate the application of the scattering solution to calculate backscattering from a lossy half space containing multiple homogeneous and radially stratified cylinders at various depths and different angles of incidence.
The use of an equivalent homogeneous half-space in soil-structure interaction analysis
International Nuclear Information System (INIS)
Holzloehner, U.
1979-01-01
In analyses of seismic soil-structure interaction, the soil is often assumed to be an elastic body. The solution procedure is lengthy if the heterogeneity of the soil is treated rigorously. If the soil is taken as a homogeneous elastic half-space, existing solutions can be used; solutions also exist for some simple layered systems. However, it is often not easy to correlate the variation of soil properties with depth, as found by measurements, with those of the ideal systems. The purpose of the paper is to show how to make use of the existing solutions. (orig.)
Numerical modelling of complex resistivity effects on a homogeneous half-space at low frequencies
DEFF Research Database (Denmark)
Ingeman-Nielsen, Thomas; Baumgartner, François
2006-01-01
for environmental applications and thanks to technological progress, the use of wide band frequency equipment seems promising, and it is expected to shed light on the different results among the published solutions to the electromagnetic (EM) coupling problem. We review the theory of EM coupling over a homogeneous...
Butler, S. L.
2017-08-01
It is instructive to consider the sensitivity function for the resistivity of a homogeneous half-space since it has a simple mathematical formula and it does not require a priori knowledge of the resistivity of the ground. Past analyses of this function have allowed visualization of the regions that contribute most to apparent resistivity measurements with given array configurations. The horizontally integrated form of this equation gives the sensitivity function for an infinitesimally thick horizontal slab with a small resistivity contrast, and analysis of this function has allowed estimates of the depth of investigation for a given electrode array. Recently, it has been shown that the average of the vertical coordinate over this function yields a simple formula that can be used to estimate the depth of investigation. The sensitivity function for a vertical inline slab has also been previously calculated. In this contribution, I show that the sensitivity function for a homogeneous half-space can also be integrated so as to give sensitivity functions to semi-infinite vertical slabs that are perpendicular to the array axis. These horizontal sensitivity functions can, in turn, be integrated over the spatial coordinates to give the mean horizontal positions of the sensitivity functions. The mean horizontal positions give estimates for the centres of the regions that affect apparent resistivity measurements for an arbitrary array configuration and can be used as horizontal positions when plotting pseudosections, even for non-collinear arrays. The mean of the horizontal coordinate that is perpendicular to a collinear array also gives a simple formula for estimating the distance over which offline resistivity anomalies will have a significant effect. The root mean square (rms) widths of the sensitivity functions are also calculated in each of the coordinate directions as an estimate of the inverse of the resolution of a given array. For depth and in the direction perpendicular
Butler, S. L.
2017-12-01
The electrical resistivity method is now highly developed with 2D and even 3D surveys routinely performed and with available fast inversion software. However, rules of thumb, based on simple mathematical formulas, for important quantities like depth of investigation, horizontal position and resolution have not previously been available and would be useful for survey planning, preliminary interpretation and general education about the method. In this contribution, I will show that the sensitivity function for the resistivity method for a homogeneous half-space can be analyzed in terms of its first and second moments which yield simple mathematical formulas. The first moment gives the sensitivity-weighted center of an apparent resistivity measurement with the vertical center being an estimate of the depth of investigation. I will show that this depth of investigation estimate works at least as well as previous estimates based on the peak and median of the depth sensitivity function which must be calculated numerically for a general four electrode array. The vertical and horizontal first moments can also be used as pseudopositions when plotting 1, 2 and 3D pseudosections. The appropriate horizontal plotting point for a pseudosection was not previously obvious for nonsymmetric arrays. The second moments of the sensitivity function give estimates of the spatial extent of the region contributing to an apparent resistivity measurement and hence are measures of the resolution. These also have simple mathematical formulas.
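The moment formulas described above can be illustrated numerically. The sketch below is a hypothetical example (the exponential profile and the spacing value are illustrative, not the paper's actual sensitivity function): it computes the first moment (mean depth) and the rms width of a depth-sensitivity profile by simple quadrature.

```python
import numpy as np

def moments(s, z):
    """Mean (first moment) and rms width (from the second moment)
    of a sensitivity profile s(z) sampled on a uniform grid."""
    dz = z[1] - z[0]
    w = np.sum(s) * dz                            # zeroth moment (normalization)
    mean = np.sum(z * s) * dz / w                 # first moment: mean depth
    var = np.sum((z - mean) ** 2 * s) * dz / w    # central second moment
    return mean, np.sqrt(var)

a = 10.0                              # electrode spacing (m), illustrative
z = np.linspace(0.0, 400.0, 400001)
s = z * np.exp(-z / a)                # hypothetical depth-sensitivity profile
mean_depth, rms_width = moments(s, z)
print(mean_depth, rms_width)          # analytically 2a = 20 and a*sqrt(2) ≈ 14.14
```

For this profile the moments have closed forms (mean 2a, rms width a√2), so the quadrature can be checked directly; a real array's sensitivity function would replace `s`.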
Rayleigh wave effects in an elastic half-space.
Aggarwal, H. R.
1972-01-01
Consideration of Rayleigh wave effects in a homogeneous isotropic linearly elastic half-space subject to an impulsive uniform disk pressure loading. An approximate formula is obtained for the Rayleigh wave effects. It is shown that the Rayleigh waves near the center of loading arise from the portion of the dilatational and shear waves moving toward the axis, after they originate at the edge of the load disk. A study is made of the vertical displacement due to Rayleigh waves at points on the axis near the surface of the elastic half-space.
Transient Response of Thin Wire above a Layered Half-Space Using TDIE/FDTD Hybrid Method
Directory of Open Access Journals (Sweden)
Bing Wei
2012-01-01
The TDIE/FDTD hybrid method is applied to calculate the transient response of a thin wire above a lossy layered half-space. The time-domain reflection of the layered half-space is computed by a one-dimensional modified FDTD method. Then the transient response of the thin wire induced by two excitation sources (the incident wave and the reflected wave) is calculated by the TDIE method. Finally, numerical results are given to illustrate the feasibility and high efficiency of the presented scheme.
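The FDTD half of such a hybrid scheme computes a time-domain reflection off the ground. As a minimal illustration (a plain textbook 1D FDTD, not the paper's modified scheme; grid and material values are assumptions), the sketch below launches a Gaussian pulse at a lossless dielectric half-space with refractive index n = 2 and recovers the Fresnel normal-incidence reflection magnitude (n-1)/(n+1) = 1/3.

```python
import numpy as np

eta0 = 377.0                   # free-space impedance (approx.)
N, steps = 2000, 1600
ez = np.zeros(N)
hy = np.zeros(N)
eps_r = np.ones(N)
eps_r[1200:] = 4.0             # dielectric half-space, n = 2

# Right-travelling Gaussian pulse; with Courant number S = 1 the half-cell
# and half-step staggering offsets cancel, so this launch is exact in vacuum.
k = np.arange(N)
g = lambda x: np.exp(-((x - 400.0) / 60.0) ** 2)
ez[:] = g(k)
hy[:] = -g(k) / eta0

probe, rec = 800, []
for n in range(steps):
    hy[:-1] += (ez[1:] - ez[:-1]) / eta0             # H update (S = 1)
    ez[1:] += eta0 * (hy[1:] - hy[:-1]) / eps_r[1:]  # E update with material
    rec.append(ez[probe])

refl = np.array(rec[1000:])    # probe sees only the reflected pulse by now
print(abs(refl).max())         # ≈ 1/3, the Fresnel magnitude for n = 2
```

The reflected pulse is inverted (r = -1/3 here); a lossy ground would add a conductivity term to the E update, which is the kind of extension the paper's modified scheme addresses.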
Static elastic deformation in an orthotropic half-space with rigid ...
Indian Academy of Sciences (India)
Yogita Godara
2017-10-06
Oct 6, 2017. The solution of static elastic deformation of a homogeneous, orthotropic elastic uniform half-space with ... Faults are fractures in Earth's crust where rocks ...
Ignition of a combustible half space
Olmstead, W. E.
1983-01-01
A half space of combustible material is subjected to an arbitrary energy flux at the boundary where convection heat loss is also allowed. An asymptotic analysis of the temperature growth reveals two conditions necessary for ignition to occur. Cases of both large and order unity Lewis number are shown to lead to a nonlinear integral equation governing the thermal runaway. Some global and asymptotic properties of the integral equation are obtained.
Negative refraction of inhomogeneous waves in lossy isotropic media
International Nuclear Information System (INIS)
Fedorov, V Yu; Nakajima, T
2014-01-01
We theoretically study negative refraction of inhomogeneous waves at the interface of lossy isotropic media. We obtain explicit (up to the sign) expressions for the parameters of a wave transmitted through the interface between two lossy media characterized by complex permittivity and permeability. We show that the criterion of negative refraction that requires negative permittivity and permeability can be used only in the case of a homogeneous incident wave at the interface between a lossless and a lossy medium. In a more general situation, when the incident wave is inhomogeneous, or both media are lossy, the criterion of negative refraction becomes dependent on the incidence angle. Most interestingly, we show that negative refraction can be realized in conventional lossy materials (such as metals) if their interfaces are properly oriented. (paper)
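The angle dependence enters through the complex form of Snell's law. As a minimal sketch (a standard textbook construction, not the paper's full criterion), the transmitted wavevector between media with complex refractive indices can be computed by conserving the tangential component and choosing the branch that decays into the second medium.

```python
import numpy as np

def transmitted_wavevector(n1, n2, theta_i, k0=1.0):
    """Transmitted wavevector across a planar interface (complex Snell's law).

    n1, n2 may be complex; the tangential component kx is conserved and the
    branch of kz is chosen so the wave decays into medium 2 (Im(kz) >= 0)."""
    kx = k0 * n1 * np.sin(theta_i)
    kz = np.sqrt((k0 * n2) ** 2 - kx ** 2 + 0j)
    if kz.imag < 0:
        kz = -kz
    return kx, kz

# Lossless glass into a lossy medium (the complex index is an assumption)
kx, kz = transmitted_wavevector(1.5, 2.0 + 0.5j, np.radians(30))
theta_phase = np.degrees(np.arctan2(kx.real, kz.real))
print(theta_phase)   # direction of the transmitted phase propagation (degrees)
```

Because the planes of constant phase and constant amplitude no longer coincide in the lossy medium, the sign and size of `theta_phase` must be judged from the real parts of the wavevector components, which is exactly why the refraction criterion becomes angle dependent.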
The propagation of nonlinear rayleigh waves in layered elastic half-space
International Nuclear Information System (INIS)
Ahmetolan, S.
2004-01-01
In this work, the propagation of small but finite amplitude generalized Rayleigh waves in an elastic half-space covered by a different elastic layer of uniform and finite thickness is considered. The constituent materials are assumed to be homogeneous, isotropic, and compressible hyperelastic. Excluding the harmonic resonance phenomena, it is shown that the nonlinear self-modulation of generalized Rayleigh waves is governed asymptotically by a nonlinear Schrodinger (NLS) equation. The stability of the solutions and the existence of solitary wave-type solutions of the NLS equation depend strongly on the sign of the product of the coefficients of the nonlinear and dispersion terms of the equation. Therefore the analysis continues with an examination of the dependence of these coefficients on the nonlinear material parameters. Three different models have been considered: nonlinear layer-nonlinear half-space, linear layer-nonlinear half-space, and nonlinear layer-linear half-space. The behavior of the coefficients of the NLS equation was also analyzed in the limit as h (the thickness of the layer) goes to zero while k (the wave number) is held constant. Conclusions are then drawn about the effect of nonlinear material parameters on the wave modulation. In the numerical investigations both hypothetical and real material models are used.
Accounting for Antenna in Half-Space Fresnel Coefficient Estimation
Directory of Open Access Journals (Sweden)
A. D'Alterio
2012-01-01
The problem of retrieving the Fresnel reflection coefficients of a half-space medium starting from measurements collected under a reflection-mode multistatic configuration is dealt with. According to our previous results, reflection coefficient estimation is cast as the inversion of a linear operator. Here, however, we take a step ahead towards more realistic scenarios, as the role of the antennas (both transmitting and receiving) is embodied in the estimation procedure. Numerical results are presented to show the effectiveness of the method for different types of half-space media.
Torsional surface waves in an inhomogeneous layer over a gravitating anisotropic porous half-space
International Nuclear Information System (INIS)
Gupta, Shishir; Pramanik, Abhijit
2015-01-01
The present work deals with the propagation of torsional surface waves in an inhomogeneous layer over a gravitating anisotropic porous half-space. The inhomogeneous layer exhibits inhomogeneity of quadratic type. In order to show the effect of gravity, the equation for the velocity of the torsional wave has been obtained. It is also observed that for a layer over a homogeneous half-space without gravity, the torsional surface wave does not propagate. An attempt is also made to assess the possible propagation of torsional surface waves in that medium in the absence of the upper layer. The effects of inhomogeneity factors and porosity on the phase velocity are depicted by means of graphs. (paper)
Scattering of a pulse by a cavity in an elastic half-space
International Nuclear Information System (INIS)
Scandrett, C.L.; Kriegsmann, G.A.; Achenbach, J.D.
1986-01-01
The finite difference technique is employed to study plane strain scattering of pulses from finite anomalies embedded in an isotropic, homogeneous, elastic half-space. In particular, the scatterer is taken to be a cylindrical cavity. A new transmission boundary condition is developed which transmits energy conveyed by Rayleigh surface waves. This condition is successfully employed in reducing the domain of numerical calculations from a semi-infinite to a finite region. A test of the numerical scheme is given by considering a time harmonic pulse of infinite extent. The numerical solution is marched out in time until transients have radiated away and a steady state has been reached, which is found to be in good agreement with results produced by a series-type solution. Time domain solutions are given in terms of time histories of displacements at the half-space free surface, and by sequences of snapshots, taken of the entire numerical domain, which illustrate the scattering dynamics.
Electrostatic images for underwater anisotropic conductive half spaces
International Nuclear Information System (INIS)
Flykt, M.; Lindell, I.; Eloranta, E.
1998-01-01
A static image principle makes it possible to derive analytical solutions to some basic geometries for DC fields. The underwater environment is especially difficult from both the theoretical and the practical point of view. However, there are increasing demands that underwater geological formations should also be studied in detail. The traditional image of a point source lies at the mirror point of the original. When anisotropic media are involved, however, the image location can change and the image source may be a continuous, sector-like distribution. In this paper some theoretical considerations are carried out for the case where the lower half-space can have a very general anisotropy in terms of electrical conductivity, while the upper half-space is assumed isotropic. The reflection potential field is calculated for different values of electrical conductivity. (orig.)
Quasi-Rayleigh waves in transversely isotropic half-space with inclined axis of symmetry
International Nuclear Information System (INIS)
Yanovskaya, T.B.; Savina, L.S.
2003-09-01
A method for determination of the characteristics of the quasi-Rayleigh (qR) wave in a transversely isotropic homogeneous half-space with an inclined axis of symmetry is outlined. The solution is obtained as a superposition of qP, qSV and qSH waves, and the surface wave velocity is determined from the boundary conditions at the free surface and at infinity, as in the case of the Rayleigh wave in an isotropic half-space. Though the theory is simple enough, a numerical procedure for the calculation of the surface wave velocity presents some difficulties. The difficulty stems from the necessity to calculate complex roots of a non-linear equation, which in turn contains functions determined as roots of nonlinear equations with complex coefficients. Numerical analysis shows that roots of the equation corresponding to the boundary conditions do not exist in the whole domain of azimuths and inclinations of the symmetry axis. The domain of existence of the qR wave depends on the ratio of the elastic parameters: for some strongly anisotropic models the wave cannot exist at all. For some angles of inclination, qR wave velocities deviate from those calculated on the basis of the perturbation method valid for weak anisotropy, though they have the same tendency of variation with azimuth. The phase of the qR wave varies with depth, unlike the Rayleigh wave in an isotropic half-space. Also unlike the Rayleigh wave in an isotropic half-space, the qR wave has three components - vertical, radial and transverse. Particle motion in the horizontal plane is elliptic. The direction of the major axis of the ellipse coincides with the direction of propagation only in azimuths 0 deg. (180 deg.) and 90 deg. (270 deg.). (author)
LibHalfSpace: A C++ object-oriented library to study deformation and stress in elastic half-spaces
Ferrari, Claudio; Bonafede, Maurizio; Belardinelli, Maria Elina
2016-11-01
The study of deformation processes in elastic half-spaces is widely employed for many purposes (e.g. didactics, scientific investigation of real processes, inversion of geodetic data, etc.). We present a coherent programming interface containing a set of tools designed to make the study of processes in an elastic half-space easier and faster. LibHalfSpace is presented in the form of an object-oriented library. A set of well known and frequently used source models (Mogi source, penny-shaped horizontal crack, inflating spheroid, Okada rectangular dislocation, etc.) are implemented to demonstrate the potential usage and the versatility of the library. The common interface given to the library tools enables us to switch easily among the effects produced by different deformation sources that can be monitored at the free surface. Furthermore, the library also offers an interface which simplifies the creation of new source models exploiting the features of object-oriented programming (OOP). These source models can be built as distributions of rectangular boundary elements. In order to better explain how new models can be deployed, some examples are included in the library.
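The kind of common interface described can be sketched in a few lines. The following is a hypothetical Python analogue (not LibHalfSpace's actual C++ API); the Mogi surface-displacement expressions use one common form of the point-pressure-source solution, and all numeric values are illustrative.

```python
from abc import ABC, abstractmethod
import numpy as np

class DeformationSource(ABC):
    """Common interface for half-space deformation sources (sketch)."""
    @abstractmethod
    def surface_displacement(self, r):
        """Return (u_r, u_z) at radial distance r on the free surface."""

class MogiSource(DeformationSource):
    """Point pressure source at depth d with volume change dV
    (one common form of the Mogi solution; nu is Poisson's ratio)."""
    def __init__(self, depth, dV, nu=0.25):
        self.d, self.dV, self.nu = depth, dV, nu

    def surface_displacement(self, r):
        C = (1.0 - self.nu) * self.dV / np.pi
        R3 = (r ** 2 + self.d ** 2) ** 1.5
        return C * r / R3, C * self.d / R3

src = MogiSource(depth=1000.0, dV=1e6)   # 1 km depth, 1e6 m^3 volume change
ur, uz = src.surface_displacement(np.array([0.0, 1000.0]))
print(uz)   # uplift directly above the source and at one depth away
```

A new source model plugs in by subclassing `DeformationSource`, which mirrors the way an OOP library can let callers switch sources behind one interface.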
Directory of Open Access Journals (Sweden)
Sunita Deswal
2013-01-01
The aim of this paper is to study magneto-thermoelastic interactions in an initially stressed isotropic homogeneous half-space in the context of fractional order theory of generalized thermoelasticity. State space formulation with the Laplace transform technique is used to obtain the general solution, and the resulting formulation is applied to the ramp type increase in thermal load and zero stress. Solutions of the problem in the physical domain are obtained by using a numerical method of the Laplace inverse transform based on the Fourier expansion technique, and the expressions for the displacement, temperature, and stress inside the half-space are obtained. Numerical computations are carried out for a particular material for illustrating the results. Results obtained for the field variables are displayed graphically. Some comparisons have been shown in figures to present the effect of fractional parameter, ramp parameter, magnetic field, and initial stress on the field variables. Some particular cases of special interest have been deduced from the present investigation.
Deformation of a flexible disk bonded to an elastic half space-application to the lung.
Lai-Fook, S J; Hajji, M A; Wilson, T A
1980-08-01
An analysis is presented of the deformation of a homogeneous, isotropic, elastic half space subjected to a constant radial strain in a circular area on the boundary. Explicit analytic expressions for the normal and radial displacements and the shear stress on the boundary are used to interpret experiments performed on inflated pig lungs. The boundary strain was induced by inflating or deflating the lung after bonding a flexible disk to the lung surface. The prediction that the surface bulges outward for positive boundary strain and inward for negative strain was observed in the experiments. Poisson's ratio at two transpulmonary pressures was measured, by use of the normal displacement equation evaluated at the surface. A direct estimate of Poisson's ratio was possible because the normal displacement of the surface depended uniquely on the compressibility of the material. Qualitative comparisons between theory and experiment support the use of continuum analyses in evaluating the behavior of the lung parenchyma when subjected to small local distortions.
Directory of Open Access Journals (Sweden)
Qi-lin Xiong
The generalized magneto-thermoelastic problem of an infinite homogeneous isotropic microstretch half-space with temperature-dependent material properties placed in a transverse magnetic field is investigated in the context of different generalized thermoelastic theories. The upper surface of the half-space is subjected to a zonal time-dependent heat shock. By solving the finite element governing equations, the solution to the problem is obtained, from which the transient magneto-thermoelastic responses, including temperature, stresses, displacements, microstretch, microrotation, induced magnetic field and induced electric field, are presented graphically. Comparisons are made between the results obtained under different generalized thermoelastic theories to show some unique features of generalized thermoelasticity, and comparisons are made between the results obtained under three forms of temperature-dependent material properties (absolute temperature dependent, reference temperature dependent and temperature-independent) to show the effects of absolute temperature and reference temperature.
Energy Technology Data Exchange (ETDEWEB)
Tuereci, R.G. [Kirikkale Univ., Kirikkale (Turkey). Kirikkale Vocational School; Tuereci, D. [Ministry of Education, Ankara (Turkey). 75th year Anatolia High School
2017-05-15
The one-speed, time-independent neutron transport equation for a homogeneous medium can be solved with anisotropic scattering that includes both the linear and the quadratic anisotropic scattering properties. Once Case's eigenfunctions and the orthogonality relations among these eigenfunctions have been obtained, neutron transport problems such as the albedo problem can be calculated numerically by using numerical or semi-analytic methods. In this study the half-space albedo problem is investigated by using the modified F_N method.
International Nuclear Information System (INIS)
Nkoma, J.S.
1982-08-01
A quantum-mechanical theory for the inelastic scattering of slow electrons (ISSE) by surface excitations is developed within the half-space model. The process of transmission of incident electrons into the crystal is described by the homogeneous Schroedinger equation, while the scattering process inside the crystal is described by an inhomogeneous Schroedinger equation. The scattering cross-section for ISSE by surface excitations is derived and is found to be small since it is dependent on an inverse sum of wavevectors which is large. It is also dependent on the fluctuations in the scattering potential. (author)
Static deformation of two welded monoclinic elastic half-spaces due ...
Indian Academy of Sciences (India)
Static deformation of two monoclinic elastic half-spaces in welded contact due to a long inclined strike-slip fault situated in one of the half-spaces is studied analytically and numerically. Closed- form algebraic expressions for the displacement at any point of the medium are obtained. The variation of the displacement at the ...
Displacement field for an edge dislocation in a layered half-space
Savage, J.C.
1998-01-01
The displacement field for an edge dislocation in an Earth model consisting of a layer welded to a half-space of different material is found in the form of a Fourier integral following the method given by Weeks et al. [1968]. There are four elementary solutions to be considered: the dislocation is either in the half-space or the layer and the Burgers vector is either parallel or perpendicular to the layer. A general two-dimensional solution for a dip-slip faulting or dike injection (arbitrary dip) can be constructed from a superposition of these elementary solutions. Surface deformations have been calculated for an edge dislocation located at the interface with Burgers vector inclined 0°, 30°, 60°, and 90° to the interface for the case where the rigidity of the layer is half of that of the half-space and the Poisson ratios are the same. Those displacement fields have been compared to the displacement fields generated by similarly situated edge dislocations in a uniform half-space. The surface displacement field produced by the edge dislocation in the layered half-space is very similar to that produced by an edge dislocation at a different depth in a uniform half-space. In general, a low-modulus (high-modulus) layer causes the half-space equivalent dislocation to appear shallower (deeper) than the actual dislocation in the layered half-space.
Fernández Pantoja, M.; Yarovoy, A. G.; Rubio Bretones, A.; González García, S.
2009-12-01
This paper presents a procedure to extend the method of moments in the time domain for the transient analysis of thin-wire antennas to include those cases where the antennas are located over a lossy half-space. This extended technique is based on the reflection coefficient (RC) approach, which approximates the fields incident on the ground interface as plane waves and calculates the time-domain RC using the inverse Fourier transform of the Fresnel equations. The implementation presented in this paper uses general expressions for the RC which extend its range of applicability to lossy grounds, and is proven to be accurate and fast for antennas located not too near the ground. The resulting general-purpose procedure, able to treat arbitrarily oriented thin-wire antennas, is appropriate for all kinds of half-spaces, including lossy cases, and it has turned out to be as computationally fast when solving the problem of an arbitrary ground as when dealing with a perfect electric conductor ground plane. Results show a numerical validation of the method for different half-spaces, paying special attention to the influence of the antenna-to-ground distance on the accuracy of the results.
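The RC construction can be sketched numerically. The example below is an illustrative simplification (not the paper's implementation; the soil parameters and pulse are assumptions): it evaluates the TE Fresnel coefficient of a lossy ground over a frequency grid and applies it to a pulse spectrum via the inverse FFT to obtain the time-domain reflected waveform.

```python
import numpy as np

eps0 = 8.854e-12   # vacuum permittivity (F/m)

def fresnel_te(freqs, eps_r, sigma, theta):
    """TE Fresnel reflection coefficient of a lossy half-space,
    using the complex permittivity eps_r - j*sigma/(w*eps0)."""
    w = 2 * np.pi * freqs
    eps_c = eps_r - 1j * sigma / np.maximum(w, 1e-30) / eps0
    root = np.sqrt(eps_c - np.sin(theta) ** 2)
    r = (np.cos(theta) - root) / (np.cos(theta) + root)
    if freqs[0] == 0.0:
        r[0] = -1.0 if sigma > 0 else r[1]   # DC limit of a conductive ground
    return r

# Reflect a Gaussian pulse off moist soil (illustrative parameters)
dt, n = 1e-10, 4096
t = np.arange(n) * dt
pulse = np.exp(-((t - 20e-9) / 2e-9) ** 2)
freqs = np.fft.rfftfreq(n, dt)
r = fresnel_te(freqs, eps_r=9.0, sigma=0.01, theta=np.radians(30))
reflected = np.fft.irfft(np.fft.rfft(pulse) * r, n)
print(reflected.min())   # the reflected pulse is inverted (r < 0 here)
```

Multiplying in the frequency domain and inverse-transforming is equivalent to convolving the incident waveform with the time-domain RC, which is the operation the hybrid MoM scheme needs at each interaction with the ground.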
Directory of Open Access Journals (Sweden)
Rajneesh Kumar
The problem of reflection and refraction phenomena due to plane waves incident obliquely at a plane interface between a uniform elastic solid half-space and a microstretch thermoelastic diffusion solid half-space has been studied. It is found that the amplitude ratios of the various reflected and refracted waves are functions of the angle of incidence and the frequency of the incident wave, and are influenced by the microstretch thermoelastic diffusion properties of the media. The expressions of amplitude ratios and energy ratios are obtained in closed form. The energy ratios have been computed numerically for a particular model. The variations of energy ratios with angle of incidence are shown for thermoelastic diffusion media in the context of the Lord-Shulman (L-S) (1967) and Green-Lindsay (G-L) (1972) theories. The conservation of energy at the interface is verified. Some particular cases are also deduced from the present investigation.
Gupta, Shishir; Pramanik, Abhijit; Smita; Pramanik, Snehamoy
2018-06-01
The phenomenon of plane waves at the plane interface between a triclinic half-space and a self-reinforced half-space is discussed, with possible applications during wave propagation. Analytical expressions for the phase velocities of reflected and refracted quasi-compressional and quasi-shear waves under initial stress are discussed carefully. Closed-form expressions for the amplitude ratios of the reflection and refraction coefficients of the three quasi-plane waves are developed mathematically by applying appropriate boundary conditions. Graphics are sketched to exhibit the consequences of initial stress on the three-dimensional plane-wave reflection and refraction coefficients. Some special cases that coincide with the fundamental properties of several layers are designed to express the reflection and refraction coefficients.
Modelling the acoustical response of lossy lamella-crystals
DEFF Research Database (Denmark)
Christensen, Johan; Mortensen, N. Asger; Willatzen, Morten
2014-01-01
The sound propagation properties of lossy lamella-crystals are analysed theoretically utilizing a rigorous wave expansion formalism and an effective medium approach. We investigate both supported and free-standing crystal slab structures and predict high absorption for a broad range of frequencies. A detailed derivation of the formalism is presented, and we show how the results obtained in the subwavelength and superwavelength regimes qualitatively can be reproduced by homogenizing the lamella-crystals. We come to the conclusion that treating this structure within the metamaterial limit only makes sense if the crystal filling fraction is sufficiently large to satisfy an effective medium approach.
Electrical properties of spherical dipole antennas with lossy material cores
DEFF Research Database (Denmark)
Hansen, Troels Vejle; Kim, Oleksiy S.; Breinbjerg, Olav
2012-01-01
A spherical magnetic dipole antenna with a linear, isotropic, homogeneous, passive, and lossy material core is modeled analytically, and closed-form expressions are given for the internally stored magnetic and electric energies, the radiation efficiency, and the radiation quality factor. This model and all the provided expressions are exact and valid for arbitrary core sizes, permeability, permittivity, and electric and magnetic loss tangents. Arbitrary dispersion models for both permeability and permittivity can be applied. In addition, we present an investigation for an antenna of fixed electrical...
Rayleigh Waves in a Rotating Orthotropic Micropolar Elastic Solid Half-Space
Directory of Open Access Journals (Sweden)
Baljeet Singh
2013-01-01
A problem on Rayleigh waves in a rotating half-space of an orthotropic micropolar material is considered. The governing equations are solved for surface wave solutions in the half-space of the material. These solutions satisfy the boundary conditions at the free surface of the half-space to obtain the frequency equation of the Rayleigh wave. For numerical purposes, the frequency equation is approximated. The nondimensional speed of the Rayleigh wave is computed and shown graphically versus nondimensional frequency and rotation-frequency ratio for both orthotropic micropolar elastic and isotropic micropolar elastic cases. The numerical results show the effects of rotation, orthotropy, and nondimensional frequency on the nondimensional speed of the Rayleigh wave.
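For orientation, the classical isotropic, non-rotating limit of such a frequency equation can be solved numerically in a few lines. This is a hedged baseline (the standard Rayleigh secular equation, not the orthotropic micropolar equation of the paper): it finds the speed ratio c/beta from Poisson's ratio alone.

```python
import numpy as np
from scipy.optimize import brentq

def rayleigh_speed_ratio(nu):
    """Solve the classical Rayleigh equation for c/beta (isotropic case).

    (2 - x^2)^2 = 4*sqrt(1 - g*x^2)*sqrt(1 - x^2), x = c/beta,
    where g = (beta/alpha)^2 follows from Poisson's ratio nu."""
    g = (1 - 2 * nu) / (2 * (1 - nu))
    f = lambda x: (2 - x ** 2) ** 2 - 4 * np.sqrt(1 - g * x ** 2) * np.sqrt(1 - x ** 2)
    return brentq(f, 0.1, 0.999999)   # unique root in the subsonic range

print(rayleigh_speed_ratio(0.25))     # ≈ 0.9194, the classical value
```

Rotation, orthotropy, and micropolarity each perturb this baseline root, which is why the paper resorts to an approximated frequency equation rather than a closed form.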
Normal Mode Analysis to a Poroelastic Half-Space Problem under Generalized Thermoelasticity
Directory of Open Access Journals (Sweden)
Chunbao Xiong
The thermo-hydro-mechanical problems associated with a poroelastic half-space soil medium with variable properties under generalized thermoelasticity theory were investigated in this study. Remaining faithful to Biot's theory of dynamic poroelasticity, we idealized the foundation material as a uniform, fully saturated, poroelastic half-space medium. We first subjected this medium to time-harmonic loads consisting of normal or thermal loads, then investigated the differences between the coupled thermo-hydro-mechanical dynamic models and the thermo-elastic dynamic models. We used normal mode analysis to solve the resulting non-dimensional coupled equations, then investigated the effects that non-dimensional vertical displacement, excess pore water pressure, vertical stress, and temperature distribution exerted on the poroelastic half-space medium and represented them graphically.
Foundation plate on the elastic half-space, deterministic and probabilistic approach
Directory of Open Access Journals (Sweden)
Tvrdá Katarína
2017-01-01
Interaction between the foundation plate and the subgrade can be described by different mathematical-physical models. An elastic foundation can be modelled by different types of models, e.g. a one-parameter model, a two-parameter model, or a comprehensive model; here the Boussinesq model (elastic half-space) has been used. The article deals with deterministic and probabilistic analysis of the deflection of a foundation plate on the elastic half-space. Contact between the foundation plate and the subsoil was modelled using node-node contact elements. At the end the obtained results are presented.
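The Boussinesq half-space model rests on the point-load surface solution, which can be superposed over a loaded area. A minimal sketch (material and load values are illustrative) recovers the known centre settlement 2qa(1-nu^2)/E of a uniformly loaded flexible circular area by integrating rings of point loads.

```python
import numpy as np
from scipy.integrate import quad

def boussinesq_point(P, r, E, nu):
    """Surface settlement at distance r from a vertical point load P
    on a homogeneous elastic half-space (Boussinesq solution)."""
    return P * (1 - nu ** 2) / (np.pi * E * r)

# Superpose rings of load q*2*pi*r*dr to get the centre settlement of a
# uniformly loaded flexible circular area (radius a, pressure q)
E, nu, q, a = 30e6, 0.3, 100e3, 2.0   # Pa, -, Pa, m (illustrative values)
w_center, _ = quad(lambda r: boussinesq_point(q * 2 * np.pi * r, r, E, nu), 0, a)
print(w_center)   # matches the closed form 2*q*a*(1 - nu**2)/E
```

A numerical plate-subsoil contact model effectively performs this same superposition through the half-space flexibility matrix, with the plate stiffness deciding how the contact pressure q redistributes.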
Charge accumulation in lossy dielectrics: a review
DEFF Research Database (Denmark)
Rasmussen, Jørgen Knøster; McAllister, Iain Wilson; Crichton, George C
1999-01-01
At present, the phenomenon of charge accumulation in solid dielectrics is under intense experimental study. Using a field theoretical approach, we review the basis for charge accumulation in lossy dielectrics. Thereafter, this macroscopic approach is applied to planar geometries such that the mat...
Deformation of two welded elastic half-spaces due to a long inclined ...
Indian Academy of Sciences (India)
Airy stress functions for a tensile line source in two welded half-spaces are first obtained. These expressions are used for computing the displacement and stress fields around a long inclined tensile fault near an internal boundary.
Vibration of an Elastic Circular Plate on an Elastic Half Space
DEFF Research Database (Denmark)
Krenk, Steen; Schmidt, H.
1981-01-01
The axisymmetric problem of a vibrating elastic plate on an elastic half space is solved by a direct method, in which the contact stresses and the normal displacements of the plate are taken as the unknown functions. First, the influence functions that give the displacements in terms...
Propagation of waves at the loosely bonded interface of two porous elastic half-spaces
International Nuclear Information System (INIS)
Tajuddin, M.
1993-10-01
Employing Biot's theory for wave propagation in porous solids, the propagation of waves at the loosely bonded interface between two poroelastic half-spaces is examined theoretically. The analogous studies of Stoneley waves for a smooth interface and for a bonded interface form limiting cases. The results due to classical theory are shown as a special case. (author). 13 refs
Residual stresses relaxation in surface-hardened half-space under creep conditions
Directory of Open Access Journals (Sweden)
Vladimir P. Radchenko
2015-09-01
Full Text Available We developed a method for solving the problem of residual stress relaxation in the surface-hardened layer of a half-space under creep conditions. At the first stage we reconstructed the stress-strain state in the half-space after the plastic surface hardening procedure, based on partial information about the distribution of one experimentally measured component of the residual stress tensor. At the second stage we solved numerically the problem of relaxation of the self-balanced residual stresses under creep conditions. To solve this problem we introduce the following Cartesian system: the x0y plane is aligned with the hardened surface of the half-space and the 0z axis is directed into the depth of the hardened layer. We also introduce the hypotheses of plane sections parallel to the x0z and y0z planes. A detailed analysis of the problem has been done. The calculated data were compared with the corresponding test data for plane specimens (rectangular parallelepipeds) made of EP742 alloy at T=650°C after ultrasonic hardening with four hardening modes. We model these specimens by a half-space because the penetration depth of the residual stresses is two orders of magnitude smaller than the overall specimen size. The experimental and calculated data are in good agreement. It is shown that the magnitude of the compressive residual stresses decreases by a factor of 1.4–1.6 under creep.
Deformation of a layered half-space due to a very long tensile fault
Indian Academy of Sciences (India)
The problem of the coseismic deformation of an earth model consisting of an elastic layer of uniform thickness overlying an elastic half-space due to a very long tensile fault in the layer is solved analytically. Integral expressions for the surface displacements are obtained for a vertical tensile fault and a horizontal tensile fault.
Plane strain deformation of a multi-layered poroelastic half-space by ...
Indian Academy of Sciences (India)
The Biot linearized quasi-static theory of ﬂuid-inﬁltrated porous materials is used to formulate the problem of the two-dimensional plane strain deformation of a multi-layered poroelastic half-space by surface loads. The Fourier–Laplace transforms of the stresses, displacements, pore pressure and ﬂuid ﬂux in each ...
Impact of a half-space interface on the wireless link between tiny sensor nodes
Penkin, D.; Janssen, G.; Yarovoy, A.
2014-01-01
The power budget of a wireless link between two electrically small sensor nodes located close to an interface between two media is studied. The model includes both the propagation channel losses and input impedance of the radio frequency antennas. It is shown that a highly inductive half-space
Asymmetric Vibrations of a Circular Elastic Plate on an Elastic Half Space
DEFF Research Database (Denmark)
Schmidt, H.; Krenk, Steen
1982-01-01
The asymmetric problem of a vibrating circular elastic plate in frictionless contact with an elastic half space is solved by an integral equation method, where the contact stress appears as the unknown function. By a trigonometric expansion, the problem is reduced to a number of uncoupled two...
International Nuclear Information System (INIS)
Davis, J. E.; Eddy, M. J.; Sutton, T. M.; Altomari, T. J.
2007-01-01
Solid modeling computer software systems provide for the design of three-dimensional solid models used in the design and analysis of physical components. The current state-of-the-art in solid modeling representation uses a boundary representation format in which geometry and topology are used to form three-dimensional boundaries of the solid. The geometry representation used in these systems is cubic B-spline curves and surfaces - a network of cubic B-spline functions in three-dimensional Cartesian coordinate space. Many Monte Carlo codes, however, use a geometry representation in which geometry units are specified by intersections and unions of half-spaces. This paper describes an algorithm for converting from a boundary representation to a half-space representation. (authors)
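As a minimal illustration of the half-space representation described above (an illustrative sketch, not the authors' conversion algorithm), a convex solid can be encoded as the intersection of half-spaces {x : a·x ≤ b}, one row of a matrix A and one entry of b per half-space; point membership is then a single inequality test:

```python
import numpy as np

# A half-space is {x : a . x <= b}; a convex solid is the
# intersection of several half-spaces (one row of A, one entry of b each).
def inside(A, b, x, tol=1e-12):
    """True if point x lies in the intersection of all half-spaces."""
    return bool(np.all(A @ x <= b + tol))

# Unit cube [0, 1]^3 expressed as six half-spaces.
A = np.array([[ 1, 0, 0], [-1, 0, 0],
              [ 0, 1, 0], [ 0,-1, 0],
              [ 0, 0, 1], [ 0, 0,-1]], dtype=float)
b = np.array([1, 0, 1, 0, 1, 0], dtype=float)

print(inside(A, b, np.array([0.5, 0.5, 0.5])))  # → True
print(inside(A, b, np.array([1.5, 0.5, 0.5])))  # → False
```

Non-convex geometry units are then built as unions of such convex cells, which is the form Monte Carlo tracking codes consume.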
Experimental measurements of the eddy current signal due to a flawed, conducting half space
International Nuclear Information System (INIS)
Long, S.A.; Toomsawasdi, S.; Zaman, A.J.M.
1984-01-01
This chapter reports on an experimental investigation in which the change in impedance of a practical multi-turn eddy current coil near a conducting half space is measured as a function of the conductivity and the lift-off distance. The results are compared in a qualitative fashion with the analytical results for a single-turn coil. Measurements are also made of the change in impedance due to a small void in the conducting half space as a function of both its depth and radial position. The results indicate that, at least in a qualitative fashion, the precisely derived analytical solutions adequately predict the general behavior of the change in complex impedance of an eddy current coil above a conducting ground plane as a function of lift-off distance. It is determined that the effect of a sub-surface void on the change in inductance of the test coil correlates well with theoretical calculations
Delaney, P.
1984-01-01
Analytical solutions are developed for the pressurization, expansion, and flow of one- and two-phase liquids during heating of fully saturated and hydraulically open Darcian half-spaces subjected to a step rise in temperature at the surface. For silicate materials, advective transfer is commonly unimportant in the liquid region; this is not always the case in the vapor region. Volume change is commonly more important than heat of vaporization in determining the position of the liquid-vapor interface, so that the temperatures cannot be determined independently of the pressures. Pressure increases reach a maximum near the leading edge of the thermal front and penetrate well into the isothermal region of the body. Mass flux is insensitive to the hydraulic properties of the half-space. © 1984.
Shape differentiability of the Neumann problem of the Laplace equation in the half-space
Czech Academy of Sciences Publication Activity Database
Amrouche, Ch.; Nečasová, Šárka; Sokolowski, J.
2008-01-01
Roč. 37, č. 4 (2008), s. 748-769 ISSN 0324-8569 R&D Projects: GA ČR GA201/05/0005; GA ČR GA201/08/0012 Institutional research plan: CEZ:AV0Z10190503 Keywords : shape optimization * Neumann problem * half space * material derivative Subject RIV: BA - General Mathematics Impact factor: 0.689, year: 2008
Rocking Rotation of a Rigid Disk Embedded in a Transversely Isotropic Half-Space
Directory of Open Access Journals (Sweden)
Seyed Ahmadi
2014-06-01
Full Text Available The asymmetric problem of rocking rotation of a circular rigid disk embedded in a finite depth of a transversely isotropic half-space is analytically addressed. The rigid disk is assumed to be in frictionless contact with the elastic half-space. By virtue of appropriate Green's functions, the mixed boundary value problem is written as a dual integral equation. Employing further mathematical techniques, the integral equation is reduced to a well-known Fredholm integral equation of the second kind. The results related to the contact stress distribution across the disk region and the equivalent rocking stiffness of the system are expressed in terms of the solution of the obtained Fredholm integral equation. When the rigid disk is located on the surface or at the remote boundary, the exact closed-form solutions are presented. For verification purposes, the limiting case of an isotropic half-space is considered and the results are verified with those available in the literature. The jump behavior in the results at the edge of the rigid disk for the case of an infinitesimal embedment is highlighted analytically for the first time. Selected numerical results are depicted for the contact stress distribution across the disk region, rocking stiffness of the system, normal stress, and displacement components along the radial axis. Moreover, effects of anisotropy on the rocking stiffness factor are discussed in detail.
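The reduction to a Fredholm integral equation of the second kind is, in practice, solved numerically; a generic Nyström-quadrature sketch (with an assumed constant toy kernel, not the paper's actual Green's-function kernel) is:

```python
import numpy as np

def solve_fredholm2(kernel, f, a, b, n=64):
    """Solve phi(x) - int_a^b K(x, t) phi(t) dt = f(x) by Nystrom
    quadrature using the composite trapezoidal rule."""
    x = np.linspace(a, b, n)
    w = np.full(n, (b - a) / (n - 1))   # trapezoidal weights
    w[0] *= 0.5
    w[-1] *= 0.5
    K = kernel(x[:, None], x[None, :])  # n x n kernel matrix
    phi = np.linalg.solve(np.eye(n) - K * w, f(x))
    return x, phi

# Toy check: K = 1/2 and f = 1 give the constant solution phi = 2.
x, phi = solve_fredholm2(lambda x, t: 0.5 * np.ones_like(x * t),
                         lambda x: np.ones_like(x), 0.0, 1.0)
print(np.allclose(phi, 2.0))  # → True
```

The discretized operator (I − KW) is dense but small, so a direct solve is the usual choice at the resolutions such contact problems require.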
Diffraction of SH-waves by topographic features in a layered transversely isotropic half-space
Ba, Zhenning; Liang, Jianwen; Zhang, Yanju
2017-01-01
The scattering of plane SH-waves by topographic features in a layered transversely isotropic (TI) half-space is investigated by using an indirect boundary element method (IBEM). Firstly, the anti-plane dynamic stiffness matrix of the layered TI half-space is established and the free fields are solved by using the direct stiffness method. Then, Green's functions are derived for uniformly distributed loads acting on an inclined line in a layered TI half-space and the scattered fields are constructed with the deduced Green's functions. Finally, the free fields are added to the scattered ones to obtain the global dynamic responses. The method is verified by comparing results with the published isotropic ones. Both the steady-state and transient dynamic responses are evaluated and discussed. Numerical results in the frequency domain show that surface motions for the TI media can be significantly different from those for the isotropic case, which are strongly dependent on the anisotropy property, incident angle and incident frequency. Results in the time domain show that the material anisotropy has important effects on the maximum duration and maximum amplitudes of the time histories.
Directory of Open Access Journals (Sweden)
Li Ping
2004-01-01
Full Text Available Detailed studies of anomalous conductors in otherwise homogeneous media have been modelled. Vertical contacts form common geometries in galvanic studies when describing geological formations with different electrical conductivities on either side. However, previous studies of vertical discontinuities have been mainly concerned with isotropic environments. In this paper, we deal with the effect on the electric potentials, such as mise-à-la-masse anomalies, due to a conductor near a vertical contact between two anisotropic regions. We also demonstrate the interactive effects when the conductive body is placed across the vertical contact. This problem is normally very difficult to solve by the traditional numerical methods. The integral equations for the electric potential in anisotropic half-spaces are established. Green's function is obtained using the reflection and transmission image method in which five images are needed to fit the boundary conditions on the vertical interface and the air-earth surface. The effects of the anisotropy of the environments and the conductive body on the electric potential are illustrated with the aid of several numerical examples.
Privacy of a lossy bosonic memory channel
Energy Technology Data Exchange (ETDEWEB)
Ruggeri, Giovanna [Dipartimento di Fisica, Universita di Lecce, I-73100 Lecce (Italy)]. E-mail: ruggeri@le.infn.it; Mancini, Stefano [Dipartimento di Fisica, Universita di Camerino, I-62032 Camerino (Italy)]. E-mail: stefano.mancini@unicam.it
2007-03-12
We study the security of information transmission between two honest parties realized through a lossy bosonic memory channel when losses are captured by a dishonest party. We then show that entangled inputs can enhance the private information of such a channel, which, however, never exceeds that of unentangled inputs in the absence of memory.
Optimal design of lossy bandgap structures
DEFF Research Database (Denmark)
Jensen, Jakob Søndergaard
2004-01-01
The method of topology optimization is used to design structures for wave propagation with one lossy material component. Optimized designs for scalar elastic waves are presented for minimum wave transmission as well as for maximum wave energy dissipation. The structures that are obtained are of the 1D or 2D bandgap type depending on the objective and the material parameters.
Quantum optics of lossy asymmetric beam splitters
Uppu, Ravitej; Wolterink, Tom; Tentrup, Tristan Bernhard Horst; Pinkse, Pepijn Willemszoon Harry
2016-01-01
We theoretically investigate quantum interference of two single photons at a lossy asymmetric beam splitter, the most general passive 2×2 optical circuit. The losses in the circuit result in a non-unitary scattering matrix with a non-trivial set of constraints on the elements of the scattering matrix.
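Physically, the constraint on a lossy scattering matrix is passivity, S†S ≤ I, i.e. no singular value may exceed 1; a quick numerical check of this condition (with assumed example matrices, not data from the paper) is:

```python
import numpy as np

def is_passive(S, tol=1e-12):
    """A (possibly lossy) scattering matrix is physical only if no
    input can gain energy: all singular values must be <= 1."""
    return bool(np.linalg.svd(S, compute_uv=False).max() <= 1.0 + tol)

# Lossless 50:50 beam splitter: unitary, hence passive.
bs = np.array([[1, 1j], [1j, 1]]) / np.sqrt(2)
# Lossy asymmetric beam splitter: sub-unitary, still passive.
lossy = np.array([[0.6, 0.3j], [0.2j, 0.5]])
# An amplifying matrix violates the passivity constraint.
gain = np.array([[1.2, 0.0], [0.0, 0.4]])

print(is_passive(bs), is_passive(lossy), is_passive(gain))  # → True True False
```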
Privacy of a lossy bosonic memory channel
International Nuclear Information System (INIS)
Ruggeri, Giovanna; Mancini, Stefano
2007-01-01
We study the security of information transmission between two honest parties realized through a lossy bosonic memory channel when losses are captured by a dishonest party. We then show that entangled inputs can enhance the private information of such a channel, which, however, never exceeds that of unentangled inputs in the absence of memory
Directory of Open Access Journals (Sweden)
Narottam Maity
2016-01-01
Full Text Available Reflection of longitudinal displacement waves in a generalized thermoelastic half-space under the action of a uniform magnetic field has been investigated. The magnetic field is applied in such a direction that the problem can be considered as two-dimensional. The discussion is based on three theories of generalized thermoelasticity: Lord-Shulman (L-S), Green-Lindsay (G-L), and Green-Naghdi (G-N) with energy dissipation. We compute the possible wave velocities for different models. Amplitude ratios have been presented. The effects of the magnetic field on various subjects of interest are discussed and shown graphically.
Cao, Le; Wei, Bing
2014-08-25
A finite-difference time-domain (FDTD) algorithm with a new method of plane wave excitation is used to investigate the RCS (radar cross section) characteristics of targets over a layered half space. Compared with the traditional plane wave excitation method, the memory and computation time requirements are greatly decreased. The FDTD calculation is performed with a plane wave incidence, and the far-field RCS is obtained by extrapolating the currently calculated data on the output boundary. However, the methods available for extrapolation have to evaluate the half-space Green function. In this paper, a new method which avoids using the complex and time-consuming half-space Green function is proposed. Numerical results show that this method is in good agreement with the classic algorithm and can be used for fast calculation of scattering and radiation of targets over a layered half space.
Reflection and transmission of full-vector X-waves normally incident on dielectric half spaces
Salem, Mohamed
2011-08-01
The reflection and transmission of full-vector X-waves incident normally on a planar interface between two lossless dielectric half-spaces are investigated. Full-vector X-waves are obtained by superimposing transverse electric and magnetic polarization components, which are derived from the scalar X-wave solution. The analysis of transmission and reflection is carried out via a straightforward yet effective method: first, the X-wave is decomposed into vector Bessel beams via the Bessel-Fourier transform. Then, the reflection and transmission coefficients of the beams are obtained in the spectral domain. Finally, the transmitted and reflected X-waves are obtained via the inverse Bessel-Fourier transform carried out on the X-wave spectrum weighted with the corresponding coefficient. © 2011 IEEE.
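At normal incidence the spectral-domain coefficients reduce, for each polarization alike, to the scalar Fresnel form; the sketch below (with assumed refractive indices) computes them and verifies power conservation R + T = 1:

```python
def fresnel_normal(n1, n2):
    """Amplitude reflection/transmission coefficients at normal
    incidence from medium n1 into medium n2 (lossless dielectrics)."""
    r = (n1 - n2) / (n1 + n2)
    t = 2 * n1 / (n1 + n2)
    return r, t

n1, n2 = 1.0, 1.5          # e.g. air to glass (assumed values)
r, t = fresnel_normal(n1, n2)
R = r**2                   # reflected power fraction
T = (n2 / n1) * t**2       # transmitted power fraction
print(round(R, 4), round(T, 4))  # → 0.04 0.96
```

The factor n2/n1 in T accounts for the different wave impedances of the two half-spaces, which is what makes R + T sum to unity.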
THERMAL CONSOLIDATION OF LAYERED POROUS HALF-SPACE TO VARIABLE THERMAL LOADING
Institute of Scientific and Technical Information of China (English)
BAI Bing
2006-01-01
An analytical method was derived for the thermal consolidation of a layered, saturated porous half-space subject to thermal loading variable with time. In the coupled governing equations of linear thermoelastic media, the influences of the thermo-osmosis effect and the thermal filtration effect were introduced. Solutions in the Laplace transform space were first obtained and then numerically inverted. The responses of a double-layered porous space subjected to exponentially decaying thermal loading were studied. The influences of the differences between the properties of the two layers (e.g., the coefficient of thermal consolidation, elastic modulus) on thermal consolidation were discussed. The studies show that the coupling effects of the displacement and stress fields on the temperature field can be completely neglected; however, the thermo-osmosis effect has an obvious influence on the thermal responses.
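The numerical inversion step mentioned above is commonly done with the Gaver-Stehfest algorithm; below is a self-contained generic sketch (not necessarily the inversion scheme used by the author), validated on the known pair L{e^(-t)} = 1/(s+1):

```python
from math import exp, factorial, log

def stehfest_invert(F, t, N=12):
    """Gaver-Stehfest numerical inversion of a Laplace transform F(s),
    evaluated at time t > 0; N must be even."""
    ln2 = log(2.0)
    total = 0.0
    for k in range(1, N + 1):
        # Stehfest weight V_k
        v = 0.0
        for j in range((k + 1) // 2, min(k, N // 2) + 1):
            v += (j ** (N // 2) * factorial(2 * j)
                  / (factorial(N // 2 - j) * factorial(j)
                     * factorial(j - 1) * factorial(k - j)
                     * factorial(2 * j - k)))
        v *= (-1.0) ** (k + N // 2)
        total += v * F(k * ln2 / t)
    return total * ln2 / t

# Known transform pair: F(s) = 1/(s+1)  <->  f(t) = e^{-t}.
approx = stehfest_invert(lambda s: 1.0 / (s + 1.0), t=1.0)
print(abs(approx - exp(-1.0)) < 1e-4)  # → True
```

Stehfest inversion only needs F(s) at real s, which suits transforms obtained semi-analytically layer by layer, but it is ill-suited to oscillatory time behaviour; other schemes (e.g. Talbot contours) are used in that case.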
International Nuclear Information System (INIS)
Sernelius, Bo E
2008-01-01
The energy spectrum of electromagnetic normal modes plays a central role in the theory of the van der Waals and Casimir interaction. Here we study the modes in connection with the van der Waals interaction between two metal half-spaces. Neglecting dissipation leads to distinct normal modes with real-valued frequencies. Including dissipation appears to move these distinct modes away from the real axis into the complex frequency plane. The summation of the zero-point energies of these modes renders a complex-valued result. Contour integration, based on the generalized argument principle, gives a real-valued and different result. We resolve this contradiction and show that the spectrum of true normal modes forms a continuum with real frequencies
Regularities of magnetic field penetration into half-space in type-II superconductors
International Nuclear Information System (INIS)
Medvedev, Yu.V.; Krasnyuk, I.B.
2003-01-01
Equations modelling the distributions of magnetic field induction and current density in a half-space, taking into account an exponential current-voltage characteristic, are obtained. The velocity of propagation of the magnetization front is determined for a prescribed average rate of change in time of the external magnetic field at the sample boundary. An integral condition on the electric resistance, which depends nonlinearly on the magnetic field, is given under which the magnetic flux penetrates into the sample with finite velocity. An analytical representation of the equation with an exponential boundary mode, which models the change in the magnetic field at the boundary of the region, is presented.
Monoenergetic neutral particle transport in an anisotropically scattering half-space
International Nuclear Information System (INIS)
Ganapol, B.D.; Garth, J.C.; Woolf, S.
1995-01-01
During the past several years, a considerable effort has been underway to develop reliable analytical benchmark solutions to the one-speed transport equation in various geometries. The reader may well ask "why" such a task has been undertaken, given the recent rapid advances in numerical transport theory. The simple answer is that reliable numerical solutions do not yet exist, and their development still represents a mathematical challenge. However, regardless of how mathematically challenging the development is, there are more compelling reasons for this effort which are rooted in the very fundamentals of science and technology. In particular, these solutions, which are highly accurate numerical evaluations of analytical solution representations, serve as "industry standards" to which other more approximate methods or approximations can be compared. Thus analytical benchmarks are part of the process control and continuous improvement of numerical transport methods and are therefore integral components in TQM (Total Quality Management) as applied to transport methods development. With the above reasoning in mind, the authors begin the development and application of a new analytical solution and evaluation for a half-space featuring anisotropic scattering. This work is an extension of previous efforts in which isotropically scattering half-spaces were treated. The previous benchmarks were obtained most conveniently via a numerical Laplace transform inversion which could be applied in a straightforward manner to the case of isotropic scattering. The application of the Laplace transform method is problematical for anisotropic scattering and does not admit to the direct identification of the scalar flux from integral transport theory
Rembrandt Kadrioru lossis / Jüri Hain
Hain, Jüri, 1941-
2000-01-01
The history of the ceiling painting by an unknown master in the domed hall of Kadriorg Palace. The ceiling painting is based on Rembrandt's painting "Diana Bathing", and was executed after a copper engraving by Magdalena de Passe (1600-1638) made from Rembrandt's painting. Rembrandt in turn drew on two engravings by Antonio Tempesta (1555-1630), who based his work on a painting by Otto van Veen (1556-1629).
Dyakonov surface waves in lossy metamaterials
Sorní Laserna, Josep; Naserpour, Mahin; Zapata Rodríguez, Carlos Javier; Miret Marí, Juan José
2015-01-01
We analyze the existence of localized waves in the vicinities of the interface between two dielectrics, provided one of them is uniaxial and lossy. We found two families of surface waves, one of them approaching the well-known Dyakonov surface waves (DSWs). In addition, a new family of wave fields exists which are tightly bound to the interface. Although its appearance is clearly associated with the dissipative character of the anisotropic material, the characteristic propagation length of su...
Modeling of Lossy Inductance in Moving-Coil Loudspeakers
DEFF Research Database (Denmark)
Kong, Xiao-Peng; Agerkvist, Finn T.; Zeng, Xin-Wu
2015-01-01
The electrical impedance of moving-coil loudspeakers is dominated by the lossy inductance in the high-frequency range. Using the equivalent electrical circuit method, a new model for the lossy inductance based on separate functions for the magnitude and phase of the impedance is presented.
Directory of Open Access Journals (Sweden)
M. Dehestani
Full Text Available Recent research has demonstrated that the interaction between the loads and the carrying structure's boundary, which is related to the inertia of the load, is an influential factor in the dynamic response of the structure. Although the inertia of moving loads has been considered in many works, very few papers can be found on the inertial effects of stationary loads on structures. In this paper, an elastodynamic formulation was employed to investigate the dynamic response of a homogeneous isotropic elastic half-space under an inertial strip foundation subjected to a time-harmonic force. Fourier integral transformation was used to solve the system of Poisson-type partial differential equations considering the boundary conditions and the inertial effects. The steepest descent method was employed to obtain the approximate far-field displacements and stresses. A numerical example is presented to illustrate the methodology and typical results.
Ballard, Patrick
2016-12-01
The steady sliding frictional contact problem between a moving rigid indentor of arbitrary shape and an isotropic homogeneous elastic half-space in plane strain is extensively analysed. The case where the friction coefficient is a step function (with respect to the space variable), that is, where there are jumps in the friction coefficient, is considered. The problem is put under the form of a variational inequality which is proved to always have a solution which, in addition, is unique in some cases. The solutions exhibit different kinds of universal singularities that are explicitly given. In particular, it is shown that the nature of the universal stress singularity at a jump of the friction coefficient is different depending on the sign of the jump.
Jiao, Fengyu; Wei, Peijun; Li, Yueqiu
2018-01-01
Reflection and transmission of plane waves through a flexoelectric piezoelectric slab sandwiched by two piezoelectric half-spaces are studied in this paper. The secular equations in the flexoelectric piezoelectric material are first derived from the general governing equation. Different from the classical piezoelectric medium, there are five kinds of coupled elastic waves in the piezoelectric material with the microstructure effects taken into consideration. The state vectors are obtained by the summation of contributions from all possible partial waves. The state transfer equation of the flexoelectric piezoelectric slab is derived from the motion equation by the reduction of order, and the transfer matrix of the flexoelectric piezoelectric slab is obtained by solving the state transfer equation. By using the continuous conditions at the interface and the approach of partition matrix, we get the resultant algebraic equations in terms of the transfer matrix from which the reflection and transmission coefficients can be calculated. The amplitude ratios and further the energy flux ratios of various waves are evaluated numerically. The numerical results are shown graphically and are validated by the energy conservation law. Based on these numerical results, the influences of two characteristic lengths of microstructure and the flexoelectric coefficients on the wave propagation are discussed. Copyright © 2017 Elsevier B.V. All rights reserved.
International Nuclear Information System (INIS)
Høye, J S; I Brevik; Milton, K A
2015-01-01
Casimir friction between a polarizable particle and a semi-infinite space is a delicate physical phenomenon, as it concerns the interaction between a microscopic quantum particle and a semi-infinite reservoir. Not unexpectedly, results obtained in the past about the friction force obtained via different routes are sometimes, at least apparently, wildly different from each other. Recently, we considered the Casimir friction force for two dielectric semi-infinite plates moving parallel to each other Høye and Brevik (2014 Eur. Phys. J. D 68 61), and managed to get essential agreement with results obtained by Pendry (1997 J. Phys.: Condens. Matter 9 10301), Volokitin and Persson (2007 Rev. Mod. Phys. 79 1291), and Barton (2011 New J. Phys. 13 043023; 2011 J. Phys.: Condens. Matter 23 335004). Our method was based upon use of the Kubo formalism. In the present paper we focus on the interaction between a polarizable particle and a dielectric half-space again, and calculate the friction force using the same basic method as before. The new ingredient in the present analysis is that we take into account radiative damping, and derive the modifications thereof. Some comparisons are also made with works from others. Essential agreement with the results of Intravaia, Behunin, and Dalvit can also be achieved using the modification of the atomic polarizability by the metallic plate. (paper)
Singular perturbations with boundary conditions and the Casimir effect in the half space
Albeverio, S.; Cognola, G.; Spreafico, M.; Zerbini, S.
2010-06-01
We study the self-adjoint extensions of a class of nonmaximal multiplication operators with boundary conditions. We show that these extensions correspond to singular rank 1 perturbations (in the sense of Albeverio and Kurasov [Singular Perturbations of Differential Operaters (Cambridge University Press, Cambridge, 2000)]) of the Laplace operator, namely, the formal Laplacian with a singular delta potential, on the half space. This construction is the appropriate setting to describe the Casimir effect related to a massless scalar field in the flat space-time with an infinite conducting plate and in the presence of a pointlike "impurity." We use the relative zeta determinant (as defined in the works of Müller ["Relative zeta functions, relative determinants and scattering theory," Commun. Math. Phys. 192, 309 (1998)] and Spreafico and Zerbini ["Finite temperature quantum field theory on noncompact domains and application to delta interactions," Rep. Math. Phys. 63, 163 (2009)]) in order to regularize the partition function of this model. We study the analytic extension of the associated relative zeta function, and we present explicit results for the partition function and for the Casimir force.
Modular Hamiltonians for deformed half-spaces and the averaged null energy condition
Faulkner, Thomas; Leigh, Robert G.; Parrikar, Onkar; Wang, Huajia
2016-09-01
We study modular Hamiltonians corresponding to the vacuum state for deformed half-spaces in relativistic quantum field theories on {{R}}^{1,d-1} . We show that in addition to the usual boost generator, there is a contribution to the modular Hamiltonian at first order in the shape deformation, proportional to the integral of the null components of the stress tensor along the Rindler horizon. We use this fact along with monotonicity of relative entropy to prove the averaged null energy condition in Minkowski space-time. This subsequently gives a new proof of the Hofman-Maldacena bounds on the parameters appearing in CFT three-point functions. Our main technical advance involves adapting newly developed perturbative methods for calculating entanglement entropy to the problem at hand. These methods were recently used to prove certain results on the shape dependence of entanglement in CFTs and here we generalize these results to excited states and real time dynamics. We also discuss the AdS/CFT counterpart of this result, making connection with the recently proposed gravitational dual for modular Hamiltonians in holographic theories.
Wave energy transfer in elastic half-spaces with soft interlayers.
Glushkov, Evgeny; Glushkova, Natalia; Fomenko, Sergey
2015-04-01
The paper deals with guided waves generated by a surface load in a coated elastic half-space. The analysis is based on the explicit integral and asymptotic expressions derived in terms of Green's matrix and given loads for both laminate and functionally graded substrates. To perform the energy analysis, explicit expressions for the time-averaged amount of energy transferred in the time-harmonic wave field by every excited guided or body wave through horizontal planes and lateral cylindrical surfaces have been also derived. The study is focused on the peculiarities of wave energy transmission in substrates with soft interlayers that serve as internal channels for the excited guided waves. The notable features of the source energy partitioning in such media are the domination of a single emerging mode in each consecutive frequency subrange and the appearance of reverse energy fluxes at certain frequencies. These effects as well as modal and spatial distribution of the wave energy coming from the source into the substructure are numerically analyzed and discussed.
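The time-averaged energy flux used in such analyses is built from the complex Poynting vector, ⟨S⟩ = ½ Re(E × H*); a minimal sketch (with an assumed plane-wave field in vacuum) is:

```python
import numpy as np

def time_averaged_poynting(E, H):
    """Time-averaged Poynting vector <S> = 0.5 * Re(E x conj(H))
    for complex field phasors E and H (3-component vectors)."""
    return 0.5 * np.real(np.cross(E, np.conj(H)))

# Plane wave in vacuum: E along x, H along y, so <S> points along +z.
Z0 = 376.73                                  # free-space impedance, ohms
E = np.array([1.0 + 0j, 0.0, 0.0])           # V/m (assumed amplitude)
H = np.array([0.0, 1.0 / Z0, 0.0]) + 0j      # A/m
S = time_averaged_poynting(E, H)
print(S[2] > 0 and abs(S[2] - 0.5 / Z0) < 1e-12)  # → True
```

Integrating this quantity over horizontal planes or lateral cylindrical surfaces gives the per-mode energy fluxes discussed in the abstract.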
Half-space problem of unsteady evaporation and condensation of polyatomic gas
Inaba, Masashi; Yano, Takeru
2016-11-01
On the basis of the polyatomic version of the ellipsoidal-statistical Bhatnagar-Gross-Krook (ES-BGK) model, we consider time-periodic gas flows in a semi-infinite expanse of an initially equilibrium polyatomic gas (methanol) bounded by its planar condensed phase. The kinetic boundary condition at the vapor-liquid interface is assumed to be the complete condensation condition with periodically time-varying macroscopic variables (temperature, saturated vapor density and velocity of the interface), and the boundary condition at infinity is the local equilibrium distribution function. The time scale of variation of the macroscopic variables is assumed to be much larger than the mean free time of gas molecules, and the variations of those from a reference state are assumed to be sufficiently small. We numerically investigate the thus-formulated time-dependent half-space problem for the polyatomic version of the linearized ES-BGK model equation with the finite difference method for the cases of the Strouhal number Sh=0.01 and 0.1. It is shown that the amplitude of the mass flux at the interface attains its maximum, and the phase difference in time between the mass flux and v∞ - vℓ (v∞: vapor velocity at infinity, vℓ: velocity of the vapor-liquid interface) attains its minimum absolute value, when the phase difference in time between the liquid surface temperature (the saturated vapor density) and the velocity of the interface is close to zero.
Directory of Open Access Journals (Sweden)
Ailawalia Praveen
2015-01-01
Full Text Available The purpose of this paper is to study the two-dimensional deformation of a fibre-reinforced micropolar thermoelastic medium in the context of the Green-Lindsay theory of thermoelasticity. A mechanical force is applied along the interface of a fluid half-space and a fibre-reinforced micropolar thermoelastic half-space. The normal mode analysis has been applied to obtain the exact expressions for the displacement component, force stress, temperature distribution and tangential couple stress. The effect of anisotropy and micropolarity on the displacement component, force stress, temperature distribution and tangential couple stress has been depicted graphically.
Quasi-Stationary Temperature Field of Two-Layer Half-Space with Moving Boundary
Directory of Open Access Journals (Sweden)
P. A. Vlasov
2015-01-01
Full Text Available Due to the intensive introduction of mathematical modeling methods into engineering practice, analytical methods for solving problems of heat conduction theory, along with computational methods, become increasingly important. Despite the well-known limitations of the analytical method's applicability, this trend is caused by many reasons. In particular, solutions of the appropriate problems presented in analytically closed form can be used to test new efficient computational algorithms, to carry out a parametric study of the temperature field of the analyzed system and to explore specific features of its formation, and to formulate and solve optimization problems. In addition, these solutions allow us to explore the possibility of simplifying the mathematical model while retaining its adequacy to the studied process. The main goal of the conducted research is to provide an analytically closed-form solution to the problem of finding the quasi-stationary temperature field of a system simulated by an isotropic half-space with an isotropic coating of constant thickness. The outer boundary of this system is exposed to a Gaussian-type heat flux and moves uniformly parallel to itself. A two-dimensional mathematical model that takes into account the axial symmetry of the studied process has been used. After the transition to a moving coordinate system rigidly associated with the moving boundary, the Hankel integral transform of zero order (with respect to the radial variable) and the Laplace transform (with respect to the temporal variable) were used. Next, the image of the Hankel transform for the stationary temperature field of the system with respect to the moving coordinate system was found using a limit theorem of operational calculus. This allowed representing the required quasi-stationary field in the form of an improper integral of the first kind, which depends on the parameters. The result obtained can be used to conduct a parametric study and solve
Lossy compression for Animated Web Visualisation
Prudden, R.; Tomlinson, J.; Robinson, N.; Arribas, A.
2017-12-01
This talk will discuss a technique for lossy data compression specialised for web animation. We set ourselves the challenge of visualising a full forecast weather field as an animated 3D web page visualisation. This data is richly spatiotemporal; however, it is routinely communicated to the public as a 2D map, and scientists are largely limited to visualising data via static 2D maps or 1D scatter plots. We wanted to present Met Office weather forecasts in a way that represents all the generated data. Our approach was to repurpose the technology used to stream high definition videos. This enabled us to achieve high rates of compression while being compatible with both web browsers and GPU processing. Since lossy compression necessarily involves discarding information, evaluating the results is an important and difficult problem. This is essentially a problem of forecast verification. The difficulty lies in deciding what it means for two weather fields to be "similar", as simple definitions such as mean squared error often lead to undesirable results. In the second part of the talk, I will briefly discuss some ideas for alternative measures of similarity.
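The "undesirable results" of mean squared error can be made concrete with a toy one-dimensional field. A sharp rain band displaced by one grid cell is punished twice (once where it was observed, once where it was forecast), while a neighbourhood-averaged comparison, in the spirit of fuzzy verification scores such as the Fractions Skill Score, is more tolerant of small displacements. The field, window size, and function names here are illustrative choices, not from the talk:

```python
import numpy as np

def mse(a, b):
    """Pointwise mean squared error between two fields."""
    return float(np.mean((a - b) ** 2))

def neighbourhood_mean(a, radius=1):
    # Sliding-window mean: a crude "fuzzy" view of the field.
    padded = np.pad(a, radius, mode="edge")
    return np.array([padded[i:i + 2 * radius + 1].mean()
                     for i in range(len(a))])

field = np.zeros(20)
field[10] = 1.0               # observed rain band
shifted = np.roll(field, 1)   # forecast: same band, one cell off

# Double penalty: the shifted forecast scores 0.1, worse than a
# forecast of no rain at all (which would score 0.05).
print(mse(field, shifted))    # 0.1
# The neighbourhood comparison largely forgives the small shift.
print(mse(neighbourhood_mean(field), neighbourhood_mean(shifted)))
```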
2-D Deformation analysis of a half-space due to a long dip-slip fault ...
Indian Academy of Sciences (India)
tic deformation in a uniform half-space due to long faults has been attempted by a number of researchers. Singh and Rani (1996) presented step- by-step progress made in the direction of crustal deformation modeling associated with strike-slip and dip-slip faulting in the earth. Cohen (1996) gave convenient formulas for ...
Analysis of the temporal electric fields in lossy dielectric media
DEFF Research Database (Denmark)
McAllister, Iain Wilson; Crichton, George C
1991-01-01
The time-dependent electric fields associated with lossy dielectric media are examined. The analysis illustrates that, with respect to the basic time constant, these lossy media can take a considerable time to attain a steady-state condition. Time-dependent field enhancement factors are considered, and inherent surface-charge densities quantified. The calculation of electrostatic forces on a free, lossy dielectric particle is illustrated. An extension to the basic analysis demonstrates that, on reversal of polarity, the resultant tangential field at the interface could play a decisive role
Directory of Open Access Journals (Sweden)
Rajneesh Kumar
Full Text Available This paper is concerned with the study of the propagation of Stoneley waves at the interface of two dissimilar isotropic microstretch thermoelastic diffusion media in the context of generalized theories of thermoelasticity. The dispersion equation of Stoneley waves is derived in the form of a determinant by using the boundary conditions. The dispersion curves giving the phase velocity and attenuation coefficients with wave number are computed numerically. Numerically computed results are shown graphically to depict the diffusion effect along with the relaxation times in microstretch thermoelastic diffusion solid half-spaces for thermally insulated and impermeable boundaries, respectively. The components of displacement, stress, couple stress, microstress, and temperature change are presented graphically for two dissimilar microstretch thermoelastic diffusion half-spaces. Several cases of interest under different conditions are also deduced and discussed.
Partially blind instantly decodable network codes for lossy feedback environment
Sorour, Sameh; Douik, Ahmed S.; Valaee, Shahrokh; Al-Naffouri, Tareq Y.; Alouini, Mohamed-Slim
2014-01-01
an expression for the expected decoding delay increment for any arbitrary transmission. This expression is then used to find the optimal policy that reduces the decoding delay in such lossy feedback environment. Results show that our proposed solutions both
2-D deformation of two welded half-spaces due to a blind dip-slip fault
Indian Academy of Sciences (India)
The solution of the two-dimensional problem of an interface-breaking long inclined dip-slip fault in two welded half-spaces is well known. The purpose of this note is to obtain the corresponding solution for a blind fault. The solution is valid for arbitrary values of the fault-depth and the dip angle. Graphs showing the variation of the ...
Solution of the Korteweg-de Vries equation in a half-space bounded by a wall
International Nuclear Information System (INIS)
Moses, H.E.
1976-01-01
A solution of the Korteweg-de Vries equation in the half-space 0 < r < ∞ with the boundary condition V(0) = 0 is given. The boundary condition may be interpreted as the requirement that the plane which bounds the half-space be a rigid wall. Aside from possible physical interest, this solution, which is obtained from one of the potentials for the radial Schroedinger equation which do not scatter, appears to indicate that the radial Schroedinger equation and the corresponding Gel'fand-Levitan equation play a role in the case of the half-space bounded by a wall similar to that of the one-dimensional Schroedinger equation (-∞ < x < ∞) and its corresponding Gel'fand-Levitan equation in the more usual full-space treatment of the KdV equation. A possible interpretation of the solution presented is that it corresponds to the reflection of a wave by a wall, in which the incident wave is singular and the reflected wave is nonsingular but highly dispersive
Energy Technology Data Exchange (ETDEWEB)
Ganapol, B.D., E-mail: ganapol@cowboy.ame.arizona.edu [Department of Aerospace and Mechanical Engineering, University of Arizona, Tucson, AZ (United States); Mostacci, D.; Previti, A. [Montecuccolino Laboratory, University of Bologna, Via dei Colli, 16, I-40136 Bologna (Italy)
2016-07-01
We present highly accurate solutions to the neutral particle transport equation in a half-space. While our initial motivation was in response to a recently published solution based on Chandrasekhar's H-function, the presentation to follow has taken on a more comprehensive tone. The solution by H-functions certainly did achieve high accuracy but was limited to isotropic scattering and emission from spatially uniform and linear sources. Moreover, the overly complicated nature of the H-function approach strongly suggests that its extension to anisotropic scattering and general sources is not at all practical. For this reason, an all-encompassing theory for the determination of highly precise benchmarks, including anisotropic scattering for a variety of spatial source distributions, is presented for particle transport in a half-space. We illustrate the approach via a collection of cases including tables of 7-place flux benchmarks to guide transport methods developers. The solution presented can be applied to a considerable number of one- and two-half-space transport problems with variable sources and represents a state-of-the-art benchmark solution.
Pavlov, V. M.
2013-01-01
A new algorithm is proposed for calculating the complete synthetic seismograms from a point source in the form of the sum of a single force and a dipole with an arbitrary seismic moment tensor in a plane layered medium composed of homogeneous elastic isotropic layers. Following the idea of (Alekseev and Mikhailenko, 1978), an artificial cylindrical boundary is introduced, on which the boundary conditions are specified. For this modified problem, the exact solution (in terms of the displacements and stresses on the horizontal plane areal element) in the frequency domain is derived and substantiated. The unknown depth-dependent coefficients form the motion-stress vector, whose components satisfy the known system of ordinary differential equations. This system is solved by the method that involves the matrix impedance and propagator for the vector of motion, as previously suggested by the author in (Pavlov, 2009). In relation to the initial problem, the reflections from the artificial boundary are noise, which, to a certain degree, can be suppressed by selecting a long enough distance to this boundary and owing to the presence of a purely imaginary addition to the frequency. The algorithm is not constrained by the thickness of the layers, is applicable for any frequency range, and is suitable for computing the static offset.
International Nuclear Information System (INIS)
Coen, S.
1981-01-01
The theory given by Moses and deRidder is modified so that the derivative of the solution of the Gelfand-Levitan integral equation is not required. Based on this modification, a numerical procedure is developed which approximately constructs the dielectric profile of the layered half-space from the impulse response. Moreover, an inverse scattering theory is developed for a Goupillaud-type dielectric medium, and a fast numerical procedure based on the Berryman and Greene algorithm is presented. The performance of the numerical algorithms is examined by applying them to precise and imprecise artificial impulse response data. 11 refs.
Directory of Open Access Journals (Sweden)
Onur Şahin
2017-04-01
Full Text Available An analysis of a distributed moving load along the surface of a coated half-space is presented. The formulation of the problem relies on the hyperbolic-elliptic asymptotic model developed earlier by the authors. The integral solutions for the longitudinal and transverse displacements along the surface for the sub- and super-Rayleigh cases are obtained by using the uniform stationary phase method. Numerical comparisons of the exact and asymptotic solutions of the longitudinal displacement are illustrated for certain cross-sections of the profile.
Directory of Open Access Journals (Sweden)
S. K. Roy-Choudhuri
1990-01-01
Full Text Available In the present paper we consider the magneto-thermo-elastic wave produced by a thermal shock in a perfectly conducting elastic half-space. Here the Lord-Shulman theory of thermoelasticity [1] is used to account for the interaction between the elastic and thermal fields. The solution obtained in analytical form reduces to those of Kaliski and Nowacki [2] when the coupling between the temperature and strain fields and the relaxation time are neglected. The results also agree with those of Massalas and DaLamangas [3] in the absence of the thermal relaxation time.
Spontaneous radiation of a chiral molecule located near a half-space of a bi-isotropic material
Guzatov, D. V.; Klimov, V. V.; Poprukailo, N. S.
2013-04-01
Analytical expressions for the rate of spontaneous emission from a chiral (optically active) molecule located near a half-space occupied by a chiral (bi-isotropic) material have been obtained and analyzed in detail. It is established that the rates of spontaneous emission from the "right" and "left" enantiomers of molecules occurring near the chiral medium may significantly differ in cases of chiral materials with (i) both negative dielectric permittivity and magnetic permeability (DNG metamaterial) and (ii) negative permeability and positive permittivity (MNG metamaterial). Based on this phenomenon, DNG and MNG metamaterials can be used to create devices capable of separating right and left enantiomers in racemic mixtures.
Dehbashi, Reza; Shahabadi, Mahmoud
2013-12-01
The commonly used coordinate transformation for cylindrical cloaks is generalized. This transformation is utilized to determine the anisotropic inhomogeneous diagonal material tensors of a shell-type cloak for various material types, i.e., double-positive (DPS: ɛ, μ > 0) and double-negative (DNG: ɛ, μ < 0). To verify cloaking for various material types, a rigorous analysis is performed. It is shown that perfect cloaking will be achieved for the same material type for the cloak and its surrounding medium. Moreover, material losses are included in the analysis to demonstrate that perfect cloaking for lossy materials can be achieved for identical loss tangent of the cloak and its surrounding material. Sensitivity of the cloaking performance to losses for different material types is also investigated. The obtained analytical results are verified using a finite-element computational analysis.
Lossy/lossless coding of bi-level images
DEFF Research Database (Denmark)
Martins, Bo; Forchhammer, Søren
1997-01-01
Summary form only given. We present improvements to a general type of lossless, lossy, and refinement coding of bi-level images (Martins and Forchhammer, 1996). Loss is introduced by flipping pixels. The pixels are coded using arithmetic coding of conditional probabilities obtained using a template, as is known from JBIG and proposed in JBIG-2 (Martins and Forchhammer). Our new state-of-the-art results are obtained using the more general free tree instead of a template. Also we introduce multiple refinement template coding. The lossy algorithm is analogous to the greedy 'rate
Hayati, Yazdan; Eskandari-Ghadi, Morteza
2018-02-01
An asymmetric three-dimensional thermoelastodynamic wave propagation with scalar potential functions is presented for an isotropic half-space, in such a way that the wave may be originated from an arbitrary either traction or heat flux applied on a patch at the free surface of the half-space. The displacements, stresses and temperature are presented within the framework of Biot's coupled thermoelasticity formulations. By employing a complete representation for the displacement and temperature fields in terms of two scalar potential functions, the governing equations of coupled thermoelasticity are uncoupled into a sixth- and a second-order partial differential equation in the cylindrical coordinate system. By virtue of Fourier expansion and Hankel integral transforms, the angular and radial variables are suppressed respectively, and a sixth- and a second-order ordinary differential equation in terms of depth are obtained, which are solved readily, from which the displacement, stresses and temperature fields are derived in transformed space by satisfying both the regularity and boundary conditions. By applying the inverse Hankel integral transforms, the displacements and temperature are numerically evaluated to determine the solutions in the real space. The numerical evaluations are done for three specific cases of vertical and horizontal time-harmonic patch traction and a constant heat flux passing through a circular disc on the surface of the half-space. It has been previously proved that the potential functions used in this paper are applicable from elastostatics to thermoelastodynamics. Thus, the analytical solutions presented in this paper are verified by comparing the results of this study with two specific problems reported in the literature, which are an elastodynamic problem and an axisymmetric quasi-static thermoelastic problem. To show the accuracy of numerical results, the solution of this study is also compared with the solution for elastodynamics that exists in
Stepanov, F. I.
2018-04-01
The mechanical properties of a material which is modeled by an exponential creep kernel characterized by a spectrum of relaxation and retardation times are studied. The research is carried out considering a contact problem for a solid indenter sliding over a viscoelastic half-space. The contact pressure, indentation depth of the indenter, and the deformation component of the friction coefficient are analyzed with respect to the case of half-space material modeled by single relaxation and retardation times.
Zhu, P. Y.
1991-01-01
The effective-medium approximation is applied to investigate scattering from a half-space of randomly and densely distributed discrete scatterers. Starting from the vector wave equations, an effective-medium Born approximation, together with a particular treatment of the Green's functions and special coordinates whose origin is set at the field point, is used to calculate the bistatic and backscattering. An analytic solution for the backscattering in closed form is obtained and shows a depolarization effect. The theoretical results are in good agreement with experimental measurements in the cases of snow and multi-year and first-year sea ice. The root product ratio of polarization to depolarization in backscattering is equal to 8; this result constitutes a law about polarized scattering phenomena in nature.
Fabrikant, I.; Karapetian, E.; Kalinin, S. V.
2018-02-01
We consider the problem of an arbitrary shaped rigid punch pressed against the boundary of a transversely isotropic half-space and interacting with an arbitrary flat crack or inclusion, located in the plane parallel to the boundary. The set of governing integral equations is derived for the most general conditions, namely the presence of both normal and tangential stresses under the punch, as well as general loading of the crack faces. In order to verify correctness of the derivations, two different methods were used to obtain governing integral equations: generalized method of images and utilization of the reciprocal theorem. Both methods gave the same results. Axisymmetric coaxial case of interaction between a rigid inclusion and a flat circular punch both centered along the z-axis is considered as an illustrative example. Most of the final results are presented in terms of elementary functions.
Enhanced inertia from lossy effective fluids using multi-scale sonic crystals
Directory of Open Access Journals (Sweden)
Matthew D. Guild
2014-12-01
Full Text Available In this work, a recent theoretically predicted phenomenon of enhanced permittivity with electromagnetic waves using lossy materials is investigated for the analogous case of mass density and acoustic waves, which represents inertial enhancement. Starting from fundamental relationships for the homogenized quasi-static effective density of a fluid host with fluid inclusions, theoretical expressions are developed for the conditions on the real and imaginary parts of the constitutive fluids to have inertial enhancement, which are verified with numerical simulations. Realizable structures are designed to demonstrate this phenomenon using multi-scale sonic crystals, which are fabricated using a 3D printer and tested in an acoustic impedance tube, yielding good agreement with the theoretical predictions and demonstrating enhanced inertia.
Wu, F.; Wu, T.-H.; Li, X.-Y.
2018-03-01
This article aims to present a systematic indentation theory on a half-space of multi-ferroic composite medium with transverse isotropy. The effect of sliding friction between the indenter and substrate is taken into account. The cylindrical flat-ended indenter is assumed to be electrically/magnetically conducting or insulating, which leads to four sets of mixed boundary-value problems. The indentation forces in the normal and tangential directions are related to the Coulomb friction law. For each case, the integral equations governing the contact behavior are developed by means of the generalized method of potential theory, and the corresponding coupling field is obtained in terms of elementary functions. The effect of sliding on the contact behavior is investigated. Finite element method (FEM) in the context of magneto-electro-elasticity is developed to discuss the validity of the analytical solutions. The obtained analytical solutions may serve as benchmarks to various simplified analyses and numerical codes and as a guide for future experimental studies.
Instabilities in dynamic anti-plane sliding of an elastic layer on a dissimilar elastic half-space
Kunnath, R.
2012-12-01
The stability of dynamic anti-plane sliding at an interface between an elastic layer and an elastic half-space with dissimilar elastic properties is studied. Friction at the interface is assumed to follow a rate- and state-dependent law, with a positive instantaneous dependence on slip velocity and a rate weakening behavior in the steady state. The perturbations are of the form exp(ikx+pt), where k is the wavenumber, x is the coordinate along the interface, p is the time response to the perturbation and t is time. The results of the stability analysis are shown in Figs. 1 and 2 with the velocity weakening parameter b/a=5, shear wave speed ratio cs'/cs=1.2, shear modulus ratio μ'/μ=1.2 and non-dimensional layer thickness H=100. The normalized instability growth rate and normalized phase velocity are plotted as a function of wavenumber. Fig. 1 is for a non-dimensional unperturbed slip velocity ɛ=5 (rapid sliding) while Fig. 2 is for ɛ=0.05 (slow sliding). The results show the destabilization of interfacial waves. For slow sliding, destabilization of interfacial waves is still seen, indicating that the quasi-static approximation to slow sliding is not valid. This is in agreement with the result of Ranjith (Int. J. Solids and Struct., 2009, 46, 3086-3092) who predicted an instability of long-wavelength Love waves in slow sliding.
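For perturbations of the assumed form exp(ikx + pt), stability is read off from the complex exponent p: Re(p) > 0 means the perturbation grows, and -Im(p)/k is the phase velocity of the mode along the interface. The sample values of p below are made up for illustration; the paper obtains p(k) from the friction law:

```python
# Classify a perturbation exp(ikx + pt) from its complex exponent p.
# The numeric values of p used here are hypothetical examples, not
# roots of the paper's dispersion relation.
def classify(p, k):
    growth_rate = p.real              # > 0 means instability
    phase_velocity = -p.imag / k      # propagation speed of the mode
    label = "unstable" if growth_rate > 0 else "stable"
    return label, phase_velocity

print(classify(0.3 - 2.0j, 1.0))      # ('unstable', 2.0)
print(classify(-0.1 - 1.5j, 1.0))     # ('stable', 1.5)
```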
CFOA-Based Lossless and Lossy Inductance Simulators
Directory of Open Access Journals (Sweden)
F. Kaçar
2011-09-01
Full Text Available An inductance simulator is a useful component in circuit synthesis theory, especially for analog signal processing applications such as filters, chaotic oscillator design, analog phase shifters and cancellation of parasitic elements. In this study, four new inductance simulator topologies employing a single current feedback operational amplifier are presented. The presented topologies require few passive components. The first topology is intended for negative inductance simulation, the second topology is for lossy series inductance, the third one is for lossy parallel inductance and the fourth topology is for negative parallel (-R)(-L)(-C) simulation. The performance of the proposed CFOA-based inductance simulators is demonstrated on both a second-order low-pass filter and an inductance cancellation circuit. PSPICE simulations are given to verify the theoretical analysis.
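A lossy series inductance simulator ideally presents the input impedance Z(jω) = R_loss + jωL_eq of a resistor in series with an inductor. The component relation L_eq = R1·R2·C used below is a common gyrator-style result assumed here for illustration; the paper's CFOA topologies have their own component equations. A quick sweep shows the expected signature, resistive (phase near 0°) at low frequency and inductive (phase approaching 90°) at high frequency:

```python
import numpy as np

# Hypothetical passive values and gyrator-style equivalences;
# not taken from the paper's topologies.
R1, R2, C = 1e3, 1e3, 10e-9      # ohms, ohms, farads
R_loss = R1                      # assumed series loss resistance
L_eq = R1 * R2 * C               # 10 mH equivalent inductance

def z_in(freq_hz):
    """Input impedance of the simulated lossy series inductor."""
    w = 2.0 * np.pi * freq_hz
    return R_loss + 1j * w * L_eq

for f in (1e2, 1e4, 1e6):
    z = z_in(f)
    print(f"{f:9.0f} Hz: |Z| = {abs(z):12.1f} ohm, "
          f"phase = {np.degrees(np.angle(z)):5.1f} deg")
```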
Task-oriented lossy compression of magnetic resonance images
Anderson, Mark C.; Atkins, M. Stella; Vaisey, Jacques
1996-04-01
A new task-oriented image quality metric is used to quantify the effects of distortion introduced into magnetic resonance images by lossy compression. This metric measures the similarity between a radiologist's manual segmentation of pathological features in the original images and the automated segmentations performed on the original and compressed images. The images are compressed using a general wavelet-based lossy image compression technique, embedded zerotree coding, and segmented using a three-dimensional stochastic model-based tissue segmentation algorithm. The performance of the compression system is then enhanced by compressing different regions of the image volume at different bit rates, guided by prior knowledge about the location of important anatomical regions in the image. Application of the new system to magnetic resonance images is shown to produce compression results superior to the conventional methods, both subjectively and with respect to the segmentation similarity metric.
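The paper's metric compares a radiologist's manual segmentation with automated segmentations of the original and compressed images. As an illustrative stand-in for such a segmentation-similarity measure, the sketch below uses the Dice overlap coefficient, a standard choice for comparing binary segmentation masks; the paper's exact metric may differ:

```python
import numpy as np

def dice(seg_a, seg_b):
    """Dice overlap between two binary segmentation masks."""
    a = np.asarray(seg_a, dtype=bool)
    b = np.asarray(seg_b, dtype=bool)
    if not (a.any() or b.any()):
        return 1.0                      # both empty: perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

# Toy 1D masks: compression causes one lesion voxel to be lost
# from the automated segmentation.
manual     = np.array([0, 1, 1, 1, 0, 0])
compressed = np.array([0, 1, 1, 0, 0, 0])
print(dice(manual, manual))       # 1.0
print(dice(manual, compressed))   # 2*2 / (3+2) = 0.8
```

A drop in Dice score between the original-image and compressed-image segmentations then quantifies task-relevant information loss, which is the idea behind compressing diagnostically important regions at higher bit rates.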
Partially blind instantly decodable network codes for lossy feedback environment
Sorour, Sameh
2014-09-01
In this paper, we study the multicast completion and decoding delay minimization problems for instantly decodable network coding (IDNC) in the case of lossy feedback. When feedback loss events occur, the sender falls into uncertainties about packet reception at the different receivers, which forces it to perform partially blind selections of packet combinations in subsequent transmissions. To determine efficient selection policies that reduce the completion and decoding delays of IDNC in such an environment, we first extend the perfect feedback formulation in our previous works to the lossy feedback environment, by incorporating the uncertainties resulting from unheard feedback events in these formulations. For the completion delay problem, we use this formulation to identify the maximum likelihood state of the network in events of unheard feedback and employ it to design a partially blind graph update extension to the multicast IDNC algorithm in our earlier work. For the decoding delay problem, we derive an expression for the expected decoding delay increment for any arbitrary transmission. This expression is then used to find the optimal policy that reduces the decoding delay in such lossy feedback environment. Results show that our proposed solutions both outperform previously proposed approaches and achieve tolerable degradation even at relatively high feedback loss rates.
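The "maximum likelihood state" idea can be illustrated with a minimal Bayesian model: suppose a receiver ACKs every packet it receives, but the ACK itself is erased with some probability. Hearing nothing is then ambiguous, and Bayes' rule gives the likelihood that the packet was in fact received. The model and numbers below are an illustrative sketch, not the paper's formulation:

```python
# Hedged sketch: probability a packet was received, given that the
# sender heard no feedback.  Silence occurs either because the
# packet was received but its ACK was lost, or because the packet
# itself was lost.  All parameter values are illustrative.
def p_received_given_silence(p_recv, p_fb_loss):
    num = p_recv * p_fb_loss              # received AND ack erased
    return num / (num + (1.0 - p_recv))   # ... OR packet not received

p = p_received_given_silence(p_recv=0.9, p_fb_loss=0.5)
print(round(p, 3))   # 0.818 -> maximum likelihood state: "received"
```

With a reliable forward channel, silence is thus most likely a lost ACK rather than a lost packet, which is why a partially blind sender can still make useful coding decisions.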
Half-Space Temperature Field with a Movable Thermally Thin-Coated Boundary Under External Heat Flux
Directory of Open Access Journals (Sweden)
P. A. Vlasov
2014-01-01
Full Text Available In engineering practice, analytical methods of the mathematical theory of heat conduction hold a special place. This is due to many reasons, in particular, because the solutions of the relevant problems, represented in analytically closed form, can be used not only for a parametric analysis of the studied temperature field and to explore the specific features of its formation, but also to test the developed computational algorithms aimed at solving real-world applied heat and mass transfer problems. Difficulties arising when using the analytical methods of the mathematical theory of heat conduction in practice are well known. They are significantly exacerbated if the boundaries of the system under study are movable, even in the simplest case, when the law of motion is known. The main goal of the conducted research is to obtain an analytically closed-form solution to the problem of finding the temperature field of an orthotropic half-space whose boundary carries a thermally thin coating, is exposed to a highly concentrated stationary external heat flux, and moves uniformly parallel to itself. The assumption that the coating of the boundary is thermally thin allowed us to realize the idea of "concentrated capacity", that is, to accept the hypothesis that the mean-thickness coating temperature is equal to the temperature of its boundaries. This assumption allowed us to reduce the problem under consideration to a mixed problem for a parabolic equation with a specific boundary condition. The Hankel integral transform of zero order with respect to the radial variable and the Laplace transform with respect to the temporal variable were used to solve the reduced problem. These techniques have allowed us to represent the required solution as an iterated integral.
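The zero-order Hankel transform used above, F(k) = ∫₀^∞ f(r) J₀(kr) r dr, is easy to check numerically against a known closed-form pair. The Gaussian test function below is our choice (not the paper's temperature field); its transform is exp(-k²/(4a))/(2a):

```python
import numpy as np
from scipy.special import j0
from scipy.integrate import trapezoid

# Numerical zero-order Hankel transform by quadrature, verified on
# the Gaussian pair  f(r) = exp(-a r^2)  ->  F(k) = exp(-k^2/4a)/(2a).
a = 1.0
r = np.linspace(0.0, 12.0, 20001)   # truncates the infinite range;
f = np.exp(-a * r**2)               # the integrand has decayed by r=12

def hankel0(k):
    """F(k) = integral of f(r) J0(k r) r dr over [0, inf)."""
    return trapezoid(f * j0(k * r) * r, r)

for k in (0.0, 1.0, 2.0):
    exact = np.exp(-k**2 / (4.0 * a)) / (2.0 * a)
    print(f"k={k:.1f}: numeric={hankel0(k):.6f}, exact={exact:.6f}")
```

The same quadrature idea is what an inverse-transform evaluation of the iterated-integral solution ultimately reduces to.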
Development of the town Viljandi in light of the studies at Lossi street / Eero Heinloo
Heinloo, Eero
2015-01-01
The investigations showed that the earliest deposit indicating permanent settlement extended from the intersection of Lossi and Kauba streets to the area between the buildings at Lossi St 3 and 4. At present, the beginning of permanent settlement cannot be dated earlier than the middle of the 13th century. The formation of Lossi street can be dated to the 1270s or 1280s. By the second quarter of the 14th century at the latest, the wooden paving of the street had been replaced with cobblestone paving.
Directory of Open Access Journals (Sweden)
I. K. Badalakha
2009-02-01
The article presents the result of solving the problem of the stress-strain state of an elastic half-space under a load uniformly distributed along a line, using a non-traditional linear dependence of deformations on the stress state that differs from the generalized Hooke's law.
Lossy compression of quality scores in genomic data.
Cánovas, Rodrigo; Moffat, Alistair; Turpin, Andrew
2014-08-01
Next-generation sequencing technologies are revolutionizing medicine. Data from sequencing technologies are typically represented as a string of bases, an associated sequence of per-base quality scores and other metadata, and in aggregate can require a large amount of space. The quality scores show how accurate the bases are with respect to the sequencing process, that is, how confident the sequencer is of having called them correctly, and are the largest component in datasets in which they are retained. Previous research has examined how to store sequences of bases effectively; here we add to that knowledge by examining methods for compressing quality scores. The quality values originate in a continuous domain, and so if a fidelity criterion is introduced, it is possible to introduce flexibility in the way these values are represented, allowing lossy compression over the quality score data. We present existing compression options for quality score data, and then introduce two new lossy techniques. Experiments measuring the trade-off between compression ratio and information loss are reported, including quantifying the effect of lossy representations on a downstream application that carries out single nucleotide polymorphism and insert/deletion detection. The new methods are demonstrably superior to other techniques when assessed against the spectrum of possible trade-offs between storage required and fidelity of representation. An implementation of the methods described here is available at https://github.com/rcanovas/libCSAM.
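The fidelity-criterion idea can be made concrete with a minimal sketch. The code below uses generic uniform binning, not the two techniques proposed in the paper; the bin width and the Phred score range are assumptions. Replacing each score by its bin centre bounds the per-score error by half the bin width and leaves far fewer distinct symbols for a downstream entropy coder.

```python
def bin_qualities(scores, bin_width=8, q_max=41):
    """Lossy-quantize Phred quality scores: map each score to its bin centre."""
    out = []
    for q in scores:
        q = max(0, min(q, q_max))  # clamp to the valid Phred range
        centre = (q // bin_width) * bin_width + bin_width // 2
        out.append(min(centre, q_max))
    return out

scores = [2, 11, 25, 37, 40, 41]
approx = bin_qualities(scores)
max_err = max(abs(a - b) for a, b in zip(scores, approx))
print(approx)          # -> [4, 12, 28, 36, 41, 41]
print(max_err <= 4)    # error bounded by half the bin width -> True
```

With a bin width of 8, the 42 possible scores collapse to 6 symbols while every reconstructed score stays within 4 of the original, which is exactly the kind of rate-fidelity trade-off the experiments in the paper quantify.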
High-altitude electromagnetic pulse environment over the lossy ground
International Nuclear Information System (INIS)
Xie Yanzhao; Wang Zanji
2003-01-01
The electromagnetic field above ground produced by an incident high-altitude electromagnetic pulse plane wave striking the ground plane is described in this paper in terms of the Fresnel reflection coefficients and the numerical FFT. The pulse reflected from the ground plane always tends to cancel the incident field for the horizontal field component, whereas the reflected field adds to the incident field for the vertical field component. Results for several cases with varying observation height, angle of incidence and lossy-ground electrical parameters are also presented, showing the different E-field components above the earth.
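The reflection coefficients involved are the standard Fresnel formulas for an air/lossy-ground interface, sketched below with illustrative soil parameters (the permittivity and conductivity values are assumptions, not from the paper):

```python
import cmath, math

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def fresnel_lossy(theta_i, f, eps_r, sigma):
    """Fresnel reflection coefficients at an air/lossy-ground interface.

    theta_i: incidence angle from the vertical (rad); f: frequency (Hz);
    eps_r, sigma: relative permittivity and conductivity (S/m) of the ground.
    Returns (R_h, R_v) for horizontal (TE) and vertical (TM) polarization.
    """
    w = 2 * math.pi * f
    n2 = eps_r - 1j * sigma / (w * EPS0)  # complex relative permittivity
    root = cmath.sqrt(n2 - math.sin(theta_i) ** 2)
    c = math.cos(theta_i)
    R_h = (c - root) / (c + root)
    R_v = (n2 * c - root) / (n2 * c + root)
    return R_h, R_v

# Typical ground at 1 MHz (illustrative: eps_r = 10, sigma = 0.01 S/m).
R_h, R_v = fresnel_lossy(math.radians(45), 1e6, 10.0, 0.01)
print(abs(R_h) < 1.0 and abs(R_v) < 1.0)  # lossy ground: partial reflection -> True
```

In the highly conducting limit R_h approaches -1 (the cancellation of the horizontal component noted in the abstract) while R_v approaches +1 (the reinforcement of the vertical component).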
Theory and Circuit Model for Lossy Coaxial Transmission Line
Energy Technology Data Exchange (ETDEWEB)
Genoni, T. C.; Anderson, C. N.; Clark, R. E.; Gansz-Torres, J.; Rose, D. V.; Welch, Dale Robert
2017-04-01
The theory of signal propagation in lossy coaxial transmission lines is revisited and new approximate analytic formulas for the line impedance and attenuation are derived. The accuracy of these formulas from DC to 100 GHz is demonstrated by comparison to numerical solutions of the exact field equations. Based on this analysis, a new circuit model is described which accurately reproduces the line response over the entire frequency range. Circuit model calculations are in excellent agreement with the numerical and analytic results, and with finite-difference time-domain simulations which resolve the skin depths of the conducting walls.
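For orientation, the first-order textbook expressions that such improved formulas refine can be sketched as follows. These are the standard low-loss approximations, not the paper's new higher-order results, and the RG-58-like dimensions are assumed for illustration:

```python
import math

MU0, EPS0 = 4e-7 * math.pi, 8.854e-12
ETA0 = math.sqrt(MU0 / EPS0)  # wave impedance of free space, ~376.73 ohm

def coax_z0(a, b, eps_r):
    """Lossless characteristic impedance of a coax, inner radius a, outer radius b."""
    return ETA0 / (2 * math.pi * math.sqrt(eps_r)) * math.log(b / a)

def coax_alpha_c(a, b, eps_r, f, rho):
    """Conductor-loss attenuation (Np/m) from the skin-effect surface resistance."""
    Rs = math.sqrt(math.pi * f * MU0 * rho)   # surface resistance, ohm/square
    R = Rs / (2 * math.pi) * (1 / a + 1 / b)  # series resistance per metre
    return R / (2 * coax_z0(a, b, eps_r))

# RG-58-like geometry (assumed): a = 0.45 mm, b = 1.47 mm, PE dielectric, copper.
z0 = coax_z0(0.45e-3, 1.47e-3, 2.25)
print(40 < z0 < 55)  # -> True (close to the nominal 50 ohm)
a1 = coax_alpha_c(0.45e-3, 1.47e-3, 2.25, 1e8, 1.7e-8)
a2 = coax_alpha_c(0.45e-3, 1.47e-3, 2.25, 1e9, 1.7e-8)
print(abs(a2 / a1 - math.sqrt(10)) < 1e-9)  # attenuation grows as sqrt(f) -> True
```

The square-root frequency dependence of the attenuation is exactly the behaviour that breaks down when the skin depth becomes comparable to the wall thickness, which is why resolving the skin depths matters in the validation simulations.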
Solving for the capacity of a noisy lossy bosonic channel via the master equation
International Nuclear Information System (INIS)
Qin Tao; Zhao Meisheng; Zhang Yongde
2006-01-01
We discuss the noisy lossy bosonic channel by exploiting master equations. The capacity of the noisy lossy bosonic channel and the criterion for the optimal capacities are derived. Consequently, we verify that master equations can serve as a tool to study bosonic channels.
Spectral Distortion in Lossy Compression of Hyperspectral Data
Directory of Open Access Journals (Sweden)
Bruno Aiazzi
2012-01-01
Distortion allocation varying with wavelength in lossy compression of hyperspectral imagery is investigated, with the aim of minimizing the spectral distortion between original and decompressed data. The absolute angular error, or spectral angle mapper (SAM), is used to quantify spectral distortion, while radiometric distortions are measured by maximum absolute deviation (MAD) for near-lossless methods, for example differential pulse code modulation (DPCM), or by mean-squared error (MSE) for lossy methods, for example spectral decorrelation followed by JPEG 2000. Two strategies of interband distortion allocation are compared: given a target average bit rate, distortion may either be held constant with wavelength, or be allocated proportionally to the noise level of each band, according to the virtually lossless protocol. Comparisons with the uncompressed originals show that the average SAM of radiance spectra is minimized by constant distortion allocation to radiance data. However, variable distortion allocation according to the virtually lossless protocol yields significantly lower SAM in the case of reflectance spectra obtained from compressed radiance data, compared with constant distortion allocation at the same compression ratio.
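The SAM metric used throughout this comparison is simply the angle between two spectra viewed as vectors; a minimal sketch (the sample spectra are illustrative values, not data from the paper):

```python
import math

def sam(x, y):
    """Spectral angle mapper: angle in radians between two spectra."""
    dot = sum(a * b for a, b in zip(x, y))
    nx = math.sqrt(sum(a * a for a in x))
    ny = math.sqrt(sum(b * b for b in y))
    # clamp to avoid acos domain errors from floating-point rounding
    return math.acos(max(-1.0, min(1.0, dot / (nx * ny))))

ref = [0.10, 0.35, 0.60, 0.40]
print(sam(ref, ref) < 1e-6)                         # identical spectra -> True
print(sam(ref, [2 * v for v in ref]) < 1e-6)        # scaling-invariant -> True
print(abs(sam([1, 0], [0, 1]) - math.pi / 2) < 1e-12)  # orthogonal -> True
```

The scaling invariance is the reason SAM isolates spectral (shape) distortion from radiometric (amplitude) distortion, which MAD and MSE capture instead.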
Ito, Yasufumi; Takeuchi, Kazumasa A.
2018-04-01
We study height fluctuations of interfaces in the (1+1)-dimensional Kardar-Parisi-Zhang (KPZ) class, growing at different speeds in the left half and the right half of space. Carrying out simulations of the discrete polynuclear growth model with two different growth rates, combined with the standard setting for the droplet, flat, and stationary geometries, we find that the fluctuation properties at and near the boundary are described by the KPZ half-space problem developed in the theoretical literature. In particular, in the droplet case, the distribution at the boundary is given by the largest-eigenvalue distribution of random matrices in the Gaussian symplectic ensemble, often called the GSE Tracy-Widom distribution. We also characterize the crossover from the full-space statistics to the half-space ones, which arises when the difference between the two growth speeds is small.
Michalski, Krzysztof A.; Lin, Hung-I.
2018-01-01
Second-order asymptotic formulas for the electromagnetic fields of a horizontal electric dipole over an imperfectly conducting half-space are derived using the modified saddle-point method. Application examples are presented for ordinary and plasmonic media, and the accuracy of the new formulation is assessed by comparisons with two alternative state-of-the-art theories and with the rigorous results of numerical integration.
International Nuclear Information System (INIS)
Panasyuk, George Y; Schotland, John C; Markel, Vadim A
2009-01-01
We obtain a short-distance expansion for the half-space, frequency domain electromagnetic Green's tensor. The small parameter of the theory is ωε₁L/c, where ω is the frequency, ε₁ is the permittivity of the upper half-space, in which both the source and the point of observation are located, and which is assumed to be transparent, c is the speed of light in vacuum, and L is a characteristic length, defined as the distance from the point of observation to the reflected (with respect to the planar interface) position of the source. In the case when the lower half-space (the substrate) is characterized by a complex permittivity ε₂, we compute the expansion to third order. For the case when the substrate is a transparent dielectric, we compute the imaginary part of the Green's tensor to seventh order. The analytical calculations are verified numerically. The practical utility of the obtained expansion is demonstrated by computing the radiative lifetime of two electromagnetically interacting molecules in the vicinity of a transparent dielectric substrate. The computation is performed in the strong interaction regime when the quasi-particle pole approximation is inapplicable. In this regime, the integral representation for the half-space Green's tensor is difficult to use while its electrostatic limiting expression is grossly inadequate. However, the analytical expansion derived in this paper can be used directly and efficiently. The results of this study are also relevant to nano-optics and near-field imaging, especially when tomographic image reconstruction is involved.
International Nuclear Information System (INIS)
Sanchez, R.; Ragusa, J.; Santandrea, S.
2004-01-01
The problem of the determination of a homogeneous reflector that preserves a set of prescribed albedos is considered. Duality is used for a direct estimation of the derivatives needed in the iterative calculation of the optimal homogeneous cross sections. The calculation is based on the preservation of collapsed multigroup albedos obtained from detailed reference calculations and depends on the low-order operator used for core calculations. In this work we analyze diffusion and transport as low-order operators and argue that the P₀ transfers are the best choice for the unknown cross sections to be adjusted. Numerical results illustrate the new approach for SPₙ core calculations. (Author)
Yuan, Zonghao; Cao, Zhigang; Boström, Anders; Cai, Yuanqiang
2018-04-01
A computationally efficient semi-analytical solution for ground-borne vibrations from underground railways is proposed and used to investigate the influence of hydraulic boundary conditions at the scattering surfaces and the moving ground water table on ground vibrations. The arrangement of a dry soil layer with varying thickness resting on a saturated poroelastic half-space, which includes a circular tunnel subject to a harmonic load at the tunnel invert, creates the scenario of a moving water table for research purposes in this paper. The tunnel is modelled as a hollow cylinder, which is made of viscoelastic material and buried in the half-space below the ground water table. The wave field in the dry soil layer consists of up-going and down-going waves while the wave field in the tunnel wall consists of outgoing and regular cylindrical waves. The complete solution for the saturated half-space with a cylindrical hole is composed of down-going plane waves and outgoing cylindrical waves. By adopting traction-free boundary conditions on the ground surface and continuity conditions at the interfaces of the two soil layers and of the tunnel and the surrounding soil, a set of algebraic equations can be obtained and solved in the transformed domain. Numerical results show that the moving ground water table can cause an uncertainty of up to 20 dB for surface vibrations.
Resolution limits of migration and linearized waveform inversion images in a lossy medium
Schuster, Gerard T.; Dutta, Gaurav; Li, Jing
2017-01-01
The vertical- and horizontal-resolution limits Δz_lossy and Δx_lossy of post-stack migration and linearized waveform inversion images are derived for lossy data in the far-field approximation. Unlike the horizontal resolution limit Δx ∝ λz/L in a lossless medium, which worsens linearly in depth z, Δx_lossy ∝ z²/(QL) worsens quadratically with depth for a medium with small Q values. Here, Q is the quality factor, λ is the effective wavelength, L is the recording aperture, and loss is accounted for in the resolution formulae by replacing λ with z/Q. In contrast, the lossy vertical-resolution limit Δz_lossy only worsens linearly in depth, compared to Δz ∝ λ for a lossless medium. For both the causal and acausal Q models, the resolution limits are linearly proportional to 1/Q for small Q. These theoretical predictions are validated with migration images computed from lossy data.
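The contrast between linear and quadratic depth-scaling of the horizontal blur can be checked numerically. The sketch below assumes the abstract's proportionalities with unit constants, and the parameter values are illustrative:

```python
def dx_lossless(lam, z, L):
    """Horizontal resolution limit in a lossless medium (unit constant assumed)."""
    return lam * z / L

def dx_lossy(z, Q, L):
    """Horizontal resolution limit in a lossy medium: lambda replaced by z/Q."""
    return z * z / (Q * L)

lam, L, Q = 30.0, 1000.0, 20.0  # illustrative wavelength (m), aperture (m), Q
# Lossless: doubling the depth doubles the blur; lossy: it quadruples it.
print(dx_lossless(lam, 2000.0, L) / dx_lossless(lam, 1000.0, L))  # -> 2.0
print(dx_lossy(2000.0, Q, L) / dx_lossy(1000.0, Q, L))            # -> 4.0
```

The lossy formula is just the lossless one with the effective wavelength λ replaced by z/Q, which is where the extra factor of depth comes from.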
A microcontroller-based interface circuit for lossy capacitive sensors
International Nuclear Information System (INIS)
Reverter, Ferran; Casas, Òscar
2010-01-01
This paper introduces and analyses a low-cost microcontroller-based interface circuit for lossy capacitive sensors, i.e. sensors whose parasitic conductance (G_x) is not negligible. The circuit relies on a previous circuit also proposed by the authors, in which the sensor is directly connected to a microcontroller without using either a signal conditioner or an analogue-to-digital converter in the signal path. The novel circuit uses the same hardware, but it performs an additional measurement and executes a new calibration technique. As a result, the sensitivity of the circuit to G_x decreases significantly (by a factor higher than ten), but not completely, owing to the input capacitances of the port pins of the microcontroller. Experimental results, showing a relative error in the capacitance measurement below 1% over the tested range of G_x, demonstrate the effectiveness of the circuit.
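The benefit of an additional measurement can be illustrated with a simplified two-measurement RC model; this is a generic sketch under assumed component values, not the authors' actual circuit or calibration technique. Discharging the lossy sensor (C_x in parallel with G_x) through two different known resistors yields two time constants, from which both unknowns can be solved:

```python
def recover_cx_gx(tau1, tau2, R1, R2):
    """Solve C_x and G_x from two measured discharge time constants.

    Model: 1/tau_i = (1/R_i + G_x) / C_x for a known discharge resistor R_i.
    """
    cx = (1 / R1 - 1 / R2) / (1 / tau1 - 1 / tau2)
    gx = cx / tau1 - 1 / R1
    return cx, gx

# Synthetic lossy sensor: C_x = 100 pF, G_x = 10 uS (assumed values).
C_true, G_true = 100e-12, 10e-6
R1, R2 = 1e6, 1e5
tau1 = C_true / (1 / R1 + G_true)
tau2 = C_true / (1 / R2 + G_true)
cx, gx = recover_cx_gx(tau1, tau2, R1, R2)
print(abs(cx - C_true) / C_true < 1e-9)  # -> True
print(abs(gx - G_true) / G_true < 1e-9)  # -> True
```

A single time-constant measurement conflates C_x and G_x; the second measurement separates them, which is the spirit of the extra measurement and recalibration described in the abstract.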
Xue, Zhang-Na; Yu, Ya-Jun; Tian, Xiao-Geng
2017-07-01
Based upon coupled thermoelasticity and the Green and Lindsay theory, new governing equations of two-temperature thermoelastic theory with a thermal nonlocal parameter are formulated. To model thermal loading of a half-space surface more realistically, a linear temperature ramping function is adopted. Laplace transform techniques are used to obtain the general analytical solutions in the Laplace domain, and inverse Laplace transforms based on Fourier expansion techniques are numerically implemented to obtain the solutions in the time domain. Specific attention is paid to the effect of the thermal nonlocal parameter, the ramping time, and the two-temperature parameter on the distributions of temperature, displacement and stress.
DEFF Research Database (Denmark)
Andersen, Lars; Nielsen, Søren R. K.
2003-01-01
The paper deals with the boundary element method formulation of steady-state wave propagation through elastic media due to a source moving with constant velocity. The Green's function for the three-dimensional full-space is formulated in a local frame of reference following the source ... is approximated, but the error which is introduced in this way is insignificant. Numerical examples are given for a moving rectangular load on an elastic half-space. The results from a boundary element code based on the derived Green's function are compared with a semi-analytic solution.
Scaling a network with positive gains to a lossy or gainy network
Koene, J.
1979-01-01
Necessary and sufficient conditions are presented under which it is possible to scale a network with positive gains to a lossy or a gainy network. A procedure to perform such a scaling operation is given.
How useful is slow light in enhancing nonlinear interactions in lossy periodic nanostructures?
DEFF Research Database (Denmark)
Saravi, Sina; Quintero-Bermudez, Rafael; Setzpfandt, Frank
2016-01-01
We investigate analytically, and with nonlinear simulations, the extent of usefulness of slow light for enhancing the efficiency of second harmonic generation in lossy nanostructures, and find that the slower is not always the better....
On the legend of the castle, the journey of the oak, and the mystery of a painting [Lossi legendist, tamme teekonnast ja maali mõistatusest] / Alar Läänelaid
Läänelaid, Alar, 1951-
2006-01-01
On dendrochronological dating (tree-ring dating), which has brought clarity to the dating of the ceiling decoration of the dining hall of Alatskivi castle, and to establishing the completion date and authenticity of Hans van Essen's painting "Still Life with Lobster" ("Vaikelu homaariga") and Clara Peeters' "Still Life with Game" ("Vaikelu jahisaagiga").
Diffraction of an inhomogeneous plane wave by an impedance wedge in a lossy medium
CSIR Research Space (South Africa)
Manara, G
1998-11-01
The diffraction of an inhomogeneous plane wave by an impedance wedge embedded in a lossy medium is analyzed. The rigorous integral representation for the field is asymptotically evaluated in the context of the uniform geometrical theory...
High thermal conductivity lossy dielectric using co-densified multilayer configuration
Tiegs, Terry N.; Kiggans, Jr., James O.
2003-06-17
Systems and methods are described for lossy dielectrics. A method of manufacturing a lossy dielectric includes providing at least one high-dielectric-loss layer, providing at least one high-thermal-conductivity, electrically insulating layer adjacent to it, and then densifying the layers together. The systems and methods provide advantages because the lossy dielectrics are less costly and more environmentally friendly than the available alternatives.
IPTV multicast with peer-assisted lossy error control
Li, Zhi; Zhu, Xiaoqing; Begen, Ali C.; Girod, Bernd
2010-07-01
Emerging IPTV technology uses source-specific IP multicast to deliver television programs to end-users. To provide reliable IPTV services over the error-prone DSL access networks, a combination of multicast forward error correction (FEC) and unicast retransmissions is employed to mitigate the impulse noise in DSL links. In existing systems, the retransmission function is provided by the Retransmission Servers sitting at the edge of the core network. In this work, we propose an alternative distributed solution where the burden of packet loss repair is partially shifted to the peer IP set-top boxes. Through the Peer-Assisted Repair (PAR) protocol, we demonstrate how packet repairs can be delivered in a timely, reliable and decentralized manner using a combination of server-peer coordination and redundancy of repairs. We also show that this distributed protocol can be seamlessly integrated with an application-layer source-aware error protection mechanism called forward and retransmitted Systematic Lossy Error Protection (SLEP/SLEPr). Simulations show that this joint PAR-SLEP/SLEPr framework not only effectively mitigates the bottleneck experienced by the Retransmission Servers, thus greatly enhancing the scalability of the system, but also efficiently improves resistance to impulse noise.
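The repair building block in such FEC schemes is XOR parity. The sketch below is a generic single-parity example, not the actual PAR or SLEP protocol: one parity packet lets a receiver (or a helping peer) repair any single lost packet in a block.

```python
from functools import reduce

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def parity(packets):
    """FEC parity packet: XOR of all packets in the block (equal lengths assumed)."""
    return reduce(xor_bytes, packets)

block = [b"pkt0", b"pkt1", b"pkt2", b"pkt3"]
fec = parity(block)

# The receiver loses packet 2 but still has the other three plus the parity.
received = [p for i, p in enumerate(block) if i != 2]
repaired = reduce(xor_bytes, received, fec)
print(repaired == b"pkt2")  # -> True
```

Because XORing the parity with every surviving packet cancels them out, only the missing packet remains, which is why a peer holding the block can regenerate any one repair on demand.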
Wilde, M. V.; Sergeeva, N. V.
2018-05-01
An explicit asymptotic model extracting the contribution of a surface wave to the dynamic response of a viscoelastic half-space is derived. Fractional exponential Rabotnov integral operators are used to describe the material properties. The model is derived by extracting the principal part of the poles corresponding to the surface waves after applying the Laplace and Fourier transforms. Simplified equations for the originals are written using power series expansions. A Padé approximation is constructed to unite the short-time and long-time models. The form of this approximation makes it possible to formulate the explicit model using a fractional exponential Rabotnov integral operator with parameters depending on the properties of the surface wave. The applicability of the derived models is studied by comparison with the exact solutions of a model problem. It is revealed that the model based on the Padé approximation is highly effective for all the time domains considered.
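The Padé step can be illustrated with the generic construction of an [L/M] approximant from Taylor coefficients; this is the standard textbook procedure applied to exp(x) as a stand-in, not the paper's specific short-time/long-time matching:

```python
import numpy as np

def pade(c, L, M):
    """[L/M] Pade approximant from Taylor coefficients c[0..L+M].

    Returns (a, b): numerator and denominator coefficients, with b[0] = 1.
    """
    A = np.array([[c[L + k - j] if L + k - j >= 0 else 0.0
                   for j in range(1, M + 1)] for k in range(1, M + 1)])
    rhs = -np.array([c[L + k] for k in range(1, M + 1)])
    b = np.concatenate(([1.0], np.linalg.solve(A, rhs)))
    a = np.array([sum(b[j] * c[i - j] for j in range(min(i, M) + 1))
                  for i in range(L + 1)])
    return a, b

# Example: [2/2] approximant of exp(x) from its Taylor series 1, 1, 1/2, 1/6, 1/24.
c = [1.0, 1.0, 0.5, 1 / 6, 1 / 24]
a, b = pade(c, 2, 2)
x = 0.5
approx = np.polyval(a[::-1], x) / np.polyval(b[::-1], x)
print(abs(approx - np.exp(x)) < 2e-4)  # -> True
```

A rational approximant built from a few series coefficients stays accurate well beyond the radius where the truncated series itself degrades, which is what makes Padé matching of short-time and long-time expansions effective.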
Directory of Open Access Journals (Sweden)
Rajneesh Kumar
The aim of the present paper is to study the propagation of Lamb waves in micropolar generalized thermoelastic solids with two temperatures, bordered with layers or half-spaces of inviscid liquid, subjected to stress-free boundary conditions in the context of the Green and Lindsay (G-L) theory. The secular equations governing the symmetric and skew-symmetric leaky and nonleaky Lamb wave modes of propagation are derived. The computer-simulated results with respect to phase velocity, attenuation coefficient, and amplitudes of dilatation, microrotation vector and heat flux for the symmetric and skew-symmetric modes are depicted graphically. Moreover, some particular cases of interest are also discussed.
Zhang, H.-m.; Chen, X.-f.; Chang, S.
It is difficult to compute synthetic seismograms for a layered half-space with sources and receivers at close to or the same depths using the generalized R/T coefficient method (Kennett, 1983; Luco and Apsel, 1983; Yao and Harkrider, 1983; Chen, 1993), because the wavenumber integration converges very slowly. A semi-analytic method for accelerating the convergence, in which part of the integration is implemented analytically, was adopted by some authors (Apsel and Luco, 1983; Hisada, 1994, 1995). In this study, based on the principle of the Repeated Averaging Method (Dahlquist and Björck, 1974; Chang, 1988), we propose an alternative, efficient, numerical method, the peak-trough averaging method (PTAM), to overcome the difficulty mentioned above. Compared with the semi-analytic method, PTAM is not only much simpler mathematically and easier to implement in practice, but also more efficient. Using numerical examples, we illustrate the validity, accuracy and efficiency of the new method.
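The averaging idea behind PTAM, taking partial sums at successive extrema of an oscillating integral and repeatedly averaging them, can be sketched on the model integral ∫₀^∞ sin(x)/x dx = π/2. This toy stand-in for a slowly converging wavenumber integral is illustrative only; the quadrature details are not from the paper:

```python
import math

def sinc(x):
    return 1.0 if x == 0.0 else math.sin(x) / x

def simpson(f, a, b, n=200):
    """Composite Simpson rule (n even)."""
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + (2 * i - 1) * h) for i in range(1, n // 2 + 1))
    s += 2 * sum(f(a + 2 * i * h) for i in range(1, n // 2))
    return s * h / 3

# Partial sums cut at successive zeros x = k*pi of the integrand: they
# oscillate around the limit pi/2 and converge only like 1/x.
partial, total = [], 0.0
for k in range(20):
    total += simpson(sinc, k * math.pi, (k + 1) * math.pi)
    partial.append(total)

# Repeated averaging: each pass replaces the sequence of partial sums by
# midpoints of neighbouring values, damping the oscillation about the limit.
seq = partial
while len(seq) > 1:
    seq = [(u + v) / 2 for u, v in zip(seq, seq[1:])]

naive_err = abs(partial[-1] - math.pi / 2)
ptam_err = abs(seq[0] - math.pi / 2)
print(ptam_err < 1e-4 < naive_err)  # averaging gains several digits -> True
```

Truncating the integral at the twentieth zero leaves an error of order 10⁻², while the repeatedly averaged value is orders of magnitude closer to the limit, which is the acceleration PTAM exploits for the wavenumber integrals.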
Decoherence in quantum lossy systems: superoperator and matrix techniques
Yazdanpanah, Navid; Tavassoly, Mohammad Kazem; Moya-Cessa, Hector Manuel
2017-06-01
Due to the unavoidably dissipative interaction of quantum systems with their environments, decoherence inevitably flows into the systems. Therefore, to achieve a better understanding of how decoherence affects damped systems, a fundamental investigation of the master equation is required. In this regard, recovering the information that has been lost due to the irreversibility of dissipative systems is also of practical importance in quantum information science. Motivated by these facts, in this work we use superoperator and matrix techniques to present two methods for obtaining the explicit form of the density operators of damped systems at arbitrary temperature T ≥ 0. To establish the potential of the suggested methods, we apply them to deduce the density operators of some well-known practical quantum systems. Using the superoperator techniques, we first obtain the density operator of a damped system consisting of a qubit interacting with a single-mode quantized field within an optical cavity. As the second system, we study the decoherence of a quantized field within a damped optical cavity. We also use our proposed matrix method to study the decoherence of a system consisting of two qubits interacting with each other via a dipole-dipole interaction and, at the same time, with a quantized field in a lossy cavity. The influence of dissipation on the decoherence of the dynamical properties of these systems is also investigated numerically. Finally, the advantages of the proposed superoperator techniques over the matrix method are explained.
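The lossy-cavity master equation being manipulated here can be illustrated with a minimal numerical sketch in a truncated Fock space; this is a direct Euler integration with assumed parameters, not the superoperator or matrix method of the paper. At T = 0 the damped cavity obeys d⟨n⟩/dt = -κ⟨n⟩, so the mean photon number should decay as n₀e^(-κt):

```python
import numpy as np

def damped_cavity_mean_n(n0, kappa, t, dim=8, steps=5000):
    """Euler-integrate d(rho)/dt = kappa*(a rho a+ - {a+a, rho}/2) from Fock |n0>."""
    a = np.diag(np.sqrt(np.arange(1.0, dim)), 1)  # annihilation operator
    ad = a.conj().T
    n_op = ad @ a
    rho = np.zeros((dim, dim), dtype=complex)
    rho[n0, n0] = 1.0  # initial Fock state |n0><n0|
    dt = t / steps
    for _ in range(steps):
        rho = rho + dt * kappa * (a @ rho @ ad - 0.5 * (n_op @ rho + rho @ n_op))
    return float(np.real(np.trace(n_op @ rho)))

# T = 0 damped cavity: <n>(t) = n0 * exp(-kappa * t).
n_num = damped_cavity_mean_n(3, 1.0, 1.0)
print(abs(n_num - 3 * np.exp(-1.0)) < 1e-3)  # -> True
```

Superoperator techniques amount to solving this same linear equation for the full density operator in closed form rather than by stepwise integration.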
Lossy transmission line model of hydrofractured well dynamics
Energy Technology Data Exchange (ETDEWEB)
Patzek, T.W. [Department of Materials Science and Mineral Engineering, University of California at Berkeley, Berkeley, CA (United States); De, A. [Earth Sciences Division, Lawrence Berkeley National Laboratory, Berkeley, CA (United States)
2000-01-01
The real-time detection of hydrofracture growth is crucial to the successful operation of water, CO₂ or steam injection wells in low-permeability reservoirs and to the prevention of subsidence and well failure. In this paper, we describe the propagation of very low frequency (1-10 to 100 Hz) Stoneley waves in a fluid-filled wellbore and their interactions with the fundamental wave mode in a vertical hydrofracture. We demonstrate that the Stoneley wave loses energy to the fracture and that the energy transfer from the wellbore to the fracture opening is most efficient in soft rocks. We conclude that placing the wave source and receivers beneath the injection packer provides the most efficient means of hydrofracture monitoring. We then present the lossy transmission line model of the wellbore and fracture for the detection and characterization of fracture state and volume. We show that this model captures the wellbore and fracture geometry, the physical properties of the injected fluid, and the wellbore-fracture system dynamics. The model is then compared with experimentally measured well responses. The simulated responses are in good agreement with published experimental data from several water injection wells with depths ranging from 1000 ft to 9000 ft. Hence, we conclude that the transmission line model of water injectors adequately captures wellbore and fracture dynamics. Using an extensive data set from the South Belridge Diatomite waterfloods, we demonstrate that even for very shallow wells the fracture size and state can be adequately recognized at the wellhead. Finally, we simulate the effects of hydrofracture extension on the transient response to a pulse signal generated at the wellhead. We show that hydrofracture extensions can indeed be detected by monitoring the wellhead pressure at sufficiently low frequencies.
A lossy graph model for delay reduction in generalized instantly decodable network coding
Douik, Ahmed S.
2014-06-01
The problem of minimizing the decoding delay in generalized instantly decodable network coding (G-IDNC) for both perfect and lossy feedback scenarios has been formulated as a maximum weight clique problem over the G-IDNC graph. In this letter, we introduce a new lossy G-IDNC graph (LG-IDNC) model to further minimize the decoding delay in lossy feedback scenarios. Whereas the G-IDNC graph represents only doubtless combinable packets, the LG-IDNC graph also represents uncertain packet combinations, arising from lossy feedback events, when the expected decoding delay of XORing them among themselves or with other certain packets is lower than that expected when sending these packets separately. We compare the decoding delay performance of the LG-IDNC and G-IDNC graphs through extensive simulations. Numerical results show that our new LG-IDNC graph formulation outperforms the G-IDNC graph formulation in all lossy feedback situations and achieves a significant improvement in decoding delay, especially when the feedback erasure probability is higher than the packet erasure probability.
Efficient Sparse Signal Transmission over a Lossy Link Using Compressive Sensing
Directory of Open Access Journals (Sweden)
Liantao Wu
2015-08-01
Reliable data transmission over a lossy communication link is expensive due to overheads for error protection. For signals that have inherent sparse structures, compressive sensing (CS) is applied to facilitate efficient sparse signal transmission over lossy communication links without data compression or error protection. The natural packet loss in the lossy link is modeled as a random sampling process of the transmitted data, and the original signal is reconstructed from the lossy transmission results using a CS-based reconstruction method at the receiving end. The impact of packet length on transmission efficiency under different channel conditions is discussed, and interleaving is incorporated to mitigate the impact of burst data loss. Extensive simulations and experiments have been conducted and compared to the traditional automatic repeat request (ARQ) interpolation technique, and very favorable results have been observed in terms of both the accuracy of the reconstructed signals and the transmission energy consumption. Furthermore, the packet length effect provides useful insights for using compressive sensing for efficient sparse signal transmission via lossy links.
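The interleaving component can be sketched concretely; the block dimensions and burst position below are illustrative, not from the paper. A block interleaver turns a burst of consecutive losses on the link into scattered losses in source order, approximating the random sampling that CS reconstruction assumes:

```python
def interleave(data, depth, width):
    """Block interleaver: write row-by-row into a depth x width array, read columns."""
    return [data[r * width + c] for c in range(width) for r in range(depth)]

def deinterleave(data, depth, width):
    """Inverse of interleave: scatter the received stream back to source order."""
    out = [None] * (depth * width)
    i = 0
    for c in range(width):
        for r in range(depth):
            out[r * width + c] = data[i]
            i += 1
    return out

depth, width = 8, 10
src = list(range(depth * width))
tx = interleave(src, depth, width)

# A burst on the link erases 8 consecutive packets (None = lost).
rx = [None if 20 <= i < 28 else v for i, v in enumerate(tx)]
restored = deinterleave(rx, depth, width)

# In source order the burst is scattered: no two lost samples are adjacent.
lost = [i for i, v in enumerate(restored) if v is None]
max_run, cur = 0, 0
for v in restored:
    cur = cur + 1 if v is None else 0
    max_run = max(max_run, cur)
print(lost)     # -> [3, 13, 23, 33, 42, 52, 62, 72]
print(max_run)  # -> 1
```

With an interleaver depth of 8, an 8-packet burst leaves at most one missing sample in any local neighbourhood of the restored stream, which is far easier for the CS solver to handle than a contiguous gap.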
StirMark Benchmark: audio watermarking attacks based on lossy compression
Steinebach, Martin; Lang, Andreas; Dittmann, Jana
2002-04-01
StirMark Benchmark is a well-known evaluation tool for watermarking robustness, and additional attacks are added to it continuously. To enable application-based evaluation, in our paper we address attacks against audio watermarks based on lossy audio compression algorithms, to be included in the test environment. We discuss the effect of different lossy compression algorithms such as MPEG-2 Audio Layer 3, Ogg or VQF on a selection of audio test data. Our focus is on changes in the basic characteristics of the audio data, such as the spectrum or average power, and on the removal of embedded watermarks. Furthermore, we compare the results of different watermarking algorithms and show that lossy compression is still a challenge for most of them. There are two strategies for adding evaluation of robustness against lossy compression to StirMark Benchmark: (a) use of existing free compression algorithms; (b) implementation of a generic lossy compression simulation. We discuss how such a model can be implemented based on the results of our tests. This method is less complex, as no real psychoacoustic model has to be applied. Our model can be used for audio watermarking evaluation in numerous application fields. As an example, we describe its importance for e-commerce applications with watermarking security.
An Evaluation Framework for Lossy Compression of Genome Sequencing Quality Values.
Alberti, Claudio; Daniels, Noah; Hernaez, Mikel; Voges, Jan; Goldfeder, Rachel L; Hernandez-Lopez, Ana A; Mattavelli, Marco; Berger, Bonnie
2016-01-01
This paper provides the specification and an initial validation of an evaluation framework for the comparison of lossy compressors of genome sequencing quality values. The goal is to define reference data, test sets, tools, and metrics that shall be used to evaluate the impact of lossy compression of quality values on human genome variant calling. The functionality of the framework is validated with reference to two state-of-the-art genomic compressors. This work has been spurred by the current activity within the ISO/IEC SC29/WG11 technical committee (a.k.a. MPEG), which is investigating the possibility of starting a standardization activity for genomic information representation.
Directory of Open Access Journals (Sweden)
Andrea Colombi
2017-08-01
Full Text Available In metamaterial science, local resonance and hybridization are key phenomena that strongly influence dispersion properties; the metasurface discussed in this article, created by a cluster of subwavelength resonant rods atop an elastic surface, is an exemplar with these features. On this metasurface, band-gaps, slow or fast waves, negative refraction, and dynamic anisotropy can all be observed by exploring frequencies and wavenumbers from the Floquet–Bloch problem and by using the Brillouin zone. These extreme characteristics, when appropriately engineered, can be used to design and control the propagation of elastic waves along the metasurface. For the exemplar we consider, two parameters are easily tuned: rod height and cluster periodicity. The height is directly related to the band-gap frequency and, hence, to the slow and fast waves, while the periodicity is related to the appearance of dynamic anisotropy. Playing with these two parameters generates a gallery of metasurface designs to control the propagation of both flexural waves in plates and surface Rayleigh waves for half-spaces. Scalability of the governing physical laws with respect to frequency and wavelength allows the application of these concepts in very different fields and over a wide range of lengthscales.
Dance Festival Momentum 2008: the feelings, thoughts, and emotions of Alatskivi Castle / Reet Kruup
Kruup, Reet
2008-01-01
On the dance festival Momentum 2008, held 25-27 April within the framework of the Alatskivi Month of Fine Arts. Alongside dance groups active in Alatskivi municipality, the festival also featured the dance theatre Tee Kuubis, the contact-dance group Kontakt from Tartu, and students of the performing arts department of the University of Tartu Viljandi Culture Academy, who performed the improvisational dance piece "Alatskivi lossi tunded, mõtted, emotsioonid..." (The feelings, thoughts, and emotions of Alatskivi Castle).
A lossy graph model for delay reduction in generalized instantly decodable network coding
Douik, Ahmed S.; Sorour, Sameh; Al-Naffouri, Tareq Y.; Alouini, Mohamed-Slim
2014-01-01
, arising from lossy feedback events, when the expected decoding delay of XORing them among themselves or with other certain packets is lower than that expected when sending these packets separately. We compare the decoding delay performance of LG-IDNC and G
Victims of the bloodshed commemorated in the castle courtyard / Veljo Kuivjõgi
Kuivjõgi, Veljo, 1951-
2006-01-01
On the launch of Endel Püüa's book "Punane terror Saaremaal 1941. aastal" (Red Terror on Saaremaa in 1941; Saaremaa: Saaremaa Muuseum, 2006) and the day of remembrance dedicated to all the victims who were executed in the courtyard of Kuressaare Castle in the summer of 1941.
Security of modified Ping-Pong protocol in noisy and lossy channel.
Han, Yun-Guang; Yin, Zhen-Qiang; Li, Hong-Wei; Chen, Wei; Wang, Shuang; Guo, Guang-Can; Han, Zheng-Fu
2014-05-12
The "Ping-Pong" (PP) protocol is a two-way quantum key protocol based on entanglement. In this protocol, Bob prepares one maximally entangled pair of qubits, and sends one qubit to Alice. Then, Alice performs some necessary operations on this qubit and sends it back to Bob. Although this protocol was proposed in 2002, its security in the noisy and lossy channel has not been proven. In this report, we add a simple and experimentally feasible modification to the original PP protocol, and prove the security of this modified PP protocol against collective attacks when the noisy and lossy channel is taken into account. Simulation results show that our protocol is practical.
Lossy/Lossless Floating/Grounded Inductance Simulation Using One DDCC
Directory of Open Access Journals (Sweden)
M. A. Ibrahim
2012-04-01
Full Text Available In this work, we present new topologies for realizing one lossless grounded inductor and two floating inductors, one lossless and one lossy, employing a single differential difference current conveyor (DDCC) and a minimum number of passive components: two resistors and one grounded capacitor. The floating inductors are based on an ordinary dual-output differential difference current conveyor (DO-DDCC), while the grounded lossless inductor is based on a modified dual-output differential difference current conveyor (MDO-DDCC). The proposed lossless floating inductor is obtained from the lossy one by employing a negative impedance converter (NIC). The non-ideality effects of the active element on the simulated inductors are investigated. To demonstrate the performance of the proposed grounded inductance simulator, as an example it is used to construct a parallel resonant circuit. SPICE simulation results are given to confirm the theoretical analysis.
Adiabatic passage for a lossy two-level quantum system by a complex time method
International Nuclear Information System (INIS)
Dridi, G; Guérin, S
2012-01-01
Using a complex time method with the formalism of Stokes lines, we establish a generalization of the Davis–Dykhne–Pechukas formula which gives in the adiabatic limit the transition probability of a lossy two-state system driven by an external frequency-chirped pulse-shaped field. The conditions that allow this generalization are derived. We illustrate the result with the dissipative Allen–Eberly and Rosen–Zener models. (paper)
Lossy compression of TPC data and trajectory tracking efficiency for the ALICE experiment
International Nuclear Information System (INIS)
Nicolaucig, A.; Ivanov, M.; Mattavelli, M.
2003-01-01
In this paper a quasi-lossless algorithm for the on-line compression of the data generated by the Time Projection Chamber (TPC) detector of the ALICE experiment at CERN is described. The algorithm is based on a lossy source-code modeling technique, i.e. it is based on a source model which is lossy if samples of the TPC signal are considered one by one; conversely, the source model is lossless or quasi-lossless if certain physical quantities that are of main interest for the experiment are considered. These quantities are the area and the location of the center of mass of each TPC signal pulse, representing the pulse charge and the time localization of the pulse. To evaluate the consequences of the error introduced by the lossy compression process, the results of the trajectory-tracking algorithms that process data off-line after the experiment are analyzed, in particular their sensitivity to the noise introduced by the compression. Two different versions of these off-line algorithms, performing cluster finding and particle tracking, are described, and the results on how these algorithms are affected by the lossy compression are reported. Entropy coding can be applied to the set of events defined by the source model to reduce the bit rate to the corresponding source entropy. Using TPC data simulated according to the expected ALICE TPC performance, the compression algorithm achieves a data reduction to between 34.2% and 23.7% of the original data rate, depending on the desired precision of the pulse center of mass. The number of operations per input symbol required to implement the algorithm is relatively low, so that a real-time implementation of the compression process embedded in the TPC data-acquisition chain using low-cost integrated electronics is a realistic option for effectively reducing the data storage cost of the ALICE experiment.
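The two quantities this source model preserves, the pulse area (charge) and the time center of mass, are cheap to compute per pulse. A minimal sketch follows; the function name and baseline handling are illustrative assumptions, not the ALICE code.

```python
import numpy as np

def pulse_summary(samples, baseline=0.0):
    """Reduce a digitized detector pulse to the two quantities a lossy
    source model of this kind preserves: total charge (area) and the
    charge-weighted time position of its center of mass."""
    s = np.asarray(samples, dtype=float) - baseline
    s = np.clip(s, 0.0, None)               # ignore undershoot below baseline
    area = float(s.sum())                   # pulse charge in ADC-count units
    if area == 0.0:
        return 0.0, None                    # empty pulse: no centroid
    t = np.arange(len(s))
    centroid = float((t * s).sum() / area)  # charge-weighted mean time
    return area, centroid

area, com = pulse_summary([0, 1, 4, 9, 4, 1, 0])
```

For the symmetric toy pulse above the centroid falls on the peak sample; entropy coding of such (area, centroid) pairs is what drives the reported bit-rate reduction.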
Integrated Circuit Interconnect Lines on Lossy Silicon Substrate with Finite Element Method
Sarhan M. Musa; Matthew N. O. Sadiku
2014-01-01
The silicon substrate has a significant effect on the inductance parameter of a lossy interconnect line on integrated circuit. It is essential to take this into account in determining the transmission line electrical parameters. In this paper, a new quasi-TEM capacitance and inductance analysis of multiconductor multilayer interconnects is successfully demonstrated using finite element method (FEM). We specifically illustrate the electrostatic modeling of single and coupled in...
Volumetric Medical Image Coding: An Object-based, Lossy-to-lossless and Fully Scalable Approach
Danyali, Habibiollah; Mertins, Alfred
2011-01-01
In this article, an object-based, highly scalable, lossy-to-lossless 3D wavelet coding approach for volumetric medical image data (e.g., magnetic resonance (MR) and computed tomography (CT)) is proposed. The new method, called 3DOBHS-SPIHT, is based on the well-known set partitioning in hierarchical trees (SPIHT) algorithm and supports both quality and resolution scalability. The 3D input data is grouped into groups of slices (GOS) and each GOS is encoded and decoded as a separate unit. The symmetric tree definition of the original 3DSPIHT is improved by introducing a new asymmetric tree structure. While preserving the compression efficiency, the new tree structure allows for a small size of each GOS, which not only reduces memory consumption during the encoding and decoding processes, but also facilitates more efficient random access to certain segments of slices. To achieve more compression efficiency, the algorithm only encodes the main object of interest in each 3D data set, which can have any arbitrary shape, and ignores the unnecessary background. The experimental results on some MR data sets show the good performance of the 3DOBHS-SPIHT algorithm for multi-resolution lossy-to-lossless coding. The compression efficiency, full scalability, and object-based features of the proposed approach, besides its lossy-to-lossless coding support, make it a very attractive candidate for volumetric medical image information archiving and transmission applications. PMID:22606653
A singular-value method for reconstruction of nonradial and lossy objects.
Jiang, Wei; Astheimer, Jeffrey; Waag, Robert
2012-03-01
Efficient inverse scattering algorithms for nonradial lossy objects are presented using singular-value decomposition to form reduced-rank representations of the scattering operator. These algorithms extend eigenfunction methods that are not applicable to nonradial lossy scattering objects because the scattering operators for these objects do not have orthonormal eigenfunction decompositions. A method of local reconstruction by segregation of scattering contributions from different local regions is also presented. Scattering from each region is isolated by forming a reduced-rank representation of the scattering operator that has domain and range spaces comprised of far-field patterns with retransmitted fields that focus on the local region. Methods for the estimation of the boundary, average sound speed, and average attenuation slope of the scattering object are also given. These methods yielded approximations of scattering objects that were sufficiently accurate to allow residual variations to be reconstructed in a single iteration. Calculated scattering from a lossy elliptical object with a random background, internal features, and white noise is used to evaluate the proposed methods. Local reconstruction yielded images with spatial resolution that is finer than a half wavelength of the center frequency and reproduces sound speed and attenuation slope with relative root-mean-square errors of 1.09% and 11.45%, respectively.
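The core tool here, a reduced-rank representation via singular-value decomposition, applies precisely because the SVD exists for any matrix, including the non-normal operators of nonradial lossy objects that admit no orthonormal eigenfunction decomposition. A toy sketch, with a random complex matrix standing in for the discretized scattering operator (the sizes and rank are arbitrary assumptions):

```python
import numpy as np

def reduced_rank(S, rank):
    """Truncated-SVD approximation of a scattering matrix; works even when
    S is non-normal and has no orthonormal eigenfunction decomposition."""
    U, s, Vh = np.linalg.svd(S, full_matrices=False)
    return U[:, :rank] @ np.diag(s[:rank]) @ Vh[:rank, :]

rng = np.random.default_rng(1)
# An arbitrary complex matrix stands in for the discretized operator
S = rng.normal(size=(40, 40)) + 1j * rng.normal(size=(40, 40))
S5 = reduced_rank(S, 5)
rel_err = np.linalg.norm(S - S5) / np.linalg.norm(S)
```

In the local-reconstruction method described above, the retained singular vectors would additionally be chosen so that the retransmitted fields focus on one region at a time.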
Using off-the-shelf lossy compression for wireless home sleep staging.
Lan, Kun-Chan; Chang, Da-Wei; Kuo, Chih-En; Wei, Ming-Zhi; Li, Yu-Hung; Shaw, Fu-Zen; Liang, Sheng-Fu
2015-05-15
Recently, there has been increasing interest in the development of wireless home sleep staging systems that allow the patient to be monitored remotely while remaining in the comfort of their home. However, transmitting large amounts of polysomnography (PSG) data over the Internet is an important issue that needs to be considered. In this work, we aim to reduce the amount of PSG data which has to be transmitted or stored, while having as little impact as possible on the information in the signal relevant to classifying sleep stages. We examine the effects of off-the-shelf lossy compression on an all-night PSG dataset from 20 healthy subjects, in the context of automated sleep staging. The popular compression method Set Partitioning in Hierarchical Trees (SPIHT) was used, and a range of compression levels was selected in order to compress the signals with various degrees of loss. In addition, a rule-based automatic sleep staging method was used to classify the sleep stages. Considering the criteria of clinical usefulness, the experimental results show that the system can achieve more than 60% energy saving with high accuracy (>84%) in classifying sleep stages by using a lossy compression algorithm like SPIHT. As far as we know, our study is the first that focuses on how much loss can be tolerated in compressing complex multi-channel PSG data for sleep analysis. We demonstrate the feasibility of using lossy SPIHT compression for wireless home sleep staging. Copyright © 2015 Elsevier B.V. All rights reserved.
Radiance intensity enhanced by thin inhomogeneous lossy films
International Nuclear Information System (INIS)
Ben-Abdallah, Philippe; Ni Bo
2004-01-01
Basically, the classical radiative transfer theory assumes that the coherent component of the radiation field is equal to zero, and heuristic considerations about energy conservation are used in the phenomenological derivation of the radiative transfer equation (RTE). Here a self-consistent theory is presented to investigate radiative transport in the presence of diffraction processes within thin inhomogeneous films. The linear-optics problem of the transport of scalar radiation within a film is solved, a new definition of the radiance is introduced in agreement with earlier definitions, and a corresponding radiative transfer equation is derived. The influence of spatial variations of the bulk properties on the propagating mode is described in detail. It is analytically predicted that, unlike homogeneous media, an inhomogeneous film can enhance the radiance intensity in spite of diffraction and local extinction. From a practical point of view, the results of this work should be useful for the optimal design of many thermoelectric devices, such as new generations of photovoltaic cells.
Sethi, M.; Sharma, A.; Vasishth, A.
2017-05-01
The present paper deals with the mathematical modeling of the propagation of torsional surface waves in a non-homogeneous transversely isotropic elastic half-space under a rigid layer. Both the rigidities and the density of the half-space are assumed to vary inversely linearly with depth. The separation of variables method has been used to obtain analytical solutions for the dispersion equation of the torsional surface waves. The effects of the non-homogeneities on the phase velocity of torsional surface waves are shown graphically. Dispersion equations have also been derived for some particular cases, which are in complete agreement with classical results.
Edge-Based Image Compression with Homogeneous Diffusion
Mainberger, Markus; Weickert, Joachim
It is well-known that edges contain semantically important image information. In this paper we present a lossy compression method for cartoon-like images that exploits information at image edges. These edges are extracted with the Marr-Hildreth operator followed by hysteresis thresholding. Their locations are stored in a lossless way using JBIG. Moreover, we encode the grey or colour values at both sides of each edge by applying quantisation, subsampling and PAQ coding. In the decoding step, information outside these encoded data is recovered by solving the Laplace equation, i.e. we inpaint with the steady state of a homogeneous diffusion process. Our experiments show that the suggested method outperforms the widely-used JPEG standard and can even beat the advanced JPEG2000 standard for cartoon-like images.
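The decoding step described above, recovering missing data as the steady state of homogeneous diffusion, amounts to solving the Laplace equation with the stored pixels held fixed. A minimal sketch using plain Jacobi iteration on a toy two-region image (the image, mask, and iteration count are illustrative assumptions; a real codec would use a much faster solver):

```python
import numpy as np

def diffusion_inpaint(known, mask, iters=2000):
    """Fill unknown pixels with the steady state of homogeneous diffusion:
    repeatedly average the 4-neighbourhood, keeping stored pixels fixed."""
    u = np.where(mask, known, known[mask].mean())   # neutral initial guess
    for _ in range(iters):
        avg = 0.25 * (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
                      np.roll(u, 1, 1) + np.roll(u, -1, 1))
        u = np.where(mask, known, avg)              # re-pin stored pixels
    return u

# Toy cartoon image: dark left half, bright right half; only the border
# and the pixels on both sides of the vertical edge are stored.
img = np.zeros((16, 16))
img[:, 8:] = 1.0
mask = np.zeros_like(img, dtype=bool)
mask[:, [0, 7, 8, 15]] = True
mask[[0, 15], :] = True
rec = diffusion_inpaint(img, mask)
max_err = np.abs(rec - img).max()
```

For this piecewise-constant image the harmonic fill reproduces the original exactly, which is why edge-based storage suits cartoon-like content.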
Mechanical Homogenization Increases Bacterial Homogeneity in Sputum
Stokell, Joshua R.; Khan, Ammad
2014-01-01
Sputum obtained from patients with cystic fibrosis (CF) is highly viscous and often heterogeneous in bacterial distribution. Adding dithiothreitol (DTT) is the standard method for liquefaction prior to processing sputum for molecular detection assays. To determine if DTT treatment homogenizes the bacterial distribution within sputum, we measured the difference in mean total bacterial abundance and abundance of Burkholderia multivorans between aliquots of DTT-treated sputum samples with and without a mechanical homogenization (MH) step using a high-speed dispersing element. Additionally, we measured the effect of MH on bacterial abundance. We found a significant difference between the mean bacterial abundances in aliquots that were subjected to only DTT treatment and those of the aliquots which included an MH step (all bacteria, P = 0.04; B. multivorans, P = 0.05). There was no significant effect of MH on bacterial abundance in sputum. Although our results are from a single CF patient, they indicate that mechanical homogenization increases the homogeneity of bacteria in sputum. PMID:24759710
Progress with lossy compression of data from the Community Earth System Model
Xu, H.; Baker, A.; Hammerling, D.; Li, S.; Clyne, J.
2017-12-01
Climate models, such as the Community Earth System Model (CESM), generate massive quantities of data, particularly when run at high spatial and temporal resolutions. The burden of storage is further exacerbated by creating large ensembles, generating large numbers of variables, outputting at high frequencies, and duplicating data archives (to protect against disk failures). Applying lossy compression methods to CESM datasets is an attractive means of reducing data storage requirements, but ensuring that the loss of information does not negatively impact science objectives is critical. In particular, test methods are needed to evaluate whether critical features (e.g., extreme values and spatial and temporal gradients) have been preserved and to boost scientists' confidence in the lossy compression process. We will provide an overview of our progress in applying lossy compression to CESM output and describe our unique suite of metric tests that evaluate the impact of information loss. Further, we will describe our process for choosing an appropriate compression algorithm (and its associated parameters) given the diversity of CESM data (e.g., variables may be constant, smooth, change abruptly, contain missing values, or have large ranges). Traditional compression algorithms, such as those used for images, are not necessarily ideally suited for floating-point climate simulation data, and different methods may have different strengths and be more effective for certain types of variables than others. We will discuss our progress towards our ultimate goal of developing an automated multi-method parallel approach for compression of climate data that both maximizes data reduction and minimizes the impact of data loss on science results.
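Metric tests of the kind described, checking that the worst-case error stays bounded and that extremes and gradients survive compression, reduce to simple array comparisons. The specific metrics and the quantization example below are illustrative assumptions, not the CESM test suite.

```python
import numpy as np

def compression_metrics(orig, recon):
    """Checks of the kind one might run on lossily compressed model output:
    worst-case error, RMS error, and survival of extremes and gradients."""
    diff = recon - orig
    return {
        "max_abs_err": float(np.abs(diff).max()),
        "rmse": float(np.sqrt(np.mean(diff ** 2))),
        # how far the field maximum moved (extreme-value preservation)
        "max_shift": float(abs(recon.max() - orig.max())),
        # error in first differences (gradient preservation)
        "grad_rmse": float(np.sqrt(np.mean((np.diff(recon) - np.diff(orig)) ** 2))),
    }

x = np.sin(np.linspace(0.0, 6.283, 1000))   # stand-in for a smooth variable
x_lossy = np.round(x * 100) / 100           # crude 2-decimal quantization
m = compression_metrics(x, x_lossy)
```

A screening harness would evaluate such metrics per variable and flag any that exceed science-driven thresholds before committing to a compressor configuration.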
Studies of imaging characteristics for a slab of a lossy left-handed material
International Nuclear Information System (INIS)
Shen Linfang; He Sailing
2003-01-01
The characteristics of an imaging system formed by a slab of a lossy left-handed material (LHM) are studied. The transfer function of the LHM imaging system is written in an appropriate product form with each term having a clear physical interpretation. A tiny loss in the LHM may suppress the transmission of evanescent waves through the LHM slab, and this is explained physically. An analytical expression for the resolution of the imaging system is derived. It is shown that it is impossible to achieve subwavelength imaging with a realistic LHM imaging system unless the LHM slab is much thinner than the wavelength.
The Application of RPL Routing Protocol in Low Power Wireless Sensor and Lossy Networks
Directory of Open Access Journals (Sweden)
Xun Yang
2014-05-01
Full Text Available With the continuous development of computer and information technology, wireless sensor networks have changed the way people live; at the same time, as one of the technologies that will continue to improve daily life, how to better integrate them with the RPL routing protocol has become a current research focus. This paper starts from the wireless sensor network and briefly discusses the concept, followed by a systematic exposition of the RPL routing protocol's development background, relevant standards, working principle, topology, and related terminology, and finally explores applications of the RPL routing protocol in low-power and lossy wireless sensor networks.
Lossy effects on the lateral shifts in negative-phase-velocity medium
International Nuclear Information System (INIS)
You Yuan
2009-01-01
Theoretical investigations of the lateral shifts of the reflected and transmitted beams were performed, using the stationary-phase approach, for the planar interface between a conventional medium and a lossy negative-phase-velocity medium. The lateral shifts exhibit different behaviors above and below a certain angle, for both incident p-polarized and incident s-polarized plane waves. Loss in the negative-phase-velocity medium affects the lateral shifts greatly, and may cause changes from negative to positive values for p-polarized incidence.
Socorro, A. B.; Corres, J. M.; Del Villar, I.; Matias, I. R.; Arregui, F. J.
2014-05-01
This work presents the development and testing of an anti-gliadin antibody biosensor based on lossy mode resonances (LMRs) to detect celiac disease. Several polyelectrolytes were used in layer-by-layer assembly processes to generate the LMR and to fabricate a gliadin-embedded thin film. The LMR shifted 20 nm when immersed in a 5 ppm anti-gliadin antibody PBS solution, which makes this bioprobe suitable for detecting celiac disease. This is the first time, to our knowledge, that LMRs have been used to detect celiac disease, and these results suggest promising prospects for the use of such phenomena as biological detectors.
Functionality and homogeneity.
2011-01-01
Functionality and homogeneity are two of the five Sustainable Safety principles. The functionality principle aims for roads to have but one exclusive function and distinguishes between traffic function (flow) and access function (residence). The homogeneity principle aims at differences in mass,
Homogenization of Mammalian Cells.
de Araújo, Mariana E G; Lamberti, Giorgia; Huber, Lukas A
2015-11-02
Homogenization is the name given to the methodological steps necessary for releasing organelles and other cellular constituents as a free suspension of intact individual components. Most homogenization procedures used for mammalian cells (e.g., cavitation pump and Dounce homogenizer) rely on mechanical force to break the plasma membrane and may be supplemented with osmotic or temperature alterations to facilitate membrane disruption. In this protocol, we describe a syringe-based homogenization method that does not require specialized equipment, is easy to handle, and gives reproducible results. The method may be adapted for cells that require hypotonic shock before homogenization. We routinely use it as part of our workflow to isolate endocytic organelles from mammalian cells. © 2015 Cold Spring Harbor Laboratory Press.
A Lossy Counting-Based State of Charge Estimation Method and Its Application to Electric Vehicles
Directory of Open Access Journals (Sweden)
Hong Zhang
2015-12-01
Full Text Available Estimating the residual capacity or state of charge (SoC) of commercial batteries on-line, without destroying them or interrupting the power supply, is quite a challenging task for electric vehicle (EV) designers. Many Coulomb counting-based methods have been used to calculate the remaining capacity in EV batteries and other portable devices. The main disadvantages of these methods are the cumulative error and the time-varying Coulombic efficiency, which are greatly influenced by the operating state (SoC, temperature, and current). To deal with this problem, we propose a lossy counting-based Coulomb counting method for estimating the available capacity or SoC. The initial capacity of the tested battery is obtained from the open circuit voltage (OCV). The charging/discharging efficiencies, used for compensating the Coulombic losses, are calculated by the lossy counting-based method. The measurement drift, resulting from the current sensor, is corrected with the distorted Coulombic efficiency matrix. Simulations and experimental results show that the proposed method is both effective and convenient.
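The baseline that the paper improves on, plain Coulomb counting with efficiency compensation, is a short integration loop. The sketch below is that baseline only; the constant efficiencies are illustrative assumptions (the paper's contribution is estimating them on-line with a lossy-counting scheme).

```python
def coulomb_count_soc(soc0, capacity_ah, samples, eta_chg=0.98, eta_dis=1.0):
    """Plain Coulomb counting with separate charge/discharge efficiencies.
    samples: iterable of (current_a, dt_s); positive current = charging.
    The constant efficiency values are illustrative, not from the paper."""
    soc = soc0
    for current_a, dt_s in samples:
        eta = eta_chg if current_a > 0 else eta_dis
        # integrate current into ampere-hours, scaled by cell capacity
        soc += eta * current_a * dt_s / (capacity_ah * 3600.0)
        soc = min(max(soc, 0.0), 1.0)   # clamp to the physical range
    return soc

# Discharge a 2 Ah cell at 1 A for 36 s from 50% SoC: a 0.5% drop
soc = coulomb_count_soc(0.5, 2.0, [(-1.0, 36.0)])
```

The cumulative-error problem the abstract mentions is visible here: any bias in `current_a` or in the efficiencies integrates without bound, which is what the OCV-based initialization and drift correction address.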
Impact of lossy compression on diagnostic accuracy of radiographs for periapical lesions
Eraso, Francisco E.; Analoui, Mostafa; Watson, Andrew B.; Rebeschini, Regina
2002-01-01
OBJECTIVES: The purpose of this study was to evaluate lossy Joint Photographic Experts Group (JPEG) compression for endodontic pretreatment digital radiographs. STUDY DESIGN: Fifty clinical charge-coupled device-based digital radiographs depicting periapical areas were selected. Each image was compressed at ratios of 2, 4, 8, 16, 32, 48, and 64. One root per image was marked for examination. Images were randomized and viewed by four clinical observers under standardized viewing conditions. Each observer read the image set three times, with at least two weeks between readings. Three preselected sites per image (mesial, distal, apical) were scored on a five-point confidence scale. A panel of three examiners scored the uncompressed images, with a consensus score for each site. The consensus score was used as the baseline for assessing the impact of lossy compression on the diagnostic value of the images. The mean absolute error between consensus and observer scores was computed for each observer, site, and reading session. RESULTS: Balanced one-way analysis of variance for all observers indicated that for compression ratios of 48 and 64, there was a significant difference between the mean absolute errors of uncompressed and compressed images (P < .05). After converting the five-point scores to two-level diagnostic values, diagnostic accuracy was strongly correlated (R(2) = 0.91) with the compression ratio. CONCLUSION: The results of this study suggest that high compression ratios can have a severe impact on the diagnostic quality of digital radiographs for the detection of periapical lesions.
Directory of Open Access Journals (Sweden)
Xiangwei Li
2014-12-01
Full Text Available Compressive Sensing Imaging (CSI) is a new framework for image acquisition, which enables the simultaneous acquisition and compression of a scene. Since the characteristics of Compressive Sensing (CS) acquisition are very different from those of traditional image acquisition, general image compression solutions may not work well. In this paper, we propose an efficient lossy compression solution for CS acquisition of images by considering the distinctive features of CSI. First, we design an adaptive compressive sensing acquisition method for images according to the sampling rate, which achieves better CS reconstruction quality for the acquired image. Second, we develop a universal quantization for the CS measurements obtained from CS acquisition without knowing any a priori information about the captured image. Finally, we apply these two methods in the CSI system for efficient lossy compression of CS acquisition. Simulation results demonstrate that the proposed solution improves the rate-distortion performance by 0.4-2 dB compared with the current state of the art, while maintaining a low computational complexity.
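The "universal quantization" step, applied blindly to the measurement vector with no prior about the image, can be illustrated with a plain uniform quantizer whose range is taken from the data itself. The bit depth and mid-point reconstruction below are assumptions for the sketch, not the paper's design.

```python
import numpy as np

def uniform_quantize(y, nbits):
    """Blind uniform quantizer: the step size is derived from the
    measurement vector itself, with no prior on the captured image."""
    lo, hi = float(y.min()), float(y.max())
    levels = 2 ** nbits
    step = (hi - lo) / levels
    idx = np.clip(np.floor((y - lo) / step), 0, levels - 1).astype(int)
    y_hat = lo + (idx + 0.5) * step      # mid-point reconstruction
    return idx, y_hat

rng = np.random.default_rng(2)
y = rng.normal(size=2048)                # stand-in CS measurement vector
idx, y_hat = uniform_quantize(y, 6)
max_err = np.abs(y - y_hat).max()
```

The integer indices `idx` would then be entropy-coded; the CS reconstruction at the decoder operates on `y_hat` in place of the exact measurements.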
Design of a receiver operating characteristic (ROC) study of 10:1 lossy image compression
Collins, Cary A.; Lane, David; Frank, Mark S.; Hardy, Michael E.; Haynor, David R.; Smith, Donald V.; Parker, James E.; Bender, Gregory N.; Kim, Yongmin
1994-04-01
The digital archiving system at Madigan Army Medical Center (MAMC) uses a 10:1 lossy data compression algorithm for most forms of computed radiography. A systematic study on the potential effect of lossy image compression on patient care has been initiated with a series of studies focused on specific diagnostic tasks. The studies are based upon the receiver operating characteristic (ROC) method of analysis for diagnostic systems. The null hypothesis is that observer performance with approximately 10:1 compressed and decompressed images is not different from using original, uncompressed images for detecting subtle pathologic findings seen on computed radiographs of bone, chest, or abdomen, when viewed on a high-resolution monitor. Our design involves collecting cases from eight pathologic categories. Truth is determined by committee using confirmatory studies performed during routine clinical practice whenever possible. Software has been developed to aid in case collection and to allow reading of the cases for the study using stand-alone Siemens Litebox workstations. Data analysis uses two methods, ROC analysis and free-response ROC (FROC) methods. This study will be one of the largest ROC/FROC studies of its kind and could benefit clinical radiology practice using PACS technology. The study design and results from a pilot FROC study are presented.
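For a single reader and finding, the ROC analysis underlying such a study reduces to sweeping a threshold over the confidence scores and integrating the resulting curve. A minimal sketch in pure Python (the toy scores are invented; real studies use fitted binormal or FROC models):

```python
def roc_points(scores_pos, scores_neg):
    """Empirical ROC curve and area under it (AUC), computed from observer
    confidence scores for abnormal (pos) and normal (neg) cases."""
    thresholds = sorted(set(scores_pos) | set(scores_neg), reverse=True)
    pts = [(0.0, 0.0)]                   # (false-pos rate, true-pos rate)
    for th in thresholds:
        fpr = sum(s >= th for s in scores_neg) / len(scores_neg)
        tpr = sum(s >= th for s in scores_pos) / len(scores_pos)
        pts.append((fpr, tpr))
    auc = sum((x2 - x1) * (y1 + y2) / 2.0        # trapezoidal rule
              for (x1, y1), (x2, y2) in zip(pts, pts[1:]))
    return pts, auc

# A perfect reader: every abnormal case scored above every normal case
pts, auc = roc_points([4, 5, 5], [1, 2, 3])
```

The study's null hypothesis is then a statement about the difference in such AUCs between compressed and uncompressed readings.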
Sink-to-Sink Coordination Framework Using RPL: Routing Protocol for Low Power and Lossy Networks
Directory of Open Access Journals (Sweden)
Meer M. Khan
2016-01-01
Full Text Available RPL (Routing Protocol for Low Power and Lossy Networks) is recommended by the Internet Engineering Task Force (IETF) for IPv6-based LLNs (Low Power and Lossy Networks). RPL uses a proactive routing approach, and each node always maintains an active path to the sink node. Sink-to-sink coordination defines syntax and semantics for the exchange of network-defined parameters among sink nodes, such as network size, traffic load, and sink mobility. The coordination allows a sink to learn about the network conditions of neighboring sinks. As a result, sinks can make coordinated decisions to increase/decrease their network size for optimizing overall network performance in terms of load sharing, increasing network lifetime, and lowering end-to-end latency of communication. Currently, RPL does not provide any coordination framework that defines message exchange between different sink nodes for enhancing network performance. In this paper, a sink-to-sink coordination framework is proposed which utilizes the periodic route maintenance messages issued by RPL to exchange the network status observed at a sink with its neighboring sinks. The proposed framework distributes network load among sink nodes to achieve higher throughput and longer network lifetime.
Recent advances in lossy compression of scientific floating-point data
Lindstrom, P.
2017-12-01
With a continuing exponential trend in supercomputer performance, ever larger data sets are being generated through numerical simulation. Bandwidth and storage capacity are, however, not keeping pace with this increase in data size, causing significant data movement bottlenecks in simulation codes and substantial monetary costs associated with archiving vast volumes of data. Worse yet, ever smaller fractions of data generated can be stored for further analysis, where scientists frequently rely on decimating or averaging large data sets in time and/or space. One way to mitigate these problems is to employ data compression to reduce data volumes. However, lossless compression of floating-point data can achieve only very modest size reductions on the order of 10-50%. We present ZFP and FPZIP, two state-of-the-art lossy compressors for structured floating-point data that routinely achieve one to two orders of magnitude reduction with little to no impact on the accuracy of visualization and quantitative data analysis. We provide examples of the use of such lossy compressors in climate and seismic modeling applications to effectively accelerate I/O and reduce storage requirements. We further discuss how the design decisions behind these and other compressors impact error distributions and other statistical and differential properties, including derived quantities of interest relevant to each science application.
Directory of Open Access Journals (Sweden)
A.A. Kobozeva
2016-09-01
Full Text Available The problem of detecting digital image falsification performed by cloning is considered; cloning is one of the most frequently used forgery tools, implemented in all modern graphic editors. Aim: The aim of the work is the further development of the authors' earlier approach to detecting cloning when the cloned image is saved in a lossy format. Materials and Methods: Further development of a new approach to detecting cloning results in digital images is presented. The approach is based on accounting for the small changes, during compression, of the volume of the cylindrical body with generatrix parallel to the OZ axis, bounded above by the plot of the interpolating function of the brightness matrix of the analyzed image and bounded below by the XOY plane. Results: The offered approach is adapted to compression of the cloned image with an arbitrary compression quality factor (compression ratio). The validity of the approach under compression of the cloned image by algorithms other than the JPEG standard is shown: JPEG2000, and compression using low-rank approximations of the image matrix (matrix blocks). The results of a computational experiment are given. It is shown that the developed approach can also be used to detect the results of cloning in digital video under lossy compression applied after the cloning process.
Receiver-Assisted Congestion Control to Achieve High Throughput in Lossy Wireless Networks
Shi, Kai; Shu, Yantai; Yang, Oliver; Luo, Jiarong
2010-04-01
Many applications nowadays require fast data transfer in high-speed wireless networks. However, due to its conservative congestion control algorithm, the Transmission Control Protocol (TCP) cannot effectively utilize the network capacity in lossy wireless networks. In this paper, we propose a receiver-assisted congestion control mechanism (RACC) in which the sender performs loss-based control while the receiver performs delay-based control. The receiver measures the network bandwidth based on the packet interarrival interval and uses it to compute a congestion window size deemed appropriate for the sender. After receiving this advertised value as feedback from the receiver, the sender uses the additive increase, multiplicative decrease (AIMD) mechanism to compute the congestion window size to be used. By integrating loss-based and delay-based congestion control, our mechanism can mitigate the effect of wireless losses, alleviate the timeout effect, and therefore make better use of network bandwidth. Simulation and experiment results in various scenarios show that our mechanism outperforms conventional TCP in high-speed and lossy wireless environments.
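The interplay the abstract describes, a loss-based AIMD sender capped by the receiver's delay-based window estimate, can be sketched minimally as follows. The parameters and the loss pattern are illustrative assumptions, not the authors' algorithm in detail.

```python
def aimd_update(cwnd, loss, rwnd, alpha=1.0, beta=0.5):
    """One AIMD step, capped by the receiver-advertised window rwnd.

    Illustrative sketch of the RACC idea: the receiver's delay-based
    estimate (rwnd, derived from measured bandwidth) bounds the
    sender's loss-based AIMD window, so random wireless losses do not
    collapse throughput below what the measured bandwidth supports.
    """
    cwnd = cwnd * beta if loss else cwnd + alpha  # multiplicative decrease / additive increase
    return min(cwnd, rwnd)

cwnd = 10.0
rwnd = 32.0   # receiver-side estimate: bandwidth x RTT from packet interarrival times
for loss in [False, False, True, False]:
    cwnd = aimd_update(cwnd, loss, rwnd)
print(cwnd)   # 10 -> 11 -> 12 -> 6 -> 7, so prints 7.0
```

Without the `rwnd` cap this is plain TCP-style AIMD; the cap is where the receiver's delay-based measurement enters.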
The SPH homogeneization method
International Nuclear Information System (INIS)
Kavenoky, Alain
1978-01-01
The homogeneization of a uniform lattice is a rather well understood topic, while difficult problems arise if the lattice becomes irregular. The SPH homogeneization method is an attempt to generate homogeneized cross sections for an irregular lattice. Section 1 summarizes the treatment of an isolated cylindrical cell with an entering surface current (in one-velocity theory); Section 2 is devoted to the extension of the SPH method to assembly problems. Finally, Section 3 presents the generalisation to general multigroup problems. Numerical results are obtained for a PXR rod bundle assembly in Section 4.
Homogeneity of Inorganic Glasses
DEFF Research Database (Denmark)
Jensen, Martin; Zhang, L.; Keding, Ralf
2011-01-01
Homogeneity of glasses is a key factor determining their physical and chemical properties and overall quality. However, quantification of the homogeneity of a variety of glasses is still a challenge for glass scientists and technologists. Here, we show a simple approach by which the homogeneity...... of different glass products can be quantified and ranked. This approach is based on determination of both the optical intensity and the dimension of the striations in glasses. These two characteristic values are obtained using the image processing method established recently. The logarithmic ratio between...
Benchmarking monthly homogenization algorithms
Venema, V. K. C.; Mestre, O.; Aguilar, E.; Auer, I.; Guijarro, J. A.; Domonkos, P.; Vertacnik, G.; Szentimrey, T.; Stepanek, P.; Zahradnicek, P.; Viarre, J.; Müller-Westermeier, G.; Lakatos, M.; Williams, C. N.; Menne, M.; Lindau, R.; Rasol, D.; Rustemeier, E.; Kolokythas, K.; Marinova, T.; Andresen, L.; Acquaotta, F.; Fratianni, S.; Cheval, S.; Klancar, M.; Brunetti, M.; Gruber, C.; Prohom Duran, M.; Likso, T.; Esteban, P.; Brandsma, T.
2011-08-01
The COST (European Cooperation in Science and Technology) Action ES0601: Advances in homogenization methods of climate series: an integrated approach (HOME) has executed a blind intercomparison and validation study for monthly homogenization algorithms. Time series of monthly temperature and precipitation were evaluated because of their importance for climate studies and because they represent two important types of statistics (additive and multiplicative). The algorithms were validated against a realistic benchmark dataset. The benchmark contains real inhomogeneous data as well as simulated data with inserted inhomogeneities. Random break-type inhomogeneities were added to the simulated datasets, modeled as a Poisson process with normally distributed breakpoint sizes. To approximate real-world conditions, breaks were introduced that occur simultaneously in multiple station series within a simulated network of station data. The simulated time series also contained outliers, missing data periods and local station trends. Further, a stochastic nonlinear global (network-wide) trend was added. Participants provided 25 separate homogenized contributions as part of the blind study, as well as 22 additional solutions submitted after the details of the imposed inhomogeneities were revealed. These homogenized datasets were assessed by a number of performance metrics including (i) the centered root mean square error relative to the true homogeneous value at various averaging scales, (ii) the error in linear trend estimates and (iii) traditional contingency skill scores. The metrics were computed using both the individual station series and the network-average regional series. The performance of the contributions depends significantly on the error metric considered. Contingency scores by themselves are not very informative. Although relative homogenization algorithms typically improve the homogeneity of temperature data, only the best ones improve precipitation data.
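The first performance metric listed above, the centered root mean square error, can be sketched in a few lines: each series is reduced to anomalies about its own mean before differencing, so a constant offset between a homogenized series and the truth is not penalized. The toy series below are illustrative, not HOME benchmark data.

```python
import math

def centered_rmse(homogenized, truth):
    """Centered root mean square error: subtract each series' mean
    first, so only the shape of the deviations is scored, not any
    constant offset between the two series."""
    mh = sum(homogenized) / len(homogenized)
    mt = sum(truth) / len(truth)
    sq = [((h - mh) - (t - mt)) ** 2 for h, t in zip(homogenized, truth)]
    return math.sqrt(sum(sq) / len(sq))

truth = [10.0, 10.2, 10.1, 10.3]        # true homogeneous monthly values
homog = [10.5, 10.7, 10.8, 10.8]        # homogenized series: 0.5 offset plus small errors
print(round(centered_rmse(homog, truth), 3))   # -> 0.087
```

The plain (uncentered) RMSE of the same pair would be dominated by the 0.5 offset; centering isolates the residual inhomogeneity errors instead.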
DEFF Research Database (Denmark)
Hansen, Troels Vejle; Kim, Oleksiy S.; Breinbjerg, Olav
2014-01-01
For spherical antennas consisting of a solid magnetodielectric lossy core with an impressed surface current density exciting a superposition of the ${\rm TE}_{mn}$ and ${\rm TM}_{mn}$ spherical modes, we analytically determine the radiation quality factor $Q$ and radiation efficiency $e$. Also, we...
Graphene Oxide in Lossy Mode Resonance-Based Optical Fiber Sensors for Ethanol Detection
Directory of Open Access Journals (Sweden)
Miguel Hernaez
2017-12-01
Full Text Available The influence of graphene oxide (GO) over the features of an optical fiber ethanol sensor based on lossy mode resonances (LMR) has been studied in this work. Four different sensors were built with this aim, each comprising a multimode optical fiber core fragment coated with a SnO2 thin film. Layer-by-layer (LbL) coatings made of 1, 2 and 4 bilayers of polyethyleneimine (PEI) and graphene oxide were deposited onto three of these devices and their behavior as aqueous ethanol sensors was characterized and compared with the sensor without GO. The sensors with GO showed much better performance, with a maximum sensitivity enhancement of 176% with respect to the sensor without GO. To our knowledge, this is the first time that GO has been used to make an optical fiber sensor based on LMR.
Performance evaluation of a lossy transmission lines based diode detector at cryogenic temperature.
Villa, E; Aja, B; de la Fuente, L; Artal, E
2016-01-01
This work focuses on the design, fabrication, and performance analysis of a square-law Schottky diode detector based on lossy transmission lines working at cryogenic temperature (15 K). The design analysis of a microwave detector based on a planar gallium-arsenide low-effective-barrier-height Schottky diode is reported, aimed at achieving large input return loss as well as flat sensitivity versus frequency. The designed circuit demonstrates good sensitivity and good return loss over a wide bandwidth at Ka-band, at both room (300 K) and cryogenic (15 K) temperatures. A sensitivity of 1000 mV/mW and an input return loss better than 12 dB have been achieved when operating as a zero-bias Schottky diode detector at room temperature; at cryogenic temperature, with a DC bias current required, the sensitivity increases to at least 2200 mV/mW.
Graphene Oxide in Lossy Mode Resonance-Based Optical Fiber Sensors for Ethanol Detection.
Hernaez, Miguel; Mayes, Andrew G; Melendi-Espina, Sonia
2017-12-27
The influence of graphene oxide (GO) over the features of an optical fiber ethanol sensor based on lossy mode resonances (LMR) has been studied in this work. Four different sensors were built with this aim, each comprising a multimode optical fiber core fragment coated with a SnO₂ thin film. Layer by layer (LbL) coatings made of 1, 2 and 4 bilayers of polyethyleneimine (PEI) and graphene oxide were deposited onto three of these devices and their behavior as aqueous ethanol sensors was characterized and compared with the sensor without GO. The sensors with GO showed much better performance with a maximum sensitivity enhancement of 176% with respect to the sensor without GO. To our knowledge, this is the first time that GO has been used to make an optical fiber sensor based on LMR.
Directory of Open Access Journals (Sweden)
Ari Shawakat Tahir
2015-12-01
Full Text Available Steganography is the art and science of hiding information by embedding messages within other, seemingly harmless messages, and many researchers are working in this area. The proposed system uses the AES algorithm and a lossy technique to overcome the limitations of previous work and to increase processing speed. The sender uses the AES algorithm to encrypt the message and the image, then uses the LSB technique to hide the encrypted data in the encrypted image. The receiver recovers the original data using the keys that were used in the encryption process. The proposed system has been implemented in NetBeans 7.3 and uses images and data of different sizes to measure the system's speed.
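The LSB embedding step of such a system can be sketched minimally as follows. The AES encryption stage is omitted here (in the full scheme the payload bits would be AES ciphertext); the toy pixel values and bit string are illustrative assumptions.

```python
def lsb_embed(pixels, payload_bits):
    """Hide payload bits in the least significant bit of each pixel.

    Sketch of the LSB technique only; the AES step of the proposed
    system is omitted, so in the full scheme payload_bits would be
    the bits of the AES ciphertext.
    """
    assert len(payload_bits) <= len(pixels)
    stego = pixels[:]
    for i, bit in enumerate(payload_bits):
        stego[i] = (stego[i] & ~1) | bit   # overwrite the LSB with the payload bit
    return stego

def lsb_extract(stego, n_bits):
    """Read back the first n_bits least significant bits."""
    return [p & 1 for p in stego[:n_bits]]

cover = [120, 53, 200, 77, 14, 255, 9, 100]   # toy 8-pixel "image"
bits = [1, 0, 1, 1, 0, 1, 0, 0]
stego = lsb_embed(cover, bits)
assert lsb_extract(stego, 8) == bits
assert all(abs(a - b) <= 1 for a, b in zip(cover, stego))  # change of at most 1 per pixel
```

Each pixel changes by at most one grey level, which is what makes the embedding visually imperceptible in practice.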
Homogenization approach in engineering
International Nuclear Information System (INIS)
Babuska, I.
1975-10-01
Homogenization is an approach which studies the macrobehavior of a medium by its microproperties. Problems with a microstructure play an essential role in such fields as mechanics, chemistry, physics, and reactor engineering. Attention is concentrated on a simple specific model problem to illustrate results and problems typical of the homogenization approach. Only the diffusion problem is treated here, but some statements are made about the elasticity of composite materials. The differential equation is solved for linear cases with and without boundaries and for the nonlinear case. 3 figures, 1 table
Dynamics of homogeneous nucleation
DEFF Research Database (Denmark)
Toxværd, Søren
2015-01-01
The classical nucleation theory for homogeneous nucleation is formulated as a theory for a density fluctuation in a supersaturated gas at a given temperature. But molecular dynamics simulations reveal that it is small cold clusters which initiate the nucleation. The temperature in the nucleating
Homogeneous bilateral block shifts
Indian Academy of Sciences (India)
Douglas class were classified in [3]; they are unilateral block shifts of arbitrary block size (i.e. dim H(n) can be anything). However, no examples of irreducible homogeneous bilateral block shifts of block size larger than 1 were known until now.
Tignanelli, H. L.; Vazquez, R. A.; Mostaccio, C.; Gordillo, S.; Plastino, A.
1990-11-01
RESUMEN: We present a methodology for homogeneity analysis, based on Information Theory, applicable to samples of observational data. ABSTRACT: Standard concepts that underlie Information Theory are employed in order to design a methodology that enables one to analyze the homogeneity of a given data sample. Keywords: DATA ANALYSIS
Homogeneous Poisson structures
International Nuclear Information System (INIS)
Shafei Deh Abad, A.; Malek, F.
1993-09-01
We provide an algebraic definition for the Schouten product and give a decomposition for any homogeneous Poisson structure in an n-dimensional vector space. A large class of n-homogeneous Poisson structures in R^k is also characterized. (author). 4 refs
Singh, Abhishek Kumar; Das, Amrita; Parween, Zeenat; Chattopadhyay, Amares
2015-10-01
The present paper deals with the propagation of Love-type wave in an initially stressed irregular vertically heterogeneous layer lying over an initially stressed isotropic layer and an initially stressed isotropic half-space. Two different types of irregularities, viz., rectangular and parabolic, are considered at the interface of uppermost initially stressed heterogeneous layer and intermediate initially stressed isotropic layer. Dispersion equations are obtained in closed form for both cases of irregularities, distinctly. The effect of size and shape of irregularity, horizontal compressive initial stress, horizontal tensile initial stress, heterogeneity of the uppermost layer and width ratio of the layers on phase velocity of Love-type wave are the major highlights of the study. Comparative study has been made to identify the effects of different shapes of irregularity, presence of heterogeneity and initial stresses. Numerical computations have been carried out and depicted by means of graphs for the present study.
Dirichlet problem on the upper half space
Indian Academy of Sciences (India)
2School of Mathematics and Information Science, Henan University of Economics and ... The classical Poisson kernel for H is defined by P(x,y ) = 2xnω .... [5] Siegel D and Talvila E, Sharp growth estimates for modified Poisson integrals in a ...
Huimerind, Jaak, 1957-
2015-01-01
The renovated stable building of Maarjamäe Castle of the Estonian History Museum at Pirita tee 66, Tallinn, completed in 2014. Architect: Jaak Huimerind (Studio Paralleel OÜ); interior architect and exhibition designer: Tarmo Piirmets (Pink OÜ). Winner of the 2014 renovation award of the Architecture Endowment of the Cultural Endowment of Estonia (Eesti Kultuurkapital).
Homogeneous group, research, institution
Directory of Open Access Journals (Sweden)
Francesca Natascia Vasta
2014-09-01
Full Text Available The work outlines the complex connection among empirical research, therapeutic programs and the host institution. The current state of research in Italy is considered. The Italian research field is analyzed and critical gaps are outlined: a lack of results regarding both the therapeutic processes and the effectiveness of group-analytic treatment of eating disorders. The work investigates a homogeneous eating-disorders group, run in an eating-disorders outpatient service. First, we present the methodological steps the research is based on, including the strong connection between theory and clinical tools. Secondly, the clinical tools are described and the results commented on. Finally, our results suggest the necessity of validating some more specific hypotheses: verifying the relationship between clinical improvement (reduction of the sense of exclusion and of painful emotions) and specific group therapeutic processes; and verifying the relationship between depressive feelings, relapses and the transition through a more differentiated group field. Keywords: Homogeneous group; Eating disorders; Institutional field; Therapeutic outcome
Homogeneous turbulence dynamics
Sagaut, Pierre
2018-01-01
This book provides state-of-the-art results and theories in homogeneous turbulence, including anisotropy and compressibility effects, with extension to quantum turbulence, magnetohydrodynamic turbulence and turbulence in non-Newtonian fluids. Each chapter is devoted to a given type of interaction (strain, rotation, shear, etc.), and presents and compares experimental data, numerical results, analysis of the Reynolds stress budget equations and advanced multipoint spectral theories. The role of both linear and non-linear mechanisms is emphasized. The link between the statistical properties and the dynamics of coherent structures is also addressed. Despite its restriction to homogeneous turbulence, the book is of interest to all people working in turbulence, since the basic physical mechanisms which are present in all turbulent flows are explained. The reader will find a unified presentation of the results and a clear presentation of existing controversies. Special attention is given to bridge the results obta...
Homogen Mur - et udviklingsprojekt
DEFF Research Database (Denmark)
Dahl, Torben; Beim, Anne; Sørensen, Peter
1997-01-01
Mølletorvet in Slagelse is the first building project in Denmark in which the external walls are built of homogeneous load-bearing and insulating clay blocks. The project demonstrates a range of the possibilities that the use of homogeneous block masonry offers with respect to structure, energy performance and architecture.
Homogenization of resonant chiral metamaterials
DEFF Research Database (Denmark)
Andryieuski, Andrei; Menzel, C.; Rockstuhl, Carsten
2010-01-01
Homogenization of metamaterials is a crucial issue, as it allows one to describe their optical response in terms of effective wave parameters such as, e.g., propagation constants. In this paper we consider the possible homogenization of chiral metamaterials. We show that for meta-atoms of a certain size...... an analytical criterion for performing the homogenization and a tool to predict the homogenization limit. We show that strong coupling between meta-atoms of chiral metamaterials may prevent their homogenization altogether....
International Nuclear Information System (INIS)
Figueroa-O’Farrill, José; Ungureanu, Mara
2016-01-01
Motivated by the search for new gravity duals to M2 branes with N>4 supersymmetry — equivalently, M-theory backgrounds with Killing superalgebra osp(N|4) for N>4 — we classify (except for a small gap) homogeneous M-theory backgrounds with symmetry Lie algebra so(n)⊕so(3,2) for n=5,6,7. We find that there are no new backgrounds with n=6,7 but we do find a number of new (to us) backgrounds with n=5. All backgrounds are metrically products of the form AdS₄×P⁷, with P Riemannian and homogeneous under the action of SO(5), or S⁴×Q⁷, with Q Lorentzian and homogeneous under the action of SO(3,2). At least one of the new backgrounds is supersymmetric (albeit with only N=2) and we show that it can be constructed from a supersymmetric Freund-Rubin background via a Wick rotation. Two of the new backgrounds have only been approximated numerically.
Energy Technology Data Exchange (ETDEWEB)
Figueroa-O’Farrill, José [School of Mathematics and Maxwell Institute for Mathematical Sciences,The University of Edinburgh,James Clerk Maxwell Building, The King’s Buildings, Peter Guthrie Tait Road,Edinburgh EH9 3FD, Scotland (United Kingdom); Ungureanu, Mara [Humboldt-Universität zu Berlin, Institut für Mathematik,Unter den Linden 6, 10099 Berlin (Germany)
2016-01-25
Motivated by the search for new gravity duals to M2 branes with N>4 supersymmetry — equivalently, M-theory backgrounds with Killing superalgebra osp(N|4) for N>4 — we classify (except for a small gap) homogeneous M-theory backgrounds with symmetry Lie algebra so(n)⊕so(3,2) for n=5,6,7. We find that there are no new backgrounds with n=6,7 but we do find a number of new (to us) backgrounds with n=5. All backgrounds are metrically products of the form AdS{sub 4}×P{sup 7}, with P riemannian and homogeneous under the action of SO(5), or S{sup 4}×Q{sup 7} with Q lorentzian and homogeneous under the action of SO(3,2). At least one of the new backgrounds is supersymmetric (albeit with only N=2) and we show that it can be constructed from a supersymmetric Freund-Rubin background via a Wick rotation. Two of the new backgrounds have only been approximated numerically.
Hasar, U C
2009-05-01
A microcontroller-based noncontact and nondestructive microwave free-space measurement system for real-time and dynamic determination of the complex permittivity of lossy liquid materials is proposed. The system comprises two main sections: microwave and electronic. While the microwave section measures only the amplitudes of reflection coefficients, the electronic section processes these data and determines the complex permittivity using a general-purpose microcontroller. The proposed method eliminates elaborate liquid sample holder preparation and requires only microwave components performing reflection measurements from one side of the holder. In addition, it explicitly determines the permittivity of lossy liquid samples from reflection measurements at different frequencies without any knowledge of the sample thickness. In order to reduce systematic errors in the system, we propose a simple calibration technique which employs simple and readily available standards. The measurement system can be a good candidate for industrial applications.
The use of ZFP lossy floating point data compression in tornado-resolving thunderstorm simulations
Orf, L.
2017-12-01
In the field of atmospheric science, numerical models are used to produce forecasts of weather and climate and serve as virtual laboratories for scientists studying atmospheric phenomena. In both operational and research arenas, atmospheric simulations exploiting modern supercomputing hardware can produce a tremendous amount of data. During model execution, the transfer of floating point data from memory to the file system is often a significant bottleneck, where I/O can dominate wallclock time. One way to reduce the I/O footprint is to compress the floating point data, which reduces the amount of data saved to the file system. In this presentation we introduce LOFS, a file system developed specifically for use in three-dimensional numerical weather models that are run on massively parallel supercomputers. LOFS utilizes the core (in-memory buffered) HDF5 driver and includes compression options including ZFP, a lossy floating point data compression algorithm. ZFP offers several mechanisms for specifying the amount of lossy compression to be applied to floating point data, including the ability to specify the maximum absolute error allowed in each compressed 3D array. We explore different maximum error tolerances in a tornado-resolving supercell thunderstorm simulation for model variables including cloud and precipitation, temperature, wind velocity and vorticity magnitude. We find that average compression ratios exceeding 20:1 in scientifically interesting regions of the simulation domain produce visually identical results to uncompressed data in visualizations and plots. Since LOFS splits the model domain across many files, compression ratios for a given error tolerance can be compared across different locations within the model domain. We find that regions of high spatial variability (which tend to be where scientifically interesting things are occurring) show the lowest compression ratios, whereas regions of the domain with little spatial variability compress
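The maximum-absolute-error guarantee discussed above can be illustrated with a minimal sketch. This is NOT ZFP's actual transform-based algorithm; it is only a uniform-grid quantizer showing the kind of per-value error bound that ZFP's fixed-accuracy mode enforces, with a toy data array and tolerance chosen for illustration.

```python
def bound_error(data, tol):
    """Snap each value to a uniform grid of spacing 2*tol, which
    guarantees |x - x_hat| <= tol for every value.

    Illustration of an absolute-error tolerance only; ZFP itself
    achieves the bound through block transforms and bit-plane
    truncation, not plain scalar quantization.
    """
    step = 2.0 * tol
    return [round(x / step) * step for x in data]

field = [0.013, -2.471, 5.508, 3.3333, -0.002]   # toy floating-point field values
recon = bound_error(field, tol=0.01)
assert all(abs(a - b) <= 0.01 + 1e-12 for a, b in zip(field, recon))
```

Coarser grids (larger tolerances) map many nearby values to the same grid point, which is what makes the quantized stream compressible; the tolerance is the knob traded against the compression ratio.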
HOMOGENEOUS NUCLEAR POWER REACTOR
King, L.D.P.
1959-09-01
A homogeneous nuclear power reactor utilizing forced circulation of the liquid fuel is described. The reactor does not require fuel handling outside of the reactor vessel during any normal operation, including complete shutdown to room temperature, the reactor being self-regulating under extreme operating conditions and controlled by the thermal expansion of the liquid fuel. The liquid fuel utilized is a uranium, phosphoric acid, and water solution which requires no gas exhaust system or independent gas recombining system, thereby eliminating the handling of radiolytic gas.
Deng, Shaoqiang
2012-01-01
"Homogeneous Finsler Spaces" is the first book to emphasize the relationship between Lie groups and Finsler geometry, and the first to show the validity in using Lie theory for the study of Finsler geometry problems. This book contains a series of new results obtained by the author and collaborators during the last decade. The topic of Finsler geometry has developed rapidly in recent years. One of the main reasons for its surge in development is its use in many scientific fields, such as general relativity, mathematical biology, and phycology (study of algae). This monograph introduc
Homogeneity spoil spectroscopy
International Nuclear Information System (INIS)
Hennig, J.; Boesch, C.; Martin, E.; Grutter, R.
1987-01-01
One of the problems of in vivo MR spectroscopy of P-31 is spectrum localization. Surface coil spectroscopy, which is the method of choice for clinical applications, suffers from the high-intensity signal from subcutaneous muscle tissue, which masks the spectrum of interest from deeper structures. In order to suppress this signal while maintaining the simplicity of surface coil spectroscopy, the authors introduced a small sheet of ferromagnetically dotted plastic between the surface coil and the body. This sheet locally destroys the field homogeneity and therefore all signal from structures around the coil. The very high reproducibility of the simple experimental procedure allows the long-term studies important for monitoring tumor therapy.
Kosiel, Kamil; Koba, Marcin; Masiewicz, Marcin; Śmietana, Mateusz
2018-06-01
The paper shows the application of the atomic layer deposition (ALD) technique as a tool for tailoring the sensing properties of lossy-mode-resonance (LMR)-based optical fiber sensors. Hafnium dioxide (HfO2), zirconium dioxide (ZrO2), and tantalum oxide (TaxOy), high-refractive-index dielectrics that are particularly convenient for LMR sensor fabrication, were deposited by low-temperature (100 °C) ALD, ensuring safe conditions for thermally vulnerable fibers. The applicability of HfO2 and ZrO2 overlays, deposited with the atomic-level thickness accuracy inherent to ALD, for the fabrication of LMR sensors with controlled sensing properties is presented. Additionally, for the first time to the best of our knowledge, a double-layer overlay composed of two different materials, silicon nitride (SixNy) and TaxOy, is presented for LMR fiber sensors. The thin films of this overlay were deposited by two different techniques: PECVD (the SixNy) and ALD (the TaxOy). This approach ensures fast overlay fabrication and, at the same time, the ability to tune the resonant wavelength, yielding devices with satisfactory sensing properties.
Delay reduction in lossy intermittent feedback for generalized instantly decodable network coding
Douik, Ahmed S.; Sorour, Sameh; Alouini, Mohamed-Slim; Ai-Naffouri, Tareq Y.
2013-01-01
In this paper, we study the effect of lossy intermittent feedback loss events on the multicast decoding delay performance of generalized instantly decodable network coding. These feedback loss events create uncertainty at the sender about the reception status of different receivers and thus uncertainty in accurately determining subsequent instantly decodable coded packets. To solve this problem, we first identify the different possibilities of uncertain packets at the sender and their probabilities. We then derive the expression for the mean decoding delay. We formulate the Generalized Instantly Decodable Network Coding (G-IDNC) minimum decoding delay problem as a maximum weight clique problem. Since finding the optimal solution is NP-hard, we design a variant of the algorithm employed in [1]. Our algorithm is compared with the two blind graph-update approaches proposed in [2] through extensive simulations. Results show that our algorithm outperforms the blind approaches in all situations and achieves tolerable degradation, relative to perfect feedback, for large feedback loss periods. © 2013 IEEE.
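Since the maximum weight clique formulation is NP-hard, heuristics of the kind mentioned above pick heavy, mutually compatible vertices greedily. The sketch below is an illustrative greedy heuristic on a toy conflict graph, not the authors' actual variant of the algorithm in [1].

```python
def greedy_max_weight_clique(weights, adj):
    """Greedy heuristic for the maximum weight clique problem.

    weights: dict node -> weight; adj: dict node -> set of neighbours.
    Repeatedly takes the heaviest remaining node and keeps only
    candidates adjacent to everything chosen so far, so the result
    is always a clique (though not necessarily the optimum).
    """
    clique = []
    candidates = set(weights)
    while candidates:
        v = max(candidates, key=lambda n: weights[n])  # heaviest remaining node
        clique.append(v)
        candidates = {u for u in candidates if u in adj[v] and u != v}
    return clique

# Toy graph: vertices stand for candidate coded-packet decisions,
# edges for pairwise compatibility, weights for delay-reduction value.
w = {"a": 3.0, "b": 2.0, "c": 2.5, "d": 1.0}
adj = {"a": {"b", "c"}, "b": {"a", "c", "d"}, "c": {"a", "b"}, "d": {"b"}}
print(sorted(greedy_max_weight_clique(w, adj)))   # -> ['a', 'b', 'c']
```

Each iteration shrinks the candidate set to the common neighbourhood of the clique built so far, which is what keeps the output a valid clique.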
Delay reduction in lossy intermittent feedback for generalized instantly decodable network coding
Douik, Ahmed S.
2013-10-01
In this paper, we study the effect of lossy intermittent feedback loss events on the multicast decoding delay performance of generalized instantly decodable network coding. These feedback loss events create uncertainty at the sender about the reception status of different receivers and thus uncertainty in accurately determining subsequent instantly decodable coded packets. To solve this problem, we first identify the different possibilities of uncertain packets at the sender and their probabilities. We then derive the expression for the mean decoding delay. We formulate the Generalized Instantly Decodable Network Coding (G-IDNC) minimum decoding delay problem as a maximum weight clique problem. Since finding the optimal solution is NP-hard, we design a variant of the algorithm employed in [1]. Our algorithm is compared with the two blind graph-update approaches proposed in [2] through extensive simulations. Results show that our algorithm outperforms the blind approaches in all situations and achieves tolerable degradation, relative to perfect feedback, for large feedback loss periods. © 2013 IEEE.
Genetic optimization of magneto-optic Kerr effect in lossy cavity-type magnetophotonic crystals
Energy Technology Data Exchange (ETDEWEB)
Ghanaatshoar, M., E-mail: m-ghanaat@cc.sbu.ac.i [Laser and Plasma Research Institute, Shahid Beheshti University, G.C., Evin 1983963113, Tehran (Iran, Islamic Republic of); Alisafaee, H. [Laser and Plasma Research Institute, Shahid Beheshti University, G.C., Evin 1983963113, Tehran (Iran, Islamic Republic of)
2011-07-15
We have demonstrated an optimization approach for obtaining desired magnetophotonic crystals (MPCs) composed of a lossy magnetic layer (TbFeCo) placed within a multilayer structure. The approach combines a 4x4 transfer matrix method with a genetic algorithm. Our objective is to enhance the magneto-optic Kerr effect of TbFeCo at the short visible wavelength of 405 nm. Through the optimization approach, MPC structures are found that meet definite criteria on the amount of reflectivity and Kerr rotation. The resulting structures fit the optimization criteria to better than 99.9%. Computation of the internal electric field distribution shows energy localization in the vicinity of the magnetic layer, which is responsible for increased light-matter interaction and the consequent enhanced magneto-optic Kerr effect. The versatility of our approach is also exhibited by examining and optimizing several MPC structures. - Research highlights: Structures comprising a highly absorptive TbFeCo layer are designed to work for data storage applications at 405 nm. The optimization algorithm resulted in structures fitted to the design criteria to better than 99.9%. More than 10 structures are found exhibiting a magneto-optical response of about 1° rotation and 20% reflection. The ratio of the Kerr rotation to the Kerr ellipticity is enhanced by a factor of 30.
The effects of lossy compression on diagnostically relevant seizure information in EEG signals.
Higgins, G; McGinley, B; Faul, S; McEvoy, R P; Glavin, M; Marnane, W P; Jones, E
2013-01-01
This paper examines the effects of compression on EEG signals, in the context of automated detection of epileptic seizures. Specifically, it examines the use of lossy compression on EEG signals in order to reduce the amount of data which has to be transmitted or stored, while having as little impact as possible on the information in the signal relevant to diagnosing epileptic seizures. Two popular compression methods, JPEG2000 and SPIHT, were used. A range of compression levels was selected for both algorithms in order to compress the signals with varying degrees of loss. This compression was applied to the database of epileptiform data provided by the University of Freiburg, Germany. An automated seizure detection system (real-time EEG analysis for event detection) was used in place of a trained clinician for scoring the reconstructed data. Results demonstrate that compression by a factor of up to 120:1 can be achieved, with minimal loss in seizure detection performance as measured by the area under the receiver operating characteristic curve of the seizure detection system.
Optimization of Error-Bounded Lossy Compression for Hard-to-Compress HPC Data
Energy Technology Data Exchange (ETDEWEB)
Di, Sheng; Cappello, Franck
2018-01-01
Since today's scientific applications produce vast amounts of data, compressing the data before storage/transmission is critical. Results of existing compressors show two types of HPC data sets: highly compressible and hard to compress. In this work, we carefully design and optimize error-bounded lossy compression for hard-to-compress scientific data. We propose an optimized algorithm that adaptively partitions the HPC data into best-fit consecutive segments, each having mutually close data values, such that the compression condition can be optimized. Another significant contribution is the optimization of the shifting offset such that the XOR-leading-zero length between two consecutive unpredictable data points is maximized. We finally devise an adaptive method to select the best-fit compressor at runtime to maximize the compression factor. We evaluate our solution using 13 benchmarks based on real-world scientific problems, and we compare it with 9 other state-of-the-art compressors. Experiments show that our compressor always keeps the compression errors within the user-specified error bounds. Most importantly, our optimization improves the compression factor effectively, by up to 49% for hard-to-compress data sets, with similar compression/decompression time cost.
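The XOR-leading-zero idea mentioned above can be illustrated with a small sketch (not the authors' implementation): the longer the run of leading zero bits in the XOR of two consecutive doubles, the longer their shared bit prefix, and the fewer bits the compressor has to store for the second value.

```python
import struct

def xor_leading_zeros(a, b):
    """Count leading zero bits in the XOR of two IEEE-754 doubles.

    Reinterpret each double as a 64-bit big-endian integer, XOR them,
    and count the zero bits before the first set bit.
    """
    ia = struct.unpack('>Q', struct.pack('>d', a))[0]
    ib = struct.unpack('>Q', struct.pack('>d', b))[0]
    x = ia ^ ib
    if x == 0:
        return 64          # identical values share all 64 bits
    return 64 - x.bit_length()
```

Values that are numerically close (same sign, same exponent, similar mantissa) yield long leading-zero runs, which is exactly what the shifting-offset optimization tries to maximize.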
Color image lossy compression based on blind evaluation and prediction of noise characteristics
Ponomarenko, Nikolay N.; Lukin, Vladimir V.; Egiazarian, Karen O.; Lepisto, Leena
2011-03-01
The paper deals with JPEG adaptive lossy compression of color images formed by digital cameras. Adaptation to the noise characteristics and blur estimated for each given image is carried out. The dominant factor degrading image quality is determined in a blind manner, and the characteristics of this dominant factor are then estimated. Finally, a scaling factor that determines the quantization steps for the default JPEG table is adaptively selected. Within this general framework, two possible strategies are considered. The first presumes blind estimation for an image after all operations in the digital image processing chain, just before compressing a given raster image. The second strategy is based on prediction of noise and blur parameters from analysis of the RAW image, under quite general assumptions concerning the characteristics of the transformations the image will be subjected to at further processing stages. The advantages of both strategies are discussed. The first strategy provides more accurate estimation and a larger benefit in image compression ratio (CR) compared to the super-high quality (SHQ) mode, but it is more complicated and requires more resources. The second strategy is simpler but less beneficial. The proposed approaches are tested on a large number of real-life color images acquired by digital cameras and shown to provide a more than twofold increase in average CR compared to SHQ mode, without introducing visible distortions with respect to SHQ-compressed images.
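The "scaling factor that determines the quantization steps for the default JPEG table" can be sketched using the common IJG quality-scaling convention (an assumption for illustration; the paper's adaptive selection of the factor is more involved):

```python
def scale_quant_table(base_table, quality):
    """Scale a default JPEG quantization table by a quality factor.

    Follows the widely used IJG convention: quality in 1..100,
    larger quality -> smaller quantization steps -> less loss.
    """
    if quality < 50:
        scale = 5000 // quality
    else:
        scale = 200 - 2 * quality
    # round each step, clamping to the legal range 1..255
    return [max(1, min(255, (q * scale + 50) // 100)) for q in base_table]
```

Picking the scaling factor per image from estimated noise/blur, rather than fixing a quality setting, is the adaptation the abstract describes.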
Coherent control of long-distance steady-state entanglement in lossy resonator arrays
Angelakis, D. G.; Dai, L.; Kwek, L. C.
2010-07-01
We show that coherent control of the steady-state long-distance entanglement between pairs of cavity-atom systems in an array of lossy and driven coupled resonators is possible. The cavities are doped with atoms and are connected through waveguides, other cavities or fibers depending on the implementation. We find that the steady-state entanglement can be coherently controlled by tuning the phase difference between the driving fields. It can also be surprisingly high in spite of the pumps being classical fields. For implementations where the connecting element is a fiber, long-distance steady-state quantum correlations can be established. Furthermore, the maximal entanglement for any pair is achieved when their direct coupling is much smaller than their individual couplings to the third party. This effect is reminiscent of the establishment of coherence between otherwise uncoupled atomic levels using classical coherent fields. We suggest a method to measure this entanglement by analyzing the correlations of the photons emitted from the array, and we analyze the above results for a range of system parameters, different network geometries and possible implementation technologies.
Homogeneous instantons in bigravity
International Nuclear Information System (INIS)
Zhang, Ying-li; Sasaki, Misao; Yeom, Dong-han
2015-01-01
We study homogeneous gravitational instantons, conventionally called the Hawking-Moss (HM) instantons, in bigravity theory. The HM instantons describe the amplitude of quantum tunneling from a false vacuum to the true vacuum. Corrections to General Relativity (GR) are found in a closed form. Using the result, we discuss the following two issues: reduction to the de Rham-Gabadadze-Tolley (dRGT) massive gravity and the possibility of preference for a large e-folding number in the context of the Hartle-Hawking (HH) no-boundary proposal. In particular, concerning the dRGT limit, it is found that the tunneling through the so-called self-accelerating branch is exponentially suppressed relative to the normal branch, and the probability becomes zero in the dRGT limit. As far as HM instantons are concerned, this could imply that the reduction from bigravity to the dRGT massive gravity is ill-defined.
The relationship between continuum homogeneity and statistical homogeneity in cosmology
International Nuclear Information System (INIS)
Stoeger, W.R.; Ellis, G.F.R.; Hellaby, C.
1987-01-01
Although the standard Friedmann-Lemaitre-Robertson-Walker (FLRW) Universe models are based on the concept that the Universe is spatially homogeneous, up to the present time no definition of this concept has been proposed that could in principle be tested by observation. Such a definition is here proposed, based on a simple spatial averaging procedure, which relates observable properties of the Universe to the continuum homogeneity idea that underlies the FLRW models. It turns out that the statistical homogeneity often used to describe the distribution of matter on a large scale does not imply spatial homogeneity according to this definition, and so cannot be simply related to a FLRW Universe model. Values are proposed for the homogeneity parameter and length scale of homogeneity of the Universe. (author)
García, Aday; Santos, Lucana; López, Sebastián.; Callicó, Gustavo M.; Lopez, Jose F.; Sarmiento, Roberto
2014-05-01
Efficient onboard satellite hyperspectral image compression represents a necessity and a challenge for current and future space missions. Therefore, it is mandatory to provide hardware implementations for this type of algorithm in order to achieve the constraints required for onboard compression. In this work, we implement the Lossy Compression for Exomars (LCE) algorithm on an FPGA by means of high-level synthesis (HLS) in order to shorten the design cycle. Specifically, we use the CatapultC HLS tool to obtain a VHDL description of the LCE algorithm from C-language specifications. Two different approaches are followed for HLS: on the one hand, introducing the whole C-language description in CatapultC; on the other hand, splitting the C-language description into functional modules to be implemented independently with CatapultC, connecting and controlling them by an RTL description code without HLS. In both cases the goal is to obtain an FPGA implementation. We explain the several changes applied to the original C-language source code in order to optimize the results obtained by CatapultC for both approaches. Experimental results show low area occupancy of less than 15% for a SRAM-based Virtex-5 FPGA and a maximum frequency above 80 MHz. Additionally, the LCE compressor was implemented on an RTAX2000S antifuse-based FPGA, showing an area occupancy of 75% and a frequency around 53 MHz. All of this demonstrates that the LCE algorithm can be efficiently executed on an FPGA onboard a satellite. A comparison between both implementation approaches is also provided. The performance of the algorithm is finally compared with implementations on other technologies, specifically a graphics processing unit (GPU) and a single-threaded CPU.
DEFF Research Database (Denmark)
Hansen, Troels Vejle; Kim, Oleksiy S.; Breinbjerg, Olav
2014-01-01
For a spherical antenna exciting an arbitrary spherical mode, we derive exact closed-form expressions for the dissipated power and stored energy inside (and outside) the lossy magneto-dielectric spherical core, as well as the radiated power, radiation efficiency, and thus the radiation quality factor. ... An increasing magnetic loss tangent initially leads to a decreasing radiation quality factor, but in the limit of a perfect magnetic conductor (PMC) core the dissipated power tends to zero and the radiation quality factor reaches the fundamental Chu lower bound.
DEFF Research Database (Denmark)
Pedersen, Jesper Goor; Xiao, Sanshui; Mortensen, Niels Asger
2008-01-01
Slow-light enhanced absorption in liquid-infiltrated photonic crystals has recently been proposed as a route to compensate for the reduced optical path in typical lab-on-a-chip systems for bio-chemical sensing applications. A simple perturbative expression has previously been applied to ideal structures composed of lossless dielectrics. In this work we study the enhancement in structures composed of lossy dielectrics such as a polymer. For this particular sensing application we find that the material loss has an unexpectedly limited drawback and, surprisingly, may even serve to increase the bandwidth ...
Homogenization of resonant chiral metamaterials
Andryieuski, Andrei; Menzel, Christoph; Rockstuhl, Carsten; Malureanu, Radu; Lederer, Falk; Lavrinenko, Andrei
2010-01-01
Homogenization of metamaterials is a crucial issue as it allows one to describe their optical response in terms of effective wave parameters such as propagation constants. In this paper we consider the possible homogenization of chiral metamaterials. We show that for meta-atoms of a certain size a critical density exists above which increasing coupling between neighboring meta-atoms prevents a reasonable homogenization. On the contrary, excessive dilution will induce features reminiscent of pho...
Bilipschitz embedding of homogeneous fractals
Lü, Fan; Lou, Man-Li; Wen, Zhi-Ying; Xi, Li-Feng
2014-01-01
In this paper, we introduce a class of fractals named homogeneous sets based on some measure versions of homogeneity, uniform perfectness and doubling. This fractal class includes all Ahlfors-David regular sets, but most of them are irregular in the sense that they may have different Hausdorff dimensions and packing dimensions. Using Moran sets as main tool, we study the dimensions, bilipschitz embedding and quasi-Lipschitz equivalence of homogeneous fractals.
Chang, Yin-Jung; Lai, Chi-Sheng
2013-09-01
The mismatch in film thickness and incident angle between reflectance and transmittance extrema due to the presence of lossy film(s) is investigated with a view toward maximum-transmittance design in the active region of solar cells. A planar air/lossy film/silicon double-interface geometry illustrates important and quite opposite mismatch behaviors for TE and TM waves. In a typical thin-film CIGS solar cell, mismatches contributed by TM waves generally dominate. The angular mismatch is at least 10° in about 37%-53% of the spectrum, depending on the thickness combination of all lossy interlayers. The largest thickness mismatch of a specific interlayer generally increases with the thickness of the layer itself. Antireflection coating designs for solar cells should therefore be optimized in terms of the maximum transmittance into the active region, even if the corresponding reflectance is not at its minimum.
Homogeneous versus heterogeneous zeolite nucleation
Dokter, W.H.; Garderen, van H.F.; Beelen, T.P.M.; Santen, van R.A.; Bras, W.
1995-01-01
Aggregates of fractal dimension were found in the intermediate gel phases that organize prior to nucleation and crystallization of silicalite from a homogeneous reaction mixture. Small- and wide-angle X-ray scattering studies prove that for zeolites nucleation may be homogeneous or
Homogeneous crystal nucleation in polymers.
Schick, C; Androsch, R; Schmelzer, J W P
2017-11-15
The pathway of crystal nucleation significantly influences the structure and properties of semi-crystalline polymers. Crystal nucleation is normally heterogeneous at low supercooling, and homogeneous at high supercooling, of the polymer melt. Homogeneous nucleation in bulk polymers has been, so far, hardly accessible experimentally, and was even doubted to occur at all. This topical review summarizes experimental findings on homogeneous crystal nucleation in polymers. Recently developed fast scanning calorimetry, with cooling and heating rates up to 10^6 K s^-1, allows for detailed investigations of nucleation near and even below the glass transition temperature, including analysis of nuclei stability. As for other materials, the maximum homogeneous nucleation rate for polymers is located close to the glass transition temperature. In the experiments discussed here, it is shown that polymer nucleation is homogeneous at such temperatures. Homogeneous nucleation in polymers is discussed in the framework of the classical nucleation theory. The majority of our observations are consistent with the theory. The discrepancies may guide further research, particularly experiments to progress theoretical development. Progress in the understanding of homogeneous nucleation is much needed, since most of the modelling approaches dealing with polymer crystallization exclusively consider homogeneous nucleation. This is also the basis for advancing theoretical approaches to the much more complex phenomena governing heterogeneous nucleation.
Homogenization theory in reactor lattices
International Nuclear Information System (INIS)
Benoist, P.
1986-02-01
The purpose of the theory of homogenization of reactor lattices is to determine, by means of transport theory, the constants of a homogeneous medium equivalent to a given lattice, which allows one to treat the reactor as a whole by diffusion theory. In this note, the problem is presented with emphasis on simplicity, as far as possible [fr]
Directory of Open Access Journals (Sweden)
R. Krishnamoorthy
2012-05-01
In this paper, a new lossy-to-lossless image coding scheme combining an Orthogonal Polynomials Transform and an Integer Wavelet Transform is proposed. The Lifting Scheme based Integer Wavelet Transform (LS-IWT) is first applied to the image in order to reduce blocking artifacts and memory demand. The Embedded Zerotree Wavelet (EZW) subband coding algorithm is used in this work for progressive image coding, which achieves efficient bit rate reduction. The computational complexity of lower subband coding in the EZW algorithm is reduced in this work with a new integer-based Orthogonal Polynomials transform coding. Normalization and mapping are performed on the subbands of the image to exploit subjective redundancy, and the zerotree structure is obtained for EZW coding, so the computational complexity is greatly reduced. The experimental results of the proposed technique also show that efficient bit rate reduction is achieved for both lossy and lossless compression when compared with existing techniques.
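The lifting scheme underlying an LS-IWT maps integers to integers and is exactly invertible, which is what makes the lossless mode possible. A minimal one-level Haar-type (S-transform) sketch, not the paper's specific filter bank:

```python
def haar_lifting_forward(x):
    """One level of an integer-to-integer Haar wavelet via lifting.

    x: list of integers with even length.
    Returns (approximation, detail) coefficient lists; the transform
    is exactly invertible, so it supports lossless coding.
    """
    s, d = [], []
    for i in range(0, len(x), 2):
        detail = x[i + 1] - x[i]        # predict step
        approx = x[i] + (detail >> 1)   # update step (integer shift)
        s.append(approx)
        d.append(detail)
    return s, d

def haar_lifting_inverse(s, d):
    """Exact inverse: undo the update, then the predict step."""
    x = []
    for approx, detail in zip(s, d):
        a = approx - (detail >> 1)
        x += [a, a + detail]
    return x
```

Because every step uses only integer additions and shifts, the round trip is bit-exact, unlike a floating-point wavelet transform.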
Directory of Open Access Journals (Sweden)
Yibo Chen
2015-08-01
In recent years, IoT (Internet of Things) technologies have seen great advances, particularly the IPv6 Routing Protocol for Low-power and Lossy Networks (RPL), which provides a powerful and flexible routing framework that can be applied in a variety of application scenarios. In this context, as an important part of the IoT, Wireless Sensor Networks (WSNs) can utilize RPL to design efficient routing protocols for a specific application to increase the ubiquity of networks with resource-constrained WSN nodes that are low-cost and easy to deploy. In this article, our work starts with a description of Agricultural Low-power and Lossy Networks (A-LLNs) complying with the LLN framework, and clarifies the requirements of this application-oriented routing solution. After a brief review of existing optimization techniques for RPL, our contribution is dedicated to a Scalable Context-Aware Objective Function (SCAOF) that can adapt RPL to the environmental monitoring of A-LLNs by combining energy-aware, reliability-aware, robustness-aware and resource-aware contexts according to the composite routing metrics approach. The correct behavior of this enhanced RPL version (RPAL) was verified by performance evaluations in both simulation and field tests. The obtained experimental results confirm that SCAOF can deliver the desired advantages of network lifetime extension, and high reliability and efficiency, in different simulation scenarios and hardware testbeds.
Chen, Yibo; Chanet, Jean-Pierre; Hou, Kun-Mean; Shi, Hongling; de Sousa, Gil
2015-08-10
In recent years, IoT (Internet of Things) technologies have seen great advances, particularly the IPv6 Routing Protocol for Low-power and Lossy Networks (RPL), which provides a powerful and flexible routing framework that can be applied in a variety of application scenarios. In this context, as an important part of the IoT, Wireless Sensor Networks (WSNs) can utilize RPL to design efficient routing protocols for a specific application to increase the ubiquity of networks with resource-constrained WSN nodes that are low-cost and easy to deploy. In this article, our work starts with a description of Agricultural Low-power and Lossy Networks (A-LLNs) complying with the LLN framework, and clarifies the requirements of this application-oriented routing solution. After a brief review of existing optimization techniques for RPL, our contribution is dedicated to a Scalable Context-Aware Objective Function (SCAOF) that can adapt RPL to the environmental monitoring of A-LLNs by combining energy-aware, reliability-aware, robustness-aware and resource-aware contexts according to the composite routing metrics approach. The correct behavior of this enhanced RPL version (RPAL) was verified by performance evaluations in both simulation and field tests. The obtained experimental results confirm that SCAOF can deliver the desired advantages of network lifetime extension, and high reliability and efficiency, in different simulation scenarios and hardware testbeds.
A new approach of objective quality evaluation on JPEG2000 lossy-compressed lung cancer CT images
Cai, Weihua; Tan, Yongqiang; Zhang, Jianguo
2007-03-01
Image compression has been used to increase communication efficiency and storage capacity. JPEG 2000 compression, based on the wavelet transformation, has advantages compared to other compression methods, such as ROI coding, error resilience, adaptive binary arithmetic coding and an embedded bit-stream. However, it is still difficult to find an objective method for evaluating the image quality of lossy-compressed medical images. In this paper, we present an approach to evaluate image quality using a computer-aided diagnosis (CAD) system. We selected 77 cases of CT images, bearing benign and malignant lung nodules with confirmed pathology, from our clinical Picture Archiving and Communication System (PACS). We developed a prototype CAD system to classify these images into benign and malignant cases, the performance of which was evaluated by receiver operating characteristic (ROC) curves. We first used JPEG 2000 to compress these images at different compression ratios from lossless to lossy, then used the CAD system to classify the cases at the different compression ratios, and compared the ROC curves from the CAD classification results. Support vector machines (SVM) and neural networks (NN) were used to classify the malignancy of the input nodules. In each approach, we found that the area under the ROC curve (AUC) decreases as the compression ratio increases, with small fluctuations.
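The AUC used above to compare CAD performance across compression ratios equals the Mann-Whitney rank statistic: the probability that a randomly chosen positive case scores higher than a randomly chosen negative one. A minimal sketch (a real evaluation would use an established library):

```python
def roc_auc(scores, labels):
    """Area under the ROC curve via the rank (Mann-Whitney) statistic.

    scores: classifier outputs; labels: 1 for positive, 0 for negative.
    Counts positive-negative pairs where the positive scores higher,
    with ties counted as half a win.
    """
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos) * len(neg))
```

An AUC of 1.0 means perfect separation of benign and malignant cases; tracking how this value falls as the compression ratio rises is exactly the experiment the abstract describes.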
Lossy and retardation effects on the localization of EM waves using a left-handed medium slab
International Nuclear Information System (INIS)
Cheng Qiang; Cui Tiejun; Lu Weibing
2005-01-01
It has been shown that a left-handed medium (LHM) slab with negative permittivity -ε0 and negative permeability -μ0 can be used to localize electromagnetic waves [T.J. Cui et al., Phys. Rev. B (January 2005)]. If two current sources with the same amplitudes and opposite directions are placed at the perfect-imaging points of the LHM slab, all electromagnetic waves are completely confined in the region between the two sources. In this Letter, a slightly mismatched and lossy LHM lens is studied, in which both the relative permittivity and permeability differ slightly from -1, and the effects of loss and retardation on the electromagnetic-wave localization are investigated. Due to the loss and retardation, strong surface waves exist along the slab surfaces. When two current sources are located symmetrically at the perfect-imaging points, we show that electromagnetic waves are nearly confined in the region between the two sources and little energy is radiated outside if the retardation and loss are small. When the loss becomes larger, more energy flows out of the region. Numerical experiments are given to illustrate these conclusions.
Directory of Open Access Journals (Sweden)
Jidong Wang
2016-01-01
Event-triggered energy-to-peak filtering for polytopic discrete-time linear systems is studied with consideration of a lossy network and quantization error. Because of communication imperfections from the packet dropout of the lossy link, the event-triggered condition used to determine the data release instant at the event generator (EG) cannot be directly applied to update the filter input at the zero-order holder (ZOH) when performing filter performance analysis and synthesis. In order to balance such nonuniform time series between the triggered instant of the EG and the updated instant of the ZOH, two event-triggered conditions are defined, after which a worst-case bound on the number of consecutive packet losses of the transmitted data from the EG is given, which marginally guarantees the effectiveness of the filter designed based on the event-triggered updating condition of the ZOH. Then, filter performance analysis conditions are obtained under the assumption that the maximum number of packet losses stays within the worst-case bound. A two-stage LMI-based alternative optimization approach is then proposed to design the filter separately, which reduces the conservatism of the traditional linearization method for the filter analysis conditions. Subsequently, a co-design algorithm is developed to determine the communication and filter parameters simultaneously. Finally, an illustrative example is provided to verify the validity of the obtained results.
ChIPWig: a random access-enabling lossless and lossy compression method for ChIP-seq data.
Ravanmehr, Vida; Kim, Minji; Wang, Zhiying; Milenkovic, Olgica
2018-03-15
Chromatin immunoprecipitation sequencing (ChIP-seq) experiments are inexpensive and time-efficient, and result in massive datasets that introduce significant storage and maintenance challenges. To address the resulting Big Data problems, we propose a lossless and lossy compression framework specifically designed for ChIP-seq Wig data, termed ChIPWig. ChIPWig enables random access and summary statistics lookups, and is based on the asymptotic theory of optimal point density design for nonuniform quantizers. We tested the ChIPWig compressor on 10 ChIP-seq datasets generated by the ENCODE consortium. On average, lossless ChIPWig reduced the file sizes to merely 6% of the original, and offered a 6-fold compression rate improvement compared to bigWig. The lossy feature further reduced file sizes 2-fold compared to the lossless mode, with little or no effect on peak calling and motif discovery using specialized NarrowPeaks methods. The compression and decompression speeds are on the order of 0.2 sec/MB using general-purpose computers. The source code and binaries are freely available for download at https://github.com/vidarmehr/ChIPWig-v2, implemented in C++. milenkov@illinois.edu. Supplementary data are available at Bioinformatics online.
Homogeneous Spaces and Equivariant Embeddings
Timashev, DA
2011-01-01
Homogeneous spaces of linear algebraic groups lie at the crossroads of algebraic geometry, the theory of algebraic groups, classical projective and enumerative geometry, harmonic analysis, and representation theory. For standard reasons of algebraic geometry, in order to solve various problems on a homogeneous space it is natural and helpful to compactify it keeping track of the group action, i.e. to consider equivariant completions or, more generally, open embeddings of a given homogeneous space. Such equivariant embeddings are the subject of this book. We focus on classification of equivariant embeddings.
Qualitative analysis of homogeneous universes
International Nuclear Information System (INIS)
Novello, M.; Araujo, R.A.
1980-01-01
The qualitative behaviour of cosmological models is investigated in two cases: homogeneous and isotropic universes containing viscous fluids in a Stokesian non-linear regime; and rotating expanding universes in a state in which matter is out of thermal equilibrium. (Author) [pt]
A second stage homogenization method
International Nuclear Information System (INIS)
Makai, M.
1981-01-01
A second homogenization is needed before the diffusion calculation of the core of large reactors. Such a second-stage homogenization is outlined here. Our starting point is the Floquet theorem, which states that the diffusion equation for a periodic core always has a particular solution of the form e^(jBx) u(x). It is pointed out that the perturbation series expansion of the function u can be derived by solving eigenvalue problems, and the eigenvalues serve to define homogenized cross sections. With the help of these eigenvalues a homogenized diffusion equation can be derived, the solution of which is cos Bx, the macroflux. It is shown that the flux can be expressed as a series in the buckling. The leading term in this series is the well-known Wigner-Seitz formula. Finally, three examples are given: periodic absorption, a cell with an absorber pin in the cell centre, and a cell of three regions. (orig.)
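The Floquet-theorem starting point and the factorized macroflux described above can be written compactly (notation follows the abstract; the perturbation expansion is only indicated, not derived):

```latex
% Floquet-type particular solution for a core of period p:
\phi(x) = e^{jBx}\, u(x), \qquad u(x+p) = u(x).
% Expanding u as a perturbation series in the buckling and inserting it
% into the diffusion equation yields a hierarchy of cell eigenvalue
% problems; the eigenvalues define the homogenized cross sections, and
% the homogenized diffusion equation admits the macroflux
\Phi(x) = \cos Bx,
% whose leading term in the buckling series reproduces the
% Wigner--Seitz formula.
```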
Homogenization methods for heterogeneous assemblies
International Nuclear Information System (INIS)
Wagner, M.R.
1980-01-01
The third session of the IAEA Technical Committee Meeting is concerned with the problem of homogenization of heterogeneous assemblies. Six papers will be presented on the theory of homogenization and on practical procedures for deriving homogenized group cross sections and diffusion coefficients. The problem of finding so-called ''equivalent'' diffusion theory parameters for use in global reactor calculations is of great practical importance. In spite of this, it is fair to say that the present state of the theory of second homogenization is far from satisfactory. In fact, there is not even a uniquely accepted approach to the problem of deriving equivalent group diffusion parameters. Common agreement exists only on the fact that the conventional flux-weighting technique provides only a first approximation, which might lead to acceptable results in certain cases, but certainly does not guarantee the basic requirement of conservation of reaction rates.
Spinor structures on homogeneous spaces
International Nuclear Information System (INIS)
Lyakhovskii, V.D.; Mudrov, A.I.
1993-01-01
For multidimensional models of the interaction of elementary particles, the problem of constructing and classifying spinor fields on homogeneous spaces is exceptionally important. An algebraic criterion for the existence of spinor structures on homogeneous spaces used in multidimensional models is developed. A method of explicit construction of spinor structures is proposed, and its effectiveness is demonstrated in examples. The results are of particular importance for harmonic decomposition of spinor fields
A personal view on homogenization
International Nuclear Information System (INIS)
Tartar, L.
1987-02-01
The evolution of some ideas is first described. Under the name homogenization are collected all the mathematical results that help in understanding the relations between the microstructure of a material and its macroscopic properties. Homogenization results are surveyed through a critically detailed bibliography. The mathematical models considered are systems of partial differential equations, supposed to describe some properties at a scale ε, and we want to understand what happens to the solutions as ε tends to 0.
Directory of Open Access Journals (Sweden)
S. Lamultree
2017-04-01
Full Text Available This paper presents a theoretical analysis of moving reference planes associated with unit cells of nonreciprocal lossy periodic transmission-line structures (NRLSPTLSs by the equivalent bi-characteristic-impedance transmission line (BCITL model. Applying the BCITL theory, only the equivalent BCITL parameters (characteristic impedances for waves propagating in forward and reverse directions and associated complex propagation constants are of interest. An infinite NRLSPTLS is considered first by shifting a reference position of unit cells along TLs of interest. Then, a semi-infinite terminated NRLSPTLS is investigated in terms of associated load reflection coefficients. It is found that the equivalent BCITL characteristic impedances of the original and shifted unit cells are mathematically related by the bilinear transformation. In addition, the associated load reflection coefficients of both unit cells are mathematically related by the bilinear transformation. However, the equivalent BCITL complex propagation constants remain unchanged. Numerical results are provided to show the validity of the proposed theoretical analysis.
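As a rough illustration of the reference-plane-shift idea (with invented parameters, not the paper's BCITL unit-cell model): on a uniform lossy nonreciprocal line, moving the reference plane a distance d multiplies the load reflection coefficient by exp(-(γf + γr)d), which is the degenerate case of the bilinear transformation the paper derives for general shifted unit cells:

```python
import cmath

# Reference-plane shift on a lossy nonreciprocal line (illustrative
# parameters only). gamma_f and gamma_r are the complex propagation
# constants for forward and reverse travel; they differ because the
# medium is nonreciprocal.
gamma_f = 0.2 + 3.0j   # forward: attenuation + phase per unit length
gamma_r = 0.1 + 2.5j   # reverse
d = 0.3                # shift distance toward the generator

def shift(refl_load, dist):
    """Load reflection coefficient seen after moving the reference plane."""
    return refl_load * cmath.exp(-(gamma_f + gamma_r) * dist)

refl_load = 0.5 * cmath.exp(1j * 0.7)   # some load reflection coefficient
print(abs(shift(refl_load, d)))         # magnitude shrinks on a lossy line

# This map is the degenerate (c = 0) case of the bilinear transformation
# G' = (a*G + b) / (c*G + 1) relating original and shifted unit cells in
# the general periodic case.
```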
Homogenization of neutronic diffusion models
International Nuclear Information System (INIS)
Capdebosq, Y.
1999-09-01
In order to study and simulate nuclear reactor cores, one needs to know the neutron distribution in the core. In practice, the description of this density of neutrons is given by a system of diffusion equations, coupled by non-differential exchange terms. The strong heterogeneity of the medium constitutes a major obstacle to the numerical computation of these models at reasonable cost. Homogenization thus appears compulsory. Heuristic methods have been developed since the origin by nuclear physicists, under a periodicity assumption on the coefficients. They consist in doing a fine computation on a single periodicity cell, solving the system on the whole domain with homogeneous coefficients, and reconstructing the neutron density by multiplying the solutions of the two computations. The objectives of this work are to provide a mathematically rigorous basis for this factorization method, to obtain exact formulas for the homogenized coefficients, and to address geometries where two periodic media are placed side by side. The first result of this thesis concerns eigenvalue problem models, which are used to characterize the state of criticality of the reactor, under a symmetry assumption on the coefficients. The convergence of the homogenization process is proved, and formulas for the homogenized coefficients are given. We then show that, without symmetry assumptions, a drift phenomenon appears. It is characterized by means of a real Bloch wave method, which gives the homogenized limit in the general case. These results for the critical problem are then adapted to the evolution model. Finally, the homogenization of the critical problem in the case of two side-by-side periodic media is studied on a one-dimensional one-equation model. (authors)
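The factorization described above is easiest to see in the simplest setting: for the 1-D scalar equation -(a(x/ε) u')' = f with periodic coefficient a, the homogenized coefficient is the harmonic mean of a over one cell (a classical textbook result, not the thesis's coupled diffusion system):

```python
import math

# 1-D homogenization sketch: the homogenized coefficient of a periodic
# medium is the harmonic mean of a over the unit cell, not the naive
# arithmetic average (classical scalar 1-D result).
def a(y):                       # periodic coefficient on the unit cell
    return 2.0 + math.sin(2 * math.pi * y)

n = 100000
cell = [(i + 0.5) / n for i in range(n)]          # midpoint quadrature
arith = sum(a(y) for y in cell) / n               # naive average: wrong limit
harm = n / sum(1.0 / a(y) for y in cell)          # harmonic mean: correct limit

print(arith, harm)   # the two averages differ, so naive averaging fails
```

For a(y) = 2 + sin(2πy) the harmonic mean evaluates to sqrt(3) ≈ 1.73, well below the arithmetic mean of 2, so a naively averaged medium would be too conductive.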
7 CFR 58.920 - Homogenization.
2010-01-01
... 7 Agriculture 3 2010-01-01 2010-01-01 false Homogenization. 58.920 Section 58.920 Agriculture... Procedures § 58.920 Homogenization. Where applicable concentrated products shall be homogenized for the... homogenization and the pressure at which homogenization is accomplished will be that which accomplishes the most...
Reducing disk storage of full-3D seismic waveform tomography (F3DT) through lossy online compression
Lindstrom, Peter; Chen, Po; Lee, En-Jui
2016-08-01
Full-3D seismic waveform tomography (F3DT) is the latest seismic tomography technique that can assimilate broadband, multi-component seismic waveform observations into high-resolution 3D subsurface seismic structure models. The main drawback in the current F3DT implementation, in particular the scattering-integral implementation (F3DT-SI), is the high disk storage cost and the associated I/O overhead of archiving the 4D space-time wavefields of the receiver- or source-side strain tensors. The strain tensor fields are needed for computing the data sensitivity kernels, which are used for constructing the Jacobian matrix in the Gauss-Newton optimization algorithm. In this study, we have successfully integrated a lossy compression algorithm into our F3DT-SI workflow to significantly reduce the disk space for storing the strain tensor fields. The compressor supports a user-specified tolerance for bounding the error, and can be integrated into our finite-difference wave-propagation simulation code used for computing the strain fields. The decompressor can be integrated into the kernel calculation code that reads the strain fields from the disk and computes the data sensitivity kernels. During the wave-propagation simulations, we compress the strain fields before writing them to the disk. To compute the data sensitivity kernels, we read the compressed strain fields from the disk and decompress them before using them in kernel calculations. Experiments using a realistic dataset in our California statewide F3DT project have shown that we can reduce the strain-field disk storage by at least an order of magnitude with acceptable loss, and also improve the overall I/O performance of the entire F3DT-SI workflow significantly. The integration of the lossy online compressor may open up the possibility of wide adoption of F3DT-SI in routine seismic tomography practices in the near future.
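The key property used above is a user-specified error bound. A minimal sketch of that idea is uniform scalar quantization, where snapping each value to a grid of spacing 2·tol guarantees a reconstruction error of at most tol (the production compressor in the paper is of course far more sophisticated):

```python
# Minimal sketch of error-bounded lossy compression via uniform scalar
# quantization; illustrates the bounded-error guarantee only.
def compress(values, tol):
    # Snap each value to a grid of spacing 2*tol, so the reconstruction
    # error is at most tol. The small integers that result compress well
    # with any lossless entropy coder downstream.
    return [round(v / (2 * tol)) for v in values]

def decompress(codes, tol):
    return [c * 2 * tol for c in codes]

field = [0.001, -3.2, 7.77, 0.42]   # stand-in for a strain-tensor field
tol = 0.05
codes = compress(field, tol)
recon = decompress(codes, tol)
assert all(abs(a - b) <= tol for a, b in zip(field, recon))
print(codes)
```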
Genetic Homogenization of Composite Materials
Directory of Open Access Journals (Sweden)
P. Tobola
2009-04-01
Full Text Available The paper is focused on numerical studies of electromagnetic properties of composite materials used for the construction of small airplanes. Discussions concentrate on the genetic homogenization of composite layers and composite layers with a slot. The homogenization is aimed at reducing the CPU-time demands of EMC computational models of electrically large airplanes. First, a methodology of creating a 3-dimensional numerical model of a composite material in CST Microwave Studio is proposed, focusing on a sufficient accuracy of the model. Second, a proper implementation of a genetic optimization in Matlab is discussed. Third, an association of the optimization script with a simplified 2-dimensional homogeneous equivalent model in Comsol Multiphysics is proposed, considering EMC issues. Results of computations are experimentally verified.
Spontaneous compactification to homogeneous spaces
International Nuclear Information System (INIS)
Mourao, J.M.
1988-01-01
The spontaneous compactification of extra dimensions to compact homogeneous spaces is studied. The methods developed within the framework of the coset space dimensional reduction scheme and the most general form of invariant metrics are used to find solutions of the spontaneous compactification equations.
Electro-magnetostatic homogenization of bianisotropic metamaterials
Fietz, Chris
2012-01-01
We apply the method of asymptotic homogenization to metamaterials with microscopically bianisotropic inclusions to calculate a full set of constitutive parameters in the long wavelength limit. Two different implementations of electromagnetic asymptotic homogenization are presented. We test the homogenization procedure on two different metamaterial examples. Finally, the analytical solution for long wavelength homogenization of a one dimensional metamaterial with microscopically bi-isotropic i...
Sayood, K.; Chen, Y. C.; Wang, X.
1992-01-01
During this reporting period we have worked on three somewhat different problems. These are modeling of video traffic in packet networks, low rate video compression, and the development of a lossy + lossless image compression algorithm, which might have some application in browsing algorithms. The lossy + lossless scheme is an extension of work previously done under this grant. It provides a simple technique for incorporating browsing capability. The low rate coding scheme is also a simple variation on the standard discrete cosine transform (DCT) coding approach. In spite of its simplicity, the approach provides surprisingly high quality reconstructions. The modeling approach is borrowed from the speech recognition literature, and seems to be promising in that it provides a simple way of obtaining an idea about the second order behavior of a particular coding scheme. Details about these are presented.
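The "standard discrete cosine transform (DCT) coding approach" the low-rate scheme builds on is transform, quantize, inverse-transform. A self-contained 1-D sketch (hypothetical block and quantization step; the report's actual variation is not reproduced here):

```python
import math

# Standard DCT coding step: transform a block, coarsely quantize the
# coefficients (the lossy step), then inverse-transform.
N = 8

def dct(x):                      # DCT-II, orthonormal scaling
    out = []
    for k in range(N):
        s = sum(x[n] * math.cos(math.pi * (n + 0.5) * k / N) for n in range(N))
        out.append(s * math.sqrt((1 if k == 0 else 2) / N))
    return out

def idct(X):                     # inverse (DCT-III), same scaling
    out = []
    for n in range(N):
        s = X[0] / math.sqrt(N)
        s += sum(X[k] * math.sqrt(2 / N) * math.cos(math.pi * (n + 0.5) * k / N)
                 for k in range(1, N))
        out.append(s)
    return out

block = [float(i) for i in range(N)]            # a smooth 1-D sample block
q = 1.0                                         # quantization step
coded = [round(c / q) * q for c in dct(block)]  # lossy step: quantize
recon = idct(coded)
print(max(abs(a - b) for a, b in zip(block, recon)))  # small for smooth data
```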
Observational homogeneity of the Universe
International Nuclear Information System (INIS)
Bonnor, W.B.; Ellis, G.F.R.
1986-01-01
A new approach to observational homogeneity is presented. The observation that stars and galaxies in distant regions appear similar to those nearby may be taken to imply that matter has had a similar thermodynamic history in widely separated parts of the Universe (the Postulate of Uniform Thermal Histories, or PUTH). The supposition is now made that similar thermodynamic histories imply similar dynamical histories. Then the distant apparent similarity is evidence for spatial homogeneity of the Universe. General Relativity is used to test this idea, taking a perfect fluid model and implementing PUTH by the condition that the density and entropy per baryon shall be the same function of the proper time along all galaxy world-lines. (author)
Conclusions about homogeneity and devitrification
International Nuclear Information System (INIS)
Larche, F.
1997-01-01
A lot of experimental data concerning homogeneity and devitrification of R7T7 glass have been published. It appears that: - the crystallization process is very limited, - the interfaces due to bubbles and the container wall favor crystallization locally but the ratio of crystallized volume remains always below a few per cents, and - crystallization has no damaging long-term effects as far as leaching tests can be trusted. (A.C.)
Is charity a homogeneous good?
Backus, Peter
2010-01-01
In this paper I estimate income and price elasticities of donations to six different charitable causes to test the assumption that charity is a homogeneous good. In the US, charitable donations can be deducted from taxable income. This has long been recognized as producing a price, or taxprice, of giving equal to one minus the marginal tax rate faced by the donor. A substantial portion of the economic literature on giving has focused on estimating price and income elasticities of giving as th...
International Nuclear Information System (INIS)
Torres, V.; Beruete, M.; Sánchez, P.; Del Villar, I.
2016-01-01
An indium tin oxide (ITO) refractometer based on the generation of lossy mode resonances (LMRs) and surface plasmon resonances (SPRs) is presented. Both LMRs and SPRs are excited, in a single setup, under grazing angle incidence with Kretschmann configuration in an ITO thin-film deposited on a glass slide. The sensing capabilities of the device are demonstrated using several solutions of glycerin and water with refractive indices ranging from 1.33 to 1.47. LMRs are excited in the visible range, from 617 nm to 682 nm under TE polarization and from 533 nm to 637 nm under TM polarization, with a maximum sensitivity of 700 nm/RIU and 1200 nm/RIU, respectively. For the SPRs, a sensing range between 1375 nm and 2494 nm with a maximum sensitivity of 8300 nm/RIU is measured under TM polarization. Experimental results are supported with numerical simulations based on a modification of the plane-wave method for a one-dimensional multilayer waveguide
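Sensitivity of a resonance-based refractometer is wavelength shift per refractive-index unit. Dividing the endpoint values quoted in the abstract gives the average sensitivity of each range (the quoted maxima are local slopes and therefore higher):

```python
# Average sensitivity (nm/RIU) from the endpoint values in the abstract;
# the quoted maxima (700, 1200, 8300 nm/RIU) are local slopes.
def avg_sensitivity(lam_lo, lam_hi, n_lo=1.33, n_hi=1.47):
    return (lam_hi - lam_lo) / (n_hi - n_lo)

print(round(avg_sensitivity(617, 682)))     # TE LMR average
print(round(avg_sensitivity(533, 637)))     # TM LMR average
print(round(avg_sensitivity(1375, 2494)))   # TM SPR average
```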
Physical applications of homogeneous balls
Scarr, Tzvi
2005-01-01
One of the mathematical challenges of modern physics lies in the development of new tools to efficiently describe different branches of physics within one mathematical framework. This text introduces precisely such a broad mathematical model, one that gives a clear geometric expression of the symmetry of physical laws and is entirely determined by that symmetry. The first three chapters discuss the occurrence of bounded symmetric domains (BSDs) or homogeneous balls and their algebraic structure in physics. The book further provides a discussion of how to obtain a triple algebraic structure ass
Heterotic strings on homogeneous spaces
International Nuclear Information System (INIS)
Israel, D.; Kounnas, C.; Orlando, D.; Petropoulos, P.M.
2005-01-01
We construct heterotic string backgrounds corresponding to families of homogeneous spaces as exact conformal field theories. They contain left cosets of compact groups by their maximal tori supported by NS-NS 2-forms and gauge field fluxes. We give the general formalism and modular-invariant partition functions, then we consider some examples such as SU(2)/U(1)≃S² (already described in a previous paper) and the SU(3)/U(1)² flag space. As an application we construct new supersymmetric string vacua with magnetic fluxes and a linear dilaton. (Abstract Copyright [2005], Wiley Periodicals, Inc.)
Homogenization scheme for acoustic metamaterials
Yang, Min
2014-02-26
We present a homogenization scheme for acoustic metamaterials that is based on reproducing the lowest orders of scattering amplitudes from a finite volume of metamaterials. This approach is noted to differ significantly from that of coherent potential approximation, which is based on adjusting the effective-medium parameters to minimize scatterings in the long-wavelength limit. With the aid of metamaterials' eigenstates, the effective parameters, such as mass density and elastic modulus, can be obtained by matching the surface responses of a metamaterial's structural unit cell with a piece of homogenized material. From the Green's theorem applied to the exterior domain problem, matching the surface responses is noted to be the same as reproducing the scattering amplitudes. We verify our scheme by applying it to three different examples: a layered lattice, a two-dimensional hexagonal lattice, and a decorated-membrane system. It is shown that the predicted characteristics and wave fields agree almost exactly with numerical simulations and experiments and the scheme's validity is constrained by the number of dominant surface multipoles instead of the usual long-wavelength assumption. In particular, the validity extends to the full band in one dimension and to regimes near the boundaries of the Brillouin zone in two dimensions.
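For the layered-lattice example above, the long-wavelength baseline that any such scheme must recover is the textbook volume average: effective density is the arithmetic mean, effective bulk modulus the harmonic mean (layer values below are illustrative, not from the paper):

```python
# Long-wavelength effective parameters of a 1-D layered acoustic medium
# (textbook volume averages; illustrative layer values).
layers = [  # (volume fraction, mass density kg/m^3, bulk modulus Pa)
    (0.5, 1000.0, 2.2e9),   # water-like layer
    (0.5, 1200.0, 3.0e9),   # polymer-like layer
]
rho_eff = sum(f * rho for f, rho, _ in layers)               # arithmetic mean
kappa_eff = 1.0 / sum(f / kappa for f, _, kappa in layers)   # harmonic mean
speed = (kappa_eff / rho_eff) ** 0.5                         # effective sound speed
print(rho_eff, kappa_eff, speed)
```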
ISOTOPE METHODS IN HOMOGENEOUS CATALYSIS.
Energy Technology Data Exchange (ETDEWEB)
BULLOCK,R.M.; BENDER,B.R.
2000-12-01
The use of isotope labels has had a fundamentally important role in the determination of mechanisms of homogeneously catalyzed reactions. Mechanistic data is valuable since it can assist in the design and rational improvement of homogeneous catalysts. There are several ways to use isotopes in mechanistic chemistry. Isotopes can be introduced into controlled experiments and followed to see where they do or do not go; in this way, Libby, Calvin, Taube and others used isotopes to elucidate mechanistic pathways for very different, yet important chemistries. Another important isotope method is the study of kinetic isotope effects (KIEs) and equilibrium isotope effects (EIEs). Here the mere observation of where a label winds up is no longer enough - what matters is how much slower (or faster) a labeled molecule reacts than the unlabeled material. The most careful studies essentially involve the measurement of isotope fractionation between a reference ground state and the transition state. Thus kinetic isotope effects provide unique data unavailable from other methods, since information about the transition state of a reaction is obtained. Because getting an experimental glimpse of transition states is really tantamount to understanding catalysis, kinetic isotope effects are very powerful.
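A back-of-envelope feel for the magnitude of a primary H/D KIE comes from the zero-point energy difference of the C-H and C-D stretches (typical literature frequencies; a semiclassical estimate that ignores tunneling):

```python
import math

# Semiclassical primary kinetic isotope effect from the zero-point
# energy lost at the transition state (typical stretch frequencies;
# tunneling neglected).
h_c_over_k = 1.4388             # cm*K, second radiation constant hc/kB
nu_CH, nu_CD = 2900.0, 2100.0   # C-H and C-D stretch frequencies, cm^-1
T = 298.0                       # K

dZPE = 0.5 * (nu_CH - nu_CD)            # cm^-1, ZPE difference
kie = math.exp(dZPE * h_c_over_k / T)   # k_H / k_D
print(round(kie, 1))   # close to 7, the classic maximum for a primary H/D KIE
```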
Improving homogeneity by dynamic speed limit systems.
Nes, N. van, Brandenberg, S. & Twisk, D.A.M.
2010-01-01
Homogeneity of driving speeds is an important variable in determining road safety; more homogeneous driving speeds increase road safety. This study investigates the effect of introducing dynamic speed limit systems on homogeneity of driving speeds. A total of 46 subjects twice drove a route along 12
7 CFR 58.636 - Homogenization.
2010-01-01
... 7 Agriculture 3 2010-01-01 2010-01-01 false Homogenization. 58.636 Section 58.636 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Standards... Procedures § 58.636 Homogenization. Homogenization of the pasteurized mix shall be accomplished to...
The homogeneous geometries of real hyperbolic space
DEFF Research Database (Denmark)
Castrillón López, Marco; Gadea, Pedro Martínez; Swann, Andrew Francis
We describe the holonomy algebras of all canonical connections of homogeneous structures on real hyperbolic spaces in all dimensions. The structural results obtained then lead to a determination of the types, in the sense of Tricerri and Vanhecke, of the corresponding homogeneous tensors. We use our analysis to show that the moduli space of homogeneous structures on real hyperbolic space has two connected components.
Orthogonality Measurement for Homogenous Projects-Bases
Ivan, Ion; Sandu, Andrei; Popa, Marius
2009-01-01
The homogenous projects-base concept is defined. Next, the necessary steps to create a homogenous projects-base are presented. A metric system is built, which then will be used for analyzing projects. The indicators which are meaningful for analyzing a homogenous projects-base are selected. The given hypothesis is experimentally verified. The…
The electromagnetic radiation from simple sources in the presence of a homogeneous dielectric sphere
Mason, V. B.
1973-01-01
In this research, the effect of a homogeneous dielectric sphere on the electromagnetic radiation from simple sources is treated as a boundary value problem, and the solution is obtained by the technique of dyadic Green's functions. Exact representations of the electric fields in the various regions due to a source located inside, outside, or on the surface of a dielectric sphere are formulated. Particular attention is given to the effect of sphere size, source location, dielectric constant, and dielectric loss on the radiation patterns and directivity of small spheres (less than 5 wavelengths in diameter) using the Huygens' source excitation. The computed results are found to closely agree with those measured for waveguide-excited plexiglas spheres. Radiation patterns for an extended Huygens' source and for curved electric dipoles located on the sphere's surface are also presented. The resonance phenomenon associated with the dielectric sphere is studied in terms of the modal representation of the radiated fields. It is found that when the sphere is excited at certain frequencies, much of the energy is radiated into the sidelobes. The addition of a moderate amount of dielectric loss, however, quickly attenuates this resonance effect. A computer program which may be used to calculate the directivity and radiation pattern of a Huygens' source located inside or on the surface of a lossy dielectric sphere is listed.
The evaporative vector: Homogeneous systems
International Nuclear Information System (INIS)
Klots, C.E.
1987-05-01
Molecular beams of van der Waals molecules are the subject of much current research. Among the methods used to form these beams, three (sputtering, laser ablation, and the sonic nozzle expansion of neat gases) yield what are now recognized to be ''warm clusters.'' They contain enough internal energy to undergo a number of first-order processes, in particular evaporation. Because of this evaporation and its attendant cooling, the properties of such clusters are time-dependent. The states of matter which can be arrived at via an evaporative vector on a typical laboratory time-scale are discussed. Topics include (1) temperatures, (2) metastability, (3) phase transitions, (4) kinetic energies of fragmentation, and (5) the expression of magic properties, all for evaporating homogeneous clusters.
Directory of Open Access Journals (Sweden)
Mbété, RA.
2007-01-01
Full Text Available Participative Management of the Sanctuary of Western Lowland Gorillas (Gorilla gorilla gorilla) of Lossi in Republic of Congo-Brazzaville: Preliminary Results and Constraints Analysis. The gorilla sanctuary of Lossi tests the synergy between scientific research and nature conservation. Three partners are involved in a participative management process: the Republic of Congo, the local community of Lossi and the European programme on the forest ecosystems in Central Africa. An investigation was carried out in the sanctuary of Lossi in 2003, in order to study in situ the effects generated by the participative management and to identify the constraints linked to the participative approach. The work of primatologists allowed the habituation of the gorillas to human presence and opened up viewing tourism of western lowland gorillas. A camp for tourists and the access road to the sanctuary have been constructed. The tourism generated jobs for the local population, which also benefits from road-repair contracts. The income from tourism allowed the construction of a health centre. However, the work of researchers and the tourism activities were interrupted during the outbreaks of Ebola hemorrhagic fever and during the three civil war episodes. The consolidation and long-term viability of this process of co-management of the natural resources of Lossi depend on the establishment of a management scheme that combines conservation, rural development and scientific research, with equity in the distribution of gains between the partners.
Directory of Open Access Journals (Sweden)
F. Saez de Adana
2009-01-01
Full Text Available This paper presents an efficient application of the Time-Domain Uniform Theory of Diffraction (TD-UTD for the analysis of Ultra-Wideband (UWB mobile communications for indoor environments. The classical TD-UTD formulation is modified to include the contribution of lossy materials and multiple-ray interactions with the environment. The electromagnetic analysis is combined with a ray-tracing acceleration technique to treat realistic and complex environments. The validity of this method is tested with measurements performed inside the Polytechnic building of the University of Alcala and shows good performance of the model for the analysis of UWB propagation.
Reciprocity theory of homogeneous reactions
Agbormbai, Adolf A.
1990-03-01
The reciprocity formalism is applied to the homogeneous gaseous reactions in which the structure of the participating molecules changes upon collision with one another, resulting in a change in the composition of the gas. The approach is applied to various classes of dissociation, recombination, rearrangement, ionizing, and photochemical reactions. It is shown that for the principle of reciprocity to be satisfied it is necessary that all chemical reactions exist in complementary pairs which consist of the forward and backward reactions. The backward reaction may be described by either the reverse or inverse process. The forward and backward processes must satisfy the same reciprocity equation. Because the number of dynamical variables is usually unbalanced on both sides of a chemical equation, it is necessary that this balance be established by including as many of the dynamical variables as needed before the reciprocity equation can be formulated. Statistical transformation models of the reactions are formulated. The models are classified under the titles free exchange, restricted exchange and simplified restricted exchange. The special equations for the forward and backward processes are obtained. The models are consistent with the H theorem and Le Chatelier's principle. The models are also formulated in the context of the direct simulation Monte Carlo method.
Moral Beliefs and Cognitive Homogeneity
Directory of Open Access Journals (Sweden)
Nevia Dolcini
2018-04-01
Full Text Available The Emotional Perception Model of moral judgment intends to account for experientialism about morality and moral reasoning. In explaining how moral beliefs are formed and applied in practical reasoning, the model attempts to overcome the mismatch between reason and action/desire: morality isn’t about reasons for action, yet moral beliefs, if caused by desires, may play a motivational role in (moral) agency. The account allows for two kinds of moral beliefs: genuine moral beliefs, which enjoy a relation to desire, and motivationally inert moral beliefs acquired in ways other than experience. Such an etiology-based dichotomy of concepts, I will argue, leads to the undesirable view of cognition as a non-homogeneous phenomenon. Moreover, the distinction between the two kinds of moral beliefs would entail a further dichotomy encompassing the domain of moral agency: one and the same action might possibly be either genuinely moral, or not moral if performed by individuals lacking the capacity for moral feelings, such as psychopaths.
Homogeneous modes of cosmological instantons
Energy Technology Data Exchange (ETDEWEB)
Gratton, Steven; Turok, Neil
2001-06-15
We discuss the O(4) invariant perturbation modes of cosmological instantons. These modes are spatially homogeneous in Lorentzian spacetime and thus not relevant to density perturbations. But their properties are important in establishing the meaning of the Euclidean path integral. If negative modes are present, the Euclidean path integral is not well defined, but may nevertheless be useful in an approximate description of the decay of an unstable state. When gravitational dynamics is included, counting negative modes requires a careful treatment of the conformal factor problem. We demonstrate that for an appropriate choice of coordinate on phase space, the second order Euclidean action is bounded below for normalized perturbations and has a finite number of negative modes. We prove that there is a negative mode for many gravitational instantons of the Hawking-Moss or Coleman-De Luccia type, and discuss the associated spectral flow. We also investigate Hawking-Turok constrained instantons, which occur in a generic inflationary model. Implementing the regularization and constraint proposed by Kirklin, Turok and Wiseman, we find that those instantons leading to substantial inflation do not possess negative modes. Using an alternate regularization and constraint motivated by reduction from five dimensions, we find a negative mode is present. These investigations shed new light on the suitability of Euclidean quantum gravity as a potential description of our universe.
AQUEOUS HOMOGENEOUS REACTORTECHNICAL PANEL REPORT
Energy Technology Data Exchange (ETDEWEB)
Diamond, D.J.; Bajorek, S.; Bakel, A.; Flanagan, G.; Mubayi, V.; Skarda, R.; Staudenmeier, J.; Taiwo, T.; Tonoike, K.; Tripp, C.; Wei, T.; Yarsky, P.
2010-12-03
Considerable interest has been expressed for developing a stable U.S. production capacity for medical isotopes and particularly for molybdenum-99 (99Mo). This is motivated by recent reductions in production and supply worldwide. Consistent with U.S. nonproliferation objectives, any new production capability should not use highly enriched uranium fuel or targets. Consequently, Aqueous Homogeneous Reactors (AHRs) are under consideration for potential 99Mo production using low-enriched uranium. Although the Nuclear Regulatory Commission (NRC) has guidance to facilitate the licensing process for non-power reactors, that guidance is focused on reactors with fixed, solid fuel and hence is not applicable to an AHR. A panel was convened to study the technical issues associated with normal operation and potential transients and accidents of an AHR that might be designed for isotope production. The panel has produced the requisite AHR licensing guidance for three chapters that exist now for non-power reactor licensing: Reactor Description, Reactor Coolant Systems, and Accident Analysis. The guidance is in two parts for each chapter: 1) the standard format and content a licensee would use and 2) the standard review plan the NRC staff would use. This guidance takes into account the unique features of an AHR, such as the fuel being in solution; the fission product barriers being the vessel and attached systems; the production and release of radiolytic and fission product gases and their impact on operations and their control by a gas management system; and the movement of fuel into and out of the reactor vessel.
Homogeneity and thermodynamic identities in geometrothermodynamics
Energy Technology Data Exchange (ETDEWEB)
Quevedo, Hernando [Universidad Nacional Autonoma de Mexico, Instituto de Ciencias Nucleares (Mexico); Universita di Roma ''La Sapienza'', Dipartimento di Fisica, Rome (Italy); ICRANet, Rome (Italy); Quevedo, Maria N. [Universidad Militar Nueva Granada, Departamento de Matematicas, Facultad de Ciencias Basicas, Bogota (Colombia); Sanchez, Alberto [CIIDET, Departamento de Posgrado, Queretaro (Mexico)
2017-03-15
We propose a classification of thermodynamic systems in terms of the homogeneity properties of their fundamental equations. Ordinary systems correspond to homogeneous functions and non-ordinary systems are given by generalized homogeneous functions. This affects the explicit form of the Gibbs-Duhem relation and Euler's identity. We show that these generalized relations can be implemented in the formalism of black hole geometrothermodynamics in order to completely fix the arbitrariness present in Legendre invariant metrics. (orig.)
A literature review on biotic homogenization
Guangmei Wang; Jingcheng Yang; Chuangdao Jiang; Hongtao Zhao; Zhidong Zhang
2009-01-01
Biotic homogenization is the process whereby the genetic, taxonomic and functional similarity of two or more biotas increases over time. As a new research agenda for conservation biogeography, biotic homogenization has become a rapidly emerging topic of interest in ecology and evolution over the past decade. However, research on this topic is rare in China. Herein, we introduce the development of the concept of biotic homogenization, and then discuss methods to quantify its three components (...
Hybrid diffusion–transport spatial homogenization method
International Nuclear Information System (INIS)
Kooreman, Gabriel; Rahnema, Farzad
2014-01-01
Highlights: • A new hybrid diffusion–transport homogenization method. • An extension of the consistent spatial homogenization (CSH) transport method. • Auxiliary cross section makes homogenized diffusion consistent with heterogeneous diffusion. • An on-the-fly re-homogenization in transport. • The method is faster than fine-mesh transport by 6–8 times. - Abstract: A new hybrid diffusion–transport homogenization method has been developed by extending the consistent spatial homogenization (CSH) transport method to include diffusion theory. As in the CSH method, an “auxiliary cross section” term is introduced into the source term, making the resulting homogenized diffusion equation consistent with its heterogeneous counterpart. The method then utilizes an on-the-fly re-homogenization in transport theory at the assembly level in order to correct for core environment effects on the homogenized cross sections and the auxiliary cross section. The method has been derived in general geometry and tested in a 1-D boiling water reactor (BWR) core benchmark problem for both controlled and uncontrolled configurations. The method has been shown to converge to the reference solution with less than 1.7% average flux error in less than one-third the computational time of the CSH method, i.e., 6 to 8 times faster than fine-mesh transport.
International Nuclear Information System (INIS)
Zhu, Qiong-gan; Wang, Zhi-guo; Tan, Wei
2014-01-01
The combined effect of a side-coupled gain cavity and a lossy cavity on the plasmonic response of a metal-dielectric-metal (MDM) surface plasmon polariton (SPP) waveguide is investigated theoretically using the Green's function method. Our results suggest that the gain and loss parameters influence the amplitude and phase of the fields localized in the two cavities. For the case of balanced gain and loss, the fields of the two cavities are always of equal amplitude but out of phase. A plasmon induced transparency (PIT)-like transmission peak can be achieved by the destructive interference of two fields with anti-phase. For the case of unbalanced gain and loss, some unexpected responses of the structure arise. When the gain is more than the loss, the system response is dissipative around the resonant frequency of the two cavities, where the sum of reflectance and transmittance becomes less than one. This is because the lossy cavity, with a stronger localized field, makes the main contribution to the system response. When the gain is less than the loss, the reverse is true. It is found that the metal loss dissipates the system energy but enables the gain cavity to have a dominant effect on the system response. This mechanism may have potential application for optical amplification and for a plasmonic waveguide switch. (paper)
Self-consolidating concrete homogeneity
Directory of Open Access Journals (Sweden)
Jarque, J. C.
2007-08-01
Full Text Available Concrete instability may lead to the non-uniform distribution of its properties. The homogeneity of self-consolidating concrete in vertically cast members was therefore explored in this study, analyzing both resistance to segregation and pore structure uniformity. To this end, two series of concretes were prepared, self-consolidating and traditional vibrated materials, with different w/c ratios and types of cement. The results showed that self-consolidating concretes exhibit high resistance to segregation, albeit slightly lower than found in the traditional mixtures. The pore structure in the former, however, tended to be slightly more uniform, probably as a result of less intense bleeding. Such concretes are also characterized by greater bulk density, lower porosity and smaller mean pore size, which translates into a higher resistance to pressurized water. For pore diameters of over about 0.5 µm, however, the pore size distribution was found to be similar to the distribution in traditional concretes, with similar absorption rates.
Multilevel Monte Carlo Approaches for Numerical Homogenization
Efendiev, Yalchin R.; Kronsbein, Cornelia; Legoll, Frédéric
2015-01-01
Likewise, when it comes to homogenized solutions, different levels of coarse-grid meshes are used to solve the homogenized equation. We show that, by carefully selecting the number of realizations at each level, we can achieve a speed-up in the computations in comparison to a standard Monte Carlo method.
Benchmarking homogenization algorithms for monthly data
Venema, V. K. C.; Mestre, O.; Aguilar, E.; Auer, I.; Guijarro, J. A.; Domonkos, P.; Vertacnik, G.; Szentimrey, T.; Stepanek, P.; Zahradnicek, P.; Viarre, J.; Müller-Westermeier, G.; Lakatos, M.; Williams, C. N.; Menne, M. J.; Lindau, R.; Rasol, D.; Rustemeier, E.; Kolokythas, K.; Marinova, T.; Andresen, L.; Acquaotta, F.; Fratiannil, S.; Cheval, S.; Klancar, M.; Brunetti, M.; Gruber, C.; Prohom Duran, M.; Likso, T.; Esteban, P.; Brandsma, T.; Willett, K.
2013-09-01
The COST (European Cooperation in Science and Technology) Action ES0601: Advances in homogenization methods of climate series: an integrated approach (HOME) has executed a blind intercomparison and validation study for monthly homogenization algorithms. Time series of monthly temperature and precipitation were evaluated because of their importance for climate studies. The algorithms were validated against a realistic benchmark dataset. Participants provided 25 separate homogenized contributions as part of the blind study as well as 22 additional solutions submitted after the details of the imposed inhomogeneities were revealed. These homogenized datasets were assessed by a number of performance metrics including i) the centered root mean square error relative to the true homogeneous values at various averaging scales, ii) the error in linear trend estimates and iii) traditional contingency skill scores. The metrics were computed both using the individual station series as well as the network average regional series. The performance of the contributions depends significantly on the error metric considered. Although relative homogenization algorithms typically improve the homogeneity of temperature data, only the best ones improve precipitation data. Moreover, state-of-the-art relative homogenization algorithms developed to work with an inhomogeneous reference are shown to perform best. The study showed that currently automatic algorithms can perform as well as manual ones.
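The first two performance metrics named in this abstract, the centered root mean square error and the error in linear trend estimates, are simple to compute for a pair of homogenized/true series. A minimal sketch (function and variable names are illustrative, not taken from the HOME benchmark):

```python
import numpy as np

def centered_rmse(homogenized, truth):
    """Centered RMSE: both series are reduced to anomalies about their own
    means, so a constant offset between them is not penalized."""
    h = homogenized - homogenized.mean()
    t = truth - truth.mean()
    return np.sqrt(np.mean((h - t) ** 2))

def trend_error(homogenized, truth):
    """Difference in fitted linear trend (per time step) between the
    homogenized series and the true homogeneous series."""
    x = np.arange(len(truth))
    slope_h = np.polyfit(x, homogenized, 1)[0]
    slope_t = np.polyfit(x, truth, 1)[0]
    return slope_h - slope_t
```

A series shifted by a constant scores zero on both metrics, while a spurious trend shows up only in the second; this is why the study reports both.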
Investigations into homogenization of electromagnetic metamaterials
DEFF Research Database (Denmark)
Clausen, Niels Christian Jerichau
This dissertation encompasses homogenization methods, with a special interest in their applications to metamaterial homogenization. The first method studied is the Floquet-Bloch method, which is based on the assumption of a material being infinitely periodic. Its field can then be expanded in term...
Homogeneity of Prototypical Attributes in Soccer Teams
Directory of Open Access Journals (Sweden)
Christian Zepp
2015-09-01
Full Text Available Research indicates that the homogeneous perception of prototypical attributes influences several intragroup processes. The aim of the present study was to describe the homogeneous perception of the prototype and to identify specific prototypical subcategories, which are perceived as homogeneous within sport teams. The sample consists of N = 20 soccer teams with a total of N = 278 athletes (age M = 23.5 years, SD = 5.0 years. The results reveal that subcategories describing the cohesiveness of the team and motivational attributes are mentioned homogeneously within sport teams. In addition, gender, identification, team size, and the championship ranking significantly correlate with the homogeneous perception of prototypical attributes. The results are discussed on the basis of theoretical and practical implications.
Multilevel Monte Carlo Approaches for Numerical Homogenization
Efendiev, Yalchin R.
2015-10-01
In this article, we study the application of multilevel Monte Carlo (MLMC) approaches to numerical random homogenization. Our objective is to compute the expectation of some functionals of the homogenized coefficients, or of the homogenized solutions. This is accomplished within MLMC by considering different sizes of representative volumes (RVEs). Many inexpensive computations with the smallest RVE size are combined with fewer expensive computations performed on larger RVEs. Likewise, when it comes to homogenized solutions, different levels of coarse-grid meshes are used to solve the homogenized equation. We show that, by carefully selecting the number of realizations at each level, we can achieve a speed-up in the computations in comparison to a standard Monte Carlo method. Numerical results are presented for both one-dimensional and two-dimensional test-cases that illustrate the efficiency of the approach.
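The MLMC strategy described above rests on a telescoping sum: the expectation at the finest level equals the coarse-level expectation plus the expected level-to-level corrections, each estimated with its own sample count. A toy sketch (the `sampler` callback standing in for an RVE or coarse-grid solve is hypothetical):

```python
import random

def mlmc_estimate(sampler, levels, n_samples):
    """Telescoping MLMC sum: E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}].
    sampler(level, omega) returns the level-`level` approximation of the
    functional for random realization omega; the coarse and fine members
    of each correction share the same omega to reduce variance."""
    total = 0.0
    for level, n in zip(levels, n_samples):
        acc = 0.0
        for _ in range(n):
            omega = random.random()  # one random realization
            fine = sampler(level, omega)
            coarse = sampler(level - 1, omega) if level > 0 else 0.0
            acc += fine - coarse
        total += acc / n
    return total
```

Since corrections shrink with level, `n_samples` is chosen to decrease as `levels` increases: many cheap coarse samples, few expensive fine ones, which is the source of the reported speed-up over standard Monte Carlo.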
String pair production in non homogeneous backgrounds
Energy Technology Data Exchange (ETDEWEB)
Bolognesi, S. [Department of Physics “E. Fermi” University of Pisa, and INFN - Sezione di Pisa,Largo Pontecorvo, 3, Ed. C, 56127 Pisa (Italy); Rabinovici, E. [Racah Institute of Physics, The Hebrew University of Jerusalem,91904 Jerusalem (Israel); Tallarita, G. [Departamento de Ciencias, Facultad de Artes Liberales,Universidad Adolfo Ibáñez, Santiago 7941169 (Chile)
2016-04-28
We consider string pair production in non homogeneous electric backgrounds. We study several particular configurations which can be addressed with the Euclidean world-sheet instanton technique, the analogue of the world-line instanton for particles. In the first case the string is suspended between two D-branes in flat space-time, in the second case the string lives in AdS and terminates on one D-brane (this realizes the holographic Schwinger effect). In some regions of parameter space the result is well approximated by the known analytical formulas, either the particle pair production in non-homogeneous background or the string pair production in homogeneous background. In other cases we see effects which are intrinsically stringy and related to the non-homogeneity of the background. The pair production is enhanced already for particles in time dependent electric field backgrounds. The string nature enhances this even further. For spatially varying electric background fields the string pair production is less suppressed than the rate of particle pair production. We discuss in some detail how the critical field is affected by the non-homogeneity, for both time and space dependent electric field backgrounds. We also comment on what could be an interesting new prediction for the small field limit. The third case we consider is pair production in holographic confining backgrounds with homogeneous and non-homogeneous fields.
String pair production in non homogeneous backgrounds
International Nuclear Information System (INIS)
Bolognesi, S.; Rabinovici, E.; Tallarita, G.
2016-01-01
We consider string pair production in non homogeneous electric backgrounds. We study several particular configurations which can be addressed with the Euclidean world-sheet instanton technique, the analogue of the world-line instanton for particles. In the first case the string is suspended between two D-branes in flat space-time, in the second case the string lives in AdS and terminates on one D-brane (this realizes the holographic Schwinger effect). In some regions of parameter space the result is well approximated by the known analytical formulas, either the particle pair production in non-homogeneous background or the string pair production in homogeneous background. In other cases we see effects which are intrinsically stringy and related to the non-homogeneity of the background. The pair production is enhanced already for particles in time dependent electric field backgrounds. The string nature enhances this even further. For spatially varying electric background fields the string pair production is less suppressed than the rate of particle pair production. We discuss in some detail how the critical field is affected by the non-homogeneity, for both time and space dependent electric field backgrounds. We also comment on what could be an interesting new prediction for the small field limit. The third case we consider is pair production in holographic confining backgrounds with homogeneous and non-homogeneous fields.
Benchmarking homogenization algorithms for monthly data
Directory of Open Access Journals (Sweden)
V. K. C. Venema
2012-01-01
Full Text Available The COST (European Cooperation in Science and Technology) Action ES0601: advances in homogenization methods of climate series: an integrated approach (HOME) has executed a blind intercomparison and validation study for monthly homogenization algorithms. Time series of monthly temperature and precipitation were evaluated because of their importance for climate studies and because they represent two important types of statistics (additive and multiplicative). The algorithms were validated against a realistic benchmark dataset. The benchmark contains real inhomogeneous data as well as simulated data with inserted inhomogeneities. Random independent break-type inhomogeneities with normally distributed breakpoint sizes were added to the simulated datasets. To approximate real world conditions, breaks were introduced that occur simultaneously in multiple station series within a simulated network of station data. The simulated time series also contained outliers, missing data periods and local station trends. Further, a stochastic nonlinear global (network-wide) trend was added.
Participants provided 25 separate homogenized contributions as part of the blind study. After the deadline at which details of the imposed inhomogeneities were revealed, 22 additional solutions were submitted. These homogenized datasets were assessed by a number of performance metrics including (i) the centered root mean square error relative to the true homogeneous value at various averaging scales, (ii) the error in linear trend estimates and (iii) traditional contingency skill scores. The metrics were computed both using the individual station series as well as the network average regional series. The performance of the contributions depends significantly on the error metric considered. Contingency scores by themselves are not very informative. Although relative homogenization algorithms typically improve the homogeneity of temperature data, only the best ones improve
Poisson-Jacobi reduction of homogeneous tensors
International Nuclear Information System (INIS)
Grabowski, J; Iglesias, D; Marrero, J C; Padron, E; Urbanski, P
2004-01-01
The notion of homogeneous tensors is discussed. We show that there is a one-to-one correspondence between multivector fields on a manifold M, homogeneous with respect to a vector field Δ on M, and first-order polydifferential operators on a closed submanifold N of codimension 1 such that Δ is transversal to N. This correspondence relates the Schouten-Nijenhuis bracket of multivector fields on M to the Schouten-Jacobi bracket of first-order polydifferential operators on N and generalizes the Poissonization of Jacobi manifolds. Actually, it can be viewed as a super-Poissonization. This procedure of passing from a homogeneous multivector field to a first-order polydifferential operator can also be understood as a sort of reduction; in the standard case-a half of a Poisson reduction. A dual version of the above correspondence yields in particular the correspondence between Δ-homogeneous symplectic structures on M and contact structures on N
Computational Method for Atomistic-Continuum Homogenization
National Research Council Canada - National Science Library
Chung, Peter
2002-01-01
The homogenization method is used as a framework for developing a multiscale system of equations involving atoms at zero temperature at the small scale and continuum mechanics at the very large scale...
Homogenization and Control of Lattice Structures
National Research Council Canada - National Science Library
Blankenship, G. L
1985-01-01
...., trusses may be modeled by beam equations). Using a technique from the mathematics of asymptotic analysis called "homogenization," the author shows how such approximations may be derived in a systematic way that avoids errors made using...
Homogenization of High-Contrast Brinkman Flows
Brown, Donald L.; Efendiev, Yalchin R.; Li, Guanglian; Savatorova, Viktoria
2015-01-01
, Homogenization: Methods and Applications, Transl. Math. Monogr. 234, American Mathematical Society, Providence, RI, 2007, G. Allaire, SIAM J. Math. Anal., 23 (1992), pp. 1482--1518], although a powerful tool, are not applicable here. Our second point
Homogenized thermal conduction model for particulate foods
Chinesta , Francisco; Torres , Rafael; Ramón , Antonio; Rodrigo , Mari Carmen; Rodrigo , Miguel
2002-01-01
International audience; This paper deals with the definition of an equivalent thermal conductivity for particulate foods. A homogenized thermal model is used to assess the effect of particulate spatial distribution and differences in thermal conductivities. We prove that the spatial average of the conductivity can be used in a homogenized heat transfer model if the conductivity differences among the food components are not very large, usually the highest conductivity ratio between the foods ...
Layout optimization using the homogenization method
Suzuki, Katsuyuki; Kikuchi, Noboru
1993-01-01
A generalized layout problem involving sizing, shape, and topology optimization is solved by using the homogenization method for three-dimensional linearly elastic shell structures in order to seek a possibility of establishment of an integrated design system of automotive car bodies, as an extension of the previous work by Bendsoe and Kikuchi. A formulation of a three-dimensional homogenized shell, a solution algorithm, and several examples of computing the optimum layout are presented in this first part of the two articles.
Diffusion piecewise homogenization via flux discontinuity ratios
International Nuclear Information System (INIS)
Sanchez, Richard; Dante, Giorgio; Zmijarevic, Igor
2013-01-01
We analyze piecewise homogenization with flux-weighted cross sections and preservation of averaged currents at the boundary of the homogenized domain. Introduction of a set of flux discontinuity ratios (FDR) that preserve reference interface currents leads to preservation of averaged region reaction rates and fluxes. We consider the class of numerical discretizations with one degree of freedom per volume and per surface and prove that when the homogenization and computing meshes are equal there is a unique solution for the FDRs which exactly preserve interface currents. For diffusion sub-meshing we introduce a Jacobian-Free Newton-Krylov method and for all cases considered obtain an 'exact' numerical solution (eight digits for the interface currents). The homogenization is completed by extending the familiar full assembly homogenization via flux discontinuity factors to the sides of regions lying on the boundary of the piecewise homogenized domain. Finally, for the familiar nodal discretization we numerically find that the FDRs obtained with no sub-mesh (nearly at no cost) can be effectively used for whole-core diffusion calculations with sub-mesh. This is not the case, however, for cell-centered finite differences. (authors)
International Nuclear Information System (INIS)
Moutsopoulos, George
2013-01-01
We solve the equations of topologically massive gravity (TMG) with a potentially non-vanishing cosmological constant for homogeneous metrics without isotropy. We only reproduce known solutions. We also discuss their homogeneous deformations, possibly with isotropy. We show that de Sitter space and hyperbolic space cannot be infinitesimally homogeneously deformed in TMG. We clarify some of their Segre–Petrov types and discuss the warped de Sitter spacetime. (paper)
Moutsopoulos, George
2013-06-01
We solve the equations of topologically massive gravity (TMG) with a potentially non-vanishing cosmological constant for homogeneous metrics without isotropy. We only reproduce known solutions. We also discuss their homogeneous deformations, possibly with isotropy. We show that de Sitter space and hyperbolic space cannot be infinitesimally homogeneously deformed in TMG. We clarify some of their Segre-Petrov types and discuss the warped de Sitter spacetime.
Rapid biotic homogenization of marine fish assemblages
Magurran, Anne E.; Dornelas, Maria; Moyes, Faye; Gotelli, Nicholas J.; McGill, Brian
2015-01-01
The role human activities play in reshaping biodiversity is increasingly apparent in terrestrial ecosystems. However, the responses of entire marine assemblages are not well-understood, in part, because few monitoring programs incorporate both spatial and temporal replication. Here, we analyse an exceptionally comprehensive 29-year time series of North Atlantic groundfish assemblages monitored over 5° latitude to the west of Scotland. These fish assemblages show no systematic change in species richness through time, but steady change in species composition, leading to an increase in spatial homogenization: the species identity of colder northern localities increasingly resembles that of warmer southern localities. This biotic homogenization mirrors the spatial pattern of unevenly rising ocean temperatures over the same time period suggesting that climate change is primarily responsible for the spatial homogenization we observe. In this and other ecosystems, apparent constancy in species richness may mask major changes in species composition driven by anthropogenic change. PMID:26400102
Two-Dimensional Homogeneous Fermi Gases
Hueck, Klaus; Luick, Niclas; Sobirey, Lennart; Siegl, Jonas; Lompe, Thomas; Moritz, Henning
2018-02-01
We report on the experimental realization of homogeneous two-dimensional (2D) Fermi gases trapped in a box potential. In contrast to harmonically trapped gases, these homogeneous 2D systems are ideally suited to probe local as well as nonlocal properties of strongly interacting many-body systems. As a first benchmark experiment, we use a local probe to measure the density of a noninteracting 2D Fermi gas as a function of the chemical potential and find excellent agreement with the corresponding equation of state. We then perform matter wave focusing to extract the momentum distribution of the system and directly observe Pauli blocking in a near unity occupation of momentum states. Finally, we measure the momentum distribution of an interacting homogeneous 2D gas in the crossover between attractively interacting fermions and bosonic dimers.
Internal homogenization: effective permittivity of a coated sphere.
Chettiar, Uday K; Engheta, Nader
2012-10-08
The concept of internal homogenization is introduced as a complementary approach to the conventional homogenization schemes, which could be termed external homogenization. The theory for the internal homogenization of the permittivity of subwavelength coated spheres is presented. The effective permittivity derived from the internal homogenization of core-shells is discussed for plasmonic and dielectric constituent materials. The effective model provided by the homogenization is a useful design tool in constructing coated particles with desired resonant properties.
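For a subwavelength core-shell sphere, the quasi-static effective permittivity has a standard closed form in terms of the core volume fraction; a sketch of that textbook result, not the paper's own derivation (names and parameter choices are illustrative):

```python
def coated_sphere_eff_permittivity(eps_core, eps_shell, r_core, r_shell):
    """Quasi-static effective permittivity of a subwavelength coated
    sphere: the core-shell particle is replaced by a homogeneous sphere
    of radius r_shell with the same dipolar response.
    f = (r_core / r_shell)**3 is the core volume fraction; permittivities
    may be complex (lossy or plasmonic constituents)."""
    f = (r_core / r_shell) ** 3
    num = eps_core * (1 + 2 * f) + 2 * eps_shell * (1 - f)
    den = eps_core * (1 - f) + eps_shell * (2 + f)
    return eps_shell * num / den
```

The two limits behave as expected: a vanishing core (f = 0) returns the shell permittivity, and a core filling the whole sphere (f = 1) returns the core permittivity.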
Statistical methods for assessment of blend homogeneity
DEFF Research Database (Denmark)
Madsen, Camilla
2002-01-01
In this thesis the use of various statistical methods to address some of the problems related to assessment of the homogeneity of powder blends in tablet production is discussed. It is not straightforward to assess the homogeneity of a powder blend. The reason is partly that in bulk materials..., it is shown how to set up parametric acceptance criteria for the batch that give a high confidence that future samples with a probability larger than a specified value will pass the USP three-class criteria. Properties and robustness of proposed changes to the USP test for content uniformity are investigated...
Homogenization of High-Contrast Brinkman Flows
Brown, Donald L.
2015-04-16
Modeling porous flow in complex media is a challenging problem. Not only is the problem inherently multiscale but, due to high contrast in permeability values, flow velocities may differ greatly throughout the medium. To avoid complicated interface conditions, the Brinkman model is often used for such flows [O. Iliev, R. Lazarov, and J. Willems, Multiscale Model. Simul., 9 (2011), pp. 1350--1372]. Instead of permeability variations and contrast being contained in the geometric media structure, this information is contained in a highly varying and high-contrast coefficient. In this work, we present two main contributions. First, we develop a novel homogenization procedure for the high-contrast Brinkman equations by constructing correctors and carefully estimating the residuals. Understanding the relationship between scales and contrast values is critical to obtaining useful estimates. Therefore, standard convergence-based homogenization techniques [G. A. Chechkin, A. L. Piatniski, and A. S. Shamev, Homogenization: Methods and Applications, Transl. Math. Monogr. 234, American Mathematical Society, Providence, RI, 2007, G. Allaire, SIAM J. Math. Anal., 23 (1992), pp. 1482--1518], although a powerful tool, are not applicable here. Our second point is that the Brinkman equations, in certain scaling regimes, are invariant under homogenization. Unlike in the case of Stokes-to-Darcy homogenization [D. Brown, P. Popov, and Y. Efendiev, GEM Int. J. Geomath., 2 (2011), pp. 281--305, E. Marusic-Paloka and A. Mikelic, Boll. Un. Mat. Ital. A (7), 10 (1996), pp. 661--671], the results presented here under certain velocity regimes yield a Brinkman-to-Brinkman upscaling that allows using a single software platform to compute on both microscales and macroscales. In this paper, we discuss the homogenized Brinkman equations. We derive auxiliary cell problems to build correctors and calculate effective coefficients for certain velocity regimes. Due to the boundary effects, we construct
Flows and chemical reactions in homogeneous mixtures
Prud'homme, Roger
2013-01-01
Flows with chemical reactions can occur in various fields such as combustion, process engineering, aeronautics, the atmospheric environment and aquatics. The examples of application chosen in this book mainly concern homogeneous reactive mixtures that can occur in propellers within the fields of process engineering and combustion: - propagation of sound and monodimensional flows in nozzles, which may include disequilibria of the internal modes of the energy of molecules; - ideal chemical reactors, stabilization of their steady operation points in the homogeneous case of a perfect mixture and c
Homogenization versus homogenization-free method to measure muscle glycogen fractions.
Mojibi, N; Rasouli, M
2016-12-01
Glycogen is extracted from animal tissues with or without homogenization, using cold perchloric acid. Three methods were compared for the determination of glycogen in rat muscle at different physiological states. Two groups of five rats were kept at rest or subjected to 45 minutes of muscular activity. The glycogen fractions were extracted and measured using the three methods. The data from the homogenization method show that total glycogen decreased following 45 min of physical activity and that the change occurred entirely in acid soluble glycogen (ASG), while acid insoluble glycogen (AIG) did not change significantly. Similar results were obtained using the "total glycogen fractionation" method. The findings of the "homogenization-free" method indicate that the acid insoluble fraction (AIG) was the main portion of muscle glycogen and that the majority of changes occurred in the AIG fraction. The results of the "homogenization method" are identical with those of "total glycogen fractionation", but differ from those of the "homogenization-free" protocol. The ASG fraction is the major portion of muscle glycogen and is the more metabolically active form.
The homogeneous marginal utility of income assumption
Demuynck, T.
2015-01-01
We develop a test to verify if every agent from a population of heterogeneous consumers has the same marginal utility of income function. This homogeneous marginal utility of income assumption is often (implicitly) used in applied demand studies because it has nice aggregation properties and
Synthesis of silica nanosphere from homogeneous and ...
Indian Academy of Sciences (India)
WINTEC
avoid it, reaction in heterogeneous system using CTABr was carried out. Nanosized silica sphere with ... Homogeneous system contains a mixture of ethanol, water, aqueous ammonia and ... heated to 823 K (rate, 1 K/min) in air and kept at this.
Gravitational Metric Tensor Exterior to Rotating Homogeneous ...
African Journals Online (AJOL)
The covariant and contravariant metric tensors exterior to a homogeneous spherical body rotating uniformly about a common φ axis with constant angular velocity ω are constructed. The constructed metric tensors in this gravitational field have seven non-zero distinct components. The Lagrangian for this gravitational field is ...
Homogeneous nucleation of water in synthetic air
Fransen, M.A.L.J.; Sachteleben, E.; Hruby, J.; Smeulders, D.M.J.; DeMott, P.J.; O'Dowd, C.D.
2013-01-01
Homogeneous nucleation rates for water vapor in synthetic air are measured by means of a Pulse-Expansion Wave Tube (PEWT). A comparison of the experimental nucleation rates with the Classical Nucleation Theory (CNT) shows that a more elaborate model is necessary to describe supercooled water
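The CNT rate against which the abstract compares its measurements has a compact form, J = J0·exp(−ΔG*/kT), with the Gibbs barrier ΔG* = 16πσ³v² / (3(kT ln S)²) for surface tension σ, molecular volume v, and supersaturation S. A minimal sketch (the kinetic prefactor J0 and all parameter values below are illustrative assumptions, not data from the PEWT experiments):

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def cnt_nucleation_rate(J0, sigma, v_mol, T, S):
    """Classical nucleation theory rate J = J0 * exp(-dG*/kT), with the
    barrier dG* = 16*pi*sigma^3*v_mol^2 / (3*(kT*ln S)^2).
    sigma: surface tension [J/m^2], v_mol: molecular volume [m^3],
    T: temperature [K], S: supersaturation ratio (> 1),
    J0: kinetic prefactor [m^-3 s^-1], assumed given."""
    kT = K_B * T
    dg_star = 16 * math.pi * sigma**3 * v_mol**2 / (3 * (kT * math.log(S)) ** 2)
    return J0 * math.exp(-dg_star / kT)
```

The barrier scales as 1/(ln S)², so the predicted rate is extremely steep in supersaturation; this sensitivity is what pulse-expansion experiments exploit to pin down nucleation onset.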
Homogeneity in Social Groups of Iraqis
Gresham, J.; Saleh, F.; Majid, S.
With appreciation to the Royal Institute for Inter-Faith Studies for initiating the Second World Congress for Middle Eastern Studies, this paper summarizes findings on homogeneity in community-level social groups derived from inter-ethnic research conducted during 2005 among Iraqi Arabs and Kurds
Abelian gauge theories on homogeneous spaces
International Nuclear Information System (INIS)
Vassilevich, D.V.
1992-07-01
An algebraic technique of separation of gauge modes in Abelian gauge theories on homogeneous spaces is proposed. An effective potential for the Maxwell-Chern-Simons theory on S³ is calculated. A generalization of the Chern-Simons action is suggested and analysed with the example of SU(3)/U(1) × U(1). (author). 11 refs
Benchmarking homogenization algorithms for monthly data
Czech Academy of Sciences Publication Activity Database
Venema, V. K. C.; Mestre, O.; Aquilar, E.; Auer, I.; Guijarro, J. A.; Domonkos, P.; Vertačník, G.; Szentimrey, T.; Štěpánek, Petr; Zahradníček, Pavel; Viarre, J.; Mueller-Westermeier, G.; Lakatos, M.; Williams, C. N.; Menne, M. J.; Lindau, R.; Rasol, D.; Rustemeier, E.; Kolokythas, K.; Marinova, T.; Andresen, L.; Acquaotta, F.; Fratianni, S.; Cheval, S.; Klancar, M.; Brunetti, M.; Gruber, C.; Duran, M. P.; Likso, T.; Esteban, P.; Brandsma, T.
2012-01-01
Roč. 8, č. 1 (2012), s. 89-115 ISSN 1814-9324 Institutional support: RVO:67179843 Keywords : climate data * instrumental time-series * greater alpine region * homogeneity test * variability * inhomogeneities Subject RIV: EH - Ecology, Behaviour Impact factor: 3.556, year: 2012
Extension theorems for homogenization on lattice structures
Miller, Robert E.
1992-01-01
When applying homogenization techniques to problems involving lattice structures, it is necessary to extend certain functions defined on a perforated domain to a simply connected domain. This paper provides general extension operators which preserve bounds on derivatives of order l. Only the special case of honeycomb structures is considered.
Homogeneous scintillating LKr/Xe calorimeters
International Nuclear Information System (INIS)
Chen, M.; Mullins, M.; Pelly, D.; Shotkin, S.; Sumorok, K.; Akyuz, D.; Chen, E.; Gaudreau, M.P.J.; Bolozdynya, A.; Tchernyshev, V.; Goritchev, P.; Khovansky, V.; Koutchenkov, A.; Kovalenko, A.; Lebedenko, V.; Vinogradov, V.; Gusev, L.; Sheinkman, V.; Krasnokutsky, R.N.; Shuvalov, R.S.; Fedyakin, N.N.; Sushkov, V.; Akopyan, M.; Doke, T.; Kikuchi, J.; Hitachi, A.; Kashiwagi, T.; Masuda, K.; Shibamura, E.; Ishida, N.; Sugimoto, S.
1993-01-01
Recent R and D work on full length scintillating homogeneous liquid xenon/krypton (LXe/Kr) cells has established the essential properties for precision EM calorimeters: In-situ calibration using α's, radiation hardness as well as the uniformity required for δE/E≅0.5% for e/γ's above 50 GeV. (orig.)
Traffic planning for non-homogeneous traffic
Indian Academy of Sciences (India)
Western traffic planning methodologies mostly address the concerns of homogeneous traffic and therefore often prove inadequate in solving problems involving ... Transportation Research and Injury Prevention Programme, Indian Institute of Technology, Hauz Khas, New Delhi 110 016; Civil and Architectural Engineering ...
A generalized model for homogenized reflectors
International Nuclear Information System (INIS)
Pogosbekyan, Leonid; Kim, Yeong Il; Kim, Young Jin; Joo, Hyung Kook
1996-01-01
A new concept of equivalent homogenization is proposed. The concept employs a new set of homogenized parameters: homogenized cross sections (XS) and an interface matrix (IM), which relates partial currents at the cell interfaces. The idea of the interface matrix generalizes the idea of discontinuity factors (DFs), proposed and developed by K. Koebke and K. Smith. The method of K. Smith can be simulated within the framework of the new method, while the new method approximates the heterogeneous cell better in the case of steep flux gradients at the cell interfaces. The attractive features of the new concept are improved accuracy, simplicity of incorporation into existing codes, and numerical expenses equal to those of K. Smith's approach. The new concept is useful for (a) explicit reflector/baffle simulation; (b) control blade simulation; (c) mixed UO₂/MOX core simulation. The offered model has been incorporated in the finite difference code and in the nodal code PANBOX. The numerical results show good accuracy of core calculations and insensitivity of the homogenized parameters with respect to in-core conditions
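The discontinuity factors (DFs) that this record's interface matrix generalizes can be illustrated with a minimal sketch. The formula is the standard one from generalized equivalence theory (ratio of heterogeneous to homogenized surface flux); the numerical values below are purely hypothetical and not taken from the paper:

```python
def discontinuity_factor(phi_het_surface, phi_hom_surface):
    """Generalized-equivalence-theory discontinuity factor (after K. Smith):
    the ratio of the heterogeneous reference surface flux to the surface
    flux reconstructed from the homogenized nodal solution."""
    return phi_het_surface / phi_hom_surface

# Illustrative values only: a reflector-adjacent node where the
# homogenized diffusion solution underestimates the surface flux.
f_left = discontinuity_factor(phi_het_surface=1.30, phi_hom_surface=1.08)

# The nodal interface condition then enforces
#   f_left * phi_hom_left = f_right * phi_hom_right
# instead of plain flux continuity across the interface.
print(round(f_left, 4))  # 1.2037
```

The interface-matrix idea of the abstract replaces this scalar correction with a matrix relation between partial currents, but the scalar DF above is the special case it must reproduce.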
Inverse acoustic problem of N homogeneous scatterers
DEFF Research Database (Denmark)
Berntsen, Svend
2002-01-01
The three-dimensional inverse acoustic medium problem of N homogeneous objects with known geometry and location is considered. It is proven that one scattering experiment is sufficient for the unique determination of the complex wavenumbers of the objects. The mapping from the scattered fields...
Mach's principle in spatially homogeneous spacetimes
International Nuclear Information System (INIS)
Tipler, F.J.
1978-01-01
On the basis of Mach's Principle it is concluded that the only singularity-free solution to the empty space Einstein equations is flat space. It is shown that the only singularity-free solution to the empty space Einstein equations which is spatially homogeneous and globally hyperbolic is in fact suitably identified Minkowski space. (Auth.)
Water Filtration through Homogeneous Granulated Charge
Directory of Open Access Journals (Sweden)
A. M. Krautsou
2005-01-01
Full Text Available A general relationship for the calculation of water filtration through a homogeneous granulated charge has been obtained. The relationship has been compared with experimental data; discrepancies between calculated and experimental values do not exceed 6% throughout the entire investigated range.
A new concept of equivalent homogenization method
Energy Technology Data Exchange (ETDEWEB)
Kim, Young Jin; Pogoskekyan, Leonid; Kim, Young Il; Ju, Hyung Kook; Chang, Moon Hee [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)
1996-07-01
A new concept of equivalent homogenization is proposed. The concept employs a new set of homogenized parameters: homogenized cross sections (XS) and an interface matrix (IM), which relates partial currents at the cell interfaces. The idea of the interface matrix generalizes the idea of discontinuity factors (DFs), proposed and developed by K. Koebke and K. Smith. The offered concept covers both those of K. Koebke and K. Smith; both can be simulated within the framework of the new concept. It also covers the Siemens KWU approach for baffle/reflector simulation, where the equivalent homogenized reflector XS are derived from the conservation of the response matrix at the interface in 1D semi-infinite slab geometry. The IM and XS of the new concept satisfy the same assumption about response matrix conservation in 1D semi-infinite slab geometry. It is expected that the new concept provides a more accurate approximation of the heterogeneous cell, especially in the case of steep flux gradients at the cell interfaces. The attractive features of the new concept are improved accuracy, simplicity of incorporation into existing codes, and numerical expenses equal to those of K. Smith's approach. The new concept is useful for (a) explicit reflector/baffle simulation; (b) control blade simulation; (c) mixed UO₂/MOX core simulation. The offered model has been incorporated in the finite difference code and in the nodal code PANBOX. The numerical results show good accuracy of core calculations and insensitivity of the homogenized parameters with respect to in-core conditions. 9 figs., 7 refs. (Author).
Directory of Open Access Journals (Sweden)
Phungamngoen Chanthima
2016-01-01
Full Text Available Coconut milk is one of the most important protein-rich food sources available today. Separation of the emulsion into an aqueous phase and a cream phase commonly occurs, and this leads to an unacceptable physical defect in both fresh and processed coconut milk. Since homogenization steps are known to affect the stability of coconut milk, this work aimed to study their effect on coconut milk quality. The samples were subjected to high-speed homogenization in the range of 5000-15000 rpm and sterilization temperatures of 120-140 °C for 15 min. The results showed that emulsion stability increased with increasing homogenization speed: smaller fat particles were generated and dispersed easily in the continuous phase, leading to high stability. On the other hand, when a high sterilization temperature was applied, the stability of the coconut milk decreased, fat globule size increased, the L value decreased, and the b value increased. Homogenization after heating led to higher stability than homogenization before heating, owing to the reduced particle size of coconut milk after aggregation during the sterilization process. The results imply that homogenization after the sterilization process may play an important role in the quality of sterilized coconut milk.
Enhancement of anaerobic sludge digestion by high-pressure homogenization.
Zhang, Sheng; Zhang, Panyue; Zhang, Guangming; Fan, Jie; Zhang, Yuxuan
2012-08-01
To improve anaerobic sludge digestion efficiency, the effects of high-pressure homogenization (HPH) conditions on anaerobic sludge digestion were investigated. VS and TCOD were significantly removed during anaerobic digestion, and both VS removal and TCOD removal increased with increasing homogenization pressure and homogenization cycle number; correspondingly, the cumulative biogas production also increased. The optimal homogenization pressure was 50 MPa for one homogenization cycle and 40 MPa for two homogenization cycles. The SCOD of the sludge supernatant significantly increased with increasing homogenization pressure and cycle number owing to sludge disintegration. The relationship between biogas production and sludge disintegration showed that the cumulative biogas and methane production were mainly enhanced by sludge disintegration, which accelerated the anaerobic digestion process and improved the methane content of the biogas. Copyright © 2012 Elsevier Ltd. All rights reserved.
Homogenized group cross sections by Monte Carlo
International Nuclear Information System (INIS)
Van Der Marck, S. C.; Kuijper, J. C.; Oppe, J.
2006-01-01
Homogenized group cross sections play a large role in making reactor calculations efficient. Because of this significance, many codes exist that can calculate these cross sections based on certain assumptions. However, for application to the High Flux Reactor (HFR) in Petten, the Netherlands, the limitations of such codes imply that the core calculations would become less accurate when using homogenized group cross sections (HGCS). Therefore we developed a method to calculate HGCS based on a Monte Carlo program, for which we chose MCNP. The implementation involves an addition to MCNP and a set of small executables that perform suitable averaging after the MCNP run(s) have completed. Here we briefly describe the details of the method, and we report on two tests we performed to show the accuracy of the method and its implementation. By now, this method is routinely used in preparing the cycle-to-cycle core calculations for the HFR. (authors)
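The abstract does not spell out the averaging step, but the standard flux-volume weighting that typically underlies homogenized group cross sections can be sketched as follows. This is an illustrative, generic sketch with made-up numbers; it does not reproduce the paper's MCNP-based implementation:

```python
def homogenize_xs(volumes, fluxes, cross_sections):
    """Collapse region-wise cross sections into one homogenized value
    using flux-volume weighting:
        sigma_hom = sum_i(phi_i * V_i * sigma_i) / sum_i(phi_i * V_i)
    This choice preserves the total reaction rate of the heterogeneous cell.
    """
    num = sum(p * v * s for p, v, s in zip(fluxes, volumes, cross_sections))
    den = sum(p * v for p, v in zip(fluxes, volumes))
    return num / den

# Hypothetical two-region cell: fuel (high XS, depressed flux) and moderator.
sigma = homogenize_xs(volumes=[1.0, 3.0],
                      fluxes=[0.8, 1.2],
                      cross_sections=[0.50, 0.02])
print(round(sigma, 4))  # 0.1073
```

In a Monte Carlo workflow such as the one described, the region fluxes and reaction rates would come from tallies, and the averaging would be done per energy group after the runs complete.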
Design of SC solenoid with high homogeneity
International Nuclear Information System (INIS)
Yang Xiaoliang; Liu Zhong; Luo Min; Luo Guangyao; Kang Qiang; Tan Jie; Wu Wei
2014-01-01
A novel kind of SC (superconducting) solenoid coil is designed to satisfy the homogeneity requirement of the magnetic field. In this paper, we first calculate the current density distribution of the solenoid coil section through the linear programming method. Then a traditional solenoid and a non-rectangular-section solenoid are designed to produce a central field of up to 7 T with the highest possible homogeneity. After comparing the two solenoid coil designs in terms of magnetic field quality, fabrication cost and other aspects, we show that the new non-rectangular-section design can be realized by improving the techniques of framework fabrication and winding. Finally, the outlook and error analysis of this kind of SC magnet coil are also discussed briefly. (authors)
Testing homogeneity in Weibull-regression models.
Bolfarine, Heleno; Valença, Dione M
2005-10-01
In survival studies with families or geographical units it may be of interest to test whether such groups are homogeneous for given explanatory variables. In this paper we consider score-type tests for group homogeneity based on a mixing model in which the group effect is modelled as a random variable. As opposed to hazard-based frailty models, this model yields survival times that, conditioned on the random effect, have an accelerated failure time representation. The test statistic requires only estimation of the conventional regression model without the random effect and does not require specifying the distribution of the random effect. The tests are derived for a Weibull regression model, and in the uncensored situation a closed form is obtained for the test statistic. A simulation study is used to compare the power of the tests. The proposed tests are applied to real data sets with censored data.
Core homogenization method for pebble bed reactors
International Nuclear Information System (INIS)
Kulik, V.; Sanchez, R.
2005-01-01
This work presents a core homogenization scheme for treating a stochastic pebble bed loading in pebble bed reactors. The reactor core is decomposed into macro-domains that contain several pebble types characterized by different degrees of burnup. A stochastic description is introduced to account for pebble-to-pebble and pebble-to-helium interactions within a macro-domain as well as for interactions between macro-domains. Performance of the proposed method is tested for the PROTEUS and ASTRA critical reactor facilities. Numerical simulations accomplished with the APOLLO2 transport lattice code show good agreement with the experimental data for the PROTEUS reactor facility and with the TRIPOLI4 Monte Carlo simulations for the ASTRA reactor configuration. The difference between the proposed method and the traditional volume-averaged homogenization technique is negligible when only one type of fuel pebble is present in the system, but it grows rapidly with the level of pebble heterogeneity. (authors)
Smooth homogeneous structures in operator theory
Beltita, Daniel
2005-01-01
Geometric ideas and techniques play an important role in operator theory and the theory of operator algebras. Smooth Homogeneous Structures in Operator Theory builds the background needed to understand this circle of ideas and reports on recent developments in this fruitful field of research. Requiring only a moderate familiarity with functional analysis and general topology, the author begins with an introduction to infinite dimensional Lie theory with emphasis on the relationship between Lie groups and Lie algebras. A detailed examination of smooth homogeneous spaces follows. This study is illustrated by familiar examples from operator theory and develops methods that allow endowing such spaces with structures of complex manifolds. The final section of the book explores equivariant monotone operators and Kähler structures. It examines certain symmetry properties of abstract reproducing kernels and arrives at a very general version of the construction of restricted Grassmann manifolds from the theory of loo...
Genetic homogeneity of Fascioloides magna in Austria.
Husch, Christian; Sattmann, Helmut; Hörweg, Christoph; Ursprung, Josef; Walochnik, Julia
2017-08-30
The large American liver fluke, Fascioloides magna, is an economically relevant parasite of both domestic and wild ungulates. F. magna was repeatedly introduced into Europe, for the first time already in the 19th century. In Austria, a stable population of F. magna has established in the Danube floodplain forests southeast of Vienna. The aim of this study was to determine the genetic diversity of F. magna in Austria. A total of 26 individuals from various regions within the known area of distribution were investigated for their cytochrome oxidase subunit 1 (cox1) and nicotinamide dehydrogenase subunit 1 (nad1) gene haplotypes. Interestingly, all 26 individuals revealed one and the same haplotype, namely concatenated haplotype Ha5. This indicates a homogenous population of F. magna in Austria and may argue for a single introduction. Alternatively, genetic homogeneity might also be explained by a bottleneck effect and/or genetic drift. Copyright © 2017 Elsevier B.V. All rights reserved.
Shape optimization in biomimetics by homogenization modelling
International Nuclear Information System (INIS)
Hoppe, Ronald H.W.; Petrova, Svetozara I.
2003-08-01
Optimal shape design of microstructured materials has recently attracted a great deal of attention in material science. The shape and the topology of the microstructure have a significant impact on the macroscopic properties. The present work is devoted to the shape optimization of new biomorphic microcellular ceramics produced from natural wood by biotemplating. We are interested in finding the best material-and-shape combination in order to achieve the optimal prespecified performance of the composite material. The computation of the effective material properties is carried out using the homogenization method. Adaptive mesh-refinement technique based on the computation of recovered stresses is applied in the microstructure to find the homogenized elasticity coefficients. Numerical results show the reliability of the implemented a posteriori error estimator. (author)
Homogenization of variational inequalities for obstacle problems
International Nuclear Information System (INIS)
Sandrakov, G V
2005-01-01
Results on the convergence of solutions of variational inequalities for obstacle problems are proved. The variational inequalities are defined by a non-linear monotone operator of the second order with periodic rapidly oscillating coefficients and a sequence of functions characterizing the obstacles. Two-scale and macroscale (homogenized) limiting variational inequalities are obtained. Derivation methods for such inequalities are presented. Connections between the limiting variational inequalities and two-scale and macroscale minimization problems are established in the case of potential operators.
Quantum groups and quantum homogeneous spaces
International Nuclear Information System (INIS)
Kulish, P.P.
1994-01-01
The usefulness of the R-matrix formalism and the reflection equations is demonstrated on examples of the quantum group covariant algebras (quantum homogeneous spaces): quantum Minkowski space-time, quantum sphere and super-sphere. The irreducible representations of some covariant algebras are constructed. The generalization of the reflection equation to super case is given and the existence of the quasiclassical limits is pointed out. (orig.)
Process to produce homogenized reactor fuels
International Nuclear Information System (INIS)
Hart, P.E.; Daniel, J.L.; Brite, D.W.
1980-01-01
The fuels consist of a mixture of PuO₂ and UO₂. In order to increase the homogeneity of mechanically mixed fuels, the pellets are sintered in a hydrogen atmosphere with a sufficiently low oxygen potential. This results in a reduction of Pu⁴⁺ to Pu³⁺. The reduction process produces water vapor, increasing the pressure within the PuO₂ particles and causing PuO₂ to be pressed into the uranium oxide structure. (DG) [de
Homogeneous scintillating LKr/Xe calorimeters
Energy Technology Data Exchange (ETDEWEB)
Chen, M.; Mullins, M.; Pelly, D.; Shotkin, S.; Sumorok, K. (Lab. for Nuclear Science, MIT, Cambridge, MA (United States)); Akyuz, D.; Chen, E.; Gaudreau, M.P.J. (Plasma Fusion Center, MIT, Cambridge, MA (United States)); Bolozdynya, A.; Tchernyshev, V.; Goritchev, P.; Khovansky, V.; Koutchenkov, A.; Kovalenko, A.; Lebedenko, V.; Vinogradov, V.; Gusev, L.; Sheinkman, V. (ITEP, Moscow (Russia)); Krasnokutsky, R.N.; Shuvalov, R.S.; Fedyakin, N.N.; Sushkov, V. (IHEP, Serpukhov (Russia)); Akopyan, M. (Inst. for Nuclear Research, Moscow (Russia)); Doke, T.; Kikuchi, J.; Hitachi, A.; Kashiwagi, T. (Science and Eng. Res. Lab., Waseda Univ., Tokyo (Japan)); Masuda, K.; Shibamura, E. (Saitama Coll. of Health (Japan)); Ishida, N. (Seikei Univ. (Japan)); Sugimoto, S. (INS, Univ. Tokyo (Japan))
1993-03-20
Recent R and D work on full length scintillating homogeneous liquid xenon/krypton (LXe/Kr) cells has established the essential properties for precision EM calorimeters: in-situ calibration using α's, radiation hardness, as well as the uniformity required for δE/E ≈ 0.5% for e/γ's above 50 GeV. (orig.)
Fluoroscopic screen which is optically homogeneous
International Nuclear Information System (INIS)
1975-01-01
A high efficiency fluoroscopic screen for X-ray examination consists of an optically homogeneous crystal plate of fluorescent material such as activated cesium iodide, supported on a transparent protective plate, with the edges of the assembly beveled and optically coupled to a light absorbing compound. The product is dressed to the desired thickness and provided with an X-ray-transparent light-opaque cover. (Auth.)
Correlated equilibria in homogenous good Bertrand competition
DEFF Research Database (Denmark)
Jann, Ole; Schottmüller, Christoph
2015-01-01
We show that there is a unique correlated equilibrium, identical to the unique Nash equilibrium, in the classic Bertrand oligopoly model with homogenous goods and identical marginal costs. This provides a theoretical underpinning for the so-called "Bertrand paradox" as well as its most general formulation to date. Our proof generalizes to asymmetric marginal costs and arbitrarily many players in the following way: the market price cannot be higher than the second lowest marginal cost in any correlated equilibrium.
Homogeneous Biosensing Based on Magnetic Particle Labels
Schrittwieser, Stefan
2016-06-06
The growing availability of biomarker panels for molecular diagnostics is leading to an increasing need for fast and sensitive biosensing technologies that are applicable to point-of-care testing. In that regard, homogeneous measurement principles are especially relevant as they usually do not require extensive sample preparation procedures, thus reducing the total analysis time and maximizing ease-of-use. In this review, we focus on homogeneous biosensors for the in vitro detection of biomarkers. Within this broad range of biosensors, we concentrate on methods that apply magnetic particle labels. The advantage of such methods lies in the added possibility to manipulate the particle labels by applied magnetic fields, which can be exploited, for example, to decrease incubation times or to enhance the signal-to-noise-ratio of the measurement signal by applying frequency-selective detection. In our review, we discriminate the corresponding methods based on the nature of the acquired measurement signal, which can either be based on magnetic or optical detection. The underlying measurement principles of the different techniques are discussed, and biosensing examples for all techniques are reported, thereby demonstrating the broad applicability of homogeneous in vitro biosensing based on magnetic particle label actuation.
Some properties of spatially homogeneous spacetimes
International Nuclear Information System (INIS)
Coomer, G.C.
1979-01-01
This paper discusses two features of the universe which are influenced in a fundamental way by the spacetime geometry of the universe. The first is the growth of density fluctuations in the early stages of the evolution of the universe. The second is the propagation of electromagnetic radiation in the universe. A spatially homogeneous universe is assumed in both discussions. The gravitational instability theory of galaxy formation is investigated for a viscous fluid and for a charged, conducting fluid with a magnetic field added as a perturbation. It is found that the growth rate of density perturbations in both cases is lower than in the perfect fluid case. Spatially homogeneous but nonisotropic spacetimes are investigated next. Two perfect fluid solutions of Einstein's field equations are found which have spacelike hypersurfaces with Bianchi type II geometry. An expression for the spectrum of the cosmic microwave background radiation in a spatially homogeneous but nonisotropic universe is found. The expression is then used to determine the angular distribution of the intensity of the radiation in the simpler of the two solutions. When accepted values of the matter density and decoupling temperature are inserted into this solution, values for the age of the universe and the time of decoupling are obtained which agree reasonably well with the values of the standard model of the universe
Commensurability effects in holographic homogeneous lattices
International Nuclear Information System (INIS)
Andrade, Tomas; Krikun, Alexander
2016-01-01
An interesting application of the gauge/gravity duality to condensed matter physics is the description of a lattice via breaking translational invariance on the gravity side. By making use of global symmetries, it is possible to do so without sacrificing homogeneity of the pertinent bulk solutions, which we thus term "homogeneous holographic lattices." Due to their technical simplicity, these configurations have received a great deal of attention in the last few years and have been shown to correctly describe momentum relaxation and hence (finite) DC conductivities. However, it is not clear whether they are able to capture other lattice effects which are of interest in condensed matter. In this paper we investigate this question, focusing our attention on the phenomenon of commensurability, which arises when the lattice scale is tuned to be equal to (an integer multiple of) another momentum scale in the system. We do so by studying the formation of spatially modulated phases in various models of homogeneous holographic lattices. Our results indicate that the onset of the instability is controlled by the near-horizon geometry, which for insulating solutions does carry information about the lattice. However, we observe no sharp connection between the characteristic momentum of the broken phase and the lattice pitch, which calls into question the applicability of these models to the physics of commensurability.
Homogeneous Biosensing Based on Magnetic Particle Labels
Schrittwieser, Stefan; Pelaz, Beatriz; Parak, Wolfgang J.; Lentijo-Mozo, Sergio; Soulantica, Katerina; Dieckhoff, Jan; Ludwig, Frank; Guenther, Annegret; Tschöpe, Andreas; Schotter, Joerg
2016-01-01
The growing availability of biomarker panels for molecular diagnostics is leading to an increasing need for fast and sensitive biosensing technologies that are applicable to point-of-care testing. In that regard, homogeneous measurement principles are especially relevant as they usually do not require extensive sample preparation procedures, thus reducing the total analysis time and maximizing ease-of-use. In this review, we focus on homogeneous biosensors for the in vitro detection of biomarkers. Within this broad range of biosensors, we concentrate on methods that apply magnetic particle labels. The advantage of such methods lies in the added possibility to manipulate the particle labels by applied magnetic fields, which can be exploited, for example, to decrease incubation times or to enhance the signal-to-noise-ratio of the measurement signal by applying frequency-selective detection. In our review, we discriminate the corresponding methods based on the nature of the acquired measurement signal, which can either be based on magnetic or optical detection. The underlying measurement principles of the different techniques are discussed, and biosensing examples for all techniques are reported, thereby demonstrating the broad applicability of homogeneous in vitro biosensing based on magnetic particle label actuation. PMID:27275824
Testing Homogeneity with the Galaxy Fossil Record
Hoyle, Ben; Jimenez, Raul; Heavens, Alan; Clarkson, Chris; Maartens, Roy
2013-01-01
Observationally confirming spatial homogeneity on sufficiently large cosmological scales is of importance to test one of the underpinning assumptions of cosmology, and is also imperative for correctly interpreting dark energy. A challenging aspect of this is that homogeneity must be probed inside our past lightcone, while observations take place on the lightcone. The history of star formation rates (SFH) in the galaxy fossil record provides a novel way to do this. We calculate the SFH of stacked Luminous Red Galaxy (LRG) spectra obtained from the Sloan Digital Sky Survey. We divide the LRG sample into 12 equal area contiguous sky patches and 10 redshift slices (0.2
Investigation of methods for hydroclimatic data homogenization
Steirou, E.; Koutsoyiannis, D.
2012-04-01
We investigate the methods used for the adjustment of inhomogeneities in temperature time series covering the last 100 years. Based on a systematic study of the scientific literature, we classify and evaluate the observed inhomogeneities in historical and modern time series, as well as their adjustment methods. It turns out that these methods are mainly statistical, not well justified by experiments, and rarely supported by metadata. In many of the cases studied the proposed corrections are not even statistically significant. From the global database GHCN-Monthly Version 2, we examine all stations containing both raw and adjusted data that satisfy certain criteria of continuity and distribution over the globe. In the United States of America, because of the large number of available stations, stations were chosen after a suitable sampling. In total we analyzed 181 stations globally. For these stations we calculated the differences between the adjusted and non-adjusted linear 100-year trends. It was found that in two thirds of the cases, the homogenization procedure increased the positive or decreased the negative temperature trends. One of the most common homogenization methods, 'SNHT for single shifts', was applied to synthetic time series with selected statistical characteristics, occasionally with offsets. The method was satisfactory when applied to independent, normally distributed data, but not to data with long-term persistence. The above results cast some doubt on the use of homogenization procedures and tend to indicate that the global temperature increase during the last century is between 0.4°C and 0.7°C, where these two values are the estimates derived from raw and adjusted data, respectively.
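The 'SNHT for single shifts' procedure mentioned in this record can be sketched compactly. The following is a minimal, illustrative implementation of Alexandersson's single-shift SNHT statistic applied to a synthetic series with an offset; the function name and test data are my own, not from the study:

```python
import math
import random

def snht_statistic(series):
    """Standard Normal Homogeneity Test (single shift).

    Returns (T_max, k_best): the maximum of
        T(k) = k * z1(k)^2 + (n - k) * z2(k)^2
    over candidate break positions k, where z1/z2 are the means of the
    standardized series before and after position k.
    """
    n = len(series)
    mean = sum(series) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in series) / n)
    z = [(x - mean) / sd for x in series]
    t_max, k_best = 0.0, 0
    for k in range(1, n):
        z1 = sum(z[:k]) / k
        z2 = sum(z[k:]) / (n - k)
        t = k * z1 ** 2 + (n - k) * z2 ** 2
        if t > t_max:
            t_max, k_best = t, k
    return t_max, k_best

# Synthetic 100-point series with an artificial +1.5 offset at index 50.
random.seed(0)
series = [random.gauss(10.0, 0.5) for _ in range(50)] + \
         [random.gauss(11.5, 0.5) for _ in range(50)]
t_max, k = snht_statistic(series)
print(t_max, k)  # large T, break located near index 50
```

In practice the statistic is computed on a ratio or difference series against reference stations, and T_max is compared against tabulated critical values before a shift is accepted; the record's point is that such tests behave differently on data with long-term persistence.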
Exponential Stability of Switched Positive Homogeneous Systems
Directory of Open Access Journals (Sweden)
Dadong Tian
2017-01-01
Full Text Available This paper studies the exponential stability of switched positive nonlinear systems defined by cooperative and homogeneous vector fields. In order to capture the decay rate of such systems, we first consider the subsystems. A sufficient condition for exponential stability of subsystems with time-varying delays is derived. In particular, for the corresponding delay-free systems, we prove that this sufficient condition is also necessary. Then, we present a sufficient condition of exponential stability under minimum dwell time switching for the switched positive nonlinear systems. Some results in the previous literature are extended. Finally, a numerical example is given to demonstrate the effectiveness of the obtained results.
Diffusion piecewise homogenization via flux discontinuity factors
International Nuclear Information System (INIS)
Sanchez, Richard; Zmijarevic, Igor
2011-01-01
We analyze the calculation of flux discontinuity factors (FDFs) for use with piecewise subdomain assembly homogenization. These coefficients depend on the numerical mesh used to compute the diffusion problem. When the mesh has a single degree of freedom on subdomain interfaces, the solution is unique and can be computed independently per subdomain. For all other cases we have implemented an iterative calculation of the FDFs. Our numerical results show that there is no solution to this nonlinear problem, but that the iterative algorithm converges towards FDF values that reproduce subdomain reaction rates with relatively high precision. In our tests we have included both the GET and black-box FDFs. (author)
Tensor harmonic analysis on homogenous space
International Nuclear Information System (INIS)
Wrobel, G.
1997-01-01
The Hilbert space of tensor functions on a homogeneous space with a compact stability group is considered. The functions are decomposed into a sum of tensor plane waves (defined in the text), components of which are transformed by irreducible representations of the appropriate transformation group. The orthogonality relation and the completeness relation for tensor plane waves are found. The decomposition constitutes a unitary transformation, which allows one to obtain the Parseval equality. The Fourier components can be calculated by means of the Fourier transformation, the form of which is given explicitly. (author)
Multifractal spectra in homogeneous shear flow
Deane, A. E.; Keefe, L. R.
1988-01-01
Employing numerical simulations of 3-D homogeneous shear flow, the associated multifractal spectra of the energy dissipation, scalar dissipation and vorticity fields were calculated. The results for (128) cubed simulations of this flow, and those obtained in recent experiments that analyzed 1- and 2-D intersections of atmospheric and laboratory flows, are in some agreement. A two-scale Cantor set model of the energy cascade process which describes the experimental results from 1-D intersections quite well, describes the 3-D results only marginally.
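The two-scale Cantor set model of the cascade mentioned above can be reproduced in a few lines: a measure is refined by repeatedly splitting each cell into two children carrying fractions p and 1-p of its content, and the scaling exponents tau(q) follow from the slope of the partition sums across scales. A sketch of that standard construction (the parameter values are illustrative, not taken from the paper):

```python
import numpy as np

def binomial_cascade(p, levels):
    """Deterministic two-scale cascade: each cell splits into two children
    carrying fractions p and 1-p of its measure."""
    mu = np.array([1.0])
    for _ in range(levels):
        mu = np.repeat(mu, 2) * np.tile([p, 1.0 - p], mu.size)
    return mu

def tau(q, p, levels=12):
    """Scaling exponent tau(q) from partition sums sum(mu^q) ~ eps^tau(q)."""
    mu = binomial_cascade(p, levels)
    log_eps, log_z = [], []
    m = mu.copy()
    for j in range(levels, 0, -1):
        log_eps.append(-j)                   # log2 of the cell size eps = 2^-j
        log_z.append(np.log2(np.sum(m ** q)))
        m = m.reshape(-1, 2).sum(axis=1)     # coarse-grain by merging sibling pairs
    return np.polyfit(log_eps, log_z, 1)[0]
```

For this deterministic cascade the slope matches the analytic value tau(q) = -log2(p^q + (1-p)^q), which is how such a model is fitted to measured dissipation spectra.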
Stimulus homogeneity enhances implicit learning: evidence from contextual cueing.
Feldmann-Wüstefeld, Tobias; Schubö, Anna
2014-04-01
Visual search for a target object is faster if the target is embedded in a repeatedly presented invariant configuration of distractors ('contextual cueing'). It has also been shown that the homogeneity of a context affects the efficiency of visual search: targets receive prioritized processing when presented in a homogeneous context compared to a heterogeneous context, presumably due to grouping processes at early stages of visual processing. The present study investigated in three experiments whether context homogeneity also affects contextual cueing. In Experiment 1, context homogeneity varied on three levels of the task-relevant dimension (orientation) and contextual cueing was most pronounced for context configurations with high orientation homogeneity. When context homogeneity varied on three levels of the task-irrelevant dimension (color) and orientation homogeneity was fixed, no modulation of contextual cueing was observed: high orientation homogeneity led to large contextual cueing effects (Experiment 2) and low orientation homogeneity led to low contextual cueing effects (Experiment 3), irrespective of color homogeneity. Enhanced contextual cueing for homogeneous context configurations suggests that grouping processes affect not only visual search but also implicit learning. We conclude that memory representations of context configurations are more easily acquired when context configurations can be processed as larger, grouped perceptual units. However, this form of implicit perceptual learning is only improved by stimulus homogeneity when stimulus homogeneity facilitates grouping processes on a dimension that is currently relevant in the task. Copyright © 2014 Elsevier B.V. All rights reserved.
Edge-based compression of cartoon-like images with homogeneous diffusion
DEFF Research Database (Denmark)
Mainberger, Markus; Bruhn, Andrés; Weickert, Joachim
2011-01-01
Edges provide semantically important image features. In this paper a lossy compression method for cartoon-like images is presented, which is based on edge information. Edges together with some adjacent grey/colour values are extracted and encoded using a classical edge detector, binary compressio...
Usha, Sruthi P; Gupta, Banshi D
2018-03-15
A lossy mode resonance (LMR) based sensor for urinary p-cresol testing on an optical fiber substrate is developed. The sensor probe fabrication includes dip coating of a nanocomposite layer of zinc oxide and molybdenum sulphide (ZnO/MoS2) over the unclad core of an optical fiber as the transducer layer, followed by a layer of molecularly imprinted polymer (MIP) as the recognition medium. The addition of molybdenum sulphide in the transducer layer increases the absorption of light in the medium, which enhances the LMR properties of zinc oxide, thereby increasing the conductivity and hence the sensitivity of the sensor. The sensor probe is characterized for p-cresol concentrations ranging from 0 µM (reference sample) to 1000 µM in artificially prepared urine. Optimizations of various probe fabrication parameters are carried out to bring out the sensor's optimal performance, with a sensitivity of 11.86 nm/µM and a limit of detection (LOD) of 28 nM. A two-order-of-magnitude improvement in LOD is obtained as compared to the recently reported p-cresol sensor. The proposed sensor possesses a response time of 15 s, which is 8 times better than that reported in the literature utilizing the electrochemical method. Its response time is also better than that of the p-cresol sensor currently available in the market for the medical field. Thus, with a fast response and significant stability and repeatability, the proposed sensor holds practical implementation possibilities in the medical field. Further, the realization of the sensor probe on an optical fiber substrate adds remote sensing and online monitoring feasibilities. Copyright © 2017 Elsevier B.V. All rights reserved.
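For context, the sensitivity quoted above is the slope of the calibration curve (resonance-wavelength shift per unit concentration), and a detection limit of the common 3-sigma kind follows from the blank noise divided by that slope. A sketch using the reported 11.86 nm/µM sensitivity; the calibration points and the blank noise level are hypothetical values chosen only for illustration, and the 3-sigma definition is an assumption, not necessarily the paper's:

```python
import numpy as np

def limit_of_detection(sigma_blank_nm, sensitivity_nm_per_uM):
    """3-sigma limit of detection, in the concentration unit of the slope (µM)."""
    return 3.0 * sigma_blank_nm / sensitivity_nm_per_uM

# hypothetical calibration data following the reported sensitivity
conc = np.array([0.0, 10.0, 100.0, 1000.0])     # concentration, µM
shift = 11.86 * conc + 0.5                      # resonance wavelength shift, nm
slope, intercept = np.polyfit(conc, shift, 1)   # sensitivity, nm/µM

lod_uM = limit_of_detection(0.11, slope)        # assumed blank noise: 0.11 nm
```

With a blank noise of 0.11 nm this yields roughly 28 nM, consistent with the LOD quoted in the abstract.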
Topology of actions and homogeneous spaces
International Nuclear Information System (INIS)
Kozlov, Konstantin L
2013-01-01
Topologization of a group of homeomorphisms and its action provide additional possibilities for studying the topological space, the group of homeomorphisms, and their interconnections. The subject of the paper is the use of the property of d-openness of an action (introduced by Ancel under the name of weak micro-transitivity) in the study of spaces with various forms of homogeneity. It is proved that a d-open action of a Čech-complete group is open. A characterization of Polish SLH spaces using d-openness is given, and it is established that any separable metrizable SLH space has an SLH completion that is a Polish space. Furthermore, the completion is realized in coordination with the completion of the acting group with respect to the two-sided uniformity. A sufficient condition is given for extension of a d-open action to the completion of the space with respect to the maximal equiuniformity with preservation of d-openness. A result of van Mill is generalized, namely, it is proved that any homogeneous CDH metrizable compactum is the only G-compactification of the space of rational numbers for the action of some Polish group. Bibliography: 39 titles.
TWO FERROMAGNETIC SPHERES IN HOMOGENEOUS MAGNETIC FIELD
Directory of Open Access Journals (Sweden)
Yury A. Krasnitsky
2018-01-01
Full Text Available The problem of two spherical conductors has been studied in considerable detail using bispherical coordinates and has numerous applications in electrostatics. Here the boundary-value problem of two ferromagnetic spheres embedded in a homogeneous infinite medium is considered, where the field in the absence of the spheres is a homogeneous magnetic field. The solution of Laplace's equation in the bispherical coordinate system allows the potential and field distribution to be found in all space, including the region between the spheres. The boundary conditions of continuity of the potential and of the normal component of the induction flux at the sphere surfaces are used. The spheres are assumed to be identical, and the magnetic permeability of their material satisfies μ >> μ0. The problem of an electromagnetic plane wave falling on the system of two spheres, which is of electrically small size, can be considered as quasistationary. The scalar potentials obtained from the solution of Laplace's equation are represented by series containing Legendre polynomials. The concept of the effective permeability of the two-sphere system is introduced: it is equal to the gain in the magnitude of the magnetic induction flux through a certain cross-section of the system arising due to its magnetic properties. The necessary relations for the effective permeability referred to the central cross-section of the system are obtained. In particular, the results can be used in the analysis of the air gap of a ferroxcube core, which influences the properties of a magnetic antenna.
Primary healthcare solo practices: homogeneous or heterogeneous?
Pineault, Raynald; Borgès Da Silva, Roxane; Provost, Sylvie; Beaulieu, Marie-Dominique; Boivin, Antoine; Couture, Audrey; Prud'homme, Alexandre
2014-01-01
Introduction. Solo practices have generally been viewed as forming a homogeneous group. However, they may differ on many characteristics. The objective of this paper is to identify different forms of solo practice and to determine the extent to which they are associated with patient experience of care. Methods. Two surveys were carried out in two regions of Quebec in 2010: a telephone survey of 9180 respondents from the general population and a postal survey of 606 primary healthcare (PHC) practices. Data from the two surveys were linked through the respondent's usual source of care. A taxonomy of solo practices was constructed (n = 213), using cluster analysis techniques. Bivariate and multilevel analyses were used to determine the relationship of the taxonomy with patient experience of care. Results. Four models were derived from the taxonomy. Practices in the "resourceful networked" model contrast with those of the "resourceless isolated" model to the extent that the experience of care reported by their patients is more favorable. Conclusion. Solo practice is not a homogeneous group. The four models identified have different organizational features and their patients' experience of care also differs. Some models seem to offer a better organizational potential in the context of current reforms.
Cosmic Ray Hit Detection with Homogenous Structures
Smirnov, O. M.
Cosmic ray (CR) hits can affect a significant number of pixels both on long-exposure ground-based CCD observations and on Space Telescope frames. Thus, methods of identifying the damaged pixels are an important part of the data preprocessing for practically any application. The paper presents an implementation of a CR hit detection algorithm based on a homogeneous structure (also called a cellular automaton), a concept originating in artificial intelligence and discrete mathematics. Each pixel of the image is represented by a small automaton, which interacts with its neighbors and assumes a distinct state if it "decides" that a CR hit is present. On test data, the algorithm has shown a high detection rate (~0.7) and a low false-alarm rate per frame. A homogeneous structure is extremely trainable, which can be very important for processing large batches of data obtained under similar conditions. Training and optimizing issues are discussed, as well as possible other applications of this concept to image processing.
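The cellular-automaton idea can be sketched as follows: each pixel compares itself with a robust statistic of its eight neighbours and flags itself when it stands far above them, and a few interaction steps then let flagged cells recruit bright neighbours so that extended hits are captured. This is a generic illustration of the concept, not the paper's algorithm; the thresholds and names are assumptions:

```python
import numpy as np

def detect_cr_hits(img, k=5.0, iterations=2):
    """Flag pixels standing far above the median of their 8-neighbourhood."""
    img = np.asarray(img, dtype=float)
    h, w = img.shape
    padded = np.pad(img, 1, mode='reflect')
    # stack the 8 shifted views giving each pixel's neighbours
    neigh = np.stack([padded[dy:dy + h, dx:dx + w]
                      for dy in range(3) for dx in range(3)
                      if not (dy == 1 and dx == 1)])
    med = np.median(neigh, axis=0)
    mad = np.median(np.abs(neigh - med), axis=0) + 1e-6   # avoid zero spread
    hit = img > med + k * 1.4826 * mad     # robust sigma estimated from the MAD
    for _ in range(iterations):            # CA step: spread into bright neighbours
        grown = np.pad(hit, 1)             # pads with False
        dil = np.zeros_like(hit)
        for dy in range(3):
            for dx in range(3):
                dil |= grown[dy:dy + h, dx:dx + w]
        hit |= dil & (img > med + 2.0 * 1.4826 * mad)
    return hit
```

On a flat frame with a single bright pixel this flags exactly that pixel; on noisy data the thresholds would have to be trained, which is the trainability the abstract emphasizes.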
Photo-electret effects in homogenous semiconductors
International Nuclear Information System (INIS)
Nabiev, G.A.
2004-01-01
This work demonstrates the possibility of, and develops the theory for, a photo-electret state in semiconductors with the Dember mechanism of photovoltage generation. A photo-electret of this type can be created, unlike the traditional kind, without an external field, as a result of illumination alone. The polar factor in this case is the difference between the electron and hole mobilities. A multilayered structure is considered with homogeneous photoactive micro-regions separated by layers that prevent equalization of the carrier concentration. The homogeneous photoactive regions are assumed to contain deep trapping levels. The addition of the elementary photovoltages of the separate micro photocells produces an anomalously large photovoltage (APV effect); note that the Dember photovoltage of a single micro photocell is ≤kT/q. From the derived expressions, in the practically important special case where quasi-equilibrium between the valence band and the trapping levels is established in a time much smaller than the free-hole lifetime, the photovoltage is found to relax. Comparison of the derived expressions with the laws of photovoltage decay in p-n junction structures shows their identity; the difference lies only in the absolute values of the photovoltage. During illumination an excess concentration of charge carriers is created in the semiconductor, and part of it remains at the deep levels. When the light is switched off, the carriers located at these levels are gradually released
Irregular Homogeneity Domains in Ternary Intermetallic Systems
Directory of Open Access Journals (Sweden)
Jean-Marc Joubert
2015-12-01
Full Text Available Ternary intermetallic A–B–C systems sometimes have unexpected behaviors. The present paper examines situations in which there is a tendency to simultaneously form the compounds ABx, ACx and BCx with the same crystal structure. This causes irregular shapes of the phase homogeneity domains and, from a structural point of view, a complete reversal of site occupancies for the B atom when crossing the homogeneity domain. This work reviews previous studies done in the systems Fe–Nb–Zr, Hf–Mo–Re, Hf–Re–W, Mo–Re–Zr, Re–W–Zr, Cr–Mn–Si, Cr–Mo–Re, and Mo–Ni–Re, and involving the topologically close-packed Laves, χ and σ phases. These systems have been studied using ternary isothermal section determination, DFT calculations, site occupancy measurement using joint X-ray, and neutron diffraction Rietveld refinement. Conclusions are drawn concerning this phenomenon. The paper also reports new experimental or calculated data on Co–Cr–Re and Fe–Nb–Zr systems.
WHAMP - waves in homogeneous, anisotropic, multicomponent plasmas
International Nuclear Information System (INIS)
Roennmark, K.
1982-06-01
In this report, a computer program which solves the dispersion relation of waves in a magnetized plasma is described. The dielectric tensor is derived using the kinetic theory of homogeneous plasmas with Maxwellian velocity distribution. Up to six different plasma components can be included in this version of the program, and each component is specified by its density, temperature, particle mass, anisotropy and drift velocity along the magnetic field. The program is thus applicable to a very wide class of plasmas, and the method should in general be useful whenever a homogeneous magnetized plasma can be approximated by a linear combination of Maxwellian components. The general theory underlying the program is outlined. It is shown that by introducing a Pade approximant for the plasma dispersion function Z, the infinite sums of modified Bessel functions which appear in the dielectric tensor may be reduced to a summable form. The Pade approximant is derived and the accuracy of the approximation is also discussed. The subroutines making up the program are described. (Author)
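The plasma dispersion function Z that the Padé approximant replaces has a closed form through the Faddeeva function w, via Z(ζ) = i√π · w(ζ). A sketch using SciPy's wofz; note this evaluates Z directly rather than reproducing the report's Padé fit, which is what makes the Bessel-function sums summable in WHAMP itself:

```python
import numpy as np
from scipy.special import wofz  # Faddeeva function w(z) = exp(-z^2) * erfc(-i z)

def plasma_dispersion(zeta):
    """Plasma dispersion function Z(zeta) = i * sqrt(pi) * w(zeta)."""
    return 1j * np.sqrt(np.pi) * wofz(zeta)

def plasma_dispersion_deriv(zeta):
    """Z'(zeta) = -2 * (1 + zeta * Z(zeta))."""
    return -2.0 * (1.0 + zeta * plasma_dispersion(zeta))

Z0 = plasma_dispersion(0.0)   # = i * sqrt(pi)
Z1 = plasma_dispersion(1.0)
```

A rational (Padé) approximation of Z, as used in the report, trades a little accuracy for the ability to carry out the infinite sums over modified Bessel functions analytically.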
Hydrogen Production by Homogeneous Catalysis: Alcohol Acceptorless Dehydrogenation
DEFF Research Database (Denmark)
Nielsen, Martin
2015-01-01
in hydrogen production from biomass using homogeneous catalysis. Homogeneous catalysis has the advantage of generally performing transformations at much milder conditions than traditional heterogeneous catalysis, and hence it constitutes a promising tool for future applications for a sustainable energy sector...
Radiotracer investigation of cement raw meal homogenizers. Pt. 2
International Nuclear Information System (INIS)
Baranyai, L.
1983-01-01
Based on a radioisotopic tracer technique, a method has been worked out to study the homogenization and segregation processes of cement-industry raw meal homogenizers. On-site measurements were carried out by this method in some Hungarian cement works to determine the optimal homogenization parameters of operating homogenizers. The motion and distribution of different raw meal fractions traced with the 198 Au radioisotope were studied in homogenization processes proceeding with different parameters. In the first part of the publication, the change of charge homogeneity in time, measured as the resultant of mixing and separating processes, was discussed. In the second part, the parameters and types of homogenizers influencing the efficiency of homogenization are detailed. (orig.)
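A common way to reduce such tracer measurements is to describe homogenization as an exponential decay of the between-sample concentration variance, sigma^2(t) = sigma0^2 * exp(-k*t), and to read the mixing rate k off a log-linear fit. This generic sketch is not the paper's actual procedure, and the numbers are synthetic:

```python
import numpy as np

def mixing_rate(t, variance):
    """Fit sigma^2(t) = sigma0^2 * exp(-k*t); return the mixing rate k."""
    slope, _ = np.polyfit(t, np.log(variance), 1)
    return -slope

t = np.array([0.0, 5.0, 10.0, 20.0, 40.0])   # hypothetical sampling times, min
var = 4.0 * np.exp(-0.3 * t)                 # synthetic tracer-concentration variance
k = mixing_rate(t, var)
```

Segregation would show up as a departure from this simple decay, e.g. the variance levelling off or rising again at late times.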
Layered Fiberconcrete with Non-Homogeneous Fibers Distribution
Lūsis, V; Krasņikovs, A
2013-01-01
The aim of the present research is to create a fiberconcrete construction with a non-homogeneous distribution of fibers in it. Traditionally, fibers are dispersed homogeneously in concrete. At the same time, in many situations fiberconcretes with homogeneously dispersed fibers are not optimal (the majority of the added fibers do not participate in the load-bearing process).
Non-homogeneous dynamic Bayesian networks for continuous data
Grzegorczyk, Marco; Husmeier, Dirk
Classical dynamic Bayesian networks (DBNs) are based on the homogeneous Markov assumption and cannot deal with non-homogeneous temporal processes. Various approaches to relax the homogeneity assumption have recently been proposed. The present paper presents a combination of a Bayesian network with
Bounds for nonlinear composites via iterated homogenization
Ponte Castañeda, P.
2012-09-01
Improved estimates of the Hashin-Shtrikman-Willis type are generated for the class of nonlinear composites consisting of two well-ordered, isotropic phases distributed randomly with prescribed two-point correlations, as determined by the H-measure of the microstructure. For this purpose, a novel strategy for generating bounds has been developed utilizing iterated homogenization. The general idea is to make use of bounds that may be available for composite materials in the limit when the concentration of one of the phases (say phase 1) is small. It then follows from the theory of iterated homogenization that it is possible, under certain conditions, to obtain bounds for more general values of the concentration, by gradually adding small amounts of phase 1 in incremental fashion, and sequentially using the available dilute-concentration estimate, up to the final (finite) value of the concentration (of phase 1). Such an approach can also be useful when available bounds are expected to be tighter for certain ranges of the phase volume fractions. This is the case, for example, for the "linear comparison" bounds for porous viscoplastic materials, which are known to be comparatively tighter for large values of the porosity. In this case, the new bounds obtained by the above-mentioned "iterated" procedure can be shown to be much improved relative to the earlier "linear comparison" bounds, especially at low values of the porosity and high triaxialities. Consistent with the way in which they have been derived, the new estimates are, strictly, bounds only for the class of multi-scale, nonlinear composites consisting of two well-ordered, isotropic phases that are distributed with prescribed H-measure at each stage in the incremental process. However, given the facts that the H-measure of the sequential microstructures is conserved (so that the final microstructures can be shown to have the same H-measure), and that H-measures are insensitive to length scales, it is conjectured
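The incremental construction described above can be illustrated with the simplest possible linear case: the conductivity of a two-phase composite, adding the inclusion phase in dilute steps and applying the dilute (Maxwell) estimate for spherical inclusions at each step. This reduces to the classical differential scheme; it is a toy analogue of the iterated-homogenization idea, not the paper's nonlinear bounds:

```python
def iterated_homogenization(k_matrix, k_incl, f_target, steps=20000):
    """Effective conductivity via iterated homogenization: phase 1 is added
    in dilute increments, each treated with the dilute Maxwell estimate for
    spherical inclusions embedded in the current effective medium."""
    k = k_matrix
    f = 0.0
    df = f_target / steps
    for _ in range(steps):
        # dilute estimate for the small added fraction; the 1/(1 - f) factor
        # accounts for part of the increment replacing existing inclusions
        k += df / (1.0 - f) * 3.0 * k * (k_incl - k) / (k_incl + 2.0 * k)
        f += df
    return k
```

For insulating spherical inclusions (k_incl -> 0) this scheme integrates to the known closed form k = k0 * (1 - f)**1.5, which makes a convenient check of the incremental bookkeeping.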
Homogenization of the lipid profile values.
Pedro-Botet, Juan; Rodríguez-Padial, Luis; Brotons, Carlos; Esteban-Salán, Margarita; García-Lerín, Aurora; Pintó, Xavier; Lekuona, Iñaki; Ordóñez-Llanos, Jordi
Analytical reports from the clinical laboratory are essential to guide clinicians about what lipid profile values should be considered altered and, therefore, require intervention. Unfortunately, there is great heterogeneity in the lipid values reported as "normal, desirable, recommended or referenced" by clinical laboratories. This can complicate clinical decisions and be a barrier to achieving the therapeutic goals for cardiovascular prevention. A recent international recommendation has added a new heterogeneity factor to the interpretation of the lipid profile: the possibility of measuring it without previous fasting. All this justifies the need to develop a document that adapts the existing knowledge to the clinical practice of our health system. In this regard, professionals from different scientific societies involved in the measurement and use of lipid profile data have developed this document to establish recommendations that facilitate their homogenization. Copyright © 2017. Published by Elsevier España, S.L.U.
The structure and homogeneity of Psalm 32
Directory of Open Access Journals (Sweden)
J. Henk Potgieter
2014-11-01
Full Text Available Psalm 32 is widely regarded as a psalm of thanksgiving with elements of wisdom poetry intermingled into it. The wisdom elements are variously explained as having been present from the beginning, or as having been added to a foundational composition. Such views of the Gattung have had a decisive influence on the interpretation of the psalm. This article argues, on the basis of a structural analysis, that Psalm 32 should be understood as a homogeneous wisdom composition. The parallel and inverse structure of its two stanzas demonstrates that the aim of its author was to encourage the upright to foster an open, intimate relationship with Yahweh in which transgressions are confessed and Yahweh's benevolent guidance on the way of life is wisely accepted.
Precipitation of plutonium oxalate from homogeneous solutions
International Nuclear Information System (INIS)
Rao, V.K.; Pius, I.C.; Subbarao, M.; Chinnusamy, A.; Natarajan, P.R.
1986-01-01
A method for the precipitation of plutonium(IV) oxalate from homogeneous solutions using diethyl oxalate is reported. The precipitate obtained is crystalline and easily filterable, with yields in the range of 92-98% for precipitations involving a few mg to g quantities of plutonium. Decontamination factors for common impurities such as U(VI), Am(III) and Fe(III) were determined. TGA and chemical analysis of the compound indicate its composition as Pu(C2O4)2·6H2O. Data are obtained on the solubility of the oxalate in nitric acid and in mixtures of nitric acid and oxalic acid of varying concentrations. Green PuO2 obtained by calcination of the oxalate has specifications within the recommended values for trace foreign substances such as chlorine, fluorine, carbon and nitrogen. (author)
Homogenization in thermoelasticity: application to composite materials
Energy Technology Data Exchange (ETDEWEB)
Peyroux, R [Lab. de Mecanique et Genie Civil, Univ. Montpellier 2, 34 Montpellier (France); Licht, C [Lab. de Mecanique et Genie Civil, Univ. Montpellier 2, 34 Montpellier (France)
1993-11-01
One of the obstacles to the industrial use of metal matrix composite materials is the damage they rapidly undergo when subjected to cyclic thermal loadings; high local thermal stresses can develop, sometimes near or above the elastic limit, due to the mismatch of elastic and thermal coefficients between the fibers and the matrix. For the same reasons, early cracks can appear in composites like ceramic-ceramic. Therefore, we investigate the linear thermoelastic behaviour of heterogeneous materials, taking account of the isentropic coupling term in the heat conduction equation. In the case of periodic materials, recent results using the homogenization theory allowed us to describe the macroscopic and microscopic behaviours of such materials. This paper is concerned with the numerical simulation of this problem by a finite element method, using a multiscale approach. (orig.).
Modelling of an homogeneous equilibrium mixture model
International Nuclear Information System (INIS)
Bernard-Champmartin, A.; Poujade, O.; Mathiaud, J.; Mathiaud, J.; Ghidaglia, J.M.
2014-01-01
We present here a model for two-phase flows which is simpler than the 6-equation models (with two densities, two velocities, two temperatures) but more accurate than the standard 4-equation mixture models (with two densities, one velocity and one temperature). We are interested in the case when the two phases have been interacting long enough for the drag force to be small but still not negligible. The so-called Homogeneous Equilibrium Mixture (HEM) model that we present deals with both mixture and relative quantities, allowing in particular both a mixture velocity and a relative velocity to be followed. This relative velocity is not tracked by a conservation law but by a closure law (drift relation), whose expression is related to the drag force terms of the two-phase flow. After the derivation of the model, a stability analysis and numerical experiments are presented. (authors)
Homogeneous wave turbulence driven by tidal flows
Favier, B.; Le Reun, T.; Barker, A.; Le Bars, M.
2017-12-01
When a moon orbits around a planet, the rotation of the induced tidal bulge drives a homogeneous, periodic, large-scale flow. The combination of such an excitation with the rotating motion of the planet has been shown to drive parametric resonance of a pair of inertial waves in a mechanism called the elliptical instability. Geophysical fluid layers can also be stratified: this is the case for instance of the Earth's oceans and, as suggested by several studies, of the upper part of the Earth's liquid Outer Core. We thus investigate the stability of a rotating and stratified layer undergoing tidal distortion in the limit where either rotation or stratification is dominant. We show that the periodic tidal flow drives a parametric subharmonic resonance of inertial (resp. internal) waves in the rotating (resp. stratified) case. The instability saturates into a wave turbulence pervading the whole fluid layer. In such a state, the instability mechanism conveys the tidal energy from the large-scale tidal flow to the resonant modes, which then feed a succession of triadic resonances also generating small spatial scales. In the rotating case, we observe a kinetic energy spectrum with a k^-2 slope for which the Coriolis force is dominant at all spatial scales. In the stratified case, where the timescale separation is increased between the tidal excitation and the Brunt-Väisälä frequencies, the temporal spectrum decays with a ω^-2 power law up to the cut-off frequency beyond which waves do not exist. This result is reminiscent of the Garrett and Munk spectrum measured in the oceans and theoretically described as a manifestation of internal wave turbulence. In addition to revealing an instability driving homogeneous turbulence in geophysical fluid layers, our approach is also an efficient numerical tool to investigate the possibly universal properties of wave turbulence in a geophysical context.
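Power-law decays like the k^-2 and ω^-2 spectra quoted above are typically measured as the slope of the spectrum in log-log coordinates over the scaling range. A minimal sketch with a synthetic spectrum (the amplitude and wavenumber range are arbitrary):

```python
import numpy as np

def spectral_slope(freqs, power):
    """Least-squares slope of log(P) versus log(f)."""
    return np.polyfit(np.log(freqs), np.log(power), 1)[0]

k_vals = np.logspace(0, 2, 50)       # wavenumbers spanning two decades
E = 7.3 * k_vals**-2.0               # synthetic spectrum with the expected -2 decay
slope = spectral_slope(k_vals, E)
```

In practice the fit window must be restricted to the range between the forcing scale and the dissipative (or wave cut-off) scale, where the power law actually holds.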
Conformally compactified homogeneous spaces (Possible Observable Consequences)
International Nuclear Information System (INIS)
Budinich, P.
1995-01-01
Some arguments based on the possible spontaneous violation of the Cosmological Principle (represented by the observed large-scale structures of galaxies), on the Cartan geometry of simple spinors, and on the Fock formulation of the hydrogen-atom wave equation in momentum space are presented in favour of the hypothesis that space-time and momentum space should both be conformally compactified and represented by the two four-dimensional homogeneous spaces of the conformal group, both isomorphic to (S^3 x S^1)/Z_2 and correlated by conformal inversion. Within this framework, the possible common origin of the SO(4) symmetry underlying the geometrical structure of the Universe, of Kepler orbits and of the H-atom is discussed. One of the consequences of the proposed hypothesis could be that any quantum field theory should be naturally free from both infrared and ultraviolet divergences. But then physical spaces, defined as those where physical phenomena may be best described, could be different from those homogeneous spaces. A simple, exactly soluble toy model, valid for a two-dimensional space-time, is presented in which the conjectured conformally compactified space-time and momentum space are both isomorphic to (S^1 x S^1)/Z_2, while the physical spaces are two finite lattices which are dual, since Fourier transforms, represented by finite, discrete sums, may be well defined on them. Furthermore, a q-deformed SU_q(1,1) may be represented on them if q is a root of unity. (author). 22 refs, 3 figs
Directory of Open Access Journals (Sweden)
Luo Li-Qin
2016-01-01
Full Text Available In this paper, we investigate the value distribution of meromorphic solutions of homogeneous and non-homogeneous complex linear differential-difference equations, and obtain results on the relations between the order of the solutions and the convergence exponents of the zeros, poles, a-points and small-function value points of the solutions, which show that the relations in the case of non-homogeneous equations are sharper than those in the case of homogeneous equations.
Dissolution test for homogeneity of mixed oxide fuel pellets
International Nuclear Information System (INIS)
Lerch, R.E.
1979-08-01
Experiments were performed to determine the relationship between fuel pellet homogeneity and pellet dissolubility. Although, in general, the amount of pellet residue decreased with increased homogeneity, as measured by the pellet figure of merit, the relationship was not absolute. Thus, all pellets with high figure of merit (excellent homogeneity) do not necessarily dissolve completely and all samples that dissolve completely do not necessarily have excellent homogeneity. It was therefore concluded that pellet dissolubility measurements could not be substituted for figure of merit determinations as a measurement of pellet homogeneity. 8 figures, 3 tables
Assembly homogenization techniques for light water reactor analysis
International Nuclear Information System (INIS)
Smith, K.S.
1986-01-01
Recent progress in development and application of advanced assembly homogenization methods for light water reactor analysis is reviewed. Practical difficulties arising from conventional flux-weighting approximations are discussed and numerical examples given. The mathematical foundations for homogenization methods are outlined. Two methods, Equivalence Theory and Generalized Equivalence Theory, which are theoretically capable of eliminating homogenization error, are reviewed. Practical means of obtaining approximate homogenized parameters are presented and numerical examples are used to contrast the two methods. Applications of these techniques to PWR baffle/reflector homogenization and BWR bundle homogenization are discussed. Nodal solutions to realistic reactor problems are compared to fine-mesh PDQ calculations, and the accuracy of the advanced homogenization methods is established. Remaining problem areas are investigated, and directions for future research are suggested. (author)
Homogeneous deuterium exchange using rhenium and platinum chloride catalysts
International Nuclear Information System (INIS)
Fawdry, R.M.
1979-01-01
Previous studies of homogeneous hydrogen isotope exchange are mostly confined to one catalyst, the tetrachloroplatinite salt. Recent reports have indicated that chloride salts of iridium and rhodium may also be homogeneous exchange catalysts similar to the tetrachloroplatinite, but with much lower activities. Exchange by these homogeneous catalysts is frequently accompanied by metal precipitation and the termination of homogeneous exchange, particularly in the case of alkane exchange. The studies presented in this thesis describe two different approaches to overcoming this limitation of homogeneous hydrogen isotope exchange catalysts. The first approach was to improve the stability of an existing homogeneous catalyst, and the second was to develop a new homogeneous exchange catalyst which is free of the instability limitation.
On the decay of homogeneous isotropic turbulence
Skrbek, L.; Stalp, Steven R.
2000-08-01
Decaying homogeneous, isotropic turbulence is investigated using a phenomenological model based on the three-dimensional turbulent energy spectra. We generalize the approach first used by Comte-Bellot and Corrsin [J. Fluid Mech. 25, 657 (1966)] and revised by Saffman [J. Fluid Mech. 27, 581 (1967); Phys. Fluids 10, 1349 (1967)]. At small wave numbers we assume the spectral energy is proportional to the wave number to an arbitrary power. The specific case of power 2, which follows from the Saffman invariant, is discussed in detail and is later shown to best describe experimental data. For the spectral energy density in the inertial range we apply both the Kolmogorov -5/3 law, E(k) = C ε^{2/3} k^{-5/3}, and the refined Kolmogorov law by taking into account intermittency. We show that intermittency affects the energy decay mainly by shifting the position of the virtual origin rather than altering the power law of the energy decay. Additionally, the spectrum is naturally truncated due to the size of the wind tunnel test section, as eddies larger than the physical size of the system cannot exist. We discuss effects associated with the energy-containing length scale saturating at the size of the test section and predict a change in the power law decay of both energy and vorticity. To incorporate viscous corrections to the model, we truncate the spectrum at an effective Kolmogorov wave number k_η = γ(ε/ν^3)^{1/4}, where γ is a dimensionless parameter of order unity. We show that as the turbulence decays, viscous corrections gradually become more important and a simple power law can no longer describe the decay. We discuss the final period of decay within the framework of our model, and show that care must be taken to distinguish between the final period of decay and the change of the character of decay due to the saturation of the energy containing length scale. The model is applied to a number of experiments on decaying turbulence. These include the downstream decay of turbulence in
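The spectral model described in the abstract — a k^2 (Saffman) range at small wave numbers matched to a Kolmogorov -5/3 inertial range and truncated at the effective Kolmogorov wave number k_η = γ(ε/ν^3)^{1/4} — can be sketched numerically as follows (all parameter values, including the energy-containing wave number k_e, are illustrative, not taken from the paper):

```python
import numpy as np

C, eps, nu, gamma = 1.5, 1.0, 1e-4, 1.0   # illustrative values only
k_eta = gamma * (eps / nu**3) ** 0.25     # effective Kolmogorov cutoff
k_e = 10.0                                # energy-containing wave number (assumed)

def spectrum(k):
    """Model E(k): ~k^2 below k_e, C eps^{2/3} k^{-5/3} above, zero past k_eta."""
    E_match = C * eps ** (2.0 / 3.0) * k_e ** (-5.0 / 3.0)  # continuity at k_e
    E = np.where(k < k_e,
                 E_match * (k / k_e) ** 2,
                 C * eps ** (2.0 / 3.0) * k ** (-5.0 / 3.0))
    return np.where(k <= k_eta, E, 0.0)

# total kinetic energy of the model spectrum (trapezoidal rule)
k = np.logspace(-1, np.log10(k_eta) + 0.1, 4000)
E = spectrum(k)
energy = np.sum(0.5 * (E[1:] + E[:-1]) * np.diff(k))
```

As the turbulence decays, ε falls, k_η moves to smaller wave numbers, and an increasing share of the truncated integral is lost — the mechanism by which the paper argues a simple power law eventually fails.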
Homogeneous Thorium Fuel Cycles in Candu Reactors
Energy Technology Data Exchange (ETDEWEB)
Hyland, B.; Dyck, G.R.; Edwards, G.W.R.; Magill, M. [Chalk River Laboratories, Atomic Energy of Canada Limited (Canada)
2009-06-15
The CANDU® reactor has an unsurpassed degree of fuel-cycle flexibility, as a consequence of its fuel-channel design, excellent neutron economy, on-power refueling, and simple fuel bundle [1]. These features facilitate the introduction and full exploitation of thorium fuel cycles in Candu reactors in an evolutionary fashion. Because thorium itself does not contain a fissile isotope, neutrons must be provided by adding a fissile material, either within or outside of the thorium-based fuel. Those same Candu features that provide fuel-cycle flexibility also make possible many thorium fuel-cycle options. Various thorium fuel cycles can be categorized by the type and geometry of the added fissile material. The simplest of these fuel cycles are based on homogeneous thorium fuel designs, where the fissile material is mixed uniformly with the fertile thorium. These fuel cycles can be competitive in resource utilization with the best uranium-based fuel cycles, while building up a 'mine' of U-233 in the spent fuel, for possible recycle in thermal reactors. When U-233 is recycled from the spent fuel, thorium-based fuel cycles in Candu reactors can provide substantial improvements in the efficiency of energy production from existing fissile resources. The fissile component driving the initial fuel could be enriched uranium, plutonium, or uranium-233. Many different thorium fuel cycle options have been studied at AECL [2,3]. This paper presents the results of recent homogeneous thorium fuel cycle calculations using plutonium and enriched uranium as driver fuels, with and without U-233 recycle. High and low burnup cases have been investigated for both the once-through and U-233 recycle cases. CANDU® is a registered trademark of Atomic Energy of Canada Limited (AECL). 1. Boczar, P.G. 'Candu Fuel-Cycle Vision', Presented at IAEA Technical Committee Meeting on 'Fuel Cycle Options for LWRs and HWRs', 1998 April 28 - May 01, also Atomic Energy
Persymmetric Adaptive Detectors of Subspace Signals in Homogeneous and Partially Homogeneous Clutter
Directory of Open Access Journals (Sweden)
Ding Hao
2015-08-01
Full Text Available In the field of adaptive radar detection, an effective strategy to improve the detection performance is to exploit the structural information of the covariance matrix, especially in the case of insufficient reference cells. Thus, in this study, the problem of detecting multidimensional subspace signals is discussed by considering the persymmetric structure of the clutter covariance matrix, which implies that the covariance matrix is persymmetric about its cross diagonal. Persymmetric adaptive detectors are derived on the basis of the one-step principle as well as the two-step Generalized Likelihood Ratio Test (GLRT) in homogeneous and partially homogeneous clutter. The proposed detectors consider the structural information of the covariance matrix at the design stage. Simulation results suggest performance improvement compared with existing detectors when reference cells are insufficient. Moreover, the detection performance is assessed with respect to the effects of the covariance matrix, signal subspace dimension, and mismatched performance of signal subspace as well as signal fluctuations.
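Persymmetry about the cross diagonal means the covariance matrix R satisfies J R* J = R, where J is the exchange (anti-identity) matrix. A minimal sketch of enforcing that structure on a sample covariance estimate — a generic symmetrization, not the detectors derived in the paper — might look like this:

```python
import numpy as np

def make_persymmetric(R):
    """Project a covariance estimate onto the persymmetric set:
    R_p = (R + J R* J) / 2, with J the exchange (anti-identity) matrix."""
    n = R.shape[0]
    J = np.fliplr(np.eye(n))
    return 0.5 * (R + J @ R.conj() @ J)

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
R = A @ A.conj().T                 # a Hermitian sample covariance
Rp = make_persymmetric(R)          # Hermitian AND persymmetric: J Rp* J == Rp
```

This doubling of structural constraints is what effectively doubles the usable training data when reference cells are scarce, which is the motivation cited in the abstract.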
Dynamic contact angle cycling homogenizes heterogeneous surfaces.
Belibel, R; Barbaud, C; Mora, L
2016-12-01
In order to reduce restenosis, the development of an appropriate coating material for metallic stents has been a challenge for biomedicine and scientific research over the past decade. Therefore, biodegradable copolymers of poly((R,S)-3,3 dimethylmalic acid) (PDMMLA) were prepared in order to develop a new coating exhibiting different custom groups in its side chain and able to carry a drug. This material will be in direct contact with cells and blood. It consists of carboxylic acid and hexylic groups used for hydrophilic and hydrophobic character, respectively. The study of this material's wettability and dynamic surface properties is important because of the influence of the chemistry and the potential motility of these chemical groups on cell adhesion and polymer hydrolysis kinetics. Cassie theory was used for the theoretical correction of contact angles of these chemically heterogeneous surface coatings. Dynamic Surface Analysis was used as a practical homogenizer of chemically heterogeneous surfaces by repeated cycling in water. In this work, we confirmed that, unlike the receding contact angle, the advancing contact angle is influenced by a difference of only 10% in the acidic groups (%A) in the side chain of the polymers: it decreases linearly with increasing acidity percentage. Hysteresis (H) is also a sensitive parameter, which is discussed in this paper. Finally, we conclude that cycling provides real information, thus avoiding the theoretical Cassie correction. H(10) is the parameter most sensitive to %A. Copyright © 2016 Elsevier B.V. All rights reserved.
Theoretical studies of homogeneous catalysts mimicking nitrogenase.
Sgrignani, Jacopo; Franco, Duvan; Magistrato, Alessandra
2011-01-10
The conversion of molecular nitrogen to ammonia is a key biological and chemical process and represents one of the most challenging topics in chemistry and biology. In Nature the Mo-containing nitrogenase enzymes perform nitrogen 'fixation' via an iron molybdenum cofactor (FeMo-co) under ambient conditions. In contrast, industrially, the Haber-Bosch process reduces molecular nitrogen and hydrogen to ammonia with a heterogeneous iron catalyst under drastic conditions of temperature and pressure. This process accounts for the production of millions of tons of nitrogen compounds used for agricultural and industrial purposes, but the high temperature and pressure required result in a large energy loss, leading to several economic and environmental issues. During the last 40 years many attempts have been made to synthesize simple homogeneous catalysts that can activate dinitrogen under the same mild conditions of the nitrogenase enzymes. Several compounds, almost all containing transition metals, have been shown to bind and activate N₂ to various degrees. However, to date Mo(N₂)(HIPTN)₃N with (HIPTN)₃N= hexaisopropyl-terphenyl-triamidoamine is the only compound performing this process catalytically. In this review we describe how Density Functional Theory calculations have been of help in elucidating the reaction mechanisms of the inorganic compounds that activate or fix N₂. These studies provided important insights that rationalize and complement the experimental findings about the reaction mechanisms of known catalysts, predicting the reactivity of new potential catalysts and helping in tailoring new efficient catalytic compounds.
Theoretical Studies of Homogeneous Catalysts Mimicking Nitrogenase
Directory of Open Access Journals (Sweden)
Alessandra Magistrato
2011-01-01
Full Text Available The conversion of molecular nitrogen to ammonia is a key biological and chemical process and represents one of the most challenging topics in chemistry and biology. In Nature the Mo-containing nitrogenase enzymes perform nitrogen ‘fixation’ via an iron molybdenum cofactor (FeMo-co) under ambient conditions. In contrast, industrially, the Haber-Bosch process reduces molecular nitrogen and hydrogen to ammonia with a heterogeneous iron catalyst under drastic conditions of temperature and pressure. This process accounts for the production of millions of tons of nitrogen compounds used for agricultural and industrial purposes, but the high temperature and pressure required result in a large energy loss, leading to several economic and environmental issues. During the last 40 years many attempts have been made to synthesize simple homogeneous catalysts that can activate dinitrogen under the same mild conditions of the nitrogenase enzymes. Several compounds, almost all containing transition metals, have been shown to bind and activate N2 to various degrees. However, to date Mo(N2)(HIPTN)3N with (HIPTN)3N = hexaisopropyl-terphenyl-triamidoamine is the only compound performing this process catalytically. In this review we describe how Density Functional Theory calculations have been of help in elucidating the reaction mechanisms of the inorganic compounds that activate or fix N2. These studies provided important insights that rationalize and complement the experimental findings about the reaction mechanisms of known catalysts, predicting the reactivity of new potential catalysts and helping in tailoring new efficient catalytic compounds.
Elastic metamaterials and dynamic homogenization: a review
Directory of Open Access Journals (Sweden)
Ankit Srivastava
2015-01-01
Full Text Available In this paper, we review the recent advances which have taken place in the understanding and applications of acoustic/elastic metamaterials. Metamaterials are artificially created composite materials which exhibit unusual properties that are not found in nature. We begin by presenting arguments from discrete systems which support the case for the existence of unusual material properties such as tensorial and/or negative density. The arguments are then extended to elastic continua through coherent averaging principles. The resulting coupled and nonlocal homogenized relations, called the Willis relations, are presented as the natural description of inhomogeneous elastodynamics. They are specialized to Bloch waves propagating in periodic composites and we show that the Willis properties display the unusual behavior which is often required in metamaterial applications such as the Veselago lens. We finally present the recent advances in the area of transformation elastodynamics, charting its inspirations from transformation optics, clarifying its particular challenges, and identifying its connection with the constitutive relations of the Willis and the Cosserat types.
Homogenization models for 2-D grid structures
Banks, H. T.; Cioranescu, D.; Rebnord, D. A.
1992-01-01
In the past several years, we have pursued efforts related to the development of accurate models for the dynamics of flexible structures made of composite materials. Rather than viewing periodicity and sparseness as obstacles to be overcome, we exploit them to our advantage. We consider a variational problem on a domain that has large, periodically distributed holes. Using homogenization techniques we show that the solution to this problem is in some topology 'close' to the solution of a similar problem that holds on a much simpler domain. We study the behavior of the solution of the variational problem as the holes increase in number, but decrease in size in such a way that the total amount of material remains constant. The result is an equation that is in general more complex, but with a domain that is simply connected rather than perforated. We study the limit of the solution as the amount of material goes to zero. This second limit will, in most cases, retrieve much of the simplicity that was lost in the first limit without sacrificing the simplicity of the domain. Finally, we show that these results can be applied to the case of a vibrating Love-Kirchhoff plate with Kelvin-Voigt damping. We rely heavily on earlier results of (Du), (CS) for the static, undamped Love-Kirchhoff equation. Our efforts here result in a modification of those results to include both time dependence and Kelvin-Voigt damping.
Homogeneous cosmology with aggressively expanding civilizations
International Nuclear Information System (INIS)
Jay Olson, S
2015-01-01
In the context of a homogeneous Universe, we note that the appearance of aggressively expanding advanced life is geometrically similar to the process of nucleation and bubble growth in a first-order cosmological phase transition. We exploit this similarity to describe the dynamics of life saturating the Universe on a cosmic scale, adapting the phase transition model to incorporate probability distributions of expansion and resource consumption strategies. Through a series of numerical solutions spanning several orders of magnitude in the input assumption parameters, the resulting cosmological model is used to address basic questions related to the intergalactic spreading of life, dealing with issues such as timescales, observability, competition between strategies, and first-mover advantage. Finally, we examine physical effects on the Universe itself, such as reheating and the backreaction on the evolution of the scale factor, if such life is able to control and convert a significant fraction of the available pressureless matter into radiation. We conclude that the existence of life, if certain advanced technologies are practical, could have a significant influence on the future large-scale evolution of the Universe. (paper)
Numerical computation of homogeneous slope stability.
Xiao, Shuangshuang; Li, Kemin; Ding, Xiaohua; Liu, Tong
2015-01-01
To simplify the computational process of homogeneous slope stability analysis, improve computational accuracy, and find multiple potential slip surfaces of a complex geometric slope, this study utilized the limit equilibrium method to derive expressions for the overall and partial factors of safety. The search for the minimum factor of safety (FOS) was transformed into a constrained nonlinear programming problem, to which an exhaustive method (EM) and a particle swarm optimization algorithm (PSO) were applied. In simple slope examples, the computational results using the EM and PSO were close to those obtained using other methods. Compared to the EM, the PSO had a small computation error and a significantly shorter computation time. As a result, the PSO could precisely calculate the slope FOS with high efficiency. The example of the multistage slope analysis indicated that this slope had two potential slip surfaces, with factors of safety of 1.1182 and 1.1560, respectively. The differences between these and the minimum FOS (1.0759) were small, but the positions of the slip surfaces were completely different from the critical slip surface (CSS).
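The PSO search the study applies can be sketched generically. The quadratic objective below is a stand-in with a known minimum, not the geotechnical factor-of-safety functional, and all swarm parameters are conventional illustrative choices:

```python
import numpy as np

def pso_minimize(f, bounds, n_particles=30, n_iter=200,
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimizer over box constraints (a sketch of
    the kind of search applied to the factor-of-safety surface)."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    dim = len(bounds)
    x = rng.uniform(lo, hi, (n_particles, dim))          # positions
    v = np.zeros_like(x)                                 # velocities
    pbest = x.copy()
    pbest_val = np.array([f(p) for p in x])
    g = pbest[pbest_val.argmin()].copy()                 # global best
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)                       # enforce constraints
        vals = np.array([f(p) for p in x])
        better = vals < pbest_val
        pbest[better], pbest_val[better] = x[better], vals[better]
        g = pbest[pbest_val.argmin()].copy()
    return g, pbest_val.min()

# stand-in objective with a known minimum at (1, 2)
g, val = pso_minimize(lambda p: (p[0] - 1) ** 2 + (p[1] - 2) ** 2,
                      [(-5, 5), (-5, 5)])
```

Because the swarm keeps a population of personal bests rather than a single iterate, near-optimal but distinct slip surfaces (like the two reported in the multistage example) can survive in the population alongside the global minimum.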
Numerical Computation of Homogeneous Slope Stability
Directory of Open Access Journals (Sweden)
Shuangshuang Xiao
2015-01-01
Full Text Available To simplify the computational process of homogeneous slope stability analysis, improve computational accuracy, and find multiple potential slip surfaces of a complex geometric slope, this study utilized the limit equilibrium method to derive expressions for the overall and partial factors of safety. The search for the minimum factor of safety (FOS) was transformed into a constrained nonlinear programming problem, to which an exhaustive method (EM) and a particle swarm optimization algorithm (PSO) were applied. In simple slope examples, the computational results using the EM and PSO were close to those obtained using other methods. Compared to the EM, the PSO had a small computation error and a significantly shorter computation time. As a result, the PSO could precisely calculate the slope FOS with high efficiency. The example of the multistage slope analysis indicated that this slope had two potential slip surfaces, with factors of safety of 1.1182 and 1.1560, respectively. The differences between these and the minimum FOS (1.0759) were small, but the positions of the slip surfaces were completely different from the critical slip surface (CSS).
Thermal neutron diffusion parameters in homogeneous mixtures
Energy Technology Data Exchange (ETDEWEB)
Drozdowicz, K.; Krynicka, E. [Institute of Nuclear Physics, Cracow (Poland)
1995-12-31
A physical background is presented for a computer program which calculates the thermal neutron diffusion parameters for homogeneous mixtures of any compounds. The macroscopic absorption, scattering and transport cross sections of the mixture are defined, which are in general functions of the incident neutron energy. The energy-averaged neutron parameters are available when these energy dependences and the thermal neutron energy distribution are assumed. Then the averaged diffusion coefficient and the pulsed thermal neutron parameters (the absorption rate and the diffusion constant) are also defined. The absorption cross section is described by the 1/v law and deviations from this behaviour are considered. The scattering cross section can be assumed to be almost constant in the thermal neutron region (which results from the free gas model). Serious deviations are observed for hydrogen atoms bound in molecules, and a special study in the paper is devoted to this problem. A certain effective scattering cross section is found in this case on the basis of individual exact data for a few hydrogenous media. Approximations assumed for the average cosine of the scattering angle are also discussed. The macroscopic parameters calculated are averaged over the Maxwellian energy distribution for the thermal neutron flux. Information on the input data for the computer program is included. (author). 10 refs, 4 figs, 5 tabs.
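The Maxwellian averaging step mentioned in the abstract can be sketched numerically. For a 1/v absorption law, flux-averaging over a Maxwellian whose most probable speed equals the reference speed v0 = 2200 m/s reproduces the classical factor sqrt(pi)/2 ≈ 0.886 relative to the tabulated 2200 m/s value (sigma0 below is an arbitrary illustrative value, not data from the paper):

```python
import numpy as np

sigma0, v0 = 1.0, 2200.0      # sigma0 (illustrative, barns) tabulated at v0 m/s
kT_over_m = v0 ** 2 / 2.0     # temperature chosen so n(v) peaks at v0

# Maxwellian speed distribution n(v) ~ v^2 exp(-v^2 / (2 kT/m));
# the neutron FLUX weighting carries an extra factor of v.
v = np.linspace(1.0, 6 * v0, 200_000)
flux = v ** 3 * np.exp(-v ** 2 / (2 * kT_over_m))
sigma = sigma0 * v0 / v       # the 1/v absorption law

sigma_bar = np.sum(sigma * flux) / np.sum(flux)   # flux-averaged cross-section
# analytically: sigma_bar = (sqrt(pi)/2) * sigma0
```

Deviations from the 1/v law (and the bound-hydrogen scattering effects the paper studies) would enter by replacing the `sigma` line with a tabulated energy dependence before averaging.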
Lagrangian statistics in compressible isotropic homogeneous turbulence
Yang, Yantao; Wang, Jianchun; Shi, Yipeng; Chen, Shiyi
2011-11-01
In this work we conducted a Direct Numerical Simulation (DNS) of forced compressible isotropic homogeneous turbulence and investigated the flow statistics from the Lagrangian point of view, namely, the statistics are computed following the trajectories of passive tracers. The numerical method combined the Eulerian field solver developed by Wang et al. (2010, J. Comp. Phys., 229, 5257-5279) with a Lagrangian module for tracking the tracers and recording the data. The Lagrangian probability density functions (p.d.f.'s) were then calculated for both kinetic and thermodynamic quantities. In order to isolate the shearing part of the flow from the compressing part, we employed the Helmholtz decomposition to decompose the flow field (mainly the velocity field) into solenoidal and compressive parts. The solenoidal part was compared with the incompressible case, while the compressibility effect showed up in the compressive part. The Lagrangian structure functions and cross-correlations between various quantities are also discussed. This work was supported in part by China's Turbulence Program under Grant No. 2009CB724101.
Forming homogeneous clusters for differential risk information
International Nuclear Information System (INIS)
Maardberg, B.
1996-01-01
Latent risk situations are always present in society. General information on these risk situations is supposed to be received differently by different groups of people in the population. In the aftermath of specific accidents different groups presumably have need of specific information about how to act to survive, to avoid injuries, to find more information, to obtain facts about the accidents etc. As targets for information these different groups could be defined in different ways. The conventional way is to divide the population according to demographic variables, such as age, sex, occupation etc. Another way would be to structure the population according to dependent variables measured in different studies. They may concern risk perception, emotional reactions, specific technical knowledge of the accidents, and belief in the information sources. One procedure for forming such groupings of people into homogeneous clusters would be by statistical clustering methods on dependent variables. Examples of such clustering procedures are presented and discussed. Data are from a Norwegian study on the perception of radiation from nuclear accidents and other radiation sources. Speculations are made on different risk information strategies. Elements of a research programme are proposed. (author)
Homogeneous purely buoyancy driven turbulent flow
Arakeri, Jaywant; Cholemari, Murali; Pawar, Shashikant
2010-11-01
An unstable density difference across a long vertical tube open at both ends leads to convection that is axially homogeneous with a linear density gradient. We report results from such tube convection experiments, with the driving density difference caused by a salt concentration difference or a temperature difference. At high enough Rayleigh numbers (Ra) the convection is turbulent with zero mean flow and zero mean Reynolds shear stresses; thus turbulent production is purely by buoyancy. We observe different regimes of turbulent convection. At very high Ra the Nusselt number scales as the square root of the Rayleigh number, giving the so-called "ultimate regime" of convection predicted for Rayleigh-Benard convection in the limit of infinite Ra. For turbulent convection at intermediate Ra, the Nusselt number scales as Ra^0.3. In both regimes, the flux and the Taylor-scale Reynolds number are more than an order of magnitude larger than those obtained in Rayleigh-Benard convection. The absence of a mean flow makes this an ideal flow for studying shear-free turbulence near a wall.
Thermal neutron diffusion parameters in homogeneous mixtures
International Nuclear Information System (INIS)
Drozdowicz, K.; Krynicka, E.
1995-01-01
A physical background is presented for a computer program which calculates the thermal neutron diffusion parameters for homogeneous mixtures of any compounds. The macroscopic absorption, scattering and transport cross sections of the mixture are defined, which are in general functions of the incident neutron energy. The energy-averaged neutron parameters are available when these energy dependences and the thermal neutron energy distribution are assumed. Then the averaged diffusion coefficient and the pulsed thermal neutron parameters (the absorption rate and the diffusion constant) are also defined. The absorption cross section is described by the 1/v law and deviations from this behaviour are considered. The scattering cross section can be assumed to be almost constant in the thermal neutron region (which results from the free gas model). Serious deviations are observed for hydrogen atoms bound in molecules, and a special study in the paper is devoted to this problem. A certain effective scattering cross section is found in this case on the basis of individual exact data for a few hydrogenous media. Approximations assumed for the average cosine of the scattering angle are also discussed. The macroscopic parameters calculated are averaged over the Maxwellian energy distribution for the thermal neutron flux. Information on the input data for the computer program is included. (author). 10 refs, 4 figs, 5 tabs
Generalized quantum theory of recollapsing homogeneous cosmologies
International Nuclear Information System (INIS)
Craig, David; Hartle, James B.
2004-01-01
A sum-over-histories generalized quantum theory is developed for homogeneous minisuperspace type A Bianchi cosmological models, focusing on the particular example of the classically recollapsing Bianchi type-IX universe. The decoherence functional for such universes is exhibited. We show how the probabilities of decoherent sets of alternative, coarse-grained histories of these model universes can be calculated. We consider in particular the probabilities for classical evolution defined by a suitable coarse graining. For a restricted class of initial conditions and coarse grainings we exhibit the approximate decoherence of alternative histories in which the universe behaves classically and those in which it does not. For these situations we show that the probability is near unity for the universe to recontract classically if it expands classically. We also determine the relative probabilities of quasiclassical trajectories for initial states of WKB form, recovering for such states a precise form of the familiar heuristic 'J·dΣ' rule of quantum cosmology, as well as a generalization of this rule to generic initial states
Radiation statistics in homogeneous isotropic turbulence
International Nuclear Information System (INIS)
Da Silva, C B; Coelho, P J; Malico, I
2009-01-01
An analysis of turbulence-radiation interaction (TRI) in statistically stationary (forced) homogeneous and isotropic turbulence is presented. A direct numerical simulation code was used to generate instantaneous turbulent scalar fields, and the radiative transfer equation (RTE) was solved to provide statistical data relevant in TRI. The radiation intensity is non-Gaussian and is not spatially correlated with any of the other turbulence or radiation quantities. Its power spectrum exhibits a power-law region with a slope steeper than the classical -5/3 law. The moments of the radiation intensity, Planck-mean and incident-mean absorption coefficients, and emission and absorption TRI correlations are calculated. The influence of the optical thickness of the medium, mean and variance of the temperature and variance of the molar fraction of the absorbing species is studied. Predictions obtained from the time-averaged RTE are also included. It was found that while turbulence yields an increase of the mean blackbody radiation intensity, it causes a decrease of the time-averaged Planck-mean absorption coefficient. The absorption coefficient self-correlation is small in comparison with the temperature self-correlation, and the role of TRI in radiative emission is more important than in radiative absorption. The absorption coefficient-radiation intensity correlation is small, which supports the optically thin fluctuation approximation, and justifies the good predictions often achieved using the time-averaged RTE.
Radiation statistics in homogeneous isotropic turbulence
Energy Technology Data Exchange (ETDEWEB)
Da Silva, C B; Coelho, P J [Mechanical Engineering Department, IDMEC/LAETA, Instituto Superior Tecnico, Technical University of Lisbon, Av. Rovisco Pais, 1049-001 Lisboa (Portugal); Malico, I [Physics Department, University of Evora, Rua Romao Ramalho, 59, 7000-671 Evora (Portugal)], E-mail: carlos.silva@ist.utl.pt, E-mail: imbm@uevora.pt, E-mail: pedro.coelho@ist.utl.pt
2009-09-15
An analysis of turbulence-radiation interaction (TRI) in statistically stationary (forced) homogeneous and isotropic turbulence is presented. A direct numerical simulation code was used to generate instantaneous turbulent scalar fields, and the radiative transfer equation (RTE) was solved to provide statistical data relevant in TRI. The radiation intensity is non-Gaussian and is not spatially correlated with any of the other turbulence or radiation quantities. Its power spectrum exhibits a power-law region with a slope steeper than the classical -5/3 law. The moments of the radiation intensity, Planck-mean and incident-mean absorption coefficients, and emission and absorption TRI correlations are calculated. The influence of the optical thickness of the medium, mean and variance of the temperature and variance of the molar fraction of the absorbing species is studied. Predictions obtained from the time-averaged RTE are also included. It was found that while turbulence yields an increase of the mean blackbody radiation intensity, it causes a decrease of the time-averaged Planck-mean absorption coefficient. The absorption coefficient self-correlation is small in comparison with the temperature self-correlation, and the role of TRI in radiative emission is more important than in radiative absorption. The absorption coefficient-radiation intensity correlation is small, which supports the optically thin fluctuation approximation, and justifies the good predictions often achieved using the time-averaged RTE.
Ceria powders by homogeneous precipitation technique
International Nuclear Information System (INIS)
Ramanathan, S.; Roy, S.K.
2003-01-01
Formation of precursors for ceria by two homogeneous precipitation reactions - (cerium chloride + urea at 95 °C - called reaction A and cerium chloride + hexamethylenetetramine at 85 °C - called reaction B) - has been studied. The variation of size of the colloidal particles formed and the zeta potential of the suspensions with progress of reactions exhibited similar trends for both the precipitation processes. Particle size increased from 100 to 300 nm with increasing temperature and extent of reaction. The zeta potential was found to decrease with increasing extent of precipitation in the pH range of 5 to 7. Filtration and drying led to agglomeration of the fine particles in case of the precursor from reaction B. The as-formed precursors were crystalline - a basic carbonate in case of reaction A and hydrous oxide in case of reaction B. It was found that nano-crystalline ceria powders (average crystallite size ~10 nm) formed above 400 °C from both these precursors. The agglomerate size (D50) of the precursors and ceria powders formed after calcination at 600 °C varied from 0.7 to 3 μm. Increasing the calcination temperature up to 800 °C increased the crystallite size (to 50 nm). The zeta potential variation with pH and concentration of an anionic dispersant (Calgon) for the ceria powders formed was studied to determine the ideal conditions for suspension stability. Stability was found to be maximum (i.e., the suspensions stable) in the pH range of 3 to 4 or at a Calgon concentration of 0.01 to 0.1 weight percent. (author)
A Modified Homogeneous Balance Method and Its Applications
International Nuclear Information System (INIS)
Liu Chunping
2011-01-01
A modified homogeneous balance method is proposed by improving some key steps in the homogeneous balance method. Bilinear equations of some nonlinear evolution equations are derived by using the modified homogeneous balance method. Generalized Boussinesq equation, KP equation, and mKdV equation are chosen as examples to illustrate our method. This approach is also applicable to a large variety of nonlinear evolution equations. (general)
Economical preparation of extremely homogeneous nuclear accelerator targets
International Nuclear Information System (INIS)
Maier, H.J.
1983-01-01
Techniques for target preparation with a minimum consumption of isotopic material are described. The rotating substrate method, which generates extremely homogeneous targets, is discussed in some detail
Structural changes in heat resisting high nickel alloys during homogenization
International Nuclear Information System (INIS)
Kleshchev, A.S.; Korneeva, N.N.; Yurina, O.M.; Guzej, L.S.
1981-01-01
Effect of homogenization on the structure and technological plasticity of the KhN73MBTYu and KhN62BMKTYu alloys during treatment with pressure is investigated taking into account peculiarities of the phase composition. It is shown that homogenization of the KhN73MBTYu and KhN62BMKTYu alloys increases the technological plasticity. Homogenization efficiency is conditioned by the change of the grain boundaries and carbide morphology as well as by homogeneous distribution of the large γ'-phase [ru
Sewage sludge solubilization by high-pressure homogenization.
Zhang, Yuxuan; Zhang, Panyue; Guo, Jianbin; Ma, Weifang; Fang, Wei; Ma, Boqiang; Xu, Xiangzhe
2013-01-01
The behavior of sludge solubilization using high-pressure homogenization (HPH) treatment was examined by investigating the sludge solid reduction and organics solubilization. The sludge volatile suspended solids (VSS) decreased from 10.58 to 6.67 g/L for the sludge sample with a total solids content (TS) of 1.49% after HPH treatment at a homogenization pressure of 80 MPa with four homogenization cycles; total suspended solids (TSS) correspondingly decreased from 14.26 to 9.91 g/L. About 86.15% of the TSS reduction was attributed to the VSS reduction. The increase of homogenization pressure from 20 to 80 MPa or homogenization cycle number from 1 to 4 was favorable to the sludge organics solubilization, and the protein and polysaccharide solubilization linearly increased with the soluble chemical oxygen demand (SCOD) solubilization. More proteins were solubilized than polysaccharides. The linear relationship between SCOD solubilization and VSS reduction had no significant change under different homogenization pressures, homogenization cycles and sludge solid contents. The SCOD of 1.65 g/L was solubilized for the VSS reduction of 1.00 g/L for the three experimental sludge samples with a TS of 1.00, 1.49 and 2.48% under all HPH operating conditions. The energy efficiency results showed that the HPH treatment at a homogenization pressure of 30 MPa with a single homogenization cycle for the sludge sample with a TS of 2.48% was the most energy efficient.
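The linear relation reported above, about 1.65 g/L of SCOD released per 1.00 g/L of VSS destroyed regardless of homogenization pressure, cycle number or solids content, can be sketched as a one-line predictor. The constant and function names below are illustrative, not from the paper:

```python
# Hedged sketch of the SCOD-vs-VSS relation quoted in the abstract.
# SCOD_PER_VSS and predicted_scod are illustrative names; 1.65 is the
# ratio reported in the abstract, not a fitted model of ours.
SCOD_PER_VSS = 1.65  # g SCOD solubilized per g VSS reduced

def predicted_scod(vss_before, vss_after):
    """Estimate solubilized SCOD (g/L) from a VSS reduction (g/L)."""
    return SCOD_PER_VSS * (vss_before - vss_after)

# With the abstract's figures (VSS falling from 10.58 to 6.67 g/L)
# this predicts roughly 6.5 g/L of solubilized SCOD.
scod = predicted_scod(10.58, 6.67)
```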
A non-asymptotic homogenization theory for periodic electromagnetic structures.
Tsukerman, Igor; Markel, Vadim A
2014-08-08
Homogenization of electromagnetic periodic composites is treated as a two-scale problem and solved by approximating the fields on both scales with eigenmodes that satisfy Maxwell's equations and boundary conditions as accurately as possible. Built into this homogenization methodology is an error indicator whose value characterizes the accuracy of homogenization. The proposed theory allows one to define not only bulk, but also position-dependent material parameters (e.g. in proximity to a physical boundary) and to quantify the trade-off between the accuracy of homogenization and its range of applicability to various illumination conditions.
Mechanized syringe homogenization of human and animal tissues.
Kurien, Biji T; Porter, Andrew C; Patel, Nisha C; Kurono, Sadamu; Matsumoto, Hiroyuki; Scofield, R Hal
2004-06-01
Tissue homogenization is a prerequisite to any fractionation schedule. A plethora of hands-on methods are available to homogenize tissues. Here we report a mechanized method for homogenizing animal and human tissues rapidly and easily. The Bio-Mixer 1200 (manufactured by Innovative Products, Inc., Oklahoma City, OK) utilizes the back-and-forth movement of two motor-driven disposable syringes, connected to each other through a three-way stopcock, to homogenize animal or human tissue. Using this method, we were able to homogenize human or mouse tissues (brain, liver, heart, and salivary glands) in 5 min. From sodium dodecyl sulfate-polyacrylamide gel electrophoresis analysis and a matrix-assisted laser desorption/ionization time-of-flight mass spectrometric enzyme assay for prolidase, we have found that the homogenates obtained were as good as or better than those obtained using a manual glass-on-Teflon (DuPont, Wilmington, DE) homogenization protocol (all-glass tube and Teflon pestle). Use of the Bio-Mixer 1200 to homogenize animal or human tissue precludes the need to stay in the cold room, as is the case with the other hands-on homogenization methods available, in addition to freeing up time for other experiments.
Cross section homogenization analysis for a simplified Candu reactor
International Nuclear Information System (INIS)
Pounders, Justin; Rahnema, Farzad; Mosher, Scott; Serghiuta, Dumitru; Turinsky, Paul; Sarsour, Hisham
2008-01-01
The effect of using zero current (infinite medium) boundary conditions to generate bundle homogenized cross sections for a stylized half-core Candu reactor problem is examined. Homogenized cross sections from infinite-medium lattice calculations are compared with cross sections homogenized using the exact flux from the reference core environment. The impact of these cross section differences is quantified by generating nodal diffusion theory solutions with both sets of cross sections. It is shown that the infinite medium spatial approximation is not negligible, and that ignoring the impact of the heterogeneous core environment on cross section homogenization leads to increased errors, particularly near control elements and the core periphery. (authors)
Pi overlapping ring systems contained in a homogeneous assay: a novel homogeneous assay for antigens
Kidwell, David A.
1993-05-01
A novel immunoassay, Pi overlapping ring systems contained in a homogeneous assay (PORSCHA), is described. This assay relies upon the change in fluorescent spectral properties that pyrene and its derivatives show with varying concentration. Because antibodies and other biomolecules can bind two molecules simultaneously, they can change the local concentration of the molecules that they bind. This concentration change may be detected spectrally as a change in the fluorescence emission wavelength of an appropriately labeled biomolecule. Several tests of PORSCHA have been performed which demonstrate this principle. For example: with streptavidin as the binding biomolecule and a biotin labeled pyrene derivative, the production of the excimer emitting at 470 nm is observed. Without the streptavidin present, only the monomer emitting at 378 and 390 nm is observed. The ratio of monomer to excimer provides the concentration of unlabeled biotin in the sample. Approximately 1 ng/mL of biotin may be detected with this system using a 50 μl sample (2 × 10⁻¹⁶ moles of biotin). The principles behind PORSCHA and the results with the streptavidin/biotin system are discussed, and extensions of the PORSCHA concept to antibodies as the binding partner and to DNA in homogeneous assays are suggested.
On integral representation, relaxation and homogenization for unbounded functionals
International Nuclear Information System (INIS)
Carbone, L.; De Arcangelis, R.
1997-01-01
A theory of integral representation, relaxation and homogenization for some types of variational functionals taking extended real values and possibly being not finite also on large classes of regular functions is presented. Some applications to gradient constrained relaxation and homogenization problems are given
Non-linear waves in heterogeneous elastic rods via homogenization
Quezada de Luna, Manuel
2012-03-01
We consider the propagation of a planar loop on a heterogeneous elastic rod with a periodic microstructure consisting of two alternating homogeneous regions with different material properties. The analysis is carried out using a second-order homogenization theory based on a multiple scale asymptotic expansion. © 2011 Elsevier Ltd. All rights reserved.
Is it possible to homogenize resonant chiral metamaterials ?
DEFF Research Database (Denmark)
Andryieuski, Andrei; Menzel, Christoph; Rockstuhl, Carsten
2010-01-01
Homogenization of metamaterials is very important as it makes possible description in terms of effective parameters. In this contribution we consider the homogenization of chiral metamaterials. We show that for some metamaterials there is an optimal meta-atom size which depends on the coupling...
Large-scale Homogenization of Bulk Materials in Mammoth Silos
Schott, D.L.
2004-01-01
This doctoral thesis concerns the large-scale homogenization of bulk materials in mammoth silos. The objective of this research was to determine the best stacking and reclaiming method for homogenization in mammoth silos. For this purpose a simulation program was developed to estimate the
Homogeneous Buchberger algorithms and Sullivant's computational commutative algebra challenge
DEFF Research Database (Denmark)
Lauritzen, Niels
2005-01-01
We give a variant of the homogeneous Buchberger algorithm for positively graded lattice ideals. Using this algorithm we solve the Sullivant computational commutative algebra challenge.
Verification of homogenization in fast critical assembly analyses
International Nuclear Information System (INIS)
Chiba, Go
2006-01-01
In the present paper, homogenization procedures for fast critical assembly analyses are investigated. Errors caused by homogenizations are evaluated by the exact perturbation theory. In order to obtain reference solutions, three-dimensional plate-wise transport calculations are performed. It is found that the angular neutron flux along plate boundaries has a significant peak in the fission source energy range. To treat this angular dependence accurately, the double-Gaussian Chebyshev angular quadrature set with S24 is applied. It is shown that the difference between the heterogeneous leakage theory and the homogeneous theory is negligible, and that transport cross sections homogenized with neutron flux significantly underestimate neutron leakage. The error in criticality caused by a homogenization is estimated at about 0.1%Δk/kk' in a small fast critical assembly. In addition, the neutron leakage is overestimated by both leakage theories when sodium plates in fuel lattices are voided. (author)
Cosmic homogeneity: a spectroscopic and model-independent measurement
Gonçalves, R. S.; Carvalho, G. C.; Bengaly, C. A. P., Jr.; Carvalho, J. C.; Bernui, A.; Alcaniz, J. S.; Maartens, R.
2018-03-01
Cosmology relies on the Cosmological Principle, i.e. the hypothesis that the Universe is homogeneous and isotropic on large scales. This implies in particular that the counts of galaxies should approach a homogeneous scaling with volume at sufficiently large scales. Testing homogeneity is crucial to obtain a correct interpretation of the physical assumptions underlying the current cosmic acceleration and structure formation of the Universe. In this letter, we use the Baryon Oscillation Spectroscopic Survey to make the first spectroscopic and model-independent measurements of the angular homogeneity scale θh. Applying four statistical estimators, we show that the angular distribution of galaxies in the range 0.46 < z < 0.62 is consistent with homogeneity at large scales, and that θh varies with redshift, indicating a smoother Universe in the past. These results are in agreement with the foundations of the standard cosmological paradigm.
Turbulent Diffusion in Non-Homogeneous Environments
Diez, M.; Redondo, J. M.; Mahjoub, O. B.; Sekula, E.
2012-04-01
Many experimental studies have been devoted to the understanding of non-homogeneous turbulent dynamics. Activity in this area intensified when the basic Kolmogorov self-similar theory was extended to two-dimensional or quasi 2D turbulent flows such as those appearing in the environment, that seem to control mixing [1,2]. The statistical description and the dynamics of these geophysical flows depend strongly on the distribution of long lived organized (coherent) structures. These flows show a complex topology, but may be subdivided in terms of strongly elliptical domains (high vorticity regions), strong hyperbolic domains (deformation cells with high energy condensations) and the background turbulent field of moderate elliptic and hyperbolic characteristics. It is of fundamental importance to investigate the different influence of these topologically diverse regions. Relevant geometrical information of different areas is also given by the maximum fractal dimension, which is related to the energy spectrum of the flow. Using all the available information it is possible to investigate the spatial variability of the horizontal eddy diffusivity K(x,y). This information would be very important when trying to model numerically the behaviour in time of oil spills [3,4]. There is a strong dependence of horizontal eddy diffusivities on the wave Reynolds number as well as on the wind stress measured as the friction velocity from wind profiles measured at the coastline. Natural sea surface oily slicks of diverse origin (plankton, algae or natural emissions and seeps of oil) form complicated structures in the sea surface due to the effects of both multiscale turbulence and Langmuir circulation. It is then possible to use the topological and scaling analysis to discriminate the different physical sea surface processes. We can relate higher order moments of the Lagrangian velocity to effective diffusivity in spite of the need to calibrate the different regions determining the
Pattern and process of biotic homogenization in the New Pangaea.
Baiser, Benjamin; Olden, Julian D; Record, Sydne; Lockwood, Julie L; McKinney, Michael L
2012-12-07
Human activities have reorganized the earth's biota resulting in spatially disparate locales becoming more or less similar in species composition over time through the processes of biotic homogenization and biotic differentiation, respectively. Despite mounting evidence suggesting that this process may be widespread in both aquatic and terrestrial systems, past studies have predominantly focused on single taxonomic groups at a single spatial scale. Furthermore, change in pairwise similarity is itself dependent on two distinct processes, spatial turnover in species composition and changes in gradients of species richness. Most past research has failed to disentangle the effect of these two mechanisms on homogenization patterns. Here, we use recent statistical advances and collate a global database of homogenization studies (20 studies, 50 datasets) to provide the first global investigation of the homogenization process across major faunal and floral groups and elucidate the relative role of changes in species richness and turnover. We found evidence of homogenization (change in similarity ranging from -0.02 to 0.09) across nearly all taxonomic groups, spatial extent and grain sizes. Partitioning of change in pairwise similarity shows that overall change in community similarity is driven by changes in species richness. Our results show that biotic homogenization is truly a global phenomenon and put into question many of the ecological mechanisms invoked in previous studies to explain patterns of homogenization.
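The quantity tracked in such studies is the change in pairwise community similarity over time, with positive values indicating homogenization. A minimal sketch using Jaccard similarity; note the study above additionally partitions this change into species-richness and turnover components, which this sketch omits:

```python
def jaccard(a, b):
    """Jaccard similarity of two species lists."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b)

def similarity_change(site1_t0, site2_t0, site1_t1, site2_t1):
    """Change in pairwise similarity of two sites between two surveys.
    Positive -> biotic homogenization; negative -> differentiation."""
    return jaccard(site1_t1, site2_t1) - jaccard(site1_t0, site2_t0)

# Two hypothetical sites that come to share more species over time:
delta = similarity_change(["a", "b", "c"], ["c", "d", "e"],
                          ["a", "b", "c", "x"], ["b", "c", "x", "e"])
# delta is positive here, i.e. the pair of sites homogenized
```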
Peripheral nerve magnetic stimulation: influence of tissue non-homogeneity
Directory of Open Access Journals (Sweden)
Papazov Sava P
2003-12-01
Background: Peripheral nerves are situated in a highly non-homogeneous environment, including muscles, bones, blood vessels, etc. Time-varying magnetic field stimulation of the median and ulnar nerves in the carpal region is studied, with special consideration of the influence of non-homogeneities. Methods: A detailed three-dimensional finite element model (FEM) of the anatomy of the wrist region was built to assess the induced current distribution under external magnetic stimulation. The electromagnetic field distribution in the non-homogeneous domain was defined as an internal Dirichlet problem using the finite element method. The boundary conditions were obtained by analysis of the vector potential field excited by external current-driven coils. Results: The results include evaluation and graphical representation of the induced current field distribution at various stimulation coil positions. A comparative study for the real non-homogeneous structure with anisotropic conductivities of the tissues and a mock homogeneous medium is also presented. The possibility of achieving selective stimulation of either of the two nerves is assessed. Conclusion: The model developed could be useful in theoretical prediction of the current distribution in the nerves during diagnostic stimulation and therapeutic procedures involving electromagnetic excitation. The errors in applying homogeneous domain modeling rather than real non-homogeneous biological structures are demonstrated. The practical implications of the applied approach are valid for any arbitrary weakly conductive medium.
At-tank Low-Activity Feed Homogeneity Analysis Verification
International Nuclear Information System (INIS)
DOUGLAS, J.G.
2000-01-01
This report evaluates the merit of selecting sodium, aluminum, and cesium-137 as analytes to indicate homogeneity of soluble species in low-activity waste (LAW) feed and recommends possible analytes and physical properties that could serve as rapid screening indicators for LAW feed homogeneity. The three analytes are adequate as screening indicators of soluble species homogeneity for tank waste when a mixing pump is used to thoroughly mix the waste in the waste feed staging tank and when all dissolved species are present at concentrations well below their solubility limits. If either of these conditions is violated, then the three indicators may not be sufficiently chemically representative of other waste constituents to reliably indicate homogeneity in the feed supernatant. Additional homogeneity indicators that should be considered are anions such as fluoride, sulfate, and phosphate, total organic carbon/total inorganic carbon, and total alpha to estimate the transuranic species. Physical property measurements such as gamma profiling, conductivity, specific gravity, and total suspended solids are recommended as possible at-tank methods for indicating homogeneity. Indicators of LAW feed homogeneity are needed to reduce the U.S. Department of Energy, Office of River Protection (ORP) Program's contractual risk by assuring that the waste feed is within the contractual composition and can be supplied to the waste treatment plant within the schedule requirements
An iterative homogenization technique that preserves assembly core exchanges
International Nuclear Information System (INIS)
Mondot, Ph.; Sanchez, R.
2003-01-01
A new iterative homogenization procedure for reactor core calculations is proposed that requires iterative transport assembly and diffusion core calculations. At each iteration the transport solution of every assembly type is used to produce homogenized cross sections for the core calculation. The converged solution gives assembly fine multigroup transport fluxes that preserve macro-group assembly exchanges in the core. This homogenization avoids the periodic lattice-leakage model approximation and gives detailed assembly transport fluxes without need of an approximated flux reconstruction. Preliminary results are given for a one-dimensional core model. (authors)
Hydrogen storage materials and method of making by dry homogenation
Jensen, Craig M.; Zidan, Ragaiy A.
2002-01-01
Dry homogenized metal hydrides, in particular aluminum hydride compounds, as a material for reversible hydrogen storage is provided. The reversible hydrogen storage material comprises a dry homogenized material having transition metal catalytic sites on a metal aluminum hydride compound, or mixtures of metal aluminum hydride compounds. A method of making such reversible hydrogen storage materials by dry doping is also provided and comprises the steps of dry homogenizing metal hydrides by mechanical mixing, such as by crushing or ball milling a powder of a metal aluminum hydride with a transition metal catalyst. In another aspect of the invention, a method of powering a vehicle apparatus with the reversible hydrogen storage material is provided.
Homogenization and structural topology optimization theory, practice and software
Hassani, Behrooz
1999-01-01
Structural topology optimization is a fast growing field that is finding numerous applications in automotive, aerospace and mechanical design processes. Homogenization is a mathematical theory with applications in several engineering problems that are governed by partial differential equations with rapidly oscillating coefficients. Homogenization and Structural Topology Optimization brings the two concepts together and successfully bridges the previously overlooked gap between the mathematical theory and the practical implementation of the homogenization method. The book is presented in a unique self-teaching style that includes numerous illustrative examples, figures and detailed explanations of concepts. The text is divided into three parts, which maintains the book's reader-friendly appeal.
Bridging heterogeneous and homogeneous catalysis concepts, strategies, and applications
Li, Can
2014-01-01
This unique handbook fills the gap in the market for an up-to-date work that links both homogeneous catalysis applied to organic reactions and catalytic reactions on surfaces of heterogeneous catalysts.
Time travel in the homogeneous Som-Raychaudhuri Universe
International Nuclear Information System (INIS)
Paiva, F.M.; Reboucas, M.J.; Teixeira, A.F.F.
1987-01-01
Properties of the rotating Som-Raychaudhuri homogeneous space-time are investigated: time-like and null geodesics, causality features, horizons and invariant characterization. An integral representation of its five isometries is also discussed. (author) [pt
[Methods for enzymatic determination of triglycerides in liver homogenates].
Höhn, H; Gartzke, J; Burck, D
1987-10-01
An enzymatic method is described for the determination of triacylglycerols in liver homogenate. In contrast to usual methods, higher reliability and selectivity are achieved by omitting the extraction step.
A convenient procedure for magnetic field homogeneity evaluation
International Nuclear Information System (INIS)
Teles, J; Garrido, C E; Tannus, A
2004-01-01
In many areas of research that utilize magnetic fields in their studies, it is important to obtain fields with a spatial distribution as homogeneous as possible. A procedure usually utilized to evaluate and to optimize field homogeneity is the expansion of the measured field in spherical harmonic components. In addition to the methods proposed in the literature, we present a more convenient procedure for evaluation of field homogeneity inside a spherical volume. The procedure uses the orthogonality property of the spherical harmonics to find the field variance. It is shown that the total field variance is equal to the sum of the individual variances of each field component in the spherical harmonic expansion. Besides the advantage of the linear behaviour of the individual variances, the field variance and standard deviation are the best parameters for conveying global information about field homogeneity.
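The additivity property stated above, total field variance equal to the sum of the variances of the individual harmonic components, is a direct consequence of orthogonality. A minimal sketch assuming real orthonormal spherical harmonics with coefficients indexed by (l, m); the names are illustrative, not the authors' code:

```python
import math

def field_variance(coeffs):
    """Variance of a field over a sphere from its expansion in real
    orthonormal spherical harmonics: coeffs maps (l, m) -> c_lm.
    The (0, 0) term sets the spherical mean and does not contribute;
    by orthogonality the total variance is the sum of the per-component
    variances c_lm**2 / (4*pi) over all l >= 1."""
    return sum(c * c for (l, m), c in coeffs.items() if l >= 1) / (4 * math.pi)

# Only the l >= 1 coefficients enter the variance:
var = field_variance({(0, 0): 5.0, (1, 0): 2.0, (2, 1): 1.0})
```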
Homogeneity Study of UO2 Pellet Density for Quality Control
International Nuclear Information System (INIS)
Moon, Je Seon; Park, Chang Je; Kang, Kwon Ho; Moon, Heung Soo; Song, Kee Chan
2005-01-01
A homogeneity study has been performed with various densities of UO2 pellets as part of quality control work. The densities of the UO2 pellets are distributed randomly due to several factors such as the milling conditions and sintering environments, etc. After sintering, a total of fourteen bottles were chosen for UO2 density measurement, each bottle containing three samples. With these bottles, the between-bottle and within-bottle homogeneity were investigated via analysis of variance (ANOVA). From the results of ANOVA, the calculated F-value is used to determine whether homogeneity is accepted or rejected at a given confidence level. All the homogeneity checks followed ISO Guide 35
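The between-bottle versus within-bottle comparison described above reduces to a one-way ANOVA F statistic. A minimal stdlib sketch, with the data assumed to arrive as one list of density measurements per bottle; the function name is illustrative, not the authors' code:

```python
def one_way_anova_f(groups):
    """F statistic comparing between-group to within-group variation.
    groups: one list of measurements per bottle (>= 2 per bottle)."""
    k = len(groups)                      # number of bottles
    n = sum(len(g) for g in groups)      # total number of measurements
    grand = sum(sum(g) for g in groups) / n
    means = [sum(g) / len(g) for g in groups]
    ss_between = sum(len(g) * (m - grand) ** 2 for g, m in zip(groups, means))
    ss_within = sum(sum((x - m) ** 2 for x in g) for g, m in zip(groups, means))
    # Mean squares; F is judged against an F(k-1, n-k) critical value
    return (ss_between / (k - 1)) / (ss_within / (n - k))
```

An F value exceeding the F(k-1, n-k) quantile at the chosen confidence level rejects between-bottle homogeneity.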
Tests for homogeneity for multiple 2 x 2 contingency tables
International Nuclear Information System (INIS)
Carr, D.B.
1986-01-01
Frequently data are described by 2 x 2 contingency tables. For example, each 2 x 2 table arises from two dichotomous classifications such as control/treated and respond/did not respond. Multiple 2 x 2 tables result from stratifying the observational units on the basis of other characteristics. For example, stratifying by sex produces separate 2 x 2 tables for males and females. From each table a measure of difference between the response rates for the control and the treated groups is computed. The researcher usually wants to know if the response-rate difference is zero for each table. If the tables are homogeneous, the researcher can generalize from a statement concerning an average to a statement concerning each table. If tables are not homogeneous, homogeneous subsets of the tables should be described separately. This paper presents tests for homogeneity and illustrates their use. 11 refs., 6 tabs
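The abstract does not name a particular statistic; one standard choice is a Cochran-type chi-square test that compares the per-table response-rate differences with their precision-weighted mean. A sketch under that assumption (the table layout and function name are illustrative):

```python
def homogeneity_chi2(tables):
    """Cochran-type homogeneity statistic for risk differences across
    2x2 tables. Each table is (a, b, c, d): a/b = responders and
    non-responders in the treated group, c/d = same for the controls.
    Assumes no table has an all-or-none (zero-variance) response.
    Returns (Q, df); under homogeneity Q ~ chi-square with df = k - 1."""
    diffs, weights = [], []
    for a, b, c, d in tables:
        n1, n2 = a + b, c + d
        p1, p2 = a / n1, c / n2
        var = p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2
        diffs.append(p1 - p2)
        weights.append(1.0 / var)
    mean = sum(w * x for w, x in zip(weights, diffs)) / sum(weights)
    q = sum(w * (x - mean) ** 2 for w, x in zip(weights, diffs))
    return q, len(tables) - 1
```

Two identical strata give Q = 0 (perfect homogeneity); strata with opposing response-rate differences drive Q up toward rejection.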
Engineered CHO cells for production of diverse, homogeneous glycoproteins
DEFF Research Database (Denmark)
Yang, Zhang; Wang, Shengjun; Halim, Adnan
2015-01-01
Production of glycoprotein therapeutics in Chinese hamster ovary (CHO) cells is limited by the cells' generic capacity for N-glycosylation, and production of glycoproteins with desirable homogeneous glycoforms remains a challenge. We conducted a comprehensive knockout screen of glycosyltransferas...
Homogenization of aligned “fuzzy fiber” composites
Chatzigeorgiou, George; Efendiev, Yalchin; Lagoudas, Dimitris C.
2011-01-01
The aim of this work is to study composites in which carbon fibers coated with radially aligned carbon nanotubes are embedded in a matrix. The effective properties of these composites are identified using the asymptotic expansion homogenization
Jordan's algebra of a facially homogeneous autopolar cone
International Nuclear Information System (INIS)
Bellissard, Jean; Iochum, Bruno
1979-01-01
It is shown that a Jordan-Banach algebra with predual may be canonically associated with a facially homogeneous autopolar cone. This construction generalizes the case where a trace vector exists in the cone [fr
Notes on a homogeneous reactor project; Idees sur un projet de reacteur homogene
Energy Technology Data Exchange (ETDEWEB)
Benveniste, J; Bernot, J; Eidelman, D; Grenon, M; Portes, L; Raspaud, G; Tachon, J; Vendryes, G [Commissariat a l' Energie Atomique, Saclay (France). Centre d' Etudes Nucleaires; Berthod, L; Cohen de Lara, G; Delachanal, M; Fontanet, P; Halbronn, G [Societe Grenobloise d' Etudes et d' Applications Hydrauliques, 38 (France)
1958-07-01
An attempt has been made to develop certain ideas concerning homogeneous reactors. The project under consideration is based on the simultaneous use of a suspension of uranium dispersed in heavy or light water and of boiling in the reactor for heat extraction. However, the studies of suspensions and of boiling are relatively independent and can also be developed for reactors of different types using one or the other. Our aim is a minimum investment in fissile material; for this we propose to extract the steam directly from the core and to make use of a cyclone to accelerate this extraction; a cyclone-type circulation creating a field of increasing tangential velocities of the fluid towards the axis causes the droplets of vapour to accelerate towards the axial vortex in which they are collected; the steam output is then evacuated to the external heat utilisation system, for example an exchanger of the condenser-boiler type. The input speed of water into the reactor being one of the important parameters in the running of the pile, a spiral supply input chamber is used, allowing this speed to be regulated in amount and direction. (author) [fr
Homogenization of aligned “fuzzy fiber” composites
Chatzigeorgiou, George
2011-09-01
The aim of this work is to study composites in which carbon fibers coated with radially aligned carbon nanotubes are embedded in a matrix. The effective properties of these composites are identified using the asymptotic expansion homogenization method in two steps. Homogenization is performed in different coordinate systems, the cylindrical and the Cartesian, and a numerical example is presented. © 2011 Elsevier Ltd. All rights reserved.
The Perron-Frobenius theorem for multi-homogeneous mappings
Gautier, Antoine; Tudisco, Francesco; Hein, Matthias
2018-01-01
The Perron-Frobenius theory for nonnegative matrices has been generalized to order-preserving homogeneous mappings on a cone and more recently to nonnegative multilinear forms. We unify both approaches by introducing the concept of order-preserving multi-homogeneous mappings, their associated nonlinear spectral problems and spectral radii. We show several Perron-Frobenius type results for these mappings addressing existence, uniqueness and maximality of nonnegative and positive eigenpairs. We...
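The matrix special case underlying this theory can be made concrete: for an entrywise positive matrix, power iteration converges to the Perron eigenpair whose existence and positivity the classical theorem guarantees. A minimal sketch (the matrix and tolerances are illustrative, not from the paper):

```python
import numpy as np

def perron_power_iteration(A, tol=1e-12, max_iter=1000):
    """Approximate the Perron eigenpair of an entrywise positive matrix A by
    power iteration; Perron-Frobenius guarantees a unique positive
    eigenvector associated with the spectral radius."""
    x = np.ones(A.shape[0])
    for _ in range(max_iter):
        y = A @ x
        lam = np.linalg.norm(y)   # Rayleigh-type estimate of the spectral radius
        y /= lam
        if np.linalg.norm(y - x) < tol:
            break
        x = y
    return lam, y

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
lam, v = perron_power_iteration(A)
# spectral radius is (5 + sqrt(5))/2, and v is entrywise positive
```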
Homogeneity in Luxury Fashion Consumption: an Exploration of Arab Women
Marciniak, R.; Gad Mohsen, Marwa
2014-01-01
Consumer perceptions and consumer motivations are complex and, whilst it is acknowledged within the literature that heterogeneity exists, homogeneous models dominate consumer behaviour research. The primary purpose of this paper is to explore the extent to which Arab women are a homogeneous group of consumers in regard to perceptions and motivations to consume luxury fashion goods. In particular, the paper seeks to present a critical review of luxury consumption frameworks. As part of the ...
Matrix-dependent multigrid-homogenization for diffusion problems
Energy Technology Data Exchange (ETDEWEB)
Knapek, S. [Institut fuer Informatik tu Muenchen (Germany)
1996-12-31
We present a method to approximately determine the effective diffusion coefficient on the coarse scale level of problems with strongly varying or discontinuous diffusion coefficients. It is based on techniques used also in multigrid, like Dendy's matrix-dependent prolongations and the construction of coarse grid operators by means of the Galerkin approximation. In numerical experiments, we compare our multigrid-homogenization method with homogenization, renormalization and averaging approaches.
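The Galerkin coarse-grid construction mentioned above can be sketched in one dimension. This is the plain RAP product with linear interpolation, not Dendy's full matrix-dependent transfer operators, and all coefficients are invented for illustration:

```python
import numpy as np

def galerkin_coarse_operator(A, P):
    """Galerkin (RAP) coarse-grid operator A_c = P^T A P, with the
    restriction chosen as the transpose of the prolongation P."""
    return P.T @ A @ P

# Fine-grid operator: 1D diffusion -(k u')' on 7 interior nodes with a
# strongly varying edge coefficient k (a high-diffusivity inclusion).
nf = 7
k = np.array([1.0, 1.0, 100.0, 100.0, 100.0, 1.0, 1.0, 1.0])  # nf + 1 edges
A = np.zeros((nf, nf))
for i in range(nf):
    A[i, i] = k[i] + k[i + 1]
    if i > 0:
        A[i, i - 1] = -k[i]
    if i < nf - 1:
        A[i, i + 1] = -k[i + 1]

# Linear interpolation from 3 coarse nodes placed at fine nodes 1, 3, 5.
P = np.zeros((nf, 3))
for j in range(3):
    P[2 * j + 1, j] = 1.0          # inject coarse values at coincident nodes
P[0, 0] = P[2, 0] = P[2, 1] = 0.5  # in-between fine nodes average neighbours
P[4, 1] = P[4, 2] = P[6, 2] = 0.5

A_c = galerkin_coarse_operator(A, P)  # 3x3 symmetric tridiagonal coarse operator
```

The entries of `A_c` encode an effective coarse-scale diffusion coefficient induced by the fine-scale variation of `k`.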
Spray structure as generated under homogeneous flash boiling nucleation regime
International Nuclear Information System (INIS)
Levy, M.; Levy, Y.; Sher, E.
2014-01-01
We show the effect of the initial pressure and temperature on the spatial distribution of droplet sizes and their velocity profile inside a spray cloud that is generated by a flash boiling mechanism under the homogeneous nucleation regime. We used TSI's Phase Doppler Particle Analyzer (PDPA) to characterize the spray. We conclude that the homogeneous nucleation process is strongly affected by the initial liquid temperature, while the initial pressure has only a minor effect. The spray shape is not affected by temperature or pressure under the homogeneous nucleation regime; the only visible effect is in the spray opacity. Finally, homogeneous nucleation may be easily achieved with a simple atomizer construction, and is thus potentially suitable for fuel injection systems in combustors and engines. - Highlights: • We study the characteristics of a spray that is generated by a flash boiling process. • In this study, the flash boiling process occurs under the homogeneous nucleation regime. • We used a Phase Doppler Particle Analyzer (PDPA) to characterize the spray. • The SMD has been found to be strongly affected by the initial liquid temperature. • Homogeneous nucleation may be easily achieved by using a simple atomizer unit
Applications of a systematic homogenization theory for nodal diffusion methods
International Nuclear Information System (INIS)
Zhang, Hong-bin; Dorning, J.J.
1992-01-01
The authors recently have developed a self-consistent and systematic lattice cell and fuel bundle homogenization theory, based on a multiple-spatial-scales asymptotic expansion of the transport equation in the ratio of the mean free path to the reactor's characteristic dimension, for use with nodal diffusion methods. The mathematical development leads naturally to self-consistent analytical expressions for homogenized diffusion coefficients and cross sections and for flux discontinuity factors to be used in nodal diffusion calculations. The expressions for the homogenized nuclear parameters that follow from the systematic homogenization theory (SHT) are different from those for the traditional flux- and volume-weighted (FVW) parameters. The calculations summarized here show that the systematic homogenization theory developed recently for nodal diffusion methods yields accurate values for k-eff and assembly powers even when compared with the results of a fine-mesh transport calculation. Thus, it provides a practical alternative to equivalence theory and GET (Ref. 3) and to simplified equivalence theory, which requires auxiliary fine-mesh calculations for assemblies embedded in a typical environment to determine the discontinuity factors and the equivalent diffusion coefficient for a homogenized assembly
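For contrast with the systematic theory discussed above, the traditional flux-and-volume-weighted (FVW) parameters mentioned in the abstract are straightforward to compute: the homogenized cross section is the reaction-rate-preserving average over the sub-regions. A minimal sketch with made-up region data:

```python
import numpy as np

def fvw_homogenize(sigma, phi, vol):
    """Traditional flux-and-volume-weighted (FVW) homogenized cross section:
    sum(sigma_i * phi_i * V_i) / sum(phi_i * V_i), which preserves the
    region-integrated reaction rate for the given flux."""
    w = phi * vol
    return np.sum(sigma * w) / np.sum(w)

sigma = np.array([0.10, 0.50, 0.08])  # macroscopic cross sections per sub-region (1/cm)
phi   = np.array([1.0, 0.6, 1.1])     # average scalar flux per sub-region (arbitrary units)
vol   = np.array([2.0, 1.0, 2.0])     # sub-region volumes (cm^3)
sig_hom = fvw_homogenize(sigma, phi, vol)
# a flux-weighted average lying between the smallest and largest sigma
```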
Homogenization patterns of the world’s freshwater fish faunas
Villéger, Sébastien; Blanchet, Simon; Beauchard, Olivier; Oberdorff, Thierry; Brosse, Sébastien
2011-01-01
The world is currently undergoing an unprecedented decline in biodiversity, which is mainly attributable to human activities. For instance, nonnative species introduction, combined with the extirpation of native species, affects biodiversity patterns, notably by increasing the similarity among species assemblages. This biodiversity change, called taxonomic homogenization, has rarely been assessed at the world scale. Here, we fill this gap by assessing the current homogenization status of one of the most diverse vertebrate groups (i.e., freshwater fishes) at global and regional scales. We demonstrate that current homogenization of the freshwater fish faunas is still low at the world scale (0.5%) but reaches substantial levels (up to 10%) in some highly invaded river basins from the Nearctic and Palearctic realms. In these realms experiencing high changes, nonnative species introductions rather than native species extirpations drive taxonomic homogenization. Our results suggest that the “Homogocene era” is not yet the case for freshwater fish fauna at the worldwide scale. However, the distressingly high level of homogenization noted for some biogeographical realms stresses the need for further understanding of the ecological consequences of homogenization processes. PMID:22025692
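The mechanism described, introductions and extirpations raising between-basin similarity, can be illustrated with a toy β-diversity calculation; the species names and the choice of the Jaccard index are purely illustrative, not the paper's metric:

```python
def jaccard_dissimilarity(a, b):
    """Jaccard dissimilarity between two species assemblages:
    1 - |A ∩ B| / |A ∪ B|."""
    a, b = set(a), set(b)
    return 1.0 - len(a & b) / len(a | b)

historical_1 = {"sp1", "sp2", "sp3"}
historical_2 = {"sp3", "sp4", "sp5"}
# Both basins gain the same widely introduced species; one loses a native.
current_1 = (historical_1 | {"carp"}) - {"sp1"}
current_2 = historical_2 | {"carp"}

d_hist = jaccard_dissimilarity(historical_1, historical_2)
d_curr = jaccard_dissimilarity(current_1, current_2)
homogenization = d_hist - d_curr  # positive => the faunas became more similar
```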
Homogenization models for thin rigid structured surfaces and films.
Marigo, Jean-Jacques; Maurel, Agnès
2016-07-01
A homogenization method for thin microstructured surfaces and films is presented. In both cases, sound hard materials are considered, associated with Neumann boundary conditions and the wave equation in the time domain is examined. For a structured surface, a boundary condition is obtained on an equivalent flat wall, which links the acoustic velocity to its normal and tangential derivatives (of the Myers type). For a structured film, jump conditions are obtained for the acoustic pressure and the normal velocity across an equivalent interface (of the Ventcels type). This interface homogenization is based on a matched asymptotic expansion technique, and differs slightly from the classical homogenization, which is known to fail for small structuration thicknesses. In order to get insight into what causes this failure, a two-step homogenization is proposed, mixing classical homogenization and matched asymptotic expansion. Results of the two homogenizations are analyzed in light of the associated elementary problems, which correspond to problems of fluid mechanics, namely, potential flows around rigid obstacles.
Directory of Open Access Journals (Sweden)
Olaniyi Samuel Iyiola
2014-09-01
In this paper, we obtain analytical solutions of the homogeneous time-fractional Gardner equation and of non-homogeneous time-fractional models (including the Buckmaster equation) using the q-Homotopy Analysis Method (q-HAM). Our work displays the elegant nature of the application of q-HAM not only to homogeneous non-linear fractional differential equations but also to non-homogeneous ones. The presence of the auxiliary parameter h helps to obtain better approximations comparable to exact solutions. The fraction factor in this method gives it an edge over other existing analytical methods for non-linear differential equations. Comparisons are made where exact solutions to these models exist. The analysis shows that our analytical solutions converge very rapidly to the exact solutions.
Toward whole-core neutron transport without spatial homogenization
International Nuclear Information System (INIS)
Lewis, E. E.
2009-01-01
Full text of publication follows: A long-term goal of computational reactor physics is the deterministic analysis of power reactor core neutronics without incurring significant discretization errors in the energy, spatial or angular variables. In principle, given large enough parallel configurations with unlimited CPU time and memory, this goal could be achieved using existing three-dimensional neutron transport codes. In practice, however, solving the Boltzmann equation for neutrons over the six-dimensional phase space is made intractable by the nature of neutron cross sections and the complexity and size of power reactor cores. Tens of thousands of energy groups would be required for faithful cross-section representation. Likewise, the numerous material interfaces present in power reactor lattices require exceedingly fine spatial mesh structures; these ubiquitous interfaces preclude effective implementation of adaptive-grid, mesh-less methods and related techniques that have been applied so successfully in other areas of engineering science. These challenges notwithstanding, substantial progress continues in the pursuit of more robust deterministic methods for whole-core neutronics analysis. This paper examines the progress over roughly the last decade, emphasizing the space-angle variables and the quest to eliminate errors attributable to spatial homogenization. As prologue we briefly assess the 1990s methods used in light water reactor analysis and review the lessons learned from the C5G7 benchmark exercises, which originated in 1999 to appraise the ability of transport codes to perform core calculations without homogenization. We proceed by examining progress over the last decade, much of which falls into three areas. These may be broadly characterized as reduced homogenization, dynamic homogenization and planar-axial synthesis. In the first, homogenization in three-dimensional calculations is reduced from the fuel assembly to the pin-cell level. In the second
Spatial homogenization method based on the inverse problem
International Nuclear Information System (INIS)
Tóta, Ádám; Makai, Mihály
2015-01-01
Highlights: • We derive a spatial homogenization method in slab and cylindrical geometries. • The fluxes and the currents on the boundary are preserved. • The reaction rates and the integral of the fluxes are preserved. • We present verification computations utilizing two- and four-energy groups. - Abstract: We present a method for deriving homogeneous multi-group cross sections to replace a heterogeneous region's multi-group cross sections, provided that the fluxes, the currents on the external boundary, the reaction rates and the integral of the fluxes are preserved. We consider one-dimensional geometries: a symmetric slab and a homogeneous cylinder. Assuming that the boundary fluxes are given, two response matrices (RMs) can be defined concerning the current and the flux integral. The first one derives the boundary currents from the boundary fluxes, while the second one derives the flux integrals from the boundary fluxes. Further RMs can be defined that connect reaction rates to the boundary fluxes. Assuming that these matrices are known, we present formulae that reconstruct the multi-group diffusion cross-section matrix, the diffusion coefficients and the reaction cross sections in the case of one-dimensional (1D) homogeneous regions. We apply these formulae to 1D heterogeneous regions and thus obtain a homogenization method. This method produces an equivalent homogeneous material such that the fluxes and the currents on the external boundary, the reaction rates and the integral of the fluxes are preserved for any boundary fluxes. We carry out the exact derivations in 1D slab and cylindrical geometries. Verification computations for the presented homogenization method were performed using two- and four-group material cross sections, both in a slab and in a cylindrical geometry
Land-use intensification causes multitrophic homogenization of grassland communities.
Gossner, Martin M; Lewinsohn, Thomas M; Kahl, Tiemo; Grassein, Fabrice; Boch, Steffen; Prati, Daniel; Birkhofer, Klaus; Renner, Swen C; Sikorski, Johannes; Wubet, Tesfaye; Arndt, Hartmut; Baumgartner, Vanessa; Blaser, Stefan; Blüthgen, Nico; Börschig, Carmen; Buscot, Francois; Diekötter, Tim; Jorge, Leonardo Ré; Jung, Kirsten; Keyel, Alexander C; Klein, Alexandra-Maria; Klemmer, Sandra; Krauss, Jochen; Lange, Markus; Müller, Jörg; Overmann, Jörg; Pašalić, Esther; Penone, Caterina; Perović, David J; Purschke, Oliver; Schall, Peter; Socher, Stephanie A; Sonnemann, Ilja; Tschapka, Marco; Tscharntke, Teja; Türke, Manfred; Venter, Paul Christiaan; Weiner, Christiane N; Werner, Michael; Wolters, Volkmar; Wurst, Susanne; Westphal, Catrin; Fischer, Markus; Weisser, Wolfgang W; Allan, Eric
2016-12-08
Land-use intensification is a major driver of biodiversity loss. Alongside reductions in local species diversity, biotic homogenization at larger spatial scales is of great concern for conservation. Biotic homogenization means a decrease in β-diversity (the compositional dissimilarity between sites). Most studies have investigated losses in local (α)-diversity and neglected biodiversity loss at larger spatial scales. Studies addressing β-diversity have focused on single or a few organism groups (for example, ref. 4), and it is thus unknown whether land-use intensification homogenizes communities at different trophic levels, above- and belowground. Here we show that even moderate increases in local land-use intensity (LUI) cause biotic homogenization across microbial, plant and animal groups, both above- and belowground, and that this is largely independent of changes in α-diversity. We analysed a unique grassland biodiversity dataset, with abundances of more than 4,000 species belonging to 12 trophic groups. LUI, and, in particular, high mowing intensity, had consistent effects on β-diversity across groups, causing a homogenization of soil microbial, fungal pathogen, plant and arthropod communities. These effects were nonlinear and the strongest declines in β-diversity occurred in the transition from extensively managed to intermediate intensity grassland. LUI tended to reduce local α-diversity in aboveground groups, whereas the α-diversity increased in belowground groups. Correlations between the β-diversity of different groups, particularly between plants and their consumers, became weaker at high LUI. This suggests a loss of specialist species and is further evidence for biotic homogenization. The consistently negative effects of LUI on landscape-scale biodiversity underscore the high value of extensively managed grasslands for conserving multitrophic biodiversity and ecosystem service provision. Indeed, biotic homogenization rather than local diversity
Central Andean temperature and precipitation measurements and its homogenization
Hunziker, Stefan; Gubler, Stefanie
2015-04-01
Observation of climatological parameters and the homogenization of these time series have a well-established history in western countries. This is not the case for many other countries, such as Bolivia and Peru. In Bolivia and Peru, the organization of measurements, the quality of measurement equipment, equipment maintenance, the training of staff and data management are fundamentally different from the western standard. The data need special attention, because many problems are not detected by standard quality control procedures. Information about the weather stations, best obtained by station visits, is very beneficial. If the cause of a problem is known, some of the data may be corrected. In this study, cases of typical problems and measurement errors are demonstrated. Much research on homogenization techniques (up to the subdaily scale) has been carried out in recent years. However, it has used data sets of the quality of western station networks, and little is known about the performance of homogenization methods on data sets from countries such as Bolivia and Peru. HOMER (HOMogenizaton softwarE in R) is one of the most recent and widely used homogenization software packages. Its performance is tested on Peruvian-like data sourced from Swiss stations (similar station density and metadata availability). The Swiss station network is a suitable test bed, because climate gradients are strong and the terrain is complex, as is also found in the Central Andes. On the other hand, the Swiss station network is dense, and long time series and extensive metadata are available. By subsampling the station network and omitting the metadata, the conditions of a Peruvian test region are mimicked. Results are compared to a dataset homogenized by THOMAS (Tool for Homogenization of Monthly Data Series), the homogenization tool used by MeteoSwiss.
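A standard building block of homogenization tools such as those compared here is breakpoint detection. A minimal sketch of the classical standard normal homogeneity test (SNHT) statistic applied to a synthetic series with an artificial shift (all numbers illustrative):

```python
import numpy as np

def snht_statistic(x):
    """Standard Normal Homogeneity Test (SNHT) statistic T(k) for every
    split point k of a standardized series; the maximum of T flags the
    most likely breakpoint (e.g. a station relocation)."""
    x = np.asarray(x, dtype=float)
    z = (x - x.mean()) / x.std()
    n = len(z)
    return np.array([k * z[:k].mean() ** 2 + (n - k) * z[k:].mean() ** 2
                     for k in range(1, n)])

rng = np.random.default_rng(0)
series = rng.normal(20.0, 0.5, 60)   # synthetic monthly temperatures (°C)
series[30:] += 1.5                   # artificial inhomogeneity after index 30
T = snht_statistic(series)
k_break = int(np.argmax(T)) + 1      # estimated breakpoint position
```

In practice the statistic is applied to difference series against reference stations and compared with critical values; this sketch only shows the mechanics.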
Homogenization of Large-Scale Movement Models in Ecology
Garlick, M.J.; Powell, J.A.; Hooten, M.B.; McFarlane, L.R.
2011-01-01
A difficulty in using diffusion models to predict large scale animal population dispersal is that individuals move differently based on local information (as opposed to gradients) in differing habitat types. This can be accommodated by using ecological diffusion. However, real environments are often spatially complex, limiting application of a direct approach. Homogenization for partial differential equations has long been applied to Fickian diffusion (in which average individual movement is organized along gradients of habitat and population density). We derive a homogenization procedure for ecological diffusion and apply it to a simple model for chronic wasting disease in mule deer. Homogenization allows us to determine the impact of small scale (10-100 m) habitat variability on large scale (10-100 km) movement. The procedure generates asymptotic equations for solutions on the large scale with parameters defined by small-scale variation. The simplicity of this homogenization procedure is striking when compared to the multi-dimensional homogenization procedure for Fickian diffusion, and the method will be equally straightforward for more complex models. © 2010 Society for Mathematical Biology.
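The flavour of such results can be sketched in one spatial dimension with the classical harmonic-mean formula for a rapidly varying diffusion coefficient; the habitat values below are invented for illustration, and the paper's ecological-diffusion derivation differs in detail:

```python
import numpy as np

def harmonic_mean_coefficient(D, weights=None):
    """Homogenized 1D diffusion coefficient for a rapidly varying
    coefficient: the (weighted) harmonic mean over the small-scale cells,
    the classical one-dimensional homogenization result."""
    D = np.asarray(D, dtype=float)
    w = np.ones_like(D) if weights is None else np.asarray(weights, float)
    w = w / w.sum()
    return 1.0 / np.sum(w / D)

# Habitat mosaic: fast movement in open terrain, slow in forest patches.
D_patch = np.array([100.0, 5.0])  # motility per habitat type (illustrative units)
frac    = np.array([0.5, 0.5])    # areal fractions of each habitat
D_eff = harmonic_mean_coefficient(D_patch, frac)
# the slow habitat dominates: D_eff is far below the arithmetic mean
```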
Automatic Control of the Concrete Mixture Homogeneity in Cycling Mixers
Anatoly Fedorovich, Tikhonov; Drozdov, Anatoly
2018-03-01
The article describes the factors affecting concrete mixture quality that are related to the moisture content of aggregates, since the effectiveness of concrete mixture production is largely determined by the availability of quality-management tools at all stages of the technological process. It is established that unaccounted-for moisture in the aggregates adversely affects the homogeneity of the concrete mixture and, accordingly, the strength of building structures. A new control method and an automatic control system for concrete mixture homogeneity during the mixing of components are proposed, in which the tasks of producing the concrete mixture are performed by an automatic control system for the kneading-and-mixing machinery with operational automatic control of homogeneity. Theoretical underpinnings of the control of mixture homogeneity are presented, relating homogeneity to changes in the frequency of vibrodynamic vibrations of the mixer body. The structure of the technical means of the automatic control system for regulating the water supply is determined by the change in concrete mixture homogeneity during continuous mixing of the components. The following technical means for implementing automatic control have been chosen: vibro-acoustic sensors, remote terminal units, electro-pneumatic control actuators, etc. To identify the quality indicator of automatic control, the system is described by a structural flowchart with transfer functions that determine the ACS operation in the transient dynamic mode.
Higher-order asymptotic homogenization of periodic materials with low scale separation
Ameen, M.M.; Peerlings, R.H.J.; Geers, M.G.D
2016-01-01
In this work, we investigate the limits of classical homogenization theories pertaining to homogenization of periodic linear elastic composite materials at low scale separations and demonstrate the effectiveness of higher-order periodic homogenization in alleviating this limitation. Classical
Homogeneous nucleation in 4He: A corresponding-states analysis
International Nuclear Information System (INIS)
Sinha, D.N.; Semura, J.S.; Brodie, L.C.
1982-01-01
We report homogeneous-nucleation-temperature measurements in liquid ⁴He over a bath-temperature range beginning at 2.31 K, in a region far from the critical point. A simple empirical form is presented for estimating the homogeneous nucleation temperature of any liquid with a spherically symmetric interatomic potential. The ⁴He data are compared with nucleation data for Ar, Kr, Xe, and H; theoretical predictions for ³He are given in terms of reduced quantities. It is shown that the nucleation data for both quantum and classical liquids obey a quantum law of corresponding states (QCS). On the basis of this QCS analysis, predictions of homogeneous nucleation temperatures are made for hydrogen isotopes such as HD, DT, HT, and T₂
Radiotracer application in determining changes in cement mix homogeneity
International Nuclear Information System (INIS)
Breda, M.
1979-01-01
A small amount of cement labelled with ²⁴Na is added to the concrete mix and the relative activity of the mix is measured using a scintillation detector in preset points at different time intervals of the mixing process. The detector picks up information from a volume of 10 to 15 litres. The values characterize the degree of homogeneity of the cement component in the mix. Mathematical statistics methods are used for assessing mixing or the homogeneity changes. The technique is quick and simple and is used to advantage in determining the effect of the duration and method of transport of the cement mix on its homogeneity, and in monitoring the mixing process and determining the minimum mixing time for all types of concrete mix. (M.S.)
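A common statistical index for this kind of mixing assessment is the coefficient of variation of the tracer counts across the sampling points, which decreases as the mix approaches homogeneity. A minimal sketch with invented count data (not the paper's measurements):

```python
import numpy as np

def mixing_cv(counts):
    """Coefficient of variation of relative tracer activity across
    sampling points; lower values indicate a more uniform mix."""
    counts = np.asarray(counts, dtype=float)
    return counts.std(ddof=1) / counts.mean()

early = np.array([820, 1450, 610, 1980, 1120])    # counts/s after short mixing (illustrative)
late  = np.array([1190, 1210, 1180, 1230, 1195])  # counts/s after longer mixing (illustrative)
cv_early, cv_late = mixing_cv(early), mixing_cv(late)
# cv falling with mixing time -> the cement component approaches homogeneity
```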
Homogeneous-heterogeneous reactions in curved channel with porous medium
Hayat, T.; Ayub, Sadia; Alsaedi, A.
2018-06-01
The purpose of the present investigation is to examine peristaltic flow through a porous medium in a curved conduit. The problem is modeled for an incompressible, electrically conducting Ellis fluid. The influence of the porous medium is treated via a modified Darcy's law. The considered model utilizes homogeneous-heterogeneous reactions with equal diffusivities for reactant and autocatalyst. Constitutive equations are formulated in the presence of viscous dissipation. The channel walls are compliant in nature. The governing equations are modeled and simplified under the assumptions of small Reynolds number and large wavelength. Graphical results for velocity, temperature, heat transfer coefficient and homogeneous-heterogeneous reaction parameters are examined for the emerging parameters of the problem. Results reveal that both the homogeneous-heterogeneous reaction effect and the heat transfer rate are enhanced with increasing curvature of the channel.
Cryogenic homogenization and sampling of heterogeneous multi-phase feedstock
Doyle, Glenn Michael; Ideker, Virgene Linda; Siegwarth, James David
2002-01-01
An apparatus and process for producing a homogeneous analytical sample from a heterogeneous feedstock by: providing the mixed feedstock, reducing the temperature of the feedstock to a temperature below a critical temperature, reducing the size of the feedstock components, blending the reduced size feedstock to form a homogeneous mixture; and obtaining a representative sample of the homogeneous mixture. The size reduction and blending steps are performed at temperatures below the critical temperature in order to retain organic compounds in the form of solvents, oils, or liquids that may be adsorbed onto or absorbed into the solid components of the mixture, while also improving the efficiency of the size reduction. Preferably, the critical temperature is less than 77 K (−196 °C). Further, with the process of this invention the representative sample may be maintained below the critical temperature until being analyzed.
Pyroxene Homogenization and the Isotopic Systematics of Eucrites
Nyquist, L. E.; Bogard, D. D.
1996-01-01
The original Mg-Fe zoning of eucritic pyroxenes has in nearly all cases been partly homogenized, an observation that has been combined with other petrographic and compositional criteria to establish a scale of thermal "metamorphism" for eucrites. To evaluate hypotheses explaining development of conditions on the HED parent body (Vesta?) leading to pyroxene homogenization against their chronological implications, it is necessary to know whether pyroxene metamorphism was recorded in the isotopic systems. However, identifying the effects of the thermal metamorphism with specific effects in the isotopic systems has been difficult, due in part to a lack of correlated isotopic and mineralogical studies of the same eucrites. Furthermore, isotopic studies often place high demands on analytical capabilities, resulting in slow growth of the isotopic database. Additionally, some isotopic systems would not respond in a direct and sensitive way to pyroxene homogenization. Nevertheless, sufficient data exist to generalize some observations, and to identify directions of potentially fruitful investigations.
Variable valve timing in a homogenous charge compression ignition engine
Lawrence, Keith E.; Faletti, James J.; Funke, Steven J.; Maloney, Ronald P.
2004-08-03
The present invention relates generally to the field of homogeneous charge compression ignition engines, in which fuel is injected when the cylinder piston is relatively close to the bottom dead center position for its compression stroke. The fuel mixes with air in the cylinder during the compression stroke to create a relatively lean homogeneous mixture that preferably ignites when the piston is relatively close to the top dead center position. However, if the ignition event occurs either earlier or later than desired, lowered performance, engine misfire, or even engine damage can result. The present invention utilizes internal exhaust gas recirculation and/or compression ratio control to control the timing of ignition events and combustion duration in homogeneous charge compression ignition engines. Thus, at least one electro-hydraulic assist actuator is provided that is capable of mechanically engaging at least one cam actuated intake and/or exhaust valve.
Radiation Resistance and Gain of Homogeneous Ring Quasi-Array
DEFF Research Database (Denmark)
Knudsen, H. L.
1954-01-01
In a previous paper homogeneous ring quasi-arrays of tangential or radial dipoles were introduced, i.e. systems of dipoles arranged equidistantly along a circle, the dipoles being oriented in tangential or radial directions and carrying currents with the same amplitude, but with a phase that increases uniformly along the circle. Such quasi-arrays are azimuthally omnidirectional, and the radiated field will be mainly horizontally polarized and concentrated around the plane of the circle. In this paper expressions are obtained for the radiation resistance and the gain of homogeneous ring quasi-arrays.
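The azimuthal omnidirectionality of such uniformly phased ring excitations is easy to check numerically. A sketch for isotropic point elements, a simplification of the tangential or radial dipoles treated in the paper, with illustrative values of the element count and electrical radius:

```python
import numpy as np

def ring_array_factor(N, ka, mode, phi):
    """In-plane array factor of N isotropic elements equally spaced on a
    circle of electrical radius ka, with element phases increasing
    uniformly by 2*pi*mode/N around the ring (phase-mode excitation)."""
    n = np.arange(N)
    phi_n = 2 * np.pi * n / N
    return np.sum(np.exp(1j * (ka * np.cos(phi[:, None] - phi_n) + mode * phi_n)),
                  axis=1)

phi = np.linspace(0, 2 * np.pi, 361)
af = np.abs(ring_array_factor(N=16, ka=2.0, mode=1, phi=phi))
ripple = af.max() / af.min()  # close to 1 => azimuthally omnidirectional
```

For enough elements relative to `ka`, the pattern reduces to a single Bessel-function phase mode and the azimuthal ripple becomes negligible.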
Heterogenization of Homogeneous Catalysts: the Effect of the Support
Energy Technology Data Exchange (ETDEWEB)
Earl, W.L.; Ott, K.C.; Hall, K.A.; de Rege, F.M.; Morita, D.K.; Tumas, W.; Brown, G.H.; Broene, R.D.
1999-06-29
We have studied the influence of placing a soluble, homogeneous catalyst onto a solid support. We determined that such a 'heterogenized' homogeneous catalyst can have improved activity and selectivity for the asymmetric hydrogenation of enamides to amino acid derivatives. The route of heterogenization of RhDuPhos(COD)⁺ cations occurs via electrostatic interactions with anions that are capable of strong hydrogen bonding to silica surfaces. This is a novel approach to supported catalysis. Supported RhDuPhos(COD)⁺ is a recyclable, non-leaching catalyst in non-polar media. This is one of the few heterogenized catalysts that exhibits improved catalytic performance as compared to its homogeneous analog.
Homogenization technique for strongly heterogeneous zones in research reactors
International Nuclear Information System (INIS)
Lee, J.T.; Lee, B.H.; Cho, N.Z.; Oh, S.K.
1991-01-01
This paper reports on an iterative homogenization method using transport theory in a one-dimensional cylindrical cell model developed to improve the homogenized cross sections for strongly heterogeneous zones in research reactors. The flux-weighted homogenized cross sections are modified by a correction factor, the cell flux ratio under an albedo boundary condition. The albedo at the cell boundary is iteratively determined to reflect the geometry effects of the material properties of the adjacent cells. This method has been tested with a simplified core model of the Korea Multipurpose Research Reactor. The results demonstrate that the reaction rates of an off-center control shroud cell, the multiplication factor, and the power distribution of the reactor core are close to those of the fine-mesh heterogeneous transport model
Soluble Molecularly Imprinted Nanorods for Homogeneous Molecular Recognition
Directory of Open Access Journals (Sweden)
Rongning Liang
2018-03-01
Nowadays, it is still difficult for molecularly imprinted polymers (MIPs) to achieve homogeneous recognition since they cannot be easily dissolved in organic or aqueous phase. To address this issue, soluble molecularly imprinted nanorods have been synthesized by using soluble polyaniline doped with a functionalized organic protonic acid as the polymer matrix. By employing 1-naphthoic acid as a model, the proposed imprinted nanorods exhibit an excellent solubility and good homogeneous recognition ability. The imprinting factor for the soluble imprinted nanorods is 6.8. The equilibrium dissociation constant and the apparent maximum number of the proposed imprinted nanorods are 248.5 μM and 22.1 μmol/g, respectively. We believe that such imprinted nanorods may provide an appealing substitute for natural receptors in homogeneous recognition related fields.
Soluble Molecularly Imprinted Nanorods for Homogeneous Molecular Recognition
Liang, Rongning; Wang, Tiantian; Zhang, Huan; Yao, Ruiqing; Qin, Wei
2018-03-01
Nowadays, it is still difficult for molecularly imprinted polymers (MIPs) to achieve homogeneous recognition since they cannot be easily dissolved in an organic or aqueous phase. To address this issue, soluble molecularly imprinted nanorods have been synthesized by using soluble polyaniline doped with a functionalized organic protonic acid as the polymer matrix. By employing 1-naphthoic acid as a model, the proposed imprinted nanorods exhibit excellent solubility and good homogeneous recognition ability. The imprinting factor for the soluble imprinted nanorods is 6.8. The equilibrium dissociation constant and the apparent maximum binding capacity of the proposed imprinted nanorods are 248.5 μM and 22.1 μmol/g, respectively. We believe that such imprinted nanorods may provide an appealing substitute for natural receptors in homogeneous recognition related fields.
Homogeneous versus heterogeneous shielding modeling of spent-fuel casks
International Nuclear Information System (INIS)
Carbajo, J.J.; Lindner, C.N.
1992-01-01
The design of spent-fuel casks for storage and transport requires modeling the cask for criticality, shielding, thermal, and structural analyses. While some parts of the cask are homogeneous, other regions are heterogeneous, with different materials intermixed. For simplicity, some of the heterogeneous regions may be modeled as homogeneous. This paper evaluates the effect of homogenizing some regions of a cask on the calculated radiation dose rates outside the cask. The dose rate calculations were performed with the one-dimensional discrete-ordinates shielding code XSDRNPM coupled with the XSDOSE code, and with the three-dimensional QAD-CGGP code. Dose rates were calculated radially at the midplane of the cask at two locations: the cask surface and 2.3 m from the radial surface; the latter corresponds to a point 2 m from the lateral sides of a transport railroad car.
Method of the characteristics for calculation of VVER without homogenization
Energy Technology Data Exchange (ETDEWEB)
Suslov, I.R.; Komlev, O.G.; Novikova, N.N.; Zemskov, E.A.; Tormyshev, I.V.; Melnikov, K.G.; Sidorov, E.B. [Institute of Physics and Power Engineering, Obninsk (Russian Federation)
2005-07-01
The first stage of the development of the characteristics code MCCG3D for calculation of VVER-type reactors without homogenization is presented. A parallel version of the code for MPI was developed and tested on a PC cluster running Linux. Further development of the MCCG3D code for design-level calculations with full-scale space-distributed feedbacks is discussed. For validation of the MCCG3D code we use the critical assembly VENUS-2. Geometrical models with and without homogenization have been used. With both models the MCCG3D results agree well with the experimental power distribution and with the results generated by the other codes, but the model without homogenization provides better results. The perturbation theory for the MCCG3D code has been developed and implemented in the module KEFSFGG. The calculations with KEFSFGG are in good agreement with direct calculations. (authors)
Does prescribed burning result in biotic homogenization of coastal heathlands?
Velle, Liv Guri; Nilsen, Liv Sigrid; Norderhaug, Ann; Vandvik, Vigdis
2014-05-01
Biotic homogenization due to replacement of native biodiversity by widespread generalist species has been demonstrated in a number of ecosystems and taxonomic groups worldwide, causing growing conservation concern. Human disturbance is a key driver of biotic homogenization, suggesting potential conservation challenges in seminatural ecosystems, where anthropogenic disturbances such as grazing and burning are necessary for maintaining ecological dynamics and functioning. We test whether prescribed burning results in biotic homogenization in the coastal heathlands of north-western Europe, a seminatural landscape where extensive grazing and burning have constituted the traditional land-use practice over the past 6000 years. We compare the beta-diversity before and after fire at three ecological scales: within local vegetation patches, between wet and dry heathland patches within landscapes, and along a 470 km bioclimatic gradient. Within local patches, we found no evidence of homogenization after fire; species richness increased, and the species that entered the burnt Calluna stands were not widespread generalists but native grasses and herbs characteristic of the heathland system. At the landscape scale, we saw a weak homogenization as wet and dry heathland patches became more compositionally similar after fire. This was because of a decrease in habitat-specific species unique to either wet or dry habitats and postfire colonization by a set of heathland specialists that established in both habitat types. Along the bioclimatic gradient, species that increased after fire generally had more specific environmental requirements and narrower geographical distributions than the prefire flora, resulting in a biotic 'heterogenisation' after fire. Our study demonstrates that human disturbance does not necessarily cause biotic homogenization, but that continuation of traditional land-use practices can instead be crucial for the maintenance of the diversity and ecological
Numerical computing of elastic homogenized coefficients for periodic fibrous tissue
Directory of Open Access Journals (Sweden)
Roman S.
2009-06-01
Full Text Available The homogenization theory in linear elasticity is applied to a periodic array of cylindrical inclusions in a rectangular pattern extending to infinity in the inclusions' axial direction, such that the deformation of the tissue along this direction is negligible. In the plane of deformation, the homogenization scheme is based on the average strain energy, whereas in the third direction it is based on the average normal stress along this direction. Namely, these average quantities have to be the same on a Repeating Unit Cell (RUC) of the heterogeneous and homogenized media when using a special form of boundary conditions formed by a periodic part and an affine part of the displacement. There exists an infinity of RUCs generating the considered array. The computing procedure is tested with different choices of RUC to verify that the results of the homogenization process are independent of the kind of RUC employed. Then, the dependence of the homogenized coefficients on the microstructure can be studied. For instance, a special anisotropy and the role of the inclusion volume are investigated. In the second part of this work, mechanical traction tests are simulated. We consider two kinds of loading: applying a density of force or imposing a displacement. We test five samples of the periodic array containing one, four, sixteen, sixty-four, and one hundred RUCs. The evolution of the mean stresses, strains, and energy with the number of inclusions is studied. These evolutions depend on the kind of loading, but not their limits, which can be predicted by simulating a traction test of the homogenized medium.
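As a quick analytic cross-check on such fiber-composite homogenization, the classical Voigt (rule-of-mixtures) and Reuss bounds bracket the homogenized stiffness. The sketch below is not the paper's finite-element RUC scheme, and the moduli and volume fraction are hypothetical values.

```python
def voigt(E_f, E_m, vf):
    """Rule-of-mixtures upper bound (isostrain, loading along fibers)."""
    return vf * E_f + (1.0 - vf) * E_m

def reuss(E_f, E_m, vf):
    """Inverse rule-of-mixtures lower bound (isostress, transverse loading)."""
    return 1.0 / (vf / E_f + (1.0 - vf) / E_m)

E_fiber, E_matrix, vf = 70e9, 3e9, 0.4   # Pa and fiber volume fraction (illustrative)
E_upper = voigt(E_fiber, E_matrix, vf)
E_lower = reuss(E_fiber, E_matrix, vf)
```

Any properly homogenized stiffness, such as the RUC result of the paper, must fall between `E_lower` and `E_upper`, which makes the pair a useful sanity check on a numerical scheme.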
Stochastic model of milk homogenization process using Markov's chain
Directory of Open Access Journals (Sweden)
A. A. Khvostov
2016-01-01
Full Text Available The development of a mathematical model of the homogenization of dairy products is considered in this work. The model is based on the theory of Markov chains: a Markov chain with discrete states and a continuous parameter, for which the homogenization pressure is taken. Machine realization of the model is implemented in the structural modeling environment MathWorks Simulink™. Identification of the model parameters was carried out by minimizing the standard deviation from the experimental data for each fraction of the fat phase of the dairy product. The experimental data set consisted of processed micrographic images of the fat-globule size distributions of whole milk samples homogenized at different pressures. The Pattern Search method with the Latin Hypercube search algorithm from the Global Optimization Toolbox library was used for optimization. The accuracy of the calculations, averaged over all fractions, was 0.88% (relative share of units); the maximum relative error was 3.7% at a homogenization pressure of 30 MPa, which may be due to the very abrupt change of the particle size distribution away from that of the original milk at the beginning of the homogenization process and the lack of experimental data at homogenization pressures below the specified value. The proposed mathematical model makes it possible to calculate the profile of the volume and mass distribution of the fat phase (fat globules) in the product as a function of homogenization pressure, and it can be used in laboratory research on dairy product composition, as well as in the calculation, design, and modeling of process equipment for dairy industry enterprises.
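The idea of a Markov chain whose continuous parameter is the homogenization pressure can be illustrated with the smallest possible case: two states ("large" and "small" globules) and a constant breakup intensity. This is a toy sketch, not the paper's multi-fraction model, and the rate value is a hypothetical fitted constant.

```python
import math

def fraction_large(pressure, rate=0.08):
    """Probability that a fat globule is still in the 'large' state
    after homogenization at `pressure` (MPa), for a two-state Markov
    chain with the pressure as its continuous parameter. The breakup
    intensity `rate` (1/MPa) is a hypothetical value, not a
    parameter from the paper."""
    return math.exp(-rate * pressure)

# State distribution (large, small) versus homogenization pressure:
profile = {p: (fraction_large(p), 1.0 - fraction_large(p)) for p in (0, 10, 20, 30)}
```

With this illustrative rate, roughly 9% of globules remain large at 30 MPa; the paper's model generalizes the same construction to many size fractions with a full generator matrix.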
A homogeneous cooling scheme investigation for high power slab laser
He, Jianguo; Lin, Weiran; Fan, Zhongwei; Chen, Yanzhong; Ge, Wenqi; Yu, Jin; Liu, Hao; Mo, Zeqiang; Fan, Lianwen; Jia, Dan
2017-10-01
The forced convective heat transfer with the advantages of reliability and durability is widely used in cooling the laser gain medium. However, a flow-direction-induced temperature gradient always appears. In this paper, a novel cooling configuration based on longitudinal forced convective heat transfer is presented. In comparison with two different types of configurations, it shows more efficient heat transfer and a more homogeneous temperature distribution. The investigation of the flow rate reveals that the higher the flow rate, the better the cooling performance. Furthermore, the simulation results at a 20 L/min flow rate show an adequate temperature level and temperature homogeneity while keeping a lower hydrostatic pressure in the flow path.
Early capillary flux homogenization in response to neural activation.
Lee, Jonghwan; Wu, Weicheng; Boas, David A
2016-02-01
This Brief Communication reports early homogenization of capillary network flow during somatosensory activation in the rat cerebral cortex. We used optical coherence tomography and statistical intensity variation analysis for tracing changes in the red blood cell flux over hundreds of capillaries nearly at the same time with 1-s resolution. We observed that while the mean capillary flux exhibited a typical increase during activation, the standard deviation of the capillary flux exhibited an early decrease that happened before the mean flux increase. This network-level data is consistent with the theoretical hypothesis that capillary flow homogenizes during activation to improve oxygen delivery. © The Author(s) 2015.
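The network-level statistics described above reduce to tracking the cross-capillary mean and standard deviation of flux at each time point. A minimal sketch with hypothetical flux traces (the data below are illustrative, not from the study):

```python
import statistics

# Hypothetical RBC flux traces: flux[i][t] for capillary i at time point t.
flux = [
    [10, 10, 11, 13, 14],
    [ 4,  5,  6,  8, 10],
    [16, 15, 14, 13, 14],
]

def network_stats(flux):
    """Cross-capillary mean and (population) standard deviation of
    flux at each time point; flow homogenization appears as a drop
    in the standard deviation, possibly before the mean rises."""
    times = range(len(flux[0]))
    means = [statistics.mean(f[t] for f in flux) for t in times]
    stds = [statistics.pstdev(f[t] for f in flux) for t in times]
    return means, stds

means, stds = network_stats(flux)
```

In this toy data the standard deviation falls while the mean rises, mirroring the homogenization signature reported in the abstract.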
Note on integrability of certain homogeneous Hamiltonian systems
Energy Technology Data Exchange (ETDEWEB)
Szumiński, Wojciech [Institute of Physics, University of Zielona Góra, Licealna 9, PL-65-407, Zielona Góra (Poland); Maciejewski, Andrzej J. [Institute of Astronomy, University of Zielona Góra, Licealna 9, PL-65-407, Zielona Góra (Poland); Przybylska, Maria, E-mail: M.Przybylska@if.uz.zgora.pl [Institute of Physics, University of Zielona Góra, Licealna 9, PL-65-407, Zielona Góra (Poland)
2015-12-04
In this paper we investigate a class of natural Hamiltonian systems with two degrees of freedom. The kinetic energy depends on the coordinates, but the system is homogeneous. Thanks to this property it admits, in the general case, a particular solution. Using this solution we derive necessary conditions for the integrability of such systems by investigating the differential Galois group of the variational equations. - Highlights: • Necessary integrability conditions for some 2D homogeneous Hamiltonian systems are given. • Conditions are obtained by analysing the differential Galois group of the variational equations. • New integrable and superintegrable systems are identified.
How to determine composite material properties using numerical homogenization
DEFF Research Database (Denmark)
Andreassen, Erik; Andreasen, Casper Schousboe
2014-01-01
Numerical homogenization is an efficient way to determine effective macroscopic properties, such as the elasticity tensor, of a periodic composite material. In this paper an educational description of the method is provided based on a short, self-contained Matlab implementation. It is shown how...... the basic code, which computes the effective elasticity tensor of a two material composite, where one material could be void, is easily extended to include more materials. Furthermore, extensions to homogenization of conductivity, thermal expansion, and fluid permeability are described in detail. The unit...
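The paper's Matlab code solves the full periodic cell problem by finite elements. As a minimal one-dimensional stand-in (an assumption of this sketch, not the paper's method), conduction across a periodic laminate with equal-width cells reduces to a series resistor network, whose effective conductivity is the harmonic mean of the cell values.

```python
def effective_conductivity(k_cells):
    """Effective conductivity across a 1-D periodic laminate with
    equal-width cells, obtained by solving the series resistor
    network for a unit temperature drop over a unit cell."""
    n = len(k_cells)
    resistance = sum((1.0 / n) / k for k in k_cells)  # sum of (cell width) / k
    return 1.0 / resistance

k_cells = [1.0, 10.0, 1.0, 10.0]          # illustrative cell conductivities
k_eff = effective_conductivity(k_cells)   # harmonic mean of the cells
```

The result is always below the arithmetic (Voigt) average of the cells, the simplest manifestation of why naive volume averaging over-predicts effective transport properties.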
Homogenization of long fiber reinforced composites including fiber bending effects
DEFF Research Database (Denmark)
Poulios, Konstantinos; Niordson, Christian Frithiof
2016-01-01
This paper presents a homogenization method, which accounts for intrinsic size effects related to the fiber diameter in long fiber reinforced composite materials with two independent constitutive models for the matrix and fiber materials. A new choice of internal kinematic variables allows...... of the reinforcing fibers is captured by higher order strain terms, resulting in an accurate representation of the micro-mechanical behavior of the composite. Numerical examples show that the accuracy of the proposed model is very close to a non-homogenized finite-element model with an explicit discretization...
Preparation of homogeneous isotopic targets with rotating substrate
International Nuclear Information System (INIS)
Xu, G.J.; Zhao, Z.G.
1993-01-01
Isotopically enriched accelerator targets were prepared using the evaporation-condensation method from a resistance-heated crucible. For high collection efficiency and good homogeneity the substrate was rotated at a vertical distance of 1.3 to 2.5 cm from the evaporation source. Measured collection efficiencies were 13 to 51 μg cm⁻² mg⁻¹, and homogeneity tests showed values close to those calculated theoretically for a point source. Targets, self-supporting or on backings, could be fabricated with this method for elements and some compounds with evaporation temperatures up to 2300 K. (orig.)
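The point-source homogeneity referred to above can be estimated from the standard cosine-law deposition profile. The sketch below assumes an isotropic point source and a flat substrate; the 1 cm target radius is an illustrative choice, not a value from the paper.

```python
def thickness(r, h, t0=1.0):
    """Relative deposit thickness at radial offset r (cm) from the
    axis of a point evaporation source a vertical distance h (cm)
    below the substrate, normalized so that thickness(0, h) = t0:
        t(r) = t0 * h**3 / (h**2 + r**2)**1.5
    Substrate rotation averages out azimuthal non-uniformity, so
    the radial profile alone determines the homogeneity."""
    return t0 * h ** 3 / (h ** 2 + r ** 2) ** 1.5

# Edge-to-center thickness ratio over a 1 cm radius target,
# at the two source distances quoted in the abstract:
worst_near = thickness(1.0, 1.3)   # h = 1.3 cm
worst_far = thickness(1.0, 2.5)    # h = 2.5 cm
```

The larger source distance trades collection efficiency for homogeneity: the profile flattens as h grows, which is why the quoted 1.3-2.5 cm range represents a compromise between the two.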
A critical review of homogenization techniques in reactor lattices
International Nuclear Information System (INIS)
Benoist, P.
1983-01-01
The determination of the shape of the neutron flux in a whole reactor is, at present, much too complex a problem to be treated directly by transport theory. Since the earliest days of reactor theory, it has been necessary to solve the problem in two steps. First the reactor is divided into zones, each of them forming a regular lattice. In each of these zones, homogenized parameters are determined by transport theory in order to define an equivalent smeared medium. In a second step, these parameters are introduced into a diffusion theory scheme in order to treat the reactor as a whole. This is the homogenization procedure. 14 refs
Relativistic cosmologies with closed, locally homogeneous space sections
International Nuclear Information System (INIS)
Fagundes, H.V.
1985-01-01
The homogeneous Bianchi and Kantowski-Sachs metrics of relativistic cosmology are investigated through their correspondence with recent geometrical results of Thurston. These allow a partial classification of the topologies for closed, locally homogeneous spaces according to Thurston's eight geometric types. In addition, it is determined which of the Bianchi and Kantowski-Sachs metrics can be imposed on closed space sections of cosmological models. This is seen as progress toward implementation of a postulate of the closure of space for both classical and quantum gravity. (Author) [pt
Control rod homogenization in heterogeneous sodium-cooled fast reactors
International Nuclear Information System (INIS)
Andersson, Mikael
2016-01-01
The sodium-cooled fast reactor is one of the candidates for a sustainable nuclear reactor system. In particular, the French ASTRID project employs an axially heterogeneous design, proposed in the so-called CFV (low sodium void effect) core, to enhance the inherent safety features of the reactor. This thesis focuses on the accurate modeling of the control rods through the homogenization method. The control rods in a sodium-cooled fast reactor are used for reactivity compensation during the cycle, for power shaping, and to shut down the reactor. In previous control rod homogenization procedures, only a radial description of the geometry was implemented, hence the axially heterogeneous features of the CFV core could not be taken into account. This thesis investigates the different axial variations the control rod experiences in a CFV core, to determine the impact these axial environments have on the control rod modeling. The methodology used in this work is based on previous homogenization procedures, the so-called equivalence procedure. The procedure was newly implemented in the PARIS code system in order to be able to use 3D geometries, and thereby take axial effects into account. The thesis is divided into three parts. The first part investigates the impact of different neutron spectra on the homogeneous control-rod cross sections. The second part investigates the cases where the traditional radial control-rod homogenization procedure is no longer applicable in the CFV core, which was found to be 5-10 cm away from any material interface. In the third part, based on the results of the second part, a 3D model of the control rod is used to calculate homogenized control-rod cross sections. In a full core model, a study is made of the impact these axial effects have on control-rod-related core parameters, such as the control rod worth, the capture rates in the control rod, and the power in the adjacent fuel assemblies. All results were compared to a Monte
A homogeneous catalogue of quasar candidates found with slitless spectroscopy
International Nuclear Information System (INIS)
Beauchemin, M.; Borra, E.F.; Edwards, G.
1990-01-01
This paper gives a list of all quasar candidates obtained from an automated computer search performed on 11 grens plates. A description of the main characteristics of the survey is given, along with the latest improvements in the selection techniques. Particular attention has been paid to understanding and quantifying selection effects. This allows the construction of homogeneous samples having well-understood characteristics. The noteworthy aspect of our homogenization process is the correction that we apply to our probability classes in order to take into account the signal-to-noise differences, at a given magnitude, among plates of different limiting magnitudes. (author)
Size-dependent homogenized diffusion parameters for a finite lattice
International Nuclear Information System (INIS)
Premuda, F.
1980-01-01
A numerical technique is reported for solving the transcendental equation for the unknown Y_(n+1); the solution is expressed in terms of quantities related to Y_(n). This is an iterative reversion technique which has already been proven to converge rapidly in the homogeneous slab problem considered herein. (author)
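The abstract does not reproduce the transcendental equation itself, so the sketch below only shows the generic fixed-point pattern behind such an iterative scheme, applied to the standard example x = exp(-x); the equation and tolerances are assumptions of this illustration.

```python
import math

def fixed_point(g, x0, tol=1e-12, max_iter=200):
    """Generic fixed-point iteration x_{n+1} = g(x_n): each new
    iterate is expressed through the previous one, the same pattern
    as an iterative reversion expressing Y_(n+1) via Y_(n)."""
    x = x0
    for _ in range(max_iter):
        x_new = g(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    raise RuntimeError("iteration did not converge")

# Standard example: the root of x = exp(-x) (the omega constant).
root = fixed_point(lambda x: math.exp(-x), 0.5)
```

Such iterations converge rapidly when the mapping is contracting near the root (|g'| < 1 there), which is the property the abstract reports for the slab problem.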
Isotopic homogeneity of iron in the early solar nebula.
Zhu, X K; Guo, Y; O'Nions, R K; Young, E D; Ash, R D
2001-07-19
The chemical and isotopic homogeneity of the early solar nebula, and the processes producing fractionation during its evolution, are central issues of cosmochemistry. Studies of the relative abundance variations of three or more isotopes of an element can in principle determine if the initial reservoir of material was a homogeneous mixture or if it contained several distinct sources of precursor material. For example, widespread anomalies observed in the oxygen isotopes of meteorites have been interpreted as resulting from the mixing of a solid phase that was enriched in ¹⁶O with a gas phase in which ¹⁶O was depleted, or as an isotopic 'memory' of Galactic evolution. In either case, these anomalies are regarded as strong evidence that the early solar nebula was not initially homogeneous. Here we present measurements of the relative abundances of three iron isotopes in meteoritic and terrestrial samples. We show that significant variations of iron isotopes exist in both terrestrial and extraterrestrial materials. But when plotted in a three-isotope diagram, all of the data for these Solar System materials fall on a single mass-fractionation line, showing that homogenization of iron isotopes occurred in the solar nebula before both planetesimal accretion and chondrule formation.
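The single mass-fractionation line in a three-isotope diagram has a slope fixed by the isotope masses alone. A minimal check of whether a (δ⁵⁶Fe, δ⁵⁷Fe) pair is consistent with mass-dependent fractionation; the tolerance and the sample values are illustrative assumptions, not data from the paper.

```python
import math

# Atomic masses of the iron isotopes (u), from standard tables.
m54, m56, m57 = 53.9396, 55.9349, 56.9354

# Slope of the mass-dependent fractionation line in a
# delta(56/54) versus delta(57/54) three-isotope diagram.
slope = math.log(m56 / m54) / math.log(m57 / m54)

def on_fractionation_line(d56, d57, tol=0.05):
    """True if a (delta56Fe, delta57Fe) pair (per mil) lies on the
    single mass-fractionation line; `tol` is an illustrative
    analytical uncertainty, not a value from the paper."""
    return abs(d56 - slope * d57) < tol

# A hypothetical sample fractionated purely by mass:
sample_consistent = on_fractionation_line(0.336, 0.500)
```

Data falling off this line would indicate nucleosynthetic anomalies or incomplete mixing; the abstract's finding is that all measured Solar System iron plots on it.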
Homogeneity of Moral Judgment? Apprentices Solving Business Conflicts.
Beck, Klaus; Heinrichs, Karin; Minnameier, Gerhard; Parche-Kawik, Kirsten
In an ongoing longitudinal study that started in 1994, the moral development of business apprentices is being studied. The focal point of this project is a critical analysis of L. Kohlberg's thesis of homogeneity, according to which people should judge every moral issue from the point of view of their "modal" stage (the most frequently…
Gauge freedom in perfect fluid spatially homogeneous spacetimes
International Nuclear Information System (INIS)
Jantzen, R.T.
1983-01-01
The class of reference systems compatible with the symmetry of a spatially homogeneous perfect fluid spacetime is discussed together with the associated class of symmetry adapted comoving ADM frames (or computational frames). The fluid equations of motion are related to the four functions on the space of fluid flow lines discovered by Taub and which characterize an isentropic flow. (Auth.)
Lagrangian statistics of particle pairs in homogeneous isotropic turbulence
Biferale, L.; Boffeta, G.; Celani, A.; Devenish, B.J.; Lanotte, A.; Toschi, F.
2005-01-01
We present a detailed investigation of the particle pair separation process in homogeneous isotropic turbulence. We use data from direct numerical simulations up to R_λ ≈ 280, following the evolution of about two million passive tracers advected by the flow over a time span of about three decades. We
Electromagnetic Radiation in a Uniformly Moving, Homogeneous Medium
DEFF Research Database (Denmark)
Johannsen, Günther
1972-01-01
A new method of treating radiation problems in a uniformly moving, homogeneous medium is presented. A certain transformation technique in connection with the four-dimensional Green's function method makes it possible to elaborate the Green's functions of the governing differential equations...
Class Management and Homogeneous Grouping in Kindergarten Literacy Instruction
Hong, Guanglei; Pelletier, Janette; Hong, Yihua; Corter, Carl
2010-01-01
The purpose of this study is two-fold. Firstly the authors examine, given the amount of time allocated to literacy instruction, whether homogeneous grouping helps improve class manageability over the kindergarten year and whether individual students' externalizing problem behaviors will decrease in tandem. Secondly, they investigate whether the…
On superspinor structure of homogeneous superspace of orthosymplectic groups
International Nuclear Information System (INIS)
Volkov, D.V.; Soroka, V.A.; Tkach, V.I.
1984-01-01
Superspinor structures of homogeneous superspaces of orthosymplectic groups are considered. It is shown how the properties of superspaces of the orthosymplectic group OSp(N, 2K), which play an important role in supersymmetry theory, can be described using superspinors. An example confirming the possibility of a relation between canonical relations of the Buttin bracket and conventional methods of quantization is considered.
Molecular weight enlargement : a molecular approach to continuous homogeneous catalysis
Janssen, M.C.C.
2010-01-01
Homogeneous catalysts play an increasingly important role in organic synthesis today, because of their high activity and selectivity. Usually, precious metals are used in combination with valuable ligands and since metal prices are expected to increase further in the future, methods for their
Non-homogeneous polymer model for wave propagation and its ...
African Journals Online (AJOL)
density are functions of space, i.e. a non-homogeneous engineering material. The solution of Eq. (9) in the form of Eq. (10) can be obtained by taking a phase function. The viscoelastic model is applied to a particular case, for a progressive harmonic wave which starts from the end x = 0.
DNA Dynamics Studied Using the Homogeneous Balance Method
International Nuclear Information System (INIS)
Zayed, E. M. E.; Arnous, A. H.
2012-01-01
We employ the homogeneous balance method to construct the traveling waves of the nonlinear vibrational dynamics modeling of DNA. Some new explicit forms of traveling waves are given. It is shown that this method provides us with a powerful mathematical tool for solving nonlinear evolution equations in mathematical physics. Strengths and weaknesses of the proposed method are discussed. (general)
Homogenization and isotropization of an inflationary cosmological model
International Nuclear Information System (INIS)
Barrow, J.D.; Groen, Oe.; Oslo Univ.
1986-01-01
A member of the class of anisotropic and inhomogeneous cosmological models constructed by Wainwright and Goode is investigated. It is shown to describe a universe containing a scalar field which is minimally coupled to gravitation and a positive cosmological constant. It is shown that this cosmological model evolves exponentially rapidly towards the homogeneous and isotropic de Sitter universe model. (orig.)
Revisiting the homogenization of dammed rivers in the southeastern US
Ryan A. McManamay; Donald J. Orth; Charles A. Dolloff
2012-01-01
For some time, ecologists have attempted to make generalizations concerning how disturbances influence natural ecosystems, especially river systems. The existing literature suggests that dams homogenize the hydrologic variability of rivers. However, this might insinuate that dams affect river systems similarly despite a large gradient in natural hydrologic character....
Homogeneous axisymmetric model with a limiting stiff equation of state
International Nuclear Information System (INIS)
Korkina, M.P.; Martynenko, V.G.
1976-01-01
A solution is obtained for Einstein's equations in which all metric coefficients are functions of time, for a limiting stiff equation of state. The solution describes a homogeneous cosmological model with cylindrical symmetry. It is shown that the same metric can be induced by a massless scalar field depending only on time. An analysis of this solution is presented.
Fraisse sequences: category-theoretic approach to universal homogeneous structures
Czech Academy of Sciences Publication Activity Database
Kubiś, Wieslaw
2014-01-01
Roč. 165, č. 11 (2014), s. 1755-1811 ISSN 0168-0072 R&D Projects: GA ČR(CZ) GAP201/12/0290 Institutional support: RVO:67985840 Keywords : universal homogeneous object * Fraissé sequence * amalgamation Subject RIV: BA - General Mathematics Impact factor: 0.548, year: 2014 http://www.sciencedirect.com/science/article/pii/S0168007214000773
KINETIC THEORY OF PLASMA WAVES: Part II: Homogeneous Plasma
Westerhof, E.
2010-01-01
The theory of electromagnetic waves in a homogeneous plasma is reviewed. The linear response of the plasma to the waves is obtained in the form of the dielectric tensor. Waves ranging from the low frequency Alfven to the high frequency electron cyclotron waves are discussed in the limit of the cold
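A minimal sketch of the cold-plasma limit reviewed above: the dielectric tensor is determined by the Stix elements S, D, P summed over species. The restriction to a quasi-neutral electron-proton plasma and the input values are illustrative assumptions of this sketch.

```python
import math

def stix_parameters(ne, B, f):
    """Cold-plasma dielectric tensor elements S, D, P (Stix notation)
    for a quasi-neutral electron-proton plasma: wave frequency f (Hz),
    electron density ne (m^-3), magnetic field B (T)."""
    e, me, mp, eps0 = 1.602e-19, 9.109e-31, 1.673e-27, 8.854e-12
    w = 2.0 * math.pi * f
    S, D, P = 1.0, 0.0, 1.0
    for q, m in ((-e, me), (e, mp)):       # electrons, then protons
        wp2 = ne * q * q / (eps0 * m)      # plasma frequency squared
        wc = q * B / m                     # signed cyclotron frequency
        S -= wp2 / (w * w - wc * wc)
        D += wc * wp2 / (w * (w * w - wc * wc))
        P -= wp2 / (w * w)
    return S, D, P

# Tensor structure: eps = [[S, -iD, 0], [iD, S, 0], [0, 0, P]].
S, D, P = stix_parameters(ne=1e18, B=1.0, f=1e10)
```

In the unmagnetized limit the gyrotropic element D vanishes and S reduces to P, recovering an isotropic dielectric, which is a convenient consistency check on the implementation.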
Kinetic theory of plasma waves: Part II homogeneous plasma
Westerhof, E.
2000-01-01
The theory of electromagnetic waves in a homogeneous plasma is reviewed. The linear response of the plasma to the waves is obtained in the form of the dielectric tensor. Waves ranging from the low frequency Alfven to the high frequency electron cyclotron waves are discussed in the limit of the cold
Kinetic theory of plasma waves - Part II: Homogeneous plasma
Westerhof, E.
2008-01-01
The theory of electromagnetic waves in a homogeneous plasma is reviewed. The linear response of the plasma to the waves is obtained in the form of the dielectric tensor. Waves ranging from the low frequency Alfven to the high frequency electron cyclotron waves are discussed in the limit of the cold
Homogeneous Nucleation Rate Measurements in Supersaturated Water Vapor
Czech Academy of Sciences Publication Activity Database
Brus, David; Ždímal, Vladimír; Smolík, Jiří
2008-01-01
Roč. 129, č. 17 (2008), 174501-1-174501-8 ISSN 0021-9606 R&D Projects: GA ČR GA101/05/2214 Institutional research plan: CEZ:AV0Z40720504 Keywords : homogeneous nucleation * water * diffusion chamber Subject RIV: CF - Physical ; Theoretical Chemistry Impact factor: 3.149, year: 2008
Homogenization and Optimal Control S. Kesavan The Institute of ...
Indian Academy of Sciences (India)
Homogenization permits us to study the global behaviour of heterogeneous bodies with a lot of heterogeneities whose dimensions are small compared to those of the body. • It describes the macroscopic behaviour of systems with a fine microstructure.
Exploring cosmic homogeneity with the BOSS DR12 galaxy sample
Energy Technology Data Exchange (ETDEWEB)
Ntelis, Pierros; Hamilton, Jean-Christophe; Busca, Nicolas Guillermo; Aubourg, Eric [APC, Université Paris Diderot-Paris 7, CNRS/IN2P3, CEA, Observatoire de Paris, 10, rue A. Domon and L. Duquet, Paris (France); Goff, Jean-Marc Le; Burtin, Etienne; Laurent, Pierre; Rich, James; Bourboux, Hélion du Mas des; Delabrouille, Nathalie Palanque [CEA, Centre de Saclay, IRFU/SPP, F-91191 Gif-sur-Yvette (France); Tinker, Jeremy [Department of Physics and Center for Cosmology and Particle Physics, New York University, 726 Broadway, New York (United States); Bautista, Julian [Department of Physics and Astronomy, University of Utah, Salt Lake City, UT 84112 (United States); Delubac, Timothée [Laboratoire d' astrophysique, Ecole Polytechnique Fédérale de Lausanne (EPFL), Observatoire de Sauverny, CH-1290 Versoix (Switzerland); Eftekharzadeh, Sarah; Myers, Adam [Department of Physics and Astronomy, University of Wyoming, Laramie, WY 82071 (United States); Hogg, David W. [Center for Cosmology and Particle Physics, New York University, 4 Washington Place, Meyer Hall of Physics, New York, NY 10003 (United States); Vargas-Magaña, Mariana [Instituto de Física, Universidad Nacional Autónoma de México, Apdo. Postal 20-364, México (Mexico); Pâris, Isabelle [Aix Marseille Universite, CNRS, LAM (Laboratoire d' Astrophysique de Marseille) UMR 7326, 13388, Marseille (France); Petitjean, Partick [Institut d' Astrophysique de Paris, CNRS-UPMC, UMR7095, 98bis bd Arago, Paris, 75014 France (France); Rossi, Graziano, E-mail: pntelis@apc.in2p3.fr, E-mail: jchamilton75@gmail.com [Department of Astronomy and Space Science, Sejong University, Seoul, 143-747 (Korea, Republic of); and others
2017-06-01
In this study, we probe the transition to cosmic homogeneity in the Large Scale Structure (LSS) of the Universe using the CMASS galaxy sample of the BOSS spectroscopic survey, which covers the largest effective volume to date, 3 h⁻³ Gpc³ at 0.43 ≤ z ≤ 0.7. We study the scaled counts-in-spheres, N(<r), and the fractal correlation dimension, D_2(r), to assess the homogeneity scale of the universe using a Landy and Szalay inspired estimator. Defining the scale of transition to homogeneity as the scale at which D_2(r) reaches 3 within 1%, i.e. D_2(r) > 2.97 for r > R_H, we find R_H = (63.3 ± 0.7) h⁻¹ Mpc, in agreement at the percent level with the prediction of the ΛCDM model, R_H = 62.0 h⁻¹ Mpc. Thanks to the large cosmic depth of the survey, we investigate the redshift evolution of the transition-to-homogeneity scale and find agreement with the ΛCDM prediction. Finally, we find that D_2 is compatible with 3 at scales larger than 300 h⁻¹ Mpc in all redshift bins. These results consolidate the Cosmological Principle and represent a precise consistency test of the ΛCDM model.
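The fractal correlation dimension used above is the logarithmic derivative of the counts-in-spheres, D_2(r) = d ln N(<r) / d ln r. A minimal finite-difference estimator, applied to perfectly homogeneous synthetic counts (the radial bins are illustrative, not the survey's):

```python
import math

def correlation_dimension(radii, counts):
    """Fractal correlation dimension D_2(r) = d ln N(<r) / d ln r,
    estimated by finite differences between successive radii. For a
    homogeneous distribution N(<r) ~ r^3, so D_2 -> 3."""
    return [
        (math.log(counts[i]) - math.log(counts[i - 1]))
        / (math.log(radii[i]) - math.log(radii[i - 1]))
        for i in range(1, len(radii))
    ]

radii = [20.0, 40.0, 80.0, 160.0]       # h^-1 Mpc (illustrative bins)
counts = [r ** 3 for r in radii]        # perfectly homogeneous counts
d2_values = correlation_dimension(radii, counts)
```

For clustered galaxy data, D_2 falls below 3 on small scales; the homogeneity scale R_H is then read off as the first radius where the estimate climbs back above the chosen threshold (2.97 in the study).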
Homogenization of a thermo-diffusion system with Smoluchowski interactions
Krehel, O.; Aiki, T.; Muntean, A.
2014-01-01
We study the solvability and homogenization of a thermal-diffusion reaction problem posed in a periodically perforated domain. The system describes the motion of populations of hot colloidal particles interacting together via Smoluchowski production terms. The upscaled system, obtained via two-scale
Homogeneous optical cloak constructed with uniform layered structures
DEFF Research Database (Denmark)
Zhang, Jingjing; Liu, Liu; Luo, Yu
2011-01-01
, the majority of the invisibility cloaks reported so far have a spatially varying refractive index which requires complicated design processes. Besides, the size of the hidden object is usually small relative to that of the cloak device. Here we report the experimental realization of a homogeneous invisibility...
Homogenization of compacted blends of Ni and Mo powders
International Nuclear Information System (INIS)
Lanam, R.D.; Yeh, F.C.H.; Rovsek, J.E.; Smith, D.W.; Heckel, R.W.
1975-01-01
The homogenization behavior of compacted blends of Ni and Mo powders was studied primarily as a function of temperature, mean compact composition, and Mo powder particle size. All compact compositions were in the Ni-rich terminal solid-solution range; temperatures were between 950 and 1200°C (in the region of the phase diagram where only the Mo-Ni intermediate phase forms); average Mo particle sizes ranged from 8.4 μm to 48 μm. Homogenization was characterized in terms of the rate of decrease of the amounts of the Mo-rich terminal solid-solution phase and the Mo-Ni intermediate phase. The experimental results were compared to predictions based upon the three-phase, concentric-sphere homogenization model. In general, agreement between experimental data and model predictions was fairly good for high-temperature treatments and for compact compositions which were not close to the solubility limit of Mo in Ni. Departures from the model are discussed in terms of surface diffusion contributions to homogenization and non-uniform mixing effects. (U.S.)
A new formulation for the problem of fuel cell homogenization
International Nuclear Information System (INIS)
Chao, Y.-A.; Martinez, A.S.
1982-01-01
A new homogenization method for reactor cells is described. This new method consists in eliminating the NR approximation for the fuel resonance and the Wigner approximation for the resonance escape probability; the background cross section is then redefined and the problem studied is reanalyzed. (E.G.) [pt
Notes on a class of homogeneous space-times
International Nuclear Information System (INIS)
Calvao, M.O.; Reboucas, M.J.; Teixeira, A.F.F.; Silva Junior, W.M.
1987-01-01
The breakdown of causality in homogeneous Goedel-type space-time manifolds is examined. An extension of the Reboucas-Tiomno (RT) study is made. The existence of noncausal curves is also investigated under two different conditions on the energy-momentum tensor. An integral representation of the infinitesimal generators of isometries is obtained, extending previous works on the RT geometry. (Author) [pt
Transient computational homogenization for heterogeneous materials under dynamic excitation
Pham, N.K.H.; Kouznetsova, V.; Geers, M.G.D.
2013-01-01
This paper presents a novel transient computational homogenization procedure that is suitable for the modelling of the evolution in space and in time of materials with non-steady state microstructure, such as metamaterials. This transient scheme is an extension of the classical (first-order)
Non-homogeneous polymer model for wave propagation and its ...
African Journals Online (AJOL)
This article concerns certain aspects of four parameter polymer models to study harmonic waves in the non-homogeneous polymer rods of varying density. There are two sections of this paper, in first section, the rheological behaviour of the model is discussed numerically and then it is solved analytically with the help of ...
A characterization of Markovian homogeneous multicomponent Gaussian fields
International Nuclear Information System (INIS)
Ekhaguere, G.O.S.
1980-01-01
Necessary and sufficient conditions are given for a certain class of homogeneous multicomponent Gaussian generalized stochastic fields to possess a Markov property equivalent to Nelson's. The class of Markov fields so characterized has as a subclass the class of Markov fields which lead by Nelson's Reconstruction Theorem to some covariant (free) quantum fields. (orig.)
Homogenization of Stokes and Navier-Stokes equations
International Nuclear Information System (INIS)
Allaire, G.
1990-04-01
This thesis is devoted to the homogenization of the Stokes and Navier-Stokes equations with a Dirichlet boundary condition in a domain containing many tiny obstacles. Typically, those obstacles are distributed at the nodes of a periodic lattice with the same small period in each axis direction, and their size is always asymptotically smaller than the lattice step. With the help of the energy method, and thanks to a suitable extension of the pressure, we prove the convergence of the homogenization process when the lattice step tends to zero (and thus the number of obstacles tends to infinity). For a so-called critical size of the obstacles, the homogenized problem turns out to be a Brinkman law (i.e. the Stokes or Navier-Stokes equation plus a linear zero-order term for the velocity in the momentum equation). For obstacles smaller than the critical size, the limit problem reduces to the initial Stokes or Navier-Stokes equations, while for larger sizes the homogenized problem turns out to be a Darcy law. Furthermore, these results have been extended to the case of obstacles included in a hyperplane, and we establish a simple model of fluid flow through grids, based on a special form of Brinkman's law [fr
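The three size regimes can be summarized compactly. The scalings below are recalled from the standard homogenization literature rather than quoted from the abstract, so treat the exponents as an assumption (in dimension d = 3 the critical obstacle size is the cube of the lattice period):

```latex
% a_\varepsilon : obstacle size, \varepsilon : lattice period, d = 3.
% Scaling recalled from the homogenization literature, not from the abstract.
a_\varepsilon^{\mathrm{crit}} \sim \varepsilon^{d/(d-2)} = \varepsilon^{3},
\qquad
\text{limit problem} =
\begin{cases}
\text{Stokes / Navier--Stokes}, & a_\varepsilon \ll \varepsilon^{3},\\
\text{Brinkman law (zero-order velocity term added)}, & a_\varepsilon \sim \varepsilon^{3},\\
\text{Darcy law}, & a_\varepsilon \gg \varepsilon^{3}.
\end{cases}
```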
Microsegregation and homogenization in U-Nb alloy
International Nuclear Information System (INIS)
Leal, J. Fernando; Nogueira, R.A.; Ambrozio Filho, F.
1987-01-01
Microsegregation results in U-4 wt% Nb alloys cast in a nonconsumable-electrode arc furnace are presented. The microsegregation is studied qualitatively by optical microscopy and quantitatively by electron microprobe. The degree of homogenization has been measured after 800°C heat treatments. The times required for homogenization of the alloys are also discussed. (author) [pt
Environmental Kuznets Curves for CO2 : Heterogeneity Versus Homogeneity
Vollebergh, H.R.J.; Dijkgraaf, E.; Melenberg, B.
2005-01-01
We explore the emissions-income relationship for CO2 in OECD countries using various modelling strategies. Even for this relatively homogeneous sample, we find that the inverted-U-shaped curve is quite sensitive to the degree of heterogeneity included in the panel estimations. This finding is robust,
Subspace identification of distributed clusters of homogeneous systems
Yu, C.; Verhaegen, M.H.G.
2017-01-01
This note studies the identification of a network comprised of interconnected clusters of LTI systems. Each cluster consists of homogeneous dynamical systems, and its interconnections with the rest of the network are unmeasurable. A subspace identification method is proposed for identifying a single
Quasi-single-mode homogeneous 31-core fibre
DEFF Research Database (Denmark)
Sasaki, Y.; Saitoh, S.; Amma, Y.
2015-01-01
A homogeneous 31-core fibre with a cladding diameter of 230 μm for quasi-single-mode transmission is designed and fabricated. LP01-crosstalk of -38.4 dB/11 km at 1550 nm is achieved by using few-mode trench-assisted cores....
Directory of Open Access Journals (Sweden)
K. Ioannidi
2014-01-01
We consider the problem of radiation from a vertical short (Hertzian) dipole above flat lossy ground, which represents the well-known “Sommerfeld radiation problem” in the literature. The problem is formulated in a novel spectral domain approach, and by inverse three-dimensional Fourier transformation the expressions for the received electric and magnetic (EM) fields in physical space are derived as one-dimensional integrals over the radial component of the wavevector, in cylindrical coordinates. This formulation appears to have inherent advantages over the classical formulation by Sommerfeld, performed in the spatial domain, since it avoids the use of the so-called Hertz potential and its subsequent differentiation for the calculation of the received EM field. Subsequent use of the stationary phase method in the high-frequency regime yields closed-form analytical solutions for the received EM field vectors, which coincide with the corresponding reflected EM field originating from the image point. In this way, we conclude that the so-called “space wave” in the literature represents the total solution of the Sommerfeld problem in the high-frequency regime, in which case the surface wave can be ignored. Finally, numerical results are presented, in comparison with corresponding numerical results based on Norton’s solution of the problem.
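The image-point interpretation admits a compact numerical sketch. The following is a hedged illustration, not the authors' spectral-domain solution: it evaluates the high-frequency "space wave" as a direct ray plus an image ray weighted by the Fresnel TM reflection coefficient, with dipole normalization constants omitted and all parameter values hypothetical.

```python
import numpy as np

def space_wave_field(rho, z, z0, k, n):
    """High-frequency 'space wave' of a vertical dipole at height z0 above a
    lossy half space with complex refractive index n, observed at (rho, z).
    Sketch only: direct ray plus image ray weighted by the Fresnel TM
    reflection coefficient; dipole normalization omitted.
    """
    R1 = np.hypot(rho, z - z0)          # direct path length
    R2 = np.hypot(rho, z + z0)          # path length via the image point
    cos_i = (z + z0) / R2               # incidence angle of the reflected ray
    sin_i = rho / R2
    root = np.sqrt(n**2 - sin_i**2 + 0j)
    gamma_tm = (n**2 * cos_i - root) / (n**2 * cos_i + root)  # Fresnel TM
    return np.exp(1j * k * R1) / R1 + gamma_tm * np.exp(1j * k * R2) / R2

# Example with hypothetical geometry; |n| -> infinity recovers the
# perfectly conducting ground (gamma_tm -> 1).
E = space_wave_field(rho=100.0, z=10.0, z0=10.0, k=2 * np.pi, n=1e6 + 0j)
```

In the perfectly conducting limit the result reduces to the direct wave plus an equal-strength image wave, consistent with the image-point picture above.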
Biotic homogenization of three insect groups due to urbanization.
Knop, Eva
2016-01-01
Cities are growing rapidly and are thereby expected to cause large-scale global biotic homogenization. Evidence for the homogenization hypothesis is mostly derived from plants and birds, whereas arthropods have so far been neglected. Here, I tested the homogenization hypothesis with three insect indicator groups, namely true bugs, leafhoppers, and beetles. In particular, I was interested in whether insect species community composition differs between urban and rural areas, whether communities are more similar between cities than between rural areas, and whether any observed pattern is explained by true species turnover, by species diversity gradients and geographic distance, or by non-native or specialist species, respectively. I analyzed insect species communities sampled on birch trees in a total of six Swiss cities and six rural areas nearby. In all indicator groups, urban and rural community composition was significantly dissimilar due to native species turnover. Further, for bug and leafhopper communities, I found evidence for large-scale homogenization due to urbanization, which was driven by reduced species turnover of specialist species in cities. Species turnover of beetle communities was similar between cities and rural areas. Interestingly, when specialist species of beetles were excluded from the analyses, cities were more dissimilar than rural areas, suggesting biotic differentiation of beetle communities in cities. Non-native species did not affect species turnover of the insect groups. However, given that non-native arthropod species are increasing rapidly, their homogenizing effect might be detected more often in future. Overall, the results show that urbanization has a negative large-scale impact on the diversity of specialist species of the investigated insect groups. Specific measures in cities targeted at increasing the persistence of specialist species typical for the respective biogeographic region could help to stop the loss of biodiversity. © 2015 John Wiley & Sons Ltd.
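The city-vs-rural comparison rests on pairwise community dissimilarity. A minimal sketch of that idea, using Jaccard dissimilarity on presence/absence species sets (an illustration of the general approach, not the turnover metric used in the study; all species lists below are hypothetical):

```python
from itertools import combinations

def jaccard_dissimilarity(a, b):
    """1 - |A intersect B| / |A union B| for two species sets."""
    a, b = set(a), set(b)
    return 1.0 - len(a & b) / len(a | b)

def mean_pairwise_dissimilarity(communities):
    """Average Jaccard dissimilarity over all pairs of communities."""
    pairs = list(combinations(communities, 2))
    return sum(jaccard_dissimilarity(a, b) for a, b in pairs) / len(pairs)

# Hypothetical species lists: cities share more species than rural sites,
# so the mean city-city dissimilarity is lower (a homogenization signal).
cities = [{"sp1", "sp2", "sp3"}, {"sp1", "sp2", "sp4"}, {"sp1", "sp2", "sp5"}]
rural = [{"sp1", "sp6", "sp7"}, {"sp2", "sp8", "sp9"}, {"sp3", "sp10", "sp11"}]
print(mean_pairwise_dissimilarity(cities) < mean_pairwise_dissimilarity(rural))  # prints True
```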
Homogenized description and retrieval method of nonlinear metasurfaces
Liu, Xiaojun; Larouche, Stéphane; Smith, David R.
2018-03-01
A patterned, plasmonic metasurface can strongly scatter incident light, functioning as an extremely low-profile lens, filter, reflector or other optical device. When the metasurface is patterned uniformly, its linear optical properties can be expressed using effective surface electric and magnetic polarizabilities obtained through a homogenization procedure. The homogenized description of a nonlinear metasurface, however, presents challenges both because of the inherent anisotropy of the medium and because of the much larger set of potential wave interactions available, making it challenging to assign effective nonlinear parameters to the otherwise inhomogeneous layer of metamaterial elements. Here we show that a homogenization procedure can be developed to describe nonlinear metasurfaces, which derive their nonlinear response from the enhanced local fields arising within the structured plasmonic elements. With the proposed homogenization procedure, we are able to assign effective nonlinear surface polarization densities to a nonlinear metasurface, and link these densities to the effective nonlinear surface susceptibilities and averaged macroscopic pumping fields across the metasurface. These effective nonlinear surface polarization densities are further linked to macroscopic nonlinear fields through the generalized sheet transition conditions (GSTCs). By inverting the GSTCs, the effective nonlinear surface susceptibilities of the metasurfaces can be solved for, leading to a generalized retrieval method for nonlinear metasurfaces. The application of the homogenization procedure and the GSTCs is demonstrated by retrieving the nonlinear susceptibilities of a SiO2 nonlinear slab. As an example, we investigate a nonlinear metasurface which presents nonlinear magnetoelectric coupling in the near-infrared regime. The method is expected to apply to any patterned metasurface whose thickness is much smaller than the wavelengths of operation, with inclusions of arbitrary geometry.
Two-dimensional deformation of a uniform half-space due to non ...
Indian Academy of Sciences (India)
quake sources cannot be represented by a double-couple source mechanism which models a shear fault. According to Sipkin (1986), the non-double-couple mechanism might be due to ..... Commission, New Delhi for the financial support in the form of Junior Research Fellowship sanctioned to R C Verma and Major ...
1982-03-01
[OCR-damaged report documentation page; legible fragments identify the author as Frederick Bloom and the grant as AFOSR-81-0171, and refer to material coefficients associated with particular nonlinear dielectric substances.]
International Nuclear Information System (INIS)
Sentis, R.
1984-07-01
The radiative transfer equations may be approximated by a nonlinear diffusion equation (called the Rosseland equation) when the mean free paths of the photons are small with respect to the size of the medium. Some technical assumptions are made, namely about the initial conditions, to avoid any problem of initial layer terms
Electromagnetic diffraction by an impedance cylinder buried halfway between two half-spaces
Salem, Mohamed; Kamel, Aladin Hassan
2011-01-01
We consider the problem of electromagnetic diffraction from a cylinder with an impedance surface, half-buried between two dielectric media. An arbitrarily located electric dipole provides the excitation. The harmonic solution is presented as a series sum over a spectrum of a discrete-index Hankel transform, and the spectral amplitudes are determined by solving an infinite linear system of equations, which is constructed by applying the orthogonality relation of the 1D Green's function. © 2011 IEEE.
Berberyan, A. Kh; Garakov, V. G.
2018-04-01
A large number of works have been devoted to investigating the influence of the piezoelectric properties of a material on the propagation of elastic waves [1–3]. In most of these, the quasi-static piezoelasticity model was used. In the problem of an electromagnetic wave reflecting from an elastic medium with piezoelectric properties, it is necessary to consider hyperbolic equations [4].
Reflection and transmission of full-vector X-waves normally incident on dielectric half spaces
Salem, Mohamed; Bagci, Hakan
2011-01-01
polarization components, which are derived from the scalar X-Wave solution. The analysis of transmission and reflection is carried out via a straightforward yet effective method: First, the X-Wave is decomposed into vector Bessel beams via the Bessel-Fourier
Homogeneous nucleation limit on the bulk formation of metallic glasses
International Nuclear Information System (INIS)
Drehman, A.J.
1983-01-01
Glassy Pd82Si18 spheres, of up to 1 mm diameter, were formed in a drop tube filled with He gas. The largest spheres were successfully cooled to a glass using a cooling rate of less than 800 K/sec. Even at this low cooling rate, crystallization (complete or partial) was the result of heterogeneous nucleation at a high temperature, relative to the temperature at which copious homogeneous nucleation would commence. Bulk undercooling experiments demonstrated that this alloy could be cooled to 385 K below its eutectic melting temperature (1083 K) without the occurrence of crystallization. If heterogeneous nucleation can be avoided, it is estimated that a cooling rate of at most 100 K/sec would be required to form this alloy in the glassy state. Ingots of glassy Pd40Ni40P20 were formed from the liquid by cooling at a rate of only 1 K/sec. It was found that glassy samples of this alloy could be heated well above the glass transition temperature without the occurrence of rapid devitrification. This is due, in part, to the low density of pre-existing nuclei but, more importantly, to the low homogeneous nucleation rate and the slow crystal growth kinetics. Based on the observed devitrification kinetics, the steady-state homogeneous nucleation rate is approximately 1 nucleus/cm³ per second at 590 K (the temperature at which the homogeneous nucleation rate is estimated to be a maximum). Two iron-nickel based glass-forming alloys (Fe40Ni40P14B6 and Fe40Ni40B20) were not successfully formed into glassy spheres; however, microstructural examination indicates that crystallization was not the result of copious homogeneous nucleation. In contrast, glass-forming iron based alloys (Fe80B20 and Fe79.3B16.4Si4.0C0.3) exhibit copious homogeneous nucleation when cooled at approximately the same rate.
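The steady-state homogeneous nucleation rate quoted above can be sketched with classical nucleation theory. This is a hedged illustration, not the model fitted in the study: the interfacial energy, driving force, and kinetic prefactor below are assumed values, chosen only so the result lands near the ~1 nucleus/cm³·s scale mentioned in the abstract.

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def nucleation_rate(sigma, dg_v, temperature, prefactor=1e39):
    """Steady-state homogeneous nucleation rate from classical nucleation
    theory: J = J0 * exp(-dG*/kT) with dG* = 16*pi*sigma^3 / (3*dG_v^2).

    sigma      : crystal-liquid interfacial energy, J/m^2 (assumed value)
    dg_v       : magnitude of the volumetric driving free energy, J/m^3
    prefactor  : kinetic prefactor, nuclei/(m^3 s) (assumed typical value)
    """
    dg_star = 16.0 * math.pi * sigma**3 / (3.0 * dg_v**2)  # barrier, J
    return prefactor * math.exp(-dg_star / (K_B * temperature))

# Illustrative (assumed) parameters at the 590 K maximum-rate temperature:
rate_per_m3 = nucleation_rate(sigma=0.15, dg_v=3.0e8, temperature=590.0)
rate_per_cm3 = rate_per_m3 * 1e-6  # convert to nuclei/(cm^3 s)
```

The strong sigma³/ΔG_v² dependence in the exponent is why the rate collapses so quickly as the undercooling (and hence the driving force) decreases.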
Self-formed waterfall plunge pools in homogeneous rock
Scheingross, Joel S.; Lo, Daniel Y.; Lamb, Michael P.
2017-01-01
Waterfalls are ubiquitous, and their upstream propagation can set the pace of landscape evolution, yet no experimental studies have examined waterfall plunge pool erosion in homogeneous rock. We performed laboratory experiments, using synthetic foam as a bedrock simulant, to produce self-formed waterfall plunge pools via particle impact abrasion. Plunge pool vertical incision exceeded lateral erosion by approximately tenfold until pools deepened to the point that the supplied sediment could not be evacuated and deposition armored the pool bedrock floor. Lateral erosion of plunge pool sidewalls continued after sediment deposition, but primarily at the downstream pool wall, which might lead to undermining of the plunge pool lip, sediment evacuation, and continued vertical pool floor incision in natural streams. Undercutting of the upstream pool wall was absent, and our results suggest that vertical drilling of successive plunge pools is a more efficient waterfall retreat mechanism than the classic model of headwall undercutting and collapse in homogeneous rock.
Parametric dependence of two-plasmon decay in homogeneous plasma
International Nuclear Information System (INIS)
Dimitrijevic, Dejan R
2010-01-01
A hydrodynamic model of two-plasmon decay in a homogeneous plasma slab near the quarter-critical density is constructed in order to improve our understanding of the spatio-temporal evolution of the daughter electron plasma waves in the course of the instability. The scaling of the amplitudes of the participating waves with laser and plasma parameters is investigated. The secondary coupling of two daughter electron plasma waves with an ion-acoustic wave is assumed to be the principal mechanism of saturation of the instability. The impact of the inherently nonresonant nature of this secondary coupling on the development of two-plasmon decay is investigated, and it is shown to significantly influence the electron plasma wave dynamics. Its inclusion leads to nonuniformity of the spatial profile of the instability and causes a burst-like pattern of instability development, which should result in burst-like hot-electron production in homogeneous plasma.
The coherent state on SUq(2) homogeneous space
International Nuclear Information System (INIS)
Aizawa, N; Chakrabarti, R
2009-01-01
The generalized coherent states for quantum groups introduced by Jurco and Stovicek are studied in full detail for the simplest example, SU_q(2). It is shown that the normalized SU_q(2) coherent states enjoy the property of completeness and allow a resolution of the unity. This feature is expected to play a key role in the application of these coherent states in physical models. The homogeneous space of SU_q(2), i.e. the q-sphere of Podles, is reproduced in complex coordinates by using the coherent states. Differential calculus in the complex form on the homogeneous space is developed. The high-spin limit of the SU_q(2) coherent states is also discussed.
Homogenization of the critically spectral equation in neutron transport
Energy Technology Data Exchange (ETDEWEB)
Allaire, G. [CEA Saclay, 91 - Gif-sur-Yvette (France), Dept. de Mecanique et de Technologie; Paris-6 Univ., 75 (France), Lab. d'Analyse Numerique]; Bal, G. [Electricite de France (EDF), 92 - Clamart (France), Direction des Etudes et Recherches]
1998-07-01
We address the homogenization of an eigenvalue problem for the neutron transport equation in a periodic heterogeneous domain, modeling the criticality study of nuclear reactor cores. We prove that the neutron flux, corresponding to the first and unique positive eigenvector, can be factorized as the product of two terms, up to a remainder which goes strongly to zero with the period. One term is the first eigenvector of the transport equation in the periodicity cell. The other term is the first eigenvector of a diffusion equation in the homogenized domain. Furthermore, the corresponding eigenvalue gives a second-order corrector for the eigenvalue of the heterogeneous transport problem. This result justifies and improves the engineering procedure used in practice for nuclear reactor core computations. (author)
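The two-term factorization described above can be written out explicitly. The symbols below are assumed notation for illustration (ε the period, v the velocity variable, ψ the cell eigenvector, u the homogenized diffusion eigenvector), and the eigenvalue expansion is a hedged reading of the "second-order corrector" statement:

```latex
% Assumed notation: \phi_\varepsilon neutron flux, \psi first cell
% eigenvector, u first eigenvector of the homogenized diffusion problem.
\phi_\varepsilon(x,v) \;=\; \psi\!\left(\frac{x}{\varepsilon},\, v\right) u(x) \;+\; r_\varepsilon(x,v),
\qquad r_\varepsilon \longrightarrow 0 \ \text{strongly as } \varepsilon \to 0,
\]
\[
\lambda_\varepsilon \;=\; \lambda^{\mathrm{cell}} + \varepsilon^{2}\,\lambda^{\mathrm{hom}} + o(\varepsilon^{2}).
```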
Constructing Bridges between Computational Tools in Heterogeneous and Homogeneous Catalysis
Falivene, Laura; Kozlov, Sergey M.; Cavallo, Luigi
2018-01-01
Better catalysts are needed to address numerous challenges faced by humanity. In this perspective, we review concepts and tools in theoretical and computational chemistry that can help to accelerate the rational design of homogeneous and heterogeneous catalysts. In particular, we focus on the following three topics: 1) identification of key intermediates and transition states in a reaction using the energetic span model, 2) disentanglement of factors influencing the relative stability of the key species using energy decomposition analysis and the activation strain model, and 3) discovery of new catalysts using volcano relationships. To facilitate wider use of these techniques across different areas, we illustrate their potentials and pitfalls when applied to the study of homogeneous and heterogeneous catalysts.
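The energetic span model mentioned in topic 1 can be stated compactly. The expressions below are recalled from the general literature on the model (Kozuch-Shaik form), not derived in the source text; TDTS and TDI denote the turnover-determining transition state and intermediate, and ΔG_r the reaction free energy of the cycle:

```latex
% Energetic span model, as recalled from the literature (not from the
% source text).  TDTS/TDI: turnover-determining transition state /
% intermediate; \Delta G_r: reaction free energy of the catalytic cycle.
\mathrm{TOF} \;\approx\; \frac{k_B T}{h}\, e^{-\delta E / RT},
\qquad
\delta E =
\begin{cases}
G_{\mathrm{TDTS}} - G_{\mathrm{TDI}}, & \text{TDTS appears after TDI},\\[2pt]
G_{\mathrm{TDTS}} - G_{\mathrm{TDI}} + \Delta G_r, & \text{TDTS appears before TDI}.
\end{cases}
```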